doi | transcript | abstract
---|---|---
10.5446/52849 (DOI)
|
Hello there from Atlanta, at FOSDEM 2021. I'm Nick Black, the primary author of Notcurses, an ambitious text user interface and character graphics library in the spirit of Curses. I'm normally a compilers and high-performance computing guy, definitely not graphics- or UI-oriented, but I'd written two large NCURSES programs and I was determined not to write another, hence Notcurses, which aims to have all the portability and robustness of the ubiquitous NCURSES but also to move well beyond that library. It does not claim nor attempt source-level compatibility with Curses, though it ought be easy for any experienced Curses programmer to pick up. Of what do we speak when we say text user interfaces and character graphics? I define them by two properties. One, we're creating a visual rectilinear presentation, but we're drawing with a pre-made set of glyphs, a font, as opposed to pixels. I don't like to say characters, because there's not a bijection between drawable glyphs and characters, as we'll see in a minute. Ideally this is a fixed-width font: all the glyphs occupy the same height, and their widths are at most integer multiples of some common width. Two, we're using stream-based I/O rather than drawing to a memory-mapped framebuffer. Examples include the virtual consoles of Linux and FreeBSD, terminal emulators under X or Wayland, or even true hardware terminals. In addition to rendering a stream of glyphs, smart terminals interpret inline control sequences, which can, among other things, move the cursor and apply styles. Under Unicode, glyphs correspond to extended grapheme clusters and can be considered the atomic unit of visual display. The cursor will move over them in one motion, and backspace will destroy an EGC in its entirety. The glyphs available are a function of our encoding, our font, and our terminal. Control sequence availability is purely a function of our terminal, and is almost completely abstracted away by the terminfo library, of which Notcurses makes extensive use. So what's wrong with Curses? After all, it's been around for over 40 years, making it one of the few pieces of a modern Unix system older than I am. Well: limited Unicode support, poor threading support, palette-indexed color, a crufty color-pair system controlled via global variables, and other issues. NCURSES is a superb implementation of Curses, and takes Curses about as far as it can go; the limitations of Curses are fundamental properties of its API. The design goals for a 21st-century TUI library included: one, being written in and usable from C, but actually intended for safer languages. It's 2021; sudo just got pwned with a nostalgic heap exploit. C may be the lingua franca of Unix and system calls, and we do get great performance from it, but hopefully most client code is using something like Rust or even Python. Notcurses was actively designed with them in mind, and the wrappers were written alongside the core. Two modes accommodate two major application styles: there's direct mode for line-driven CLIs using standard I/O, and rendered mode for full-screen apps. Ours is a multicore era, and Notcurses is designed for multithreaded use. This means that thread safety is well documented, and that the interfaces are designed for safety yet maximum performance. I wanted to generalize drawing-surface support: Notcurses planes can be any size, far larger than, and in any position relative to, the visual area. Three independent channels exist for each rendered cell: a glyph, a foreground color, and a background color.
True 24-bit color support runs deep through the API, though colors are often reduced to palette-based pseudocolor, transparently to the client app, to minimize consumed bandwidth. Perhaps most eye-catching is the quality multimedia support: Notcurses rides atop FFmpeg or OpenImageIO and uses state-of-the-art blitters to bring usable images and video to terminal apps. Like many other such libraries, there's a rich collection of pre-built widgets. And finally, performance has been a primary focus throughout development. There are 25 benchmarks in the demo application, and their timings are watched religiously. As just one example, the Notcurses image viewer renders images in about one-third of the time of chafa, despite using more advanced blitters. Direct mode is as simple as it gets: intermingle control sequences with standard I/O. The cursor can be moved, styles can be set, but output is primarily driven through good old printf. That doesn't mean we can't do some pretty impressive things. Here's ncls, a file-listing utility that displays multimedia when found. Now, rendered mode is where Notcurses really shines, and represents a true advance over existing solutions. It operates similarly to OpenGL: objects are prepared and arranged, and then rendered en bloc to a frame. Frames are rasterized into streams of glyphs and control sequences and written atomically to the terminal. It is only at this time that the display changes. Output is carefully optimized so that only the minimal string is written to the terminal, and the objects forming the frame retain their state across calls. This operating mode can easily result in thousands of frames per second, far beyond the capacities of any known display technology. What you give up for this power is the ability to use standard I/O. All output needs to be run through Notcurses, because Notcurses needs to have a correct idea of what's on the screen. This is driven through the Notcurses call ncpile_render, which generates a frame in memory. While a pile cannot be modified during the render operation, different piles can be freely modified by multiple threads. The pile is rendered using the painter's algorithm. Initially, all cells in the visual area are considered unsolved. Starting from the top plane, any intersecting unsolved cell is analyzed. The glyph channel is solved upon the first encounter with a glyph. The foreground and background colors are solved upon hitting an opaque color; translucent colors are blended. A cell is solved when all three of these channels are solved. There's no need to use multiple piles except to expose parallelism, though they can be convenient for multimodal UIs. Each rendered-mode context contains one or more piles, defined by their visual area and by their component planes. The pile provides a total ordering in the form of a Z-axis, and a directed acyclic binding forest. The binding forest represents grouping or ownership: it is possible to move, reparent, or destroy an entire subtree, and resize callbacks propagate down through binding trees. As mentioned earlier, while rendering a pile, it is not permitted to mutate it, nor any of its planes. Furthermore, only one thread may change the ordering or binding of planes at a time within a pile. The contents of distinct planes, however, may be freely mutated by multiple threads. A pile is destroyed implicitly when its last plane is destroyed or reparented. A pile is created when a tree of planes is reparented to null.
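To make the cell-solving rules just described concrete, here is a minimal Python sketch of that painter's algorithm. It is purely illustrative, not Notcurses' actual implementation (which is C and heavily optimized): the plane/cell interface, the 50/50 blend, and the omission of the high-contrast alpha mode are all assumptions made for the example.

```python
# Illustrative sketch of the painter's algorithm described above (not Notcurses code).
# Each plane is assumed to expose cell_at(y, x) -> cell or None if it does not cover
# that coordinate; a cell has .glyph plus .fg and .bg, each a (color, alpha) pair.

OPAQUE, BLEND, TRANSPARENT = "opaque", "blend", "transparent"

def blend(top, bottom):
    """Naive 50/50 RGB blend, standing in for the real blending math."""
    return tuple((t + b) // 2 for t, b in zip(top, bottom))

def solve_cell(planes, y, x):
    """Walk planes from the top of the Z-axis down until glyph, fg and bg are solved."""
    glyph = None
    solved = {"fg": None, "bg": None}
    pending = {"fg": [], "bg": []}           # translucent colors awaiting something opaque
    for plane in planes:                      # planes ordered top to bottom
        cell = plane.cell_at(y, x)            # None if the plane doesn't intersect (y, x)
        if cell is None:
            continue
        if glyph is None and cell.glyph:
            glyph = cell.glyph                # glyph channel: solved on first glyph seen
        for chan in ("fg", "bg"):
            if solved[chan] is not None:
                continue
            color, alpha = getattr(cell, chan)
            if alpha == OPAQUE:               # opaque color: fold pending blends over it
                for translucent in reversed(pending[chan]):
                    color = blend(translucent, color)
                solved[chan] = color
            elif alpha == BLEND:
                pending[chan].append(color)   # translucent: blended with what lies below
            # TRANSPARENT contributes nothing to this channel
        if glyph is not None and solved["fg"] is not None and solved["bg"] is not None:
            break                             # cell fully solved; stop descending
    # Channels still None after the loop would fall back to default colors.
    return glyph, solved["fg"], solved["bg"]
```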
What are these planes? Well, they're the fundamental drawing surfaces of Notcurses. All output is placed on planes. Each Notcurses context starts with the standard plane, which is always the size of the visual area and cannot be destroyed, reparented, nor moved along the X or Y axis. It is thus due to this plane that we always have at least one pile. A plane is defined by its geometry and its origin; its active colors and style; a matrix of cells and an associated EGC pool (we'll get to them in a minute); a user-managed opaque pointer and a name; a resize callback function; a virtual cursor location; and a default cell. It is possible to retrieve the contents of a plane, and a plane can be duplicated wholesale or turned into a visual to be blitted onto other planes. It is entirely possible to have thousands of planes in a pile without performance degradation, or thousands of planes in thousands of piles. As mentioned, each plane has a matrix of cells, and a rendered frame is likewise represented as a matrix of cells. Indeed, rendering can be thought of as projecting a three-dimensional pile onto a single plane, and there's actually a plane-merge function that does exactly that. Cells are tightly packed. Each cell is exactly 16 bytes, plus possible spillover into an EGC pool. Eight of those bytes are given over to two 32-bit channels, one for the foreground and one for the background. Two bytes are taken by the 16-bit style mask: italics, underline, reverse video, blink, et cetera. Another byte caches the column width of its EGC; we need that regularly, and this way we don't have to go to libc. Finally, the remaining five bytes can be interpreted in one of two ways: either as a NUL-terminated, UTF-8-encoded C string, or as a 24-bit index into the EGC pool. All currently defined Unicode code points map to four or fewer UTF-8 bytes. Any EGC that can fit in the four gcluster bytes is directly inlined. The eight bits of the backstop are always zero, and thus such an EGC is always a valid C string. Longer EGCs are written into the EGC pool, and an index is stored in the gcluster instead. Outside of pathological cases, almost every EGC gets inlined. The last data structure essential to understanding and using Notcurses is the channel. Channels usually come in pairs: two 32-bit channels in a 64-bit structure. Each channel encodes, at a given time, either 24-bit RGB, an 8-bit palette index, or the fact that it's a default color. There are also two bits of quote-unquote alpha; this is actually opaque, blend, transparent, or high contrast. There are four reserved bits, marked as zero in this diagram. Those are used by internal Notcurses bookkeeping for optimization. You can't touch those.
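As a rough illustration of the channel encoding just described, here is a hedged Python sketch that packs a 24-bit RGB value and a two-bit alpha into a 32-bit integer, and two such channels into a 64-bit pair. The exact bit positions and flag names are assumptions made for the example (the real layout lives in the Notcurses headers), but the sketch shows how RGB, a palette/default indicator, and alpha can all coexist in one 32-bit channel.

```python
# Illustrative packing of a 32-bit "channel"; bit positions are assumptions only.

ALPHA_OPAQUE, ALPHA_BLEND, ALPHA_TRANSPARENT, ALPHA_HIGHCONTRAST = range(4)

RGB_FLAG     = 1 << 30   # assumed: channel carries explicit RGB, not the default color
PALETTE_FLAG = 1 << 29   # assumed: channel carries an 8-bit palette index instead
ALPHA_SHIFT  = 24        # assumed: two alpha bits sit just above the 24 RGB bits

def make_rgb_channel(r, g, b, alpha=ALPHA_OPAQUE):
    """Pack 24-bit RGB plus two alpha bits into one 32-bit channel value."""
    assert all(0 <= c <= 255 for c in (r, g, b))
    return RGB_FLAG | (alpha << ALPHA_SHIFT) | (r << 16) | (g << 8) | b

def channel_rgb(channel):
    """Unpack the 24-bit RGB component of a channel."""
    return (channel >> 16) & 0xff, (channel >> 8) & 0xff, channel & 0xff

def make_channel_pair(fg, bg):
    """Combine foreground and background channels into a 64-bit pair."""
    return (fg << 32) | bg

fg = make_rgb_channel(0xff, 0x88, 0x00)                      # opaque orange foreground
bg = make_rgb_channel(0x00, 0x00, 0x00, ALPHA_TRANSPARENT)   # see-through background
pair = make_channel_pair(fg, bg)
```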
We've spoken about design goals and data organization, but how do we drive actual output? We are a text library, and writing text is the most basic operation provided. Nine families of functions are provided to accomplish this in the most convenient way. The putc family writes a cell, and thus one EGC, using the channels and style contained therein; all the other families we're going to describe use the plane's active channels and style. Putchar writes a single 7-bit ASCII char as a complete EGC. Whether char is signed or unsigned is implementation-dependent in C, but ASCII is only a 7-bit character set, so we're safe. Putwc writes a single wide character as an EGC. A wide character can usually represent one Unicode code point, though should you be so unfortunate as to code for a platform with a 16-bit wchar_t, it can only represent characters from the Basic Multilingual Plane. Write an arbitrary EGC made up of UTF-8 bytes with putegc, or one made up of multiple wide characters with putwegc. Write an entire series of EGCs out with putstr or putwstr, where again putstr takes UTF-8-encoded input. Finally, formatted output can be written with ncplane_printf and ncplane_vprintf, with their usual C semantics. Now, I refer to each of these as families because there are at least four functions in each. There's the basic one, which writes at the current virtual cursor position. There's one suffixed with yx, which also accepts Y and X parameters, goes there first, and then writes. There's aligned, which takes an alignment parameter (left, right, or center) and places the output there. Finally, there's stained, which retains the current styling and coloring, as opposed to rewriting it using the styling and coloring of the plane. Boxes can be specified with some starting location and geometry. Six cells are specified, including their styling and channels: one for each of the four corners, one for horizontal lines, and one for vertical lines. There's the ability to leave out arbitrary edges or corners, and the ability to interpolate between corner channels. Simple, double, and rounded prepared variants come included, using all the capabilities of Unicode. Ncplane_perimeter draws a perimeter around a given plane. Ncplane_gradient fills a rectangular area with a gradient and an EGC. Ncplane_polyfill replaces a region of the same EGC with a given cell. Multimedia: this is where we really start to leave the competition behind. We currently support both FFmpeg and OpenImageIO; GStreamer is coming. The multimedia backend is chosen at compile time, and applications can link against only libnotcurses-core if they don't require a multimedia backend. That way, there's no dependency chain pulled in, and it's a lighter binary. NcVisual objects can be created with ncvisual_from_file, which opens and decodes arbitrary files; ncvisual_from_rgba, which takes decoded RGBA from memory; or ncvisual_from_plane, which creates, as I said earlier, a visual from an ncplane that you can blit onto other planes. Those latter two don't need multimedia support, by the way; they just use this common NcVisual object. I'm going to let this output speak for itself for a few seconds here. This is Umlaut Design's Piledriver demo, pretty famous. You can see the high performance we're able to get. Is your terminal ready for this? Media pixels are converted to character graphics via a blitter. The general state of the art is the half-block blitter, but Notcurses goes beyond it with the quadblitter and sexblitter. The half-block blitter does have two nice properties: it effects a pixel aspect ratio of one to one and a lossless color conversion. Since a given cell can only have a foreground and a background color, there's unavoidable interpolation in higher-resolution blitters. Empirically, and especially for large images, the 64-glyph sexblitter produces the best output. The Braille blitter doesn't work well for most media, due to how Braille is drawn versus block characters in fonts.
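As a sketch of what the half-block blitting just described boils down to (one text cell covering a 1x2 block of pixels: the upper half-block glyph with the foreground set to the top pixel and the background to the bottom pixel), here is a minimal, illustrative Python routine. It is not Notcurses' blitter; the emit_cell callback and pixel format are assumptions for the example.

```python
UPPER_HALF_BLOCK = "\u2580"  # '▀': foreground paints the top half, background the bottom

def halfblock_blit(pixels, emit_cell):
    """Blit an RGB image (list of rows of (r, g, b) tuples) with the half-block method.

    Each text cell covers one pixel column and two pixel rows, giving a 1:1 aspect
    ratio and lossless color: the top pixel becomes the foreground, the bottom pixel
    the background. emit_cell(row, col, glyph, fg, bg) is an assumed callback that
    would place the cell on some drawing surface.
    """
    height = len(pixels)
    width = len(pixels[0]) if height else 0
    for cell_row, y in enumerate(range(0, height, 2)):
        for x in range(width):
            top = pixels[y][x]
            # If the image has an odd number of rows, reuse the top pixel below.
            bottom = pixels[y + 1][x] if y + 1 < height else top
            emit_cell(cell_row, x, UPPER_HALF_BLOCK, top, bottom)

def demo():
    """Tiny usage example: print ANSI truecolor half-block output to the terminal."""
    import random
    img = [[(random.randrange(256),) * 3 for _ in range(16)] for _ in range(8)]
    rows = {}
    def emit(r, c, glyph, fg, bg):
        rows.setdefault(r, []).append(
            f"\x1b[38;2;{fg[0]};{fg[1]};{fg[2]}m\x1b[48;2;{bg[0]};{bg[1]};{bg[2]}m{glyph}")
    halfblock_blit(img, emit)
    for r in sorted(rows):
        print("".join(rows[r]) + "\x1b[0m")
```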
It's great for plots, though, which make use of the same blitters we just saw, plus a few more. Low-overhead Notcurses plots can be oriented in any direction. They implement auto-scaling, and they reuse the general gradient code to color both linear and exponential graphs. All image blitters are available, plus four-way and eight-way level plots. A five-row eight-way plot supports a full 40 levels. Braille gives you only half the resolution of an eight-way plot along the dependent axis, but double the resolution on the independent axis, facilitating the display of a longer or more granular range. And they look great. Available widgets include a selector, a multi-selector, a tree selector, menus, and progress bars. These can be heavily customized, because in each case you're just handing them an ncplane that the widget is then drawn on; it can all be handled using standard ncplane infrastructure. Let's finish up by looking at some actual Notcurses applications. These first two were both NCURSES programs, large ones, which I ported last year. Moving from NCURSES to Notcurses cut down my total lines of UI code by about 50% in each case. The Growlight disk manager has features like I/O path bandwidth discovery and native ZFS support. It's present in several distributions, and you can probably grab it and run it today. Omphalos is a network discovery and attack tool. It's a beloved little project of mine, but I'm frankly hesitant to package it: anybody who's watched two decades of Wireshark CVEs probably agrees that the world doesn't need yet another C program sitting on a raw packet socket. My next two planned big Notcurses projects, and these would be Notcurses apps from the ground up, include an SDR tool (software-defined radio) and a Debian package manager built atop my fast repository suite. Thanks a lot for your time. I hope it's been informative. Go watch the demo and hack on. Free hearts, free software, free minds.
|
Notcurses is a C library (with C++, Python, and Rust wrappers) facilitating complex TUIs on modern terminal emulators. Notcurses supports vivid colors, multimedia via FFmpeg or OIIO, sane multithreading, and complex Unicode. Things can be done with Notcurses that simply can't be done with NCURSES or any other implementation of the X/Open Curses specification. I will present Notcurses's design goals, API, and some details of its implementation, which ought serve as a sufficient grounding for any potential Notcurses developers. Your terminal emulator is more powerful than you have ever dreamed. Notcurses 2.1.0 was released in December 2020, about thirteen months after the repository's first commit. It is available from many Linux distributions, as well as the FreeBSD Ports Collection. It is used by the author's "Growlight" block device manager and "Omphalos" network discovery tool, as well as other projects. It aims to be a superset of existing TUI libraries' functionality, while achieving and enabling higher performance. A comprehensive reference, "Hacking the Planet with Notcurses", is available for paperback purchase or free download (Apache-licensed XeLaTeX source is also available). Notcurses has been featured on LWN and at the Debian Gaming Miniconf.
|
10.5446/52853 (DOI)
|
Africa is a continent made up of 54 countries. Among these countries you have Cameroon. Cameroon is situated in Central Africa, and the Mboalab, which is the epicenter of this project, is situated in Yaoundé. Local manufacturing can address the infrastructural barriers that prevent imported or donated equipment from being properly used, and can facilitate the diffusion of innovation into healthcare practice in Africa. Local manufacturing also allows products to be fitted to a particular context, for example, building in resilience to power outages, working with non-proprietary, locally available consumables, or building equipment with locally available consumables. Our project objectives are divided into two main categories. For the first category of our objectives, we have building the capacity of local biomedical engineers to produce, maintain and develop open source medical devices, and contributing to the empowerment of young talented Africans involved in STEM. For the second category, we have the promotion of the use of open source hardware that can rapidly diffuse across the continent, and the facilitation of strong engagement between biomedical engineers, healthcare professionals and other stakeholders. What about the equipment to be built to address the issues listed above? The equipment to be built is classified into four main categories: haematology equipment, bacteriology equipment, biochemistry equipment and others. And what can we do with the equipment built? With this equipment, we can perform malaria diagnosis, anaemia diagnosis, hemoglobin analysis, stool analysis, urine analysis, typhoid fever diagnostics, stool and urine culture, and blood culture. The Mboalab is already contributing to addressing these issues through its open source incubator. On the capacity and capability building dimension of the project: the progress of the project will be measured through the empowerment of young Cameroonians, and the Mboalab has already started to recruit interns. Those interns are both male and female, in order to address the gender aspect. We have collaboration with communities: as you know, our project mainly targets rural areas with limited medical equipment, so sometimes it is very difficult to deal with rural communities. There are also the first diagnostic tests performed at the Mboalab with the prototype, and outreach activities.
The capacity and capability building of interns will be measured through the number of open designs collected during the literature review, the number of prototypes built, and qualitative criteria like the capacity to provide the documentation of the prototypes built and the adoption of open science values such as sharing, openness and collaboration. Our outreach activities will be measured through the number of designs released on GitHub, the number of blog posts published, the number of invitations received for events, the number of events attended, the number of presentations given during events, and feedback received through social media or during the events we attend. Our process for building our local equipment should be easy. Our expected outputs are: accessible designs of open source medical devices using local resources, in English and French (Cameroon is a bilingual country; we have French and English as official languages); a proof of concept for the local manufacturing of open source devices for medical labs; and a book reporting stakeholder engagement, detailing the capacity and capability building pathway to impact for local manufacturing of open source hardware in Cameroon and in Africa. As we said, we want our process to be easy. In January, we will start by recruiting and training interns. The interns will be trained in open science values, practices and tools, including coding, modelling, 3D printing, and how to use platforms like GitHub. Then we will move to the literature review. In February, we will perform a literature review on open source medical devices, identify other open source projects in addition to the list of open source medical devices the Mboalab team has listed, collect available designs from the open source projects identified above, and select the open source medical devices we are going to build. Then comes the building of the open source medical devices: we will use the available designs and build the different equipment with local consumables. When needed, we will improve the designs, and if necessary, we will develop new designs. For the documentation aspect, we will document the building steps of the open source devices developed in the project. We will share the documentation under open licenses in such a way that people can freely replicate, adapt or improve the models and the designs across the world, and we will produce open educational resources. For the dissemination aspect, we will use design-sharing sites such as Instructables, GitHub, GitLab or WikiFactory. We will publish in appropriate implementation science or development journals, like the Journal of Open Hardware. We will publish the Mboalab book reporting all the activities we have done since the beginning: the collaboration protocols, the capability building, etc. If successful, we will also deal with national regulators to share our findings and recommendations. For the outreach activities, we will attend and present during events taking place in Cameroon or overseas around technology, innovation and healthcare. The recruiting and training of interns will take about 45 days. The literature review will take 2 months. The building of open source devices takes 6 months. The documentation takes 7 months, and the dissemination will take around 8 months. The question which can arise is whether the project is sustainable; I will answer in the affirmative.
The project is sustainable because our experience with the Mboalab open source incubator taught us that once we have a good prototype working well, people and institutions will get interested. For the moment we have built four incubators to support local labs in Cameroon, and the feedback we are receiving is excellent and confirms that our incubator is working well. Without any doubt, we will have the same success with the open source medical devices generated through this process. Is the project risky? Yes, the project is risky. The first major challenge is access to the internet and even to electricity, which is still a problem in Cameroon. This could cause delays in interaction between the members of the project, inability to accomplish some tasks, and a limited use of online tools and resources. The second difficulty concerns the time interns may take to acquire the basics in coding, modelling and 3D printing, since digital fabrication is something relatively new in our local context. Thank you for your kind listening. For any questions you can find me at www.ecute.com and at my regular email. Thank you.
|
The lack of accessible quality healthcare is one of the biggest problems in Africa and other developing countries. This is not only due to the unavailability of resources, but also to the absence of a structured formative process for the design and management of healthcare facilities. This situation strongly contributes to deepening inequalities in access to quality healthcare. Through an Open Society Foundations funded project, the Mboalab aims to remedy these inequalities by building Open-Source devices for medical labs. Local manufacturing can address the infrastructural barriers that prevent imported or donated equipment from being properly used, and can facilitate the diffusion of innovation into healthcare practice. This project is part of the larger MboaLab mission to contribute to the common good and catalyze sustainable local development through Open Science. Open science is the best and fairest approach to support local manufacturing. That is why the crux of our approach is the use of "open source hardware", where designs for easily replicated, high quality diagnostic tools are shared, with the potential to transform medical devices through the use of digital fabrication and inexpensive, well-engineered parts from mass-produced consumer goods. During FOSDEM, we will present the different facets of our project: 1) the set of prototypes of high quality and inexpensive open-source devices we are going to build; 2) the capacity and capability building dimension of the project, enabling the empowerment of young Cameroonians; 3) expectations of the project.
|
10.5446/52855 (DOI)
|
Hello everyone. So unfortunately, due to the pandemic we were obviously unable to see each other this year in Belgium like we've been doing for the past years, but it is still definitely nice to see that we could at least meet up, even virtually, this year. So with that being said, my name is Apostolos, I study Informatics at the University of Piraeus in Greece, and I work on several RF and astronomy related projects. In this presentation I will introduce Virgo to you, which is a free and open source spectrometer for radio astronomy, a very versatile tool that is easy to use at the same time. So for those who are not familiar with radio astronomy, I would like to give a very brief introduction. Radio astronomy is a branch of observational astronomy, and it is a very important branch because it studies objects at radio wavelengths, not in the visible light that we are used to observing with traditional optical telescopes. This is very important because many objects in the sky emit at radio wavelengths instead of only visible light, so we can extract very useful and sufficient information about these objects and their roles in the universe. So let's look at what a radio telescope actually consists of. Generally it consists of a large parabolic dish antenna, followed by a low noise amplifier and a filter. These two components are very important for amplifying the signal while introducing very little noise, and for filtering interference out of the signal. Then comes the SDR, which is essentially a digitizer. It doesn't have to be an SDR; for example, for very wideband applications you might need a different spectrometer, which is very common in radio astronomy, but we'll focus on SDR technology here, as it's a very trendy sort of thing in digital signal processing and radio in general. And you can really do a lot of great radio astronomy experiments with SDRs, which are very inexpensive. So the SDR essentially communicates with the telescope computer. This can be even a Raspberry Pi, or any computer really. And it listens for commands, so instructions like: observe now, acquire data now at this frequency and this bandwidth. And of course it sends the raw data to the computer. Then the computer performs some digital signal processing to process the data and then stores the data to, for example, some binary format or a FITS file. And then the user is expected to do some data analysis on the data. Obviously, as you can see on the bottom row of this overly simplified diagram, it's very easy for an observer to get confused with all of these steps, and it can really take a lot of time to build a reliable solution to cover all these steps, from the observation planning to the acquisition of the data and even the analysis of the data. And this is the kind of problem that Virgo attempts to tackle. So what is Virgo? Essentially, it's a versatile and very easy to use spectrometer and radiometer for radio astronomy. It is based on Python, so it's very easy to use in automation applications, and it's also based on the GNU Radio SDR framework. GNU Radio is essentially an open source radio framework for digital signal processing, and it's very often used with SDRs. So Virgo can carry out data acquisition, processing and analysis of observation data, and also help observers with planning their observations, as we will see shortly.
On top of that, it can also help observers carry out radio-frequency interference surveys, so monitoring whether their environment is capable of acquiring data to radio observation standards. And of course, it is applicable to any radio telescope using any SDR. With that being said, it is of course fully open source. You can check out the code on GitHub, contribute if you like, file issues, and of course install the software on your system if you're interested. So let's look at an example observation here to get an idea of what Virgo data looks like. On the top left, we have the average spectrum. This spectrum is essentially the raw data that we have collected. The average spectrum is averaged over time, so we increase the signal-to-noise ratio by integrating for long periods of time. Now, the top center plot is almost identical to the top left plot, except instrumentation effects and artifacts have been taken into consideration and sort of cancelled out, so we have a much flatter response, with the hydrogen line, a real astronomical emission, clearly visible. On the top right plot, we have the power versus frequency versus time, which is a plot that essentially tells us everything we need to know about the signal in terms of the frequency domain as well as the time domain. On the bottom left plot, we have the time series, which is essentially the power versus time. And on the bottom right, we have the total power distribution plot, which is a little advanced, but in plain words, it can help observers debug certain instrumentation and interference issues they might be facing. So let's look at some key features regarding the observing aspect of Virgo. First of all, it's a four-tap weighted overlap-add Fourier transform spectrometer, which in simple words means lower spectral leakage. It's a digital signal processing method which is slightly more computationally expensive, but not too much. Of course, the SDR parameters are all adjustable, from center frequency to bandwidth to RF gain, things like that; they're all very adjustable with Virgo. The package also supports spectral line observations. So you can do passband calibration and have the units rescaled to signal-to-noise ratio units, for less arbitrary units on the power axis, the y-axis. There's also slope correction using linear regression, essentially for correcting poorly calibrated spectra. Additionally, there's also RFI mitigation, of course, for the removal of narrowband interference with median filtering and also more wideband interference with channel masking. Of course, there's also continuum support, meaning you can observe things like radio galaxies, the sun, the moon, things that do not simply have a spectral interest, but also some variability in the time domain. The total power distribution, this plot on the bottom right that we showed earlier, is the histogram, and it also plots the best Gaussian fit automatically. So it's very easy to understand if the noise floor's variability is following a normal distribution, which is useful for debugging certain things. There's again median filtering for getting rid of very short duration bursts of interference, and there's also incoherent dedispersion support for pulsars and giant pulses in general; for example, theoretically even FRB follow-ups could be an application here.
And there's also dynamic spectra support, so the waterfall plots can also be output to FITS files for further processing and analysis. This is the GNU Radio flowgraph essentially constituting the package's real-time digital signal processing pipeline. This is the weighted overlap-add spectrometer that I brought up earlier; its role is essentially, again, to reduce spectral leakage. As you can see on the right, we can compare the various traditional FFT windows with the WOLA method, and you can clearly see how the spectral leakage is significantly reduced with WOLA. So with little computational expense, you can actually get quite a bit less spectral leakage with this method. So let's look at some features concerning the planning segment of Virgo. Virgo allows you to predict the source altitude and azimuth versus time. For example, you can plot the altitude and azimuth of a radio galaxy, or of any source really, your choice, to see how it rises above the horizon and things of that nature. You can also plot the telescope position on the 21 cm all-sky survey (the LAB survey); as you can see at the bottom left, the red mark indicates the telescope position on the map. And you can also estimate, or actually simulate, what you would expect to see based on the LAB HI survey. As you can see on the bottom right, you can simply enter your coordinates in galactic format and your antenna beamwidth, and you should see what to expect from an observation with your antenna. This is very useful in order to confirm whether your observations actually contain true sky signals, or just some sort of instrumentation error or some sort of interference. The package also comes with a built-in tool for conducting RFI surveys in a very rapid manner. This is very useful in order to confirm whether your observatory is compatible with radio observation standards. And lastly, you have also got a basic calculation toolkit for determining the sensitivity and performance of your instrument. This is very useful for estimating whether your system is performing well or not, and also for calculating various things like the radiometer equation for SNR estimations and things like that. Here's an example estimation of the position of the Cygnus A radio galaxy, a very bright radio source, in the sky from the location of the observer; this is very conveniently plotted with a single line in Virgo. So let's take a look at some example uses of Virgo. All the user has to do is simply import virgo as a Python module, define the observation parameters that they wish to pass to their SDR, and simply run virgo.observe to acquire the data and virgo.plot to plot the data, with the given parameters for RFI mitigation and things like that. And that's pretty much it for the usage.
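A rough sketch of the workflow just described, using the virgo.observe and virgo.plot calls named in the talk. The exact keys of the parameter dictionary and keyword argument names are indicative assumptions here, not guaranteed to match the library; the project's README documents the real names, and an SDR supported by GNU Radio is assumed to be attached.

```python
import virgo  # installed via pip (the talk mentions an "astro-virgo" style package name)

# Observation parameters to hand to the SDR. Key names are illustrative;
# check the Virgo documentation for the exact ones.
obs = {
    'frequency': 1420e6,   # center frequency in Hz (hydrogen line)
    'bandwidth': 2.4e6,    # SDR bandwidth in Hz
    'rf_gain': 30,         # RF gain in dB
    'channels': 2048,      # FFT channels
    't_sample': 1,         # integration time per spectrum, in seconds
    'duration': 60,        # total observation length, in seconds
}

# Acquire data from the SDR and write it to disk.
virgo.observe(obs_parameters=obs, obs_file='observation.dat')

# Analyze and plot: averaged/calibrated spectra, waterfall, time series, etc.
virgo.plot(obs_parameters=obs, obs_file='observation.dat',
           cal_file='calibration.dat', plot_file='observation.png')
```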
So I think it's time to do a quick demo with an 18-meter antenna. This is a large radio telescope, and we will use it to try out a hydrogen line observation with Virgo. So let me move to my terminal window. First, as you can see, there are two files, calibration.dat and virgo.py. Let's take a look inside virgo.py, and you can see that there's a bunch of observation parameters and the two commands I just showed a minute ago, virgo.observe and virgo.plot. If we try to run this, you can see that Virgo will inform us that it detected, for example, a USRP SDR in this case, and it will begin acquiring data. While it's acquiring data, it's also doing some digital signal processing: it's applying the FFT, integrating, and also using the WOLA method in real time to significantly reduce the amount of data that's being stored to disk. As soon as this is finished (right now, Virgo is doing further data analysis), if we go ahead and take a look at the file, we can see that there's clearly a hydrogen line peak right there where we would expect it, with some channels masked for RFI. So with that, if you wish to install Virgo, all you have to do is simply run pip install astro-virgo, and everything will be installed on your system automatically. And with that, I'd like to thank you all for your attention. Of course, if you'd like to contribute to the project, you're more than welcome to, and I'd be happy to hear your questions. Thanks a lot.
|
For the past few decades, radio astronomy has been a rapidly developing area of observational astronomy. This is due to the fact that a variety of celestial objects emit electromagnetic radiation at radio wavelengths, which has led to the development of radio telescopes capable of revealing the otherwise-hidden astrophysical properties of the universe. An important requirement that makes radio astronomy observations and analysis possible is an appropriate software pipeline compatible with the spectrometers with which radio observatories are equipped. In this work, we present Virgo: a versatile software solution for radio telescopes. Virgo is an easy-to-use open-source spectrometer and radiometer based on Python and GNU Radio (GR) that is conveniently applicable to any radio telescope working with a GR-supported software-defined radio (SDR). In addition to data acquisition, Virgo also carries out automated analysis of the recorded samples, producing an averaged spectrum, a calibrated spectrum, a dynamic spectrum (waterfall), a time series (power vs time) and a total power distribution plot. By additionally providing the observer with an important set of utilities, Virgo also makes for a great tool for planning (radio) observations. This includes the ability to compute the position of astronomical sources in the sky for a given date, estimate the right ascension and declination given the observer's coordinates along with the altitude and azimuth the telescope is pointing to and convert equatorial to galactic coordinates with the help of the open-source Astropy package.
|
10.5446/52856 (DOI)
|
Hello everybody, thank you for watching this FOSDEM talk. It's a weird year; I'm presenting to you online. The past few years I was of course in Brussels to be at the actual event, but this year it's online, and this is the third year that Weaviate is being presented. So two years ago we showed Weaviate for the first time. My colleague was showing how we were working on Weaviate, how we were focusing on vectorizing nodes in the graph. Last year it was me, and you could already see how Weaviate was turning more into a full-fledged search engine. It still had the graph data model, but it was starting to focus more on vector search. And this year I'm very happy to announce that Weaviate is a full-fledged database. It has full CRUD support, and it focuses on vector indexing and searching through vectors. It's quite new, so for those who haven't heard of this, no worries. I'm going to show you and explain to you exactly what it does and why we think there's a lot of potential value for your project in using Weaviate. So I hope you enjoy this talk. If you have any questions, don't hesitate to reach out to me; on the FOSDEM website you can find all my contact information. So thank you for watching. There are four things I'm going to show you. The first thing is why you would want to have a vector search engine: what's the need for a vector database? Then I'm going to talk about what Weaviate is, and then I'm going to show you two demos. First, a very simple demo, where you just see how you can use Weaviate and you can see the vector search in action, and then a slightly bigger case based on articles, where you can see how we can quickly find insights from these articles. So thanks for watching, and let's start with why you need a vector search engine. These vector search engines or vector databases are a hot topic right now, but you might wonder why. What's so special about them? What is it that they do? I'm going to try to explain that, and the easiest way to do so is to go a little bit back into history. If we go back to the 2010s, we see that deep learning is becoming really feasible. And there's this famous case where a machine learning model tries to recognize whether there's a cat in a photo. So what you basically see is a photo of a cat, and then the model gives an output with a percentage of how certain it is that there might be a cat in there, and then you can deduce whether there's a cat in the photo or not. And you can give it another photo, et cetera. To do this, you need to somehow represent the photo in a way that the machine can actually read it, and that's done with vectors. Vectors are just representations in a hyperspace, if you will. They look like coordinates in a three-dimensional space, but often with 300, sometimes 900, sometimes 1,500, sometimes even bigger spaces, to actually represent what's in these photos. And what the model does is look at patterns, and based on the patterns it can find in the vector, it makes an estimation: for example, yes, I think there's a cat in this photo, or no, I don't think there's a cat in this photo. And this is how these machine learning models work. They need vectors as input, and they sometimes also produce vectors as output, because that's how the model determines what the pattern is and whether it can find insights in it. And you can represent anything as a vector. It can be an image, but also text.
So a lot of what's happening now with natural language processing has to do with vectors. Regardless of whether it's GloVe or word2vec or fastText or BERT or other transformers, it's always the case that you need vectors in your model to make predictions and determine what something means. Now you might say, well, those are great use cases, right? I can come up with a lot of ideas. For example, instead of recognizing cats in photos, you could come up with a medical use case where you use X-rays to see if somebody maybe broke a bone, and have the machine do that. Or you might want to know if a text is about a certain topic. And that's all true; that's all working fine and working in production, and people are building great things with it. But what if you want to know something about a lot of vectors? A simple example: let's take a Google search query. We go to Google Search and we type in "who is the CEO of Tesla?" Now you see it returns Elon Musk, and it does it pretty quickly. We didn't search for Elon. We didn't search for Musk. We somehow searched for the relation between Tesla and the function of a CEO, and it returns Elon Musk. And to do that, it had to browse through thousands, hundreds of thousands, maybe even millions of pages. So the question is: how could Google do that so fast? What did they do? Well, they were basically doing a reverse search on vectors. And you know companies are good at this. Google Search does it with text, Apple does it with Siri, IBM Watson does it with medical data. But what if you want to build such a solution? What if you have a lot of textual or image data or audio data, and you want to build a solution that can do the same thing these companies do? Well, that's where the vector search engine comes in, because that's the problem that the vector search engine solves. And Weaviate is this vector search engine. So let's talk a little bit more about Weaviate and its inner workings, and then quickly go to the demo so that you can actually see it in action. At the highest level, Weaviate has a very simple class-property structure. That means that you have a data object which has a class, then you have a property, and then you can add your data. And what it is that you're adding to Weaviate really doesn't matter: you can represent news articles, financial transactions, cybersecurity threats, scientific articles, web pages, legal documents, social media posts, insurance documents, or you could even use photos, videos, audio, et cetera. When you now store the data in Weaviate, you can store the object with vectors you've created yourself or by using one of the out-of-the-box vectorizers. The custom ANN implementation has full CRUD support, making Weaviate a full-fledged database that you can use in production.
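The demo that follows searches near a concept and nudges the results toward or away from other concepts. As a hedged sketch of what such a query can look like from the Weaviate Python client (assuming a weaviate-client version that provides the with_near_text builder and a text vectorizer module enabled on the server; the Article class and title property are illustrative), it might be written roughly like this:

```python
import weaviate

client = weaviate.Client("http://localhost:8080")  # assumed local Weaviate instance

# Semantic search near "software business", nudged away from "decentralized networks"
# and toward "git", mirroring the demo described below. Class/property names are
# illustrative, not taken from the talk's actual schema.
near_text = {
    "concepts": ["software business"],
    "moveAwayFrom": {"concepts": ["decentralized networks"], "force": 0.5},
    "moveTo": {"concepts": ["git"], "force": 0.9},
}

result = (
    client.query
    .get("Article", ["title"])
    .with_near_text(near_text)
    .with_limit(3)
    .do()
)

for article in result["data"]["Get"]["Article"]:
    print(article["title"])
```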
So let's do a quick sneak-peek demo based on our out-of-the-box news article dataset. Here you see a bunch of articles, just some random titles. And what we can now do in the vector space is say, well, we want to move near a specific concept, for example "software business", and limit that to three results. And here you see a startup about blockchain, serverless computing, and workplace technology. Now let's move away from a concept in the vector space: we're going to move away from the concept of "decentralized networks", and we're going to set a low force of 50% there. So now the blockchain business should move a bit down, as you see. We can also do the opposite: we can move to a concept. So we can say, let's move to Linus Torvalds' invention, which is Git, of course, and do that with a force of 90%. And now you see, all of a sudden, something related to Git and GitHub pops up in the end results. There are also additional features, for example the semantic path, where you can see which route Weaviate took through the hyperspace from the query to the results. In the documentation on the website, you can find many more additional features and examples of how to use Weaviate. Thank you for watching this lightning talk. I hope you liked it, and I hope you're going to try Weaviate out. We would love to hear your feedback. If you just Google or DuckDuckGo, or whatever search engine you use, and type in Weaviate, you will find the vector search engine Weaviate on GitHub or on our own website. You can leave us messages there, you can join the Slack channel, you can leave questions on Stack Overflow. We'd love to hear what you build, and if you have questions about use cases or what you can do, make sure to reach out. Thank you. Bye. Hello. Hi. How are you? There are no questions yet. Okay, I have a question. Yeah: what about the performance metrics of your queries? Do you have any public reports? Yeah, so we're actually going to publish a blog post next week. We have a retrieval rate of over a million objects in less than 50 milliseconds. The whole point is to make searching through these machine learning data objects fast; that's basically the goal. So the vector representation of the object, you can use Weaviate to search through it very fast. And again, next week we'll have an article with an example, also with some Python code, so that you can actually see how it works. What about data size? The data size of the knowledge base? Yeah, so it just works as a normal database, so you can add as many things as you want; there's no specific limitation. It works the same as if you would, for example, use Solr or something as a search engine; this is just a search engine for machine learning output. That's basically the big difference, if you will. Let's see, because I get all these messages also on the app; I'm doing both this and the app. But that's nice. I have a little question: training time, how many days or hours? Yeah, so that depends on the model you use, but the power sits in the fact that you don't have to retrain. What you do is take a data object, vectorize the data object, add it to Weaviate, and you're done. So if you add a million objects to Weaviate, you can still search through it very fast. Oh, there's a link. Yeah, so you can check it out and try it out if you want to; there's a lot more information on there as well. And we have how many minutes? I think, let's see. Oh, shit. Yeah.
|
Weaviate is a cloud-native, real-time vector search engine that allows you to bring your machine learning models to scale. During this lightning talk, you will see a demo and unique ML use cases Weaviate solves, and you will learn how you can get started with v1.0.0.
|
10.5446/52858 (DOI)
|
Good morning everybody, and welcome to our lecture on the school of the future and remotely accessing files on a distributed infrastructure. And hi everybody also from my side, and welcome to our presentation about the FUSS project, and in particular about a development of it called FUSS Remote Access. My name is Paolo Dongilli, I'm a technical inspector at the Italian school department of the autonomous province of Bolzano in South Tyrol, Italy. I have been coordinating the FUSS project since 2016, and I'm going to introduce it in a few seconds. Marco. Thank you, Paolo. Hi everyone again, my name is Marco Marinello, I'm a computer science student at the Free University of Bolzano, I'm a freelance developer and system engineering instructor, I'm a member of The Document Foundation, and currently I'm the president of the local Linux user group and also a developer on the FUSS project. Thank you, Marco. Well, so what's FUSS? FUSS is a project that was born 16 years ago in the province of Bolzano, which is a small region in the northern part of Italy. FUSS stands for Free Upgrade for a digitally sustainable school, and it was born with the aim of bringing freedom and transparency to the schools of South Tyrol as far as digitalization is concerned. And why? Because we wanted students and teachers to be able to use the same software both at school and at home, without any kind of restriction, in a very transparent way, and this was only possible by means of free software, using GNU/Linux and many free educational tools available for schools. In this way FUSS created public value and over time acquired a public value. All public money invested in this project has been made available as software, with all source code and documentation, since it was born in 2005, and the public value represented by the FUSS project was also recognized by national and international initiatives such as Repubblica Digitale in Italy, by the Ministry of Innovation, and also at the European level, since all projects of Repubblica Digitale are part of the Digital Skills and Jobs Coalition of the European Commission. FUSS is also part of the Developers Italia initiative and is one of the projects in line with the Code for Digital Administration, which can be considered the ratification in Italy of the Public Money, Public Code initiative by the Free Software Foundation Europe. At the local level, FUSS, among other affiliations and recognitions, is a member of the Software All-in-Network for Sustainability, which aims to gather initiatives that follow the 2030 Agenda for Sustainable Development. FUSS, of course, is not only a project of sustainable digitalization; it is also a client and server GNU/Linux distribution based on Debian GNU/Linux 10 Buster, and it is used in around 80 schools, which count around 4,500 PCs and 64 servers, virtualized with Proxmox. Over the past years we have gathered around 200 desktop applications, which are used in all grades and in all schools. The coverage is not only in the schools of South Tyrol: there are also some schools outside our region in Italy. FUSS is not only used in schools but also at home, as I said before, and during this long pandemic period many students were in need of PCs and notebooks to be able to remotely follow lessons and study at home.
Therefore, associations such as the Linux user group Bozen-Bolzano-Bulsan and ADA created a synergy with other associations in our territory, starting a project called School Swap with the aim of reducing the digital divide among students: together they gathered used PCs, cleaned them, and installed GNU/Linux and free educational software on them. The result was that in a few months more than 100 PCs were brought to students in need who didn't have any PC at home. Now let's quickly get back to the concept of digital sustainability, which is quite important for us, a concept we stick to. We introduced it and we use it because it is an extension of the use of free software: it comprehends two other important objectives, which are the use of standard open formats and freely available teaching materials. Free software, open formats, and freely available teaching materials are the three important pillars that leverage the free sharing of knowledge in schools. FUSS and free software help explain to teachers and students that technologies in schools need to follow a virtuous circle comprehending the four well-known liberties. In our schools we didn't want technologies that bring our schools into a lock-in situation, which is dangerous for students and teachers and also for families. We chose technologies we were able to control. We decided to invest public money reducing the operational expenses which are typical of proprietary software contracts. We didn't want to be controlled by technologies transforming people's data into products used by big multinational companies. In particular, as far as web applications are concerned, we decided to use free software products where both infrastructure and software are under our control, and this is the case for the learning platforms we are using, such as Moodle or Chamilo; office automation software such as LibreOffice Online; the collaboration suite Nextcloud; and video conferencing systems such as BigBlueButton. For the past 16 years FUSS has listened carefully to suggestions coming from teachers and students, but one piece was missing: making users' files available remotely, a solution which turned out to be very useful during the pandemic period, and not only then. Now Marco will explain the great job he did in designing and realizing this architecture. So, what was the missing piece in this suite of software? The piece was making files available to users remotely: when a teacher or a student goes outside the school, in principle it was not possible for them to access the files they had saved at school, so the only way to transport files from and to the school was either to put them on a USB stick or to use a cloud service. So let's take a step back and see how the school network works. There's the domain controller, which is a server that runs OpenLDAP and an NFS version 4 share protected with Kerberos, but there's also a firewall that protects the server from being accessed remotely, so accessing it via the NFS protocol from outside is unfortunately not possible. So we needed to pretend to be a client of the server itself, not to access it from the WAN. So what's the state of the art? FUSS Remote Access is based on the latest enterprise-ready software, like Nextcloud. I'm sure you already know it: Nextcloud is widely deployed by public administrations, enterprises, small companies, and private users.
It has a very large community, it is shipped with Docker, and it natively supports external storage and LDAP authentication, so these two features were of course essential for this project, since our aim was to authenticate users with the credentials they already have and provide them with files that are already stored on the school server. Unfortunately, there are not so many choices in the field of online collaboration, and one solution is LibreOffice Online, which is a server built upon the original core of LibreOffice that allows, via the WOPI protocol, provided files to be edited and rendered directly in the browser. And about the internal PKI and ACME: we are using the currently most stable ACME client, Let's Encrypt for providing our public infrastructure with free SSL certificates, and Smallstep for the internal PKI and ACME, which is a really important part that allows us to protect the traffic between the central proxy and the delegate servers. So, to sum up, what is FUSS Remote Access? First of all, it's a private cloud provided by the autonomous province of Bolzano with its own infrastructure. It's an online collaboration suite, since it allows users to collaboratively edit their documents, and it's also a solution for accessing your own data outside the school network. Why this solution? First of all, to keep data under control and comply with the General Data Protection Regulation: since all files are stored on the school server used every day, there is no problem from this point of view with data transfers or anything else. It's distributed storage, since it uses the storage of the single servers, so it optimizes the available space on the schools' servers, and it even avoids duplication: instead of having a copy of the same file in many different locations, the original copy stays on the FUSS server itself. And finally, it uses single sign-on, so you can use the same credentials as on the school network to access it. What's the infrastructure behind this solution? The main part is the subdomain access.fuss.bz.it, which is its own DNS zone provided by a server separate from fuss.bz.it, and on this domain we provide both the balancer proxy that allows access to the delegate servers and the online collaboration suite, which is an instance of LibreOffice Online. On the private infrastructure we have the internal ACME, which is used to issue valid SSL certificates for the delegate servers, and the private DNS, which is of course used as a helper for ACME. Finally, every school has its own FUSS server, as it has always had, and alongside that server there is now also a Remote Access delegate server. As you may have realized from what Paolo said, deploying this solution on 80 different servers may become quite challenging. So this is how we deploy it. First of all, a Debian template is shipped to the virtualization environment of every school, configured, and then deployed alongside the main, already running FUSS server. Then, thanks to Ansible, we complete the configuration. In the first step we reconfigure our private DNS and issue for the new virtual machine a certificate suitable for its new fully qualified domain name, which is of course internal, so a private domain.
And on the second stage we even reconfigure public DNS we ask let's encrypt to issue a public certificate for this new domain and we are configured for our plans proxy to point a public domain to the new delegate server. So let's take a second to talk about the scalability of this project. This project is really easy to scale we can just add a few more proxy servers maybe switching from Apache to HAProxy we could be for sure a really bad solution and with this we could even scale the Oceans instances to allow more users to color the entire function on purity and with this the solution is easily scalable. Another enhancement we could add is to provide a geographically different entry points to the virtual private network so that the inbound traffic will be split onto two or more nodes. In conclusion take a look on how it looks that's the homepage of Pusrano access it's basically a list of the schools that currently can use this service. Then we have the next log in, side with personalized logo of Pus and the name of the school and then the standard dashboard of next log 20, the file browsing and the online editing with river office online. So what have been the positive outcomes of this project? First of all offer an important service to the users that's what I really really asked service to have and it can improve the user experience a lot. We contributed to the liberal office documentation and we wrote a part in their public wiki about how to build the liberal office online. We even contributed to those small stack development yesterday the issue of integration with Proxima certification system has been closed so a really big achievement and we invested in local skills so of course technicians and other personnel has to be formed to use cloud unit and these technologies so really a positive outcome for our territory. This study has been published between the conference data mantica or edition of 2020 you can find the paper on the website of the conference as a link or as a download. So the final acknowledgments important people that contributed to this project are Dr. Semen, Emilio Lossori and Drabona and Stephanie Fiora. Thank you again. So thank you for your attention we'll be here to answer your question if you are finding and any further information visit fullstopp.it or welcome to mail doing fullstopp.it. Thank you again.
|
How can users of your network be allowed not only to remotely access their files but also to collaboratively edit them? Docker, NextCloud, LibreOffice Online and LDAP are the pillars of the proposed solution. The talk will start describing the context where this proposal was born i.e. the FUSS Project. The analysis of the problem will follow along with the development details of the solution and suggested deployment strategies. FUSS is both a digital sustainability project launched back in 2005 and a GNU/Linux, Debian-based distribution for schools, currently used in around 80 schools in northern Italy’s South Tyrol. The presentation, held by Paolo Dongilli (FUSS Project coordinator) and Marco Marinello (developer) will quickly go through the first 15 years of life of the project and then deeply inspect the proposed solution called "FUSS Remote Access" which helped teachers accessing their didactic materials and files left in schools' servers during lock-downs caused by the Covid-19 pandemic.
|
10.5446/52859 (DOI)
|
Welcome everyone. Thanks a lot for taking time to connect and watch this presentation. I'm really honored to have this opportunity to talk in this symposium. I hope that you will enjoy this presentation. So today I will talk about this imposter syndrome, I don't know if you ever heard about it, how to defeat it and turn it into a booster for our growth. And for doing that I will tell you a story, mine. So let me introduce myself first. Hello everyone, I'm Matteo, I'm a senior software developer, pizza lover, traveler, runner, blockchain and crypto enthusiast. Sometimes not in disorder. I was born in Foggia in the south of Italy. Then I moved to Rome for a few years working and living there. And right now I'm living in Barcelona and I'm working for a US company named Coachava, as an engineering manager. So now that you know me, I would love to know you a bit as well because surely when I'm having this presentation in on-site meetups or conferences, I'm used to ask some questions. Today we don't have this chance, so usually what I ask is, have you ever felt like you don't belong where you are? Is everyone at your office way more talented than you? Do you think that you don't deserve what you achieved? And I did this presentation quite a few times and data are more or less the same every time. People are feeling not that well in their workplace. Here you can see 82%, at least one time in their lifetime. They felt like they were not belonging where they were and where they are. And more or less 50% of people are feeling this way right now at their current workplace. So is this happened to you as well, at least once in your lifetime? Well, as this data already showed us, yes, it happened. And so I'm in good company and we all have this kind of weird feeling at least once in our lifetime. So why they had to talk about this imposter syndrome? Well, a couple of years ago I was on the beach in Mongat. I read an article and it was quite interesting around this topic. So I decided to write a post on LinkedIn. But as you know, over there we have 1,500 charts to express our faults. And it was not enough for me. So I started preparing for this. And then from some slides I had more than 100 slides. They are big pictures so there is not so much text around it. So what I want to do is sharing my experience with you and show you that it's not that bad after all. So let's start with my story. Let's start from the beginning. It's not that bad after all, so let's start with my story. Let's start from January 18, 2015. I was living in Milan and I had this offer from Viacom that is a multinational company based in US. They own many media brands like Paramount, Spike, MTV. And there were many, many interesting projects to work on in this company. Like for example, I worked on the API for Playplex. It was the first attempt to have a unified mobile platform for consuming long-form videos and short-form videos. And it was a huge success. In fact, we reached a huge amount of people like 40 countries, 15 million downloads, 30 million long-form episodes, streams, viewed each month back in 2016. So it was a huge pillar in the strategy of Viacom. And then after that, we started working on the same concept that was having one unique platform, also for websites. And they worked also on that project for all the brands in Viacom. So exciting times, a lot of challenges. I worked with really good people. I learned really a lot. But then after like three years working on these projects, I was looking for something different. 
Because, yeah, I had a great job. I was working with amazing people and in a great company. But I needed something more and different. And so I started looking around. And then a few months later, I went to Croatia, I was on vacation. And I was questioning myself, what am I still doing in Milan? Because this is what I would like to have in my life. See, sun, sun. It was kind of a recurrent thought. But my ideas were not clear because I was going on LinkedIn, looking to job post. But with not clear idea, I never made the next step that was applying to job opportunities. I tried just some, just for the sake of it. Just for keeping me entertained and trained. And in the meanwhile, I was also studying some topics that were kind of interesting to me. But nothing more than that. Then after this vacation, I came back to Milan and I had still that idea stuck in my mind. As you can see here, Milan is really, really nice. Really beautiful. But then it's also cold and gray for long months in autumn and winter. So again, I was on LinkedIn looking for good opportunities. And as you know, there are way too many options out there. So I needed to narrow down the list. And I remember that I always loved Barcelona. So I started focusing on this city, looking for some job openings that may be fitting my skillset and my ambitions. And I found this opportunity that was quite good. They were looking for senior PHP developers that were interested in pursuing a career in Golan. And so I saw some good overlapping with my experience and my ambitions for the future learning new stuff, improving my skills. Problem is, this opportunity was not active anymore. Was inspired. Nevertheless, I wrote a cold email to the recruiter, Barbara Iranzu, presenting myself and asking if the position was still available. Job posting was not active anymore. So there were high chances that it was not active. And they were already hiring someone, but I tried. And I was lucky because after a few days, Barbara answered me and we started the recruitment process. So I had some interviews with her and then with some people from the company. And in the end, after these steps, I landed a job offer. So I was moving to Barcelona. It was really, really great. Because during those months from September to December, when then I moved to Barcelona, there were acting times because I had to continue working for VACOM. I was packing, doing all the bureaucracy stuff for going to Barcelona. Then December came and it was time to say goodbye to my dear friends over there. And on December the 27th, I had my trolley, my laptop, and my one-way ticket to Barcelona ready to start a new chapter of my life. And on January the 2nd, 2019, I started onboarding in Ubico. Then problems started, I would say, again, because everyone when joins a new company is kind of overwhelmed by new information, new stuff to learn. And here probably it was too much to me because I was starting asking myself, what can I do in here? Because I was really overwhelmed. I was used to work on some technologies like PHP or Java, using GACMAI, MongoDB, AWS. And here instead we were using Golang, Kafka, Google Cloud Platform, on connected cars that was completely new to me. So I was questioning myself. I had many, many doubts in my mind. And the questions that I asked you at the beginning were bugging me. And in these offices, we were more talented than me. How is it possible that they hired me in the first place? What were they thinking? And what the hell am I doing here? 
I don't belong here. So I had doubts and even more doubts. So was it imposter syndrome, self-doubt, or whatever you want to call it? Well, I was feeling that way. So let's start with some definitions. What is this imposter syndrome? Well, imposter syndrome was first defined in 78 by two psychologists, Pauline Clance and Susan Ames, as a feeling of phoniness in people who believe that they are not intelligent, capable, or creative, despite the evidence of high achievements. Another definition coming from the Cambridge dictionary is the feeling that your achievements are not real or that you don't deserve praise or success. Last definition that I want to give you is from a psychologist, Dr. Valerie Young. People who feel like imposters have high self-expectations around competence. No one likes to fail. But imposters experience shame when they fail. So when I was doing my research around this topic, I realized that there is no one imposter syndrome. There are actually five imposter syndromes. There is the perfectionist that is setting high goals for themselves. These people are control freaks. They feel like if they want something, they don't write. They have to do by themselves. Then there are the superwomen or supermen. They often push themselves to work harder and harder to measure up. Then there are natural geniuses that if they take a long time to master something, they feel shame and the same if they don't do things right at the first line. And there is the soloist that refused any assistance from other people for proving their worth. And they think that if they ask for help, they will reveal their funniness. They are not competent. And then there is the expert that believes that they will never know enough and they feel of being exposed as inexperienced or unknowledgeable. When I was doing my research, I also realized that there are people that are feeling like the exact opposite. They are on the exact opposite side of the spectrum. And these people believe they are smarter and more capable than they really are. And I would say that we meet them quite frequently because, imagine, during a family gathering, it would be Christmas, Easter or any other occasion, I think that it happened to almost everyone at the same table, someone that was talking a lot about a topic. And really, everyone at the table realized that this person doesn't know anything about what he's talking. Nevertheless, this person is continuing talking, talking, talking, talking. So there is a definition for this condition. This cognitive bias is called Dunning-Kruger effect that is named after the researchers David Dunning and Justin Kruger, two psychologists that first described it. They did some researches around this phenomenon. They also did some tests and investigations. And they came up to very interesting results. Like if you look at this diagram, you would see that people who scored in the lowest percentiles in the test of grammar, humor, and logic also tended to overestimate how well they performed. Basically, their tests placed them in the 12th percentile, but they estimated that they were in the 62nd percentile. Also note that people that got the highest scores, well, they underestimated their performance. So for using simple words, high achievers have self-doubts, while low achievers are flamboyant. Another way for describing this Dunning-Kruger effect is the knowledge path. Especially when we approach a new topic to study, our knowledge is very low around the topic. So we start studying this topic. 
At some point, we know something more and we are feeling way overconfident. And so we reach this peak of monstupidity. This could only drive us to bad effects. In fact, we usually fail because we are overconfident and we fall down in the value of despair. In this process, nevertheless, we are learning something. So our knowledge is growing while our confidence got to the minimum. If we continue studying, then we go through the slope of enlightenment. So our knowledge is growing up at the same pace with confidence until we reach the plateau of sustainability, where we are kind of gurus and we have very good confidence around the topic. This Dunning-Kruger effect is really, really bad because it drives people and doing stuff that is crazy. Like there was a guy a few years ago that confused the invisible ink properties of lemon juice with the optical properties of the things that were around us. He thought that this property of the lemon juice was applicable to everything. And so what he did was smearing all over his face lemon juice and he went to rob two banks. And after a few hours, police came after him and arrested him. And they said, but how is it possible? I had lemon juice on my face. It's impossible that you were able to see me. And actually this guy was not stupid because before going to the banks, he made a test. He smelt lemon all over his face and he took a selfie with a Polaroid. Problem was that the film was defective and so picture was white. So this guy got lemons from life and instead of preparing a simple margarita, decided to rob a bag. Side effect of Dunning-Kruger effect. So this Dunning-Kruger effect is showing us that incompetent people are not only poor performers, but they are also unable to assess the quality of their own work. They are always overestimating their knowledge and their abilities. Also low performance are unable to recognize skills and competence of other people. And that's also part of the reason why they consistently feel better, more capable and more knowledgeable than others. Basically in many cases, this incompetence doesn't leave people disoriented or perplexed or cautious. Instead, these incompetent people are often blessed by an inappropriate confidence. They are buoyed by something that feels to them like knowledge. Just to be clear, not only incompetent people are affected by Dunning-Kruger effect. Even people that are really good experts in some areas of knowledge may mistakenly believe that their intelligence and knowledge can carry over into other areas where they are less familiar. Let me make an example, a brilliant scientist, for example, might be a poor writer, but he is not able to assess it. So could happen. Reality is that everyone is susceptible to this phenomenon. And in fact, most of us probably experience this Dunning-Kruger effect quite regularly. I cannot say if I'm affected by Dunning-Kruger on some topics, but I can say for sure that self-doubts, well, yes, I have them frequently. So am I the only one having these self-doubts? Well, as I showed you at the beginning of the presentation, when I had this opportunity to talk with people around this topic and running some polls, I'm not the only one. More than all people are feeling this way. When there is this researcher, Sandip Ravindan, said that it has been estimated that nearly 70% of people will experience signs and symptoms of imposter phenomenon at least once in their life. 
So this self-doubts, this imposter syndrome are quite common across all the industries, but the increasing pressure to be successful in the IT field is taking its toll on employees, affecting more than half workers, me included, as it seems. On this topic, there is an anonymous workplace social network named Blind that conducted a survey. In summer of 2018, they asked their users, do you suffer from imposter syndrome? Blind's user base includes thousands of people working for Microsoft, Amazon, Google, just to name a few. At the end of the survey, around 10,000 people answered to this poll, and Blind found out that roughly 58% of people suffered from imposter syndrome. And then, obviously, it was depending also by the company. In some companies, people were more affected by this situation. In other companies, a bit less, but as you can see, Apple and Cisco, workers from those companies, around 45% of people were facing this imposter syndrome. So it's a huge, a huge number. Which are the common traits of this imposter syndrome? Well, you have difficulty accepting praise. You are an over-worker. You feel the need to be the best. You are described as a perfectionist. The fear of failure can paralyze you. You avoid showing confidence. You actually dread success. You compare your struggles and obstacles to those of others. You associate praise with charm over actual talent. You focus more on what you haven't done, and you are convinced that you are not enough. So which are the factors that are contributing to this syndrome? Well, I would say that, for starters, for sure, nature and nurture. Nature in the sense of the fact that some people are more than others, emotionally reactive and self-focused. And nurture in the form of childhood conditioning, because I think that it happened to every one of us that was kind of labeled in a way like, it's a good child or a clever child or a funny child, a shy child. And in the process, we suppress our real feelings. Then obviously there are also sometimes family expectations and perfectionist parents. I think that at least at least once in their lifetime, everyone earning from their parents, you can do better, and all you can. That is not bad, but maybe hearing it too much could affect some emotionally reactive people. And then obviously new settings, academic or professional, like going to university or changing job. Gender stereotypes or racial identities, and then also medical conditions like anxiety and depression, all these contributes to this syndrome. How this syndrome is affecting us, well, as you may imagine, this reduces our productivity, slowing us down, then makes us poor decision makers, because we are not willingly sharing our ideas and our thoughts. We engage in self-protective behaviors. We are seeking regularly external validation. We are poor team members because we are not actively taking part in storming phases. And we feel the need to prepare or work much harder than necessary just to make us feel confident. And obviously all this costs something to us. I would say that it costs us really a lot, because we may be missing career or life opportunities with holding ourselves. It also makes more difficult to grow as human beings, because we are defending ourselves from the outside world. We lose sight of who we are, and basically we are sabotaging our lives. You may think that only regular people like us are affected by this imposter syndrome, but also famous people are affected by it. 
For example, I don't know if you know, Sherry Sandberg, she's the CEO of Facebook, and she has got a foundation in her name. Well, she said that when she was in high school, she was kind of afraid of embarrassing herself in front of all the class. And even now that she's the CEO of Facebook, she has got a big foundation, there are still days when she wakes up and feels like a fraud. And he had this big problem with self-image and very low self-esteem. And so he was hiding behind this writing and performing. Serena Williams, basically she was living in the shadow of her sister, Venus. Her strip, she felt like not talented enough, and so she was asking, why people want to see me in a movie? Same was happening to Tom Hanks, Lady Gaga, for example. Maybe you know that when she was in high school, she was bullied, and there was a Facebook page about her describing her as a loser, and she said that even now that she's a superstar, there are mornings that she wakes up and she feels like an impostor. And she needs to get over it because she's an example for all the people out there that are looking at her as an example. And then Affinton, another example of people that, famous people that are affected by this feeling. The most relevant example for me, when I did my research, was this one. There is this writer, Neil Gaiman, that a few years ago attended to scientific gathering, they were giving prizes to scientists and artists, and at some point he started talking with an old gentleman about many things. This gentleman said, look at these people, and when I look at them, I think, what can I do in here? Because they made huge things in their life. I just went where I was sent. I was just following orders. And then Neil Gaiman said, okay, but you were the first man on the moon, and this counts something. And I would say that this makes me feel better, and also Neil Gaiman felt better because if Neil Armstrong was feeling like an impostor, well, maybe everyone did. So is this impostor syndrome or bad? Well, as I showed you, many successful people have experienced and are already still experiencing this feeling, but they achieved a lot in their life. So the key is what we do with this feeling. Obviously, if we allow this feeling to push us down, this is limiting because we are hiding who we really are and what we are capable of. But if we are able to recognize the situation and acknowledge it, see and accept who we are and find the courage to go for what we want, then we can use this impostor syndrome as a boost, as a fuel for our journey. This lady, Eli Spittel, she's a software engineer, and she said that she credits her success to impostor syndrome because she would have worked half as hard as she did without it because she was constantly trying to prove to herself and other people that she can do this. Okay, so what should we do for managing this situation? Well, I think that we should be open with people around us because they are surely eager to help. We should find a mentor or a friend to talk about the situation, to take inspiration from. We should identify our strength, recognize our victories. We should not be afraid of making mistakes. We should accept that we had some role in our successes. We should store in a file all nice things that people told us. And then last point that is probably something to you stupid, but it's the most relevant at all is we will die eventually, so stop being scared. So back to my story then. Since I was a child, I was always interested in martial arts. 
I studied karate for many years, and one specialty of karate is named kata. That is basically a detailed choreographed pattern of movements made to be practiced alone. What you are doing with this exercise is fighting against an invisible enemy, using your best moves. So going through the points that I earlier mentioned to you, I was able to put things in the right perspective and regain balance. So be open with people around us. They are surely eager to help. Well, George, Ed, Oecto, Adam, they helped me a lot as well as a lot of people in Ubiko. And I just hope that I've been able to do the same for them. And find a mentor or a friend to talk about situations, to take inspiration from. And here you can see some people that inspired me, family, friends, and some famous people. Identify our strength, recognize our victories. I don't know if I heard this sentence in a podcast, I read it on a book, but I truly believe that people born in small towns quite often have huge dreams and a limitless dedication to make them come true. Well I was born in Forge, a small town in the south of Italy. You can see there in the map. And I had them, they still have big dreams and goals to achieve. And I'm stubborn and I will not stop until I get there. Just a few examples. Well, as I said, I was born in Forge and in the center of this picture with the red arrow, you can see me back in the high school. And on the right, you can see me a few years later in New York Times Square, working for Biacom, a huge achievement. And another example, as I said, in January 2018, I was looking for something different. I had no clear idea, so I was now and then going or LinkedIn checking for new jobs. But I was also studying some topics that were not related to my day-to-day job. One of these topics that always fascinated me was crypto and blockchain. I was reading papers and stuff, but I never studied formally these topics. So I took the chance to enroll in a couple of courses, one on the economics of Bitcoin held by University of Nicosia with the well-renowned Bitcoin experts, Andreas Antonopoulos and Antonis Polimitis. And then another course from Princeton on Coursera regarding implementing smart contracts and cryptos on blockchain. When I wrote this stuff on my LinkedIn, I was receiving a lot of messages from many, many recruiters because there was and there still is a hype around these topics. And one of these opportunities that came on LinkedIn was quite appealing. So I started, like as a joke, as a test for myself, I was trying to get through all the steps of this recruitment process. I enrolled in this recruitment process and after long steps, I got a job from this company that is named Bitmain. That is the biggest crypto related tech company in the world with a massive influence on reach. It was a huge achievement. Another last example, well, I always been quite shy, but nevertheless, I'm here talking to you. And in November 2019, I went to Chemin in Russia, opening Google conference with this talk as a keynote. Don't be afraid of making mistakes and there you can see some of the mistakes that they made, fix, outfix, broken pipelines, a lot of mistakes. Hopefully we learn from our mistakes and so we make different mistakes the next time. Except that we have had some role in our successes. Well, I think that it was not only luck if I worked for great companies with amazing people. 
It was not luck at all if I was able to pass technical test, interviews, and then got job offers from many companies, from startups to big corporations. I was just bold enough to chase and face those challenges, getting out of my comfort zone. Story in a file, all nice things that people told us. This paper is basically the farewell letter that my colleagues in Viacom gave me when I was leaving Milan. It was kind of overwhelming because they said to me so many nice things. And here are also messages that I got from my colleagues and friends, Brad from US living in Italy, Beppe from Milan, Natalie from Berlin, Daria from Minsk, Fernando from Berlin, Lily from New York, Kai from Berlin. I was, they wrote me things that surprised me because I was not feeling that way. I will read you just a couple of lines. There can be only one Matto Bruno. Thank you for being such a decent human being and pleasure to work with. The new team that you will work with doesn't know how blessed they are from Brad. I was not feeling that way. I didn't know that I was able to do this impact in the company with people. I was not acknowledging it. Natalie, thank you for your patience and guidance through all these years. Your humor has gotten me through many frustrating days. You are an incredibly skilled developer and just all around. I was some guy. Last from Daria, you've always been the most supportive and friendly colleague I ever met and you were always there if any help needed. And here, messages from my colleagues when I was in Ubico. Thanks for helping me anyways, always. Glad to have you as a colleague for not blaming anyone, you rock man, real tricks there, professional ease, great enabler, unlimited patience. As you can see, people were saying to me, really nice things and this helps me and still helps me. It makes me feel good. So it's nice. So my tip for you is every time you get a message or an email that contains something nice about you, well, save it. Then we will dive into it. So stop being scared. On this topic, I would like to read you some offerings coming from some writers and thinkers. First one is from Tony Robbins. He said, stop being afraid of what would go wrong and start being excited of what could go right. Then from Susie Cassin. Doubt kills more dreams than failure ever will. Last one from Karen Salmondson. Your mission. Be so busy loving your life that you have no time for hate, regret and fear. So how can we turn this bad feeling in positive energy? Well, remember that you achieved a lot in your life. Just think about it. So for this reason, I would suggest you to try to do the same exercise that I just showed you going through all these steps. You will be surprised by the fact that you did great so far. While you are doing this exercise, enjoy the exciting feeling of looking back at your life and your career and seeing how many wonderful things happen to you. And repeat these steps every time you need because an extra boost of energy is always welcome. So if you go to this point and you recognize that you are good, well congratulations, you did a great job. Please keep in mind, have no fear of getting out of your comfort zone because it's there where great things happen. No one is immune from these self-doubts. We should manage them and use them at our advantage to bloom and achieve our goals. We should talk more to ourselves without turning us down as you know. Every one of us is constantly thinking and there is kind of an inner dialogue between us and ourselves. 
And most of the times we are saying things that are not nice to ourselves. So what we should do is stop in doing that. We should do like sport champions that are using this training of talking positively to ourselves in order to get focused, motivated and energized. We should do the same as sport champions because we are champions too. And then we should talk to people and learn from them because everyone is a master in something. And one more thing, the most important one, don't stop believing in yourself. That's it. Thanks a lot. Thank you. Thank you. Thank you. Thank you. Thank you.
|
Do you feel like you don't belong, you don't deserve what you achieved, everyone in your office is more talented than you? Do you have imposter syndrome... too? Imposter syndrome is common across all industries, but the increasing pressure to be successful in IT is taking its toll on employees, affecting more than half workers, me included :) After many years working in tech for a lot of companies (from startups to big corporations) in many business fields, I found a way to overcome self-doubt and turn this weird feeling in a booster for greater achievements... and I want to share it with you!
|
10.5446/52860 (DOI)
|
Hello, everyone. I'm here to talk a little bit about some trends in the open source space that I find troubling and that I think are making the open source industry move in ways that we had never intended. And they are things that trouble me a little bit. I'm worried about how open and how much collaboration and innovation are going to be happening over the next few years if these trends continue. And so I wanted to talk to you a little bit about that today. But I'm going to start with an introduction. My name's Matt. Matt Yankovic. I work for Percona. I am the head of open source strategy or the Haas, if you will, of Percona. I've been here for over 11 years. If you're not familiar with Percona, we offer open source database solutions, software and services. We've been doing that for quite a while. I have over 15 years in the FOSS space. I've worked with companies like Minus QLAB, Sun Micro Systems, Percona, no matter most. Most of my time has actually been in the DBA in this admin space. A lot of the backend infrastructure I have contributed to different projects over the years. And you can find my email here if you'd like to follow up, ask questions. I'm happy to get on a phone call, get on a Zoom, talk to you guys about this topic or other topics that might be near and dear to my heart. And you can also follow me on Twitter. So without further ado, let me kind of walk through my goal. Really what I'd like to have come out of this session is for all of us to understand this march towards commercialization in the open source space and what it's doing to us. You see, I think that what's happening is we are being pulled into a direction where money and shareholders are now ruling the roost. They're driving what people think open source should be. And I'm worried that the innovation and the collaboration that we all love in the open source space is at risk because of it. And so we're going to talk about those. We're going to talk about the dangers. We're going to talk about the reasons. And hopefully by the end of this talk, you'll get a full understanding of where I see things moving and then we can continue conversations after this on potentially how to fix it. So I want to start with something near and dear to my heart. See I am a child of the 80s. So I was born in 76. I grew up with Saturday morning cartoons. My favorite cartoon is actually over here is Transformers. And so Transformers was by far my favorite. In fact, Transformers came on at 7am on Saturday morning, which meant that every Saturday morning for several years, I was up at 6.30 to watch Transformers. Of course, I bought into all of the hype, including wanting all the toys. And I would just ask, can I have Transformers? Can I have Transformers? And of course, the one time of year when you can really kind of guarantee you're going to get some kind of toy generally is Christmas time. And so at Christmas, I bothered all my aunts, my uncles, my grandparents, my mom, everybody about, you know what I want? You know what I want from Santa? You know what I want? I want Transformers, Transformers, Transformers, Transformers. And lo and behold, you get to Christmas morning. You're all set to go. You're all excited because you're going to go play with Optimus Prime. You're going to go get your star scream. It's going to be so exciting. And you go and you see all these presents and you're there with your cousins and you're all in the Christmasy mood. And you open up the presents and you're like, yes, no, wait a minute. What are those? 
They're GoBots. Of course, parents, grandparents, older people, a robot is a robot, right? No, GoBots are not Transformers. Oh my gosh. How can you get that wrong? But they did consistently for like three years and I don't understand it. But I'm going to educate you and I'm going to educate them maybe for those that are still around. Transformer versus GoBot. You see, on the right hand side, that is a Transformer. On the left hand side, that is a GoBot. Do you notice how different they are? Yes, they are different. One is pretty awesome. It's very sleek. It looks like something that I would want to play with. The other one looks like something that, well, frankly, was slapped together and tried to just manufacture to get some money because GoBots were kind of the cheap knockoff of what Transformers were. Let's be honest, or Transformers were the better version. Let's put it that way. I don't want the shittier Transformer. I want the Transformer. A lot of times in the open source space, what you see is we want the Transformer experience but we end up with the GoBot. Sometimes that's caused by decisions that are outside of our control. Sometimes that's caused by companies because success, you see, brings imitators. It's not just here. Let's talk about one of the most successful, awesomest movies of my childhood, probably your favorite as well, Star Wars. Who doesn't love Star Wars? See Star Wars was one of the most successful franchises in the history of movies. It spawned what, is it 11 movies now? Countless TV shows? How many toys? I mean, oh my God. Star Wars was the bomb. And because it was so successful, you saw imitators. Now some imitators were obvious imitators. So I don't know if anybody has heard of Turkish Star Wars. Oh, Turkish Star Wars. It is by far a horribly cheesy, horrible production value knockoff of Star Wars. But it's so bad it actually transcends. No one, however, is going to mistake Turkish Star Wars for Star Wars. And similarly, when you have the knockoff, so you have something that imitates, you can kind of tell when it's of subpar quality. The problem is sometimes companies and franchises will actually take something that was so instrumental, so awesome, so awe-inspiring. And they'll try and make it better because they're chasing additional dollars, additional fans, additional reach. And then you get stuff like, you know, you know, you know, you know, this guy right back there, Jar Jar Banks, which probably was the worst invention Star Wars ever had. And when you think about it, you have open source companies who start off as that Star Wars on the galactic trajectory to awesomeness and people love it. And then they start doing things that make you think like, man, they really Jar Jar'd that. And this happens all the time as we try and improve things because everybody wants to imitate the winner and they want to win more. And Steven Spielberg famously said, everybody loves a winner, but nobody loves a winner. And that's true because you see companies get jealous of other people who are successful. You see people who sit there and say, man, I developed that software. They stole it. They're using it to make more money than I am. Or how much money can I make if they actually paid me more? And so you get this friction and this infighting and you have companies that get jealous. And you know, let's look at an example in the real world, which is Microsoft. Microsoft's a classic example of this. Microsoft called Linux a cancer. They hated open source. 
I think Balmer was the enemy of open source for years and years and years. And in fact, today, there's probably many people who are watching this that still hold ill will towards Microsoft. But where's Microsoft now? Woohoo! They love a winner too. Everything that you see on Microsoft's website is about Linux, open source, and they have done a lot to contribute to sponsoring events, to coming and sending people to events on open source to talk. They've contributed things back to the open source community. They've started to move their software into the open source space. They've done a ton. And it's kind of like they've just said, oops, my bad, let's go ahead and let's start this over because everyone loves a winner. And what you start to realize, especially as you move to this as a service space, open source is the great enabler of that. Open source is what has driven things like Azure, AWS, all these other cloud providers, all these other as a service. If you think about the early days of the web, you wouldn't have the Facebooks or the Twitter's or the LinkedIn's or the Amazon's or several other companies unless there was open source out there. And that's something that we have to be mindful of, that open source has driven more innovation than probably any technology in the last 10 to 15 years. And that's a good thing. But we're entering a dangerous time for open source because it has won so much. It has become such a standard. Now we're starting to see the imitators. We're starting to see the people who aren't true open source believers or don't follow an open source philosophy who are in it for less than stellar reasons. They're in it for the money. And so I'm going to talk a little bit about the business side here. So bear with me for five to 10 minutes here when I talk through this. My question to you is as a community, as a project maintainer, a creator, what does your definition of success look like? Because I guarantee it's probably going to be different than those of investors and those of people who are running a lot of open source companies. And it's that difference that I think is starting to cause a wedge in the community. If you think about this, ask yourself as an individual, when you first started using or contributing or learning about open source, why did you do it? Did you do it to become rich? Did you do it because it was a better way to do things? Did you do it because it made your job easier? Because you believed in it. You loved it. You wanted to contribute back. You wanted to make the world a better place. How you answer that question is going to dictate how you view this topic and where you think that the industry is going. Because the answer to one will then shade all the others. And let me say this. Above all, open source as a business is hard. Open source is not a business model, but a lot of people adopt it as a business model. It is the foundation for their business model. It is the enabler of their business model. And why is open source hard? There's a few reasons. First, let me throw out a couple of statistics for you. You might not know, we did an open source survey where we surveyed the database community and I think this is relevant across all spaces. Two thirds of companies using open source said they prefer to not pay for enterprise version or a support contract or anything. They just want to use the free community version, build their applications off of it. They'll self support. They'll go out and find someone if they need to later on. 
You're not going to worry about it. So a large portion of the user base in the community is predisposed to not purchase. That makes it really hard for companies to make a go of it. And I know many companies or many individuals who have tried to build their project into an open source company and tried to basically build a career off of it and have had struggles because it's hard to find people who will donate or contribute. And this is part of the reason why. There's a lot of people who want to use, want to maybe contribute back stuff in the free space, but they're just not willing to pay. And they want to innovate. They want to move quick. Now, of the third who will pay, most are focused on buying it or using it for support type reasons or insurance. Insurance is great, except insurance is the only thing most people pay for that they never want to use. How many people here want to use their life insurance? Nobody does. Nobody wants to use life insurance because that's a really bad outcome for you if you have to use your life insurance. Actually, it would be your relatives using your life insurance. But when you look at that with a support or insurance, the mindset is unless I'm opening tickets, unless I'm having problems, is the value there. And so if you get to a point where your systems are fairly stable, a lot of people will say, I don't need support. I'll just go hire a consultant if I need it. I'll go to the community and get that help. And that makes this model very difficult for a lot of folks. And the retention rate for that service generally is lower than proprietary because you're not locked in. You're forced to. You have options. So as a company, you have to provide immense value and you have to continually better yourself and give more to the community. It is about giving more. And if you do, you can be very successful from a business perspective, but it is a constant. What have you done for me lately? How do we get better? How do we provide more value? How do we show customers that we're partners? How do we make this work together? And if you don't, it's very easy for people to fork or to replace features that you have. And that's the downside, but also the vast beauty of open source because it forces projects, it forces innovation, it forces people to get better. And that's a good thing. It's not a bad thing. It's just a hard business model. And when you talk about business models, okay, from a business perspective, executives and investors, they want stickiness. In fact, if you work for an open source company, I guarantee you that your executive team has probably had a conversation at least once a month that has included how do we increase stickiness? It's a conversation that happens all the time. And if you don't know what stickiness is, stickiness is how do we get people to use our software, keep using it, and pay for our software and keep paying for it? And there's a fine line between that and lock-in, okay? Because we want stickiness, but lock-in is bad. So is stickiness, as a whole, isn't necessarily bad? So stickiness, if you generate immense loyalty with your community and with your paying customer base, that's a great thing. Because if they have loyalty to you and you're treating them right and they're doing it because they find value and it is a symbiotic relationship, that's wonderful. There's nothing wrong with that. And if you can do more of it, more power to you. 
But if your stickiness is predicated on locking someone in and preventing them from easily moving or changing providers, that can be really bad. The question is, and you can ask yourself this, is how easy is it for you? How easy is it for you to move if someone raises your rates? Or if someone doesn't give you the level of support or doesn't keep up with patches or has security issues? If any one of these issues comes up and you're getting a bad experience, how difficult would it be for you to switch providers? Is it an absolutely we couldn't? Is it a multi-year project? Is it a couple months? Is it a couple weeks? And the answer to that question typically will tell you how locked in you are. And when we look at that stickiness factor, it's not all bad, right? Let me give you a really good example from history. MySQL. So MySQL had a retention problem early on before they got bought by Sun, before they got bought by Oracle. MySQL had lower renewal rates than they wanted and that is expected in the industry because what would happen is someone would run into a problem. Oh my God, my database is down. I need help with MySQL and then MySQL would sell them a support contract. So MySQL Enterprise, they would get the answers to the questions, never use the software or the support again. Renewal would come along and they'd be like, yeah, we don't need it. We just needed it for that one time. So the value that the customers were getting out of MySQL Enterprise was very transient. So MySQL introduced the MySQL Enterprise Monitor and actually increased the renewal rate and the stickiness, if you will, of their customer base. Not necessarily a bad thing. The thing with MySQL Enterprise Monitor was it wasn't something that you couldn't do yourself. You could totally monitor and manage everything in your MySQL space. They just made it easier. They made it so you didn't have to have that expertise if you didn't have it. So they made something that was replaceable but it provided a really important value add to the MySQL ecosystem. And there were companies who bought MySQL Enterprise just so they could have the Enterprise Monitor and that really improved the quote-unquote stickiness for them. And as we talk about different open source models, realize where these type of things fit in and a lot of these are driven by the stickiness factor. So the most popular models, a lot of models and a lot of companies start with this pure services model. And when we talk about pure services, this is where you release open source, whether it's GPL, whether it's MIT, whether it's BSG license, whatever license it is, and you're not selling subscriptions to the software, you're selling services. Whether that's support, consulting, managed service, whatever, this particular model is really easy to start with. It is 100% people powered. But what people power means is it doesn't have a lot of profitability. You have pay salaries and you can only sell what you have capacity to deliver. There's all kinds of challenges with this model. Retention and expansion can be harder because you have to continually prove the expertise and have the expertise available. So this is a challenging model. And this is where you can be successful here, but you have to continually show that you're the best. And the best will always have a place. But investors don't like this model that much because it doesn't have the stickiness of other models. 
And so the stickiness model that kind of became the prevalent model for quite some years has been the open core. And open core has evolved greatly over the time period of open source. And if you're not familiar with open core, this is where you reserve features for paying customers. You have a community version and you have an enterprise version. And in the enterprise version, you have many features that aren't available to the open source space that aren't necessarily open source. And if you want to take advantage of any of those features, or if they're required, you have to pay. This has some value to some customers if they're using those features. It does have medium stickiness because what happens is people in the open source space will develop open source alternatives. So classically, companies start to move more and more features that are required for operations to a closed source enterprise solution. So you might have an open core version that, hey, if you want to back up your database, or you want to back it up hot, you have to pay us. Well, doesn't everybody have to back up their database? That seems kind of bad. That seems like something that the open source community would be like, well, I want to use your product. I just don't want to pay for just a backup tool. So someone will develop a backup tool. So this model is very susceptible to disruption. And so you'll see companies be able to replace some of those enterprise features with open source versions of them. That happens all the time. But more so than that, what you're seeing with this open core version is it's starting to be replaced, right? Because you've got this awesome enterprise version, and now people are starting to develop alternatives to it. They're starting to move them into what is kind of the new hotness for everybody, which is the as a service model. You can't move five feet without seeing something about infrastructure as a service, cloud providers, databases as a service, platform as a service, software as a service. If you're developing applications, it's probably being built as a service, and you're probably building it off of open source. And this is where it provides a really high level of stickiness, because it's about you giving up control to a vendor. You're outsourcing the entire functionality for that particular function, whether it's software, it could be business software, it could be infrastructure, and you're giving it to someone else to maintain own and control. And that does lead to a lot of lock in in a lot of cases, because it's really, really difficult to move from one provider to another in a lot of these cases, because you don't have that open standard. That's why things like Kubernetes have become very popular, because you can go multi-cloud and you can more easily move things between infrastructures. But when you talk about that as a service model, there's that tradeoff. You give up control for ease of use. And that is something that a lot of companies and a lot of individuals are willing to do. And so that's something to be mindful of. So this has a high level of stickiness. This is where a lot of things are moving towards, but this also has a massive amount of lock in. So it's something to be mindful of. Now the final model that I'll talk about is what I call the cyborg Robocop or hybrid model, trying to merge all of those other models together. Some people call this tech-enabled services. Think of this where you give away your open source software for free. 
You make sure that if it is required to run all the operations, it's out there in the open space. But you might have value added services. You might have AI components. You might have other things that are part of a subscription. And so it's not that you don't necessarily need those things. You can do them on your own. But you're paying for the efficiency or the value that enables you to move faster. And so we're starting to see that, and this is more of a DIY, if you will. So you control your fate. And so that's something that we're seeing out in the space now as well. Now let's talk specifically on commercial open source, because this is where it gets really interesting and this is where the risk factor comes in. So I've kind of talked about the different models. Now let's talk specifically about commercial open source. So if you're going out there to build a company that is an open source company, who are you building your company for? Are you building it for the world? Are you trying to make the world a better place? Are you trying to enable people to do things that they could never do before and revolutionize some part of the industry? Could be. It could also be that you're really focused on a specific community. You might find that your niche is enabling a very specific community to do something better. Could be web developers. Could be veterinarians. It could be people who are developing Go or Python. It doesn't matter. Are you building this to enrich the community, to make more people more effective? It's a good question. Or maybe you're just trying to enhance the users of a particular software. Maybe you are fans of MySQL and want to build a MySQL tool that is going to help all the users of MySQL do something better, more efficient. It's a good way to get in there. Or are you really focused just on the shareholder value? And this is the one that most companies that are in the open source space that are the darlings of Silicon Valley are chasing. It's shareholder value first, shareholder value second, shareholder value third. And this started with the original OG. I mentioned MySQL a few times. When some bought MySQL for a billion dollars. Everyone looked at that as a moment that open source as a business model had arrived. Open source is a legitimate business model. It's right there in the headline. And what that did was it made investors go, whoa, I want a billion dollars too. So they started looking at open source companies in a different light. And the thing that they saw was, wow, look at some of these companies and the reach that they get, the downloads, the users. Oh my, look at how many people will use open source software. Wow. But it's not that they thought, wow, look at how great they're affecting the world, changing the world, making it a better place. Look how they're servicing the community. No, no, no, no. What they saw was, wow, I can monetize that. I can make that worth more money. And the owners or the people who have that open source project right now, you know what? They don't understand business. I do. I'm going to take my business knowledge. I'm going to apply it to your open source model and we're going to become filthy rich. And if you look at open source now, and this is from a couple of years ago, open source is the rule, right? I have talked to investors before who have said they really only want to invest in open source companies. 
They've talked to people who were going to go proprietary and they're like, you need to be either SaaS or open source, depends on what you're going to do. And here you see that, you know, two years ago, two dozen venture firms that invest a lot in open source. That's even more now. It's even higher. And you know, it's interesting because even they recognize, you know what? Open source really isn't a business model. It's a development model. And I think that the mindset of these investors, however, shows you how some of the open source companies have evolved. Look at this quote by Gary Little from Canva Ventures. Open source works for adoption purposes, but is poor for monetization. Of course, this is all about monetization. That's your monetizing at different levels. So how do we monetize outside of just open source? And that's where you have to think about, oh, what do we bolt on top of the open source project to make this better? And then Jocelyn Goldfein actually says, the beauty of an open source from an investor's perspective is disruption, not innovation. Don't care about innovation. Don't care about, you know, new contributions, but it's the contribution to marketing, not research and development. Don't care about new features. They care about reach. They care about how many people are using the software. And this is a very proprietary way of thinking, right? So if you think about like classic businesses, businesses want more market share. They want to sell more and they want to continue to sell into that audience and get them to be sticky and lock them in. If you look at the commercial open source space, I went and took five commercial open source database companies and looked at where their executives had previously worked. Okay? This is a list. Now, take a look at the list. You notice something missing. There's not a lot of open source pedigree here. You might argue that Sun or Google has some open source background, but they're not open source companies. They do some open source, yes, but they're not open source companies in the pure sense. And those are only a total of four of these companies that are in here out of 40 to 50 executives. You see Oracle, VMware, LinkedIn, BMC, Informatica, Symantec, New Relic, IBM, Cisco. These are not companies and these are not people who have an experience in that open source space. So they're approaching this from a pure business model perspective and they're going to bring, they're going to bring the 90s and 2000 Oracle to the open source space. And that scares the hell out of me and it should scare the hell out of you. And these are the people who the investors really want in these companies because these are proven companies that have made money. You don't take my word for it. Let's talk to or listen to what the CEO of MongoDB said. We didn't open source MongoDB to get help from the community or to make the product better. We open sourced it as a freemium model to drive adoption. They don't care about contributions, they don't care about innovation, they want adoption so they can try and monetize that. And the analysts in the financial analysts, they love that. In fact, here's a financial analyst, Billy Dubberstein. He actually says that open source, especially this is database open source, this is MongoDB, this is good where MongoDB is because if a company wants to switch vendors, it would have to lift all that data from the old database inserted into a new one. 
That's not only a huge pain, it's also terribly risky should any data get lost. Therefore, most companies tend to stick to their database vendor over time, even if that vendor raises prices. That's how Oracle became a tech powerhouse through the 90s. So analysts want MongoDB to become the next Oracle. That's bad. I don't know if you remember Oracle in the 90s, but they were not a fun company. And it's that lock in, that's the goal. And that's where we're starting to see the pressure for these commercial open source companies. How do we lock people in? How do we move away from pure open? And let's trick people into moving to this as a service and we'll start moving our features to as a service only and they'll never be released to open source. Woo-hoo! That'll be awesome. But the problem is, right now, open source companies are growing, but they're losing hundreds of millions of dollars. Take a look at the revenue numbers. So MongoDB from 267 million to 422 million, Elastic 272 to 428 million. Income for these. So this is the profitability, if you will. So Mongo, 422 million, but they lost 197 million. So they spent 197 million more than they made. And you look at this and you're not seeing the profitability with these companies. So they are struggling and they're getting immense pressure to figure out how to protect the revenue they have and grow it substantially because they're following a very classic NoSQL model. Just like NoSQL has eventual consistency, we have eventual profitability in the open source space right now, which is a horrible model from a business perspective, but we keep on trying it. And this means that we have this rapid cycling of different ideas and different things that erode what is classically open source. And if you don't believe me, look at the licensing whack-a-mole that has been going on recently in the open source space. Right? GPL forces you to contribute if you distribute the code. Okay. Well, do you distribute the code with the cloud? No, not really. Okay, let's do AGPL and let's do more open core because that way we can not have the cloud disrupt us so much or we can control more. Oh, databases in SAS is growing so SSPL because we want to go anti-cloud because we want to control our own cloud and we should be the only ones who control what our software does and who makes money off of our software. And so this is about control and investors, not about the community and not about getting better. Now, there are examples in the OSS space, the open source space, on both sides, good and bad. Let's take one example because a stalwart between of open source and someone who has repeatedly said they're committed to open source, Elastic is a great example of the open source spirit nowadays. They had to change to open core, okay, they went to AGPL, but look, they guaranteed a couple of years ago that they're never going to change away from the licensing scheme that they have. They keep this open source. What? What? Oh, oh, wait a minute. What do you mean? Oh, well, okay. They promised for about a year and a half that they would do it and they stuck within the year and a half to keeping it open, but evidently forever is a year and a half in Elastic time because now, guess what? They're restricted again. Let's look at Elastic as an example. I like this headline down here because I talked about monetizing the cloud, right? If we look right there, Elastic changed open source license to monetize because that's what this is all about. Let's look at why they changed. 
They came out with a blog and they were like, oh, AWS, not cool, not cool, but they changed for a couple of reasons. They said, number one, trademark violations. Number two, very serious accusation that people copied their proprietary code, decompiled binaries, and then copied the code and then contributed it back to open source projects. So IP theft, intentionally misleading customers, and just not plain nice. You're not plain nice. Those are some fairly serious accusations. If we look at the history of Elastic here, Elastic was fairly open pre-2008. And then as the pressure mounted to continue their growth, they doubled down on open by going open core and by using AGPL, which then made AWS respond by saying, hey, we have these packages that we don't want in the proprietary versions, that we don't want to necessarily follow the same restrictions that you have. So we're going to do the open distro and get our stuff in, and we're going to release that as open, which then Elastic responds, we're going to sue you. And we're going to sue a company called Search Guard, who they claim stole their software. And so then as that works its way through the court system, still under litigation, now Elastic decides, well, we're going to go SSPL and we are no longer open source. So thank you very much, 1600 contributors. Thank you for all of your help, community. We don't need your help anymore. If you'd like to contribute to our proprietary code base, yeah, we might accept it. That's not cool. It really isn't. And you look at these types of statements, right? You look at the statement up here that they're willing, that they're never going to change, and then they change. Can you really trust vendors to not chase the dollars? Can you really trust them to stay on their laurels? To stay true to what they've promised us? And the answer is no. This is not about cloud disruption. Companies are making money. They're not profitable because they have bad business models. Because open source isn't a business model. You need to figure out how to monetize that. You need to figure out how to make it work. But they've decided to follow a path of we need massive growth year over year so we can increase shareholder value and continue to do that. They're burning community bridges as they do that. And this is not about cloud. This is about control money and exit plans. And you look at some of these other things when you talk about infringing code, you can look at the court case here. There's examples. These are very serious accusations because it's not okay to just copy commercial code and release it as open source. That's bad. But I think there's a difference that we have to understand that there's a difference between inspired by and infringed on. I don't know what it is. They're litigating. I'm not a lawyer. But when we talk about inspired by, I can as an open source user look at proprietary code or a proprietary product and say, you know what, I think I can build a better backup solution. I can build a better monitoring solution. I'm going to. I'm going to build an open source version. You can't copy code, but you can be inspired by the other systems. And you know, there's a clear line when you talk about this. For instance, Oracle, Oracle, we'll, you know, you can contribute code to a lot of open source project at Oracle. 
And in the past, and I think it has changed a little bit, they would actually rewrite contributions so they could establish clear ownership of IP because they wanted to de-risk themselves from potentially, you know, getting that finger pointed back at them. But you can still have inspired by code in the open source space and it's okay. And that's part of the disruption. And if that goes away, we've got a big problem. Trademarks, you have to abide by trademarks. Of course, all open source companies want you to follow the trademark rules. Each one has their own, but there's special rules, right? Some things you can do, some things you can't. And so there's that fine line there as well. Now, specifically when we talk about the SSPL, there's a few challenges that really worry me and concern me. First, the SSPL is really designed to restrict your usage in a SaaS, database-as-a-service environment. The language is written in a relatively vague way, so you have to follow what they provide in FAQs to really guide you on what's acceptable and what isn't. And it's very open to interpretation. So right now it's no databases as a service, but maybe in the future they might say, well, you're going to spin up 100 databases in your data center to support your one SaaS application because you're using maybe Kubernetes. Now you're going to have to pay for all 100 because you're violating the SSPL if you're not using the quote unquote enterprise version because guess what? That's a service. It's a little bit of a gray area. And that worries me because open source is special because of community collaboration, freedom, innovation, and quality. You can't innovate. You can't innovate there. And you're putting limits on the community by adopting some of these shareholder first practices. And people are going to start thinking that that's the norm. And this is where the big danger comes in. People are using open source, the name, and it's being hijacked to mean something completely different. Contributors are being treated poorly and they're being treated as second class citizens because guess what? We're not doing this because we want contributors. We don't want community. We want adoption. We want money. We want monetization. And now the next generation of developers who are going to develop the next generation of code for us are thinking this is the new normal. And that's just wrong because that, again, is going to stifle the innovation that we have. And as long as there's money to be made or there's money left on the table, these companies are going to continue to change and monkey with and restrict things. And that's going to hurt everyone. And as soon as they stop making money, guess what? They're going to ditch this and they're going to slander all of us because they're going to say, oh my God, open source, you can't make money off that. It's not a real thing anymore. This new thing, it's only as a service, it's never going to be open source. Whatever they're going to say, because if you can't make money off of it, they're going to start giving it a bad name. And so that's going to be something we need to overcome because that all hurts the ability to innovate. Because if you can only innovate if you own the code, that's a bad position to be in. Think of it like this: Lucene. If Lucene was SSPL, Elastic would not exist today. Why can Elastic then take what they built off of Lucene and then make it SSPL and claim that it's going to have the same level of innovation? And I'll leave you with this thought.
There's a Cherokee proverb that talks about the battle of two wolves inside all of us. One is anger, jealousy, greed, resentment, lies, and fury, or ego. The other one is joy, peace, love, hope, humility, kindness, empathy, and truth. Who wins? Whichever one you feed. Who are we going to feed here? Shareholder value first or the betterment of the community? Whichever one we spend more time in, whichever one we focus more on, that's the one that's going to win. And we need to make sure it's the right one. Thank you.
|
2020 was a very bad year for most of us; under the shadow of all that was going on there was a troubling trend we all need to be aware of: the erosion of the classic open source model and values. For years vendors have been slowly chipping away at the freedom and openness provided by open source, and this year saw unprecedented changes to how people view and value OSS. From licensing changes to as-a-service exclusives, what was open is no longer. Is this being driven solely by the cloud? Or is this purely corporate greed and others viewing open source as a successful business model to replicate? Let's not only discuss but bring to light the troubling trends that threaten open source projects and development as we know it. "[W]e didn't open source it to get help from the community, to make the product better. We open sourced as a freemium strategy; to drive adoption." - MongoDB CEO Dev Ittycheria. Open source was never designed or planned to be used as a "gateway drug" to move people to proprietary, locked-in software, but more and more this is happening. This is not just a MongoDB move; Elastic, Redis, and others have all used the "cloud" as a convenient excuse to erode classic open source licenses and values. At the same time, new "open source compatible" versions of popular projects like MySQL or PostgreSQL pop up on cloud providers' platforms. It's like the game "Among Us", only with your open source projects. You never know who or what is really open until you dig deep.
|
10.5446/52861 (DOI)
|
Hi Janos. Hey there. So this time we're giving a talk on tools and concepts for successfully open sourcing your projects. And why are we doing that? Well, we've been quite busy lately and I've been poking you to talk or write about this for a while. That's true. But somehow I always end up doing something else than writing blog posts, and having a deadline for a talk actually forces us to summarize the concepts in a way that is digestible. You know the funny part is having a talk about something also forces you to finish an open source release. It's basically talk-driven development. Yeah, conference-driven development. That is true. Okay, so for this conference-driven development, what did we do? We had a preview release of Container SSH. We worked quite a bit in both our jobs at Percona and Red Hat. And we summarized the concepts that we mostly use in open sourcing. So both in our companies but also for our private projects. Yeah, and some of them evolved naturally and others took quite a bit of discussion and fighting as well. Yes, definitely fighting. We promised a step-by-step guide for this, and so this talk is more of a steps and more steps guide, because there is no one true "this you have to do first, this you have to do next". Wait, I put my project on GitHub and that's it, right? Sure. This can be it. That is part of open source. So let's talk about this. This is the TODO Group. It stands for talk openly, develop openly. It's a group that is part of the Linux Foundation. And it has quite a lot of guides on how to create open source projects, what an open source program is, how to use open source, what to do. It's lots of guides developed by many, many people who work for open source companies or in open source departments of bigger enterprises and also smaller companies. So it's a very, very useful resource for you and it doesn't try to sell you anything. So we can definitely recommend the TODO Group for digging into how to do this right. As you see, there's a list of links here. So apart from the TODO Group, we can also recommend opensource.com with lots and lots of blog posts that are technical, organizational, all kinds of blog posts that are related to open source projects as well as concepts. And then of course our own company blogs from Red Hat and Percona where, you know, it's open source. Open source companies, right? Yeah. So that's it basically. Let's go straight into the tips. So what are the tips we can give you? Something that Janos definitely wouldn't give you as a tip is don't be afraid to delete stuff. You have no idea how much code I deleted in the recent release. Have you deleted a lot of things before 2020? Probably not. Yeah. But you deleted what? A few dozen repositories lately? More like quite a few dozen. Yeah. So don't be afraid to delete stuff. It's fine to have worked on something and let it go because you take the experience with you. But you don't need to have code that you developed 20 years ago, 10 years ago on there if it doesn't serve a specific purpose. At least archive the repositories that aren't used anymore, which makes it easier for people to see whether you intend to do some development still or whether it's just something that no one's paying attention to anymore. So yeah, one of the tips I would always give is: just delete things. That's good. That's what you gave me. I had almost 300 repositories. And what, you're at 100 now? More like 200. But most of them are archived anyway. Okay.
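As a minimal sketch of that "archive rather than delete" tip (not something shown in the talk), bulk-archiving stale repositories could look roughly like this. It assumes the third-party PyGithub package and a personal access token with repo scope; the two-year cutoff and the dry-run default are arbitrary illustrative choices.

```python
from datetime import datetime, timedelta, timezone

from github import Github  # assumption: the third-party PyGithub package is installed

CUTOFF = timedelta(days=2 * 365)  # arbitrary: treat "no push for ~2 years" as stale
DRY_RUN = True                    # only print by default; set to False to really archive

def archive_stale_repos(token: str) -> None:
    """Archive (not delete) repositories that have not seen a push for a long time."""
    gh = Github(token)            # personal access token with "repo" scope
    now = datetime.now(timezone.utc)
    for repo in gh.get_user().get_repos():
        if repo.archived or repo.fork:
            continue              # skip forks and repos that are already archived
        pushed = repo.pushed_at
        if pushed is None:
            continue              # empty repositories have no pushes at all
        if pushed.tzinfo is None:  # older PyGithub versions return naive UTC datetimes
            pushed = pushed.replace(tzinfo=timezone.utc)
        if now - pushed > CUTOFF:
            print(f"stale: {repo.full_name} (last push {pushed:%Y-%m-%d})")
            if not DRY_RUN:
                repo.edit(archived=True)  # archiving is reversible, deleting is not

if __name__ == "__main__":
    archive_stale_repos("ghp_your_token_here")  # placeholder, not a real token
```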
When you do a project, also think about is it something you want to be associated with? So, for example, you first you have to think about where are you currently employed? Are you a freelancer? Where are you employed? Is it something you really can make public? I have friends who really, really liked some very niche anime stuff. And you cannot advise everyone to publish everything. There are some things that you better just share within the group. Absolutely. So putting that public is not always advisable. Definitely check whether is it something you want to be associated with because once you put it out on the internet, it's going to stay. Pretty much the right to be forgotten only only goes so far and only in Europe. Exactly. Another thing you should really think about is if you're going to put it up on GitHub or GitLab or whatnot, then the question is, do you want to do it as a personal project? Or do you want to do it as an organization? Each one has its benefits and drawbacks, right? So one of the things that this signals to people, I used to do a lot of orgs for everything, even for some projects that are not mature enough. And it might give off the wrong signal. First of all, if it's under an org, then to make it really visible, it doesn't automatically, it used to not show up so much on your personal account on GitHub. For example, now it does. And it's really important that an organization sends a message that there is more than one person behind it. And that means that, yes, on the positive side of things, an open source project that's developed by more people is always a good thing because then one person disappearing or losing interest doesn't necessarily mean that the project will die. On the downside, if you create an org for something that is essentially just a library, that's just going to feel very empty. So when you look at the space that's there, and there is one library there with a couple of hundred lines of code, then that's going to feel very empty to people. And it seems like there is something missing. Like, yes, here's this org, but where is this? Where are the things that you would associate with the community or with a company or something that is typically associated with an organization? Yeah, the next is open source or freeware. Basically, you know, you can also not open source stuff. You can't just offer it for free. There are some projects or products who do that. It's not my personal choice, but it's perfectly fine to, you know, not open source things. Or on the other hand, you can do something that is open source, but not free. Exactly. For example, one of my favorite games, Space Engineers is technically open source. You can view the source code when you develop mods or whatever you want for it, but you still have to buy the game. So it is possible to make a project that is open source, but not free. The next thing that you have to consider is you have to prepare for people being people. Haters going to hate. Especially on the Internet. Especially on the Internet. So you just got to be prepared to get some shade, have people complain about your code. Why is there no testing? Why are you using this library and all that? Have you thought of dependencies? Why is this so complicated? You have to prepare for everything. And it's not just about code. There's a bunch of other things. There have been in the last years things where someone wrote something on their Twitter, which was against what someone else's ideology is. 
So they basically barred the GitHub project of the person with issues about it. So you have to just be prepared that the more popular your project gets, the more people problems you will get into. So just be prepared for that. And the other thing that is, you know, when you start an open source project and you're just like, oh yeah, cool. I have five users and it's great. And then down the line, you might want to do it as maybe a full time job or you might want to do a company. And if you don't set up things right in the beginning, like for example, your license, which we're going to talk about, then you might have a problem making money. Of that project and that might prevent you from from pursuing this development. You might not have to. You might just prefer to for the project to stay as a as a hobby project or as a community project, or you might even find a sponsor for it like CNCF or the Apache Foundation or something like that, which is willing to adopt your project. But again, this is something to think about when you start a project. And then, of course, you should get advice and opinions. You should look at the various blogs that we've outlined and you will find many, many more. You will get a lot of opinions. And the one thing to consider here is you can get all of those, but don't get too much because at some point you will just get into it. You'll lose a lot of time and you will lose your own ideas if you just look at what other people are saying. Sometimes you just have to push through with what you've got in your head. Yeah, and that's definitely something that I'm most guilty of. I've been known to go off on a tangent for one person saying this would be a cool feature and then not taking the big picture into account. For example, not releasing something because this and this and this was missing, but it really wasn't that important for the majority of users. So it could have been done in the next release. Yep. So other factors to consider basically is, you know, usually people open source something or in the good old days before it became mainstream to do open source. There were a bunch of people who had to who had a problem and they wanted to fix it. So they went fixed it and they wanted to contribute it to the wider community. So there was kind of this secret community of people who would meet in user groups around the world and share what they did and it would be public in various hosted environments of public code. But nowadays things are different and sending patches via emails is fun. Yeah, but plenty, plenty of things like this. Obviously the infrastructure and environment has changed these days. So do you have a need to fix something for yourself or in our considering helping other people? There's another story of someone who developed something for medical devices where they basically hacked into the device and offered open source code around how to how to change parameters on that device. And that person then got mad that other people forked the project. And basically, you know, this is the prepare for people part. Yeah. Open source is open source. That means that if you put it up under a permissive license, then people might take it. They might make a copy of it. They might rename it, fork it. You name it. And the question is, what is your intention for open sourcing something? And here we come to the second part. Basically, are you seeking fame, glory? And with that you will get rage. People will rage at you. Sometimes it goes the other way around. 
Sometimes it's rage that drives you to open source something. That is correct as well. So these two points are kind of intertwined where you really, really have to have the urge to fix it and continue fixing it because otherwise you end up with a lot of repositories where nothing's happening anymore. Yeah. And that's especially true if you're trying to convince your company that, hey, let's open source this library. This happened to me many, many times that there was this idea, okay, yeah, cool, we could open source this library. But at the end it was just like we put up the source code, but nobody was ever using it. And that's partially because open sourcing isn't just publishing a piece of software. Because a piece of software alone is very, very hard to use. You have to do a bunch of things that we will get into to make it actually usable and make it contribution friendly and all those things. And then we come to the third point. Basically, do you need an improvement on your profile for job searching? If you do, and that is your main driver for creating an open source project, consider not doing that and rather contributing to something out there that it's either a small project that you have a personal need for. Or it is a project that you really want to get into because it will really help you in your job search. One of them is of course, I mean, Kubernetes. There's many, many, many, many others from many different companies where you can contribute and improve your profile with that. But consider that the larger the project is, the more processes are involved in getting a contribution in. I have a contribution to the goal language in that is waiting since six or eight months to be merged. But that doesn't matter for the job search because you can still put out that you have the pull request. So if you're really, really someone who is new to the field and wants to get some recognition, you've also helped mentor people and having an open source project out there, even if it was a play project, has actually helped them land the job. Yes, it did. So having your, as much as you might think, okay, well, nobody wants to see a millionth blog implementation or something like that, it does tell a lot about you. So when, when you're interviewing for a position and you are showing them your code, it tells a lot about how you code, how you work, what your methodology is. Do you have tests? Do you not have tests? Do you write clean code? Do you not write clean code? Do you have documentation code comments? All that stuff. That is also one point of hiring. When I was hiring people in some companies, I also give them the option of walk me through a project that you did already or finish this technical challenge. Right? Yes, not everybody has a GitHub account. Not everybody needs to have one. Exactly. And sometimes you just work on proprietary code so you cannot do it, but then you get a technical challenge. There must be a way to, you know, test that what you're saying is true. But obviously if you do open source something and you're proud of it, it's an easy way. Yes. But it should be the best that you can do if you intend to use it for a job search, because nothing is worse than having, than showing something to your potential employer that you yourself, no, is not that great. You could talk about what's not great about it. You could, but that's, you know, it's an indicator of why didn't you fix it then? Yeah, why didn't you fix it? I mean, it's still a talking point, but it's an uphill battle. 
Yeah. So really, if you use something to showcase, and again, my GitHub account is definitely not a showcase because there's so much junk on there. You should delete some. I archived a lot of it, but still, if you want to use your GitHub account as a showcase, then you should seriously consider, is that the best picture I can paint of myself? So when we're saying open source, open and source, there's so many meanings of it. We know that now with Elastic and Amazon, and there's been MongoDB as well. There's been so many projects that, you know, say they're open source and they're source available. There's a bunch of different meanings. So source as in the code and not the organization that is creating the code and therefore is the source of, you know, power for this. Then open also has different meanings: it can be open source in the terms of published code, but not very open. It doesn't accept contributions and that type of thing. Or the license might be restrictive. Exactly. So open source is used in many, many different meanings. So let's maybe define a little bit. Take your pick. I mean, I think probably the most accepted definition is a project that is published under an OSI-approved license. That's the Open Source Initiative. They have a number of licenses which they recognize as being truly open source. And within those still you have a large range of possible licenses, but for example the SSPL license, which MongoDB and Elastic have switched to, has not made it into the OSI because it is legally not clear enough or I don't exactly know. So they have written a long post about why exactly they didn't accept this license, but it wasn't up to their standards. So generally when we talk about open source or free and open source nowadays, we mean something that's published under an OSI-approved license or something that is very, very close to that. Okay, so we have put up an overview here of what you have to think of if you want to open source your project. These are the things you'll have to think of. It's not an exhaustive list. So it means there might be more. But there's definitely more. There's definitely more. But these are the basic things you have to think about. Well, let's go into detail for all of them. So for the technical part. Yes. So as we mentioned, open sourcing something isn't equal to "let's just upload it to GitHub". There are a really large number of tools out there that can help you do this. But you have to be aware that writing the actual code and putting it up is about 30% of the work. Give or take. So, okay, GitHub. GitHub has established itself as one of the major platforms that hosts source code nowadays. There used to be SourceForge. You still have GitLab where you can host your stuff and so on and so forth. But the expectation nowadays is that whenever you publish source code, it should be in some sort of a Git repository. This Git repository helps with versioning. It supports pull requests. So it's easy for people to propose changes to your source code, which you then have to deal with. And that partially becomes, yeah, that becomes a problem. But GitHub, if you don't know where to put your code, GitHub is a good platform to get started on. They also offer something called GitHub Actions. GitHub Actions is basically your run-of-the-mill CI/CD system. So continuous integration and continuous delivery. You can automate builds of your software. You can automate running tests.
There are others out there like Circle CI and Travis and so on and so forth. And these are mostly free for open source projects. So that's a good thing to have, especially since we are going to talk about tests and why they are important for open source projects; nowadays having your tests run automatically is very, very important for contributions. GitHub also offers something called GitHub Pages. It's basically a free web hosting service where you can, through the power of Git, upload your website. Another option if you don't want to use GitHub Pages is Netlify. Netlify offers a free plan and it does a little more than GitHub Pages. So you can set custom headers, redirects and so on and so forth. And you have multiple versions of your website and stuff like that. Otherwise the offering is pretty much comparable. Another option, of course, is what I used to do: just spin up a server on a virtual machine. Then you have to talk about finances. So the thing about open source is when somebody contributes to your projects, they don't know you. They don't know your projects and they are just writing code the way they think is right. And if you then have to go back and explain to them that the code (or whatever it is, it doesn't have to be just code, it could be text or whatever) is not up to your standards, then that's going to cause a very, very difficult discussion. If their code is going to break something, then again that is something that you have to explain to them. So it is much easier if you have tests for your code that test the functionality, and writing tests is its own difficult topic, that you actually test the expected functionality and not an implementation detail, but having tests is good. Having linting for your code, so if you expect your contributors to stay below a certain complexity of methods and not exceed that complexity, so don't write three, four, five hundred line long functions and stuff like that, linting can help with that. But then on the other side, please don't just implement linting for the sake of linting because that's then going to tick people off if they have to put the spaces exactly where you want them and stuff like that. That just makes everything a lot more difficult. You can see here on this screenshot that in this case all the tests passed and this was actually issued by a feature of GitHub called Dependabot that automatically sends you pull requests for dependency updates, which is important for security. The other thing is you will have users. You will hopefully have users and those users do not know the project the way you do. So you want to make a website, and MkDocs is one of the best tools to easily create a documentation website. It's written in Python and you can make it look like this too. This is our little project called Container SSH. So it's highly customizable and there is a really good GitHub integration from a fellow called Michael Hausenblas. This basically automates the publishing of your MkDocs website on GitHub Pages. Now, writing the website is of course your job. And that's where it again becomes a people problem because you have to explain what your project does, how to set it up, how to use it. So I don't know how many hours I spent on writing documentation. Probably 30% of the code time, right? Yeah, roughly 30% of the entire project time I probably spent on writing documentation. Now, the other thing is how do you publish your code? That depends on what kind of a project you're building.
If you're building something that you can containerize, then you can build a container image and then Docker has of course now rate limits, but they have an exception for open source projects. So you can host that on the Docker Hub or alternatively, you can go for example to Quay.io to host your code. It becomes a lot more difficult if you can't containerize your source code if you want to build something for desktop applications or something that you install natively on Linux platforms. That is then becoming exceedingly difficult because there are not many great tools for that. You need to go back to, for example, for Linux systems hosting a DBN repository and things like that and not even the big software vendors do that a lot of times. They just prompt you, hey, download the new package and then install it by hand, unfortunately. So there is definitely room for improvement in terms of tooling for those kinds of applications. One additional tool we use is maybe a bit of an outlier. It's called Terraform. Terraform is primarily a tool for managing, creating and managing infrastructure. But in our case, it actually manages our GitHub organization. And the reason for that is that we have over the past few months created more than a dozen repositories and managing those repositories has become a little bit difficult. So we are using Terraform to automatically manage and basically uniformly manage our GitHub account. Now we can go to the organizational part. So there's a lot to unpack here and, for example, I mean, one of the most important things is hosting in domain. Hosting we have kind of covered with GitHub pages and other, you know, Netlify or whatever. But obviously you can also just run a server and put your website on there. So nothing prevents you from doing that. It's just additional effort where you have to also monitor your infrastructure, set it up, get a hosting service or, you know, host it at home. And you need to integrate it with your CI system if you want contributors to the website. So that's... If you want a CI system, if you don't check and just accept PRs without a CI system. Yeah, but somebody needs to deploy it onto that wherever you're hosting your website needs to be deployed there. I just, you know, I just had my server that did it all. Yes, shell script magic. No, go magic. But anyway, moving on, I wouldn't recommend it. No. Although do you want a custom domain or do you just want, for example, if you're hosting on GitHub pages, there's a GitHub domain that is used. So you don't need a custom domain for starters. Yeah, if you want, we didn't have a custom domain in the beginning, but if you want to have a custom domain, that's one of the few things that's actually going to cost you money. Exactly. Because nobody gives away domains for free. Social is one of my not so liked topics because I don't specifically like social media. But if you want to think of it, don't just go and make an account at every social medium, you know, think of where you're actually going to post and then engage there. Yeah, where, for example, Twitter is... We're not using it the way I would recommend. No, no, we're not. Twitter has generally, in my experience, a better tech community than, for example, say Facebook. Yeah. So it's been your mileage, me, extremely wary with social media platforms because sometimes you just run into a lot of people who are not all that nice. It's not that much about nice. It's just Facebook. Yeah, Facebook is not great for the tech community. 
Recently, I've had, I mean, I stopped using Facebook almost a year ago. And my experience recently was that on Facebook, your visibility, unless you're really putting an insane amount of effort into it, is basically zero. Anyway, depending on what your project is, you're going to decide on which social medium to use or not. It's fine to not. I expect that your users might want to have updates about your project. So if you're not going to give them some sort of an avenue where to read about... But even if you decide to not use any social media, if you decide to not use a website, if you decide to not have newsletters or whatever, you can have a roadmap and a community all on GitHub, for example. There's a discussions board, there's the project board, and there's issues, which is perfectly fine for everything you need. And yes, so any sort of avenue where you can tell your users, look, if you want to keep up with this project, then just go there and then you can read when there's a new release, etc. That's fine. If there's an RSS, feed a lot of people who are very techie, they still, as much as that might surprise someone, still use RSS. So that is helpful to them. So roadmap community is something where just as with social media, you have to think about how you're doing it. Do you want to put out a public medium for discussions on an IRC channel, a Slack, or do you want to create a Slack channel in an existing Slack workspace? Do you want forums? Those are the things you're going to have to think about, and it will heavily depend on what type of project you're using. And then obviously there are things that you have to get familiar with, like terminology, but all of this is not super critical to open sourcing your project. And with terminology, I mean things like upstream, downstream, if you're selling your project at the end, then there's going to be the downstream of it, which is going to be... The commercial offering, basically. The commercial offering, then there's things that we often use in the open source community, like free beer, free speech, free beer for things that you just get offered as they are. You cannot change them. You cannot really take them and modify them. There are just some terminologies to get acquainted with. Yes, but these will also evolve over time as your project grows. Governance is something you really have to think about in terms of, do you just want to be the one person who has all the keys, or do you want to, over time, go into some sort of project governance? Governance. Maybe you want a charger. Maybe you want some type of nonprofit organization around it. So this is just something that will evolve over time, but maybe you already have some ideas and want to think about it beforehand. Yes, but I would definitely think about how are you going to deal with other people wanting to help out with the project. So how welcome are they in your project? Yeah, do you want a user community or a contributors community? That is basically one of the main things. There are people who just want a lot of users and they never want to accept contributions. And here the best way to never get contributions is to make it as hard as possible. Yes, but then think about if you really want to do open source under a permissive license, because it might happen that you use band together and they're just going to fork your project. That is exactly correct. 
Which might seriously upset you, but you can't really do anything against it if you've put it out under the MIT license, for example. Yeah, so it's your choice. Choose wisely. So we talked about documentation and maybe one of the first items that anybody is going to come across when they look at your project is the readme. There are countless open source projects where people tell me, hey, can you look at this, and I'm looking at it and I feel like, okay, cool. What is this? And the readme is basically the cover page of your open source project. It's where you introduce yourself and that's why it's important that you at least make the basics clear: what your project does, who it is intended for, etc., etc. So get those basics out the door and then you can point to installation docs or what have you right from there. But make sure that your users understand what this project is about when they come across it the first time. The other thing that you should think about is if you want to have contributors, then you should make it easy to develop. This goes into, like, think about an analogy when you have a new colleague and you need to onboard that new colleague into an existing code base. If you don't have any documentation, then somebody needs to sit next to them and explain each individual part: how to set up the development environment and so on and so forth. If you on the other hand at least have some basic documentation of, okay, here are the tools you need, this is how you set up the dev environment, these are the tools that you need to run before you can commit, for example linting or tests or what have you, then that's going to make your new colleague's life much easier. And the same goes for open source projects. If you have documented how to set up your project, how to write code for it, how to submit a pull request, then that's going to make your contributors' lives a lot easier. So the other thing that you might want to pay attention to is your pull requests, because it's quite easy to end up in a situation where you have an insane amount of pull requests. And what you can see on the screen is our development dashboard that's just listing all the automatic pull requests by Dependabot that we need to merge. And if you don't handle your pull requests, if you don't answer them, that is not going to be a good look for your project in the eyes of the community. The same goes for issues, of course. The other thing that we mentioned before is having a communications channel. So in the beginning, I thought, yeah, sure, we should open a separate Slack for this project and yada, yada, yada. And we talked about it a lot and this is what it ended up with. It's a single channel on an already existing Discord server and nobody is talking in it because the project is in its very early stages. And that goes right into the next point. Don't be just a megaphone. You have to engage. So just announcing things and never talking to the community, never doing anything, not creating a community is just wasting your time on social and everywhere else. So then you could just as well focus on the code and make it the best you can. Yes, exactly. So, I understand and I have this feeling as well that I want to talk about my project the whole time, every time. But on the other hand, that is not necessarily what people might want to hear and there's only so much you can talk about an SSH server, for example; people are just going to get fed up with it.
So think about what other people want to talk about, help them out. In my case, for example, there are discussions about SSH and since I have done an absolute deep dive into the topic, I tend to help out in those forums as well. Of course, in my signature, there was a link to my project, but that does not mean that I should be constantly talking about it and nothing else and not responding to people and not helping out elsewhere. Next up comes the legal topic, which has two main points, licensing and liabilities. Yes, because you might think that you are creating an open source project. So that's it, right? You just put it up. Everybody can use it. Everybody happy. Turns out there are some legal restrictions and things to think about. One of them is licensing. Yes. First of all, you should choose an open source license. If you don't choose one, you're just really, really open sourcing the code, publishing the code and nothing else. No one is allowed to use your project unless there is a license attached to it. Yes, there's otherwise they're basically stealing it. Yes, legally speaking. And of course, then you could sue them. So nobody is going to take your project seriously without a license and there's ways to go. You can go a very permissive license or you where anybody can make money off of it. There is some more restrictive licenses that require other people that are copying or forking your code to share it under the same license. For example, that would be GPL. For example, there are many ways of doing this. One thing to be aware of, for example, many open source companies require their contributors to sign a contributor license agreement. That is usually a way to, first of all, make sure that the contributor is actually aware of what's happening to their code legally speaking. And on the other hand, it is a vehicle for this company or if you're doing a project that you want to do a license under a commercial and an open source license, then the CLA is the way to go. But we are not lawyers, so please speak to your lawyer about this. The other topic that's really not talked about very much is liability and warranty because it turns out that... Just because you put out something on the internet doesn't mean you're making yourself invulnerable to lawsuits. Exactly. Yes, so if you are... there are many ways to have a legal problem with an open source project. One of them is that you cannot just... in the European Union, you cannot just blanket disclaim liability. So if you are acting with gross negligence, if you're acting with malice, etc. If you have a crypto miner in your code. Yes, that could put you in legal jeopardy. So one thing to keep in mind in certain countries in the EU, if you are a commercial company and you're open sourcing a project of yours, but that project is needed for your customers to use your service, then that might not disclaim your liability totally. So that is then creating what's called a mixed license. The other thing to think about, since we already mentioned Amazon Web Services and Elasticsearch, there is a lawsuit going on Elasticsearch V AWS. That is exactly about trademark use. So if you're putting out a project under a certain name, does your license allow your competitors or anybody who is just using your project to use your name in their service offering, for example? And that is something to think about when it comes to cloud providers and the Elasticsearch V Amazon lawsuit is definitely worth a read. 
The other thing that's very similar is patents in Europe that is not such a big topic because software patents are not really a thing, but in the US it is definitely the case. Does your license allow your users to use your patents or if you're working for a company, the company's patents? And the next topic is financial. Budget and time. It's the same thing. If you're spending your weekends and every little bit of your free time, that is money. And time is money, friend. So basically you have to think about how you're using your budget, how you're using your time, and whether it's worth your time. You don't want to end up having put half a year or more into a project and then someone comes along and dishes out on the project because they don't like it. And then you feel terrible because you wasted half a year. You want to think about what do you want from this project down the line? Yes, so what is your... Nobody is a saint. Everybody has some sort of value that they attribute to this, if it's appreciation or if it's money or whatever else your motivation is to put into this project. Think about how much that is worth to you, if enough people are going to use it, if it's just going to be a terrible slog to get to the point where you get what you want. And obviously, you know, in your budget you have to account for the domain, the hosting, if you're choosing something that isn't free. Yes, if you're integrating with cloud platforms, testing is going to eat a lot of money. Yeah, and I know many, many people who have made their own window manager, which they have never bothered publishing because they really, really just want it for themselves. Or they put it out on GitHub and never do anything else with it because they're fine just using GitHub or GitLab as their archive, their personal archive. I did that for a long time. And it's perfectly valid if you want to do that. Yeah, exactly. So this is the last slide. This is where we have put all the concepts that we've discussed here. We're happy to discuss that in a Q&A and further on our Discord or anywhere else you want to discuss any of these topics. As a summary, we want to tell you the tools you can to achieve your goals. Not invented here is something that I'm personally very drawn to. But over time I have also learned to use my time wisely and I ended up using GitHub a lot. And same goes for docs and block engines. You don't have to write it all yourself, which is hard to hear, but it's true. Yeah, you want to focus on your project and not reinventing a docs engine unless a doc engine is your project. Choose a license, automate, test your favorite topics. Yes, you really need to do that unless otherwise you're just going to end up with trailing behind pull requests and updates and whatnot. And have fun so you don't end up hating yourself after having spent half a year on your project and maybe profit. Maybe. Who knows. Think of it ahead of time what you want to do with this. Exactly. And as a little summary of what we talked about to start, don't be afraid to delete. That's basically the main point. Yeah, sometimes it's okay to delete stuff. Code, repos, whatever, just projects. Give it to someone else. If nobody's interested, just delete it. Yep. So that's it. We're looking forward to your questions. Thank you for your time. Thank you. Bye.
|
You've just had an idea for a great application but don't think anyone else is interested? You've used your weekends and free time to come up with something that actually solves someone else's problem? You'd love to open source your project so others can use it but you don't know where to start. We'll explain the way from an idea to an open source project using a step by step guide - including links, code snippets, and open source tools you can use to open source your own project. We are using our experience from working at open source companies like Red Hat and Percona as well as our project experience to highlight the steps you can take to get your open source project off the ground. In the Q&A section you can ask specific questions about your project or your idea and we'll give you some tips and tricks that we've used ourselves.
|
10.5446/51973 (DOI)
|
My name is Konstantin, I come from Berlin and I work in a project that is concerned with language learning, especially of the Latin language in German high schools. And yeah, I will talk about how we can nowadays process language and especially ancient languages and use that for teaching and for language learning. So I will show you a few tools and methods. First I'm going to say a few general words about natural language processing and then I'll show a few examples from different use cases and stages. So let me start by explaining what is meant by natural language as opposed to constructed language. So natural languages are things like Latin or German or English. So they are used by humans to interact or to communicate and when we do so we often exchange some thought so we can have some joint reasoning. For example, if you're in a conversation and the other person tells you some secret and you tell the other person a secret then you have more knowledge than before because you did so in a joint fashion. And yeah, so the syntax that we use in natural languages is quite flexible so it can be different depending on the speaker and where you live and many other factors and it's evolving constantly usually. But of course for historical languages like Latin and maybe also in some terms ancient Greek this may not be entirely true because often they are considered as dead and alive. So natural languages where the system is not evolving anymore but of course you can argue both effects really true. And so for and when we talk using natural languages we often produce some kind of ambiguity. For example, we make references to objects but our reference may not be quite precise and the other person may not fully understand it or understand it in a different way than what we thought. On the other hand in constructed languages these are for example programming languages like C or Python or maybe even XML. So these are used by machines. They are used for internal processing so the machine is usually not communicating with other machines or humans except if we tell them to. So the syntax is often quite precise and this is not evolving so fast. There is some kind of evolution but it's very controlled and usually machines have some kind of problem with ambiguity. So when we want to process natural language with a machine we want to often disambiguate some vagueness or inconsistency in the language. And you may note that there are some additional pointers for further reading down in the footnotes. So when we look at natural language processing part of natural language processing you could imagine it like somebody also mentioned in the chat recently that there should be some kind of pipeline. So when you start from the surface text where you just have some ancient textual source you may want to tokenize it so you split it up into small parts and then perform various stages of analysis. So maybe interest in the lexical stage so the vocabulary that is used and kind of how the sentences are constructed and what meaning of the text is actually supposed to be. And when we want to do that automatically using machines I have indicated this small red arrow in the center so this is about where we are currently so machines are pretty good at splitting up text and structuring it and they're also pretty good at analyzing the vocabulary and syntax but semantics is still tricky. I'm sorry to interrupt you Constantine but maybe you can move on with your PowerPoint because we are stuck in the first slide. Oh you're stuck. 
Okay. Ah yeah the sharing was paused. I'm sorry. That's okay. Let me reshare it again. Can you see it now? Yes. Thank you for mentioning it. So you didn't miss a lot and so this was only the table that I was talking about before but of course you can also download the slide separately and have a look yourself and this is the general pipeline. And so now when we look at the first stage of lexical analysis we may want to compare certain texts to a basic vocabulary as some of you may know in Germany the students of Latin have to learn certain words so they have to acquire a certain vocabulary which is predefined and one very popular list of words that they are supposed to learn is the Bamberg core vocabulary which contains about 1200 items and now you may be interested to compare that basic vocabulary to a text that you want to read. So for example when you want to read Caesar's Scalic War of chapter one and you may be interested in how easy it will be for me to read that text if I know the 500 most frequent words from my basic vocabulary. So then you can have a look and say okay I want three sentences to start with a small text and maybe then the machine can calculate for you how easy will different parts of the Scalic War how easy will it be to read depending on their vocabulary. And then once you have chosen a certain text passage you can also highlight the unknown words so you can easily tell your students okay these are the words that are not covered by your basic vocabulary so please learn them in advance or I can help you to translate that. So you as a teacher you can be well prepared in advance and you don't have to find out where the difficult words are on your own but the machine can tell you instead. So then once you have chosen a certain text that you want to treat you may also want to build some learning materials for that text and as you have seen earlier in the talk of our colleagues from Hamburg they have used a similar or the same framework which is called H5P so it's a free tool where you can create interactive exercises, digital exercises and of course you can also use them for vocabulary learning. And once you have created such exercises you can also evaluate them and give some feedback to the students for example if you group a few exercises to four different parts or four different chunks then you can tell them okay for every part how well did you perform what was the score that you achieved. And for example if your final test and the first test are the same then you can also show some kind of development and say okay in the final test you were 80% better than in the first test because you learned something in between. And so yeah if you want to try that feel free to click on the link or scan the QR code whatever you feel more comfortable with. And meanwhile after you gave feedback to your students you may also be interested in doing some analysis for your own as a teacher or as a researcher to see okay if I give different exercises and different kinds of interaction to my students how does that change the way that they acquire vocabulary the way that they learn language. And so you may be interested to see that okay the points at the top in this diagram show students that improved their skills and the points at the bottom show students that decreased in performance. 
So you may be interested to see that the right grade points so the students that work with close so like fill in the blank exercises and those students usually perform better afterwards while the students that had a simple vocabulary list where you just have single words and differentiation they perform worse afterwards because maybe this was not suitable for their style of learning. And then once you have this kind of analysis and information then you can decide okay maybe for certain students a certain type of interaction is better for learning Latin. And so so much for the vocabulary part but of course you may also be interested in the syntax part. So how do certain constructs certain constructions work in Latin. And for example you may be interested in the use of the relative pronoun in Caesar then you can let a machine search for instances where the relative pronoun is used and highlight it in red in the center and let it give you also some context so like five words to the right and five words to the left so you can know okay where is the word that is referred to by the relative pronoun. And for every word you have the path of speech for example which is here written in uppercase abbreviations. So for example the verb a noun adjective and but you also have some arrows indicating the syntactic structure. So for example to which verb does a noun relate so which verb is the on which word does it depend in the sense so it becomes easier for students to see the general syntactic structure in ancient texts. This is also what is currently already done in schools at least in German schools but not digitally but in an analog way. I think somebody mentioned it also in the chat that people go through the text and they just underline okay where's the pronoun, where's the verb and where's the noun and so on. And machines can just help you to do that, to help you visualize it and to see it faster and maybe in a more comfortable way. Of course you can also, you may also be interested in the difference between that makes certain authors special. For example if you want to read a letter in author and let it hold orbit in school then you may be interested okay what is special about obit's syntax about his constructions compared to a tree bank. So a given collection of other texts. So if you compare to obit to a lot of other texts and other authors you may see that he's using the combinations of noun and verb and noun modifying adjectives more often. So you may want your students to be prepared for that before they even start reading. And of course you can also get some examples so from obit's text for each of the various constructions. And meanwhile of course there may also be certain combinations that are not so frequent in obit so you may feel like okay these are not important to revise we can skip that. So now there will be a little bit of math but it's just for this one slide. So when you're talking about semantics things get a little trickier. Machines have some as I told you before machines have some problems dealing with semantics. So if you have some Latin sentence like in keep your adverb and so on and you want to know the meaning of every word in the sentence then what you can use is the linguistic theory of distributional semantics. Distributional semantics means basically looking at the neighbors at the context. So in this special case we may want to use just a very small context like only the right at the left neighbor. 
So what you need to do first for the machine to be able to process this text that you assign identifiers to every word form so each word has its own ID. Then you can represent every word as a sequence of identifiers for example in the context for in is only pre-keep your because to the left there is no context here so there's only the identifier for in which is one and the identifier for pre-keep your which is two and thus you can continue for every word. And now in this sentence you have the same word more than once for example add and if you want to know the general meaning of error not the specific case then what you need to do is you take all the representations for error in the sentence it occurs three times and then you aggregate them to a single representation. This is actually even more complex in reality but I simplified this here just to make a clear point for you. When you have built those aggregate representations then you can ask the machine by formulating theory. For example what is a typical context for error then this would be verbom. Why verbom? Because it's in the context to the right here it's in the context to the left and here it's again the context to the right. So wherever you have error you also have verbom. So this would be of course only a very simple example but when you have longer texts you can do this for many sentences and many texts and at-punts. And of course you may be interested to visualize this kind of information. For example if you want to do research or learn something in school or at university about late antique paragraphs so you want to know something about factuality so about truth then you may be interested to find some words automatically in the texts that belong to the field of truth for example to trust to suggest or it pretends and so on. And you can do this automatically for any kind of text also for an entry text. Now some teachers may say okay but we don't need these word fields we already have nice books with synonyms and so on. But these books are usually not tailored to your specific context because they are written once and then you have to deal with a book in some way but you cannot really adapt it very easily. So natural language processing can help you by letting you define what you want to analyze and the tools that you need. Then other people might say okay the results are only trivial I already knew that and I could have done that myself. Okay that's cool if you are able to do that but for many students this is really a hard thing to do. So to find words in a text that are similar to truth is not easy for them. Also the machine is only looking at the text itself while when you do it yourself you may also be tempted to integrate some background knowledge that you already have. And then it's difficult for the students who don't have that background knowledge. And the last thing is the algorithm that is used for this kind of analysis is black box. And then some people say okay we cannot use it because we don't know how it works but I think that is mainly a research problem. So in research it's important to understand this black box but in schools maybe not so much. In schools we are primarily interested in learning the language no matter how. So when you know some Latin in the end then the way how you get there is maybe not that important anymore. Just has to be suitable and comfortable for your students. 
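The neighbour-and-aggregate scheme described here can be pictured with a very small sketch. This is only an illustration of the idea, not the project's actual code: real distributional models use large corpora, wider windows and dense vectors, and the example sentence (the Vulgate opening of John's gospel, which the imperfect transcript appears to refer to) is reconstructed rather than quoted from the slides.

```python
from collections import Counter, defaultdict

def cooccurrence_vectors(tokens, window=1):
    """Build a bag-of-neighbours representation for every word type.

    Each occurrence of a word is represented by the words in its
    immediate context; occurrences of the same word type are then
    aggregated into a single Counter, as described in the talk."""
    vectors = defaultdict(Counter)
    for i, tok in enumerate(tokens):
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                vectors[tok][tokens[j]] += 1
    return vectors

# The Vulgate opening of John's gospel, lower-cased and tokenised naively.
sentence = "in principio erat verbum et verbum erat apud deum et deus erat verbum"
vectors = cooccurrence_vectors(sentence.split())

# "What is a typical context for 'erat'?" -> 'verbum'
print(vectors["erat"].most_common(3))
```

Running this prints "verbum" as the dominant neighbour of "erat", which is exactly the query-style answer the speaker describes.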
And so other use cases where you may want to employ this technology is for example finding similar contexts where truth is also discussed but where it's not explicitly mentioned. So if you define the Latin word truth verum or vera then you may find sentences where other words are mentioned like fidugia or catisma or suspicax or kerta but the word vera is not even mentioned in those sentences. But still the machine is able to find those similar contexts because it has analyzed the meaning as a way that I showed you before. And finally you can also use such analysis to identify peculiar styles for certain authors. So when you have many many ancient texts you may be interested in learning okay which texts were written by Galen or by De Mosfinis or but of course there are some problems for example in this diagram you can see that the larger circles which represent longer texts they all group separately. So in this model there is some mathematical problem concerning the way that styles represented. So you have to be very careful about just visualizing anything and calculating anything because the diagram in the end may influence your analysis, your linguistic analysis. So this is one case where you have to be very careful. So to come to an end what NLP can do for you right now is that you can build some exercise and you can help the students acquire vocabulary and assess their performance. And also you can offer nice visualizations for syntax for vocabulary and so on. And you can also find relevant ancient text passages for about any given topic just by entering some simple query. But there are also some things that NLP cannot yet do very well. For example, providing advanced feedback. If you create exercises, interactive exercises in H5P they will tell the students okay this was right or wrong but they will usually not show why this was right or wrong. So there are no error types or anything you have to do that by yourself. And also it's quite difficult for machines to always discover the syntactic structure correctly. So there will be some mistakes in the automatic analysis that you may need to correct yourself afterwards. And finally, this is what I'm also working on currently. We're still trying to, when you have ambiguous context and there's a lot of ambiguity in ancient texts, then it's sometimes hard to pin them down to just one specific meaning. But for teaching this, it is sometimes important to restrict the context to one specific meaning because it's easier to learn. But of course there may be cases where ambiguity is just fine and where you can handle it easier. Okay, so thank you for your attention. That's it.
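As an addendum to the vocabulary part of this talk: the coverage computation sketched earlier — comparing a passage against a basic vocabulary such as the Bamberg core list and flagging the words to pre-teach — boils down to something like the following. The word list here is a tiny made-up fragment, and a real pipeline would lemmatise the Latin first (for example with CLTK) instead of comparing surface forms.

```python
def vocabulary_coverage(text, known_words):
    """Share of tokens covered by a basic vocabulary, plus the gaps.

    Note: this compares surface forms only; a real pipeline would
    lemmatise the Latin before looking words up."""
    tokens = [t.strip(".,;:?!").lower() for t in text.split() if t.strip(".,;:?!")]
    unknown = sorted({t for t in tokens if t not in known_words})
    covered = sum(1 for t in tokens if t in known_words)
    return covered / len(tokens), unknown

# Hypothetical excerpt from a basic vocabulary list.
basic_vocabulary = {"gallia", "est", "omnis", "in", "partes", "tres", "divisa"}

score, gaps = vocabulary_coverage(
    "Gallia est omnis divisa in partes tres, quarum unam incolunt Belgae.",
    basic_vocabulary,
)
print(f"coverage: {score:.0%}, words to pre-teach: {gaps}")
```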
|
The lecture was held at the online conference "Teaching Classics in the Digital Age" on 15 June 2020.
|
10.5446/52942 (DOI)
|
So giving a definition of claim, this is the dictionary definition of claim that is stating or asserting something without actually providing proof and they might be proof or they might not be proof. And claim detection or claim identification is like a pre-task of fake news identification. It's in the ecosystem. The difference is that when we are identifying or classifying a claim, we are not interested in the veracity of the claim, whether it's true or false, but we are just identifying whether it's true, whether it's a claim or not. So and there are a variety of claims on Twitter and you really can't put a category into a claims into categories that this is the type of claim and this is another type of claim. There are some patterns like how for a particular topic, what kind of claims or what kind of language people use for saying something, for claiming something. So for example, this is a typical example of when a claim is about an image and the person refers to an image and the claim itself is that the image has been doctored and when you see the media URL in the crawl tweet, it's not really an image, but it's the news itself image or the snapshot of the web page and then it says that soft fake news. But the real image is in the link itself and this is one of the problems that whether you which image you actually use for this task. But this is a typical kind of claim and then other prominent is when claims have numbers are huge sums of money numbers, these kind of facts in the claim, the claims usually have these facts in them and then the images of a Korean pop band and then the image text in the image is not English. So that's another difficulty. Then there are so many tweets that you see when a tweet refers to another tweet. So there are these and that tweet and the tweet that is referred to is usually by a prominent person like a celebrity or a politician. Then this is also common a tweet referring to a situation that happened or a breaking news for example and then this is a live image, but that is blurry. So another difficulty in images and then another kind of image when a tweet or a claim refers to a past event and then image is a graph or again more seen text or overlay text in the image and then well zombies like so many conspiracy theories and the claim about whether there are zombies in San Francisco law centers and an image of person crouching. So in this work we empirically investigate the role of images for claim detection and we are particularly interested in these four questions. So are pre-train visual models useful? What is the effect of domain gap in transformer like models when you use a different pre-train bird model on a different corpus? Then does multi modality help in claim detection? That is one of the questions, the bottom questions. And then fourth is a multi-modal transformer. So this is a trend in the literature in multi-modal models that big companies are training big models which are transformer based for both using both modalities instead of just one modality like in case of bird. So we test that also. So this is a simple framework. So we have a very simple framework of training and testing. So given a tweet, we have three ways we do a classified tweet. One is when we extract features from either modality text or image and then use a SVM. And the B is when we actually fine tune the transformer model for text that is fine tune last few days of the bird for the task. And then the third would be using a multi-modal model that codes both image and text. 
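The next part of the talk spells these three routes out in detail. As a rough sketch of the first route — unimodal features concatenated and fed to an SVM — consider the following, where random arrays stand in for the real ResNet and BERT features and the PCA size is an arbitrary illustrative choice, not the paper's setting.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import Normalizer
from sklearn.svm import SVC

# Placeholder features standing in for ResNet image embeddings and
# pooled BERT text embeddings (their extraction is described later
# in the talk).
rng = np.random.default_rng(0)
img_feats = rng.normal(size=(200, 2048))   # e.g. last conv layer of a ResNet
txt_feats = rng.normal(size=(200, 768))    # e.g. pooled BERT embeddings
labels = rng.integers(0, 2, size=200)      # claim / no claim

# Route (a): L2-normalise each modality, concatenate, reduce with PCA,
# then train an SVM -- mirroring the setup described in the talk.
features = np.hstack([Normalizer().fit_transform(img_feats),
                      Normalizer().fit_transform(txt_feats)])
clf = make_pipeline(PCA(n_components=128), SVC(kernel="rbf"))
clf.fit(features, labels)
print("train accuracy:", clf.score(features, labels))
```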
And similarly training an SVM and then fine tuning the last few days. So for SVM, if we are using both image and text, then the features are accepted and simply concatenated. Features are also normalist so that L2 normalize one. It leads to faster conversion and better results in practice. Then PCA, we use PCA for dimensionality reduction because as usual concatenating features would result in high dimensional features. And PCA also gives you a low dimensional manifold which is good for classification or good for separating different classes. For images, we use a convolutional internet like ResNet for example and then I use the last convolution layer as feature extraction. And then for text, as you know, like bird in different layers and encapsulate different model information and we have found that in such down theme tasks in practice, using just the last layer insert, we should use last four layers and somehow pool them to get one embedding per tweet. And this is how we do it, sum the last four layers embeddings and then average over word embedding. This gives one single embedding for a tweet. So you have one embedding for image, one embedding for text, for a page. And for fine tuning, for bird as I said, we only come as the data sets we have are really small. We either fine tune last two or last four layers and experiment with that and for the Wilbert, here we just fine tune the core tension layers because this will help in answering the question whether multi-modality help because we are going to fix the unimodal branches of the model and just focus on the core tension that by increasing interaction between two modalities by just increasing the training the core tension layers, whether it helps in claim detection in some way or the other end those and that way is that we see accuracy and other metrics. So just not going into detail, this is a typical transformer. So if you know bird, this until this would be a typical bird. And in a multi-modal Wilbert, you have this additional co-attention transformer and with respectively you get the image and the text embedding. So images are divided into regions and then you get similar embedding as you get in text which is implicitly a sequential modality. So these are our tasks and datasets. So we these are so claim detection has been like in talks or in research or hot topic in last one year because of COVID and a lot of datasets and COVID has been released. So this is our these are our three tasks one is claim check within its detection and we have English and Arabic and relatively small datasets, but still we can get something from training by fine tuning not finding the whole model. Then the claim detection, LESA is a recently proposed dataset and a model. Conspiracy detection, this is not client detection, but this also comes into the fake news detection ecosystem. And all of these tasks are binary and to avoid bias in a particular test set, instead of having a single test set, we do five full cross validation and average is reported. So going on through to the image based claim detection, so we test three types of four types of features. So in practice, if our image image that features like there are 1000 object categories in image and dataset and this model encapsulate all that information. So in practice, images that features are considered really good for downstream task. So we test with that. Then there is this places 365 dataset, which is seen categories from indoor outdoor and outdoor man made scenes. 
You see that you have images in the corpus using having all this information. Then there is another model which encapsulates both image network and places. So it's a hybrid model. And then there is this sentiment because in practice or in literature, previous work in fake news sentiment also is considered one of the features in fake news. So one thing not going to details of every result. So one thing to see is that image that features and the hybrid features are better than only cases in sentiment. And that's also expected because image that features are in general better for any downstream task than the places. So this is what we get from like outcome of this experiment. Then just for the text space, as you would expect, claim is in the text. So you would expect text base claim direction to be much better, obviously. And then here you also see what we wanted to test the main gap. So the birth tweet and COVID quitter birth models last year because of huge amounts of COVID quitter data. These are specifically trained on COVID corpus further after using the vanilla birth model. So you definitely see that using these models, Leza and media and cliff particularly are better and not so much for Clefian. And you can, it can be the reason that the samples are really low to learn anything. But reduce domain gap really makes sense. And fine tuning again, you see a wider gap. Then you find you in the COVID quitter models, the gap is widened between the vanilla birth and these COVID quitter models. So the reduced domain gap models are the further pre-tuned models on COVID quitter corpus are much better. Then going on to the multi-model claim by simply concatenating and using an SVM. Here also, although if you see this best unit model, obviously fine tuning the layers is giving you much better. But if you just compare with the text space SVM, adding modality, image modality increases the performance and both as a QCF1 are increased. It's not just increasing and increasing. And you can see a compare it with the text space SVM. So that's encouraging too. Then going on to the multi-model transformer that is the Wilbert. So here, this is just additional experiments that we did. But the point here is that if you just compare this, that the average of the word embeddings considerably perform better than the pooling embedding. That if you know, Bert has a class token embedding. So instead of that, we average the word embeddings at which is for text. So that's better. And then encouraging result is that fine tuning improves the performance here. And that was a good result for further introspection. So in conclusion, this would be the conclusion. Are pre-trained visual models useful? Yes. Image net hybrid features are better than others. Effective of Tummingap, Twitter Bert models give better features for fine tuning and does multi-modularity help? Yes. It needs further introspection on larger datasets. And Wilbert is useful, but pre-training task is crucial. But that needs to be further checked with better datasets or larger datasets. So for future work, we want to evaluate multi-modular models on larger datasets. And here, since Wilbert, many multi-modular models have been proposed with better visual vocabulary, and that is the point. More entities or more things you can extract from an image or identify from the image, the battery or multi-modular model is. So that will help. And another thing is the multi-linguality. 
Multi-linguality is always an issue with deeper models and larger models because big research groups always first use English as the main language and use that. And here for Arabic, we had to translate the Arabic tweets to English for using Wilbert. So consider visual models. So obviously, prominently, images have a lot of over-retext and syntax, almost 30 to 40 percent of the images in the dataset. So that's an important thing to consider in future work. Then multiple images, like I showed you in the first example, there could be multiple images. There is hardly any work that handled this. Then handle graphs, for example, one of the tweets for pointing to a graph. Then we want to break down problem of claim detection into two things. One is claim detection because claims are usually in text. And here, important thing that we notice is that the relationship between image and text might be a better indicator and might help to develop a better claim detection model. And relationship could be as simple as mutual information or difficult as a semantically meaningful, whether image and text belong together, for example, or they add more meaning to the whole message. And then the ultimate aim would be to develop domain independent claim detection models. As you saw, COVID Twitter models were much better, but the ultimate goal would be to have one model that can perform across multiple topics. And one of the things that we want to do is curate a dataset that is multi-topic, not just on COVID-19, so we can actually test this. Yeah, so that would be the end of the talk. And these are the links to the code and the extended dataset. Thank you for having me.
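To make the text-embedding pooling described in this talk concrete — summing the last four hidden layers of BERT and then averaging over tokens to get one vector per tweet — here is a minimal sketch using the Hugging Face transformers API. The checkpoint name and the attention-mask handling are illustrative choices, not taken from the paper; the talk also mentions COVID-Twitter BERT variants, which can be substituted to reduce the domain gap.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Any BERT-style checkpoint works; swap in a COVID-Twitter variant to
# reduce the domain gap for tweets, as discussed in the talk.
name = "bert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name, output_hidden_states=True)
model.eval()

def tweet_embedding(text):
    """Sum the last four hidden layers, then average over tokens,
    giving one fixed-size vector per tweet (the pooling described
    in the talk)."""
    enc = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden_states = model(**enc).hidden_states        # tuple of layer outputs
    summed = torch.stack(hidden_states[-4:]).sum(dim=0)   # (1, seq_len, 768)
    mask = enc["attention_mask"].unsqueeze(-1)             # ignore padding tokens
    return (summed * mask).sum(dim=1) / mask.sum(dim=1)    # (1, 768)

vec = tweet_embedding("This image has been doctored #fakenews")
print(vec.shape)  # torch.Size([1, 768])
```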
|
Fake news is a severe problem in social media. In this paper, we present an empirical study on visual, textual, and multimodal models for the tasks of claim, claim check-worthiness, and conspiracy detection, all of which are related to fake news detection. Recent work suggests that images are more influential than text and often appear alongside fake text. To this end, several multimodal models have been proposed in recent years that use images along with text to detect fake news on social media sites like Twitter. However, the role of images is not well understood for claim detection, specifically using transformer-based textual and multimodal models. We investigate state-of-the-art models for images, text (Transformer-based), and multimodal information for four different datasets across two languages to understand the role of images in the task of claim and conspiracy detection.
|
10.5446/52423 (DOI)
|
Hello everyone, my name is Smriti Prakash Sahu and I am working as a software developer with Siemens. Today, me with my colleague Abdul and Jadik will be presenting about Eclipse SW360 and open source software component app. So these are the topics we will be covering today. So software 360 came from an idea that in larger organization, you might have multiple systems on our applications or software is really software components. You may have license scanner, you may have artifact repository, maybe you do project below materials management, maybe you do code quality checking or source code scanning, all these systems are actually dealing with software components. And in an organization, you would like to integrate these systems. So the likely problem which is happening is that mapping of software components, naming will be necessary. So mapping of software component naming comes from back then different systems are actually using different ways of naming components. Some may use packaging URLs, some may use vendor component version, some are calling vendors differently, some are calling components differently. So there can be very different ways how to express a component name. And for every connection between two systems, you might come up with individual mapping, which is bad because it may cause a lot of effort. So the basic idea of SW360 is for larger organizations to solve as a component catalog, as we say a phone book for components, where all software components which are in the organization and referenced by different other systems can generally be stored as a hub and mapping can be done centrally there. So you don't have to build, you don't have this need to build mapping between individual systems. And when you have a catalog of components, it's very natural that you get to the next step. Imagine that you have different projects or products in your organization and you would like to actually create billiomattles of your products or projects. And from component catalog, you can actually map users to your product or projects and you can manage billiomattles for them. So this is all from my side and the next slides will be covered by my colleagues. Thank you. Hello everybody. This is Abdul Kapoor and today I'll be talking about software components and bill of material in SW360. Let me start with listing out all the new features that has been introduced since last year. SPDX import for bill of material. Here we can import SPDX bill of materials into SW360 to create projects and components. And also in the next is we can trigger the Fossilogy scan for components via SW360 REST API provided the source code for the component is already uploaded on to SW360. And SW360 is now available in Japanese and Vietnamese language. We also added the change log feature where we can keep track of all the changes that has been made in any of the documents such as project, component, release, licenses, etc. We have introduced a lot of new REST API and points for search and attachment handling. We have added links for SW360 REST API documentation and issue tracker in every footer of the page in the application. And we have enhanced the custom fields and external IDs feature. So now one can create the custom fields in live ray and then use the expand way API to retrieve those fields in Portlet and show these fields in the additional data group without changing the current data structure. Custom field support is added in project component and release. 
And similarly for external ID, we are now supporting multiple external ID values for the same external ID key. And we have enhanced and added the obligation support where we track each of the individual obligations for licenses under each project. And all those obligations can be categorized based on types such as risk permissions, exceptions, etc. And we have different obligations at different levels such as organizational obligations, project level obligations and component level obligations. And the last feature is the clearing request which is used to keep track of the clearing progress of a project. Software build of material driven view. Here let's imagine that you have a different product in your organization and you would like to create a bill of material for your products. And from SW 360, you can map usage of components to your products so you can manage the bill of material for them. And the bill of material enables lots of use cases. And the bill of material enables lots of use cases like open source licensing to create license compliance documentation for a product. Not just that but also you can track vulnerabilities. You can take care of trade compliance. Maybe you can track the use of commercial components inside your product. So the software bill of material driven view focuses on the components inside your organization and with software bill of material you can run all these use cases which you would like to have when you distribute the product to the client. As such SW 360 cannot determine the software bill of material but other tools like antenna and ORT can do it. For documentation compliance SW 360 maintains the clearing status of individual components and it lists the approval status of all the components under each project. And from here we can generate the compliance documentation like read me on in different formats like HTML text which contains all the license text, copyrights, information and acknowledgments. Similarly, we can generate the source code bundle which is basically a collection of all the source code of linked components within a project. And we can create the product clearing report as well and the major change here is the addition of the obligation details in the product clearing report. Moving on the last feature where I'm going to talk about is the clearing request workflow. The user can create the clearing request for each of the project from the clearing request tab in the project portlet and all the components releases with clearing status new will be sent for clearing. Once the clearing request is created a unique CR will be created for each of the project and once it is created the user can now track the clearing progress for each project via request dashboard as shown in the screenshot and clearing team can set the deadline and the status for the clearing request and any component specific changes in the project like adding or removal of any new or existing component will be tracked and added as a comment automatically in the comment section of the CR as shown in the bottom right corner of the screenshot. And this feature basically ensure the transparency in the clearing workflow. Okay, that is all I had from my end for today's presentation. My colleague Jadip will continue with further slides. Thank you. Hi everyone. My name is Jadip Palit. I'm currently working in CMOS technology and services private limited Bangalore India. I'm currently working there as a software developer and I'm one of the contributor in SW360 project. 
Here I'm going to speak about REST API in SW360. The REST API in SW360 is a springboard application integrated in SW360 stack. It is implemented in hyper media style using hypertext application language. As of today there are a lot of endpoints available for which there is a documentation available for the REST APIs which explains the request structure and response structure with examples and the link to the documentation is available at the footer of every page in the UI. Although there are a lot of endpoints available to perform almost all the basic functionality in SW360, it should be noted that development of new endpoints is users based since we cannot anticipate all the users. In such cases, user can report the user's case which they would like to have good endpoints based on actual users gets added to SW360 in a timely manner. For SW360 REST API, token can be generated from the UI or it can be generated from the REST API authorization server. Token generator in UI can be enabled by setting some properties in configuration file. This token generator page can then be used by the users to create their own token. In case admin don't want to users to create their own tokens for the security reason, then this page can be disabled and admin of SW360 can share client details to a user after reviewing. These client details can then be used by the users to generate their own token. What can be done using REST API in SW360? First users of REST API could be reading information like getting statistics. Statistics like number of processed OSS components in project, number of open source versus number of commercial components, coverage of license compliance information. Second users could be like checking components available, checking for license and vulnerability information. For example, if a component is already available and clearing of that component is already done, then that component can be easily reused in the project. This would help in clearing procedure. Getting clearing information, download OSS disclosure information for products could are other users of REST APIs. What users could be like writing, creating project entries, entire software build of material could also be uploaded using the REST API. The REST API could also be used to create components and add package management ID to the package management ID and external IDs like Maven ID and package URL. Software composition tool like ORT can use the REST API to transfer the entire software build of material to SW360. How to run SW360? SW360 can be easily deployed using Vagrant based setup present at SW360 Vagrant in GitHub. There is also a Docker based setup present at SW360 course in GitHub, which is a multi-container Docker setup in which DBs and different models of SW360 are deployed in separate container. This could be used for production environment. Also there is a single container based Docker setup, which can be used for development and testing purposes. There is also a wiki page in SW360, which has a lot of other information like how to deploy SW360 natively. This was a brief overview of REST API in SW360. Hope it was useful. Thank you very much for your time and patience.
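As a small illustration of how the REST endpoints described above are typically called from a script: the base path, HAL key names and field names below are assumptions made for illustration, and the authoritative request and response structures are in the REST documentation linked in the footer of every SW360 page.

```python
import requests

# Illustrative values only -- the exact base path, endpoint names and
# token flow should be taken from the official SW360 REST documentation.
BASE = "https://sw360.example.org/resource/api"
TOKEN = "<token generated in the UI or via the authorization server>"
headers = {"Authorization": f"Bearer {TOKEN}", "Accept": "application/json"}

# Read use case: look a component up before creating a duplicate entry.
resp = requests.get(f"{BASE}/components", params={"name": "openssl"},
                    headers=headers, timeout=30)
resp.raise_for_status()
# The embedded collection key follows the HAL style the talk mentions;
# verify the exact name against the documentation.
for comp in resp.json().get("_embedded", {}).get("sw360:components", []):
    print(comp["name"], comp["_links"]["self"]["href"])

# Write use case: create a minimal project entry before importing an SBOM.
project = {"name": "demo-product", "version": "1.0", "projectType": "PRODUCT"}
resp = requests.post(f"{BASE}/projects", json=project, headers=headers, timeout=30)
print(resp.status_code)
```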
|
SW360 is a Web application for managing the software bill-of-material ("SBOM") of software projects and products. It is an Eclipse project licensed under the EPL-2.0 and thus available for everybody as Open Source Software. The application has a Web UI and REST endpoints for entering or importing the SBOM from dependency or package management systems. In addition, the import of SBOM files using the SPDX spec is supported. Based on the imported SBOM or a software project, a range of functionality becomes possible, e.g. management of vulnerabilities, license and trade compliance, or statistics about component usage. The submitted talk introduces and presents SW360. SW360 is an open source software project licensed under the EPL-2.0 that provides both a web application and a REST API to collect, organize and make available information about software components. It establishes a central hub for software components in an organization. SW360 allows for tracking components used by a project/product, assessing security vulnerabilities, maintaining license obligations, enforcing policies, and maintaining statistics. For example, SW360 can trigger a license scan process in the open source compliance tool FOSSology and import the resulting clearing report. Data is either stored in SW360’s database or imported on the fly from external sources. In the future we plan to have federations of SW360 instances that share selected information. Besides its web-based UI, all functionality of SW360 is available through an API that allows integration into existing DevOps tools.
|
10.5446/52430 (DOI)
|
Hi everyone, I am Shivam. I am a contributor to Vulnerability project. It is a free and open source vulnerability database. And in this presentation, we are going to learn why we need a vulnerability database, which is open source. Before we begin, let's have a quick summary of how software is built and the role of vulnerability database. So nowadays, software is built by using multiple open source libraries together and then adding some custom logic on top of that. What this implies is if even the one of the used library is vulnerable, then the whole application might be vulnerable to an attack. So the duty of ensuring that secure libraries are used becomes more important. And this becomes the job of the software composition analyst. So on a lower level, what this looks like is the SCA would obtain a software bill of murderers and then iterate over all the found packages and verify whether each of the package is secure or not. And this is where a vulnerability database is used. A vulnerability database is essentially a mapping of packages and their vulnerabilities. Okay. And let's see what are the problems with the existing vulnerability database, which prompted us to begin the vulnerable code project. Okay. So the first one is the license problem. And majority of the solutions have a proprietary license. And this is for obvious reasons. But this has some consequences. So the first one is we can't really obtain the whole database. That means we can't audit it. And this is bad because there's a practice of bloating the data. And why would they do this? Well, if there are multiple vulnerability database providers, how would I signal that my vulnerability database is better than the other? Well, I will claim that my database covers largest number of vulnerabilities. And there's the practice of favoring quantity or quality for marketing purpose. The way vulnerable code gets around this issue is we have an open source license, a liberal license CC by NC, and we provide regular data dumps of the whole database. So we can't really bloat the data, even if we wanted to. So and also we don't have an incentive to do that, right? Okay. We see how we are solving the license problem. Let's see the CPE problem. Well, CPE is for starters, it stands for consumer platform enumeration. It's basically a format used by the National vulnerability data database, NVD to address software components. And there are many issues with it. So, okay, a knife claim is that since the NVD, National vulnerability database covers all the vulnerabilities, why don't we just use NVD rather than going to other third party solutions? Well, here's the interesting thing. NVD uses CPE and CPEs are very bad in addressing open source components. They essentially make the data garbage. This is because CPEs were invented before the explosion of usage of free and open source software. And it is Windows and TRIP in semantics. We'll see what that means. And okay, so here's an example of a CPE. This is a format. CPE is essentially a string. And it has a part, part denotes what kind of the, what kind of component it is. It could be an application or hardware or an operating system. We concern more mostly about applications here. And then there's the notation of vendor, product and version. You can see an example of a CPE for Django project here. Okay, so you might be wondering, what's the problem with CPEs for open source software? Well, here's the delivery. It can't really deal with different packaging systems. Okay, what do I mean by that? 
Well, from this CPE, can you infer that Django is a Pi Pi package? Well, I can't, but there's this, this gets worse when a package is packaged in more than one packaging systems. For example, open SSL, which could be packaged in a Debian file or an RPM file. So we can't really denote open SSL properly using a CPE. And that makes it problematic to use. So how do we get around this issue? Well, at one level code, we use a package URL. And package URL is also a, it's kind of a URL. So a string of package URL looks like this. Most of you would be familiar with it. But let's have a quick summary of what it is. A package URL has just like every URL has a scheme. It has a PKG scheme. And then it has the type which denotes the type of package. So for example, the type could be Debian, RPM, etc, etc. namespace is the provider of the package. And this could be GitHub, or etc. There are other providers like GitLab, Bitbucket, etc. The name is the name of the package version. Those are pretty self explanatory. You can see from the example, I can pretty easily infer from the package URL stream that Django is a Python package. And this helps when we are trying to infer vulnerabilities automatically. So we have seen how we are solving the CPE problem. Let's see what's the data problem. Okay, so the data problem is caused due to two dependent factors. The first one is that scattered and the data sources are scattered. And this is not a bad thing. The bad thing is that they have different schemas for many reasons, mainly because all the data sources are autonomous entities. So for example, it could be a data source for just NodeJS packages. And they are going to use some schema compared to a data source for vulnerabilities in the Python community. Okay. And let's see what kind of schemas are used. There are some standard schemas, particularly used for denoting package vulnerabilities. And those are Oval and CVRF. Oval is a pretty complicated schema, but it's parsable and it's machine readable. So it's not as bad, but we don't want any software analyst to write parses for Oval documents. We do that for you at vulnerable code. It consumes Oval documents pretty easily. Then there are CVRF documents. These are similar to Oval's. It's fine if standard schemas are used. Then there are the ad hoc machine readable security advisories. These are in formats like JSON, XML, DOML, YAML, etc. They have an ad hoc schemas in the sense that it's just defined for the particular data source and no other data source uses the same schema. And for such data sources, a single parser does the job just for that data source. And there are a bunch of such data sources and we don't want the software analyst to write 20 parsers just to cover all the data sources. So third is human readable security advisories. And these are sometimes HTML pages, markdown documents. And at vulnerable code, we also have parsers for scraping some web pages and then extracting the info. And we also have in certain cases, this can't be done. It's just tricky to extract such info from things like mailing lists or and so what we are planning to do is have a queue of links to such security advisories and then manually review them and convert the data and fit it into the database. That's how we plan to consume the human readable security advisories as well. And that's the gist of data problem. 
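To make the package URL addressing described above concrete, here is a short sketch using the packageurl-python library; the package names and versions are arbitrary examples.

```python
# pip install packageurl-python
from packageurl import PackageURL

# The same Django release, addressed unambiguously as a PyPI package.
purl = PackageURL(type="pypi", name="django", version="3.2")
print(purl.to_string())            # pkg:pypi/django@3.2

# And the other way round: parsing an incoming purl string.
parsed = PackageURL.from_string("pkg:deb/debian/openssl@1.1.1k-1?arch=amd64")
print(parsed.type, parsed.namespace, parsed.name, parsed.version)

# Because the type and namespace are explicit, a Debian openssl and an
# RPM openssl stay distinguishable -- the distinction a CPE cannot express.
print(PackageURL.from_string("pkg:rpm/fedora/openssl@1.1.1k").to_dict())
```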
So, the future plans for VulnerableCode: we want to see some adoption in other open source tools; for example, we are planning for adoption by ORT, the OSS Review Toolkit, and we are pretty excited about that. It's very exciting to work in this new field. We are also planning to have a thing called community curation of the data. The thing is, there's always going to be some human error even in these data sources, and the way to fix this is to have as many eyes on it as possible. So what we want to do is get help from the community to improve our data and enrich it as well. Thank you for your attention. Goodbye.
|
VulnerableCode is a free and open source database of vulnerabilities and the FOSS packages they impact. It is made by the FOSS community to improve the security of the open source software ecosystem. Its design solves various pre-existing problems like licensing, data complexity and usability. Using software with known vulnerabilities is one of OWASP’s Top 10 security vulnerabilities. This becomes ever more important as more and more software is built on top of existing free and open source software. From the perspective of software composition analysis, it then becomes increasingly important to know about vulnerable components being used. Naturally, a database of mappings between packages and their vulnerabilities is required. The talk outlines some of the problems with existing solutions and how VulnerableCode solves them.
|
10.5446/52431 (DOI)
|
Hello and good afternoon. If you see this message, it should be February the 7th in the year 2021. We are at FOSDEM this year; it's a virtual event, so we are most likely presenting everything from our homes today. And now it's time for the devroom on software composition analysis. There are two devrooms about this topic, very much related. In the morning of this day, there was the devroom on dependency management, and now in the afternoon we have software composition. The focus of dependency management was on techniques and tools during the development of a software project for understanding dependencies, while software composition analysis focuses more on existing software, meaning after the development. Our presentations are grouped into three main areas. The first group is about analysis of software projects by open source tools. The second group is about how to share software bills of materials, meaning the results of these analyses. And the third group gives examples, open source tools and software projects, of using software composition analysis for other purposes such as vulnerability management. So in the first group, we are very excited to have a number of very interesting open source projects. The first presentation will be about the OSS Review Toolkit. The second presentation is about the ScanCode tool. The third one will be about the FOSSology project. Then we have a relatively new project in the open source area, which is SCANOSS. And last but not least, we will have a presentation on how to analyze the ingredients, the software dependencies, of containers. Then we'll be talking about how we start sharing the results of that analysis. We're going to be looking at some of the information that's available today about using software bills of materials. So there will be a short presentation on what a software bill of materials is, for everyone. Then we'll be having a presentation on how we can generate SPDX documents as we are doing builds and so forth. The CycloneDX talk is going to be a discussion on how we can generate SBOMs with CycloneDX. Then there's the Double Open project. And then there's Eclipse SW360, which is able to consume and work with software bills of materials. In the last part we will talk about how and where we can use the results from software composition analysis. One part is obviously vulnerability scanning, so there is the VulnerableCode project, and also the talk about vulnerabilities and how they can be handled with CycloneDX. After that, there will be the talk about DeepScan and a talk about how to automate policies and how to enforce them with ORT, the OSS Review Toolkit. Some general notes on how this will happen: every presentation will be around 10 to 15 minutes, and they will appear in the presented three groups with four to five presentations in each group. After each talk, there will be a Q&A chat. The estimated end is at 18:00 CET. I hope you will have an interesting afternoon and have fun. And please all greet Kate's cat. Have fun. Okay, bye. Enjoy the day.
|
This presentation introduces the devroom on software composition - an emerging topic concerned with understanding what software is made of.
|
10.5446/52433 (DOI)
|
Yeah, happy to tell that. So I guess there are, I think, in several of the talks, there are lots of sources of different shapes of vulnerability data, which I think is a, like you can argue is like, terrible at the same time as sort of, that's the real world and probably inevitable. So I think a lot of this becomes an aggregation problem. Like how do you aggregate sources from multiple different places, knowing that they're not all adhering to the same approaches. I remember I sort of had a look at a bunch of the Cyclone DX, Seema started playing around with building some things with it. And one of the questions I asked was like, ah, nothing's required. And Steve, who authored the original spec said, yeah, because vulnerability data is terrible. It does make consuming hard. But I think we need to like have standards that represent the real world more than just idealize, which is one of the reasons why I was suggesting adding like, again, the ability to have different sources, the ability to have ratings from different sources all around the same vulnerability. But isn't your approach very CVSS centric and very centric to the model defined by the US NVD? So the existing Cyclone DX vulnerability extension is or rather can be. Again, like all of the properties are optional. But it does define score as like the three core components of the CVSS score. The proposal I've got up at the moment, the pull request that I mentioned in my talk, relaxes that restriction. So that's that's that that sort of schema is still there, but you can also just have an arbitrary number. Well, I think looking for as many examples of scoring approaches, because like, do they have, is there a primitive at the top? Do we need more specific things? It's not a good question. But it's not specific. It doesn't restrict you to just using that at all. It don't all rather it does today with the proposals I'm making it. Okay. What's your take on that? Shiva, I'm on multiple score, multiple origins and multiple references, I guess, for for vulnerabilities. What's your take on that? Oh, you're muted, you're muted, you're muted. Yeah. So having multiple scores is I don't think it is a bad thing. It's unnecessary. He will because so it a severity of vulnerability depends upon where the ecosystem is. So, for instance, if a distro such as red hat relies on some library, which it has prebuilt inside the distro. So if that library is vulnerable, then for from the perspective of red hat, it's a very serious issue. But consider another distro such as alpine Linux and that particular library is not revealed in that distro. So from the perspective of alpine Linux, this vulnerability is not as serious as red hat. So I think we need to accommodate different scoring systems and an attempt to establish another standard to universal, universally represent all the scores is it's it's not a good idea to do so. Rather, I would, I would say to accept the diversity, because that's necessary. And it represents the wider context. Okay. Very useful. Thank you very much. So switching. So it's interesting because there were two presentation one, two of them more focused on licenses to have them more focused on security and okay as organizer we did on purpose, but there's a tension clearly, when it comes to software composition to either you come from a license angle or you come from a security vulnerability angle. I'd like to expand a bit the discussion there. 
And what is for first the question is, are these the only two things we ever care about security or license. And so going really what what is the common theme there. I'm going to jump in right there. Go ahead. Actually, we also care a lot about the pedigree and provenance and the whole aspect of reproducible bills. You know, how can we actually tap through the type of information so that people can have the full reproducibility. That's really not being tackled very well yet in a way that we can start to share with the automation. And so I think if you're looking for where areas we need to expand, I think that's definitely one of them. I think that care is nearly the wrong. There's definitely a common way of law thing there where there's like a bunch of licensed people and there's a bunch of scared people and there are both things that evolved from there. I actually think there's a whole bunch of other use cases if this existed. And they just don't have as like those communities around them that are going to create it. I think there's a load of software development like tooling you could build on top of this if it was everywhere. But it's not everywhere and you're probably and it's that tooling is probably not going to be the thing that creates the standards. So I think you've got two passionate communities. The more they work together, the better. I don't think I don't think they're separate use cases or need separate things. I think they just happen to be separate people in separate rooms. Okay, fair enough. Great question. No, no. So what's your topic? What's your take on that Thomas, because you're trying to get a bit of both also. Yes, so yes, from the org side, basically I have my foot in both camps. So on one side, I am in SBDX for a long time, but I also, we're also exporting in cycle in the next time. Ideally, I just want the communities to get together because we're just basically trying a tool that works for everybody. And I see basically good things and bad things in both standards. Ideally, we could just merge them all together because for us, as well, I think in previous comments on the track, what we really should get is get all of these S bombs directly out of the build tool. That's really what we need to get to because that's basically the best point to get started. Because then we don't need a lot more extra. Yes, we still need evaluation tools after work, but there's so much effort now going in and just getting the base information out of the build tools. Everything is built to build code, but do any kind of other metadata information. It's super, super hard. So if you look at the tooling, most effort is spent on actually getting accurate information from the build tools almost. That's like a significant chunk of our work instead of working on basically the other bits. Okay, interesting. And so a question for you actually maybe specifically Gareth and maybe Shivam, you can chime into that. What about open data? I mean, a lot of the data under the security space is proprietary. And there's some efforts, at least with the vulnerable code trying to make it a bit more open. What's your take on the importance of open data, I mean, open as in open licensed and publicly available. Publicly available is one thing, but being openly reusable as in open source license is another thing.
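The aggregation point being made here — one vulnerability carrying ratings from several sources that legitimately disagree — can be pictured with a small, purely illustrative data model. This is not the CycloneDX schema and not the VulnerableCode model, just a sketch of the shape being argued for; all identifiers and package names are placeholders.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Rating:
    source: str               # who scored it: "NVD", "Red Hat", "Alpine", ...
    method: str               # e.g. "CVSSv3" or "vendor severity"
    score: Optional[float]    # numeric score, if the method has one
    severity: Optional[str] = None

@dataclass
class Vulnerability:
    vuln_id: str
    affected_purl: str
    ratings: List[Rating] = field(default_factory=list)

# One advisory, rated differently depending on the ecosystem -- the
# Red Hat vs. Alpine example from the discussion.
vuln = Vulnerability(
    vuln_id="CVE-2021-XXXX",                      # placeholder identifier
    affected_purl="pkg:rpm/redhat/somelib@1.0",   # hypothetical package
    ratings=[
        Rating(source="NVD", method="CVSSv3", score=7.5, severity="HIGH"),
        Rating(source="Red Hat", method="vendor severity", score=None, severity="Important"),
        Rating(source="Alpine", method="vendor severity", score=None, severity="Low"),
    ],
)
print(len(vuln.ratings), "ratings from", sorted(r.source for r in vuln.ratings))
```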
|
This very short slot is a placeholder between presentation groups, to have questions asked and answered or simply to have a break.
|
10.5446/52434 (DOI)
|
Hello and welcome to the Trust Source DeepScan session. In this session, I want to introduce you to DeepScan, which is an open source solution that we have provided to foster the analysis of license indication. So who am I? I am Jan Tieter, I am from EACG. EACG is a consulting company that is specialized in the architecture area. We came to help customers with a lot of open source architectures, open source based architectures. And then probably also due to my education, we drove directly into the open source compliance part because we recognized that even huge projects require a lot of infrastructure, which is open source and then the license topic immediately comes on top. This is why we are familiar with this topic, where we went into a couple of years ago and now we are, have put all our thoughts into Trust Source. Trust Source is a software-as-a-service solution that is helping even large organizations to create the accountability that is required across a complete organization to make sure that compliance processes are managed properly. Within this conglomerate, within all this stack, we have came across a particular solution which is the resultant in DeepScan. And DeepScan is now part of open source solution that I want to introduce you here. Why DeepScan? So whatever you do open source compliance, you have to handle a lot of data. You have to know what is inside the repository that you are using, what is inside the code, what is there any hint of a license, what are the licenses that are applied, what are the components, is there some file that is reused and stuff like that. Because sometimes even a little file comes with a particular association of a license and that is something that you should be aware about. So in the end, this is all done or taken from the source repository and that is why we decided that it is really important to have something that gives us the information what is exactly in the source. And the core idea was to scan the repository, to understand all the text files that are in there and to assess them for, for example, SPDX keys, understand text to see whether there is a license indication there and to compare it with existing license information so to understand what kind of licenses is it, is it a MIT or is it a JSON license. These are pretty close but they are not exactly similar. And so we introduced the similarity analysis as part of this tool. And we also, when we were doing this kind of analysis, we recognized that it is also relevant to understand or catch all the author or copyright holder information that you require for some kind of attribution requirement to complete, fulfill attribution requirements. And it would be great to assemble this in a package so that we can further on use it for machine reading or for machine readable results that we can further process. This is actually the idea. Probably you are familiar with the Open Chain Tooling Group capability map. This is a map which is outlining all capabilities required to manage open source in a compliant way in the most automated fashion. It describes the different capabilities and we, there is another talk where I am introducing this kind of capability map. It helps to give you a lot of orientation when you want to build your own compliance chain. And you can get this information in the talk that is available under this link here. It is also a FOSDEM talk. It gives you details of all these capabilities and gives you a bit more insight here. 
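Conceptually, the first pass described here — walking a checkout and looking for SPDX keys and copyright statements in every text file — looks roughly like the sketch below. This is a simplification, not the actual DeepScan implementation: the real tool additionally does similarity matching of free-text license indications against a maintained set of license texts, which is not shown.

```python
import re
from pathlib import Path

SPDX_TAG = re.compile(r"SPDX-License-Identifier:\s*([\w.+-]+(?:\s+(?:OR|AND|WITH)\s+[\w.+-]+)*)")
COPYRIGHT = re.compile(r"Copyright\s+(?:\(c\)\s*)?(\d{4}(?:\s*-\s*\d{4})?)?\s*(.+)", re.IGNORECASE)

def naive_scan(repo_root):
    """Walk a checkout and collect SPDX tags and copyright lines per file."""
    results = {}
    for path in Path(repo_root).rglob("*"):
        if not path.is_file() or ".git" in path.parts:
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        hits = {
            "spdx": SPDX_TAG.findall(text),
            "copyrights": [m.group(0).strip() for m in COPYRIGHT.finditer(text)],
        }
        if hits["spdx"] or hits["copyrights"]:
            results[str(path)] = hits
    return results

if __name__ == "__main__":
    import json, sys
    print(json.dumps(naive_scan(sys.argv[1] if len(sys.argv) > 1 else "."), indent=2))
```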
DeepScan focuses on the license and copyright scanning capability. DeepScan is available in three flavors. We have an open source solution that is available on GitHub; it is the core and takes care of all the analysis. We package it with the license texts that we are maintaining, which are updated roughly every three months and are part of the repository. It is a CLI version; you can scan private or public repositories, whatever you like. Then we have a web-based service, under deepscan.trustsource.io. It is the same tool, just put behind a web interface. You simply enter the URL you want to scan, it is passed asynchronously to a service that checks out the repository, scans it and returns a UUID that you can use to retrieve the results, and you get a nice interface showing the results. That is pretty simple to use; I will demonstrate it later on. We also have another version that deals with authentication, so it can access private repositories, and it also has the capability to edit and modify the results. Parts of this shall move over to the free version, but we are not there yet. Now I want to demonstrate the tools so you can get a grip on them. Here you see pip install ts-deepscan. Let's check out DeepScan itself and see how it works: git clone the ts-deepscan repository from the TrustSource organization on GitHub. Here we go. Now we can use DeepScan to see what is included: ts-deepscan with -o, which names the output file, and the option to include copyright information; please note that there is a capital C in there, otherwise the parameter won't work. Don't forget the dot to use the current directory; it can also be any other directory that you give here. Here we go. In the first step it builds the dataset. If you are using it for the first time, this may take a moment; as we have just installed it, it needs that time. Then it starts to scan the complete repository. It looks at all the files it can find, all the text files you can have in the repository, readme pages and so on, and you see an overview of the files processed. Here we go. The result file doesn't look too nice on the command line, so I'm going to open it in Firefox and we'll have a look at the result. Here you go. What you get is a structure of files and their contents. It lists the different keys it identified as well as the copyright texts. This is a very simple way to do it. There is an even simpler one for those who do not want the hassle of an installation: you can go to deepscan.trustsource.io and provide the repository URL that you want to scan. Let's take the same one and schedule it for scanning. You must agree to the terms and conditions, which you can read in German as well as in English, and you may decide whether to include the copyright information or not. What you will see is that you get a UUID. This UUID is the one you can use for later reference when you return to see your results. When you carry on with this ID, you can return later, either when you receive the notification email or, in case you do not want to leave your email address, via the check results page, where you enter the ID that you received and request the status to be displayed.
This will open up a very simple UI which identifies what you have done, giving the repository URL, telling you the number of files that have been processed and how many failed (there are sometimes code page issues and the like), and then listing the files, the results, the failures, and how many results there are. On the right-hand side you see how many files have license indications and which licenses were found. You are also able to filter across the different files or search for a specific file by name, so that you can jump straight to the result. You get a structure showing where the findings are, and you have the chance to jump directly into the source to verify what you find. Thank you very much, it has been a pleasure presenting this to you. I hope you liked what you saw, and perhaps you will make use of it; we are happy to hear about it. In case there are additional questions, just let me know: there is a Q&A session later on and I will be happy to answer questions directly there, or just reach out. You can get in touch with us through our web pages, either trustsource.io or eacg.de. We are happy to hear from you and happy to answer your questions. Thank you very much.
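For readers who want to retrace the CLI demo described above, the spoken commands correspond roughly to the following sketch. The repository URL and the exact option names (in particular the copyright flag with its capital C) are assumptions based on what is said in the talk; check the project's README on GitHub for the authoritative spelling.

pip install ts-deepscan
git clone https://github.com/trustsource/ts-deepscan.git   # assumed repository location
cd ts-deepscan
# scan the current directory, write machine-readable results, include copyright/author info
ts-deepscan -o result.json --includeCopyright .
firefox result.json   # inspect the JSON result, as done in the demo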
|
In this talk I want to present the recently open sourced DeepScan tooling, which allows convenient analysis of repositories for effective licenses, copyrights and known files. I will show how the tool is structured and how it works, how the similarity analysis is used and what the current results are. I will also demonstrate how the free analysis service can be used and how it may be used to review and re-assess findings.
|
10.5446/52435 (DOI)
|
Welcome to this OSS Review Toolkit project update. My name is Thomas Steenbergen and I'm the head of open source for HERE Technologies. We use OSS Review Toolkit, or ORT for short, for doing all of our FOSS reviews. Besides being an ORT maintainer, I'm also a contributor to the various projects listed here, as we're trying to build a FOSS solution for doing FOSS reviews. So what has our team been up to? We added an advisor component to add security vulnerability data to your ORT scans. For now only Nexus IQ is supported as a provider, but more providers will follow soon. We also improved our reporter component; that's the component you can use to generate the results in various output formats. We now have an AsciiDoc output that we also use to generate PDFs. People have been asking us for more options to generate their own third-party notices, so now, via an Apache Freemarker template, you can create your own highly customized notices. Also, for those who are using GitLab, you can now use the new GitLab license model reporter to display ORT's license findings directly in GitLab's UI. We also added, in the web app and in the static HTML report, so-called how-to-fix texts. This allows you to not just throw a violation or an error, but to instantly show how it can be resolved; I will demonstrate this later in the presentation. We also added support for SPDX manifests. This allows you to manually define software packages, which is especially useful if either ORT doesn't support your package manager, or if it's a programming language that doesn't really have a package manager, like C or C++. Then, to make it easier to classify licenses that have an exception, you can now use the WITH operator. So you can now specify LGPL-2.1 and LGPL-2.1 WITH Classpath-exception-2.0 and classify them as two separate licenses; in other words, you can specify license plus exception when you classify licenses. We also added the ability to override the declared license for a specific package. ORT already had the capability to automatically map the declared license, so if it says "Apache 2.0" it does that automatically, but now you can also override it for specific packages if you want your own mapping from a declared license to an SPDX identifier. We also added several new configuration options and improvements to how you can use ORT, for instance proxy support and SW360 storage; I will not go into those since we don't have the time. We also made some performance improvements to reduce the time it takes for ORT to run in your CI/CD environment. And we started a partnership with FossID, so you can now use ORT inside FossID, and from ORT you can also call FossID. To give you an overview, I included this slide. I don't really have the time to explain everything, but it roughly gives you an idea of how an ORT pipeline works, for people who haven't seen it yet: the analyzer inspects the package manager metadata, then the downloader fetches the source, the scanner can be used to scan the actual source code, then you can write policy rules over the licenses and security advisories that are found, and the reporter, as I said before, gives you all of it in various output formats. So I thought it was best to just demo what we have, because it's easier to understand features when you see them in real life.
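For orientation before the demo: the pipeline stages just described (analyzer, downloader/scanner, evaluator, reporter) map roughly onto the ORT command line as sketched below. This is only a sketch; option names, file names and report format identifiers vary between ORT versions, and the paths and rules file are placeholders.

# analyze the project's package manager metadata
ort analyze -i /path/to/project -o analyzer-out
# scan the sources of the project and its dependencies
ort scan -i analyzer-out/analyzer-result.yml -o scanner-out
# evaluate policy rules against the scan result
ort evaluate -i scanner-out/scan-result.yml -o evaluator-out --rules-file evaluator.rules.kts
# render reports, e.g. the web app and a static HTML report
ort report -i evaluator-out/evaluation-result.yml -o report-out -f WebApp,StaticHtml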
So let me start with one of the first features we'll see: how-to-fix texts. Here you see an ORT report. You can see the summary: how many violations there are, in this case 40 violations, with 14 declared licenses found and 44 detected. For people who are not familiar with ORT, you have the table view with different filters (we also added more options for the information shown), and then you finally have the tree view, where you can see all the information, basically as a tree. So one of the features I wanted to highlight is how-to-fix texts. In most tools you get a violation, but to actually resolve it you have to click a link and review the documentation. We added a new feature which allows you to define how-to-fix texts in Markdown. What it adds is this: again, you see your violation; okay, now I'm a developer, the company license is missing, how do I fix this, what do I need to add? You can see the exact steps, specifically for SBT in this case, showing how a developer can add the license for this example company. So you can give really precise instructions to the people reading the report on how to fix the violations that you've thrown. Another feature I want to show you is the SPDX package manifest. We had this project, called move decay, which is a Python project that was using C libraries. Of course C doesn't have a standard package manager, but we still wanted to show these C packages in our scans. You can now do this by simply adding an SPDX file to the source code repository; I'll show it as an example, and then it literally shows up inside ORT. So you see the report of that project being scanned, you see your SPDX document file, and you see exactly the information as we specified it. It is also possible to specify an SPDX file at the root and then describe the directories below it; that works as well. The final feature I wanted to show is the ORT helper CLI. This feature has been there for quite a while, but we never actually showed it in our presentations. If you have ORT compiled, you can go to the helper-cli build install orth directory, and if you then type orth, it prints the following. The ORT helper CLI is, as it says, a helper CLI; it allows you to do certain things. For instance, you can generate .ort.yml files and scope excludes, but one of the features I really want to show is list-licenses. Imagine that you have a large amount of scan results to process; that is when we use the helper CLI. I'll show you the output, because of what it can do: the list-licenses command can show you, for each license finding inside the source code, exactly where those findings are, and show you in a clustered way what those license findings were. So in all of these four locations this license text was found, in one location this other text was found, and in two you see this is the full BSL license. And the plus here indicates whether it was included or excluded by the developer, meaning whether it goes into the release artifact, yes or no. So those were all the features I wanted to show; let's go back. What do we have planned for 2021? We're going to add support for making license choices.
As the rest of ORT is based on SPDX, for license choices we are literally going to allow you to say: hey, if you find this SPDX expression with ANDs and ORs, I want you to resolve the ORs like this. That's a new feature you will see in the coming months. Then we're going to add, as I said, additional security providers, such as VulnerableCode, to our advisor component. We are also working on SPDX 3.0; I myself am working on the standard as a maintainer of SPDX, so you will see SPDX 3.0 in ORT. Further improvements that we're working on are, again, performance, improving the GitLab integration, and working on the documentation. That's it. Thank you for listening, and I'm happy to answer any questions that you may have.
|
In this session we will provide an update on OSS Review Toolkit (ORT) - which features have been recently added and what the ORT team is currently working on.
|
10.5446/52436 (DOI)
|
Hi, this is Shaheem. I have been working on FOSSology for the last seven years; I am a contributor as well as a maintainer in the FOSSology community. I am here with my colleagues, Gaurav and Anupam, who will be presenting with me today. The topic is FOSSology software component analysis and integration. I'll start with an introduction. To start with, FOSSology is an open source license compliance toolkit for open source software. FOSSology was first published in 2008 by HP; later, in 2015, it became a Linux Foundation collaboration project. There are different tasks for OSS compliance using FOSSology: license scanning, copyright, authorship and email detection, as well as export control statements and the generation of documentation, meaning reports. We also have export and import of SPDX files. There are two main scanners in FOSSology: one is Nomos and the other is Monk. Nomos does keyword-based and regex-based scanning; Monk scans for full license texts. Nomos has high flexibility because it finds many licenses, and Monk has high precision. Recently we have also developed the Ojo scanner, a scanner to detect SPDX license identifiers. If you look at the slides, we have code where you can see the SPDX-License-Identifier pattern; here the file has an Apache-2.0 license, and FOSSology detects this license using the Ojo scanner. To automate this, we have a decider option for Ojo, so findings can be concluded automatically: you have to check this option, and the license is concluded from the scanner matches if the Ojo findings have no contradiction with other findings. Last December we made a new release. There are many fixes and updates; we have listed a few of them here. There is a new agent which works with ClearlyDefined, so you can pull decisions from ClearlyDefined. We added support for PostgreSQL 12 and provided the ability to specify Git branches when uploading from version control systems. We added a feature to reuse deactivated copyright statements. We also added support for the new event release for Cal Fossa, and we removed the OpenSSL dependency and are now using libgcrypt. Obligations now refer to the license conclusions, and copyright statements can be auto-deactivated when you mark findings as irrelevant. Display times now use the browser time zone, so whatever time zone the server is in, times are shown in your browser's time zone, and we added the ability to export copyrights in CSV format. Now my colleague will take over. Thank you. Thank you, Shaheem, for the introduction to FOSSology. Shaheem has shown all the agents and how FOSSology works; I'll take you in a different direction, showing how you can automate your work with FOSSology, for example uploading or scanning components, or report generation. For automation we support different interfaces: we have the REST API, we have REST clients, and we have command line tools. Using the REST API you can manage folders, upload components, trigger scans, and download reports. For more details you can look at our FOSSology web page, where you have the basic REST API calls; the link is given here, and we have the FOSSology REST API wiki page in the GitHub repository. For the REST clients, we have several, available in different languages, for example Python and shell script.
You can use them depending on the application you are building; based on your preference, you can choose any of the REST clients. We also have fossdriver, which can not only do what the REST API can do, it can also manage your bulk scans. For the command line tools, you have to be on the server, and you can automate things on the server itself: you can use the different scanning agents from the command line, and you can upload and download components as well. Moving to the next slide. The very first question that comes to mind is how to upload a component using the REST API, a REST client or the command line tools we discussed. Let's see how we can do that. To upload a component to FOSSology you need a FOSSology REST token and the server URL; that's the basic requirement. Once you have these two things handy, you can use the shell script provided with the shell client, "upload all from folder". This is a script that we made for the demo, and here you have uploadrest.sh, which is the main script that it calls. Once you call this script with your API token and the FOSSology server, you also need to provide the folder you want to upload to and the group you are using for uploading your package, and it will trigger the upload in FOSSology. You can do the same thing using our Python client as well. Then cp2foss: using cp2foss you are able to upload components to the FOSSology server, and we have multiple options for where a source package can come from. For example, you can upload a file, you can upload from a URL, you can use a version control repository URL from Git or SVN, and you can also upload directly from the server itself. Now, once you have uploaded the package, the next thing that comes to mind is how to scan it. To scan the package you can use the shell client or the Python client. On the left-hand side you can see the JSON format with "analysis" and "decider", which is the same for both the Python and the shell client. In the analysis part you specify which agents you want to use: to enable them, set them to true; if you don't want to use them, set them to false. For the decider you can similarly choose which decision options you want. Then you use a REST API call to scan this package. Uploading and scanning is generally a two-step process if you use the REST client directly, but if you are using the shell client, uploading the component will also trigger the scanning. You can also start scanning with fossjobs; that's a CLI tool provided in FOSSology. Here you have the scanning options: analysis, decider and reuse. Analysis and decider you have already seen in the previous slide, where you set the agents you choose to true or false. The reuse option is for when you want to reuse a component that you have already uploaded earlier. The next slide is about analyzing the results; from this part onwards Gaurav will take you forward, and thank you all. Thank you. So far we have our uploads on the FOSSology server and the scanning done, but what can you do now? With the REST API we expose two different endpoints.
One is a summary endpoint, which tells you information like the ID of the upload, which main license was selected, how many licenses were found in the upload, and similar information in JSON format. Similarly, you have a licenses endpoint where you can request the findings from specific agents; here I requested them from Nomos, Monk and Ojo. This gives you a list with the file path and, under "scanning", the scanner findings made by Nomos, Monk or Ojo, as well as the conclusions made by a human. The same endpoints can also be accessed from Python: you can call the upload summary or upload licenses to get these objects. Among the command line tools we don't have the upload summary, but you can request results from a specific agent by calling fo_nomos_license_list, fo_monk_license_list or fo_copyright_list. They also provide various flags: you can exclude certain file paths from your results, or, as an example for copyrights, you can filter only those copyright statements which contain a specific word; check their help for details. These are the raw results on which you can do analysis, but they are not very human friendly, so FOSSology also gives you a report for consumption. With the shell client you can call downloadrest.sh, which accepts an upload ID and a report format. The same thing exists in Python, where what the shell client does in a single call is actually a two-step process: first you generate a report in a specific format, which gives you a report ID, then using that report ID you download its content, and with Python you can write it to a file. FOSSology supports five different reporting formats: DEP5, SPDX RDF, SPDX tag-value, ReadMe OSS (which is a text file), and the unified report, which is a Word document. So far, that is about FOSSology and how you can use it for your SCA. We also have other projects, for example Atarashi, which is a standalone license scanner written in Python; it uses text retrieval algorithms rather than rule-based scanning like Nomos. Then you have Nirjas, which is a Python tool as well as a Python library to extract comments from source code. It currently supports 25 different languages and can understand various syntaxes: single-line comments, multi-line comments, and comments that use single-line syntax but are written continuously in the source code. Both of these projects were contributed to FOSSology under the Google Summer of Code program. You also have the FOSSology slides project under the FOSSology GitHub profile, which offers various presentations, currently in English, Japanese and Vietnamese. You can freely use them; they are all licensed under CC BY-SA 4.0. Thank you for listening. You can also head over to our YouTube channel, the link is provided here, and if you liked our presentation, please go to GitHub and give us a star. Thank you very much.
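As a rough illustration of the upload, scan and report flow described in the last two sections, the REST calls look approximately like the sketch below. Endpoint paths, header names and format identifiers follow the FOSSology REST API documentation but may differ between versions; the token, server URL, upload ID and report ID are placeholders.

TOKEN="<your FOSSology REST token>"
API="https://fossology.example.com/repo/api/v1"
# 1. upload a source archive into folder 1
curl -X POST "$API/uploads" -H "Authorization: Bearer $TOKEN" \
     -H "folderId: 1" -H "uploadDescription: demo upload" \
     -F "fileInput=@package.tar.gz"
# 2. schedule scan agents (upload ID 42 is a placeholder taken from the previous response)
curl -X POST "$API/jobs" -H "Authorization: Bearer $TOKEN" \
     -H "folderId: 1" -H "uploadId: 42" -H "Content-Type: application/json" \
     -d '{"analysis":{"nomos":true,"monk":true,"ojo":true,"copyright_email_author":true},"decider":{"ojo_decider":true}}'
# 3. generate a report (e.g. SPDX tag-value); the response contains a report ID to download from
curl -X GET "$API/report" -H "Authorization: Bearer $TOKEN" \
     -H "uploadId: 42" -H "reportFormat: spdx2tv"
curl -X GET "$API/report/15" -H "Authorization: Bearer $TOKEN" -H "Accept: text/plain" -o report.spdx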
|
FOSSology focusses on license compliance analyses. Recently, a number of new features have been published by the community to integrate better with software composition analysis. The presentation gives an introduction to the main and relevant developments here. FOSSology is considered one of the leading Open Source tools when it comes to license compliance. There are various ways in which someone can analyze a package in FOSSology: one can either do it manually or programmatically. As the industry is heading towards automation, a programmatic approach based on software composition analysis is preferred and becomes more advantageous. Currently, FOSSology provides three different ways to integrate with software composition analysis approaches: utilizing the FOSSology CLI tools, using one of the client libraries, or using the REST API. The presentation will help by providing different strategies which can be used to analyze software components in an automated environment. We will see how one can use the CLI tools of FOSSology or other FOSS projects built on FOSSology's REST API to push packages for analysis. We will also see how the enhanced API can provide much more information about a package and how analysis can be triggered on demand. Finally, we will touch upon how one can gather the package information in a neat report for auditing.
|
10.5446/52437 (DOI)
|
Welcome. My name is Philippe Ombredanne and I'll be talking today about ScanCode. Thank you for joining us virtually in lovely Brussels. About me: I'm the lead maintainer of ScanCode, which is a tool to scan code, as advertised. One of my claims to fame is that I have my sign-off attached to some of the largest deletions of lines of code in the Linux kernel. Actually, these were not lines of code; they were really comments more than code. But a lot of them, tens of thousands of them. I'm very good at it, and trust me, it's a skill: I'm good at deleting comments in code. Now, what we want to talk about today is software composition analysis and the challenges that exist there. We're trying to figure out what's in your code, and doing this is really hard, harder than you might think. It's really easy to install, provision and add new dependencies to projects. It's very quick: you can have one Node package that pulls in hundreds of dependencies, or you install a Docker image and then you have not hundreds but potentially thousands of dependencies. So there's really an explosion in the number of software packages, and it's probably a good thing: we're doing component-based development for real now. The difficulty is that we don't know exactly, at all times, where the code is coming from. The other thing is that licensing clarity is far from a solved problem. Open source is about licensing; without a license there is no open source. It's important to know the license, if anything to know whether we're allowed to use and modify the code, as an example. On the tooling side, the problem is that there's no single technique or tool that can rule them all and solve all the problems at once; none is good enough alone, at least. The other problem is that it's difficult to name and exchange information about software. There's been a lot of effort, starting with SPDX, but there are other projects going on and merging, and a lot of discussion. Take something as simple as determining the right name for a package: the "file" package as a Ruby gem is not the same as the "file" package in Ubuntu, which is based on libmagic. They have the same name, but they're very different; one may be reusing the other in some cases, and we don't have a good solution yet to name these, though there are some elements. The last thing is data, and data is probably becoming even more important than code. As it stands, we're direly missing free and open source data about free and open source code. So the vision we have for ScanCode: we're on a mission, I'm on a mission with the rest of the team, to make it easier to reuse free and open source software and to make that safer and more efficient. The way we go about doing that is we create tools, primarily for data collection at the lowest level, which is raw primary evidence: detecting licenses and copyrights, collecting information from package manifests, inspecting binaries and files to squeeze out any kind of interesting information we can find there. We build these as many small tools and libraries so they are reusable and easy to integrate by free and open source projects, and by non-open source projects for that matter. It's actually more work to make sure that a tool can be reused.
We're also striving to provide the best-in-class tool in each category, and we make absolutely no compromise on detection accuracy, no shortcuts, which means sometimes we're not the fastest of the tools, but we're really trying to be the very best when it comes to accuracy. The third point is to automate composition analysis with scripted workflows and pipelines, integrating our tools but also any other tools. The last is to ensure that we can eventually provide both reference data and models that help automate and bring all these tools together in something that makes sense, and provide reference data against which you can check and that you can feed back, to create a virtuous circle between the community, the tools and this reference data. When we talk about reference data, we're talking about licenses, packages, files and related information. The approach we have in terms of code is to primarily use static analysis, in contrast with dynamic analysis. That means we build tools that look at the binaries, the source code and everything at rest, as opposed to running inside a container or running a package management tool, these kinds of things. We try to be mostly data-driven in most cases. We try to use open metadata databases, and we create them when we don't have them. One thing that's important is that we're really trying to ensure that we can vet all the files. A lot of tools in that space, especially commercial ones, tend to have a very lightweight approach, focusing primarily on the surface metadata provided at the package level, and they don't look in detail at the files. Anything that's complex should be scripted and customizable, as opposed to being hard-coded; that's why we have this pipeline approach for composition analysis. Last, and not least, we're putting a lot of emphasis on collaboration with other projects, for integration of our tools and also to integrate other tools in our code; that's a significant bit of work. Who is using it? You can see some examples there. We're very proud to have as active users very well-known open source projects and well-known organizations as well as companies, supported by a very active community. We have over 100 contributors, and that's something I'm very proud of. We put a lot of emphasis on being welcoming to newbies; we receive probably one or two new aspiring contributors every day, and that requires quite a bit of work. We're trying to organize the community: we're reaching a size where we have about 100 contributors and routinely several hundred participants in the chat channels, where we're trying to break things out and structure things a bit more. In terms of alternatives to our tools: with commercial tools, most of them are focused primarily on security. They tend to have pretty weak support for licensing information, and even when it comes to security they focus mostly on the surface, with weaker detection of the origin of code. All that is usually backed by proprietary data, and that's not a great combination altogether: proprietary data that cannot be vetted, wrapped in proprietary tools that cannot be vetted either; in many cases weak information which is not traceable because it's not open. In contrast, the other open source tools presented here today are great; most of them are excellent. Maybe not at the level of commercial tools yet in every respect, but we're working hard on it and eventually we'll prevail.
An example of what we can do with ScanCode: this is a pipeline in the new ScanCode.io server which does a detailed analysis of a Docker image. If we look at what it looks like, that's ScanCode.io here: for instance you have an image here, a Debian image, which has been analyzed. The way you start an analysis is very simple: you give a project name, you upload an image and you say whether this is just a plain codebase, a root filesystem, or a Docker image, and we run this pipeline here. You can see an example of the pipeline graph, which is generated straight from the code; behind the scenes it's a very simple Python script. You run the analysis and you can see the results. We have 84 packages that were detected across the image layers of that Docker image. What we call resources are files or directories; you can see some detail and high-level information: files that were in a package or not in a package. The type of packages: in this case the image is Debian-based, so it makes sense that everything is Debian. You can see the variety of licenses that we've found, for instance these different licenses here. In some cases it's a bit dense; there's a lot of detail. That's one example. Another example would be to look at a Python package; in this case it was pip that was scanned. It's a PyPI package, there's only one, so that's easy. What's surprising is that the declared license is MIT at the package level, but if you dive down to the file level, you'll see that there's much more variety: LGPL, BSD, Python, MIT of course, plus some things which probably need a bit of review. The point is that we're helping you surface information that doesn't exist at the level of the declared metadata. That's important, because almost everyone looks only at the declared information. That's an example of the output of one of our flagship projects, which is called ScanCode.io. ScanCode Toolkit is the core engine for license, copyright and package manifest parsing; it's used in many other projects. We have an approach where we build many small tools that are assembled together for a purpose. AttributeCode, for instance, is there to generate notices that you can integrate in the build, collecting all the license information from all the packages you bundle in a given product, system or application. ScanCode Results Analyzer is an upcoming tool that uses machine learning to review license scans; that means it finds problems and tries to fix them automatically. It's a new tool and it's pretty promising for us. In terms of data, a lot of things are around vulnerabilities and license plus package information. We've been one of the co-founders of ClearlyDefined, and we have a tool called ClearCode to help extract all the good data and ScanCode scans from there. We'll eventually be releasing, soon, a tool called PackageDB which brings all of that into one place. We have a project called Package URL, which is a way to identify packages that is used in OWASP tools, Sonatype and many other places. If we look for instance at the LicenseDB, since we're talking about licenses, it's something that's been released recently: all the ScanCode licenses available for reuse and that you can link to, covering HTML, JSON, text, YAML and so on. There are about 1,700 of them. It's simple, clean and neat, but it helps to have some kind of UI there to review them. In terms of plans, we're looking at bringing more data about code, and more code. That's pretty much it. The exciting thing is going to be...
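As a pointer for trying the core engine mentioned above, a minimal ScanCode Toolkit run looks roughly like the sketch below; the options shown are the commonly documented ones, and the codebase path and output file name are placeholders.

pip install scancode-toolkit
# collect licenses, copyrights, package manifests and file info, pretty-printed as JSON
scancode --license --copyright --package --info --json-pp scan-result.json /path/to/codebase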
|
This is a presentation of the latest features and updates in ScanCode toolkit and its companion projects. ScanCode toolkit is an open source scanner that strives to provide best-in-class license, copyright and package manifest detection and data collection. This session presents the current, latest and upcoming features and developments, as well as new projects, data and tools to enable FOSS SCA.
|
10.5446/52443 (DOI)
|
Hello everyone. Today we'll talk about database as a service with Kubernetes. Let me start by taking you back to the early days of modern open source software. I got involved with open source myself in the late 90s, and I remember that at that time it was quite complicated: you would often have to download the source code and then some patches to make it work with your particular operating system and your compiler, compile it all and hope it works. Since that time, we have started this never-ending journey towards simplicity. We got binaries, then packages, repositories, containers, which make it easier and easier to deploy the software we use, which is wonderful. And if you look at databases, database as a service is the state of the art in how you can get simple access to a database. Now, database as a service is a term which is often used in conjunction with public cloud providers, so what do I mean when I talk about open source database as a service? I think database as a service is generally two things. From one standpoint, it is an interface: instead of the conventional, old-fashioned database delivery approach, where we have to install a bunch of packages and configure the server, backups, monitoring and maybe some HA solution separately, we can have a single API call through which we deploy a full database cluster, which includes backups, and maybe self-healing, self-patching, self-tuning and so on. That is something which is very well suited to open source database software. There is another part of it as well: backend management. When software fails, and there are always edge cases where the database does not behave, humans can get involved and fix it, so that you as a service consumer are not impacted. That happens at all the public cloud providers out there, though it is often invisible to you. And that is something which cannot be completely open source, because it is humans doing the work, but it is something you can handle yourself or choose a partner to help you with. Now let's talk a bit about the cloud, or rather the promise of the cloud. If you remember a few years back, when the cloud was not as common as it is these days, the cloud was explained by an analogy to electric utilities: hey folks, instead of producing your own electricity by running a generator at home, it is common sense that somebody should do that efficiently, at scale, and provide reliable electricity for you. That is all good, and it is a very appealing explanation which makes sense to me. The thing to note and remember, though, is that with electricity you can often choose among different providers, which give you pretty much the same usable electricity; some of them may be more reliable, others greener, others cheaper, but you can still run your TV with whichever provider you use. And that is not quite the reality where cloud vendors are taking us. If you look at cloud architect courses and reference architectures in general, you will see that the same cloud vendors who sold the cloud as the equivalent of commodity electricity want to lock you in by advising the most proprietary solutions the platform has to offer. For example, in the database space you are never told to just deploy the database on Kubernetes, or even on EC2; you are told to use solutions like DynamoDB or Amazon Aurora when it comes to Amazon.
These are, obviously, solutions which provide lock-in and stickiness to their platform, and there is a lot of push and marketing for that, so it is rather hard to resist. In the end, though, the choice is yours. You can choose your cloud, or rather how you use the cloud. You can choose the path of serfdom, where you are at the mercy of the cloud and completely locked into its solutions, or the way of freedom. When you think about freedom as it applies to cloud infrastructure, that is where I think Kubernetes comes into play, as a pretty much ubiquitous API which provides substantial compatibility between different cloud vendors. If you do not know what Kubernetes is, here is a very simple introduction: you can think of it as an operating system, similar to Linux, but for a cluster or data center rather than an individual host. It is something with a lot of momentum, and it is universally supported in public, private and hybrid clouds. So if you can deploy your software on a Kubernetes backend, you can pretty much deploy it anywhere and everywhere. Traditionally, though, Kubernetes was not great for running stateful applications; in fact, it was designed for running stateless applications, and databases are about as far from stateless applications as you can get. There are still a lot of people who question to what extent stateful workloads are suitable for Kubernetes, but there have been a lot of improvements over the last few years which make it feasible, possible, and even recommended. For example, we got StatefulSets to manage complicated stateful environments like databases, we got persistent volumes to hold that state even if a particular Kubernetes pod dies, and we got the operator framework to manage all the complexity which databases require. In addition to the fact that it is possible, there are actually quite a few public cloud database-as-a-service offerings powered by Kubernetes: CockroachCloud, InfluxDB Cloud and PlanetScale all talk publicly about their infrastructure being built fully on Kubernetes. So if you are looking to run a database on Kubernetes, you will not be alone; there are probably hundreds of thousands of nodes running in production in those clouds and others. Now, if you look at our work at Percona, I would say we are not the largest contributor to Kubernetes and databases, but we are working to do our part. We have developed Kubernetes operators, essentially for MySQL and MongoDB, to run those databases in the cloud. Our thinking is that there are two types of users for this software. One is the direct users of our Kubernetes operators, who will just take them and run them. The other is users who want something like a database as a service, similar to what Amazon provides, where a simple API call or a couple of clicks through a web interface gives you a database which essentially manages itself as much as possible, and the fact that it uses Kubernetes in the backend is, in that case, not so relevant. So the operators are available for folks who are familiar with Kubernetes and want to work with Kubernetes in a Kubernetes way.
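For the "Kubernetes way" just mentioned, deploying one of the operators typically comes down to applying its published manifests. This is a hedged sketch; the repository layout, file names and CRD short name should be checked against the operator's current documentation.

git clone https://github.com/percona/percona-xtradb-cluster-operator.git
cd percona-xtradb-cluster-operator
kubectl apply -f deploy/bundle.yaml   # CRDs, RBAC and the operator deployment
kubectl apply -f deploy/cr.yaml       # a sample 3-node cluster custom resource
kubectl get pxc                       # watch the cluster become ready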
If you are not one of those Kubernetes-savvy users, we also provide the Percona DBaaS experimental CLI, which allows you to create clusters rather simply by running a few commands. After you set up the connection to your Kubernetes cluster somewhere in the cloud, you can run a single command to create a MySQL cluster. Once you have done that, you can use kubectl for the port mapping and then use your favorite client, like mysql, with the credentials provided, to connect to it. If you would rather expose an IP and have the database accessible outside of Kubernetes, we have that option as well. We also provide functionality where you can specify your own password rather than using an automatically created one. And while our default deployment is a cluster of three nodes, because especially with Kubernetes we do not want to rely on the reliability of a single pod, if you just need a database for an experiment or for development you may not want three nodes deployed for that use case. In that case you can use the advanced deployment options and deploy only one database node, without high availability. Because this is not a safe configuration, we require you to jump through some extra hoops to confirm that you really understand what you are doing, because losing your data is quite unpleasant. The next step we are taking is making database-as-a-service functionality part of Percona Monitoring and Management (PMM), where we have just recently released a preview of this functionality. With this, it gets pretty close to what you get with Amazon RDS: after you have connected PMM to a Kubernetes cluster, you are able to deploy database clusters in just a few clicks and get a fully operational, monitored database cluster. We are not quite doing backups yet, and some other things still need to be improved, but that is why it is called a preview. So if you are interested in this direction of our work, let us know, check it out, and maybe even submit a pull request; it is a fully open source project. To finish up: with the Percona DBaaS CLI you can already get something close to a database-as-a-service experience with Kubernetes, and PMM is taking that even further. At Percona we are continuing to work to bring more features and extend the usability of database as a service available as open source software. If you are interested in that too, check out our software, or contribute and participate in the development; we would really appreciate that. Let me finish with this thought: I believe database as a service has won as the way to consume databases because of the really unparalleled convenience it offers. At the same time, vendor lock-in sucks, and especially if you have come to FOSDEM, I think you probably believe that. And, as with many other problems before, open source is coming to the rescue. Thank you. With that, I'm ready to answer some of your questions if you have any. Okay, looks like we are done, right? Okay, I'll just wait a couple of seconds. Yeah, and I guess we're live. Okay. Thanks, Peter, for your talk. We have some questions. The first one is: is this available as CRDs too? I don't know exactly what this refers to, but you will know.
Yeah, I mean, if you look at the concept: the operator is how we integrate with Kubernetes in a pretty standard way, through custom resource definitions, and then the CLI and the graphical user interface are for people who are not familiar with Kubernetes; that is the path we chose for integration. Okay, then another question is: how good of a feeling do we have running such DB clusters on a production Kubernetes? Well, I think data on Kubernetes is always a somewhat contentious topic. I have customers, including huge Fortune 500-class operations, which run a lot of databases on Kubernetes very successfully, and there are folks who say, oh my gosh, you can never do that. As I mentioned, you also see an increasing number of public cloud solutions being built on Kubernetes, so it is pretty robust at this point if you configure it properly. There are some limitations, of course. For example, if you look at pod sizes: on bare metal you may be running databases with hundreds of CPU cores and a terabyte or more of memory, and you probably do not want to do that with Kubernetes. From a practical standpoint, what we suggest to everybody is: if you are not comfortable, do not jump in and put your most critical database in this environment first. Start with test and dev, maybe put some second-tier database on Kubernetes first, and as you develop your trust, you can move on to more production-critical databases. Okay. Another question we have is: do you have any code links to the database-as-a-service CLI, and does it use the Kubernetes API in the background with some wrappers? Oh, yes. The CLI is essentially a wrapper that works through kubectl in this case, and that's it. So let me get the link for folks; I obviously have it here. Okay. And then we have one last question: can operators perform backup and restore, and if so, where can they store backups? Yes, we have operator support for backups. We even had a release just a few days ago with point-in-time recovery support in the operator, which is pretty unique as database operators go: it can essentially, every minute or every few seconds, capture the database changes and upload them to some cloud storage. The most common way to store backups from Kubernetes, for us, is S3-compatible storage, so you can use S3 itself or any other S3-compatible store. You can also take a backup locally, but the usability of that is limited in a Kubernetes environment. Okay. So are there any other questions? We still have a few minutes left. And folks, this is obviously an open source project, so we would very much welcome your feedback, bugs, even some code contributions. Okay, we do have some more questions. What happens with events such as node failures, and how quickly can the workload be migrated? Well, that is a great question, and that is one of the beautiful things with Kubernetes: it provides that kind of backplane for handling node failures.
So if you have a node which has failed, the operator will provision another pod for the cluster, which will rejoin the cluster. How much time that takes depends on the data size and on what kind of failure it is: if only the compute resources failed, it is very quick to reprovision; but if the storage attached to that node has also failed, it will need to essentially re-sync the data from another database node, and that will depend on your database size and your network speed. Okay. Another question is: are you already developing Helm charts for the Percona XtraDB Cluster operator? Yes, there is a Helm chart available; let me send you the link to the docs on the chart. So...
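Tying together the CLI flow and the backup answer from the Q&A, here is a hedged sketch. The percona-dbaas command syntax and the service name are illustrative assumptions rather than verified syntax, and the backup resource fields follow the operator's documented schema, which may change between versions.

# create a 3-node MySQL cluster with the experimental DBaaS CLI, then connect through a port-forward
percona-dbaas mysql create-db cluster1
kubectl port-forward svc/cluster1-proxysql 3306:3306 &
mysql -h 127.0.0.1 -P 3306 -u root -p
# request an on-demand backup to S3-compatible storage via the operator
cat <<'EOF' | kubectl apply -f -
apiVersion: pxc.percona.com/v1
kind: PerconaXtraDBClusterBackup
metadata:
  name: backup1
spec:
  pxcCluster: cluster1
  storageName: s3-us-west   # an S3-compatible storage defined in the cluster custom resource
EOF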
|
DBaaS is the fastest growing way to deploy databases. It is fast and convenient and it helps to reduce toil a lot, yet it is typically done using proprietary software and is tightly coupled to the cloud vendor. We believe Kubernetes finally allows us to build a fully open source DBaaS solution capable of being deployed anywhere Kubernetes runs - in the public cloud or in your private data center. In this presentation, we will describe the most important user requirements and typical problems you would encounter building a DBaaS solution and explain how you can solve them using the Kubernetes operator framework.
|
10.5446/52449 (DOI)
|
...and once the workers are started, there is a worker process, and the communication happens between the master process and the worker process; they communicate and the results are reported back. Okay, and depending on the exit code of the executed command, it is predefined whether the interface shows a failure or a success. Right, so it goes by the exit code. And a job is basically this: from the repository there is a build, then an image creation, that is, a build that automatically creates an image, and then a container with whatever is specified in the Dockerfile. Okay, and are there automated tests? Yes, all the tests are run at that point; basically the server is compiled and then tested, and all of this is driven by buildbot. Additionally, we have further installation tests: when a build in a prior step has succeeded, there are follow-up tests, for example tests for other products that can use MariaDB, such as PHP, the MySQL client library, and so on.
There were also tests of MariaDB running in a container compared with an installation in a VM. And this is the main process: buildbot connects to the remote Docker daemon, creates an image, starts a container and the process inside it, and all communication happens between the master and the workers. And which tests run in a VM rather than in a container? Mainly the system tests, which check that the package installation succeeds and that the service starts correctly in the system. Okay. These could also run in a container, but a VM environment is considered closer to a real installation, and there are a few things you can do in a VM but not in a container; that is the main reason we still have VMs. One of the nice things is that for these tests we have basic VMs, minimal configurations without extra packages pre-installed, so we can verify that the package installs cleanly in a minimal environment. And can resources be restricted when some tests need more resources? Yes, and that is an advantage of the containers: it only takes a bit of configuration. One of the things we had to do in the past was add machines manually whenever more capacity was needed and configure them, but in the end we moved that to Docker containers as well. Cool. And what about the Windows containers? Yes, that was the tricky part: scripting the Visual Studio build tools installation inside the container. We had problems where the installation would fail without a visible error, and the solution was to...
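As a rough illustration of the worker setup discussed above, building a Docker-based buildbot worker and attaching it to a master typically looks like the sketch below. The image name, Dockerfile and environment variables are generic buildbot-worker conventions used for illustration, not the MariaDB project's actual values.

# build the worker image from a Dockerfile that installs the build dependencies and the buildbot worker
docker build -t mariadb-buildbot-worker -f Dockerfile .
# start the worker and point it at the buildbot master
docker run -d \
  -e BUILDMASTER=buildbot.example.org -e BUILDMASTER_PORT=9989 \
  -e WORKERNAME=worker-1 -e WORKERPASS=changeme \
  mariadb-buildbot-worker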
|
Containers are a central point for the MariaDB buildbot. In fact, almost all our builds run in Docker containers. In this short presentation, I will talk about the container environment used in order to build MariaDB from source both on Linux and Windows. Then, I will present some of the challenges associated with running Windows in a Docker container and finally I will focus on some of the advantages of having a container based continuous integration infrastructure. Due to their obvious advantages in terms of speed and flexibility, when developing our new continuous integration framework, we aimed to have an almost container only environment. Because of this, buildbot.mariadb.org uses almost exclusively Docker containers for all the workers. In this way, using a single dockerfile, we can define in an easy and concise manner all the required environments. The process begins with choosing the operating system. Here, we have quite a large pool of 15+ operating systems varying from different Linux distributions to Windows, each running in a Docker container. The process continues by installing the MariaDB build dependencies. In the end, the buildbot components are installed. Finally, we need to make sure that the buildbot worker process starts on container startup. In this way, we obtain a clean environment that can easily be deployed to different platforms. One of the most challenging things was to run Windows in a Docker container. While the main steps remain the same as described above, I will talk about the main differences between the Windows dockerfile in comparison to a Linux one to emphasise the similarities and differences between the two.
|
10.5446/52450 (DOI)
|
Hello everyone, in this presentation we address the open problem of sharing accelerator devices in multi-tenant environments. More specifically, we implement machine learning inference acceleration on Kubernetes using Kata containers running inside Firecracker VMs. Before we get there, let's see the current approaches to accelerators in virtual environments. The first approach is called hardware partitioning and it offers the ability to partition the physical device and pass through fragments of isolated resources to the sandboxed application. Although this approach can increase resource efficiency compared to traditional full device passthrough, there is still the problem of supporting different types of accelerators, as it remains vendor or device specific. Another approach is to intercept API calls to vendor libraries in order to allow the use of remote hardware accelerators, which means on the one hand that this is still kind of vendor specific, and on the other hand that it can lead to significant overhead due to network latency, which in the end makes it not a good fit for infrastructure where low latency is of importance, like serverless platforms. And then there is also the split driver model, where the downside is that the user needs to program the hardware directly using the vendor library, and this raises concerns about the portability of an application — that could be a problem especially in environments with multiple and different accelerator devices. Another concern here is that with the paravirtual driver solution the virtual machine runs the whole vendor library in order to schedule a task on the accelerator, and then the host is kind of doing the same job using the same runtime and its own scheduler. So we can say that we have something like software stack duplication here, and that could be avoided, as we will show in the next slide where we introduce vAccel. vAccel is a universal acceleration framework that exposes generic functions to users and supports multiple acceleration backends, while keeping those four design goals of simplicity, performance, portability and virtualization support. We can see here a quick overview of the vAccel architecture. The core component, the runtime system, is responsible for matching the user functions with the desired acceleration frameworks, as shown in the figure on the right. So the flow here is that the user application calls the user-facing API — some function prototypes — that in the end are intercepted by the vAccel runtime and forwarded to the desired backend. The backend acts like a hardware abstraction layer and can be either a low-level API like OpenCL, a high-level framework like TensorFlow, or even a user-facing API like jetson-inference. And in the more interesting case of multi-tenant environments, where our workloads need to be isolated inside virtual machines, the abstraction layer can be a transport layer like virtio. So vAccel virtio is used to offload the function from the VM back to one of the acceleration frameworks on the host. In the current proof of concept, the virtio acceleration mechanism can be used with QEMU/KVM guests and AWS Firecracker virtual machines. The core component is written in C but it also offers some bindings for Rust. What is important to keep in mind for the rest of our presentation is that in the end the vAccel guest is performing ioctl calls to a virtio device, and those calls are handled by the kernel virtio-accel module, which leads to a trap into the virtual machine monitor.
And then this trap is handled by the virtio-accel backend, which in the case of QEMU is written in C and in the case of Firecracker is written in Rust. And then this virtio backend is linked to the vAccel runtime of the host, which invokes one of those four acceleration frameworks. So we saw all those details about the vAccel implementation and principle, but now we will see how we can orchestrate our functions in an isolated, virtualized environment while at the same time keeping access to the acceleration devices on the host, without having to pass them through or use the network stack or anything like that. We choose to deploy our functions on Kubernetes as Kata containers inside lightweight Firecracker virtual machines. Let's first have a look at the figure on the right so that we can have an overview of the execution flow. For simplicity we show only one application, but you can think of a figure where there are multiple VMs and multiple Firecracker instances. Our application runs inside a Kata container. The Kata container runs inside the Firecracker VM, and the container is started and managed by the Kata agent, which also runs inside the virtual machine and communicates with the container management system on the Kubernetes node. So essentially all the black boxes on this figure are Kata related. The application can use the vAccel runtime API in order to ask for acceleration. This essentially means that the process from inside the container performs some ioctl calls to the virtio-accel device, which in turn leads to the virtio-accel kernel module. Then we have a context switch that traps back to the Firecracker monitor, which is linked to the vAccel runtime on the host and invokes the desired acceleration framework. So in order to put all this to work together, we first need to patch the Kata runtime in order to support a newer version of Firecracker where the virtio-accel backend is ported. We need to properly configure containerd in order to use the devmapper snapshotter, because Firecracker uses block devices, so we can currently run it only by using devmapper. Docker was not an option here because we chose to implement this using the RuntimeClass feature of Kubernetes, which is not compatible with Docker. And as I said before, the Kata agent is the one that actually creates the container from the OCI specification. So in order to expose the guest virtio-accel device in the container, we have to patch the Kata agent and append the OCI request to include the vAccel device before creating the container. Maybe you will think that we could use the device or the privileged flag of the container management system, but this was not working, because it was probably trying to add host devices — the container management system is running on the host, while the container is invoked and created inside the guest by the Kata agent. So in any case, the Kata agent needs somehow to be aware of the virtio-accel device ID, type, and major and minor numbers on the guest. In order to deploy this on Kubernetes, we install the patched version of the Kata runtime in the cluster. We create the Kata Firecracker runtime class and of course configure containerd, then we add a label on each Kubernetes node that supports acceleration, and we also install the vAccel binaries, which are essentially the Firecracker binary — the one that is linked with the virtio-accel backend — a kernel with the virtio-accel module, a compatible rootfs including the custom Kata agent, and some configuration files.
And then we make sure that the backend acceleration components are installed and working on the labeled nodes, and then we can check this by using the nvidia-smi tool, as I will show in the live demonstration that I will make for you. Just give me a sec to open a terminal. Okay, so in this demonstration I'm going to give you a quick installation overview of Kata containers with vAccel-enabled Firecracker virtual machines, and then we're going to deploy on our Kubernetes cluster an application that performs image classification using the vAccel framework. First we're going to make sure that there is at least one Kubernetes node labeled with vAccel. So this is the node that I'm currently logged in to and we're going to use its GPU. Then we can check that there is a runtime class for the Kata Firecracker runtime. The configuration is very simple. We define the name of the runtime class and the handler called kata-fc, which is configured in containerd. Here is the name of the handler. It is configured to use the Kata Containers v2 runtime. This is the patched runtime that is compatible with the vAccel version of Firecracker. And we also have to define the configuration path for the Kata config. We can also see here that we are using devmapper as the default snapshotter for the CRI plugin. The configuration of the plugin is here. Now let's have a look at the binaries that we need to have installed on the node. The path for the Firecracker binary is here, which is dynamically linked with the vAccel library. We also have the kernel, which is configured with the virtio-accel module, and the rootfs with the custom Kata agent that exposes the vAccel device inside the container, as we showed in the presentation. Now let's have a look at the actual deployment. We are going to create 36 replicas of Firecracker pods, each of which will run the image classification application. The application is an HTTP server that routes POST requests to a handler. The handler gets an image from the POST body and calls the vAccel API to perform an image classification operation. We also define the runtime class that will be used to run our pods. This is the Kata Firecracker runtime class that we saw before. A node selector makes sure that the pods will be deployed on a vAccel-compatible node. We also create a service as an endpoint, so that our requests will be routed to our application. So let's deploy this. It's going to take some time — something like a minute — to create 36 Firecracker instances. They will all be created on the node that I'm currently logged in to. So let's see that. You can see here that some of the instances have started, around 20 of them. They will all be created on the node that I'm currently logged in to, so that we can monitor the GPU processes. Let's also create an Ingress route to that service in order to send our requests there. So we will send our requests here and then they will be routed to the service that we created before. Okay, let's see if the deployment is ready. So we have 36 Firecracker instances running on our machine. Okay, now we're going to use the nvidia-smi tool in order to monitor the processes that are running on our GPU. We can see that there are currently no processes running. Now let's try to send a POST request containing an image and see the result of the classification. Let's give it a second. Okay, so we're going to get the Cloudkernels logo image from Twitter and run an image classification using this image.
And we'll send it to our application and hopefully this will be a hedgehog. Okay, let's see how we do this. We post this to the Ingress route we created before. And normally we will see in the other terminal the GPU process that runs for a while, and then we will get the result. So we saw the process for a while in the terminal and the result was a clock. So yeah, I guess that the hedgehog kind of looks like a clock with some percentage. So now let's send a batch of requests and monitor the GPUs with a different tool, the nvtop tool. I have copied those images here from my PC. So these are the images. Let's identify the objects in the object images. Okay, we can see the GPU usage in the right terminal and also we saw the result of the classification. We'll send some more requests — let's classify the fruit images. Okay, we can see. So that's it. We can see that our application is running inside the Firecracker virtual machine and offloads the computation to the GPU without having to pass it through or use the network stack. That was a very simple example of deploying machine learning inference acceleration on Kubernetes using vAccel with Kata containers and Firecracker. I hope you enjoyed it. So if you find this interesting, you can give it a try. There is an installation guide on the vAccel website, and there is also a simplified configuration to deploy this on K3s in a few minutes. You can also have a look at these two presentations, in the virtualization and microkernel devrooms. The first one is focusing on vAccel in Firecracker and the second one is in the microkernel devroom and it is even more interesting as it focuses on hardware acceleration for unikernels, which I think is really, really good. So that's all. Thank you very much for watching this. Okay, so we should be live with the Q&A now. I'm not seeing any questions so far in the main devroom chat and we've had quite a bit of time actually for the Q&A on this one. So first of all, thank you very much for the talk. It was quite interesting. And yeah, we've got another 13 minutes until the next talk. So I don't know if any of you have anything that you'd like to add on top of the talk or maybe some pointers or anything else that you'd like to mention. Well, thank you, Stefan. Okay, I'm going to give it a little bit more time to see. There's some clapping going on, but I'm still not seeing any questions or comments at this point. Let's give it another couple of minutes. We'll wait until 25. If there's nothing to cover at that point, we can disconnect from the live chat and people can always keep asking questions in the chat. That's going to give like a 10 minutes gap until the next talk effectively. That's fine. Okay. Crickets. Seems like there's still nothing from anyone. I think we can probably wrap it up there. This room will become public at the end of the reserved Q&A period, so it's going to take a little while longer. It's going to go public in about 10 minutes. But until then, if anyone has any questions or any comments or anything, just ask directly in the main devroom channel. I think one of us will be in there. Actually, there's just something that came in just now. So the question is, are there special considerations for multi-node clusters with acceleration? Well, I'm not exactly sure what that means. For a multi-node cluster, you can still use the same deployment, I think. Okay, I will see if they've got any more specific questions or if that covered it. Yeah, I mean, there's a question around the recordings.
So FOSDEM normally has all of its videos recorded and published on fosdem.org, directly accessible from the schedule. That might take a few days to show up, because it needs to be merged with the Q&A and the talk itself. And then that goes through a review from the speakers, just to check if there's anything that needs changing, whether we need to do some cuts before or after, that kind of stuff. Once that's done, it will go live on fosdem.org. And at some point later, it gets copied over to YouTube. I know that in some years the YouTube copy took quite a while, just because of the limitations of the YouTube API and how fast FOSDEM can upload all the talks. There are many, many talks being recorded over the weekend, so it can take a little while to upload. Let's give it another minute or so to see if we get any other questions. If not, then we can wrap it up. I see someone typing, so I'll just give them a minute or a few seconds anyway. Not typing anymore. All right. Well, I think that's a wrap. Thank you very much for presenting and answering those questions. Again, if anyone has more questions, just feel free to ask in the text chat. People will be around for a while to answer those. And then in about eight minutes from now, this talk room will also open if anyone wants to join in and ask more questions. All right. Well, thanks, Jo. And maybe see you in person at another FOSDEM. Thank you, Stefan. Thanks. Bye-bye.
|
The Serverless computing paradigm facilitates the use of cloud computing resources by developers without the burden of administering and maintaining infrastructure. This simplification of cloud programming appears ideal (in theory) but the catch is that when someone needs to perform a more complex task, things could get a bit more complicated. Hardware acceleration, for instance, has been a pain point, even for traditional cloud computing programming models: IaaS providers chose dedicated solutions to avoid interference and preserve tenant isolation (device passthrough), while losing one of the most important benefits of virtualization, flexibility in workload placement through live migration. Various solutions have been proposed to overcome this limitation (API remoting, hardware slicing etc.). In the Serverless world though, do we need users to interface with a hardware device directly? Most serverless deployments are backed by containers; however, the most popular (and used) one, AWS Lambda, uses a lightweight VMM (AWS Firecracker) integrated in the container ecosystem, in order to ensure strict isolation while maintaining scalability. To this end, enabling hardware acceleration on this kind of deployment incurs the same pain points as traditional cloud infrastructure. Kata Containers evolved from Clear Containers and offer hypervisor-backed isolation for container deployments on popular orchestrators such as Docker, Kubernetes etc. Through Kata containers, AWS Firecracker VMs can be easily provisioned as Pods on a Kubernetes system, serving workloads prepared as container images. We build on the Kata container runtime and port the necessary components to support vAccel, a lightweight framework for hardware acceleration on VMs, on Firecracker. In this talk, we briefly go through vAccel, its design principles and implementation, while focusing on the integration with kata-containers and the end-to-end system applicability on ML inference workloads. We present a short patch for kata-containers to support AWS Firecracker v0.23.1, and go through the necessary patching to add the vAccel framework on k8s. Finally, we present a short demo that scales image classification purpose-built microVMs across a working K8s cluster with GPUs. Hardware acceleration for serverless deployments has never been more secure!
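As a rough sketch of what such a deployment amounts to on the Kubernetes side (expressed with the Kubernetes Python client rather than YAML; the kata-fc handler name, the node label and the container image are assumptions drawn from the talk, not verified values):

```python
# Sketch: create a Deployment whose pods run under the Kata/Firecracker
# RuntimeClass and land only on nodes labelled as vAccel-capable.
# Handler, label and image names are assumptions based on the talk.
from kubernetes import client, config

config.load_kube_config()

labels = {"app": "classify"}
pod_spec = client.V1PodSpec(
    runtime_class_name="kata-fc",          # Kata + Firecracker handler (assumed name)
    node_selector={"vaccel": "enabled"},   # only schedule on accelerated nodes (assumed label)
    containers=[client.V1Container(
        name="classify",
        image="example.org/vaccel-classify:latest",  # hypothetical image
        ports=[client.V1ContainerPort(container_port=8080)],
    )],
)

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="vaccel-classify"),
    spec=client.V1DeploymentSpec(
        replicas=36,
        selector=client.V1LabelSelector(match_labels=labels),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels=labels),
            spec=pod_spec,
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```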
|
10.5446/52455 (DOI)
|
Hello everybody, my name is Tom Mens. I'm a professor at the University of Mons in Belgium, in the computer science department, in the software engineering laboratory. The focus of this presentation will be on the relation between package dependency management and reliance on packages that still have releases with a zero major version component. This research is conducted as part of a project called SECO-Assist, which is a Belgian inter-university project. I will refer to the problem of depending on packages with major version zero as the zero space problem. This research has been conducted together with another researcher called Alexandre Decan. This presentation has also been published, or will be published soon, in a journal article. The link to the full details of this publication can be found below in this preprint that's available on arXiv. Next I will report on the results of some empirical analysis that we conducted on four different package management systems: Cargo for Rust packages, RubyGems for Ruby packages, Packagist for PHP packages and npm for the Node.js JavaScript ecosystem. In total there are lots of packages available in these ecosystems. You can see the number of packages, which ranges between 35,000 in the smallest ecosystem and up to over one million different packages in the largest one, which is npm. So we have package ecosystems of different orders of magnitude. Each of these packages tends to have multiple releases, so the number of releases is sometimes even a tenfold of the number of packages, ranging from 183,000 to almost 9 million in npm, and even more dependencies, going up to 48 million dependencies in npm. All this data we have downloaded from the open source repository and dependency metadata in the libraries.io dataset, which, if you are interested, is a very useful source of data that you can access through this Zenodo digital object identifier. If you study the different packages in these four different package managers, obviously you will find packages with many different releases, and each of these different releases will have a particular version number. All of these packages claim that they follow the semantic versioning policy, so basically they will have a three-component versioning system composed of a major version, a minor version and a patch version. Some packages will have a version number that is above 1.0.0, so you will have versions like 1.0, 4.0.6, 7.1.3 and so on. This is what we call the one-plus space. We will also find lots of packages that are still in what we call the zero space, which is versions where the major component is zero. For example, 0.0.1, 0.1.0, 0.12.3 and so on. What we wanted to study was to which extent packages in these four different package managers make use of releases in the one-plus space or of releases in the zero space, and how this affects the dependency management in these different package managers. Why did we want to do this? Basically because it is generally considered that if you have packages whose major version component is at least 1.0, they are considered to be stable packages, also more mature packages, and hence they are more likely to be popular. On the other hand, if we look at packages whose releases are still in the zero space, then they are often considered as being unstable and still under development. So they are not yet mature. But is this really the case? Is this common wisdom really the case?
Well, let us just have a look at two examples. Take for example one package in npm, ZeroKit Web SDK, which is currently in version 4.0.6. So it already has a pretty high major version number, and one could consider this package to be mature and stable. Is this really the case? I would say not really, because in total there have been only five releases of this package over time, in total only 16 commits, and there has been one contributor active for this particular package. In fact, it even turns out that this package has no longer been maintained since July 2017. And because of this, it has been archived on GitHub in June 2018, even though it's still available on npm. If we take another example in the zero space, we have for example Axios, which is a package that is still in a zero release, 0.21.1 to be precise. So one could consider that because it's still in a zero version, it's not production ready yet. But on the other hand, if we look at this package's history, it has had over time 46 different releases, a total of almost 1000 commits and 257 different contributors. Moreover, if you look at the details on GitHub, we found that it has on average 10 million weekly downloads, about 50,000 different packages — other npm packages — that depend on it. And moreover, on GitHub, we can also see that it is used by roughly 3.5 million other GitHub repositories. So one cannot really say that this package is not production ready. So one could argue that the packages in the zero space that are already production ready tend to follow the zero-based versioning scheme, which is a term that has been coined by the person who developed the satirical website 0ver.org, where he mocks projects that stick to only major version zero for their releases, using as a statement that your software's major version should never exceed the first and most important number in computing, namely zero. In fact, it's a satirical website, but nevertheless, there are lots of popular packages that are actually following this so-called zero-based versioning strategy. In fact, if you look again on that website, you will find lots of packages, like React Native, that have been developed over many years. For example, React Native is a very popular project. It has had 345 different releases, it has existed for over 5.5 years, and it is still currently under major version zero today. The same for the project I just mentioned before, Axios, which is still under its version zero, even though it has been developed over six years, and it has a lot of stars and lots of dependents, and so on. We can go over this list for quite a long time and find lots of extremely popular packages that are still in their major version zero. So this led us to consider the following research question: is it true that many packages tend to get stuck in the zero space? To find this out, we did some analysis to compute the proportion of packages in each of the four studied ecosystems that have a release in a particular version range: either only the version range belonging to the zero space, the version range belonging only to the one-plus space, or a version range belonging to both. So if a package has releases in both 0.y.z and other releases that have a version superior to one, then we would consider that they are in both.
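As an illustrative sketch of this classification step (my own code, not the study's actual analysis scripts), each package can be bucketed by whether its known releases fall in the 0.y.z space, the 1+ space, or both:

```python
# Classify packages as "zero" (only 0.y.z releases), "one-plus"
# (only >= 1.0.0 releases) or "both". Illustrative only.
def classify(releases):
    majors = {int(v.split(".")[0]) for v in releases}
    has_zero = 0 in majors
    has_one_plus = any(m >= 1 for m in majors)
    if has_zero and has_one_plus:
        return "both"
    return "zero" if has_zero else "one-plus"

packages = {
    "axios":             ["0.1.0", "0.21.1"],           # stuck in the zero space
    "zerokit-web-sdk":   ["4.0.5", "4.0.6"],            # only 1+ releases observed
    "some-migrated-pkg": ["0.9.0", "1.0.0", "1.2.3"],   # crossed the 1.0.0 barrier
}
for name, releases in packages.items():
    print(name, classify(releases))
# axios zero / zerokit-web-sdk one-plus / some-migrated-pkg both
```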
If we do this for the four different ecosystems, then we actually find that there is only a very small minority of packages in all the ecosystems that have actually crossed the 1.0.0 barrier: at the beginning of their lifetime they were a package with major version zero, and then they migrated to a version that was superior to one. For many packages in the different ecosystems, we found that they actually got stuck in the zero space and never migrated to version one. The proportion of such packages really depends on the considered ecosystem. In fact, we can find that there are two types of package managers: those like Packagist and npm, in which there is only a minority of packages that are stuck in the zero version space, and then, on the other hand, RubyGems and Cargo, where a large majority of packages always have been and will continue to remain in the zero space. For RubyGems this is about 75%, so three out of four packages have only ever had zero releases, and for Cargo it's even worse, because it's up to 92%. So it seems to be the case that they are actually not following semantic versioning, but zero-based versioning, because they never crossed the 1.0 version barrier. So now let's focus again on those packages that are actually in the both category, so packages that have actually migrated from a zero version to a one version. We did some survival analysis to find out how long it took for these packages to migrate to the one-plus space. If we compare the time between their first zero release and their first release superior to one, then this survival analysis shows the proportion of packages having done this. And what we actually found is that for the majority of packages, it only took a few months to cross the mythical 1.0 barrier, but still, for one out of five packages, it took more than a year. And for the worst ecosystem, namely RubyGems, in many cases it took over two years to migrate to the one-plus space, if they migrated at all. So this is about the prevalence of zero releases in these four different package distributions. Now let's have a look at the effect of this on the dependency management in these four different package distributions. Of course we know that in these four different package managers, you have lots of dependencies between different packages. You can have dependencies between packages that all belong to the one-plus space, you can also have lots of dependencies between packages in the zero space, but more importantly, there are of course also dependencies that go from packages that are considered stable and that are in the one-plus space to packages that are still in the zero space, and vice versa. So we wanted to find out to which extent this is the case, and to which extent this is a problem. More particularly, let us focus on the red arrows, which are dependencies from packages that have a release that is above 1.0, and that depend on other packages that still have a release with major version zero. So we asked ourselves the question: is it actually a problem for a mature, popular package to depend on a package that still has a 0.y.z version? Why did we ask ourselves this question? Because we found several citations in popular blogs for different ecosystems that consider this to be a problem.
For example, Jeremy Khan, in his blog about npm, said that you cannot trust a project that depends on another project that is still in a 0.y.z version, because such projects are not meant to be ready for use. For Cargo and Rust, there was a similar reference that you can find on this blog here, where they consider that depending on a package that is still in a 0.y.z version is a bad thing and should be avoided. So of course, these are just two different blogs where some people say that it's a problem. Is this generally the case? Difficult to say. So to get some more insight into this, we tried to find more qualitative evidence by doing a survey on LinkedIn and on Twitter, with in total 102 respondents, where we asked them the following question: if, as a developer, you need to depend on an open source package distributed through one of these package managers, would you actually trust depending on a package that has major version zero? There were four possible answers: yes, I'm sure, I don't mind depending on this; no, I would never depend on such a version; and then some intermediate responses saying I would only do this after checking the package or the package history for this particular package I would like to depend on, or I would only do this if there is no real alternative. What we find in the responses is that the answer to whether one can trust depending on 0.y.z versions is really mixed, because more than half of all responses are in the categories where they say I would only do it after checking, or if there is not really an alternative. So actually, the answer where they say there is no problem at all with depending on such a package only accounts for one out of three. So two thirds of all the respondents said it might be problematic to depend on such a package, so you shouldn't do it without any checking or without looking for alternatives. Of course, all of this is related to the use of semantic versioning. I don't think I have to explain semantic versioning to most of you, but what is important to recall, if you look at the details of the semantic versioning policy, is that this policy actually says something explicit about using a major version zero. A major version zero is intended for initial development only. Why? Because if you have a major version zero, then basically anything may change at any time. Even if you do a patch update, you might still introduce breaking changes. Just as a quick reminder for those people in the audience that don't know what semantic versioning is: if we take this traditional three-component version numbering, and you would like to depend on some package that has a particular version number while staying semantic versioning compatible, you can use dependency constraints to depend on other packages. If you use a tilde constraint, then you signal that for the versions of the package you depend on, you allow any increase of their patch number, and you assume that it will not introduce any breaking changes. If you use a caret constraint to depend on another package version, then you say that you also allow minor updates of the package you depend upon, and semantic versioning implies that minor updates are also supposed to be backwards compatible, so they will not introduce breaking changes.
If, on the other hand, you use a very permissive constraint where you also allow automatic upgrades of the major version component of your dependencies, in that case it might be that you will have to face backwards incompatibility problems, because a change in the major release number might introduce breaking changes. The problem with this is that not all of the four studied package managers stick to semantic versioning in the same way, and they also don't use dependency constraints in the same way. So here on the left we have listed the different common types of dependency constraints that are being used in the four different package distributions being considered. Since we wanted to do an analysis of the relation between the use of dependency constraints and how this affects the reliance on zero major versions, we first needed to translate the dependency constraint notation into a uniform interval-based notation, because the exact meaning of the notation might differ between the different ecosystems. For example, if you take Cargo and you specify 1.0, this would actually mean that you accept any version number between 1.0.0 and the next major release. If you would do this for npm, it would actually mean only versions between 1.0.0 and 1.1.0 excluded. And for Packagist, it would actually mean just one single allowed version. So we can see that there are differences across package distributions in how a particular version number is interpreted. The most important version constraints being used, however, are used in a more or less systematic way across the different package managers. For example, tilde 1.3 is interpreted in the same way by all package managers, and the same is true for caret constraints, except that they are not available for RubyGems. So if you now relate this to semantic versioning, then basically everything shown in red are constraints that are more restrictive than semantic versioning, because semantic versioning says that everything except major upgrades is expected to be backwards compatible. So for example, using caret 1.3 is semantic versioning compliant, but using tilde 1.3 is more restrictive than semver. And then you have the more permissive constraints, which are things like, for example, anything that's bigger than 1.2.3, or a star constraint, which are more permissive than semver. Now the interesting part here is that if you look at dependency constraints on a version number with a major version zero component, then they are always considered as being more permissive than semver. Why is that the case? Basically, because semver dictates that if you have a zero version, any upgrade, even a patch upgrade of your zero version, will be considered as potentially introducing a breaking change. The different ecosystems we studied tend to be more permissive, since they do allow for patch updates, and they assume that patch updates are still considered to be non-breaking. We looked at the documentation of the four different packaging ecosystems to find evidence of this, and they actually do say this, and they do show the deviation from semantic versioning, but not in an extremely explicit way. For example, if you have a version zero, then you can consider the minor upgrade as a breaking change indicator, which implicitly assumes that patch updates are non-breaking. The same goes for npm, and for Cargo it's the same.
If you make breaking changes, you can increment the minor version, so this implicitly assumes that if you upgrade patches, they are not breaking. And here again, the same for Packagist, where if you have a version that's in the zero space, then the caret constraint is considered as allowing patch upgrades as well, and so on. So it appears that the four different packaging ecosystems are more permissive than what semver specifies. And when we analyze this, we indeed see that this is the case: in all of the four studied ecosystems, packages that have a dependency on some other package that's still in version zero have no problem with accepting patch updates. You can see here in green the proportion of dependencies that allow for patch updates, which is a big majority. And in the case of RubyGems, they even allow for minor updates, even if this would clearly be considered as a breaking change in all of the other ecosystems. So for sure, with respect to dependency management and dependency constraints, the packaging ecosystems are more permissive than what semver specifies. Now the next question is: if you remember the pictures I have shown before, we found a difference between two of the ecosystems that have lots of packages still in the zero version space, while in the other two ecosystems there are many fewer packages in the zero version space. Why is this the case? We looked at the default initial version that is set for newly created packages in each of the package managers. If you take the Cargo package manager, and you do a cargo init, then it will create an initial version with 0.1.0 as the default value. So it's clearly in the zero version space. In npm, it's different — in fact, it's different since April 2014: if you use npm init, it will set the default version of a newly created package to 1.0.0, exactly because the ecosystem managers do not agree with the fact that the semver specification is different for 0.x.y versions, which only leads to confusion. For Packagist, there is no specific initial version that's set. Basically, the version number used for a newly created package is taken from the Git tag that can be found in the Git repository where the package is being developed. And for RubyGems, it's like Cargo: the initial version will be set to 0.1.0. So Cargo and RubyGems set a zero version as the default initial version, npm sets a one version, and for Packagist it's basically the developer that decides. Because of this, we do find a big difference across the four different package distributions in the initial version being set for different packages. For Packagist and npm, we see that a majority starts at version 1.0.0. In npm, it's maybe still a small majority, but I guess it will increase over time since the new policy adopted by npm started in 2014. For Cargo and RubyGems, the proportion of newly created packages that start at 1.0 is really minor, simply because the default policy is to set a 0.1.0 version. So this explains why these two ecosystems are mostly in the zero space: because of the default 0.1.0 version that is being set upon creation of a new package. This is actually showing the same thing, but in evolution over time.
So there is nothing special to say about this, except for the fact that in npm, for example, we see that since 2014, when npm changed its policy and set the version of a newly created package to 1.0.0, the proportion of releases in the zero space has actually been decreasing. So we see that the effect of a policy can make a big impact. What we also notice, if you look at the proportion of releases that are still in the zero space, is that it's really abundant: there are lots of releases that are still in the zero space, ranging from 2 out of 10 for Packagist, the green one, up to 9 out of 10 for Cargo, the blue one. So 0.y.z releases are really prevalent everywhere. What can we deduce or recommend from all of this? Basically, if it is the goal of package managers like Cargo and RubyGems to stick better to the semantic versioning policy, or to incite their package maintainers to move out of the zero version space sooner, then they should probably change their release policies, and more in particular the way of specifying the initial version number of a newly created package. Now let's focus on dependencies. We have lots of packages that are in the zero version space and we want to know to which extent other packages depend on such packages. So what we see here is the distribution of the number of dependent packages that depend either on a package still in version zero or on a package that is in the one space, and we see this for the different ecosystems. If you look at these different distributions, it's difficult to see a difference. In fact, we don't really see a difference between depending on a zero version package or a one version package. There are lots of packages that depend on packages in the one space, but there are also lots of packages depending on packages that are still in the zero space. And in fact, from a practical point of view, we tried to find out whether there is a difference between those packages that depend on packages with major version zero and those packages that depend on packages with major version one, and we couldn't really find a big difference. They had a comparable release frequency, a comparable number of dependencies, a comparable number of stars, number of forks and so on. So in practice, developers that use dependent packages don't seem to make a big distinction between depending on a one-plus package or depending on a zero version package. So if we consider zero packages to be problematic, then the maintainers of those packages should be invited to cross the 1.0 barrier as soon as possible, in order for packages that are already in the one space to safely depend on them, without the problem of facing breaking changes or not being able to respect the semantic versioning policy. From all of this, what can we conclude? Well, there are many different things that we can conclude from this presentation and the findings of our analysis. First of all, if you look at semantic versioning, then the rules are a bit confusing, because semantic versioning makes an explicit distinction between its rules for one-plus packages and for packages that are still in the zero space. And moreover, the distinct package managers that we studied all have a different way of following semantic versioning. They are not fully semantic versioning compliant, especially when it concerns packages that are still in the zero version space.
What we also found is that, although people believe that packages that still use a zero version number are not yet mature or production ready, in practice that doesn't seem to be the case. There are lots of packages in the zero version space that are popular and mature, and in terms of dependencies to packages in the zero space or the one space, we couldn't observe any practical difference in how they are used. Still, there appears to be a big artificial, psychological barrier that makes it so that lots of packages never, or only very slowly, traverse the mythical 1.0 barrier. So what can we do to improve upon this? First of all, we believe that there is a need to increase the awareness of the specific versioning rules and dependency constraints being used by a particular package manager. We also believe that there should be more uniformity in these rules across package managers. We also believe that there should be a better alignment between the semantic versioning rules and the rules being used by different package managers. And if there is a difference, this difference should be made more explicit and should be documented more clearly to the package manager community. And finally, the last recommendation is that if package maintainers consider their package to be production ready, they should be incited to move out of the zero space as soon as possible. Okay, so this is the conclusion of my presentation. If you have any questions, I would be delighted to respond to them. Thank you for your attention.
|
When developing open source software end-user applications or reusable software packages, developers depend on software packages distributed through package managers such as npm, Packagist, Cargo, RubyGems. In addition to this, empirical evidence has shown that these package managers adhere to a large extent to semantic versioning principles. Packages that are still in major version zero are considered unstable according to semantic versioning, as some developers consider such packages as immature, still being under initial development. This presentation reports on large-scale empirical evidence on the use of dependencies towards 0.y.z versions in four different software package distributions: Cargo, npm, Packagist and RubyGems. We study to which extent packages get stuck in the zero version space, never crossing the psychological barrier of major version zero. We compare the effect of the policies and practices of package managers on this phenomenon. We do not reveal the results of our findings in this abstract yet, as it would spoil the fun of the presentation. This empirical study builds further on our earlier work, in which we have studied different kinds of dependency management issues in software package distributions. The current empirical evolutionary study is based on recent package management metadata of 1.5 million packages, totaling 12 million package releases and 56 million package dependencies. We analyse dependency version constraints to determine: * to which extent packages depend on 0.y.z releases of other packages; * whether packages with major version zero ever cross the psychological barrier of 1.0.0; * whether there is any reluctance to depend on 0.y.z packages; * whether dependency constraints are more permissive than what semantic versioning dictates for packages in major version zero.
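To make the constraint analysis concrete, here is a small illustrative helper (not the authors' tooling) that expands caret and tilde constraints into version intervals, including the special treatment of 0.y.z versions where a caret is interpreted as allowing only patch-level changes:

```python
# Expand a caret or tilde constraint into a half-open version interval.
# Follows the npm/Cargo-style interpretation described in the talk;
# illustrative only, and ignores pre-release tags and other edge cases.
def constraint_interval(constraint):
    op, version = constraint[0], constraint[1:]
    major, minor, patch = (int(x) for x in version.split("."))
    low = (major, minor, patch)
    if op == "~":                       # ~1.2.3 -> >=1.2.3 <1.3.0
        high = (major, minor + 1, 0)
    elif op == "^":
        if major > 0:                   # ^1.2.3 -> >=1.2.3 <2.0.0 (semver-compliant)
            high = (major + 1, 0, 0)
        else:                           # ^0.2.3 -> >=0.2.3 <0.3.0 (patch updates allowed)
            high = (0, minor + 1, 0)
    else:
        raise ValueError("unsupported constraint")
    return low, high

print(constraint_interval("^1.3.0"))   # ((1, 3, 0), (2, 0, 0))
print(constraint_interval("~1.3.0"))   # ((1, 3, 0), (1, 4, 0))
print(constraint_interval("^0.2.3"))   # ((0, 2, 3), (0, 3, 0)) -- more permissive than strict semver,
                                       # which treats any 0.y.z upgrade as potentially breaking
```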
|
10.5446/52456 (DOI)
|
Hi everyone, I'm Rhys Arkins, and this talk is about early warning signs for open source breakages. For today's topics: I'll introduce you to a tool we built called Renovate Bot, discuss dependency automation and how people use it, discuss a new feature we added called Merge Confidence, and then finally how we're using the data from Merge Confidence to give early warning signs for open source breakages. First of all, Renovate Bot: it's an open source tool for dependency automation. It's about three to four years old, we have a few hundred contributors, and we have over 4,000 stars now. It's got a fair amount of traction and fairly good adoption from the community. Importantly, apart from being open source, it's multi-platform and multi-language. And the ultimate aim is to reduce open source risks and management time by providing a way for people to keep dependencies up to date. For the visual thinkers out there, let's look at the dependency automation workflow. So what is it that Renovate Bot actually does? First of all, an open source maintainer will publish a new release of a package to the registry. The bot will at some point after that detect that there's a new version available for that package and will then raise a pull request to downstream repositories. So this works both with traditional programming packages, like npmjs, but also for things like Ansible scripts, base images in Dockerfiles, Helm charts and things like that as well. The general idea is that if you have a dependency in a software project, you should have it clearly referenced, you should ideally version it, and then you should use Renovate to keep it up to date. Something I think that's important to understand, even for those of us who are deep into dependency management — and some of you may have seen tools like Renovate, Greenkeeper, Dependabot for years now — is that we're really still at the early adopter stage of dependency automation. In my opinion, the majority of projects, especially private projects and commercial software, are definitely not automated to keep dependencies up to date with tools like Renovate. And therefore, I still classify those who use these tools as falling under the early adopter category — well, hopefully we're moving to the early majority at the moment. Early adopters of dependency automation fall into a couple of categories. First of all, there were the rare people who kept up to date through other means already. Some people used to run, say, npm outdated or similar tools, or even just do a manual check once a month or once a quarter. For those people, dependency automation has meant a great saving of time. But the majority of these early adopters probably only keep up to date thanks to automation. In other words, they had ad hoc processes or no processes at all for keeping dependencies up to date, and they might easily get months or years behind without really being aware of it. As for the benefits of keeping dependencies up to date: first of all, bug fixes. This is probably the number one benefit. It means that known problems in your dependencies, which might be affecting your users without you being aware of it, are fixed. And this was actually the motivation for why I started Renovate Bot myself. Getting access to the latest features and APIs is also useful. I mean, in general, people might have updated anyway if they needed a new feature.
But when you do get regular updates, you can see the release notes, be aware of new features, and also potentially get warnings that code you're using is being deprecated in the next major release, as an example. Another little-known benefit of keeping dependencies up to date is accidental vulnerability fixes. By this, I mean that you may apply a patch that fixes a vulnerability before that patch was actually classified as a vulnerability fix. What typically happens is that a maintainer will get notified that there is a vulnerability, or they might find it themselves, and they will patch it with a subtle, discreet message and release that. And then it might be days or even weeks before the official vulnerability notice is put out — we call that disclosure. So it's quite possible to actually get a pull request and merge it for a vulnerability before it's classified as a vulnerability. This is why I call it accidental vulnerability fixes, because you're not actually aware that you're doing it. But for the majority of vulnerabilities, you still face easier vulnerability remediation. A lot of people don't really like the pull requests you get telling you to fix vulnerabilities, and part of that is because if you are a long way behind with your dependencies, it can be quite a stressful situation to have to do a rapid upgrade of many minor feature releases or even possibly major releases, faster than you would be comfortable with. However, if you've been using dependency automation tools to keep up to date, you're going to get much easier vulnerability remediation if you're just a patch or two, or maybe a minor release or two, behind instead of being six months, 12 months or more. So there are benefits to keeping up to date. The question is: why doesn't everybody do it? And that's because there are still downsides. The thing that most people really worry about is the risk of introducing new bugs. If you update to the very latest, then there is some risk that there is an accidental breakage of some sort in that latest version that either hasn't been found yet or at least hasn't been fixed yet. And this is something that really worries people, because if you know that your system is working quite well — maybe with some bugs, but nothing major — then you can feel more reassured just staying with that and staying out of date, rather than take the risk that something very major could stop working. The second downside of keeping up to date, even with an automation tool, is that it still takes more time than doing nothing. And so therefore, a lot of people take the approach of: if it ain't broke, don't fix it. It's just easier to do nothing than it is to keep up to date, even though there are a lot of benefits. So to try to address this, we introduced a feature called Merge Confidence. Merge Confidence uses crowd data to derive a confidence level for each dependency update. That crowd data is based upon test results of pull requests, as well as the adoption rate, i.e. accepted pull requests, versus the rollback rate. So just think about when you receive a pull request to update dependencies today. You run your tests, but even if you have very good test coverage, you might still have a concern that your tests didn't cover everything. In particular, nobody tests every single piece of functionality of all of their dependencies. It's quite common to mock dependencies in tests, in fact.
So therefore, you might know that you're not actually fully testing the functionality of certain dependencies, especially ones that might communicate with a file system or a network, where you want to isolate and mock those in your tests. Therefore, there's a risk that things might pass your tests but have a problem in production. Merge Confidence addresses this, because when we get a large enough volume of people, the chance that nobody's tests failed because of a breaking change — or an accidentally breaking change — is very low. Further, the adoption rate is really helpful. People that need a new release, or maybe like to stay more ahead than the average, will be the first to adopt a new version. But they will also be the ones who discover it if it has one of these difficult bugs that is only really discovered with actual use. And therefore, it is really useful to understand how many people have adopted a new release and also whether people have been rolling back. If people update to a new release and then revert that change, that's probably the strongest signal we can have about a dependency update. And of course, it's a negative signal — that's what we call a low confidence update. So Merge Confidence aims to gather that data, and a few other things such as the age of the release, and to derive a confidence level. We start with a neutral confidence. If something seems potentially wrong with the update, you'll be given a low confidence, which tells people to either avoid it or be careful. We have a high confidence once we're pretty sure that there's nothing wrong with it, and that's generally based upon the test results. And then a very high confidence is a level you'll reach maybe after three or four weeks with a 20 or 30% minimum adoption of that release. This means that the chance that there is something hidden wrong with a release that only you would find is extremely low. The goal of Merge Confidence is really to tip the balance between those benefits and the downsides I mentioned earlier. So look at the two downsides: risk and time. The risk we talked about before was that you adopt a new version and you hit a problem that no one else has hit before. So waiting for a high confidence score in releases — meaning that everybody else has tested it and it exhibited normal test results — is a very good way of avoiding catastrophic problems. Similarly, if you're more risk averse, then waiting also for some adoption is a great way to lower that risk. Lower risk also means less time taken. The time taken in reviewing pull requests is often checking release notes, looking if something looks wrong, trying to estimate the chances that this might break something. So if you're not so concerned anymore about something being broken, because enough people have already tested it before you and it's passed your own tests, it means less time for reviewing pull requests. The other thing is that we can also have other ways of using this confidence to reduce the amount of time we spend on it. Let's take a look at an example of a broken patch update that's caught by Merge Confidence. In this example, this was just a patch release, a single patch of PostCSS. And the key point here to look at is the pass rate. This pass rate was 75%, which is much lower than the typical 97, 98 or 100% that we would expect from a patch release.
The reason why this example is exciting to me is because without Merge Confidence, three quarters of the people that got this pull request might have thought that it looked fine. And maybe it could have been fine for some of those, but definitely not for all of them. But thanks to Merge Confidence, those same 75% whose tests had passed would be unlikely to merge it, because of this 75% pass rate and the low confidence that we assigned it. And it was true: release 8.1.5 was actually a partly broken release that was fixed quickly after that. If you look ahead to another patch release in the same stream, we can see what it typically looks like. This is still a patch, but it was 8.1.4 to 8.1.9. In this case, 98% passing, high confidence, and you can also see that the adoption is there as well. So these simple badges just help people be more aware of whether something is good or not. First of all, if it passed your tests but it failed for many others, that's a really helpful thing to know — you should maybe be hesitant to merge it, unless of course you're very confident in those differences. But also importantly, if it fails your tests, there's often a bit of head scratching to try to work out: well, is it just me, or was it a flaky test, or something like that? So if you see that it fails your tests, but it's also failed for another 10% or more, that'd be a good sign that it's not just you — it's time to take a pass on that release and wait for a new one.
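To make the idea concrete, here is an illustrative scoring function with invented thresholds that roughly mirror the levels described above; it is not Renovate's actual Merge Confidence algorithm.

```python
# Illustrative confidence levels from crowd signals; thresholds are made up
# for the example and are not Renovate's real Merge Confidence rules.
def merge_confidence(pass_rate, adoption, age_days, rollback_rate):
    """pass_rate, adoption and rollback_rate are fractions in [0, 1]; age_days since release."""
    if rollback_rate > 0.05 or pass_rate < 0.90:
        return "low"        # unusual failures or visible reverts: avoid it or be careful
    if pass_rate >= 0.98 and adoption >= 0.20 and age_days >= 21:
        return "very high"  # weeks old, widely adopted, clean crowd test record
    if pass_rate >= 0.98:
        return "high"       # clean crowd test results so far
    return "neutral"        # not enough signal yet

print(merge_confidence(pass_rate=0.75, adoption=0.02, age_days=1, rollback_rate=0.0))   # low
print(merge_confidence(pass_rate=0.99, adoption=0.30, age_days=28, rollback_rate=0.0))  # very high
```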
It's going to take low risk, hands free updates for us to be able to convert that majority over to dependency automation. Let's take a look at what I call accidental open source breakages today. There are two main types of these accidental breakages. The first of all is some type of mistake that causes existing functionality to just stop working. This is generally the easiest one to diagnose because anyone using that functionality, it simply won't work. This is usually also the easiest one to fix because it's very clear that something should still be working like it was in a previous release. The second type of accidental breakage is a little bit harder to detect and harder to remediate. This comes from an oversight generally. So an oversight changes behavior in a package that consumers were depending upon. Example of this might be that you could be having an API or a library that returns a 403 response, somebody points out that 403 is incorrect. It should be a 401 according to the spec. So you fix the library in return of 401 instead of a 403. Unfortunately, if there are downstream users of that library who are relying on 403 and don't have code to handle 401, then that's going to be essentially a breaking change for them. This technically could be declared as a breaking change and a major update if you're in a semantic versioning ecosystem. But that's typically not done. So this is an example of where you might find someone with tests stop working and they wonder if it's just them because nothing seems particularly wrong with what's being declared in the release. Another key point about accidental open source breakages is that the majority of open source users do not know how or are just simply not interested in reporting broken packages upstream. So when you do see as a maintainer, you see people raise an issue that says, hey, there seems to be a change in this release and this change, that's a pretty rare user. That's not your common user. Therefore let's talk about broken package upstream reporting, i.e. how do maintainers know when something they've released has been broken? The first thing we plan to do is to create a URL per package to collate merge confidence scores and allow drilling down, linking to public pull requests and things like that. That means that somebody who's interested in knowing about the past performance of a library can easily have one location to look at and that concludes the maintainer of the library as well. For more advanced functionality, we're looking at automated alerts for participating maintainers. By participating, I mean this is something people would have to opt into. We wouldn't just start doing this without invitation. That could include email alerts if you want to be alerted, if a package that you have an ownership in has a release with low merge confidence. Also it could be automated creation of issues. So for example, we could automatically create an issue noting a low confidence in a certain release and again giving links to examples of failed tests and public repositories and things like that. This type of approach will be pretty useful for maintainers to get a much more immediate feedback when a package that's been released is showing signs of low confidence. It could also mean that people who are looking to report upstream can benefit from this issue being created as well. Knowing immediately when a released package has an accidental breaking change is really beneficial. 
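To illustrate the 403-versus-401 example above, here is a hedged, hypothetical sketch of downstream code that such a "correct" fix silently breaks. The endpoint and error handling are invented for illustration; the point is only that callers can depend on behavior a maintainer considers a bug.

    // Hypothetical downstream code written against the old behaviour.
    async function fetchReport(url, token) {
      const res = await fetch(url, { headers: { Authorization: token } });
      if (res.status === 403) {
        // The caller only ever handled 403, because that is what the
        // dependency used to return for bad credentials.
        throw new Error('access denied, please re-authenticate');
      }
      // Once the dependency is "fixed" to return 401 instead, this branch is
      // never taken and callers hit an unhandled failure mode: a breaking
      // change in practice, even though it shipped as a patch release.
      return res.json();
    }
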
But it would be even more beneficial if we could avoid releasing breaking changes in the first place, and we have an idea for that as well. The goal here is to prevent accidental breaking changes from being released. The way this would work is that downstream users could opt into silent testing of pre-releases. Renovate Bot would create branch-only commits, without pull requests, to trigger CI tests of pre-releases. Unless you were watching for new branches or watching your CI results, you wouldn't really notice these branches being created and tested, and the results could then be fed back upstream the same way as typical Merge Confidence results. Looking at the benefits to the open source consumer of this pre-release testing: once you have opted into pre-release testing for a package whose maintainer is paying attention to the results, the chance that a GA release is made with an accidental breaking change that impacts you becomes almost zero. The only cost is CI minutes; there is essentially no disruption from your CI silently running tests on pre-releases and that feedback going back to the open source maintainers. So when a package is really critical to your project and you opt in, you can be fairly confident that future releases are not going to break you, because you were part of the pre-release testing group. For open source maintainers there are also great benefits. First of all, it gives real test feedback from real applications. The ultimate end users of open source are mostly private repositories, so even when maintainers test against downstream open source consumers of their library, that doesn't give the full picture compared to testing against private repositories. It gives the benefit of real-user testing and feedback prior to release, and it means that particularly risky new code can get some simple testing. We wouldn't recommend that every single release is pre-release tested, unless releases are quite spread apart, but if you're releasing a feature or a fix that touches a lot of code you're uncertain about, you could put out a pre-release and wait for the test results to come back before publishing the GA version. Pre-release testing could even be used for smoke testing of deprecated features, or for measuring intentionally breaking changes: you suspect a change might break some users but you're not sure how many, so you test against the opt-in group and see how many of their tests fail if you remove a feature you think very few people are using. And again, for the maintainer the only cost is the time taken to publish a pre-release and look at the results; there is no cost in CI minutes or anything like that. The final thing we're looking at goes even beyond that, and that is codemods for breaking changes. A library author effectively publishes two types of changes. First there is the non-breaking change, where we wish to reduce any accidental breakage with the help of pre-release testing. The second category is the intentional breaking change, such as a major release with a documented change. Often this is for simple reasons like deprecating old versions of Node.js, but there are also cases where functions or parameters get renamed and things like that.
In many of those cases it would be possible to publish codemods and transforms to help downstream users make those changes. If a breaking change is published with a complete transform to migrate code, it doesn't really behave as breaking anymore, and the Angular project is a great example of that: they have a tool which does this for a lot of updates. With pre-release testing, these transforms could also be validated in the same kind of way. If a library author wants to check that a breaking change can be transformed to reduce the breakage, that would be another use of pre-release testing combined with Merge Confidence. Thank you very much for tuning in. We've allocated some time for questions and answers, so if you haven't asked your question already, please do so; I look forward to hearing your feedback.
|
Despite best intentions, Open Source releases with regression errors are published every day. In the best case scenario, a downstream user detects it early thanks to good tests, files an issue, and the maintainer can fix it before too many people have upgraded. Other scenarios involve various degrees of brokenness and games of "is it broken for everyone or just me?". Renovate Bot is an open source dependency automation tool but which also is run as a free app on github.com, where it is installed into almost 200,000 repositories. A feature called "Merge Confidence" helps downstream users know if a release is likely good or not based on automatically sourced crowd data (tests, deployments, rollbacks). Now we are planning to turn the focus upstream to help open source maintainers get an early indication of accidentally breaking releases and even provide a mechanism for downstream users to opt into silent pre-release testing so that major features can be smoke tested downstream before release.
|
10.5446/52461 (DOI)
|
Welcome to the Tool the Docs devroom at FOSDEM. My name is Ralf Müller and I work for DB Systel, the digital partner of Deutsche Bahn. In the following minutes I will take you on a journey to the limits of the docs-as-code approach with the open source tool called docToolchain. Please follow me. Let's take a look at docToolchain and see why it might be useful for you. When you use the docs-as-code approach you need a tool which converts your markup, in this case AsciiDoc, into a nicely rendered representation. So you start out with AsciiDoctor, the converter for AsciiDoc. It reads and processes your documentation, which is written in plain text. Since I am a software architect I use architecture documentation as the example, and for that I like to use arc42, which is a great template for software architecture. AsciiDoctor takes this document and creates HTML5 output out of the box. Since we need some diagrams for architecture documentation, we add the diagram plugin in order to make use of PlantUML. And to be able to send the generated docs around in the old-fashioned way, as email, we also use the PDF plugin to create PDF output. At this point we have already had to configure AsciiDoctor as the converter plus two additional plugins, and it is quite likely that we run into problems with incompatible versions and the other issues you encounter when you configure such a toolchain. That's why, some years ago, I took my setup and bundled it as an open source tool called docToolchain. And since we talk about docs-as-code, I really started to treat my docs as code and tried to automate as much as I can. The result is an open source collection of helpful automation scripts which really push the docs-as-code approach to its limits. The open source community donated lots of additional tasks, and what you see here is an overview of nearly all of the functionality provided by docToolchain. On the right side you can see lots of output formats; I guess the publishToConfluence task is the most valued one, but the others are also quite useful. On the left side you can see lots of input formats which are converted to images and AsciiDoc files so that they can easily be included in your documentation. I will now show you some of these features in a live demonstration. Most of the features on this diagram are already in everyday use, but to make the talk more interesting I will also show features which are brand new or even still under development. So let's switch to the console. We start out with an empty folder, and I will show you how to install docToolchain in a new way. If you take a look at the documentation of docToolchain you will notice that there are several ways to install it: as a command line tool, as a Gradle script, or merged into a Gradle project. I will now show you a new way, which goes like this: I do a wget against doctoolchain.github.io to fetch the docToolchain wrapper script. It goes to GitHub and fetches a really small script, only about 150 lines of code, and this script is quite helpful. I do a chmod on it to make it executable. What it does is check whether docToolchain is installed as a command line tool or whether you have Docker up and running, and then it decides which version to use. So if you have Docker up and running it will use a Docker image; if you already have docToolchain installed it will use that command line installation.
And if no doctoolchain is installed at all and Docker is not available like on a build system then it will download doctoolchain and extract it to the user folder and use this as a command line tool. I already prepared the installation. So now I can do a doctoolchain wrapper tasks group doctoolchain to see what kind of tasks we have available. As you can see it tells me that Docker is available but the home folder of doctoolchain also exists so it will now use the home folder version. And it noticed that there is no doctoolchain config file in this folder and it asks me to copy over a default configuration and say yes that's quite nice. And here you can see the first result of the first run. You see all the doctoolchain tasks. You can see lots of export tasks which are the ones shown in the diagram to the left which export data from other tools. And you can see lots of generate tasks which were the ones on the right which generate some output. As I already said I'm a solution architect and I like to use the arc 42 template. So I now need this template in my project to work with. Therefore let's ask doctoolchain to download the arc 42 template. So doctoolchain wrapper and we use the download template tasks this time. Again Gradle spins up, starts the download template task and now it notices that the template is available in four different languages. Let's choose English and it's available with help in each section or without help text. So I always choose a with help version and now it downloads template directly from GitHub and installs it in our folder. It says it added the template to the doctoolchain config. Great, we don't have to do anything there. And it also tells us that we can use the generate hml or generate pdf task to convert the template. But first let's take a look at the file system and here you can see we still have our configuration file as a wrapper and within source docs it installed the arc 42 template. Why source docs? That's a default configuration. As I'm a Java developer I'm used to put my source code in source main Java or source main Groovy and this way the docs live right beside the code. That's what we want to do. We want to do doc test code. And here you can see we have one main document which includes all those other chapters. It's the same what you do with your code. You just splice it up into single small chunks so that it's easier to maintain. And we also have an image folder. So let's follow the instructions and call the generate hml task. Again Gradle spins up and in the future we plan to move those command line tasks away from Gradle to have a little bit more speed. It's great to use Gradle when you have when you use for your own project Gradle as build system and then you can just include those scripts in your own project. Now we use the command line tool. It says it's converted and let's start build hml 5 arc 42 folder arc 42 html. And now our browser starts up. One moment. And here you go. You can see the template as ask it doc render it html the way we are used to it. And here are small question marks. We injected some CSS to hide away the help system so that you can move your cursor over the question mark to see the content of the help system. And back to our shell. Now we can also do the generate PDF. And as expected we now start Gradle up again. The PDF is generated. And we can also take a look at the PDF in just a few seconds. It says that it didn't find custom PDF. No problem. It rewards to the default team. 
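For readers following along, here is a hedged sketch of the command sequence the demo walks through up to this point. The wrapper file name (dtcw) and the camel-cased task names are my recollection of current docToolchain releases rather than a verbatim capture of the screen, so treat them as assumptions and check the docToolchain documentation for your version.

    # Fetch the ~150-line wrapper script and make it executable
    wget https://doctoolchain.github.io/dtcw && chmod +x dtcw

    # List the available docToolchain tasks; the wrapper picks a local
    # install, a Docker image, or a downloaded copy in the home folder
    ./dtcw tasks --group doctoolchain

    # Download the arc42 template into src/docs, then render it
    ./dtcw downloadTemplate
    ./dtcw generateHTML     # -> build/html5/arc42/arc42.html
    ./dtcw generatePDF      # -> build/pdf/arc42/arc42.pdf
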
But you can also install a custom PDF theme in your own project folder. Now let's open the build output again: we have a PDF folder with arc42/arc42.pdf, and here you are, the arc42 architecture documentation template as PDF. This time you see the content of the help system rendered directly in the PDF. The output you've seen so far is pretty much standard when you work with AsciiDoc. But what happens if you not only want to produce one document or one HTML file, but have a whole bunch of documentation which you want to publish as a small website, a microsite? Even for that case we have something in docToolchain: instead of generateHTML or generatePDF we can run generateSite, and I also start the site preview in addition. This takes a site template, which is still under development, so what you will see in a moment is in German. It creates the microsite with jBake in the background and starts a small server on port 8046, and we can now take a look at this site. As I said, the template is still in German, but the arc42 template we downloaded from GitHub is included, and we have a full-blown website which you can deploy to any static site server. What you see here is a standard Twitter Bootstrap theme, and it can easily be customized, as you can see here with our DB Systel corporate identity. Now that you've seen the basic features of docToolchain, let's move on to some more advanced features. As an architect you often work with tabular data, like this requirements table. As you might already know, it's quite nice to work with tables in AsciiDoc, because you don't have to draw them as ASCII art: you can specify each cell on its own line. But sometimes a simple table like this is not enough for your documentation. Maybe you need a more complex table like this one, where you have column spans, row spans, and where you also need to represent the color of a cell or other features, for instance vertical and horizontal alignment. Creating those tables directly in AsciiDoc is a little bit hard. So what we did in docToolchain is create a task called exportExcel, and it does exactly what it says: it looks at the src/docs folder, checks whether it can find an Excel sheet, and exports it. What it prints out here are the names of the sheets it finds in that Excel file, so it can export several sheets. If we now go back to our IDE, it has created a folder in src/docs which is called excel; let's reload this from disk. Here we have a folder with the same name as the Excel file, and within this folder we find all the sheets from the Excel file: one comma-separated file per sheet, which we could use to import simpler tables, but also an AsciiDoc file which is a representation of the Excel sheet with all the features we specified. We have column spans, row spans, vertical alignment and horizontal alignment, and this way you can easily maintain tabular data in Excel outside of your normal AsciiDoc documentation, export it as AsciiDoc, and include it again in your documentation. Some people ask at this point whether the exported file should be kept under source control or moved to the build folder. We decided to keep it under source control, because the textual data is easier to diff than the Excel sheet.
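Here is a small AsciiDoc sketch of the two approaches just described: a simple table written cell-per-line by hand, and a complex table maintained in Excel and pulled in via an include. The include path is illustrative only; the actual folder and file names produced by exportExcel depend on your Excel file and sheet names.

    // A simple requirements table, one cell per line
    .Requirements
    [options="header", cols="1,3,1"]
    |===
    | ID | Requirement | Priority

    | R-1
    | The build must export Excel sheets to AsciiDoc
    | high
    |===

    // Complex tables (col/row spans, alignment) are maintained in Excel
    // and included from the exported AsciiDoc representation:
    include::excel/Requirements/Sheet1.adoc[]
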
Now that we are in the IDE, another feature I would like to show you is diagrams. I guess you already know PlantUML, so that's a standard feature, but what if you need more advanced diagrams? draw.io, which has been renamed to diagrams.net, is a nice tool with which you can draw any kind of diagram, but normally you have to switch tools. Now we have a plugin for that: there's a plugin for Visual Studio Code and a plugin for IntelliJ. In IntelliJ you can just create an image macro, let's call it demo.drawio.svg, and as you can see it complains that this file does not exist. I just press Alt+Enter, choose "create missing file", and it creates an empty file and launches a local version of draw.io, or rather diagrams.net. I can draw a simple diagram here, maybe add an arrow, and when I switch back you can see it's already included in my AsciiDoc. I don't even have to save, it has auto-save. What's really nice about this is that you only have one SVG file; you don't have a separate XML data file for your image, because diagrams.net stores the source of the diagram in the metadata of the SVG (it also works with PNG). So you have one file with two modes: it can be included as an image in your AsciiDoc, and it can be reopened and edited with diagrams.net. The last feature I would like to show you in this short talk is the publishToConfluence task. I think we've all been in the situation where we are on a project and the team decided some time ago to use Confluence as the documentation tool, but we want to make use of the docs-as-code approach. With the publishToConfluence task you can do both: use the docs-as-code approach and publish the results to Confluence. I've already created an empty Confluence space for this, with no pages in it. We now have to go over to the docToolchain config and scroll down to the Confluence section, and as you can see there's lots of documentation in there to make the configuration easy for you. We have to specify an input file; this task takes the HTML output of the generateHTML task, so we specify the file from the build folder, but the task depends on generateHTML, so we can just execute publishToConfluence directly. We have to specify the API endpoint of our Confluence server, the space key, which is FOSDEM here, and we also have to specify credentials. Don't worry, by the time you see this video this API token is already invalid. Now that this is configured, let's move over to the console and execute the publishToConfluence task. As you can see, Gradle starts up again; first it executes generateHTML, and after that it moves on to publishToConfluence. Here it splits the HTML file up by its headlines to create smaller subpages. Smaller subpages are nice because, for instance, if you want to use the comment feature on the pages, you don't have to write in your comment "hey, somewhere up there I've seen something"; with smaller pages, the comment feature becomes a much better feedback system. And as you can see here, there are two chapters in the standard arc42 template which contain images, and those images are published as attachments as well.
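For orientation, here is a hedged sketch of what the Confluence section of the docToolchain configuration roughly looks like. The key names below are my recollection of the generated Groovy config file and may differ between docToolchain versions, so treat them as assumptions and rely on the comments in your own generated config.

    // Confluence section of the docToolchain config (Groovy), sketched from memory
    confluence = [:]
    confluence.with {
        input = [
            [ file: "build/html5/arc42/arc42.html" ]   // output of generateHTML
        ]
        api      = 'https://example.atlassian.net/wiki/rest/api/'  // Confluence REST endpoint
        spaceKey = 'FOSDEM'                                        // target space
        // credentials are typically a base64-encoded "user:apiToken" pair;
        // never commit a real token, as the speaker notes
        credentials = "user@example.org:API_TOKEN".bytes.encodeBase64().toString()
    }
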
So let's go back to our web page and reload this Confluence space. The home page is still the same, no home page content, but as you can see on the left we now have an arc42 page with subpages for our chapters. If we move on to the building block view, where there's an image, you see the table of contents up here and the content itself down here. Since we used the template with the help system, we have some "click here to expand" sections where we can make the help text visible, and here is also the image, attached to the page. This task doesn't just take all the content and publish it: it first checks whether the content it wants to publish has really changed, and only publishes those pages which have changed. This makes the notification feature of Confluence behave more nicely, because otherwise people would get notifications for all pages on every publish; now, if we only change one of the chapters, only one notification will be sent out. That's it for now. I hope I could generate some interest in docToolchain. You will find more information on doctoolchain.github.io, and you might want to follow me on Twitter. Now let's do some Q&A.
|
The combination of AsciiDoc and Gradle should be well known by now. But what if you want to go beyond? Have you ever tried to include UML diagrams the easy way, convert Excel to AsciiDoc or export your results to Confluence? This talk shows you what you can really do if you treat your docs as code and apply some tricks you only did to your code before. Forget about copy & paste your images to your documentation – let the build do it! Create different docs for different stakeholders and even run automated tests on your docs! In this talk, Ralf will give an short overview of the open source documentation tool chain "docToolchain". He will open up the Docs-as-Code solution space with some new and fresh ideas and show where the Docs-as-Code approach is heading to. As the maintainer of the open source tool called "docToolchain", Ralf tries to push the Docs-as-Code approach further to its limits. Every new idea finds its place in the docToolchain. It is by now a collection of quite helpful tasks of your every day documentation needs.
|
10.5446/52462 (DOI)
|
Hello everyone, I am Divya and today I am going to be speaking about taming the source. That specifically focuses on my experiments or rather my experiences with the Torqueous Horus as a static site generator tool. Now I am sure given the number of virtual conferences that we have all attended so far, we pretty much echo the same sentiment that virtual conferences don't just cut it when it comes to the human interaction and to the level of engagement that we have at in-person conferences. So in order to switch things up a little bit, I have sort of come up with a strategy and I hope you folks will play a lot. Now you may ask why I am doing this but I have realized that even with the best efforts on the part of the organizer or the speaker as an attendee it is very difficult for you or for anyone really to actually be sitting at the other end just listening to a person go on and on about a particular project even if it's extremely interesting. So in lieu of that I think you all should be able to see a screen I mean a slide on your screen with a QR code. What I'd like for you all to do is to actually put down your mobile phones and download any QR scan or QR code scanner app. Sorry. Throughout the presentation I have sort of made this the standard that the QR code scanner slide ends up looking similar to this one. So whenever you see this you have to scan the code and see for yourself what I'd like for you to see. So in this case this was supposed to be my introductory slide and the website that leads you to talks more about me. Of course since this was an example I will not actually leave you to read through the introductory bits. I'm Divya and I am a senior systems administrator with HSPC. Alongside my job I love contributing to open source projects like Kubernetes, litmus chaos and I also had the extreme good fortune of working with Sun on the previous year's Google season of dogs. Now going to these open source contributions and the Google season of dogs I was exposed to docuservice as a static side generator tool and I'm hoping you guys will play along with the QR code bit because I really want for you to be engaged in this presentation and to see for yourself the results of whatever I have been working on as projects instead of actually pasting them on the slides in front of you. So without further ado let's talk about some of the cool stuff that I've had the good fortune of working on and they happen to be on the docuservice. So here's the QR code again. Now the first project that I'm going to be talking about is the documentation for Ruseau which is Sun's Exascale data management system. Now what is Ruseau, how it got involved, I mean how I got involved with the project and what is the end result that is what will be visible to you if you scan the QR code. I will be talking about the involvement bit. So Ruseau as I mentioned is an Exascale data management system. It was developed at Sun but it's used across scientific communities irrespective of whether or not they are based at Sun. So it's written in Python and Flask and as part of Google's season of talks 2020 I had the good fortune of working on the documentation for this data management system. More details about the project and what were the expected deliverables can be found on the link here. So if you wish to check that out and if you are interested in actually understanding what was our end goal at the start of Google season of talks 2020 you can visit this link after the presentation is done. 
So why did we choose Docusaurus when there was already a documentation system in place? One of the major reasons was the introduction of JSX modules in Rucio in the couple of versions before I joined through Google Season of Docs: a couple of JSX modules had been introduced as part of Google Summer of Code, and the documentation obviously required support for those JSX modules. Now, in the Docusaurus version we are currently on there is no JSX support, let me clarify that, but the end goal is to migrate to version 2, which does have this feature; it was just not at a stable version when we started the project. So our short-term goal is to move to version 2, where we have JSX support for the documentation. Another thing we wanted to achieve was to remove the source code dependency. Before I joined the project, the documentation was heavily dependent on the Rucio code base. Most of it, around 75%, was handwritten, but 25% derived the documentation from the code base, that is, the Python and Flask code base for Rucio. What typically used to happen was that when there were changes in the documentation, or new modules were introduced, the code base and the documentation had to be kept in sync by recompiling the entire tree structure for both of them. That was an additional task for the administrator or maintainer of the repository, and it is something we wanted to eliminate by separating the code base of the project from the documentation repository. After Google Season of Docs we have achieved that, and it also contributes to ease of maintenance, because a lot of the documentation is now in Markdown where it previously was in reStructuredText; RST is not difficult, but it is not something everyone is familiar with, as opposed to Markdown, which is a more common format, so to say. So that contributes to ease of maintenance, and setting up a GitHub workflow is also very simple, which was another factor in choosing Docusaurus. The previous documentation was built on Sphinx, and as I said, the search feature there was very rudimentary and not up to the mark. That is in no way a criticism, just something I noticed when comparing the different tools we could implement the documentation on. Docusaurus has by far the most advanced site-wide search implementation, powered by Algolia search, and from an administrator or maintainer perspective there's not much a person has to do to implement or maintain it once you specify the API key in the configuration file. So those were the reasons we chose Docusaurus. Obviously the project wasn't without its challenges, because which project is, really? The original documentation, as I mentioned, was partly derived from source code, and the static site generator previously used was Sphinx. One of the major problems was that a lot of the documentation was handwritten, and there is that split where you have handwritten documentation on the one hand and documentation derived from APIs or from source code on the other.
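As a concrete illustration of the Algolia-powered search mentioned above, here is a minimal sketch in the Docusaurus v2 style (docusaurus.config.js). The field names follow the documented themeConfig.algolia options as I understand them; the values are placeholders and nothing here is taken from the Rucio site itself.

    // docusaurus.config.js (hedged sketch)
    module.exports = {
      themeConfig: {
        algolia: {
          apiKey: 'YOUR_PUBLIC_SEARCH_ONLY_KEY',  // search-only key, safe to commit
          indexName: 'rucio-docs',                // hypothetical index name
        },
      },
    };

Once this is in place, the maintainer has little to do beyond keeping the key current, which matches the low-maintenance point made in the talk.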
With Sphinx as the static site generator, how do I put it, your documentation for this particular project was residing in the same repository as your code base, and that was an overhead for the folks maintaining it, since documentation was not supposed to be the top priority on their minds. And converting that documentation to Markdown was a big challenge, because everything was in RST, and as I mentioned before, reStructuredText was not something I was familiar with prior to Google Season of Docs; it's not an obscure format, but I personally wasn't familiar with it, and there are no good RST-to-Markdown parsers available. So converting the handwritten documentation to Markdown was a big challenge, and deciding what should be converted to Markdown and what should retain its existing format was also one of the challenges we addressed. The next couple of challenges are relatively minor, because they have already been addressed or are being addressed in upcoming releases. Nested sidebars were difficult to implement in version 1.x, whichever version you speak of; this has been addressed in version 2, and since our ultimate goal is to move to version 2, we don't foresee an issue with that specifically. Static file inclusion is something you will see as a common theme in later slides as well: customizing your static file directory is something I think should be a feature of documentation generators, so that is one thing with potential to be worked on, and it is being worked on and has progressed quite a bit over the past couple of months, so it's another feature I'm excited to see. Now we come to another QR code slide, and this is my second project on Docusaurus. It is obviously a little different from the previous one, and it involves an open source, cloud-native chaos engineering tool named LitmusChaos. If you scan this, you will be led to the LitmusChaos website, which runs on Docusaurus version 1. We are migrating to version 2 in the later half of the year, but currently we are fixing the last couple of things that you always have to fix after moving to the next version. So what you will see is Docusaurus version 1, but I am going to talk about the migration and our experience moving to version 2. What is LitmusChaos and how did I get involved? As aforementioned, I am the SIG Docs lead for LitmusChaos; it's an open source project, and contributions are how I got involved with it. LitmusChaos is a cloud-native chaos engineering tool, and for the uninitiated, chaos engineering is a discipline wherein you deliberately introduce chaos in order to improve the resiliency and availability of the applications you deliver to customers. For more details around the project and its documentation you can visit the GitHub repo mentioned on this slide, and for more details about chaos engineering, which honestly is a separate topic to delve into by itself, you can visit the website on the screen, principlesofchaos.org. When I joined LitmusChaos as a contributor it was obviously on Docusaurus already; that is not a collective decision I was part of. But recently we were exploring other options to see whether there was a better fit for our documentation, and we chose to migrate from version 1 to version 2 instead of going to another
tool. The slide details some of the reasons why that choice was made. One, as I mentioned in the Rucio section as well, was ease of maintenance. I genuinely believe that Markdown is a simpler format to maintain; a lot of folks might disagree, I understand, but I come from a place where I am also looking at engaging contributors in the community, and Markdown honestly is a lower entry barrier for them as well. Docusaurus also has easy integration with many of the hosting platforms, so it's easier for us, the folks behind the scenes, to set it all up and to set up the GitHub workflows. As mentioned, new contributors are something we are actively looking at, and as the project grows we are going to have folks from countries where English is probably not even the primary language. It feels very non-inclusive not to have multi-language support at this point, because it's probably more comfortable for folks to read things in their native language and gain a better understanding of the product than having all of it written in English. So that's one thing. And my personal favourite, which may seem like a very silly reason for continuing to use Docusaurus and migrating to version 2, is the dark/light theme switch. In the current version, the one you'll probably see on your screens after scanning the QR code, we are at version 1, so this is not there yet. As an IT professional I am very used to having a dark theme on almost everything I use, whether it be my Instagram or my Twitter, but also the things I use as part of my profession: VS Code, GitHub, all of it is on a dark theme. So when documentation that I am a part of building has that feature, I am pretty excited to actually use it. That's one of the last reasons we continue to use Docusaurus, and honestly, when I spoke about the dark/light theme switch, it was a widely requested feature from the community as well, so I don't think I'm alone in that regard. Now, like the previous project, this one also had major challenges. Migration was the big activity we undertook in the last quarter of 2020, and it was one of the major challenges we faced with Docusaurus as a documentation team, because until then more or less nothing in Docusaurus had been a pain point, except the nested sidebars that I'll address next. The reason migration was particularly a challenge was the lack of adequate documentation. Don't get me wrong, it is documented, but I feel it doesn't have sufficient coverage of the various features that change in version 2 and what you can expect. There are probably a couple of pages, and every specific use case is not addressed; I understand, coming from a product perspective, that it's not very feasible to do so. But when you are using the product, for example the last bit, where content rendering changes across versions, that was something we struggled with: we thought we were doing something wrong when content from previous versions didn't show up in the new version, but that was just how it now behaves. So that's something I would like the documentation as a whole to address, because I understand that every specific use case cannot be listed, but having a sort of knowledge base or
a sort of repository wherein you have the commonly listed issues would be a good thing because I don't want to be going to GitHub and filing a PR every time I sort of face a roadblock during migration but everything said and done it was a lot easier once we got to version 2 because there were a lot of cool features as I mentioned before and the nested sidebars also came in so it was a win-win for everyone but migration was a bit of a challenge for us as our documentation maintainers and leads so features I'm excited to see in future versions most of these are already covered so I'll quickly run through them so the static assets directory customization is something I'm really rooting for because I am generally looking forward to actually customizing the static assets directory in both of my projects since it's it's it's a very outdated concept to have on the static folder under the static directory what if I actually want to change it so that's something that I'm looking forward to and a significant amount of work has already been you know done in this regard multiple themes again it's a good to have I wouldn't say it's absolutely necessary but it's good to have and the reason for this being that you are allowed to customize your documentation the way you want it with your own CSS files but I as a documentation lead or as a maintainer for documentation want to be able to choose themes and not actually write them from scratch and expand that little effort so that I want to be able to choose that and happy to report that this work being a progress I'll be slowly in this area as well now sidebars as I mentioned before has been a constant sort of grouse that I've had it's either that the sidebars are not nested or this you know implementation is difficult so auto generated sidebars is something I am really looking forward to because a couple of other open source projects that I work on have sidebars that are auto generated and honestly somebody should not actually have to go out of populate the sidebar configuration file unless there's an explicit requirement so I'll generate sidebars something I'm looking forward to and better migration CLI so a basis my experience in the litmus chaos project I hope for a better migration CLI that or at least better documentation in this regard so as to address at least some of the pain points that folks go through while migration or even you know having them listed somewhere in the documentation for migration will help because if you are just referencing them in one-liners it's not really helpful as a maintainer or as a lead who's undertaking this exercise to actually have just a one-liner about it with no solution so that that is one thing that I would really really appreciate if it was actually improved now that being said I'm at the end of my presentation and I be now open to taking questions so please please if you have any doubts do let me know and thank you so much for all your patience and time and I hope you all stay safe and see you at the end sign
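To make the wishlist in this talk concrete: several of those items did land in later Docusaurus v2 releases. The sketch below shows roughly what they look like in docusaurus.config.js and sidebars.js; the option names are taken from the Docusaurus v2 documentation as I recall it and were not part of the talk, so verify them against the version you actually run.

    // docusaurus.config.js (hedged sketch of options added in later v2 releases)
    module.exports = {
      i18n: { defaultLocale: 'en', locales: ['en', 'hi'] },      // multi-language support
      staticDirectories: ['static', 'assets'],                   // customizable static asset dirs
      themeConfig: {
        colorMode: { defaultMode: 'light', respectPrefersColorScheme: true }, // dark/light switch
      },
    };

    // sidebars.js: sidebar auto-generated from the docs folder structure
    module.exports = {
      docs: [{ type: 'autogenerated', dirName: '.' }],
    };
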
|
Originally having been developed for the open source projects at Facebook, Docusaurus now serves as an easy-to-use tool across many open-source projects. Having worked with it on the Google Season of Docs for transforming Rucio's documentation & as the SIG DOCS lead for LitmusChaos, this talk is an exposition of my experiences with the static site generator. I hope to benefit others looking at migrating to/using this tool with the contents.
|
10.5446/52465 (DOI)
|
Hello everyone and welcome to my presentation. I'm Peter Eisenhowert. I'm recording today from my home in Germany. I've been to FOSDEM many times and I look forward to coming back one day. So my presentation today is going to be about the experiences with DocBook in the Postgres project. So my affiliation with free and open source software and also with FOSDEM is I work in the Postgres project and normally you would see me over in the Postgres dev room and I also have a presentation there but it's great at FOSDEM with so many different dev rooms and communities coming together to give us opportunities like this where we can branch out and interact with other communities. So I'm really glad to be here today. So normally I'm a programmer, I write code for the Postgres project in C mostly but I'm also you know what could be called the documentation tool Smith in the project just because I do that work and it interests me and I've also done a lot of the writing of the documentation but today we're going to talk about the tools and you know the writing of open source documentation is another interesting discussion maybe for another day. So Postgres just really quick is a database system so that means it's you know so important long-running business software. Postgres is written in C has you know pretty standard build system configure make make install. We have yearly major releases so the most recent major release was Postgres 13 and you know later this year we'll plan to put out Postgres 14 and so on so every major release is over you know big milestone for users to upgrade but because it's database software and it's long-running has you know uptime requirements we don't necessarily make users upgrade every year but we maintain existing releases in the community for at least five years. So and this is kind of important for also the what we're talking about here today because there's at any given time depending on the time of the year about you know five or so branches that are maintained by the community and that also means you know the documentation in those branches has to be maintained it has to be backpatchable and still has to be you know rebuilt and so on at release time. So the Postgres documentation is written in DocBook. We're using the XML version of DocBook now. So you know here's some statistics of how big it is. The point is it's very big right if you make a PDF that maybe you can imagine most easily is 2600 pages so that's way bigger than a normal book you know HTML's over 1000 files and there's also man pages being built several hundred and all the documentation sources are in the main Git repository with the source code so this is all one thing it's built together by the developers it's built together at release time it gets shipped together so and that also means you know we're talking about documentation tooling here documentation tooling even if as maybe a developer is not currently interested in writing documentation it documentation still part of the the sources and the build process so it still has to work for them right so that's important. So here's a picture right it just looks pretty normal if you know sort of how DocBook default output looks like this is you know you can find it's on the website or you can build it locally. 
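Since the talk mentions that the documentation can be built locally alongside the code, here is a hedged sketch of what that looks like. The target names are as I recall them from the PostgreSQL source tree and may differ slightly between release branches; the documentation appendix referenced at the end of the talk is the authoritative source.

    # Build the PostgreSQL documentation from a source checkout (sketch)
    ./configure
    make docs                              # top-level target: HTML plus man pages
    make -C doc/src/sgml html              # just the HTML documentation
    make -C doc/src/sgml postgres-A4.pdf   # PDF output via XSL-FO and FOP
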
A little bit about how we came to this; some of it happened slightly before my time. My understanding is that DocBook started in approximately 1990, so by the time the Postgres project adopted it in 1998 it had already been around for a while, but that seems to have been the time when it really got adopted widely. Originally, the Postgres project came out of a university and was then inherited by an open source community. The original documentation was all man pages, which makes sense if you think about it, because man stands for manual: those were the pages of the manual, and originally that was all printed. Nowadays we look at man more as a command line tool, but it used to be simply the manual. Then of course HTML became the standard way of presenting text on a computer screen, and some people started writing documentation in HTML, but that was quite hard to maintain, and so DocBook was adopted. That was slightly before my time, so I don't know what the decision process was, but it turned out to be quite an important decision, and a good one in my opinion. For the longest time we used the SGML version, and I'll talk a little bit about why that was; then a few years ago we converted it over to the more modern XML- and XSL-based sources and toolchain. The current toolchain is just the normal stuff that everybody uses nowadays, I think: the DocBook XML DTD, the XSL stylesheets, and then libxml and xsltproc for processing those. All of that is easily available in normal operating system distributions, so it's quite easy to get to, and for printing we use FOP, but you can also use any other XSL-FO processor if you have different requirements. The toolchain we originally started with was a bit of a more complicated situation. It was standard at the time: you had the SGML version of DocBook and you had the DSSSL stylesheets, which were sort of the predecessor of XSL.
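Before going further into the old toolchain, here is a hedged sketch of what a build with the current XML-era tools looks like when driven by hand. The file names are illustrative (the real build goes through the project's makefiles), but the commands themselves are standard libxml2, libxslt and Apache FOP invocations.

    # Validate the master document against the DocBook DTD (libxml2)
    xmllint --noout --valid postgres.sgml

    # DocBook XML + XSL stylesheets -> HTML (libxslt)
    xsltproc --xinclude stylesheet.xsl postgres.sgml

    # DocBook XML -> XSL-FO -> PDF (Apache FOP)
    xsltproc --xinclude stylesheet-fo.xsl postgres.sgml > postgres.fo
    fop postgres.fo postgres.pdf

The stylesheet.xsl files here stand for the project's customization layers, which import the upstream DocBook XSL stylesheets and override parameters, which is what the talk means by "the stylesheets are basically the customizations".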
For me as a programmer that was actually kind of cool, because DSSSL was Scheme, so it was interesting to program in, but it is not modern anymore. Then there were OpenSP and OpenJade for parsing and processing, and for man pages we used another separate tool, docbook2X, which also went through a couple of different versions: there was an SGML-based version and later an XML-based version, so that had a couple of iterations. For printing we used JadeTeX, which, as the name implies, is the bridge between the Jade toolset and TeX, and at some point we also used a word processor to post-process. What we actually used at the time, probably long forgotten, was Applixware, to do some manual post-processing of the print output, because it didn't work too well out of the box. The reason I mention the old toolchains is that, as you saw, we used SGML for a long time when XML was already the thing to do, and there were two reasons why it took so long to convert. One was that the newer XML-based toolchains were very slow compared to what we were used to. The absolute times have obviously changed over the years, but I actually tried it again the other day to prepare for this presentation, and it was basically an order of magnitude slower, which is quite unacceptable; it's the difference between one minute and ten minutes, and you don't want that. As I mentioned, the Postgres documentation is very big, so those kinds of exponential behaviors were really bad, and we didn't know what to do about that for a long time. Then some people went in and analyzed this, and we did performance engineering on the XSLT stylesheets, figured out how to remove some of those exponential behaviors, and got the performance to match what we were used to. The second part was the question of how to transition: do we just switch it all over in one day with one giant patch? What do we do about the back branches that I mentioned; do we maintain a separate approach there? How do we educate developers about it? In the end it actually turned out to be quite smooth. We did it in multiple steps: we changed the build process first, from DSSSL to XSLT, and a little later we changed the sources from SGML to XML, and we tweaked things a little so that backpatching stays easy. So we found good solutions in the end, but it took a long time to ponder and prepare that. Now, pros and cons of DocBook. One important thing, and hopefully a lot of the people who visit this devroom agree with this, is that semantic markup is important, in my opinion. If I write a letter or something, I'll use a standard word processor too, and if I take notes or write small READMEs I'll use Markdown, for example, or something lightweight like plain HTML. But the bigger your project gets, and the more contributors you have, especially volunteer, distributed contributors you can't directly control in a centralized way, the more you need a way to make sure it all ends up being consistent and standardized, and semantic markup is good for that. DocBook also comes with a lot of things out of the box that you would expect: linking, tables of contents, navigation. And we can customize it:
we're not relying on anyone to give us permission to customize anything because the whole style sheets are basically the customizations and we can just change anything we want out of that and so and another thing is the that's important is the longevity and the you know the tools rarely break once you have them installed which is sometimes difficult it almost never breaks you don't have a whole sort of tree of dependencies of you know various different modules and things like that you need to install you have a very relatively small tool set and that continues to work and nothing affects that and that is very important so the timelines we work in right we have the stuff we basically write today still has to build in 10 years so that's kind of the way I think about it so we can't rely on you know the tool that I use today we still have to have that a working combination in 10 years from now so that is very important and you know that makes us a little bit conservative in adopting so the new tweaks and technologies and things like that the cons are at least you know so for people to get started and understand all the different pieces right normally you what you want is I have a source and I want to convert it to an output right so you can how do I do that right and there's a lot of different sometimes you have just a single command right you just go from input to output like maybe in Markdown but here you have the all these things like XML SGML document type definitions what is XSL what is XSL T and F O and the style sheets and how does it all fit together you know if you think about it if you kind of like these things then it's actually quite a neat system I think but obviously it's very hard to understand if someone who just wants to get the job done and then you know some people also like don't like semantic markup they just want to write their thing but again as I mentioned you know I think if you sort of work on a long big project you want to encourage people maybe take a little bit more time and not just write things down but think about what they're writing installation of tools used to be very difficult with the old SGML toolchain now it's actually quite easy so hence in parentheses I mentioned you know the documentation is very large both in source and output and that is a continuous challenge with all the different tools and for example just an obvious example if you want to build a PDFs with FOP you need at least one gigabyte or so of memory otherwise it just doesn't work at all and that's growing presumably and also the old JTESH had issues where you had to then increase some memory limits and it's all these kind of things so it does work but it needs some care and it's you know again we had to do some performance changes as well and things like that so it's quite challenging but it's okay and then the the maintenance of the tools and I'll talk about that in the moment in more detail so here is you know by the time we had adopted DocBug in 1998 I said so by you know 2000 we were really into it and it was sort of the standard thing to do the tools we were using were already essentially on the last legs and then you know here I showed the last releases but those were the last the absolute last sort of minor patch releases there was no feature development had already stopped before that and so almost immediately we were like okay you know this DocBug is great and everybody's writing documentation but you know some of us who were more looking at the tooling were like okay these tools 
are not really being maintained much anymore. They're still getting the occasional security fix and things like that, but feature development stopped a long time ago. The one I'm most concerned about is the DocBook XSL stylesheets, which had their last release in 2016. It's not that there is no work to do; it's just hard to get people together to do it. FOP was actually in that situation a while ago, with no releases for several years, but it has recovered really well: there are regular releases every year now, and that's nice. The other ones I'm more concerned about. And then, overlaying that, or related to that, are all these different standards and specifications going all over the place. For example, a very minor point but one that illustrates the situation: libxml is not adopting XML 1.1; they're just saying they're not interested and are staying with XML 1.0. Similarly, XSLT 2 and 3 are not being adopted by libxslt; again, not interested, or no time or resources. DocBook 5 is a major re-engineering of DocBook that, as far as I can tell, is not widely used, and one sub-problem there is that it changes the schema language from the traditional document type definitions (DTDs) to RELAX NG, which in a way is quite nice, but again, as far as I can tell there is very little open source tooling for RELAX NG; there's about one tool I found, and it is already ten years out of date. It would be really nice if, for example, libxml could implement RELAX NG; it has a little bit of support for that, but not the full support we would need. So we're again stuck in between here. The DocBook XSL stylesheets, as I mentioned, are not being maintained anymore, but there are new branches that people are working on which focus on implementing them using XSLT 2, and again that's not being adopted by the toolchains we're using. There's also the interesting idea of using CSS for printing instead of XSL-FO; an interesting idea, but there's no tooling for it: you have the browser, which of course does CSS, but there are no command line tools, or very few, and nothing easily accessible. The one alternative XSLT processor that supports all of that quite nicely is Saxon, so we could in theory swap that one in, but that's a little bit awkward. Saxon is written in Java, which is not by itself a problem but is a bit of a red flag to the Unix developer community; there are mixed free and proprietary licenses on different versions with different functionality, which is a little strange; and as far as I can tell it is not widely packaged in the usual operating systems that open source developers use. So it would be quite a change, not easily accessible, and it would make the tooling worse for my fellow code developers. So where are we now in the Postgres project? We made it through that big, roughly ten-year process of updating our toolchain, so that seems fine for now;
There is a worry about, you know, what's gonna happen the next 10 years, right? That's sort of something we have to figure out, but as far as I can tell the tool chain is stable for now. Sort of the focus now is more on the content, and especially some specific kinds of content. It took us a long time to figure out how to put images into the documentation in a sustainable way. Like, obviously you can make an image and link it in there, and DocBook will handle that, that's not much of a problem, but making it so that, you know, someone else can then open the image, make changes and things like that, it's not all that obvious how to do that. You know, more linking and stuff like that to make it nicer to use on the web, right? Originally you built the documentation locally and browsed it locally; now everybody uses it on the web, so that's kind of the focus, at least on most people's mind. But I'm here for the tooling, right, so I'm sort of thinking about where this is gonna go. The DocBook XSL style sheets need some help. The problem is, I think, right, I have a job that allows me to work on open source and free software as my job, and many others can do that as well, and that was obviously very successful, but most of those jobs are for big projects, business-level projects, and not so much for the tooling underneath it all. We've experienced some of that over the last few years, where it was noticed, for example, that OpenSSL didn't really have the resources that it needs, and other things like that, and some of this was addressed, but I think the documentation tooling is probably also in a very similar situation, and not much attention is perhaps given to that, even though, you know, we need documentation obviously, and we need documentation tools to build the documentation and develop our software. So we need to figure out a way to put more resources into that, and, you know, I think I would have some of the skills to do that, but I already have a job, so I can't dedicate myself to that. And then we need to figure out, you know, is DocBook 5 something we should adopt, where is this gonna go? Everybody in the open source community seems to be stuck on DocBook 4, but DocBook 5 has some nice new features; I just don't really know how to get there with the tool chain that we have right now. All right, so that is all for me. Here are some links: the link to the documentation, then the second link is, in the documentation we have an appendix that describes the tooling, so some of the stuff I talked about here, then the links to the sources, and links to DocBook if you want to learn more about DocBook. Okay, so I look forward to taking some of your questions. Thank you for listening and thank you very much. All right.
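A note on the tool chain described in the talk above: the moving parts are DocBook XML sources, the DocBook XSL style sheets applied by an XSLT processor such as xsltproc (libxslt), and Apache FOP turning XSL-FO into PDF. The following is only a minimal sketch of how those pieces fit together, not the actual PostgreSQL build: the file names are hypothetical and the stylesheet path is just one common install location, so both would need adjusting for a real setup.

#!/usr/bin/env python3
"""Minimal sketch of a DocBook XML build, assuming xsltproc (libxslt),
the DocBook XSL style sheets and Apache FOP are installed. File names
and the stylesheet path below are placeholders, not PostgreSQL's real
build configuration."""
import subprocess

SRC = "manual.xml"  # hypothetical DocBook source document
XSL = "/usr/share/xml/docbook/stylesheet/docbook-xsl"  # one common install path

def run(cmd):
    # Print and execute one external tool invocation, stopping on failure.
    print(" ".join(cmd))
    subprocess.run(cmd, check=True)

# DocBook XML -> HTML: a single XSLT pass with the HTML style sheets.
run(["xsltproc", "-o", "manual.html", f"{XSL}/html/docbook.xsl", SRC])

# DocBook XML -> XSL-FO -> PDF: an XSLT pass with the FO style sheets,
# then Apache FOP renders the formatting objects into a PDF. The FOP
# step is the memory-hungry one for a 2000-page document.
run(["xsltproc", "-o", "manual.fo", f"{XSL}/fo/docbook.xsl", SRC])
run(["fop", "-fo", "manual.fo", "-pdf", "manual.pdf"])

Swapping in different style sheets, or a different FO processor, only changes the arguments of these two or three calls, which is roughly why the style sheets can be treated as the customization layer.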
Okay, we're live, so hello everyone, hello Peter, thank you for your presentation, thank you for being here today, it's a pleasure. We don't have that many questions from the audience, which might change any time, I don't know, so I have a couple of questions for you. I'm interested: you basically have a tool chain that needs to stay alive for 20 years or a little bit more, something like that. How do you select your tool chain, how do you arrive at a tool chain choice that actually stays stable for a decade? What's a good way to do that? Well, I don't have a reproducible recipe for that, you know. The DocBook choice was made slightly before I arrived at the project, and I don't really know what the decision process at the time was, but yeah, this is sort of a standard problem I think in IT in a way, right? Like, what kind of software do you adopt, not only as a build dependency but for, like, your business or something, right? You have to have some kind of hunch and then evaluate things in terms of longevity, whether they're standards-based and well supported and things like that, I would think. Right, okay. Then I have a very specific question, because I've been using it as a PostgreSQL documentation user for, I don't know, a long time, not half my life, but maybe something around that time. Why did it take so long to get pictures into the documentation? What's the problem and how did you solve it? Okay, yeah, that could be another half-hour talk I guess, but, well, the summary is, right, we want to have a reproducible build process. I mean, we used to have pictures in there, which were put in, like, a long, long time ago, before I was really following it, but if you want to change something, there's no way, you know, it's like a JPEG or whatever, and how would you change it, right? So we had to ensure, like, a process where later on somebody could go in and patch it, right, because we work with patches in software. So what we did now is we basically do everything through SVG, because it's a nice text-based format, and we have a couple of tools that we sort of endorse for making SVGs, for example Graphviz for, you know, sort of flow charts and that kind of stuff, it's quite useful, and Ditaa for making other kinds of graphs. But there again we have to make sure these are sustainable tools, that we can still use them in a couple of years, and that they're also widely accessible to other developers who want to change something. So it's again this kind of process: make sure this process is repeatable and sustainable. So, Ben has a couple of questions about the initial move from man pages and HTML to DocBook. What's the main difference between HTML and DocBook? I mean, HTML would have had a high longevity, that wouldn't be a problem. And then he goes on: what's the difference between versions four and five of DocBook that is so important for you, what's the future process there? Yeah, okay. So the difference is, I think DocBook is even more abstract than HTML, first of all. I mean, HTML has obviously also evolved quite a bit over 20 years, but DocBook has a little bit more abstraction again. One of the important things, I guess, that you couldn't easily reproduce with HTML is making tables of contents automatically, making sort of forward and backward links and
automatically numbering chapters and things like that. Some of that, again, as I mentioned, CSS is gaining some of that, but it's not nearly where it should be, right? So DocBook is more like, you know, LaTeX compared to TeX, maybe, right? It has more abstraction, it does sort of more automatic things in the background. And then the question, what's the difference between DocBook 4 and 5, or what is relevant: I think one thing that is kind of nice is the ability to use, well, RELAX NG would be kind of nice, then you can also use namespaces, XML namespaces, and XLink, for example. Those are the kinds of things I would make use of. There's no real pressure to make that change, but it's out there, right, and, you know, DocBook 4 is not maintained anymore, or not really, and all the tools are kind of focusing on DocBook 5, so I don't know if we should move or we can just stay where we are, I suppose, but there are a couple of nice things in using more modern XML technologies there. So, we have a couple of talks about AsciiDoc, I guess actually we had at least two, maybe three talks about AsciiDoc today. Okay, everybody seems to be happy about AsciiDoc, people rave about DocBook and especially about AsciiDoc, so have you considered moving to AsciiDoc? Well, we haven't considered that, but that's actually something I also learned today, or yeah, relearned. I mean, I used AsciiDoc in the past when, I guess, it was quite immature, I would say, but that was a long time ago, to be fair. So yeah, that's definitely something I would follow up on. I don't think we're, you know, gonna just change the Postgres documentation, you know, thousands of files and things like that, but I would imagine, for example, when people build Postgres plugins, for example, separate little projects, they want to also write documentation for that, and right now people just write a README.md or something, right? But I think integrating AsciiDoc there, which is sort of a simplified DocBook in a way, right, then you can apply the same styling and things like that. That's definitely an idea I will pursue. Okay, I think there are no more questions from the audience.
|
PostgreSQL has been maintaining its documentation in DocBook for over twenty years. It's been successful but not without challenges. PostgreSQL is often praised for its excellent documentation, and PostgreSQL is also often criticized for its hard-to-approach documentation. Maintaining documentation for a project such as PostgreSQL comes with a number of challenges. There are many challenges writing good documentation, but in this talk we'll focus on the tooling. First, the documentation is quite big: In a printable format it is over 2000 pages. This pushes many build tools beyond their limits. And of course we also want documentation builds to finish in a reasonable time. Second, we need tooling longevity and portability. Database server software is long-running software. We still need to be able to patch and rebuild everything many years from now. In the world of writing and publishing tools, there is often a new idea every few years with many dependencies and no long-term track record, which makes it difficult for us to adopt things like that. It took us many years to get just a few graphics into the documentation because the tooling issues were too overwhelming. DocBook has been good to us, but there have been plenty of struggles along the way, and there are some concerns about the future.
|
10.5446/52472 (DOI)
|
Welcome to this online talk about rebuilding the Apache OpenOffice Wiki. Let's start talking about the Apache OpenOffice Wiki as it is now, but first let me introduce myself, even if it is an online talk so you cannot really see me here. My name is Andrea Pescetti, I'm active as a volunteer in several free and open source projects in my spare time. These projects include Apache OpenOffice, where I served as project chair and release manager, and I'm now staying as an ordinary volunteer, helping mostly with releases and the web projects. This is due to my long experience as a PHP developer. Let's start describing the Wiki site and why it is an important resource for the OpenOffice community. So first, what are we talking about actually? The OpenOffice Wiki is a website publicly available at wiki.openoffice.org. This might seem obvious, but as with everything in OpenOffice we have multiple instances of everything, so we call this the MWiki to distinguish it from another wiki that is available to the project but is meant for the entire Apache community; it is called the CWiki, based on Confluence, and that one is where organizational activities take place, like planning releases and this kind of stuff, while the MWiki is more of a community resource. It originates from the pre-Apache age, meaning that it was available already 20 years ago, and it used to be and still is an important reference for developers. So it was born as the place to put stuff that was meant to be published on the web but not suitable for the website. It is still community maintained in a way, but most of all it is an important reference, meaning that every time you need information on how something works you will not find it on the website, you will find it on this wiki. The current situation is that our MWiki is hosted on a virtual machine owned by the Apache Software Foundation but not operated by them, meaning that it is made available to the OpenOffice project but it is not managed by Infra, the Apache Infrastructure team. It will soon be decommissioned, and we cannot do anything about this policy of Apache. It is running MediaWiki as an engine, a very, very popular engine, mostly seen on Wikipedia and indeed the most popular wiki engine around the world, and it is outdated in all possible ways, meaning that it is running on an obsolete operating system, a very ancient version of Ubuntu, and obsolete PHP: while it has been ported to PHP 7.0, that version is now obsolete too, and it runs a very old MediaWiki version as well, so we need to update it as soon as possible. Most of all, it could contain some obsolete content, probably thousands of pages that are obsolete, but this is totally another story. As I said, the MWiki is important as a reference, so we are not interested now in whether it is old content or new content, or in restructuring it in terms of content. What we need to do is preserve it and be able to maintain it, make minor edits and possibly work on content, but that is separate. We are focusing on the infrastructure right now. The components of our MWiki are the MediaWiki engine, of course, and then plugins. Plugins are a bit painful here, since we are using a comprehensive set of third-party plugins with different levels of support and maintenance. On top of that, we are also using a few custom plugins that were developed specifically for this instance.
So they are plugins that are really not well-tracked, not well-known, that are used by only one site in the world, and that were written between 2000 and 2010, so they rely on obsolete conventions and have never been ported to modern standards. They will need a lot of work, of course. The other component needing attention is the skin. We are using a custom skin, a graphical theme, just a basic OpenOffice theme with OpenOffice graphics. The logo was recently revamped but the rest is still old. We are not going to redesign it but only to make it sustainable and usable for our future needs; it is currently a major item that is blocking the website upgrade, since it is not compatible with modern versions of PHP, for example, and it is based on a templating engine that, in turn, has compatibility issues. So it will not be easy to port it, but still, we are just trying to get it to work and not, at this stage, redesign it completely. Let's talk about maintenance of the wiki and why it has proved to be a challenging effort so far. Well, basically the wiki needs a lot of love, because there is a complex chain of compatibility requirements in our wiki and its status is really minimally maintained, almost broken. Let's say that, as you can see here, we still have a custom syntax highlighting plugin, and this was the cause of many pages rendering just blank because it was broken. It broke during an upgrade, and this situation went on for months and was causing lots of problems to users and developers, since nobody had time to actually look into it. And the error was trivial actually, it was just a fix of a few lines where one had to replace an old piece of code with a newer PHP function, but that was done only recently thanks to new volunteers coming in and trying to help. Still, this is emblematic of a situation where we really need to get it into better shape. It is hard, as I was saying, because we have different PHP compatibility requirements for the MediaWiki core, for plugins and for our custom code. All of them get updated at different stages, so it is also hard to put the pieces together. Our current upgrade process is a nightmare. This is our established upgrade process for when we want to upgrade to a newer version of the MediaWiki engine. First, we clone to local. Actually, the process as it is now does not even involve the step of cloning to local, since it relies on a dev installation, and both of them are hosted on the same server at Apache. But this means that a lot of stuff is hardcoded, as we will see; it is an issue, so it is not even local at the moment. Then you update the MediaWiki engine and run the basic updates. Then you disable all plugins, and then you start updating plugins one by one, enabling them one by one, until you are sure that things don't break. Meaning that, in our experience, with a newer MediaWiki engine there is always some plugin, and we use lots of them, that breaks, so you have to try enabling it or just try updating it if there is a new version. But again, you might have different requirements between the MediaWiki engine and plugins in terms of PHP functionality and support, so it is something that needs to be done with care. And how we ensure that things don't break, this part is still a bit unclear, meaning that you cannot just test everything here. So basically you don't really test now, you just do some attempts to see if stuff is broken or not. And then you replicate this in production. Of course this is not practical and it is very much error prone.
Meaning that you replicate the process in production, so it is not an immediate transfer to production of what you did. And this is very bad: everything that can break will break with this kind of approach. The point is that there is still a lot of work to do on the MWiki to fix it, because we must still fix stuff rather urgently, as the Infra team, the Apache Infra team, wants to move the wiki to a modern version of Ubuntu, so it will be running on Ubuntu 20, which features a modern and secure PHP, but again not compatible, or not fully compatible, with our MWiki. And the MediaWiki engine will need to be updated to the latest version. And our ancient plugins, and here I mean the third-party plugins but also our custom plugins, will have to be updated, or in some cases replaced, or in some cases adapted to new functionality. What do I mean here? Well, updated is obvious: we install a version that is meant to run on the latest MediaWiki version. But some of them are no longer maintained, and for some of them we will need a replacement, since a new version is out or a better alternative is available. And in some cases the MediaWiki engine comes with new functionality that supersedes what we had at the time. So we will really need to understand why we are using each plugin and what is the best way forward. It might be a straightforward update, a replacement, or some custom adaptation needed to get it to work somehow. So it's a challenging task, and the approach we propose is a modern one: rebuilding the MWiki infrastructure rather than rebuilding the MWiki in itself. The challenge is that we don't want to just modernize the MWiki using an old process, very much error prone and with a lot of wasted time, and get an MWiki that will still be as painful to upgrade as it is now. We want to modernize the entire way we deal with the MWiki, because plenty of useful tools have been made available in recent years. In the last 10 years a lot of new technology emerged, and just because our MWiki is older does not mean that we cannot use modern technology. It is time to put some modern DevOps conventions at the service of our ancient MWiki: using better tools, using better conventions, and trying to get a modern process around the MWiki. This also means we can involve new volunteers here, because nobody would accept working on a very, very old infrastructure with very old conventions. This will help in recruiting volunteers too. The first and basic step, well, the first one would really be rebuilding the code base, but that is part of this effort too. The first step is dockerizing the MWiki. That means using Docker to manage the MWiki infrastructure, the local infrastructure for the MWiki. Docker is a powerful tool based on containers. So one takes the MWiki project and splits it into several containers: everything that is needed to make our MediaWiki installation work, the web server, PHP, MySQL. And we orchestrate all these containers, we put them together using Docker Compose. At the end we get a system with different containers working together that are independent from each other and can even be updated independently. Meaning that this allows local work to be much smoother. It is independent from the specific OS used by the developer, as Docker can run on Windows, on Linux and on Mac. It is easy, for example, to upgrade PHP: you just change a line in your container definition. So if I want to see if the wiki is still running fine under PHP 7.4 or under PHP 8, why not?
I just need to update one line and check whether things still work. This part is posing a number of challenges, since we have logic in our custom code that heavily relies on domain names. That is, we have a lot of logic that tries to guess whether we are running the production instance or the development instance, which is still meant to be run on the same server currently. And this must be entirely rewritten and adapted where needed, since we want to run it locally, to be able to run it on a local machine for the many benefits that this brings. And one part that is not covered in this slide is also reconstructing our codebase. Our codebase is a PHP codebase assembled from MediaWiki and custom plugins, but it doesn't keep any trace of where things were downloaded from and which patches were applied to what. This is why it is really bad to update right now, because there is also the fact, which we didn't discuss before, that our MediaWiki engine is patched. Probably some of these patches are now obsolete, but we need to check individual patches every time we upgrade. So in terms of codebase, Composer is the tool of choice, and the entire codebase will have to be translated into one composer.json file that will download the correct version of the engine, of our custom stuff and of all the third-party plugins. This will bring us to a sane approach where we can rebuild the exact same codebase at any time, and even manage updates if we wish, but this is a different process. Anyway, this is an important part that I didn't discuss before. And then the other tool, or technique, that we use from modern times is continuous integration and continuous deployment. The aim is avoiding regressions such as the one that broke syntax highlighting. We will have to write a set of tests that can be executed automatically. At first this can be really easy, meaning that you write tests that just check that a given list of URLs, corresponding to different types of pages, do not give errors. So you take what you used to do manually, like: I will have 10 sample pages, one normal page, the home page, one page in Italian, one page containing a table of contents, one page containing syntax highlighting and so on. And then we will have this kind of test and we run them just to ensure that we do not get an error, that we do not get a blank page, that we do not get a 500 error code. This is enough for a start. Then one can write more elaborate tests that can really check what works and what doesn't work, that the title is appearing at the right place, that something is shown or not on the page, and get closer to the quality of manual inspection. Then we would have continuous integration to test the empty installation, but this is less relevant in our case honestly, since we don't need to rebuild our wiki each time. And much more important, we can test the upgrade scenario based on the tests above, meaning that we can really take the current database, code and everything, apply the updates, any kind of update, from PHP to the MediaWiki engine to a module to our custom code, anything, and run the tests again and be confident that nothing broke, nothing of what we are observing broke. So the result of this is a confident wiki upgrade. And in general this will solve the problem; then one can go deeper, but in terms of our basic needs this is what we really need: confidence in upgrades.
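The automated checks described above can start out very small. The following is a purely illustrative smoke-test sketch, assuming Python is available on the machine running the tests; the URLs are placeholders, not the real list of sample wiki pages. It fetches a handful of representative pages and fails if any of them returns an error or comes back (nearly) blank, which is exactly the kind of check that would have caught the broken syntax-highlighting pages.

#!/usr/bin/env python3
"""Illustrative wiki smoke test: fetch a few representative pages and
fail if any returns an error or comes back (nearly) empty. The URLs
below are placeholders for the real list of sample pages."""
import sys
import urllib.error
import urllib.request

PAGES = [
    "http://localhost:8080/index.php/Main_Page",           # home page
    "http://localhost:8080/index.php/Documentation",       # an ordinary page
    "http://localhost:8080/index.php/IT/Documentazione",   # a translated page
    "http://localhost:8080/index.php/Code_Snippets",       # uses syntax highlighting
]

failures = 0
for url in PAGES:
    try:
        with urllib.request.urlopen(url, timeout=30) as resp:
            body = resp.read()
        if len(body) < 1024:  # a suspiciously small page is probably blank
            print(f"FAIL {url}: only {len(body)} bytes returned")
            failures += 1
        else:
            print(f"OK   {url} ({len(body)} bytes)")
    except (urllib.error.HTTPError, urllib.error.URLError) as exc:
        # 4xx/5xx responses and connection problems both end up here.
        print(f"FAIL {url}: {exc}")
        failures += 1

sys.exit(1 if failures else 0)

Run in CI against a disposable Docker instance after each change, a script like this gives the "nothing visibly broke" confidence before the same upgrade is repeated in production; more detailed assertions about titles and page content can be layered on later, as the talk suggests.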
But again, while we are at it, this becomes a global model, meaning that we can reuse this approach for all the other auxiliary websites, and actually for our main website if we wish to. We are able to do this, for example, for the extensions website and the templates website; they are all PHP-based websites. They all work the same, more or less, with different CMSes, but the CMS here is not a major part of our issue, it's more about the process. In general, as I said before, this will help with involving new people, but it will also help with empowering existing people, since a major problem we have is in general a conservative approach due to the risk of breaking things. We are not bold in making changes, since we fear that things will break and that we will cause damage which is worse than what we were trying to solve. But this approach can give us much more confidence in general, and it gives us opportunities to upgrade everything in a safe way. So I would advocate something like that: when it is done for the wiki and works well for the wiki, I would advocate it for other infrastructure too. Okay, I'm now finished, and if there are any questions we are ready to take questions.
|
The Apache OpenOffice wiki is the major source of information about OpenOffice for developers. A major restructuring is ongoing and we will discuss what has been done and what remains to be done.
|
10.5446/52474 (DOI)
|
Good morning everyone, welcome to FOSDEM. My name is Alon Mironic and I work for Synopsys, where I manage R&D for the Seeker agents. Seeker, if you haven't heard about it, or rather haven't heard about it yet, is probably the best IAST tool out there today. But as fascinating as application security is in general, and IAST in particular, it's just not my topic today. So this is the last time I'm going to mention Seeker. Instead I want to ask: do you remember last year's FOSDEM? Well, I sure do. Last year, much like this year, I gave a talk in the community devroom. Last year, however, this meant I had to hop on a plane, fly all the way to Brussels, get up on a stage and deliver a talk to an audience which was sitting in the same room as me. This year, as you can see, things are a bit different. In fact, a couple of weeks after I got back from FOSDEM, the airports were shut down, a couple of days later my office was shut down, and a couple of days later even I understood that the world is now kind of different. So I did what any good nerd would do, I hopped on Google and Googled how to be a better manager under lockdown. And unsurprisingly, there were a couple of results. What did surprise me is that a lot of these results talked about how to make your work situation at home or under lockdown more like an office. So there were suggestions to have virtual coffee with your peers instead of the water cooler talks. There were suggestions to have daily or bi-daily sync meetings. One of them even went as far as to say: leave your webcam open, have the entire team leave their webcams open, and that way, since you can all see each other, you kind of have an office feel to it. To be completely honest, and I mean no disrespect to anyone who came up with these ideas, but to be completely honest, this doesn't make a lot of sense to me. To me, this sounds like you'd be devoting a lot of energy to trying to change the world back to something that you know, instead of understanding the world is a bit different now and modifying your own behavior. Moreover, I think these ideas kind of implicitly or even explicitly assume that, first of all, everyone would want to go back to an office, or at least an office-like work environment. And second, that everyone actually could. Now, this assumption to me is not only wrong, I think it's deeply rooted in privilege. With this new reality, a lot of us are finding a lot of new restrictions, new disturbances if you will, to our lives. Some people have health problems. And if not, sometimes their immediate family, their significant others, elderly parents are facing health issues and they need to care for them. For some of us, there are essential services, or services that we perceive as essential, that have been shut down. Some people, for instance, need to devote more time and energy to childcare because schools and kindergartens are working in some reduced capacity or even not at all. So just the assumption that you're able to sit down and have an eight, nine, ten hour workday is, well, privileged. If you can, fine, great, good for you, to be honest, I'm jealous. But for a lot of people, this isn't the situation. But this is not the privilege I want to talk about today. Instead, today, I want to talk about our privilege as managers. As managers, we have a unique capacity to determine, at least to some extent, the work environment of our teams.
This to me means that in some small way, we have the ability to make the work environment, and even the lives, of the people on our team a little bit easier. And if we do not use this privilege, we are wasting it. Now, before I go on to discuss how we can, and in fact, should in my eyes do this, I have to take a minute to talk about oxygen masks. And no, I do not mean the masks we have all been using in these COVID times. So, I'm guessing no one here is flying anywhere anytime soon. But for those of you who do remember, in the pre-COVID times, sometimes we used to get into these metal tubes and fly across the world. Now, every time you did this, before the flight there was a safety instruction, and they all sounded pretty much the same: in the unlikely case of an emergency, oxygen masks will be deployed from the ceiling. If you are travelling with someone who needs your assistance, fasten your oxygen mask first and only then help them. Now, this metaphor extends, of course, beyond a physical oxygen mask. This metaphor, in layman's terms, says that you need to take care of yourself before you try and take care of others. As humans, at least most humans, we are not sociopaths. Usually, if we see someone in need, our instinct is to help them, especially if this is someone we know and even care for, like, for instance, our team members. What a lot of us are not always good enough at doing is making sure that we have our own oxygen mask on first. Before you start thinking about how you take care of and how you help your team, you need to make sure that this is sustainable. You need to make sure you are taking the time and energy to care for yourself. And for everyone, this means something else. Some of us need some time off with our family. Some of us need some alone time to pursue our hobbies. Some of us need to make sure we catch the football game in the Champions League. This is all fine. Find out what you need, what energizes you, and make sure you stick to it. Make sure you take care of yourself. Having said that, let's jump into things. So, a lot of people talk about work-life balance. Frankly, I really hate this term. This term kind of suggests that there are two separate entities, work and life, and you need to balance them out. In today's world, even before COVID, when we are always connected, I think this term has been eroded. Post-COVID, where you are essentially at home, your home becomes your work environment and you, for lack of a better term, invite your work to invade your home, I think it's really hard to talk about work-life balance. I think it's also wrong. These are not two separate entities. Everything is life. You don't stop living just because you're sitting in front of a computer and programming. So we shouldn't be talking about work-life balance. We should be talking about life, and how we do not allow work to unjustifiably reduce our quality of life. Now, for programmers, or tech workers in general, if you schematically look at it, work means one of two things. The easy part is when you sit down at your computer and write code. And I say this in the most general terms possible: code, scripts, open bugs, move tickets around in Jira, whatever. The more interesting part, or the harder part, is the part where you need to communicate with other members of your team or other people outside of your team. And here is where we can make a difference. But before we talk about how we do this, I want you to try and consider communication in the broadest terms possible.
So if you have a meeting, you're obviously communicating with someone. And if you're writing a Slack message, you're obviously communicating with someone. If you're drafting an email, you're obviously communicating with someone. But that's not that. Any time where you produce some information that is being consumed by anyone else, this is communication. Have you opened the bug? Have you filed a bug? You're communicating with the developer assigned to solve it. Have you submitted a PR or MR or whatever you call it for review? You're communicating with your reviewer. And so on. Since most of us do not work in a vacuum, we really need to be conscious that almost anything that we do, anything that someone else may look at or may consume is communication. And if we want to communicate effectively, especially in this world, we have to take into account personal time zones, which to me is the key concept here. Now, we're all aware of geographical time zones. If it's 10 a.m. over here in Israel where I live, it might be 8 a.m. somewhere else and it might be 12 a.m. somewhere else and it might be the middle of the night in the States. This is kind of obvious to all of us. What is maybe less obvious is the concept of personal time zones. So for me, for instance, I like to start my day a bit late. I like to have a lazy morning, have coffee with my wife, say hello to the cats, and only then start my day. I usually reach my peak productivity around lunch or what most people will call lunch around one or two. Then I'll take a break, have lunch, do the dishes, take care of a bunch of household chores because I'm kind of off because of lunch and go back to work two, maybe two and a half hours later. Conversely, someone else in my team has three young daughters, so he starts his day around 7.7.30, makes sure his daughters are set up for school, whether this means dropping them off or setting them up with the laptops, tablets for the remote school solution they have right now, and starts his work day at about 8. Other people may start their day much later. Another person in my team starts his day by going out to surf when the beaches are open and gets on to start working at about 12 o'clock after his head is surfing, got back home, had a second breakfast, showered, etc. We all live about 30 or 40 minutes away from each other. We're all in the same geographical time zone, but our personal time zones are really different. We have really different schedules for our days because our lives are different. Again, this just becomes much more accentuated under COVID and working from home restrictions. If we want to communicate effectively in this situation, and as I said, almost everything we do is ultimately communication, the best thing we can do is default to async communication. This may surprise some people, and it's not surprising that this does surprise people, because we don't grow up with async communication, at least people in my generation don't. We grow up talking to our parents, to our schoolmates, we finish school, we may or may not go to college, and then we go get a job. Most of our interactions in most of these jobs are in the same office, talking to our peers, talking to our managers, talking to customers, and so on. This is for most people not an option anymore, at least not a good option anymore. So we have this implicit assumption or implicit understanding that synchronous face-to-face communication is more efficient. Well, maybe, but the numbers just don't support it. 
On average, if you want someone to understand you, people talk at about 150 to 160 words per minute. Conversely, people read at about 250 to 300 words a minute. So in theory, written, or at least async, communication should be twice as efficient as verbal communication. Moreover, with written communication, you can pause, you can go back and reread something you don't understand, or look up a big word or anything like that, and you'd only be taking away your own time. You won't be taking away the other person's time by saying, hey, wait, go back, say that again, which is an effect which is, of course, compounded in meetings, because if you have 10 people in a meeting and you ask the speaker to repeat something, you're now wasting 10 people's time. So why do we think that verbal or synchronous communication is more efficient? Well, we tend to think that a lot of information is lost in async or written communication. There are studies on this: in verbal communication, or at least in face-to-face communication, between 73 and 90% of the communication is in fact nonverbal. It's based on facial cues, even intonations in the voice, pauses, etc. All this information gets lost when you write instead of talk. So a lot of people automatically assume that written, asynchronous communication loses a lot of information and is therefore inefficient. I disagree with this. I don't disagree with the numbers. I don't disagree with the research. I do think the conclusion that written communication is ineffective is a crutch. It is based on the fact that people are used to and are just better at verbal communication, and they tend to write off other forms of communication as inefficient instead of putting in the energy and actually becoming better at asynchronous communication. Now, we in the open-source community all know this. Open source is built on asynchronous communication between diverse teams, not even teams, individual contributors scattered across the world, in different time zones, in different personal time zones, and with various degrees of language capabilities. So we in open source mostly know how to do this properly. What lessons should we take to our day jobs, to non-open-source developers, to help them become better at this? So the cardinal rule of explicit, of better, and I spoiled it already here, of better written, better asynchronous communication, is being explicit. Of course, I don't mean using explicit language which your mother will not approve of. I'm talking about saying what you mean and meaning what you say. In other words, leave no place for assumption. And again, we need to remember that any interaction we have is a form of communication. If, as a manager, I assign a bug to a developer, I am communicating with him. This is really poor communication, because I don't pass on the full meaning of this action. What does assigning a bug mean? Should I drop everything and do this now? Should I put it in my schedule? Am I expected to solve this? Am I expected to assess how much time it needs to be solved and then get back to you? So instead, we always want to be explicit here. There's a big difference between assigning a bug, and assigning a bug and adding a comment saying: this bug looks important to me, but it is not urgent. Please finish what you're working on now and then fit this bug into your schedule. I don't particularly care when you get to it, but it has to be done before milestone XYZ due on date ABC. So this doesn't even mean micromanaging.
I've left the developer empowered to make his or her own decisions, but I did communicate very explicitly the expectation here. Don't confuse explicitness with micromanaging. The second cardinal rule of effective communication is being flexible. Most people have their preferred style of communication. Some people prefer emails, some people prefer Slack, or voice calls, which I think we've established are inefficient, but some people still prefer them, or even video calls, because we are not robots and some people need varying degrees of socialization. For some people it really helps to see someone else's face, again, even though this is not necessarily the most efficient form of communication. As managers, we probably have our own preferred form of communication, which is great. But we need to remember that each and every one on our team has their preferred form of communication. Unless there's an extremely good reason to prefer one over the other, why not be flexible? Why not make the members of your team, or the other people you are communicating with, a bit more comfortable? You need to keep in mind that for most people, their manager is the primary source of communication they have. Their manager is the filter through which they perceive their job. The manager is usually in charge of delegating tasks, giving feedback on the performance of these tasks, prioritizing tasks, and being, at worst, a professional sounding board to bounce ideas off, and at best a professional authority to get advice from. So given this discrepancy in power, this discrepancy in position, why not make communication a bit easier, a bit more comfortable for everyone, be flexible, and meet them where they are comfortable instead of where you are comfortable? And of course, this isn't an absolute rule. I do not mean you should always do whatever is uncomfortable to you and most comfortable to anyone else. You need to find a solution that works for both, or any number of, parties. But as a general rule of thumb, if you, as a manager, are always completely 100% comfortable, chances are that, well, either everyone else you work with is exactly like you and prefers the exact same style of communication as you, in which case, well, your life is really easy, but you should probably think about diversifying your team, but that's a topic for a completely different talk, maybe next year. If that's not the case, chances are you're making some people on your team uncomfortable, or at least you have the ability to make the work of some people on your team a bit more comfortable. And if that's the case, why not do that? With that, I think I'm out of time. I will take questions after the talk. Till then, if any of this resonates with you, if you want to continue discussing this, I most certainly invite you to reach out. I'm available on email, on Twitter, on LinkedIn, I'm really not a hard person to find. So thank you for listening. Thank you for your time. Have a great FOSDEM.
|
Management is difficult even under the best of circumstances, and managing globally-distributed teams is even more so. With the global COVID-19 pandemic and the restrictions it forced on all of us, management is nothing like the best of circumstances. With the pandemic, suddenly everyone is a remotee – even people who have no experience in working remotely, and no desire to work in such an environment. In this talk, I'll explore how the lessons learned from navigating a globally-distributed open source community can come into play when managing a suddenly distributed team. I'll examine how taking a cue from open source communities can help managers handle this new landscape with flexibility, clemency, and above all, empathy.
|
10.5446/52475 (DOI)
|
Hi everybody and welcome to my talk, Communication Hacks: Strategies for Fostering Collaboration and Dealing with Conflict in Open Source. My name is Nuritzi Sanchez and I am the Senior Open Source Program Manager at GitLab. Formerly I was on the board of directors of the GNOME Foundation, and I was also part of Endless, which created a Linux distribution for people with little access to computers or little to no internet access. So I've been part of engagement teams for open source communities, and in general I really love program management, project management, all of that which revolves around communication, and I would like to share some of the tips and tricks I've learned along the way with you all today. So I'll go over four main categories. The first is navigating cultural differences. I'll talk about improving feedback, active listening, and I will share some of my favorite hacks. I'll dive right in with navigating cultural differences, because as we continue to expand our communities, we really attract members from all over the world, and it's necessary for us to navigate those cultural differences. As we think about navigating cultural differences, there's this really cool way of thinking about that which Erin Meyer explores in her book, The Culture Map. She talks about differences along these seven indicators. Those are communicating, evaluating, leading, trusting, disagreeing, scheduling and persuading. And while I won't go into all of those today, I will cover some of them, and I encourage you to check out her book if you're interested in this topic, because she just has a lot of knowledge about these different indicators and cultural differences. The first indicator that we'll talk about is communicating. There are some cultures that are low context, which means that they believe that good communication is precise and simple and clear. Sometimes repetition is used to avoid misunderstandings. On the other hand, there are cultures that value high context communication. And these are cultures where they believe good communication is sophisticated, nuanced, layered. You oftentimes have to read between the lines, or speech is just longer. Here you can see I've mapped out some of the different countries and where they lie along this spectrum of low context to high context cultures. The U.S. is a very low context communication culture, and other cultures like Indonesia, China and India are much higher context cultures. And the interesting thing here is that even if you share the same language, like the U.K. and the United States do, where they both speak English, there is a difference in low context versus high context preferences. So sharing a language does not even mean that you share the same kind of communication style. The next indicator is evaluating. And this is about how people give negative feedback. So there are some cultures that give direct negative feedback. And this is delivered frankly, bluntly, honestly. The negative messages are not softened by positive ones. Absolutes are used, like "you always do this" or "you completely failed". And it's okay to give the negative feedback in front of groups. On the other hand, there are indirect negative feedback cultures, where the feedback is delivered softly, subtly, diplomatically. Oftentimes positive messages are wrapped around the negative ones. So if you've heard of the sandwich effect, where you have a positive message, a negative one, and then a positive one, like a sandwich, that's oftentimes something that these cultures do.
Qualifying descriptors are used, like "you sometimes do this, maybe, a little bit". And feedback must be given in private. The next indicator is persuading. And this is about how someone might be moved to take action. So principles-first cultures value the why first. People have been trained to develop the theory or the concept before presenting the facts or statements or opinions. On the other hand, applications-first cultures value the how or the what. They've been trained to begin with facts and statements and then back it up as necessary. And what Erin Meyer mentions in her book is that if somebody from France, for example, has a manager from the United States and is constantly asked to do something without knowing the why, that can be an extremely frustrating situation. And so with any of these indicators, there's no one right way to do it. But it's good to understand the differences so that we can adapt given the different, you know, personalities and people on our team, and to have empathy for each way of doing things, so that you can find something that works in that case. Here I've included an image of what a culture map actually looks like. So the example that I'd like to go with is the current board of directors at the GNOME Foundation. They have directors right now from the UK, from the United States, Brazil, Mexico and Nigeria. And as you can see here, countries are represented with different colors. And we'll just look at one of the indicators, this trusting one, which is about how people build trust. With some cultures, it's very much about task-based trust building. So the United States, for example, is a task-based trust building culture, where trust is built based on, you know, if you pass somebody a merge request and they review it, you start building trust, or, you know, as you work alongside them and they do what they say they will, that's how you start building the trust. It's oftentimes somewhat situational. So if you work in the same community or the same company, then that's where your relationship mostly holds. And if somebody moves away, then the relationship becomes weaker. Versus other cultures that are much more about relationship-based trust building, where trust is really built around going out to lunches together, getting ice cream together, whatever it might be. You're starting to learn more about their family, their friends, how they react in other situations or how they are in other environments. And those types of relationships tend to persist even once somebody leaves a community or a job or whatever it is. And so having a culture map like this is really useful when you're working closely with a group of people, because then it might indicate that having a social event, like a virtual coffee together or a virtual lunch, might be just as important to the working relationship as task-based activities like having meetings for board members, et cetera. So in this case, you see that both of those things are really important to have with this particular group of people. Some final tips for navigating cultural differences: invest time in getting to know the people that you work with. Everybody's different, so don't make assumptions.
Somebody might look like they're from a certain place or might have a name that you assume is from a country, but they might have a completely other experience and identify with a different culture than what you would have imagined. So don't make assumptions. I think it's also okay to establish expectations when you're in a diverse group. So for example, GitLab has a cross-culture collaboration guide where there are things around like acknowledging that both types of communication styles exist, high context and low context, and saying that, you know, GitLab prefers or tends towards low context. But we need to understand the trade-offs. So for example, in that case, even if, you know, GitLab prefers or tends towards low context, it's good for everybody to be aware that the two exist and make sure that things like interview processes are not biased towards just low context communication. Because then people coming from high context communication countries or cultures might do poorly just because of that difference. So as long as people are aware of the differences and make sure that the processes aren't biased or the way that you're working, I think that it's okay to say generally we prefer, you know, this type of communication style or whatever it is. And the important thing is really for empathy to be your guide, because as you continue to learn all of these things, there are so many nuances, everybody is different. So just understand that these differences exist and do your best to meet somewhere in the middle. All right, next section is about improving feedback. And I have this image of a grumpy cat because while feedback is a gift, it is very challenging for most of us. And I want to really emphasize that both giving and receiving feedback is a skill that we can build. And as we're building this skill, just in general, as we're engaging in any type of feedback, we need to make sure we're understanding our own biases and our tendencies. As we just learned in the previous section about cultural differences, these things really exist. And so we need to make sure that we acknowledge those that we don't have that we're not influenced by our own stereotypes and biases. It's also a good for us to understand that feedback is a good thing. And feedback seeking behavior is typically linked to higher job satisfaction, being more creative, and specifically getting negative feedback is linked with higher performance. So it's really good for us to seek out negative feedback. But receiving feedback can be really tough. And the reason is that we feel bad emotions more strongly than we feel good emotions, because our brains are wired to detect threats. So you might imagine back in the day, there were cheetahs and we had to run away, or we had our fight, flight, or freeze instinct was activated. And so our brains treat negative feedback basically as that cheetah. It thinks, oh my gosh, that is a scary monster. I need to fight, light, or freeze. So I just want to acknowledge that there is actually a biological part of this being so difficult. So tips for receiving feedback. The first one is to take some time. Oftentimes when we just read somebody's comment or we receive the feedback in person, we get a really strong reaction. And so it's OK to take some time to process the feedback. And if you need help taking that time, it's really good to create a script. So something as simple like, thanks for the feedback, I'm going to take some time to process it and I'll come back to you later. 
This might seem a little bit forced at the beginning, but it's something that will become more natural to you. It'll get better at giving yourself time. And as long as you kind of keep that in mind as like this is actually a part of my processing, like I need to take some time, it'll just get easier to actually find that time. So try creating a script at first. As I mentioned before, because we might get triggered into the flight, fight, or freeze mode. Our body just has a reaction oftentimes. Our heart might accelerate. Our face might get really warm. We might get just very tense and we'll have a physical reaction. And so if this happens to you, just focus on your physical body. There's a breathing exercise called 444, which I found really helpful where you breathe in for four, you hold it for four, and then you breathe out for four. And you repeat this about four times. And it really just helps to get your heartbeat down to a regular heartbeat to really like help us calm down. So try it out. And then the goal of all of these tips are to really get you to process the feedback. Because we want to ask ourselves, you know, what about the feedback was true? What do we think was biased or incorrect? And ultimately, how can we use what we've just heard or read to progress to improve? There's this really cool book called Thanks for the Feedback, the Science and Art of Receiving Feedback Well. And it also talks about different triggers we might have. Because oftentimes we have such complex emotions or we just feel so angry, but we don't know why. And so, you know, when you get into that state, you might want to see if one of these triggers is at fault. The first is a truth trigger, which is about the content of the feedback itself. We sometimes feel that it or we somehow feel that it's incorrect or just not helpful, untrue. There's a relationship trigger, which is not about the content, but about the person who delivered the feedback. Also you might think that the person is incompetent or you might feel betrayed by the person, that they're your friend and now they're giving you this negative feedback. And so the shift goes away from the content itself and to the person who delivered it. And then lastly, there's this identity trigger, which causes us to question our sense of identity. It makes us feel overwhelmed, threatened, ashamed, off balance. We sometimes don't even know what to think of ourselves and we get prompted into the survival mode. So again, think through these as you're trying to process feedback. And one thing that I found really interesting with feedback is that when we work in open source communities, there's this, you know, when people are giving feedback, there's something, there's their original intent of like what they wanted to accomplish with a feedback. And then there's the impact of how the person who hears it actually reacts. And in open source communities, there's a much larger gap between the intent and the impact. In person, you can see how somebody is, what, how their face is, what their tone is like, all of these different things that might help you understand the person's original intent. Like if they're delivering negative feedback or like some kind of feedback and they're just kind of, you know, like laughing or about it or something, they're probably not super upset. And but when it's written communication, then it's really hard to know how the person was acting when, when they delivered this feedback. So the impact of it might be totally different. 
And an example of this is if I'm here, you know, typing away, and I'm sipping my coffee, and I'm just, you know, looking at different things, and I'm filing a bug report, I say this bug is so irritating, it makes me want to jump out of a window. And this is just kind of my sense of humor, whatever. But somebody might take that, they might see the written communication and think, wow, this person must have been really angry, like they want to jump out a window. Wow. And they might get angry back or they might get really sad or, you know, because they didn't see that you were just, you know, not really that serious, just kind of drinking your coffee and not caring. So again, the intent versus the impact could be much larger. And I think it's important for us to all remember that the impact is just as important as the intent. So how you make somebody feel is just as important as what you intended them to feel. So when there is a gap between the intent and the impact, there's this cool thing called the SBI model to help us understand intent. And basically S stands for situation, B stands for behavior, I stands for impact. I'll show you an example of how this works. Basically you want to be very specific about where the situation was, where it occurred, what the behavior was. You don't want to assume you knew what they were thinking, but really describe the observable behavior and then describe the impact, how it made you feel what your reaction was. So the example that I have here is when you responded, I'm just going to role play, when you responded to the email I sent about engagement ideas last Friday, so I'm being very specific about the email and last Friday behavior, you said that I didn't have background in a background in design, so my opinion shouldn't count. And here I'm focusing on what was written instead of saying that was rude. Then I'm saying the impact. That made me feel excluded from the conversation, even though it's a community wide topic, and I felt hurt by the public comment. And then you want to get at the intent, so what were you hoping to accomplish with that? What were your reasons for saying that? And then that helps you understand what the original intent was, and it helps that person understand what the impact was. So again, a way to close the gap between intent and impact. All right, final, now I'm going to give some tips for giving feedback. And the same author of Thanks for the Feedback mentions that there are three main types that we should really focus on, and they're equally important to give. The first one is evaluation, which helps us understand where we are and what the expectation is. There's coaching feedback that helps people improve, and then there's positive feedback, which motivates and encourages. And it's really important to give the positive feedback, especially when you're giving a lot of evaluation and coaching, and vice versa. You don't want to be just giving positive feedback. You want to be thinking about the three different categories. Another thing to keep in mind is that you want to find the right person at the right place and the right time. So if you need to have a difficult conversation with a community member and you know that it's late in their time zone, they're probably going to sign off right now pretty soon, you don't want to start the conversation then. It's better to make sure that you ask them for some time. 
Maybe it's even you want to ask for a phone call instead of a chat, but at least find a time that will work for both of you to have maybe an hour conversation or something like that. So that's really important. And then I know we talked about how in different cultures, giving negative feedback might be public or private, but generally it's safe to assume that giving positive feedback in public and negative feedback in private is a good way to go. So keep that in mind. And then I had mentioned how seeking out feedback is linked with higher performance. So some quick tips on how to do that. The first one is to make your request very specific. So asking somebody something like, can you give me feedback on my presentation? The people might just say, oh yeah, it was good or yeah, it was nice. And if you try to make it really specific, like can you give me feedback on how quickly I talk during my presentation? And if I make eye contact, then it's much easier for people to remember that and really check for that along the way. It's also good to ask multiple sources for feedback. So you might want to ask two to three people because they might catch something that the other didn't. And you'll get a richer set of feedback back. All right, the next topic is active listening because communication is not just about talking. It's equally important to listen. And I love this quote by Richard Carlson, which says that being heard and understood is one of the greatest desires of the human heart. And I don't know about you all, but I've definitely either been in the position or have heard when somebody is speaking and somebody is there listening. And the speaker says, you're not listening to me. And the person who's been listening repeats what the person said word for word. And yet the person who was speaking doesn't feel heard and understood. So because of this phenomenon, this can be frustrating for both sides, both the speaker and the listener. I thought it'd be good to go over the different types of listening. The first type is distracted listening, which is pretty obvious. It's basically like maybe you're on your phone, you're distracted, you're multitasking or preoccupied. There's content listening, which is about listening to the facts and planning how to respond. So you might listen to somebody saying something about virtual conferences and you're like, okay, I want to respond to that about virtual conferences. Okay. And then you stop listening because you're focusing on what you're going to respond to. There's identifying listening where you're responding with a similar situation to show that you understand. So if somebody's talking again about virtual conferences and then you say, yeah, virtual conferences, I went to 30 of those last year. I like them for the most part, blah, blah, blah. And then you just start talking and you're no longer listening. Then there's also problem solving, which is listening with the intent to help give feedback or generate ideas on how to solve the situation. And a lot of us default to this because we care about the person who we're listening to and we want to help them or we feel like we need to be proactive somehow. And while this can be helpful sometimes, sometimes people are not ready to go into problem-solving mode. They just want to be heard and understood first or they just might not even be in the right state of mind to start processing that part and actively solving. 
So active listening is sort of the gold standard of what we should always try to do when we're listening to someone. And this is when we hear both the facts and the feelings and we respond appropriately to both. And so what that means is that somebody might be pacing up and down. They might be visually very distressed, maybe crying and, you know, just sighing deeply and talking to you about something. And maybe instead of going into problem-solving mode, you realize they just need an ice cream and a hug and, you know, maybe a walk around the park, and then we'll see if they're ready for problem-solving. And that might be so much better for that person in that time. So by active listening, you're kind of responding to what's needed there and then. Cool. So active listening, awesome stuff. But how do you actually do it? So similar to how a traffic signal has green, yellow, and red to let us know if we're supposed to keep going, slow down, or stop, our body and our physical actions and our verbal cues show whether somebody should continue talking, slow down, or stop talking. And so when we're active listening, it's important for us to make sure that everything is green so that people know that they can continue to talk and that we're here to listen. So physical cues first: you want to put down whatever you are working on, look them in the eye, face them, be really physically present. Verbal cues are just as important, so you can't do one without the other. These are things like saying, really? Oh, interesting. What did that person do? What happened next? All of these little things are simple questions and they're simple cues to let somebody know that you're listening to them, that you're giving them a green light and they should continue. So this is something that is incredibly important in the active listening process. The next thing is paraphrasing, and this is not repeating word for word what the person has said. It is listening to what they're saying and trying to understand the meaning so that you can repeat that and make sure that you understood what they're trying to communicate. And the reason this is so important is that it might be that somebody is talking and they're trying to sort out their feelings, and when you say, okay, so let me make sure I understood this, I think what you're saying is this and that, then the person has a chance to hear what you've understood and say, oh, yeah, that's what I said, but that's not what I meant. And they have an opportunity maybe to correct themselves and to make sure that you're both understanding the same thing and that they've properly communicated what they mean. So super important. It's also cool because you don't have to be in agreement with them. So if they are talking and you say, okay, I just want to make sure I understand this, so you're saying that you think that the FOSDEM logo should be a donkey next year, and then the person says, yes, that is exactly what I meant, you got it. You might think that having a flying donkey logo, let's say, is a bad idea, that actually it should be something else, or you just disagree. But at least you've understood what the person wanted you to understand. And so it leaves room for disagreement and it buys you time to then be able to present your opinion, et cetera. And with this, it's really cool because you can oftentimes see people visibly relax when you have understood them, when you paraphrase and they say, yes, that's exactly what I meant.
They sigh and their shoulders kind of relax. So definitely try this out. It's something that I am also trying to improve. But you'll see that it can be a very powerful communication tool. Active listening is really important for building relationships. A lot of these tips and principles really do carry over to the virtual world. But I think that it's really important for us to employ them in in-person events once we're able to have those again, because those relationships carry on into the online world. All right, I'm going to end now with some of my favorite hacks. I hope you can start using these immediately. And the first one is that it's the writer's job to be understood, and formatting helps. So for this, try to avoid long sentences, break them up whenever you can. Don't assume previous knowledge. Make it easy for anybody to jump right into the conversation and start contributing. Do a skim test. So here you can see that I have a bulleted list of things. And sometimes bulleted lists can look like a wall of text, even though it's in bullet points. And so here what I've done is I've pulled out a summary of what each bullet is about. I've bulleted it so that someone can really quickly skim through and see the basic thing of what each bullet point is. And this just really helps to be able to more easily digest the content and also to refer back to it if needed. So a skim test is really important. Just skim through it and make sure that people can get what they need to. Make sure that there's a call to action, that it's easy for somebody to understand who needs to do the action and by when. And an example of this is when we're writing issues or merge requests or whatever it is. This is an example from our social media team at GitLab where they've created a template for people to fill out. And you can see that they've used different headers. They've even used emoticons, checkboxes, bold, italics, everything to really make it easy for somebody to give the information that they need. And it just really helps somebody be able to actually complete the task that they need to. So keep that in mind when you're writing issues and filing merge requests. The next tip is yes and. And this is: use the words yes and instead of no or yes but. Because I don't know if you've heard the saying that anything before the but doesn't really count. So people will be like, yeah, yeah, yeah, but... and this is what they really mean. So just saying yes and kind of acknowledges what people are saying and then still gives you room to disagree. So yes, I heard that you want the FOSDEM logo to be a flying donkey, and I don't think that that would be the most appropriate thing to do, here's why. So you acknowledge what they said and still disagree. And the reason why this is so important is that people are more likely to listen to you if they feel like you've listened to them. So just having that one word difference of yes and can cause an entirely different experience. And along with that, there are these cool collaborative phrases that I've listed here where, for example, if you say something like we must, we need to increase social media engagement, and then you're in a group of people and you want everybody to contribute, saying that might be less effective than saying something like how might we increase social media engagement? Because that simple rewording invites people to participate.
They're more likely to give suggestions to add to a brainstorm or maybe even take action because they felt like they were able to participate. I'll let you read through the rest of this list, but there are things like might I suggest we, or what are your thoughts? All of these are collaborative phrases and I encourage you to use more of them in your day-to-day language. I want to end today with this quote by John Powell, which is: communication works for those who work at it. And I really want people to focus on this, because communication is a technical skill that we all need to improve. Just like we continue to improve other skills, coding, technical writing, all of these things, communication is like the thing that binds us all together. And it's the way that we are going to move forward in our open source communities, and in our personal and professional lives too. So I hope you've learned something today that you'll be able to start practicing or that really helps you, and I encourage you to go on this journey with me of improving your communication. Thank you very much. And if you have any questions, feel free to message me via Twitter or LinkedIn and I'll do my best to answer some questions.
|
During this talk, you'll learn about topics like cross-cultural collaboration, giving and receiving feedback, and active listening -- all things that are vital to the health of our open source communities. After reading many self-help books, watching various TED Talks, and listening to a ton of podcasts, I've condensed my learnings to help you improve your communications skills, deal with conflict, and collaborate better than ever not only in FOSS, but also everywhere else.
|
10.5446/52477 (DOI)
|
Hi everyone and welcome. I'm Sofia Wallin and I work as a project manager for Ericsson Software Technology. I'm here today together with Ray Paik from Cube Dev. Ray will introduce himself a little bit later in this session. We are very excited to be participating in FOSDEM, and our topic for today's session is open source documentation. A quick glance at the agenda: we are going to talk a little bit about why documentation is important, some of the common challenges that we see, and then we will give you a couple of project-specific insights on what we have done. Yeah. Why is documentation important then? First of all, open source is not only for developers, and not all people can read code. We also see it as an important entry point for all users but also other interested parties. It is key for onboarding and project introduction, and documentation should basically always be considered a natural part of the software delivery. What are the common challenges that we see then with open source documentation? More often than not it is done by a few volunteers, and even if you volunteer, you might not have too much experience in writing technical documentation. It is often started at the last minute, meaning just a few days before the release. It is also treated as separate from the code. We also see a lack of consistency and often poor quality, and then this vicious cycle just continuously repeats itself. What can we do then? I will now talk a little bit about what we have done for the Linux Foundation Networking project. As you can see here, a couple of years ago, a cross-community working group was established for LF Networking with the goal of providing a common way for documentation handling. We saw the need here because most projects had their own way and were struggling with finding a sustainable way of working with documentation. We also saw that not all the projects had a designated lead, which led to a community effort with a fairly low priority and also an unnecessary overlap for maintaining guidelines and so on. What did we achieve then? First of all, we got some really good consistency across the LFN projects. We provided clear and simple guidelines for how to work with documentation. We ensured that documentation contributions are treated in the same way as code contributions. However, we do let the projects keep their style guides local in their own repos. Moving on to some more project specifics, I'm going to use the ONAP project, the Open Network Automation Platform project. I worked very closely with David McBride, who is the release manager for the ONAP project. We pushed a lot for having designated milestones for documentation for each release cycle. This goes all the way from planning, having your repo structure in place with associated templates, to reviews and all the way to sign-off. This is then to spread out the work a bit so that you don't end up having everything at the very last minute before the release. We also don't let projects get away with not completing their documentation. I see this one as also very important: to get your technical committee and others really advocating for your documentation and talking about the importance of it. If you don't have your documentation ready, you cannot be part of the release. We also have two documentation hackathons per release cycle. I'm going to talk a little bit about this later on in this session, but that has also been very helpful and appreciated. We also provide templates with the expected level of content.
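To make the idea of documentation milestones and templates a bit more concrete, here is a minimal sketch of the kind of automated pre-release check such a process makes possible. It is purely illustrative: the directory layout, file extensions and required section names are assumptions for the sketch, not the actual ONAP/LFN tooling.

```python
"""Hypothetical pre-release docs check: verify that each project repository
carries a docs/ tree and the sections a documentation template might require.
Paths, file extensions and section names below are assumptions for the sketch,
not actual ONAP/LFN conventions."""
from pathlib import Path

REQUIRED_SECTIONS = ["Architecture", "Installation", "Release Notes"]

def docs_problems(repo: Path) -> list:
    """Return a list of documentation problems found in one repository."""
    docs_dir = repo / "docs"
    if not docs_dir.is_dir():
        return [f"{repo.name}: missing docs/ directory"]
    text = " ".join(p.read_text(errors="ignore") for p in docs_dir.rglob("*.rst"))
    return [f"{repo.name}: no '{s}' section found"
            for s in REQUIRED_SECTIONS if s.lower() not in text.lower()]

if __name__ == "__main__":
    problems = []
    for repo in sorted(Path("repos").iterdir()):  # one clone per project
        if repo.is_dir():
            problems.extend(docs_problems(repo))
    for p in problems:
        print("NOT READY:", p)
    raise SystemExit(1 if problems else 0)
```

A check along these lines could run at the documentation milestone, so missing docs are flagged well before sign-off rather than a few days before the release.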
Such templates are also very helpful since, as I mentioned, even if you volunteer to write documentation, you might not have too much experience. Having some kind of template that indicates the expected level of content and what type of information you are supposed to add is really helpful. We're tracking documentation tasks in Jira the same way as we do with the rest of the work. Last but also very important, the documentation is stored in the respective project repo. This is basically to enforce the responsibility and to have the documentation as close to the source code as possible. With that, I'm going to leave the floor to Ray. Thank you, Sofia, for laying the groundwork. I was a community member of LF Networking projects. I remember the iterations we went through with the milestones and making sure that documentation is done by everybody. I definitely appreciate that. I'm going to switch gears here a little bit and talk about open source projects that are led by software companies, which are slightly different from foundation-based projects like LF Networking. Back in 2018, I made a career change to work at a company that had open source projects. It was at GitLab back in 2018. I made another switch about three or four months ago to a new company that I'm at called Cube Dev. You're probably not as familiar with Cube Dev as people are with GitLab, but if you're interested in embedded analytics, I encourage you to check us out on our repo at GitHub. One of the things that I looked at early on, as we're trying to continue to grow communities and encourage community participation, was how documentation is done and if it's different from what I learned at LF Networking. What it turns out is that a lot of the core principles that Sofia talked about are directly applicable to documentation in company-led open source projects. You can even make an argument that since companies like Cube Dev and GitLab not only have an open source project but are also charging for services and/or for the software, there's a bit more scrutiny on documentation by users. It's probably even more critical, not that for foundation-based projects it's not important, but there's definitely a bit more scrutiny. More attention is basically paid to documentation, but some of the core principles that Sofia talked about, things like making sure that documentation is a core product of your project, are definitely true for company-led open source projects as well. The first one is, I think what a lot of people do when they look at open source software is they go to the repo, whether it's on GitLab or GitHub or other repos, and your documentation should be where the code is. It should be very easy to find. People should not have to sort of hunt around through the directory structure to find out where the docs are. In both Cube.js and GitLab, there's a docs or doc directory right next to where the code is, so you don't have to hunt around for it, so it's easy to find. As a matter of fact, when I started talking to folks at Cube Dev about my current role, one of the first things I looked at was the documentation, to find out where the docs are located. The other thing at GitLab that I really liked was this definition, I mean, they have this concept called definition of done. Whenever you introduce a new feature or you want to tweak a feature that's existing, one of the key things that you need to do to make sure that that feature gets released on your monthly release cycle is documentation. That's one of the checklist items.
I encourage you to check on that link. If the documentation isn't done, it's going to miss the release for that particular month. It's not that catastrophic because GitLab does monthly releases, it'll just make it to the next round a month later. It's one of the disciplines to make sure that everybody just owns the documentation piece of it. There are technical writers that are working with developers to make sure the documentation is done, but even as a developer, you need to make sure that documentation is done and take charge. It introduces that discipline. That's one of the things I really like. The next part, and Sofia talked about this as well for LF Networking, is the contribution process for documentation. If the contribution process for documentation is drastically different from code, I think it introduces a couple of major problems. One is that, as Sofia talked about when she opened the session, the documentation is usually a good gateway for people to get introduced to your community or to your product. Documentation contributions are usually easier when somebody joins a community for the first time because you don't need to necessarily know the details of the architecture of the code. If you see something in the documentation that can be improved, that doesn't require a whole lot of technical capability. It's easy to make your first contribution through docs. It's a good way to get people introduced to the community. Once people get familiar with contributions through docs, you don't want them to have to relearn a new process for contributing to code, for example. Having the processes be consistent between code and documentation, I think, is very important. It helps with onboarding. Once you master the process, learning how the reviews are done, how things get merged, and how it gets released, you don't want people to learn different processes between code and documentation, for example. I think the other problem is that if, for example, the process for contributing documentation is different than for code, it might give people an impression that documentation is somehow very different from your product when it's a key ingredient of it, or maybe that it's not a first-class citizen. You definitely want to avoid that. I encourage you to have very consistent steps for contributing to everything in your software, including documentation. The other point that I want to make is that, in general, in open source communities, we always talk about lowering the barrier to entry. You want to do that with documentation, too. I want to share similar examples from both GitLab and Cube. If you go to the documentation pages of both communities, there's an edit this page button. If you read through the documentation and, for example, you find a typo or a paragraph isn't clear and you want to help improve it, all you need to do is click on the edit this page button. On GitLab, there's a piece of real estate at the bottom of the documentation where you can click on that button. On Cube, it's actually on the top right of every single documentation page. I think you can even argue that that might be a better place for it, because once you open a documentation page, you immediately see it. You don't have to scroll all the way down. By clicking that button, it takes you directly to that file, so you don't have to hunt around your directory structure to find out where you need to go.
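The "edit this page" button described above is, under the hood, just a deep link to the page's source file in the repository. The exact wiring depends on the documentation site generator, but a minimal sketch, with a hypothetical repository and branch rather than the actual Cube or GitLab configuration, looks like this:

```python
def edit_url(owner: str, repo: str, branch: str, page_path: str) -> str:
    """Build a GitHub web-editor link for a documentation source file."""
    return f"https://github.com/{owner}/{repo}/edit/{branch}/{page_path}"

# Hypothetical example: the source of a "Getting Started" docs page.
print(edit_url("example-org", "example-project", "master",
               "docs/getting-started.md"))
```

A docs site can render such a link on every page, so a reader who spots a typo lands directly in the editor for the right file instead of hunting through the directory tree.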
That's another very helpful thing that encourages people to improve your documentation and start contributing. The third piece, and I think this is also something that Sofia talked about as well, is, in general, what you could call recognition or even metrics. When you're recognizing contributions, you want to treat documentation contributions the same way you recognize contributions to code. I have a few examples here. One is this chart. This is something we use at Cube Dev to keep track each month of new versus returning contributors. These contributions include both code and docs. We don't separate them. We look at it all as contribution. They're both the same; they're both first-class citizens, of course. In terms of metrics, you want to look at them the same way and put an equal amount of value on both. The other thing is swag; as a community manager, this is something I like doing in terms of celebrating successes and recognizing somebody. What we had at GitLab was, once you get your first merge request or MR merged, you can request this with the hashtag myfirstMRmerge. I got a lot of these questions. People say, hey, all I did was make some improvements in the documentation. Am I still eligible for the mug? My answer is absolutely yes. As far as I was concerned, an MR is an MR. It didn't matter whether it was code, documentation or something else. In general, recognition shouldn't distinguish between those two. Also, in terms of community leadership, different communities have different programs for recognizing key or active contributors. You might call them ambassadors, steering committee members, advisory board members, or heroes. One thing you definitely want to do is that in these groups, whatever these groups are called, whatever this leadership team or these active contributor members are called, you want to make sure that people that are active and taking a leading position in documentation are recognized there. You don't want to fill these groups just with people who made a lot of code contributions. That's one of the things at Cube and also at GitLab that we placed a lot of emphasis on: people that are active in documentation, we want to make sure that they get equal recognition as well. The last item, and this is also something Sofia alluded to earlier as well, is events. Obviously, because of the pandemic, we can't hold normal events like we used to. I mean, FOSDEM is a great example. We're getting together virtually, rather than being in person in Brussels. But even during the pandemic, I see a lot of value in organizing virtual events because it gives people an opportunity to come together, even if it's over a virtual platform, and to collaborate. I think a lot of us in open source typically work in different time zones and asynchronously, and that works great, but there's nothing like having events to bring people together to work on fixing a common problem. Organizing synchronous events like this focused on docs, like Sofia talked about, works equally well in company-based open source projects. It not only encourages people to help with your docs, but also to get started in your community for the first time through documentation. A couple of things that I want to point out: for documentation-related events, like I said earlier, you're likely going to have a lot of first-time or new contributors. I have an example here with a list of documentation-related issues that people can tackle really quickly.
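Going back for a moment to the new-versus-returning contributor chart mentioned above: a rough sketch of how such numbers can be derived from a repository's history is shown below. It deliberately does not distinguish docs commits from code commits, in line with the point about treating them equally; running it inside a clone of the repository and the exact output format are assumptions for illustration only.

```python
"""Sketch: count new vs. returning contributors per month from git history,
treating documentation and code commits identically. Assumes it is run
inside a clone of the repository; the output format is only an example."""
import subprocess
from collections import defaultdict

# One line per commit, oldest first: "YYYY-MM|author-email".
log = subprocess.run(
    ["git", "log", "--reverse", "--pretty=format:%ad|%ae",
     "--date=format:%Y-%m"],
    capture_output=True, text=True, check=True).stdout

seen_ever = set()              # everyone who has contributed before
counted = defaultdict(set)     # month -> emails already counted that month
stats = defaultdict(lambda: {"new": 0, "returning": 0})

for line in log.splitlines():
    month, email = line.split("|", 1)
    email = email.lower()
    if email in counted[month]:
        continue               # count each person once per month
    counted[month].add(email)
    stats[month]["returning" if email in seen_ever else "new"] += 1
    seen_ever.add(email)

for month in sorted(stats):
    print(month, "new:", stats[month]["new"],
          "returning:", stats[month]["returning"])
```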
What I've done a couple of times in a few of the events is go through documentation and find typos or paragraphs that aren't clear and ask people to fix those. Having those easy list of issues that people can check off and submit their first PRs and get them merged quickly, it really helps and gets started and provides an opportunity for early successes. There's nothing like it's a good way to generate momentum. I encourage you, just like normal hackathons, create a list of issues that you want people to look at or help out with. The other possibility, in addition to or even in addition to having those lists of issues that people can fix, is highlight areas of documentation that you want people to look at. For example, you may have introduced a new feature and you just wrote a documentation. You want people to go through that during your meetup or hackathon event, and the people to go through and give people an opportunity to suggest improvements for like a new version of the docs, if you will, that you just released. Also, for a new community members, a good way to get them started is to encourage them to go through a lot of the introductory materials in your docs, like getting started or maybe a deployment guide and have them actually go through the steps. What I typically find in a lot of communities is that those intro guides, after a while, they haven't been reviewed in a period of time because after it was written. There's nothing like a new community members with a fresh set of eyes to go through them and make sure that everything actually still works in the deployment guide, for example, or maybe you made an assumption about certain steps. You figured people knew how to do that step, but maybe it wasn't explicit enough. It's helpful for a newcomer to point out that you need to expand on that or add more detail. Rather than just asking people to go through the entire set of documentation, have several areas where people can focus on so that they can contribute to making your documentation better. Finally, in addition to your own community events, there are what I call industry events that attracts a lot of technical writers. A good example is that a lot of folks may have heard about this event. It's called Write the Docs. They usually have three events throughout the year in different parts of the world. I had an opportunity to participate last year. Of course, it was virtual for the first time, but I really enjoyed meeting with a lot of people, even if it's online. The participants who were attending Write the Docs event, because what I learned was that I met a lot of technical writers who were interested in contributing to Open Source, but they just didn't know where and how to get started. There are a lot of folks out there that want to get started in Open Source, and this is a good way to introduce your community to new people. What we did was, the Write the Docs conference usually has a day zero event on Sunday called the Writing Day. Basically, you have all these sort of round tables, and people can come by and learn about your community. Also, a lot of communities have listed issues that people can stop by and work on. Even if it was virtual, it was very productive. I had a lot of fun meeting new people, so I encourage you to look at events like Write the Docs Writing Day, or you may have other local meetups that have a lot of technical writers attending in your local area. I really encourage you to take a look at it and have fun with that. 
I think I'm going to turn things over to you, Sofia, and then let's talk about your events in LFN and help wrap things up. Thank you, Ray. That was very interesting. Talking about events and how you can encourage your community engagement, as I mentioned earlier, we do two hackathons dedicated to documentation per release cycle. This is a virtual event that runs throughout a full day, and it's basically an open Zoom call, and people can just call in with their questions or concerns, or if they just want to set aside some time for docs. As Ray said, it's all about getting together and solving issues and working on improvements, and whether you're a new person joining the community and you want to get some introduction or not, this has been really helpful in getting the documentation done. It says here it's a full day dedicated to release-related content. Of course, if we have new people that have joined the community or if we have any open issues at the moment, we do discuss and talk about them, or provide some quick introductions on how to work with documentation, perform reviews, be available for questions and give a general documentation introduction, as you can see here. Participation in this event is actually part of the release requirements, which also pushes a bit more for documentation. The dates for this event are always set with respect to code freeze, because we all know that until the code is ready, no one will care too much about documentation, so it really helped us once we pushed that out and ensured that we got the code ready, and then we have a couple of weeks where we can focus on documentation and wrapping up other things as well, of course. But this event has been very appreciated and very helpful. And with that, a quick summary, some recommendations then. Ensure that documentation is a key part of your product or project. Both Ray and I have been talking about this and I don't think that we can stress it enough. We want to see a consistent process for submitting, reviewing, and merging code and documentation. If you keep those two as close as possible, it's so much easier. Have a well-documented and easy-to-follow process for contributing to documentation. We have talked about this as well. And truly ensure that the community recognizes and understands the value of documentation. Without that, it's very difficult to convince people to work more on docs, unless they understand the importance of it. We have summarized this presentation today in a blog post on opensource.com. You have the link here at the bottom of the page, so feel free to go and check that one out. And with that, I think, Ray, we can say thank you to everyone that has been watching. And enjoy the rest of the conference. Thank you.
|
We often see many open source projects struggle with maintaining quality documentation and finding contributors who are interested in helping with project documentation. There are several reasons for this, such as many viewing documentation as a separate product from code or a belief that people will be able to make sense of what the code is doing by reading the code. For these and other reasons, documentation work is often done at the last minute and done by people with low motivation and minimum effort. So the quality of the output will naturally suffer. These issues can be addressed by ensuring that documentation is everyone’s responsibility and that documentation is a core part of the product created using the same development and community processes. Besides, documentation is often an entry point for new community members and is a great place for early contributions. When everyone in the community is actively engaged in documentation--e.g., reviewing documentation fixes--it can help provide a valuable onboarding experience for new community members. In this session, the speakers will share their experience in documentation from both foundation-based open source projects and open source software companies. There will be a discussion on how community contributions for documentation can be encouraged and how community members can apply their learnings from documentation to other areas of open source communities.
|
10.5446/52482 (DOI)
|
Hello, everyone. My name is Anna Widenius. I am Chief of Staff at the MariaDB Foundation, and today I would like to share with you some of my experiences from last year on how to organize online events in the new normal. But first about the MariaDB Foundation. We are a very small non-profit organization with 10 employees, focusing among other things on all matters related to adoption of the MariaDB Server and actively working with the MariaDB community. In the past, twice a year we would organize unconferences, small free-spirited events mostly oriented towards the developers of the MariaDB Server, consisting of some presentations and also some spontaneous joint coding sessions between the developers. They would usually attract about 50 or so persons and it was all nice and cozy, and then 2020 happened and everything changed. So yes, 2020 has been a year like no other for all of us. Many changes, many challenges, many things to adjust to, both in everyday life and at work. Personally I have worked from home for the past 20 years, and so did most of my colleagues. So at least during spring 2020 that part seemed to be quite easy. We all would say, looking at the new rules, oh well, I work from home anyway, how different can it be now? It turned out it's very different for quite many of us. Some of us for instance faced huge challenges in everyday life with the family, kids suddenly home, partner at home and so on. Social life obviously has changed too. In fact I have heard from many of my colleagues, and I also noticed myself, that until these drastic changes of 2020 happened, many of us who worked from home didn't even fully realize the huge importance of the social interactions we had with our colleagues during the occasional company meetings and conferences that we would attend a few times a year. We kind of knew it, but it turned out to be one of those things that you really need to lose to begin to properly cherish. And obviously all those who used to work in the office had a huge adjustment as well. Everyone kind of also knew that it takes more than cleaning up a table and plugging in a laptop, but again it turned out to be one of those things one needs to experience to really understand the level of challenges there. And speaking of challenges, it was around maybe March, April 2020 when we began to realize that all the tickets we bought for this year needed to be unbooked and hotels cancelled as well, and no trips and no events were happening. For real. And then the idea of organizing online events began springing up simultaneously in quite many minds. The idea seemed very clear and obvious from the start. We are working from home as usual. Other people in our community are working from home also, as usual or as unusual, but still they are there and should be able to watch online events from their sofas to keep learning and interacting and so on. So we can save time on travel, we can avoid jet lag, we can save a lot of costs for hotels, flights, food and generally minimize our carbon footprint that way. So, all great, what's not to love, let's do it. Then we started thinking deeper and deeper on what it is we actually have a chance to achieve. The first realization we had was that the new normal provides us with the possibility to expand our audience. I said previously we worked mostly with the developers of the MariaDB Server, people who actually contribute to the code of MariaDB. But now we realized that we have a chance to reach out to users of MariaDB.
It was not an entirely new audience for us, but still we quickly realized that it meant several new things to consider. Once we realized that we have a chance to reach out to a wider audience, we also decided to bring a touch of formality to the event. We created actual proper calls for papers, selected a selection committee and so on. That may sound normal for experienced organizers of physical events, but with our unconference background it was something new, and we immediately saw the advantages of this. Working with presenters in such a way from the start helped us to give the whole event the structure it needed. At some point early in the organizing process I had this interesting moment of clarity when I was looking at one of the MariaDB mailing lists. There were some 2000 or so persons there and I was thinking, I think I know everyone there. That line of thinking brought a realization that now we have a chance to reach out of the bubble and to a much wider audience. Having a formal process of working with the presenters was invaluable for that, because we had all the abstracts and information about the presentations available from the start. We had a possibility to consider who might be interested in the presentation, whom to contact with this information, how to tag the event on social media and so on. Having this thought as our starting point, we came to the most crucial part of the initial planning: remembering our own experiences as either presenters or attendees of events and thinking about what it is we wished organizers would do for us and what it is we want from the perfect event. So let's consider the perfect event from different angles, and first the presenters' point of view. We all know how it works in real life. You come onto the stage, you plug in your laptop, turn to the audience, begin speaking. In the beginning you are horribly nervous, but then probably you would be finding some footing as you keep communicating with the audience, and things get better and better. People start asking you questions, then you're really having fun, the discussion continues in the corridor, you go for a beer with your new contacts, and that's what everyone loves about physical events. But now we have corona times and all that is effectively gone. You present in front of the cold eye of the camera, there is no reaction from the audience, no laughter at your jokes, no rotten tomatoes if you make mistakes and no special chemistry or energy exchange with the audience, which is really important. So basically it's easy to lose motivation and start thinking, why bother. And the only answer is finding a way to improve interactivity despite all these challenges. And I will tell you in a moment how we approached this challenge of minimizing the lack of interactivity in the online event, and spoiler alert, our methods really work. But first let's look at the other side. What is it that attendees want from the presentations? So why do people attend a conference, in particular an online one? Obviously to learn something new. So the first important task would be to make sure they can easily identify the presentations interesting for them and navigate through them easily. And this is where all the early work with the presenters becomes so invaluable, and having a good schedule and all the information available in advance is really important. Well, it's actually the same in any physical event.
But then, when the event is online and you know you can watch it at any point in time, the question comes: why bother to tune in at the scheduled time? And the answer is, surprise surprise, again interaction. Attendees want to connect to each other and to the presenters just as much as presenters want to connect with them. So it seems to be a match made in heaven. And now we come to organizers, people who bring it all together. We have established beyond any doubt that people want to interact, and now we need to see how we can best provide possibilities for that. In real life the presenter will see the audience and feel their reaction during the presentation and answer the questions. Now it's all gone, but instead we can clone the presenter. Having the presentation pre-recorded means that the presenter can be in the chat during the presentation, answering questions and discussing. Only the beautiful cold glass of beer would be missing from the experience of the real-life events that people really like. And then, as far as questions and answers are concerned, at least our experience has been that this is the part where it makes sense to make an effort arranging for a live session. During our MariaDB ServerFest in September 2020 we had a combination of pre-planned questions to ask the presenter and also the most interesting questions from the audience that our host asked. Of course there is the matter of a good internet connection to consider here. One of the most unforgettable experiences from the first day of our ServerFest was when we were chatting with the presenter just a few minutes before we were about to go live, and then his internet connection died out. Luckily there was a possibility for a backup, so our founder Michael Widenius was dragged out of the shower, placed in front of the computer and asked questions straight away, so it was the most spontaneous interview ever with him. So lesson learned: always have a plan B where the live connection is concerned, but when possible give it a chance, because nothing makes you feel, frankly, so alive. Having a pre-recorded session gave us of course another idea. Why not make this event time zone friendly? So we divided our planet into three parts and named them New York for the Americas, Paris for Europe, the Middle East and Africa, and Singapore/Beijing for Asia, and we broadcast our presentations at the times convenient for each area, so each presentation was broadcast three times. We realized that since our presenters come from different time zones, it might not be very convenient for them to participate in those three streams, but it actually turned out that with some planning it's possible to create such a set of schedules that almost everyone could take part in at least two or three sessions. What we didn't quite count on was the strain it would put on our team. The first event that we created in this format, the MariaDB ServerFest in September 2020, stretched over a whole week with each stream being five hours long, and so there were several days when we began streaming in the morning in Paris and then in the evening in New York, and then the next day suddenly there was a Beijing stream, so it was a huge challenge, primarily for our video streaming wizard Alexander Morozov and our host Kaj Arnö, but they concluded, and we all concluded, that it was worth it, largely based on the reaction from the audience.
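As a small aside on the time-zone-friendly scheduling just described, here is a minimal sketch of how one broadcast slot per region can be translated into a presenter's local time, using Python's standard zoneinfo module. The stream times and the presenter's location are invented for illustration and are not the actual ServerFest schedule.

```python
"""Sketch: show what one broadcast slot per region means in a presenter's
local time. Times and locations below are made up for illustration."""
from datetime import datetime
from zoneinfo import ZoneInfo

# One broadcast of the same pre-recorded content per region.
streams = {
    "New York": datetime(2020, 9, 14, 10, 0, tzinfo=ZoneInfo("America/New_York")),
    "Paris": datetime(2020, 9, 14, 10, 0, tzinfo=ZoneInfo("Europe/Paris")),
    "Singapore/Beijing": datetime(2020, 9, 14, 10, 0, tzinfo=ZoneInfo("Asia/Singapore")),
}

presenter_tz = ZoneInfo("Europe/Helsinki")  # example presenter location

for region, start in streams.items():
    local = start.astimezone(presenter_tz)
    print(f"{region:18} stream starts {local:%Y-%m-%d %H:%M} presenter time")
```

Something like this makes it quick to check which of the three streams a given presenter can realistically join live for questions and answers.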
So let's look some more at what makes this whole thing tick. Presenters, of course, because ultimately it's very much about the content and about the presenters, and presenters do love to present. Obviously the very reason people submit their papers for your event is that they want to be there. They work hard, they create something they can be proud of and they want to share the results of their work with the public, and they also of course want to hear back from the public: feedback, praise, critique, comments; they want to make new contacts and so on. Sounds like a win-win, but presenters also hate to present. There is a day job, there is normal life, what we just talked about, kids and so on, to take care of. Of course there is the additional stress of coping with the COVID situation, and when one finally finds time and peace to work on the recording, it turns out it's very different from what one is used to at real-life events. First of all, it turns out that no matter how brilliant you are in your main line of work and how well you know the subject, and no matter how many selfies you took or even TikTok videos, it's not guaranteed that you know how to set up video equipment in order to record your presentation. Second, it's one thing to come on stage, deliver your presentation looking into the eyes of the interested audience and then concentrate on discussing with your peers, but it's a completely different thing to first stare into the cold lens, wondering where to put your hands, what to do with your eyes, where to look, how to discreetly check your slides without looking too obvious, and then there is the shock of watching what you have recorded and coping with the fact that this is how you look and this is how you sound. This is how many times you stumble and search for words and so on; it can be quite a shock for many people. I know it was a shock for me for sure. The pressure to re-record can be quite overwhelming, and then there is no time to do it and so on, so it can become quite a vicious circle that needs breaking, and that's again where organizers come into the picture. So how can we help as organizers to make the recording process as smooth as possible for the presenters? Well, as organizers you very quickly learn how everything has to be set up, so the first advice I have is: don't assume that everybody knows it. It might be your main work for the moment, but presenters just don't know the stuff, so it's really important to create the kind of instructions that make sense to your presenters, and for that you need to sit down with whoever takes care of video editing, processing and preparing the stream and learn everything possible about their process of working. Then document it and turn it into very short, very clear instructions, because many presenters simply don't have time to go into the details; they just want to be told what to do and get on with doing it as easily as possible, and then the main load falls on the video editing team to put the whole presentation together and make it suitable for the stream. But then of course some people do like to know why they need to do things in a certain way, so we came up with this process where we first had a short list of instructions, but then also an appendix explaining in detail how our streaming process works and why we ask certain things of people.
And then, once you have shared those instructions with the presenters, I found that it's really helpful to get in touch with each of them personally and make sure that the instructions are clear. We actually set up a messenger group with quite many of the presenters, first encouraging them to record a very short session for us so we could give feedback, and then going ahead with the main thing. That really helped. What else to consider? One thing is arranging for subtitles. We all have accents. Yes, even the native speakers have accents. Everyone has an accent from someone else's point of view, and having subtitles makes it so much easier to follow the presentation. But what does it take to prepare subtitles for every single presentation? Well, obviously it takes a lot of work. Our current conclusion is that it would be ideal if presenters could provide a full transcript of their presentation, and we encourage them to do so, but it's not possible for many people. So yes, it's a huge effort to make subtitles happen, but we consider it worthwhile, because you need to consider that YouTube will keep recordings of your events forever, or at least as long as you want to, and it means that once you have this work done, a lot of people will be able to take maximum advantage of the great content presenters provide in their presentations. Then there is the matter of keeping everything together. During our work on the MariaDB ServerFest, my official title inside the company changed from cheerleader to cheerleader and cat herder, because trying to keep the deadlines and making sure all the presenters follow the rules is akin to trying to keep a litter of kittens in a basket. There is always this one last participant who breaks all the deadlines, who also happens to have great content and maybe is also a very important member of the community altogether, and you really somehow want his presentation to be there even if he misses all the deadlines, and you nag and plead and threaten and beg, but at some point there is just not enough time. So what to do? Well, obviously there is no universal solution, but my observation has been that assuming bona fides is a really good idea. Usually people just don't seem to understand the challenge the video recording and streaming team faces, so they just think, like they would do in normal life, that it's okay to come up with a presentation at the last moment and it will all just go smoothly onto the internet, but since that's not the case, people need to hear about it. So yes, obviously it's all about the team. I consider us incredibly lucky in the MariaDB Foundation to have such great people to work with. People who are prepared to work very late. We had amazing after-midnight sessions during our work on different events, and especially I would like to name Alexander Morozov, who is our video editor and streaming master in St. Petersburg; we usually call him Team Morozov. It's only one person, but since the early days of our work on the presentations it was noted by many people that he does the work of several, and then all the other guys are doing a great, great job, so I'm really happy with what we have in the foundation. Then of course I must thank the presenters for their work.
The first event that we ran in the format I just described in 2020 featured 35 presentations by 30 presenters, and we ran each of them three times for the different time zones, so it was a lot of work of course for our fantastic presenters as well, and having great content and dedicated people around is what makes this whole event worthwhile. So after organizing this big event in September, we went almost straight away into a small MariaDB Server MiniFest just a couple of months later, and now we are planning to go on with these ServerFests or MiniFests every few months in 2021, because there is so much great content to share with busy people and so many great presenters to work with. And obviously thank you to our audience. Our first online event in September 2020 was such a great success. As I said previously, we had only experienced organizing small physical events for 50 or so participants, so watching our events unfold in front of our eyes on YouTube and seeing numbers like 10,000 unique views, and then even more views after, felt absolutely amazing, and it was particularly awesome to get feedback from people, especially when something that we envisioned as what people might like, like the subtitles, got lots and lots of positive comments exactly about that. That felt really, really good and gave us motivation to continue. And this is all I would like to share with you today, so please ask me questions now.
|
In the MariaDB Foundation we have been trying to respond to the challenges of the "new normal" by organising a series of online events that we called MariaDB ServerFest. Taking a step from holding Unconferences for 50+ participants to an online event for thousands of viewers is a challenging and exciting learning process. In this lightning talk I would like to share some war stories, ideas and experiences from this journey.
|
10.5446/52483 (DOI)
|
Okay, good afternoon everybody. I think you are seeing this probably in the afternoon at FOSDEM. That said, this is recorded, so you can watch it whenever you want. Thank you very much for joining us today in this online conference where we used to be all together in Brussels. We are going to miss the waffles a lot. I will probably cook some of them for later. In my talk today I would like to talk about open source, I don't know if you have heard about this concept, but I will also talk about other topics. I am going to minimize myself in the video so you don't need to see me that big anymore. I would like to focus on at least four key topics. The people that develop, maintain and contribute to open source at some point. I will talk of organizations' open source ecosystems; I think it is a concept I would like to highlight during this talk. Also about open source program offices and their important role in helping sustainability. One of the key points here about this, how to help on sustainability, is the way of the maintainer. So let's start. The motivation for this talk started, I think it was a couple of months ago, when I was preparing a talk for Open Core Summit and I was analyzing — I work at Bitergia, we provide analytics about open source development and how companies develop software in general — I was analyzing the Commercial Open Source Software Company Index. I don't know if you know about this, but it's basically like 45, 50 software companies, companies that develop open source software and provide it as a service. These are the companies that have like $100 million revenue per year at least. So that's a significant amount of money. That means that you can make money doing open source software development and even providing services on top of that. So those companies have in total around 70,000 employees, and according to the numbers from this index they are generating around $80 billion per year in revenue. So that's in total for these 45, 50 companies; that's huge, I would say. But okay, going to who are the people that are maintaining this code, how active they are, what's the level of dependency these companies have on the maintainers. It was interesting to see, okay, these are the top 50, and you can see that there are some identities, I would say, that are doing all this amazing work on top of everything, but they are mostly bots actually. They are bots, but, but, but there is one human there, and you see that the amount of work this person has done is above any average of the rest of the project. And this is compared to, again, around 50 companies, companies that are getting $100 million or even more in revenue. So they are probably depending on the work of this person, and I would say don't bother him on Thursday. I know who it is, because I know him, and also the company and what he is working on, but that's not the matter here; for me the key point was this.
Open source depends that much — and these companies depend that much — on the work of one person. Software development in companies, at the end of the day, is about people, and that is very important. If we look at how software is developed in general, we have development activities like writing code; we have deployment — nowadays there are more automatic ways to deploy things, but at the end of the day there are still people involved; and we have maintenance — people checking issues, solving issues, reporting bugs — all of this is also done by people. And in all of these steps you need some kind of collaboration in one sense or another; in open source it is one of the key points for developing the software at all. So again: people. From my point of view — and I think Toby also said this last year here in the community devroom — people are the limiting resource in the software development ecosystem, not only in open source but in software development in general. So whether we are managers in a company, community members, or companies that want to keep developing software, we should be aware of the people we are involved with and working with. Coming to this idea of the ecosystem: when I think about what an organization's open source ecosystem is, it is not just the software the organization develops. I see at least three layers in how this ecosystem can be defined. First of all, all the projects the organization is consuming. There is a bunch of such projects, and I would say that nowadays I don't know a single company that doesn't consume or use open source at some level to build its own stack. Some of their developers may be using packages that are out there, but they may also be paying other companies for support in using open source — so it is not only a matter of taking and not giving back; they are also paying for it at some point.
But in addition to this, there are also companies that release their own open source projects — because they want to attract talent, because they want to drive awareness about the things they are doing, because they want to improve their technical staff; there are many reasons, and you can ask many companies what the benefits of releasing and contributing to open source are. And the last layer of the ecosystem is all the projects that the company contributes to — basically all the projects you participate in as a company because you want to drive them, because you want to push them to work in a certain way, or because you want to retain talent in your company. It has been said that by allowing people to contribute to open source, the people in your organization are more willing to keep working for you, and you know how important it is to retain talent nowadays. It can also increase the company's tech footprint in the open source ecosystem, or help it be seen as a good open source citizen. So all of these projects are your open source ecosystem. You could say that's just a bunch of projects — but how complex can this ecosystem become? Here is a very rough analysis we did about Mozilla — I think it is already written up, and you can find it online as "Mozilla and the Rebel Alliance" — and this covers only the open source projects the Mozilla organization releases. What you see there are four or five big projects with a lot of stars clustered around them. Those stars are actually people, and depending on the color you can see how active they are in the project: the core contributors, the people doing 80% of the contributions; the regular contributors doing 15% of the contributions; or the casual contributors doing 5% of the contributions. I think the most challenging thing you should be aware of is that many people are not working on a single project: they are working on several, and they even move from one project to another. So your ecosystem starts to look more complex, because all these relationships matter. You might decide that a project is not relevant for you anymore and stop contributing to or supporting it — but the people there could be involved in other projects, and they might think, "if the company is not supporting this project, I am not going to be interested in keeping up my work on the others." Or key people who have been working on very important projects in your company start contributing to other projects, and you as a company should be aware of why these key people are contributing to that other technology — perhaps because it is something that could be interesting for you. So at some point you should be taking care of that as well. And last but not least, when we are talking about consuming, releasing and contributing to open source software: that can be a lot of fun, but if you are doing it without any care it can be very risky — like in the picture, just jumping into the air for the fun of it; if you are not taking the right measures or equipment with you, you could be facing risky situations. I am going to talk about just three specific topics that I think are very important. The first one is legal: if you are not careful or aware about licensing, IP and things
like that, you could be facing some important problems. I am not going to talk here about security, which can be somewhat related to this legal and compliance side — using projects that are secure or mature enough. On a similar level, there are things related to people management: if I want to keep my people engaged with the projects and with the company, allowing them to contribute to open source has been said to be good, and having open source projects is also good for attracting talent; but then I need to identify all these people, and that puts a people-management dimension into this relationship with my open source ecosystem. And last but not least — usually the first thing people think about when they talk about open source — there is the engineering side: developing things, contributing, improving the tooling and all of that. Of course that is one of the key points, but I have left it to the end because, for me, all three layers are important: you need to manage your relationship with your ecosystem from the legal point of view, from the people point of view, and from the engineering point of view. And as you can imagine, these are three very different skill sets and very different mindsets that you need to bring together. So how can we do that? This is where the open source program office — or open source programs office, however you want to call it; I usually call it an OSPO — has become an important role in several organizations. I recommend you read the TODO Group guides on open source program offices; you can read there that an OSPO is just a designated place where open source is supported, nurtured, shared, explained and grown inside a company. In other words, from my point of view — in my humble opinion — an OSPO can play an important role in ensuring ecosystem sustainability, because at some level, since you are consuming and contributing — from the legal, people and engineering points of view, as I said before — you are depending on that ecosystem. Having an office, a set of people in the company, that makes sure everything is aligned with the ecosystem you are surrounded by is, I think, how both you and your ecosystem, and the whole environment, stay secure and grow in a very natural way.
So the first thing, for me, is key from the OSPO point of view: you need to understand your open source software ecosystem. You need to know which projects you are contributing to. I have been talking with companies that don't even know where their people are contributing, or how many open source projects — how much open source — is being used in the company. Some of you might be thinking, "you are a software company, you should be aware of that," but nowadays — and I am not talking about 2020 and all this digital-transformation hype — all companies are becoming software companies at some point. If they are a software company, for sure they are using open source somewhere, and if developing software is not their main business, some people in the company might not be aware of how important open source is for them. So they don't even know how much open source they are using, and then they cannot be aware of the risks of using open source without care, right? Of course you also need to understand who the key contributors and key maintainers of these projects are: who do I need to talk to if there is an issue, if I need to hire someone, or if I need to identify the talent inside my company that is contributing to these projects? I mean, I need to understand all of these things. It has been said that without data you are just a person with an opinion, and we have seen this whole big-data thing applied to marketing and sales for a long time — why not also analyze open source? This is what Bitergia does, there are more and more people doing it nowadays, and you have a bunch of open source and free software tools to use. I will just mention a few of them. There is SW360, mainly developed by Siemens and a bunch of European companies, which is now part of the Eclipse Foundation set of projects. There is Open Hub — you probably remember it from years ago — where you can check projects' activity. You also have Cauldron, which we presented here in Brussels one year ago, a SaaS tool where you can go and analyze open source projects quite easily. And you have GrimoireLab, one of the projects Bitergia has been participating in, which is now part of the CHAOSS community — CHAOSS stands for Community Health Analytics for Open Source Software — so there you have definitions of metrics for the health of an open source project as well as open source tools to measure that health (a short code sketch illustrating this kind of analysis follows this transcript). So you already have a bunch of tools to deal with all of this. Another thought: now that you know what your ecosystem looks like, one of the first things people think is, "let's fund or sponsor these projects to make sure they have some kind of financial sustainability." That is a good idea, and I recommend you check out the options that are out there for doing it. But I would like to highlight one thing: please be careful with tip jars, because tip jars do not create a sustainable way of living for maintainers and contributors — if that is all you offer, they usually need another salary or income somewhere else. So a tip jar is nice, but think about other ways to support this. There are other ways, and one that I would like to comment on
today, or rather to highlight, is this idea of promoting the way of the maintainer. You know I am a Mandalorian fan, so I like to put it this way: this is the way. Most of the people in your company start as users — so why not become active users rather than just users of the software? If you find an issue, or something you don't understand, feel free to go to the forums or to the issue tracking system the project uses — GitHub, GitLab, whatever — and ask questions: "here I am, I'm using this, I don't know how this works, I have this issue, I found this interesting." Start by asking. Once you get comfortable with the product and start down this path, there are probably people even in your organization who are able to contribute back — to become contributors. Maybe you are able to solve some of the issues you have seen in the forum; by going there you realize, "oh, I actually know how to solve this." You probably don't know all the answers, but if you do know an answer, please provide it and reply to questions. If you have some technical knowledge, you can even send patches to try to fix some of those issues or improve the project somehow. Of course that requires code review, and some people will probably argue that you should be doing it another way; you need to test things, and you will learn through this process. If you continue along this path — and not all the people in your company are going to do all of this — you can reach the point where you become a maintainer, because you are able to review and accept patches, and the community recognizes you as one of those people and your organization as one of the organizations involved. You can even commit to the main branch, drive the project in the direction you would like, and participate in the roadmap and governance. I think this whole path also helps the sustainability of the projects, because it is not just a matter of keeping the current maintainers as they are, but of renewing the maintainers and bringing new blood into the maintainer group. One more point, also related to this idea of sponsoring projects: something I have seen a lot is the attitude of "I can take the project and customize it by myself, I don't need anyone's help, because I am a big corporation and, like the Empire, I can do this on my own." OK, feel free — but pay attention to what is going on in your ecosystem, because there might be companies and individuals providing consulting, customization and even support services on those projects, and hiring these bounty hunters, I would say, is an option, and a very nice one, because these are people who are already thinking about what their business model could be, how they can make a living from this software development. They are not going to make a living from tip jars, but if people hire them for specific things — consulting and customization, as I said — that is a way to make a living. And that is everything from my side. You are more than welcome to ask questions, so please let me know if you have any and I will answer them. Thank you.
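The talk above points to GrimoireLab, Cauldron and the CHAOSS metrics as ways to understand who the key contributors and maintainers in your ecosystem are. As a purely illustrative sketch of that kind of analysis — not something shown in the talk — the snippet below uses Perceval, GrimoireLab's data-retrieval component, to count commit authors in a single repository. The repository URL and local mirror path are placeholders, and the field names follow Perceval's git backend as documented; adapt them to the projects in your own ecosystem.

from collections import Counter
from perceval.backends.core.git import Git

# Placeholder repository; point this at a project from your own ecosystem.
REPO_URL = "https://github.com/chaoss/grimoirelab-perceval.git"
MIRROR_PATH = "/tmp/perceval-mirror.git"  # local clone Perceval maintains for fetching

repo = Git(uri=REPO_URL, gitpath=MIRROR_PATH)

# fetch() yields one dictionary per commit; the 'Author' field is "Name <email>".
authors = Counter(item["data"]["Author"] for item in repo.fetch())

# A rough view of who carries the project: top committers by commit count.
for author, count in authors.most_common(10):
    print(f"{count:5d}  {author}")

Counting commits per author is only a first approximation of "who carries the project"; the CHAOSS metrics the speaker mentions go further (activity over time, bus factor, organizational diversity), and GrimoireLab's higher-level tooling automates that at ecosystem scale.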
|
Open source is becoming the main ingredient of companies' success. To achieve it, companies need to manage their relationship with open source projects efficiently, and that is the main goal of a company's Open Source Program Office (OSPO). So OSPOs are key to a company's success, but they could also be very important for the sustainability of open source projects. During this talk, you will learn about the responsibilities and benefits of having an OSPO in your organization; why companies should adapt to open source and adopt an OSPO; how it makes a difference to have a team responsible for viewing, managing, and making critical decisions about contributions back to open source projects, and for providing oversight of open source initiatives in their company; and where they should start. We will also give real examples of how companies are doing this today and their impact on the community. Additionally, you will learn about communities and initiatives that help you run a successful OSPO, like the OpenChain Project, the TODO Group and CHAOSS; the importance of CHAOSS in providing actual data and insights about open source projects and a bigger perspective through analytics; and how data and metrics from the OSPO can help companies tackle their corporate strategy.
|
10.5446/52484 (DOI)
|
you Hi, my name is Don Goodman Wilson. I want to talk today about the way that we define open source. I want to think about basis for the definitions that we use and the different kinds of definitions that are out there. I want to talk about our insistence on defining open source in terms of licenses when we think about it explicitly and how this is at odds at the way that we use the term when we're not thinking about it explicitly. I want to argue there's a better way for us to define open source that gets us away from strictly licensing and focus on what's more important with open source, specifically the community. So who am I? As I said, my name is Don Goodman Wilson. I'm an ethical tech activist. I'm the director of katsu-don.tech, a developer relations consultancy that helps companies build compelling developer experiences. I'm on the board of the Maintainerati Foundation and I'm a member of the Ethical Source Working Group where we work towards defining what open source could be in the future. So all of this got started a few months ago when this tweet showed up in my Twitter feed. Defold is a game engine. They announced that they were open sourcing this game engine on Twitter and released a source code into GitHub. And the response was swift and predictable because of course it was not released with an OSI approved license. The responses came thick and heavy. As I stated previously, I think it's extremely important that game engines allow source access and the move is always welcome, but this is not open source by the standard OSI definition. No, it's not open source. It's not open source. It's not free software. Please don't use those terms where it's not applicable. In fact, somebody even opened an issue. Defold has since today referred to itself as open source. This is incorrect. As per the OSI's definition of open source software, this software is not open source due to the clause disallowing commercialization of a game engine product based on the engine's code. I find this kind of militant piling on a curious phenomenon. Why do we insist on defining open source strictly in terms of the licenses? Why do we defer to the OSI and who gave them the authority to make this kind of definition? To get to the bottom of this, I want to think a little bit more about the way that our words are defined generally speaking. In general, there are two sources of definitions for words broadly speaking. First is institutional. Institutional definitions are driven, as the name suggests, by institutions. If you are an English speaker, frequently this may be by the Oxford dictionary. The French language has its own government entity that regulates what is and is not French, what is good and bad grammar, what words count as French and which words don't. In this case, when we're talking about open source, we defer to the open source initiative as the institution that gets to define what open source is. In general, what's interesting about institutional definitions is that they're centralized, they're standardized, which is very good. We would like our definitions to be standardized. But the authority for definition often comes by fiat. It is simply claimed by one or more organizations who have maybe legitimate, maybe not a legitimate claim to power and authority. The other source of definitions are colloquial. They're how we use words in practice. 
Urban dictionary is a great collection of how we use words in practice and to give a sense of what many of the kinds of words that pop up in common parlance mean. Many words are actually defined colloquially before they become part of an institutionalized part of language. And open source is no exception here. There are colloquial definitions of open source that are based more, as I'm going to argue, on facets of community. The source of authority for colloquial definitions is actually, well, quite different. The authority is the community of users of the word. People who use the word get to define what it means. And their claim to authority here isn't the fact that they are using it, communicating with it successfully. So an open source, as I said, it's the same thing. We have the open source initiatives de facto definition, which as I will argue is a reflection of business needs specifically. And we have the colloquial usage, which reflects the community practice. Now, some of you may recognize what I'm talking about here. This is a long debate in linguistics between prescriptivist and descriptivist definitions of words. This talk is essentially an argument for descriptivist view of how we define our terms in open source. This isn't to say the two sources can't be in agreement. But I want to claim that legitimate authority for defining words comes from those who use the word in practice, from those in the community who are successful using the word to communicate. And that in fact, we shouldn't be letting our institutions define our terms for us because when we do that, we're ceding authority to those institutions to not just define our words, but to define the framing that's possible, the narratives that are allowed. They control the way that we're allowed to think about the work that we do. And I don't think that's really in the spirit of open source. Open source is ground, from the ground, it's bottom up. It represents democratic and consensus based work. And the way that we use our word should be a reflection of those values that we have in open source. Why should we care? So open source projects thrive when the people working on them thrive. The way that we define open source is shaping the way that we run our open source programs. It shapes a discourse that we have around those programs. And it shapes the way that we encourage others to behave, right, as in the case with default. And it shapes the kind of communities that we build. And if we want to have healthy communities, then we need to think more carefully about the words that we use to define those communities. Because the way that we define this term is a reflection of our values. And we do value, I think, democracy and consensus rather than centralized corporate power. And to the extent that we do, then we should probably reflect carefully on why we adhere so strictly to the OSI definition of open source. So how do we currently build open source communities? Let's take a moment to examine some of the materials available to us when putting together a new open source community. And I want to highlight some of the things that are made priorities for people new to this, to new to building these kinds of communities, because I think it's kind of curious. So here's the GitHub checklist. This is taken from github.com. This is displayed to users when they create a new project. It's an easy checklist for them to go through. It's all fairly straightforward. We're going to go through this just a little bit. 
But for those of us who have built successful communities before, there's some interesting things missing. Right. So the first thing is have a license file with an open source license, basic documentation and easy to remember name. The issue queue is up to date. Consistent code conventions and naming conventions. The code as it exists is clearly commented. The legal department is involved. You have a marketing plan. Someone is committed to managing community. It's interesting that the community manager comes very last on the list. But what comes first on the list is having a license file. Why is this the first and ostensibly therefore the most important consideration if we think that community is actually at the core of what open source is? On top of that, we have the received wisdom that our metrics should focus around adoption. The more is better. The more people using the code, the better. Right. But this is not really a measure of community health. Having a large community does not mean that you have a healthy community. Having more people building your software does not mean that you have a better community. Right. So I want to ask why do we emphasize license file? Why do we emphasize adoption rate when starting new open source projects? If we know that these are not really key elements to building a healthy thriving community. And I think the story starts here with the open source initiative. The open source initiative has their own definition for what open source should be. And it's a definition that people are thinking about and referring to when they criticize organizations like Defold for calling their software open source. And I think although we're all familiar, I think with the OSD, it's worth having a look at. So the open source definition has 10 clauses to it. The software must be unrestricted in terms of redistribution. The source code must be included. Derived works are allowed. The integrity of the author's source code is important. It can't discriminate against persons or groups or fields of endeavor. The license must pass through to other people who receive it. The license must not be specific to a product or strict order. Software must be technology neutral. These are all features of licenses. This is not a feature of software. These are features of communities. This is all features of software licenses. This is a very narrow view of what open source is and can be. This view is designed to do one thing and one thing only. And that's to make businesses happy. In open sources, Bruce Perrin says that he and Raymond worried that talk of free software was stifling the development of Linux in the business world. This is a quote. The OSI was conceived as a program to market the free software concept to people who wore ties. I want to let that sink in for just a moment. This is not bad. This is not a problem in and of itself. That's a very specific goal. That goal is somewhat orthogonal to the kinds of issues that we face as open source maintainers who are trying to recruit contributors and maintain a healthy community and ensure that we're building software that is useful and helpful to others. These are strictly business concerns and they're concerned in particular with removing risk. Free software licenses started as a hack of United States copyright law to achieve specific political goals. Contracts like licenses are a tool that businesses use to remove risk from transactions. They're kind of contract. 
The OSI saw that software licenses could be an effective tool for marketing open source to business because there was risk that needed to be removed. So let's define features of contracts that sufficiently remove that risk to make open source interesting to businesses. Business is risk averse. It doesn't want to take on unnecessary liability. And it relies heavily on legal contracts that lay out the obligations and responsibilities of all parties and explains what happens when those obligations aren't met. I deal in contracts day in and day out. I'm sure many of you do too. You know the sorts of features that go into these contracts. So open source was a risk to businesses because the obligations and consequences of adopting open source or contributing to open source were unclear. And the GPL itself created undue or poorly understood liabilities at the time that the OSI sought to mitigate not necessarily through changing the GPL but to making them clear to businesses. There's also an interest in the OSI's part in demonstrating how open source reduces costs which is another important consideration for businesses. So the financial benefits of open source are very clear to all of us. They reduce the cost by spreading R&D among partner companies. They allow you to recruit a community volunteer workforce. On top of that there's improved quality in software. Many eyes make all bugs shallow. There's faster time to ship. You can make better use of off the shelf software. There's improved interoperability with open standards. And all of these are true. These are features of open source that are appealing to businesses. But where is the community in all of this? And also what authority does OSD have as a definition for open source? I think it has a tremendous authority as a definition of a set of licenses that businesses might find attractive. But in terms of defining how we structure our communities and the way that we distribute our code and the way that we work together, which I think are, and I'll get to this in a moment, the defining characteristics, the truly defining characteristics of open source. OSD has no authority beyond what we choose to give it. And if it doesn't work for us, we're free to abandon it. We're not tied to it. If we are tied to it, we've tied ourselves to it. I'm going to argue that we should do just that. We should untie ourselves from this because it's a free choice. So let's talk about open source usage in practice, right? Because our goal in building open source communities is not on removing risk from business transactions. I mean, that might be a priority, but it's not our final goal. It's something that's a tool to achieve this bigger goal that we're interested in. So we should really be interested in how open source is defined in practice, right? Open source is collaborative, right? That was one of the key features of Linux kernel development was that it pulled people in from all over the world to work on it together in larger numbers than many software projects had ever seen before. It's open to participation. It's not a closed off group in a sealed room in the basement of a building. Theoretically, open source communities are open to anybody who has a healthy contribution to add. They're community driven. A good open source project is not necessarily driven by the needs of a single business entity, for example. In fact, we look at projects that are driven like that as being open source, faux open source, right? 
There's certainly a certain amount of distrust that comes up when we see projects that are not driven by the community. This is why many successful corporate open source projects have been moved into foundations, have been moved to bottom up decision making process and divorced from the corporate entities that spawned them. And these whatever license best serves these purposes, right? A license is a tool that we use to achieve other goals. It is not the defining aspect of what we do, right? Carpentry is not defined by what qualifies as a hammer. Neither should open source be defined by what qualifies as a license. I don't think that the OSI's definition is at all capable of capturing any of these aspects of open source projects. And I think that's a problem because I think these really are the defining aspects of what open source is. When we talk about open source, these are the things that come to mind, right? Consider open washing and the outrage that accusations of open washing generate. Open washing is when a company or program attempts to co-op the trappings of open source for its own benefit without actually being open. While there are many ways to open wash a project, one of the prominent ways is, as in the case of eucalyptus software, which was accused of open washing, to provide software under an OSI approved license, but without accepting patches or community interaction back. Now, this form of open washing, strictly speaking, is fully compatible with the open source definition as provided by the OSI. The OSD says nothing about an obligation to receive work from others to perform the work in the open. It only specifies the license that software has to be released under. Nevertheless, cases like eucalyptus software generate outrage precisely because they flaunt the community aspect of open source work. Many people call this open source in name only, and I think they're right. It's not open source, even though it does use an OSI approved license, precisely because it rejects the very community foundations that make open source interesting in the first place. So, what do we do? Well, I want to propose that the ethical source definition might be a viable alternative for looking at open source projects and deciding whether or not they are truly open source, independent of the license that they're using, because I think the ethical source definition captures many of these community driven aspects, and more. So, let's go through the ethical source definition and see how this applies to these community driven features of open source. It's worth considering here that the aspects, the clauses of the ethical source definition don't constitute a bright line. If you meet them all in your ethical source, if you don't meet them all, you're not. This is a more of a continuum, right? This is more of a aspirational things that a project should strive for and ideally will achieve. But as long as the project is working towards these kinds of goals, they're going to be much healthier for it and have a much healthier community for it. So, let's look at these in some detail. And the very first clause of the ESD, the ethical source definition, is that the software should benefit the commons. Now, this is a very broad, this is in fact the broadest clause in the entire definition, and it's a little fuzzy and a little vague, but it's like that on purpose. 
Now, one of the aspects of benefiting the commons means that, well, it needs to be part of the commons, and being part of the commons means that it has to be released in a way that doesn't prohibit modification, it doesn't prohibit derivative works, and it doesn't prohibit interoperability with other things. And the general spirit of the OSD, of source available, or other commons distribution methods, right, because there are plenty of other kinds of digital commons that don't rely specifically on OSI approved licenses. And although licenses are an aspect of being put into the commons, it's worth emphasizing here that the interest is in getting the software in the hands of other people rather than on de-risking business transactions. De-risking business transactions are not a concern here at all, in fact. What's important is that you aren't hoarding things to yourself and keeping them to yourself, right, but that you're giving things to other people. And the mechanism by which you do that is much less important than the fact that you are doing that. By the same token, benefiting the commons means not causing harm, right, so it's very important that you think about the kinds of ways that the software that you're creating really are a benefit to the commons, and you're not just throwing it over the wall, right, that you're not creating harms. This also means thinking about the end users of your software. And we're going to touch on more aspects of that in a moment. This is not something that's generally encouraged by something like the OSI's definition, right. There is actually, in fact, no consideration of anybody beyond the other developers who will be accessing the software. But we want to encourage, with the ethical source definition, thinking beyond just that constituency to the entire range of people that might be affected by the software. So the next clauses are much more specific about what this means, and essentially are drawing out different aspects of what it means to benefit the commons. So ethical source must be created in the open, right. The source code must be publicly available, developed and maintained in public view, and it must welcome public contributions subject to the review and approval of the maintainers, right. So we want to rule out the idea that you can just write some software, put it on a website somewhere with an OSI approved license, and that's it, you're done, right, because there's far, far more to being truly open sources we know than just throwing it over the wall. And so this is a requirement that it be created in the open to be an ethical source project. The software should have a welcoming and just community. And what this means is that the, the maintainers and contributors must publish clear rules for project governance and adopt a comprehensive code of conduct, and moreover that it is consistently and fairly enforced, right. Having a community isn't just throwing up a chat room or opening up your issues and then letting whoever come in and say whatever they like. No, we know this is the community dev room. We know that good communities require moderation, they require guidance, they require repercussions and accountability for their actions when they have bad outcomes or bad intent. These are all things that we need to think about. And successful open source communities must think about these things. And successful communities in general must think about these things. 
And insofar as open source really is about community, then open source projects that better be thinking about these things. Otherwise, it's very difficult to say that they're open source. In the sense of the term that I defined earlier, they're just mobs of people working on code. The project should put accessibility first. If the software has a user interface, it must be designed with accessibility in mind, ensuring that all the software functionality is available to everybody, including those who may rely on assistive devices. Again, it's important to think about the end users of your project, right. If you're not creating the software with the people who will actually be using it or impacted with it in mind, it's not clear why you're creating it at all in the first place. It's fine to scratch your own itch, but merely scratching your own itch in public. It's not sufficient to say that you have a community that's invested in the success of your project. For the same reason, it's important to prioritize user safety in your project, right. It must be designed with features and safeguards that minimize the risk of abuse or harm to others through the use of the software. Again, it's important to keep these users in mind. They are the reason for your software. It has to protect user privacy, right. So if you collect end user data, the software must be designed with provisions for its operators to delete it or provide it to the end user in a form that they can use. And finally, the software must encourage fair compensation, right. It's very difficult for me to think of a community that exists solely to provide benefit to corporations is truly an open source community and not just outsource cheap labor, right. The community is truly engaged with the software that they're building. That's great. But it's very important to that those who are driving commercial value from the software are providing some form of that value back. That's part of a just and perfectly normal commercial relationship that ought to exist and that frequently doesn't and causes no end of problems, strife, conflict, burnout, right. If you're burning out your maintainers, it's again very difficult to say that you have the foundation for a solid thriving community. So why do these things matter? Not all of these clauses look like they're obviously about what we cloak will you think of as open source software, but I believe that they do. And I believe that they do because they encourage open source maintainers to think about and have a focus on end users from the start. And when you're focusing on your end users from the start, it's going to shape the way that your community forms, instead of focusing on yourself instead of focusing on business needs instead of focusing on the license, focusing on potential revenue streams, or focusing on sales. You're going to encourage participation from a much broader audience, right, so you're going to have an increased diversity among your contributors. Your contributors are going to have genuine passion and drive for what you're doing because you're building something that's focused on being useful to other people. Those contributors as a result are going to be more likely to take ownership of the project and that's a really important aspect of a thriving open source community is that they feel ownership that they feel like this is something that they help build and they're proud of that. 
Because when that happens, you're going to see more organic community growth that kind of enthusiasm is infectious, right. And what helps with that too is that you're genuinely working to make the world a better place you're not just writing software for the sake of writing software you're writing software for the benefit of others who are not developers, right, or who may be developers but but further down the line you're thinking about those end users. And having that kind of goal and being able to genuinely say you have that kind of goal with a straight face is an incredibly power motivator powerful motivator for your potential community. Why is this how does this work, because it moves the project beyond pure self interest for businesses, right, it's not extractive it's providing value to a community. It's building developer goodwill and interest in your project, and it's driving sustainable engagement. When you can do these things. You're more likely to have what we look at and say is an open source project, right, one with a community that genuinely cares about the source code. Takes ownership for what they're doing drives the technical decisions, participates in a bottom up consensus driven decision processes, and is thinking about how their code is actually of use and benefit to those outside of their small inner circle of developers. So let's take away from all of this. Put your community first. Licenses are important, but they're not at the core of open source. They're not the most important thing of open source. They're one of many tools that we have for building open source, but just one of many. They're not even the most important. They're far from the most important. In fact, they're important, but they're, they're not the most important. Right. Instead, I want to encourage you to reach for the tools that make your community amazing, right. Think about projects in terms of why they exist, who they exist for who they benefit. And I think we're going to have a much better match with our conception of what open source is with what we actually label as open source out there in the world. Thank you. QR code will take you to a link that includes resources, links and citations for the different things that I presented in this talk. And you can download the slides there as well. So I think we are entering the Q and a period here. I'm just going to talk for a few minutes until somebody says that. That we've got this. We have a bunch of questions that are coming up here and the course of the chat. Unfortunately, many of the most uploaded ones are comments for me. It looks like that doesn't seem quite right. I saw some stuff that came up that was looking really quite good. So I think if we need to scroll back up through the chat here, and I'm not sure how to do that without getting a recording of myself playing over what I'm talking about. But some of the things that came up in the course of the conversation here, of course, were very focused on licenses. And I want to make sure to emphasize here that I'm not rejecting the OST as a useful tool for evaluating licenses. And I'm not saying that licenses are not unimportant, but I am saying they're not all there is to open source, that there's a great deal more to open source than just licenses. And I want us to look beyond licenses to see what else there's out there. Let's see here. Somebody says, I don't disagree with your intentions. 
Sorry, Vittorio says I don't disagree with your intentions, but your seven criteria are mostly impossible to obsess objectively. And I think that's okay. On one hand, very little in this world is open to truly objective interpretation. In fact, even licenses that go up for debate in front of the OSI are subjected to a very period of conversation around them, right? Precisely because people can have differing viewpoints on whether a given license even applies or complies with the OST itself, right? And in many of these criteria, we do have ways or we can think of ways or we can be clever about this, right? If we embrace them and we take them very seriously, there are ways to assess these kinds of things, right? We can assess whether or not a community is welcoming and open. We have tools for doing that and that largely involves having conversations with people from underrepresented groups who want to be a part of the community and whether or not they feel safe and comfortable there, right? It can be as straightforward as that. So I worry this question is actually quite disingenuous. I want to approach this, if we can, from a place of goodwill and positivity, right? Because we do think community is important. So what aspects of community are interesting and important and valuable to you beyond just the license? Maybe the seven criteria and ethical source definition are not the right ones. Maybe they're close. Maybe they're not even close at all. So I think we can offer them up as a candidate for us to have a broader conversation around to really try to get a grasp on the sense of community that we want to capture when we talk about open source. Let's see. We have a few other questions in here. I'm going to pop back into the other room and then listen to an echo of myself while I fetch these. Okay. This is actually quite difficult to do, I'm afraid. So I'm just going to, I'm going to go in here because there aren't very many uploaded messages here. Feel free to join me in the open sources more than just a license room. I think we can put a link or maybe you'll all get bumped in here automatically where we can continue this conversation offline. In the meantime, if the mod has any other questions that they think that need to come up be answered. This is surprisingly difficult to navigate. I didn't expect this. Thanks for noting that the Q&A can't join until we're finished in here and we actually still have about 10 minutes left. Maybe one of the things that I want to mention as well going through some of these comments here where there's a questions about the relevance of software freedom, whether or not I'm advocating for dropping software freedom. Now, another talks that I've given I have advocated for dropping software freedom, of course, but it's not actually what I want to emphasize in this talk quite so much, right? The reason why I'm advocating for dropping software freedom is that the software freedom is actually somewhat orthogonal to questions of community. It is relevant to questions of community insofar as you and your community are building something that could have a negative impact for other people down the road. That's something that you need to have in mind to consider as part of a successful community, I think. And I don't think that's going to apply in all cases. But there are, I guess, I want to emphasize there are much broader class of licenses that don't include software freedom in them in the same way that the OSI approved licenses do. 
Nevertheless, do facilitate community interaction, community engagement, and are consistent with the kinds of principles that I discussed as part of the ethical source definition, right? And really, it's these features that are most important. Linus Torvalds picked the GPL for, is it the GPL for Linux, right? Precisely for what he felt were its community building benefits, right? He didn't do it for business reasons, right? He didn't choose that license because it was going to make him rich, right? He picked it because he thought that the terms of the GPL would encourage people to participate in the process. And obviously, he was correct, right? There are definitely aspects of that that do encourage people to participate in that. And there are aspects of OSI approved licenses other than the GPL that do encourage people to participate. You know, conversely, other licenses like the SSPL, for example, the server-side public license, actually seem structured to discourage participation, right? And to the extent that, for example, the SSPL is not part of an open-source community, it is to the extent that the SSPL disincentivizes community interaction, not for any of its other business-related features, which are many, but for that reason alone, right? Now, it's because of the structure of the business that people don't want to be involved in that project, like Elastic, right? Once it's gone to SSPL, I mean, I guess we're sort of guessing a little bit of the future here, but I think it's fairly clear. So all of these concerns are certainly intertwined in an interesting and nuanced way, but it's worth our time to sort of untangle all of these threads and really understand what's going on instead of just pointing at the easy target, which is something like software freedom, right? This is a very easy thing to point out and say it either exists or doesn't, and where it doesn't, it's bad, and where it does, it's good. Like, this is not a very nuanced conversation to have when our interest is specifically in community building. Let's see what else we have here that is being upvoted or if I have been. I appreciate all the love that I'm getting in the chat, by the way. Thank you, by the way, everyone, for coming to my talk. I really appreciate having you in the room with me and generating this kind of conversation. I hope that you will join me in just a few minutes for chat and a channel where I can actually read your questions. Here's a really good one. So Kevin says that there's a fundamental assumption here that published software is either an open source project or is for business. This seems to overlook thousands or millions of pieces of software that are not meant for business and are also not intended to foster any sort of community, but would not object to a community forming if they want to. I have a lot of software like this. I have a lot of software that's released under an open source approved OSI approved license, but that has no community whatsoever, right? GitHub is littered with projects like this, although they're more without licenses at all. And I think there's an interesting question about you just don't have a community yet. Does that mean you're not open source? And I don't know. Right. It might be worth delving into the reasons why you don't have a community, right? In my case, it's because I never, I never saw one, right? I didn't. It's not that I wanted one. I didn't know what to do with the software. 
And so I've developed it in the open under, I think, an MIT license in many cases, right? And so, no, I actually would hesitate to call the software that I've released like that open source because there was never any intention of creating a community. Now, if it's just a matter of you want one, you're looking for one, you just don't have one yet, then we need to look at like what are you doing in your project to facilitate community? What are you doing to manage that community? What are you offering to a community that gives them value to participate in the first place? And if you're doing the right things, but you just don't have a community yet, like, well, we can talk about that, but that's probably open source, right? Let's see. It seems that my time is just about up here. I'm not sure if I'm going to be able to get a third party contribution. Is there, I'm not, I think, I think I'm having difficulty with this question because it feels like it's missing the point just a little bit, right? If you're worried about liability from third party contributions, then maybe you shouldn't have released it into the open in the first place. Like, what do you, as a business, what do you aim to get out of releasing a project into the open and building a community around that project? Right. And that's a really important question you have to ask yourself regardless of what do you believe anything else that I've said or not, right? This is just the fundamentals of building an open source program. And if you're concerned that creating an open source program like this is opening you up to liabilities, then maybe you need to rethink what you're doing because it's not clear that your goals are in front of you. I mean, you know, you can always do the GPL and CLA thing, right? That's what people have done for a long time, but don't expect to build a community around that because you're essentially telling your community that we're going to engage in extractive processes, right? Right. Very much in contravention of one of the forms or one of the clauses of the SD that I mentioned earlier, right? What other options are there out there? I don't know. I'm not a lawyer. I'm barely a business person at that. I'm a community builder, but I think it's worth investigating what those kinds of options look like. So Boris says that he does not understand how broadening the open source definition might help ethical source. So I do want to make clear here that although I'm leaning on the ethical source definition and I am part of the ethical source working group, right? I'm not a public worker and I'm part of the organization for ethical sources. This isn't actually a talk meant to promote the ethical source movement per se. This is a talk that's meant to recalibrate the way that we think about open source in terms of communities, because we talk about open source in terms of communities. So I'm not seeking a broadening of the OSD. I'm seeking to align the implicit ways that we talk about open source with the explicit ways that we define open source and bring these two things together, so that we actually have a guidebook that we can look at and we can use to evaluate, are we doing the right things in building our open source project? Are we doing the right things to attract a healthy thriving community? Are we treating our community with respect, right? And with dignity. Are we treating our users with respect and with dignity? Because I think these are all very important aspects of open source work. 
And not just, and this is actually separate from and larger than, I don't know if it's larger than, but anyway, it's separate from the kind of concern that we have with the ethical source movement where we're concerned about the downstream effects of the software that we create. And we're conventionally concerned about the
|
The Open Source Initiative's definition of "open source" focuses exclusively on a list of approved licenses: Only software using one of the approved licenses counts as open source. This narrow definition is concerned only with the shape of business contracts designed to de-risk corporate involvement in FLOSS. But we all know that what makes open source amazing is not the licensing, but the community. Open source is defined in practice by its community-driven, collaborative mode of software development. So it should be no surprise that the best open source projects have a laser focus on building thriving communities. Nor should it surprise us that many projects using OSI-approved licenses appear open source in name only. Ethics is the study of how to get along with others. This makes it the perfect tool for understanding how to build thriving, successful communities. In this talk, I argue that the Ethical Source Definition actually provides a more compelling definition of "open source" than the OSD. It better accords with community usage of the term, capturing what makes open source unique and successful.
|
10.5446/52486 (DOI)
|
Hello to everyone at FOSDEM. We are in the virtual community devroom this time. Sorry we are not in person, but let's keep it warm and supportive for everyone. Today we'll be discussing switching open source communities while staying authentic to yourself. This is a very important conversation to start in the new reality, where everyone is switching careers and therefore communities. Let me first introduce the speakers. This is myself, Anastasia, or just Stacey, Raspopina. I work at Postgres Professional as a Senior Community Manager, mostly focusing on driving awareness via events, community relations and developer relations activities. I also do some education, PR, advocacy and so on. My co-speaker, Martina Pocchiari, is a researcher in the School of Management of Erasmus University. She studies brand communities and how they are affected by different factors. Martina will provide some scientific grounding for the techniques that I'll be sharing today, which I think is something of interest, because not many researchers these days are focusing on community management. Let's first do some housekeeping and discuss the talk structure. We'll have two parts. Part one will be mostly about industry experience and the techniques that I have found helpful for myself while switching between different communities. Part two will contain some academic evidence provided by Martina. We expect it will be of interest to everyone involved in advocacy, including brand owners and company C-levels. I would especially recommend Martina's part, because it is about solid scientific data; my part will be mostly about practical techniques and use cases from the real world. Of course, we expect and encourage community managers to listen to what we have found and to take part in the discussion. We want both technical and non-technical folks to ask us questions and share their experience. Feel free to share your thoughts with us — we will appreciate any kind of advice or remarks, or criticism, of course. Let's talk about the types of transitions many community managers might be going through these days. Since we are at FOSDEM, which focuses on open source as an industry, we'll mostly speak about the specifics of such transitions in the open source world. The first type of transition is staying in the ecosystem while moving to the micro-community of a competing solution. This is still a fairly comfortable transition, so to say: you are able to leverage what you have in terms of knowledge, key players and the many connections you have built through years of networking in your past job. This is good. Of course, you will have to promote a different product, but the community itself remains the same. Another case of transition is moving to a complementary solution of the main flagship product. When you join a vendor delivering such a complementary solution, you are still in the same ecosystem. Of course, the complementary solution will have its own micro-community of fans and supporters, but you can still move back and forth within your familiar ecosystem, which is not so stressful. Let's focus on cases number three and number four, which are a major headache for many people switching jobs. First, case number three: a transition to a different ecosystem formed around a different flagship product. What is it like? It is probably a product competing for adoption with your previous one.
This is exactly my situation, because I was coming from the MySQL community to the PostgreSQL community. These are both relational database management systems competing for adoption, but the processes around the two products are very different. We'll focus on that later, but this is like coming to a different world. And another case is coming to an unrelated community. For example, you can be coming from the database world to the world of software-as-a-service applications, which is something that seems to be unrelated, but in fact you will still find some common points. Anyway, I have prepared a nice joke here. You feel like an alien in both cases, number three and four. You have a very strange status there. Not just a newbie, you are an alien to these two types of communities. Let's discuss the key challenges that make you such an alien. First, it's the lack of knowledge and the lack of sources of knowledge. This might sound strange, because we all have the internet to help us, but the internet is an ocean of content, which is not always good. So you need some time and the right people to find the right sources of knowledge to educate yourself. And another thing is differences in processes. You need to get to know how product releases are done. You need to understand how people can report a bug or request a feature. And you need to figure out a lot of things, because you are supposed to provide advice on all these processes. Another challenge is that the key players of the new ecosystem are unfamiliar to you. You know nothing, and you cannot understand how powerful this or that person is in the community. The last key challenge is that you have zero trust or negative trust if you are coming from a competing ecosystem. You are an alien, as you can remember from the previous slide, but you need to convert yourself into an Agent Mulder, who is a person of high trust, someone who has extensive knowledge about the ecosystem, and I will talk about how this can be done. First of all, you need to somehow cope with the lack of knowledge and the differences in processes. My advice is to be open about where you work now, because this will help you to first connect with the most proactive people, those who are most willing to help you and probably provide some initial understanding of the ecosystem in general. So be open about it, not just mentioning it once. It is probably a good idea to mention several times that you have arrived in the ecosystem and that you would encourage everyone to guide you along the way, as you have just started. Another recommendation is to find a Master Yoda for yourself. I mean that you need to find a mentor, or better several mentors, who could guide you through the ecosystem in the very beginning. I like the concept of solid and empty resources offered by Vladimir Tarasov, who is a business and management trainer. The idea is that you need to see who is reliable and who is not, who is willing to help you educate yourself and who is not. You need to get to know people and see how much they can help you, or just not disturb those who are not ready to communicate with you at this point. If you manage to find a group of people which is supportive and can help you get up to speed faster, this is a great starting point for your journey. First you need to ask your group of experts and mentors about reliable sources of knowledge. 
As I have already mentioned, it might be tricky, because some great sources have a poor SEO score, since not everyone in tech cares about search engine optimization, but these blogs and the people who are writing them might be really interesting to you as a starter. My advice is to ask for advice. Let's put it like this. After you gain some knowledge and have at least a general understanding, you can start testing your ideas with your solid group of experts. If you have someone to help you not sound stupid as a newbie, it is already a big deal, because this is what all people in the field of community management are facing. This is a very common problem. Another thing is about being active in the new community. Once you gain some knowledge and get connected to some people, you are no longer a starter from scratch. You already have some grounding and some valuable information to share. You can start with minor things, sharing what you can and asking your group of experts for advice on how to help people, which is really a good start. You at least know who is responsible for what, which I strongly recommend you learn within your first week in a new community. When you have a group of mentors, you can go to the next stage and start communicating with the key players, because your mentors can make the right introductions for you when you need those introductions. There are communities where not all expertise is concentrated within one company. Most communities have multiple companies interested in the same product, and most experts are scattered across this space, not just being your colleagues but sometimes even your competitors. Still, they are respected and you will of course need to know them and communicate with them, especially if you are dealing with conferences and joint activities. To gain trust among the key players, you need to bring something to the table, and this is where you need well-tested ideas. You shouldn't be afraid of slight changes or minor ideas for improvement, because they tend to pile up. You start very small, with minor contributions, gaining something step by step. Nothing big can happen at once. Another point is doing your advocacy job the right way, because this is something especially appreciated by technical contributors. They mostly focus on the technical part, but they do not always have time to spread the word about what they are doing, and if you do this for them, it is generally super appreciated, because people tend to be grateful for becoming more famous, so to say. If you are doing your job and performing your function, you will of course gain respect. My favorite advice is about being your best self beyond technology. I have faced this multiple times, because people cannot talk about work all the time. They are still attracted by magnificent personalities, and they prefer communicating with more than just a job function; they also want someone who could help them relax after a hard working day, talking about non-work-related things. This is fine, because this is how new ideas come to our minds as well. We can communicate about non-work-related things and then come up with a good idea. I share some memes, strongly focused on futuristic movies and space, and I spend my spare time authoring space operas and poetry, which is why I can talk literature, not just community management stuff, and this is fine. 
Building trust is a big topic, and actually this is what we all should focus on, because the status of a trusted advisor is very much like the status of the Terminator. You look awkward at the very start, you will make mistakes at the very start, but when you become a reliable machine of community management, you gain respect. Of course, mistakes are unavoidable. Here are some techniques that worked for me, along with some mistakes I have seen other community management people make. First, I think that no one should pretend to be anyone they are not. This doesn't happen often, but I know that some public people prefer to exaggerate their role in the company and in the community in general. My advice is to avoid this, because I once saw a sales representative who tried to act on behalf of the entire company when closing an attractive deal, and we generally should stick to our actual job title and our actual set of responsibilities. A related thing is about being helpful while staying transparent about where you can help. If you are not in a technical role, speak about it openly. If you are not a senior specialist, speak about it openly as well. Being a junior person is not an issue. This is actually an opportunity to learn and grow, and this is also about setting expectations. Starting small is great because you can top the expectations you set in the very beginning, but if you try to pretend to be a super expert in everything, you will face the issue of overpromising sooner or later. My advice is to start small and start modest and then see what else can be achieved. Another thing is not hiding or denying the fact of working for a competitor in the past. Of course, people will be a bit suspicious about it at the start, and probably they will even make some jokes about it at the start, but the fact that you are open about it will attract them much more than shyness regarding this fact of your biography. It was real and you are no longer there. You have started to learn something new, and you will not be perceived as the wrong person for the community if you take this fact lightly and practice some self-irony, like "I was working in the X community and now I'm starting to learn the Z community", for example. If you take it easy, your colleagues and everyone in the community will take it easy as well. Another thing is about logic. People in tech like those who sound sane and act sane. If you are offering logical things and stay consistent in supporting your relationships with people, they will appreciate it. If they see that you propose something which is right for them, beyond companies and differences, they will respect you. My other advice is to find and offer win-win solutions where possible and focus on what we have in common, not the differences. Here is the first interesting case, about the joint talk of PostgreSQL and MySQL guys, which I was working on, I think, four years ago. Back then I was a part of the MySQL community, but this is something that is still in high demand and popular, and honestly we are going to repeat this experience. The idea of this talk was not about switching to a superior system. We were focusing on what we have in common. We were talking with both parties and agreed to do some general advocacy for the open source world, and we saw this talk as an opportunity to prove that open source database solutions are good for enterprise workloads. 
This is why we gathered two teams of PostgreSQL and MySQL database developers and delivered a joint talk providing some good benchmarks with modern workloads. This got a lot of people excited about open source. We had around 100 people present at this live presentation; it was still the time of offline events. We also had a couple of blog posts shared around in the two communities, and we had around 20,000 views on these blog posts in the first three days, which is amazing, and we also had a decent publication as a follow-up to our blog post. I also ran into discussions of database-to-database migrations related to these blog posts. So it did something, it triggered something; there was a very positive buzz in the open source communities. Of course, there were some holy wars too, but in general it was good, because if people are talking about open source, this helps to make it more popular. Another interesting case is about educational opportunities. As you probably know, multi-database environments are trendy these days. So people tend to use various solutions where they fit best; for example, they might use one database for statistics and another database for transactions. To get more opportunities on the labor market, DBAs tend to master new database management systems. Of course, in addition to this trend for multi-database environments, there are still database-to-database migrations, when company management decides on switching to a different database management system. The technical team just has to adopt it and educate themselves on what this new solution is like. This creates an interesting issue, a lack of education in the community, and people need to have a trusted advisor who can share some knowledge related to the new solution. Some people might even be experts in the competing field, and when they face such an issue, they have a problem with asking for help in public, because they are known experts in one database management system and for them it's not always appropriate to explain that they need help with another database management system. So this is where private conversations become more powerful, and such people can ask for help. Very often they need some basic knowledge which is easy to digest, put together by a group of experts. I would recommend every content group in the open source world create this small piece of information intended specifically for newbies, because if you want your community to grow, you shouldn't focus on just expert-level knowledge. You should also distribute newbie-level knowledge, because this is how you can attract beginners and educate more people about your product. Another important thing is not trying to persuade people to use your product at this very beginning stage, because generally they need help, and they would appreciate some help, not advocacy. This is why I think we should be happy with a first interaction which is about trusted advice, not just something for gain. The third case I want to mention is hearing back from your past community. You can leverage those comebacks of the people from your past in multiple ways, but this will only happen if you are not focusing on a linear approach to community. The linear approach is something sales-oriented, and the non-linear approach is something that goes beyond sales. This is something which is about strategy, so to say, an emotional seed investment. 
If you were good to your past community, it will be very likely to share important information with you. For example, you can get some advice on whom to contact: "I'm seeing this and that company not seeking our help, because we represent a different vendor and they don't want to migrate to a different database, so this might be an opportunity for you." Everyone is dreaming about such cases, and I would say that they happen more often than you can imagine, given that you were good to people in the past. Another thing is about finding the most active people who contribute to multiple databases. Of course they are rare, but you can leverage this knowledge and invite these key players, who almost always have high authority in the community, to various joint projects, and as I have already told you, the power of joint projects is great. If a joint project is done the right way, you can engage with twice as many people and effectively double your audience. When you support relationships with people from multiple ecosystems, you can gain a broader knowledge of what's hot in the industry. People coming from different communities might share with you important news from competitors and general industry trends, which is great, because when you are focusing on just your own community you kind of lose the high-level picture, and such conversations will help you define a better strategy for the future. Of course, here I need to mention a very important issue. This is why I have put this meme with Kermit the Frog here. You should also mind your NDA and respect the NDAs of your conversation partners, because we all have contracts and employers. When we have to communicate with people from other ecosystems, we should be very mindful of what to share and what not to share with them, but still there is a very broad zone on this borderline where you can communicate, help each other and discuss joint projects without breaching anything in your contract or non-disclosure agreement. Let's go to the general advice and key takeaways from this talk. Of course, when you focus on gain-only interactions, you cannot achieve as much as those who put people first, because when you are just focusing on sales and put this straightforwardly, most people will remain reserved and you will not be able to gain trust. You should think out of the box and invent multiple ways to help people around you. This will pay off with time; of course, it probably will not happen even in the first half a year, but with time you will be able to gain fantastic results. Another thing is about contributing on your own, and I would say we shouldn't underestimate non-code contributions, because it is not enough to create a wonderful product. You should also create the right evangelism for it and form the right community around it. If you don't do this job, if you don't contribute, you are not performing your job function the right way. But if you go beyond this and not just interact with people from the community, not just engage with people, but also share something and bring something to the table, they will also be willing to share with you, even things you don't expect from them. Another thing is about not pretending to be bigger than you are. You are here to listen to people, not to always tell them what to do. I would say practicing self-irony and healthy humor helps a lot in gaining new supporters in the new community. 
Of course, when you are not just performing the function of a community manager but are an attractive personality, you can gain more, because people want to share what they have beyond work with you, and if you have enough heart and soul resources to accept this gift of someone else sharing deep emotions with you, you can gain much more than those who just perform their job function and speak only about work. This is a small part of the conversation, but sometimes it's very important to encourage people to move on and create a better product. Here I need to ask Martina to take the virtual stage and continue with her part, because this is probably the most interesting one, providing the scientific grounds for what I have mentioned. Kudos to Martina, who did great research specifically for this presentation. Thank you Stacey for a great first part of this presentation, and thank you everyone for joining us today. I would like to complement Stacey's practical experience with some of the evidence available from organizational science, management science and information science. Stacey has presented four major challenges that community managers face when they switch to a new OSS community. The available scientific evidence can help us tackle two of these challenges in particular: the differences in processes that community managers experience when they switch to a new OSS community, and the fact that as a newcomer you start with virtually zero trust or even negative trust. With respect to the first challenge, the available scientific evidence recommends that you evaluate whether you can hold a leadership position in your new community. If you do indeed hold a leadership position in your new community, then you can choose a balance between two leadership styles, the transformational leader or the transactional leader, and the choice about the balance between these two leadership styles has to depend on who your community members are and what their motivations to contribute are. A transformational leader shapes contributors' behavior, while a transactional leader guides contributors' behavior. Transformational leaders are able to become role models for their community members thanks to their highly ethical behavior. They can also articulate a vision that then inspires the rest of the community members. They can promote new ways of solving problems and challenge the traditional assumptions. Finally, they can provide individualized attention to the developers, their needs and their motivations. On the other hand, transactional leaders are on the lookout for possible sources of mistakes or problems, and they can adopt a passive or an active approach towards these problematic situations. Passive transactional leaders wait for problems to arise to tackle them. Active transactional leaders try to anticipate sources of problems or problematic behavior and put systems in place to avoid them. There is no optimal leadership style, and the reason is that each community member or each subgroup in the community reacts differently to different leadership styles. From the scientific literature we know that transformational leaders are able to influence community members who are intrinsically motivated to contribute to the community. Members who are intrinsically motivated are driven by feelings of altruism, helping others and becoming experts in the development process. On the other hand, community members who are extrinsically motivated are continuously seeking rewards, or avoiding punishments, for desirable or undesirable behavior. 
So as you can see, the leadership style has to adapt to the kind of members that you're facing in your community. When we think about switching to a new OSS community, we can now see that adopting at least partially a transformational leadership can help you frame your expertise as bringing and promoting new ways of thinking and solving problems and challenging the traditional ways. There is evidence in the scientific literature that adopting a transformational leadership, and in particular challenging the old assumptions and bringing new ways of solving problems, can in fact increase the intrinsic motivation of community members who are used to being intrinsically motivated, and in turn this increase in intrinsic motivation also increases the contribution to the OSS project. So adopting at least partial traits of transformational leadership as a newcomer in an OSS community can in fact increase the value that is generated in the OSS project through the developers' motivation. So a newcomer can leverage intellectual stimulation to increase collective motivation and value. The second challenge that Stacey introduced before is the fact that as a newcomer you start with zero trust or even negative trust from other community members, other colleagues or your team. There is a solid amount of scientific evidence that speaks about the value of generating and cultivating trust in OSS communities. Scientifically, trust is defined as a belief in the honesty, integrity and reliability of the other people around you, and even more specifically, the scientific literature distinguishes between three types of trust. Dyadic trust is the one that you develop between yourself and another person. Group trust is what somebody develops towards a collective or a group, such as in the case of an OSS project. And finally, generalized trust is an individual personality trait, but this trait is greatly influenced by the amount of dyadic and group trust that somebody develops in a collective. Why should we care at all about developing trust when we enter a new community? There is widespread consensus in the scientific literature that trust is one of the most crucial social factors that can affect the success of personal and working collaborations. Increasing trust in communities can also lead to higher team satisfaction in work relationships, higher quality of team performance, less negative content shared within the community and, overall, a higher sense of safety in participating in the community and also a higher exchange of information. This also speaks to another challenge that Stacey has mentioned, that is, the initial lack of knowledge about the technicalities of the product. So developing trust in yourself and in the community can also help with providing more of the knowledge that you need. So the natural next question is: how can we increase trust in yourself and in the collective when you join a new OSS community? Fortunately, the existing evidence in the academic literature gives us a clear roadmap towards increasing trust as a new community player. The first way to promote trust in your new community is by crafting or reinforcing a clear identity for the community. If the community is lacking a clear identity, then it's a good idea to start crafting the identity together with the group members by underlining the clear purpose of the group and of the collective efforts and creating and reinforcing this community identity over time. 
A second way to promote trust in your new community is to create multiple opportunities for learning. It is a good idea to design or support specific spaces that are dedicated to informal social learning and workplace learning. This means dedicating space to both getting to know the members and creating spaces to share the knowledge that is specific to the product that is being developed. A third way is having credible and active moderation. Moderation is fundamental to ensure that a feeling of trust is established in the community. Having credible and active moderation also helps in the knowledge sharing that is so important as a newcomer. And finally, there has to be a model for the enforcement of appropriate behavior. Inappropriate behavior has to be stopped immediately and cannot be tolerated if you want to establish and increase trust in yourself as a community leader or as a new community player. To conclude, you can see here the main takeaways from Stacey's part, and I can add from my academic experience that it is important to leverage your diverse skillset as a new community player and be aware of the possibility of using transformational and transactional leadership to your advantage. Finally, it is important to invest and believe in the power of trust to ensure a better technical performance and a better collective environment. These are all the works that Stacey and I have referenced throughout the presentation. Feel free to consult any of them and feel free to ask us if you need any guidance. Finally, don't forget to stay in touch. These are our contacts if you have further questions or if you want to discuss any of these issues more in depth. Thank you so much for listening and I'm looking forward to the Q&A session.
|
The new reality requires extreme flexibility. Many people might be switching open source communities these days, stepping into their new roles. The pandemic might bring you to where you never expected to be. What if you need to start working for a competing vendor and become an advocate of a totally different technology ecosystem? “Just do it” would answer many of your questions, but not all of them. Ethical issues related to integrity, practical failures related to lack of knowledge, and the inability to use background info accumulated previously can all become factors preventing you from success. In our talk, we will combine practical advice from a community manager who moved from the MySQL to the PostgreSQL environment with recent findings from academic research in community management. We’ll talk about practical techniques to transform a good community professional into a person of high trust. We will discuss ways to build your community not only around vendors, projects or technologies, but also around yourself, as an inspiring community professional. Finally, we’ll speak about scientifically based ways of building trust in a new community, and leveraging the old ties in an ethical and constructive way.
|
10.5446/52487 (DOI)
|
Hello and a very warm welcome to you all today for a session where we're going to take a little bit of time to zoom out so we can zoom in on communities, and specifically open source communities. So just to give you a little bit of context to the picture that you're seeing here, this is a picture of a shed down the back garden that we converted into a teenager room for my 15 year old son, and which I have taken over as an office in 2020 as I started working from home. So I spend a lot of my time down here, luckily enough in a quiet area, but it's down the back end of the garden. So things have changed for me in 2020, and I'm sure things have changed for a lot of people, practically everyone, in 2020. So I'd just like to take a little bit of time to look at little things we can do to acknowledge, firstly, that things have changed, but also how to keep that human connection that we have in open source communities. So I'd just like to go through a little bit of an introduction first of who I am. I'm the director of the EU open source ecosystem development team. I'm based in Cork in Ireland, which is the second largest city in Ireland, down in the south right on the coast. I've been in software for a long, long time now. I started back in 96 with Motorola, writing highly proprietary software that was geared for mobile switching centers, which are used to connect your mobiles together, translating the digits that you dial to make a phone call. Until more recent times I had worked in proprietary software across a number of companies like IBM and Motorola, and then moved into Huawei, where I am now. I switched in around 2017 to working with open source communities, starting off with the Linux Foundation and networking teams on a project called ONAP, the Open Network Automation Platform, which was just starting off. And that's where I got my first taste and experience of open source communities and the ability to meet and work with communities from different companies across different parts of the world. It was an experience that I really enjoyed; since 2017 I've never looked back, and I've changed my career from proprietary, now switching over to working with open source communities. And then more recently, I started off as the community lead for a new exciting project called OpenHarmony, so I'm now the community lead working with the open source foundations. OpenHarmony itself, we do have a stand here at FOSDEM. So if you get a chance, go out and have a look at the OpenHarmony stand. It's an exciting project, basically trying to utilize all your consumer devices as a virtual resource pool to give a better experience in terms of the applications that you can run on those devices. So please, if you get a chance, have a look at our stand. I've also continued as a project technical lead within the ONAP community in the Linux Foundation Networking team, where ONAP itself is over 50% of the code for Linux Foundation Networking. It's a very large project across multiple different operators and vendors alike. I'm also a member of the Edge Native Steering Committee in the Eclipse Foundation, so I represent Huawei within that steering committee and work with the community in the Eclipse Foundation. So just a couple of things I'd like to point out as we go through this presentation: be aware of the human side of OSS communities. 
And so I always feel it's a good example to give you a few details about me and who I am, a little bit more about me than just my business introduction. Three things you didn't know about me: I coach a sport called hurling. So this is a hurley and this is a sliotar. It's a sport played under the Gaelic Games Association in Ireland, and it's something that I do at the weekends, coaching some kids how to play; I used to play it when I was younger myself. One of my favorite places to visit is the Dingle Peninsula. You know, it's a beautiful area of Ireland, in County Kerry, and it's a place I visit every year. It became famous, if you're a Star Wars fan, for Luke Skywalker and Rey, as the island Skellig Michael, the island where Luke Skywalker was hiding in Episode 7, also featured in other episodes as well. So if you're a Star Wars fan, you might recognize those beehive huts, where monks used to live in reality. And then also just to let you know, I have three kids aged 15, 12 and 9, and some of them are at home at the moment doing homeschooling because their schools are closed. So again, that's just some of the context, a little bit about me that you may not have known. So what did I experience in 2020, and how did it affect me and the way I work with open source communities? At the start of 2020, you know, I actually was here at FOSDEM in Belgium, experienced the true FOSDEM community, been in all those large lecture halls and also in the small developer rooms, meeting outside to get a coffee or a bagel, and meeting people was the main thing. Met loads of people, made great connections, learned about new projects I had no idea of before. So it was a great experience. But then when I came back from FOSDEM in February, that's when news that a pandemic was starting to break here in my country, Ireland, where we had a lot of cancellations of travel. For me, I travel a lot with my job, so the first thing I noticed was, okay, you're now canceling some travel that you had planned. And then eventually, I actually thought, you know, in Q1 that this was going to be a short-term thing. So I did feel that, you know, this wasn't going to be going on as long as it has. I think a lot of people didn't understand, you know, much about COVID-19 at the time. And, you know, I felt that maybe working from home was going to be a short-term thing. So I carried on normal, or normal-ish. In Q2, I did see the first of many physical OSS events being either canceled or rescheduled to later in the year, such as ONES, for example, that I had planned to be at in North America, and which had been rescheduled, you know, to later in the year in September as a virtual event. But work on my projects continued on as normal, because when I look at it, like from an ONAP perspective, which I was still working on, I knew the people, I've known them for a number of years now. So we carried on as normal. We ran our weekly Zoom calls, and for the most part, it felt fairly normal. And I also discovered in Q2 a little bit about my locality here, where I live, you know, because we were in bubbles of where we could actually go, within five kilometers of our home, so I discovered a whole series of things around my home. But I did start to miss my workmates, because I've worked all my career in an office. It was the first time for me experiencing working from home on a prolonged basis. 
So in Q3, I joined an exciting new project called OpenHarmony. And one of the things I noticed was that, as I joined, we had a lot of virtual meetings, online meetings, you know, to actually get the project off the ground. But one thing I noticed was I started to have a little bit of online meeting fatigue. And there was also the frustration that normally, if you're starting a community, you would try to actually meet together. And these were all brand new people for me. So I'd never met them, and they didn't know me, and I didn't really know them. And that's when I had a little bit of frustration that, you know, we had to do everything virtually. So it became more difficult. And, you know, just speaking of OpenHarmony, I mentioned that there's a stand at the event, so please go out and visit it, you know, if you get a chance. But also, I noticed that feelings of isolation started to begin in Q3 for me. So then in Q4, a little bit of virtual event overload. And I also experienced my first time of hosting a stand, or actually, you know, being at a stand at a virtual event, manning that stand and giving presentations from a virtual stand. And again, this for me wasn't the same as the physical thing. And I'll get to things we can maybe do to improve the stand experience from the physical to the virtual in a while. So I started worrying a little bit about the loss of the human side of communities. And that's when I became determined as well to deal with any feelings of isolation I was having. And at the same time, at the end of 2020, we had news that a vaccine was coming out. So hope was coming. And I also decided at that point to talk to you guys at FOSDEM 2021 about it, because the best thing we can do is to talk. So isolation, okay, I'm not a doctor. But if you look at the Cambridge Dictionary, they say it's a condition of being alone, especially when it makes you feel unhappy. So now, to give you context, I'm down here at the end of my garden. But I'm not alone. I have my wife and I have three kids; we're in our bubble. But I'm a lot more alone than I've ever been in my life before in terms of interaction with people. So we're limiting our number of contacts. So it's not natural for me as a social person. So humans interacting virtually is no real substitute, for me personally anyway, for face-to-face communication. And I was at FOSDEM last year where Andrew Hutchins gave a fantastic heartfelt talk about recognizing burnout. And it was really a brave presentation that he gave. But he did note that isolation can become a potential cause for burnout. And burnout and depression can go hand in hand. So I do think we need to actually identify and acknowledge that there are people feeling a little bit potentially isolated in 2020 going into 2021, with the way the pandemic has had an effect. And just taking a few little actions in the way we run our communities and the way we operate can go a long way. And that's what today's talk is all about. So again, just looking at it from an open source communities basis, from what I've seen, and again this is just my experience: the code itself kept growing. It was actually immune to the virus. Developers kept developing code. We had projects that still went out on time. We actually had new open source projects starting that were helping combat the pandemic. For example, code was being written to develop visualization tools for healthcare workers. 
But the code itself was reviewed by more and more people that we had never met before. Okay, and I'll get to that in a minute. But also then from a governance point of view, or the foundations' point of view, they too had to adapt. So I work with foundations like the Linux Foundation and the Eclipse Foundation. They would have had to adapt from running their, you know, events physically, and also the way they actually interact with and gain new members. And they would have also potentially had a loss of funding from running events, which is some of the, you know, the way they fund their actual activities through the year. So they've had to adapt as well. And events themselves, as I mentioned, moved online, and we had the introduction of the virtual stand or the virtual booth. And again, there was a rush to develop platforms that tried to keep as much human connection as possible, but it's not quite the same as the real thing. So there are just a few things that we can do to improve the experience of virtual events. And then the public itself: the pandemic's effect on the public, from an open source point of view, was that they actually got a lot of new, you know, tools that were based on open source that helped in combating COVID-19, such as, you know, contact tracing apps. Also, because more and more of the public are working from home, a lot of the cloud network connectivity to actually bring these virtual tools back to people's homes was built on open source software. Even a lot of the work that we do in network function virtualization or software-defined networks is built on open source. And then the developers, us ourselves, so I still develop, I'm a developer as well. You know, we transitioned, for the most part, all over the world to working from home. Open source teams had to work virtually by virtual meetings, and you, like me, may have felt a little bit isolated. So one of the things I'd like to do today now is to move on to, well, what little things we can do to improve things, you know, just small things. Small things, as I said, I'm not a doctor, I just suggest my own experiences that I'd like to share. So one of the things that I tried to do at the start of this talk was to give you a little bit of an introduction of who I am and some of my interests and where I'm from and, you know, my hobbies and my favorite places to visit. So now you know a little bit about me. But when we're running community calls and community meetings, especially when they're meetings that are not with a thousand people online, where it's not physically possible to go around and ask everyone, you know, a little bit about themselves, there are a lot of smaller virtual meetings where there might be just 10-15 people dialing in for a small project weekly meeting or steering committee call. It's worth spending a little bit of time to try and see: do you actually know any of these people, and do you know where they're from? Things like that are very important, for example what time zone they're in, because maybe you as host have a perception of where they actually are based on their accent and the way they talk, but they may not actually be physically living there. They may be living in another area with a different culture. 
You don't know until you actually facilitate getting to know your members, and physical events allow that, you know, over a coffee or a beer afterwards you get to know people, but that was taken away from us, you know, with the pandemic, for open source communities. So we just have to make a little bit of adjustment to actually find out about and know your members. Which is also helpful because, especially if you're a project technical lead or you're a host of a steering committee, it helps you actually get the most out of the meetings, I feel personally anyway, if you know what people actually want and what their goals are. Some people may want to always just be contributing technically, some people might have a business aspect to why they're involved, but you're trying to keep them involved and keep the community growing and keep the community producing good software that is going to be used and useful. So the more you know your members, I think the better you actually can make that happen. Some of the things I would note as well, as I mentioned earlier on, is that there are more and more people that you've never met before, and this comes even into code reviews. So I've experienced this myself this year where, for example, in the Linux Foundation Networking team we would have used Gerrit, and I would have seen comments coming in from people that I've never met, and sometimes you might see something that means no harm, something like "why did you not add the associated JUnit test for this code modification", but because maybe you haven't met them before, you're probably thinking that seems to be a bit to the point, you know, and you may feel a little bit that it's someone being slightly aggressive with you, when maybe they're completely the opposite, they're just pointing out a mistake or an improvement. Now, they could have done something a little bit better: they could have written a more verbose comment in the Gerrit review, something like "look, it is best practice for this code modification to also have JUnit tests, it really will help our code quality and coverage, please add the associated JUnit test, you can add it here". Now that seems a lot more, you know, from a personal point of view, from someone you haven't met, it actually looks and feels a bit better that someone's trying to be constructive. But it does take more time. But what I'm saying is, if you think of it at point number three here, if the same text that you saw, the shorter version of the Gerrit review, was from someone you knew and had beers with at FOSDEM for example, you may just say, oh yeah, I forgot the JUnit test, I'll add it, and you don't think of it as someone being aggressive, because you know they're not an aggressive person. But the danger we have now is that more and more we're seeing comments from people we have no idea who they are, and so we just have to be a bit more conscious, and I think if you just take a small little bit of time to be a small bit more verbose and constructive in your comments, it can help, right? Because these same people could be struggling with work, and, you know, getting their code reviewed and seeing comments that look like they're aggressive may not be a nice feeling for them, and may actually mean that they might not come back and add more code to this project, you know, and we have to take that into account. So then also, from a virtual booth experience, okay, so at the start of the year I 
would have said, you know, FOSDEM 2020, as I said, was a physical event, and then in 2021, here we are, we're talking virtually, and I'm pre-recording this in my shed out the back in Ireland, so I'm not there with you. But when you think of the events, we also have stands, and we have a virtual booth here, you know, for the OpenHarmony community, which I hope you will go and visit. But when you're at physical events, when you have a 15 minute break, you go out and you might have a coffee, and there's a set of stands there and there are people there, and you can see the people and you can actually interact with the people. Whereas when it's a virtual event and you have a 15 minute break, where do you go? For me, I have to walk down the garden, I get my cup of coffee there, but I'm going to come back up, it's 15 minutes gone and I'm back into the next session, or else the likelihood is I go check my work email and start working, and I may not even go back to the sessions. So we have to understand that virtual events have this issue of, when you're showing a stand, how can you make it less scary for someone to actually go into and actually interact with as humans. One of the biggest fears I have is when you go in and you arrange your virtual booth so that it just has "download this brochure" and off you go. So some of the things we can do a little bit better: maybe put pictures of who is answering the chat box, or who's responding to you if you do ask a question, and, you know, even try and have the event platforms allow you, you know, to move yourself as an icon towards a stand and actually have a meeting, a virtual meeting, with the people on the stand, so it's actually an interactive experience with people that you can see. Or even just arrange your virtual space really trying to entice this human contact, versus just simply putting the brochure up at the top, download it and off you go. So that's some of the things we can do, I think, from a virtual booth perspective. So finally, I just want to finish with a little bit of role play, okay. So again, just take this as, these are all fictitious characters, you know, but it's just something maybe to get you thinking a little bit about the next time you're hosting a call or you're on a call in your open source communities, whether it's a project call or a steering committee call, and just maybe take note of who is on the call and maybe try and work a little bit on the human side of these calls. So Harry is the host, he's a PTL, so he's running his weekly meeting. Harry takes very good care of his project, he's a diligent PTL, he knows exactly what's going on in the project and he keeps an eye on most of the activities that are happening. So then I'll introduce you to Vera the Veteran. Vera is the top committer on the project, she's an absolutely great asset to the team, highly skilled, has contributed to this project for years. The one thing I would say, though, is Vera, given half the chance, will take the entire meeting. She likes the sound of her own voice, we all know people like that, and she will, when given the chance, take the entire call, which is, you know, good in a way because she's giving good technical content, but you do need to be aware of the person, that you know the person. So that's Vera the Veteran. And then we also have on this call, again it's only a small call for the sake of time for FOSDEM, Niall the Newbie. So Niall is the newest member of the 
open source community. It's only his second project call, and he made his first contribution during the week, so he's very proud about that; it's his first contribution ever to open source. But Niall does not know anyone on this call, he's never met them before, and he's only ever seen Vera via the Gerrit review, where she had a few harsh comments, maybe, on the review that he actually made and changed and actually got in. So he's coming onto this call proud as punch, and he's waiting to actually, you know, meet the members of the team. Aiva the Interested is someone that has been dialing into these calls, hasn't contributed code much, but basically hasn't been asked, you know, about her skill set. And what people don't know is that Aiva has fantastic skills that are actually really required for this new feature that is being planned for this release, but no one actually understands, or has even asked, what type of experience she has or what kind of skill set she has, so, you know, it's unknown. Finally, Shirley the Shy is the last person on this role-play call, and everyone just assumes Shirley is shy because she never turns on her camera. What people don't know is that Shirley, in her local time zone, is dialing in extremely late and is normally in her pajamas ready for bed, but no one has ever asked her where she was from, and no one has any idea what pressure she is under to make this call in terms of the time zone that the call is on. So just to get you thinking a little bit: maybe this is the wrong way, if this is your situation, if you know your community in this regard, this is the wrong way to start the call. Harry starts the call and says: "Hi all, welcome to the meeting. We have a very busy week ahead. Vera, can you please take us through the new feature design for Project X? I think you can take the full hour, because it's really important we get through this today." Maybe that's the wrong way to start this call, because that's the only way it's going to go, and I would suggest just taking a little bit of time, potentially, to start the call like this. 
Harry says: "Welcome everybody, I'd like to acknowledge everyone for dialing in, your participation in these difficult times is really appreciated. On today's agenda, Vera will kindly take us through the new feature design for Project X. Before that, I just wanted to congratulate Niall the Newbie on his first commits to the project, a round of applause everyone for that. Niall, if you'd like to introduce yourself, please do, as I know there are a lot of new faces on this call for you. I also think it would be a benefit to have a quick round table, we'll limit it to a maximum of 10 minutes, for everyone to introduce themselves. Feel free to introduce yourself, where you're from, what you're interested in, or anything else you'd like to share with us." To me that's a better start to a call, and it may potentially allow you to find out that Aiva the Interested has this skill that's going to be a great asset for Project X, something Harry the Host may never have known if he didn't ask. So that's just a little bit of role play for this call. So the final thoughts: just take that five or ten minutes, show a little bit of empathy and awareness, not just in calls but also in the way you arrange your booths and in the way you actually write your code review comments. Just take note of the way people may be feeling in 2021 now, you know, with the effects of the pandemic. And again, please follow me into the chat room if you'd like to discuss and have a conversation on this further, and on any other experiences I've had. Please visit our OpenHarmony stand as well, I hope to be there too and I can talk to you. And thank you for your time, I hope it's been useful, so take care, thanks everyone.
|
Open Source is all about the community being able to engage with each other efficiently. This is why events such as FOSDEM are so essential, enabling us to meet members of our communities face to face and create these essential connections. In 2020 the annual OSS physical event circuit was broken due to the pandemic, and was replaced with online versions. It has meant we spent the year on weekly Zoom project team / steering committee meetings and more meetings online. Then we have yet more Zoom webinars or some other online tool for our flagship OSS events. This can be sufficient when you know the people in a mature community, as you may have met them before, but what about new OSS communities just starting out? How can we get that personal connection that is needed to help avoid unnecessary conflicts due to simple misunderstandings? What people see in print, say in a Gerrit review comment, can seem a lot more severe if you do not really know the person who typed it. That is why I'd like to share my experience from 2020: lessons I have learned and adapted in my meetings. To take the time to step back/zoom out in our OSS community meetings, and take just a little time so we can zoom in on the people.
|
10.5446/52488 (DOI)
|
Hi folks, and welcome to my talk on embedded Linux license compliance for hackers and makers. This talk is aimed at individuals and small businesses who are distributing open source software and may have questions about license compliance: what do you need to do, what are the tools that are available, what are the best practices that you should be following. I'm going to talk about distribution. Examples of distribution would be selling a physical product which contains open source software installed on the device. It would also be providing a free download of something like a Linux distro image for an SD card for Raspberry Pi hardware or similar hardware. Basically, any action where you're providing someone else with a copy of some open source software would be distribution. So that's kind of a brief intro for the talk. I also want to give you a brief bit of background about me. I've been involved in OpenEmbedded and the Yocto Project since around 2013. I work across pretty much the whole embedded stack: kernel, U-Boot, distributions, everything. I'm currently working as principal engineer at Konsulko Group, and the company website is konsulko.com. Contact details for me if you want to follow up with any questions or feedback at all: you can find me on Twitter, you can send me an email, you can look at my personal website. I've also got a YouTube channel where I upload videos of my talks and videos of learning Rust at the minute. So a quick disclaimer before we begin. I'll start off by saying I'm not a lawyer. This presentation is not legal advice. What I am going to be talking about is the best practices based on my experience as a developer and my experience as a member of open source communities. So if in doubt, consult an appropriate lawyer. So I want to expand a little bit on my introduction. There's lots of information and tools available for open source license compliance, and there are lots of presentations already covering that. So why do we need another presentation here targeted at hobbyists, hackers and makers? I think most of the information that's available isn't really well targeted for these groups. It's not really well targeted for small businesses either. And people generally are distributing these devices containing open source software in small volumes. And a lot of the tools that are advertised for open source license compliance are complex. The methods for using them take a lot of time and effort. There are assumptions in some of the presentations that you've got a legal department. And yeah, what I wanted to do is just present things in a way that's more tailored for this audience of hackers, makers and small businesses. So why, if you're in this group, should you care about license compliance? For large corporations, this is often about reducing the legal risk of being sued for noncompliance and also about gaining influence in relevant open source communities. But maybe not all of that applies for hackers and makers. Maybe you're not as concerned that someone's actually going to take you to court. But it's likely that the priorities that you're going to have are around empowering users to be able to customize the operation of a device to suit them and being a good citizen of the free software and open source movements. And the other thing as well is, when you're building software images, actually capturing the source code and the build scripts, which as we'll see is part of the requirements to be able to fulfill license compliance. 
That really does help with reproducibility of builds and helps you debug things. Sources do often disappear off the internet, and you don't want to be coming back trying to look at a problem with a version of your image that's 12 months old and suddenly find you can't rebuild it because the sources have disappeared. So those are kind of some of the motivations why you might want to go through this process of open source license compliance. So before I go any further, I want to take a minute to talk about distribution as it relates to open source software. There are a couple of modes of distribution that I'm considering here. The first of those is that you might be distributing a physical device that's got some sort of open source software installed on it or programmed into it. And if you're distributing this, then for the purposes of this presentation, I'm going to assume that the person you're giving this to has got internet access and can access online resources that you provide as well as what's actually in the box you give them. The other type of distribution we could consider is when you're just providing a software image for download from a website. This might be sort of an SD card image that you could program onto an SD card and put in a single board computer like a Raspberry Pi, and I'm talking about an image that's going to contain a kernel, bootloader, root filesystem and other components. I'm not talking about just a single software package here. And it's important when considering this distribution: it really doesn't matter whether there's any price charged to the person who's receiving this. It is distribution, whether it's free or whether there is a price charged. The one case where we can ignore things, really, is if you're in a small business and you're distributing these images to somebody else within the same organization as part of the job that you're doing; then that's not really considered distribution to a third party, and I wouldn't worry about license compliance as much in that case. So when we talk about open source license compliance, what are the actual common license conditions that we need to comply with? Well, we can group licenses into two broad categories. The permissive licenses, like the BSD license and MIT license, usually require you to provide the license text and any copyright notices. And you can provide these in the file system on the device, you can put them in the documentation, and you can probably also put these on a website as well. The other broad group is copyleft licenses, which require you to provide complete corresponding source code for any binaries that you distribute that are covered by that license. Now, you can publish that source code directly, similar to the way you publish the license text and notices. But some licenses, for example the GPL, allow you to just publish an offer letter saying that you will provide the complete corresponding source code on request. What we're going to talk about today is publishing the source code directly. So I want to give some general guidelines to follow. And the first of these is: you should use a proper embedded Linux build system to produce the software image that you're going to distribute. Again, whether this is an image file that can be downloaded and copied onto an SD card or whether it's an image that's actually programmed into a device which you're going to distribute. And the sort of build systems I would recommend here are either Buildroot or OpenEmbedded/Yocto Project. 
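As one concrete illustration of the compliance output such a build system can give you: Buildroot has a make legal-info target that gathers license files, source archives and a manifest of what it collected. The Python sketch below is a hypothetical check of that manifest for packages whose license or sources could not be collected. The output/legal-info/manifest.csv path and the PACKAGE, LICENSE, LICENSE FILES and SOURCE ARCHIVE column names are assumptions that may vary between Buildroot versions, so treat this as a starting point rather than a definitive tool.

```python
#!/usr/bin/env python3
"""Sanity-check a Buildroot legal-info manifest before distributing an image.

Assumes (hypothetically) that 'make legal-info' was run and produced
output/legal-info/manifest.csv with PACKAGE, LICENSE, LICENSE FILES and
SOURCE ARCHIVE columns; column names can differ between Buildroot versions.
"""
import csv
import sys
from pathlib import Path

MANIFEST = Path("output/legal-info/manifest.csv")  # assumed default location


def main() -> int:
    if not MANIFEST.exists():
        print(f"manifest not found: {MANIFEST} (did you run 'make legal-info'?)")
        return 1

    problems = []
    with MANIFEST.open(newline="") as fh:
        for row in csv.DictReader(fh):
            # Normalise header names so small naming differences don't matter.
            row = {k.strip().upper(): (v or "").strip() for k, v in row.items() if k}
            pkg = row.get("PACKAGE", "<unknown>")
            if "unknown" in row.get("LICENSE", "").lower():
                problems.append(f"{pkg}: license could not be determined")
            if not row.get("LICENSE FILES"):
                problems.append(f"{pkg}: no license files collected")
            if not row.get("SOURCE ARCHIVE"):
                problems.append(f"{pkg}: no source archive saved")

    for line in problems:
        print(line)
    print(f"{len(problems)} potential compliance gaps found")
    return 0 if not problems else 2


if __name__ == "__main__":
    sys.exit(main())
```

A check like this could be run as part of a release checklist, before the image and its legal-info bundle are published.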
These systems have some really excellent tools to help you collect the license text, the notices, and any source code, which you would need to archive to be able to perform your license compliance. The important thing here is you should avoid modifying the software image in a post build script that runs outside of the embedded Linux build system that you're using. And avoid adding additional software during any manufacturing test processes, because either of these approaches kind of bypasses the tools that are present in the embedded Linux build system to collect license text and source code. So other things you should avoid, I highly recommend avoiding desktop and server distros because it is very, very difficult to collect license text and source code for all the packages which are installed in your device in a way that you can distribute them and in a way that they can actually be rebuilt from source code easily. I'd also say avoid OpenWrt; it is an embedded Linux build system but it doesn't really have any of the tools for license compliance that other embedded Linux build systems have. If you're using containers, I strongly recommend avoiding images pulled from Docker Hub and similar container registries because you probably have no idea what the source code is that's been used to generate the binaries that are in those images. And similarly avoid building container images with a Dockerfile because again there's just no tools to look at the output and understand what open source licenses you need to include the text of and what source code you need to collect. For container images you can build these with OpenEmbedded, and you probably can with things like Buildah as well. So there are some things that I wouldn't say you need to avoid but I would say you need to consider and use them carefully. So if you're using pre-compiled toolchains, for example the Arm pre-built toolchains, you need to make sure that you go and collect the source code for this pre-built toolchain because libraries from the toolchain typically can end up in any software image that you might then distribute. So for example the toolchain may contain glibc, that's covered under a copyleft license, so you need to be able to provide that source code. I'd also say language specific package managers have some issues, so NPM definitely has some issues, cargo has less issues but still be careful with it. They don't really offer easy ways to collect the license text of a package and all of its dependencies. And some of them don't really offer a way of collecting the correct version of the source code for binaries either. And the last one to be careful with is third-party makefiles in projects that you use. Watch out for makefiles which download additional content during the build process or make use of online tools during the build. I've seen both of these in the wild and they make reproducibility and license compliance exceptionally difficult. So let's talk about how you can go about publishing some of this information that you need to release in order to achieve open source license compliance. And we'll start with how you would publish license text and copyright notices where these are required by a license. I think the first thing I would suggest is if you can format the text and notices into an HTML file or a plain text file and include that in the software image itself, preferably with some way of accessing that through the UI if your device has any sort of user interface, then that would definitely fit the bill.
An alternative you could do is to actually collect up the license text and notices into a Git repository. This would let you update this with a new commit every time you release a new version of your software image and you can take advantage of free repository hosting by companies like GitHub and GitLab so that you don't have to pay for your own web posting for this information. And if you're provided an image for people to download anyway then you can just include the link to this repository where all the licensing information is. Alternatively you could include a bit of paper with the link on it with a physical product or just have a think about how people would access the license information if their internet connection goes down for some reason. The other thing is how would you publish actual source code and I would recommend putting these online. I would recommend avoiding the option to publish an offer letter saying that you'll provide the sources on request. I'd recommend just straight away going ahead and providing the sources online. Now there's a bunch of cheap files and services you can use online. Backblaze B2 is one that I'm quite a fan of and if you also sign up for a free Cloudflare account and use Cloudflare as the front end to backblaze then you won't pay any transit costs for anyone downloading data from backblaze via Cloudflare. You could also use something like the storage box service that Hetzner and Germany provide or any other number of inexpensive storage providers. What I would advise is if you can deduplicate archives between releases where possible it's going to save you a lot of data and a lot of bandwidth. So for example you've got bash within your software image and you've got using the same version of bash for multiple versions of your software image. There's no reason why you need multiple copies of that bash source archive. The final part of publishing the source code is to ensure that any patches which are applied to the software during the bell are also included. So watch out for what I call hidden patches here. So things like said scripts or the processes that modify the source code before it is built are essentially the same as patches and you do need to release those scripts as well and ensure that you've got the patch order recorded as well to make sure the patches can actually be applied properly to the source code. And let's talk about providing the build scripts. So yeah to give an example GPL version 2 says that you need to include the scripts used to control the compilation and installation of your software. And I think the easiest way of doing this is to provide the sources for the entire build system that you're using. So build route or open embedded provide an archive of the version that you're using. If you're using open embedded provide all the layers that you're adding to the build as well. And also ensure that any local configuration is included if this isn't tracked in some sort of git repository. So for it, this would be your local.conf file. Just make sure there aren't any important changes in there that would be needed in order to reproduce the image or make sure that you're capturing that local.conf file as part of your sources. Testing testing is very important. Mistakes are easy to make. That's why we have tests that applies to all software and that also applies to this process of releasing your sources and license text. 
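To make that concrete, here is a minimal sketch of the sort of automated check you could wire into a release script or CI job. It jumps ahead slightly to the Yocto Project tooling discussed below, and it assumes the license.manifest layout that a Yocto image build typically produces under tmp/deploy/licenses; the image name, the paths and the naming of your published source archives are assumptions, so adjust them to your own build. It only confirms that every recipe listed in the manifest has at least one source archive with a matching name in the directory you intend to publish, which is a useful sanity check rather than a substitute for actually rebuilding the image from the published sources.

# Rough sanity check, not a rebuild test: for every recipe listed in a Yocto
# license.manifest, verify that at least one source archive with a matching
# name exists in the directory you plan to publish. The manifest format
# assumed here ("RECIPE NAME:" lines) and all paths are assumptions -- check
# them against your own build output before relying on this.
import pathlib
import sys

MANIFEST = pathlib.Path("tmp/deploy/licenses/my-image-qemux86-64/license.manifest")
SOURCES = pathlib.Path("release/sources")   # archives you intend to publish

def recipes_from_manifest(path):
    recipes = set()
    for line in path.read_text().splitlines():
        if line.startswith("RECIPE NAME:"):
            recipes.add(line.split(":", 1)[1].strip())
    return recipes

def main():
    archives = [p.name for p in SOURCES.iterdir() if p.is_file()]
    missing = [r for r in sorted(recipes_from_manifest(MANIFEST))
               if not any(name.startswith(r) for name in archives)]
    for recipe in missing:
        print("no source archive found for recipe:", recipe)
    return 1 if missing else 0

if __name__ == "__main__":
    sys.exit(main())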
The gold standard for checking that you've actually captured all the source code that goes into your image is to check that you can actually replicate the image itself from the sources and the build scripts that you publish. There isn't really the same gold standard test for ensuring that you've published all the license text and copyright notices. But if you've covered everything for capturing the sources, then the likelihood is you're going to have covered everything for capturing the license text as well. So automate this test if possible. This is something that you should be able to put into some sort of continuous integration service or put into your release scripts to try rebuilding the image from the sources that you provide, and then make sure you run this test on every software release that you make. So let's talk practically about how you would actually do this using two popular build systems. So first of all for Buildroot, I'm not a Buildroot expert but I do know that you can run 'make legal-info' within Buildroot in order to produce a directory containing both the licenses and copyright notices as well as all the source code that is used by Buildroot to produce your final image. So this is a little less configurable than the tools provided by OpenEmbedded, but it's well documented and it's really easy to use. And if you want some more info on this, there was a talk by Luca at FOSDEM 2020 last year entitled license compliance for embedded Linux devices with Buildroot. Moving on to OpenEmbedded and the Yocto Project, which is kind of the area that I work in quite a lot, we provide an archiver class that you can enable to capture the source code that BitBake downloads as part of the build. Alternatively, you can just archive the downloads directory that BitBake uses to cache the downloaded files in, but it's a little less flexible and might need some manual post-processing if you go that way. The archiver is definitely a much more featureful and useful way of doing this. You should also be capturing the licenses directory that is produced by BitBake, or you could enable installation of the license text into the target image itself. All of this is covered in the Yocto Project documentation and it's also covered by a couple of previous talks that I've done. So there's two listed here with similar titles: I presented a talk called license compliance in embedded Linux with the Yocto Project at ELCE in Lyon in France in 2019, and I presented a talk called open source license compliance in the Yocto Project at Linaro Connect 2020. There is some overlap between those talks, but there is also some material that is unique to each talk, so I'd recommend giving both of those a look if you're after more information. So I wanted to give you some links to other projects which you might find relevant in the area of open source license compliance. The reuse project is about providing license metadata within a project in a consistent and machine parsable format. The OpenChain project aims to address what happens when you're building and releasing a software image that also incorporates some open source releases from one or more vendors, and your ability to provide license compliance is really dependent on whether your vendors have done their license compliance job properly, so that's definitely one that's worth checking out if you're in that situation. OSS Review Toolkit can help you to go through each of the software packages within your image and review the license compliance status of these.
The software heritage project is focused on being a permanent archive for home source software source code and lastly, Fosology is a tool for again looking at each of the packages that are involved in a software release and allowing you to review the license conditions and check that you've got accurate licensing data for all of those. So let's finish up my talk about the open work that needs to be done in these areas. I'd like to see a review of the status of license compliance tools within some of the other embedded Linux build systems, so openwrt, ptxdist, probably others that I don't know the names of, and if there are gaps within the license compliance tools within these distros, then yeah, there's definitely some work to be done there to fill those gaps. I'd also like to see some improvements in the state for language specific package managers. A lot of these are really not built around the idea of reproducibility or of being able to archive the source code that goes into a build for later use and for license compliance. And we've also got some more work to do to integrate the various embedded Linux build systems with some of the other projects and tools that were mentioned on the previous slide. So that's everything that I wanted to cover today done. If you're watching this live, we've now got some time for Q&A. If you're watching this back after Foster was finished, then I welcome any questions by email or on Twitter or anywhere else that you can find me. Cheers. Here we go. Hey Paul. Hey, I think we are live now. It's looking good. So yeah, why don't we start off with Bradley Kuhn's initial question, which is where should we put the information for Hacker's habeas? I've written a lot of information, but he's obviously putting it in the wrong place. Yeah, so my thought on this is that there is a lot of information if you go looking for information on license compliance, but what I specifically wanted to do with this talk and would submit it to the embedded dev room is to reach out for those people who aren't necessarily looking for license compliance information right now. And yeah, I think that's one of the main things to me is going to be reaching those people where they are, maybe when they're not realizing that they need to be doing something about this. Okay, and then we had a lot of questions about how to provide the complete and the, what is it, the complete corresponding source code. Do you consider online sufficient, SD card, CD, the old CD ROM? You want to talk about that for a second? Yeah, so there's a couple of different ways of looking at this. One is that license compliance is kind of an ongoing process, rather than something people get right immediately on the first try. And I think the other thing is, I think there is a distinction between the situation that people are in when they have fairly limited resources. The situations I'm talking about, oh, when you've got somebody who's an individual maker or a small business, selling tens of units or maybe hundreds of units at the most. I think in those cases, people are going to take a little more of a lenient view. And it is going to be more about making a good effort and trying to empower users in the best way you can to allow them to exercise their freedoms under the various open source licenses of the software. So, yeah, there are questions about what is perfect by the letter compliance with the license. 
But I think the first step is to make an effort and don't let the fact that you're worried about whether you can do something perfectly stop you from making a good effort at it. If you get to the sort of size of a large business, then, yeah, you should be seeking proper legal advice and trying to get as close to that compliance ideal as you can. But if what you've got is limited resources, you're an individual person making a few devices, like say one of the examples I gave in the conversation in the dev room a minute ago was distributing an Arduino device with some software programmed into it that is GPL licensed. Don't let the fact that you're worried that you can't afford the unit cost of adding an SD card into the box with every device prevent you from at least making a good effort to provide people with the source code. And if someone comes to you and says, you know, this doesn't work for me, then, you know, do you best to help them out? I think that's, in my view, that's the right approach for small businesses. And then as I say, if things start to scale up in large organizations, I think it's fair to expect more of larger organizations. Yeah, I would agree with that. And, you know, with the caveat that, you know, I'm not a lawyer, this isn't legal advice. Maybe get other opinions as well as mine. Right. And I think I think actually I just saw Bradley type in something and which is which is exactly true. You said the title of this is targeted for hackers and makers and the level of detail required by I think a large corporation is much different or that can be provided by a large corporation is much different than a one man shop or a couple man shop just doing some software for fun or building a device for fun and telling it to a few people. So, yeah, I think it's best efforts within what your actual capabilities are, taking account of the size of the organization and business you're doing. But, you know, the key thing I would come back to is don't let worries about whether you can achieve perfection, prevent you from just making a good effort at it. Right, right. I think that's pretty much all the time we've got. We have just a minute left. So, so thank you very much, Paul. It was a very good talk. The chat room will open up in a minute so we can continue the conversation in the
|
This presentation will cover the practices and tools you can use to improve compliance with open source licenses as a hobbyist or small business using OpenEmbedded/Yocto Project, Buildroot or other Embedded Linux build systems. The focus will be on practical steps that don't require excessive time, effort or consultation with expensive lawyers. This presentation will also discuss license compliance pitfalls to avoid. No legal advice will be given in this talk. Many presentations and articles about open source license compliance focus on the needs of large corporate users of open source and aren't well suited to individual hackers/makers or small businesses. However, even if you don't have the public profile and deep pockets of a large organisation you should be thinking about how you can empower users of your software or physical product to take advantage of their rights around open source software. Rather that trying to develop a comprehensive enterprise-grade open source policy what you need is some steps to get started, some rules of thumb and some tools which are relatively straightforward to use. License compliance is a process and even if you don't feel you have the resources to achieve perfect compliance it's important to take the steps that are within your reach.
|
10.5446/52491 (DOI)
|
Hopefully this talk will encourage more of you to start selling your open hardware maybe. So, yeah, like first off, I want to run through a little bit of a kind of background, why you might want to do this, how open can help. I mean, I know I'm preaching to the choir here, but you know, bear with me. And then I'll talk about some of my journey in going from open software person or software person in general, I suppose, through into electronics and hardware and manufacturing stuff. So, yeah, let's get started. I mean, this I'm going to run through reasonably quickly, because I think a lot of you would already know a lot of this sort of stuff. I mean, you know, the Internet of Things, as we see it at the moment, often has things tied, you know, tightly coupled to servers on the other side of the world, which is not really a very useful way of doing things. So, you know, if there's a server outage there, it affects you despite the fact that you're nowhere near where the servers are having problems. So, you know, we have that kind of a problem. Then that's on top of the fact that lots of us are installing always on microphones and potentially cameras in our homes that are just going to send all of this data on a whim over to Silicon Valley or anywhere. We don't really know. We don't really care in some ways. We're just kind of going, oh, well, you know, it's useful. I can control things. And I kind of put up with the fact that I'm giving all this data to somewhere else, you know, maybe we don't want to do that. And then, you know, there's a cool startup and you buy a device and it's really awesome. And then like five years down the line, they send you emails going like, oh, it's so amazing. We got bought by Google and like, we're going to be fantastic now. And oh, yeah, by the way, this thing that you've, you know, taken into your home or into your life and, you know, started to really enjoy and love. Yeah, we're kind of, we're just going to turn that off. Really sorry about that. But like, you know, isn't it cool? We got bought. Or it's just Google decided that they tried that for a bit and then they're going to just go, well, actually... And, you know, it's not just Google. It's many startups, many big companies. You know, it's that kind of, if I buy a fridge, I'm potentially going to have it for like 10, 20 years, something like that. Is the service that it's tied to going to be around for that long? Like how can I ensure that that's the case? And then we get into kind of, you know, if you're into this kind of faster cycle of replacing things, what are the sort of sustainability questions around that? We want to be kind of getting away from lots of e-waste and plastic being used for everything. And similarly, there are questions around sort of conflict minerals and what is it that goes into our devices that we're using and who makes them and what are the working conditions for them. And there's a whole load of, you know, interesting questions that aren't really unpacked as much as they should be around this sort of space. So, you know, like we're the sort of people who kind of go, well, open source can solve some of these problems. We use it. We take it on board to solve some of these problems. And I think it can help here. You know, we can do things. This is the WLED software.
This is some awesome open source software for running on the little ESP8266 and the ESP32 devices and controlling those in the pixels. So this is a screenshot from a board that I make. I didn't write the software for this. This is, you know, some really awesome open source software. I've added some open hardware to it. There are other people building devices, you know, hardware and stuff to go to run this software. But, you know, this is being served off the device itself. So there's a little web server on there that lets me configure things. So the data just stored on the device doesn't matter if I disappear. My company stops making the devices that you're running this on. It's not going to make any difference to you. It's your device. You're running it on your network. You can control it. And similarly, you know, when we want to talk out to the outside world and to other things, then leveraging open standards. So using MQTT or DMX, you know, they're pretty common open standards. You don't have that. This is tied to a specific provider. And if their service goes away, then all of the functionality disappears. You know, I mean, if you're talking to an MQTT server and it gets turned off, then yeah, your device is going to stop working. But, you know, you can spin up your own MQTT server or you can connect to a different one. You know, you can find a co-op running a MQTT server and give them a bit of money to run the service and you can carry on using it. And similarly, you know, as we get into the sort of electronics and the hardware side of things, open schematics help. I mean, you know, this is a screen grab from this recent blog post looking at checking to see if the Amazon Echo, like there's a mute button on the front of it, which turns red LED on, and you kind of assume that it's turned the microphone off. But who knows? And this person did work that out. You know, he cracked his device open, tracked, you know, followed the trace through, but then got to a chip and it's just like, I don't know what that chip is. And so he was, you know, had the skills that he could etch the top off the chip and look at it and understand what he was seeing. Like I couldn't do that. I suspect lots of other people can do that and non geeks definitely can't do that sort of thing. So open hardware where it's just like, oh, you can go and look at the schematics. You can see what chips are used. It means that you can start to understand that like when, you know, does this mute button, and in this case, the mute button does it like it turns off the power to the microphone. So it definitely, you know, there's no software involved. It's purely electronics. So the mute button does work, but how do you work that out? How do you sort of satisfy yourself and an open hardware can help with that because it can open up, you know, what's being used and where it's being used. And this is more of a work in progress, but we can start to open up our supply chains. You know, if it's an open hardware and I'm already telling you what parts are in the device and showing you the schematics for the for the PCB, then like, why shouldn't I go? Well, actually, this is where all the components come from. So you can get sort of, you know, we can start to build up an idea of of where things come from. This is from a device I made a few years ago, the Acres Bell, showing that I basically electronics is mostly made in the Far East. I mean, it's not all China. There's lots of different countries in the Far East. 
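Stepping back to the open standards point from a moment ago, here is a minimal sketch of what talking to a device like this over plain MQTT can look like, using the paho-mqtt Python library (1.x style callbacks) on your own network. The broker address and topic names are made-up placeholders rather than WLED's actual defaults, so substitute whatever your device and broker actually use.

# Minimal sketch: control a light controller over plain MQTT on your own
# network, with no vendor cloud in the loop. Broker address and topic names
# are placeholders -- substitute whatever your device/firmware actually uses.
import paho.mqtt.client as mqtt  # pip install "paho-mqtt<2"

BROKER = "mqtt.local"                   # your own broker, e.g. Mosquitto on a Pi
COMMAND_TOPIC = "lights/workshop/set"   # hypothetical command topic
STATE_TOPIC = "lights/workshop/state"   # hypothetical state topic

def on_connect(client, userdata, flags, rc):
    print("connected, rc =", rc)
    client.subscribe(STATE_TOPIC)
    client.publish(COMMAND_TOPIC, "ON")   # turn the lights on

def on_message(client, userdata, msg):
    print(msg.topic, msg.payload.decode())

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect(BROKER, 1883, keepalive=60)
client.loop_forever()

The point is simply that because the protocol is an open standard, the broker can be a Mosquitto instance on a Raspberry Pi in your house, a co-op run service, or anything else, and the device keeps working regardless of what happens to the company that sold it.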
For some reason, the tactile switches I got were from from France. So, you know, one electronics component out the whole device wasn't from the Far East. With the My Baby's Got LED board, the one I was talking about earlier, that's moved a little bit. I've just recently done the kind of looking through that for that. And the fuse and the fuse holder are made in America. So like that's kind of shifted the the centre of gravity of the supply chain a bit further west. Like, you know, there's lots more that we would need to do to get into properly understanding the ethics around the supply chains and understanding the supply chains properly. But with open hardware, we can maybe start to take some of those steps and start to crowd source it. And if you've done some work on it, then I can leverage that if I'm using the same parts in my devices. So as I've kind of touched upon a little bit, this is a kind of journey that I'm running along at the moment. I have a long way to go with all of it. But I thought I'd share kind of how I've gotten there so far. And hopefully show that it's not rocket science. You know, there's bits of computer science in there and bits of electronics engineering. But I don't have any qualifications in electronics. My background is software and then the last 10 years or so, actually a bit more than that now. I've been doing electronics as well and just slowly building up my skills in there. And you know, I'm not at the level where I could lay out a Raspberry Pi or something of that kind of class. But there's a whole load of internet of things stuff that don't need to be that complicated. And I totally, within the capabilities of less advanced electronics people, shall we say, to be able to pull together. And we can use open tools for all these things. That's what's awesome about this stuff and the open source world. KeyCAD is just great these days. Really good for laying out your electronics. You know, you lay out a schematic, shows where the different things go. This is the schematic from the My Baby's Got LED board, which is this board here that I've been talking about. You plug a PC power supply in the back of it and then you connect that to a load of LEDs. And then you can control them with that UI I showed earlier and make lots of fancy, like pretty patterns with your lights. So, yeah, this is the schematic. It's the first step. You kind of lay out, you know, this connects to that. In a kind of more conceptual scale. Once you've got that done and there's a bit of bouncing back and forth between these two things. But, you know, mostly it's like, yeah, lay out where stuff connects and how this sort of how it's all wired up electronically. And then work out how it's going to be laid up physically. So this is the kind of saying, OK, well, that's going to be positioned there and there's going to be a wire that's going to run across to here. And all that kind of stuff, which then lets you lay out what the board looks like. And then the output from that is something that you can send off to be manufactured. I mean, you can mill the PCBs yourself. I've done that once or twice or etch them and that kind of thing. But these days it's super cheap and super easy to get printed circuit boards manufactured, especially if you don't mind waiting like a week or two. I mean, you know, yeah, like any of these days, the amount of time you have to wait is just coming down and down. So these are a couple of kind of my go-tos. There's Osh Park who manufacture stuff in the States. 
Do really nice purple PCBs or they've got some really cool after dark ones. If you're worried about it, but not worried if you're interested in kind of having ones that look cool. Drew is probably kind of milling around. I don't know how you mill around in the virtual kind of online conference, but I'm sure he'll be around somewhere doing who is works for Osh Park and is always happy to chat to people about how to get things manufactured at Osh Park. You go to their website, you upload your Gerber files, then they send you pay them a bit of money. They send you PCBs in the post same sort of thing for JLC PCB. They're over in China. You get a bit more options over the colours of your PCBs. So, yeah, like the ones I use for some small batches, like if I'm getting a few initial ones just to test out, see if it works properly. If I'm getting more done, then I'll often move towards European circuits because I prefer to manufacture in the same country that I'm in. So European circuits are up in Glasgow here in the UK. They're really nice. They'll do PCB assembly as well. We'll get to that in a minute, but yeah, they can do. They'll do boards for you and there's loads more places. I mean, mostly it's kind of ask around, see who your friends are using and try them. And so far, so kind of like, you know, this is just the maker movement, right? This is people who are taking boards, working out how to build things for themselves so that they can get all of this, you know, open goodness and not be beholden to all sorts of other things. And that's like, you know, I'm totally a maker. That's fantastic. There are more and more people doing this sort of thing, but not everybody wants to make their own circuit boards or solder up their own circuit boards. And and like we need to get this stuff out into the into the masses, you know, otherwise everybody else is just beholden to whatever Apple, Amazon, Google, whoever decide to make for them. And like it feels like we could be just there's many, many niches that aren't being served by those massive corporations. And and those niches are totally big enough for us to be building sustainable and interesting businesses in. And and that will also help us, you know, help to provide more of a groundswell of like, well, why do I have to connect to the some servers in China or in California and send all my data there? Like, why? Why can't I just keep all my data locally? You know, like all of those people do that. You know, the more companies there are, the more options there are for people, the more they can demand that the bigger corporations don't just hoover up all. I mean, not just the big corporations and the startups and what have you. Don't just hoover up all of our data because that's the only way that you can do it. Like it's not and we can show that and the more of us you show that the better. And so the thing that you get with going from I can make a few boards and order them from our park or something and make up something myself like to then OK, how do I sell them to some other people? And you start to get into kind of batch assembly and manufacturing stuff. And so, you know, if you want to assemble a bunch of boards, I mean, even if you're doing one board, then I would still do it this way. This is me in does the pool the makerspace that I'm part of. Well, we've got a reflow oven, which is the thing in the corner behind the shiny stencil at the moment. And that just lets you, you know, lay out all the components. 
You get a stencil, you smear the solder paste through, you place all the components carefully with tweezers and then you pop them in the reflow oven and that does all the soldering. So you want to do surface mount stuff preferably because that's much better to to turn into something you might want to manufacture. And and if you're not doing this a few boards like this or you want to do, you don't want to do the assembly yourself. You can pay other people to do the assembly for you. European circuits, for example, they they assemble the museum in a box PCBs that is another project down part of and do a really good job. And we just get assembled boards that are fully soldered up and ready to go. And so once you're getting up, you know, this is great if you're doing like maybe tens of something once I get above that. If I'm doing more than 50 or 100, then I'm probably going to get somebody else to do the assembly for me just because like, you know, it's quite nice to do this, but there are machines that can do it much better than I can do it. So I'm totally affordable to kind of put those sorts of things in. And then, you know, getting beyond just like, can you make a circuit board? It's like, well, a circuit board's cool, but it's not a product. You need to put it in a box for it to be a product. You know, it needs to be a thing, a finished thing. I mean, you know, having said that, I do sell the pretty much better PCBs for the my baby's got a board, but you know, this is where we're going with these things. It wants wants to be something that looks like it's a device you've gone and bought from the Apple store or something. And free card is really good for doing the 3D design for things. And then you need to worry. I mean, you can 3D print stuff from this, but mostly 3D printing isn't that great a technology for how you scale these things up. Because one, you know, it's plastic and ideally we could do with getting away from plastic. And two, it's like the way that you scale up making things in plastic is that you injection mold it. And that means you need to find 10 grand or more to get your tool made that you're going to inject the molten plastic into. So suddenly, like if you're making a small, you know, making a few hundred of something or a thousand of something like that, still a 10 or on your bill of materials. Like you want to get this cost down as low as possible. So, yeah, I'm much more a fan of other ways of doing manufacturing. This is the museum in a box being assembled. So that's laser cut. And you can kind of farm that out and you can cut one or you can cut 100. And then we get into little batch productions so you can see you can do it out of plastic or these nice different colours. Or you can similarly your laser cut will quite happily cut plywood as well. And so that, you know, that lets you do sort of more sustainable materials and pick your kind of scale to match how many you're trying to make at the minute. We're doing batches of 100 at a time for museum in a box at the moment. And at some point we'll sell more and then we might think about the injection moulding or something, but maybe not. You know, maybe we could also go with CNC machining. So this is the CNC router in the megaspaces. This is a part that I was milling out. And like that mean you can mill it out of wood so you can make boxes and things out of wood and you can do one, you know, one or many. 
And so like maybe these are different ways of exploring things and CNCing is a way of producing your different enclosures. Apple are using that for the Apple Watch to be able to machine aluminium. So it's a kind of, it is scalable on some level, but it's and it scales from one rather than injection moulding, which scales from however many thousand you might want to make. So, yeah, like there are these different ways that we might want to make things. And then once you've made stuff, then you need to sell it. And, you know, this is my kind of like one plug for the board going by these boards. This is on tindi, tindi.com is good for selling electronics particularly. It's working out how you're going to get word out, how you're going to get that out into the shops and let people buy it. But it's quite easy to do and scales from you selling one or two or something and lets you build up stuff that shopifies great if you want to run the web store yourself. Because we need to get this stuff out into the world. We need people to be able to buy it. We need people to be able to kind of play around with stuff. And so what's next? What's the future with all this sort of stuff? I mean, as I say, and as you can see, this is like this is still a journey I'm going along and trying to find ways to make products in a more sustainable way in a way that doesn't have to share your data with with the company that you bought it off just because you happen to buy it off them where you can control things. And like how do we make manufacturing more sustainable? We need to solve these sorts of problems if we're going to kind of tackle climate change and we need to get that sort of stuff out into the world more because people aren't going to stop buying cool gadgets because of climate change. Like we've worked that out so far. So we still need to find ways for people to be able to buy interesting and exciting stuff without killing the planet. And there's loads of challenges in the kind of software stuff as well. I mean, the, you know, I'm interested in how we get web servers, the web servers that were running on the device. How do we get certificates into that so we can do HTTPS? At the moment, it's reliant on the fact that you're running on a local network that you're trusting, which isn't great. Like it's it's you're winning on one side, but you're kind of losing on another side. So like how do we get a step forward so that we can win on both sides? And like how do we manage many devices? And then just, you know, what more products are there to make? I mean, I'm kind of looking for gaps where there are ways to produce a nice little device that lets people kind of, you know, control their smart home without having to have a microphone connected to Amazon. And and lots of, you know, what are the niches? How do we fit the fill those niches with ways that turn something at the moment is, oh, yeah, you kind of need to know how to use a soldering iron and to wire up a circuit and build a little device and maybe do some 3D printing into like, oh, well, you just buy that. And then you just need to do a little bit of configuration on it. And then it talks to these other things. So, yeah, I guess kind of watch the space and come along on the journey with me. I could more of us who are doing this in the manufacturing the better so we can get like more of these devices out filling more niches, working with each other with the open standards and just, you know, making the world better. So, yeah, thanks for listening. OK. 
I did turn down a thought might have worked with my headphones being fine. OK. So, are we? Yeah, I think we're on. So, so Adrian, thank you for the talk. It was really interesting. Still just showing me. So what about a so one question we had here was talking. This is great talking up from Alex London talking about products. He wonders about certifications. It's a little bit ended, but yeah, I mean, it's a it's a good. It's an interesting. It's an important question. It's one of the kind of, I guess, missing blocks in in how we can do stuff at a really small scale upwards. I mean, there are some of the things we can do like the ESP 266 and 30 ASP 32 modules that I'm using on the board. Our ones are pre certified, and so that covers a lot of the kind of radio side of things. Like if your module been pre certified, then as long as you don't modify it, then you don't need to go through all the kind of radio certification for FCC C marketing that kind of thing. But there's still a bunch of CSE mark testing and things that that kind of needs to be done and is is still quite expensive like it's, you know, five 10 grand sort of thing to get products through that. So I mean, a lot of the time people end up selling stuff that isn't fully certified. I mean, I'd like us. It'd be nice to get to a way a pay a point when we could do like. Like for cars in the UK. If you're making a kit car, you don't have to write off five of them to show that it's safe in a crash. Like, you know, obviously Ford needs to write off the kind of escort or whatever. Not that they made that anymore, but to prove that it's safe. And that that makes sense because there's millions of them out there. But if you're making a kit car, you take it along somewhere and somebody like has a look over it and says, yeah, this looks like it's OK. And then you get a single vehicle type approval that means it's OK to drive on the road. Like it would be great if we could get to something similar like where the kind of the burden of getting certification is similar to the scale at which you're making things. Because it's mostly about mitigating risk. So, yeah, that's one of the things I'd like to chip away at. I'm not totally sure. I mean, maybe I need to talk to more kind of policy people to kind of try and help them to to kind of point poke away at that. It feels like that's about, yeah, I guess we're slowly breaking down all of the other problems. You know, it was hard to design boards. Keycard makes it much easier and all the kind of Osh Park, JLC, et cetera, et cetera, make it super easy to get the boards, making the kind of enclosures is easy. Now we've got 3D printers in the likes. And, and that, you know, like the certification is one of those last steps, maybe because there aren't enough of us doing it yet. So, you know, everyone else needs to join in as well. And then we can all start to kind of agitate for more of this sort of thing. So, we're going to Paul Barker's talk about an hour ago, which where we talked about license compliance and scaling that to the size of the organization. Then, uh, uh, razor asks a question regarding standards. What do you think of the W3CWT approach? I'm, I can't remember too much about it off the top of my head. It's something I've, I've seen in the past from what I remember. I think it possibly required you to have kind of like it was set up around a hub sort of approach, if I remember rightly, because I think that's probably why I haven't dug into it in great detail. 
Because I'm, I'm always much more of the opinion that our devices should be first class citizens on the web. And so we shouldn't be kind of going through little kind of bridges and gateways and so on and so forth. Like if my, my board wants to talk to, to, I don't know, to Twitter or something. Like it should make the same calls that the Twitter API uses, which is difficult, but I lived through WAP on the mobile phone world when we were getting web browsers. Exactly. I was running web browsers for the mobile phones at that point and we had a proper web browser that did like the same things that Netscape did. And, and then WAP came along and won out in the kind of marketing world because everyone was like, oh, you can't possibly do all this complicated web stuff on mobile phones. And it's like, and that should be able to dead end for like five, 10 years or something. And, and so I'm always wary of repeating that mistake, I suppose, and, and finding the right ways, you know, trying to try and be first class citizens. So I think that's why I didn't, but yeah, I'll go and have a look at it again and see, see where things are because some of my, my thought thinking is coming around more to like, you know, run a little web server on the device and manage and configure things on the device itself. Because I suppose partly like my, you know, the devices have moved on. Like I started doing this like the first connected device I did was in like what connected IoT device was back in like 2008 with Bob Lino. So, you know, at that point it was running on that mega three to eight and now I've got a, yeah, ESP 32 or whatever running with loads more memory. Since we get, since we tend to get a abruptly cut off here, I just want to ask everybody who added some really including you, Adrian added some really great links in the comment section in the main bedroom. If you could add those once the, once the discussion topic opens up for the talk here, maybe you could copy those links over into the into that talk so though so people can find them if they watch your talk later on. That would be a great. Yeah. Yeah, that's a top idea. I'll, yeah, I'll pull them across and yeah, I'm going to be hanging around in the chat.
|
We all know the benefits of open software, but not as many of us take the step into designing and building the hardware to run it on. Consumers are left with a choice of mass-market devices - hoping the company doesn't turn off its servers, and doesn't sell their data - or going DIY and soldering up things themselves. We need a wealth of Indie Manufacturers, building open hardware devices to provide more options and freedom to end users. This talk shows MCQN Ltd's path on that journey and how you could follow it too. We'll look at the whole process of making an open hardware product, from the design through to batch manufacturing and shipping. Sharing the open-source tools used plus the services and kit you'd need.
|
10.5446/52494 (DOI)
|
Hello everyone, I'm Cyril Hrubis and I would like to talk about the MutantC, which is a PDA you can build at home. Basically it's a hardware shell for Raspberry Pi form factor boards. But first of all, let me just try to explain why I think the project is interesting. And to be honest, I always wanted to build a customized computer that would fit into our pocket. And when I found out that somebody is working basically on the thing I always wanted to have, I decided to join the project and started to contribute. So this is how it looks like. This is a version 3 revision of the hardware. I do have the version 2 here. It looks mostly the same. The main differences are how it's built inside and what features the version 3 has. So let's start with the specification. It has a sliding display with a touch screen. The resolution is not that great, but I think it's okay for the size of the device. And also I would like to try to fit an e-ink display in there as well, because there is a 3.7 inch big e-ink display that would fit the hardware without any modification. The complicated part would be the software, since you have to use a special library to draw on that e-ink. But let's see, maybe I will manage to fit it in there and do something with that. It has a hardware keyboard and joystick for a mouse, a rechargeable battery, there is a buzzer, LEDs and so on. It also has a docking connector that you can use to charge the device. And there are also buses connected to there so you can talk to the device as well. It has a real-time clock that can also wake up the machine, which means that you can program the RTC controller from Linux via the standard kernel RTC interface, turn the device off, and when the alarm fires it will turn on the power switch and the machine will boot, so you can actually use that device as the alarm clock. It has space inside, reserved for the add-on board. There is an add-on connector that has some of the buses from the Raspberry Pi routed there, and you can also turn on and off the power to that connector. I do plan to fit a GPS module in there, but you can basically put anything you want there and swap the modules when you want as well. It includes various sensors inside, there is a gyroscope, accelerometer and so on. And why I think it's interesting to play with the device, it's reasonably easy to build. We use SMD parts, but we tried to limit the number of these, and also we tried to use the bigger ones so you can solder it easily at home. It's also affordable to build even in small quantities. It's easily upgradeable, you can basically swap the board, the single board computer inside, as long as the connector is compatible, it should work, and it's easily repairable. You basically build that thing so you have everything you need to repair a broken part, and it's also easy to modify. Not only do you have the full documentation for it, it also uses standard components such as Arduino that are easy to program. So it's actually easy to modify the firmware even for less experienced programmers. So how it's put together, the body is 3D printed and it's held together by a few screws. Most components are soldered or connected via connectors to the main PCB, that's the PCB that the keyboard buttons are soldered on. There are two more small PCBs, one of them is for the display connector and a few LEDs, and the second one is for the joystick that implements the mouse.
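As an aside on the RTC alarm-clock use just described, here is a minimal sketch of what that looks like from the Linux side, using the standard sysfs RTC interface. It assumes the clock shows up as rtc0 and that the script runs as root; which RTC device the board's clock chip registers as on your image is something to check.

# Minimal sketch: schedule an RTC wake-up via the standard Linux sysfs
# interface, then power down so the hardware can cut and later restore power.
# Assumes the RTC is rtc0 and the script runs as root.
import pathlib
import subprocess
import time

WAKEALARM = pathlib.Path("/sys/class/rtc/rtc0/wakealarm")
WAKE_IN_SECONDS = 8 * 60 * 60   # e.g. wake up again in eight hours

def set_wake_alarm(seconds_from_now):
    try:
        WAKEALARM.write_text("0")   # clear any previously set alarm
    except OSError:
        pass                        # nothing was set, which is fine
    WAKEALARM.write_text(str(int(time.time()) + seconds_from_now))

if __name__ == "__main__":
    set_wake_alarm(WAKE_IN_SECONDS)
    subprocess.run(["poweroff"], check=True)   # hand over to the power circuitry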
And it has a sliding display connected via a flat cable, which is something that's kind of impressing that you can 3D print plastic parts that slide quite nicely, and there is a locking mechanism for the display as well. I just find it impressive that you can 3D print stuff like that these days. So this is how the main board is connected, I simplified it so that it fits on the slide. So as you can see on the right side there is the charger that is completely independent from the rest. It was chosen to be like that for the sake of the simplicity, so you don't have to worry about the battery or battery protection or anything like that, it's just done by the module. On the left side you can see that the Raspberry Pi is connected to the main controller, which is Arduino Pro Micro via USB, and the USB implements the HID protocol, which means keyboard and mouse, and there is also the UART, the serial port that you can use to talk to the Arduino, and you can read the battery voltage or internal temperature through that port, there is a command line tool for that at the moment. And you can also set some parameters like the thermal shutdown threshold, basically when the machine thinks it's overheating, when the Arduino measures that the temperature is too high, it turns the machine off. And there is also one GPIO connected from the Raspberry to the Arduino that's there for the power off signal, that basically means when the Raspberry operating system is halted, the GPIO changes the state so the Arduino knows that it's safe to cut the power at that moment. The Arduino also implements the keyboard via 16-bit i2c expander, the joystick and few LEDs and so on, and it also can switch power on and off for the LCD, it also can switch the power for the expansion board and so on and so on. Then there is this picture that shows how the Raspberry buses are connected, basically the SPI is used for the display along with a few more signals that are needed. The i2c is connected to the real time clock and few more sensors, it's also routed to the expansion board header and the docking connector, the same with the UART that's connected to the expansion board and the docking connector and that's basically it. Now we should know how it's put together more or less. If you want a more detailed picture there are PDFs in the repository, we love the repository so that's it. If you like the project you can have a look at the home pages or the git repository, the git repository should contain everything you need to build the project. There are 3D printed parts in there, there are also files that you need to produce the PCBs. Basically you can just send these to one of the factories that produce PCBs and you will get them later on and so on and so on. There is a firmware, there is a command line tool for talking to the Arduino or the serial as well and there is also a YouTube channel where you can see the device in action. Let's see if we get online or not. It looks like it's coming up. Hello Cyril, thanks for joining for this Q&A and thanks for the talk. I think we have a couple of questions from the audience. The first one was will it be open source hardware? Actually I think it's a compilment with the definitions because the files should be all there in the git repository and it's a combination of MIT and GPL for the software. I haven't really checked but it should be open source completely. 
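As a small aside before the rest of the Q&A: the serial link to the Arduino mentioned before the questions started is easy to script yourself as well. Here is a minimal sketch using pyserial; the device path, baud rate and command strings are purely illustrative guesses rather than the firmware's real protocol, so check the command line tool in the repository for the commands it actually sends.

# Illustrative only: query a value (e.g. battery voltage) from the board's
# Arduino over the USB serial link using pyserial. The port, baud rate and
# command strings are hypothetical -- the real protocol is whatever the
# mutantC firmware and its command line tool implement.
import serial  # pip install pyserial

PORT = "/dev/ttyACM0"   # hypothetical device node for the Arduino
BAUD = 115200           # hypothetical baud rate

def query(command):
    with serial.Serial(PORT, BAUD, timeout=1) as link:
        link.write(command.encode() + b"\n")
        return link.readline().decode().strip()

if __name__ == "__main__":
    print("battery:", query("battery"))           # hypothetical command
    print("temperature:", query("temperature"))   # hypothetical command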
I guess the schematics and the mechanical drawings also need to have some kind of open source license for it to qualify as open source hardware. I think it should be MIT but I will have to check later on. The next question was about the plan to add connectivity. It was mentioned as modem but I guess there could be other ideas as well. I guess that as long as it fits into the slot for the expansion module you can just build that yourself. All the things are there. And so what sort of interfaces does this expansion module provide? There is a header. As I said there is a header for expansion module and there is a UR and I2C and Favre wrote it in there and there is a space inside in the PDA so that you can fit something relatively small but I guess that small modem or something like that should fit probably. Another question was about I know if it's compatibility or the plan to use the compute module 4 from Raspberry Pi. I guess that this would be complicated because it uses different form factor and the whole thing is optimized to really fit the Raspberry Pi form factor. The case would have to be modified quite a lot so I don't think that it will be easily possible. Another question was about whether you plan to sell this kit or something like that for those of us who cannot put together a PCP. Yeah I'm not really a guy who would like to sell stuff. I like to build it. But you know what I'm trying to do? I'm trying to buy the components for something like 5 devices and distribute it among my friends which is easier than selling the kits. So if you find somebody who is geographically close who wants to build the PDA you can just buy the PCs together it will be cheaper like that. And do you have an idea of the overall cost for building this extension PDA? Well basically the most pricey part is the display. If you buy it of AliExpress it's something like 15 or 20 euros and the rest is basically for the price of the peanuts. It's really cheap. So would you say the overall cost would be below 50 euros or below 60 something like that? Yeah but basically you have to really buy for more than single device.
|
MutantC is a open source and open hardware shell for a RPi form factor boards. It includes hardware keyboard, sliding display with touchscreen, battery with charging circuit, etc. This short talk will be introduction of the project, it's goals and of the v3 hardware revision.
|
10.5446/52496 (DOI)
|
Hello, my name is Suhasini. I work as a software developer at Analog Devices. My colleague, Piotr, and I are here to talk about network audio in Android Automotive and some of our work done as part of the GENIVI Android Automotive Special Interest Group. Using Android in the Automotive domain is not as easy as it is in the smartphones. And that is because of the various complex applications in cars. Just to give you a brief about the various applications, we have some applications in the power train with battery management and fuel level indicators. And then we have the infotainment segment which has telephony and voice assistance similar to the phones. But also other applications like rear seat entertainment units and complex audio processing applications like noise cancellation which are much more complicated than the phones. We also have ADAS or the advanced driver assistance systems like cruise control or parking assist, etc. And then with autonomous driving evolving fast, we also have a lot of perception and control applications coming up. And then there's the connected car applications with vehicle to vehicle and vehicle to everything connectivity. These are just some of the applications. There are a lot more applications in the cars. And all of these applications are realized via various networks in the car like A2B, CAN, LIN, etc. So the question really was can Android Automotive cater to all the needs of these applications? And we wanted to investigate further, but more in the audio and infotainment segment. So we started with the audio system design architecture. We wanted to see what are the various roles that Android could play in this audio system, the audio subsystem in the car. A common car audio system architecture is shown in this diagram here. You have the head unit and an amplifier in the trunk, some speakers and mics and also the rear seat entertainment unit. We have more mics and speakers for more advanced system applications. With such a system there comes various system design approaches. One of the options is that Android only provides the sources and sinks and the rest of the configuration and the controlling and processing of audio happens externally. Another option is for Android to control the system completely, to do the configuring and the controlling as well as provide some sources and also do the audio routing all by itself. Now these two are actually extreme options and we can't say that just one of them will be used because in most systems there will be a need for a combination of both of these coexisting. Like for example in the system shown just some time ago, maybe the rear seat entertainment unit might use the first approach, giving out only the sources and sinks, because in a sense it's a lean node and without much complexity. But with the head unit operating system, maybe it takes the second approach, controlling the entire system, doing the processing and routing. And sometimes maybe it might also be that Android is merely routing the audio streams but then the configuring of the entire system and controlling happens from somewhere else. So we really need to have Android be flexible to all of these approaches. One problem statement which was common to all of these approaches was that of extracting raw PCM streams out of the Android context. And we started to investigate this as the first step. Now, Piotr will give an overview of our POC. Hi, my name is Piotr Kraptrik, I'm a TietoEVRY employee and a member of the GENIVI team.
This is the first design that we had in mind; the intention was to find out what the challenges are when implementing an audio HAL without the support of SoC vendors. We assumed we would like to have a hardware-independent solution that exposes audio data to external clients for further processing, whether that is just playback, applying additional audio effects, or mixing with other audio sources. We also wanted to check if there is a need for an additional service or feature. That's the reason why you can see here a separate GENIVI audio framework and GENIVI audio HAL: we were planning to add there any features that we believed Android was lacking. At the time when we were doing this design we encountered some missing features or problems, for example defining a non-linear volume curve, or means to control audio effects globally by the system rather than by the applications, which can be used for calibration of the platform. But currently Android has already somewhat improved or solved these problems, so this is still an empty placeholder and there is no real functionality behind it. Still, we are investigating other possible gaps and features, because Android was designed to work with phones, not with the automotive environment, and a mobile configuration is rather straightforward compared to one that we can find in modern cars, with plenty of speakers, safety sounds, external amplifiers or audio zones. At the same time Android is still evolving and we can see many new features coming with new Android versions. So we still have this placeholder, but for this moment and for this POC it is empty. For demonstration purposes we chose the x86 platform. It gave us the possibility to utilize pure AOSP code with an already working emulator. The setup also shows that a hardware-independent solution should work in a virtual environment, which is a common solution nowadays, as Android can be treated as a kind of virtual device; in this configuration it is working with a Linux virtual machine. It actually somewhat resembles a typical automotive setup with a hypervisor, host and guest. Android is also putting some effort into virtualization, adding virtualization support and creating projects like trout and cuttlefish that can be used as testing facilities. This POC was actually run with the Android Q emulator running on a Windows host and Ubuntu 18 deployed on a VirtualBox virtual machine. Communication between the emulator and the virtual machine is based on TCP, realized by the VirtualBox routing mechanisms. The following components were created for this POC. We have an additional device configuration, based on the automotive variant for the x86 target. We have created an audio HAL that is derived from the emulator one, and an audio relay component that is responsible for forwarding audio data from the audio HAL to external clients. The zones and audio context features of Android can still be utilized, as contexts and zones are transferred via separate sockets; in this case the external client can manipulate the volume or mix the audio with other sources depending on priorities that can be configured there. Configurations of contexts and zones are also delivered with this POC. For demonstration purposes we defined two zones: in the first zone contexts are separated, in the second zone they are mixed together and delivered over a single channel. This configuration can be easily changed and adapted to current needs. This diagram shows the audio data flow in the current concept. Note that the audio HAL serves audio data via named sockets to the audio relay.
The reason behind this is that Android SELinux policies forbid the HAL from communicating via network sockets. Other solutions more tailored to a chosen configuration can of course be applied, like shared memory or other platform IPCs; in that case the audio relay component can be removed from the flow. So in the end we have a simple prototype solution that allows us to get audio data out of Android and pass it to an external receiver like the host audio subsystem or an external amplifier. You can get the code from the mentioned link. Feel free to use it, comment on it and suggest any changes. Thank you. Thank you. When we talk about the role of Android, it's not just confined to the head unit, especially if we're talking about cases where it's used as the master controller. It will have to interact with several other hardware components, some on the same SoC and some others elsewhere in the car, connected via a network. And this is where it differs slightly from phones, because in phones the Android operating system need not worry about a device that is far off somewhere down in the network, but in a car it is a very real use case, a very possible scenario, where you have the trunk amplifier or a rear-seat entertainment unit connected to your head unit via a network. So we need Android to be able to control or operate these remote nodes as well. And not only would we be controlling devices like the trunk amplifier or the RSE, we also have the task of transporting the audio itself over the network, and to do that we have protocols like Audio Video Bridging (AVB) or A2B which help with this, each having its own pros and cons of course. In the following slides we will just briefly talk about the two protocols that we are investigating in this group for integration with Android, AVB and A2B, and also provide some resources for their open source code. Audio Video Bridging, or TSN as it is now called, is a series of standards which was brought in to help realize time-sensitive applications over Ethernet. Ethernet in essence has a few hurdles when it comes to catering to audio applications. One of them is that there is no concept of time in Ethernet: it is packet-based, the transmission is asynchronous, and each device has its own clock, whereas when we talk about audio transmission we of course need it to be synchronous. To overcome this we have the gPTP protocol, the 802.1AS standard; to cater to non-deterministic delays the 802.1Qav standard was introduced; the stream reservation protocol was introduced to cater to bandwidth reservation needs; and 1722, the audio video transport protocol, provides interoperability. There are many more standards in this family. The AVTP protocol has a Linux implementation and it's linked here. The next link, the tsn.readthedocs link, has a nice document on getting started with AVB. The other network we investigated is the automotive audio bus, A2B. This is a low-latency digital bus for high-fidelity audio. It can transport multi-channel audio with a deterministic latency of less than 50 microseconds. A2B is a synchronous bus, so it also carries a clock on the same bus. In addition to audio you can also send some control information, with I2C over distance and GPIO over distance, and the other nodes on the network are all powered by the bus itself, so you don't need any separate power supplies for these other nodes. Some implementation for A2B already exists and there is a link to the kernel driver here.
This is not yet in the mainline Linux kernel; there is still some work left to get it there, but it should get you started. Both A2B and AVB have Linux implementations that need to be extended to Android. The next step would be to integrate a network and connect the raw streams that we extracted in the POC to the audio network. And not just this, there are tons of other avenues that need to be investigated. Some of them are listed here. There is the Bluetooth headset handling, which is quite different from the phone in the sense that in the phone the Bluetooth module acts as the master, but in the car the Bluetooth module has to act as the receiver. There is other audio processing, like ECNR handling for voice, or other audio processing algorithms that need to run on an external DSP. There are also calibration mechanisms that we can look at further, and how to support them in Android. And there is utilizing the DSPs in various system configurations: we could have virtual machines, containers or actual multiple hardware units. And when we have virtual machines or containers, how do we pass audio streams between them? That also would be a point of investigation. Then there's the aspect of latency in Android: how much latency would Android add, is there scope for reducing it, and when we want low latency, what path do people want to take? These are some investigation points, and there are a lot more that need to be investigated. This was the team that was working on this presentation, and our contact details are here. You have the AASIG mailing list link, and we also have contact details for myself and Piotr here. Please do join us at the AASIG audio HAL working group to contribute. You can email the mailing list to join. And if you want to find out more about this group, please visit the wiki page, the link is here. Thank you so much for listening to us. Okay, so thank you very much, Suhasini and Piotr. So, questions and answers. We have a question on the hardware. Hang on a second. Okay: could you give more details of the hardware side of the automotive audio bus, e.g. number of wires, max length, bandwidth, etc.? Okay, take it away. So, on the number of wires: A2B connects with an unshielded twisted pair cable, so it's just a pair of wires between two nodes. The maximum length between two nodes can be up to 10 meters, and the total length of the entire network from the master to your last slave would be 40 meters. And what was the other thing? Max bandwidth. Bandwidth is about 50 Mbps in A2B today. Okay, cool. So in general, working with Android in automotive, how have you noticed things changing over the several versions of Android? Do you think that the Android developers are getting a better idea of how automotive audio works? Maybe this one is more for me. Yeah, definitely. It seems that from release to release there is more attention paid to the audio side, starting with the audio zones feature that came some time ago, and now this should be fully working, connected to the multi-display features, so that each display should have its own audio zone that can also be switched by the system. And yeah, it was actually a bit redesigned compared to the mobile world, and I expect some new changes coming with new releases. So definitely audio is something that Android developers are paying attention to.
I think that currently we will see some work put into the virtualization side, so virtual sound utilization. But let's see this coming. So, if I understand rightly, your proof of concept was based on Android 10, is that right? Yeah, actually we started thinking about it, I think, even earlier, but in the end it was based on what we could find in Android 10. Okay, cool. And have you looked at updating to Android 11, or is that not an issue? No, we haven't been considering it right now. I think that we have to think about this virtualization issue and perhaps base the communication on some future setup. This could be something quite interesting to explore. Cool. Okay, and one final thing I'd like to throw out there. You are both working in the context of the GENIVI special interest group on Android Automotive. Could you tell people a little bit about that group, how people can join it, and what kind of things you do in general? Well, I could answer that. The GENIVI Android Automotive special interest group was formed to look at what design changes or improvements can be made in Android Automotive for the various applications that are coming up today. We have two subgroups here: one is the vehicle HAL and the other one is the audio HAL subgroup. I can speak more on the audio HAL subgroup. We are investigating several topics around audio in Android Automotive. Right now we are investigating network audio devices, but there are other topics that we will be investigating; some of them were already mentioned in the talk, but we also have a section for Bluetooth, there's a topic on latency and how that can be handled in the car, and several other topics. People can join this group by writing a mail to the mailing list. I can just put the link in the chat here, and there's also a wiki page that people can look at to understand the various other projects that are there and how they can join as well. Maybe I can say several words about the second stream, the vehicle HAL. It aims to combine the world of Android, where the component responsible for providing data from the vehicle is predefined by Google to roughly 80 values, with the automotive world, where there are models consisting of thousands, actually tens of thousands, of values that can be of use to applications. In this vehicle HAL group, people are working on trying to combine those two worlds and come to a common idea of how to serve those two sides in some useful way. Okay, good. Well, I wish you every success with your work on Android Automotive and with GENIVI. So, unless you have anything further to say, I will close this chat down. I want to thank you both very much once more for a very interesting talk, and we hope to see you again at FOSDEM next year. Thank you. Thank you. Bye.
|
The modern vehicle audio system is built with a number of networked components that are needed for many complex and integrated functionalities such as active noise cancellation, warning sounds, diagnostics, etc. Thus, complex and flexible audio setups are a fundamental design need for modern vehicles. The GENIVI AASIG analyzes various scenarios of integrating Android in this complex setup and analyzes the maturity and gaps of the Android Automotive solution in this context. This talk aims to highlight some of the findings of the group and discuss further investigation topics in this area. The talk will give a short overview of the audio system design choices with Android. As part of the analysis towards integrating Android in the complex setup that exists today, the first step the group took was to extract raw PCM streams out of the Android context. The talk will discuss the implementation of this and the various design tradeoffs and decisions that were taken. The next step would be to connect Android to the audio network. Currently A2B and AVB are being investigated. Further investigations include topics like: ECNR handling, calibration mechanisms, and how to utilize DSPs in various system configurations (virtualization, containerization, multi HW).
|
10.5446/52231 (DOI)
|
Hi, I'm here to talk about building little languages. So the first obvious question is: what is a little language? Well, it's not a big language like Rust. It's usually something small and simple for some specific problem, designed just to solve that problem really well. Hopefully not Turing complete, often used together with other languages, often interpreted or otherwise not compiled to machine code. Your configuration files, hopefully not Turing complete, hopefully not like sendmail. CSS is another example: relatively simple, often in combination with other languages. Then query languages are often little languages, though SQL gets pretty big, and templating languages too; and there's kind of a spectrum between big and little. My particular little language was motivated by this specific example of transit data. In addition to being a Rust enthusiast, I'm a transit enthusiast, and I like to know all sorts of obscure facts about the transit system in my hometown of Boston. Fortunately the transit agency in Boston publishes all sorts of data in open formats, including real-time locations for all the buses and trains, so you can see where everything is. They publish this in protocol buffer format; you just download it from a URL, and it gets updated every 30 seconds. And of course I have a script set up on my little rented server out on the Internet to download this data every 30 seconds and keep a giant archive, so I can go back and see how things were working during the terrible snowstorm. Well, in theory; I still have to actually have some tools for dealing with this data. Part of the problem is that it's a lot of data. You could just stick it in a database, but it's like 200 megabytes a day, so it starts to add up for my cheap rented server on the Internet. So I just store it the cheap way, which is concatenated and xzipped; it compresses really well. But there's not really a good way to search through it. So I figured I'd just build my own little search tool, and the way to do that is to build a little query language that lets me find out, for example, where is bus 5001. And protocol buffers, in case you don't know, are a binary format. The data model is kind of like JSON or whatever: it has structures, it has arrays, it has fields and messages. There are other similar formats with other similar query languages; those were kind of an inspiration in some ways, and there's just a basic example of what it looks like. So, getting a little bit more into what is behind the language, there are different parts that you have to go through. My language has a parser (every language has to have a parser, more or less) which reads these expressions and then produces a parse tree. And then, because protocol buffers actually have a pretty well-defined schema, which tells you what type all the parts of the message are supposed to be, and so on, there's actually a type checker. A very, very simple type checker, but it's there nonetheless. And then the actual evaluator, which takes that expression and runs it against my giant archive of a year's worth of bus data and spits out the little subset that talks about just that one bus that I'm interested in, because it's the shiny new hydrogen fuel cell bus that we've been waiting for for a year, and we want to know where it is. So, the parser.
You heard in the previous talk about how there are these awesome tools for building parsers. I just built my own; it ended up not being that complicated. It just reads through the string and either successfully matches it and returns some successful result plus the remainder of the string that it didn't parse, or it fails to parse and returns an error, and then maybe you'll get a parse error. The parser reads this expression, the string from the earlier slide, and returns a parse tree. That gets fed into the next section, which is the type checker. The parse tree is literally just a tree of structs of strings, more or less, and now we have to make sure that for all the actual words in there, like the entity.vehicle that you saw earlier, there are actual messages defined. These definitions are in this description file that Google ships, and fortunately Google ships a whole bunch of tools to deal with protocol buffers. Unfortunately none of them are really designed with Rust in mind. Fortunately there are tools that are designed with C in mind, and also fortunately Rust is pretty good at talking to C. So what I ended up doing is using the C tools: they produce a C file from this description of what the transit data format is supposed to look like. I compile that into an .so, a shared library, and then just load that from Rust and have a little FFI wrapper around it that lets me use this C code generated by this Google tool directly from Rust. That lets me get at what the valid field names are, what the data types for those things are, and whether the expressions that were entered in that original string we're parsing are valid. So the type checker does that and then returns a more refined version of the tree to the next stage, which actually executes it. This also takes advantage of some other things that Rust is pretty good at, namely iterators, which are a wonderful, wonderful feature and make it very easy to build things on top of. The way this evaluator works is basically that I built a little iterator type that takes some generic input (actually something that implements a specific trait) and then returns successive protocol buffer messages out of that. And then you just run the expression and see if it matches. The code for that turns out to be surprisingly compact because of the powerful abstractions that Rust has. So this is the actual code for the actual evaluator. It takes this iterator and then it just filters and checks: does this message match the tag? Does it match the filter expression? If it does, call the callback, and if there are more sub-messages and more expressions to check, recurse and go deeper and check whether it continues to match. Some of the backstory of this is that I originally started writing this in C. I desperately wanted to write it in Rust, but unfortunately the standard compiler didn't have a build for my computer, which is this lovely little ARM thing. So I started writing it in C. They actually released an official ARM build of the compiler like two days after I did that, and then I kept going with C regardless. I got very, very frustrated. The code got very big and confusing. And then eventually I got so frustrated that I rewrote the whole thing in Rust, and it ended up being much, much more compact.
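The evaluator code shown on the slide is not reproduced in this transcript, but a rough sketch of the shape being described (an iterator of decoded messages, filtered by tag and by a filter trait, with a callback on matches) could look like the following. The names Message, Filter and evaluate are invented for illustration and are not the project's actual API; the recursion into nested sub-messages is only hinted at in a comment.

// Sketch only: a simplified stand-in for the evaluator described above.
struct Message {
    tag: u32,
    // decoded fields and nested sub-messages would live here
}

trait Filter {
    // True when the message satisfies the parsed filter expression.
    fn matches(&self, msg: &Message) -> bool;
}

fn evaluate<I, F, C>(messages: I, tag: u32, filter: &F, mut callback: C)
where
    I: Iterator<Item = Message>,
    F: Filter,
    C: FnMut(&Message),
{
    // Keep only messages with the right tag, then test the filter expression.
    for msg in messages.filter(|m| m.tag == tag) {
        if filter.matches(&msg) {
            callback(&msg);
        }
        // The real evaluator would also recurse into nested sub-messages here
        // when deeper sub-expressions remain to be checked.
    }
}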
So I guess the things that I learned from all of that: Rust is great for writing compilers, which is not really surprising. The first really big program written in Rust was the Rust compiler, so that was the guiding force behind a lot of the way the language was designed, and so it was designed pretty well for writing compilers. Things like the data types, the way the enums work, the way you have match statements: basically what you need to write a compiler. Rust's FFI makes it really easy to reuse C code. It would have been a lot harder to get at all of these descriptions of what the messages are supposed to look like if I had to write my own compiler for that format as well to get all the data out of the published standard, or had to code it up by hand, in which case it would only work for transit data. But thanks to the relatively easy FFI, I could just reuse all of that stuff and wrap it in nice, safe interfaces. And Rust is great for building composable abstractions. The iterators were very efficient and made it very easy to write this. Going back to the code: the filter is a trait, and you just write an implementation for it; there's an eval function that you can implement for different types of filters. That makes it very easy to write this particular code, and then you don't have a giant switch statement like you would if you were doing it in C, so it's more compact. And yeah, Rust is generally great. I guess the conclusion is that little languages are really not that hard. You shouldn't be afraid, if you see a problem where one is applicable, to go and write your own, and I would encourage you to do so. I feel like there's been this huge renaissance of new languages, big like Rust and little like all the other things that people do, so I would encourage you to go and build your own. And you can go look at my code up on the Internet. I guess I'm a little bit out of ideas, but do you guys have any questions? Anyone? Comments? Wait, before you ask your question: I was informed that the questions did not show up on the recording last time. Hold on. Here I come. Here you go. I'm assuming you were wanting to be able to see things and do things. You had an initial foundation that you expanded on, added features to, after you had the general framework in place. During that expansion, were there any points at which you wanted to do something that Rust wasn't giving you? Very good question. Ultimately I managed to successfully battle Rust and get what I wanted, but there were definitely moments of difficulty, battling with the borrow checker while transitioning it from parsing a message out of a buffer to building a streaming interface with all sorts of clever ideas for how to do that, fighting the system and fighting the borrow checker when it wouldn't let me do what I wanted, then having to take a two-week break for various other reasons, and then coming back and realizing, oh, I'm missing a pair of curly braces, that completely solves all my problems. So I'm kind of looking forward to things like non-lexical lifetimes that will hopefully make these things a little bit easier. Anyone else? We all just really want to go to lunch.
|
Languages are an underrated tool for solving engineering problems, in part because creating them has been difficult. Rust's unique combination of features makes it an excellent language for writing compilers, ranging from the Rust compiler itself to small domain-specific languages. This talk will describe the implementation of a compiler for a query language for protocol buffers and how much easier it is to write one in Rust compared to C.
|
10.5446/52865 (DOI)
|
Hi everybody. Our paper is about the decentralized runtime enforcement of message sequences in message-based systems. We consider a message-based system that consists of distributed components which collaborate via a synchronous message passing. These systems may consist of off-the-shelf components developed by different vendors and hence we have not access to their code. Inside systems the sequence of messages may lead to the occurrence of bad behaviors. Due to the off-the-shelf components it's not possible to prevent such message sequences from mission at design time. So we aim to prevent the formation at runtime. However the order of messages cannot be determined exactly due to the absence of global clock. As an example, SME billing that consists of different locations named A to E where the location is restricted and a visitor must enter the restricted location for rarely go past. Its location is equal with a smart security camera and a smart door and the visitor must use a smart door to enter the location. When the smart door of location has opened by the visitor the door sends a message to a central system. The only legal path to the restricted location is through the consecutive locations A, C and then E which can be detected by this message sequence. Similarly, if a visitor is entered a restricted location by pathing through the consecutive locations A, B and then E it can be inferred that the visitor accesses the restricted location illegally. The path between different locations of this billing is such that if the consecutive locations B and D are visited then the visitor will return to the location A. Hence the message sequence which is generated by pathing through the locations A, B, D, C and E does not violate the security rule because the visitor returns to the previous locations A by pathing through the consecutive locations B and D. Actually, if a case an unwanted message sequences in which the occurrence of some messages contributes to the formation of a sequence while some other messages may cancel the effect of the previous ones. Before I explain how we prevent the unwanted sequence formation at runtime, note to some preliminaries. We assume that the message B systems two messages sent directly from one process to another will be delivered and processed in the same order that they are sent. Each process has a unique identifier and message queue in which a process sends a message like M to a target process using its identifier. Each process takes messages from its queue one by one in 5.4 there and invokes a handler regarding the name of the message. As there is no global clocking system use happen before relation to determine the order of messages which is implemented by the vector clock. To formalize the unwanted message sequences we use the sequence automaton. This automaton is an extension of the non-determinates defined automaton where transitions are partitioned into two sets of forward and backward transitions. Forward transitions contribute to the simple pass from initial states to final states. The backward transitions cancel the effect of the occurrence of messages labeled over the forward transitions. Between the source and the destination set of a backward transitions there must be a path made of at least one forward transitions. For instance this automaton describes that which message sequences must not be formed at runtime. The labeling of transitions denotes sending a message. For example the triple P1 M7 P3 denotes the action send P1 M7 P3. 
In this automaton, if first the message M7 is sent and then M3 is sent, while the message M5 is not sent in between M7 and M3, then the message sequence M7, M3 is formed. However, in a message sequence M7, M5, M3, sending the message M5 has canceled the effect of M7, and so sending M3 will not form the sequence, as the reached state Q0 is not a final state. But the sequence is formed by sending the messages M7, M5, M7 and then M3. To prevent sequence formation at runtime we need two auxiliary functions. The first function defines the preceding transitions of a transition, as explained by an example. Consider this automaton: when the message M6 is sent, a transition like Q5 to Q6 may lead to the formation of a sequence reaching Q6 from the initial state. To form such a sequence, it is necessary that at least a message over one of the preceding transitions of Q5 to Q6, which here is the transition Q0 to Q5, has occurred, and that no message over the backward transitions has canceled the effect of that preceding transition. The pre-transitions of the transition Q5 to Q6 are the set of preceding transitions whose labeled messages can be sent before M6 in a sequence and whose destination is the same as the source state of the transition, namely Q5. The second function gives the wait transitions: for a backward transition from a state Q to a state Q', it defines the set of forward transitions whose effect that backward transition cancels. For example, in this automaton a backward transition Q4 to Q0 can cancel the effect of all forward transitions on a path from Q0 to Q4. Up to now I have explained how the unwanted message sequences can be formalized by a sequence automaton. Now I want to explain how this automaton can be decentralized among monitors, where each monitor has partial access to the automaton. For this we break down the automaton into a set of transition tables. Each table belongs to a monitor and contains the information of the transitions corresponding to its process. For each transition, the set of its pre-transitions is also stored in the table. Since the effect of a pre-transition may be canceled by the occurrence of its wait transitions, it is necessary to store the set of wait transitions for each pre-transition in the table too. For example, here this automaton is broken down into three transition tables which belong to the monitors M1, M2 and M3. The first table contains the transitions whose message sender is P2, denoted by the green box. In the choreography-based setting, monitors collaborate with each other using monitoring messages. A monitor sends the monitoring message ask to inquire whether a message has been sent, and receives the response via the monitoring message reply. For example, here the monitors communicate with each other to avoid the formation of the sequence send M1, send M2 at runtime. As the message M2 is the last message in this sequence, the process P2 must be blocked before sending the message M2 until its monitor gets information about the partial sequence formation from the others and makes sure that sending the message M2 does not complete a sequence formation. For this, the monitor M2 sends the monitoring message ask to M1 to check if the message M1 has been sent. The monitor M1 responds by sending the monitoring message reply. The monitor M2 may receive the answer that the message M1 has not been sent. However, due to network delay, this response may be received late, and meanwhile the inquired message M1 may be sent before this response is received by the monitor M2.
In that case, the process P1 has sent the message M1 before the message M2, and the sequence has been formed. Hence, to avoid the sequence formation, the inquired message M1 must not be sent by the process P1 until the message M2 is sent by the process P2 and the monitor M2 notifies the monitor M1. In our enforcement setting, monitors must share the result of the partial sequence formation using either a pulling or a pushing strategy. We use a pulling strategy for the collaboration among monitors. I explain the reason with an example showing that with this strategy, monitors find out the order of messages more accurately. Consider the property that the message M2 must never be sent after the message M1. Assume that the process P1 sends the message M1 after the message M2 has been sent, but the vector clocks of these messages are concurrent, as depicted in the figure. With the pushing strategy, the monitor M1 must inform the monitor M2 of the moment that the message M1 has been sent, that is, its vector clock. When the process P2 sends the message M2, the monitor M2 cannot conclude anything about the violation of the property, as it has not yet received the moment that the message M1 was sent. After the monitor M1 pushes the sending moment of the message M1, the monitor M2 cannot conclude the order between the two messages accurately and cannot decide whether the property holds, as the vector clocks of the messages are concurrent. However, with the pulling strategy, the monitor M2 inquires about the sending status of the message M1 from the monitor M1 after its process sends the message M2. If the process P1 has not sent the message M1 yet, then its monitor responds with a false result. Upon reception of this response, the monitor M2 can accurately conclude that the property is not violated. Now I want to explain our prevention algorithm. In our algorithm, the process Px maintains a variable which denotes the list of messages labeled on transitions reaching final states. The process is blocked before sending such last messages until its monitor makes sure that sending the message does not complete a sequence formation. Also, the process and its monitor have additional shared variables, which are respectively: the list of triples consisting of a message that the process is going to send, the vector clock of the process upon sending the message, and the type of monitoring message that the monitor must send to other monitors, that is, ask or notify; a pair consisting of the message on which the process Px has been blocked and the status of the message to be sent, which can be either okay or error; and the list of messages that must not be sent by the process Px until its monitor receives a notify message. The monitor Mx also maintains a transition table and a history variable, which is the list of triples consisting of a transition that has been taken before, the vector clock of the process upon sending the message over that transition, and the result of the partial sequence formation up to that transition. Regarding the position of the message M in the unwanted sequences, either M is the last message of a sequence or it is not, and our algorithm behaves differently in the two cases. If M is not the last message in any sequence, like the message (Px, M, P1) in this figure, the process Px sends the message and appends a triple, consisting of the message M, its vector clock, and the type of monitoring message, which here is ask, to the end of the shared list. The monitor Mx takes a message from the shared list. If the type of the message is ask, it finds those rows of the transition table whose transition is labeled with the message M.
For each row, the monitor Mx inquires about the taken status of the pre-transitions and wait transitions in the row by sending appropriate monitoring messages to the monitors corresponding to the senders of the messages over those transitions. Then Mx adds a temporary record to its history. Adding the temporary record is helpful when another monitor inquires the monitor Mx about the taken status of the transition: in such cases the monitor Mx must postpone its response to the inquiry until the result of the transition is determined. Based on the blocking status of the inquired message, the inquired monitor replies to the monitor Mx. If there exists at least one taken pre-transition, like the transition Q0 to Q1, whose effect has not been canceled by its wait transition, like Q1 to Q0, and whose taken time is before the sending moment of the message M, it is concluded that a bad prefix is going to be formed. If M is the last message of a sequence, the first seven steps of the algorithm are the same as in the previous case, with these differences. The message M has no assigned vector clock, as the process Px is blocked before sending the message M. Also, the process Px sets the shared blocked message to the message M and is then blocked. After the seven steps, if a sequence up to the message M is formed, based on the value of the result, then the monitor Mx updates the shared blocked pair to error, to inform the process that sending the message M leads to a complete sequence formation. Otherwise, it updates the shared pair to okay, to inform the process that the message M can be sent safely. The process Px either sends the message M or signals an error, depending on the status of the message in the shared pair, and appends a triple, consisting of M, its vector clock upon sending M, and the notify monitoring message type, to the end of the shared list. The monitor Mx takes the triple with the message type notify from the shared list and then sends the corresponding monitoring messages. Finally, the monitor My, which receives the notify message from the monitor Mx, removes the message labeled on the inquired transition from its shared waiting list. To evaluate our algorithm, we investigate the effect of different parameters on its efficiency, including the number of processes, the maximum number of message handlers of the processes, the maximum message communication chain between processes, and the length of the message sequences. The maximum message communication chain denotes the maximum number of processes in a chain of message handlers sending messages to each other. We developed a test case generator which produces message-based applications with different values of these parameters, together with a set of message sequences according to the generated application. We also developed a simulator which simulates the execution of each application and our prevention algorithm, and then measures the communication overhead of our algorithm. We generated four applications with three, six, nine and twelve processes, where each process has at most five message handlers and the maximum message communication chain in each application is four, five, six and seven, respectively. Our results show that as the length of the message communication chain increases, the number of monitoring messages and the average blocking time grow linearly for complex applications. We also evaluated the average number of monitoring messages, the average memory consumption of the monitors, and the average time to enforce a property for the application with nine processes.
The results show that the average number of monitoring messages and the average memory consumption of the monitors grow linearly as the length of the sequences increases. Also, as the length of the sequence increases, the monitors are involved in more collaboration and hence need more time to gather all the responses from the other monitors. Concluding, I would like to say that we addressed the choreography-based runtime prevention of message sequence formation in message-based systems. The distributed processes communicate via asynchronous message passing. We have assumed that there is no global clock and that the network may postpone the delivery of messages. Our proposed algorithm is fully decentralized, in the sense that each process is equipped with a monitor which has partial access to parts of the property specification. Our experimental results show that with the increase of the complexity of the application or the length of the message sequences, the number of monitoring messages, the memory consumption, and the time to prevent the sequence formation grow linearly.
|
In the new generation of message-based systems such as network-based smart systems, distributed components collaborate via asynchronous message passing. In some cases, particular orderings among the messages may lead to violation of desired properties such as data confidentiality. Due to the absence of a global clock and the usage of off-the-shelf components, there is no control over the order of messages at design time. To make such systems safe, we propose a choreography-based runtime enforcement algorithm that, given an automata-based specification of unwanted message sequences, prevents certain messages from being sent and assures that the unwanted sequences are not formed. Our algorithm is fully decentralized in the sense that each component is equipped with a monitor, as opposed to having a centralized monitor. As there is no global clock in message-based systems, monitors may prevent the sequence formation conservatively if the sequence consists of concurrent messages. We aim to minimize conservative prevention in our algorithm when the message sequence has not been formed. The efficiency and scalability of our algorithm are evaluated in terms of the communication overhead and the blocking duration through simulation.
|
10.5446/52866 (DOI)
|
Hello everybody, my name is Borzoo Bonakdarpour from Michigan State University. I'm going to talk about distributed runtime verification under partial synchrony. This is joint work with my students Ritam Ganguly and Anik Momtaz. Let me first motivate the idea. We are interested in runtime verification, which basically means a monitoring mechanism where the monitor inspects the execution of a system and evaluates it with respect to a formal specification. This formal specification is usually in terms of some temporal logic, regular expressions, or some finite state machine. In this particular work, we are interested in distributed RV, where one or more monitors observe the behavior of the system at runtime and collectively want to verify the correctness of the system at runtime. In distributed RV, one of the challenges is that the processes that we want to monitor do not share a global clock. That means the events that happen in the system cannot be totally ordered; the order cannot be determined. For instance, look at the figure on the right side, where we have processes P1 and P2. Process P1 hosts a variable X1, process P2 hosts a variable X2. The initial state has the values 1 and 2, respectively. Then in process P1, X1 becomes 0, and in process P2, X2 becomes 0. Let's imagine the formal specification of the property that we want to monitor is that next, that means in the next state, X1 plus X2 is strictly greater than 1. Now, these two events, X1 equals 0 and X2 equals 0, are concurrent with each other because they cannot be ordered. Depending on which one actually happens first in physical time, we can have two different verdicts. For instance, if X2 becomes 0 first, before X1, then this property is false, because X1 plus X2 is equal to 1, which is not strictly greater than 1. On the contrary, if X1 equals 0 happens first, then X1 plus X2 equals 2, which is strictly greater than 1. So as you can see, even in this very simple example, depending on what order of events we consider, there can be different verdicts for the evaluation of our specification. Now, enumerating all of these possibilities is not practical at runtime; it's not even practical offline. So the question is: what do we do? We are not the first to work on this problem. Vijay Garg has looked at the predicate detection problem for a long time. He has this notion of slicing, which is very well established. This is in a fully asynchronous setting. We, in my group, also looked into decentralized runtime verification for LTL specifications, again in a completely asynchronous setting. The main problem with a completely asynchronous setting is that it doesn't scale well. Now, in the synchronous setting, where all the processes assume a global clock, there is also some work on monitoring LTL specifications. But the problem with this line of work is that in a large distributed system it is not possible to assume a global clock. More recently, in another paper, published at the Runtime Verification conference in 2017, a notion of partial synchrony was used in order to do predicate detection using SMT solving. So what we are going to do is also adopt this notion of partial synchrony, where we employ a clock synchronization algorithm that ensures a bounded clock skew epsilon between all the clocks. And this is going to limit the level of non-determinism in the system.
So if you look at the figure on the right side, there are many concurrent events if there is no such clock. But if we have a clock synchronization algorithm that guarantees a bounded clock skew of epsilon, then in this area of the computation, where we have four concurrent events (without the bound, each of them would have to be treated as concurrent with several events on the other process), using this scheme of bounded clock skew limits the window of non-determinism to only the events E1,3 with E2,3 and E1,4 with E2,4. So our conjecture is that by employing such a mechanism, we can limit the level of non-determinism and we can do monitoring in practical settings. Now I'm going to hand it over to my very talented students, who will give the rest of the talk. Thank you, Borzoo. Now for the preliminaries; first we go over LTL. We are not going into the details of the LTL semantics, but here we have three examples of temporal operators. The first one is eventually P, which says that P should appear sometime in the future. The next one is globally Q, which states that Q should hold at every state in the trace. And P until Q represents a trace where Q is true at some point in the future and P should hold at every state before that. Now, LTL is defined for infinite traces, but in the context of RV we need something that works with finite traces. That's why we have a three-valued LTL, which represents the verdicts as the set B3, consisting of top, bottom and don't know. The top, or true, verdict represents a formula that is permanently satisfied, the bottom, or false, verdict represents a formula that is permanently violated, and the unknown verdict represents a trace that can still go either way. For example, for eventually P we remain in the don't-know state until we see a P, and once we see P, the verdict goes to top. For globally Q, we remain in the unknown state because we can always see a not-Q in the future, and that can make the verdict go to false; that's why we remain in the don't-know state. For P until Q, we remain in the don't-know state for every P we see, and once we see a Q, our verdict changes to top. Now, here, for an LTL3 monitor, you see we have a monitor for the formula a until b. Here, we remain in the don't-know or unknown state for every a that we see, we only move to the accepting or top state on a b, and we can always go to the rejecting or bottom state on not-a and not-b. Now, here is the distributed computation model that we have, for two processes P1 and P2. Each of the processes has local events: E1,1 to E1,4 are events of P1, and E2,1 to E2,4 are events of P2. There can also be communication between the processes; here, E1,2 is a sending event and E2,2 is a receiving event. Now, each event has a local time given by the local clock of that process, and there is a maximum clock skew which keeps all the local clocks less than epsilon apart in the system. The local events of each of the processes can be arranged by the happened-before relationship. The happened-before relationship also makes sure that a sending event happens before the corresponding receiving event, so we have a happened-before relationship between E1,2 and E2,2 over here. And due to the clock skew, we also get some more happened-before relationships between events that are outside the epsilon window and events that are inside the epsilon window.
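As a minimal illustration, the epsilon-augmented ordering check for a pair of events could be sketched as below, assuming each event carries a process id, a position in its process-local order, and a local clock reading. This is a simplification: it leaves out the transitive closure and the hybrid logical clocks discussed next.

// Sketch only: e happened before f if they are on the same process and e comes
// first, if e is the send of a message that f receives, or if their local
// clocks are more than epsilon apart (the bounded skew then fixes the
// real-time order). Transitivity would be computed on top of this check.
struct Event {
    process: u32,
    seq: u64,        // position in the process-local order
    local_time: f64, // local clock reading when the event occurred
}

fn happened_before(e: &Event, f: &Event, epsilon: f64, e_sends_to_f: bool) -> bool {
    (e.process == f.process && e.seq < f.seq)
        || e_sends_to_f
        || e.local_time + epsilon < f.local_time
}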
The happened-before relation is also transitive, meaning that if E happened before F and F happened before G, then E happened before G. Coming to consistent cuts: a consistent cut is a set of events that is closed under the happened-before relation, so for every event it contains, it must also contain all the events that happened before that event. Here, E1,1, E1,2 and E2,1 form a consistent cut. Whereas, since E1,2 strictly happens before E2,2, a cut that includes E2,2 without E1,2 is not a consistent cut; the last one shown is again a consistent cut. The frontier of a consistent cut C is denoted front(C), and here E1,4 and E2,4 form the frontier of the consistent cut shown. Coming to the hybrid logical clock, it is represented by a tuple with tau as the local clock, sigma as the maximum local clock value seen globally, and omega capturing causality. When there is a message passed, you see that the event in P2 changes its sigma from 0 to 10, since the tau of P1 is 10, and the causality component reflects the change. The same goes for the local event of P2, which is accounted for by the changing tau and omega. This is repeated for all the other communication and all the other events in the system, and this also gives us a few more consistent cuts in the system: for example, C0 and C2 do not represent consistent cuts, but C is a consistent cut over here. Coming to the formal problem statement: given a distributed computation, we have sequences of consistent cuts, which can be represented by the frontiers of those consistent cuts, and the evaluation of an LTL formula with respect to a distributed computation is based on the LTL3 verdicts of that computation. The SMT-based solution will now be presented by Anik. Thank you, Ritam, and hello everyone. So now we're going to talk about the SMT-based solution that we have for the problem that we have been discussing so far. Given a distributed computation, our approach is to transform this monitoring problem into an SMT problem and then have an SMT solver solve it. In our approach, we have two SMT instances: one instance for the distributed computation itself and the other instance for obtaining all possible paths in the LTL3 monitor. We start by introducing an uninterpreted function called rho, and the purpose of this function is to give us a sequence of consistent cuts that starts at the beginning of the monitored computation and ends at its end. Let's take the example on the right here, where we have a distributed computation with two processes. One process hosts the variable x, the other one hosts y, and the formula that we want to monitor here is: always, x is strictly greater than y. So our rho has to give us a sequence of consistent cuts such that this formula holds throughout the entirety of the computation. The way we start off is that we map rho of 0 to the empty set; then we take either one of these events (that's up to the SMT solver), and then for rho of 2 we take the first two events. In this case, what's happening here is that x is 3 and y is 1, and the formula x greater than y holds. So we take another event, and then another event, and in all cases our formula x strictly greater than y holds. And then we reach the end of the computation, where we have 9 and 7, and it still holds. So we do get a sequence of consistent cuts where the formula holds through the entirety of the cuts. But let's change the example: in the third event of the second process, instead of 5, let's say the value of y becomes 6.
In this case, if our epsilon is as big as it is right now, the third events from both processes fall within the epsilon window, and since they fall within the window of epsilon, we can say that these two events are concurrent. And if they are concurrent, then x greater than y no longer holds here. In this case, our uninterpreted function rho will not be able to generate a sequence of consistent cuts that satisfies the constraints that we have given it. Speaking of constraints, now we're going to talk about the constraints that we add to rho so that it gives us the sequence of consistent cuts that we need. The first constraint is, obviously, that every element of rho must be a consistent cut. The second constraint is that the current consistent cut must always contain exactly one more event than the previous consistent cut. The next constraint is that the previous consistent cut always has to be a subset of the current consistent cut. Another constraint is that we want to make sure that our run of the monitor indeed ends in the last state, Qm. When the monitored path does not have any self-loops, all we have to do is start from the last state and then walk our way back to the first state. If we do have loops and cycles, then we have some methods in our algorithm where the cycles are converted into acyclic paths, and we apply a similar constraint there as well. And finally, we make sure that our first mapping, rho of 0, is indeed the empty set. We do this for every reachable state in the automaton, so that every possible path is explored. However, we know that this RV problem is NP-complete, and that means that for a bigger computation our runtime becomes exponentially large, to the point where it is no longer monitorable. So we needed to come up with techniques to optimize our algorithm, or our solution, so that it falls into an acceptable runtime range. One such technique that we employed is segmentation. In this technique, we take a computation like the one we have over here in the example, and we chop it up into smaller segments. We consider each of these segments as an individual computation, we run our monitor on each of these computations, and we have SMT instances for each of these computations. But there is one problem here: there could be instances where a pair of events that was concurrent in the original computation is no longer concurrent in the segmented computation. In this example here, if our epsilon is big enough, then event three in P1 and event two in P2 could have been concurrent in the main computation, but they will no longer be concurrent here, because these are two separate computations after the segmentation. The same goes for event six in P1 and event five in P2. So how do we solve this? We solve it by extending each segment to the point that it overlaps with its adjacent segments, and the amount of overlap is exactly epsilon, our clock skew. This ensures that any pair of events that would have been concurrent in our main computation will be concurrent in the segmented computation as well. Now we do another level of optimization, where we harness the power of multi-core processing in multi-core architecture CPUs.
What we do here is assign each of these segments to a core of the CPU and have it compute its own solution, and then we take all the solutions and merge them together. But again, there is a problem here: there could be paths that would remain unexplored if we take each of these segments in isolation and the segments have no way of communicating with each other. So we create a reachability matrix. The purpose of this reachability matrix is, for each segment and for each state, to check whether or not that state can reach each of the other states; if it can reach a state, we mark it with true, and if it can't reach that state, we mark it with false. Once we have created this reachability matrix, we create a reachability tree from it. The purpose of this tree is to start from the starting state and then go to all the possible states it can go to, ending at the leaf nodes, or leaf states rather. With this tree, we can determine whether there exists a path that is unknown, whether there exists a path that is accepting, or whether there exists a path that is rejecting. Now I'll pass the mic to Ritam again for the next part. Thank you. Thank you, Anik. Now to go over the case studies and evaluation. First we go over the experimental setup. The experimental setup has two phases, the first one being the data collection phase, where we have a synthetic experiment and the one with Cassandra. The next phase is the verification of the traces that are generated in the data collection phase. We make sure that all the events that are generated are evenly spread out over the entire length of the trace. The parameters that we are studying here are the number of processes, the computation duration, the number of segments, the event rate, the maximum clock skew, the number of messages sent per second, and the formula that is under monitoring. First, to go over the impact of assuming partial synchrony: we see that with an increasing value of the clock skew, the runtime increases exponentially. This is expected due to the larger number of concurrent events caused by a larger clock skew. With a change of the predicate structure, we see that disjunction takes more time than conjunction. This is because, to make a conjunctive formula false, we only need any one of the conjuncts to be false, and that makes the entire formula false; but for a disjunction, all the sub-formulas need to be false in order to make the formula false, and this accounts for more time. Going to the LTL formulas, we see that linearly more time is needed for formulas with a greater automaton depth. And we see that with increasing segment length, the runtime decreases at first, but then bottoms out due to the overhead of creating the SMT encodings in our monitoring solution. Next we have Cassandra. Cassandra is an open-source distributed NoSQL database from Apache, and since Cassandra does not natively support normalization, we thought it would be a good use case to enforce normalization in Cassandra using runtime verification. Here we have two databases, one for students and the other for enrollment, and we make sure that we don't have read, write, or delete anomalies in the Cassandra database. With parallelization, we see that an increasing number of cores decreases the runtime by a hefty amount, which then bottoms out due to the time needed to generate the SMT encodings.
And the other one shows that with an event rate of one and two, we kind of break even for a number of processes as high as nine or ten. Now, the big question is how realistic this is. We have tested our solution for two scenarios, the first one being the extreme load scenario, which represents something like Netflix, where we see one million writes per second. And the next one being the moderate load scenario, which represents Google's write quota, which only allows 500 requests per 100 seconds per project and 100 requests per second per user, which kind of boils down to five events per second. And using the graphs, we see that we are doing well in the moderate load scenario. Coming to future work and conclusion, the notable improvements of our work are: we are the first ones using partial synchrony, we achieve a great deal of scalability by SMT encoding while making sure we are not losing out on any verdict, as well as parallelization using multi-core optimization. Looking forward to future work, we hope to scale up our technique to monitor cloud services and to have a trade-off between accuracy and scalability. For distributed RV, we aim to have a distributed runtime verification technique for timed temporal logic and for continuous signals in cyber-physical systems, like stream runtime verification or something like that. That's it. Thank you. Have a good day.
|
In this paper, we study the problem of runtime verification of distributed applications that do not share a global clock with respect to specifications in the linear temporal logics (LTL). Our proposed method distinguishes from the existing work in three novel ways. First, we make a practical assumption that the distributed system under scrutiny is augmented with a clock synchronization algorithm that guarantees bounded clock skew among all processes. Second, we do not make any assumption about the structure of predicates that form LTL formulas. This relaxation allows us to monitor a wide range of applications that was not possible before. Subsequently, we propose a distributed monitoring algorithm by employing SMT solving techniques. Third, given the fact that distributed applications nowadays run on massive cloud services, we extend our solution to a parallel monitoring algorithm to utilize the available computing infrastructure. We report on rigorous synthetic as well as real-world case studies and demonstrate that scalable online monitoring of distributed applications is within our reach.
|
10.5446/52867 (DOI)
|
The name is Sukena Firmli and I'm a PhD student at the Muhammadi School of Engineers. This work is in collaboration with Oracle Labs and it's called CSR++, a fast, scalable, update-friendly graph data structure. This work has been published as a conference paper and accepted at the 24th International Conference on Principles of Distributed Systems. It's mainly centered on designing and implementing an efficient data structure for storing graphs and enabling mutations, meaning updates on graphs. Okay, so to start with, I'm gonna give a bit of context on graph analytics and systems. Graphs allow us to model real-world data as relationships between entities called vertices, for example Ali, Mark and Jane in this example, and relationships between them called edges, to gain new insights that are not seen when using other classic models such as relational models. Graph systems are actually aimed at processing really large graph data, and they rely on efficient data structures to support, for example, fast read-only workloads, meaning analytic workloads such as PageRank, weakly connected components and other algorithms. They also rely on efficient data structures to allow fast mutability, meaning that we need to be able to update graphs in a timely manner, in a fast way, but also they need to allow for low memory consumption, low memory footprints, since we're talking about large-scale, big-data graphs that can hold billions of edges and need to be stored efficiently in memory. So graphs are becoming very important in today's technology trends: you see that Gartner ranked graphs as the number five trend in their top 10 data and analytics technology trends for 2019. To give more details about the work that we've been carrying on, I'm gonna talk about graph data structures and then mutations. Graph representations aim to store vertices and edges in an efficient way. Among the classic examples of those data structures, we find the adjacency list and then CSR. The main gain from implementing a graph as an adjacency list is the high mutability performance. CSR, on the other hand, is a compact way to store graphs. It's the fastest read-only data structure there is. It has really high analytic performance, since we only store compact arrays of vertices and edges, and it has really low memory. So when we're talking about graph updates, we're talking about insertions and deletions of vertices and edges. There are multiple techniques, but the main ones are actually listed here. We have the in-place updates, meaning that we don't have to perform copies of the whole graph; we have, for example, the Boost Graph Library that we're gonna evaluate later, and we're gonna show some evaluation numbers on that. There is also batching, which means that we actually collect a number of updates, a number of edge insertions for example, into batches and then perform them at once to allow for optimal update operations. And then we have this technique called snapshotting, or creating deltas, that is mainly useful when we have data structures that are immutable like CSR. Whenever we have to perform analytics and perform scans, we will have to scan both the original data structure, which is for example CSR, and then we'll have to scan the deltas as well. So that is the trade-off, and that lowers the analytic performance.
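To make the contrast concrete, here is a minimal, illustrative Python sketch of the classic CSR layout discussed above (this is generic background, not the specific implementation evaluated later in the talk): neighbor scans are just slices of a flat array, but inserting a single edge forces a rebuild of both arrays.

# Minimal CSR (Compressed Sparse Row) sketch: two flat arrays.
# begin[v] .. begin[v+1] indexes the neighbors of vertex v inside adj.
# Example graph: 0 -> 1, 0 -> 2, 1 -> 2, 2 -> 0
begin = [0, 2, 3, 4]      # length = |V| + 1
adj   = [1, 2, 2, 0]      # length = |E|

def neighbors(v):
    return adj[begin[v]:begin[v + 1]]

print(neighbors(0))  # [1, 2]

# Inserting an edge (e.g. 1 -> 0) forces rebuilding both arrays, because
# every vertex's neighbor range after the insertion point shifts:
def insert_edge(begin, adj, src, dst):
    pos = begin[src + 1]
    new_adj = adj[:pos] + [dst] + adj[pos:]                 # full copy of edges
    new_begin = begin[:src + 1] + [b + 1 for b in begin[src + 1:]]
    return new_begin, new_adj

begin, adj = insert_edge(begin, adj, 1, 0)
print(neighbors(1))  # [2, 0]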
So this naturally leads to asking the main question, which is how to enable fast in-place updates while maintaining high analytic performance and a low memory footprint. The answer is that we developed CSR++, which is a fast, scalable, update-friendly graph data structure, and we're gonna talk in more detail about the design ideas and solutions that answer this main question. First of all, we're gonna talk about the design, meaning the vertex and edge data structures in CSR++. We're gonna talk more about the properties, since we're mainly addressing property graphs in this case. And then we're gonna talk about the update protocol. Next we're gonna give a breakdown of the performance analysis of CSR++, and then we're gonna end with a short conclusion. So initially, since we talked about graph data structures, we gave two main examples, the adjacency list and then CSR. The adjacency list allows for per-vertex, flexible edge updates, meaning that if we want to insert an edge, for example, we only have to access the edge array and then perform updates on it. Whereas in CSR we'll have to copy the whole edge array, because the edges are stored in a compact, contiguous manner. So on one side, CSR is actually very fast since we have a compact format and fast indexed array access, so it's more cache friendly and it allows for better locality. But the adjacency list allows for faster updates and faster edge array handling in case of mutation. So the idea here is to combine the array contiguity of CSR, meaning that we design CSR++ as compact arrays of vertices for example, with the update flexibility of the adjacency list. To present a high-level design of CSR++, we can give the details on how we store the vertices and then how we store the edges. The vertices are stored in segments, meaning the graph is actually a list of segments, and each segment stores a fixed number of vertices. This way we have a cache-friendly way to traverse these vertices. And then we have this optimization which is low-degree inlining, meaning that unlike the adjacency list, where every vertex has to have a pointer to its neighbor list, in CSR++ we have this optimization where, in case a vertex has a single neighbor, we actually embed that edge in the vertex array. That way we can actually store the actual edge instead of the pointer, which allows for memory optimization. And the edges themselves are stored in expandable arrays, meaning that when we want to update an array of edges, in case we want to insert a new edge and we don't have space, we can actually grow that array; we can allocate double the original size. And that allows for really fast in-place updates. Then, in a multi-threaded context, we mentioned that CSR++ is actually a concurrent data structure, so in case we have multiple writers, we have a synchronization mechanism that's based on locks. Those locks actually protect each segment from concurrent writes and allow for synchronized updates. It is worth mentioning that we keep the edges sorted, meaning we keep the semi-sort property of the graph for better cache usage; it actually gives really better performance when we're applying analytic read-only workloads.
So basically we talked about how CSR++ combines ideas from the adjacency list and CSR to allow for fast analytics as well as fast in-place updates. And when we're talking about CSR, we're talking about compact ways to store entities. So here, in CSR++, we have compact vertex arrays as well as compact edge arrays when we're loading the graph, meaning that we have smart loading by implementing a smart allocation protocol that allows us to allocate the edges initially in a contiguous manner. Here we're gonna talk about the details of how segments are designed. Basically, the graph in CSR++ is represented as chunks called segments, each holding a fixed number of vertices. Here we have an example of a segment. Each segment actually has a lock, and that lock enables synchronized writes. So the segment is basically composed of a lock, a list of vertices — here we have four vertices, for example — and then we have a vector of pointers to vertex properties. We're gonna talk about vertex properties later, but here we're talking about topology; we need the vertex and the edge structures. So basically each vertex here holds the following data. We have the length, which stands for the degree of the vertex, the number of neighbors. And then we have a pointer to the neighbor list. Here we have a union structure, meaning that if the length is greater than one, then we have a pointer to the edge list. But if the length is equal to one, as I said before, we inline the edge; we embed it in the segment structure and then store it there instead of the pointer. That allows for really good memory consumption. As for how the edges are represented, they actually hold three fields. First of all, the deleted flag — we're gonna talk about deletions later in the update protocol. Then the vertex ID, meaning the index of the vertex in the segment. And then the segment ID: since the graph is represented as a list of segments, we need to locate in which segment the edge's target vertex is. So we're going to talk in this slide about how CSR++ stores properties. We need to store basically vertex properties and then edge properties. In this example, we have, for example, two segments. The way we store vertex properties in CSR++ is in arrays that are parallel to the segments. As I mentioned before, each segment holds a vector of pointers, and each pointer actually points to an array of vertex properties. So for example, if we have the index of a vertex, we can deduce the index of the property value of that vertex in a given vertex property array. This actually allows for fast indexed accesses to the vertex properties. For edge properties, we actually have the same segmentation approach as for edges. That is to say, for each edge array, we have a parallel edge property array which stores the edge property values in an aligned manner. So for example, we have two edge properties in the example here below. All the edge property values for the two properties are actually stored in the same array, but we have a logical formula to access the edge property values for a given edge and for a given index.
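The following is a rough Python sketch of the per-segment layout just described. The field names, the segment size, and the use of Python dataclasses are illustrative assumptions made for this sketch only; the actual CSR++ implementation is written in C++ with raw arrays and spin locks.

# Rough sketch of the CSR++ layout described above (field names illustrative).
from dataclasses import dataclass, field
from threading import Lock
from typing import List, Optional, Union

SEGMENT_SIZE = 4  # fixed number of vertices per segment (illustrative)

@dataclass
class Edge:
    deleted: bool      # logical-deletion flag
    vertex_id: int     # index of the target vertex inside its segment
    segment_id: int    # which segment the target vertex lives in

@dataclass
class Vertex:
    length: int = -1                               # degree; -1 means "invalid"
    # Union-like field: the single inlined edge when length == 1,
    # otherwise a pointer to an expandable edge array.
    neighbors: Optional[Union[Edge, List[Edge]]] = None

@dataclass
class Segment:
    lock: Lock = field(default_factory=Lock)        # protects concurrent writes
    vertices: List[Vertex] = field(
        default_factory=lambda: [Vertex() for _ in range(SEGMENT_SIZE)])
    # One property array per vertex property, parallel to `vertices`.
    vertex_props: List[List[float]] = field(default_factory=list)

# The graph itself is an indirection layer: an array of pointers to segments.
graph: List[Segment] = [Segment()]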
So this actually allows for fast per-vertex updates, meaning that if we want, for example, to update an edge, we don't have to copy the whole edge property array of all the vertices of the same segment; we can update only a certain, specific edge property array in a separate way. That also allows for easy relocation of the property arrays and then easy reordering. Next, now that we've talked about the design of the topology, meaning how we store vertices and how we store edges and then properties, we're gonna talk about the update protocol of CSR++. The updates can be defined as three main update protocols: vertex insertion, edge insertion, and then the deletions of both vertices and edges. We're gonna start with vertex insertion. This slide presents how the vertex insertion is implemented. To be able to insert a new vertex, we always check the last segment. Usually, after we load and populate the segments, we either have some space left in the last segment or we don't. In case we have space, we're gonna actually make a vertex valid: initially the length, that is the degree of the vertex, is actually minus one, and that's how we know whether the vertex is valid or not. So we're gonna check the last segment and see if we have enough space, and if we do, we're gonna find the first invalid vertex and then make it valid by incrementing the degree of that vertex. The other case is if the last segment is full: we actually allocate a new segment and then allocate new vertex property arrays along with it. And the way we actually keep track of those newly allocated segments is that we have this indirection layer that we have here. This indirection layer is nothing but an array of pointers to those segments, and if we want to allocate a new segment, we only have to expand this array by doing a copy-on-write, and this is usually very fast. So the only cost here, as I mentioned, is the copy-on-write of the segment pointer array, what we call the indirection layer. Next is the edge insertion update protocol. As I mentioned before, in CSR++ we implement growable, expandable edge arrays. That means that when we want to insert a new edge and we don't have enough space in an array, we actually allocate a new array of double the original size and then copy the contents over. We use the reallocation method in C++, but this can actually be implemented in a smarter way, allowing for faster copying, faster relocation. So the thing here is that, for example, here we have a fixed number of slots for our edges, and in order to insert a new edge, we actually double the edge array, and then we keep extra space for new incoming edges. So basically, CSR++ is a concurrent data structure, and we said that we protect each segment with locks. In our case, as we're gonna mention later in the evaluation section, we actually evaluate CSR++ in a multithreaded context and we allow for concurrent writes into the edge arrays. To be able to synchronize and to protect those write operations, we actually use lightweight spin locks, and each time, a thread can hold a lock and then perform the updates on a certain edge array.
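As a standalone illustration of the edge-insertion protocol just described — the doubling of the edge array under a per-segment lock — here is a simplified Python sketch. It is not the actual CSR++ code; the class name and the per-vertex bookkeeping are assumptions made for the example.

# Standalone sketch of CSR++-style in-place edge insertion with an
# expandable (doubling) edge array protected by a per-segment lock.
from threading import Lock

class SegmentSketch:
    def __init__(self, num_vertices):
        self.lock = Lock()                      # lightweight lock per segment
        self.degree = [0] * num_vertices        # current number of edges
        self.capacity = [0] * num_vertices      # allocated slots per vertex
        self.edges = [[] for _ in range(num_vertices)]  # edge arrays

    def insert_edge(self, v, target):
        with self.lock:                         # synchronize concurrent writers
            if self.degree[v] == self.capacity[v]:
                # Grow: allocate double the size, copy the old edges over,
                # and keep free slots for future insertions.
                new_cap = max(1, 2 * self.capacity[v])
                new_edges = self.edges[v] + [None] * (new_cap - self.capacity[v])
                self.edges[v] = new_edges
                self.capacity[v] = new_cap
            self.edges[v][self.degree[v]] = target
            self.degree[v] += 1

seg = SegmentSketch(num_vertices=4)
for t in (1, 2, 3):
    seg.insert_edge(0, t)
print(seg.edges[0][:seg.degree[0]])  # [1, 2, 3]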
For deletions, in the update protocol, we have implemented logical deletions of vertices and edges. For vertices, as I mentioned before, we have this length field that stands for the degree of the vertex, which is initially minus one; minus one means the vertex is not valid. So the obvious, optimal way to implement the logical deletion of vertices is to use that field and set it back to minus one if we want to mark a vertex as deleted. For edges, we have a separate deleted flag, and we have to set it to one to mark an edge as deleted. The cost of this is the fact that we'll have to add extra conditional branches in traversals. When deletions become very frequent, meaning that we have a lot of gaps in segment arrays and in edge arrays, we are allowed to perform compaction. By compaction, we mean that we reuse the space that was used by deleted vertices or edges, compact the arrays, and remove the unused space, basically. That compaction would actually mean a whole copy of the segments, the vertex arrays, et cetera. It is very expensive, but in real-world use cases, deletions are not that frequent. Next, we're gonna talk about the performance analysis breakdown of CSR++ compared to other systems. I'm gonna give the performance analysis configuration first. The graphs that we use are LiveJournal and Twitter. LiveJournal has 68 million edges, Twitter has 1.4 billion edges. For the algorithms, we used PageRank, weakly connected components, breadth-first search, and weighted PageRank. And then for the evaluated data structures, we used the CSR implementation in Green-Marl, we used the adjacency list implementation from the Boost Graph Library, and then we used LLAMA, the open-source code of LLAMA. For the machine, we used a two-socket, 36-core machine that has 384 gigabytes of RAM. First, we're gonna start with read-only workloads. This is an example of the two PageRank algorithms that we run on the Twitter and LiveJournal graphs. As we see here, CSR++ is very close to the fastest read-only data structure, which is CSR. It is also close to LLAMA, with only a 15% slowdown compared to CSR. This shows that CSR++ is very scalable: it scales well with the number of threads as well as with the size of the graphs. Next, we're gonna talk about the in-place updates, meaning performing edge insertions or vertex insertions. In this example, we give some numbers on the edge insertion performance of CSR++. We show that even with single-threaded update operations, we reach performance up to one order of magnitude faster than CSR: here we have 363 seconds compared to 40 seconds with CSR++. In this slide, we talk about the in-place update performance of CSR++ versus LLAMA. What we did is apply 1000 batches of insertions of new edges on CSR++ and LLAMA. As we see in this figure, LLAMA's memory usage explodes after applying around 380 batches, whereas CSR++ continues to run the in-place updates while consuming an almost stable memory footprint along the way. This increasing memory causes the system to run out of memory. Next, we're gonna talk about memory consumption, which is a very big challenge when designing data structures.
So we evaluated CSR++ against CSR and LLAMA, and we found out that CSR++ has a moderate memory overhead of 33% compared to CSR, both for small graphs and large graphs, and it scales well after updating the data structure. What we did here is basically test CSR++, CSR and LLAMA in two contexts, where the first context is the one where we don't update the graph, meaning we only read the graph. And we found out that this is a 33% average overhead compared to CSR here. And then we tested the other case, after applying mutations. As we mentioned before, the CSR++ design is that we have to allocate double the size of the original edge arrays whenever we have to add new edges and we don't have enough space, so that we have extra space for new incoming edges; that's what's causing this memory overhead. Whereas in LLAMA, we actually have to create new snapshots every time we have to add new edges. As a conclusion, we showed that CSR++ is a new, scalable, concurrent graph data structure that relies on a segmentation technique. We also showed that CSR++ achieves the best of both worlds from the CSR design and the adjacency list. It allows for analytic performance close to CSR, which is the fastest read-only data structure, with only 10% overhead on average. It also allows for fast mutability, meaning we have up to a two times speedup compared to LLAMA. And it has a low memory footprint overhead, only 33% compared to CSR. So that is all. Thank you for watching.
|
The graph model enables a broad range of analysis, thus graph processing is an invaluable tool in data analytics. At the heart of every graph-processing system lies a concurrent graph data structure storing the graph. Such a data structure needs to be highly efficient for both graph algorithms and queries. Due to the continuous evolution, the sparsity, and the scale-free nature of real-world graphs, graph-processing systems face the challenge of providing an appropriate graph data structure that enables both fast analytic workloads and low-memory graph mutations. Existing graph structures offer a hard trade-off between read-only performance, update friendliness, and memory consumption upon updates. In this paper, we introduce CSR++, a new graph data structure that removes these trade-offs and enables both fast read-only analytics and quick and memory-friendly mutations. CSR++ combines ideas from CSR, the fastest read-only data structure, and adjacency lists to achieve the best of both worlds. We compare CSR++ to CSR, adjacency lists from the Boost Graph Library, and LLAMA, a state-of-the-art update-friendly graph structure. In our evaluation, which is based on popular graph-processing algorithms executed over real-world graphs, we show that CSR++ remains close to CSR in read-only concurrent performance (within 10% on average), while significantly outperforming CSR (by an order of magnitude) and LLAMA (by almost 2x) with frequent updates.
|
10.5446/52870 (DOI)
|
So this work is about relaxed queues and stacks from read/write operations. I'm Armando Castañeda and this is joint work with Sergio Rajsbaum and Sherry Rheda. So linearizable non-blocking or wait-free implementations require synchronization, sometimes a lot of synchronization, which might limit scalability. Strong step-complexity lower bounds have been shown for some data types. So it has been shown that sometimes expensive synchronization mechanisms cannot be avoided for some data types, including queues and stacks. So people have proposed relaxed queue and stack implementations, and as far as we know, all these implementations use read-modify-write operations. No one has tackled this problem from the perspective of computability. The only work that has taken this perspective to some extent is the one by Shavit and Taubenfeld. They study the consensus number of several queue and stack relaxations, but they don't provide any implementations. It's known that queues and stacks have consensus number two, so there is no read/write non-blocking or wait-free implementation of these data types. So the question we start with in this paper is if there are any meaningful relaxed queue and stack read/write implementations. We want to study what can be done using only reads and writes when we study queues and stacks. We care about reads and writes because they are the simplest base operations that any model can have, and they have consensus number one. So, contributions: we propose the notion of queues and stacks with multiplicity, and we provide two algorithms, a wait-free stack implementation with multiplicity from read and write operations and a non-blocking queue implementation with multiplicity from read and write operations. We also introduce the notion of queues with a weak-empty value. We provide a wait-free queue implementation with weak-empty from consensus number two objects, and then we provide a wait-free queue implementation with weak-empty and multiplicity from only reads and writes. I will explain the ideas in these contributions, and I want to start talking about correctness conditions. Linearizability, the standard correctness condition, says that operations can be totally ordered while respecting the execution order of non-concurrent operations. The idea is that we have an execution, and then for each of the operations we pick a point inside the interval of the operation — that's the linearization point. These points induce a sequential execution, and this sequential execution should satisfy the specification of the data type or problem that we're talking about. Set-linearizability is a sort of generalization of linearizability where concurrent operations can now be linearized at the same point, like in the figure we have here. So now several operations can be linearized at the same point. So we kind of move from dimension one in linearizability, where everything is aligned, to several lines where several operations can be linearized at the same time. Interval-linearizability is a generalization of set-linearizability where now the points can get stretched, and operations can be linearized over an interval. So now we can talk about interval linearizations. What we need to find inside the interval of each operation is an interval, which can be a point or a proper interval, and then we have this interval linearization.
Like here, we have this interval that spans several operations — the points of this operation and this operation, and it also spans this interval of another operation — all the time respecting the order of non-concurrent operations. Interestingly, these three correctness conditions induce a strict hierarchy: linearizability is strictly contained in set-linearizability, and set-linearizability is strictly contained in interval-linearizability. Here, containment refers to what you can specify using these formalisms. For progress conditions, we consider two standard conditions, non-blocking and wait-freedom. Non-blocking means that as long as a process takes steps, some operation completes. That means that some operations might be blocked in an execution forever, but if that happens, there are infinitely many operations that are completed. Wait-freedom means that as long as a process takes steps, all its operations complete. This is the strongest progress condition: wait-freedom implies non-blocking but not vice versa. Non-blocking is the starting point towards wait-freedom. So here is what we mean by multiplicity. Multiplicity essentially says that concurrent dequeue or pop operations can take the same item, but that can happen only if the operations are concurrent. We still require the FIFO or LIFO order, depending on the case, and we specify these relaxations using set-linearizability. So for the case of the stack, we have this execution: we have here a push operation, here another push operation, and then we can have two pop operations that take the latest item; because this is a stack and they are concurrent, they take the same item. Here we have again this push of three and then these two concurrent operations, and they both take the three. For the case of the queue we have something similar, but in FIFO order: we have an enqueue, another enqueue, a dequeue, and then we have three dequeue operations that take the same item. This is the idea of queues and stacks with multiplicity. For the implementations, we adopt a modular approach in both the case of the queue and the stack. We start with some known algorithm and then we derive our implementation from that algorithm. For the case of the stack, we start with this nice and simple linearizable wait-free stack implementation from Afek, Gafni and Morrison. It's based on consensus number two objects. It has an infinite array, items — these are swap objects initialized to bottom — and then we have top, a fetch-and-increment object which points to the top of the stack. The items array is where the items will be stored. Whenever a process wants to push an item, it does a fetch-and-increment on the top object, which reserves a slot in the items array, and then it puts the value there. These two operations are not atomic together, so some time might pass between reserving the slot and putting the value in the slot. When a process wants to take an item, perform a pop operation, it just reads the top of the stack and starts reading from that position to the bottom of the stack, from top to one. At every entry it does a swap operation so that it can get an item. Whenever it gets something different from bottom, it is an item, and it returns that value. If it gets nothing after scanning all these entries, it returns empty. So the algorithm is very simple, but the linearizability proof of this implementation is not easy at all.
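To make the base algorithm concrete, here is a rough Python sketch of it. The fetch-and-increment and swap primitives are simulated with small lock-protected helper classes purely for illustration; in the actual algorithm these are assumed to be atomic consensus-number-two objects, and the sketch says nothing about wait-freedom.

# Sketch of the base stack: push reserves a slot with fetch-and-increment and
# writes; pop reads the top and scans downward using swap.
from threading import Lock

BOTTOM = object()

class FetchAndIncrement:
    def __init__(self, value=0):
        self._v, self._lock = value, Lock()
    def read(self):
        with self._lock:
            return self._v
    def fetch_and_increment(self):
        with self._lock:
            v = self._v
            self._v += 1
            return v

class SwapCell:
    def __init__(self):
        self._v, self._lock = BOTTOM, Lock()
    def write(self, value):
        with self._lock:
            self._v = value
    def swap(self, value):
        with self._lock:
            old, self._v = self._v, value
            return old

class BaseStack:
    def __init__(self, capacity=1024):
        self.top = FetchAndIncrement(0)
        self.items = [SwapCell() for _ in range(capacity)]

    def push(self, x):
        slot = self.top.fetch_and_increment()   # reserve a slot ...
        self.items[slot].write(x)               # ... then deposit the item

    def pop(self):
        t = self.top.read()
        for i in range(t - 1, -1, -1):          # scan from top to bottom
            x = self.items[i].swap(BOTTOM)      # try to take the item
            if x is not BOTTOM:
                return x
        return "EMPTY"

s = BaseStack()
s.push("a"); s.push("b")
print(s.pop(), s.pop(), s.pop())  # b a EMPTY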
Actually, when one starts playing with the algorithm and running some executions, it's easy to see that proving linearizability is not an easy task. So the idea in our implementation is to replace this top object, this fetch-and-increment object, with a relaxed version of fetch-and-increment in which concurrent operations can obtain the same value. This is the first idea. The second idea is that we will replace every entry of items with a container with multiplicity, which is a relaxed version of a container. A container provides two operations, get and set — it's just like a set. The set operation puts something in the container and the get operation gets something from the container; there is no order that is required. The idea of a container with multiplicity is that concurrent get operations can obtain the same item. So this is what we do. First we need to replace this top fetch-and-increment object with only reads and writes, so we use any read/write implementation of a counter, initialized to one, and this is what we use to handle the top of the stack. Now, instead of doing fetch-and-increment, we do these two operations, a read and then an increment, which are not atomic, so there might be something happening between these two operations. So now it's clear that two processes can get the same position in top. To resolve this issue, we do this thing of a relaxed container. Now every entry of items in the original algorithm is replaced with an array, with an entry dedicated to each of the processes. So now, when a process deposits a value in items, it uses two indices: the row of the array, which is the container, and then its position, the entry dedicated to the process. When the process writes this value, it's logically doing a put into this container at this position. Now, this part of the code, which is a for loop where it scans all the positions in a given container, is implementing a get operation of the container. So every time a process wants to get a value from this stack, it reads the top and starts scanning all the containers from the top of the stack to the bottom. It goes to one of these containers, it starts scanning all the positions in the container and trying to get a value. So that implements the get operation. Before, we had this swap operation so that only one process can get each item, but we don't have that anymore. That swap operation is implemented just with reads and writes in a very simple manner. Every time a process is reading an entry of this container — it scans all these entries — if it reads something distinct from bottom, it just returns that value, but before that it marks this entry as taken and writes bottom again. But now the observation is that since this is asynchronous, there might be many things happening in between: it's possible that two processes read the same entry at the same time and they both see the same item and return the same item. But this could happen only in case of concurrency. And these are the two main observations for proving the set-linearizability of the algorithm. The first one is the one that I just said before: push operations can deposit at the same row, that means at the same container, if and only if they are concurrent. This is the first observation. The second observation is that pop operations can take the same item, but that can happen if and only if they are concurrent.
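The following is an illustrative Python sketch of the relaxed read/write stack just described: the top is a plain read-then-write counter, each row of items is a container with one slot per process, and pop marks a slot as taken with a plain write. It is not the paper's pseudocode; the process count, the capacity, and the initial top value are assumptions made for the sketch, and it uses only element reads and writes on shared lists.

# Sketch of the read/write stack with multiplicity described above.  Shared
# memory is plain Python lists; only element reads and writes are used
# (no compare-and-swap, no fetch-and-increment).  Illustrative only.
BOTTOM = None
NPROC = 4
CAPACITY = 64

top = [1]                                          # read/write "counter"
# items[row][p]: container row `row`, slot dedicated to process p.
items = [[BOTTOM] * NPROC for _ in range(CAPACITY)]

def push(my_id, value):
    row = top[0]                # read ...
    top[0] = row + 1            # ... then write: NOT atomic, so two concurrent
                                # pushes may obtain the same row (same container)
    items[row][my_id] = value   # deposit into my dedicated slot of the container

def pop():
    t = top[0]
    for row in range(t, 0, -1):             # scan containers from top to bottom
        for p in range(NPROC):              # "get" on the container: scan slots
            x = items[row][p]
            if x is not BOTTOM:
                items[row][p] = BOTTOM      # mark as taken (a plain write) ...
                return x                    # ... two concurrent pops that both
                                            # read the slot before either write
                                            # may return the same x -- but only
                                            # under concurrency
    return "EMPTY"

push(0, "a"); push(1, "b")
print(pop(), pop(), pop())   # b a EMPTY (in a sequential run)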
So these are the two observations. The set-linearizability proof is a reduction proof to the base algorithm. What we do is take an execution of our algorithm; for each of the items, if it is taken by more than one operation, we remove all these operations but one — we just keep one of them. Then that execution is linearized using the linearization proof of the base algorithm. Once we have this linearization, we go back to our implementation and put back the operations that we removed, and this is how we obtain the set-linearization. The approach seems kind of simple, but there are some subtleties, and we rely on the linearization proof of the original algorithm, which is kind of hard. For the case of the queue, we follow the same approach. We start with this simple, linearizable, non-blocking queue implementation, which is a variant of the queue implementation of Herlihy and Wing. The idea is pretty much the same as in the case of the stack. We have this array of swap objects where the items will be stored, the items array, and we have this tail, which is a fetch-and-increment object that points to the tail of the queue. Whenever a process wants to enqueue, again, it increments the tail, reserves a slot, and then puts the item in that slot. And the dequeue essentially follows the same idea: the process starts reading from the beginning of the queue, from position one in the items array, to the tail. It starts reading and tries to get something doing swap operations — this is in line 11, it's trying to get something doing a swap operation. If it gets something different from bottom, it's an item, and it returns that item. Otherwise, it reads the tail again to check whether something new has been enqueued in the queue, and if there is nothing new, the condition in line 16 is satisfied, so it can declare that the queue is empty. Otherwise, it restarts. It's easy to see that this implementation is non-blocking: the enqueue operation is wait-free, but the dequeue operation is only non-blocking, because the dequeue operation can scan again and again if new items keep being stored in the queue. And we follow the same approach to obtain our read/write implementation. Exactly the same idea: the fetch-and-increment object is replaced with a read/write counter, and the fetch-and-increment operation is implemented using these two operations. And again, we implement this relaxed container with multiplicity in the same way that we did before. This is the way we obtain the set-linearizable non-blocking queue with multiplicity. This is non-blocking because our base algorithm is non-blocking. Okay, so before going to the next contributions, I would like to explain three implications that we get from the two algorithms that we just saw. The first one is that we avoid costly synchronization operations. It has been shown by Attiya et al. that any implementation of a stack or a queue must use either read-modify-write operations or read-after-write patterns, like the one used in the flag principle. These two synchronization mechanisms are costly, and using this notion of multiplicity we can evade these results, because our implementations use only reads and writes and we don't use these kinds of expensive mechanisms like read-after-write. And the same happens for work stealing. Work stealing is a popular dynamic load balancing technique.
And it has been shown by Attiya et al. that any implementation of work stealing must use read-modify-write operations or read-after-write patterns. But our algorithms imply work-stealing-with-multiplicity solutions using only reads and writes, so we can evade, again, these impossibility results for this data type. Third, our implementations imply k-out-of-order queues with multiplicity using only reads and writes. k-out-of-order is a relaxation of queues, introduced in previous work, in which essentially the dequeue operation can take any of the k oldest values in the queue. Implementations of this relaxation have been proposed using only read-modify-write operations; using the notion of multiplicity, we can obtain k-out-of-order with multiplicity using only reads and writes. Now, we also introduced this notion of queues with weak-empty values. We obtained a non-blocking implementation of a queue, so another question is why not obtain a wait-free read/write queue with multiplicity. If we could start with a base algorithm which is linearizable, wait-free, and uses consensus number two objects, we could maybe derive a wait-free implementation using the same techniques. But this is not so simple, because that is an open question — it was stated in '93, actually, by Afek et al. Whether there is a linearizable wait-free queue implementation that uses consensus number two objects is unknown, and there are people that have been working on that, but there is no solution to that problem. The challenge for implementing queues using consensus number two objects is in detecting if the queue is empty; we sometimes call this the chasing-the-tail problem. As you saw in the base algorithm that we used, it has this condition for whether it can declare that the queue is empty or not; actually, it's hard to declare that the queue is empty in all cases while being wait-free. So we propose a relaxation of queues with a weak-empty value. Essentially, the idea is the following: if a dequeue operation returns weak-empty, then the queue might be empty. From the perspective of that operation, the queue might be empty — it's not necessarily empty, but it could be. We specify this using interval-linearizability. In this relaxation, we have something like this: we have enqueues and dequeues returning values or empty as usual, but we might now have a dequeue operation, like the one here, which will be linearized not at a single point but interval-linearized — its linearization will span an interval. And the idea is the following. When the dequeue operation starts, the queue is in a state; in this case, it has one and two. Then, while this operation is running, these two items are taken. So from the perspective of this operation, these two items are taken, so the queue might be empty, because the operation might not be aware that there is a new item enqueued — like in this case, where we have this three. We know that there is a new item because we see it in the picture, but the operation might not be aware of this item. So it plays it conservative and says, well, the queue might be empty in this case. This is the idea. And we obtain a wait-free implementation using again the same base algorithm that we used before.
So what we do now is take this algorithm, which is non-blocking, and use this notion of queues with weak-empty values to go from non-blocking to wait-free. We obtain this implementation again from consensus number two objects, and we modify the base algorithm in a very simple way. The idea is that instead of scanning the tail again and again, we just scan it two times, and if the condition for returning empty is satisfied, we return empty; otherwise, we return weak-empty. And that's it, that is the only thing that we do — it's a very simple algorithm. Although the algorithm is very simple, the interval-linearizability proof of the algorithm is not simple at all. Actually, it's the most complex proof in the paper. Even though we follow this reduction approach again — we reduce the interval-linearizability proof to the linearizability proof of the base algorithm — still the proof is kind of complicated. So this is the way we obtain this wait-free queue using consensus number two objects. And then, to obtain a read/write queue implementation with multiplicity and with weak-empty, we do the same trick, by replacing the tail with a relaxed fetch-and-increment and replacing each entry of items with a container with multiplicity. And this is how we obtain it; we follow the same approach as before. To conclude, we have a wealth of future work. We want to study multiplicity for other data types. We want to see if our implementations could lead to scalable read/write implementations of the queue and the stack or any other data type. We have some partial results for work stealing — that's a good candidate, we think that there is something there, so we'll dig more into that in the future. Also, we want to explore multiplicity for scalable implementations regardless of the base operations. In this paper, we focused on reads and writes because we had this perspective more on the side of computability, what you can do with these operations. But in general, we are also interested in scalability: the base operations don't matter, we just want to have something which is faster. And that's it. Thanks for watching.
|
Considering asynchronous shared memory systems in which any number of processes may crash, this work identifies and formally defines relaxations of queues and stacks that can be non-blocking or wait-free while being implemented using only read/write operations. Set-linearizability and Interval-linearizability are used to specify the relaxations formally, and precisely identify the subset of executions which preserve the original sequential behavior. The relaxations allow for an item to be returned more than once by different operations, but only in case of concurrency; we call such a property multiplicity. The stack implementation is wait-free, while the queue implementation is non-blocking. Interval-linearizability is used to describe a queue with multiplicity, with the additional relaxation that a dequeue operation can return weak-empty, which means that the queue might be empty. We present a read/write wait-free interval-linearizable algorithm of a concurrent queue. As far as we know, this work is the first that provides formalizations of the notions of multiplicity and weak-emptiness, which can be implemented on top of read/write registers only.
|
10.5446/52874 (DOI)
|
Hi, in this talk, I'm going to talk about distributed distance approximation. This is joint work with Bertie Ancona, who was affiliated with MIT at the time, Keren Censor-Hillel from the Technion, Mina Dalirrooyfard from MIT, and Virginia Vassilevska Williams from MIT. So this work revolves around the congest model of computation, in which we have a network of n nodes. Communication takes place in synchronous rounds. The initial knowledge of each node is only its immediate neighbors. We have unbounded local computation inside each node. Our complexity measure is the number of rounds, and we have a bandwidth restriction: that is, in each round, each node can send a message of at most order of log n bits to each one of its neighbors. So this talk revolves around distance computation, and in particular, we are going to focus on two distance parameters, the diameter and the radius. But we will start with defining the eccentricity of a node v. The eccentricity of a node v in the graph is the maximum distance from v to any other node in the graph. Given this definition, it is very easy to define the diameter and the radius of a given graph: the radius of G is simply the minimal eccentricity in G, and the diameter of G is simply the maximal eccentricity of a node in G. Furthermore, we define the distance from a node v to some subset of nodes S to be the minimal distance between v and any node in S. Good. Now we can state our main result, which is that we show a near complete characterization of the trade-off between the approximation ratio and round complexity for the following distance parameters: weighted diameter, directed radius, weighted radius, and directed diameter. And here, in the last parameter, we have a small caveat where we don't actually show a complete characterization, but there is a small range of approximation ratios which remains open. So let me be a bit more precise about what I mean by a near complete characterization and actually state our result. We have a graph here where on the x-axis we have the approximation ratio, and on the y-axis we have the round complexity. Things that are marked in red indicate results of previous work, and things that are marked in purple indicate new results from this work. Now, we start off with weighted diameter, for which we know from previous work that any algorithm that computes a 2 minus epsilon approximation to the weighted diameter requires a nearly linear number of rounds. And of course — well, not of course, but there is a corresponding upper bound, due to the fact that the all-pairs shortest paths problem can be solved on weighted directed graphs in near linear time exactly. Now, for approximation ratios beyond 2, it is known from previous work that any approximation to the weighted diameter requires square root of n rounds, of course up to polylogarithmic factors. We show that this is actually tight by showing an approximation algorithm in O tilde of square root of n rounds that computes a 2 plus epsilon approximation to the weighted diameter. Now for directed radius, it was known from previous work that any algorithm that computes a 1 and a half minus epsilon approximation to the directed radius requires near linear time. And again, this is tight, since the all-pairs shortest paths problem can be solved exactly in near linear time on weighted directed graphs. We show that actually any algorithm that approximates the directed radius within a factor of 2 minus epsilon requires nearly linear time.
And we also show, for approximation ratios beyond 2, that any approximation of the directed radius requires square root of n rounds. And we also show that computing a 2 plus epsilon approximation to the directed radius can be done in O of square root of n rounds, up to polylogarithmic factors. The picture you see here is exactly the same for weighted radius, and for the same reasons as well. And so I continue on to directed diameter, for which we show that any approximation requires at least square root of n rounds, and we also show that one can compute a 2 plus epsilon approximation to the directed diameter in O tilde of square root of n rounds. However, the range of approximation ratios between 1 and a half and 2 remains open for future work. And due to lack of time, I will focus here on the approximation algorithms that I've mentioned. All of these algorithms actually boil down to a single algorithm, which is a 2 plus epsilon approximation to all eccentricities in a weighted directed graph. In particular, this gives us a 2 plus epsilon approximation to the radius and the diameter as well. The key technique here is to compute a set that is called a pseudo-center, on which I will now elaborate. I'll start with telling you what the center is. The center of G is the node whose eccentricity is the smallest, that is, whose eccentricity is precisely the radius. Well, computing a center is actually hard: it requires a near linear number of rounds by previous work. So an idea introduced by previous work is to compute a small subset of vertices C that sort of mimics the role of the center, in the sense that the distance of every node v in the graph from the set C is at most the radius. Note that if our set C was precisely just the center, this would also be true. So now let me tell you about how previous work actually computes this pseudo-center in a given graph. Let us assume that we are now working in an undirected, unweighted graph and that we have algorithms to compute single-source shortest paths exactly. So we start with a set W1, which initially equals V, that is, the set of nodes in the graph. Then we sample some subset of nodes S uniformly at random, of small size, on the order of log n, and we add S to our pseudo-center set C. Now we do the following. We consider the farthest node from S, a, which is this node here, where the nodes in green are the nodes that we sampled into S. And now we do the following: in each iteration i, for each node whose distance from a is at least the distance of a from the set S, we are going to remove it from W i, and we're going to call the remaining nodes W i plus one. And we are going to repeat this algorithm — that is, sampling, then finding the farthest node, and then removing nodes — until no nodes are left. So for example here, we're going to continue on and on until no nodes are left in the graph and all nodes here are white. Okay, good. Now I need to convince you that this process actually terminates sufficiently quickly, so that we will have our set C indeed of small size. I'm going to very briefly go over this due to lack of time. The claim here is that the size of W i decreases by at least half in each iteration. And for this, you need to consider the set X of the size-of-W-i-over-two closest vertices to a.
And it's not hard to prove that with high probability X actually intersects S. So what we will have is that the distance of a from the set of nodes W i without X is at least the distance of a from the nodes in S, and so all of the nodes in W i without X will be removed with high probability, which are at least half of the nodes in W i. And so after order of log n iterations our algorithm will halt, and so we will have a pseudo-center C of size order of log squared n. Now, assuming that we are actually given a pseudo-center C, and it is denoted here by the nodes in green, how do we actually compute the approximation to all eccentricities? First of all, we compute single-source shortest paths from all the nodes in C. Then we consider the farthest node from C, and we denote its distance from C by D. Now each node v is going to approximate its eccentricity by D plus the maximum distance of v to any node in C. Now let's see why this is indeed a 2-approximation to the eccentricity of v. First of all, we see that D is at most R, because the distance of any node to the set C is at most R, and R is of course the minimal eccentricity in the graph. So of course R is at most the eccentricity of v. And the maximum distance from v to any node in C is at most the eccentricity of v, because the eccentricity of v is the maximum distance from v to any node in the entire graph. So of course the value that we have here is at most twice the eccentricity of v. Now why is this value at least the eccentricity of v? Because note that with D plus the maximum distance from v to any node in C we can bound the distance from v to each other node in the graph by at most this number of hops: we can get from v to any node in C that we need within this amount of distance, and from there we need to spend at most D additional hops in order to get to our final node. And thus this is a two approximation. Furthermore, in our paper we work with directed and weighted graphs, and because we are working with approximations, we need to define a more general notion of a pseudo-center in our work. What we call an approximate pseudo-center is the definition you see here, because computing a pseudo-center requires computations of exact single-source shortest paths, but we can only compute approximations. Now, the fact that we can only compute approximate single-source shortest paths raises some complications in our algorithm. For example, let's take a look at a graph similar to the one we saw previously. So now we have sampled these vertices for our set S, but now we compute single-source shortest paths from them not exactly but approximately. So what we actually get is that this node is reported as the node which is farthest from the nodes we sampled, while it is actually closer to them, which means that in the step where we remove nodes, it could be the case that we are actually removing fewer nodes than we need. And so this process might not get to an empty set as quickly as we need it to. This is one complication.
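As an illustration, here is a sequential Python sketch of the pseudo-center idea and the resulting eccentricity estimates on an unweighted, undirected, connected graph, using exact BFS distances. This is only a centralized simulation of the idea; the distributed algorithm of the paper works with approximate shortest paths and an approximate pseudo-center, so the details differ.

# Sequential sketch of the pseudo-center construction and the D + max-dist
# eccentricity estimate, on a connected, unweighted, undirected graph.
import random
from collections import deque

def bfs_dist(adj, sources):
    dist = {s: 0 for s in sources}
    q = deque(sources)
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def pseudo_center(adj, sample_size=4, seed=0):
    rng = random.Random(seed)
    W, C = set(adj), set()
    while W:
        S = set(rng.sample(sorted(W), min(sample_size, len(W))))
        C |= S                                   # add the sample to the pseudo-center
        d_from_S = bfs_dist(adj, S)
        a = max(W, key=lambda v: d_from_S[v])    # farthest remaining node from S
        d_from_a = bfs_dist(adj, [a])
        thresh = d_from_S[a]
        # Keep only nodes strictly closer to a than a is to S.
        W = {v for v in W if d_from_a[v] < thresh}
    return C

def approx_eccentricities(adj, C):
    d_from_C = bfs_dist(adj, C)                  # multi-source BFS from C
    D = max(d_from_C.values())                   # farthest node's distance from C
    dist_from = {c: bfs_dist(adj, [c]) for c in C}
    return {v: D + max(dist_from[c][v] for c in C) for v in adj}

adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}     # a path on 4 nodes
C = pseudo_center(adj)
print(C, approx_eccentricities(adj, C))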
Another complication that could arise is that we are removing nodes which are not sufficiently close to the pseudo-center, and we are treating them as if they are already good in the sense of closeness to the pseudo-center. But we show in our work that this is actually not too big of a problem, and we can guarantee sufficient closeness to the sampled set, which is the approximate pseudo-center, in order to get a two plus epsilon approximation to all eccentricities. Now, as an additional result that I did not discuss here, we prove the aforementioned lower bounds for approximating directed/weighted radius within a factor of two minus epsilon using reductions from two-party communication complexity, where we actually use functions that were not used previously in order to show lower bounds for congest using this framework. Furthermore, we consider a slightly different variant of diameter and radius, one that is well known in the sequential setting, that is, the bichromatic diameter and radius problem, in which our set of vertices V is partitioned into two sets S and T. And given some vertex s, we define the ST-eccentricity of s to be the maximum distance from s to any node in T. And with this definition, we define the ST-diameter to be the maximum ST-eccentricity of any node in S, and the ST-radius to be the minimal ST-eccentricity of any node in S. And we also compute or approximate these parameters of ST-eccentricity, diameter and radius; we show the first upper and lower bounds in congest for these parameters. So, to wrap up, we have shown in this work a near complete characterization of the trade-off between the approximation ratio and round complexity for weighted diameter, directed radius, weighted radius, and directed diameter, again with the caveat of a small range of approximation factors which remains open. As for open questions, it's still very much not clear what the trade-off between approximation ratio and round complexity is for the undirected, unweighted case of diameter and radius. Thank you.
|
Diameter, radius and eccentricities are fundamental graph parameters, which are extensively studied in various computational settings. Typically, computing approximate answers can be much more efficient compared with computing exact solutions. In this paper, we give a near complete characterization of the trade-offs between approximation ratios and round complexity of distributed algorithms for approximating these parameters, with a focus on the weighted and directed variants. Furthermore, we study bi-chromatic variants of these parameters defined on a graph whose vertices are colored either red or blue, and one focuses only on distances for pairs of vertices that are colored differently. Motivated by applications in computational geometry, bi-chromatic diameter, radius and eccentricities have been recently studied in the sequential setting [Backurs et al. STOC'18, Dalirrooyfard et al. ICALP'19]. We provide the first distributed upper and lower bounds for such problems. Our technical contributions include introducing the notion of approximate pseudo-center, which extends the pseudo-centers of [Choudhury and Gold SODA'20], and presenting an efficient distributed algorithm for computing approximate pseudo-centers. On the lower bound side, our constructions introduce the usage of new functions into the framework of reductions from 2-party communication complexity to distributed algorithms.
|
10.5446/52875 (DOI)
|
Hello, my name is Salwa Faour and I'm a PhD student at the University of Freiburg, Germany. I would like to talk about approximating bipartite minimum vertex cover in the CONGEST model. This is joint work with my supervisor Fabian Kuhn. The plan of the talk is the following. In the introduction I will present the minimum vertex cover problem and our first simple result. Then I will talk about the state of the art of the problem in the distributed setting and our main contributions. In the next two sections I'll explain the key ideas and algorithms that led to our main results. Finally, I conclude this presentation with some open problems. So let's start. In the minimum vertex cover problem, we are given an arbitrary graph and asked to find a vertex cover with minimum cardinality, that is, a smallest possible subset of vertices that contains at least one node from every edge in the graph. For example, the yellow nodes of the graph on the right make up a minimum vertex cover of that graph. In this work, we study the distributed complexity of bipartite minimum vertex cover in the standard CONGEST model. In this model, the network is modeled as an n-node undirected graph where each node has a unique O(log n)-bit identifier. The computation proceeds in synchronous communication rounds. In each round, each node can perform arbitrary local computation and send one message to each of its neighbors, where the size of each message is restricted to O(log n) bits. At the end, every node should know its own part of the output, so for example whether it belongs to the vertex cover or not. Our first contribution is a simple linear-time algorithm to exactly solve the minimum vertex cover problem. While in general graphs, by a known fact, a minimum vertex cover is at least as large as a maximum matching, for bipartite graphs Kőnig's well-known theorem states that equality holds. The proof of the theorem provides a way of constructing a minimum vertex cover from a maximum matching, and a direct implementation of the theorem's constructive proof in the CONGEST model gives our first result. We will next briefly present the implementation. Assume that we are given a bipartite graph and the bipartition of the nodes into sets A and B. A vertex cover of the same size can then be found in the following simple manner. We first compute a maximum matching M* of the bipartite graph; that can be done in O(|M*| log |M*|) rounds in the CONGEST model by running the algorithm of Ahmadi, Kuhn and Oshman. Let A0 be the set of unmatched nodes in A, that is, nodes which are only connected to unmatched edges. And let L be the set of nodes that are reachable from A0 over alternating paths, that is, over paths that alternate between edges in the matching and edges outside the matching. By doing a parallel BFS exploration on alternating paths starting at all nodes in A0, the set L can then be computed in O(|M*|) rounds. Lastly, every node in A that was not hit by the BFS search is obviously not in L and hence can consider itself in C*. In contrast, every node in B that was hit by the BFS search is in L and can thus count itself in C*. It is then not hard to show that C* is a vertex cover that contains exactly one node of every edge in M*.
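The construction just described translates almost directly into code. Below is a minimal centralized Python sketch (illustrative names; the actual algorithm runs the alternating-path BFS distributedly in the CONGEST model): given the two sides A and B as sets, an adjacency dict, and a maximum matching given as a dict mapping each matched node to its partner, it returns the vertex cover C*.

from collections import deque

def koenig_vertex_cover(A, B, adj, match):
    A0 = [a for a in A if a not in match]          # unmatched nodes of A
    L = set(A0)                                    # nodes reachable over alternating paths
    queue = deque(A0)
    while queue:
        u = queue.popleft()
        if u in A:
            # from A we move along edges outside the matching
            for w in adj[u]:
                if match.get(u) != w and w not in L:
                    L.add(w)
                    queue.append(w)
        else:
            # from B we move back along the matching edge
            w = match.get(u)
            if w is not None and w not in L:
                L.add(w)
                queue.append(w)
    # C* = (nodes of A not hit by the search) union (nodes of B hit by the search)
    return (set(A) - L) | (set(B) & L)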
And since by a known fact the size of the minimum vertex cover is at least that of the maximum matching, we deduce that C* is a minimum vertex cover of size equal to that of M*, and it can be computed in O(opt log opt) rounds in the CONGEST model, where opt is the minimum vertex cover size. Now we move on to talk about what's known for the distributed minimum vertex cover problem, and our main contributions afterwards. When studying the distributed minimum vertex cover problem, the focus has mostly been on establishing how many synchronous communication rounds are needed to solve or approximate the problem in the less restricted LOCAL model. Solving MVC exactly on general and bipartite graphs takes Θ(D) rounds, where D is the diameter of the graph. Indeed, on one hand, every problem in the LOCAL model can be trivially solved in diameter time, where each node gathers all the information of the graph in O(D) rounds and brute-forces the solution locally. On the other hand, Ω(D) rounds are necessary, simply because the problem of finding an optimal vertex cover on a path is equivalent to finding a two-coloring of the path, and it is known that two-coloring a path takes precisely Θ(n) rounds. As for the restrictive CONGEST model, in general graphs the best known algorithm for exact MVC runs in O(opt²) rounds, given by Ben-Basat, Kawarabayashi and Schwartzman, where opt is the size of the minimum vertex cover. This upper bound is close to tight because of the Ω̃(n²) lower bound given by Censor-Hillel, Khoury and Paz. On the bipartite side, we give a simple O(opt log opt) algorithm for solving exact minimum vertex cover, as we have seen in the introduction. As for the lower bound, an Ω̃(D + √n) bound follows from the work of Ahmadi, Kuhn and Oshman, who in their DISC paper prove the lower bound for maximum matching, but the same argument also works for the minimum vertex cover problem. As things seem a bit pessimistic when working with optimal MVC — somewhat expected when working with NP-hard problems — naturally things might get brighter with approximation. So here is what we know about (1+ε)-approximate minimum vertex cover, where ε is a sufficiently small constant. It was shown by Ghaffari, Kuhn and Maus that the minimum vertex cover problem, and in fact all distributed covering and packing problems, can be (1+ε)-approximated in time polylog(n)/ε in the LOCAL model. This upper bound obviously still holds for the bipartite case. However, due to the work of Göös and Suomela, it was further proved that there exists no sublogarithmic-time approximation scheme for vertex cover, even for bipartite graphs of maximum degree 3. As the complexity of the distributed minimum vertex cover problem in the LOCAL model is now understood quite well, there has recently been increased interest in also understanding the complexity of the problem in the more restrictive CONGEST model. However, in the CONGEST model, for better than a 2-approximation, nothing faster was known than the bounds we have just seen for computing an exact solution, in both the general and bipartite case. We note that the distributed (1+ε)-approximations for MVC and related problems quite heavily exploit the power of the LOCAL model and thus cannot be used for CONGEST.
So in this work we focused on the bipartite case and asked ourselves: can we efficiently get a good approximate minimum vertex cover in the CONGEST model? Yes, and this is our contribution. We give polylogarithmic-time algorithms for both the randomized and the deterministic case, where both algorithms run in O(polylog(n)/ε) rounds, with the randomized complexity being faster and, for a fixed ε, matching the logarithmic lower bound by Göös and Suomela mentioned above. To achieve our main results, polylogarithmic-time algorithms for approximating bipartite minimum vertex cover, we will first show how to solve the problem in time linear in the diameter, and then how one can use that to reach faster algorithms. The core of the algorithm that will help us achieve an approximate bipartite minimum vertex cover in time linear in the diameter is a method to transform an approximate solution of the maximum matching problem into an approximate solution of the minimum vertex cover problem on bipartite graphs, and it takes O(D + k) rounds in the CONGEST model. But this matching needs to have an extra property: a guarantee that no short augmenting paths exist in the graph, and by short augmenting paths I mean augmenting paths of length at most 2k−1. Recall that, given a matching, an augmenting path is a path that starts and ends with unmatched vertices and alternates between matched and unmatched edges. Also note that, by a known fact, a matching with no short augmenting paths is an approximate one, but the converse is not necessarily true. We will next show how our approximation scheme works by adapting the constructive proof of Kőnig's theorem mentioned earlier. To simplify things, let's explain the algorithm by running the example here on the left. Let k be a positive integer parameter; for example, we choose k equal to 2. Let G be a bipartite graph; the bipartition into A and B can be easily computed by constructing a spanning tree in O(D) rounds. We also assume that we are given a matching with the guarantee that no short augmenting paths exist; hence for k = 2 any existing augmenting path has to be of length at least 5. First we define A0 to be the set of unmatched nodes in A. Next, for every i from 1 to k, we define the sets A_i and B_i in the following manner: B_i is the set of nodes in B that are reachable from some unmatched node in A0 over a shortest alternating path of odd length 2i−1, and similarly we define the sets A_i for every i, with the difference that the shortest alternating path should be of even length 2i. Computing the B_i's and A_i's can be easily implemented in the CONGEST model by running the first 2k iterations of a parallel breadth-first search over alternating paths starting from each unmatched node in A0, and depending on whether a node is hit in an odd- or even-level iteration, it joins the corresponding set B_i or A_i respectively. We demonstrate how this construction works on our example, where we only need to run the first four rounds of the parallel BFS starting from the nodes in A0. Notice that the nodes that have not been hit by the BFS don't join any new set. The time required to do this parallel BFS and obtain the B_i's and A_i's is obviously O(k) rounds. Now we define i* to be some i in the set {1, ..., k} such that the size of the set B_i attains its minimum; so i* is equal to 2 in our example here.
Computing i* in step 4 can be done in O(D + k) rounds by using the already computed BFS spanning tree and a simple pipelining scheme: one can compute the sizes of all the sets B_i and determine the index i* of the smallest such set. Lastly, for step 5, the algorithm outputs the set C_{i*} that contains, from A, all the nodes that are not in A0 up to A_{i*−1}, and from B it takes all the nodes from B_1 up to B_{i*}. So the gray part in our example here represents C_{i*}. Of course, the root of the tree can broadcast i* to all nodes via the BFS tree in O(D) rounds, and then nodes can decide whether to join C_{i*} or not accordingly. Therefore the overall time complexity of the algorithm is O(D + k) rounds, as claimed. Now, to prove that C_{i*} is our desired approximate vertex cover, first we show it is a vertex cover, that is, every edge is incident to at least one node from C_{i*}, so from the gray part; otherwise there would exist an edge connecting a node in A from the top left outside C_{i*} to a node in B from the bottom right outside C_{i*} — an example is this red edge. But regardless of whether this edge is matched or unmatched, one can see that such an edge cannot exist, due to the way we constructed our sets A_i and B_i via the parallel BFS. Hence C_{i*} is a vertex cover. To finally show it is an approximate vertex cover, we need the following two observations. The first observation is that each set A_i is exactly the set of nodes that can be reached over matching edges from nodes in B_i; hence for each i we have that the size of A_i equals that of B_i. So in our example the size of A_1 is equal to that of B_1, and similarly A_2 is equal to B_2. The second observation is that all nodes in the sets B_1 up to B_{i*} are matched nodes — so in our example B_1 and B_2 contain only matched nodes — else there would exist a short augmenting path from a node in A0 to an unmatched node in B. Moreover, and this is due to how we defined C_{i*}, from every matching edge our vertex cover C_{i*} contains exactly one node, except for the matching edges that connect B_{i*} to A_{i*}, where B_{i*} is the smallest of the B_i's and hence, by a simple calculation, is at most a 1/k fraction of the matching. Using these two observations one can do the math and prove that C_{i*} is a (1 + 1/k)-approximate vertex cover. From here, two challenges remain for eventually getting faster polylogarithmic algorithms. One, we need to efficiently get this matching that we have assumed so far, which guarantees that no short augmenting paths exist; we do that, and we obtain an algorithm that solves our problem in time linear in the diameter. The second challenge is that the diameter might be large, and a small diameter is needed in order to run our previous algorithm and eventually approximate bipartite MVC efficiently. For the first challenge, if we allow randomization, then there exists an efficient CONGEST algorithm that gives a matching in bipartite graphs with the desired guarantee; the algorithm is given by Lotker, Patt-Shamir and Pettie. In combination with our key approximation scheme, we directly get a randomized algorithm that, for a fixed integer parameter k, approximates bipartite minimum vertex cover in time linear in the diameter, as promised. As for the deterministic case, we run a polylogarithmic-time approximate maximum matching algorithm in the CONGEST model by Ahmadi, Kuhn and Oshman.
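Here is a centralized Python sketch of this layered construction (illustrative names; it assumes, as in the talk, that the given matching has no augmenting path of length at most 2k−1, and it glosses over the distributed implementation details): it builds the layers B_1, A_1, ..., B_k, A_k by an alternating BFS from the unmatched nodes A0, picks i* minimizing |B_{i*}|, and outputs C_{i*}.

def approx_vertex_cover(A, B, adj, match, k):
    A0 = {a for a in A if a not in match}          # unmatched nodes of A
    reached = set(A0)
    A_layer = {0: set(A0)}
    B_layer = {}
    frontier = set(A0)
    for i in range(1, k + 1):
        # odd step (alternating distance 2i-1): edges outside the matching, A -> B
        B_layer[i] = {w for u in frontier for w in adj[u]
                      if w not in reached and match.get(u) != w}
        reached |= B_layer[i]
        # even step (alternating distance 2i): matching edges, B -> A
        A_layer[i] = {match[w] for w in B_layer[i]
                      if w in match and match[w] not in reached}
        reached |= A_layer[i]
        frontier = A_layer[i]
    # i* minimizes |B_i|; C_{i*} = (A minus A_0..A_{i*-1}) union (B_1..B_{i*})
    i_star = min(range(1, k + 1), key=lambda i: len(B_layer[i]))
    dropped_A = set().union(*(A_layer[i] for i in range(i_star)))
    taken_B = set().union(*(B_layer[i] for i in range(1, i_star + 1)))
    return (set(A) - dropped_A) | taken_B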
Unfortunately, this algorithm does not guarantee that at the end there are no short augmenting paths. We resolve this by first removing a small number of nodes of the graph such that the remaining graph has no short augmenting paths. Thus we can apply our key approximation scheme on the remaining graph to get a vertex cover there, and eventually these removed nodes can be added back to get a vertex cover of the original graph. Now, the problem of finding a small set of nodes to remove in order to get rid of all short augmenting paths can be phrased as a minimum set cover problem, which we can approximately solve by an efficient algorithm in the CONGEST model. To overcome the second challenge, we decompose the graph into clusters of small diameter by adapting existing fast low-diameter graph clustering algorithms. For the randomized case we use the Miller-Peng-Xu low-diameter clustering, and for the deterministic case we modify the recent polylogarithmic-time network decomposition of Rozhoň and Ghaffari. We can then solve the problem inside the clusters using our linear-in-the-diameter algorithms. Note that we need to do this adaptation to get a clustering with specific properties; this helps us deal with edges between clusters that might be left uncovered. Therefore, we have seen how we can use the linear-in-the-diameter algorithms together with existing, adapted low-diameter graph clustering algorithms to obtain polylogarithmic-time approximation schemes for the minimum vertex cover problem in the CONGEST model, both deterministic and randomized. We can now conclude this presentation. Recall that for computing an optimal vertex cover on general graphs, it is known that Ω̃(n²) rounds are necessary in the CONGEST model. It is therefore an interesting open question to investigate whether it is possible to approximate minimum vertex cover within a factor smaller than 2 for general graphs in the CONGEST model, or to even understand for which families of graphs this is possible. I would like to thank you for your attention.
|
We give efficient distributed algorithms for the minimum vertex cover problem in bipartite graphs in the CONGEST model. From K\H{o}nig's theorem, it is well known that in bipartite graphs the size of a minimum vertex cover is equal to the size of a maximum matching. We first show that together with an existing O(n\log n)-round algorithm for computing a maximum matching, the constructive proof of K\H{o}nig's theorem directly leads to a deterministic O(n\log n)-round CONGEST algorithm for computing a minimum vertex cover. We then show that by adapting the construction, we can also convert an \emph{approximate} maximum matching into an \emph{approximate} minimum vertex cover. Given a (1−δ)-approximate matching for some δ>1, we show that a (1+O(δ))-approximate vertex cover can be computed in time O(D+\poly(\log n/δ)), where D is the diameter of the graph. When combining with known graph clustering techniques, for any \eps∈(0,1], this leads to a \poly(\log n/\eps)-time deterministic and also to a slightly faster and simpler randomized O(\log n/\eps^3)-round CONGEST algorithm for computing a (1+\eps)-approximate vertex cover in bipartite graphs. For constant \eps, the randomized time complexity matches the Ω(\log n) lower bound for computing a (1+\eps)-approximate vertex cover in bipartite graphs even in the LOCAL model. Our results are also in contrast to the situation in general graphs, where it is known that computing an optimal vertex cover requires ~Ω(n^2) rounds in the CONGEST model and where it is not even known how to compute any (2−\eps)-approximation in time o(n^2).
|
10.5446/52876 (DOI)
|
Hello, my name is Viktor Kolobov and today I'm going to talk to you about fast deterministic algorithms for highly dynamic networks. This is joint work with Keren Censor-Hillel, Neta Dafni, Ami Paz and Gregory Schwartzman. I hope you'll enjoy it. In this work we deal with distributed graph algorithms, so let's first recall the so-called static setting. In this setting we have a network of nodes, and the nodes wish to compute a function of the input and the network topology. This can be, for example, a legal coloring of the graph, or maybe some sort of independent set. The network is modeled as a static graph, where edges correspond to bidirectional communication links between the nodes, over which they can exchange messages. The communication in the network is synchronous, so it proceeds in rounds. In this model, the main complexity measure of an algorithm which computes some task is how many rounds it took for the nodes to compute the output. We usually ignore the local computation that the nodes perform, although for our algorithms, and for much of the literature, this is not an issue. Messages can be either unbounded in size — this is the so-called LOCAL model — or restricted to O(log n) bits, which is the CONGEST model. In our work we also focus on O(log n)-bit messages. In this work, we study a highly dynamic setting where unboundedly many topology changes may occur per round. This can be seen as the opposite extreme to the static setting discussed above, in which the network remains completely static. An additional motivation for us, besides studying this model for its own sake, is that we think it's a necessary step towards studying more intricate dynamic settings: in real-world networks, maybe the changes are not so frequent, but maybe it's also unrealistic to assume that the network is completely static. By looking at what's possible in the highly dynamic setting, where the adversary is very powerful, it would become easier to study more realistic, intermediate dynamic settings. While working in a dynamic setting, one hope is to fix the solution faster than computing it from scratch. This is also relevant in the distributed setting: suppose we have some graph and we compute some solution; now we introduce a change in the graph, and we want to fix the solution faster than recomputing it from scratch with a static CONGEST algorithm. In some distributed dynamic settings this may be possible, although in our highly dynamic setting static recomputation is usually not applicable, because the topology may change every round. The dynamic setting is also studied in the centralized setting, where the computation happens on one computer. There, if your input is some graph which experiences some topology changes, you can deal with those changes incrementally and always maintain a global solution. Because we work in the distributed setting, this is more problematic, since changes also affect communication: if an edge is deleted from the network, then the nodes which were the endpoints of this edge can no longer communicate over it. Thus maintaining a global solution is hopeless — unless, of course, your algorithm finishes in zero rounds. Hence we need to be more careful when we refer to notions such as fixing a solution. And indeed, because finding a global solution is impossible.
So then we try to look at tasks for which we have some notion of a local solution, and in this work we focus on locally checkable labelings, or LCLs for short. This notion was introduced by Naor and Stockmeyer in '95. Roughly speaking, we say that a labeling problem is an LCL if the validity of a labeling can be checked by all the nodes locally: if all of them agree on the consistency of the labeling, the labeling is globally correct, and otherwise some node detects that the labeling is not correct. As an example of an LCL we can consider maximal matching. In the drawing, the red edges correspond to the matched edges, and you can see that in this graph we indeed have a maximal matching. If we look at the local view of each node, it can be convinced that this is indeed a maximal matching; all nodes will be convinced, and thus it is a correct maximal matching. It's also easy to see that if the matching were not maximal, then at least one of the nodes would detect this, because it would be unmatched and would have an unmatched neighbor. In this work we would like to see whether we can locally fix an LCL, and by that we mean that some topology changes occur in the network and we would like to fix the labeling by only looking at the labeling of a node and its neighbors, applying the relevant correction only in that neighborhood. So the question is whether we can always locally fix an LCL, and, while we're at it, we also want to do that with a reasonable round complexity. As it turns out, the answer to this question is no. Consider the LCL by the name of almost-sinkless orientation, which just requires an orientation of the edges such that all sinks have degree one — and by sinks I mean nodes which don't have outgoing edges according to the orientation of the edges. You can easily be convinced that this is indeed an LCL, and for example you can see in this graph that the middle node can locally check the consistency of the edges that it sees. Now suppose we have an additional graph component, like so, and also assume that it has linearly many nodes. Suppose that we connect the two components by an edge, like so. This is just a single topology change, but it requires all the edges of the component to change their orientation, so surely this LCL cannot be fixed locally, because a single topology change required edges which are far away to change their orientation. Now suppose we delete this edge and add an additional edge from the other side. Now, not only are we convinced that this LCL cannot be locally fixed, it cannot even be fixed with a reasonable amortized round complexity. Thus the answer to this question is no. So the next question becomes when can we locally fix, or more broadly, which problems can be locally fixed. Variants of these questions were studied in the literature, and I'll give a partial survey. König and Wattenhofer in 2014 showed how to fix various problems in a single round using large messages. Censor-Hillel, Haramaty and Karnin showed in 2016 that MIS can be fixed in O(1) rounds using small messages, and their algorithm is randomized. Then the line of work of Assadi, Onak, Schieber and Solomon from 2019, and related works by Du and Zhang and by Gupta and Khan, studies the sequential setting, but as a byproduct they obtain, in the distributed setting, a way to fix MIS in O(1) rounds using small messages, and the algorithm is deterministic. This can be seen as an improvement over the work of Censor-Hillel, Haramaty and Karnin.
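To illustrate local checkability, here is a tiny Python sketch of the local predicate for maximal matching (a centralized simulation of what each node would check in its one-hop neighborhood; names are illustrative): the labeling is globally correct exactly when every node's local check passes.

def check_maximal_matching(adj, partner):
    # partner[v] is the node v claims to be matched to, or None
    def local_check(v):
        p = partner.get(v)
        if p is not None:
            # consistency: my partner is a neighbor and points back at me
            return p in adj[v] and partner.get(p) == v
        # maximality: an unmatched node must have no unmatched neighbor
        return all(partner.get(w) is not None for w in adj[v])
    return {v: local_check(v) for v in adj}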
They assume, however, an oblivious adversary which is not aware of the randomness of the nodes. Also, Solomon in 2016 showed how to fix matching in O(1) rounds, and Parter, Peleg and Solomon in 2016 showed how to fix various tasks in O(1) rounds. Now, what's common to all these works is that they assume that only a single topology change occurs at a time. That is, in these works you have some graph and you compute a solution; then a single topology change occurs, the algorithm is given time to stabilize, and only then do new changes occur. In this work we study the highly dynamic setting, and actually the only previous work we're aware of which also deals with the highly dynamic setting is the work of Bamberger, Kuhn and Maus from 2018, who show how to fix various such tasks with worst-case O(log n) complexity, using polylogarithmic-size messages and randomization. We also have a subsequent work in this model where we study listing subgraphs. Now, moving on to our results: we have algorithms which are deterministic, have O(1) amortized round complexity, and use only O(log n)-bit messages. If we consider either edge or node insertions and deletions as a single topology change, then our amortization can be shown to work for maximal matching, (degree+1)-coloring, or MIS — with the caveat that for MIS we need to start from an empty graph for the amortization to work, while for all the other problems we consider you can start from any graph and any valid solution. If we no longer consider node deletions as a single topology change, then we can also handle a 2-approximation of minimum weight vertex cover, and if we only consider edge insertions and deletions as a single topology change, then we can also give the (degree+1)-coloring. So we have obtained algorithms for various important tasks, though our framework can be shown to extend to other locally fixable labelings; if you are interested, in the full version of the paper we give a somewhat complicated combinatorial characterization of which labelings we can locally fix, although this is just a sufficient condition and is not exhaustive. We also have an intermediate-solution guarantee, by which we mean that if the labeling of some neighborhood N(v) is fixed by the algorithm, then this labeling will remain correct unless new topology changes occur. We also obtain an O(n) worst-case guarantee, by which we mean that there is no starvation: if the labeling of some neighborhood becomes dirty and is pending to be fixed, and no new changes occur for this neighborhood, then it takes at most O(n) rounds for this particular neighborhood to get its labeling fixed. We believe that this can be improved, maybe along the lines of the work of Bamberger, Kuhn and Maus, which obtained O(log n) worst-case complexity. Now I'll give an outline of our algorithm. The algorithm is chopped into epochs of five rounds, and we maintain the invariant that at the end of epoch i, which is round 5i+4, nodes which were fixed by the algorithm are correct: their label is correct with respect to the graph topology at the start of epoch i, round 5i. The way we achieve that is via a two-stage fixing. First, when a topology change occurs at a node, this node reverts to a default label, and at the start of the epoch it sends this label to its neighbors.
The actual fix happens at the end of an epoch, where a node gets the label of itself, and possibly of its neighbors, fixed. Of course, if a topology change occurred during the epoch for nodes which are designated for a fix, then they simply abort; this does not harm the amortization, because we can account for the additional delay by charging it to the new topology change which caused the abort. We also require that fixed nodes be at least four hops apart, for consistency, and we obtain this by using three rounds to propagate topology-change timestamps. Such timestamps may be unbounded in size, but we remedy this by hashing them into O(log n) bits using a technique from prior work. Next, I would like to give you an illustration of our algorithm on the problem of matching. We have a graph like so, and the red edges correspond to the maximal matching. Suppose that we delete the edges u5–w and v–u4. The relevant nodes revert to a default label, which in the case of maximal matching is just being unmatched. The red circles over the nodes indicate that these nodes are dirty and haven't yet obtained the final correction, which occurs at the end of an epoch. In rounds one to three, timestamps are propagated. Now several unmatched nodes have unmatched neighbors and should be re-matched, but dirty nodes that are close to each other cannot all be fixed in the same epoch. So v wins: in its 3-hop neighborhood it has the minimal timestamp, so only this node gets fixed, while u4 and w remain dirty. Also u5 gets fixed, because it has the minimal timestamp in its own 3-hop neighborhood. In round five, there are no new topology changes. In rounds six to eight, again timestamps are propagated, but suppose that in round nine a new topology change occurs: the edge w–u3 gets deleted, so w aborts; u4 still gets fixed. As you can see, this does not harm the amortization, because we pay for the delay out of the budget of the new topology change, and if no new topology changes occur for the dirty nodes, then eventually we converge to a legal maximal matching. As another note, here we maintain the fact that the nodes which are designated to be fixed are at least four hops apart, but specifically for maximal matching you could make do with just three hops. Finally, I would like to briefly discuss the algorithm we have for maximal independent set, which is the most intricate case that we study. Suppose we have a maximal independent set, as in the drawing, where the red nodes correspond to those in the MIS, and suppose we connect these two nodes by an edge; now one of them will have to leave the MIS. As you can see, by applying the amortization naively we will no longer get O(1) complexity, because now many nodes may need to enter the MIS; if we naively mark all neighbors of such a node as dirty, then this will blow up the amortized complexity that we want. The route we take instead is for the fixed node to indicate to some of its neighbors to become dirty. Here comes into play the fact that we start from an empty graph: if we start from an empty graph, then all nodes start in the MIS. Observe that if a node re-enters the MIS — that means it was at some time kicked out of the MIS and then has to join it again — then this can be blamed on the topology change which kicked it out in the first place.
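The selection rule used in this walk-through can be sketched as follows — a simplified, sequential Python simulation of one end-of-epoch fixing step for maximal matching (illustrative names and data structures; the actual algorithm is distributed, works in 5-round epochs, and hashes the timestamps to O(log n) bits): among the dirty nodes, only those whose timestamp is minimal in their 3-hop neighborhood get fixed, and a fixed unmatched node matches itself to some unmatched neighbor if one exists.

from collections import deque

def three_hop_ball(adj, v):
    dist = {v: 0}
    queue = deque([v])
    while queue:
        u = queue.popleft()
        if dist[u] == 3:
            continue
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return set(dist)

def end_of_epoch_fix(adj, partner, dirty, ts):
    # ts[v] = (timestamp of the change that dirtied v, v), so ties are broken by id
    selected = {v for v in dirty
                if all(ts[v] <= ts[u] for u in three_hop_ball(adj, v) & dirty)}
    for v in selected:                 # selected nodes are pairwise more than 3 hops apart
        if partner.get(v) is None:
            for w in adj[v]:
                if partner.get(w) is None:
                    partner[v], partner[w] = w, v
                    break
        dirty.discard(v)
    return selected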
This ties into the complexity of the naive greedy distributed MIS algorithm in the static setting, where you just add at least one node to the MIS every round, so the running time of that algorithm is just linear in the size of the MIS — which may be much smaller than n, the number of nodes. So this is the main idea. To summarize: in this work we obtain deterministic algorithms in a highly dynamic setting, with O(1) amortized running time, using O(log n)-bit messages. As for open questions: for example, what additional tasks can be locally fixed, and can we deal with tasks of larger radius — in this work we only consider tasks of radius one; can we improve the worst-case complexity; and of course, maybe we can also consider other dynamic settings. Thank you.
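To illustrate the charging idea, here is a small, purely sequential Python sketch of maintaining an MIS under single edge insertions and deletions (illustrative only; the paper's algorithm is distributed, deterministic, and handles arbitrarily many concurrent changes with different bookkeeping): a node leaves the MIS only when an insertion violates independence, and every re-entry of a node can be blamed on the change that previously forced it out.

def insert_edge(adj, in_mis, u, v):
    adj[u].add(v)
    adj[v].add(u)
    if in_mis[u] and in_mis[v]:
        # independence violated: one endpoint leaves; blame this insertion
        in_mis[v] = False
        # neighbors of v that lost their only MIS neighbor may now join
        for w in adj[v]:
            if not in_mis[w] and not any(in_mis[x] for x in adj[w]):
                in_mis[w] = True

def delete_edge(adj, in_mis, u, v):
    adj[u].discard(v)
    adj[v].discard(u)
    # an endpoint that lost its only MIS neighbor joins the MIS
    for x in (u, v):
        if not in_mis[x] and not any(in_mis[w] for w in adj[x]):
            in_mis[x] = True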
|
This paper provides an algorithmic framework for obtaining fast distributed algorithms for a highly-dynamic setting, in which \emph{arbitrarily many} edge changes may occur in each round. Our algorithm significantly improves upon prior work in its combination of (1) having an O(1) amortized time complexity, (2) using only O(\log n)-bit messages, (3) not posing any restrictions on the dynamic behavior of the environment, (4) being deterministic, (5) having strong guarantees for intermediate solutions, and (6) being applicable for a wide family of tasks. The tasks for which we deduce such an algorithm are maximal matching, (degree+1)-coloring, 2-approximation for minimum weight vertex cover, and maximal independent set (which is the most subtle case). For some of these tasks, node insertions can also be among the allowed topology changes, and for some of them also abrupt node deletions.
|
10.5446/52879 (DOI)
|
Hello, I'm Sergio Rajsbaum. I will present joint work with Hagit Attiya and Armando Castañeda, titled Locally Solvable Tasks and the Limitations of Valency Arguments. Already in the early times of PODC, in 1989, Nancy Lynch presented a paper describing a hundred impossibility proofs for distributed computing, and she observed that the limitations imposed by local knowledge are actually behind all of the arguments that she described. That is, to prove an impossibility result, one needs to analyze the indistinguishability structure imposed by the local knowledge. There have been many impossibility proofs for consensus since then, but actually they are of two styles, a local style and a global style. The local style is the classic FLP asynchronous one-failure impossibility proof, and the global style was used to show that in a synchronous system one needs t+1 rounds to solve consensus if t processes can crash. The local proof style, which started with FLP, has since been used many times for various tasks like approximate agreement, randomized consensus, concurrent data structures, and so on, in different models of computation like message passing, shared memory — both with read-write operations and with more powerful communication primitives — in synchronous, asynchronous, partially synchronous models, and so on. Remember how the local FLP-style proof works. At each stage, one considers a single configuration of a hypothetical protocol solving a task — consensus in the original FLP case — that holds some property. Then an indistinguishability analysis is performed on the successors of this configuration, to derive some local structural properties of the successors determined by the model of computation. Then one uses these model properties, plus the task properties, to pick a successor configuration based on the valencies, the future decisions, of that configuration. The invariant — in the FLP case, bivalency — is preserved from one configuration to the next, and so on, and this prevents a decision from being reached, because a bivalent configuration cannot be final. Instead, in the global proof style, what we do is to consider right from the start a set of final configurations of the hypothetical protocol, the set P of final configurations. One does the indistinguishability analysis right there to understand the structure of that set of configurations P — typically, one shows that these configurations are connected — and then, using this global property, connectivity, plus the task specification, consensus, one finds that there has to be a configuration in P where decisions are incorrect. This global proof style, which started to be used early on, became very successful in 1993, with the three famous STOC papers that show that there is no wait-free read-write shared-memory protocol for k-set agreement. Remember that k-set agreement is a generalization of consensus where at most k different values can be decided, so consensus is the case where k equals 1. That was the first time that topological properties other than just graph connectivity were used to derive an impossibility result, and since then the area of using topology to derive impossibilities has produced many results for many other tasks. So if we compare the local style versus the global style, the global style is very powerful because it can be used to determine solvability and complexity of any task as a function of the topological properties of the final configurations P of the protocol.
But the difficulty is how to identify a set of final configurations P where the impossibility is forced; somehow you need to assume a canonical universal protocol that terminates after a bounded number of steps. In contrast, the local approach is very flexible, because at each moment one needs to analyze only interactions of pending operations in a given configuration, and that's why this approach has been used in many different models. For example, consider a model where processes communicate by reading and writing a shared memory and also by applying test-and-set operations. How do you define what a canonical protocol is in this system? The local approach doesn't need to define a canonical protocol; it just has to analyze the possible interactions among these operations. So the questions we ask are the following. The local style is very flexible, so we would like to use it for any task; but are local structural properties of the successors of a configuration sufficient to prove an impossibility for any task? Or maybe there are tasks for which they are not sufficient, like set agreement and renaming — nobody has yet found a local impossibility proof for set agreement or renaming. And if global structural analysis is unavoidable for some tasks, can we explain what the reasons for this are? In fact, the very question of what a local impossibility proof is needs to be answered. So this is precisely what we do: first, we provide a formal definition of the notion of a local impossibility proof, and based on this notion, we prove that nobody has found local impossibility proofs for set agreement and renaming because it is impossible to do so. I will concentrate a bit more, for this talk, on the case of set agreement, but in the paper we also derive the impossibility for renaming. To do so, we need to fix a model of computation. We wanted a wait-free model where processes communicate by reading and writing a shared memory, and we would also like the model to have a round structure — a uniform, regular structure that can be easily analyzed. We need it to be round-based because we want to consider the successors of a configuration after some number of rounds. The model that we selected is the iterated immediate snapshot model, which has these properties, and we show that there is no local impossibility proof for (n−1)-set agreement, nor for (2n−2)-renaming, in the IIS model. The other paper that we know of that considers these questions was published in STOC of last year by Alistarh, Aspnes, Ellen, Gelashvili and Zhu. They define a different notion of local impossibility proof through a game between a prover and a verifier, in a model that is a non-uniform version of our iterated immediate snapshot model, meaning that processes can take different numbers of steps; this leads to subdivisions which are not uniform. They show that in this game there is no local impossibility proof for k-set agreement. In our approach, we can also prove the same for renaming — that's one difference. The other difference is that dealing with uniform subdivisions, with executions where processes always take the same number of steps, simplifies both the technical presentation and the definitions. And this is what is usually done when one proves a task impossibility result, because we can always assume that the protocols terminate after the same number of rounds, provided that the number of possible inputs to the task is finite, such as in set agreement.
To get a little bit more detail into the model that we use: we assume a full-information, wait-free protocol that terminates after R rounds. Each process gets an input value to the task — set agreement or renaming — and executes R rounds, and in each round, each process executes an immediate snapshot on a fresh memory. This is not very important for this talk, but I will illustrate it with the figures now. What is important to remember is that this model is equivalent to the standard wait-free read-write shared-memory model with respect to task solvability: a task is solvable in the iterated immediate snapshot model if and only if it is solvable in the usual wait-free model. We need to consider configurations, that is, global states; a configuration is simply a set of local states, which we call a simplex, following the traditional topology notation. In the case of three processes, a configuration consists of three local states, represented by a triangle, where the vertices of the triangle are the local states of each of the three processes, drawn in black, gray and white in this example. One thing that is nice about this view is that you can consider partial configurations: sigma prime consists of the local states of only the black and the gray process. You can then consider many configurations, sets of configurations, which we call a simplicial complex. And if you have two configurations where the local states of, say, the gray and the white process are the same, then you identify the two configurations, and you can draw them nicely as two triangles sharing an edge. This is the example of the initial configurations for two-process binary consensus, where you have a black and a white process. The successors after R rounds are illustrated here: when you have an initial simplex sigma, the successors after one round are three configurations, depending on the interleaving — who takes the first immediate snapshot — then there are the successors after two rounds, and so on. You can also consider successors of partial configurations, which in this example would be solo executions, where a process sees only itself. For three processes, these pictures look like this: you have a simplex, and you can consider the successors where only the black and the white processes see themselves, or you can consider all the successors. Here you can see, from one round, how the successors after two rounds are obtained for each one of the triangles. So the idea is the following. Consider an R-round full-information protocol and the successor operator after one round; after i rounds you can consider all the configurations extending a given configuration. For a j-round simplex sigma prime, you can consider the successors after R−j rounds — these are the final configurations in the future of sigma prime — and the important notion is the valency of sigma prime, which is the set consisting of all decisions in the final configurations extending sigma prime.
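For readers who want to play with the combinatorics, here is a small Python sketch that enumerates the one-round immediate-snapshot executions (as ordered set partitions of the processes) and the resulting views — a centralized illustration of the uniform subdivision, with illustrative names; for 2 processes it yields the 3 successor configurations shown in the figure, and for 3 processes the 13 triangles of the subdivided triangle.

from itertools import combinations

def ordered_partitions(procs):
    # all ways to split procs into an ordered sequence of non-empty blocks
    procs = sorted(procs)
    if not procs:
        yield []
        return
    for r in range(1, len(procs) + 1):
        for block in combinations(procs, r):
            rest = [p for p in procs if p not in block]
            for tail in ordered_partitions(rest):
                yield [set(block)] + tail

def one_round_configurations(procs):
    configs = []
    for part in ordered_partitions(procs):
        seen, view = set(), {}
        for block in part:
            seen |= block                  # this block writes, then snapshots together
            for p in block:
                view[p] = frozenset(seen)  # p sees everyone who has written so far
        configs.append(view)
    return configs

# len(one_round_configurations(['p', 'q'])) == 3
# len(one_round_configurations(['p', 'q', 'r'])) == 13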
So a local proof consists of picking a sequence of simplexes starting with sigma_0. Each time we have selected a simplex sigma_i, we analyze the successors after one round of sigma_i, and based on the valencies of those simplexes, we pick a new simplex sigma_{i+1} in the next round. It continues in this way until the last round, R−1: once we have selected the simplex sigma_{R−1}, the protocol is supposed to reveal correct decisions only in this region. For consensus, in fact, we know that the protocol is unable to reveal correct decisions — this is the FLP result — but we show that for set agreement the protocol actually can reveal correct decisions, provided the valencies are properly defined. Here is the illustration of the FLP impossibility result for two processes: you can select the sequence sigma_0, sigma_1, and after two rounds, if the protocol has to reveal its decisions in this region, there will be at least one configuration where the decisions are wrong — processes decide different values. But can you do the same for set agreement? In this example, three processes should decide at most two different values; can you pick a sequence of simplexes so that in the end the protocol must reveal a triangle with three different decisions in that region? What we show is actually that there is no local proof for set agreement, which means that, since set agreement is impossible, there must be mistakes — there must be a triangle with three different decision values — but the protocol can always hide its mistakes locally: no matter which sequence of simplexes sigma_i is selected, the protocol is able to reveal correct decisions in the region that you selected. This is what is illustrated in this figure, where after two rounds we select a simplex sigma prime, and then the protocol reveals decisions which are consistent with the valencies of sigma prime, and all of the triangles inside that region decide on at most two different values. There is an error, because set agreement is impossible, but the error is always pushed outside of the region that we are locally inspecting. To conclude: we have provided a simple formalization of local-style proofs in the wait-free, round-based iterated immediate snapshot model. We have defined the new notion of valency tasks, which capture what the protocol has to do to reveal decisions locally, and local solvability means that the protocol is actually able to reveal decisions that are consistent with the valencies. We showed that there are locally solvable valency tasks for set agreement, and this implies that there is no local impossibility proof for set agreement. We do the same for (2n−2)-renaming — technically, we do it through weak symmetry breaking, which is a problem equivalent to (2n−2)-renaming — and this implies that there is no local impossibility proof for renaming. In future work we would like to do the same for other tasks besides renaming and set agreement, in other wait-free shared-memory models, and of course in models that are not wait-free, like the t-resilient model, where it is not clear how to define bounded termination — in fact, models where it is not even clear how to define a round structure. Thanks for your attention. Thank you.
|
An elegant strategy for proving impossibility results in distributed computing was introduced in the celebrated FLP consensus impossibility proof. This strategy is *local* in nature, as at each stage, one configuration of a hypothetical protocol for consensus is considered, together with future valencies of possible extensions. This proof strategy has been used in numerous situations related to *consensus*, leading one to wonder why it has not been used in impossibility results of the two other well-known tasks: *set agreement* and *renaming*. This paper provides an explanation of why the proof strategies for showing the impossibility of these tasks have a global nature. We show that *a protocol can always solve such tasks locally*, in the following sense. Given a configuration and all its future valencies, if a single successor configuration is selected, then the protocol can reveal all decisions in this branch of executions, satisfying the task specification. This result is shown for both set agreement and renaming.
|
10.5446/52880 (DOI)
|
In this presentation, we'll talk about rational behaviors in committee-based blockchains. This is joint work with Bruno Biais, Maria Potop-Butucaru and Sara Tucci-Piergiovanni. In a blockchain system we have multiple players — multiple participants, nodes — that communicate by exchanging messages with one another, and the goal is to build a distributed ledger. There is no central authority, but all the players in the system should have the exact same blockchain, the same ledger, locally. The ledger should be tamper-resistant — information already in the ledger should be hard to modify or revert — and it should be built in an append-only manner. Basically, the structure of a blockchain, from the point of view of one player, is just a chain of blocks, where each block is linked to the previous one by its hash, which is basically its identifier, such that if block 2 is modified even slightly, its hash will totally change. That is what guarantees the tamper-resistance property. Each block contains transactions, which depend on the application that one wants to implement, and any new block that is added to the blockchain should be coherent with the whole blockchain that is already there. To add a new block, one can think about using consensus. A consensus algorithm is an algorithm that satisfies the three following properties: termination, where every correct player — players that follow and execute the algorithm — eventually decides a value, so they decide on a block at some point; validity, which states that a decided value should be valid with respect to a given predicate; and agreement, which states that two correct players that decide should decide the exact same value, the exact same block. How can one use consensus to build a blockchain? We can do it as follows. First, the genesis block basically sets up the whole system; that block will also designate, select, a committee, which is just a subset of all the players in the system. That committee will run a consensus algorithm to produce a block; the block will be sent to the whole network, and a new committee will be selected with respect to the whole history of the blockchain so far. That committee again will run a consensus instance, produce a new block, etc. This kind of blockchain already exists out there and is even used: for example, we have Tendermint, which is implemented and used in a lot of applications, and HotStuff, which is at the core of Libra — excuse me, Diem — the Facebook blockchain project. One important thing in this kind of blockchain is that once a block is produced, the committee that produced that block is eventually rewarded. These blockchains work as follows when we focus on one committee. For producing one block, they work in multiple rounds, and each round is an attempt to produce a block, to accept the block that is proposed. Focusing on just round one, which is basically the same as the others: there is a propose phase, where a block is proposed to the whole committee, and then a vote phase, where each member of the committee votes or not for the proposed block. If there are sufficiently many votes for the proposed block — at least ν, which is our threshold here — then we consider that the block is produced. If the block is not produced, we go to the next round, we have a new propose phase, etc. These kinds of blockchains have been analyzed in the literature since the first time they were proposed, in 2014, by Kwon.
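As a tiny illustration of the hash-linking that gives tamper-resistance, here is a minimal Python sketch (illustrative only — real committee-based blockchains store much more per block, such as signatures and consensus certificates):

import hashlib
import json

def block_hash(block):
    # the block's identifier: a hash of its content, including the previous block's hash
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain, transactions):
    prev = block_hash(chain[-1]) if chain else "genesis"
    block = {"prev": prev, "txs": transactions}
    chain.append(block)
    return block

def verify_chain(chain):
    # modifying any earlier block changes its hash and breaks every later link
    return all(chain[i]["prev"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

# chain = []; append_block(chain, ["tx1"]); append_block(chain, ["tx2"])
# verify_chain(chain) -> True; after chain[0]["txs"] = ["forged"], verify_chain(chain) -> False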
Then we had some analyses of Tendermint, which was the first such algorithm; then HotStuff was proposed last year at PODC, and there are many new proposals for this kind of algorithm. They have been studied formally, and that's nice, but the thing is that they were always analyzed considering only two types of players: the correct ones, that basically always follow the algorithm, and the Byzantine ones, that can behave arbitrarily and can even model an adversarial behavior. But what about a third type of player, which we can call rational players — some people like to call them selfish players — players that want to maximize their own gain and that can deviate from time to time? That is the goal of this paper. To present our results, first our model. We consider only one committee, so what is happening inside only one committee. We assume that we have an ordered set of n players, and each player knows its index in that set. That is not a huge assumption, because that is how the current consensus algorithms for blockchains are used. We also assume synchronous communication, and messages cannot be lost: when a message is sent by a player, it is received by all the players at the end of the corresponding phase. All our players are rational, so they try to maximize their expected gain, and what we try to compute is all the different Nash equilibria: situations where no player can individually improve its own gain. In each round we have two phases, the propose phase and the vote phase. In the propose phase, there is one designated player that proposes a block; first of all, assume that the proposal is always signed. That block is proposed and sent to the rest of the committee, and in the vote phase each member, each player in that committee, may send a vote to all the others. Then everyone collects the votes that were sent and counts whether there are enough votes to decide. If there are enough votes, then they decide; otherwise they go to the next round, with a new propose phase and a new proposal, and they continue until they reach a decision. Here we can see the different actions that are taken, and we have costs for executing the protocol: basically, at each round the players can send a message or not, and if they send a message, they pay a cost c_send, which is basically the cost of sending a message. And if a block is produced, meaning that it received at least ν votes, the players are rewarded. For the reward we study two cases: first, the case where all the players in the committee are rewarded once a block is produced, no matter whether they voted for it or not; and second, the case where, when the block is produced, we only reward those that voted for it. The objective of all the players is to maximize their expected gain, and all they can do here is decide whether or not to send a message. We ask ourselves the question: are the consensus properties guaranteed in the presence of rational players? We can rephrase it by asking whether we have equilibria that satisfy consensus. Recall that ν is the minimum number of votes needed to consider a block as produced. We consider here the case where all the players are rewarded once a block is produced. Our results are the following.
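The round structure that this analysis works with can be sketched as follows — a toy, synchronous, centralized Python simulation with illustrative names, where each player's strategy decides whether to pay c_send and vote:

def run_round(committee, block, strategies, nu):
    # strategies[p](block) returns True if player p sends a vote this round
    voters = {p for p in committee if strategies[p](block)}
    produced = len(voters) >= nu       # the block needs at least nu votes
    return produced, voters

def run_consensus(committee, proposals, strategies, nu, max_rounds=10):
    # each round is one attempt to produce a block from that round's proposal
    for r in range(max_rounds):
        produced, voters = run_round(committee, proposals[r], strategies, nu)
        if produced:
            return proposals[r], voters
    return None, set()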
If we require only one vote to consider a block as produced, then in equilibrium we have exactly one player that votes, and the others do nothing. In this equilibrium we have exactly one vote, which is enough; the block is produced, and since we assume here that all blocks are always valid, we have the consensus properties. Now, when we need strictly more than one vote to consider a block as produced, we have two types of equilibria. In the first equilibrium, no player votes — basically because they think that the others will do nothing, and so they prefer not to vote either, because one vote is not enough to produce a block — so we do not have termination. But in the second type of equilibrium, we have exactly ν players that vote. In that case we have enough votes to consider a block as produced — exactly the number that is enough — and since the block is always valid, the block is produced, and so we have the consensus properties. This is for the case where we reward all the committee members once the block is produced. Now, when we go to the second type of rewarding, where we reward only the players that voted for the block that is produced, we have roughly the same kinds of equilibria. First, in the case where we need only one vote — at least one vote — to consider a block as produced, we have basically the same equilibrium, except that here all the players vote for the proposed block, because if they do not vote, they will not be rewarded; so they have an incentive to vote. And when we need more than one vote to consider a block as produced, we basically have the same types of equilibria as before. Either no player votes, because they think that the others will do nothing, so they have no incentive to vote; or enough players vote — and here, enough is in fact all of them: if they think that enough others will vote, they all vote as well, because if they do not vote, they do not get the reward. One interesting thing is that in this setting, where we reward only the ones that voted, more messages are sent whenever at least one message is sent, which is basically the only difference with the previous setting. Now, we can consider a slightly more realistic scenario where some invalid blocks can be proposed. To do that, we model it as a trembling-hand effect, which basically means that when the proposer wants to propose a block, with a really small probability that block can be invalid. Everyone knows that such an event can happen, but no one knows in advance whether the block is valid or not. So we can now draw the action space of everyone. Since the proposal can be valid or not, players can decide to check the validity or not, and that is the only way to know whether the block is valid: checking the validity. If they do not check, they have no validity information. And basically, in every scenario, they can decide to vote or not for the proposal, whether they know the block's validity or not. We then do a similar analysis as before. Before doing that, let us just recall the costs and highlight the new cost that we have. Previously, we only had the cost of sending a message. Now, checking the validity also has a cost, which is c_check. If enough votes — at least ν, again — are sent for a proposed block, then we consider the block as produced, and we reward our players with R.
Again, we will consider the two cases where we either reward all the players or only the ones that vote. We assume that when an invalid block is produced, all the players incur a cost of minus kappa, and we assume that kappa is large compared to the other parameters, to guarantee that this cost is high in general. Each player now decides whether to check and whether to send a vote; they have two kinds of actions they can take. Their goal is again to maximize their expected gain, basically by weighing the rewards against the costs of their actions. Our results are the following. If we need only one vote to consider a block as produced and we reward all the players once a block is produced, then we have one equilibrium. In that equilibrium, exactly one player, say the proposer, checks the validity of the proposal and votes only if that proposal is valid; the others do nothing. In this equilibrium we do have consensus, because only valid blocks get a vote and are produced, while invalid blocks get no vote. That is a really nice equilibrium. Now, when we need strictly more than one vote to consider a block as produced, we have the following equilibria, and they resemble the ones we had when only valid blocks could be proposed. Either no player votes or checks the proposal's validity, so no block is produced and we do not have termination, because every player thinks that the others will do nothing and that one vote will not be enough to produce a block. Or, in the second type of equilibrium, exactly one player checks the validity of the proposal and votes only if that proposal is valid, nu minus one other players vote without checking the validity, and the rest of the players do nothing. Basically, here we always have nu minus one votes for any proposed block, valid or invalid, plus one additional vote only when the block is valid. So only valid blocks reach the threshold nu and are produced, and we have consensus: only valid blocks are produced and the invalid ones are not. Then we can look at what happens when we reward only the voters. Here the equilibria are not just slightly different; there is a big difference with the previous ones. First, we have a similar equilibrium as before when we require only one vote to consider the block as produced: all the players check the validity of the proposal and vote only if the block is valid, and we have consensus. In this case they all want to vote to get the reward, but they do not want to be the reason an invalid block is accepted; so if they think the others will check, they check as well. But we now have a new equilibrium in which all the players vote every time, without even checking the validity. Then we have termination directly, because any kind of block will receive n votes, but we cannot ensure validity: if an invalid block is proposed, that block will be produced, and validity is violated. That is the biggest difference with the previous equilibria. And we have the same thing when we require more than one vote to consider a block as produced. So here, either no player votes or checks the validity, and we do not have termination.
Or, in the second equilibrium, everyone votes without even checking the validity of the proposal; all blocks are accepted and produced, but we cannot ensure validity. And we have another type of equilibrium, as in the case where we reward all of them: we always have nu minus one players that vote, so every proposed block gets nu minus one votes, and in the case of valid blocks we get n minus nu plus one additional votes, so that when the block is valid, everyone votes and everyone is rewarded. But if the block is invalid, we only have nu minus one votes, and the block is not accepted. These are our results. To conclude: in this paper we analyzed rational behavior in committee-based blockchains, under two different reward schemes. We assumed that all the players in the committee are rational and that communication is synchronous. We found that in all the settings we studied, whether only valid blocks can be proposed or invalid blocks appear from time to time, there always exist good equilibria. However, these good equilibria are not unique: we may have coordination failures or free riding that violate the consensus properties in the end. Moreover, our analysis shows that one reward scheme seems to be better than the other. Basically, rewarding all the players once a block is produced is better than rewarding only the ones that voted for the block. Why? First, in the good equilibria fewer messages are sent, which is also much better energy-consumption-wise. Moreover, and more interestingly, when we reward all the players once a block is produced and invalid blocks can be proposed, there is no equilibrium that violates the validity property: no invalid block can be produced. Whereas when we reward only the voters, those that voted for a produced block, we do have an equilibrium that violates the validity property. So rewarding all the players seems to be an interesting reward scheme that should be investigated further, with fewer hypotheses on the model; more work needs to be done. Thank you.
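As a closing illustration of that last point, that rewarding only the voters admits an equilibrium violating validity, here is a hypothetical back-of-the-envelope computation. All parameter values are made up and the payoff expressions are my own simplification of the setting described above, not the paper's model.

```python
# Hypothetical sketch of the validity-violating equilibrium under the
# "reward only the voters" scheme with a trembling proposer. Illustrative only.

p       = 0.01   # probability that the proposal is invalid (trembling hand)
reward  = 10.0   # reward per voter when a block is produced
c_send  = 1.0    # cost of sending a vote
c_check = 1.0    # cost of checking validity
kappa   = 100.0  # cost incurred by everyone when an invalid block is produced

# Profile: all n players vote blindly, so every block (valid or not) is produced
# and everyone pays the expected penalty p * kappa regardless of its own action.
gain_blind_vote  = -c_send + reward - p * kappa
gain_abstain     = 0.0 - p * kappa                        # non-voters get no reward
gain_check_first = -c_check + (1 - p) * (reward - c_send) - p * kappa

print(gain_blind_vote, gain_abstain, gain_check_first)
# With these numbers, blindly voting beats both of these unilateral deviations,
# so the profile is stable even though invalid blocks are produced with probability p.
```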
|
We study the rational behaviors of participants in committee-based blockchains. Committee-based blockchains rely on specific blockchain consensus that must be guaranteed in the presence of rational participants. We consider a simplified blockchain consensus algorithm based on existing or proposed committee-based blockchains that encapsulates the main actions of the participants: \emph{voting} for a block, and \emph{checking its validity}. Knowing that those actions have costs, and achieving the consensus gives rewards to committee members, we study using game theory how strategic participants behave while trying to maximize their gains. We consider different reward schemes, and found that in each setting, there exist equilibria where blockchain consensus is guaranteed; in some settings however, there can be coordination failures hindering consensus. Moreover, we study equilibria with trembling participants, which is a novelty in the context of committee-based blockchains. Trembling participants are rational participants that can do unintended actions with a low probability. We found that in the presence of trembling participants, there exist equilibria where blockchain consensus is guaranteed; however, when only voters are rewarded, there also exist equilibria where validity can be violated.
|
10.5446/52881 (DOI)
|
Hello, I will be presenting a work by Ittai Abraham and myself, Gilad Stern, about an information-theoretic approach to the HotStuff protocol. First, a little bit about the task that we are trying to solve and the network and computation model. We want to solve the task of Byzantine consensus in a partially synchronous network. We have n parties, f of them can be Byzantine, and we assume that f is strictly less than a third of the total number of parties. The network is partially synchronous: it starts off completely asynchronous, where any message can be delayed any finite amount of time, and then at some point in time, called the GST or global stabilization time, the network becomes synchronous with some known bound delta on the maximum message delay. It is important to note that parties do not know when GST is going to occur, or even whether it has already occurred sometime in the past. In addition, parties may crash and then be rebooted any number of times while the network is still asynchronous, and if that happens, only information that they store in persistent memory is saved after a reboot. We want to achieve information-theoretic security, so we do not assume our adversary is computationally bounded. As a standard setting, we only assume that we have authenticated channels that the adversary cannot break; by authenticated channels we mean that when a party receives a message, it knows who sent that message. In consensus protocols, every party i has some input x_i. A protocol is called a consensus protocol if it has the following three properties. The first one is correctness: all non-faulty parties that complete the protocol have the same output. The second one is validity: if all parties are non-faulty and they have the same input x, then any non-faulty party that terminates outputs that value x. And finally, termination: if all non-faulty parties participate in the protocol, they eventually complete it. A common approach to solving the task of consensus in partial synchrony are primary-backup protocols, or Paxos-like protocols. In this type of protocol, we have meta-rounds called views, and each view has a designated leader. We think of the view as an attempt by that leader to reach consensus. This attempt might fail, so we might need to run many of those views, and in between each two consecutive views we also have a view-change protocol. Two important metrics for the efficiency of such protocols are the maximal size of messages sent throughout the protocol and the maximum possible amount of persistent memory that any one party requires. With cryptography, it is relatively easy to achieve constant message size and constant persistent memory requirements. Now a little bit about the information-theoretic solutions and this approach. In the information-theoretic setting, only Castro's thesis on PBFT achieved a constant persistent memory requirement. Castro's thesis has a few shortcomings. The first one is that it isn't peer reviewed. The second is that it requires linear-size messages, linear in n, in the view-change protocol. And technically that linear message size was achieved with cryptographic hash functions; without those, the protocols are less efficient.
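As background for the resilience bound used throughout (n at least 3f + 1), here is a small, generic computation of the quorum sizes such protocols typically rely on. This is an illustrative sketch of standard quorum arithmetic, not code or notation from the paper.

```python
# Generic sketch of the arithmetic behind n >= 3f + 1 (not from the paper).
# With quorums of size n - f, any two quorums intersect in at least n - 2f parties,
# and n - 2f >= f + 1 guarantees the intersection contains a non-faulty party.

def quorum_facts(n, f):
    quorum = n - f                      # how many replies a party waits for
    intersection = 2 * quorum - n       # minimum overlap of any two quorums
    return {
        "n": n,
        "f": f,
        "quorum": quorum,
        "min_intersection": intersection,
        "honest_in_intersection": intersection >= f + 1,
    }

print(quorum_facts(n=4, f=1))    # smallest optimally resilient setting
print(quorum_facts(n=10, f=3))   # n = 3f + 1 in general
print(quorum_facts(n=9, f=3))    # n = 3f: the intersection no longer guarantees honesty
```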
The main result of our paper is the following theorem: there exists an optimally resilient (meaning n is greater than or equal to 3f plus one), information-theoretically secure Byzantine consensus protocol in partial synchrony with constant message size and constant persistent memory requirement. The protocol, IT-HS, Information Theoretic HotStuff, only requires authenticated channels. In addition, it has standard performance for information-theoretic solutions on other metrics: for example, a constant number of rounds per view, O of n squared messages overall per view, and O of n transient memory requirements. We will now describe different primary-backup solutions to the task of consensus in partial synchrony. The first, very famous solution is the Paxos protocol by Leslie Lamport. In this protocol we have two rounds per view: the first is a lock round and the second is a done round. In addition, we have the view-change protocol, which we think of as happening before each view. This solution is not secure in the face of a Byzantine adversary; it is only secure in the face of a crash-fault or omission-fault adversary. Conceptually, the big idea here is that the lock round is supposed to help protect the consensus value; by that we mean that if consensus has been reached, having that first lock round will help maintain consensus. The next solution we look at is the PBFT protocol by Castro and Liskov. This protocol builds on the ideas of the original Paxos protocol and allows us to deal with a Byzantine adversary as well. This is done by adding another round at the beginning, a propose round, that is supposed to guarantee non-equivocation. This comes at the cost of a linear message size in the view-change protocol. In addition, the PBFT protocol also achieves a strong liveness guarantee called optimistic responsiveness. Intuitively, optimistic responsiveness means that we go as fast as the network allows us to go. More specifically, if we have a non-faulty leader and messages are actually delayed much less than the upper bound delta on message delay, we don't want to achieve consensus in time proportional to delta, but in time proportional to the actual message delay; and PBFT achieves this. The next solution we look at is the Tendermint protocol by Buchman et al. This protocol manages to lower the maximum message size back to constant size, but loses the optimistic responsiveness property. This happens because of a problem called the hidden lock problem. Intuitively, in the view change, a non-faulty leader needs to wait to hear about locks from all non-faulty parties, and if it wants to guarantee that it hears from all non-faulty parties, it has to wait delta time to make sure that their messages have arrived. The last solution we look at is the HotStuff protocol by Abraham et al. This protocol achieves constant message size and constant persistent memory requirement, and achieves optimistic responsiveness as well. This is done by adding another round before the lock round, the key round, which solves the hidden lock problem. Now we move on to describe our solution, the IT-HS protocol. The IT-HS protocol has all of the rounds that the HotStuff protocol has (a propose, a key3, a lock, and a done round, with slightly different names) and adds three more rounds in the middle: an echo, a key1, and a key2 round, which we will describe soon.
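Before that, here is an illustrative listing of the round sequence just named. Round names follow the talk's terminology; the exact names and message contents in the IT-HS paper may differ.

```python
# Illustrative sketch of the per-view round sequence described in the talk; not the
# paper's definitions.

from enum import Enum, auto

class Round(Enum):
    PROPOSE = auto()   # leader proposes a value
    ECHO    = auto()   # all-to-all echo, for non-equivocation without signatures
    KEY1    = auto()   # first key round  (transferability layer)
    KEY2    = auto()   # second key round (transferability layer)
    KEY3    = auto()   # third key round  (HotStuff-style key, guards the lock)
    LOCK    = auto()   # parties lock on the value
    DONE    = auto()   # parties commit / report completion

IT_HS_VIEW = list(Round)                                   # order within one IT-HS view
HOTSTUFF_LIKE_VIEW = [Round.PROPOSE, Round.KEY3, Round.LOCK, Round.DONE]

print([r.name for r in IT_HS_VIEW])
print([r.name for r in HOTSTUFF_LIKE_VIEW])
```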
As the diagram suggests, in previous solutions such as the HotStuff protocol, in each round the leader sends a message to all parties and then they all reply to the leader with a signed message. This allows the leader in a given round to prove that it received messages from a large enough number of parties in a previous round, by simply supplying those signatures. Clearly, this is not possible if we don't assume cryptography. So we replace this fan-out, fan-in type of round with an all-to-all type of round. And, for anybody familiar with the reliable broadcast protocol, we also had to add an echo round in order to guarantee non-equivocation in the case of a faulty leader. Now we want to talk about our protocol, and primary-backup protocols in general, in some more detail. In this type of protocol, in order to guarantee safety, by which we mean that we never have two parties commit to two different values, we want to make sure that if a commitment takes place, say to a value val in some view v, no other value should ever advance past even the proposal round in any future view v tag (v'), with v' greater than v. The way this is implemented is using a locking mechanism. Before sending a lock message with some value val, a non-faulty party sets a local field lock to the current view number v and another local field lock_val to the current value val. In any future view, that non-faulty party won't be willing to send any advanced message, that is, any message from a round after the proposal round, about any other value val tag, unless it receives ample proof that that val tag already advanced past the proposal round in a view later than v. Before committing to a value, a non-faulty party receives n minus f lock messages with that same val. Out of those n minus f, at least f plus one were sent by non-faulty parties, and intuitively those f plus one parties are going to be our sentinels: they make sure that our safety property continues to hold. Intuitively, those f plus one parties will never allow any other value to advance past even the proposal stage, and then no other value will ever gather ample proof to open any of the locks. Now we can take our previous intuition and translate it into an important insight of the Paxos protocol. This insight has two parts. The first is that if a commitment takes place, we know that f plus one non-faulty parties set their lock and lock_val fields to the appropriate values. This means that in any future view, they will be willing to inform the view's leader about their current lock, and that leader can always wait for at least one of those f plus one non-faulty parties, so it always hears about the lock information. The second part of the insight is that it is really enough to receive any late message, any message that is later than a propose message, in order to prove that no commitment took place, because if a commitment had taken place, no such message should ever be possible in any later view. So if some party is locked on some view lock with some value val, and it receives a key message from a later view with a different value (in this diagram, a key5 and val5 message), then it knows that its lock is irrelevant and it can discard it.
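To make the locking discipline concrete, here is a small hypothetical sketch of the lock state a party keeps and of the rule for supporting a value in a later view. Field names and types are illustrative only, not the paper's pseudocode.

```python
# Hypothetical sketch (not the paper's code) of the locking discipline described above.

from dataclasses import dataclass
from typing import Optional

@dataclass
class LockState:
    lock_view: Optional[int] = None   # view in which the lock was set
    lock_val: Optional[str] = None    # value the party is locked on

    def set_lock(self, view: int, val: str) -> None:
        # Called just before sending a LOCK message for (view, val).
        self.lock_view, self.lock_val = view, val

    def willing_to_support(self, val: str, proof_unlocks: bool) -> bool:
        # A party supports a value in a post-proposal round if it has no lock,
        # the value matches its lock, or it holds proof that the lock can be opened.
        if self.lock_view is None or val == self.lock_val:
            return True
        return proof_unlocks

state = LockState()
state.set_lock(view=7, val="A")
print(state.willing_to_support("A", proof_unlocks=False))  # True
print(state.willing_to_support("B", proof_unlocks=False))  # False
print(state.willing_to_support("B", proof_unlocks=True))   # True
```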
So, as stated before, in order to prove to parties that they can open their locks because no commitment took place, it is enough to show that some other value advanced past the proposal round in a later view. This is done either by proving that a lock was achieved, in some protocols, or, in the HotStuff protocol, by showing that a key was set by some non-faulty party. Now, those proofs need to be transferable. If a leader receives such a proof and believes it, for example it receives a key and believes that it can open any lock from an older view with a different value, it needs to be able to pass that proof, the proof that a key was set, on to other parties that might have locks. With cryptography, this is relatively easy: we can use signatures and pass the signatures along. Without cryptography, this transferability is entirely non-trivial to achieve. The first attempt, and it might be a very natural first attempt, is just broadcasting every message using Bracha's reliable broadcast. In this case, if some non-faulty party, for example the leader, receives a message, say a key message or a lock message, then eventually every non-faulty party will receive that message as well. This means that if a leader receives enough proof that a lock can be opened, then eventually every non-faulty party will receive that proof as well and will be willing to open its lock. Unfortunately, this approach requires unbounded space. This happens because we might need to maintain broadcast sessions from the distant past into the far future: we cannot bound the number of views we are going to have, we might need many broadcast sessions in each view, and each of those sessions requires some persistent space. A second attempt at achieving transferability notices that we don't actually need to broadcast all messages forever; we only need to show which keys were achieved and when. If we remember our intuition from before, even one key that was set after a lock was set is enough to open that lock, if the party that set the key is non-faulty. So in order to achieve this transferability, we can save a full history of our key information, and then send that history to all parties; if they have a lock, they will be willing to open it if they see that enough keys were set. Now, this is an improvement, but it still requires unbounded space, since I need to save the whole history of my keys, and even unbounded message size, since I need to send that full history as well. Our third attempt at achieving transferability uses a technique from the HotStuff protocol. In HotStuff it was noticed that if we add a round before the lock round, we guarantee that information is passed on in future views as well. More specifically, if some non-faulty party sets a lock with some value val, then it first received n minus f key messages with that same value val. Out of those messages, at least f plus one were sent by non-faulty parties, and we know that we can always wait for at least one of those parties to send its key information in a future view. In our protocol, we know that if a key3 is set, then any future leader will hear about a key2; and if a leader hears about a key2, then all of the parties will hear about key1s. So each time we add a round here, we add a layer of transferability. Now, if we just send one key1, this might not be enough.
That is because a key1 can change many times, while a key2 only changes once. So intuitively, when a leader hears about a key2, it expects that everybody will hear about key1s showing that any relevant lock can be opened. Now, a key1 can change many times: it can for some time have a different value than a given lock's value, and then revert back to the lock's value later. At that point, when it has the same value as the lock's value, it doesn't actually show that the lock can be opened. As stated before, in order to prove to a locked party that it can open its lock, because no commitment took place, we only need to show it that some other value advanced past the proposal round in a later view. Now, if some party has a lock with some value val, and a key2 is achieved with some different value val tag in a later view, then before setting that key2, a party received n minus f key1 messages. Those key1 messages all have that same value val tag, and at least f plus one of them were sent by non-faulty parties. Again, as stated before, the issue is that those parties might update their key1 values many, many times before the key2 value is updated again, and not always with the same value. However, if we look at the last two key1 values that those parties set, they are always going to show us that the lock is irrelevant: either the last key1 was set after the lock and has a different value, which by itself is enough to show that the lock is irrelevant, or we have two different key1s that were set after the lock, and at least one of them does not have the same value as the lock. So, if we either show one key1 that is later than the lock and has a different value, or two key1s that are later than the lock, we know that the lock can be opened. Now we want to actually be convinced that this type of proof shows that the lock can be opened. The first case, as said before, is that we have a single key1 set after the lock with a different value; for example, the last key set is key5 with some value val5, which is different from the lock's value. If we send the last two keys, in this case key4 and key5, the other party will see that key5 was set later and has a different value, and the lock can be opened. On the other hand, it might be the case that we changed our keys many times after the lock was set. For example, in this diagram, if we send the last two keys and their corresponding values, key4 and key5 with val4 and val5, we only ever send ones that have different values; it cannot be the case that both val4 and val5 equal the lock's value, so at least one of them provides proof that the lock is irrelevant. Finally, using this insight, we see that two keys are actually enough. We need to save only the last two keys of each type, which requires a constant amount of persistent storage, and constant message size as well: we can just send the keys and their values. In conclusion, in this work we present IT-HS. IT-HS is an optimally resilient, information-theoretically secure Byzantine consensus protocol in partial synchrony with constant message size and constant persistent space requirement. Thank you very much.
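To make that final "last two keys" insight concrete, here is a hypothetical sketch of the check a locked party could perform. It only mirrors the rule as stated in the talk; the actual message format and pseudocode in the paper may differ.

```python
# Hypothetical sketch of the "last two key1s" unlock check described above.

from dataclasses import dataclass

@dataclass(frozen=True)
class KeyEntry:
    view: int    # view in which this key1 was set
    val: str     # value the key1 was set for

def lock_can_be_opened(lock_view: int, lock_val: str,
                       prev_key: KeyEntry, last_key: KeyEntry) -> bool:
    """Rule from the talk: one key1 later than the lock with a different value,
    or two key1s later than the lock, suffice to open the lock."""
    later = [k for k in (prev_key, last_key) if k.view > lock_view]
    if any(k.val != lock_val for k in later):
        return True                     # a later key1 with a different value
    return len(later) >= 2              # two later key1s (their values differ from each other)

# Example: locked on value "A" in view 3; the prover's last two key1s are from views 5 and 6.
print(lock_can_be_opened(3, "A", KeyEntry(5, "B"), KeyEntry(6, "A")))  # True
print(lock_can_be_opened(3, "A", KeyEntry(2, "B"), KeyEntry(6, "A")))  # False
```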
|
This work presents Information Theoretic HotStuff (IT-HS), a new optimally resilient protocol for solving Byzantine Agreement in partial synchrony with information theoretic security guarantees. In particular, IT-HS does not depend on any PKI or common setup assumptions and is resilient to computationally unbounded adversaries. IT-HS is based on the Primary-Backup view-based paradigm. In IT-HS, in each view, each party sends only a constant number of words to every other party. This yields an O(n^2) word and message complexity in each view. In addition, IT-HS requires just O(1) persistent local storage and O(n) transient local storage. Finally, like all Primary-Backup view-based protocols in partial synchrony, after the system becomes synchronous, all nonfaulty parties decide on a value in the first view a nonfaulty leader is chosen. Moreover, like PBFT and HotStuff, IT-HS is optimistically responsive: with a nonfaulty leader, parties decide as quickly as the network allows them to do so, without regard for the known upper bound on network delay. Our work improves in multiple dimensions upon the information theoretic version of PBFT presented by Miguel Castro, and can be seen as an information theoretic variant of the HotStuff paradigm.
|
10.5446/52882 (DOI)
|
So, hi, my name is Alexander Spiegelman. Today I will present ACE: Abstract Consensus Encapsulation for Liveness Boosting of State Machine Replication. This is joint work with Arik Rinberg and Dahlia Malkhi. Okay, so what is state machine replication? We simply have a state and we want to replicate it reliably on many servers, and in order to do that we usually use consensus as a building block. Each server maintains its local log, and in order to move to the next state, servers propose the next state to the consensus; using the consensus they agree on a unique state and then simply add it to their log. Unfortunately, deterministic asynchronous consensus is impossible due to the famous FLP result. The question, therefore, is what is the practical approach? The answer is that practically most, if not all, deployed systems forgo asynchrony and instead assume eventual synchrony, meaning that there is an unknown global stabilization time after which the communication is synchronous. Under this model, protocols usually operate in a view-by-view manner, where each view is divided into two phases. In the first phase, there is a designated leader that tries to drive progress; other parties participate, but start a timer. If the timer expires, the parties switch to the view-change phase, in which they exchange some information in order to safely wedge the current view and proceed to the next one. Unfortunately, timeouts are vulnerable for several reasons. First, if our timeouts are too aggressive, too close to the real network delay, then even small fluctuations in the delays might demote good leaders: we might time out leaders that are good, just because they are slightly slower than our timeouts. On the other hand, if we set conservative timeouts, this can cause serious delays in case the leader is actually faulty, because we will need to wait the entire timeout before we are able to switch to the next view. Timeouts are also open to DDoS attacks, and the attacker does not need to control the entire network: it is enough for it to be able to adaptively delay the current leader in order to prevent progress. Luckily, timeouts are not inherent, and the other thing we can do to circumvent the impossibility result is to use randomization. In randomized protocols, we usually require deterministic safety and termination with probability one. This brings me to our goal in this paper, which is to design ACE, a model-agnostic framework for asynchronous liveness boosting. Our framework, ACE, can take any eventually synchronous, leader-based, view-by-view consensus algorithm and automatically turn it into a randomized, fully asynchronous one. By model-agnostic we mean that the framework does not really care about the model assumptions in these algorithms; it abstracts them away. For example, it can be instantiated with a crash-failure algorithm like Paxos, or with a Byzantine-failure algorithm like PBFT or HotStuff. In the first step, we take a single view of the view-by-view protocol and decouple it. A single view, as I said before, consists of two phases: the first is the leader-based phase, in which the leader tries to drive progress.
And the second is the view-change phase. Existing algorithms use timeouts in order to move from the first phase to the second. What we do is decouple the two phases and get rid of the timer. We do it by defining the LBV, which stands for leader-based view, an abstraction that exposes an external API with two operations: engage, to start the view, and wedge-and-exchange. Now, in order to define the properties of engage and wedge-and-exchange, I first need to define what a proper execution of these LBVs is, because the properties are defined over proper executions. Each LBV is parameterized with an identification and its designated leader. In this example we have two LBVs: the leader of the first one is p1 and the leader of the second is p2. First, a party invokes engage with some initial state s0 on the first LBV. The state itself can vary from protocol to protocol; it is really internal to the protocol that we use to instantiate the LBV. The engage operation might or might not return a value. At some later point, we invoke wedge-and-exchange. Wedge-and-exchange returns two things: the new state, and possibly a value. We then take this state and reuse it in the engage invocation of the next LBV, which in turn might also return a value or not, and so on. This is how a proper execution of LBVs works. Now for the properties, which are defined over proper executions. Let's start with the liveness properties. First: for every LBV instance, if the leader is correct and no correct party invokes wedge-and-exchange, then all the correct invocations of engage eventually return. Second: if all correct parties invoke wedge-and-exchange, then all invocations eventually return. For safety, we require agreement, which says that all LBVs return the same values: they might return bottom, but if they return a value other than bottom, it has to be the same, in a properly composed execution of LBVs. Another property we require is completeness: for every LBV instance, if f plus one correct engage invocations return, then no wedge-and-exchange invocation returns bottom. If you think about it, these properties are already implicitly satisfied by existing view-by-view protocols. So in order to use our framework, all we need to do is take a single view of the view-by-view protocol and wrap it with the LBV API. Once we have an LBV abstraction, we can actually use it to reconstruct the protocol it was instantiated with: all we need to do is invoke engage and start a timer, and when the timer expires, we invoke wedge-and-exchange and move to the next view. So now for the framework algorithm, for the liveness boosting. Before I describe it, I just want to describe two more auxiliary abstractions.
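Before those, here is a minimal, hypothetical sketch of the LBV interface just described, and of how the original eventually synchronous protocol can be reconstructed from it with a timer. The method names, types, and the timeout mechanism are illustrative, not the paper's exact API.

```python
# Hypothetical sketch of the LBV interface described above and of how the original
# eventually-synchronous protocol is recovered from it with a timer. Illustrative only.

from typing import Optional, Tuple, Any
import threading

class LBV:
    """One leader-based view: a single view of the wrapped protocol."""
    def __init__(self, view_id: int, leader: int):
        self.view_id, self.leader = view_id, leader

    def engage(self, state: Any) -> Optional[Any]:
        """Leader-based phase; may return a decided value or None."""
        raise NotImplementedError   # supplied by the wrapped protocol (e.g., one PBFT view)

    def wedge_and_exchange(self) -> Tuple[Any, Optional[Any]]:
        """View-change phase; returns (new_state, maybe_value)."""
        raise NotImplementedError

def view_by_view(lbvs, initial_state, timeout_s: float):
    """Reconstruction of the original protocol: engage, wait for a timeout, then wedge."""
    state = initial_state
    for lbv in lbvs:
        done, result = threading.Event(), {}

        def run(lbv=lbv, state=state, done=done, result=result):
            result["value"] = lbv.engage(state)
            done.set()

        threading.Thread(target=run, daemon=True).start()
        if done.wait(timeout=timeout_s) and result.get("value") is not None:
            return result["value"]                 # decided within this view
        state, value = lbv.wedge_and_exchange()    # timer expired: wedge and move on
        if value is not None:
            return value
    return None
```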
The first is leader election, which is very simple: you invoke an elect operation, and at some point it returns a leader. The properties are the following. Termination: if f plus one correct parties invoke elect, then all invocations return. Agreement: all the correct parties return the same leader. Validity: leaders are chosen uniformly at random. And unpredictability: the leader is unknown until the first correct party invokes elect. This leader election abstraction can be easily implemented with threshold signatures, for example, but there are also other ways. The second abstraction is the barrier: parties enter the barrier and at some point they exit. The properties are the following. Coordination: no correct party exits the barrier until at least f plus one correct parties have entered it. Termination: if 2f plus one correct parties enter the barrier, then all correct parties eventually exit. And an agreement property in the same spirit: if one correct party exits, the others eventually exit as well. In a way this resembles the properties of reliable broadcast, and it can actually be implemented in a similar way, or with threshold signatures; I will not go into these details in this presentation, but it is really not a big deal, and you can look in the paper. Okay, so now we are ready to move to the framework algorithm. Instead of operating in a view-by-view manner, we operate in a wave-by-wave manner, and in each wave we do the following. First, instead of running a single LBV in the view, we run n concurrent LBVs, each with a different leader. Every party invokes the engage operation in every LBV, and every LBV has a different leader. Whenever some engage invocation returns, the party sends an engage-done message to the leader of that LBV. When a leader collects 2f plus 1 engage-done messages, it enters the barrier. When it eventually exits the barrier (once enough leaders have entered the barrier, they all eventually exit), the leader invokes elect. And when the wave's leader is elected, in this case process pi, all parties invoke wedge-and-exchange in the LBV of the wave's elected leader, and we forget about all the other LBV instances; we don't care about them anymore. Now, if it happens that wedge-and-exchange returns a value that is not bottom, then we can decide on it. In any case, we take the state that wedge-and-exchange returns and use it as the parameter to the engage invocations in the next wave's LBVs. It is as simple as that; this is the entire protocol. Now let me give you some safety intuition. Here is our protocol, operating in the wave-by-wave manner. If we forget about the leader election abstraction, the barrier abstraction, and all the LBVs that were not elected, just forget about them, then what we are left with is a proper execution of LBVs. So the safety of our protocol really comes from the safety of the original protocol we used to instantiate the LBV abstraction; we get safety for free. As for termination, we get termination with probability 1, and here is the intuition.
A correct leader enters the barrier only after f plus one correct engage invocations in its LBV have returned. A correct party exits the barrier and invokes elect only after f plus one correct leaders have entered the barrier. So the wave's leader becomes known only after f plus one correct leaders have entered the barrier, and if we are lucky, with probability 1 over 3, the wave's leader is one of them. If you remember, if f plus one correct parties return a value from their engage invocations, then by the completeness property, the wedge-and-exchange operation also returns a value. So with probability 1 over 3, the wedge-and-exchange invocation returns a value other than bottom for all correct parties, and thus all correct parties decide in this wave. In expectation, all correct parties decide after three waves. We actually have some tweaks to reduce it to 3 over 2 waves, but I will not go into these details; you can look in the paper if you want. Now, we evaluated our framework. We implemented it in C++, instantiated it with the HotStuff protocol, and compared ACE-HotStuff with the base HotStuff under best-case and worst-case scenarios, and also under attacks. Here I will show some of the results; you can see more in the paper. First, of course, because we use n LBV instances concurrently in each wave, ACE has some overhead in the best case, where the network is perfect with no delays. This is what we can see in these graphs. But if we start to introduce delays, then HotStuff is vulnerable. For example, in this experiment we have a fluctuation in the delay, and the timeouts are set aggressively. You can see that if the delay is 5 milliseconds, HotStuff is great, better than ACE. But if we increase the delay to 10 milliseconds, then because the timeouts are so aggressive, leaders are not able to commit anything and HotStuff's throughput actually goes to zero, whereas ACE continues to operate at network speed. On the other hand, if our timeouts are too conservative, then Byzantine parties can exploit them to make parties wait the entire timeout until the view change. This is what we see in this graph: in the first 25 seconds everything is good, but in the last 25 seconds the Byzantine parties simply keep quiet whenever they are leaders, and we can see that the throughput of HotStuff drops significantly. Now, some protocols try to adjust to the real network delay, so we implemented one naive option of this: whenever we have a successful view in HotStuff, we decrease the timeout, and whenever we have an unsuccessful view, we increase it. This graph shows the result. In the second half of the experiment we did the following: we attacked the correct leaders, delaying them in order to force the system to increase the timeout, and then the Byzantine parties performed a silent attack in which they kept silent as long as possible without being detected. You can see that the base HotStuff throughput really drops, whereas ACE-HotStuff still operates at network speed. So this brings me to the discussion.
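Before the discussion, here is a toy, single-process simulation of one ACE wave, just to make the control flow described above concrete. The LBV here is a trivial stand-in, there is no real networking or adversary, and all names are illustrative; this is not the paper's code.

```python
# Toy simulation of one ACE wave as described in the talk. Illustrative only.

import random

class ToyLBV:
    def __init__(self, leader):
        self.leader = leader
    def engage(self, state):
        return f"block-from-{self.leader}"          # stand-in for the wrapped protocol's view
    def wedge_and_exchange(self, state):
        return state, f"block-from-{self.leader}"   # (new_state, decided value or None)

def run_wave(n, f, state):
    lbvs = [ToyLBV(leader) for leader in range(n)]

    # 1. Every party engages in every LBV; count engage-done messages per leader.
    engage_done = {leader: 0 for leader in range(n)}
    for party in range(n):
        for leader, lbv in enumerate(lbvs):
            if lbv.engage(state) is not None:
                engage_done[leader] += 1            # party -> leader: "engage-done"

    # 2. Leaders with 2f+1 engage-done messages enter the barrier.
    in_barrier = [l for l, c in engage_done.items() if c >= 2 * f + 1]

    # 3. The barrier guarantees at least f+1 leaders entered before anyone elects.
    assert len(in_barrier) >= f + 1
    chosen = random.choice(range(n))                # retroactive, unpredictable election

    # 4. All parties wedge the chosen LBV and adopt its state; the others are dropped.
    new_state, value = lbvs[chosen].wedge_and_exchange(state)
    return new_state, value

print(run_wave(n=4, f=1, state="s0"))
```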
So what did we do in this paper? We introduced the LBV abstraction, which allows us to get rid of the timers by exposing an external API to control the way we switch from the leader-based phase to the view-change phase. With this abstraction, we introduced ACE, a liveness booster that can boost the liveness of leader-based, view-by-view algorithms. As I said before, ACE is model-agnostic: it does not add any assumptions to the ones made in the original leader-based, view-by-view algorithm. So if we take Paxos, we get an asynchronous protocol that works in the crash-failure model, and if we use HotStuff, PBFT, or Zyzzyva, we get a protocol that is resilient against Byzantine faults. Also, because of the randomness, because of the way we choose the wave's leader, we get some sort of fairness. And the last point is modularity: if you have a system with ACE and one day you want to switch the consensus algorithm, all you need to do is change the implementation of the LBV abstraction and everything else stays the same. So with ACE we can actually enjoy both worlds: the experience gained in decades of algorithm design and system engineering in the eventually synchronous model, and at the same time a robust asynchronous solution that stays live under attack. This concludes my talk. Thank you very much for watching. Bye.
|
With the emergence of attack-prone cross-organization systems, providing asynchronous state machine replication (SMR) solutions is no longer a theoretical concern. This paper presents \emph{ACE}, a framework for the design of such fault tolerant systems. Leveraging a known paradigm for randomized consensus solutions, ACE wraps existing practical solutions and real-life systems, boosting their liveness under adversarial conditions and, at the same time, promoting load balancing and fairness. Boosting is achieved without modifying the overall design or the engineering of these solutions. ACE is aimed at boosting the prevailing approach for practical fault tolerance. This approach, often named \emph{partial synchrony}, is based on a leader-based paradigm: a good leader makes progress and a bad leader does no harm. The partial synchrony approach focuses on safety and forgoes liveness under targeted and dynamic attacks. Specifically, an attacker might block specific leaders, e.g., through a denial of service, to prevent progress. ACE provides boosting by running \emph{waves} of parallel leaders and selecting a \emph{winning} leader only retroactively, achieving boosting at a linear communication cost increase. ACE is agnostic to the fault model, inheriting its failure model from the wrapped solution assumptions. As our evaluation shows, an asynchronous Byzantine fault tolerance (BFT) replication system built with ACE around an existing partially synchronous BFT protocol demonstrates reasonable slow-down compared with the base BFT protocol during faultless synchronous scenarios, yet exhibits significant speedup while the system is under attack.
|
10.5446/52883 (DOI)
|
Hi everyone, welcome to our presentation. My name is Jovana Mićić, and I am going to talk about a security analysis of the Ripple consensus protocol. This is joint work by researchers from the Cryptology and Data Security Group at the University of Bern; my co-authors on this paper are Ignacio Amores-Sesar and Professor Christian Cachin. Let's start with a short introduction to Ripple. Ripple is a blockchain-based platform that enables secure and instant worldwide transactions at low cost, and it has a native cryptocurrency called XRP. Unlike Nakamoto's consensus protocol in Bitcoin or Ethereum, the Ripple consensus protocol does not rely on mining; it uses a voting process relying on the identities of its validator nodes to reach consensus. This makes Ripple much more efficient than Bitcoin for processing transactions: Ripple can process up to 1500 transactions per second and achieves very low transaction settlement times, between 4 and 5 seconds, which is pretty fast compared to some other blockchain protocols. The Ripple protocol is generally considered to be a Byzantine fault-tolerant agreement protocol that can reach consensus in the presence of faulty or malicious nodes. In contrast to traditional Byzantine agreement protocols, there is no global knowledge of all participating nodes; instead, each node declares a unique node list, shortly called UNL, of other nodes that it trusts. The peer-to-peer network consists of many independent Ripple servers that receive and process transactions. In this talk, I will always refer to the Ripple servers that take part in consensus, and these servers are called validators. Now I would like to show you an example of a Ripple network and how UNLs are configured. In this example there are six nodes. The first four nodes are in UNL1, the black UNL, and the last four nodes are in UNL2, colored in blue. Nodes 1, 2, and 3 trust UNL1 and nodes 4, 5, and 6 trust UNL2. This means that, for example, node 1 will listen only to messages coming from nodes 2, 3, and 4, and based on these messages this node will make its own decision. In order to reach consensus among all the nodes, there must be an overlap between the UNLs; in this example, the overlap between the two UNLs is 50%. Most previous work tried to discover the minimum required overlap, and what the Ripple company guarantees is that the protocol will work and be safe if your node has 100% overlap with the list of validators recommended by Ripple. All configuration of a node is done in two files, the Ripple configuration file and validators.txt. Now let's take a look at how these files look. Here we can see the two files, taken from the Ripple GitHub repository; this is what is recommended by the Ripple company. On the left side you can see the Ripple configuration file, which sets up things like server addresses, ports, the database, etc. For us the most important part is highlighted in blue: it specifies the file that holds the list of other nodes our node should trust, called validators.txt, which is presented on the right side of the slide. As I said, both files are the recommended configuration from Ripple. The validators file configures as trusted validators all the nodes that Ripple considers to be safe. This is a screenshot of the website where you can find all active validators in the Ripple network.
Here we can see only a part of the validators recommended by Ripple; the total number is 41 validators. A few years ago, most of the validators were actually validating nodes of the Ripple company, but nowadays there are more and more nodes in the list that are run by other institutions, so the number of nodes run by the Ripple company is dropping slowly. Still, the recommended list contains five validators coming from the Ripple company, so we still have the open question of whether Ripple is actually decentralized. Now we can move to the more technical and, for all of us, more interesting part: the consensus protocol that runs the Ripple network. Before we go further, I would like to mention that not much research has been done regarding the security of the Ripple consensus protocol. There are only a few papers which present how the protocol works, but they are not very detailed and are pretty hard to understand. So the first contribution of our work is an abstract description of the protocol, which we derived from the source code of Ripple version 1.4. The current version at the moment of this presentation is 1.6, but it does not have any significant changes to the consensus protocol compared to version 1.4. In this picture you can see one part of the pseudocode that we produced; you don't have to understand what is written here now, I am showing it just to illustrate how big and complex the protocol is. If you are interested in understanding the protocol, I encourage you to take a look at this pseudocode, which you can find in our paper. Now let's look at the most basic element of the Ripple consensus protocol: the ledger, which roughly plays the role of blocks in blockchain protocols. A ledger is stored persistently and consists of a batch of transactions, a hash of the logically preceding ledger, a sequence number, etc. Each node locally maintains three different ledgers: the current ledger, which is in the process of being built during a consensus round; the previous ledger, which is the most recently closed ledger; and the valid ledger, which is the last fully validated ledger in the network. The protocol itself is highly synchronous; there are some parts which are asynchronous, but mostly this protocol is synchronous. It relies on a common notion of time, and it is structured into successive rounds of consensus. The protocol rounds and their phases (there are three phases, which I will talk about later) are implemented by a state machine which is invoked every second, when the global heartbeat timer ticks. As I said, there are three phases through which consensus goes during one round: open, established, and accepted. The usual phase transition goes from open to established and then to accepted, and then the node proceeds to the next consensus round, which starts again from the open phase. It is also possible that the phase changes from established back to open, if a node detects that it has been forked from the others onto a wrong ledger. The timeout handler first checks whether the local previous ledger is the same as the preferred ledger of a sufficient majority of the nodes in the network. If not, the node has been forked or has lost synchronization with the rest of the network and must bring itself back to the state agreed by the network; in this case, it starts a new consensus round from scratch.
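To give a rough picture of the per-node state just described, here is an illustrative sketch of the three locally maintained ledgers and the three phases. Field and phase names follow the talk; the real rippled data structures are much richer than this.

```python
# Illustrative sketch of the per-node state described above; not rippled's actual code.

from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional

class Phase(Enum):
    OPEN = "open"                # collecting submitted transactions
    ESTABLISHED = "established"  # exchanging and refining proposals
    ACCEPTED = "accepted"        # building and validating the last closed ledger

@dataclass
class Ledger:
    seq: int                       # sequence number
    parent_hash: Optional[str]     # hash of the logically preceding ledger
    txs: List[str] = field(default_factory=list)

@dataclass
class NodeState:
    phase: Phase = Phase.OPEN
    current: Optional[Ledger] = None    # ledger being built this round
    previous: Optional[Ledger] = None   # most recently closed ledger
    valid: Optional[Ledger] = None      # last fully validated ledger

    def on_heartbeat(self, preferred_matches_previous: bool) -> None:
        # Invoked every second by the heartbeat timer (simplified).
        if not preferred_matches_previous:
            self.phase = Phase.OPEN     # forked or out of sync: restart the round

state = NodeState()
state.on_heartbeat(preferred_matches_previous=False)
print(state.phase)   # Phase.OPEN
```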
Here is the general overview of how one consensus round looks, and now we will go step by step through each phase. Let's start with the open phase. When a node enters a new round of consensus, it sets the phase to open, resets the round-specific data structures, and waits for the buffer to fill up with submitted transactions. Once the node has been in the open phase for more than half of the duration of the previous consensus round, it moves to the established phase. During the established phase, the nodes exchange their proposals for the transactions to decide in this consensus round, using proposal messages, as you can see in the picture. These proposals may contain different transaction sets, and all transactions on which the proposals from other nodes differ become disputed. Every node keeps track of how many other nodes in its UNL have proposed a disputed transaction and represents this information as votes by the other nodes. The node may remove a disputed transaction from its own proposal or add one to its proposal, and this decision is based on the votes coming from the other nodes in its UNL. What is interesting is that the node increases the necessary threshold of votes for changing its own vote on a disputed transaction depending on the duration of the established phase, relative to the time taken by the previous consensus round. So we can see from this graphic that the further we go in time, the higher the threshold; that means a proposal needs higher support from other nodes in order to be included in the next ledger. The node moves to the accepted phase when it has found that there is consensus on its proposal, which means that 80% of the nodes agreed on the same transaction set. So let's move to the accepted phase. The node constructs the next ledger, called the last closed ledger, by applying the decided transactions. This ledger is signed and broadcast to the other nodes in a validation message, and after that the node immediately initializes a new consensus round. Concurrently, the node processes validation messages from the nodes in its UNL: it verifies them and counts how many other nodes in its UNL have issued the same validation. When this number reaches 80% of the nodes in its UNL, the ledger becomes fully validated and the node executes the transactions contained in it. Our second contribution is the security analysis of the protocol, where we found two scenarios in which it is possible to violate the safety and liveness of the Ripple consensus protocol. Now I will introduce the attack that leads to a violation of safety. We studied this attack in a setup with a small number of honest nodes. In particular, we have seven nodes, one of which is Byzantine. We divide them into two groups: nodes 1, 2, and 3, colored in black, have as UNL the nodes 1 to 5, and the blue nodes 5, 6, and 7 have as UNL the nodes 3 to 7. Node 4 is Byzantine. This node behaves as an honest node until the precise moment of the attack, which is when the black nodes propose transaction TX at the same time as the blue nodes propose transaction TX prime. At this moment we trigger the adversary: it proposes transaction TX to the black nodes and TX prime to the blue nodes. Now let's take a closer look at the attack by focusing on the local view of one node in each group, because by the symmetry of this setup, node 1 will follow the same steps as nodes 2 and 3, while node 5 will follow the same steps as nodes 6 and 7.
So now we look at the local views of nodes 1 and 5; they will only listen to messages coming from nodes in their UNL. Node 1 gets messages from nodes 2, 3, 4, and 5, and these nodes send their votes to node 1. After collecting all the votes, we see that 3 votes are in favor of transaction TX and 1 vote is for transaction TX prime. Don't forget that we have to count our own vote as well, which means we have 4 votes out of 5 for transaction TX. This is exactly the 80% needed for including the transaction in a ledger. However, at the same time, from the point of view of node 5, the opposite happens: it has enough support for transaction TX prime but not for transaction TX. The outcome of this situation is that these two nodes create different ledgers to be validated, ledgers L and L prime, and these two are conflicting because they hold different transactions. Now we move to the second round, where the nodes try to validate these ledgers. Again, we take a closer look at nodes 1 and 5. From the point of view of node 1, it gets 3 votes for validating ledger L and 1 vote for validating ledger L prime; the opposite happens for node 5. As we recall from before, 80% of the nodes have to agree on the same ledger. Because of that, two honest nodes validate two different ledgers, leading to a break of safety. In other words, we have serious problems in the Ripple protocol. In Ripple, we can also attack liveness. As we saw before, the optimal case for safety is having a huge overlap between UNLs. However, if we consider a scenario with two honest nodes and one Byzantine node that all share the same UNL, we can break liveness. To do this, we just have to wait until half of the nodes propose a transaction TX while the other half propose a different transaction TX prime. This is the moment when we start the attack: the malicious node behaves as if it proposes transaction TX towards the nodes that are proposing TX, and as if it proposes TX prime towards the nodes proposing TX prime. With this attack we manage to break the liveness of the protocol, because from the point of view of any of the nodes, what it proposes has strictly more than 50% of the support in its UNL (which contains all the nodes), so it does not change its mind: it keeps proposing its transaction and does not accept the other one, which has less than 50% support. This repeats over and over again, breaking liveness. And now we are getting closer to the end of this presentation, so let's quickly sum up what we have done in our paper. Previous work regarding the Ripple consensus protocol has already brought up some concerns about its liveness and safety. In order to better analyze the protocol, this work presents an independent, abstract description of Ripple's consensus protocol derived directly from the implementation. Furthermore, this work identifies relatively simple cases in which the protocol may violate safety and liveness, and which have devastating effects on the health of the network. Our analysis illustrates the need for very close synchronization and tight interconnection between the participating validators in the Ripple network. That would be all. Thank you for your attention. If you want to know more details of our security analysis, you can scan this QR code or find our paper in the arXiv repository. Thank you one more time.
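As a quick numerical sanity check of the safety attack described above, here is a small illustrative computation that only mirrors the vote arithmetic from the talk (the two overlapping UNLs and the 80% quorum). It is not a simulation of the actual Ripple consensus implementation.

```python
# Illustrative recomputation of the vote counts in the safety attack (7 nodes).

UNL = {
    1: {1, 2, 3, 4, 5}, 2: {1, 2, 3, 4, 5}, 3: {1, 2, 3, 4, 5},
    5: {3, 4, 5, 6, 7}, 6: {3, 4, 5, 6, 7}, 7: {3, 4, 5, 6, 7},
}
honest_vote = {1: "TX", 2: "TX", 3: "TX", 5: "TX'", 6: "TX'", 7: "TX'"}

def byzantine_vote(observer):
    # Node 4 equivocates: it claims "TX" to the black group and "TX'" to the blue group.
    return "TX" if observer in (1, 2, 3) else "TX'"

def supported(observer, tx, threshold=0.8):
    votes = 0
    for peer in UNL[observer]:
        v = byzantine_vote(observer) if peer == 4 else honest_vote[peer]
        votes += (v == tx)
    return votes / len(UNL[observer]) >= threshold

print(supported(1, "TX"))    # True : node 1 sees 4/5 votes for TX
print(supported(5, "TX'"))   # True : node 5 sees 4/5 votes for TX'
# Two honest nodes reach the 80% quorum on conflicting transactions: safety is broken.
```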
|
The Ripple network is one of the most prominent blockchain platforms and its native XRP token currently has the third-largest cryptocurrency market capitalization. The Ripple consensus protocol powers this network and is generally considered to be a Byzantine fault-tolerant agreement protocol, which can reach consensus in the presence of faulty or malicious nodes. In contrast to traditional Byzantine agreement protocols, there is no global knowledge of all participating nodes in Ripple consensus; instead, each node declares a list of other nodes that it trusts and from which it considers votes. Previous work has brought up concerns about the liveness and safety of the consensus protocol under the general assumptions stated earlier, and there is currently no appropriate understanding of its workings and its properties in the literature. This paper makes two contributions. It first provides a detailed, abstract description of the protocol, which has been derived from the source code. Second, the paper points out that the abstract protocol may violate safety and liveness in several simple executions under relatively benign network assumptions.
|
10.5446/52884 (DOI)
|
Hello everybody. My name is Amine Boussetta. I am a PhD candidate at UM6P in Morocco, and today's talk is about the Byzantine resilience of distributed stochastic gradient descent. After a brief introduction, I will state the problem, and after that I will talk about some related works and how they handle the Byzantine resilience problem. I will also briefly present the theoretical guarantees; the proofs are in the full version of this work. Finally, I will present a selection of experiments to accurately assess the performance of our algorithm. As you probably know, many large-scale machine learning applications are now implemented in a distributed way. Perhaps you are most familiar with the parameter server architecture, which was introduced by Google research. In this classical scheme, you have one parameter server, let's call it PS for short (this PS can possibly be replicated), and a set of workers, each of which has a local dataset. If we consider stochastic gradient optimization and suppose that we are in iteration number k, what happens is that first the PS broadcasts the model w_k to all the workers. Then each worker computes an estimate of the gradient using this model and its local dataset and sends it back to the PS. Finally, the PS aggregates all these received gradient estimates and performs an update. This operation repeats until a certain criterion is met. Now, what happens when a proportion of these workers is malicious? If the PS is only averaging the received vectors, then it suffices to have just one malicious worker to make the system completely fail. What we need is a more robust aggregation rule to defend against bad behavior. Here are some of the properties that we want from an aggregation rule. First, the training itself already takes a huge amount of time, so the defense mechanism should be fast, otherwise the training becomes a daunting task. It should also defend against a high number of malicious workers; note that the maximum number of Byzantine workers that can be tolerated is half, because if more than half are malicious, then it is impossible to distinguish between the good and the bad. The third point, which is an important one, is that the defense mechanism should not degrade the performance of the system when all the workers are honest, because companies or firms that use distributed machine learning are not constantly under attack. Let's say that Byzantine attacks may happen once every five or ten years; the rest of the time, your model should reach the full accuracy and should not be limited by your aggregation rule. Finally, the system augmented with the defense should have an acceptable performance when facing Byzantine workers, which is, after all, the goal of an aggregation rule. So here is our algorithm. It is based on very simple operations like additions, subtractions, norm computations, and medians. Basically, we try to come up with the closest vector to the coordinate-wise median, which is itself a good robust estimator, but constructed only from full gradients. We do this by first computing the coordinate-wise median and storing it in the vector m, subtracting it from every column vector of the gradient matrix, and then computing the squared norms of these centered vectors, which we store in the vector f. Finally, we compute the median of these squared norms; let's call it r, and we construct the interval from 0 to r. The filter is then very simple: we select only the vectors whose squared distance to the median falls inside this interval, we average them, and this average is our output.
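As a concrete reading of the steps just described, here is a minimal NumPy sketch. The function name, the (d, n) matrix layout, and details such as tie-breaking are my own choices, not the authors' reference implementation.

```python
import numpy as np

def aksel(gradients):
    """Sketch of the aggregation described above.
    `gradients` is a (d, n) matrix whose columns are the n workers' gradient estimates."""
    m = np.median(gradients, axis=1, keepdims=True)   # coordinate-wise median, vector m
    f = np.sum((gradients - m) ** 2, axis=0)          # squared norm of each centered column
    r = np.median(f)                                  # median of the squared norms
    selected = gradients[:, f <= r]                   # keep only the vectors inside [0, r]
    return selected.mean(axis=1)                      # average the selected vectors
```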
From what we have seen, most defense mechanisms can be categorized as follows. You have aggregation rules that use historical information to filter the bad workers while progressing with each iteration. You also have methods that rely on redundancy, which is a classical way to deal with failures. And you have aggregation rules based on robust statistics; our work falls into this last category. In robust statistics, there are two ways to proceed. You can either select whole vectors based on some rule or filter and average only those selected vectors to obtain the final output, which is what we call in the paper a full-gradient aggregation rule, or full GAR for short. Or you can apply the rule directly on each coordinate, which is what we call a blended GAR. From experimentation, we observed that blended GARs never reach the top accuracy in a setting where we only have honest workers, but full GARs do. This is exactly the third point we discussed earlier, the fact that the defense mechanism should not degrade the system. What we see here is an experiment involving 50 workers, with the aggregation rules tuned to withstand 12 Byzantine workers, although none is actually present in the experiment. Let's forget for now the name Aksel, which is the name of our algorithm, and take a look at the performance of these aggregation rules. It is clear that full GARs have no accuracy overhead: you see that they reach the accuracy of averaging. But the blended GARs have this gap; they do not reach the top accuracy, which is reached by averaging. Now let's compare the three most important properties of these GARs to our work. Averaging is indeed fast: it has an optimal time complexity. We say it is optimal because a deterministic method needs at least to read the content of the gradient matrix, which takes O(nd), where n is the number of workers and d is the dimension of the problem. Averaging also has a low angular error, which decreases as the number of workers increases, which is good. The problem is that it is not Byzantine resilient: f equals zero. We have also seen the downside of blended GARs, even though they have an optimal time complexity and an optimal breakdown point, and some of them even reach the angular error of averaging, which is the desired property. At the date of publishing this paper, we only knew about three full GARs: Krum, Multi-Krum and Bulyan. The problem is that they have a high time complexity, quadratic in the number of workers, and a high angular error, with terms growing with n, and sometimes with n and d together, which is bad. And they do not have an optimal breakdown point; they are far from n > 2f. For example, Bulyan can only defend against nearly a quarter of the workers being Byzantine. So we came up with Aksel, which improves on all three properties. Basically we have taken the best out of the two categories: we took the optimal time complexity and breakdown point plus the low angular error from the blended approaches.
And we took the low accuracy overhead from the full-gradient approaches, which makes our aggregation rule the best, as we will see in the evaluation section. Now, this is the set of assumptions made in this work. The first one concerns the breakdown point: we require that the number f of malicious workers is less than half of the total number of workers. The second and third assumptions concern the cost function: we want it to be smooth and strongly convex, but we also have a result for smooth, three-times-differentiable general functions, which follows from the Byzantine resilience result; we will see it later. The last assumption is the classical one in this line of research: we assume an unbiased estimator of the gradient with bounded variance. So the first thing to do was to upper bound the variance of Aksel's output, which is done in Lemma 11; all the theorems coming next are based on this upper bound. For example, if we combine this result with Lemma 12 on the control of the moments, we are able to prove the alpha-f Byzantine resilience of Aksel. Let me talk about this a little bit. We say that an aggregation rule is alpha-f Byzantine resilient if it can tolerate a maximum of f malicious workers and its output makes an angle of at most alpha with the true gradient; naturally we want this angle to be as small as possible, even zero if you can achieve it. A direct consequence of alpha-f Byzantine resilience is almost-sure convergence. In the paper we do not reprove this theorem, because it was proven in the original paper; it was sufficient to prove the alpha-f Byzantine resilience of Aksel to state the theorem. This is indeed a weak convergence result, because we only show that the gradient sequence almost surely reaches zero. It means that you will reach a flat region of the cost function: it could be a global minimum, which is very good, but it could also be a local minimum, or worse, a saddle point, which is very bad. In Theorem 15 we prove convergence for strongly convex cost functions. This is exactly how convergence bounds are stated for all stochastic gradient descent methods: you always have a decreasing term that goes to zero as the counter t goes to infinity, and you are left with a small term involving the variance. So it is important to have a gradient aggregation rule with a small angular error, which makes this residual term small. Just for the sake of comparison, if you take vanilla stochastic gradient descent, whose aggregation rule is averaging, this term delta equals sigma squared over n, which decreases with n. If you take an aggregation rule like Aksel, the term is constant with respect to the number of workers, whereas prior full GARs had angular errors growing with n, which was bad. So at least with Aksel we have a constant term with respect to the number of workers; it is not quite like averaging, but we are approaching averaging.
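To visualize the shape of such convergence statements, here is a generic constant-step-size bound for smooth, strongly convex objectives. The constants are simplified and this is not the paper's exact Theorem 15, only an illustration of the "decreasing term plus noise floor" structure mentioned above, with step size $\eta$, strong convexity parameter $\mu$, and $\Delta$ the squared deviation of the aggregation rule's output from the true gradient:

$$\mathbb{E}\left[\|w_T - w^\star\|^2\right] \;\lesssim\; \underbrace{(1-\eta\mu)^T \,\|w_0 - w^\star\|^2}_{\text{vanishes as } T \to \infty} \;+\; \underbrace{\frac{\eta}{\mu}\,\Delta}_{\text{noise floor}}$$

Plain averaging gives $\Delta = \sigma^2/n$, decreasing in $n$, while a full GAR such as Aksel gives a $\Delta$ that is constant in $n$, which is exactly the comparison made in the talk.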
Up to now, all the results were stated in expectation: we had the result on the variance, which bounds the expected gap, we proved the expected alpha-f Byzantine resilience, and finally we proved the expected convergence. If you look at the last section of the paper, we also studied the actual error: instead of keeping the expectation, which appeared in every result so far, we drop it and study the actual error. This is possible by assuming a normal distribution of the gradients, and we use extreme value theory to prove a logarithmic upper bound on the angular error of Aksel. Prior work with an optimal breakdown point, that is, tolerating up to n > 2f, had angular errors growing polynomially with n. Now let's take a look at two experiments that we presented in the paper; we actually have a whole section of empirical results for different datasets and settings in the appendix. In the first experiment, we train a CNN on CIFAR-10 using 25 workers, 11 of which are Byzantine. These Byzantine workers implement the attack presented in the paper "Fall of Empires". We actually tested two attacks: the one from "Fall of Empires" and another one from the paper "A Little Is Enough". Both exploit the fact that honest workers send values that are roughly normally distributed, with some mean and standard deviation, so the attackers send vectors in the tails of the distribution: these vectors look honest because they are statistically plausible, but they sit in the tail, and this is very bad. As you can see here, Aksel reached the best accuracy, nearly 30%, even with nearly half of the workers Byzantine, while the other full GARs were limited to 10%, and even the blended GARs were stuck at maybe 20 or 25%. The second experiment is quite interesting, because Aksel is the only GAR that converges, and most importantly it reaches the top accuracy exactly like averaging, even in the presence of roughly one fifth malicious workers. These workers implemented the state-of-the-art attack from the paper "A Little Is Enough". As you can see, Aksel is the orange dotted line: it completely reaches the accuracy of averaging, maybe even slightly better, I don't know if you can see it, but it is slightly better than averaging, while the other full GARs were stuck at 10% (one of them did a bit better in this experiment and reached 20%, but it is still far from averaging), and the blended GARs were all stuck at 10% accuracy, so those models diverged. As a conclusion, we are left with some open questions that will be addressed in future work: whether it is possible to further reduce the angular error and maybe achieve the error of averaging, whether randomness can enable better performance, and what guarantees we can have when dealing with non-IID data. So this was all. Thank you very much for your attention. If you have any questions, I will be glad to respond. Thank you.
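For intuition on the tail attacks mentioned above, here is a rough sketch of the idea: Byzantine workers send a vector that stays within a small number of standard deviations of the honest mean, so it looks statistically plausible while consistently biasing the aggregate. The parameter z and the sign of the perturbation are placeholders of mine; the published attacks choose them much more carefully as a function of n and f.

```python
import numpy as np

def tail_attack(honest_grads, z=1.0):
    """Rough sketch of a 'stay in the tail of the honest distribution' attack.
    `honest_grads` is a (d, n_honest) matrix of honest gradient estimates;
    the returned vector is what every Byzantine worker would submit."""
    mu = honest_grads.mean(axis=1)       # per-coordinate mean of honest gradients
    sigma = honest_grads.std(axis=1)     # per-coordinate standard deviation
    return mu - z * sigma                # plausible-looking but consistently biased vector
```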
|
Modern machine learning architectures distinguish servers and workers. Typically, a d-dimensional model is hosted by a server and trained by n workers, using a distributed \textit{stochastic gradient descent} (SGD) optimization scheme. At each SGD step, the goal is to estimate the gradient of a cost function. The simplest way to do this is to \emph{average} the gradients estimated by the workers. However, averaging is not resilient to even one single Byzantine failure of a worker. Many alternative \emph{gradient aggregation rules} (GARs) have recently been proposed to tolerate a maximum number f of Byzantine workers. These GARs differ according to (1) the complexity of their computation time, (2) the maximal number of Byzantine workers despite which convergence can still be ensured (breakdown point), and (3) their accuracy, which can be captured by (3.1) their angular error, namely the angle with the true gradient, as well as (3.2) their ability to aggregate full gradients. In particular, many are not full gradients for they operate on each dimension separately, which results in a coordinate-wise blended gradient, leading to low accuracy in practical situations where the number (s) of workers that are actually Byzantine in an execution is small ($s<f$). We propose \textsc{Aksel}, a new scalable median-based GAR with an optimal time complexity (O(n)), an optimal breakdown point (n>2f) and the lowest upper bound on the \textit{expected angular error} (O(1)) among \textit{full gradient} approaches. We also study the \textit{actual angular error} of \textsc{Aksel} when the gradient distribution is normal and show that it only grows in O(logn), which is the first logarithmic upper bound ever proven assuming an optimal breakdown point. We also report on an empirical evaluation of \textsc{Aksel} on various classification tasks, which we compare to alternative GARs against state-of-the-art attacks. \textsc{Aksel} is the only GAR reaching top accuracy when there is actually none or few Byzantine workers while maintaining a good defense even under the extreme case (s=f). For simplicity of presentation, we consider a scheme with a single server. However, as we explain in the paper, \textsc{Aksel} can also easily be adapted to multi-server architectures.
|
10.5446/52886 (DOI)
|
Hi, I'm Isaac. I'm a postdoc at the Max Planck Institute for Software Systems, and I'm here to talk about Heterogeneous Paxos, which is a project I worked on mostly at Cornell, with Xinwen Wang, Robbert van Renesse, and Andrew Myers. Most of you know that consensus is a key part of maintaining a replicated database, blockchain, or anything else with consistent state: a consensus protocol is the part that decides which transaction goes next. Traditional consensus algorithms are built for traditional scenarios, usually one system owner running a few machines, trying to tolerate one or a small number of independent failures. Crucially, most distributed algorithms are what we call homogeneous, as opposed to heterogeneous, and by that I mean three things. First, all participants are created equal. There might be different sets of participants with different roles, but within each role, any participant is just as good as the next; usually a system is characterized by having some number n of participants. Second, all failures are created equal. Usually, when we're analyzing a system, we talk about tolerating some number of independent failures, all of one type, so f crash failures or f Byzantine failures. Third, all learners are created equal. Learners are the people who observe the output of the consensus. They make assumptions about which failures are going to be tolerated and which results need to be guaranteed. Traditionally, these assumptions are something like: there must be at most f failures out of the n participants. Previous projects have managed to break each of these assumptions independently. UpRight and XFT, for example, mix crash and Byzantine failures. Byzantine quorum systems and refined quorum systems use quorums to encode heterogeneous participants. Ripple was a consensus protocol designed with heterogeneous learners. Even pairs of homogeneous assumptions have been broken: Stellar has heterogeneous learners and participants, and Flexible BFT has heterogeneous failures and learners. And I'm here to argue that not only can Byzantized Paxos be phrased with heterogeneous failures and participants, but we can adapt it to form a new protocol, Heterogeneous Paxos, which is heterogeneous in all three ways. I think that's really important, because a lot of new systems, including most permissioned blockchains, are trying to connect lots of very different people with very different assumptions. For example, in 2016, all of these companies were trying to run a blockchain together with R3. And of course, the actual machines running the consensus are different too: all of these companies, for instance, can be used to run consensus participants, in addition to servers owned by stakeholders in whatever system you have. People are going to have different opinions about the trustworthiness of participants in each of these services. And not all failures are independent: machines sharing an implementation are more likely to be hacked and go Byzantine together, machines in the same data center or the same power grid are more likely to crash together, and machines owned by the same people are more likely to fail together. Correlated failures happen all the time, so it is not always accurate to tolerate any f independent failures. To build a system that works with many different parties, we should use a consensus that takes everyone's specific failure tolerances into account. For our project, we're working with some fairly standard distributed systems assumptions.
And we want our distributed system to keep working even when some participants fail. Crash failures are when a node simply stops functioning, without warning and without detectability. Byzantine failures can behave however they want: they can send messages that aren't part of the protocol, send messages out of order; they are arbitrarily evil. There's one other critical component of distributed systems: someone observes the output of the distributed system and makes these demands about failure tolerances, and in consensus we call that entity a learner. Usually, we think of learners as people interested in the results of the system. In a consensus problem, we have some known participants who, for historical reasons, are called acceptors, and there are also some learners, and each learner wants to decide, or learn, a value, such as which transaction happens next. The key property of consensus is agreement: we want learners to decide on the same value, and we want this even in the presence of failures, either acceptors crashing or becoming Byzantine. The most famous consensus algorithm is Paxos, by Leslie Lamport. The original Paxos tolerates only crash failures, but there are several variants designed to deal with Byzantine failures as well. The most famous of these is arguably Practical Byzantine Fault Tolerance, but our protocol builds on Lamport's Byzantizing Paxos by Refinement, mostly because it's easier to prove stuff about. We usually think of Paxos as a homogeneous algorithm; as a reminder, that means all acceptors are created equal, all failures are created equal, and all learners are created equal. But I'm here to tell you that we don't always want to operate under homogeneous assumptions. Sometimes we want heterogeneous systems. These are systems with many different, not necessarily symmetric, tolerated failure conditions, where some crash failures and some Byzantine failures can be tolerated at the same time, and where different learners make different assumptions about what may happen and what guarantees they want under which conditions. Heterogeneity means that we can tailor the system to the specific constraints of the learners and the potential failures of the acceptors. In the case where everybody really is the same, we want our heterogeneous system to reduce to the homogeneous version, but otherwise, we should be able to tolerate more failures and be more efficient with our resources. OK, so what does it take to have a heterogeneous consensus algorithm? Well, it has to be heterogeneous: many different failure scenarios, mixed Byzantine and crash failures, and different learners making different assumptions. And it has to be consensus: formally, it needs three properties, non-triviality, agreement, and termination. Let's take a closer look at what these consensus properties mean in a heterogeneous setting. Non-triviality requires that if a learner decides something, that thing must have been proposed by someone who's allowed to propose it. You can't always decide three, because three may not have been proposed. It doesn't matter whether you're in a homogeneous or heterogeneous setting: what we're going to do is have our learners check that a value was signed by an authorized party before they decide it. Done. Agreement is trickier: if two learners decide, they decide the same thing, as long as failure assumptions are met. In the homogeneous case, this is clear; either there were too many failures or there weren't. In the heterogeneous case, not so much.
Whose failure assumptions are we talking about? Who should agree under which assumptions? And termination is tricky, too. Obviously, each observer can't decide if, say, all the participants fail. In the homogeneous case, if there aren't too many failures, we want all the learners to decide, but we haven't yet defined "aren't too many failures" in the heterogeneous case. A caveat: it is technically impossible to guarantee non-triviality, agreement, and termination, even in the homogeneous setting, if you want to tolerate failures and make all the assumptions we've made so far (this is the classic FLP impossibility result). Protocols get around this either by using randomness cleverly, in which case they may guarantee probabilistic termination, or by guaranteeing termination only when, after some unknown time, there exists some unknown message latency bound. Paxos uses this weakened, partially synchronous termination property, and it's what we use as well. Anyway, we're left with two questions: how do we express heterogeneous failure assumptions, as opposed to homogeneous ones? And can we make a consensus algorithm that works under those assumptions? But before we can answer those, we have to go back and learn a little more about Lamport's Paxos. Everyone remembers Lamport's Paxos as the optimal homogeneous consensus protocol. That means, among other things, that it can tolerate strictly less than half of the acceptors crashing, or a different version can tolerate strictly less than a third of the acceptors being Byzantine. And it's usually set up exactly like that: tolerate f out of n Byzantine failures, or tolerate f out of n crash failures. Pretty much every implementation I've come across does it this way. But Paxos can easily be defined in a much more general way in terms of quorums, which are subsets of participants. When your quorums are just "any n minus f acceptors are a quorum", it has homogeneous properties, but the actual requirements are more general. Intuitively, a quorum is a subset of acceptors which is sufficient to make a learner decide. For a system to have termination, at least one quorum must survive. And for a system to have agreement, you can't have two quorums with an intersection that is entirely Byzantine. So if there are some crash failures but a quorum of non-crashed participants survives, it can allow a learner to decide. But if there are two quorums whose intersection is wholly Byzantine, then each Byzantine acceptor in that intersection can equivocate, meaning it acts one way for the purposes of one learner and another way for the purposes of another learner. That means each quorum can make a learner decide, and they don't necessarily decide the same thing: we don't have agreement. If there are safe acceptors in the quorum intersection, that can't happen, because a safe acceptor won't equivocate. Now, it so happens that with six acceptors, you can make a Paxos where any four are a quorum, and this Paxos will tolerate one Byzantine and an additional crash failure, but not two Byzantine failures. So this quorum-based definition is already a little bit heterogeneous. Returning to our questions: how do we express heterogeneous failure assumptions? Well, with the quorum-based Paxos, they're embedded somewhere in the definition of those quorums. And can we make a consensus algorithm that works? Well, Paxos already kind of is, for two out of three kinds of heterogeneity, specifically "not all acceptors are created equal" and "not all failures are created equal".
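Here is a small, self-contained sketch of the two quorum conditions just stated, checked against one concrete failure scenario at a time; the function name and the scenario-based formulation are mine, not the paper's.

```python
from itertools import combinations

def quorums_ok(quorums, byzantine, crashed):
    """Check, for one failure scenario, the two conditions stated above:
    termination needs a quorum with no failed acceptor, and agreement needs every
    pair of quorums to intersect in at least one non-Byzantine acceptor."""
    live = any(q.isdisjoint(byzantine | crashed) for q in quorums)
    safe = all((q1 & q2) - byzantine for q1, q2 in combinations(quorums, 2))
    return live, safe

# The six-acceptor example: any four acceptors form a quorum.
quorums = [set(c) for c in combinations(range(6), 4)]
print(quorums_ok(quorums, byzantine={0}, crashed={1}))       # (True, True): 1 Byzantine + 1 crash is fine
print(quorums_ok(quorums, byzantine={0, 1}, crashed=set()))  # (True, False): 2 Byzantine breaks agreement
```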
But what we really want is to be able to express our learners' assumptions in advance and then see if we can get a consensus protocol from them. So we're going to do it using a thing we call a learner graph. In the learner graph, the nodes are learners. We label each learner with the conditions under which it demands termination, and each edge is labeled with the conditions under which that pair of learners wishes to agree. So suppose we have a pair of learners, I'll call them red, and five acceptors. Red learners want to terminate even if one acceptor fails, but they also want to agree even when one acceptor is Byzantine. In practice, a traditional consensus tolerating one Byzantine failure will work for them. But suppose that there are two more learners, who I'll call yellow. Yellow learners want to terminate even when there are two crash failures, but they accept that they may disagree if there are any Byzantine failures; I'm labeling the edge between them blank to represent "no Byzantine failures". This is simple enough if red and yellow learners don't want to agree: red learners run a one-Byzantine-tolerant consensus, and yellow learners run an entirely separate two-crash-tolerant consensus. But what if the bottom yellow and red learners also want to agree, at least when there are no failures? In fact, agreement is transitive. When there are no failures, the bottom two learners agree, and the red learners agree, so the top red learner agrees with the bottom yellow learner. For this graph, the red-yellow pairs will in fact have to agree when there are no failures. Now, there is a consensus that can satisfy this learner graph, but first I want to explain exactly how we implement these labels. Edge labels express when two learners want to agree. If we're worried specifically about Byzantine and crash failures, then it turns out what we really have to express here is which acceptors are safe, that is to say, which acceptors won't send wrong or out-of-order messages. Even a crashed acceptor is safe, but a Byzantine acceptor is not. So if we want to tolerate any one Byzantine failure out of five acceptors, then any four acceptors are a safe set. The learners demand agreement whenever the elements of one safe set on the edge label are non-Byzantine. I've only drawn one safe set here, but this label actually should have five safe sets, one for each possible tolerated failure. When we're only considering Byzantine and crash failures, it turns out that the conditions that matter to a learner for whether it can terminate concern who can crash. Usually, we assume that Byzantine nodes can crash, but you can represent a live-but-corrupt failure type. So if yellow learners want to terminate with any two out of five acceptors crashed, then they must terminate with any three live acceptors. By tradition, these live sets are called quorums; they are sets of acceptors sufficient to make a learner decide. So in our learner graph, red learners have a quorum of any four acceptors, and all edges of the yellow learner have one safe set consisting of all the acceptors. So now we're ready to build a consensus for these learners, but first we have to take one more look at Lamport's Paxos. Paxos involves a step called 1B, in which acceptors respond to the question: is there any reason you can't consent to this value? And one possible reason is "I've previously accepted something else". Now, I should note that "accept" is a thing that acceptors do in Paxos; it is not the same as when learners decide.
However, this is the step that ensures that intersecting quorums can't make learners decide different things: the acceptors in the intersection will have a reason to put in their 1Bs. So we're going to make two changes to Byzantine Paxos. First, we're going to run a different instance of Paxos for each learner, concurrently. So if we're talking about these two learners, we run two different Paxoses: one red, where any four acceptors are a quorum, and one yellow, where any three acceptors are a quorum. And second, in these Paxoses, 1Bs can include "I've previously accepted something else in another Paxos". That prevents the two learners from deciding different things under some conditions. Now, I know what you're thinking: Isaac, that sounds terribly inefficient, you're running a different Paxos for every single learner. Well, in reality, the vast majority of conceptual messages for each conceptual Paxos share an instantiation on the wire with messages from other Paxoses. Most notably, if the learners' connections all have the same label, we're exactly the same as regular Paxos. For this graph, the Paxoses for the red learners have quorums of any four acceptors, and for the yellow learners, any three acceptors. So long as there is at most one crash failure, there will be one live red and one live yellow quorum, and they all intersect. If there's a Byzantine failure, red learners will agree, but yellow learners may not. And if there are two crashes, the yellow learners will agree and terminate, but the red learners may never terminate. So at last, we have a consensus with that third kind of heterogeneity: not all learners are created equal. Returning to our questions: how do we express heterogeneous failure assumptions? With a learner graph, where each learner expresses its termination conditions and each pair expresses its agreement conditions. Can we make a consensus that works for them? Well, clearly sometimes yes, because we just did one. And sometimes no: if two learners want to agree even when all the acceptors fail, that's not possible. We need to generalize Paxos's quorum requirements. Recall that Paxos requires that one quorum survives, and that no two quorums' intersection is entirely Byzantine. This is what our generalization looks like. To summarize: under the conditions of some edge of the learner graph, any quorum of one learner must have a non-Byzantine intersection with any quorum of the other learner. In order to guarantee termination, just like in the homogeneous case, a learner must have a live quorum. So yes, we know exactly when we can make a consensus that works with a given graph: we run our transitivity process to condense the graph, and then we check whether heterogeneous consensus is possible. We've implemented Heterogeneous Paxos, mostly to show that we can, and also to show that there are performance benefits from heterogeneity. We used the Charlotte framework for authenticated distributed data structures to build blockchains, where each block requires a proof of consensus referencing it, to prove that the referenced block belongs in the chain. This was implemented in about 1,700 lines of Java with fairly standard building blocks. So for one experiment, we have an organization we call Red. Red has two learners and three acceptors, and Red learners want to terminate even if an acceptor has crashed, but they accept that they may not agree if an acceptor is Byzantine. There is also another organization we call Blue.
Blue also has two learners and three acceptors. Red learners do not care about Blue acceptors: Red learners want to terminate even with all the Blue acceptors crashed, and Red learners want to agree even with all the Blue acceptors Byzantine. Blue learners, likewise, don't care about Red acceptors: they want to terminate even when all the Red acceptors have crashed and at most one Blue acceptor has crashed, and they want to agree as long as none of the Blue acceptors are Byzantine, regardless of the behavior of the Red acceptors. Now, I can tell you right now that no consensus is going to use only these participants and get the Red and Blue learners to agree: they're going to need some at least slightly trustworthy third parties. So suppose that all the learners want to terminate as long as no more than one third party has crashed. Red learners want to agree among themselves even when a third party is Byzantine, and Blue learners want to agree with each other even when a third party is Byzantine. So when can Red learners agree with Blue learners? Well, suppose they want to agree when a Red acceptor is Byzantine and a Blue acceptor is Byzantine, but none of the third-party acceptors are. How many third parties do we actually need? If you use a regular old homogeneous Byzantine Paxos, then we still have some Red learners who want to agree in the presence of five failures. Five failures means you'll need 16 homogeneous acceptors, which means at least 10 third parties, and your consensus would look like this, with quorums of any 11 acceptors. But Heterogeneous Paxos can save us some resources here: it turns out that we only need three third parties. Any two Red acceptors plus any two third-party acceptors form a quorum for Red learners, and likewise, any two Blue acceptors plus any two third-party acceptors form a quorum for Blue learners. You can see that if there were a third-party Byzantine failure, Red learners could disagree with Blue learners, but that's OK according to the learner graph. So we've implemented this scenario. For our experiments, we used a client which wrote transactions and passed them to proposer machines to start the Paxos process. Paxos has a best-case latency of three message sends, or five if we include the messages to and from the client. We put 100 milliseconds of artificial latency on the network connections, and therefore the theoretical best time would be 500 milliseconds. We ran homogeneous Byzantine Paxos, which is pretty easy given that our Heterogeneous Paxos reduces to the homogeneous case if we give it homogeneous assumptions. So we used 16 acceptors, and the median latency turned out to be about 555 milliseconds, or 55 milliseconds of overhead processing time. In contrast, the heterogeneous setup, using only nine acceptors, had only 37 milliseconds of median overhead processing time. Here we've plotted the distribution of latency for appending 1,000 sequential blocks to our little blockchain; the bars are the top 5%, bottom 5%, and median latency. The reduced time comes chiefly from having to process fewer messages, which means fewer signatures, fewer hashes, and so on. So demanding that this example system use a homogeneous consensus costs an unnecessary extra seven third-party acceptors, which adds an unnecessary 51% median latency. So that's the kind of consensus you need when you have heterogeneous acceptors, heterogeneous failures, and heterogeneous learners. We built a heterogeneous consensus.
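As a usage example of the quorum-intersection condition stated earlier, the same style of check can be applied to the Red/Blue/third-party configuration above. The acceptor names are mine, and the quorum shapes are the ones stated in the talk (two Red or Blue acceptors plus two third parties).

```python
from itertools import combinations

def hetero_agreement_ok(quorums_a, quorums_b, byzantine):
    """One edge of the learner graph, one failure scenario: every quorum of learner A
    must intersect every quorum of learner B in at least one non-Byzantine acceptor."""
    return all((qa & qb) - byzantine for qa in quorums_a for qb in quorums_b)

reds, blues, thirds = {"r1", "r2", "r3"}, {"b1", "b2", "b3"}, {"t1", "t2", "t3"}
red_quorums  = [set(r) | set(t) for r in combinations(reds, 2) for t in combinations(thirds, 2)]
blue_quorums = [set(b) | set(t) for b in combinations(blues, 2) for t in combinations(thirds, 2)]

# One Red and one Blue acceptor Byzantine: Red-Blue agreement still holds.
print(hetero_agreement_ok(red_quorums, blue_quorums, byzantine={"r1", "b1"}))  # True
# A Byzantine third party can break Red-Blue agreement, which the learner graph allows.
print(hetero_agreement_ok(red_quorums, blue_quorums, byzantine={"t1"}))        # False
```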
We formalized a definition of what it means to have a heterogeneous consensus. We expressed heterogeneous failure assumptions with a learner graph, and we have a formal requirement that tells us when a consensus is possible. Our consensus, put very briefly, runs one Paxos per learner, where the 1B messages can include "I've accepted in another Paxos". I'd like to thank the OPODIS committee, as well as my co-authors, Xinwen Wang, Robbert van Renesse, and Andrew Myers. I am Isaac Sheff, and at this URL you can find links to our paper, this presentation, these slides, and our technical report. Our paper includes a detailed step-by-step description of Heterogeneous Paxos, and our technical report includes additional examples of heterogeneous consensus scenarios and additional experiments, as well as proofs of correctness. Thank you.
|
In distributed systems, a group of *learners* achieve *consensus* when, by observing the output of some *acceptors*, they all arrive at the same value. Consensus is crucial for ordering transactions in failure-tolerant systems. Traditional consensus algorithms are homogeneous in three ways: * all learners are treated equally, * all acceptors are treated equally, and * all failures are treated equally. These assumptions, however, are unsuitable for cross-domain applications, including blockchains, where not all acceptors are equally trustworthy, and not all learners have the same assumptions and priorities. We present the first consensus algorithm to be heterogeneous in all three respects. Learners set their own mixed failure tolerances over differently trusted sets of acceptors. We express these assumptions in a novel Learner Graph, and demonstrate sufficient conditions for consensus. We present Heterogeneous Paxos: an extension of Byzantine Paxos. Heterogeneous Paxos achieves consensus for any viable Learner Graph in best-case three message sends, which is optimal. We present a proof-of-concept implementation, and demonstrate how tailoring for heterogeneous scenarios can save resources and latency.
|
10.5446/52888 (DOI)
|
Hello everyone, I'm Xiong. Today I'll talk about our work on Byzantine lattice agreement in asynchronous systems. This is joint work with Vijay Garg. Here's the roadmap of my presentation: I'll first talk about the motivation, then go over the preliminaries, and then summarize the related work and our results. After that, I'll talk about the basic idea of our O(log f)-round algorithm with resilience f < n/5, and at last I'll talk about some interesting future work. The primary motivation behind the lattice agreement problem is its application in a special type of replicated state machine called an update-query state machine, defined by Faleiro et al. This type of state machine only supports update and query operations, not mixed update-query operations: in one command, you can have either an update or a query, but not both. They also assume that all updates are commutative, which means the order of applying any two updates does not matter. Some examples include conflict-free replicated data types, atomic snapshot objects, and so on. We can use lattice agreement to implement this type of replicated state machine; Faleiro et al. already give a procedure to do that, and we can also achieve linearizability. As we all know, for general replicated state machines we usually use consensus-based algorithms like Paxos to reach agreement and ensure a total order on commands. But consensus is impossible to solve in an asynchronous system even in the presence of one crash failure, whereas lattice agreement can be solved in O(log f) rounds. So if we use lattice agreement to implement update-query replicated state machines, hopefully we can get better performance. Another motivation for lattice agreement is its close relationship to the atomic snapshot object: as shown in prior work, we can use lattice agreement to implement atomic snapshot objects and vice versa. We work in a completely connected message-passing system of n processes. We assume there are at most f Byzantine failures, and Byzantine processes can do arbitrary nasty things. The system is asynchronous: the upper bound on message delay is not known by any process. We measure the round complexity, or time complexity, of our algorithm using asynchronous rounds; an asynchronous round is composed of sending a message to all n processes, waiting for at least n minus f messages back, and doing some local computation. For communication, we assume the links are reliable. In the lattice agreement problem, each process has an input from a join semi-lattice. A join semi-lattice is a partially ordered set that has a join for any nonempty finite subset. It can be denoted as a triple: the first element is the set of elements in the lattice, the second is the order between the elements, and the third is the join operator. Here is an example of a Boolean lattice: the elements are the power set of the universal set {a, b, c}, the order between two elements is defined as set inclusion, and the join operator is defined as set union. A chain in a lattice is a set of elements that are totally ordered.
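To make the Boolean lattice example concrete, here is a tiny model in which elements are subsets of {a, b, c}, the order is set inclusion and the join is set union; the helper names are mine.

```python
# Tiny model of the Boolean lattice described above.
def join(x, y):          # join = set union
    return x | y

def leq(x, y):           # order = set inclusion
    return x <= y

def is_chain(elements):  # a chain is a set of pairwise comparable elements
    return all(leq(x, y) or leq(y, x) for x in elements for y in elements)

print(join(frozenset("a"), frozenset("b")))                      # frozenset({'a', 'b'})
print(is_chain([frozenset("a"), frozenset("ab"), frozenset("abc")]))  # True: {a} <= {a,b} <= {a,b,c}
print(is_chain([frozenset("a"), frozenset("b")]))                # False: {a} and {b} are incomparable
```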
So in the Byzantine lattice agreement problem, each process has an input value from some join semi-lattice, and it must decide an output value, also in the lattice, satisfying three properties. Downward validity requires that the output of each correct process is at least its input. Upward validity requires that the Byzantine processes can introduce at most t values into the decision value of a correct process, where t is the actual number of Byzantine processes in the execution. The most important one is comparable validity, which requires that for any two correct processes p_i and p_j, with outputs y_i and y_j, either y_i is at most y_j or y_j is at most y_i. Here is an example of a possible lattice agreement execution. Suppose we have three processes that propose a, b and c respectively. One possible outcome is that p1 decides {a}, p2 decides {a, b, c}, and p3 decides {a, b, c}. Clearly, the output values are comparable and form a total order. Now let's look at the related work and our results. In the synchronous model, if we do not assume digital signatures, the best known upper bound is an O(log f)-round algorithm with resilience f < n/3; in the same setting, if we assume digital signatures, the resilience can be improved to f < n/2. In the asynchronous model, the model we study in this paper, without digital signatures, Di Luna et al. proposed the best upper bound so far, an O(f)-round algorithm with f < n/3, and they also proved that f < n/3 is the optimal resilience. In this work, without digital signatures, we propose an O(log f)-round algorithm with resilience f < n/5; so we sacrifice the resilience a little bit, but we improve the time complexity exponentially. If we assume digital signatures, we can improve our algorithm to have resilience f < n/3. Now let's look at our O(log f)-round algorithm without the digital signature assumption; for the algorithm with digital signatures, you can go ahead and look at the paper. Suppose first that processes can only have crash failures; we will deal with Byzantine failures later. We use a classifier-based framework used by many previous works. The basic idea is to design a classifier procedure that divides a group into two subgroups and ensures that one group dominates the other. By dominates, we mean that the value set of any process in one group is at most the value set of any process in the other group, so one group is less than the other. This classifier procedure essentially divides the overall problem into two sub-problems, because it guarantees that the two groups are comparable, and we can then recursively invoke the classifier procedure within each subgroup. To completely reduce the overall problem to two independent sub-problems, the classifier procedure has to satisfy some properties.
In the main algorithm, each process holds a value set, initially just the singleton containing its input. The first property requires that the value set of each slave process is a subset of the value set of each master process: within the classifier procedure the processes communicate and update their value sets, and at the end we need to guarantee that the value set of each slave process is a subset of the value set of any master process. For the second property, we need to guarantee that the value set of each master process has more than k values, where k is a threshold parameter associated with the classifier procedure; this is mainly used to guarantee that, if we set the threshold parameters carefully, the recursion terminates in O(log f) rounds. The third property is that the union of the value sets of all slave processes has size at most k, meaning there are at most k values across the value sets of all slave processes; this is also used to guarantee that the recursion terminates in O(log f) rounds. We call one group the slave group and the other the master group, and we need to ensure that the master group dominates the slave group. Given such a classifier, we can apply it recursively within each subgroup to solve the overall problem. Here is an example of such a value classifier. Suppose we have four processes, each p_i proposes v_i, and the threshold parameter k is three. One possible outcome of the classifier is that p1 and p2 are classified as slaves and p3 and p4 are classified as masters. The values of p1 and p2 stay the same, while p3 and p4 take the union of the values received from p1 and p2, namely v1 and v2. So clearly the value sets of p1 and p2 are subsets of the value sets of p3 and p4. If we recursively invoke the classifier procedure within {p1, p2}, their decision values can be at most the union of v1 and v2; and if we recursively invoke the classifier procedure within {p3, p4}, their decision values are at least the union of v1 and v2, because they have v1 and v2 in common and downward validity requires your output to be at least your input. So suppose we have such a classifier procedure, and let's look at how our main algorithm works. In the first round, all processes exchange their values before recursively invoking the classifier procedure; after this first round, each process has at least n minus f values and at most n values. Then they invoke the classifier procedure following a binary tree, where each node of the tree is associated with a classifier procedure: if a process goes to a node, it invokes the corresponding classifier procedure. Initially, all processes are located at the root node, within the same group, and the threshold for that classifier is n minus f/2; you can see that this number is the midpoint of n minus f and n. All the other thresholds of the classifier procedures in the binary tree are set in the same binary fashion, so we can ensure that this tree has O(log f) levels. At the root node, the processes invoke the classifier procedure: if they are classified master, they go right, and if they are classified slave, they go left. The properties of the classifier guarantee that at the end of the algorithm, at the leaf level, if two processes are located at different nodes, which basically means they are in different groups, then they must have comparable values; and if they are within the same node, we can ensure that they have the same value.
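Here is a structural sketch of the recursion just described. The `classifier` call is an assumed placeholder for one classifier invocation (the Byzantine-tolerant version is described next in the talk), and the bookkeeping of groups and labels is simplified.

```python
def decide(my_values, lo, hi, label=""):
    """Skeleton of the recursion described above. `lo`/`hi` bracket the thresholds
    (initially lo = n - f and hi = n), so the tree has O(log f) levels.
    `classifier(values, label, k)` stands for one classifier call at tree node
    `label` with threshold k; it returns the updated value set and the
    master/slave classification."""
    if hi - lo <= 1:
        return my_values                      # leaf of the binary tree: decide
    k = (lo + hi) // 2                        # threshold set "in a binary way"
    my_values, is_master = classifier(my_values, label, k)
    if is_master:                             # masters (more than k values) go right
        return decide(my_values, k, hi, label + "R")
    return decide(my_values, lo, k, label + "L")   # slaves (at most k values) go left
```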
The main challenge caused by Byzantine processes can be demonstrated by the following example. Suppose we have four processes, each p_i proposes v_i, and the threshold parameter k is three. One possible outcome, if we invoke the classifier procedure within this group, is that p1 and p2 are classified as slaves and p3 and p4 are classified as masters, and they update their values in the way shown above. Now suppose p2 is Byzantine. The classifier procedure is supposed to completely separate {p1, p2} and {p3, p4} into two subgroups, which then recursively invoke the classifier procedure. But what if p2 sends value v3 to p1 in a later level of the recursive calls? Then p1 takes the union of v1 and v3, because p1 cannot distinguish whether p2 is Byzantine or not, so the best it can do is take the union. In this case, p1 gets to know value v3. But for process p4, which is a master process, v3 is not in its value set, so the decision value of p4 might not include v3. In this case, we break the domination property of the classifier: p1's value set is no longer a subset of p4's value set. So our main idea is to introduce the notion of admissible values for a group, which is basically the set of values that processes in the group can ever have. After the classifier procedure, the set of admissible values for the slave group acts as an upper bound: the decision value of any slave process must be a subset of this set of admissible values. Our Byzantine-tolerant classifier then needs to guarantee different properties. For B1, each correct slave process needs to have at most k values after the classifier procedure, and each correct master process has more than k values. B2, which is the most important one and also the hardest to guarantee, is that the admissible values of the slave group are a subset of the value set of any correct master process. If we can guarantee this at the current level of the classifier procedure call, then, because the decision value of any slave process is always a subset of the admissible values, and the admissible values are a subset of the value set of any correct master process, and because the main algorithm ensures that each process's value set is non-decreasing, we can guarantee that the decision value of any slave process is at most the value set of any master process. That's the basic idea. For B3, we need to guarantee that there are at most k values among the admissible values of the slave group; this is also used to ensure that there are at most O(log f) recursive calls. So now let's look at our asynchronous Byzantine-tolerant classifier. In the classifier procedure, as in the main algorithm, each process i stores a value set V_i, and each process i also stores a safe value set for each group.
Basically, the safe value set for a group G is the set of values that process i will consider valid if it receives them from some process in this group G. It serves as an upper bound that limits the values that can be sent by a process, and process i uses this safe value set to filter out invalid values broadcast by processes in this group. By the way, we use reliable broadcast to send values, because we have Byzantine processes. We say a value v is an admissible value for a group G if this value is in the safe value sets of at least a quorum of correct processes; basically, the value has to be considered valid by enough correct processes to be admissible. Now let's look at the structure of the classifier procedure and how it guarantees those properties. In the first round, we have a write step: each process reliably broadcasts its current value set to all and waits for acknowledgements from n minus f processes. After that, process i does a read step: it reliably broadcasts a read message to all and waits for n minus f processes to send back whatever values they have; whenever a process receives a read message, it sends back whatever values it has reliably delivered so far. Then the classification is based on the received values. If a process has received more than k values, it is classified as a master. As a master, it sends a master message to all and again waits for values from n minus f processes; when a process receives a master message, it sends back whatever values it has reliably delivered. The master then updates its value set to be the union of all received values. Otherwise, if it received at most k values, it is classified as a slave, and a slave keeps its value set unchanged: its output for the classifier procedure is the same as its input. Now let's look at how this classifier procedure guarantees the three properties we defined. For B1, we need to ensure that the value set of each correct slave process has at most k values and that the value set of each correct master process has more than k values; this is obviously true by the classification we do: if a process received more than k values it is classified master, otherwise it is a slave. For B2, we need to ensure that the admissible values of the slave group are a subset of the value set of any correct master process. Recall the definition of an admissible value: it is admissible for the slave group if it is in the safe value sets of at least a quorum of correct processes. If you look at the code of the master process, after it is classified as a master it sends a master message, reads values back from n minus f processes, and updates its value set to be the union of all received values. In our algorithm we can ensure that this quorum of n minus f processes intersects the quorum defining the admissible value, so the master process reads all the admissible values of the slave group. For B3, there are at most k admissible values for the slave group. In our case, this is guaranteed by the sequential execution of the write step and the read step. Consider the last slave process, possibly Byzantine, that completes the write step: it must be able to read all the admissible values of the slave group, because the admissible values of the slave group can only be values reliably broadcast by the slave processes in the write step. Since this last process is still classified as a slave, the set it reads has at most k values, so there are at most k admissible values.
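Here is a structural sketch of the write/read/classify skeleton just described. The `rb` object stands for an assumed reliable-broadcast layer with the methods used below; it is a placeholder rather than a real library, and the safe-value filtering and proof-related bookkeeping are omitted.

```python
def byzantine_classifier(my_values, k, n, f, rb):
    """Skeleton of the classifier described above (details omitted, names are mine)."""
    # Write step: reliably broadcast my current value set, wait for n - f acknowledgements.
    rb.broadcast(("WRITE", my_values))
    rb.wait_for_acks(n - f)

    # Read step: ask all processes for the values they have reliably delivered so far,
    # and wait for replies from n - f of them.
    rb.broadcast(("READ",))
    received = rb.collect_values(n - f)

    if len(received) > k:                       # more than k values: classified as master
        rb.broadcast(("MASTER",))               # one more read round before adopting values
        received |= rb.collect_values(n - f)
        return my_values | received, True
    return my_values, False                     # at most k values: slave keeps its value set
```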
For future work, we are interested in lower bounds for lattice agreement: there are no lower bound results at all for the lattice agreement problem, and our conjecture is that order of log f rounds is the lower bound. Another interesting problem is whether we can obtain an O(log f)-round algorithm with the optimal resilience f < n/3. All right, that's it. Thank you.
|
We study the Byzantine lattice agreement (BLA) problem in asynchronous distributed message passing systems. In the BLA problem, each process proposes a value from a join semi-lattice and needs to output a value also in the lattice such that all output values of correct processes lie on a chain despite the presence of Byzantine processes. We present an algorithm for this problem with round complexity of O(log f) which tolerates f < n/5 Byzantine failures in the asynchronous setting without digital signatures, where n is the number of processes. This is the first algorithm which has logarithmic round complexity for this problem in the asynchronous setting. Before our work, Di Luna et al. gave an algorithm for this problem which takes O(f) rounds and tolerates f < n/3 Byzantine failures. We also show how this algorithm can be modified to work in the authenticated setting (i.e., with digital signatures) to tolerate f < n/3 Byzantine failures.
|
10.5446/52889 (DOI)
|
In this video, I present a paper called "Self-Stabilizing Byzantine Resilient Communication in Dynamic Networks". First, let me present the setup of the problem. We consider a communication network that can be represented by a graph. In the general case, if a node wants to communicate with another node, it has to send its message through several intermediate nodes. If all these nodes behave correctly, this is not a problem. But in a world where networks keep getting larger, we must take into account the fact that some of these nodes may be faulty. This could be a crash failure, where some nodes simply stop working, but it can also be much worse than that. There are many possible failure models, but here we consider the worst possible one, the Byzantine model, where a faulty node can have a completely arbitrary behavior. In other words, to show that a system is robust to Byzantine failures, we must show that there is absolutely no strategy for the Byzantine nodes to make the system behave incorrectly. The interest of assuming Byzantine failures is that, by definition, they encompass every other possible failure model: if a system is resilient to Byzantine failures, it is also resilient to any other type of failure, which is a very strong safety guarantee. Here, the problem we consider is the problem of reliable communication between two correct nodes P and Q: if P sends a message m, we want to ensure that Q eventually receives and accepts this message — this is the liveness condition — and we also want Q to never accept a false message pretending to come from P — this is the safety condition. For example, P sends a blue message while the Byzantine nodes send a red message pretending to be from P. To have reliable communication in the presence of Byzantine failures, we must satisfy these two conditions. The first works on this topic considered the connectivity of the communication graph, that is, the minimal number of disjoint paths between two nodes of the network. Two paths are disjoint if they have no node in common, except of course the first and the last node. The famous result is the following: to have reliable communication in the presence of up to k Byzantine failures, which can be located anywhere in the network, it is necessary and sufficient for the network to be (2k+1)-connected, that is, between any two nodes P and Q there exist 2k+1 disjoint paths. I will skip the necessary part, but the sufficient part is quite simple. Picture two nodes P and Q connected by 2k+1 disjoint paths, with k Byzantine failures. The Byzantine nodes can compromise at most k of the 2k+1 paths, so the message from P is forwarded correctly along at least k+1 paths. Then Q simply waits until it has received the same message, claimed to be from P, k+1 times over distinct paths. With at most k compromised paths, the Byzantine nodes can never make Q receive more than k copies of a false message pretending to be from P, so the safety condition is satisfied here.
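As a small illustration of this acceptance rule in the static case, here is a sketch of what the receiver Q does; the assumption that each copy arrives tagged with the node-disjoint path it travelled, and all helper names, are mine for illustration only.

```python
from collections import defaultdict

def make_receiver(k):
    """Acceptance rule for a static (2k+1)-connected network (sketch)."""
    copies = defaultdict(set)   # (claimed sender, message) -> disjoint paths seen
    accepted = set()

    def on_receive(sender, message, path):
        copies[(sender, message)].add(tuple(path))
        # At most k of the 2k+1 node-disjoint paths can contain a Byzantine node,
        # so k+1 matching copies arriving on distinct paths cannot all be forged.
        if len(copies[(sender, message)]) >= k + 1:
            accepted.add((sender, message))
        return accepted

    return on_receive
```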
This result was later generalized to dynamic networks, that is, networks whose topology can change over time. The dynamic graph model is the following, and it is fairly standard: there is a presence function indicating whether or not a communication channel between two nodes is present at a given time, and a latency function giving the time needed for a message to cross a communication channel at a given time. We can then define the notion of a dynamic path, that is, a sequence of communication channels enabling a message to travel from node P to node Q after a given time t0. And here it becomes a bit tricky, because in static networks there is an equivalence between the minimal cut and the connectivity. The connectivity is the number of disjoint paths between two nodes, as we saw before, and the minimal cut is the number of nodes one must remove to disconnect the two nodes from each other. In static networks these two numbers are equal — this is Menger's theorem. But in dynamic networks this is no longer true: one can build examples of dynamic networks where the minimal cut is not equal to the connectivity. So, to generalize the result to dynamic networks, the solution was to reformulate everything in terms of minimal cuts — more precisely, in terms of dynamic minimal cuts, that is, the minimal number of nodes that must be removed to cut all the dynamic paths between two nodes. This leads to the necessary and sufficient condition for reliable communication in dynamic networks: the minimal cut of Dyn(P,Q,0) must be strictly greater than 2k, where Dyn(P,Q,0) is the set of dynamic paths from P to Q starting at time 0. There is an associated communication algorithm, which is guaranteed to work when this condition is satisfied. In the following, we call this condition the RDC condition, RDC standing for Reliable Dynamic Communication. Now I can finally present the contribution of the paper. We present an algorithm solving the same problem — reliable communication in a dynamic network in the presence of Byzantine failures — but in addition, this algorithm is self-stabilizing. What do I mean by that? A self-stabilizing system is a system that can recover from any number of transient failures. What is a transient failure? Each node runs an algorithm, which could be thought of as hardware, but in most cases this algorithm uses variables, and the memory slots storing these variables may be initially corrupted. Moreover, the communication channels may initially contain an arbitrary number of corrupted messages, if we view their buffers as another form of memory slots. So a distributed system is self-stabilizing if it always eventually satisfies a desired property — here, reliable communication — in a setting where the initial content of local variables and communication channels is completely arbitrary. And here, on top of that, we want to tolerate Byzantine failures, which do not go away.
So what we are trying to do here is multitolerance, that is, tolerating several types of faults: an unlimited number of transient failures and a given number of Byzantine failures. We present an algorithm ensuring reliable communication in a very harsh setting — an unlimited number of transient failures, permanent Byzantine failures, and a topology that changes over time — which is quite strong in terms of robustness. Of course, like the previous algorithms, this algorithm is only guaranteed to work when a certain condition is satisfied. What is this condition? Without transient failures, we had to satisfy the RDC condition; now, with transient failures, we must require that the RDC condition is always eventually satisfied. What do I mean by that? Before, the condition was that for any two nodes P and Q, the minimal cut of Dyn(P,Q,0) must be strictly greater than 2k. Now, the condition is that for any two nodes P and Q and any time t0, the minimal cut of Dyn(P,Q,t0) must be strictly greater than 2k. At this point you may ask: why should it be always eventually satisfied — isn't it enough for it to be just eventually satisfied? Well, actually, no. In the paper we prove the following result: if the RDC condition is just eventually satisfied, but not always eventually satisfied, then the reliable communication problem is impossible to solve. So in this setting, "just eventually satisfied" is not enough. To show this, we consider a toy example with only three nodes, P, Q and R, and we assume that there is no Byzantine failure this time, and that P wants to send a message to R. In this setting, the RDC condition simply requires that a dynamic path from P to R eventually exists. Suppose there exists an algorithm solving our problem under this condition. First, consider a scenario S1(X), where X is a parameter. P sends the message m1. Step 1: edge e1 appears, then disappears. Step 2: edge e3 appears, then disappears, and during this step R receives the message X from Q. Note that this can happen for any message X, because we assume transient failures occurred, so the communication channel between Q and R may initially contain anything. In this scenario, when edge e1 appears and then edge e3 appears, there exists a dynamic path from P to R, and since the algorithm supposedly solves the problem, R eventually accepts m1. Let us call Y the state of R between step 1 and step 2. Note that Y is independent of X, because R has not interacted with Q at this point. Now consider a second scenario S2. R is initially in state Y — it can be in any arbitrary initial state, because we assume transient failures are part of the problem. P sends m2. Step 1': edge e2 appears, then disappears. Step 2': edge e3 appears, then disappears. Here again there exists a dynamic path from P to R, so again, since the algorithm supposedly solves the problem, R eventually accepts m2. Let us call X' the message sent from Q to R during step 2'. And now, finally, let us consider scenario S1 with parameter X' after step 1, and scenario S2 after step 1'.
What is interesting here is that, from the point of view of R, what happens is exactly the same in both cases: R starts in state Y, then receives X' from Q. So, depending on the scenario, R must accept either m1 or m2, which are two different messages — and that is a problem. What we have shown here is that, when adding transient failures on top of the dynamic graph, it is not enough to assume that the RDC condition is just eventually satisfied, because that can lead to a situation where a node accepts two contradictory messages from the same sender, and thus we cannot have reliable communication. And we did not even need to invoke Byzantine failures here. Okay, before going further, let us restate the problem properly. We have a set of permanently Byzantine nodes, whose identity is of course unknown to the correct nodes, and a dynamic graph. We also assume that, whatever algorithm is executed by the correct nodes, to the extent that this algorithm manipulates variables, the initial state of these variables is arbitrary. The communication channels between nodes may also initially contain an arbitrary number of arbitrary messages, already sent but not yet received. These account for the transient failures. Each correct node P has as input a message P.m0 that it wants to broadcast to the other nodes, and a set P.Acc of accepted messages. P.Acc will contain tuples of the form (Q, m), and when (Q, m) belongs to P.Acc, it means that P considers that the message m was sent by node Q — in other words, that m equals Q.m0. A short remark here: these messages P.m0 are not subject to transient failures, because if they were, it would be impossible to ensure any form of reliable communication, since all messages would be arbitrary. This is simply a convenient way to state the problem, but one could also consider that each node decides to send new messages over time; the reasoning would be the same. So the problem is the following: we want an algorithm, executed by the correct nodes in this setting, such that for any correct nodes P and Q we have the two following properties. First, there is no m' distinct from P.m0 such that (P, m') belongs to Q.Acc — that is, there are no false messages in the accepted sets. Here (P, m') corresponds to a false message, because P did not send m'. This is the safety property, what we want to avoid. The second property, which must eventually be satisfied, is that (P, P.m0) belongs to Q.Acc — that is, Q eventually accepts the correct message of P. This is the liveness property, what we want to achieve. In the paper, we present an algorithm solving this problem; I will give a very high-level description of it. One part of the algorithm is similar to a previous algorithm for the same problem without transient failures: we broadcast the messages along all dynamic paths.
Each time the message passes through a new neighbor, we record the identifier of that node, so we try to keep track of the path traversed. Before accepting a message, a node checks that the set of paths attached to this message is compatible with the RDC condition. This would be sufficient without transient failures. But with transient failures it is, for example, possible that a false message has already been accepted, or, in a trickier way, that enough false messages have already been sent for a false message to be accepted later. In the previous algorithm, messages have the form (s, m, S), where s is the sender, m is the content of the message, and S is a set recording the nodes that have relayed this message. Here, we add a number alpha. This is the main novelty: the algorithm is designed so that false messages resulting from transient failures keep whatever value of alpha they had initially. Then, if the correct nodes eventually use larger values, and if priority is given to messages with the highest values of alpha, the correct messages are eventually accepted. Of course, the Byzantine nodes can still send messages with arbitrary values of alpha, but several rules of the algorithm ensure that this cannot cause a false message to be accepted. Each correct node u has the following variables: a counter u.alpha, which we saw before, which can only increase; a set u.Omega, which stores all the messages received by u without discrimination — as we saw, messages of the form (s, m, S, alpha); and a set u.Acc0, which stores pre-accepted messages, that is, messages that satisfy the RDC-like condition as in the previous algorithm. As we saw, this is not sufficient, and we need additional guarantees to accept them — a condition that involves the parameter alpha. Okay, now let us go into a bit more detail. The algorithm itself consists of five rules. The first rule is that each node regularly increments its counter alpha, from time to time, and adds the following tuple to its set Omega: a tuple with its own identifier, its own message, an empty set, and the current value of alpha. This starts the broadcast of its message along all dynamic paths, as we saw earlier. The second rule is to send the content of u.Omega to each of its neighbors whenever the content of Omega changes or the set of neighbors changes; this way, we never miss the opportunity to exploit a dynamic path, and basically Omega contains all the information u wants to broadcast. The third rule is that, when u receives a tuple (s, m, S, alpha), it first adds to S the identifier of the neighbor that sent the message — this is how the path is recorded — then stores it in Omega; and since Omega changes, it will of course also forward the message to the neighbors, according to the previous rule. Rule number four looks a bit more complicated, but basically what we do here is check whether certain collections of tuples (s, m, S, alpha) satisfy the RDC condition — more precisely, whether there exist a message and an associated sender such that the paths attached to the corresponding tuples satisfy the RDC condition. If so, (s, m) is added to u.Acc0, the set of pre-accepted messages. And finally, rule number five.
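Before describing the last rule, here is a rough, event-driven sketch of the node state and of rules 1 to 4 just described; the helper send_to_neighbors and the predicate rdc_satisfied are assumptions standing in for the network layer and the RDC path check, not the paper's own pseudocode.

```python
class Node:
    """Sketch of rules 1-4 of the self-stabilizing broadcast described above."""

    def __init__(self, ident, m0):
        self.ident = ident
        self.m0 = m0              # the message this node wants to broadcast
        self.alpha = 0            # counter that only increases (rule 1)
        self.Omega = set()        # every tuple (s, m, S, alpha) seen so far
        self.Acc0 = set()         # pre-accepted (s, m) pairs (rule 4)

    def tick(self, send_to_neighbors):
        # Rule 1: periodically restart the broadcast with a fresh alpha value.
        self.alpha += 1
        self.Omega.add((self.ident, self.m0, frozenset(), self.alpha))
        # Rule 2: resend Omega whenever it (or the neighborhood) changes.
        send_to_neighbors(self.Omega)

    def on_receive(self, neighbor_id, tuples, send_to_neighbors, rdc_satisfied):
        changed = False
        for (s, m, S, alpha) in tuples:
            # Rule 3: record the relaying neighbor in the path set S, then store.
            entry = (s, m, frozenset(S | {neighbor_id}), alpha)
            if entry not in self.Omega:
                self.Omega.add(entry)
                changed = True
        # Rule 4 (stub): pre-accept (s, m) once its recorded paths pass the RDC check.
        for (s, m, S, alpha) in self.Omega:
            if rdc_satisfied(s, m, self.Omega):
                self.Acc0.add((s, m))
        if changed:
            send_to_neighbors(self.Omega)   # rule 2 again
```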
Finally, rule number five. First, we introduce a counting function that counts the number of distinct values of alpha associated with a given message. The idea is that, for a given sender identifier s, we consider that the message sent by s is the message with the highest alpha count. The underlying intuition is that, as the actual sender keeps sending its message with new values of alpha, its count eventually becomes larger than the other counts caused directly or indirectly by transient failures. In the correctness proof, among other things, we have to show that the combination of transient and Byzantine failures cannot keep associating a given false message with new values of alpha forever. To conclude, we have given the first self-stabilizing Byzantine-resilient algorithm for reliable communication in dynamic networks, and proved its correctness. To go further, one could consider more probabilistic settings — for instance, random distributions of Byzantine failures, that is, each node having a given probability of being Byzantine, or random evolutions of the dynamic graph. One could also consider permanent failures affecting the communication channels — for instance, communication channels that regularly drop or corrupt messages in transit. Finally, an interesting open question would be to consider the time and space complexity of solving this problem, and to see whether optimizations could be made with regard to these time and space metrics. Thank you for your attention.
|
We consider the problem of communicating reliably in a dynamic network in the presence of up to k Byzantine failures. It was shown that this problem can be solved if and only if the dynamic graph satisfies a certain condition, that we call "RDC condition". In this paper, we present the first self-stabilizing algorithm for reliable communication in this setting - that is: in addition to permanent Byzantine failures, there can also be an arbitrary number of transient failures. We prove the correctness of this algorithm, provided that the RDC condition is "always eventually satisfied".
|
10.5446/52891 (DOI)
|
Hello and welcome to my talk on broadcast in generalized network and adversarial models. This is joint work with Chen-Da and Ueli Maurer at ETH Zurich. We start with the usual setting of the Byzantine fault model: we have n parties using a communication network, and some parties could be corrupted, shown in red here. We model this corruption by assuming there is an external adversary that has complete control over such a subset of parties. A specific party, known as the sender, would like to broadcast its input value v to all other parties such that, if the sender is honest, then all other honest parties end up with the same output value v, and if the sender is corrupted, then we would still like to guarantee that all honest parties agree on some common output value v'. Now let's focus on different types of communication networks. A seminal result by Pease et al. states that in the classical model of communication, where we have bilateral channels between every pair of parties, broadcast is possible if and only if the maximum number of corrupted parties, denoted by t here, is less than n/3. Now one may ask: is it possible to tolerate more corruptions? It turns out that it is possible, but only if we assume some stronger communication primitives in the network. To be specific, Considine et al. showed that in the so-called b-minicast model, where in addition to these bilateral channels we have partial broadcast channels among every subset of b parties, broadcast is achievable if and only if t is less than (b−1)/(b+1)·n. So here we have a 3-minicast model, where there is a 3-minicast channel among every triple of parties; such a 3-minicast channel allows a party to locally broadcast its value to the other two. In this setting, if we set b equal to three, then we can already tolerate up to t < n/2 corruptions. Looking at the previous two results, the natural question is to understand the trade-off between the strength of the underlying communication network and the strength of the adversary that can be tolerated. Looking at some related work, we have already seen the bounds on the number of corrupted parties that can be tolerated in the classical model and in the b-minicast model, respectively. One thing to note is that until now we have been looking at a specific kind of adversaries, characterized by a threshold, that is, a maximum number of parties that can be corrupted by them. But we can actually talk about a very general form of adversaries that are characterized only by the subsets of parties that can be corrupted by them: for example, the adversary A can either corrupt a subset A1 of parties, or a subset A2, and so on. It was shown by Raykov that broadcast is possible in the complete b-minicast model against a general adversary if the adversary satisfies the so-called (b+1)-chain-free condition. We will see shortly what this chain condition actually is, but it is quite important in the rest of this presentation. One issue with these results is that they assume complete network structures; for example, in the b-minicast model we assume there is a b-minicast channel among every subset of b parties. In our work, we are interested in communication models that need not be complete, that is, where only certain subsets of minicast channels may be available in the network.
At the same time, we are also interested in broadcast that is secure against this general form of adversaries. Looking at some related work, the broadcast problem has already been studied for general 3-minicast networks with respect to threshold adversaries in this range, because if t is less than n/3, then broadcast is already possible in the classical model with bilateral channels, and if t is greater than or equal to n/2, then broadcast is not possible even with a complete 3-minicast model. Similarly, in our work we focus on general b-minicast networks and, at the same time, general adversaries in this range, that is, adversaries which contain a b-chain and which are (b+1)-chain free; it will become clear shortly why we choose this class of adversaries. Now, just to give a taste of our results, let's look at a very simple example. Consider a complete 3-minicast model among six parties and a threshold adversary that can corrupt up to three parties. In this setting we know that broadcast is impossible, thanks to the result by Considine et al.: if you plug in the parameters, setting b equal to three and n equal to six, you can see that broadcast is indeed impossible. At the same time, if we upgrade our network to a complete 4-minicast model, then broadcast becomes possible. But what if certain 4-minicast channels are missing from the complete 4-minicast model? Specifically, if this particular 4-minicast channel were missing, along with this one, this one, and this one — so if these four 4-minicast channels are missing from the network — then in our work we show that broadcast again becomes impossible for the same adversary. Now, keeping this incomplete 4-minicast network fixed, if we look at a weaker adversary — namely, one that instead of corrupting any subset of three parties can only corrupt certain subsets of three parties, either this subset, this subset, this subset, or this subset — then we again show that broadcast is possible in this incomplete 4-minicast network. On a high level, this is what we are doing. To analyse the broadcast problem in general networks, we first partition the space of all possible general adversaries into the following classes, in increasing order of strength. Here the weakest class is U0, which contains adversaries that are 3-chain free; they are considered the weakest because against them broadcast is already possible in the classical model of communication with bilateral channels. Subsequent classes are defined as follows: UB contains adversaries which contain a b-chain and which are (b+1)-chain free. Once we have an ordering of the adversaries, we can also have a corresponding ordering of the communication models. The weakest class is, of course, the classical model of communication, where we only have bilateral channels. We can make this model stronger by adding some 3-minicast channels on top of it — this region corresponds to the incomplete 3-minicast networks — and once we have a 3-minicast channel for every triple of parties, we have the usual complete 3-minicast model.
You can repeat the process for 4-minicast channels, for 5-minicast channels, and so on. Now, for each of these classes of adversaries, say the general class UB: since the adversary contains a b-chain, we know that broadcast is impossible in the complete (b−1)-minicast model, thanks to Raykov's result, and by extension broadcast is also impossible in the weaker models to the left. At the same time, since this class of adversaries does not contain a (b+1)-chain, broadcast is possible in the complete b-minicast model, and hence also in the stronger communication models to the right. This much was known from prior work. In our paper, one of our contributions is to bridge this gap: specifically, for a general class UB, we show that broadcast is impossible in certain incomplete b-minicast networks in this region. We also have some positive results: we identify some weak adversaries in each class and show that, against them, broadcast is achievable in networks that need not be complete, that is, where certain b-minicast channels are missing. Now let's look at these chain conditions. What do we mean when we say a general adversary A contains a b-chain? This is again the usual form of a general adversary, described in terms of the subsets of parties that can be corrupted. Before talking about the general b-chain, let's look at a simple example of a six-chain. We say that an adversary contains a six-chain if there exists a partition of the n parties into six non-empty subsets such that, leaving out any two adjacent sets, the adversary can corrupt the rest. So in this case, say the adversary can corrupt the union of these four subsets — let's call it A1 — and similarly A2, A3, A4, A5, and A6. If the adversary can corrupt these subsets of parties, then it is said to contain a six-chain, where the six-chain is described by the partition S1, S2, and so on up to S6. We say that an adversary does not contain a six-chain if this property does not hold, and the definition of a b-chain generalizes in a straightforward manner. After describing these chain conditions, we come to one of our main results, namely that we characterize certain minicast channels as essential for broadcast. To give an intuition, let's start with a result of Raykov, which states that in a smaller setting with b parties, if we are in a complete (b−1)-minicast model, then broadcast is impossible if the adversary contains a b-chain. The key idea we use is to extend this impossibility result from a complete (b−1)-minicast model to an incomplete b-minicast model. To illustrate this idea, let's start with an example of four parties in a complete 3-minicast model, where the adversary contains a 4-chain described by S1, S2, S3 and S4. In this setting we know that broadcast is impossible. Now let's look at another setting with six parties in an incomplete 4-minicast network — we will see in a moment which minicast channels are actually missing — and let's say the adversary in this case also contains a 4-chain, described by S1', S2', S3' and S4'. The network is nothing but the complete 4-minicast model with certain 4-minicast channels missing, and the missing 4-minicast channels are exactly those with their endpoints in each of these partition sets.
That is, we remove this particular 4-minicast channel from the network, this one, this one, and this one. So we remove four 4-minicast channels from the network, and then we can show that broadcast becomes impossible in this setting as well. We show this via a reduction: if broadcast were somehow possible in this six-party setting, then broadcast would also be possible in the four-party setting, which contradicts Raykov's result. Specifically, if broadcast is possible with respect to this sender, then broadcast is also possible with respect to this sender in the four-party setting, where S1 simulates the pair of parties in S1', S2 simulates S2', S3 simulates S3' and S4 simulates S4', and S1 simply runs the broadcast protocol used by this party in the six-party setting. One key thing to note is that in this incomplete network, all available 4-minicast channels can be simulated. For example, the 4-minicast channel shared by parties from S1', S3' and S4' can be simulated by the 3-minicast channel shared by S1, S3 and S4. But the 4-minicast channel that we removed from the network, which has its endpoints in S1', S2', S3' and S4', could only be simulated if there were a 4-minicast channel among S1, S2, S3 and S4 — and we do not have 4-minicast channels in a 3-minicast model. Hence, in a sense, the particular 4-minicast channels that we removed are essential for broadcast: if they are not present in the network, broadcast becomes impossible. This brings us to our necessary condition on general networks for broadcast to be achievable against general adversaries. We say that broadcast is achievable in a general network against an adversary in the class UB only if, for every b-chain contained in A, described by a partition S1, S2, ..., Sb (recall that adversaries in the class UB contain b-chains), there exists a corresponding b-minicast channel in the network shared by a party from S1, a party from S2, and so on, and a party from Sb. These b-minicast channels are nothing but the essential minicast channels we identified previously: if such channels are missing from the network, then broadcast becomes impossible. Here we are restricted to b-minicast channels, but we can actually prove something stronger with respect to the lower minicast channels as well. More concretely, not only for b-chains but for every k-chain in A, where k ranges from 3 to b, of the form S1, S2, ..., Sk, broadcast is possible only if there exists a corresponding k-minicast channel in the network shared by a party from S1, a party from S2, and so on, and a party from Sk.
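As a small illustration of the chain condition that this necessary condition quantifies over, here is a sketch that checks whether a given partition forms a b-chain for an adversary structure represented as a collection of maximal corruptible sets; both that representation and the cyclic treatment of "adjacent" classes (which mirrors the six corruptible sets listed in the six-chain example above) are assumptions of this sketch.

```python
def is_b_chain(partition, adversary_sets):
    """partition: list of disjoint, non-empty frozensets S_1..S_b covering all parties.
    adversary_sets: iterable of frozensets, the maximal corruptible subsets.
    Returns True iff, leaving out each pair of adjacent classes (taken cyclically
    here), the union of the remaining classes is corruptible."""
    b = len(partition)
    if b < 3 or any(not s for s in partition):
        return False
    for i in range(b):
        j = (i + 1) % b                      # the two adjacent classes left out
        rest = frozenset().union(*(partition[m] for m in range(b) if m not in (i, j)))
        # 'The adversary can corrupt the rest': rest fits inside some corruptible set.
        if not any(rest <= a for a in adversary_sets):
            return False
    return True
```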
After characterizing our necessary condition, we also provide some sufficient conditions on general networks for broadcast to be achievable against general adversaries. Before stating the result, let me give some intuition. Recall Raykov's result that broadcast is possible in the complete b-minicast model among n parties if the adversary is (b+1)-chain free; in our work we extend this positive result to a general b-minicast network which could be incomplete, where some b-minicast channels are missing. We want to apply Raykov's broadcast protocol in a possibly incomplete b-minicast network, but the problem is that such a protocol requires a complete b-minicast model to begin with. Our key idea is to start with an incomplete b-minicast network and recreate a complete b-minicast model, by simulating the missing b-minicast channels with Raykov's protocol: we apply Raykov's protocol in a local setting among b parties to patch these missing b-minicast channels. We will make this idea clearer with an example shortly, but we call such b-minicast channels, which can be simulated, non-essential b-minicast channels — their presence is not really required in the network for broadcast to be achievable. To illustrate the simulation idea, let's look at an example among six parties, where we assume that the underlying network has a complete set of 3-minicast channels. Raykov's result, applied locally, states that if we have b parties in a complete (b−1)-minicast model, then broadcast among them is possible if the adversary is b-chain free. So in our example, if we focus on these four parties, they can simulate a 4-minicast channel among themselves, since they lie in a network with a complete set of 3-minicast channels: if the adversary is 4-chain free, a virtual 4-minicast channel can be simulated. But we have to be a bit precise when talking about the adversary, because the adversary is defined with respect to all six parties, whereas what we actually require is that the adversary be b-chain free when its corruption power is locally restricted to these four parties only. To make this notion precise, start with the usual characterization of a general adversary by the subsets of parties it can corrupt, and define a new adversary structure, A|ρ, which is the projection of A onto a subset of parties ρ: A|ρ is the set containing the intersection of each corruptible set with ρ. In our example, if ρ is the set of these four parties and the original adversary A could corrupt this set of three parties — call it A1 — then correspondingly A|ρ can corrupt these two parties, since we do not care about the parties outside ρ. This brings us to our sufficient condition: broadcast is achievable in a network with respect to any sender, while tolerating a general adversary in the class UB, if, first of all, there is a complete set of (b−1)-minicast channels in the network — recall that this was the starting assumption for the simulation idea — and, for each subset ρ of b parties, if the adversary's projection A|ρ onto these b parties contains a b-chain, then there is a b-minicast channel among ρ. If A|ρ does not contain a b-chain, then we do not require a b-minicast channel among ρ, because it is non-essential. So it is sufficient to have all b-minicast channels in the network except the non-essential ones in order for global broadcast to be achievable.
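The projection A|ρ used in this condition is easy to write down; here is a one-line sketch, again with the set-of-sets representation of the adversary structure, and the party names in the comment, assumed purely for illustration.

```python
def project(adversary_sets, rho):
    """A|rho: restrict every corruptible set of the structure A to the subset rho."""
    rho = frozenset(rho)
    return {frozenset(a) & rho for a in adversary_sets}

# For instance, if A can corrupt {P1, P2, P3} (among other sets) and
# rho = {P1, P2, P4, P6}, then A|rho contains {P1, P2} for that corruptible set.
```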
But again, we can prove something stronger: we can use this simulation argument recursively among the lower minicast channels as well, in a bottom-up approach. We first require the network to contain only a complete set of bilateral channels. Then we start with k equal to three: for each subset of three parties we identify the set of non-essential 3-minicast channels, and it is sufficient to have just the other 3-minicast channels present in the network. Then we move on to subsets of four parties, and so on. Of course, it could be that the condition ends up requiring all the minicast channels to be present in the network, in which case the condition is not really helpful to us and we are back to the case of a complete b-minicast model. In our work, we show that this condition is not trivial for certain weak adversaries in the class UB, which we call b-chain adversaries. If we review the hierarchy of adversaries we have seen: recall that the class UB contains adversaries which contain a b-chain and which are (b+1)-chain free. The weak class of adversaries called b-chain adversaries just contain a single b-chain and nothing more; that is, these adversaries minimally satisfy the requirement to be considered part of UB. We show that, with respect to these b-chain adversaries, there do exist minicast channels that are non-essential, so in this sense the sufficiency condition is not trivial for these adversaries. Coming to some of our other results that I could not cover in this presentation: our conditions actually allow us to derive separate bounds on the number of b-minicast channels that are necessary and that suffice for achieving global broadcast in general networks secure against general adversaries. Prior work provided such tight bounds on the number of 3-minicast channels in a network, and in our work we could extend that analysis to general b-minicast networks, although our bounds are not tight. This brings us to an important open problem, namely to come up with tight necessary and sufficient conditions that general networks need to satisfy in order to realize secure broadcast against general adversaries. Our conditions are unfortunately not tight, but we hope that our results provide some starting steps in this direction. In fact, for general 3-minicast networks, tight necessary and sufficient conditions were already derived by Ravikant et al., using a technique called virtual party emulation, and in our work we show that a straightforward extension of this technique cannot be applied to the general b-minicast setting. Another interesting open problem would be to study the practical implications of our results, or of subsequent results, on the achievability of broadcast in general networks secure against general adversaries. With this, I conclude my talk. Thank you.
|
Broadcast is a primitive which allows a specific party to distribute a message consistently among n parties, even if up to t parties exhibit malicious behaviour. In the classical model with a complete network of bilateral authenticated channels, the seminal result of Pease et al. [JACM'80] shows that broadcast is achievable if and only if t < n/3. There are two generalizations suggested for the broadcast problem -- with respect to the adversarial model and the communication model. Fitzi and Maurer [DISC'98] consider a (non-threshold) general adversary that is characterized by the subsets of parties that could be corrupted, and show that broadcast can be realized from bilateral channels if and only if the union of no three possible corrupted sets equals the entire set of n parties. On the other hand, Considine et al. [JC'05] extend the standard model of bilateral channels with the existence of b-minicast channels that allow to locally broadcast among any subset of b parties; the authors show that in this enhanced model of communication, secure broadcast tolerating up to t corrupted parties is possible if and only if t < (b−1)/(b+1)·n. These generalizations are unified in the work by Raykov [ICALP'15], where a tight condition on the possible corrupted sets is presented such that broadcast is achievable from a complete set of b-minicasts. This paper investigates the achievability of broadcast in general networks, i.e., networks where only some subsets of minicast channels may be available, thereby addressing open problems posed by Jaffe et al. [PODC'12], Raykov [ICALP'15]. To that end, we propose a hierarchy over all possible general adversaries, and identify for each class of general adversaries 1) a set of minicast channels that are necessary to achieve broadcast and 2) a set of minicast channels that are sufficient to achieve broadcast. In particular, this allows us to derive bounds on the amount of b-minicasts that are necessary and that suffice towards constructing broadcast in general b-minicast networks.
|
10.5446/52894 (DOI)
|
Hello everyone, I'm Haimin from Nanjing University. Today I will talk about broadcasting competitively against an adaptive adversary in multi-channel radio networks. This is joint work with my advisor, Chaodong Zheng. We consider single-hop radio networks with n nodes that communicate over a shared medium containing C channels. Time is divided into synchronized rounds called slots, so the time complexity is measured by the number of slots. Each node is equipped with a half-duplex transceiver, so in each slot it can either send on a channel, listen on a channel, or remain idle. A node cannot send and listen simultaneously, and it cannot work on multiple channels in one slot. Many real-world wireless devices are powered by a battery and are able to switch between active and sleep states, so we also care about the energy complexity, measured by the number of channel accesses. Specifically, sending or listening for one slot costs one unit of energy, while idling incurs no cost. In each slot, only nodes listening in that slot get feedback. In the single-channel setting, in other words if C is 1, the feedback is determined by the number of sending nodes: if no node sends, the feedback is silence; if exactly one node sends, all listening nodes receive the message; when at least two nodes send, the feedback is noise. Now, back to the multi-channel setting. Since there is no interference among channels, for each channel the feedback depends on the number of nodes sending on that channel, and only nodes listening on that channel receive the status as feedback. For example, two nodes send in one slot, but only one of them sends on channel two, so the pink node listening on channel two receives the message from the blue node. The shared nature of the wireless medium makes it vulnerable to jamming. So in this model, an adversary called Eve also participates in the execution by jamming, although she cannot send any meaningful messages. As long as a channel is jammed, nodes listening on it receive noise. If a node receives noise, it cannot distinguish whether the noise is due to message collision, jamming, or both. For example, if in the second slot Eve jams channel two, then the pink node hears noise, while the green node on channel one still hears the message. In each slot, Eve can jam any set of channels — for example, she jams all channels in the third slot. Eve is adaptive, so she is given the entire execution history so far to decide which set of channels to jam in the current slot. Clearly, we need to put certain restrictions on Eve and then develop corresponding algorithms. Notice that Eve also consumes energy when she injects interfering signals. In view of this, a recent and interesting framework called resource-competitive analysis has been proposed. The only restriction on Eve is the total amount of jamming. Specifically, we assume jamming one channel for one slot costs one unit of energy, since the expenditures for sending, listening, and jamming are often of the same order. Eve's budget is capped at T, which is unknown to the nodes. We say an algorithm is resource-competitive if, no matter what jamming strategy Eve adopts, the maximum cost of each node is at most some function ρ(T) plus some term τ. Here, ρ is a function of T and possibly other parameters such as n and C, which captures the additional cost due to jamming, while the term τ is the cost when Eve is absent, so τ should not depend on T. Our main goal is to minimize the function ρ and keep τ small if possible.
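As a quick back-of-the-envelope illustration of what resource competitiveness buys, here is a small numeric sketch using the per-node bounds that will be stated shortly (about √(T/n) energy and T/C time); the concrete numbers are made up, and logarithmic factors and the τ term are ignored.

```python
import math

T, n, C = 10**6, 100, 16            # Eve's budget, number of nodes, channels
per_node_energy = math.sqrt(T / n)  # ~sqrt(T/n) = 100 units per node
total_honest = n * per_node_energy  # ~10,000 units across all honest nodes
runtime_slots = T / C               # ~T/C = 62,500 slots of delay Eve can cause
print(per_node_energy, total_honest, runtime_slots)
# Eve burns 1,000,000 units of energy, yet the honest nodes together spend only
# about 10,000 -- roughly a 100x gap in Eve's disfavor.
```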
We consider T to be large compared with n or C. Especially interesting are algorithms in which the cost of each node is o(T): this means that to prevent the nodes from solving the problem, Eve must spend much more energy than each node. The problem we focus on is broadcast, in which a source node wants to send a message to all other nodes; specifically, all nodes should receive the message and then halt. We want to minimize the additional energy cost and the additional time due to jamming. We are also interested in scenarios where n is unknown: for example, networks organized in ad hoc mode without infrastructure often cannot provide knowledge of n. Before the execution, each node knows whether it is the source, and the number of channels C. Here are the known results; all of these are Monte Carlo algorithms and hold with high probability in n. In 2011, King, Saia and Young developed the first resource-competitive algorithm. Later, they also developed an algorithm in which each node halts within Õ(T) slots and each node's cost is Õ(√(T/n)); that algorithm works even if n is unknown to the nodes. More recently, it was shown that multiple channels allow a linear speed-up in time while the cost remains unchanged, but that algorithm only works when C is small compared with n, and it can only tolerate an oblivious adversary. There are also some lower bound results. The Ω(T/C) time bound is obvious, since Eve can jam all channels during the first T/C slots. As for the energy complexity, prior work proves there must be a node with cost on the order of √(T/n), showing that the algorithm above is optimal up to poly-logarithmic factors. So our motivation is to close the gap by considering an adaptive adversary and designing algorithms that work for arbitrary n and C values. Besides, is there a more competitive algorithm in the multi-channel setting? In this work, on the upper bound side, we develop two algorithms that work for any n and C values and can tolerate an adaptive adversary, both with running time O(T/C) and per-node energy cost Õ(√(T/n)). The first algorithm is very simple, and the second algorithm works even when n is unknown to the nodes. We also answer the second question by proving a lower bound of roughly √(T/n) on the per-node energy cost in the multi-channel setting. So our algorithms achieve near-optimal energy complexity and optimal time complexity simultaneously. Let me first introduce our first algorithm, MultiCast. Before introducing the algorithm itself, we show a framework that is used by all these algorithms. An algorithm groups slots into consecutive epochs; in each epoch it executes a jamming-resistant broadcast scheme and checks whether to halt at the end of the epoch. The broadcast scheme should be effective and competitive, so that each node learns the message while spending much less than Eve whenever jamming is not too heavy. As for the halting criterion, it should be correct, so that each node has received the message when it decides to halt; besides, to stop nodes from halting, Eve should be forced to spend much more energy. Finally, since nodes do not know T, the usual approach is to let epoch lengths scale exponentially, so for both the nodes and Eve the total energy consumption is dominated by the last epoch; later we only consider one fixed epoch. Our first algorithm uses an epidemic style of broadcast. Assume node u is still active at the beginning of epoch i. In each of the R slots of epoch i, it hops to a channel chosen uniformly at random and decides to send with probability p, otherwise it listens. If u decides to send, it sends the message if u.m is true and sends a beacon if u.m is false. Here, u.m is a Boolean variable indicating whether u already knows the message; before the execution, only the source node has it set to true. We use X to denote the number of silent slots that u heard during the epoch. At the end of the epoch, u halts if X is at least H, where the threshold H is half the expected number of listening slots. Finally, we set proper values of R and p.
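To make the epoch structure concrete, here is a toy, centralized simulation of one epoch of this epidemic broadcast. It is only a sketch: the real protocol is distributed and Eve is adaptive, whereas here the jamming pattern is handed in as a fixed list; the sending probability follows the talk, and everything else is illustrative.

```python
import math
import random

def run_epoch(n, C, R, knows, jammed):
    """Simulate one epoch. knows[u]: whether node u already has the message;
    jammed[t]: set of channels Eve jams in slot t (fixed up front in this toy).
    Returns the updated knows list and each node's halting decision."""
    p = min(1.0, math.sqrt(C / (n * R)))       # sending probability from the talk
    silences = [0] * n                          # X_u: silent slots heard by node u
    for t in range(R):
        chan = [random.randrange(C) for _ in range(n)]
        send = [random.random() < p for _ in range(n)]
        for ch in range(C):
            senders = [u for u in range(n) if chan[u] == ch and send[u]]
            for u in range(n):
                if chan[u] != ch or send[u]:
                    continue                    # u is not listening on ch
                if ch in jammed[t]:
                    continue                    # noise due to jamming
                if len(senders) == 1 and knows[senders[0]]:
                    knows[u] = True             # clean slot: message delivered
                elif len(senders) == 0:
                    silences[u] += 1            # clean, empty slot: silence
    H = 0.5 * (1 - p) * R                       # half the expected listening slots
    halts = [silences[u] >= H for u in range(n)]
    return knows, halts
```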
In the analysis, we focus on answering two questions: first, why does the broadcast scheme work when p is √(C/(nR)); second, why is the halting criterion correct and competitive? Let's begin with the first question. In the single-channel setting, the core idea is to broadcast quickly. For example, consider n = 2 and name the source node Alice and the other node Bob. If Alice sends in √R random slots and Bob listens in √R random slots, then by a birthday-paradox style argument there must be a successful transmission, even if Eve jams a constant fraction of the slots. Prior work has generalized this expression to any n. Surprisingly, when C is larger, the cost of each node is smaller. When C is n/2, roughly speaking there are two nodes on each channel on average, so the right sending probability is still 1/√R, and we obtain the expression √(C/(nR)), which reflects another dimension of the birthday paradox. Now imagine an epoch with R fixed slots, and suppose for intuition that each node could work on several channels simultaneously. Compared with the single-channel setting, if we have C channels, the probability should be multiplied by a √C factor; then each pair of nodes meets on a common channel with constant probability, which means the single-channel analysis works again. So √(C/(nR)) is the right sending probability for any n and C values. Next, we focus on the halting criterion. Recall that each node counts the number of silent slots it heard as X, and halts if and only if X is at least H. Fix a node u. If the jamming is weak, we have already shown that u knows the message; otherwise the jamming is severe, so the expectation of X is below the threshold and u does not halt. Combining the two cases, our criterion is correct. Here we use a formal definition with two parameters: if there are more than an a-fraction of slots in each of which more than a b-fraction of the channels are unjammed, then we say the epoch is (a, b)-weakly jammed. For example, the picture shows an epoch of six slots, and we say it is (4/6, 3/4)-weakly jammed. As for competitiveness, we show that if the epoch is (0.8, 0.8)-weakly jammed, then the expectation of X is large, so each node should halt. We prove the competitiveness by bounding the probability of the joint event that X is below the threshold H while the epoch is weakly jammed, or sometimes by bounding the corresponding conditional probability. We need some extra definitions: let Q_k be the set of unjammed channels in slot k, and let G_k be all nodes' behavior in slot k; then the indicator X_k is determined only by G_k and Q_k. In our algorithms, nodes' behavior in different slots is independent. If Eve were oblivious, the jamming vector Q would be fixed before the execution, so X_1, X_2, ..., X_R would be independent and the expectation of their sum X would be large, and we could use concentration inequalities, such as Chernoff bounds, to bound the conditional probability. However, an adaptive adversary can observe nodes' behavior and determine the set Q_k on the fly. For example, Q_3 can depend on G_1 by this definition; also X_1, indicating whether u hears silence in slot 1, depends on G_1. So X_1 and X_3 are not independent, and we are deprived of the useful tool of Chernoff bounds.
Even when conditioned on a fixed Q_3, the distribution of G_1 may be distorted, which means we have no guarantee on the expectation of X_1. In this work, we use coupling techniques to resolve the dependence issue. By the law of total probability, we can instead bound, for each fixed weakly-jammed vector q, the probability that X is below H and the actual jamming vector Q equals q. We create a coupled process and relate X to a variable Y_q in the coupled process. Specifically, in the imagined process Γ_q, all nodes run the same algorithm and Eve jams obliviously using the fixed vector q; then Y is the number of silences heard by u, and Y is a sum of independent indicator variables. We couple X and Y in a third process by using identical node behaviors. The key point is that in each slot k, whenever the actual jamming vector agrees with the fixed q, X_k always equals Y_k, so we can instead bound the probability that Y deviates much from its expectation. However, there is still a problem: there are exponentially many possible vectors q, so the sum of these failure probabilities is too large. Fortunately, we can still couple X to Y if we underestimate X and use a more complicated version of the coupling. Our second algorithm works even when n is unknown to the nodes. When the value of n is unknown, the principal issue is how to properly set the nodes' working probability. A natural approach is to guess n. Specifically, epoch i now contains i phases; different phases may have different working probabilities, but the probability is fixed within one phase, by assuming n is 2^j in the j-th phase. However, the broadcast succeeds only when j is close to log n. Even worse, when the probability is too small, most of the feedback received would be silence if Eve does not jam or jams only lightly, so nodes may halt in a phase in which the broadcast fails, which means the criterion becomes incorrect. Another challenge is related to competitiveness: the total number of slots in phases that achieve broadcast should be a sufficiently large fraction of the number of slots in the whole epoch. This ensures Eve cannot gain an advantage by jamming only in certain phases. We present our second algorithm, MultiCastAdvanced, which adopts the above framework. Now an epoch contains multiple phases, each with R slots. Each node u independently adjusts its probability p_u based on the feedback it receives. Before the first phase of epoch i, the probability is initialized to a very small value; after each phase, p_u is multiplied by a factor of 2 raised to the power max{η − 1/2, 0}, where the fraction η is the number of silent slots heard by u divided by the expected number of listening slots. This way of updating p may seem strange, but it has two nice properties: first, Eve has to keep jamming heavily to prevent p_u from reaching the optimal value; second, p_u and p_v are always close in each phase. To bound the difference between nodes' probabilities, we ensure that no matter what the jamming vector Q is, the number of silent slots X is always close to its expectation. In the previous algorithm, by contrast, we only show that X is large when the jamming is weak. This is a big difference, so handling adaptivity by coupling becomes more involved. The last part is about the lower bound on energy complexity. In the single-channel setting, previous work has shown that in one-to-one communication, Alice or Bob must spend on the order of √T. We then show that one-to-one communication can be reduced to n-node broadcast: specifically, Alice simulates the source and Bob simulates all other nodes.
It's easy to see that this reduction also holds in the multi-channel setting. So the only thing left is to show the root T lower bound for 1-to-1 broadcast in the multi-channel setting. We first introduce the strategy of Eve in each slot. For each channel, if the probability that Alice successfully transmits to Bob exceeds one half, then Eve will jam the channel. The success probability is the product of the probability that Alice sends and the probability that Bob listens on that channel. To prove a lower bound for any algorithm, we only need to consider oblivious algorithms. Here, an oblivious algorithm means the sending and listening probabilities in each slot do not depend on the actual behavior in past slots. In other words, they are decided before the execution. So why do we only need to consider oblivious algorithms? This is because any path to a leaf of the decision tree A corresponds to an oblivious algorithm, so A can be seen as a convex combination of these oblivious algorithms. Moreover, at least one oblivious algorithm is as good as A in both success probability and expected cost. Finally, according to whether Eve will ever deplete her energy, we consider two scenarios and prove a lower bound for each. In summary, we design fast and competitive broadcast algorithms using epidemic broadcast with proper working probabilities. In the analysis, to handle the adaptivity of the adversary, we couple the real execution with carefully crafted executions. We also prove an energy lower bound for the broadcast problem in the multi-channel setting. One remaining question is: can we prove the energy lower bound against an oblivious adversary? Our current energy lower bound approach is for an adaptive adversary. Thank you for your attention.
|
Broadcasting in wireless networks is vulnerable to adversarial jamming. To thwart such behavior, resource competitive analysis is proposed. In this framework, sending, listening, or jamming on one channel for one time slot costs one unit of energy. The adversary can employ arbitrary strategy to disrupt communication, but has a limited energy budget T. The honest nodes, on the other hand, aim to accomplish broadcast while spending only o(T). Previous work has shown, in a C-channels network containing n nodes, for large T values, each node can receive the message in ~O(T/C) time, while spending only ~O(√T/n) energy. However, these multi-channel algorithms only work for certain values of n and C, and can only tolerate an oblivious adversary. In this work, we provide new upper and lower bounds for broadcasting in multi-channel radio networks, from the perspective of resource competitiveness. Our algorithms work for arbitrary n,C values, require minimal prior knowledge, and can tolerate a powerful adaptive adversary. More specifically, in our algorithms, for large T values, each node’s runtime is O(T/C), and each node’s energy cost is ~O(√T/n). We also complement algorithmic results with lower bounds, proving both the time complexity and the energy complexity of our algorithms are optimal or near-optimal (within a poly-log factor). Our technical contributions lie in using “epidemic broadcast” to achieve time efficiency and resource competitiveness, and employing coupling techniques in the analysis to handle the adaptivity of the adversary. At the lower bound side, we first derive a new energy complexity lower bound for 1-to-1 communication in the multi-channel setting, and then apply simulation and reduction arguments to obtain the desired result.
|
10.5446/52558 (DOI)
|
One was related to something more recent that I had been involved with, and the other was the subject of the Nobel lecture, which refers to things from 1949-1950, which in a way is somewhat ancient history. The decision was that the ancient history should prevail, so I'll ask you to be understanding that I'm not speaking about things which are quite recent. During the 1949-1950 academic year at Columbia, and in fact during my main career as a physicist, I have been involved with experimental subjects. At that particular time we had received funding from the Office of Naval Research for a synchrocyclotron, something like the Berkeley synchrocyclotron, and at that time I was involved mainly with trying to get the radio frequency drive system for this operating, so that most of my time was spent at the cyclotron making these attempts. At that time the radio frequency system of the cyclotron operated, but such that all of our meters read zero, and you looked inside and saw the effect of a fluorescent tube, and our main problems at that time were to put in sweeping grids so that we were able to operate with the proper voltage for acceleration. During that period I was sharing an office, room 910 in the Pupin Physics building, with Aage Bohr; this was to prove of great benefit for the subsequent developments that I will mention. During the period from about 1948 to about 1962 I was also involved in teaching an advanced nuclear physics course at Columbia University, and of course when you teach a course in some subject you are somewhat forced to become more expert in it and more familiar with topics than you would be if you were not teaching it. What I will try to describe now is how I understood things, as well as I can reconstruct it, at the time of my proposal, which was made in a paper published in mid-1950, suggesting that one accept the Mayer-Jensen shell model but remove the constraint that the nucleus is spherical and allow the condition that it be distorted; in particular the emphasis was on the fact that the shell model itself contains the mechanism for the distortion. Let me take things now in a little bit more orderly fashion. The conceptual development of nuclear theory in a sense began with Ernest Rutherford's alpha particle scattering experiments in 1910, where he showed in fact that the nucleus was very small, about 10 to the minus 12 centimeters or smaller, and that the atom as a whole then had the electrons around it. Previously there had been models, for example, where you had electrons and positive charge in some sort of intermixture, but this immediately led to a picture where in 1913 Niels Bohr was able to exploit the concept of the electron orbits about the nucleus with the introduction of the quantization conditions, where you have in essence the earliest shell model picture for electrons about the nucleus. This was of course a great triumph and of course led to his Nobel Prize. This was extended by many workers, and in particular there were the Wilson-Sommerfeld quantization rules that generalized the quantization condition to all the degrees of freedom, where the integral of P sub i d Q sub i is an integer times Planck's constant, with P sub i and Q sub i the generalized momentum and coordinate.
In terms of understanding atoms more completely, the chemistry of atoms, one needed more conditions, and in 1925 there was the concept by Goudsmit and Uhlenbeck that the electron in fact was not just a simple negatively charged object but that it had a spin of a half, and then when you add to that the proposal by Pauli of the exclusion principle, you are then able in principle to build up the concept of the periodic system of the elements and understand in a way how chemistry works. This was in a rather crude period; rather immediately thereafter quantum mechanics evolved, with the Schrödinger equation, Heisenberg and Dirac bringing it into full bloom. It was immediately applied to practically everything that one can think of, and for English speaking people this is seen by reference to the 1935 treatise by Condon and Shortley, The Theory of Atomic Spectra, where you see in fact that the theory is at a rather advanced stage. Now for the electron orbits and shells about the nucleus, one notices that one knows to a high order of accuracy what the force law is, namely it's the Coulomb force, and the treatment of the hydrogen atom could be done, before quantum electrodynamic complications, essentially with complete accuracy. When you got to a two-electron system it became something that you had to do approximately, and in fact for the helium atom you know that you do this with variational methods and you can get results quite closely, but you have to take an enormous number of terms in the variational approach. If you try to do lithium or potassium or iron or something like that, things tend to get somewhat out of hand for a reasonably exact calculation, and you have to use somewhat more hand-waving arguments and plausibility arguments as to what would take place, in particular where you would average over the effect of the other electrons on a given electron in the treatment of their interactions. In the case of the nucleus attempts were similarly made, but in the 20s the only particles that were known were the electron and the proton, and when you tried to make a nucleus with protons and electrons this led to great complications, and the progress was essentially zero. When the neutron was discovered by Chadwick in 1932 this was a breakthrough, and immediately there were proposals that you should in fact consider the nucleus made up of neutrons and protons, except that you didn't really know what the force law was. You knew that it was strong, nuclei were reasonably stable, and you could produce reactions where you had radioactivity, artificial radioactivity and natural radioactivity, but the exact force law equivalent to the Coulomb force was not really known.
During the 1930s there was an initial attempt to develop the concept of a nuclear shell theory, and in this you would consider, say, a spherical box kind of problem, and you would put neutrons and protons in this box. If you do this and treat the neutrons and protons as obeying the Dirac equation and the Pauli principle, then the lowest shell would be the first S state for the neutrons and protons, and you can put in two neutrons and two protons to fill it up and you get helium-4, which is known to be unusually stable; in fact the binding energy of the last neutron and proton for helium-4 is somewhat over 20 million volts. If, after you have helium-4, you try to add an additional neutron or proton, you have to put it in the next shell, and for a spherical box this would be the first P state, the L equal 1 state, and this shell that you're filling there with neutrons and protons extends between helium-4 and oxygen-16. But if you try to put the first one in, it turns out that it won't even stick: if you try to add a neutron or proton to helium-4, you see the ground state of the system mainly as a scattering resonance a little over 1 million volts. When you add two or more, you start getting bound systems like lithium-6, lithium-7, beryllium-7 and so on. The next shell though, for a simple shell model of particles in a spherical box, would be closed at oxygen-16, which is an unusually stable nucleus. The next shell that one would obviously arrive at would be the one where the second S state and the first D state come in, and this is filled at calcium-40, and calcium-40 is unique in that it's the heaviest stable nucleus that has equal numbers of neutrons and protons, so in a sense you do have something like a closed shell picture there. But beyond that, attempts to match what you would picture a shell model would give for regions of extra stability were very frustrating, because the actual numbers were not the right numbers. Now my own learning of the subject was quite a bit through Professor Bethe's Reviews of Modern Physics articles in 1936 and 1937, one with Bacher, one by himself, one with Livingston, and in particular for the shell model there was a paper in 1937 by Feenberg and Phillips which attempted to use a Hartree-Fock approach for nuclei in the region of the filling of the first P shell beyond helium-4 towards oxygen. From this you obtained the relative energies of ground states, the predicted magnetic moments and various other things, the excited states. The agreement was not really very satisfying, and there was great frustration for the shell model. Another feature which tended to complicate the situation was that in the early 30s, when one considered reaction theory, for example a neutron or proton incident on the nucleus, how would you predict where resonances occur and so on. And the picture that was used then was something like what we call the optical model now, namely where you consider the nucleus as a region of potential, some average potential, and you have resonances which are very far apart in energy for the size of nucleus that you're involved with.
This was fine as long as you didn't have any experimental checks on it, but very few years later slow neutrons were invented and patented by Professor Fermi, and immediately huge numbers of experiments were done with slow neutrons, or neutrons above the thermal energy, and it was found in fact that things like gold and indium and so on had large numbers of resonances, and these resonances were of widths less than a volt and, depending on the element, might be on the average 10 volts or 100 volts apart. And this was something that was completely not able to be understood on the basis of the picture where you considered the average potential. And this led Niels Bohr and others to propose that the nucleus in fact acted like a drop of a liquid, say a drop of water, where you have a tightly bound system, and if a particle comes in from the outside it doesn't really just go through but it shares its energy with the other particles. Niels Bohr and Kalckar evolved the liquid drop model to explain nuclear reaction theory. This indicated that in fact you would consider the resonances that you see with the neutrons to be situations where you have an ability to excite the full degrees of freedom of the system, so that it isn't just the excitation, the resonances, of the incoming particle, but resonances for the complete nucleus. Now during the period of the 40s I was involved under Professor Dunning at Columbia in evolving slow neutron spectroscopy, where we had our small cyclotron, we would pulse it, with a detector some distance away, and we were able to study the interaction cross-sections as a function of energy. We were of course also very familiar with the Bohr-Wheeler paper on fission, which indicated that the nucleus doesn't have to be spherical; if it's going to fission, it can't fission and at the same time stay spherical. During that period, when we were investigating huge numbers of resonances in this field, we were quite familiar with the fission concept. When I began in 1948 to teach the nuclear physics course I was quite interested in the particle in a box picture and the concept of the shell model, and gradually, at least by early 1949, I became convinced that the shell model should have a high degree of validity, even before Mayer and Jensen came out with what we now accept as the correct model. If I can have the first slide and the lights down. This indicates another feature which had been evolved by von Weizsäcker. The figure actually is from the Bohr-Mottelson text; it shows the binding per nucleon as a function of atomic weight for stable nuclei. Here we go from zero to 250, covering the range of the stable and man-made elements, and the binding per particle; this is 7.5 million volts per particle, 8 million volts, 8.5, 9. The picture that evolved with the liquid drop picture was that, as in a liquid drop, a given nucleon is surrounded by other nucleons and you have a nearly constant binding due to the fact that it's immersed in the nuclear fluid. However, for a small nucleus the nucleons at the surface will have less binding, and you have a decrease in binding which is proportional to the area. This makes it such that the lighter nuclei have less binding. The dots, on a larger scale for the lighter nuclei in this region here, show the shell model effects, which are always superimposed, even in atomic physics, on something that would represent average trends.
Well, the rise at the beginning indicates the contribution of the surface term, which is proportional to the surface area. In addition, it was early agreed that probably neutrons and protons were basically the same particle in different charge states. In fact the neutron can decay into a proton, an electron and an antineutrino, and in the nucleus a proton could change into a neutron if it were energetically favorable. So the condition of stability then is, for the same total number of neutrons plus protons, the condition of greatest stability. It's always a compromise, a balance, where different effects tend to make you want to go different ways. If you consider the statistics of the problem of filling a box, you can put two neutrons in each space state and two protons in each space state, and for the lowest kinetic energy, and I would tend to think of it in terms of kinetic energy, in terms of just a spherical box with, say, infinite walls, you fill successive levels. And if you want to exchange, for example, protons for neutrons, you have to take some of the protons from the upper filled proton states and move them into the unfilled neutron states, so that you increase the energy by an amount where the average distance you move them up is proportional to the number that you've moved, and multiplied by the number that you move, the net effect is that you get a quadratic term. And that quadratic term tends to favor equal numbers of neutrons and protons. There's an additional effect, which says of course that the proton has an electric charge and the neutron doesn't, so you have the protons repelling each other, a Z times Z minus one over the radius kind of term, and this favors that you would only have neutrons. Well, whenever one term wants to go one way and another term wants to go the other way, you arrive at the minimum energy condition, a balance, and when you go into this region you start getting a larger and larger excess of neutrons over protons for the stability condition, and also a decrease in binding because of the decrease due to the Coulomb term, the net effect at the minimum point. In addition to these terms there is a pairing term, where particularly for the ground state you prefer to have neutrons pairing off with each other in spin for net spin zero. You can think of it in terms of the two particles per space state, but there are other ways of thinking of it which result in somewhat the same effect. This has the effect that the nuclei that have even numbers of neutrons and even numbers of protons are unusually stable, and in fact most of the atoms in the universe, except for hydrogen, are of that type among the heavier elements. If you have an odd number of nucleons, that is, the atomic weight odd, then it's equally likely to be stable for an even number of neutrons and odd number of protons, or the other way around. This leads, for a particular, say even, number of nucleons, to two parabolas for the energy of the system as a function of the difference between the neutron and proton number. One of the points here is that at the top it indicates neutron number, and you see 20, 28, 50, 82, 126. The proton number, which is smaller for the stable nuclei: 20, 28, 50, 82. This point here, which is unusually bound, is for the double closed shell nucleus lead-208, which has 82 protons and 126 neutrons.
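The terms just described are essentially those of the semi-empirical (von Weizsäcker) mass formula. As a compact summary, in a conventional modern notation that is mine rather than the speaker's, the binding energy of a nucleus with N neutrons and Z protons, A = N + Z, is roughly:

```latex
B(N,Z) \;\approx\; a_V A \;-\; a_S A^{2/3} \;-\; a_C\,\frac{Z(Z-1)}{A^{1/3}}
\;-\; a_{\mathrm{sym}}\,\frac{(N-Z)^{2}}{A} \;+\; \delta(N,Z)
```

The first term is the constant binding per nucleon of the nuclear fluid, the second is the surface correction proportional to the area, the third is the Coulomb repulsion of the protons with the radius going as A to the one-third, the fourth is the quadratic term that favors equal numbers of neutrons and protons, and delta is the pairing term that favors even numbers of both; the stable nuclei sit at the minimum-energy compromise among these competing terms, as the lecture describes.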
Well, as I say, the old shell model was not able to arrive at things beyond the closed shell at 20, but in 1949, in one of the issues of the Physical Review, there were three papers in the same issue proposing three different models which would explain things, so to speak, in terms of the different magic numbers. The one that held up was the paper by Maria Mayer. At the same time there was work going on in Germany by Jensen, which I wasn't familiar with, but between the two of them they developed the subject. In this picture one says, well, how can we make the shell model function, and in the case of Maria Mayer it was the result of a suggestion by Fermi: is there any evidence for spin-orbit coupling? Well, if you take the neutron and proton in orbits, they have their orbital angular momentum and their spin angular momentum, and certainly in atomic physics the couplings between these are somewhat important, but in nuclear physics it turns out that you get rather large effects. And if I may have the next slide. Well, what I'm showing here is the plot, again from the Bohr-Mottelson text, of the position of the various single particle states as a function of atomic weight, using a potential which includes spin-orbit terms of the type that we've considered. This shows the lowest S state and how it goes as a function of the size of the nucleus; zero, the bottom of the well, is somewhere down here. The P shell is now split into a P three-halves and a P one-half. The D is split into D five-halves and D three-halves, plus the S one-half, and you notice here that the state of highest orbital angular momentum lies lower, and also the state where the spin and orbit couple to give the largest value is lower. And this was the thing that, higher up where the splittings got larger, gave a splitting where the high total angular momentum state went down into the next shell and gave you the proper numbers for the 50, for the 82 and the 126, with things like sub-shells at 28. The thing I wanted to point out here, though, was that the average binding per particle, for the valence nucleons, tends to be an energy up here; zero binding is there, and it is somewhere around 8 million volts. And we note that the nuclear forces are short range, the nucleus is small, and the net effect is that the kinetic energy and the potential energy inside the nucleus, not the average expectation value, which includes contributions from where you're outside the range of the force, are huge compared to the net binding. So it seemed to me that, if you were considering what would happen if you tried to make the wave functions very different from just those that minimize kinetic energy in a box, if you put in very much of the states that had the same general symmetry properties, you would make the energy go up so very high that, in terms of the long range properties of the wave function, they must look very close to what you would expect from a shell model picture. This was something that I'd convinced myself of before the Mayer-Jensen papers came out, and I was rather appalled, at all nuclear physics conferences that I went to until about 1955, that some very respected theorists would get up and say the shell model seems to work, but there's no basis for understanding why it should work, and so on. Fortunately this effect has stopped. Go to the next slide please. We have the lights. Can you focus it better? This indicates an attempt to use an alpha particle picture for beryllium-8, carbon-12, oxygen-16, neon-20, magnesium-24, silicon-28, sulfur-32.
This is an alternate way of looking at some of the light nuclei which are multiples of the alpha particle, to see if it makes sense to consider them as such structures. In the case of beryllium-8, which would be two alpha particles, you would think of it as a dumbbell, carbon-12 as a triangle, and so on. And then you ask how many bonds you would have and the average binding per bond. In the case of beryllium-8, this one is unstable and flies apart, but for all of the others you get a surprisingly constant value of 2.4-something million volts per bond. In fact Linus Pauling for the last few years has been trying to do nuclear physics in this fashion, where he considers nuclei as made up of alpha particles and tritons which form the shells, placing them the way the chemists do, in structures like you would have for molecules. And there may be a certain validity to this. Next slide please. If you take the alpha particle model, this is from Steven Moszkowski's article in the Handbuch der Physik of Flügge in the 1950s. This is the theoretical value for the ground state, first excited, second excited, and at that time these were the observed values; you see in each case that the agreement is certainly remarkably good compared to the 1937 paper of Feenberg and Phillips where they attempted to do the real shell model. Next slide please. At that time, in the fall semester of 1949, when the Mayer paper had come out, we had a seminar at Columbia where Aage Bohr and I divided the time. I reviewed the evidence for the Mayer model; at that time I didn't know about the Jensen work. It immediately clarified much of nuclear physics, that is, you understood systematic relations for beta decay, you understood where you had isomeric states, and just an enormous amount of information: where you had had individual evidence from individual nuclei, suddenly you had an ordering of things over a large range. The magnetic moments were particularly of interest. If you take their picture in the earlier stages literally and look at the odd atomic mass nuclei, you would say that all of the even nucleons pair off to give you a spherical system of zero angular momentum, so that the angular momentum of the nucleus as a whole is just that of the last nucleon, and if you take this literally you get what are known as the Schmidt limits, for L plus a half or L minus a half, for an odd neutron or proton. The actual values are not on these limits but are in between, but the remarkable thing was that all of those that should go with the upper limits seemed to be above all of those that should go with the lower limits, so although they weren't strictly on them, there was a rather clean division: the ones that should go with the upper side did in fact tend to be above those that should go with the lower limits, and this is one of the great triumphs.
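As a worked check of the bond counting just described (the binding energies quoted here are standard tabulated values that I am supplying; they are not given in the talk): carbon-12 pictured as a triangle of three alpha particles has three bonds, so

```latex
\frac{B(^{12}\mathrm{C}) - 3\,B(^{4}\mathrm{He})}{3\ \text{bonds}}
\;\approx\; \frac{92.2\ \mathrm{MeV} - 3 \times 28.3\ \mathrm{MeV}}{3}
\;\approx\; 2.4\ \mathrm{MeV\ per\ bond}
```

which is the roughly constant 2.4 million volts per bond mentioned for the alpha-particle nuclei heavier than beryllium-8.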
Meanwhile there was accumulating evidence on the electrical quadrupole moments, which represent the distortion of the nucleus away from a spherical shape, to say a football or cigar shape or a pancake shape, and which indicated that the nuclei, particularly in the rare earth region, were very, very non-spherical. In late 1949 Professor Charles Townes gave a colloquium at Columbia University discussing a paper that he and William Low and Henry Foley had prepared which reviewed the evidence, and in fact his talk was essentially based around this figure. What is plotted here is the quadrupole moment in units of the square of a radius, which was taken then as 1.5 A to the one-third times 10 to the minus 13 centimeters, or fermis. That is now considered a rather large value, and the thing that was emphasized was that in the region of the closed shells you did seem to be going through zero quadrupole moment, as you should, and qualitatively it looked proper, but here you have this huge peak, where in particular, say with lutetium-176, you get a value that, if you try to obtain it with ordinary shell model methods, is between 30 and 40 times anything that you can come up with. So he left it at that; this was something that wasn't understood. While he was talking, since I had been freshly thinking about the Mayer shell model and the other pictures, it occurred to me that the shell model itself, if you remove the requirement that the nucleus be spherical, in fact contained the mechanism for producing this distortion. Have the lights please. The picture that I considered at first, during the colloquium, was that of just a particle in a spherical box. If you have particles in a spherical box, in particular a closed shell of high angular momentum, and then you add a nucleon of high angular momentum to start a new shell, the angular momentum of that nucleon and of the nucleus will be the same, because the core balances off to essentially zero contribution. So what you're picturing now is something of a circular orbit along the equator of high angular momentum, and then you consider the quantization rule for that, that the integral of momentum around the loop be some integer times Planck's constant, and you find that the energy has a term which goes inversely as the square of the radius of the equator.
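A minimal statement, in symbols of my own choosing, of the estimate described during the colloquium: for a nucleon of mass M in a circular equatorial orbit of radius R_eq, the quantization rule gives

```latex
\oint p\,\mathrm{d}\ell = 2\pi R_{\mathrm{eq}}\,p = n h
\;\;\Rightarrow\;\; p = \frac{n\hbar}{R_{\mathrm{eq}}}
\;\;\Rightarrow\;\; E_{\mathrm{kin}} = \frac{p^{2}}{2M} \propto \frac{1}{R_{\mathrm{eq}}^{2}}
```

If the sphere is then distorted at constant volume so that the equatorial radius becomes R_eq = R_0(1 + epsilon), then 1/R_eq squared is approximately (1 - 2 epsilon)/R_0 squared; that is, each one percent bulge of the equator lowers this orbital energy by about two percent, which is the term linear in the distortion that gets balanced against the quadratic surface-energy term in what follows.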
Now from the theory of fission and other treatments of nuclei you knew that nuclear matter tended to keep the volume fixed as you distorted it, so you say, well, suppose we let the spherical nucleus distort in a way where the volume stays fixed but the equator bulges. And if you do this, then you can see that for each 1% increase in the radius at the equator you're getting a 2% decrease in 1 over r squared, and this then is a term which is linear in the distortion. The restoring term was mainly the surface term, balanced by a Coulomb term that wants it in fact to distort, and if you put the two together, when you have a quadratic term and a linear term, you get a displaced parabola, and the magnitude of the predicted effect seemed to be at least as big as the observed effect, if anything somewhat on the too-large side, but it was in the correct direction. But then the question is what do you do in the middle of the rare earth region, and the picture there was this: suppose you consider the axis of symmetry of the ellipsoid of rotation as being the z direction and x, y the plane of the equator; then you consider the average value of the kinetic energy for the x component, the y component and the z component, and if you stretch the x and y one way and the z in a compensating way, the kinetic energy terms, if you take as trial wave functions wave functions that you distort the same way, would change in proportion to the amount of the distortion, and for a closed shell, since you have all orientations, this averages to zero, so that there's no net linear term for a closed shell. But suppose you take a closed shell and take out one nucleon: what you have left is a hole, and this hole, if it corresponds to an equatorial orbit, is the absence of a term that wants the nucleus to go disc-shaped, so it tries to go cigar-shaped. Then you take out two of them, and if you take them out with their z components as large as possible, plus or minus, as you start emptying, if you do it in a way that doesn't upset the momentum for the system as a whole, you increase the linear term until you get to about a half filled shell, and then beyond that you're taking out things that would want to go the other way and it would come back down. So qualitatively, at least semi-quantitatively, it seemed to give the right answer. This essentially was the picture that I had during Professor Townes' talk. It seemed like a rather obvious thing that one would ask students on some qualifying exam as a simple problem, and I thought everybody would immediately jump on it. For some reason they didn't, and I'm grateful; my presence here is due to the fact that it wasn't obvious to everybody else, who apparently were frozen into different views. It was something that I discussed quite a bit with Aage Bohr, and at Professor Lamb's suggestion and that of others I decided to get it into a more formal mathematical shape, so that the paper that I wrote, which is cited, came out in the August 1, 1950 Physical Review. In the meantime, the discussions with Aage Bohr, who said he was interested, had to do with other aspects. For example, at the seminar where we both spoke, the thing that he had discussed was a paper that he had been working on with Victor Weisskopf, which had to do with the distribution of magnetic moment inside the nucleus, and he was interested, for example, in the fact that you would have to have the rest of the nucleus help in balancing the angular momentum, that is, you don't have something which, for the partially closed shell, is a definite angular momentum
state; you have to have the rest of the nucleus help make it come out right, and this in fact lets you understand why the magnetic moments are not on the Schmidt limits but are somewhat in between. Also at about this time it was becoming evident, I think Gertrude Goldhaber and others at Brookhaven had noted, the systematic behavior of the low lying excited states in the rare earth region, where you seemed to have something which, well, eventually were known to be rotational states. The question of rotation of the nucleus had always been something intriguing; in the early days, when you tried to consider rotational states of the nucleus as a whole taking the rigid body moment, you got rotational states much too close together to agree with experiment. But the thing that Aage Bohr said was, well, suppose you have the thing distorted now into a cigar shape; you have a bump here and a bump here; this bump can move around and can vibrate and go from this way to this way and back and forth, and it can also move around, you can have rotation and so on. So he became quite interested in the general considerations, and while he was at Columbia he prepared a paper which considered how one treats angular momentum in nuclei, and this appeared in the January 1951 Physical Review, on the quantization of angular momentum in heavy nuclei. The subsequent developments, when Mottelson joined him and they exploited the field, are history now and I won't discuss them; in the intervening time, since I'm mainly an experimental physicist with this somewhat accidental opportunity to contribute to the theory, where the theorists seem to have been frozen by some consideration, which I still don't understand very thoroughly, that this was not the right way to look at it, I have, as I say, mainly been an admiring observer of the subsequent developments. May I have the next slide please, with the lights off. Well, one of the things that Aage Bohr pointed out to me was that if you had a nucleus which was a spheroidal shape and had some intrinsic quadrupole moment with respect to its symmetry axis, then in terms of the time average value that you could get in ordinary experiments, the maximum value would be reduced quite a bit. For example, if the angular momentum is zero you can't see a time average quadrupole moment, and if it's a half you can't either; the angular momentum has to be unity or larger, and in fact there is a reduction factor, the observed quadrupole moment is smaller by a factor which is something like I times 2I minus 1 over the quantity I plus 1 times 2I plus 3, which is a rather small value until you get up to rather large angular momentum. So in terms of the previous picture that Professor Townes had shown, all of those quadrupole moments were the measured quadrupole moments, and if you interpreted them in terms of the intrinsic nuclear distortions you would have to put in this correction in reverse fashion, and they would be much larger. Also, he had used a very large value, 1.5 fermis times A to the one-third, for the nuclear radius, and one of the contributions that I was able to make later with Val Fitch, with the muonic atom, looking at the transition from the P state to the S state of the negative muon about lead, was in fact that one should consider, for a uniformly charged sphere, that the effective nuclear size coefficient was 1.2, so that would have made the numbers on the ordinate larger. Well, this is from a paper by Professor Townes in, I believe, volume 39 of Flügge's Handbuch der Physik; it was probably prepared in the late 1950s, and the volume itself came out a couple of years later, and it's a plot of the
experimental quadrupole moments. The quantity here is the number of nucleons, and in case they're odd, it's the number of odd nucleons; here is the intrinsic quadrupole moment, that is, what you would get when you undo this factor, over R squared, R now using the factor 1.2 A to the one-third fermis. Here he includes also results from Coulomb excitation. During the 1950s it was established that if you have a charged particle, an alpha particle or proton, make a near-miss collision on a nucleus, you can excite it from the ground state to the second excited state or higher states, and from the probability of this happening, the cross section, you get a unique value, a unique determination, of the intrinsic quadrupole moment. So a very large body of information was able to be obtained, for both odd-A and even-A nuclei, even-A having spin zero in their ground state, as to what the intrinsic quadrupole moment was, and this is the figure that he had at that time, the quadrupole moment in units of r squared. I might point out that if you use a spherical shell model way of doing it, all of the values would be between about here and here, so the values tend to be quite large compared to what you'd expect if you use a spherical basis. So obviously the nucleus isn't spherical; it's quite distorted, and here we have two regions, which are somewhat mixed up in terms of the atomic number but represent the two main regions where you have very large angular momentum for the individual particles, namely the rare earth region before you get to lead-208, and the region beyond mass about 220 or 230, where you get the still higher states coming in, and you can see that there are values on this scale that correspond to about 25. There's a tendency for the quadrupole moments to be cigar shaped, and this is probably due to the fact that the Coulomb repulsion energy gives a lower energy if it's cigar shaped than if it's disc shaped, that is, the protons get further away from each other. Well, this is essentially the same picture but with hugely more detail that had been evolved over about an eight year period since the first one. This is a plot from the Bohr-Mottelson book of the distortion parameter delta; these are all things which are, crudely speaking, the fractional difference between the major and minor axes of the ellipsoid, and you see in the rare earth region, this is atomic number, when you get to lead-208 of course you are essentially spherical, but in going into the rare earth region here, erbium, ytterbium, hafnium, tungsten, dysprosium, gadolinium, samarium, you're getting distortion parameters which are about one third. Then in the region of thorium, uranium, plutonium and so on you also get large distortions, so there is strong experimental evidence that one should in fact, when thinking of a shell model in these regions, think in terms not of a spherical shell model but of one which is distorted. There have also been generalizations, namely you can have octupole moments and higher order moments, and in fact these show up as observed quantities now. In my paper in 1950 I pointed out that this was not a complete theory but a suggestion, or a recipe, as to what the theorists should carry out, and one of the things that I expected to see rather more quickly was something that did the detailed treatment of the energy levels versus distortion. This was finally done by Mottelson and Nilsson in a more proper fashion and led to what we now know of as the Nilsson diagrams, and if I can have the next slide, this is a Nilsson diagram in the region of 82 to 126,
where for example here we have the h nine-halves state, and the h nine-halves, depending on the Z component of its angular momentum, breaks up into the various parts which have different slopes, so that you have these things as a function of the distortion parameter. Each time one gets a slightly different definition of the distortion parameter, but for small values they are always essentially the fractional difference between the major and minor axes, and you can see that the state energies, particularly in the ones that are mainly non-equatorial, favor the cigar-shaped distortion. Next slide please. Well, this is from a review article in the Flügge Handbuch der Physik, I believe volume 40 maybe, by Steven Moszkowski, an article on nuclear models, where he applies the general concept with a harmonic oscillator potential and considers the region where you're putting, in this case, 20, 24, 30, 36 particles of a given kind in the box. What we see here is something where, as you go into distortion, you get this quadratic increase in this case, but then you reach a point where a higher state suddenly crosses and is the lower state, so we have another thing here, and here still another one crosses, and you follow it and you get a thing that looks rather complicated, like this. In the case of 24 the stable condition is not zero distortion but over here, but as you distort still more, instead of following on up, probably here a higher state comes in with a bigger slope, you follow it, and another and another, and so on. This indicates that you can have a rather complicated situation. Next slide please. Well, around 1966 this led to the picture that explained some of the odd effects that had been observed for sub-threshold fission, where it was observed, for example, that if you look at the levels of the fissionable material, the partial width due to capture would show just a random variation along the levels, the partial width of the state for neutrons would show a random variation, but the partial width for fission would be very, very small, except in regions where you would see intermediate structure peaking over a number of levels, and then a space in between, and another one. This was explained in terms of the picture indicated here, proposed by Strutinsky in a paper in Nuclear Physics in 1967, and the picture that one has here, which is a somewhat smoothed out version of the thing that Moszkowski had, but with two wells, is that the ground state is a distorted state over here, and as for your excited states, one knows that the density of states increases exponentially as you go up from the bottom here, so you have an exponentially increasing number of states; but then you have this barrier top and a second well that is up a ways, and you would have a number of states here, but for the system as a whole these states must mix in with the similar states there, and for the complete fission process you have to have it eventually go over and go out. The picture then is that those states in the first well which are close to the energy of the states in the second well will be the ones that have the strong fission cross sections, or the strength function for fission is concentrated according to the levels in the second well. I believe that's the last slide, or is there one more? Last slide, okay, that's all. Lights please, I'm finished now.
|
On December 11, 1975, James Rainwater delivered his Nobel Lecture “Background for the Spheroidal Nuclear Model Proposal” in Stockholm, before an audience of academicians and other dignitaries. Slightly more than six months later, on July 1, 1976, he repeated the lecture in Lindau, before an audience mainly consisting of students and young scientists. For a Nobel Laureate coming to a Lindau Meeting the year after having received the Nobel Prize, this is not untypical. For us, it becomes an asset, since it means that many (or even all) of the illustrations shown on the lecture slides are available in the Nobel Lecture. In those days, such lectures tended to become rather technical and sometimes difficult to understand. Not so this time, for several reasons. One reason is that Rainwater for a long time had been teaching students, another is that he basically was an experimental physicist lecturing on a theoretical topic. As an experimentalist, he was not so interested in the theoretical machinery but more in the physical effects. Actually, it is an interesting fact that the Royal Swedish Academy of Sciences included Rainwater among the recipients of the 1975 physics prize. The other two, Aage Bohr and Ben Mottelson, were bona-fide theoreticians and had spent many years developing the spheroidal model of the atomic nucleus, starting with a paper by Bohr published in 1951. But the experimentalist Rainwater had published a paper already in 1950, where he suggested that the model might be useful, and that is certainly the reason that he was included among the three physics laureates. But a detailed understanding of the arguments is locked up in the Nobel archives at the RSAS. According to the Statutes of the Nobel Foundation, a time interval of 50 years must pass before historians of science are allowed to look through the material, which means that the detailed arguments will not become known before January 2026! Anders Bárány
|
10.5446/52561 (DOI)
|
For a number of years I have thought that there should be people who might be said to practice theoretical medicine. There are thousands of theoretical physicists, and they have, of course, made important contributions to science. And hundreds of theoretical chemists, some theoretical biologists, why shouldn't there be people in the field of theoretical medicine? My experience has indicated that many workers in medicine and such fields as nutrition are not able to understand and appreciate theoretical and rational arguments. Cancer is one of the most important causes of human suffering. People die, of course. They get old and in the course of time die. But the amount of suffering associated with death is different for different causes of death. And, in fact, for different ages, death at an advanced age very often involves considerably less suffering for the person himself and for members of his family and others than death at an earlier age. If we could increase the length of life so that more people died at an advanced age and could eliminate cancer, an especially unpleasant way of dying, then there would be a decrease in the amount of human suffering. I read the book by a man, Dr. Ewan Cameron of Scotland, a number of years ago. The book was published in 1966. Its title is Hyaluronic Acid and Cancer. He pointed out that not very much progress was being made in the attack on cancer by producing anti-cancer drugs and changing ways of irradiating a person with high energy radiation. In fact, the National Cancer Institute has spent billions of dollars, thousands of millions of dollars during the last 20 years, 800 million dollars is its budget this year. And yet the mean survival time of cancer patients has changed very little. For a few percent of these patients with certain rather special kinds of cancer, there has been a significant improvement. But for the great majority, more than 95 percent of the patients who have important kinds of cancer, there has been essentially no change as a result of all of the effort, all of the expenditure of money. Cameron in his book said that perhaps an effective way of attacking the problem of cancer would be to bolster up the body's natural protective mechanisms. See almost every cancer patient after surgery, when the primary cancer is removed, has millions of circulating malignant cells. And yet not every one of them develops metastases. In many of them, these circulating malignant cells seem to be kept under control. This is with little doubt the result of an effective immune surveillance. The body's natural protective mechanisms succeed in getting control over the malignant cells and the patient does not succumb to cancer. Cameron mentioned that malignant cells, many kinds of malignant cells, produce the enzyme hyaluronidase, which then attacks the hyaluronic acid in the intercellular cement of the surrounding normal tissues and weakens these tissues in such a way as to permit infiltration by the malignant tumor. And in his book he went on to express the hope that some way could be found to stimulate the production by the patient of an increased amount of physiological hyaluronidase inhibitor, which would inhibit the action of the hyaluronidase and in this way protect the surrounding tissues and permit the tumor to be brought under control. 
For a number of years he tried to find some such way by giving to terminal cancer patients various hormones and mixtures of hormones, thinking that sooner or later he might find a hormone that would stimulate the production of the hyaluronidase inhibitor. And year after year he was disappointed; the patients with terminal cancer died at just about the rate, the standard rate; there was no effective treatment of this sort. I was asked in 1971 to give a talk at the dedication of a new laboratory for cancer research at the University of Chicago, the Ben May Laboratory. In fact Tiselius had been asked to give this address and had agreed to come, but a week or ten days before the occasion, the dedication, he sent a telegram saying that he was ill and was not able to come. So the organizers, the people at the Ben May Laboratory, called me and asked if I would come and speak. I needed to say something about cancer, and so I presented an argument as follows. We know one thing about ascorbic acid and have known it for 40 years. That is that collagen, the principal proteinaceous component of connective tissue, is not synthesized except in the presence of ascorbic acid. The ascorbic acid is required for its synthesis. The intercellular cement in tissues contains not only these glycosaminoglycans, hyaluronic acid, long chains, but also long fibrils of collagen, which act like the reinforcing rods in reinforced concrete. They help to strengthen this intercellular cement. And so I said that I thought that if patients were given more ascorbic acid, this would strengthen the normal tissues and help to bring the malignancy under control. Cameron read a newspaper account of my talk and wrote to me asking how much ascorbic acid they ought to be given. I wrote back saying 10 grams per day. He began cautiously giving 10 grams per day of ascorbic acid, actually sodium ascorbate, at first intravenously for about 10 days and then orally, to these patients, and immediately developed the feeling that this was beneficial to the patients. This argument had been presented at about the same time by Douglas Rotman, who wrote to Cameron saying that perhaps ascorbic acid units are part of the hyaluronidase inhibitor, and perhaps ascorbic acid in large amounts would permit the patient to develop hyaluronidase inhibitor. Well, now of course we feel that there are many ways in which an increased intake of ascorbic acid operates to potentiate the body's natural protective mechanisms, and perhaps these two that I've mentioned are not the most important ones. The fact is that it is a matter of observation that patients who receive a good intake of ascorbic acid have a much better prognosis than those who receive just the ordinary intake or less than the ordinary one, because of course most cancer patients are malnourished anyway, with respect not only to ascorbic acid but to other nutrients. Perhaps I should say the reason why I said 10 grams per day, but first let me say that it has been a surprise to me to get involved in cancer research. I didn't intend to do it. I was working years ago in the field of immunology for a while, and then I had the idea that there could be diseases that could be described as molecular diseases. Sickle cell anemia was the first disease characterized in that way, when it was found that the hemoglobin molecules that patients with this disease manufacture differ from those manufactured by other people.
We found, and other people very quickly found, a number of other abnormal human hemoglobins; the total number known is somewhere around 300. The study of the hemoglobinopathies has extended greatly. I decided, after eight years of working on the hemoglobinopathies, to look at other diseases to see to what extent they were molecular diseases. I thought it might as well be some important disease. The choice seemed to me to lie between cancer and mental illness. I decided to study mental illness rather than cancer, with an argument, back in 1953: this was the argument that almost everybody works on cancer and practically nobody works on schizophrenia and other mental diseases, so there wouldn't be so much competition in that field. I worked for ten years on schizophrenia, and toward the end of that period I ran across work by Hoffer and Osmond in Canada on the treatment of schizophrenic patients by giving them large doses of vitamins, in particular of nicotinic acid or nicotinamide. I was astonished to read what these investigators reported. They were giving, let's say, 17,000 milligrams a day of nicotinic acid to schizophrenic patients, whereas 17 milligrams a day is the amount recommended by the Food and Nutrition Board of the United States National Academy of Sciences and National Research Council to prevent pellagra and to keep people in what the Food and Nutrition Board calls ordinary good health. I call it ordinary poor health. Also, I found that Milner had carried out a double-blind study with schizophrenic patients to see what the effect of a large dose of ascorbic acid was, and he found a statistically significant effect: that a large intake, not very large, a couple of grams per day of ascorbic acid, caused the schizophrenic patients to improve much more than the controls who received a placebo. As I thought about this matter, I realized that one might formulate a general principle, which is that there is a concentration of each vital substance that corresponds to the optimum health. This is not necessarily just the concentration, the intake, that prevents overt manifestations of deficiency disease. It may be very much larger than that. In fact, there is no reason, I think, to say that only the vitamins are important as nutrients. A vitamin is described as an organic compound that in small amounts is required for life and good health. In the case of ascorbic acid, if we don't get any ascorbic acid, we die of scurvy. The connective tissue just falls apart because collagen is not being synthesized. The joints fall apart and the walls of the blood vessels fall apart. You have intramuscular bleeding, all sorts of manifestations ultimately leading to death, manifestations of degradation of the connective tissue, the collagen. Well, it might be that even without ascorbic acid, enough collagen could be synthesized to keep people from dying. And then ascorbic acid would not have been called a vitamin. And still, the various effects that it has could be very important for good health. I don't think that it is essential that a substance be a nutrient, be a vitamin, in order for it to have great importance. But of course, ascorbic acid is a vitamin, and it's the one that I shall talk about most today. The question comes up, why should I recommend 10 grams a day? Well, that seemed a sensible and safe recommendation to make. You know, we can ask, why is it that all plants manufacture thiamine, vitamin B1, and animals do not manufacture it?
They require it exogenously. The answer is that back several hundred million years ago, a plant began running around and eating the other plants and called itself an animal. It was eating its immediate ancestors. And they manufacture thiamine and other vitamins, and so it was getting them in its food. It wasn't like the red mold that requires only biotin exogenously and can synthesize everything. This animal could synthesize ascorbic acid, and the plants were making ascorbic acid, or rather thiamine, were making thiamine, so it got enough thiamine in its food. Well, there is a general basic principle in biology that if you don't need a function, then the gene responsible for it disappears. And the reason for that, of course, is that it becomes a burden. So when a mutant arose which no longer had the genes, a lot of them, about sixteen perhaps, that synthesized the enzymes that convert other materials into thiamine, the pyrimidine half and the thiazole half and the enzyme that hooks these two together, when a mutant arose that had shuffled off this machinery, then he was streamlined. He was not burdened the way the wild type was, and consequently the wild type died out, and this ancestral animal from then on, he and all of his descendants, have required exogenous thiamine in order to be in good health. And this happened for riboflavin and for nicotinic acid and for pyridoxine and vitamin A, the other vitamins, so that all animals require these substances exogenously. This is an indication, too, that the needs of animals for these substances are about the same as those, the needs, of the plants. But it didn't happen for ascorbic acid. Practically every animal species, as a very good approximation you can say every animal species, has continued to manufacture ascorbic acid. Why didn't this ancestral animal give up the mechanism for manufacturing ascorbic acid? The answer is clear. The amount that the animal was getting in the food was not enough for good health. We can perhaps understand it: plants don't manufacture collagen. They rely on cellulose as the structural, high molecular weight molecule rather than on collagen. So this may well be, this perhaps is, part of the process of changing from being a plant to an animal, that you make great use of collagen and require larger amounts of ascorbic acid. Plants have continued to manufacture ascorbic acid. Man had a bad accident. The precursor of man, in fact the common precursor of all the primates, had a bad accident, that of living in too good an environment around 25 million years ago. This environment no doubt was in a tropical valley where the food was especially rich in ascorbic acid; if we convert to 70 kilograms body weight, their foods, for 2500 kilocalories or 10,000 kilojoules of food energy, can provide as much as 10 grams of ascorbic acid per day. This was close enough to the optimum to permit the mutant who had lost one enzyme in the production of ascorbic acid to shuffle off this ability, to get rid of it, and to compete successfully with the wild type so that he replaced him. And since then all of the primates have been in a bad way. Most of them have restricted their habitats to the tropical regions where a good bit of ascorbic acid is available. Analysis of the food eaten by a gorilla shows that he gets about 5 grams of ascorbate per day.
We moved out into temperate and subarctic regions where the food is not so rich in ascorbic acid, and we have been suffering, of course, almost all of us, from hypoascorbemia ever since. If we ask how much ascorbic acid animals manufacture, animals over a ten-million-fold range of body weight, from the housefly up to the goat, say, I don't think anyone has studied the elephant in this respect, we find that the average amount manufactured by these animals is about 10 grams per day per 70 kilograms body weight. And this is then one of the reasons for saying that this is a reasonable amount to try, to see to what extent it will control cancer. You could use much more. Cases are being given as much as 100 grams a day, and people have taken, I've heard, as much as 400 grams a day without any difficulty. There are thousands of people who have taken several grams a day for years with no overt manifestation of serious side effects, kidney stones and things like this that are talked about in the medical literature without any sound basis for the suggestion. But 10 grams a day is an amount that is easy to take. I mentioned the goat. I have, let's see, here in this test tube, which is essentially full, 13 grams of ascorbic acid. This is the amount that the goat manufactures each day. And would the goat manufacture this if he didn't need it? I don't think so. This basic general principle would operate. If he were to cut down from 13 grams to 12 grams a day, he would save 7 percent of the wear and tear and energy required to manufacture ascorbic acid. And if that extra one gram were not beneficial, why not? Why shouldn't he be saving that effort? So I think 13 grams a day for a 70-kilogram goat is probably somewhat less, well, I won't say probably, is somewhat less than the optimum amount. And of course he gets a couple of grams a day in his food, too. In this other test tube, I have the amount that a human being manufactures. That's zero. So far as is known, no human being manufactures ascorbic acid, and it's very hard to get back the gene; the ability could be regained only by transferring the gene from some other animal. I understand that a human chromosome has been introduced into cats of one kind, a sort of European cat, introduced into the genetic complement of that animal. But it's hard to get back. Only microorganisms, and under special circumstances too, can develop these abilities. In this other test tube, there's a little bit of white powder down in the bottom. That's the 45 milligrams per day that the Food and Nutrition Board in the United States recommends for human beings. It's enough to prevent essentially all people from getting scurvy, but it is far too small an amount to put people in good health. I think the goat knows more about these matters than the Food and Nutrition Board. In fact, there's another committee of the National Academy of Sciences National Research Council that I think knows more, too. This is the committee on the feeding of laboratory animals. They have made recommendations about monkeys. Monkeys are primates like ourselves, and they recommend a diet with somewhere around four grams a day per 70 kilograms of monkey. Well, now, monkeys are expensive, and moreover, you might sometimes build up an automated colony of monkeys. It would be very expensive, and the monkeys themselves you've paid a lot for, and you put in a lot of effort to inject the monkeys and do whatever is involved in the experiment, and then they die on you.
That's a tragedy. So this committee has gone very carefully into the question of what amount of ascorbic acid will put the monkeys in the best of health. And I think I have more confidence in their conclusion, that four grams a day per 70 kilograms body weight is better, than in the 45 milligrams a day. Well, if we do take the proper amount of ascorbic acid and potentiate our natural protective mechanisms, we might well be able to achieve a considerable control over cancer. I think I might show my slides and amplify the argument as we go on. First slide, please. Here is a curve that I drew showing that with a vital substance such as ascorbic acid, well-being might increase linearly for small increases in intake and reach optimal functioning at some point. Actually, experiments, especially with the red bread mold, have shown that curves of this sort have very flat tops when you study nutrients or vital substances such as the vitamins, and it's hard to find just where the optimum is. I have an arrow there, the functioning of the fittest strain. This is corrected for the burden of manufacturing the material, in case it is manufactured. It is the place where the slope is just equal, except for a change of sign, to the slope of the straight line, presumably a straight line, that represents the burden of manufacturing the substance as a function of the amount manufactured. Next slide. In the next slide I have just the Michaelis-Menten curve for, say, the chemical equilibrium A plus B equals AB, which could be enzyme plus substrate combining, or the apoenzyme plus the coenzyme combining to form the active enzyme. And if you have a mutant, everybody is a mutant; the average estimate is that of the 100,000 genes that you've inherited from your parents, one has been mutated from that generation to your generation. Everyone has these mutations. There may well be people who manufacture apoenzymes with a decreased combination constant for the coenzyme. If the combination constant is decreased 200 times, then you could, by going to 200 times the concentration of the coenzyme, get the same amount of combination with the apoenzyme to form the active enzyme. Well, there are scores of genetic diseases known that involve an abnormality in the combination constant between apoenzyme and coenzyme. An example is methylmalonic aciduria. Patients with this disease excrete methylmalonic acid in the urine because they lack the enzyme that would isomerize the methylmalonic acid to succinic acid, which would then be metabolized. And this enzyme uses cobalamin, vitamin B12, as a coenzyme. If you give the patient 1,000 times the normal intake of cobalamin, then many of the patients are put into normal health and do not show manifestations of the disease. I think that there are, with little doubt, thousands of diseases of this sort, each one of which could be controlled by a great increase in the intake of a particular vitamin or other coenzyme. Next slide. Here I borrowed this slide from Irwin Stone, who around 10 or 15 years ago was very concerned about the amount of ascorbic acid that people needed and about the prevalence of hypoascorbemia. It shows that various animals have been reported as making between 2 and 20 grams of ascorbic acid per day per 70 kilograms body weight. Next slide. Here we have summarized some reasons for a large intake of ascorbic acid for good health.
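A minimal sketch of the equilibrium argument a few sentences above, assuming simple mass-action binding of apoenzyme A to coenzyme B with association constant K:

\[
K = \frac{[\mathrm{AB}]}{[\mathrm{A}]\,[\mathrm{B}]}
\qquad\Longrightarrow\qquad
\frac{[\mathrm{AB}]}{[\mathrm{A}]} = K\,[\mathrm{B}].
\]

If a mutation reduces the constant to K/200, then raising the coenzyme concentration to 200 [B] leaves the product K[B], and hence the fraction of apoenzyme combined as active enzyme, unchanged. This is the sense in which a 200-fold larger intake could compensate for a 200-fold weaker combination constant.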
These animals; and the average of raw natural plant foods giving 2500 kilocalories per day, or rather 2500 kilocalories of food energy, is 2 and 3 tenths grams. I have an evolutionary argument, which I should not go into, that the optimum intake is somewhat greater than this figure. The monkey chow I have mentioned. Dr. Yew studied guinea pigs, which like the primates have lost the ability to make ascorbic acid, and found that they were apparently in optimum health when they got about 3 and a half grams per 70 kilograms body weight. Next slide. I mentioned that ascorbic acid is known to be required for the synthesis of collagen, for hydroxylation of the prolyl and lysyl residues in the procollagen molecule, and it is involved in other hydroxylation reactions. For 40 years it has been known that a high intake of ascorbic acid is required for good healing of wounds and burns, broken legs, fractures, periodontal disease. Dentists are in the forefront among medical people in giving patients large doses of ascorbic acid to improve their health. The next slide. Back in 1935, Jungeblut at the Columbia University College of Physicians and Surgeons reported that poliomyelitis virus is inactivated by sodium ascorbate in concentrations that can be reached in the bloodstream. Various other investigators have reported the same thing. Next slide, please. I was astonished when I started reading the literature to find how much there is. Here are some of the references, not given in detail, about reported control of viral diseases. The National Cancer Institute has been spending $100 million a year for study of viruses in relation to cancer, but $0 for an investigation of ascorbic acid as an antiviral agent. The next slide. To the extent that viruses are involved in human cancer, the ascorbic acid may be operating in this general antiviral way. The work of Morishige, who is the senior surgeon in a hospital in Fukuoka, Japan, is interesting. Murata published a paper on it. Morishige had the idea that ascorbic acid would prevent infectious hepatitis, serum hepatitis, from developing in surgical patients who received multiple transfusions, with the chance that the virus is in the blood that is infused. He found, when he made a study with 1,250 patients, that those who got little or no ascorbic acid developed serum hepatitis at the normal rate of 7 percent, but that 2 grams a day or more was 100 percent effective in preventing serum hepatitis in 1,100 patients. Now he gives 10 grams a day to all surgical patients in his hospital and all patients in intensive care that he's in charge of. Next slide. And he has reported, but not with the statistics as yet, that in these other viral diseases too this high intake of ascorbic acid is completely effective in controlling the viral diseases. Next slide. Antibacterial action has been reported for many years. There are several mechanisms involved. For one thing, it's been known for 40 years that leukocytes are not effective as phagocytes unless they have a high concentration, 20 micrograms per 100 million cells, of ascorbate. A recent study by Hume and Weyers in Glasgow in 1973 showed that the level was about 25 micrograms per 100 million cells in the ordinary Scots that they investigated. If they came down with some illness, it dropped to about 10. And this is not enough to permit the leukocytes to be effective, to have phagocytic activity. 250 milligrams a day also wasn't enough. One gram a day, however, did the job. Next slide, please. Let's see. I got that in.
I put that in at the last minute and apparently put it in reversed, and I'll tell you it's upside down, too. Perhaps we should change that. Look at this one. At the National Cancer Institute, well, I'll go back to this one. Porter. Rodney Porter has reported, in his study of the amino acid composition of the components of complement, that C1q, r, and s are proteins which have collagen-like sequences. Professor George Feigen at Stanford University has carried out these studies. The two curves at the left refer to the C1 esterase component of complement, which consists of these three proteins, q, r, and s, and as would be expected, the guinea pigs with a high intake of ascorbic acid manufacture much more of this complement component than those with a low intake. The next slide. Here we have studies made at the National Cancer Institute, reported last year just in abstract, by Yonemoto, Chretien, and Fehniger, showing that the rate of production of new lymphocytes under antigenic stimulation is greatly increased in humans who are given five grams a day. There's about a doubling of the blastogenesis rate with five grams a day, given in fact only for three days. This increase continues for 18 days. They haven't determined what it would be for people with a steady intake of five grams per day or 10 grams per day. 10 grams a day gives about a tripling. It's well known that the prognosis for cancer patients is better for those who have a high blastogenesis rate of lymphocytes under antigenic stimulation than for those who have a low one. And these investigators said that these results suggest that ascorbic acid should be tried in cancer patients. They didn't know that Cameron had been doing this for several years. Next slide, please. There's a good bit of epidemiological information about ascorbic acid. In fact, when studies are made of the diets of populations in relation to the incidence of cancer of different kinds, it is usually found that the biggest correlation of all is the inverse correlation with intake of ascorbic acid. I've written down some of the papers, mentioned some of the papers in this field. The next slide. There was a study carried out in California by Chope and Breslow with 577 older people, starting in 1958. They were all 50 years or older at that time, and their death rates were followed. Here again, the biggest negative correlation between death rate and any factor was with the intake of ascorbic acid, an even bigger effect than with cigarette smoking. The subjects who were ingesting a larger amount of ascorbic acid had only 40 percent the age-corrected death rate of those with the smaller amount. The larger amount was only about 125 milligrams a day, the smaller amount 25 milligrams a day. We are checking a population now with an intake of between one and two grams a day, and in a few years, well, there are already some preliminary results for the first 18 months. That result was only 30 percent. In fact, this population wasn't restricted to those taking between one and two grams; it contained some with a smaller intake. Only 30 percent of the age-corrected death rate. There was apparently the same decrease in the incidence of heart disease as of cancer and other diseases in these studies. I think that it may well be that the age-corrected death rate could be reduced to a tenth of what it is in the population at present simply by increasing the intake of ascorbic acid and perhaps other nutrients by a relatively small amount. Next slide.
And that would correspond to an increase of 20 years or more in the length of the period of well-being. Two papers have appeared recently on control of polyps of the rectum, which universally become malignant in the people who have this chronic polyposis. And in each case, with only three grams per day given to the patients, half of them showed the disappearance of the polyps. Instead of having 40 of them, they dropped to zero or two. The next slide. Here we have the first report by Cameron on the first 50 cases of patients with terminal cancer, advanced terminal cancer, called untreatable cancer, in Scotland. The statement is made that in four cases the treatment with ascorbic acid was harmful. As I've looked over the data and the case histories, I concluded that this probably is not correct, because these patients, the population of 50, or the population of 100 including this 50, died off during the first few days or the first few weeks at a lower rate than the control population. There was really no increased death rate in this first 50. The fact that four of these patients died during the first two weeks, I think, indicated to Cameron and Campbell that the ascorbate was harmful. In all of the other cases there was benefit. The benefit was sometimes a rather small one, a decrease in pain, permitting the patients to be taken off narcotic drugs, or a general disappearance of cachexia and anorexia. Patients felt well and had good appetites, began to eat well, went back to work. Some of them have continued to live, the next slide, far longer than expected. One patient showed an unusual course, such as to permit him to be described in a separate publication. He had reticulum cell sarcoma, well diagnosed by biopsy, and the X-ray diagrams showed that this disease was there. When he received 10 grams a day of ascorbate, he improved very rapidly, as shown by these measures of illness: a decreased sedimentation rate of red cells was observed, and glycosaminoglycans and serum seromucoid decreased rapidly. After six months, his physician took him off the vitamin C, 10 grams a day, with the argument that he was cured, no signs of disease anymore, so he shouldn't continue taking the drug. Well, of course, vitamin C isn't a drug, it is a food, and he was taking the amount that probably is just appropriate to human beings, the amount they would manufacture if they were manufacturing it themselves. He took him off the vitamin C, and within a month he was back in the hospital. The cancer had returned. It didn't respond to 10 grams per day orally when that was resumed for a couple of weeks, but he was given 20 grams a day for 10 days intravenously and immediately improved in health. He has lived now for several years, getting 12 and a half grams a day orally, and driving his lorry back and forth, apparently in perfect health. Well, in good health, perhaps even better than you expect for people 50 years old living in Scotland. The next slide shows the results of a study of 1,100 patients with terminal cancer, 100 of whom received 10 grams of ascorbate per day, beginning on the day that they were pronounced untreatable. This may occur at laparotomy, when the cancer is observed to be of such a nature that it is inoperable, or later, after high-energy radiation treatment or cytotoxic drugs have been tried and perhaps had some temporary value but are no longer effective.
The matched controls, 10 matched controls for each of the 100 ascorbate-treated patients, had the same kind of cancer, and the survival times are measured again from the time when they were considered untreatable. Fifty times as large a fraction of the ascorbate-treated patients lived more than a year. Sixteen out of 100 of the ascorbate-treated patients are alive after several years, as much as over five and a half years now; the average survival time now is more than five times the average survival time of the 1,000 controls. The 1,000 controls have all died by this time, with only 15 out of 1,000 living more than one year. No, only, yes, I think only three out of 1,000 lived more than a year; it's marked as 400 days there. Now several hundred patients with cancer are receiving 10 grams per day in the Vale of Leven Hospital in Scotland, and they begin to receive it immediately when they come to the hospital, no matter what the stage of development of the disease is. I could go on and mention some individual cases, not only in Scotland but also in California, but this is the only quantitative material that I have. I think that every cancer patient should be put on ascorbic acid therapy. What the relation is to the cytotoxic drugs has not yet been carefully studied. In California, patients who are receiving 5-fluorouracil or methotrexate or other cytotoxic drugs have been given ascorbic acid also, 10 grams a day. And one observation has been made in practically every case: the serious side effects of loss of hair and nausea and the other side effects of the cytotoxic drugs do not show up when vitamin C is given. On the other hand, it may be that these two treatments operate against one another. The cytotoxic drugs have the side effect of destroying the body's natural protective mechanisms, and the vitamin C operates by potentiating the body's natural protective mechanisms. If you knock the immune system down to zero, a tenfold or a hundredfold potentiation still leaves it at zero. In the course of time, it will be possible to say what should be done when a choice arises between taking cytotoxic drugs or taking vitamin C. But there is reason to believe that whether or not these other treatments are used, the vitamin C should be taken. Well, this is the situation now. I haven't found anything in the cancer literature during the last 20 years that is comparable to this. To what extent it will stand up, whether you can have a fivefold increase in life expectancy for people at the beginning stages of cancer, who might have a five-year life expectancy that would be increased to 25 years with ascorbate, or not, I don't know. I do feel strongly that cancer and ascorbic acid are closely related. It may be that cancer is in large part one of the manifestations of vitamin C deficiency, that people develop cancer because they are in poor health because of the extremely small amount of ascorbic acid that they ingest. Of course, I have read statements that chemicals which have been introduced into our food and into the environment generally are responsible for a large fraction of the cases of cancer that develop. Ascorbic acid is known to be a detoxifying agent for almost all substances, and perhaps it can work, or it does work, as a prophylactic agent by helping to counteract the effect of these cancerogenic substances.
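A quick arithmetic check of the survival fractions quoted above, taking the corrected figures of 16 out of 100 ascorbate-treated patients and 3 out of 1,000 controls surviving more than a year:

\[
\frac{16/100}{3/1000} = \frac{0.16}{0.003} \approx 53,
\]

which is consistent with the statement that roughly fifty times as large a fraction of the treated patients lived more than a year.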
I think that this opportunity of helping to control cancer by the use of the proper amount of ascorbic acid, both prophylactically, in people who have not yet developed cancer, and therapeutically, in those who have, is so important that it should not be neglected by anyone. Thank you.
|
Eight years after he had received a Nobel Prize in Chemistry for his fundamental contributions to stereochemistry, Sir Derek Barton attended the Lindau Nobel Laureate Meetings for the first time. He obviously liked their atmosphere and subsequently returned four times. It’s well worth listening to his talk. It offers a good deal of British pragmatism and humor, combined with the clear and confident sobriety of a scientist, which he also employs to explain why he had chosen a rather nonchemical topic. “This talk was inspired by a letter that I received some years ago from a number of American sociologists who wanted me to write an article how terrible the world was and how we were suffering from the most dreadful crises that mankind had ever suffered from (…) As far as I am concerned, the world is in a better shape than it’s been ever before and I am optimistic about its future and therefore I was pleased to write an article and I was somewhat surprised that this article was never published." Compared to the two World Wars and the big economic depression he had survived, Barton continues, most of the current crises did not appear dramatic to him. He distinguishes three classes of crises, imaginary, artificial and real crises, and then looks at “some of the crises the world press tells us we are suffering from” in terms of these categories. Presently, pollution, for example, compared to “the thick, yellow London fog” of former times, appeared to be an imaginary crisis. The energy crisis is an artificial one for Barton, provoked by OPEC’s international monopoly. He is confident that science will find ways to overcome imminent energy shortages. In this context, he criticizes the “almost hysterical reaction in Germany to the proposal of building nuclear power stations”. All the other crises he analyzes (global food supply, overpopulation and economic recession) Barton judges, for various reasons, as still being artificial, except for one: the danger of a nuclear war. While nobody knew exactly how many nuclear devices existed in the world, everybody knew that “it’s quite enough to kill off the population of the world several times.” On a statistical basis, Barton says, there had been at least one or two major wars every century: “Have we the right to believe that suddenly history of mankind is going to change?” Even if the balance of terror prevented the superpowers from attacking each other, “nuclear weapon technology is going to spread around the world (…) So we are going to come some time to a situation where a country in a last defense will use nuclear weapons or when some mad dictator will get hold of them and will use them in his madness.” Nevertheless, Sir Derek is “modestly optimistic because for the first time in human history we have seen a group of countries come together and give up some of their sacred national sovereignty”. Although it is, as he mentions, not customary in the UK to say something nice about the “Common Market”, he praises the predecessor of the European Union, as if anticipating the Nobel Peace Prize of 2012. Nations ought to have morality. They should learn “to work together in the same way as individuals work together to make up a family group”. If they succeed in doing so, “we may have a chance to evolve”. Joachim Pietzsch
|
10.5446/52563 (DOI)
|
Ladies and gentlemen, I would like to start with a little explanation. The talk that I am about to give you was originally prepared as part of a celebration of the 45th anniversary of the Lawrence Berkeley Laboratory that was held last year. And it was intended to be a brief history in what one might call an anecdotal form. That is, it touches somewhat on the light side of things. It deals a little bit with the scientific history of the laboratory, but largely it deals with the people and the things that happened. So with that small explanation, I will begin the talk as I gave it. This will be a multimedia presentation. I will start with a more or less connected discourse and finish with a slideshow. When I speak of the early days, I include only the period up to the end of 1940. By then, many people in the United States had become deeply concerned over the war in Europe. Some had left the laboratory for war work, and soon the laboratory itself became involved in war work. One major peacetime project was started in 1940, the 184-inch cyclotron, but it did not get back to its original purpose until after the war. The hill above the Big C, I should explain that in Berkeley there is a hill behind the town, and there is a big letter C for California put there by the students, and I am referring to that. The hill above the Big C was chosen for the site, and by the end of 1940 the magnet foundation was completed and the bottom yoke was in place. This started the first expansion of the laboratory off the campus, a stage in growth belonging to a later period beyond what I am covering here. The radiation laboratory was the personal creation of Ernest Lawrence. With his idea, he got the financial support, he pulled together the equipment and drew the people, and of course he supplied the key idea, the cyclotron. Many other people helped in essential ways. I could name President Sproul of the University; Leonard Fuller of the Federal Telegraph Company and the University, who arranged the gift of the large magnet for the 27-inch and 37-inch cyclotrons; Cottrell and Howard Poillon of the Research Corporation and Francis Garvan of the Chemical Foundation, who looked with favor on Ernest's requests for grants; Raymond Birge, who became chairman of the physics department in 1932; Don Cooksey from Yale; Stan Livingston; and many others. But it was Lawrence's laboratory. Those of us who were there in the early days remember that Ernest was always THE boss. That should be with a capital T and a capital B; that was a very important distinction. He could be very rough on people if he felt they were not giving their utmost efforts, but he made up for this by his generosity in giving credit and sharing ideas. I never met Rutherford, but I have been told that he had the same kind of character, with an important difference. Rutherford favored the individual researcher working with simple apparatus. Lawrence believed in efforts so large that teamwork was necessary. In the very beginning there was a penalty for this. The drive for greater energy and beam current was so frantic that people hardly had time to think. Some important discoveries were missed and some mistakes were made, but this phase soon passed. On the whole, I think Lawrence was right. The rapid development of the cyclotron was more important in nuclear science than the question of who made which discovery. The laboratory was started in 1931, and when I came to Berkeley, near the end of 1932, it was in full swing.
There was not only the 27-inch cyclotron, giving protons around 2 MeV, but also the Sloan X-ray tube, on which great hopes were placed for cancer treatment, and a couple of linear accelerators of the Wideröe type, which were built and operated by Dave Sloan, Wes Coates, and Bernard Kinsey. The Sloan X-ray tube was used clinically for many years, but the linear accelerator concept fell by the wayside, waiting to be revived by new ideas coming from wartime radar developments. It was certainly a busy place day and night, especially when Ernest was there, which was most of the time. I started my research in LeConte Hall, that is, the physics building of the University of California, Berkeley. I started my research in LeConte Hall on a molecular beam problem, but dropped that when the result I was seeking was obtained elsewhere, and moved to the exciting world of the radiation laboratory in the spring of 1934. Stan Livingston, the cyclotron expert, and Telesio Lucci, a retired commander in the Italian Navy who was a beloved general helper and factotum, gave me sage counsel on how to comport myself, as my previous experience had been in working alone and I needed to learn the art of teamwork. This was not always easy, since no one was routinely coordinating the various tasks needed to keep the cyclotron going, and there were the twin dangers of neglecting what one should do, or getting in the way trying to do something that someone else should do. Robert Oppenheimer was the chief theoretical advisor for the laboratory, and he suggested that I study the gamma rays produced by proton and deuteron bombardment of light elements. This turned out to be an important experiment, because I found a 5.5 MeV gamma ray from fluorine bombarded with protons, with which I could check Bethe and Heitler's new theory of gamma ray absorption by pair production. The chief line of research going on then was the study of nuclear reactions by observing the protons and alpha particles which were emitted during bombardment. These were detected by a thin ionization chamber connected to a linear amplifier, a device not suited for observing gamma rays. Geiger counters were considered unreliable. Stan Livingston tells, in a paper presented in Texas in 1967, what happened on February 24, 1934, when the laboratory learned of the Joliot-Curie discovery of artificial radioactivity. They were using a Geiger point counter, a device that now seems as exotic as the coherer, to count alpha particles. It was not the familiar cylindrical Geiger-Müller counter. The cyclotron oscillator and the counter circuit were turned on and off by the two poles of a double-pole knife switch, for convenience in timing. Within half an hour, the switching arrangement was changed so that the counter could be turned on while the cyclotron was off. The counter voltage was raised so that it would count beta particles. The internal target wheel was rotated to bring a carbon target into the beam, and the activity of nitrogen-13 was there, produced by a different reaction than that used by its discoverers. The failure to see this activity first was a blow to the laboratory, and there was a natural reaction against all Geiger counters. So the first thing I did when I entered the laboratory was to go to Pasadena to learn from Charlie Lauritsen himself how to make quartz fiber electroscopes.
I had my first Lauritsen electroscope, which was mounted in a lead-walled chamber for detecting gamma rays, inside the laboratory for only a few days when Malcolm Henderson came to me in the middle of May with the news of the discovery of neutron-induced radioactivity in Rome, and he wanted to use my electroscope to look at some of these activities. It took only a short while to make a new chamber out of a tin can with a thin aluminum window, and the tin can version of the Lauritsen electroscope became a valuable instrument for observing beta rays. Jack Livingood made one like it, which he used in a monumental survey with Seaborg and others of activities produced in many elements by deuteron bombardment, which resulted in a rate of discovery of radioisotopes that was comparable to that at Rome following Fermi's first neutron-induced activity. Some of those that they found became very important in medical and other applications, like iodine-131, iron-59, cobalt-60, and technetium-99m. There was a great surge of activity in the field of artificial radioactivity. The names involved are too many to list completely. Stan Livingston and I found a radioactive form of oxygen. And Lawrence found sodium-24, which created a sensation because very strong samples could be made. Once Lawrence had the cyclotron crews working around the clock to make a whole curie for a demonstration; that was a tough job. Jackson Laslett found sodium-22, which was the longest-lived artificial activity known at the time, but was soon to be surpassed. Martin Kamen and Sam Ruben found carbon-14, probably the most important radioactive isotope of all, and so on. Also, new types of activities were discovered. Van Voorhis found that copper-64 could decay by emitting either negative or positive electrons, which was the first known example of that kind of radioactive decay. And Louis Alvarez found the first case of decay by orbital electron capture, which is now a well-known process. Among the new activities were some that had atomic numbers differing from that of any known element, and were therefore new elements. The first of these was found by Emilio Segre and Carlo Perrier in 1937. They worked in Palermo with a piece of molybdenum that had been on the leading edge of the deflector plate in the cyclotron, where it got a lot of bombardment, and which Segre had taken back with him after a visit in the summer of 1936. In it, they found element 43, which they named technetium, after the Greek word for artificial, as it was the first artificially produced element. Next was element 85, called astatine, from the Greek word for unstable, found by Segre, Dale Corson, and Ken McKenzie in 1940. A little later in that same year of 1940, Phil Abelson, who had been a graduate student with Lawrence, came back to Berkeley for a short visit and supplied the missing link in the chemical identification of an activity induced in uranium by neutron bombardment, which had been puzzling me for some time. It was mentioned in the introduction. This was, as I had expected, the first transuranium element. I named it neptunium after the planet Neptune, just as uranium had been named after the planet Uranus. After Phil Abelson left, I continued the work, trying also the deuteron bombardment of uranium, which produced a different isotope of neptunium than the neutron bombardment, and found alpha particle activity in the neptunium samples, which suggested the presence of the next transuranium element.
Because after a beta particle, the next step would naturally be an alpha particle, and that leads one to think that this was the next element. I did some chemical separations showing that the alpha activity did not belong to uranium or neptunium, but did not complete this investigation because I was persuaded by Ernest Lawrence to go to the Massachusetts Institute of Technology for a few weeks to help set up a new laboratory for developing microwave radar. It was not called radar then; that word was coined later, but we did work on radar. As a cover, the new laboratory was called the Radiation Laboratory. So there were two rad labs. That was sometimes a source of confusion. I left Berkeley by train for Boston on November 11, 1940. On November 28, Glenn Seaborg wrote me that Art Wahl had been making some strong neptunium samples and said, "If you are too busy to carry on the work alone, we would be glad to collaborate with you." In my reply on December 8, I say, "It looks as if I shall not be back at Berkeley for some time, and it would please me very much if you could continue the work on 93 and 94." Now, in parentheses, that "some time" stretched out to five years before I came back to stay. I never did believe Ernest's estimate of a few weeks. That's the end of the parenthesis. On March 8, 1941, Glenn wrote to me describing the final chemical proof that the alpha activity belonged to the next element up the periodic table, plutonium. In this correspondence we did not use the names for the new elements, which were not yet official, but referred to 93 and 94, and the March letter was marked confidential. You see, secrecy was already creeping into nuclear research, and after that secrecy became absolute, and none of these things were published for a long time. Louis Alvarez came from Chicago in 1936 with a lot of clever ideas. He was the originator of the method of getting what is effectively a beam of very slow neutrons by pulsing the cyclotron and gating the detector so that it is only sensitive at some chosen time after the pulse of neutrons has been emitted. With Ken Pitzer, he used this method in an investigation of neutron scattering by the two kinds of molecular hydrogen, ortho- and parahydrogen, and with Felix Bloch of Stanford he made the first measurement of the magnetic moment of the neutron. One of the questions of that time was the relative stability of the nuclei hydrogen-3, called tritium, and helium-3, both of which had been observed by Mark Oliphant at the Cavendish Laboratory as products of the bombardment of deuterium by deuterons. Alvarez and Bob Cornog first showed that helium-3 is the stable one by detecting it in atmospheric helium, using the cyclotron as a mass spectrometer. Then, knowing that hydrogen-3 must emit beta particles, they looked for activity in deuterium gas bombarded by deuterons and found the activity, establishing tritium as a radioactive isotope. Gilbert Lewis of the University of California Berkeley Chemistry Department played a very important role in the laboratory's history. As soon as the discovery of deuterium was announced, he set up equipment to make heavy water by electrolysis and furnished a sample of heavy water to the laboratory. And in March 1933, the first beam of deuterons was produced by the cyclotron. That was a very important step, as I will go on to say here. From then on, a major part of the work was with deuterons, which are much more prolific in producing nuclear reactions than are protons or alpha particles.
When I say prolific, I mean, firstly, that the cross-sections are larger, so that you get more abundant reactions. You also get a greater variety of reactions, because the deuteron contains two nucleons. Lewis, like many associated with the laboratory, was a colorful character. He liked to tell how he fed some of his first heavy water to a fly and it rolled over on its back and winked at him. The second anecdote I have here I heard myself. One day at lunch at the faculty club, Lewis heard some professors in the Department of Education arguing about whether children should be taught to add a column of figures from the top down or from the bottom up. Gilbert Lewis said, the way I do it is, first I add them down and then I add them up, then I take the average. I could go on and on. There were many visitors to the laboratory who stayed to work there for considerable periods of time, like Jim Cork from Michigan, Jerry Kruger from Illinois, Lorenzo Emo, a count from Italy, Harold Walke and Don Hurst from the Rutherford Laboratory, Wolfgang Gentner from Germany, Maurice Nahmias from France, Sten von Friesen from Sweden, Ryokichi Sagane from Japan, and Basanti Nag from India. The working visitors were very important to the laboratory. They not only contributed to the research program, but they carried back the cyclotron art to their own institutions. Lawrence actively promoted this diffusion of knowledge, and Don Cooksey wrote what we called cookbooks of cyclotron lore, which were mailed to innocent institutions, and many people from the Berkeley laboratory went out to help design and build cyclotrons elsewhere. Milton White went to Princeton, Henry Newson to Chicago, Hugh Paxton to the Joliot laboratory in Paris, Jackson Laslett to Copenhagen, and Reg Richardson and Bob Thornton to Michigan. So the knowledge of building cyclotrons was rapidly diffused. I think this diffusion of, we might call it, technological knowledge was very important in the advancement of nuclear science in that time; we're talking about the 30s now. Many of the physicists took part in the running and maintenance of the cyclotron. There were regular crews assigned to this task. I remember being on the owl crew for a while, which did not bother me as I was then a single man with rather nocturnal habits, but it was hard on some of the others, I remember. When anything went wrong, we had to pull the cyclotron apart and try to fix it. The greatest problems were vacuum leaks and the burnout of filaments in the ion source, which is inside the cyclotron tank, and also in the demountable oscillator tubes that had been built by Dave Sloan. When the ion source filament went out, the vacuum tank of the cyclotron had to be rolled out of the magnet gap, then the wax joint between the lid and the tank broken and the lid removed, the filament replaced, and it all had to be put back together again, the joints sealed up and the air pumped out and so on. Physicists did more than just operate the machine. For example, Art Snell and Ken McKenzie built oscillators, Bob Wilson made the first theoretical study of orbit stability, and I designed the control system for the 60-inch cyclotron. I was even doubling as an electrical engineer for a while. This was in 1938, when a new concrete building, Crocker Laboratory, was under construction to house the new larger cyclotron. The laboratory was now starting to expand. Bill Brobeck came in 1937 as the first professional engineer hired by the laboratory. That created a real revolution.
No more wax joints that leaked, no more equipment that fell apart in the middle of an important experiment, or at least less than before. The string and sealing wax school of physics still has a nostalgic appeal to some old-timers like myself, but it is not suited to large efforts, where many people are depending on the reliability of the apparatus. Winfield Salisbury and Bill Baker, both electronic geniuses, took over the designing and building of oscillators and other electronic equipment. Charlie Litton came for a while and taught us many techniques in radio frequency engineering. He had a small company in Redwood City, which he later sold to some entrepreneurs from Texas who used it as the nucleus for the giant conglomerate called Litton Industries. Charlie retired to Grass Valley, where he spent the rest of his life happily working on various inventions. Interest in biomedical applications started very early. Ernest's brother, John, is a physician, and Ernest always had an attraction to the field of medicine. I've already spoken about the Sloan X-ray tube, which went into medical use in 1934. The next year, John Lawrence came to Berkeley for the summer and made the first observations of the effects of neutron rays on a living organism, finding the effects greater than those of other forms of radiation and, therefore, very interesting. And in 1936 he came to stay. Paul Aebersold became the chief physicist for the biomedical group, making the arrangements for radiation and measuring the dosage. The first cancer patient was treated in September 1938, with sufficiently encouraging results that the Crocker Laboratory was devoted to medical research, although the physicists and chemists got to use it too. There were working visitors in the biomedical field also: Frank Exner from New York, Isidore Lampe from Utah, Raymond Zirkle from Pennsylvania, Al Marshak, Lowell Erf, John Larkin, and many others. Dr. Joseph Hamilton had a separate group studying the distribution of radioisotopes administered to animals and humans. To the smells of hot oil from the cyclotron were added those of animal colonies. As Laslett said in his cyclotron alphabet, M stands for mice whose smell makes us swoon. We went through the WPA period. It was during the Great Depression, and the WPA was a scheme by which unemployed people were hired by the government and assigned to governmental bodies or other institutions to perform useful work. WPA stands for Works Progress Administration. I have a 1934 letter from Lawrence to the University office handling this program, requesting, for a period of one month: one, one physicist with a PhD and several years of subsequent research experience; two, one carpenter; three, one machinist with several years of experience in general shop work. Some of those who came on this program were real characters. I remember particularly Murray Rosenthal, who was an amateur magician; a Swedish draftsman named Hallgren, who was so profane that we tried to keep him away from Don Cooksey, who objected to his language; and a man who had been with a telephone company, who was very distinguished looking and liked to go around checking the strength of solder joints by pulling at the wires with a button hook. Some who were only temporarily down on their luck stayed on and became valuable members of the laboratory staff. Some idea of the financial scale of that time is given by the cost estimate made by Wally Reynolds in 1931 for the installation of the 80-ton magnet.
This includes moving the magnet from San Francisco and setting it in place, four transformers, a 50-kilowatt motor generator set, a 10-ton crane, a concrete pier, labor, engineering, and contingencies, all for $5,300. It is hard to convey the atmosphere of that time. The world was in a deep depression. There was a general strike in San Francisco in 1934. Some people on the campus took sides during this strike, and friendships were broken over this. There was a lot of leftist agitation, which later had dire consequences for many scientists. There was not much money around. For seven months, between the end of my fellowship and my appointment to the faculty as an instructor, I was a research associate without pay. But we all managed somehow and the laboratory kept going. Lawrence was the driving force, and the spirits inside the laboratory were kept high by the excitement of discovery. There was very little organization. Lawrence was the boss, and that seemed to be enough. What a change has taken place since then. The eager youth has grown into an adult with increased powers and the problems that come with maturity. Now that's the end of the more or less connected discourse, and now comes the slide show. And we've used up half our time, more or less, so we'll go on with the slides. Could I have the first slide? This is Ernest Lawrence, taken on December, I'm sorry, September 19, 1930, just after he had given the first scientific paper on the cyclotron at a meeting of the National Academy of Sciences on the Berkeley campus. He is holding a glass, brass, and wax apparatus with which he and Niels Edlefsen had obtained evidence of ion resonances in a magnetic field, encouraging Ernest to go on with the development of the cyclotron idea. From his expression, you can see that he has hopes for the future. You see, this was in September 1930, and this was before the radiation laboratory was started, but that little apparatus that Lawrence is holding gave the first evidence that the cyclotron might work and encouraged the whole thing to go on. Now slide two. Here are Stan Livingston and Ernest Lawrence standing beside the big magnet in the shop of the Pelton Water Wheel Company in San Francisco. This magnet had been built by the Federal Telegraph Company of Palo Alto for use as part of a Poulsen arc radio transmitter ordered by the Chinese government, but it was never delivered. And Leonard Fuller, who was the Vice President of the Federal Telegraph Company of Palo Alto and at the same time Chairman of the Department of Electrical Engineering in Berkeley, arranged for that magnet to be given to the university for the researches of Lawrence. And as I said, there it is being converted into a cyclotron magnet. The bottom pole was removed and new poles were built; the core poles of the magnet had to be changed before it could be used as a cyclotron magnet. That is being done here in late 1931. Stan Livingston made the first cyclotron that worked. After that little model that we showed in the last picture, Livingston took over and built the next model, and he made one that indeed did work. He found a beam of 80,000 electron volt hydrogen molecular ions. We heard about hydrogen molecular ions earlier, the simplest molecule, also the first thing accelerated in a cyclotron. He found those on January 2, 1931, in a four-inch cyclotron. Then he made an 11-inch cyclotron with which, in 1932, Milt White confirmed the lithium disintegration results of Cockcroft and Walton.
This work was done in LeConte Hall, but the big magnet needed a larger place to house it. As you all know, Stan was one of the discoverers or inventors of strong focusing, without which most of high energy physics could not have been done. So Lawrence invented the cyclotron, Livingston made the first one work, and they are indeed the creators of this whole business. Slide three. This shows the old radiation laboratory. It had been a civil engineering testing laboratory. It was scheduled to be torn down, but Ernest persuaded President Sproul, the president of the university, not of the United States, to let him have it for his experiments. Of course, in a university town, as you all know, when you say president you always mean the president of the university, not of the United States. This occurred on August 26, 1931, in President Sproul's office. At that time, Ernest had the promise of financial support and a formal offer of the magnet. So if one wants to choose a day for a birthday, this could be it. Early in 1932, the name Radiation Laboratory was painted on the doors. I don't think you can see that in this picture, but over the outside door it said Radiation Laboratory, way back then in 1932. The magnet was installed in January 1932, and the 27-inch cyclotron first operated in June of that year, 1932. Six years later, the magnet poles were enlarged and the 37-inch cyclotron was installed. In the crew record for November 10, 1937, I found the following poem by Martin Kamen. And of course, Martin Kamen is the man who, with Sam Ruben, discovered carbon-14. And he also liked to write poetry of this type. The cyclotron is a noble beast. It runs the best when you expect it least. Of all the pleasures known to man, the greatest is a good, tight can. By the can he meant the vacuum tank; that's what we called it. And you remember what I said about the misery of leaks, because it was really a misery. You'd spend the whole night trying to find a leak, and then you finally get it fixed, and then the wax would suck in, and you'd have to start all over again. In this building, which you see here, there was a large room for the cyclotron and its controls. There was an open court for transformers and switch gear. There was a machine shop and some office space. Whenever there was trouble with the commutator or the generator that supplied the magnet current, I was called in to fix it. I was considered an expert at soldering with the torch in those days. One time, I remember, Franz Kurie, when he was starting the motor generator, threw in the switches in the wrong order and blew out the lights in all of Berkeley. That building was a scene of frustration and elation, of human as well as scientific drama. Many anecdotes have been told about happenings there, like the time that Ernest Lawrence fired Bill Baker, and another occasion when he fired Bob Wilson, only to recant and take them back again. But on the whole, relations were remarkably harmonious, considering the many different temperaments of the people. After the war, the first tests of the synchrocyclotron principle were done here in this building, and then Melvin Calvin did his pioneer work on the carbon cycle in photosynthesis. Now we have slide four. That's another view of that same building, taken in 1959. It's being demolished.
Demolition is proceeding in the direction which would have been toward you in the last view, and not much is left of the building. I am standing there, that's me, sadly viewing the end of an era. Later, Crocker Laboratory had to go too; the chemistry department needed space for more buildings. So that was indeed the end of an era. Well, it was in that old building that the whole business of nuclear physics with cyclotrons, with circular accelerators, got started. Slide five. This is the 27-inch cyclotron in 1932. The vacuum chamber, which you see in the middle, sits between the poles of the magnet, and it's all covered with wax. Everything was waxed together in those days. A stovepipe going up in front, a pipe, has a wire strung down the middle, which carries the collected beam current to a galvanometer on the control table, which is out of the picture on the left. And sticking out toward you in the front of the vacuum chamber is the linear amplifier built by Malcolm Henderson, which was used to count protons and alpha particles. The magnet windings were cooled by oil in those big circular tanks. They were full of oil that was circulated by a pump, and there was always oil all over everything. One time, Louis Alvarez neglected to close a valve after turning off the oil circulating pump, and a whole tank of oil ran over and went through the cracks in the floor into the basement. I remember that was a very dramatic incident by a Nobel Prize winner. Slide six. This shows Ernest at the other side of the cyclotron, also in 1932. That photo has its own date; it's written on that hydrogen tank in front. You can't read it here, but in the original you can read it. Behind Ernest is the oscillator that supplied high-frequency power to the cyclotron. You can see that in this picture it uses a commercial vacuum tube, but these were expensive, and so for quite a while we used homemade tubes designed by Dave Sloan, which were demountable. They had a wax joint so that you could take them apart and change filaments. Ernest was recognized as one of the world's great experimental physicists, but he was not particularly adept with his hands and contributed his share to the breakage of apparatus, as did all of us. When some delicate task was to be done, he would turn to someone else and say, here, you do this. It was his ideas and enthusiasm that were the important things. Now, the next slide. This is Dave Sloan with his X-ray tube, which was essentially a Tesla coil on a vacuum tank. This X-ray tube was actually the first apparatus installed in that building. As I told you, the big magnet didn't go in until 1932, but this went in in 1931. Dave was very important to the laboratory. He could build anything and was full of ingenious ideas. He built large oil diffusion pumps when such items were not obtainable commercially and made demountable oscillator tubes in which the filaments could be changed by taking apart a wax joint. One time he tried to make a diffusion pump using bismuth vapor. This did not work very well, but it was an interesting idea. He is still active at Physics International, working with high current accelerators, a natural continuation of what he did here. Next slide. Here is another side of the laboratory, the machine shop in the old radiation laboratory. Without shops, the laboratory could not operate. We used our own shop, and also the shop in LeConte Hall, the physics department shop, and large jobs were sent out to commercial shops.
In this view, on the left is George Kraus, and on the right is Eric Layman, working on a cyclotron tank, or at least looking as if they are contemplating working on it. Sitting in front are Don Cooksey and Jack Livingood. Don Cooksey was very important as a general helper and organizer in the laboratory, and he made the shops one of his primary concerns. Jack Livingood, the great hunter of radioisotopes, is in the corner. Three men who worked in that shop in the early days, Don Stollings, Jack Cole, and Paul Wells, are still with the laboratory. Next slide. This shows Art Snell, Franz Kurie, and Bernard Kinsey, who were, I think, in the Strawberry Canyon pool when this picture was taken. I was almost tempted to say they were at the Bod Shocking pool, but the background is not exactly right for that, and the time is not right for that either. Art Snell came from Montreal in 1934, later went to Chicago, and is now at Oak Ridge. He was famous as the poet laureate of the laboratory. He would make limericks for all occasions. When Lawrence was awarded the Nobel Prize in 1939, he sent a wire that said, congratulations, your career is beginning to show some promise. He also built an oscillator, and he discovered radioactive argon, among other things. Franz Kurie, the man in the middle, seems to be giving a Tarzan yell, but he was actually a very gentle person. He introduced the cloud chamber technique into the laboratory. He made measurements of the energy distribution of beta rays and invented a method of presenting the data for beta ray distributions that made it easy to determine the upper limit of the energy. This is now known as the Kurie plot, and it has been widely used. In an investigation of the disintegration of nitrogen by neutrons, he found some unusual tracks which could be interpreted as being due to the capture of slow neutrons and the emission of protons, resulting in the formation of carbon-14. This observation of Kurie's served as a clue in finding the best method of making carbon-14, which, as you might guess from what I have said, is the capture of slow neutrons by nitrogen. For quite a long time I had a bottle of ammonium nitrate sitting near the cyclotron target, hoping eventually to separate out the carbon and see if it was active. This bottle got knocked over and broken, and I never put one back. People considered it to be a nuisance, and some were even afraid that it might explode. There had been some large explosions involving ammonium nitrate, but I don't think a small laboratory bottle was that dangerous. When carbon-14 was eventually identified in carbon bombarded by deuterons, Kamen and Ruben then tried neutrons on nitrogen, and they never went back to the carbon bombardments. The yields were smaller and the active carbon was diluted by all the ordinary carbon. Franz Kurie later was the director of the U.S. Navy Radio and Sound Laboratory in San Diego. The third man, Bernard Kinsey, was a Commonwealth Fellow from England. He built a linear accelerator for lithium ions. There are many stories about Bernard. He had a high temper and a very complicated, colorful form of swearing, really a high art. He was here at this celebration, not this celebration, but the one where I gave this talk first, and perhaps he might have been persuaded to give us an example. There was another Commonwealth Fellow at the University named Brown, who was probably the laziest man I ever knew.
I don't think he ever did anything. I saw him around the faculty club, where I was living at the time. He obviously was not in the laboratory. Ernest would have thrown him out. Now we come to slide 10. This is the Crocker Laboratory that I mentioned earlier. The old Radiation Laboratory is off to the right, across an alley, and the 60-inch cyclotron resided in the high bay at the rear of that building. This was called the medical cyclotron, but as I have said, others used it. It went into operation in 1939, giving deuterons of about 9 million electron volts. Under the supervision of Dr. Joseph Hamilton, it was used extensively for making radioisotopes for medical and tracer uses. And now the next slide. Here is the 60-inch cyclotron, with Don Cooksey and Ken Green. You see that it's much neater looking than the earlier cyclotrons. Bill Brobeck, who was our first engineer, had had his influence. The structure projecting at the right was a pair of tanks that held the dee stems, which formed a resonant system. The oscillators were on the balcony at the right. You'll notice the coil of heavy cable at the top. This, that coiled-up stuff up there, carried high voltage to the deflector plate from the rectifier built by Ed Lofgren. The reason for the coil is that high-voltage cables usually fail at the ends and are very hard to splice, so the coil gave plenty of slack for making repairs. Next slide. This is looking through the window into the control room of the 60-inch cyclotron. You see Bill Brobeck, our engineer, on the left, and Bob Wilson, who's smoking his pipe. Wilson, of course, now is the director of the Fermi Laboratory at Batavia, Illinois. And then there's Ernest Lawrence and a couple of other characters; it says here one of them is me, and the one behind, I don't remember who that was. Bob Wilson follows; well, that's not the point of this. This temporary setup, which mars the neatness of the control table, was a breadboard model of an automatic magnet current regulator that was being tested. Next slide. This shows a group of people. There's a man on the left, I don't know who that is. Then there comes Ernest Lawrence holding the manuscript. Dale Corson, a physicist who is now president of Cornell University. Winfield Salisbury, our electronic genius, and Louis Alvarez, who is one of the laureates. Corson participated in the discovery of astatine. Salisbury has had a distinguished career in the academic world since leaving the laboratory. He made very valuable contributions to radar countermeasures during the war. Louis, as you know, went on to win the Nobel Prize in physics and so on. Next slide. This shows John Lawrence, the brother of Ernest Lawrence, taken in 1936 with rows of mouse cages in the background, which is a proper setting for a biomedical researcher. I will not say any more about the biological and medical research, which should be covered by another speaker on the program this was on. Slide 15. Again, there are mouse cages, this time with mice in them, but the date is later, 1939, and the person is different, Dr. Joseph Hamilton. Dr. Hamilton had a setup in Crocker Laboratory where he worked with radioisotopes in medical and biological studies. His work was quite pioneering work in tracing the paths of the heavy elements in animals and in man. Joe's lunch table at the faculty club was noted for the interesting conversations on many subjects.
We remember that he had a special table, a sort of Stammtisch, and I used to sit there and we discussed everything. Next. All was not hard work. We had fun too. There was an Italian restaurant called Di Biasi's in a small town near Berkeley. The Di Biasi's parties were famous yearly affairs in the laboratory, and that was when we would let off steam and have fun. Here is Paul Aebersold, the physicist who worked with the biologists and the medical people in the measuring of dosages and setting up of patients and so on. Paul Aebersold is the one holding the cake there. He had an irrepressible sense of humor and was also the master of ceremonies. The party in 1939 was in celebration of the 60-inch cyclotron. Paul was presenting a cake in the shape of a cyclotron with the words 8 billion volts or bust. That was supposed to be a wild exaggeration, but the Bevatron had not been invented yet. Remember, though, that just a few years later we had 6 billion volts, which is almost this number given then as just an impossible exaggeration. Lawrence is in the left foreground, and the man in the middle foreground is Sten von Friesen, one of our visitors from Sweden. Next slide. Also at the same party, the man on the left is Martin Kamen, looking puzzled about something. Then there is Sten von Friesen next. Bob Cornog, who worked with Alvarez in the discovery of hydrogen-3. Then Ken MacKenzie is in the left background; it's a little dark to see on this slide. On the right there, in the background, is Mrs. Lawrence, Ernest's wife, flanked by two distinguished visitors, Vannevar Bush on our left and Alfred Loomis on our right. Alfred was a great friend of Ernest and the laboratory and helped in many ways. Next slide. That's Lorenzo Emo Capodilista, who was the count from Italy that I mentioned, one of the colorful characters of the early days. He came to the laboratory in 1935 and stayed several years. He did not use the last name, Capodilista, which means head of the list, which is apparently a name of very great antiquity in Italy. He was a very fine fellow. Next, slide 19, is Charlie Litton, the man who came and helped us in many technological aspects and whose name was used in connection with Litton Industries. He's working with a glass lathe which he made himself. The main thing is, his original product was glass lathes like that. Next slide. This is Maurice Nahmias from the Joliot-Curies' laboratory in France, posing with a vacuum chamber for the 37-inch cyclotron in 1937. Next slide. That's Henry Newson, who came from Chicago in 1934 with a PhD in chemistry. I think he fits in very well with this group here. He came as a PhD in chemistry but became a physicist. He did some very ingenious experiments using recoil of artificially produced radioactive nuclei. This picture was taken in 1938. Next slide. This is Ernest and Molly Lawrence with their first two children, Eric and Margaret. They ended up having six, but these were the beginning. I'll take another step, to the Crocker Lab in 1939. Next slide. This is Ernest and Molly Lawrence writing the script for a movie about his Nobel Prize, which was in 1939. He's simply using the fender of a car as a desk there and writing the script. Now let's go on to the next slide. That is Lee de Forest, the inventor, the man who put the grid in the vacuum tube, who visited the laboratory. We had many distinguished visitors in the laboratory and I included two shots. There's de Forest.
Now you can show the next slide, which is Diego Rivera with Lawrence. Diego Rivera, of course, was the Mexican mural painter, and he came to San Francisco and painted a mural on the wall of one of the buildings there. I remember going over and watching him working on it. We're coming to the end. Next slide. Well, that'll do. That should be 90 degrees around, but it'll do. That's one of the original 1934 Lauritsen electroscopes that I built when I first came to the laboratory in '34 and used for that early work. By some strange miracle, two of those things survived. They still exist. They still even work, and I put in a picture of one of them. Next slide. That's Glenn Seaborg on the occasion of receiving his PhD in 1937. That gets in ahead of the cutoff date of 1940 for this thing. The next slide is me. That was taken at the press conference held in Crocker Laboratory on June 8, 1940, the time of the discovery of neptunium, the first transuranium element. They took a picture of me really making like a chemist there. Next slide. I found this slide in the archives and I couldn't resist putting it in to end the slide show. I call it On the Beach, somewhere on the Sacramento Delta. John Lawrence, Paul Aebersold and I are enjoying the sun with some girls. Now if we look at that a while, maybe the sun will shine here, and at this point I will end.
|
There is a set of physicists who have been rewarded with the Nobel Prize in Chemistry and Edwin McMillan belongs to this set. This has to do with a tradition of the Nobel Committees of the Royal Swedish Academy of Sciences that became established at the beginning of the 20th Century, that work done in the field of radioactivity should be regarded as belonging to chemistry. Thus, already in 1908, Ernest Rutherford received the chemistry prize and expressed his consternation that he, a physicist, should be rewarded in this way. But when McMillan received his prize in 1951, primarily for the discovery of the radioactive element with atomic number 93, neptunium, the tradition was well established. For his lecture in Lindau, his second and last, McMillan chose to give some historical reminiscences from the laboratory founded by Ernest Lawrence, the inventor of the cyclotron and a Nobel Laureate in Physics 1939. The laboratory was originally named Berkeley Radiation Laboratory and was founded in 1931 to house Lawrence’s cyclotron and other radiation generating machines. McMillan arrived there already in 1934 and tells a fascinating story of the people and the work done there. After the discovery of the neutron, the 1930’s became a hothouse for nuclear physics, at least in terms of the discovery of “new” elements and isotopes through neutron bombardment of “old” elements. This also became a natural cross-disciplinary area, where physicists and chemists worked together, the physicists producing the new elements and the chemists studying their properties. A classical and much discussed case is the close collaboration between the physicist Lise Meitner and the chemists Otto Hahn and Fritz Strassmann, resulting in the discovery of nuclear fission, for which Otto Hahn alone was rewarded with the 1944 Nobel Prize in Chemistry. In 1951, though, both the physicist and the chemist were asked to come to Stockholm to receive the chemistry prize! Anders Bárány
|
10.5446/52565 (DOI)
|
[The recording of the opening section of this lecture is largely unintelligible.] ... It's derived from accurate and detailed observations and experiments, most of which can be verified by anybody who will take the necessary trouble. [A further long passage is unintelligible in the recording; it includes a reference to J. D. Bernal.]
[An unintelligible passage, which refers to the founding of the Royal Institution in London in 1799 by Benjamin Thompson, Count Rumford.] The first two directors of the Royal Institution, Sir Humphry Davy and Michael Faraday, were each in their own way quite outstanding research workers. If we go back further, we find most of the important experiments were done by mining engineers, medical men, apothecaries, clergymen and college fellows. [A further passage is unintelligible in the recording.] Robert Boyle was somewhat scornful about the alchemists' ideas on the elements, and eventually he gave his own empirical definition of a chemical element, and that definition has survived. [The remainder of the passage, which concerns the alchemists and the metals, is unintelligible in the recording.]
[A long passage is unintelligible in the recording.] ... Otto Hahn's work on the supposed transuranic elements, about which, as Graf Bernadotte mentioned, some of us heard a remarkable lecture in this very theatre. In fact, important discoveries have to be made in a sequence which cannot be altered very much. Progress in one branch of science can only follow previous progress in another. And scientists are liable to become prisoners within a confining set of concepts out of which it's not too easy to break. T. S. Kuhn has dealt well with these matters in his book, The Structure of Scientific Revolutions, and I'll not go further into his concept of paradigms except to say that people who break out of these intellectual prisons have often been thought of by their colleagues as interlopers or amateurs, which has usually meant that they've had experiences in quite other fields of science or everyday life which caused them to see the problems a different way round. The recognition of nucleic acids as the carriers of heredity and the determinants of ontogeny is a wonderful story of this kind from our own lifetimes. But now for the commoner human failings. Work is often repeated because the research worker is unaware that it's been done already. Willstätter has made this point well, with its converse, which I've just mentioned, that too much familiarity with the literature inhibits creativity. [A quotation in German follows; it is largely unintelligible in the recording.]
[The quotation and a further passage, which mentions Tswett, are largely unintelligible in the recording.] I feel that in my own search for simple peptides in plants from 1950 to 1965, I became, in just that way, immersed too narrowly in protein biochemistry. Wider reading and wider acquaintanceships would have shown me much sooner that I was really studying coupling products of amino compounds with polyphenols and quinonoids. It's much more serious when an important discovery is forgotten or suppressed. Mendel's work on hereditary variation, published in 1866 in a relatively obscure journal, was nevertheless quite widely discussed at the time. But it seems that the basis in cytology for understanding its significance had yet to develop. Meanwhile, Mendel's work was gradually forgotten and had to be rediscovered around 1900. The failure of organic chemists and biochemists to take up Tswett's well-published discovery of chromatography from 1903 or 1906, to take it up at all widely until nearly 30 years later, has been put down to Willstätter's tendency to minimise Tswett's work and to present himself as the ultimate authority on plant pigments. But the failure was partly because chromatography yielded such small quantities of product that microchemical analysis was needed for these, and that was only widely adopted during the 1920s on the basis of Coulman's and Pregl's work. At any rate, Willstätter made handsome amends by drawing the attention of Richard Kuhn and Edgar Lederer to Tswett's methods when they began to get involved with carotenoids about 1930. Since then, chromatography has steadily increased in favour with analytical chemists. But perhaps the most outstanding lapse by neglect of such a kind was the failure of chemists to pay proper attention to Avogadro's hypothesis, which was published in 1811. And it wasn't until Cannizzaro forcefully reminded chemists about Avogadro's ideas at Karlsruhe in 1860 that proper ideas of molecules and atomic weights began to be generally accepted. And only then could Kekulé, Le Bel and van't Hoff begin to formulate their ideas of molecular structure, the importance of which Professor Canitz mentioned in his introduction this morning. The sheer volume of scientific publication nowadays increases the chance of lapses through pure ignorance occurring; that is the big task ahead for the information scientists. The advent of microfiche journals such as Chemie-Ingenieur-Technik and the new Journal of Chemical Research may lead to higher standards in primary publication. And then the abstract journals: some of them have greatly improved their indexing arrangements. There's the Science Citation Index, which has its special uses. And there are all kinds of partly realised or potential applications of electronic computing. I think that the information people have served science well so far, and particularly they have served chemistry well. But they can't afford to be complacent.
There's a big task ahead of them. Wastefulness by duplication of research need not be accidental or unconscious. The bandwagon effect is well known. An outstanding example was the study of the nucleotide codes for protein biosynthesis, whose results on one occasion were being published in a New York daily newspaper. Cavalieri has even accused research workers in the so-called genetic engineering field of rushing ill-advisedly ahead into dangerous experiments in order to get Nobel prizes. But competitiveness is not the only reason for unnecessary duplication. People who lack self-confidence and originality come to feel that to be trendy is to play safe. You have only to look at the research topics in biochemistry for which the British Science Research Council has made grants to university scientists to be brutally reminded of this phenomenon. I think some of the people who apply for these grants should really think more seriously about the direction in the longer term into which their researches are leading, or how their researches are going to mesh with social requirements in a generation's time. Another way of playing safe is to apply some routine, if possible prestigious, technique as a substitute for thought. Ever since automatic amino acid analysis was made available by Spackman, Moore and Stein, these fairly expensive analytical machines have been churning out results of dubious relevance to most of the physiological and pathological problems on which they are employed. But the results are publishable and money is seen to be spent. Of course, the limiting case of trendy research was in biology in the USSR, and it was the doing of T. D. Lysenko and his political promoters. In this case, research workers with alternative approaches were physically removed from the scene, and the whole world is the poorer for what these people and their pupils would have achieved in the last 35 years. Well, now that one or even as much as 2% of the national income of most of the developed countries is being devoted to scientific research, it's easy to understand popular demands that these expenditures should not be wasteful. I suppose it was mainly because of what happened during the Second World War that the Baconian proposals in Bernal's book came to be implemented over-quickly and therefore not always wisely. Of course, comparable fractions of national effort are laid out on collecting taxes, on advising taxpayers how to pay less tax, and on lots of other non-productive services of varying popular esteem. Nonetheless, scientific research still tends to be judged very much in terms of fruit rather than light. And just personally, I'd like to say that I get much more uplift from colour photographs of the surface of the moon and of Mars, which are extremely expensive to obtain and involve a lot of research, than I do from the objects of abstract art which you can see in modern galleries. But let us think now in purely economic terms about which aspects of scientific research could be considered wasteful. There's a tendency among financiers and economists to measure outgoings on research as a fraction of turnover or of the value of the product. This is implicit in some documents such as the Rothschild report in Britain. British expenditure on research and development relating to different branches of industry has been compared with corresponding figures for Japan by Sir Ieuan Maddock, and he's demonstrated some striking contrasts.
Japanese expenditure on the different branches of industry is fairly evenly adjusted to the value of the product, whereas British expenditure is very unevenly balanced, with 47% on the aircraft and electronic industries, which together account for only 6% of the value of total industrial output in Britain. Of course, it could be argued that special help is needed by industries in a phase of rapid development, but now that the Concorde fiasco is reaching its consummation, I think we should look elsewhere for an explanation of the British imbalance. And I would say this has been promoted as a covert addition to military research expenditure, which I'll say something more about soon. To the extent that the British Agricultural Research Council has adopted a Rothschild or Japanese approach to financing different lines of research in relation to the value of annual output of each commodity on British farms, they could be said to be playing safe. They're helping farmers to overcome their current problems and opening the way to new and improved practices in the future. Their Secretary has pointed out that with farming production in an overall decline and farmers short of investment capital, the farmers are only in a position to use some of the aids to production which arise from such research. Thus, a newly bred plant variety will sweep the country, whereas an improvement in livestock husbandry, involving perhaps extra fencing and new buildings, will not be implemented for lack of capital resources in the hands of the farmer. Fruits of research may thus not materialise just when society needs them most. Just as farmers can be sold new plant varieties, doctors can be sold new drugs. Drugs are products requiring relatively little capital expenditure on manufacture by the chemical industry at a time when the profitable employment of capital on heavy organic products such as plastics is threatened by rising costs of raw materials and energy, static demand, and a sense also of having reached an inherent limit in the likelihood of discovering any new plastic material that will be much superior to those already in existence. One consequence is that doctors are inundated with an ever-changing variety of drugs, many of which are not essentially superior to others already in use. They may in fact have been devised by Firm A simply to circumvent a patent belonging to Firm B. The amount of wasteful medical research engendered by such sparring around patents is obvious from a look at the patent literature relating to drugs, and we have the occasional thalidomide episode thrown in. Moreover, the attention both of doctors and of medical research workers becomes concentrated on drug therapy to the detriment of other therapeutic approaches, such as those dependent on immunology, manipulation, dietary and spa regimes, exercises and so on, all of which have been in an unfashionable phase now for 40 years or more. While on drugs, it's interesting to compare and contrast the approaches of different sections of the industry. In chemotherapy, we could start with Prontosil rubrum, Domagk's notable discovery. It turned out that it was not the pigment moiety of the molecule that was effective, but the sulphanilamide grouping, and hence the development of the whole series of the sulphonamides. Woods then discovered the antagonism of sulphonamides to the bacterial vitamin para-aminobenzoic acid. Hope appeared before us that by intensifying the study of bacterial chemistry, new drugs could be developed in a rational way.
Well, that's now nearly 40 years ago, and that hope has not yet really materialised at all. At present, there seem to be three general approaches to developing new drugs. You have chemical variations on an already promising theme, and then you have the empirical approach of trying every new synthetic or natural compound for every conceivable purpose in agriculture, industry or medicine. That's an approach that tends only to be economic for a large firm with its fingers in many different pies, such as the British Imperial Chemical Industries. A third approach is to keep on following a number of imaginative or half-baked ideas in the hope that a small proportion of them will yield successes. Such an approach in the firm Hoffmann-La Roche led to the synthesis of Librium and Valium, which turned out not even to be the substances that it was intended to synthesise, but they have been a winner for the firm. It has emerged from the various lawsuits in which that firm has been involved that it has an enormous research expenditure and that it's very far from frequent for anything as good as Valium to be discovered. Can we say with any certainty whether or not such research is wasteful, particularly if most of it is eventually published, either in the patent literature or in scientific journals? The story of penicillin is fairly widely known, but it's worth recalling that during the Second World War, when the natural product from the mould had proved promising for the clinical treatment of bacterial infections in the hands of Florey and Chain, the British government imposed official secrecy on research into its chemistry. They were no doubt hoping that an efficient chemical synthesis would be found so as to make the drug available for mass treatment of war casualties. Even at the time, the official secrecy had seemed doubtful in relation to the Hippocratic Oath and to the Geneva Conventions. But anyway, very many good organic chemists were drafted onto the problem. And with some crystallographic help, they did find out the structure of penicillin. But to this day, no complete chemical synthesis exists which is more efficient economically than biosynthesis or partial biosynthesis by the mould. The collective book on penicillin chemistry, which was published not long after the war, indicates that to determine the structure of penicillin took up more person research hours than had ever been devoted to any other organic compound of comparable complexity. So bureaucratic ordering up of research under conditions of secrecy, rather than the inherent difficulty of the problem, must be the explanation. Of course, during the war, conventional economic wisdom did not count for much and things got done because individuals or society judged them to be important in their own right. And that brings us right on to military research, which by its nature is done under conditions of secrecy. We have our suspicions from the scale of the expenditure that it may often be wasteful of research effort in achieving the ends contemplated. On the other hand, some of its results have been extremely spectacular. We can take as instances radar, nuclear bombs, rockets, artificial satellites and space travel. And here we've come the whole distance, and the wastefulness or otherwise of the research will be determined primarily by the uses to which society puts such inventions.
We can go on to argue that the overwhelming mass of research is directed by present-day societies too narrowly towards immediately profitable commercial activities or to military purposes. If scientific research were being directed towards truly long-term human interests, more attention would be being paid to such matters as finding novel and better catalysts for chemical engineering, improved capture of solar energy by conventional or novel means, better understanding and then improvement of tropical soils, study of the short- and long-term determination of climates, and a number of other similar neglected topics. But it's not my job today to assess research that is scarcely yet being done. And in conclusion, I just want to go into an embarrassing and potentially dangerous aspect of the growth of a substantial class or hierarchy of scientific research workers. That was something which began during and immediately after the Second World War, and it's persisted up to about the beginning of the present decade. With the cessation of exponential economic growth and with increased popular criticism of our activities, we feel our positions and the prospects of our students are threatened. In the new medieval setup which we have, in which the real power is exerted by the three estates, which I will name as the bureaucrats, the financiers and the trade union leaders, we try to carve out for ourselves a niche as a fourth estate. No better example of work by our estate on its self-preservation can be given than in the industrial application of nuclear fission to the generation of electric power and similar purposes. As Professor Barton pointed out, more than enough nuclear bombs have been produced for all conceivable political and military purposes. But rather than show once again the versatility of the many scientists who turned from other branches of science to military enterprises during the war, the nuclear establishment had to continue its own expansion, building up unsolved waste disposal problems for future generations without any outstanding economic achievements in the present or promising economic prospects for the near future. A glance at the energy sections of Chemical Abstracts gives an idea of the predominance of scientific effort devoted to applications of nuclear fission, whereas the prospects for advance in economic energy utilisation seem to lie in better understanding of photochemical and photoelectric phenomena, in improved utilisation of solar energy through the green plant, and perhaps also in controlled nuclear fusion. None of these topics receives more than a tiny fraction of the attention devoted to nuclear fission. And then there is this consequence of indiscriminate developments in the nuclear fission, metallurgical, chemical and agrochemical industries, determined mainly by profitability: they have raised many problems of pollution and consequent poisoning. Some of these may be genuine, and here I bow to Professor Barton, but they're seized upon by our fourth estate to promote its own interests, and the media assist by playing upon genuine and understandable concern felt about these matters by the lay public. Scientists are quite good at taking in one another's dirty linen to wash, particularly if jobs for the boys can be generated thereby. Topics other than chemical pollution can also be taken up. Just as illustrations, let me mention two problems, each of which deserves a whole lecture to itself. One question is: do nitrosamines in European diets promote malignant disease?
A second one: what are the roles of various major dietary components, particularly alcohol, sugars, cholesterol, saturated and unsaturated fats, in relation to heart and arterial diseases? Here we're only talking about normal dietary constituents, because nitrosamines can be formed during the traditional storage or cooking of many ordinary foods. And the evidence showing any of these substances to be dangerous as they're ordinarily consumed is pretty thin. Yet publicised ventilation of such questions can be made to generate research jobs even from governments which are feeling financially embarrassed. And at the same time as generating research jobs, we tend to generate a health and safety neurosis in people who are actually enjoying such good health and safe living conditions as never before in history. And I think activities of that kind are alarmist as well as being wasteful. Well, every advance of science, by discovering new phenomena or substances, makes possible experiments which couldn't have been envisaged previously. So scientific research is in principle infinite, and it behaves like the sum of a geometric progression. The problems which research can generate and then solve are infinite, whereas the human resources to carry out research are limited. There will be a time when people in general will be much better educated and also more contented with their material circumstances, whatever those are. They may even want to engage in research as a recreation, as did gentlemen in days gone by, and wastefulness will cease to be an issue. But that's for the future. Today, the last thing we should allow is for the so-called economic crisis to be used as a pretext for further damage to our educational arrangements. As to research, it may well be inadvisable to extend the sum total of publicly financed research activity, but nevertheless, the crisis should provide us with a stimulus towards working out which particular lines of research can best help towards getting us out of the crisis, whatever we may consider that to be. And with this end in view, research workers can help by taking more initiatives and proving themselves a good deal more adaptable. Thank you very much for listening.
|
Richard Synge was a frequent visitor and lecturer at the chemistry meetings in Lindau between 1958 and 1980. Until the present lecture, he kept his topics to chemistry, but this time the theme was more general. It is a theme, which touches upon every science policy maker, scientist and science student, always! So it is still topical and may be even more so today than it was in 1977. The reason is that Synge has a strong opinion on how research should be organized and funded and does not seem to shy away from trampling on some feet while bringing his point home. The lecture starts with an interesting historic overview of research organization through the ages. Synge names the French Academy of Sciences at the time of the revolution as the “villain”, the first research organisation trying to order scientists to produce specific practical results. He doesn’t like this way of producing science and gives a number of examples where it went wrong. His message is that scientific progress usually comes in small steps. This means that when politicians and science policy makers try to make scientists find a rapid way to solve a given problem, the result is usually inefficient and produces only what Synge calls “wasteful” research. One example that he gives is a (named) medical company trying out every possible combination of drugs instead of, as another (named) company, funding research on the mechanisms of the drugs. Synge’s visit to Lindau occurred in the middle of the two oil crises of the 1970’s, a time when the economies of the western world were shaken. A message, which must have been well accepted by the primarily young audience, was that politicians should not let the economic crisis influence the funding of education. Hear, hear! Anders Bárány
|
10.5446/52567 (DOI)
|
Our German hosts and fellow colleagues and students, I'm very grateful to be with you. I'm coming directly from a much warmer climate, about which I hope to speak to you a bit at the end of my lecture, and I'm suffering under the influence of your beautiful weather here. The study of an epidemic of a new disease, new to Western medicine, a chronic, progressive, always fatal central nervous system disease in a remote population isolate, in people still in a Stone Age culture in the central highlands of New Guinea, who ate their own dead as a rite of respect and mourning for their nearest kinsmen, has led us to two major discoveries. The first, already mentioned in the over-generous introduction, was the discovery that chronic, progressive, even at times totally non-inflammatory, and even apparently heredofamilial diseases could be caused by slow-acting virus infections, even years and at times several decades after original infection and a long, silent incubation period. The second discovery was that of a new group of microorganisms, which cause human diseases and which possess such unusual physical, chemical, biochemical, and biological properties that many of our colleagues prefer that we not call them viruses. They are so very different from all other microorganisms that they clearly represent a new type of replicating agent, which we have tentatively decided to call unconventional viruses. The disease in question that led us to these discoveries is called Kuru, a now disappearing plague of the Fore people and of ten other cultural and linguistic groups in the central highlands of eastern New Guinea who intermarry with them. They all inhabit a high mountainous interior of the eastern highlands, living at from 2,000 to 3,000 meters above sea level. By opening the bodies and the skulls of their dead kinsmen, they contaminated the skin and eyes, principally of the women who performed the mourning ritual and of the young children with them. This led to a fatal brain disease appearing 5, 10, even 20 or 30 years later. It became the most common cause of death among them. In fact, in the first 10 years of our total surveillance of all the population involved, it accounted for over 50% of all deaths, and among adult women, 90% of all those who died in the 15-year period of surveillance died from Kuru. We have now discovered that the disease first described in Germany a half century ago and named the Creutzfeldt-Jakob type of presenile dementia, after the two German neurologists who first described it, is a worldwide disease of at least tenfold and probably a hundredfold the incidence that we attributed to it only a few years ago. It is caused by a virus indistinguishable from that of Kuru, but whereas we know the mode of transmission of Kuru, from the contamination of the skin and mucous membranes through the ritual rite of cannibalism, we do not know the usual means of transmission of our civilized form of Kuru throughout the rest of the world. We now know that these two unconventional viruses of man are similar to two such viruses of animals: scrapie, Traberkrankheit auf Deutsch or la tremblante en français, and a disease called transmissible mink encephalopathy of mink. We call the group, after the nature of its neuropathological lesion, the subacute spongiform virus encephalopathies.
The same resistance to inactivation by ultraviolet radiation of 2,540 angstroms, or 254 nanometers, the wavelength which is the peak absorption wavelength for RNA and DNA, has led to the prediction that the Kuru and CJD, or Creutzfeldt-Jakob disease, viruses and the scrapie virus possess no nucleic acid. This heresy has been further promulgated on the discovery that these unconventional viruses also exhibit an enormous resistance to ionizing radiation, using either gamma rays from a cobalt-60 source or a neutron beam. We're doing this work with Latarjet at the Fondation Curie in Paris as well as at Oak Ridge, and we have repeated the work of our British colleagues, Haig and Tikvah Alper, and confirm this resistance, which is in the order of 100 to 1,000 times that known for any other microbial system. In fact, the equivalent target size for a sphere with such unusual resistance to ionizing radiation, unspecific though the damage may be, would be calculated to be under 150,000 daltons, a molecular weight which is only one-tenth the size of the smallest genome previously known to virology. Finally, intensive study of these viruses for over a decade has failed to reveal any antigenicity, and in both the natural diseases and the experimentally induced diseases in laboratory animals, there is no evidence that the victims develop any immunity or that the immune system plays any role whatsoever in the disorder. The humoral and the delayed hypersensitivity types of immune response, B-cell and T-cell function, remain intact and unstimulated. There are no immune complexes formed and no immune complex deposits found on glomerular basement membranes or in the choroid plexus. These thus are the first mammalian microbial pathogens which apparently contain no non-host protein in their structure. Instead, they seem to be minute, messenger-RNA-sized pieces of DNA tightly bound to host membrane systems. And indeed, the demonstration that they are in fact DNA is very tenuous, based on only one unconfirmed, very recent experiment now in press. The only structure we are left possible for them at present remains a tightly membrane-bound particle of size no larger than 150,000 daltons, coding probably for none of its own enzymes or any core or coat proteins. A virus without core or coat leaves those of us in structural virology most bewildered. It is from recent work in plant virology that we find succor and the hope that we may remain true to the basic tenets of our current DNA-RNA religion and not have to retreat into the previous speculations of replicating membranes devoid of nucleotide structures or replicating basic proteins. But I must warn you that the very same workers who with no difficulty whatsoever quickly established the full sequence of Phi X 174, the smallest virus of bacteria yet sequenced, have been six years on this problem, from Caltech and North Carolina out to Germany and England, and have failed yet to establish unequivocally that a nucleic acid is involved in this whole system. That we are dealing with viruses, and viruses that cause human disease, is very clear indeed. They grow in vitro in tissue cultures of infected cells obtained by primary culture of the spleen, liver, lymph nodes or brain of experimentally infected animals, or of humans or animals with the natural diseases.
Other cell lines such as L cells have been infected in vitro, with the need of causing fusion to get the infection started, and the perhaps hazardous procedures of transforming both human and animal cell lines with the SV40 genome, so that they bear the tumor antigen and still simultaneously carry the replicating genome of the Creutzfeldt-Jakob and the Kuru agents in different cell lines, and now the scrapie agent, have all been successfully performed. We are dealing with virus-like agents. We are dealing with a new group of microbes which might better be named by another term, and as usual the more fortunate plant virologists have preceded us and found that they must also revise their basic microbial conceptions in a group of agents which are the smallest of the plant viruses. They prefer to call them, following Diener, the man who has defined them most thoroughly, viroids. I will talk about them later, and with that introduction I will start on a series of slides. If I get through them on time I will then try to take you on a historical travelogue to where this current-day molecular biology began. So if I pay somewhat short shrift to the rather lavish and hardly credible claims I have made, I beg apologies from my molecular biological colleagues. To make it a little easier to follow the terminology of a discipline of neurology and neuropathology notorious for the use of impossible eponyms and impossible and almost meaningless descriptions based on the neuropathological lesion, I will leave you rest assured that we have introduced only one new name into medicine, the name Kuru, which is a nice non-Anglo-Saxon four-letter word which has even crept into the Webster Collegiate Dictionary, whereas Lewis has yet failed. I am unable to use any other term for Creutzfeldt-Jakob disease than the long-established eponym. The disease does run by 17 additional, all longer, synonyms in the neurological literature. Scrapie is a spontaneous disease of sheep and goats throughout the world, and mink encephalopathy, we now have good reason, both virologically and epidemiologically, to believe is simply the scrapie infection having spread to mink ranches naturally. In fact, it is known that where it was first seen the carcasses of scrapie-infected sheep had been fed. Next slide please. This slide simply shows the natural diseases on the left, and the experimental diseases in experimentally infected animals, after over a year of incubation, on the right. It shows what is called status spongiosus, or spongiform change, by the neuropathologist, which simply means a sponge-like, punched-out-hole appearance of the gray matter (it is not referring to white-matter spongiosis), and although it was previously thought to be edema and intercellular fluid accumulation, it is exactly not that. It is a swelling of neuronal and astroglial cells based on a coalescence of minute vacuoles which can be seen in their origin by electron microscopy long before they are visible to light microscopists. This is scrapie in sheep and scrapie in mice, Kuru in a child and Kuru in a chimpanzee inoculated with the child's brain tissue, Creutzfeldt-Jakob disease from a brain biopsy in Oxford, and the same process in the brain, two years later, of a chimpanzee inoculated with it. Next slide please.
The same distribution: scrapie, natural and experimental; Kuru, natural and experimental; Creutzfeldt-Jakob, natural and experimental, demonstrating with a specific stain that shows astroglial cells the fibrous astrogliosis which is a dominant part of the pathological picture, and about as severe in this disease and this group of diseases as in any disease we know. On the other hand, these are the only infections of the brain, and I make no exceptions among infections, and I mean to range through the whole range of such things as the protozoa through the viruses, in which there is no perivascular cuffing, no leukocytic invasion of the brain parenchyma, and no sign of a hypersensitivity or an inflammatory response. Next slide. For those clinically minded, next slide please. The phenomenon that these diseases proceed from beginning to fatal outcome with no change in spinal fluid protein and no pleocytosis at any stage of the disease is a further unique finding in fatal CNS infections. Now I'm not going to dwell on the hundreds of studies attempting to purify scrapie: first cesium chloride was used, then calcium and potassium tartrate, metrizamide, as well as sucrose and sucrose-saline gradients. The workers who have done this work are not workers who have difficulty using zonal ultracentrifuges and purifying reverse transcriptases, viral subunits and P30 proteins, or polyacrylamide-gel electrophoresing them, but these same workers have been unable to purify the scrapie agent by these techniques or any other, electrophoretic focusing and the like, because no matter what small portion of maximum infectivity at a density of 1.2 to 1.25 they take, it spreads again, and electron microscopic control of the re-zonal banding of it simply shows that we are trying to purify soap bubbles. The material obtained with enormously high infectivity is, by electron microscopy, a three-layered plasma-membrane-like structure which vesiculates, and the problem is that of concentrating and purifying membranes where the reactive receptor or the reactive macromolecule on the membrane is not known; and although we assume it is a small nucleic acid, all attempts with hot formaldehyde and with detergents to release this presumed genome have failed to obtain any infectious nucleic acid. Next slide. The most disturbing early data, which came first from Compton working on scrapie (we then reconfirmed it using zonal-banded and fluorocarbon-cleaned brain-suspended scrapie virus), was that we obtained linear inactivation curves out to ergs per square millimeter times 10 to the fourth, an energy input which, if you are not familiar with radiation work, would not lead you to realize that the most radio-resistant forms of life we know to 254-nanometer radiation would intersect here at six. They would be the U1 variant of tobacco mosaic and Bacillus radiodurans. This was interpreted first as absolute proof that the agents could not contain nucleic acid. Into the suspensions, if the argument is that they are dirty, not sufficiently cleaned, one puts a large variety of mammalian and plant viruses and bacteriophage. These so-called dirty suspensions have no effect on protection at this radiation. Next slide.
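As an editorial aside on the target-size reasoning invoked in this part of the lecture, here is a minimal sketch of classical single-hit target theory, the usual route from a radiation dose-survival curve to an apparent molecular weight. The empirical calibration constant quoted below (6.4 x 10^11, with D37 in rads) is a commonly used rule of thumb from radiation-inactivation analysis and is assumed here for illustration; it is not a figure taken from this lecture.

% Single-hit target theory: one ionizing hit within the sensitive volume inactivates the particle.
% Hits are Poisson-distributed, so the surviving fraction falls exponentially with dose D,
% and the dose D37 leaving 1/e (about 37%) survivors is inversely proportional to target size.
\[
  S(D) \;=\; e^{-D/D_{37}}, \qquad
  M_r\ \text{(daltons)} \;\approx\; \frac{6.4\times 10^{11}}{D_{37}\ \text{(rad)}}
\]
% A very large D37, i.e. extreme radio-resistance, therefore implies a very small apparent target,
% which is how an estimate of the order of the 150,000-dalton figure quoted earlier is obtained.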
I am not going to dwell on the long discussion this might entail, but here is the same data you just saw; it simply shows that the plant extracts of the plant viroids, these newly described smallest known viruses, are even more radio-resistant, and the purified RNA genome of 115,000 molecular weight of potato spindle tuber virus is of the same order of magnitude of radio-resistance, whereas all conventional viruses fall here. Next slide please. This alone eliminates all other plant and animal viruses from consideration. Studies of very unsatisfactory zonal banding into 55 fractions in a sucrose-saline gradient, the best anyone has obtained (it is on a log scale, so it doesn't plot out as well as the earlier ones I showed you), have failed to reveal any relation with marker enzyme systems, in enzyme activity for three lysosomal membrane-bound enzymes and one mitochondrial enzyme. Next slide please. I am now going to rapidly summarize, probably as fast as you can read, much of what I have said, simply to drive home the fact that these surely are an unconventional type of viral agent. On the physical-chemical side I failed to mention what alone would leave any earlier microbiologist totally skeptical of the integrity of the workers: that a virus or any microbe can be stored in 4% formaldehyde, or 10 or 20% formaldehyde, at room temperature for years without losing titer, namely in embalming fluid, that which we use in pathological laboratories to store specimens, is blatantly ridiculous. It happens to be the fact. We have isolated Creutzfeldt-Jakob virus from material that has been stored for years at room temperature in pathological laboratories in formaldehyde. Scrapie and mink encephalopathy have been taken off paraffin-embedded blocks which were for two years in formaldehyde before they were sectioned and left several years as blocks, and obtained as viable microbes. Beta-propiolactone doesn't touch them. Ethylenediaminetetraacetic acid doesn't touch them. Proteases and the nucleases, even on the deproteinized, fluorocarbon-treated zonal band fractions, are not effective, nor can we use hot formaldehyde to inactivate them in searching for double- or single-stranded RNA. Heat resistance is not incompatible with the biological experience, but it is incompatible with most virological experience. Up until 80 degrees we have full stability. At 85 degrees we begin to inactivate, and as with the hepatitis virus, boiling is not sufficient for hospital sterilization. Autoclaving, however, most certainly is. Next slide please. To go on further: I have already summarized the UV resistance for you. I've mentioned the enormous ionizing radiation resistance. We use ultrasonic energy at such energy input that we would be rapidly inactivating polio or flu or most conventional viruses, and all we do is gain in yield, increasing our infectivity titer. When we do an action spectrum at 2,370 angstroms, the nanometer wavelength of 237, which is much less sensitive for inactivation work for most microbes, in fact all of them, we find sixfold the sensitivity, which is still not very high, of that where it should be in a DNA-RNA system. No one has visualized the agents, and not one but literally dozens of the major structural electron microscopists working on the subunit structure of viruses have been involved for decades in the attempt, and their starting material is either tissue slices containing an infectivity titer of 10 to the ninth per gram or zonal banding material even more rich in infectious particles.
That they contain no host proteins I mentioned earlier; we doubt this, but it looks as though it may be the case, and attempts to immunize animals with material that has infectivity titers as high as 10 to the 11th per gram have produced no neutralizing antibody, no radioimmunoassay-detectable antibody, and no cytotoxic antibody that we can identify with any of the existing systems that work with retroviruses and all other groups of viruses. Next slide please. The pathological lesion is a coalescence of vacuoles which have been studied and studied and studied; their membrane is not very nice to look at, and there is no visible virus-like structure on it. Next slide. It expands and, next slide please, it blows up the cell, and finally we end up with neuronal destruction. A good deal of implanted-electrode physiology, done at Percy Camard and Marseille with inoculated animals having previously had electrodes implanted, has yielded evidence that for literally months and sometimes years before neuron death, neuron involvement starts, and by controlled perfused pathological study this can be verified. The first proof that we had an infectious agent here came when chimpanzees succumbed to the clinical disease kuru, a disease which produced a clinical picture which no behavioral analysts had ever seen in zoo or naturally observed chimpanzees or in laboratory subhuman primates. In fact all the experimental animals develop a disease which has never been produced artifactually, and which has never had a similar pathological or clinical entity develop spontaneously, of unknown or known cause, in laboratory animals of the species used, so there is really no control necessary. Next slide. The pathology of these animals is identical to that in man, with the complex Purkinje cell and molecular layer loss in the cerebellum; I'm not going to expand the pathological discussion. The chimpanzee, interestingly enough, develops a drooping lip months before our clinical neurologists and veterinarians are aware that the disease is there. Our keepers, by personality change and facial expression, are aware of the disease long before hard neurological signs appear. Months before. This is true of course with kuru in New Guinea, where even child patients recognize their fatal disorder before any physician or their parents are aware of it. Next slide. The analysis of cinema at a stage too early for our neurologically astute colleagues to detect the disease does reveal restricted forelimb movement patterns, as opposed to the normal walking pattern, which coincides with the drooping lip and the keepers' note of personality changes. Next slide. This added matter is of incredible interest if one realizes that during the two to four years of silent incubation in cats and in many species of Old World monkeys, when these animals would be said to be normal and when their neuropathologically studied brain by conventional means would be said to be normal, we now know that there are vast disturbances, from implanted-electrode work, in thalamic, hypothalamic and cortical relations, disturbing paradoxical sleep and other sleep rhythm patterns which Jouvet is so interested in, and this is happening with all the viruses. In addition we are finding, if we use perfusion and early sacrifice, neuronal nuclear amitotic division, binucleate and trinucleate neurons, fusion of neuron to neuron and neuron to glial cell, as we now know these agents cause in vitro as well in tissue cultures. 
And we have a whole gamut of exciting pathology going on in the normal animal years before disease, and we are only beginning to ask the question: does this ever go on in man? Since it's associated with only what we would consider allowable personality disturbances and sleep disturbances, and it requires perfusion studies to study, it is a difficult question to decide upon and one many of us are not anxious to have investigated that thoroughly. Now, the kuru patient who first produced chimpanzee disease took 21 months of incubation; when that brain material was placed in other chimpanzees it dropped to 11 to 13 months. In fact this has been done over many times, and again we get a drop in incubation from the usual one and a half to two years on first passage to less than a year, to 13 or 14 months. This immediate drop on second passage to a shorter incubation period has been seen over and over, more than 60 times. We know it occurs, and with other species the same one-passage adaptation looks like a selection process which we are not familiar with in virology. We are more used to a step-by-step slow adaptation to a new host. It doesn't proceed further on further passage. By the oral route there was no infection; this animal 12 years later is still well, having had millions of lethal doses of the agent. By the use of spleen, liver and kidney in the serial suspension without brain we still got the disease, and using peripheral, non-intracerebral inoculation we got the disease very early in the work. I'm not going into all the passage studies. Next slide. This simply shows the first demonstration that we could shift from the chimpanzee to South American monkeys, New World monkeys, with the sacrifice of the short incubation period of one year, making our waiting time into two years again. This has not reduced on further passage. Next slide. The New World monkeys, a dozen species of them, owl, spider, squirrel, are all now used in work with the human diseases, and this has permitted the curtailment of work with the diminishing rare species of chimpanzee, which incidentally in our breeding colonies we breed more rapidly than we use. We had 30 newborn chimpanzees last year and only used five or six. We're doing more. Eventually we may be able to return them to Africa, where they're being killed more frequently and indiscriminately. The passage in chimpanzee is not continuing. We're using smaller animals and other hosts now that we can. Next slide please. I'm not going into the experiments that that involved. This is a biopsy of a European patient with Creutzfeldt-Jakob disease, to reemphasize the punched-out spongiform change in the cortex. Next slide. To show you that we also found chimpanzee passage of this disease possible in 13 months, and then in 12 and 40 months in another chimpanzee, but almost three years and almost four years when we went to capuchin monkeys and marmosets. In other words these smaller monkeys require much more patience in the experimenter. Again, we found we could go to small animals and finally cats. We ceased chimpanzee work, but one line has been continued. Next slide. Now to depart: I'm going to use the small remaining time not to elaborate extensively on all the animal work that has been done and continues, but to tell you a little about the worldwide form of kuru. One sixth of the cases are familial. These patients had the disease; they weren't autopsied, but it's a clinically recognizable syndrome. 
So here are two aunts and the mother of the propositus, plus his grandmother, all dying of Creutzfeldt-Jakob disease. Next slide please. Another family from Germany, with three generations of many proved, autopsied cases from which the virus has been isolated. In fact these genealogies, when they are looked at as a whole, are every bit as good for a dominant inheritance as those for Huntington's chorea, for instance. The families in which we know of the disease now number over 60, and from 11 of the first 17 on which we have had brain biopsy material we have already isolated the virus; the remaining six are not negative, they have not been inoculated long enough. Next slide. So here we have a disease which would undoubtedly be acceptable as a genetically controlled heredofamilial disease, where a degenerative pathology is virus determined, and with no genetic control that we know of in the fully susceptible hosts. Here is a huge amyloid plaque, staining PAS-positive, metachromatic, argentophilic. The plaque that all of us have in our brains as we age; some people unfortunately have a great deal of it before 60 years of age, and we call it Alzheimer's disease. This plaque is present in 20% of the Creutzfeldt-Jakob patients. It's present in 60% of the kuru patients, even children. But since it is not present in the remaining 40% of the kuru patients, brothers, aunts, uncles, and parents of the other victims who have it, we are sure it's a host phenomenon that has nothing to do directly with pathogenesis. On the other hand, in Alzheimer's disease it and the tangles are what we most focus on, and it is very remarkable that a fair number of Creutzfeldt-Jakob patients have them; many lead to pathological disputes as to whether they are Alzheimer's or Creutzfeldt-Jakob until we finally isolate the virus. Next slide. Simply demonstrating the spherical symmetry, with crossed Nicol prisms, of the amyloid, which by EM is amyloid and by long-chain and short-chain analysis is amyloid. Next slide. To bring you up to date on what is just in press, a world epidemiology and an American epidemiology of Creutzfeldt-Jakob disease: we now know that anywhere we look hard we find 0.5 cases per million per annum dying. This makes it a very rare cause of death indeed. But when one realizes that this means 200 deaths a year for the United States, and we have not had 10 deaths a year from rabies with 50-state surveillance in the last 20 years, it's fully 50 times more frequent than rabies, probably 100. This is a minimum. We are actually finding one per million per annum anywhere in large cities we look intensively. The European studies now in progress, with Cathala and Paul Brown working on them in Paris for all of France, parts of Germany, most of Italy and Spain, are showing 0.5 per million per annum on first survey for the last 10 years, with 1 to 2 per million per annum in some foci, and large intense foci in Hungary, Slovakia, Parma in Italy, and in Israel. Next slide. Creutzfeldt-Jakob is a worldwide disease. The countries of Africa where it's been found are few and far between; the hunt has not been intensive. We don't know why. We haven't been to Argentina or the other countries in South America, but everywhere we visited we found it, and in high incidence. It far outrivals, outranks rabies as a world cause of death from infectious brain disease, by probably a factor of 100, on all continents. Next slide. The scrapie virus is probably also ubiquitous where sheep are kept, but most countries with it do not recognize it. 
We have, and British workers have, carried it for decades through mice, and to our horror, after a long incubation period of 73 months, the cynomolgus monkey developed a spongiform encephalopathy identical with Creutzfeldt-Jakob disease from their scrapie sheep brain inoculation, as did squirrel monkeys after almost three years. On passage the incubation period has dropped in the squirrel monkey, but not in the cynomolgus. Here it has been passed back into sheep and goat with disease, proving that it is still the scrapie virus biologically, since no Creutzfeldt-Jakob or kuru virus goes this way. We are very worried, however, since European strains of scrapie taken from the Compton laboratories, once transmitted to monkeys, lose their ability to go back to sheep and goat. Conceivably scrapie is the source of some of the human Creutzfeldt-Jakob disease. Next slide. It incidentally can be found in the butcher shops throughout the world. There's a high selection for the butcher of all scrapie-infected sheep, and even laboratory-infected sheep throughout Europe have in past decades ended up in the butcher shop. Scrapie is a worldwide disease. Next slide. Now, in the rush travelogue I promised you, I will try to tell you where this story began. It began not in this enormous second largest island in the world but in one small highland area here in New Guinea. Next slide. The kuru region is at 2000 to 3000 meters, with human population scattered in the eastern highlands of a very mountainous area, where all the grass-covered areas represent the effects of millennia, more than centuries, of human cultivation by slash-and-burn agriculture. Next slide. The villages are small; few humans see, or saw in the Stone Age period when we first described kuru, more than a few hundred individuals between their birth and their death in adult life. The population unit is under 100 in most places, up to 200 in some, and never exceeding that. Next slide please. This is a village to which we're returning; the area is rugged, and the first year of patrolling throughout the area required a moderate amount of mountaineering. Next slide. And bridge building. Next slide. I'll go through these rapidly. The Mesolithic-level culture of the population involved with kuru: these are people who, quite significantly, never washed in a lifetime, covering their body with the rendered grease from their pigs, which run wild and mate with the wild pigs, or with that of their relatives. Next slide. The kunai fields are high grass, through which a long carrier line is carrying our supplies. Next slide. And in that area we would meet one of the 11 language groups which were defined during the kuru research project, since only two of them had had European contact at the beginning of our kuru epidemiological work. This is an Anga (Kukukuku) group which is just on the border. Next slide. The villages contain groups of women who were slow to leave, as most women were, on our arrival, who preferred not to walk but when tested could walk, though with minimal ataxia. They told us, through the one- and two-level chain of translation we had into a language we could finally use, that they were dying of kuru, and they were invariably right. The disease rarely lasted more than a year and was characterized by midline cerebellar ataxia with preservation of cortical functions and no sensory disturbance. Next slide. The patients at this stage are unable to focus their eyes or hold their neck properly, so she looks demented, but with grunting signals she can evidence very good contact with reality and good intellect still. 
However she is so ataxic that she can no longer stand without support. Next slide. And in every village of 100 people we found people dying, with no exception. Next slide. We found over 300 deaths the first year, and in a given 100-population village, three women using sticks to keep ambulating to their gardens, to bring food back to their houses, were already evidence that they had kuru. They agreed that they did; no Fore woman required a stick to maintain her balance unless she had kuru or some other injury. Next slide. The smallest patients were children of four or five years of age. Next slide. And the area we found was of enormous linguistic diversity. The New Guinea people, the Melanesians, speak over 1000 languages, not dialects like French, Spanish and Italian or German and Dutch, but real languages; at a dialect level it's much more diverse than that. In the eastern highlands alone there is this enormous level of languages and at least five language families. Next slide. As we worked out the boundaries of kuru, next slide, we also gave names to most of the peoples, which, as is always the case with indigenous populations, are fallacious names, usually an insult like "the bastards over the hill" used by their enemy neighbors. The people themselves have no names for themselves. So our fallacious names have stuck, and the new generation of children don't know that we are the source of them. And this is the boundary of kuru against the language groups that had it, showing that here culture and language have determined the boundary completely, and here culture and language have had nothing to do with the boundary of the disease. Next slide. When we counted up cases at the end of a year, and this prevalence data can be read as yearly mortality since the disease usually lasts about a year, this was 1.6% of the population dying per annum. This went on for 50 years previously and has gone on through most of the first 10 years of surveillance. But there's been a change. Next slide please. This simply shows the kuru area, which would be superimposed on here, and this is decreasing population density into high-altitude mountain ranges and virgin bush with no people, showing that the people became less and less dense as we went in this direction. Next slide. But the running away from kuru, to which they consciously attributed their location in the virgin forests, was with no success whatsoever, since maximum incidence, with 3% dying per annum, occurred in the least dense, most remote populations, bordering on populations here that had no disease at all. But there's no intermarriage or cultural contact between these groups. There's no ecological shift. Next slide. We began to accumulate patients. They were willing to come in, until we had from 40 to 100 at one time dying in the Okapa hospital which we built. Dr. Zigas, on the next slide, a New Guinea government physician who discovered the disease and has worked on it with me since the beginning: this is Vincent Zigas here, and myself, in an early day with a child patient at Okapa. Next slide. This boy himself knows he's dying of kuru but hadn't convinced us yet; his autopsy six months later certainly proved he was right. He is supporting the girl with the disease, and these two have the disease, and that young girl does too. Next slide. The moon facies is a result of unsuccessful attempts at cortisone therapy. Next slide. Next slide. It simply caught a patient during movement. I should be showing you cinema, not still pictures; we don't have the apparatus. 
Next slide please. And a severe strabismus and extreme dysarthria. A final total neuromuscular incapacity with minimal loss of intellectual function characterized all patients. Children usually died in six to nine months. Next slide. Fully a third of all the patients were children, and most of the patients were female, except among children, where the male to female ratio became a little more reasonable. This 13 to 1 to 78 to 1 ratio of female to male in adult life was very difficult to imagine epidemiologically at first. Next slide. We eventually found from genealogical studies that, the people being polygamous, this enormous death of women had reduced the sex ratio, raising it in favor of males to three to one in the southern villages; for a polygamous society with a total sex ratio of three to one, this made it closer to five to one for males of marriageable age. Polygamy on top of it made it very hard for many young men. Every case, with no exception, of the 4,000 we have studied has had a similar intense family history. Next slide. I'm not going to dwell on the usual family history. At present we know from the dating of these histories that this patient here was the contaminating source for the whole crowd, but this has taken years of epidemiological sleuthing; I'm not going into it. Next slide. And here is what's happened. An exotic disease that few of us have had a chance to see, and now we tell you it is almost gone before you've gotten to see it. It has disappeared progressively, and the male and female patients have left us with only a few deaths. This was plotted prematurely; the final number in '77 actually was under 20 for the total. Next slide. This year it will be under 20. Next slide. But the most dramatic thing is that these are taken at three-year intervals, leaving out the intervening year in case of epidemiological end-of-year prejudice. It doesn't make much difference; it's a matter of Christmas and New Year, European physicians doing less patrolling at that time. It shows that the zero to nine-year-old patients rapidly decreased, decreased and then disappeared from the world. The early adolescent patients rapidly decreased but later, five years later, then disappeared. Finally the older adolescent patients decreased and then disappeared. And now the 20 to 25-year-olds are gone. Every year the youngest patient is somewhat older. The law that no patient who was born since kuru cannibalism stopped in his village has ever died of the disease holds. It tells us a great deal more than that first remark might immediately reveal. It tells us that the suspected transplacental transmission, milk-factor transmission, vertical transmission from mother to child born during the disease or born before the mother is ill, is not occurring. Were it occurring, this incredibly uniform (and if I projected to '78, still more uniform, with this mostly gone and that gone) disappearance would be unaccountable. Next slide. We do not have any such knowledge for the familial cases of Creutzfeldt-Jakob in Germany and the rest of the world. If one doubts our ability to assess the age of these now disappeared types of patients, prepubertal children of both sexes, here is a group on dimercaprol and calcium versenate therapy, when we were looking for a Wilson's-disease-like copper or molybdenum toxicity and failed to find it. We had many false alarms in the beginning, and all of them died of kuru. There are no such patients in the last 10 years in the world. Next slide. 
The kuru epidemiology was done mostly by the Robert Browning technique of the Pied Piper of Hamelin, with the assistance of the boys shown here, who are all married polygamous men today. Next slide. They rapidly, a group of some 30 of them, next slide, provided us with translation of all 11 languages into the one language we learned. The young men who were leaving the area had to be initiated into adult life, at which stage of life, all still prepubertal, they have totally left the women's society, never in their life, even as polygamous adults, to enter the house of a woman in traditional culture. All copulation and procreation was a daylight affair in the gardens, where the privacy was greater than that obtained in the most remote Swiss valley, and the phenomenon of sleeping with a wife was unknown to Highland New Guineans until missionaries introduced this strange custom to them; the phenomenon of crawling in darkness for copulation under a roof is a phenomenon unheard of and rather repelling to them, a subject of jest not to be taken seriously. Anointed with pig grease and never washed; this would be human grease in a cannibal ceremony, next slide, but only for the smaller children and never the males. At this age boys have left the women's culture; only the women opened the kuru victims' bodies and contaminated themselves and the children with them. Next slide. They celebrated their initiation into manhood. Those three young men incidentally are all college graduates in the United States today. Next slide. They are; I'm not exaggerating. Next slide. But this is the age at which all warfare took place; most New Guinea Eastern Highland groups left that to the adolescent, sex-segregated culture, while the old men, as in most cultures, planned the wars but didn't take part in them. Next slide. And this age produced the sex segregation which did produce this enormous difference in sex predominance of the disease. Next slide. Girls at an early pubertal age were married, with pig omentum on their head. Next slide. And usually as the young wife of a polygamous older man; older widows went to the adolescent boys as their first wife. Next slide. And all cooking was done by steam cooking, with no vessels, in pits of heated rock, a meter and a half under this pile of several hundred pounds of pig meat and vegetables. It's an autoclave essentially, a volcano-like pit; earth is heaped on top of it, and there is very little enteric disease, since the food comes right out of steam cooking. So the temperature attained would have been insufficient to inactivate the kuru virus, and human tissue was similarly handled. Next slide. On the other hand, with men handling pig tissue here, this would have been women for human tissue, and the children around would have been infected. We believe all infection was through mucous membranes and skin, not orally; it takes multimillion lethal doses to infect with scrapie orally, only a millionth as much through skin or mucous membranes, and with kuru and Creutzfeldt-Jakob it doesn't take orally. It is not eating the dead, even though Claude Lévi-Strauss would have it, totem-wise, symbolically that way. Next slide. Simply showing the children's involvement with the mumu pit of steam cooking. Next slide. And finally the disappearance from the world of childhood kuru, ten years later the disappearance from the world of adolescent kuru, now the disappearance of young adult kuru, and hopefully within 20 years the disappearance from the world of kuru. How we will get rid of Creutzfeldt-Jakob disease is another matter. 
We can tell you that only a few percent of the cases in the world, five or six percent, are attributable to an iatrogenic cause. In Zurich, the use of stereotactic electrodes contaminated from a Creutzfeldt-Jakob patient has produced two deaths, years later, in young people who were cured of their epilepsy by the surgery, only to die from Creutzfeldt-Jakob disease. Our form of neocannibalism, the use in brain surgery of dura mater from older patients and the use of corneal transplants from victims of Creutzfeldt-Jakob disease, has in fact caused the disease. Most major American and European clinics, if thoroughly studied and if they have good enough records, can detect the case caused by their neurosurgery. We do have a series of several dozen iatrogenic cases; the complex medicolegal aspects of the issue and the humane aspects of the panic produced by epidemiological questioning of those in danger have restricted reporting of many of them. On the other hand, we do already have the published accounts of a half dozen. One of the easiest ways, of course, to our utter horror, of finding the virus is visiting a neuropsychiatric clinic where it would accumulate, and we are just now aware of this hazard and changing the sterilization methods necessary to prevent iatrogenic spread, since conventional sterilization with alcohol, ethylene oxide sterilizers, Zephiran and many of the usual hospital antiseptics doesn't work. The way a hospital smelled 30 years ago, with the Clorox, permanganate and iodine around, which none of us as young physicians want to take home to our families any longer on our clothes, did inactivate the agents; phenol did; but most of the modern antiseptics don't. There has been warning in all the world literature, especially in the neurological and pathological literature. Neurosurgeons have died of the disease; pathologists and dentists have. We do not know and have no way of knowing whether the number that has died is a significant increase above what would be expected, because of course they are closer to the medical profession, and this rarely made, difficult diagnosis is really even now missed in major academic neurological clinics, as we are discovering in the worldwide epidemiology. I leave you then with a failure to have told you about the plural, the phenomenon that the kuru focus is not the only intense focus of neurological or other chronic disease in primitive populations which we have pursued, and we are pursuing others: amyotrophic lateral sclerosis foci of 1000 to 2000 fold the intensity of any other civilized part of the world, in West New Guinea populations, on the Kii Peninsula of Japan, and in Guamanian and Micronesian populations. Obviously, if we are ever to find the answer to amyotrophic lateral sclerosis, these small enclaves, which can be visited in a week and where this has become the first cause of death in adult life, are the place to find it. Thank you. Thank you.
|
This was Carleton Gajdusek’s first lecture in Lindau after winning the Nobel Prize in Physiology or Medicine two years previously. Gajdusek dedicated most of the lecture to the research that led to the Nobel Prize, namely the occurrence of kuru, a fatal neurodegenerative disease that afflicted the Fore people of eastern New Guinea. The characteristic features of the disease are dementia and loss of muscle function. All kuru sufferers died within one year of the first symptoms appearing. Members of the Fore tribe had a ritual of eating their dead relatives as a sign of mourning, and this was demonstrated as the root cause of kuru, particularly since women and children were most likely to fall victim to the disease, and they usually ate the brains of the dead [1]. Missionaries working in New Guinea discouraged the Fore people from this practice, and the illness that was considered to be an epidemic gradually came to a halt, although long incubation times caused the disease to persist in the population for decades [2]. “We are dealing with virus-like agents, we are dealing with a new group of microbes”, noted Gajdusek; however, he used the term “virus” or “slow virus” throughout the lecture. The replicating agent was still a mystery to researchers. At that point it was known that the “virus” was bizarrely devoid of nucleic acid, and resistant to ionising radiation, boiling, and storage in formaldehyde. Its presence in the body did not induce an immune response. The similarities of kuru to Creutzfeldt-Jakob Disease (CJD) and to two animal diseases, scrapie in sheep and goats and transmissible mink encephalopathy, were confirmed, yet the form of transmission of CJD and the animal diseases was also unknown. Only several years later, in 1982, Stanley Prusiner published a paper demonstrating that scrapie is caused by proteinaceous infectious particles, and proposed the use of the term “prions”. These pathogens induce normal proteins to change their structure, which leads to changes in their physicochemical properties [2]. Prions are responsible for CJD and kuru, as well as several neurodegenerative animal diseases. Prusiner was awarded the Nobel Prize in Physiology or Medicine in 1997 “for his discovery of Prions – a new biological principle of infection” [3]. Hanna Kurlanda-Witek [1] https://www.nobelprize.org/uploads/2018/06/gajdusek-lecture.pdf [2] https://www.nobelprize.org/uploads/2018/06/prusiner-lecture.pdf [3] https://www.nobelprize.org/prizes/medicine/1997/summary/
|
10.5446/52568 (DOI)
|
Count Bernadotte, distinguished guests, students, colleagues, ladies and gentlemen. Very little is known about the primary prevention of cancer, with the exception of the very important link between cigarette smoking and cancer of the lung. According to present understanding, the cessation of smoking could eventually result in the near elimination of cancer of the lung, which is said to represent up to about 40% of cancers in some communities. To my knowledge, there is not at present any known association of a pollutant with a cancer which occurs in very high frequency, although for theoretical reasons it is very well worthwhile continuing to investigate this possibility. I would like today to discuss the association between hepatitis B and primary cancer of the liver. If the present findings, which I will report on, on this possible etiological connection between hepatitis B and cancer of the liver are sustained, then it may be possible in due course to prevent a cancer which is probably, again, one of the very common cancers of the world. The work which I will be reporting on, some of which has been done in our laboratory, was done over the course of the last 10 or more years in collaboration with my colleagues Drs. London, Sutnick, Millman, Lustbader, Werner, Drew, and others. In a paper presented in 1974, we pointed out that for many years workers in Africa and elsewhere had suspected that hepatitis infection might predispose to or cause the subsequent development of primary cancer of the liver. When these suggestions were made, it was not possible to test the hypothesis, since methods for the detection of the virus in occult and hidden infections were not available, and it was known that many patients became infected without any clinical evidence of the disease. With the discovery of Australia antigen and its subsequent identification with the surface antigen of hepatitis B virus, and particularly with the development of sensitive methods, in particular radioimmunoassay, it became possible to look at this question directly. Since the publication of this paper in 1974, which included a discussion of the information that was then available, a large body of data bearing on this subject has become available. Today I'd like to present the evidence which supports the hypothesis that in many parts of the world, chronic infection with hepatitis B virus is a necessary condition for the subsequent development of primary cancer of the liver. If this evidence is convincing, then it follows that planning of public health measures for prevention of chronic infection should be investigated. This raises the problems that are associated with all extensive public health projects, namely that anything that you do in order to prevent disease has other consequences. And as scientists, we have a responsibility to try to learn as much as we possibly can about these possible consequences in order to deal with them most effectively if the control measures are undertaken. Now I propose to present this evidence using the technique of parallel evidence, that is, showing you several bodies of data, all of which presumably would converge on this hypothesis that I've stated. This, incidentally, I've learned recently was a technique used very much by Darwin in building up his convincing evidence relating to evolution, and in many ways an important introduction which he made into the scientific process. 
Before presenting this evidence, I'd like to quickly summarize the information available on the nature of the hepatitis B virus. The first slide is a diagram of the Dane particle, which is thought to be the whole particle of the hepatitis B virus. It consists of an inner core which contains within it a DNA and, in addition, a specific DNA polymerase. There is a specific antigen associated with the core, hepatitis B core antigen. Surrounding that is the surface antigen, which also has a specificity, hepatitis B surface antigen. There is, as far as we know, no cross-reactivity between the core antigen and the surface antigen. The coexistence of the DNA and the DNA polymerase in the same location apparently is an unusual feature of viruses of this kind. Antibodies to the hepatitis B surface antigen can be identified in peripheral blood. These appear to be highly protective: people who develop titers of the antibody to the surface antigen are unlikely to become reinfected with the hepatitis B virus. Antibody to the core antigen may also be detected in the peripheral blood and is nearly always found when the individual is a carrier of the hepatitis B virus. Antibody to the core does not protect against subsequent infection, as far as is known. There are different determinants on the surface of the hepatitis B surface antigen, and they have a rather odd characteristic, similar to serum protein polymorphisms. All viruses have a common determinant A. In addition there are allelic determinants D and Y; that is, a virus can be either D or Y, rarely both, rarely neither. And there are also W and R, and again the virus can be either W or R, rarely both and rarely neither. There are highly specific geographic localizations for these specificities, and they don't travel well. That is, you don't find rapid spread of particular geographically associated viruses from one location to the next, the way you do with, let's say, influenza virus, which can start in Hong Kong and within a period of months or a half year or so spread throughout the world. So the hepatitis B virus specificities stay close to home. Next slide, please. This is a projection of an electron micrograph showing the three forms, the three sorts of flavors, that hepatitis B particles come in. The large particle here is the whole DNA virus, the so-called Dane particle, named after the British investigator who first saw this. These smaller particles consist entirely of the hepatitis B surface antigen and apparently do not contain any nucleic acid. They are found in very large quantities in the peripheral blood and are essentially always identified in the peripheral blood of people who are carriers of the hepatitis B virus. By carriers we mean that the person is infected with the virus, the virus or the surface antigen is detectable in the peripheral blood, usually in extremely large amounts, but the individual himself does not have any apparent signs of illness. There are in addition these elongated particles, also made up, as far as we know, entirely of hepatitis B surface antigen, and the function of these rather strange particles is not known, although there is some information that they may be a kind of transitional phase; very little is known about these. Now later on I will talk about work on the vaccine. The process of making the vaccine is an unusual one, different from the production of any other vaccine. 
In it, the surface antigen particles, which occur in very high frequency, are separated from the Dane particles, and then they are treated in such a way as to kill any whole virus particles that may have been left in the preparation. Then the surface antigen produced in this manner from the peripheral blood of carriers is used as the vaccine. This vaccine has now been tested in animals, the initial tests in humans have now been done, and the planning for field trials is now in progress. So far the results are very encouraging. If this vaccine proves to be effective and safe, then it may have a very important role in the prevention of infection with hepatitis B, and if what I'm about to tell you is true, it may have a role in the prevention of cancer of the liver; that is, it would represent a kind of vaccine which in the long run may have an effect on the development of cancer. Next slide please. There is a very unusual characteristic to the DNA associated with the Dane particle, the large virus particle. Most viruses have DNA which is either double-stranded or single-stranded. The DNA of the Dane particle of the hepatitis B virus again sort of comes in two forms: it's both single-stranded in part and double-stranded in part. The length of the single-stranded section varies literally from virus to virus and appears to be polymorphic for this characteristic, again a rather unusual feature of a virus. The next slide please. These photographs were taken by my colleague Dr. Summers at the Institute for Cancer Research in Philadelphia, and these were done with Dr. Kelly in Baltimore. In order to demonstrate the single-strandedness, they used, in effect, a kind of stain, a protein from E. coli which will adhere only to single-stranded sections of a DNA circle but does not adhere to double-stranded areas. This is a control virus which is totally double-stranded. These are hepatitis B viruses with the stain, indicating that the single-stranded portion is different in the different viruses which are shown here. Again this is an unusual feature of the hepatitis B virus. The biological significance of this is not clear, but I guess you can say intuitively that if there are advantages to being double-stranded and there are also advantages to being single-stranded, then this has both sets of advantages and both sets of disadvantages. However, it appears to cope very well with its environment, since the virus has developed many kinds of vectors and can be transmitted in a very large number of ways. This will come up during the course of the discussion. Next slide. In these following slides I've listed the independent points which I hope to make. So I'll follow a course which I was advised to do by Dr. Schultz of our institute, who told me that when giving a scientific paper the first thing you do is say what you're going to say, and then say it, and then say what you've said afterwards. In that way there's a possibility that what you have to say will actually get across. So what I plan to do is to list the topics that I would like to discuss. The first point is that there's a high prevalence of chronic carriers of hepatitis B virus in the areas of the world where primary hepatocellular carcinoma is common. In northern Europe and the United States, the frequency of carriers is of the order of 0.1, 0.2, 0.3 percent, 1 or 2 or 3 out of 1,000. 
However, in many tropical regions of the world, in Southeast Asia, in Oceania, in South Asia, in Malaysia, the frequencies may reach up to 4, 5, 10, 15 percent, and even higher percentages of the population are carriers of the hepatitis B virus. That means that there are probably several hundred million carriers of hepatitis B virus in the world. It's in those regions where hepatitis B virus is common that primary hepatocellular carcinoma is common. If in these regions one examines people who have primary hepatocellular carcinoma, then they have a higher frequency of carriers than appropriate controls from the same region. I'll show you the data on these items shortly. The third point is that primary hepatocellular carcinoma usually arises in a liver which is already diseased with cirrhosis or chronic hepatitis of various kinds. The frequency of underlying cirrhosis or chronic hepatitis varies from place to place, but where it's been studied very carefully indeed, it's very often in the region of 70, 80, 90 or even 100 percent of the cases that have an underlying chronic liver disease. In these diseases, that is the chronic liver disease and the cirrhosis, there is also a high prevalence of carriers of hepatitis B virus. That is, the disease which in effect precedes cancer of the liver also has a high association with the presence of carriers of the hepatitis B virus. Next slide, please. Now, what I've mentioned so far are retrospective studies, or studies taken at a point in time. There are now several prospective studies in progress to determine what happens if you look at people who have hepatitis B virus, to see what happens to them in the future. Since these studies have just begun, the results are very early, but I'll tell you about these to indicate the kinds of studies that are being undertaken. In one study in Japan, cirrhosis patients, patients who had chronic liver disease and cirrhosis, were compared depending on whether they had hepatitis B virus or did not; those with hepatitis B virus were the ones who developed cancer. A similar prospective study was done in asymptomatic individuals who were chronic carriers of hepatitis B virus, and again, based on very small numbers, a much higher probability of the development of cancer of the liver was demonstrated in those. I'll show you these data shortly. Seventh point: in several studies now, it's been shown by histological techniques that liver tissue which contains primary hepatocellular carcinoma also contains evidence of infection with hepatitis B virus. That is, if hepatitis B virus were concerned with the development of cancer of the liver, you'd expect to find it in the liver, and you do find it in the liver. The eighth point is that the specific hepatitis B virus DNA has been isolated from the majority of livers with PHC that have been tested, and it's not found in controls. Again, that's what you would expect if the virus were involved in the illness. A further point is that there is a family clustering of hepatitis B virus carriers and chronic liver disease, including primary hepatocellular carcinoma. In particular, there's a very high frequency of carriers among the mothers of people who get or have primary hepatocellular carcinoma. We'll get back to some of these points to show you some of the data. Next slide, please. 
There have been a very large number of studies demonstrating the third point that I told you, namely that in areas where primary hepatocellular carcinoma is common and where carriers of hepatitis B virus are common, the frequency of hepatitis B virus is much higher in the people with the cancer than in what appear to be appropriate controls. These are illustrations from two of the studies that we've done in Africa, in West Africa, and these were done in conjunction with Professor Payette from the University of Dakar and the University of Paris, Drs. Zlares, Samo, Barois, Theret, and Professor Sankely, a large group of French, American, and Senegalese co-workers. In the Mali study, the frequency of hepatitis B surface antigen was 47 percent, as compared to about 5 percent in controls. The frequency of antibody against the core, which is thought to be an indication of active infection, was 75 percent in the patients, 25 percent in controls. The frequency of antibody was actually rather less in the patients with cancer than in the controls. The overall infection rate was high in both groups, but higher in the patients with cancer. But again, the important point is that there's a much higher frequency of carriers in the patients as compared to the controls. The data from Senegal are similar, in the same direction, and rather higher than in the Mali study. These two studies are representative of about, let's say, 15 studies of the same kind, and they are essentially all in the same direction. Next slide, please. Now to deal with point number four, I believe it was five, in which I said that in the patients who get primary hepatocellular carcinoma, it's superimposed on an underlying chronic liver disease, including chronic active hepatitis and cirrhosis; here's PHC, and then controls. These were studies done in South Korea by Dr. Han-Hae Won-Hwan from our Philadelphia laboratory and Professor Kim from the medical school in Seoul. In this study they found that there's a very much higher frequency of hepatitis B surface antigen, 58.6%, than in the control groups, 2% and 6%, 3% and 6%; also in cirrhosis a very high frequency of hepatitis B surface antigen, 93% compared to 6%; and again a very high frequency of the surface antigen in patients with primary hepatocellular carcinoma than in controls. On the contrary, the frequency of antibody against the surface antigen, that is the protective antibody, is lower in the patients with these various diseases than in controls, and this again has been seen wherever it's been studied. The suggestion being that the patients who go on to develop chronic liver disease and primary hepatocellular carcinoma have a rather different immune response when they're infected with the hepatitis B virus: they're more likely to become carriers and, incidentally, at the same time form antibody against the core of the virus; that is, they're more likely to become carriers than they are to develop antibody against the surface antigen. Now it's not quite appropriate to say that they're immune deficient, since they're quite able to form antibody against the core; they're sort of immune specific, that is, they're more likely to become carriers and form antibody against the core than they are to form antibody against the surface antigen. 
So, again, they cannot be characterized as deficient, but rather as different from the individuals who do not go on to develop these chronic illnesses. Next slide, please. Now this is an illustration of the prospective study that has been done in Japan, and this is illustrative of similar studies which are going on elsewhere in Asia, in Taiwan, in the People's Republic of China, and in Southeast Asia. In this study some 80 or so patients with cirrhosis were identified by the Japanese workers. They then found that 25 of these had hepatitis B surface antigen, 17 had antibody against the surface antigen, and 43 were apparently uninfected. Now if the hypothesis were correct, one would project that the people who had surface antigen would be more likely to develop primary hepatocellular carcinoma than the people in the other two groups. The follow-up has now taken place for about three and a half years, and seven cases of primary hepatocellular carcinoma have developed in these 80 or so people, incidentally an incredibly high risk group and also a very rapid development of cancer. Six of them fell into the hepatitis B surface antigen group, that is the one predicted by the hypothesis, and one into the uninfected group. Again, even though the numbers are quite small, it corresponds very closely to the expectation generated by the hypothesis. It also discloses an extraordinarily high risk group for cancer of the liver, namely people with cirrhosis who are carriers of hepatitis B virus. Next slide. Now I apologize for this slide; it contains more detail than is necessary, so you can forget about the material below the line and I'll lead you by the hand through the other portions of the slide. This was a study done, again in Japan, on the national railway system, where regular physical examinations are done on a very large number of employees. As part of this examination they collected blood on some 18,195 individuals and tested them for the presence of hepatitis B surface antigen or other manifestations of infection with hepatitis B virus. They found that 341 of these people were carriers of hepatitis B surface antigen, and the remainder were not. Now they followed these people for a period of about a half year to three and a half years, that is, a relatively short time. Again, these were asymptomatic individuals who were healthy, coming in for a regular physical examination. Now again, the prediction from the hypothesis would be that the individuals who were carriers of hepatitis B virus, even though asymptomatic, were at a measurably higher risk of developing PHC, primary hepatocellular carcinoma, than those normal individuals who were not carriers, were not occult carriers. Cases have developed in this relatively short time, and all of them fall in the category of individuals who were hepatitis B carriers. All three of them, incidentally, were people who had relatively low SGPT elevations; they were slightly above normal but not very high. Now if this prospective study is sustained, it provides considerable support for the hypothesis for which I have been accumulating this evidence, and as I said, such studies are now in progress elsewhere. Next slide. Dr. Nayak and his colleagues in India did an extensive and comprehensive study of livers taken from autopsies of people with various liver diseases, including primary hepatocellular carcinoma and cirrhosis, and in addition people who died for reasons that were unconnected with liver disease. 
There are various methods of detecting manifestations of hepatitis B virus in tissue. These include fluorescent techniques, where fluorescent material is bound to specific antibody; that is, fluorescent material would be bound to antibody against surface antigen, or fluorescent material could be bound to antibody against core antigen. In addition you can see the particles, and they can be identified by the use of ferritin-labeled antibodies, so that under the electron microscope their actual location can be shown. Now, using these various techniques, Dr. Nayak and his colleagues found the following. In the patients with primary hepatocellular carcinoma, 94% of them had evidence that hepatitis B surface antigen was present in the affected liver, the liver where there was cancer; 71% of the patients with cirrhosis, and 2% of controls. Hepatitis B core antigen was again found in high frequency in the primary hepatocellular carcinoma patients and in the cirrhosis patients, but not in the controls. There were also some cases where both surface antigen and core antigen were found, again in much higher frequency in the patients with cancer and those with cirrhosis, and again none in the controls. So again, the virus is where you would expect it to be if the virus is associated with cancer of the liver. Now, generally speaking, in these studies the presence of the virus is shown not in the cancer cells themselves, that is the transformed cells, but in the cells immediately surrounding the transformed or cancerous cells, and in many cases in the liver tissue in general. There has so far been no evidence, according to my colleague Dr. Summers, who has investigated this, of incorporation of the DNA of the virus into the DNA of the liver cells; I believe nobody else has found any evidence for this. Occasionally you do find the virus actually within the cancer cells, but the general finding is that it's in the surrounding tissue. Next slide, please. Now the specific DNA can be identified by hybridization methods, and Dr. Summers has used the hepatitis B virus DNA as a probe to look at the tissues taken from livers of people who have cancer of the liver. He's found the DNA present in such tissue, and it's not present in controls. Within the group of PHC patients, he looked specifically at individuals who had surface antigen and had primary hepatocellular carcinoma, and at those few individuals with cancer of the liver who had antibody against the surface antigen. The specific DNA was identified in 10 of the 11 cases where they were carriers, but in only one of the four where the people had antibody. The significance of having cancer of the liver with antibody against surface antigen is unclear, but as you may recall from the previous studies, that represents a smaller percentage than the individuals who have cancer and are carriers of the virus. Next slide, please. Now, our work started out as a consequence of a genetic investigation. We were studying polymorphisms in blood, and as a consequence a lot of the focus of our work has been on families. In human genetics you study families, so you get very kind of family oriented. One of the investigations which we did rather early on was to study the families of people who are carriers of hepatitis B virus, and we found that there was a very much higher frequency of carriers among the offspring when the mother was a carrier of the hepatitis virus than when the father was the carrier of the hepatitis virus. 
This was consistent with the notion that the mother could transmit the hepatitis B virus to her children if she were a carrier. Subsequently, workers in many areas, particularly in Asia, have found that a very high frequency of children born to mothers who are carriers will become carriers within a few weeks or months; some 50% of them will be carriers. They may not become carriers directly at the time of birth, but the carrier state may develop subsequently. In some cases the hepatitis B virus is found in the cord blood. Based on this and now a large number of observations, it appears that hepatitis B virus may be transmitted from mother to child during, in effect, any time of their association with each other. For example, it conceivably could occur even before conception, that is, if the egg became infected. It could occur during gestation by passage through the placenta. It's very likely that it could occur at the moment of birth. The moment of birth is a very dangerous time in one's life, a very exciting time and also very dangerous. In particular, there's a breakdown of the barrier between the circulations of the mother and child, and it's possible for quite large things to get across both ways. On the basis of what we know about incubation periods, it appears that infection of the child by the mother may occur at that time. And we've heard from Dr. Tinbergen about the possibilities of damage to individuals in this very crucial period in our lives. Now it also appears that transmission from the mothers to the children may occur during the early period of their close intimacy. In all cultures mothers and children are very close to each other during their first months and years, much closer than they are later on, and it's probable that transmission could occur then. As a consequence of the importance of maternal transmission, or the parental effect I think we would say, we've devoted a lot of time to studying mother-child interactions, or as a matter of fact family interactions, using these ethological techniques that we've heard about this week. And a student of mine, Ms. Dickey, has made observations in the New Hebrides on newborn children and their mothers, making the behavioral observations, the nature of which we've heard about, that have been done so much on animals, to see how mothers and children interact with each other in relation to behavior patterns that might lead to the transmission of a virus from one to the other, primarily from the mother to the child. And we're hoping to learn something about this, since it may have an important bearing on control techniques. In the studies in Senegal we examined the mothers of patients with primary hepatocellular carcinoma and compared them to the mothers of controls. The controls were mostly people who were asymptomatic carriers of hepatitis B virus. We found that there was a much higher frequency of hepatitis B surface antigen in the mothers of the patients than in the mothers of the controls. We also found that there was a much lower frequency of antibody against the surface antigen in the fathers of the patients than in the fathers of the controls. This study has not been repeated. If it is supported, this suggests that there's a parental effect, that there may be transmission from the mother to the child, and the nature of the response that the child has will be conditioned by some characteristic that they either inherit or acquire from their father. 
I should say, in discussing this, that this raises some very important psychological problems. If in fact there is maternal transmission which in due course may lead to serious illness in children, this could represent a very difficult psychological burden for parents. When children are sick, parents are very concerned, of course. And if there's any implication that they somehow had a role in it, then this could have a very serious effect on their psyche and their relations with each other and with the children, particularly, conceivably, between parents. Obviously, there's no guilt wrapped up in a situation of this kind, but I think it's very important for us to try to understand this process as well as we possibly can, in order first of all to deal with preventive measures if that becomes possible, and certainly so that we can understand it sufficiently to deal with the questions raised by parents. It's been my experience that ethical issues usually require more information. If you have an ethical problem, what you usually need is more knowledge and less argument, I think, but more knowledge in order to be able to deal with it. And in many cases the ethical question, it doesn't exactly go away, but it changes into something else, which you then have to deal with also. But at any rate, you're in another place. The next slide is a diagram of what we think, a very rough diagram of what we think, may be happening in the transmission, in the development of primary hepatocellular carcinoma. We think that children may become infected early in life, and it's possible that the infection may occur from the mother, probably with some effect of the father in developing the carrier state. Some of these then will go on to become chronic carriers of hepatitis B virus; some will go off in another direction and will not become carriers of hepatitis B virus. Some of those who are chronic carriers of hepatitis B virus will go on to the development of chronic hepatitis; some of them will go on to no effect, that is, they won't know that they've been infected unless they're tested. Some of those with chronic hepatitis will go on to the development of post-necrotic cirrhosis, which in itself is a very serious disease and is life shortening. Some of those with post-necrotic cirrhosis will go on to die of that. Others will go on to the development of primary hepatocellular carcinoma. Now it's patent, it's obvious, it's clear that there must be other factors involved in the development, in following this unfortunate course. About 10% of the people in Senegal, let's say, are carriers of hepatitis B virus, whereas even in high-frequency countries for PHC, the frequency is 100 per 100,000, let's see, of the order of 50 per 100,000. So obviously there are other factors which are involved in the development of the cancer. Molds have been implicated. Nutritional factors have been suggested. Other edible materials, toxins, have also been suggested as necessary for the development of PHC, of primary hepatocellular carcinoma. Can we have the lights please? Lights on, please. We're trying to determine what the other factors involved in the development of the cancer are. But it's a very interesting characteristic of preventive medicine, and as a matter of fact an extremely hopeful one, that you don't have to know everything in order to prevent disease. Now I don't want to sound like a Philistine; that is to say, I'm not advocating not learning things, quite the contrary. The more you know, the more effective control methods could be.
But medicine is a very emergent business. You're dealing with lives and deaths, or deaths in this case. And if some method of prevention is known and it can be executed, then there's a kind of an obligation to use it as soon as possible, but at the same time exerting all the precautions to do as little damage as possible to the general population and to the people who are subjected to these preventive methods. But there is an obligation; you can't sort of not do anything, because that's the equivalent of doing something. Well, I want to remind you that in preventive medicine it has been possible to go ahead with rather fragmentary knowledge. A classic example is that of Snow in the cholera epidemic in London, who found that people who were drinking from a particular well were more likely to get cholera than those who weren't drinking from that well. People who worked at a brewery in the same region drank their own beer or had water from another source, and they were spared. He therefore decided that you shouldn't drink from that well, and he removed the handle of the well as a preventive measure. Now this was done before any knowledge of the germ theory of disease, and well before the discovery of the agent that causes cholera. Nevertheless, it was effective in preventing the further spread of this illness, and it died out in that region. So again, I want to emphasize that we do have an obligation to learn as much as we can about the problem, obviously, particularly so that it can be done in the most effective and least harmful way. But at the same time, I think anyone who has seen these people dying knows it's terrible; cancer of the liver is a terrible disease. And there's no treatment for it. There's very little that can be done for these people. And a kind of urgency develops. Now if this vaccine that I mentioned is effective, and as we learn more about the control measures, about the methods of transmission, I think we're now kind of ready to start thinking about design. Again I think it's kind of an obligation that we have to learn more about the biology of hepatitis B virus in order to deal with this problem in the most effective way. Now as physicians, we find that viruses and bacteria have a kind of rather bad name in medicine, because we only see the worst things that they do, like disease. You know, that's the end of the spectrum we see. We have a rather distorted view of life, a terribly distorted view of viruses and microorganisms. Just think of all the nice beer that we wouldn't have if we didn't have microorganisms. They do all sorts of things. But we tend to think about their negative aspects. But obviously, in terms of the viruses' attitudes, if they do have such things, that can only be a very small part of what they deal with. I'd like to tell you, in a very brief way, about some of the studies we've done on one biological aspect of the virus, namely how it interacts differently with males and females, human males and females. The next slide, please, is taken from a study by my colleagues Dr. London and Jean Drew, where a group of individuals on a renal dialysis unit in Philadelphia was studied. There's a very high infection rate for hepatitis B virus in renal dialysis units, and this particular unit has been organized so that all the carriers in the Delaware Valley, the area around Philadelphia, are kept in this unit. So there's a very high infection rate in this unit.
Now they asked the question, what happens if a person is infected with hepatitis B virus? What's the likelihood of their becoming carriers, or their likelihood of developing antibody? And the data were broken down into whether the patients were females or males. So the two things that can happen, that can be measured, that were measured, are whether you became a carrier of hepatitis B virus or whether you developed the protective antibody, antibody against the surface antigen. Bloods were collected over the course of several years now, every two months, and all were tested. People who were known to have been infected were identified. This indicates the probability of remaining a carrier after one is infected, for this number of months. So for example, if a female is infected, at the time of the first infection the probability that she would become and remain a carrier is about something over 30%. If a male is infected, the probability of his becoming a carrier is more than twice as much. And this difference exists for other lengths of time of infection. So following infection, males are more likely to become carriers than females, as we'll see. The next slide, please. This is sort of the obverse of this. If a person is once infected, what's the probability of their developing antibody against the surface antigen? Females are much more likely to develop antibody once infected than males. So from this we can say that once infected, males are more likely to become carriers, females more likely to develop antibody. Now this may explain the rather unusual male preponderance of diseases associated with hepatitis B virus. Cancer of the liver occurs in seven or eight times as many males as females. Chronic liver disease associated with hepatitis B virus is much more common in males than in females. Now if males, once infected, are more likely to become carriers of hepatitis B virus, then they are much more likely to develop diseases associated with chronic infection, i.e. primary cancer of the liver, chronic liver disease, and a whole variety of other diseases associated with this illness. So this may offer an answer to one of the most perplexing problems in medicine, why for certain diseases males are more likely to get them than females, and in some cases vice versa. Now another interesting interaction between the virus and humans with respect to males and females is shown in the next slide please, which is a summary of data collected in a small community of Platte in Macedonia in northern Greece. This was selected because it was a very homogeneous community in many respects, but also because they had one of the highest infection rates for hepatitis B virus in the Greek populations which we surveyed with our Greek colleagues, Dr. Economidou and Dr. Hadziyannis and others. Now the whole village, or most of the village, was tested, and the parents were all classified into three groups: whether the parent was a carrier of hepatitis B surface antigen and did not have antibody, that's one class; a second class were parents who were not carriers but who did develop antibody against the surface antigen; and then a third class, individuals who had no evidence of infection. The number of children they had and the sex of the children was determined, and the sex ratio was computed for each of these groups separately. Sex ratio is the number of male live births over the number of female live births. This is the secondary sex ratio, the sex ratio at birth; the primary sex ratio is the ratio at conception.
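Stated as a formula, a minimal restatement of the definition just given (the wording of the ratio is the lecture's; the layout here is only illustrative):

\text{secondary sex ratio at birth} = \frac{\text{number of male live births}}{\text{number of female live births}}

so a value above 1 indicates an excess of male births, and it is this quantity that is compared across the three groups of families in what follows.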
There was a highly significant difference between the sex ratio in the families where the parents were carriers compared to the families where the parents had developed antibody, and an intermediate ratio in the families where neither parent had any evidence of infection. Now we've subsequently tested the same hypothesis on an island called Kar Kar off the north coast of New Guinea, in two communities in Greenland, places called Scoresbysund and Angmagssalik, and then in Mali. And in each of these communities, none of the data have rejected the hypothesis, the observations generated by this first study. Namely that, and we can have the lights on please, if these data are supported by subsequent studies, then it suggests that the virus has a very important kind of interaction with humans which is different from causing disease. I'm not exactly sure how you'd classify it, that is, the determination of sex ratio, but it's certainly not disease. So again, if these data are sustained by other investigators, and that hasn't happened yet by the way, that is, it hasn't been rejected, but I don't think it's been tested. But if it is supported, then this would say that this virus has a very important interaction with a human characteristic which is of great importance to us, that is, whether people are males or females. And this has a great effect, a great psychological, economic, sociological effect on the makeup of populations. Now there are other biological characteristics associated with this virus that we'd like to learn more about while we're preparing for public health measures, in the hope that we'll be able to deal very effectively with this infection, with this illness, and do as little damage as possible. I think we always, in medical work in particular, have to kind of balance possible advantages against possible disadvantages. Nothing that happens in life is without risk. And what we want to do is maximize the benefit and minimize the disadvantage. Thank you. Thank you.
|
Baruch Blumberg only attended one Lindau Meeting and gave only one lecture. But in this lecture he covered a lot of ground, from the innermost parts of a new virus to large epidemiological surveys. In the introduction (without slides), Blumberg mentioned some of the work that he and his collaborators had done after the 1964 discovery of small protein particles in the blood of an Australian aborigine. It was the discovery of this so-called “Australia antigen” that eventually led to an understanding of how the hepatitis-B virus acts. It also led to the 1974 paper, which pointed out the possibility that there could be a connection between the virus infection and (primary) cancer of the liver. It is interesting to note that when Blumberg mentions the detection method that he had used, radioimmunoassay, the inventor of the method is in the audience. This is the 1977 Nobel Laureate Rosalyn Yalow who, as Blumberg, was a participant of her first Lindau Meeting and had actually lectured on the radioimmunoassay method earlier in the week! Since Blumberg’s 1974 paper, a large amount of epidemiological data had been brought together, which he went on to discuss (with many slides). But first he described the virus. It turned out that the hepatitis-B virus was of a kind that had not been studied before. The main virus particles have both DNA and a varying amount of RNA. They also have two kinds of antigens, core and surface antigens. The latter can separate from the main particle and enter the blood stream, where they can be detected (as shown by the 1964 discovery). Since the virus was of a new kind, producing a vaccine against it could not follow the normal procedure. Instead it was found out that the surface antigen could be used as a vaccine, since injecting this antigen provokes an immune reaction against the main virus particles. Before describing the epidemiological surveys, as a joke, he referred to an advice from a colleague on how to give a lecture: “Say what you are going to say, say it, and say what you just have said!”. From his lecture, it is evident that Blumberg had put a lot of effort not only into making a vaccine, but also in finding out more about the virus, in particular the way it spreads. He had also been interested in looking at the differences between men and women and also in the distribution of virus infections in whole families. It turned out, e.g., that the virus can be transmitted from mother to child over an extended time. Finally, Blumberg carefully discussed the pros and cons of a vaccination campaign at a time before a full understanding of the virus had been found. According to Blumberg, in medicine, which is an emergent subject, one has an obligation to learn more, but also to act. Anders Bárány
|
10.5446/52572 (DOI)
|
Some of my physicist friends are starry-eyed. That will give the interpreters trouble. Starry-eyed, hochbegeistert, huh? They are starry-eyed about the prospect of coming into radio communication with what they call advanced technological societies in outer space. They have been listening now for a generation without hearing anything meaningful. And the thought is becoming more and more widespread that perhaps there are no more advanced technological societies in outer space. Perhaps they destroy themselves just about as they reach our stage, as we are threatening to do. May I say at once that I reject that thought completely. I reject completely the thought that there is some kind of natural law at work that spells the self-destruction of technological societies when they reach about our stage. It isn't a natural law. It's all utterly man-made. It's part of the special structure of our society in our time. In 1976, my nation celebrated, in what has come to be our tawdry way, the bicentennial, the 200th anniversary of American independence. Well, that was an interesting event, but minor in the long scale even for us. Because at just the same time, the Industrial Revolution was beginning. And in 200 years, the Industrial Revolution has brought us to a strange pass. You know, I see the history of our universe in the perspective of some 15 to 20 billion years, 6 billion years of the solar system, 4.7 billion years of the planet, 3 billion years of life, 3 million years of something like human life, hardly 10,000 years of civilization. And then this miserable, trivial 200 years of the Industrial Revolution to bring humanity to the brink of self-extinction. One talks about that Industrial Revolution in special ways. At its beginning, it seemed to promise humanity endless leisure and abundance. But then new characteristics began to emerge. One describes it frequently in terms of an exponential curve, an exponential curve in which one writes the years along the bottom. Though the way things have gone, it hardly matters what may have happened before 100 years ago. And vertically, along the ordinate, one writes many things: population, industrial pollution, the use of fossil fuels, the exhaustion of many other irreplaceable resources, armaments, and something that has a special interest for us, our sort of person, and that is information. We're living in the middle of an information explosion, which for those of us who are scientists, and many others, is in some ways as uncomfortable as any of these other things. And virtually one exponential curve fits all these phenomena. And that curve is reaching for the moon at just about the same time, the year 2000. I'm one of those scientists, and would there were not as many, who are finding it hard to understand how the human race is to bring itself much past the year 2000. I'd like to talk about this exponential phenomenon in a simpler and more homely way. Two hundred years ago the industrial use of coal was in its infancy. One hundred years ago the first oil wells were just being opened. I am as old as the industrial use of gasoline. For the first 25 years of the petroleum industry, gasoline was looked upon as a useless and dangerous byproduct. The only question about it was how one could get rid of it before it blew one up. And then in America Henry Ford put motor cars on the road, and there was a first industrial use for gasoline. And now for many persons the thought of going on with civilization without gasoline is almost unimaginable.
And now we are being told that we can't live without nuclear power. Ladies and gentlemen, the reality is that we cannot live with nuclear power. The population explosion, I wonder whether all of you realize just what the problem is. Let me say it in this way, that population explosion is of course a rather unanticipated product of the industrial revolution. And we've reached the point at which if one were to achieve merely the replacement level with each fertile couple producing two offspring by the year 2000 in all the developed countries of the world, and if one were to reach that same replacement level in all the so-called underdeveloped countries of the world by 2050, then 70 years later, so by 2120, the world population should have risen to about 13 billions. It's now approaching four billions and many of us are beginning to feel crowded. I think it's well recognized by now in medical circles that 70 to 90 percent, so a reasonable figure would be 80 percent of the cancer in our country at least, is of environmental origin and hence preventable. I hope there will be a chance for your comments. If I finish soon enough, I can't be sure. I would be much more anxious to hear from you than perhaps you are from me, and I hope there will be some opportunity in the course of these meetings. About 40 percent of those cancers in our country are happening in the workplaces. I'm talking of the cancer production that comes associated with the black lung of the coal miners, the brown lung of the textile workers, the vinyl chloride poisoning of the plastics workers, the PCB poisoning of the workers with electrical products, with talking of asbestosis and silicosis and the results of exposure to the innumerable and very rapidly expanding organic chemicals coming into further industrial use. It's become rather difficult to remain alive while earning a living, and some of the lectures that will be held at this meeting, to which I look forward, themselves look forward to being able to do therapeutic things about cancer, to do something about curing cancers. But ladies and gentlemen, what I've just finished saying means that the primary need is not to cure cancers, the primary need is to keep them from happening. It is the prevention of cancer that we should be bending our energies and devoting our resources, mainly to. I said that we can't live with nuclear power, it is intolerably life-threatening in three quite independently different ways. First of all, the thing that is talked of most and probably is the least important is the danger of nuclear accident. And when he is this argued back and forth by the proponents and opponents of nuclear power, I should like to say something very simple and direct. In our country, those superb realists, the insurance companies, refused from the beginning to insure nuclear power facilities. And for that, Congress passed for the first time in 1957, a so-called Price Anderson Act, that lays four fifths, four fifths of the liability for nuclear accident upon our taxpayers. They will be the ones to suffer and they will be the ones to pay themselves. The second of these life-threatening dangers of nuclear power, of course, involves the realization that every kind of nuclear installation, power installation now in existence produces plutonium 239 as a byproduct. That is at once the most toxic substance we know and the most convenient material from which to make atom bombs, fission bombs. 
Its toxicity is such that the inhalation of one milligram would cause death within hours from massive fibrosis of the lungs. The inhalation of one microgram, one millionth of that amount, produces a reasonable chance of an eventual lung cancer. As for making atom bombs, the trigger quantity, so-called, of plutonium 239, the smallest amount from which one can make a fission nuclear weapon, is two kilograms, something less than four and a half pounds. You could carry that in an ordinary brown paper bag such as we bring groceries home in, and with complete safety. To make a Hiroshima-sized bomb takes six to seven kilograms, something like 13 or 14 pounds; you'd need a shopping bag for that. And wherever in the world now nuclear power facilities have been opened, the potential exists for producing these fission bombs; an ordinary standard nuclear power station produces enough plutonium 239 per year to make as many, perhaps, as 100 Hiroshima-sized bombs. The present nuclear club, the nations that possess nuclear weapons, numbers six, but it is rather confidently expected that within the next decade it may have risen to about 25. You know, one hears a great deal in my country, perhaps in Germany too, at present, that fossil fuels are polluting too. They pollute the environment also. Coal is dangerous in this respect. I want to say something about that. There is a qualitative difference. The pollution we get from fossil fuels is temporary. It's mostly of the moment, but nuclear wastes, which I'm just about to talk about, they are forever. In terms of human history, forever. You just think: civilization, hardly 10,000 years, but the half-life of plutonium 239 is 24,400 years. After 24,000 years, half of it is left. After 48,000 years, a quarter of it is left. After 72,000 years, an eighth of it is left. And that's too much plutonium 239. No one, no one knows what to do with the nuclear wastes. In our country, they're simply being stored on site. One hears confusing and misleading things about this problem frequently: that actually it is not a problem, that the amount of nuclear waste per family per year, getting all their electric power out of nuclear sources, would amount to, a common phrase is, an aspirin tablet. Don't be fooled by that kind of statement. One comes down not to one aspirin tablet, but perhaps 10 or 20, only by the process of reprocessing the nuclear fuel. And in the United States at present, we have no commercial reprocessing, none whatsoever. Attempts were made to start it, and they ended futile, abortive. We have no reprocessing. We have the whole business to deal with. And no one, no one knows what to do with it. A recent copy of Science magazine reported a very recent extensive study. We've heard a lot about burying that stuff in salt mines and salt domes. And here our geologists were saying that it's by no means clear that that is a possible way of disposing of this material. There is the thought of perhaps burying this material under the sea, in the places where the tectonic plates that form the surface of our globe leave a crack, into which one might perhaps sink this nuclear waste in the hope that it will keep sinking. I had a call in my office from a Royal Commission from New Zealand a few months ago, and this came up, and we pulled out a map, and the most attractive place among the tectonic plates on the globe runs, as it happens, right under New Zealand. That made my visitors happy.
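As a worked restatement of the half-life arithmetic just quoted (the 24,400-year half-life is the figure given in the lecture; the decay law itself is the standard one), the amount of plutonium-239 remaining after a time t is

N(t) = N_0 \left(\tfrac{1}{2}\right)^{t/T_{1/2}}, \qquad T_{1/2} \approx 24{,}400\ \text{years},

so that N(24{,}400) = N_0/2, N(48{,}800) = N_0/4 and N(73{,}200) = N_0/8, which matches the rounded 24,000-, 48,000- and 72,000-year milestones cited for one half, one quarter and one eighth remaining.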
Is it a proper concern of scientists and biologists? I've just had an experience that raised this question very pointedly for me. I was looking forward to attending, indeed opening, a (I'm sure) pleasant, quiet, thoughtful meeting on the origin of life to be held in Cardiff, Wales, beginning on August 7. And last Friday morning I received a telegram from the organizer of this conference confirming that it was all settled. But two hours later I had an urgent request to appear in Australia, beginning with taking part in Hiroshima Day, August 6, and then speaking in other places in Australia about both nuclear power and nuclear weapons, two matters upon which the Australian government will have to make early decisions. So there I was: the origin of life, the end of life. That's the symmetry of that situation. The origin of life I've been deeply interested in for years; I would love to be at that meeting. But you know, that's just history. But what we're talking about here is what may happen to human life and much other life on this planet. And that we might be able to do something about. I call this lecture Life in a Lethal Society. Why do I call it a lethal society? Well, I've already begun to tell you: it has gone lethal on the grand scale. Not only through these things that I've already mentioned and many others that there is no time to go into, but in one particular sense. And that is that killing and destruction are now the biggest business on earth. Military expenditures in 1977 were $350 billion, expected to go to $400 billion this year. The biggest business on earth; and nuclear weapons, a fraction of that business, represent the most immediate threat to our lives and much of life on the earth. So let's talk about that for a few minutes. It's already 10 years ago that the stockpiles of nuclear weapons in the United States and the Soviet Union reached the explosive equivalent of 5 to 15 tons of TNT for every man, woman and child on the earth, 5 to 15 depending on how one computed them. That's about their level now. That comes out to be 40 to 120 tons of TNT for every man, woman and child in the two superpowers, in the United States and the Soviet Union. A half hour's interchange between the superpowers using those weapons would put the whole of humanity in very serious danger and wipe out the populations of the superpowers, and almost surely those of the neighboring nations. If that were to happen, it would be a good idea to be in the southern hemisphere. Most of that stuff is concentrated in the northern hemisphere, but ultimately even that offers no protection, because you know the Vietnam War taught us that one can commit utter devastation with so-called conventional weapons. Taught us that one can commit ecocide and genocide on the grand scale with conventional weapons. But there is a qualitative difference: as with the difference between fossil fuels and nuclear power, there is a qualitative difference between conventional weapons and nuclear weapons. However devastating their effects, when one has stopped using conventional weapons, that's it, it's over. One can count the dead and tot up the destruction, and that's it. But not with nuclear weapons. They are unlimited in their effects in space and time. The fallout entering the atmosphere eventually covers the entire globe, and ladies and gentlemen, not only is nuclear waste forever, nuclear fallout is forever. Am I exaggerating? Would that I were. I don't enjoy saying this kind of thing. I believe it to represent reality.
But let me reassure you: if there were that full-scale exchange with the present stockpiles between the Soviet Union and the United States, would it really wipe out every human being? A rather conservative calculation has just appeared in the Bulletin of the Atomic Scientists. Bernard Feld, who has been chairman of Pugwash for some time, wrote it, and he is conservative. And he puts the explosive equivalent of the present stockpiles at 15 billion tons of TNT. And to exterminate every human being on earth, he thinks, would take 60 billion tons of TNT. The Vladivostok agreement of 1974 between Messrs. Brezhnev and Ford gave license to roughly doubling the present stockpiles of nuclear weapons by 1985. And if that happens, and there is as yet no assurance that it won't happen, we would then be halfway to that conservative estimate of what it would take completely to wipe out the human race. You know, killing off virtually all the people in the United States, that's an idea that's been bandied about for years now. And we had, he doesn't live any longer, but we had a senator from Georgia named Richard Russell who gave a patriotic speech in the United States Senate talking about this, the wiping out of the American population. And he said in his speech, and I quote, if we have to get back to Adam and Eve, I want them to be Americans and I want them in our country and not in Europe. That's genuine patriotism. Last August I was a member, the American member, of an international commission that went to Hiroshima and Nagasaki to evaluate finally what those two bombs had done. Why should one be interested now, then, last year, 32 years after they were dropped? Well, for the very interesting reason that they are our only experience with the action of nuclear weapons upon populations. If it weren't for those two bombs, we would only have computer simulations of what nuclear bombs can do to populations. You know, sometimes one argues that those bombs were not needed, that it was useless. We owe a fantastic debt to the people who suffered the terrible agonies of those two bombings; they have shown us, if we will only watch and listen, they have shown us what a nuclear attack is like. And why was it that one still didn't know how many were killed and how many injured? For a perfectly simple reason: all the records were destroyed in those bombings. The only thing one had to go on was the identification of victims by families. No families, no identifications, apparently no victims. That meant a lot of soldiers, Japanese soldiers who were quartered in both places; that meant, most sadly, tens of thousands of Koreans who had been impressed as laborers and carried to Japan to labor during the war; and no one counted them. And they're still not counted. But what our commission found was that in those two bombings, the city of Hiroshima was leveled and 140,000 persons, plus or minus 10,000, had died within that year, and in Nagasaki 70,000, plus or minus 10,000, had died within that year. Ladies and gentlemen, there is a kind of mythical feeling, certainly in my country and perhaps throughout the world, of what an atomic bomb attack is like. You know, one thinks: bang, and the next morning one reads in the newspaper the account of how many were killed. That isn't the way it is at all. Those that were killed immediately are the lucky ones in many ways. The aftermath of a nuclear bombing is just filled with people who are maimed, burned, blinded, poisoned, and a lot of them take a long time to die.
When I go to Hiroshima and Nagasaki, and I've been for the last five years, I still visit the bomb survivors, more than 30 years after it, in those hospitals that maintain them. So that's our problem, a highly biological problem. And how are we to deal with it? I'd like to say a few words about that, because I think that also is largely misunderstood. You know, we constantly talk about matters that we speak of as matters of state, political, strategic, military considerations, security considerations, and tend to leave out of account the enormous business that is involved in these situations. And that's what I'd like to talk about. And please let no one misunderstand my own position. If I am talking disarmament, which I haven't talked, but which of course I feel is utterly necessary, and getting rid of all nuclear weapons, I don't dream of doing this unilaterally as an American. No: bilaterally, multilaterally. It should cover all the nations involved in the world. And I wonder how many of you are aware, because many Americans hardly are, that right now there is coming to an end, in the United Nations General Assembly, the special session on disarmament that began on May 24th and is running for a month. And may I say at once that that special session on disarmament has been systematically sabotaged by both the superpowers, by both the United States and the Soviet Union. Heads of both states refused to appear at it. Our own President Jimmy Carter chose to call the first meeting of NATO, I believe, ever to be held in Washington, just as it began. I should tell you that the word disarmament became inoperative, officially, in the United States many years ago. It ceased to be used. The trouble with the word disarmament is that it has a meaning. It means fewer arms. Its place has been taken by two entirely meaningless terms. They are arms control and arms limitation. The SALT talks are arms limitation talks. You just think a moment. Arms control: meaningless. One can control them up or down. So far it's always been up. And however far up they go, they'll always be limited. You know, my nation is making three hydrogen warheads per day. That's been going on at that rate for about six years, and the Russians keep pace with that kind of production. The strategy in the two countries is a little different. We have, we Americans, two and a half times as many warheads deployed as the Russians. But they have twice the explosive power, the so-called throw weight. They prefer to rely upon fewer, bigger weapons. One of the problems in our American arsenals is the question, are there enough targets for the number of bombs that we've prepared? Before I forget, let me mention another thing. The atom bomb that flattened Hiroshima and destroyed 140,000 people, 140,000 Japanese people, plus an unknown number of Koreans, by the end of that year: that was a pitiful 12 and a half kiloton bomb that rates in the present arsenals as a tactical weapon, not strategic. It would not, for example, be counted in the SALT talks. It's too small for that. One is supposed to get used to tactical weapons; they don't really count. And when we produce three hydrogen warheads per day on top of the amount of overkill, let me tell you what it's like, the overkill: we have enough stuff now deployed in the United States, not all in the United States, but under American auspices, to destroy every city in the Soviet Union of over 100,000 population, 40 times over.
And they have enough to destroy every such city in the United States 20 times over. So why are we making three hydrogen warheads per day and the Russians, the equivalent, sounds crazy? It is, it's insane. Unless in our part of that, on our side of the iron curtain, unless one holds an arms contract, and then it's business, and the more of it, the better. And it isn't very different on the other side of the iron curtain. I've traveled a lot lately in, never in the Soviet Union. My only visit to the Soviet Union was to try to attend a meeting of dissident physicists in Moscow last year, but unfortunately I had been sent over Leningrad, and in Leningrad they caught up with me and told me I was welcome to visit any place in the Soviet Union except Moscow where there were no hotel rooms. So my entire experience of the Soviet Union is 30 hours, 30 hours. I came away with a rather poor impression. I've reached the conviction a while back that if one organizes a society to maximize production, it ends up not very different from a society such as ours organized to maximize profit. Those people in the Soviet Union who are producing their nuclear weapons are as much concerned with getting on with that and are paid off in terms of personal power and status. And all the pequisites that go with those things that are the equivalent of what keeps our Western captains of industry interested in these activities. I think that thinking of these matters entirely in terms of statecraft and strategy and national security is a blind. There is something else at work. I think and I'm trying to speak responsibly and I wish there were more time to go on with these thoughts because I think everything I'm saying I can defend. I don't think in the Western world, our world, that the governments are running the nations. I think the governments are serving as the agents of great corporate and financial power. And I want to talk about that a little. Most persons don't understand what a really big modern corporation is about. So let me try to tell you in a few words. The biggest corporation in the world. Here I am in Germany and any name of a big American corporation I mentioned will be as familiar to Germans as to Americans. Those things are worldwide now. The biggest corporation in the world is Exxon. The sales of Exxon in 1977 were $58 billion. The second biggest corporation in the world is General Motors. A German said to me in my last visit, here we call the General Motors car the Opel. Yes, General Motors, annual sales, last year, global sales, $53 billion. There are only 18 nations in the world with gross national products as big as the annual sales of either Exxon or General Motors. All of you know what a gross national product is. That's everything that goes on. Let me say it a little differently. The annual sales of Exxon are as big as the gross national product of Australia. You just think of a pleasant conversation between the Premier of Australia and the Chairman of the Board of Exxon. Who is he, incidentally? I don't even know his name, though it's easily to be found out. You think of that pleasant conversation, that poor Premier of Australia. He had to get himself elected. He has to do things looking forward to the next election. He faces a big opposition. None of those limitations apply to the Chairman of the Board of Exxon. 
Corporations such as that, and I've mentioned only two among dozens of such transnational giants, corporations such as that represent the biggest concentrations of power and wealth that have ever existed in the whole of human history. They're not to be thought of as businesses. They are major powers. Do they have military forces? Yes. They have our military forces. Do they have systems of information, of surveillance? Yes. On the American side, they have the FBI and the CIA. Do they have systems of control? Yes. They have our governments. You know, that arms business is not only lucrative and huge. It is thoroughly concealed. The American arms business is thoroughly concealed from Americans. You know, I asked a few months ago, asked myself, who makes the hydrogen bombs? And I started by calling on the telephone a number of persons who live with these matters and surely would have known. They didn't. They just shrugged their shoulders. I had to start a research. It's gone very well. I now know who makes the hydrogen bombs. And yet this is a thoroughly concealed business. I have never seen in a quarterly or annual report of either Exxon or General Motors any mention of a military contract. And yet I just looked last night. I have the data for 1977. And Exxon is number 38 in the top 100 prime contractors for arms in the United States. And General Motors is number 24. I'd like to say something that I think is interesting about those huge transnational corporations. You know, I believe it to be deeply implanted in the theory of Marxism that it is precisely industrialization that prepares the road for socialism. It's precisely industrialization that socializes a country so that eventually the dictatorship of the proletariat so-called can take over what is already a finished product. When I was in Vietnam a couple of years ago, I found their theorists very worried because they told me, you know, we have a deep problem. We are trying to achieve socialism without having gone through an industrial phase. Now you know, one hesitates to say it when it's frightened of the very thought. All of us are. But we can't live much longer with the present militant forms of nationalism that exist all over the world. We need some kind of world government. And you know, we have a kind of world government. It is the transnational corporations. And so, you know, one might think that just as industrialization prepared the way for socialism, one might make a parallel theory that transnational corporations prepare the way for world government. There's only one difficulty with that thought, and that is their lethality, their life-threatening quality and activity. They are bringing us, those transnational corporations, our bringing humanity to the brink of self-destruction in many ways and as rapidly as can be managed. So ladies and gentlemen, I have already spoken too long. Please forgive me. I think that I think myself that I have not in any way stepped beyond my conception of the role of a natural scientist in this situation. You know, are we, are we scientists merely to study and measure and record what goes on as nature goes down the drain? Are we to be the passive witnesses of all that destruction without making any attempt to prevent it, not in my book? I think being a scientist is in many ways a religious vocation in the broadest sense. 
And I think that we as scientists are trying not only to understand nature, but must take on the responsibility to take care of it, to take care of the earth, to take care of life, to take care of human life. Thank you. Thank you. Thank you. Thank you. Thank you.
|
George Wald was well-known for his lectures on current global politics and the arms race [1]. The 1967 Nobel Laureate in Physiology or Medicine attended the Lindau Nobel Laureate Meetings three times, beginning in 1978, and probably few were surprised that Wald devoted his lectures to issues unrelated to “the primary physiological and chemical visual processes in the eye”, research for which Wald was awarded the Nobel Prize [2]. This lecture particularly echoes the turbulence and fears of the late 1970s. Wald began his lecture by saying that he rejects the idea that there is a natural law that dictates that technological societies will self-destruct. Yet his description of the last two hundred years since the beginning of the Industrial Revolution painted a grim picture for the future. The proliferation of nuclear warfare in the midst of the Cold War, the depletion of natural resources, the information explosion, and the exponential growth of the world’s population cast doubt over whether the human race would survive past the year 2000. Forty two years have passed since his lecture was delivered and it is thought-provoking to see how the numbers and mindsets have changed. Wald attributed 70-90% of cancers to be of environmental origin, and this statistic is still valid today [3]. Of these cancers, Wald stated that 40% was a result of carcinogen exposure in the workplace, a number that has since decreased to 3-6% of total cancer incidence worldwide [4]. Wald presented a world population growth forecast of 13 billion by the year 2120; a recent United Nations report pointed out “growth at a slower pace”, predicting 11 billion by 2100 [5]. Many would disagree with Wald’s statement that “the pollution we get from fossil fuels is for the moment, nuclear waste is forever”, although there is little enthusiasm today for nuclear power, even amid negative attitudes towards fossil fuels. “Life has gone lethal on a grand scale”, said Wald, referring to the marked increase in the amount of nuclear weapons in the last decades. The Cold War ended with the dismantling of the Soviet Union, and nuclear weapons are not perceived as a dominant threat to most people anymore. But are our societies any less lethal than illustrated by Wald in 1978? Wald would undoubtedly have plenty to say in 2020. Hanna Kurlanda-Witek [1] https://www.nytimes.com/1997/04/14/us/george-wald-nobel-biologist-dies-at-90.html [2] https://www.nobelprize.org/prizes/medicine/1967/summary/ [3] https://www.washingtonpost.com/news/to-your-health/wp/2015/12/17/study-up-to-90-of-cancers-not-bad-luck-but-due-to-lifestyle-choices-environment/ [4] https://www.cdc.gov/niosh/topics/cancer/default.html [5] https://www.un.org/development/desa/en/news/population/world-population-prospects-2019.html
|
10.5446/52575 (DOI)
|
Honored laureates, students, other guests. As scientists, we must appreciate that it is because of science and the achievements associated with its advances that there are now four billion people in the world, better fed, better housed, and in better health than ever before in history. If the problems to which Dr. Wald referred this morning are to be solved, it is by an appreciation that the population of the world should not increase, because this will be the only solution to our problem. But who among us has the wisdom to know who shall live and whose lives should not be maintained? Who shall reproduce and who shall not? I am not that wise, and so I will restrict my remarks to fields in which I can offer scientific proof that you all can accept. That is, I shall discuss only the science to which I have contributed sufficiently to find my place among these honored speakers. To primitive man, the sky was wonderful, mysterious, and awesome. But he could not even dream of what was within the golden disk, or a silver point of light so far beyond his reach. The telescope, the spectroscope, the radio telescope: all the tools and paraphernalia of modern science have acted as detailed probes to enable man to discover, to analyze, and hence better to understand the inner contents and fine structure of these celestial objects. Man himself is a mysterious object, and the tools to probe his physiologic nature and function have developed only slowly through the millennia. Becquerel, the Curies, and the Joliot-Curies, with the discovery of natural and artificial radioactivity, and Hevesy, who pioneered in the application of radioisotopes to the study of chemical processes, were the scientific progenitors of my career. For the past 30 years, I have been committed to the development and application of radioisotopic methodology to analyze the fine structure of biologic systems. From 1950 until his untimely death in 1972, Dr. Solomon Berson was joined with me in this scientific adventure, and together we gave birth to, and nurtured through its infancy, radioimmunoassay, a powerful tool for determination of virtually any substance of biologic interest. Essentially, it was an amalgamation of physics and medicine, a physicist and a physician. How did radioimmunoassay begin? Were we setting out to measure anything? Let me share with you today its history, its past, and something of its potential. Radioimmunoassay came into being not by directed design, but more as a fallout from our investigations into what might be considered an unrelated study. Dr. Mirsky postulated some 25 years ago that the diabetes of the adult, maturity onset diabetes, might not be due to a deficiency of insulin secretion, but rather to abnormally rapid degradation of insulin by a liver enzyme, which he deemed hepatic insulinase. Why did he make this suggestion? It was already known that the pancreas of the juvenile diabetic, the child with onset of diabetes, has practically no insulin. However, it was known from post-mortem studies that the maturity onset diabetic generally had almost normal, normal, and even supranormal amounts of insulin in his pancreas. And yet 25 years ago, before the days of the oral hypoglycemic agents, virtually all diabetics were treated with insulin. If the pancreas has enough insulin, and it is presumed that the circulation does not have enough insulin, then the idea of Dr. Mirsky that insulin was being destroyed abnormally rapidly in the diabetic seemed very reasonable. At the time, Dr.
Berson and I were working on studies concerned with the turnover, the distribution and disappearance from the plasma, of serum proteins. It therefore appeared not unreasonable to perform the same type of studies to determine how rapidly insulin was disappearing from the plasma of the diabetic subject as compared to the plasma of the non-diabetic subject. If the Mirsky hypothesis were correct, then we would have expected insulin to disappear more rapidly from the plasma of the diabetic than from the plasma of the non-diabetic subject. May I have the first slide, please? We therefore administered labeled insulin, insulin labeled with a radioisotope of iodine, I-131, intravenously to diabetic and non-diabetic subjects. If the Mirsky hypothesis were right, the insulin would have disappeared more rapidly from the diabetic. What we found, to our surprise, is that the radioactive insulin disappeared more slowly from the plasma of the diabetic subjects than from the non-diabetic subjects, shown in the lighter curves at the bottom. There were included in the lower group some diabetic subjects. In fact, in one, M.N., when first seen as a fresh diabetic, the disappearance curve was normally rapid. However, following several months of insulin therapy, he joined the more slowly disappearing group. In addition, there was an occasional non-diabetic subject, shown in this heavy dotted line, who also had a slow rate of disappearance. This was a schizophrenic subject who had received insulin shock therapy. The difference between the slowly and the rapidly disappearing groups, therefore, was not a history of diabetes per se, but rather a history of previous therapy with insulin. We therefore suspected that the retarded rate of insulin disappearance was due to binding of labeled insulin to antibodies which developed in response to administration of exogenous insulin obtained from cows or pigs. However, classic immunologic techniques were not satisfactory for the detection of antibodies which we presumed were likely to be of such low concentration as to be non-precipitating. We were confronted with the idea that insulin was not antigenic, or its antigenicity would have been observed in the previous 25-year history of insulin therapy. We therefore felt that we must introduce new techniques of high sensitivity for detection of soluble antigen-antibody complexes, and these techniques depended upon the use of labeled insulin, insulin labeled with radioactive iodine. In the next slide, please, we see some of the techniques. The first of the techniques we used was electrophoresis. In these techniques, essentially we are separating proteins on the basis of their differences in charge and their response to the superposition of an electric field. Down here in the center are the electrophoresis patterns of labeled insulin in the plasma of a patient never treated with insulin, what we are calling a nonimmune plasma, and in the plasma of patients treated with insulin, what we are calling immune plasma. In the plasma of the patients never treated with insulin, the labeled insulin binds to the paper as the rest of the serum proteins migrate. However, in the plasma of the insulin-treated subject, the insulin migrates in whole or in part with the serum proteins, here shown as an inter beta-gamma globulin.
Sol Berson and I were always in a great hurry, and therefore we designed a new, very simple technique, chromatoelectrophoresis, in which we left the top of the electrophoresis box open and encouraged water-flow chromatography, and therefore could effect separation between the free insulin, which remains at the site of application, and the protein-bound insulin, which migrates, in about 15 minutes as compared to the overnight or 16 hours required for standard electrophoresis. Here we see starch block electrophoresis, again a system that separates only on the basis of charge, where free insulin migrates almost in the region of albumin, and the labeled insulin bound to a gamma globulin remains quite close to the site of application. Using a variety of such systems, we were able to demonstrate the ubiquitous presence of insulin-binding antibodies in virtually all subjects treated with insulin for a period of a month or more. This concept was not acceptable to the immunologists of the mid-1950s. Heidelberger had just stated in his book that peptides less than 10,000 in molecular weight could not be antigenic. Next slide please. The original paper describing these findings was rejected by Science and initially rejected by the Journal of Clinical Investigation, the leading American journal in the field. A compromise with the editors eventually resulted in acceptance of the paper, but only after we omitted the word antibody from the title, because they were unable to accept our conclusion that the globulin responsible for insulin binding was in fact an acquired antibody. Here we see the bound insulin migrating with the gamma globulins, the free insulin remaining at the site of application. Note that as we increase the insulin concentration from less than one milliunit of insulin per milliliter up to a value tenfold greater, we get a reduction in the ratio of antibody-bound to free insulin: some 67% being bound in the first case, 64, 57, less than half, and 31% being bound as the insulin concentration has gone up tenfold. This observation provided the basis for the radioimmunoassay of plasma insulin. However, investigations and analyses which lasted for several years, and which included studies on the quantitative aspects of the reaction between insulin and antibody and the species specificity of the available antisera, were required to translate the theoretical concepts of radioimmunoassay into its practical application, almost 20 years ago, to the measurement of insulin in unextracted plasma. This slide please. Radioimmunoassay is simple in principle. It is shown in the competing reactions in this slide. The concentration of antigen in the unknown sample is determined by comparing its behavior in inhibiting the binding of labeled antigen to antibody with the behavior of known standard solutions. Radioimmunoassay is not an isotope dilution technique as originally described by Hevesy, since there is no requirement for identical immunologic activity of the unlabeled antigen with that of the labeled antigen. The validity of radioimmunoassay is dependent only on identical immunologic behavior of the unknown samples and of the known standards. The specificity of immunologic reactions can permit ready distinction, for instance, between corticosterone and cortisol, two steroids which differ only in the presence of a single hydroxyl residue. There is no requirement in radioimmunoassay for standards and unknowns to be identical chemically or to have identical biologic behavior.
It is therefore necessary, if one is concerned with the biologic behavior of quantities measured by radioimmunoassay, to have some additional proof as to the validity of the biologic comparison. In addition, radioimmunoassay can even be clinically useful in some assays which cannot be properly validated due to a lack of immunologic identity between standards and the sample whose concentration is to be determined. The use of radioimmunoassay for the measurement of parathyroid hormone, the hormone which controls the body's handling of calcium, is typical of this. Radioimmunoassay is a test tube method. To perform a radioimmunoassay, we use one or another variation of the following. We mix in a test tube a fixed amount of labeled antigen, a fixed amount of antibody, and, in some tubes, the known standards or, in other tubes, the unknown samples. At the end of a period of time, which may be from minutes to hours to days, we provide some way of separating the antibody-bound labeled antigen from that which is free, because, as I have described earlier, the antibody-bound antigen is in the form of soluble complexes. Tens of techniques have been used to effect this separation. Next slide, please. We then plot a standard curve, which consists of the ratio of antibody-bound to free labeled antigen as a function of the concentration of the unlabeled antigen. We then measure the percent binding, or the B over F ratio, in the unknown tube, and from this calibration curve can measure directly the concentration of the unlabeled antigen. As shown here, the sensitivity of radioimmunoassay is quite remarkable. As little as a tenth of a picogram per milliliter, or five times ten to the minus fourteenth molar, of gastrin is readily measurable. Next slide, please. The radioimmunoassay principle is not limited to immune substances but can be extended to other systems in which, in place of the specific antibody, there is now a specific reactor. The specific reactor can be any type of binding substance. This might be a binding protein in plasma, an enzyme, or a tissue receptor site. Furthermore, it is not necessary that a radioactive atom be used as the marker. Currently there has been considerable interest in employing as markers enzymes which are covalently bound to the antigen. Although many variations of competitive assay, the more general name for radioimmunoassay, have been described, radioimmunoassay has remained the method of choice and is likely to remain so, at least in those assays which require high sensitivity. The receptor site assays for the peptide hormones have the advantage of measuring biologic activity, but are generally at least ten- to a hundredfold less sensitive than radioimmunoassay. Enzyme immunoassays, which have been used rather extensively the last few years, do have several disadvantages. The most important is that the steric hindrance introduced into the antigen-antibody reaction because of the presence of the enzyme molecule almost inevitably decreases the sensitivity of the assay. Two decades ago, when bioassay procedures were in the forefront, the first presentation on the potential of hormonal measurements by radioimmunoassay went virtually unnoticed. Somewhat more interest was generated by our demonstration in 1959 of the practical application of radioimmunoassay to the measurement of plasma insulin in man. Nonetheless, in the early 60s the rate of growth of radioimmunoassay was quite slow, only an occasional paper other than those from our laboratory being found in the leading American journals.
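A minimal sketch of the competitive-binding relation underlying the standard curve just described; the symbols (Ag for unlabeled antigen, Ag* for labeled antigen, Ab for antibody) are chosen here for illustration and are not from the lecture:

\mathrm{Ag^{*}} + \mathrm{Ab} \rightleftharpoons \mathrm{Ag^{*}Ab}, \qquad \mathrm{Ag} + \mathrm{Ab} \rightleftharpoons \mathrm{AgAb}

With the amounts of labeled antigen and antibody held fixed, unlabeled antigen from the standards or from the unknown sample competes for the limited antibody, so the measured ratio

\frac{B}{F} = \frac{[\mathrm{Ag^{*}Ab}]}{[\mathrm{Ag^{*}}]_{\text{free}}}

falls as the unlabeled antigen concentration rises; reading the B/F value of an unknown tube against the curve obtained from the known standards then gives the unknown concentration.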
Not only did we describe radioimmunoassay, but in the early 60s we conducted training courses in our laboratory over a three year period in which we trained more than a hundred American investigators in the use of its techniques. And as you can see, at the end of this training period the procedure took off and there was in fact an exponential, and continuing, growth of radioimmunoassay. Like most scientists, we did not patent our discoveries. By the late 1960s radioimmunoassay had become a major tool in endocrine laboratories. More recently it has expanded beyond the research laboratory into the nuclear medicine and clinical laboratories. It has been estimated that in 1975, in the United States alone, over 4,000 hospital and non-hospital clinical laboratories performed radioimmunoassay of all kinds, almost double the number of a year or two earlier. And the rate of increase appears not to have diminished over the past two or three years. The technical simplicity of radioimmunoassay and the ease with which the reagents may be obtained have enabled its extensive use even in scientifically underdeveloped nations. In fact, I would say the current problem in radioimmunoassay is its overuse, as is the problem with many of the tools that modern medicine has given us. Next slide please. The explosive growth of radioimmunoassay has derived from its general applicability to many diverse areas in biomedical investigation and clinical diagnosis. A representative, illegible and incomplete listing of substances measured by radioimmunoassay is shown here. This slide is meant more to impress than to be read, so I'll go on to the next slide, which has only the listing of the substances shown. On the left is the listing of the peptide hormones; in the center, the non-peptide hormones; and on the right, the non-hormonal substances which have been measured by radioimmunoassay. And I would first like to describe some very typical uses, to show how with radioimmunoassay we have gained new insight into physiology and pathophysiology. I started this lecture by indicating that in the 1950s it was thought that all diabetics had an absolute deficiency of insulin. This was the reason for the Mirsky hypothesis: with an adequate pancreas, and with the thought that the circulating insulin was too low, he proposed that insulin was being degraded abnormally. In fact, the first discovery made with radioimmunoassay was the recognition that the maturity-onset diabetic subject did not have an absolute deficiency of insulin, but in fact his insulin levels generally exceeded those of the non-diabetic patient, because there is something wrong in diabetes which inhibits the body from properly using the insulin. So the first discovery made with radioimmunoassay was the recognition that in diabetes the elevated blood sugar was due, in the adult, not to an absolute deficiency of insulin but to something in the disease state that accounts for the failure of the diabetic subject to use his insulin as effectively as does the non-diabetic subject. The second discovery with radioimmunoassay was the ability to measure one of the pituitary hormones, growth hormone, the hormone concerned with the growth of small children. If these children are treated in time with growth hormone, then they can achieve almost normal growth. Are all children of small stature due to a deficiency of growth hormone? The answer is no. 
There are many other causes for the short stature of some children: malnutrition, genetic constitution, even lack of love can prevent some small children from growing. At present we have no way of obtaining human growth hormone except as autopsy material from human subjects. Perhaps someday E. coli will grow it, but at the present time our only source is from autopsy material. We cannot use animal growth hormones to treat growth-deficient human children. The importance of radioimmunoassay is that it provides us with ways of determining which small children are small due to an absence of growth hormone and which small children are small due to other causes. Why is it important to make this distinction? Because we only have enough growth hormone to treat less than half the children who really need it. If we wasted it by treating those children who do not need it, we would not have enough of it to treat those who do. A second role for radioimmunoassay: with radioimmunoassay we were able to measure the hormones which control calcium metabolism, the calcitropic hormones. We have understood something about secondary hyperparathyroidism, the bone disease of patients with renal disease. We have been able to make the diagnosis of calcitonin-secreting tumors, of excess parathyroid hormone from tumors of the parathyroid gland, and so on. We have learned a good deal more about sterility and fertility with assays for the gonadotropins. To me, perhaps the most exciting new development of radioimmunoassay is one that has not come from my laboratory, because I work at a Veterans Administration hospital and our patients are adult males for the most part. It is the use of radioimmunoassay for the screening for hypothyroidism of the newborn. In our country we have been familiar for a number of years with the screening of newborns for phenylketonuria, the inability to properly metabolize phenylalanine. These children, if not given a special diet in the first year of their life, develop irreversible mental retardation. As a result, in our country we have had a screening program in effect throughout the 48 or 50 states. With the development of radioimmunoassay we are able to measure the concentration of thyroid hormones also in a drop of blood on filter paper taken from the newborn. Phenylketonuria in northern Europe and the United States occurs in one in 25,000 births, and actually the treatment is a very difficult one. Hypothyroidism of the newborn, inactive thyroid of the newborn, occurs in about one in 5,000 to one in 8,000 births, a three- to five-fold greater incidence. If not detected by three months, mental retardation is inevitable: a lowering of the IQ, the intelligence quotient, by more than 30 to 40 percent below that of their siblings. The cost of treatment is a dollar a year, and in many states now screening of the newborn for neonatal hypothyroidism is being required, and we can look to the end of this terrible tragedy, which strikes one in 5,000 births, for the family, for the child and for the community. More recently I have returned from India, where I have noted that in India infectious diseases are second among the causes of death, as opposed to Europe and the United States, where they rank fifteenth among the causes of death. Radioimmunoassay will have an important role in the early detection of carriers of disease. Some eight years ago we described from our laboratory the first application of radioimmunoassay to a viral antigen, the measurement of hepatitis B antigen and its antibody using radioisotopic techniques, that is, using radioimmunoassay. 
It is now the method of choice in our country for the detection of blood which has been infected with hepatitis B antigen. We have more recently described from our laboratory the application of radioimmunoassay to the measurement of purified protein derivative. This is a protein that is derived from the cell wall of the mycobacterium tuberculosis. We have described the ability to detect this antigen in situations of miliary tuberculosis. We hope to be able to apply it to other very severe problems such as tuberculous meningitis, where an early diagnosis is very important if treatment is to be effected in time, and where the usual biologic culture techniques frequently will take six weeks or so before the diagnosis can be made. This might be less exciting in the States; in India, where the differential diagnosis on brain scan is not between stroke and tumor but between stroke and tuberculous granuloma, it is in fact a very important problem. Other uses which I can envision would be, for instance, in the case of leprosy, a disease in which there is a very long incubation period and which could be treated in time if we could identify the carriers. I believe that in the 1980s radioimmunoassay will find as wide applicability in the study of infectious diseases as it proved to have in the 60s in the study of endocrinology. Rather than go on with this very generalized description, I would like to consider a few specific examples of radioimmunoassay to illustrate how it can and should be used. Proper interpretation of plasma hormone levels, particularly of the peptide hormone concentrations, in clinical diagnosis requires a clear understanding of the factors involved in the regulation of hormonal secretion. Next slide. Hormonal secretion is stimulated by some departure from the state of biologic homeostasis that the hormone is designed to modulate. A representative model for one such system is shown in this slide. Regulation is effected through the operation of a feedback control loop which contains the hormone at one terminus and, at the other, the substance which it regulates. Gastrin is a hormone which is secreted in the stomach and which controls gastric acidity. Gastrin secretion increases gastric acidity, which then suppresses the secretion of antral gastrin. Secretion in this system can be affected by a number of factors, perhaps the most important of which is feeding. Feeding promotes gastrin release directly by its chemical action on the antrum, by distension of the stomach, or by the buffering action of food, which reduces gastric acidity and through this mechanism promotes gastrin release. Next slide. The normal fasting gastrin concentration in most types of ulcer patients and normal patients is less than a tenth of a nanogram per milliliter. Here we see three different clinical conditions, and I will describe them, in which the gastrin is abnormally elevated. The first group is a group of patients with pernicious anemia. These patients, who develop, if untreated, the fatal form of anemia, the inability to make blood, are patients whose stomachs are characterized by marked hypoacidity. Since gastric acid normally suppresses gastrin secretion, the continued absence of acid and the repeated stimulation by feeding eventually produce secondary hyperplasia of the gastrin-producing cells. The high level of gastrin in these patients is then considered very appropriate, because of the absence of the inhibitory effect of hydrochloric acid on the secretion of antral gastrin. 
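The point about pernicious anemia follows directly from the feedback loop just described: take away the acid limb and gastrin has nothing holding it down. A toy model, with arbitrary rate constants that are not meant to be physiological, makes the behavior concrete.

```python
# Toy negative-feedback loop: gastrin stimulates acid secretion, and acid
# suppresses antral gastrin release.  With the acid response removed
# (achlorhydria, as in pernicious anemia) gastrin settles at a much higher
# steady state.  All rate constants are arbitrary illustrations.

def steady_gastrin(acid_capacity, steps=20000, dt=0.01):
    gastrin, acid = 0.0, 0.0
    for _ in range(steps):
        release = 1.0 / (1.0 + acid / 0.5)                    # acid inhibits release
        d_gastrin = release - 0.2 * gastrin                   # secretion minus clearance
        d_acid = acid_capacity * gastrin / (gastrin + 1.0) - 0.5 * acid
        gastrin += dt * d_gastrin
        acid += dt * d_acid
    return gastrin

print("normal acid response   :", round(steady_gastrin(acid_capacity=2.0), 2))
print("no acid (achlorhydria) :", round(steady_gastrin(acid_capacity=0.0), 2))
```

With these made-up constants the achlorhydric case levels off about five times higher, which is the qualitative point: the elevation is an appropriate response to the missing acid, not autonomous secretion.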
The second group is a group we call the Zollinger-Ellison syndrome. Essentially these are patients with a tumor that secretes gastrin. This tumor is frequently a malignant tumor; it is a form of cancer. And in this case, secretion of gastrin from the tumor is not being appropriately regulated. The patients have marked hyperacidity, and develop ulcers in the duodenum due to the marked hyperacidity. It is important to be able to distinguish between the patients who have a gastrin-secreting tumor and patients with elevation of gastrin because of hypoacidity. And so it is evident that we must measure both the hormone and also the level of acid. In the application of radioimmunoassay in the case of the peptide hormones, it is not sufficient to measure the peptide hormone concentration per se. We must measure the hormone and the substance which it is designed to regulate. Interestingly enough, we can also get marked hyperacidity not because of a malignant condition such as a gastrin-secreting tumor, but simply because of hyperactivity of the antrum, hyperactivity of the gastrin-secreting cells of the antrum of the stomach. How do we distinguish between those patients with marked hyperacidity and excess gastrin secretion in the region of overlap? Certainly the treatment of a tumor is likely to be quite different from the treatment of simple hyperactivity of the gastrin-secreting cells. With radioimmunoassay, we can do dynamic studies. Before radioimmunoassay, it took a cup of blood to measure the concentration of insulin in the blood. With radioimmunoassay, we can make the same measurement with a finger stick. As a result, we can apply dynamic studies. We can perform five, 10, 20 successive radioimmunoassays to determine how the particular substance concentration is changing in plasma. Next slide, please. And in this slide, we see how we make the separation between gastrin hypersecretors due to the tumor, as shown on the left, and due to hyperactivity of the antral gastrin-secreting cells, as shown on the right. Patients with hyperactivity of the gastrointestinal tract respond dramatically to feeding and not to other secretagogues, such as calcium or secretin, because they are characterized by overactivity of the gastrin-secreting cells, which respond to feeding. Patients with a tumor do not respond to feeding, down here as opposed to up there, whereas they do respond to other secretagogues. So with appropriate choice of secretagogues, we are able to make a diagnostic differentiation between two groups that clinically would resemble each other. They both have high levels of gastrin, both have high levels of gastric acid, but with appropriate stimulatory tests we are able to make the distinction. Thus, in the application of radioimmunoassay to problems of hypo- or hypersecretion, we seldom rely on a single determination of the plasma hormone. Generally, to test for deficiency states, we measure concentrations not only in the basal state but in response to administration of appropriate physiologic or pharmacologic stimuli. When hypersecretion is suspected, sometimes we use suppressive tests. Studies such as these are common now in endocrinology and would not have been possible without radioimmunoassay. The study of the peptide hormones has been further complicated by a change in our concepts of the chemical nature of the peptide hormones. We now know that the peptide hormones are found in more than one form in plasma and in the glandular tissues from which they come. 
These forms may or may not have biologic activity, and may represent either precursors or metabolic products of the well-known, well-characterized, biologically active hormone. Their existence has certainly introduced complications into the interpretation of hormonal concentrations as measured by radioimmunoassay, and as measured by bioassay as well. A typical example of work in this area is the current interest in the heterogeneity of gastrin. Next slide, please. Several analytical methods were used to elucidate the nature of plasma gastrin. The technique shown here is called Sephadex gel filtration, and it separates molecules on the basis of their molecular radius, essentially on molecular weight and configuration of the molecule. This is done in a column, in our hands about half a meter long. We have marker molecules which mark the void volume and marker molecules which mark the salt peak. When we add gastrin, it is heptadecapeptide gastrin; the gastrin purified from the antrum is a 17-amino acid peptide. We add this to plasma. We note that the gastrin elutes after insulin, which weighs only 6,000, pro-insulin weighing 9,000. If we examine the nature of the gastrin in plasma, we see that it elutes between insulin and pro-insulin, clearly behaving differently chemically from the gastrin which has been purified from the antrum. Next slide, please. We can use other physical-chemical methods to effect that separation. You can leave it that way. The anode is shown up. The material is applied here at the origin. We note that the gastrin in plasma differs in charge also from the 17-amino acid peptide which had been separated from the antrum. We have called this new form of gastrin big basic gastrin. Next slide, please. Again, it's not lined up right. We note here is the big gastrin eluting between the void volume and the insulin. If we now use tryptic digestion, we can convert the big gastrin into the heptadecapeptide gastrin. We therefore predicted that big gastrin was a precursor molecule for the 17-amino acid peptide and was linked to the 17-amino acid peptide through a lysine or arginine residue. Next slide, please. Soon thereafter, Gregory and Tracy were able to show that the 17-amino acid heptadecapeptide gastrin was incorporated in big gastrin through two lysine residues. Thus, our prediction, based on the measurement of picogram to nanogram amounts of immunoreactive gastrin in the presence of billion-fold higher concentrations of other proteins, was justified by the work of Gregory and Tracy in purifying and chemically characterizing this material. Next slide, please. Unlike pro-insulin, which is virtually devoid of biologic activity, the in vivo administration of immunochemically equivalent amounts of big gastrin and heptadecapeptide gastrin results in the same physiologic response, the same acid output in a dog. On this basis, we would say that big gastrin and heptadecapeptide gastrin have equal biologic activity: you administer the same amount, you get the same biologic response. And in this way, by the classic physiologic description, big and heptadecapeptide gastrin have the same biologic activity. However, the turnover time for big gastrin is five times as long as that of heptadecapeptide gastrin, so that when you administer it intravenously, it disappears more slowly. Therefore, under the conditions of a continuous infusion, the plasma concentration of the big gastrin is five times as high as that of the heptadecapeptide gastrin. 
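That five-fold figure is just steady-state kinetics: for a constant intravenous infusion cleared by a first-order process, the plasma level settles at the infusion rate divided by the clearance rate, so a hormone that turns over five times more slowly piles up to five times the concentration. A small check, with arbitrary numbers:

```python
# Steady-state sketch: constant infusion R with first-order clearance k
# gives a plateau level of R / k.  Big gastrin is taken to turn over five
# times more slowly than heptadecapeptide gastrin; the rates themselves
# are arbitrary.
R = 1.0               # infusion rate
k_g17 = 1.0           # clearance rate constant of heptadecapeptide gastrin
k_big = k_g17 / 5.0   # big gastrin disappears five times more slowly

level_g17, level_big = R / k_g17, R / k_big
print("plasma level ratio, big gastrin / G17 :", level_big / level_g17)   # 5.0
# Equal acid output at equal infusion rates therefore means one fifth the
# activity per unit of circulating concentration:
print("apparent potency per unit plasma level:", level_g17 / level_big)   # 0.2
```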
Therefore, if we define biologic responsiveness not as dose administered versus biologic response but as plasma level versus biologic response, we would say that under those conditions big gastrin has only a fifth of the biologic activity of the heptadecapeptide gastrin. So the concept of heterogeneity has introduced complications not only into immunoassay, but into how we define things in terms of bioassay as well. At present, a decade after the concept of heterogeneity was developed, and in spite of an enormous body of descriptive data in this field, we still do not know very much about the rules or reasons for this precursor-product synthetic scheme. Is the synthesis of the peptide hormones in the form in which they are linked to another peptide essential only for the method of synthesis? As for the enzymes involved in the conversion process, ten years later we still do not know whether the converting enzymes are hormone-specific or species-specific. There are still arguments as to whether the conversion is effected only in the secreting tissue or whether there is peripheral conversion from inactive to active form. What is the role of the part of the precursor molecule which is discarded after biosynthesis? In fact, is it discarded? There is now evidence to suggest that ACTH, lipotropin, and others are part of the same molecule, and each part has different physiologic functions. Finding the answers to these and related questions will keep many of us busy for quite a while. In the few minutes left, I would just like to discuss with you what is to me a very new and exciting development, again making use of radioimmunoassay. The finding by Vanderhaeghen et al. of a new peptide in the vertebrate central nervous system that reacts with antibodies against gastrin was confirmed by Dockray, who suggested that the brain peptide resembled cholecystokinin-like peptides more closely than it did gastrin-like peptides. These studies were based only upon the differences in immunoreactivity of different antisera. We extended these studies and demonstrated that the peptide in the brain was not gastrin, was not simply cholecystokinin-like, but was in fact intact cholecystokinin and its C-terminal octapeptide, a gut hormone now being found in the brain. These observations depended on the use of two antisera with different immunochemical specificities. One was prepared by immunization of a goat with porcine cholecystokinin, the only species from which cholecystokinin has been purified, and this does not react with any of the other gut hormones, including gastrin or even the octapeptide, the C-terminal octapeptide of cholecystokinin. So it reacts with amino acids in the N-terminal portion of the molecule. A second antiserum, next slide, was prepared by immunization with the four C-terminal amino acids of gastrin. Gastrin and cholecystokinin share the same five C-terminal amino acids, and we see that the peptides, cholecystokinin, the octapeptide, gastrin, the 39-amino acid cholecystokinin, all behave about the same in this system. Using this antiserum, next slide, please, we observed that in all animal species studied the immunoreactive content of cholecystokinin in the gut and in the brain was roughly comparable, so that, in fact, if we consider man with a big brain, there is more of the gut peptide in the brain than there is in the gut. A very interesting observation. Note also that the concentrations among the different species are very constant, within a range of five or six fold. 
Furthermore, after trypsin digestion, which, as you know, breaks peptide bonds at lysine and arginine residues, there is virtually no change in immunoreactivity, because the cholecystokinin octapeptide does not contain lysine or arginine residues and therefore is not a substrate for trypsin. And in the next slide we see the Sephadex gel filtration patterns. In yellow is shown the intact cholecystokinin and in white the octapeptide, about half and half before tryptic digestion, all converted to the octapeptide after tryptic digestion with no loss in immunoreactivity. And essentially, the cholecystokinin in the gut of the pig is comparable to that in the cerebral cortex, in the gut of the dog comparable to that in the cerebral cortex, in the gut of the monkey comparable to that in the cerebral cortex. Next slide. In the same monkey and dog extracts, in which the cholecystokinin-like material was present in about the same concentration as in the pig extracts, we did not find it in the brain and gut extracts of the other species when we used the N-terminal antiserum. In other words, we found immunoreactivity with an antiserum directed to the C-terminal portion of the molecule, but not with an antiserum directed to the N-terminal portion of porcine cholecystokinin. We therefore predict, on the basis of radioimmunoassay, that there are major differences between pig and the other animal cholecystokinins in the amino-terminal portion of the molecule. Since this portion of the molecule is not directly involved in biologic action, it is not surprising that the amino acid sequences in this region have diverged during the course of evolution. In fact, the C-terminal octapeptide has about 10 times the biologic activity of intact cholecystokinin. We look forward to our prediction stimulating Victor Mutt to purify and chemically characterize the other animal cholecystokinins, and we are looking forward to what he finds. Next slide. Where in the brain is cholecystokinin found? Its concentration is highest in the cerebral cortex. Immunohistochemical studies, as shown here, indicate that the hormone appears to be concentrated in the cortical neurons. Lights please. The finding of peptides resembling cholecystokinin and its octapeptide in the central nervous system raises intriguing questions about their physiologic function, particularly with respect to their potential roles as satiety factors. The observations of Gibbs et al. that injection of purified cholecystokinin or the octapeptide evokes satiety, although other gut hormones did not, have suggested a negative feedback mechanism from the gastrointestinal tract as the causative mechanism. The finding that cholecystokinin peptides appear to be endogenous in the brain suggests a more direct role for them as neuroregulators. And in fact, we are now examining the changes in the cholecystokinin content and configuration in obese animals as compared to that of normal or fasted animals. Where is radioimmunoassay going? I'm afraid I'm not the best predictor of where radioimmunoassay is going. I have now seen hundreds of different applications. I've seen whole new fields in medicine stimulated. I, like you, look forward to where radioimmunoassay goes from here. Thank you.
|
Each year the Nobel Prize-awarding institutions have the possibility of dividing the money for their Nobel Prize(s) into two parts. These parts can then be given at the same time for different discoveries or inventions. This was the case when Rosalyn Yalow, as the second woman ever, received her Nobel Prize in Physiology or Medicine. Her Nobel lecture given in Stockholm on December 8, 1977, is entitled “Radioimmunoassay: A probe for the fine structure of biological systems”. When she came to the Lindau meeting half a year later, she chose to speak on a very similar subject and apparently also re-used most of the 16 slides. Looking them up may help the interested listener to follow the details of her quite technical lecture. In the main part, she describes the invention of the RIA method and an impressive number of discoveries that have been made with this method, both in pure biomedicine and in more practical clinical medicine. But in the beginning she takes up a more general global problem, that of the increasing population of the world. This seems to be typical of her, since at the Nobel banquet in Stockholm she gave a very unusual and engaging appeal to the students, in particular the female students. Her point then was that the female half of the population cannot be left out when our global problems need to be solved. From her own life, she knew more than enough about the many difficulties facing women entering scientific careers. When, as Rosalyn Sussman, she tried to enrol as a PhD student in nuclear physics, she initially had great difficulties. It was not until WWII started and most male physics students had disappeared into military activities that she finally was accepted. Since what she learned about nuclear physics and in particular about radioactivity is the starting point for her Nobel Prize work, one might paraphrase a quotation from her, to say “Without the war she would never have received the Nobel Prize”! Anders Bárány
|
10.5446/52576 (DOI)
|
I wish to thank you very much for your kind introduction. It is a great pleasure to be here and to listen to so many interesting lectures. And especially, I am glad to note that there are so many physicists who invade other fields of science. We have just heard today a couple of excellent lectures about biology. And the physicists have presented all sorts of excuses for going there. I think that they do not need any excuse for this, but I should like to try to follow the same approach, namely to try to go into a field where I am an outsider. And this means that I would like to present the views of plasma physicists on cosmology and astrophysical problems in general. As an excuse for doing so, I should perhaps mention that everybody knows that 99.99999% of the universe consists of a magnetized plasma, and therefore it may be allowed for a plasma physicist to present his views there. We have listened to wonderful lectures about cosmology, and it has been stated here, as a generally agreed fact, that the Big Bang cosmology is the cosmology which explains everything. And I am of course very impressed by this cosmology. It is based on the general theory of relativity, and in the year when the 100th anniversary of Einstein is celebrated, I need not stress to you how wonderful, how beautiful the general theory of relativity is. And when you listen to the presentation of Professor Dirac, of his version of the Big Bang theory, you are also very impressed. Of course, the general feeling is that it is a beautiful theory which explains the whole evolution of the universe from the Big Bang, the Urknall, when all matter, as we have it now, was concentrated in one point, in one singular point. There are of course a number of things which are a little difficult to understand, namely that the whole world which we see, Lindau, and the whole earth, and the planets, and the sun, and the galaxies, and all that, was once condensed into a very small volume, as small as this, or as small as this, or even still smaller, because a singular point is very, very small. But I take it, on the authority of Einstein and Professor Dirac, that it must have been so. And furthermore, you hear the detailed description of what happened during the first three minutes after the Big Bang, and that is described, as you know, in detail; you are a little surprised to find that the accurate dating of this is not so well known. Professor Dirac said that some people say that it was 10 billion years ago, and others 18 billion years ago, and I think this states the general situation: there are large uncertainties in certain respects, but of course not about what happened during the first three minutes. And then of course you ask yourself what happened before these three minutes, and the answer you get is that the question has no meaning, because nothing existed before; and how did all this come into being? There are some people who say that this proves the existence of God, because it must have been God who created all this at a certain moment. And this means that we mix science and theology, we come onto the borderline there, and this is a thing which perhaps is somewhat dangerous. But as I said, the strongest impression is the wonderful beauty of the whole theory; it explains everything. However, beauty is sometimes dangerous, also in science, and especially in cosmology. 
If we look at the history of science, there have been other cosmologies which have been wonderfully beautiful. Take the six-day creation, isn't that a wonderful cosmology? And still, in spite of its beauty, it isn't believed very much, at least not in the scientific community. And take the wonderful Ptolemaic system, which was generally accepted for a thousand years or so, with the harmony of the spheres and crystal spheres revolving; that was also very beautiful. But still there are very few people who believe in it, except of course those who believe in astrology, and that is perhaps more than those who believe in science, in astronomy, but these are outsiders, they do not belong to the scientific community. But I think that the reason why these very beautiful cosmologies are not accepted anymore is that they are not reconcilable with observations, because science is, after all, to some extent empirical. We have wonderful theories, of which we have heard so much, but there is also empirical evidence, and how does that agree with the theories? We have heard that there are convincing proofs for the Big Bang cosmology, and we have heard that in some cases there is expected to be a convincing proof of it in a few months, but let us see a little to what extent the observations support the Big Bang. I think that the general impression is that all really good observations support the Big Bang, and all bad observations contradict it. But what is the definition of a good observation? It is an observation which confirms the Big Bang; and the definition of a bad observation, an uninteresting observation, is an observation which brings the Big Bang into some difficulties. We have heard about these wonderful models. It is the homogeneous model which is the basis for the Big Bang, derived from the general theory of relativity. The first question is: is the universe really uniform? Is it isotropic? If you go out in the night and look at the stars, you see something which is not at all uniform. Can I have the first slide? But that is only a local anomaly. It is only something which happens here in our close neighborhood. If you go out and have a look at the galaxy, this does not give you an impression of a uniform distribution of matter either. But this is again a local anomaly. If we go out further, we should, according to the theory, be able to apply a uniform homogeneous theory. That means that such islands should be distributed uniformly in space. But they are not, because the galaxies are lumped together in groups of galaxies, and these are lumped together in clusters of galaxies, and the clusters of galaxies are not uniformly distributed either. They are lumped together in superclusters, and that is as far as our information goes, because if we go to still larger sizes, we don't know anything with certainty from observations of this kind. Can I have the projector on here? This is a diagram by de Vaucouleurs, which gives the observational results, the correlation between the maximum density and the radius of a sphere, and you see here, these represent galaxies, and the average density in them is something like 10 to the minus 23. These are groups of galaxies and clusters of galaxies, and this is the largest unit you can measure, that is, a supercluster of superclusters of galaxies. They come down here, and you see, this means that we have rather a hierarchy of lower and lower densities when we go out to very large regions. 
Here we are out at close to 10 to the 26 centimeters, and the radius of the universe is quoted as 10 to the 28, so we have here still a couple of orders of magnitude to go, and about this region we don't know anything from galactic observations, how uniform it is. It is quite possible that from here, further out, we have uniform density, that is, about 10 to the minus 29, which I think is the figure which Professor Dirac quoted. However, you can also, without being in disagreement with any observational fact, continue the extrapolation here, and that brings you down to 10 to the minus 32 or 34 at this distance, which is three or four orders of magnitude below this value. So we obviously have here a range about which the observations don't tell us anything. There is nothing wrong; we cannot say that the Big Bang uniform picture is wrong, but we can also accept such a solution, and it is of interest to see what result we can reach if we take the other alternative. So it means that the homogeneity at very large scale of the universe, or the metagalaxy as it also sometimes is called, meaning just that part of the universe we explore here, is not proved by observations of galaxies. What is the main proof of it? Well, it is the most important phenomenon which has been discovered for quite a few years in astrophysics, namely the black body radiation, which is completely isotropic, and that shows that the universe as a whole must be completely isotropic. It agrees with the Big Bang model, and this is actually the strongest support there is; it is, as far as I know, the only support there is for it. Well, perhaps I should say was, because one year ago there happened a very regrettable thing, namely that this radiation turned out not to be isotropic. It may very well be due to a local anomaly, of course, but if you correct for the rotation of the galaxy, you still get a large anisotropy, of the order of a velocity of 800 or 1000 kilometers per second. And if you then correct for the motion of the galaxy in relation to other galaxies in the Virgo cluster, which is the larger unit, you do not get any better isotropy either. So it must be some still larger unit where this anisotropy is caused. So I'm not quite sure that we could rely on this either. Then comes the Hubble expansion. The Big Bang says that everything was condensed in a singular point, and from that the galaxies flew out in all directions. And this is correct, at least to the extent that there is a Hubble expansion; the galaxies move outwards. And this is a good diagram which shows the relation between the distance, just measured by corrected apparent magnitude of the galaxies, and this is the velocity. And you see that these points fit very well on a straight line, which they should do according to the Big Bang theory. They should all be on a straight line. And of course we have observational errors, because these measurements are very difficult to make. However, if we take the individual observations here, we have the distance and we have the velocities, and from that we could construct a diagram of how these have moved, under the assumption, which is very reasonable, that they haven't changed their velocity. This is now the distance from us, and this is time, and you see that if you go back in time, these are all coming closer together. Now every individual point here gives such a straight line, and they come together here. 
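The back-extrapolation he is describing can be imitated with invented numbers: galaxies placed exactly on the Hubble line, given a modest distance error, and run backwards at constant velocity. The errors alone keep the minimum size of the ensemble well above zero, which is his point that the data require a small early metagalaxy but not necessarily a singular point.

```python
# Sketch of the back-extrapolation argument, with invented data (not the
# observations on Alfven's slide): exact Hubble-law distances d = v / H0,
# perturbed by ~10 % distance errors, then run backwards in time at
# constant velocity.  The rms size reaches a minimum near the Hubble time,
# and that minimum is set by the errors, not by a true singular origin.
import numpy as np

rng = np.random.default_rng(1)
H0 = 1.0                                          # Hubble constant, arbitrary units
v = rng.uniform(0.05, 0.3, 200)                   # recession velocities
d_obs = (v / H0) * (1 + 0.1 * rng.standard_normal(v.size))   # ~10 % errors

def rms_size(t):
    """RMS distance of the sample when every galaxy is run back by time t."""
    return np.sqrt(np.mean((d_obs - v * t) ** 2))

times = np.linspace(0.0, 1.5 / H0, 301)
sizes = np.array([rms_size(t) for t in times])
i = sizes.argmin()
print(f"present rms size       : {rms_size(0.0):.3f}")
print(f"minimum rms size       : {sizes[i]:.3f}  at t = {times[i]:.2f} / H0")
print(f"minimum / present size : {sizes[i] / rms_size(0.0):.2f}")   # about a tenth
```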
So there is no doubt that our metagalaxy is expanding at present. However, does this expansion necessarily derive from a Big Bang here? It's quite possible; you cannot rule that out, because these could very well be observational errors. It might be that everything has originated from one point here, and then it has gone out like this, and the minimum size which you get here may very well be due to observational errors. However, we cannot say that from observations it is possible to conclude this. We can conclude that once the metagalaxy was much smaller than now; at the Hubble time, 10 billion years or so, it was something like one-tenth of its present size or less than that. It could be zero, it could be a single point, but it could very well also be much larger. So this means that if we try to construct an earlier state of our metagalaxy from observations, we could do that. That could lead to the Big Bang model, but it could also, according to this, lead to a rather drastically different picture. You have here the Hubble radius, the Hubble density, and so on. And here is beta, that is, the velocity of the different galaxies which have been measured, in units of the velocity of light. And of the galaxies for which one has measured the redshift, most are well below 0.3 of the velocity of light. That means that this is the size, 3 times 10 to the 27, if we take the Hubble radius to be 10 to the 28. Professor Dirac gave a model in which he discussed especially the part of the universe which was receding with a velocity which was less than half the velocity of light, which we can take for this, and here we can take 0.4 as some sort of average. This is only to show you what one may get in such a way. Then you see that the total rest mass of the metagalaxy is given here, and the rest mass energy, the rest mass multiplied by the square of the velocity of light, comes out to be 433; the unit is 10 to the 70. You can also calculate the kinetic energy of this, and the kinetic energy comes out to be 19. It is about 5%. So the part of the universe which we have observed with any degree of certainty has a kinetic energy which is about 5%, a little different here, of the rest mass. So in some way we need to have an energy put into the metagalaxy which gives you 20%. From that we can construct, I don't have so much time, I see that the time goes very rapidly, this is a table of what we have here. What is interesting to see, this is the minimum size of the metagalaxy, and what is interesting is that even at the minimum size we are 100 times outside the Schwarzschild limit, which means that the correction for the general relativity effect is only 1%. What does this mean? It means that if we go out into the galaxy, we of course have measured the general relativity effects in our close neighborhood, and if we go out to study the behavior of the galaxy, no one applies general relativity; every orbital motion there can be calculated with classical mechanics. If we go out further, as long as we are far from the Schwarzschild limit, we can use classical mechanics and use Euclidean geometry with a high degree of certainty. So actually with this model we have a 1% correction for general relativity, and something like 10, perhaps 25%, correction for the special theory of relativity. So you see that this is a possible model, which as I said is just as well reconcilable with the observational data as the Big Bang theory, as far as I can see. However, now comes another thing. 
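The roughly 5 per cent figure for kinetic energy relative to rest-mass energy can be checked with a rough integration, under the simplifying assumptions (not necessarily those behind his table) of a uniform-density sphere in linear Hubble flow whose edge recedes at 0.4 of the velocity of light:

```python
# Mass-weighted kinetic energy per unit rest energy is (gamma - 1).
# Assume a uniform-density sphere with beta(r) = 0.4 * r / R (linear
# Hubble flow); these are illustrative assumptions, not the table's model.
import numpy as np

r = np.linspace(0.0, 1.0, 100001)        # radius in units of the sphere radius
beta = 0.4 * r                           # recession velocity / c
gamma = 1.0 / np.sqrt(1.0 - beta ** 2)
weights = r ** 2                         # mass element of a uniform sphere ~ r^2 dr
ke_fraction = np.sum((gamma - 1.0) * weights) / np.sum(weights)
print(f"kinetic / rest-mass energy ~ {100 * ke_fraction:.1f} %")   # comes out near 5 %
```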
And that is that there are so many other very interesting phenomena which have been observed in astrophysics, and one of the most dramatic things is the QSOs, the quasars. And the quasars have velocities, redshifts, which are much larger than those of the galaxies. Under the assumption that the quasars, the QSOs, have a redshift which is cosmological, that is, due to the Big Bang, then you go out from 0.3 of the velocity of light, 0.4, out to almost the velocity of light. You have redshifts which are up to 2 or 3 or perhaps even more. So it is a critical question whether the redshifts of the QSOs are cosmological or not. And the QSOs are a very interesting, very fascinating thing to study. And I have here a short summary of their properties. They are not really introduced very much in the general cosmological discussion. And the reason for this is simply that they are very awkward for the Big Bang cosmology. There is no evident explanation of them, and you can see that they are causing considerable trouble. The QSOs have very large releases of energy. It is of the order of the annihilation of one solar mass per year, and in some cases still more. They have redshifts which are very large, and the controversial question is: are these redshifts cosmological, or are they caused by some other mechanism? And then you can see that what we should take out here is especially that some QSOs are located close to galaxies. And in certain cases they have the same redshift as the galaxy. But there are many cases, and undoubtedly very convincing evidence, that there are QSOs close to galaxies but with very different redshifts. This has been demonstrated by measurements by Margaret Burbidge. The researchers have very strong evidence for the non-cosmological redshifts, and Arp in Pasadena has made beautiful measurements of this. So there must be mechanisms by which these QSOs get up to close to the velocity of light without these velocities being produced by the Big Bang. And you can see what requirements one has here. If you take the enormous energy releases which are measured, and you introduce the condition that this energy is emitted in one direction, then you can get the bodies up to these velocities. This is one possible suggestion to explain the QSOs. This means that the very large velocities which we observe are not necessarily cosmological. There are other mechanisms also. But what are these mechanisms? What is the mechanism which produces the energy of the QSOs? We see immediately that the nuclear energy which is giving us the energy of the stars is by far not sufficient. So we have three possibilities. We have either to invent a new law of physics which gives you these very large energy releases, which we perhaps are a little hesitant to do. We have two other alternatives left. One is gravitational energy and the other is annihilation. And as for the gravitational energy, there have been a number of theories according to which black holes produce these large energy releases. But if you try to work out a theory of the QSOs, how they are accelerated, you find that you run into very serious difficulties. And then there is just the possibility that we have annihilation as an energy source. And that brings up an interesting problem, namely: is there antimatter in the universe? Is the universe symmetric with regard to matter and antimatter? This has of course been much speculated about. 
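For orientation, the energy scale he quotes, the annihilation of one solar mass per year, corresponds in round numbers to the following output:

```python
# Annihilating one solar mass per year, E = m c^2, in round numbers.
M_SUN = 1.989e30        # kg
C = 2.998e8             # m/s
YEAR = 3.156e7          # s
L_SUN = 3.8e26          # solar luminosity, W

power = M_SUN * C ** 2 / YEAR
print(f"{power:.1e} W, about {power / L_SUN:.0e} times the luminosity of the sun")
```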
And it was Oskar Klein in Stockholm who 20 years ago made a systematic effort to make a cosmological model with symmetry between matter and antimatter. There has been much objection to that. And this is essentially because, if matter and antimatter are mixed in the universe, you would have an enormous gamma radiation and you would have a very rapid annihilation of it all, so that this could not persist for a very long time. However, all this depends upon the assumption that the universe is homogeneous, that there cannot be separate regions. And this is one of the really dramatic new results of space research, namely that our picture of the properties of space has changed in a drastic way. And I'm not speaking about the four-dimensional space of the Big Bang theories; I'm speaking about the space which is explored by spacecraft. Fifty years ago it was believed that everything was vacuum outside the celestial bodies. Then it was observed that there was an interplanetary medium, an interstellar medium; and we heard an earlier lecture about that. And it was then natural to assume that this was a continuous medium. And it was natural to assume that also in our close neighborhood, in the environment of the Earth, the so-called magnetosphere, and in the interplanetary space, the so-called heliosphere or solar magnetosphere, we had a homogeneous medium. This is not correct. This is one of the most surprising results of space research. If you have the magnetic field as a function of the radius from the Earth, this is the Earth, and you go out and measure the magnetic field by spacecraft, it should decay as r to the minus 3, and that is just what it does, out to about 10 Earth radii or something like that. And then it suddenly changes to the opposite sign and goes on like this. And this is a most dramatic change. It takes place in a region which is a few cyclotron radii. It is a sudden change in the magnetization. So the magnetization here is in that direction, and it is here in that direction. The magnetization of space is not continuous, it is discontinuous. It means that we have a current layer here. And such phenomena have been found not only in the magnetopause; they have also been found in the magnetotail of the Earth, in the solar equatorial plane. We have an outward-directed magnetic field which suddenly changes to the opposite, and again there is a thin current layer. It has been found in Jupiter's magnetosphere and so on. There are half a dozen places where we observe this. It means that space in our close environment has a cellular structure. There are cells with one magnetization, and there are cells with the other magnetization. And these are rather watertight separation surfaces. And this means that space is no longer uniform. It consists of a number of cells, and they are separated by current layers. And on the two sides of the current layer you have different magnetizations, different pressures, different densities, and perhaps you could also have different matters, different kinds of matter. Such thin layers, I should just show you here what it is. This is the interplanetary medium, this is the Earth. The Earth has a magnetic field like that; that is the picture of 50 years ago. When space research started, we got this picture, with a neutral sheet here. Now this is one of the later models. This is the Earth, and you see a number of such layers. Space is drastically different from what it was thought to be earlier. And these interfaces were not detected from the Earth. 
They cannot be detected unless a spacecraft penetrates them. Even if it comes close, you see no sign of them. We know, hence, that space has this structure; but how far out? As far as the spacecraft go; and what is beyond that, no one knows. We cannot prove that it has the same cellular structure further out. It could very well be that we have a wonderful homogeneous model, but the limit would then be just as far as spacecraft go. So perhaps it is easier to assume that this is a general property of space, that it has this structure all the way, everywhere. And then, time is getting on, we can just see here a model of a layer separating matter and antimatter. We suppose that in interstellar space we have a region containing matter and another region containing antimatter. And then there will be a boundary layer where they are in contact, and they produce high-energy particles here. And you can see that the number of such particles which are produced is very small. You have no hope of detecting it from any distance. And the thickness which such a Leidenfrost layer occupies need only be a hundred-thousandth or a ten-thousandth of a light year. So if we accept the cellular structure, we can very well think of the universe as divided into such regions. And this is important because it is obvious that it has cosmological consequences, quite a few of them, which I shall not go into further here. I should only like to say that it seems that with the idea of the symmetric universe you can explain quite a few of the observations which are embarrassing to astrophysics and especially to cosmology, namely the enormous release of energy in the QSOs, the so-called gamma-ray bursts, quite a lot of the X-ray radiation, and so on. But this will take us too far. I thank you.
|
When Hannes Alfvén gave his talk at Lindau, he was well known by the general public in Sweden as the eminent scientist and Nobel Laureate who was strongly opposed to nuclear power for environmental reasons. Among his scientific colleagues world-wide, he was at the same time known to be a strong opponent of the prevailing theory of the birth of the Universe, the Big Bang theory. In Lindau he spoke to an audience of students, young researchers and Nobel Laureates, most of whom probably whole-heartedly accepted the Big Bang theory. Alfvén had been active as a political speaker in Sweden for some years and it is interesting to hear him use an old rhetorical technique to try to make his point. He several times first gives praise to the “beautiful theory” or “wonderful model” and then almost immediately brings up his criticism: “What happened before?”, “But the Universe is not homogeneous”, etc. So does Alfvén have an alternative theory? Since the 1960’s he had been working on a model of the Universe originally put forward by Oskar Klein, professor of theoretical physics at Stockholm University. In this model the Universe contains equal amounts of matter and antimatter, so that some stars that we see are made of matter and others of antimatter. When matter meets antimatter a violent annihilation takes place and energy in the form of electromagnetic radiation is emitted (radio waves, light, X-rays, gamma-rays, etc). As a plasma physicist, Alfvén had been working on mechanisms that would keep matter and antimatter mostly separated from each other. At the end of his talk, he first brings up annihilation as a possible energy source driving the very energetic stellar objects named quasars. He then describes spacecrafts actually finding a cellular structure with cell boundaries having magnetic fields in different directions. Even if a few scientists are still working on the Klein and Alfvén model of the Universe, it is today looked upon as dated. But what will never be dated is Alfvén’s strong scientific plea never to accept “final solutions” because they are beautiful, but to always look out for new empirical evidence! Anders Bárány
|
10.5446/52577 (DOI)
|
I am very happy to be here in Lindau again and to have this opportunity of talking to you about scientific questions that interest me. Today I want to talk about the possibility that quantities which are usually considered to be constants of nature are really not constant but are varying slowly with the time. It may be that they do vary and that this variation is so slow that it does not show up in ordinary laboratory experiments. This idea that maybe constants of nature are really varying was first introduced by Milne 50 years ago. It's not a new idea. Now Milne supported his arguments with philosophical considerations which I believe are not very reliable, but still he introduced a new idea into physics, an idea which has excited many people since then and which is being very actively studied at the present time. Now if we are going to think about constants of nature we must first of all make sure that the quantities which we are interested in are dimensionless. That is to say that they don't depend on what units we use. If we have a quantity which depends on whether we use centimeters or inches, then such a quantity will not be very fundamental in nature. It will depend on our choice of units. So we have to make sure that we discuss only quantities which are independent of the units, dimensionless quantities. Now many people who write about this question rather forget this. You see printed papers now where people discuss: can it be that the velocity of light varies? Now the velocity of light depends on your units very much, a unit of distance and a unit of time, and the question whether the velocity of light varies or not will depend on how these units are defined, and it is not a fundamental question, not a fundamental problem. One usually in theoretical work takes one's units of space and time to make the velocity of light equal to one. Then of course there is no question, it has to be a constant. Let us now focus our attention on dimensionless quantities. There are some which are well known in physics. One of them is the fine structure constant, whose reciprocal is ħc/e², that is, Planck's constant divided by 2π, times the velocity of light, divided by e squared, a very famous constant in atomic theory, and its value is approximately 137. Another constant of nature which will immediately spring to your mind is the ratio of the mass of the proton to the mass of the electron, mp/me, and that is something like 1840. Then there is another constant which suggests itself at once. If you consider a hydrogen atom, where there is an electron and a proton, they attract each other with a force inversely proportional to the square of the distance, an electric force. There is also a gravitational force between them, inversely proportional to the square of the distance. The ratio of these two forces is a constant. It is a dimensionless constant; it has the value e²/(G me mp). G here, capital G, is the gravitational constant. If you work this out you get roughly 7 times 10 to the power of 39, an extremely large number. Now physicists believe that ultimately they will find an explanation for all the natural numbers that turn up. There ought to be some explanation for these numbers. If we had a better theory of electrodynamics it would presumably enable us to calculate this number 137. If we understood elementary particles better there ought to be a way of calculating this ratio of the masses and getting this number here. 
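These dimensionless numbers are easy to evaluate with present-day values of the constants (e² here stands for e²/4πε₀ in SI units). The figures below are a check with modern constants, so they differ a little from the values quoted from the podium, but the orders of magnitude, and in particular the enormous size of the force ratio, are the point.

```python
# The three dimensionless constants just mentioned, with rounded
# present-day values of the physical constants.
hbar = 1.0546e-34     # J s
c    = 2.9979e8       # m / s
e2   = 2.307e-28      # e^2 / (4 pi eps0), J m
G    = 6.674e-11      # m^3 kg^-1 s^-2
m_e  = 9.109e-31      # kg
m_p  = 1.6726e-27     # kg

print("1 / alpha (= hbar c / e^2)     :", round(hbar * c / e2, 1))        # ~137
print("m_p / m_e                      :", round(m_p / m_e, 1))            # ~1836
print("electric / gravitational force :", f"{e2 / (G * m_e * m_p):.1e}")  # ~2e39
```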
These calculations would just involve mathematical quantities, 4π's and similar factors like that. So we should imagine these numbers constructed from these simple mathematical numbers. But what about this number here, this enormous number here? How can that ever be explained? You couldn't hope to explain it in terms of just 4π's and things like that. Maybe this number should not be explained by itself but should be connected with another large number which is provided by the age of the universe. It seems that the universe had a very definite beginning, a big bang as it is called; that is pretty well universally accepted, so that there is a definite age of the universe. That age is provided by Hubble's constant, which gives the ratio of the speed of recession of distant matter to its distance. We get in that way an age of the universe somewhere around 18,000 million years. It's not very well known. There might possibly be an error of a factor of as much as two in it. But still it is somewhere of that order, let us say 18 times 10 to the ninth years. So this involves the unit years, and we ought to have something more fundamental in our theory. Let us take a unit of time provided by electronic constants, e squared over m c cubed. Let us say this is a unit of time. If we express this age of the universe in terms of this unit of time, we get the number two times 10 to the 39. We get a number just about as big as this number here. Now you might say that is a remarkable coincidence. Well, I don't believe it is a coincidence. I believe there is some reason behind it, a reason which we do not know at the present time but which we shall know at some time in the future when we know more about atomic theory and about cosmology. And we shall then have a basis for connecting these two numbers. These two numbers should be expressed, one of them as some simple numerical factor times the other. Now I am assuming that there is such a connection and that it is not just a coincidence. That is the whole basis of the theory which I am now going to give you. Now this number here is not a constant. It is continually increasing as the universe gets older. And if these two numbers are connected, it means that this number here must also not be a constant and must be continually increasing as the universe gets older. We see that this number must be increasing proportionally to the time t, the age of the universe. These things on the left here are usually considered as constants. And now we can no longer consider all of these things as constant. It is usual to suppose that the atomic ones are constant, e and the two m's, and then we have G varying proportionally to t to the minus one. To express this result accurately one should say the gravitational constant in atomic units is decreasing according to the law t to the minus one. We must not just say generally that the gravitational constant is decreasing, because by itself it is a quantity with dimensions, and one must avoid talking about quantities with dimensions. One must specify it as G in atomic units varying according to this law. One way of looking at it is like this: the gravitational force is very weak compared with the other forces known in physics, very weak in particular in comparison with the electric force. Why is it so weak? 
Well, we might suppose that when the universe was started off these forces were roughly equally strong, and that the gravitational force has been getting weaker and weaker according to this law. It has had a long time in which to get weaker and that is why it is now so weak. We get quite a unified principle in that way. Now the question arises: how are we to fit in this assumption of G varying with our standard physical ideas? That is one question. Another question is: is there any experimental evidence in favor of it? These are the two questions which I shall want to be discussing at length. First of all there is the serious question of how we are to fit this in with Einstein's theory of gravitation. Einstein's theory of gravitation has been extremely successful. We can't just throw it overboard. We have to amend our theory in such a way as to preserve all the successes of the Einstein theory. For that purpose the obvious thing to do, in fact the only suggestion that has been put forward, is that we have two different metrics which are important in physics. One metric which one has to use for the Einstein theory, and then a different metric provided by the atomic constants. So that will be a basic idea of this new theory: these two metrics which are both of importance in physics. The idea of two metrics was first introduced by Milne 50 years ago, but his relationship between the two metrics is not in agreement with the relationship which I am going to propose now. I would like to say a little bit more about this connection of a universal constant with the time. We are here making a new assumption, and we should make this assumption quite generally and say that all the fundamental constants of nature which are extremely large are connected with the time. Not just this one; any other constant which is extremely large should also be connected with the time. Otherwise it is a very artificial assumption. Now there is one other constant which immediately springs to mind, namely the total mass of the universe. Now it could be that the universe is of infinite extent and then the total mass is infinite, but in that case we must modify our number and make it precise in this way. Let us talk just about that part of the universe which is receding from us with a velocity less than half the velocity of light. I take this fraction half arbitrarily, but it would not affect the argument if we used another fraction, two thirds or three quarters; the argument would be the same. It is just to have some definite quantity which we can use even if the universe is infinite. Now let us consider the total mass which is receding from us with a velocity less than half the velocity of light. Express this mass in proton units. We get a very large number, a large number which is not very easy to measure because we don't know how much dark matter there is in intergalactic gas or black holes or things like that. But it seems that this number is somewhere around ten to the power of seventy eight. I will call this number N, the number of proton masses in that part of the universe which is receding from us with a velocity less than half the velocity of light, and it is roughly ten to the seventy eight. And according to this assumption that we are making about the large numbers, the large numbers hypothesis I call it, this must vary according to the law t squared. This assumption is just as necessary as this one if we are to have a consistent picture. 
Now how can we understand this continual increase in the amount of matter which is in the observable part of the universe? People for a long time supposed that there was continuous creation of new matter. I made this assumption myself, but I feel now that it is a bad assumption. It is very hard to develop this theory in any consistent way and there are also observational grounds for disbelieving in it. So I want to keep to the assumption that matter is conserved, the usual old fashioned conservation of mass. Then this continual increase in N is to be understood in this way. We suppose that the galaxies are not receding from us uniformly but are continually slowing up, so that the number of galaxies included within this sphere of recession velocity less than half the velocity of light is continually increasing. We can then easily arrange it to have this number N increasing proportionately to t squared and at the same time keep the conservation of mass. We have now quite a definite cosmology where we have the galaxies always moving apart from each other, but their velocity is decelerating all the time, so that there are always more and more galaxies being included within this sphere v less than a half c. Now these ideas which I have been talking to you about immediately give us some definite results concerning the model of the universe which is consistent with them. We get a model of the universe when we imagine all the local irregularities to be smoothed out, all the stars and galaxies to be replaced by an equivalent continuous distribution of matter. What model of the universe must we adopt? People have often supposed a model of the universe which increases up to a certain maximum size and then collapses again. Very many models have been worked out consistent with Einstein's equations, but we can now assert that all those models in which the universe increases to a maximum size and collapses again are wrong, because such a model would provide a large number, namely the maximum size before the collapse starts. This large number expressed in atomic units would be a constant. It won't be something that can change as the universe gets older. Now any constant large number is ruled out by our general hypothesis that all these very large numbers must be connected with the age of the universe. So that means that a large number of models are immediately ruled out. For quite a while people were working with a steady state model of the universe, but that must also be ruled out because a steady state model cannot have G varying like this. There are many models which do get ruled out, and one finds that there is only one model which is left which is in agreement with this large numbers hypothesis. This was a model which was proposed in 1932 jointly by Einstein and de Sitter. Let us call it the Einstein-de Sitter model. It's not very much heard about, but I think that it should receive much more importance and I believe that this is the correct model. This model involves a metric ds squared equal to d tau squared (I'm using tau for the Einstein time) minus tau to the power of four thirds times (dx squared plus dy squared plus dz squared). Now you may wonder why we have this strange power of the time occurring here. With any function of the time here we would get uniform density and a uniform pressure. The pressure is caused just by electromagnetic radiation or similar radiation, and in our actual universe the pressure is very small, very much smaller than the effects produced by the static matter. 
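Written out, the line element just described (a reconstruction from the spoken description, in units with c = 1) is
\[ ds^{2} = d\tau^{2} - \tau^{4/3}\left(dx^{2}+dy^{2}+dz^{2}\right), \]
the Einstein-de Sitter metric with scale factor proportional to τ^{2/3}; the particular power 4/3 is the zero-pressure choice explained next.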
This particular power of the time occurring here is the one which we have to choose in order to have zero pressure. That is the reason why there is this rather strange factor. Now this model of the universe gives an average density roughly in agreement with the observed density. That was pointed out by Einstein and de Sitter when they introduced the model in 1932. This model I would like to propose to you as the correct model for describing our universe. It is the only model which fits in with the large numbers hypothesis. What is the law according to which the galaxies separate from us? What is the law of expansion of the universe? We take the distance of a galaxy and consider it as a function of the time, let's say a galaxy corresponding to a velocity of recession of half the velocity of light. Then the distance of this galaxy varies with the time as R proportional to t to the power of one third. That is quite a big change from our usual ideas that the galaxies are receding from us roughly uniformly. It means that the distance, although it always continues to increase, increases according to a slower and slower law as time goes on. The galaxies never stop receding, but they continue to recede from us more and more slowly. This is a consequence of this model which we are using here. Now I talked earlier about the two metrics which we have to use. An Einstein metric, ds_E (I put the suffix E here to say that this is referring to the Einstein unit of distance), and then there is ds_A, the atomic distance. What is the relationship between them? That relationship is very easily worked out just from the requirement that G is proportional to t to the minus one. From purely dimensional arguments one finds that ds_E equals t ds_A. That is the relationship between these two metrics, and the argument is simple and definite and just involves discussion of the dimensions of quantities. We may take this ds to refer to a time interval, and then if we use the variable tau to stand for time in the Einstein picture and use t to stand for time referred to atomic units, as I have been using all the time up to the present, we have d tau equals t dt. Tau is proportional to t squared; in fact tau equals a half t squared. We have then these two times which should come into physics: a time which one has to use in connection with the Einstein theory, the time tau, and a time which is measured by atomic clocks and things like that, the time t. Perhaps I had better pass on to the discussion of whether this theory is going to lead to any effects which one can check by observation. Our usual application of the Einstein theory is concerned with the Schwarzschild solution of the field equations of Einstein. This Schwarzschild solution was worked out on the assumption that at great distances from the singularity where the mass is, space becomes flat like Minkowski space. Well, according to our present picture that must be modified by the requirement that at great distances space goes over to the space of the Einstein-de Sitter metric described by this equation here. One can work out how one has to modify the Schwarzschild solution. I've done that in a paper which was published recently, and then one gets the result that if you take a planetary orbit which is roughly circular, even in the new theory that planet will continue to circulate around the sun with constant velocity and with a constant radius for its orbit. These results are not changed by the new theory. However, there are some results which are changed. 
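A hedged sketch of how these unit conversions work out, using only the relations just stated (scale factor proportional to τ^{2/3} in the Einstein metric):
\[ ds_E = t\,ds_A,\qquad d\tau = t\,dt \;\Rightarrow\; \tau = \tfrac12 t^{2}, \]
\[ R_E \propto \tau^{2/3} \propto t^{4/3} \;\Rightarrow\; R_A = R_E/t \propto t^{1/3}, \qquad r_E = \text{const} \;\Rightarrow\; r_A \propto t^{-1}, \]
the first giving the galactic expansion law quoted above, and the second the inward spiraling of planetary orbits in atomic units discussed next.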
These results, that the velocity remains constant and the radius of the orbit is constant, refer to the Einstein units. If we pass to atomic units then the velocity will still remain constant. The velocity is the same in all units, because we are keeping the velocity of light unity, and any velocity is just a certain fraction of that, independent of what units we use. But the radius of the orbit will be affected by this transformation, and it means that the radius of the orbit of a planet expressed in atomic units is continually getting less. So the radius is proportional to t to the minus one. The planets are all spiraling inwards. That is a cosmological effect which is to be superposed on all other physical effects. That is an effect which one should be able to observe if one makes sufficiently accurate observations of the planets. Then another effect which one should be able to observe is in connection with this formula which gives a difference between d tau and dt. Tau, the time which one uses in the Einstein equations, is the same as the time which is marked out by the motion of the Earth around the Sun. That time is what astronomers call ephemeris time, and they have been using ephemeris time for centuries in order to express the results of their observations. This ephemeris time, according to this theory, should not be the same as the time measured by atomic clocks. So there is an effect which one should be able to observe, and the inward spiraling is another effect. I would like to discuss what the experimental information is about these subjects. If you want to compare the two time scales, the best way is to study the motion of the moon. The moon's motion through the sky can be observed very accurately, but it is subject to very many perturbations: perturbations caused by the tides, perturbations caused by other planets, and these perturbations are not negligible for the accuracy which we need in this work. This study of the motion of the moon has been considered for a good many years by Van Flandern, who works at the Naval Observatory in Washington. I have put down here a summary of the results. You see, N is the symbol used for the angular velocity of the moon. The units it is expressed in are seconds per century; 'cy' means century. The acceleration, N dot, is in seconds per century squared. This acceleration is negative; it means a deceleration. Here are figures which have been obtained recently. Can you see it or is the light too strong? Can you perhaps reduce the lights? There is a whole set of figures which have been worked out by different investigators. There is a result given by Oesterwinter and Cohen in 1972 where the figure is 38 plus or minus 8. I have ignored the earlier estimates which are not so good. There is a figure given by Morrison and Ward in 1975 just from observing the transits of Mercury, the times when Mercury passes in front of the sun. Then there is a figure given by Muller from studying ancient eclipses. There are records of eclipses going back to before the time of Christ. You might say that observations made so long ago would not be useful to us because their methods of observation were very primitive. They had no accurate instruments. But still, if you see an eclipse, a total eclipse, at a certain place on the earth, that is a very definite piece of information. You know just when the eclipse took place: it took place when the earth, sun and moon were all in a line. 
And if you're told that at a certain place on the earth's surface the eclipse was total, that is a very definite piece of information. Now Muller has been studying the records in monasteries about eclipses, going back more than 2,000 years. And you have to get a suitable understanding of the language used by the people recording eclipses in those days. These observations have the advantage that they are made over a very long time base. Well, the result that Muller gets is 30 plus or minus 3. Now there are some recent observations which were made with the help of lunar models and also of parameters which were obtained from satellite observations. And two different workers have given the results 30.6 and 27.4. Well, Van Flandern has considered all of these figures for the acceleration of the moon and he considers the weighted average of the above to be 28 plus or minus 1.2. These are all observations with ephemeris time. There are observations with atomic time which have been made since 1955. Since 1955 people have been observing the moon with atomic clocks. The most recent result of Van Flandern for the atomic time figure is 21.5. Van Flandern in previous years had given results differing quite a lot from that figure. But he finds that there are systematic errors in his earlier results, and the systematic errors have been, he hopes, eliminated by new calculations and also by new recent observations. Now a different kind of observation has been made by people working at the Jet Propulsion Laboratory using the lunar laser ranging technique. They send light to the moon and observe the reflected light and see how long it took for the journey, and get very accurate estimates of the distance of the moon. And Calame and Mulholland have been working on that and they got the figure 24.6. Now I just heard a few weeks ago from Van Flandern that these people think that there was an error in their calculation, and with a new correction that should come down to 20.7. Now Williams and others, also using lunar laser ranging, get the figure 23.8. Now Van Flandern has considered all the results and gives as the weighted mean 22.6. Now there is a difference between this figure and the one above, this figure referring to atomic time and the other figure relating to ephemeris time. And that difference provides evidence for the correctness of this theory. If G is not varying, if one just stuck to the old ideas, then one should get the same lunar acceleration whether one uses ephemeris time or atomic time. And the observations do seem to support the idea that there is a difference. Now there is another way of checking on this effect, by seeing whether there is this inward spiraling of the planets. That has been worked on by Shapiro and his assistants; some years ago, in 1975, they published some figures. Their method is to send radar waves to a planet on the far side of the sun and then observe the reflected radar waves. These reflected waves are extremely weak, but still they can be observed, and one can measure the time required for the to and fro journey, and that gives us accurate information about the distance of the planet. One measures this time in atomic units. These were some figures that were published by Shapiro and his assistants in 1975. N dot, the acceleration, is divided by N, the angular velocity. And those figures are given for the three inner planets, Mercury, Venus and Mars. Well, you notice that the results are all plus. 
A plus result there means that the planet is spiraling inward. If the planet were spiraling outward it would be minus. Those results are all in favor of inward spiraling, but you see that the errors, the probable errors, are quite large and as big as the effect that you are measuring. Well, Shapiro rather emphasizes that he has really no evidence for inward spiraling. His results would be consistent with the old theory of constant G. Still, there is perhaps some weak evidence for inward spiraling. Now at the present time one can make very accurate observations of the distance of Mars with the Viking lander which was put onto Mars in 1975. And Shapiro is working on the results of this lander. He says that just the time since the lander was put on Mars is not a sufficiently large time base in order to be able to work out this effect. One has to combine these Viking observations with the earlier Mariner observations of Mars. I met Shapiro last March and he told me that he is now working on this problem of combining the recent Viking observations with the Mariner observations, and it will be six months before he has a definite answer. Shapiro is a very careful observer and won't publish anything until he is pretty certain that it is right. So we have to wait a few more months, and then we will have definite results on this question of whether there is this inward spiraling of the planets or not, and that will probably make it pretty certain whether this new theory is right or not. I wonder how much more time I have. Can I go on another five or ten minutes or should I cut it short? I will go on. Another question which I might refer to is concerned with the microwave radiation. There is this microwave radiation, with just a few centimeters wavelength, which is observed to be coming out of the sky in all directions and falling on the earth. It is a very cold radiation. It is a black body radiation, so far as it can be observed, corresponding to a temperature of about 2.8 degrees absolute. People have explained this radiation by saying that it is the result of a big fireball which was in existence in the early stages of the universe, extremely hot to begin with, but it has been cooling all the time because of the expansion of the universe, and this very cold radiation, 2.8 degrees, is what is left of this original fireball. Now I would just like to show you how this microwave radiation gives strong support for the ideas I have been presenting here, in particular for the large numbers hypothesis. Let us consider this radiation, temperature 2.8 degrees. That provides us with a large number: kT, Boltzmann's constant times the temperature, the sort of average energy of this microwave spectrum, divided by, let us say, the rest energy of the proton, m_p c squared. If you work this out you get a number somewhere around 2.5 times 10 to the minus 13. You may consider that 10 to the minus 13 is the reciprocal of a large number, and that this should therefore be varying with the time, varying according to the law t to the minus one third, 10 to the 13 being roughly the cube root of 10 to the 39. That means that the temperature of this microwave radiation should be decreasing according to the law t to the minus a third. Now that fits in with the law of expansion which we had previously. We had the expansion law that the distance of a galaxy is proportional to t to the power of one third. 
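A restatement of the argument just given, with the figures quoted in the talk:
\[ \frac{kT_{\text{radiation}}}{m_p c^{2}} \approx 2.5\times10^{-13} \;\sim\; \left(10^{39}\right)^{-1/3} \;\Rightarrow\; T_{\text{radiation}} \propto t^{-1/3}, \]
consistent with each wavelength being stretched along with the galactic distances, λ ∝ R ∝ t^{1/3}.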
The wavelength of each of the components in the blackbody radiation should also be increasing in proportion to t to the one third, the frequency as t to the minus one third, and the temperature of the blackbody radiation should therefore be decreasing according to the law t to the minus a third. We do get consistency in that way. These figures are quite different from the old theory. According to the old theory we had the galaxies receding with a roughly constant velocity and the microwave radiation having its temperature going down according to the law t to the minus one. Well, the t to the minus a third law is provided by the large numbers hypothesis and it fits in with the law of expansion. I feel that this is quite a strong confirmation of our ideas. We might easily have had things going wrong. It means that the fireball has been cooling according to this law since a time very close to the Big Bang. According to the older ideas this law for the cooling of the blackbody radiation only started at a certain time when the fireball got decoupled from matter. This idea of a sort of decoupling taking place at a certain stage in the evolution of the universe is of course quite foreign to the idea of the large numbers hypothesis, according to which there must not be any time coming in, in an important sense, into the theory in such a way as to provide a large number which is a constant. There is a lot of further work to do in connection with the development of the theory, and several people have been studying the question. Also, unfortunately, a good many people have been writing papers about it who have not correctly understood the basic ideas of the theory and who have come to wrong conclusions. People have said that this theory is untenable because it would mean that in the geological past the temperature of the earth would have been much too hot to allow life at a time when life is known to have existed. Well, they haven't taken into account the inward spiraling effect. In fact this inward spiraling was only worked out a little over a year ago, and that is going to help quite a lot with this temperature of the earth, because it means that at these times in the past the earth was quite a bit farther away from the sun than it is now. I would like to conclude here, and I would like to say that there are many people working on this subject. In particular there is a school which is run by Canuto at the Space Research Center in New York, and the result of a lot of calculations that he has made is that up to the present there is no irreconcilable discrepancy which has showed up. Thank you.
|
Paul Dirac attended the first 10 physics meetings organised in Lindau and gave talks at all except one. In 1979 he chose to talk about a subject based on a more than 40-year old love of his, the so-called Large Numbers Hypothesis from 1937. This hypothesis emanated from the fact that the strength of the electric force today is about 40 orders of magnitude larger than the gravitational force and that this is of the same order of magnitude as the age of the universe (in atomic time units). In his lecture Dirac assumes that the two numbers have always been proportional to each other. From this he draws a number of interesting conclusions. One of them is that the gravitational constant varies with the age of the universe and is decreasing. Another conclusion is that the only viable model of the universe is based on the one that Albert Einstein and Willem de Sitter published in 1932. Dirac’s assumptions lead to effects that should be observable. One of them is a difference between time as measured by an atomic clock and time as measured by the motion of the earth. Another effect is a very slow spiralling of the planetary orbits towards the sun. Due to a revival of interest in cosmology in the 1950s, observational techniques for high-precision measurements of astrophysical effects had been developed. Irwin Shapiro had bounced radar signals off the planets and measured different parameters, and the Apollo collaboration had bounced laser pulses off the moon. At the end of his lecture, Dirac discusses the status of the observations and concludes that they cannot yet confirm or rule out his theory. Anders Bárány
|
10.5446/52579 (DOI)
|
The galaxy in which we live consists mostly of stars. In fact about 90% of the matter in the galaxy is gathered into stars. My subject this morning is the other 10% the stuff between the stars. If I were a regular astronomer I would probably hesitate to talk on such a broad theme, but I'm going to speak about it as a physicist who somehow wandered into that subject, found some of the physics that was relevant, extremely fascinating, and found some problems unsolved that physics could still be brought to bear on. So I want to give you a very broad view of the interstellar medium, introduce you to a few, only a few of the special problems in physics that are still intriguing and unsolved, and then try to suggest in my own view at least what are some of the big problems that remain to be worked on, suggesting that in the hope that some of the younger people here will find that will be tempted to turn their own physical ingenuity and imagination on those fields. Well in our galaxy of course for many reasons some of which will become clear presently that we can't see what the structure of the galaxy is, but we can always begin in a survey like this by looking at ourselves as if we were in another galaxy. And if I may have the first slide now, could I have the lights off on the first slide then. This is how our galaxy might look. I begin with, this will be a view that many of you have seen before, we can get it. Could you kill these lights up here too please. The difficulty. Next slide please. I put that in for focusing and it's serving its purpose. Well, this is a big spiral galaxy very much like our own. This is Messier 81, about the size of our galaxy, about the same number of stars, namely one and a half times 10 to the 11, and it's about 30 million light years away. If this were ourselves we would be somewhere out here, and I call your attention to these dark places in here which are actually obscured by interstellar dust, which will be one of the topics I'll come to a little later. Now the interstellar medium, let me say if we could see with radio eyes, not when looking at visible light, but let us say with radio, the stars here would not be bright at all and we'd see the whole thing filled with a gently glowing gas. The interstellar medium, that is to say everything but the stars, consists mainly of the following items. The next slide please. Here's our galaxy in a very schematic sketch, it's about 10 to the 23rd centimeters in diameter, although of course the edge is a little bit hard to define. And we'll be talking about the gas, which amounts to about 10 percent of the mass, the equivalent of 10 to the 10th solar masses. It's mostly hydrogen and helium of course, and about 90 percent of it is quite cold, neutral hydrogen and neutral helium. Then there's some solid material, which is of extreme interest to us in some respects. Electromagnetic waves, both starlight and the ubiquitous microwaves, Professor Dirac mentioned just a few moments ago, cosmic rays and magnetic fields. Now this medium is extremely empty. I tried to show that on the next slide, next slide please. The emptiness of the galaxy is illustrated in two ways. Here are some stars, a typical size of a star is 10 to the 11 centimeters and the nearest star is about 10 to the 19th centimeters away, a few light years, perhaps 10. The gas that's between the stars is empty to almost the same degree, curiously enough. 
Here is a hydrogen atom in the interstellar gas; the nearest hydrogen atom is about a centimeter away (in fact there are a few atoms typically per cubic centimeter), and the ratio of the diameter of this to the distance is again a factor of 10 to the 8th. In fact, I think that's accidental; I don't think there's anything fundamental about the coincidence of those two factors. The galaxy is so empty that if two galaxies collided and passed through one another, it would be possible but not very likely that one star in the whole thing would hit another star in the other one. If you draw a straight line through the galaxy, the chance that it hits a star is exceedingly minute, something like 10 to the minus 10 or 12. Nevertheless, in this emptiness there's a lot going on, as we'll now try to explain. The next slide shows the electromagnetic radiation in the galaxy, plotted here as a function of wavelength in centimeters. This is the starlight, and this is plotted on a scale in such a way that the area of the plot truly represents energy. And the fact is that these two forms of electromagnetic radiation, the starlight here and the microwave background discovered by Penzias and Wilson, are roughly the same energy density, namely a few times 10 to the minus 13th per cubic centimeter. But of course, we remember that this is only in our own galaxy. This is local. This is everywhere, filling a million times more space, so that in terms of energy density in the universe as a whole, there's a million times more of that than that. Of course, if I get close to a bright star, this is different. I'm taking a typical place between the stars, and some of you may recognize here a few features I just sketched in. There's Lyman alpha sticking up on the spectrum, and there's, I guess, H alpha up there. Whereas over here, as far as we know, we do have the black-body spectrum appropriate to a temperature of 2.8 degrees, as Professor Dirac mentioned. Although there's now some evidence from recent experiments at Berkeley that it doesn't fit quite as well, and there may be a little bit of excess at the middle of the spectrum, which is perhaps trying to tell us something. Now about the energy here I'd like to make a remark: people think of the microwave spectrum as being extremely weak and very hard to detect. The following is a surprising but true statement. At all wavelengths longer than a few millimeters, the Earth receives more radiation in this form than it does in direct radiation from the Sun. I'll leave that as an exercise for the student to check. Now the next slide compares some of these substances in energy density, just to make a rather general point, which is that if we look at the mean energy density in starlight, cosmic microwaves, magnetic field, cosmic ray particles, and the gas, curiously enough, these are all in the range of 10 to the minus 13 to 10 to the minus 12 erg per cubic centimeter. In saying that, I've already tacitly assumed the value of the magnetic field, and in a moment I shall discuss what we know about the magnetic field strength. Some of this equality is no doubt a kind of equipartition, since the magnetic field and the gas and the cosmic ray particles are coupled together, and the energy density, which is equivalent to a pressure, expresses that dynamical coupling. Other coincidences there must be said to be accidental. Now let's go back and talk about the gas a little bit. 
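Before going on, a quick numerical check of the "few times 10 to the minus 13 per cubic centimeter" figure for the microwave background (a sketch, assuming the 2.8 K temperature quoted):
\[ u = aT^{4} = 7.56\times10^{-15}\,\mathrm{erg\,cm^{-3}\,K^{-4}}\times(2.8\,\mathrm{K})^{4} \approx 4.6\times10^{-13}\,\mathrm{erg\,cm^{-3}}. \]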
What we know about the cold gas is largely determined by observing it in the microwave spectrum, and the next slide just reminds you of the way that is done, using the radiation given off by neutral atomic hydrogen in the hyperfine structure of its ground state. The change in energy involves a switching of the relative orientation of the magnetic moment of the electron and the magnetic moment of the proton, and this well-known frequency, 1420 megacycles, is emitted by all the cold hydrogen in the galaxy. The interstellar hydrogen cloud is thus identified by its emission of this 21 centimeter electromagnetic wave. Now using this, over the past 25 years astronomers, radio astronomers, have gradually built up a picture of the galaxy. They're able to do so because the galaxy as a whole is totally transparent to this radiation, whereas, as we shall see, it is not to visible light. And out of this one has galactic maps, of which I give a recent example on the next slide. Next slide please. This is a composite map based on measurements by Kerr and Berskauer and others. This is our location, about 30,000 light years from the center of the galaxy, and we're now looking down on the galactic plane, and these are arcs which are believed to represent the locations of the hydrogen and thus show us that we are in fact living in a spiral, a galactic spiral. I must however say that the picture here is not neat, and in a way it is less neat now than it was at the beginning. So complicated is the actual distribution of the gas that the location of the spiral arms and so on by this indirect method is very uncertain, and it's in fact, I think, recognized to be more uncertain now than was believed to be the case in the early days of 21 centimeter astronomy. So that I really, I don't put too much faith in it actually; I would not like to have to go out here and guarantee to find one of these arms when I get there. The situation, as often in astronomy, is more complicated than it appeared to be at first. Moreover the gas is very lumpy and distributed in clouds, and there's a lot of things going on. Now there is one other method for studying the gas from the earth, and that is to observe and avoid the atmospheric absorption by going into orbit, and much of what we are learning now about the interstellar gas comes from the orbiting telescopes, in particular from the Copernicus telescope, which was conceived and is operated by Professor Spitzer at Princeton and others. And I show you on the next slide first a view of the Copernicus telescope, which has been doing wonderful observations now for several years and is still running fine. This is basically a flying ultraviolet spectrograph looking at the stellar radiation which never reaches us on earth, and on the next slide you see a spectrum. Next slide please. This is a spectrum, one of the hundreds of spectra taken by the Copernicus telescope, looking at a particular bright star, Zeta Ophiuchi, as an ultraviolet source, and in front of that star we see absorption at specific wavelengths by the interstellar gas. The particular importance of these observations is emphasized in this slide when we look at the peaks that are absorption by molecular hydrogen, H2, and indeed by the molecule HD over here; since the H2 molecule unfortunately has no radiation and no absorption that we can see from earth, it is first seen in the ultraviolet. And in fact these observations represent the first really fruitful observations of molecular hydrogen, which is, as it turns out, a very important component of the gas. 
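A quick consistency check of the 21-centimeter identification mentioned above, using the quoted 1420 megacycles:
\[ \lambda = \frac{c}{\nu} = \frac{3.00\times10^{10}\,\mathrm{cm\,s^{-1}}}{1.420\times10^{9}\,\mathrm{s^{-1}}} \approx 21\,\mathrm{cm}. \]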
And in this way too, when one measures the abundance of other elements, one has already found, for instance, that elements like magnesium, silicon, iron are depleted in the gas; that is to say, one sees much less of those elements than one would expect to on the basis of the general relative abundance of the elements. This already is a hint of the things that we will learn when we talk about the interstellar dust. Well, so much for the gaseous components. Now I'd like to move ahead and talk some about the interstellar dust, which I must tell you will get rather more attention in my talk than it perhaps should, because that happens to be the part of this thing that got me intrigued in the beginning, and to a certain extent I'm still stuck working on certain problems with the interstellar dust. Let me first show you some interstellar dust. On the next slide, this is the beautiful astronomical object called the Trifid nebula in Sagittarius, in one of the photographs I think from the famous Schmidt, the 60 inch Schmidt, and these dark clouds in here in this nebula, which are obscuring the light from the very bright star in the middle, are just clouds of literally solid particles, more or less the size of dust or cigarette smoke or anything like that. Now it is the dust which obscures our own view of our galaxy. There is so much of it that when we try to look toward the center of the galaxy we just can't see it at all; it's like looking through a big smoke screen. The next slide emphasizes the point I made about the hydrogen being in clouds. The dust is associated with the gas, and in general when we find the gas we also find the dust, and so if you have the whole business compressed into a smaller cloud, that's darker and harder to see through, even though it's the same fraction of dust. So we have a typical cloud; most of the hydrogen is in clouds like this. The cloud might have 20 atoms per cubic centimeter; its temperature, a local thermometer would read, something like 70 degrees Kelvin; starlight would be able to go through it with some absorption, perhaps 80 percent would come out the other side. If I compress this cloud down to this size I get a cloud which is so dark that starlight can't get through it; it now has 2000 atoms per cubic centimeter, it's somewhat colder, and it's in clouds like this that much of the chemistry goes on, for reasons that I think will become apparent when we turn to that. There's also believed to be an inter-cloud medium; well, there must be an inter-cloud medium if we have the clouds, and it's thought that in between, this space is typically very much hotter, with perhaps only a fraction of an atom per cubic centimeter on the average. Incidentally, I meant to remark when we were talking about how empty the galaxy, the gas, is, that a hydrogen atom at this density experiences a collision with another hydrogen atom about once every 30 years. It goes in a straight line, an absolutely straight line, hits another hydrogen atom, 30 years off again, traveling about the distance of the Earth's orbit between collisions. It's this rather weird situation that a physicist, a laboratory physicist, is bound to find intriguing, because one's instincts are all unreliable when those situations exist. 
Now how do we measure something quantitatively about the extinction of light? The next slide explains that very quickly. Astronomers do that by picking out two stars which they can tell from other evidence are very similar stars, one of them not obscured and the other obscured by clouds. The effect of the obscuration is of course to weaken the light but also to redden it; in fact the dust clouds in the galaxy have precisely the same effect as dust clouds on Earth, they make the sunset red, and they do so by absorbing the blue light more effectively than the red light. And the next slide shows how that kind of information is actually plotted out and analyzed. Here I plotted against wavelength, on a logarithmic scale, wavelength in micrometers, something called the extinction; when that's high it means that the light is being absorbed (never mind exactly how that's defined), and here we see the extinction as a function of wavelength in the visible; the entire range of the visible spectrum is just spanned in there. So that when we had only that information one really didn't know very much; nevertheless I must say people were very busy fitting this curve with rather elaborate theories which had three or four adjustable constants, and needless to say these theories all were able to fit it. The result of the excursion into space and the ultraviolet telescope is to add the blue curve up there, with this conspicuous hump at about 2000 angstroms, a hump whose actual origin we are not yet able to explain, but that is certainly trying to tell us something. In the infrared there are some very interesting parts of the spectrum which are already interpreted as indicating that the dust is at least partly silicates. Now how much dust is there? It turns out that that's one thing we can answer rather definitely. The next slide gives an idea of that: if you took the whole galaxy, let me say, and let all the dust in the galaxy settle, and I put a big sheet of paper under the whole thing, then the amount of dust that you would precipitate on that piece of paper is given there; but I will tell you what it would look like. It would look like this: if I take a piece of white paper and a soft pencil and just slightly gray the paper, that's what the paper would look like under the galaxy. On the other hand, if I were to try to look through the galaxy edgewise, through all that dust, then it would look like this, and that's why we can't see the center of the galaxy in visible light. On the other hand, if we go into the infrared or even further into the radio, then the whole galaxy becomes visible, and that is the way in which today people are learning a great deal about the very center of the galaxy, where a great deal is going on and where there's an enormous concentration of mass. Now what the dust is is another question. May I have the next slide? We really don't know what the dust is, but if one had to guess today and make a bet, I think it is fairly certain that a large part of it, some part of it, is in the form of silicates. There's very likely some graphite, and the rest of it might be called dirty ice, where ice is used in a rather general way to include not only H2O but NH3 and CH4. These abundant, condensable elements are probably there. Now there are many other suggestions, some rather exotic. Some people think that dust might partly be, or largely be, polymers. Even cellulose has been suggested as a constituent. But that there are silicates and graphite and dirty ice I think is almost inescapable; but apart from that we know very little. 
In fact, curiously, we know two things about the dust rather certainly. One is its total amount, which I've already indicated, and just for the physicists it is something to remark that the way we know the total amount so well is by applying the Kramers-Kronig relation to the attenuation curve. We also know the temperature of the dust rather accurately; that is, we know what it must be, and the next slide shows why we can make a statement about that. This is a nice point of elementary physics I like to explain to my students. If I had a large black object out in the interstellar space it would absorb starlight, and then it would have to come to equilibrium by radiating at the appropriate temperature, and if you work that out you'll find that its temperature would come to be something like 3 degrees Kelvin; a little higher because of course actually it's also coupled to the microwave radiation at 2.8. But a dust grain, however, can't be black, because this thing is radiating at millimeter wavelengths, and a dust grain, being only 10 to the minus 4 centimeters in size, makes an extremely poor antenna. In fact radio engineers know that you cannot make a good antenna for long waves in a short space, and this means the dust grain has to get hot, and when one does the calculation one finds that it will end up somewhere between 10 and 20 degrees Kelvin, and that is a fact about the dust that I think I really would be willing to bet fairly high odds on. The dust is important in many ways, but one of the most interesting ones is that it is the primary catalyst in interstellar chemistry, about which I would now like to say a few words. The next slide shows the most important chemical reaction, namely the formation of molecular hydrogen from atomic hydrogen. If the whole system were in chemical equilibrium it would all be molecular hydrogen, despite the very low pressure, because of the low temperature; but in order to form molecular hydrogen when two hydrogen atoms hit, you have to get rid of about 4 electron volts of energy, and there is simply no way to do so. In the laboratory at normal pressure you could do it in a three-body collision, but in the interstellar gas this practically never happens. By practically never I mean that in a volume the size of this hall, in the interstellar gas, there is not one such collision in the life of the universe. So that doesn't go, and the way this reaction is catalyzed is by an event in which a hydrogen atom hits a dust grain, sticks to it, perhaps wanders around, tunneling over the surface, held on by van der Waals forces; another hydrogen atom sticks, the two get together, and then they have no trouble making a hydrogen molecule that flies off. This is the primary reaction of interstellar chemistry. From there on the picture changes, because once one has molecular hydrogen and a source of ultraviolet light or ionization by cosmic rays, then, as a number of people have shown, in particular my colleague at Harvard, William Klemperer the chemist, has shown, the other reactions will go in the form of ion molecule reactions right in the gas, and in this way he can explain the buildup of the extraordinary number of molecules that the radio astronomers have now identified. 
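As a numerical aside on the "large black object" argument earlier in this passage (a sketch, taking a total radiation energy density of roughly 7 times 10 to the minus 13 erg per cubic centimeter for starlight plus microwaves, in line with the figures quoted earlier):
\[ \sigma T^{4} = \frac{c\,u}{4} \;\Rightarrow\; T = \left(\frac{3\times10^{10}\times7\times10^{-13}}{4\times5.67\times10^{-5}}\right)^{1/4}\mathrm{K} \approx 3\,\mathrm{K}, \]
whereas a grain much smaller than the wavelengths at which it must radiate emits inefficiently and so settles at the higher 10 to 20 K quoted.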
The next slide gives a list, well, not a list because I didn't want to write everything in, of the interstellar molecules, of which about 50 have now been seen, nearly all of them (with the exception of H2 and HD) by radio astronomy, spectra in the millimeter and centimeter range. They start off with HD, OH the radical, H2O and carbon monoxide, which is an extremely important indicator and is very widely distributed, and then go through lots of others, whose names are listed, then finally coming down through ethanol, that you recognize. I called up an expert in this just before I left home to ask what is the biggest molecule that's been found yet, and he told me about HC9N, which I'm not quite sure about its name, but I think it is cyanotetraacetylene. Now you'll notice that all these molecules I put on here, in fact every one if I had filled these in, would be molecules that are not symmetrical; they're molecules, with the exception of hydrogen, with an electric dipole moment, and that's simply because in order to see them, for them to radiate radio waves in their rotational spectrum, they have to have that. There's no doubt whatever, I think, that the symmetrical equivalents of all these things, that is HC9H for instance, also exist. So this is even a small sample of the very rich array of chemicals out there, and goodness knows how far up the line we go in size as these are discovered. Actually, in dark clouds the chemistry is kept alive primarily by cosmic-ray ionization, since the ultraviolet light can't get in, and dark clouds are otherwise a good site. Now may I turn back briefly to the magnetic field. There are various lines of evidence that show that there is a large scale interstellar magnetic field throughout the galaxy. I will just mention one way in which the magnetic field can be directly measured. The next slide please. Making use of the properties of the pulsars, which send out pulses of radiation which are polarized, in the radio range, one can measure two things. If there is a magnetic field in space along the direction in which the wave is traveling, and if there are electrons out there (which there are; I should have said that even the cold interstellar gas is slightly ionized), then we have the Faraday effect operating to rotate the plane of polarization by an amount that depends on the magnetic field strength and the density of the electrons integrated over the path. On the other hand, the electrons also cause different frequencies to travel at different speeds, namely dispersion, and by noting that the pulse arrival time depends on the frequency at which you are observing, one can determine the dispersion, and that depends on the integral of the number of electrons. So if you have measured those two things, and if everything is uniform, then you divide this result by that result and you get very directly the magnitude of the magnetic field along the line of sight. Using this with the extraordinary number of pulsars that are now known, one can say something about the direction and magnitude of the field. 
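A hedged sketch of how this division is conventionally written (the numerical coefficients are the standard textbook ones, not figures from the talk), with the electron density in cm^-3, B in microgauss and the path length in parsecs:
\[ \mathrm{RM} = 0.81\int n_e B_{\parallel}\,dl\ \ [\mathrm{rad\,m^{-2}}], \qquad \mathrm{DM} = \int n_e\,dl\ \ [\mathrm{cm^{-3}\,pc}], \qquad \langle B_{\parallel}\rangle \simeq 1.23\,\frac{\mathrm{RM}}{\mathrm{DM}}\ \mu\mathrm{G}. \]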
The next slide is a chart (I won't spend much time on this) from a recent book of Manchester's in which this has been done, and these circles all refer to a particular pulsar, the magnetic field measured in the manner just described. They are clustered here in the galactic plane, because these are galactic coordinates, and a plus here means the field is pointing towards us and a blank means the field is pointing away, and the size of the circle is such that that's a microgauss. But let me just say that in general the interstellar magnetic field of the large scale sort appears to be of the order of a few microgauss in strength. That was the number I assumed way back there on that chart which showed the energy density. Now there's another demonstration of an interstellar field which has been known for a long time, in fact for nearly 30 years, that I would like to spend a few more minutes on, and that is the remarkable fact that the interstellar dust seems to be capable of polarizing starlight. In the next slide I show you the primary observational material. No, I'm sorry, this slide is one which tells us what the magnetic field is in our neighborhood. I said a few microgauss; if this is the sun, we now believe that the magnetic field in our general vicinity (here's the center of the galaxy down here) points in that direction. I'm not sure which direction that is, which side of the galaxy I'm on, perhaps an astronomer can say, but anyway it points generally along the spiral arm in which we live. Okay, the next slide please. This is (please ignore this stuff down here, just look at the top) a plot in coordinates of the galaxy; here's the galactic plane going clear around 360 degrees, and each one of these little dashes is an observation of a star at that position in which the starlight turned out to be somewhat linearly polarized. The polarization has nothing to do with the star itself; it's caused by the intervening medium, as if someone had taken a sheet of Polaroid and held it out in front of the, between you and the star. And the direction of this shows the direction of the polarization, and the length of the line shows something about its strength, which is not enormous, one or two percent typically, but still very definite. And now you'll notice here, you can hardly escape noticing, that these are all combed out here in a systematic way; here they're sort of in all directions; here again they're rather combed out again. I like to call this the iron filings picture, because in elementary physics, you know, we all scattered iron filings on a magnet to see the lines of force; I don't know the German equivalent for filings, I don't know what you call it, but there's really no doubt that this is showing us the structure of a large scale magnetic field. Just how it is showing us that, though, is a little harder to say. Here's the way we explain the polarization of starlight by the dust in the magnetic field. Next slide. The star emits unpolarized light, as much vibrating this way as that way; this light comes toward us and perhaps passes through a cloud of dust grains. When it comes out of the dust grain cloud, one of these components has been absorbed more than the other, so that we receive it as polarized light. And in fact it turns out that when the light has its vector like this, that's the direction of the magnetic field. The presumed explanation of this is as follows. 
The dust particles are certainly not spherical; they couldn't do that if they were spheres. They are, let us assume that they are, rather elongated things, although I really can't tell the difference between that and flakes. And these somehow are caused to be aligned with the magnetic field crossways, so that if the magnetic field is running like that, then these on the average spend more time perpendicular to the field than parallel to it. Only in a very rough average way, I mean their motion is quite irregular, but nevertheless there's some physics going on here which orients these things with the magnetic field locally. And that physics has been a puzzle that's at least 25 years old. We have thought for a long time that we know how that works, from a proposal that Davis and Greenstein made back in 1952 or three, but there are still some questions about it. Let me indicate very briefly now how that goes. I don't want to go into this story except in one way, to show you how some interesting and ancient physics came into it unexpectedly near the end. The alignment of these things in the magnetic field is not explained by saying that they are little particles of iron which orient like a compass. Even if they were totally magnetized, that would not work, because the field is too weak and the bombardment with other atoms would completely destroy the alignment. The physics is much more subtle and is believed to involve a phenomenon called paramagnetic relaxation; that's suggested on the next slide, which I'll spend only a moment on. No, I'm sorry. This I must explain first: when going into the physics of the dust grain, I want to show you what the situation is and what kind of thing one has to work with. We assume that the dust grains are about two or three times ten to the minus five centimeters. That conclusion is drawn from the way they scatter and absorb the light. It's not absolutely certain. The temperature of the dust grain we know is small. The dust grain is embedded in the gas, and now there are a number of time constants in the problem. If you're an atom, you hit a grain about once every ten to the eighth years. That's actually a short time from one point of view. Wouldn't be very important. If you're a grain, you get hit by an atom once every five minutes. You get hit by a photon somewhat more often. The grain, after all, is in a gas at 100 degrees Kelvin and is executing Brownian rotation, so its energy is presumably something like kT. That means it's rotating at ten to the fourth revolutions per second. All these numbers are, of course, averages rather than exact. The time constant that rules this problem, from the point of view of the physics, is the time in which the grain motion is damped by the gas. That is, if I throw a grain into the gas at high speed, or spin it, it will hit gas atoms and will gradually slow down, owing to that simple form of friction. The time of slowing down is a few hundred thousand years. If I spin a grain and go away and come back several hundred thousand years later, it will gradually have slowed down a little bit. This makes it possible for very small causes to have big effects. The next slide shows another form of coupling which is interesting. The grain undoubtedly has some electric charge. Actually the processes tending to charge it negatively (electron capture) and to charge it positively (photoelectric emission) are different. It would be unreasonable for them to cancel exactly. 
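Before going on to the charge, a rough check of the rotation rate quoted above (a sketch assuming a roughly spherical grain of radius 2 times 10 to the minus 5 cm and density about 1 g cm^-3, so mass about 3 times 10 to the minus 14 g; these are assumptions consistent with, but not taken from, the talk):
\[ \tfrac12 I\langle\omega^{2}\rangle \sim \tfrac32 kT,\qquad I = \tfrac25 m a^{2} \approx 5\times10^{-24}\,\mathrm{g\,cm^{2}},\qquad T = 100\,\mathrm{K}, \]
\[ \Rightarrow\ \omega \approx 9\times10^{4}\ \mathrm{rad\,s^{-1}} \approx 1.4\times10^{4}\ \text{revolutions per second}. \]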
We believe the grains have a potential of the order of a few tenths of a volt. Such a grain is coupled to the magnetic field just like a charged particle. If you calculate the period for a cyclotron orbit, it turns out to be only ten to the fourth years. The point is that that is short compared to this time I just mentioned, and therefore for dynamical purposes the grains are effectively locked to the magnetic field, just as an ion would be, and over a long time can only be pushed parallel to the field and not crossways. Well, in this situation, the next slide indicates the idea: I may assume that the grain is paramagnetic, because any dirty ice, in the solar system or in the interstellar medium, has probably got some iron atoms, and that's all I need. The magnetization, the paramagnetic susceptibility, of course is complex. There is a relaxation effect. The magnetization lags behind the field, and for a rotating grain that means there is a viscous effect, there is a torque tending to slow the rotation, an absorption of energy from the rotating field. And that, operating in a rather indirect manner, results eventually in a partial alignment of the grain axis if the grain is not spherical. It's a very messy problem. We think we now understand the dynamics of that, and when we finally did understand it and had reliable calculations of all the dynamics, it turned out that in fact we were still in difficulties. Namely, it appeared to require a magnetic field at least ten times as strong as we have assumed in order to explain the alignment. And this is the thing that's kept me at this problem for so long. I hope the paper I'm publishing next month will be my last paper on the subject, but I'm not sure it will be. At any rate, the reason for this last paper is that some physics has come into the problem which I had not suspected, although I've been working on it for a long time. The physics is of two kinds. The next slide, may I have the next slide please? This is a thing we hadn't taken into account before: if I have an asymmetric thing rotating, not a sphere but some odd object rotating about some odd axis, then, if there is any internal dissipation, the dynamical effect is to cause the rotation to line up with the major axis of inertia. There are two kinds of internal dissipation, both of which we had omitted and ignored for a long time. One is just due to imperfect elasticity of the solid; I won't say anything about that except that it made us scurry off to look into the solid state literature again. The other one is the one I really thought some of you would be amused by, what I call Barnett relaxation, an effect which as far as I know has never even been predicted, let alone measured in the laboratory, which has to do with the so-called Barnett effect. The next slide: the Barnett effect. Most of us know better the Einstein-de Haas effect, a very important experiment done in 1915 by Einstein and de Haas at Leiden; Einstein suggested the experiment. The experiment is very simple: you take an un-magnetized iron rod, hang it so it can turn, you suddenly magnetize it, and you see it begin to turn. It does so because you've lined up the electron spins, as we now say, and then the conservation of angular momentum means the whole rod has to turn. It's an important experiment because it gives one a way of measuring the fundamental g-factor of whatever it is that causes magnetism, which was really a key experiment. 
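Referring back to the cyclotron period quoted at the start of this passage, a rough estimate (a sketch assuming a grain of radius 2 times 10 to the minus 5 cm and mass about 3 times 10 to the minus 14 g at a potential of a few tenths of a volt, in a field of a few microgauss; the potential and field are the talk's figures, the size and mass are assumptions):
\[ q \approx U a \approx 10^{-3}\,\mathrm{statvolt}\times2\times10^{-5}\,\mathrm{cm} \approx 2\times10^{-8}\,\mathrm{esu}\ (\sim 40\,e), \]
\[ P = \frac{2\pi m c}{qB} \approx \frac{2\pi\times3\times10^{-14}\times3\times10^{10}}{2\times10^{-8}\times(1\text{ to }3)\times10^{-6}}\ \mathrm{s} \sim 10^{3}\text{ to }10^{4}\ \mathrm{years}, \]
which is indeed short compared with the few-hundred-thousand-year gas-drag time.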
The Barnett effect is the converse, namely if I have a freely rotating rod, it gets magnetized if it has spins in it, because it's cheaper in energy to have some of the spins lined up and take that angular momentum out of the rotator as a whole. I knew about both these effects, of course; I'd heard more about the Einstein-de Haas effect, and I was surprised when I went back to the literature last year to find that Barnett did his experiment first. In fact, he did it a year before Einstein and de Haas. Not only that, he got the right answer, because one of the remarkable aspects of the Einstein-de Haas story is that when they determined the g-factor, they found that it was what they, of course, then expected, namely a g-factor of one. And of course, we know it is really a g-factor of two. Barnett got two, more or less, but didn't know what to make of it, and it was several years, both experiments being exceedingly difficult, fantastically difficult, fraught with all kinds of systematic errors, before the g-factor settled down to two. And now it turns out that in the interstellar grains this ridiculously small effect, the Barnett effect, is important. Its role in interstellar grains was first suggested in a paper by Dolginov and Mitrofanov in Moscow two or three years ago, and they did not use it in this way. They simply pointed out that the Barnett effect produces a magnetic moment in the rotating grain, and then it occurred to me that there must also be a relaxation associated with it, and the relaxation is the thing that does the business in lining up the grain rotation with its principal axis of inertia. The reason that so tiny an effect can be so important is that the interstellar grain is not slowed down by the gas until it has made something like 10 to the 18th revolutions. And it's in that setting, where a thing has a rotational Q of 10 to the 18th, that you can do it. Okay, now I would like to conclude by taking a moment to look at extragalactic space. The last slide shows the universe, the extragalactic space that Professor Dirac was talking about. In fact, on this slide we'll have something about the mean density of the universe, and here the question is: could it be that some of the mass, the missing mass which would be required if the universe is to be closed according to the conventional theory, could that be hidden outside the galaxy in some of the forms that we've discussed in connection with the interstellar medium? The size of the universe I've taken is 10 to the 28th centimeters, and the difficulty in defining the size, of course, has already been pointed out. But when we add up all the things we see, which are something like 10 to the 10th galaxies now, about 100 times as far apart as their size, with 10 to the 44th grams each, we find the apparent mean density of the universe. The density, if all there is is what we see, is about 10 to the minus 30th grams per cubic centimeter, whereas the critical density, that density which would just be enough to slow the expansion (more than that would then cause it to come back), is something like 20 times as large. The question then is: have we seen only 5 or 10 percent of the total matter, the rest of it being hidden?
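A minimal sketch of the order-of-magnitude density arithmetic just quoted; the cube-shaped volume and the Hubble constant used for the critical density are my assumptions, not numbers from the talk.

```python
import math

# Mean density of the visible universe from the talk's round numbers,
# compared with the critical density rho_c = 3 H^2 / (8 pi G).
n_galaxies = 1e10
mass_per_galaxy_g = 1e44          # grams, as quoted
size_cm = 1e28                    # the "size of the universe" quoted in the talk

rho_seen = n_galaxies * mass_per_galaxy_g / size_cm**3   # crude: treat the size as the edge of a cube
print(f"apparent mean density ~ {rho_seen:.0e} g/cm^3")  # ~1e-30 g/cm^3

G = 6.674e-8                      # cm^3 g^-1 s^-2
H = 75e5 / 3.086e24               # assumed Hubble constant, 75 km/s/Mpc, converted to 1/s
rho_crit = 3 * H**2 / (8 * math.pi * G)
print(f"critical density ~ {rho_crit:.0e} g/cm^3")       # ~1e-29 g/cm^3, i.e. ten to twenty times larger
```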
I think we can say now fairly confidently from the observations that if it is hidden in the intergalactic space, it's not hidden as dust, it's not hidden as cold hydrogen; it might be hidden as hot, totally ionized hydrogen, but even that prospect is rather quickly being ruled out by some of the X-ray astronomy observations. One is left, of course, with the possibility that it's hidden in black holes, more massive objects, and in fact, in order to hide matter in the galaxy or in intergalactic space, all you need to do is to make it in large lumps and not in small pieces. Any amount of matter could be hidden in the form of golf balls, and one would never be able to see it. But there is a suspicion now that if the mass is there, it's hidden as black holes, or possibly it's hidden mainly in the centers of galactic clusters; but even saying that, I've really strayed into astronomical fields that I haven't yet explored and shouldn't act as if I knew the answer. Within the galaxy itself, there are still some important problems. One of the most important, in my view, is the dynamics of the interaction between the dust and the gas when the gas is turbulent. There is some suspicion already, from some curious results, that in a turbulent gas the dust may tend to clump, and in clumping, make dark clouds. Making dark clouds makes chemistry, and also those dark clouds are really the forerunners of stars. So the whole question of star formation may hinge in a rather critical way at some stage on the behavior of dust grains in a turbulent gas. That's a problem that has not been solved, and I think may be a very exciting one. The dust, you see, is the material out of which stars are made, and it's in turn made from stars. And in fact, when I showed you the graphite on the paper, it may even be that the interstellar dust is graphite. But what's certain is that the dust on the paper was once in a star, and will be again, and we will be around to use it, to pencil with. Thank you.
|
While Edward Purcell also attended the 4th Lindau physics meeting in 1962, the present lecture is the only one he ever gave there. The work that brought him the Nobel Prize in Physics 1952 concerned magnetic properties of solid matter in the laboratory. But Purcell also had other interests and it was he and his collaborators who found the sharp 21 cm radio astronomy line emanating from clouds of hydrogen in space. In 1979, Purcell had for several years worked on a problem connected with interstellar matter in our galaxy, i.e. matter between the stars or their planets. At that time it was thought that only 10% of the matter could be found between the stars and it was also unclear if other stars had planets. Today the estimate is that most matter is located in the interstellar and intergalactic medium. We also believe that most stars have planets and the research into what is today named dark matter and dark energy is a very hot topic, as is the search for new planets. But for Purcell, space was mainly empty, only containing a very dilute mixture of atoms, molecules and dust grains. Although scarce, the grains play an important role in the chemistry going on in space by, e.g., acting as surface catalysts for chemical reactions such as the formation of hydrogen molecules, H2, from free hydrogen atoms H. The grains also interact with the radiation from the stars and this was Purcell’s main interest. It was known that there are large-scale magnetic fields in space and his idea was that these magnetic fields align the dust grains and thus act as a polarizer of the radiation from the stars. In a sense this astrophysical problem made contact with Purcell’s laboratory experiments, where he had studied such phenomena as relaxation effects in ensembles of magnetic atoms and nuclei. Anders Bárány
|
10.5446/52583 (DOI)
|
Thank you, Professor Ho. I would like first to take the opportunity to thank Count and Countess Bernadotte and the Oberbürgermeister of Lindau and everyone else who has been so gracious in making us feel so comfortable here in Lindau. It's a beautiful place, and this afternoon when the sun comes out it will be even nicer. Before I begin formally speaking, and since it's still early in the morning so everybody's a little bit sleepy, I thought I could just say two words about some thoughts I was having about the significance of these meetings. It's a tremendous effort to get 20 wonderful geniuses like myself here, together with all these enthusiastic students and so on. I was thinking about the true value of this kind of meeting. If we had two months I think we could actually achieve some kind of education back and forth. But as it is the time is very short, and I think perhaps one of the main functions is to make it clear to students who are beginning now in their careers in science, make it clear to them that people who win Nobel prizes are not very different from other people. They may have had mothers that made them work a little harder in school, or they may be a little more aggressive, but they're really quite ordinary, except for a few unusual cases. Once in a while you get unusual types like Schweitzer and Linus Pauling and so on. But I think that's perhaps one of the main messages: to make it clear that we're all ordinary people together. Some people work perhaps a little more diligently, are perhaps a little luckier, have better postdocs coming to the laboratory and so on. So that's, I think, I'm now more awake than I was before. I would like to be able to begin this lecture with a sentence that I've often thought would be ideal at the beginning of a lecture: cancer may be cured as follows. That would be a nice way to begin. And I think if you read Time Magazine and all the newspapers and listen to the television, most people have the impression that this interferon molecule is the cure of everything. As a matter of fact it may very well turn out to be an extremely useful substance in the treatment of some viral diseases. The problem in the case of cancer, for example, is that we really don't know that cancers are generally viral diseases. There may be some cancers that are viral and some that are not. We know that interferon is definitely an antiviral substance. It's very difficult to get enough interferon to test on a large scale. And what I'd like to tell you about today is the current status of our own more chemical approach to the problem and a few things about the current status of the availability of interferon through genetic engineering and other techniques. It has been tested clinically now, not in a very thorough way but sufficiently to make it almost certain that it will be useful as a drug against some viral diseases, notably hepatitis B, I think, and herpes zoster, which is a very serious neurological disease, and juvenile papilloma; there has been some very nice work done in Sweden on this disease in the throat of juveniles. Possibly in some forms of cancer: osteosarcoma has been under study in Sweden for a number of years by Strander and his colleagues. And in the past year, enough money has become available from a number of governments to buy large quantities of interferon, as much as possible, from the group in Finland that is manufacturing interferon from the white cells that are obtained from blood bank contributions.
So that perhaps in another year we will have some definite information on such serious large-scale problems as breast cancer and other forms of cancer that are now being studied systematically. Another problem that I'll talk more about is the fact that the interferon that is available is generally only about one or two or three percent pure, so that when we do these tests at the moment it's not entirely clear whether it's only the interferon that's doing whatever happens, good or bad, or possibly other small proteins that are mixed with the interferon. The substance was discovered by Isaacs and Lindenmann in England in 1957. They did some biological experiments to examine an old, well-known phenomenon in medicine known as interference. If you've been sick from a viral disease and you recover from this disease, there's a sort of subjective impression that one is somewhat more resistant to other viral diseases. The body has somehow developed some resistance. They showed that if they took chick cells in culture and exposed these cells to killed flu virus, and then washed off the killed flu virus after a certain time, the cells had produced a substance which made them resistant to live flu virus. In other words, the viral material had induced the production of an antiviral substance, and they were able to show by indirect methods, with these tiny, tiny amounts of material, that it was indeed a protein of fairly small molecular weight and that it was species specific, that is to say interferon produced in a mouse would not protect a horse or vice versa, but interferon produced in a human cell culture would protect human cells against many viruses. So it's species specific but virus non-specific. And the observation was very interesting and attracted some attention, but not on a large scale, only because there was not much available. Most of the early work was done with white cell interferon and, as I mentioned before, in Finland: Kari Cantell arranged with the Red Cross in Finland to get almost all of the buffy coat, the white cells from blood donations, and taking these white cells and adding to them Sendai virus, he was able to stimulate interferon production by these cells, remove the cells and then purify the interferon somewhat and distribute this for clinical trial. And most of that early material, indeed throughout the 60s and partially through the 70s, was used in clinical trials on a rather small scale and mainly in Finland, although some work was being done in other countries: in England, France, Belgium, the United States. So the quantities were limited and there was not that much activity. Suddenly it became clear, I think partially through the press, that here was a possible substance that might be of value in viral diseases including cancer, and immediately a lot of biochemists like myself and a lot of physicians got into the business and began to think about isolation and production and clinical trial. I'm not a physician myself, so I don't want to show any clinical results. I have one slide on a clinical trial which was carried out by my friend Michel Revel in Israel, together with some physicians in Tel Aviv, on a disease known as shipyard disease. It's an adenovirus conjunctivitis which does cure itself after 35 or 40 days, although it can leave serious permanent damage to the cornea.
The first slide, please, is a summary of some experiments that Revel and his colleagues made, where out of a group of patients they took about 17 controls, who were given simply some human albumin eye drops in the eye five or six times a day, and another 15 who were given interferon as eye drops in the eye. Then you will see, down where it says average length of disease, that the controls cured themselves mainly after about 27 to 30 days, whereas those who received interferon were free of the conjunctivitis after six and a half days on the average; and if they selected among the experimental patients those with bilateral conjunctivitis and treated one eye and not the other, so that they had an internal control, once again the treated eye was cured in seven days and the untreated eye in 25 days. Lights please. It's a trivial example, but I show it only to indicate to you that interferon does indeed stop the course of a viral disease, and we hope that much more dramatic results will be forthcoming in the next year. So what I want to do now is to tell you something about preparation, characterization and possibilities for the future. There are three possible ways one can think of making enough interferon. One is to grow large amounts of human cells in culture, then stimulate the cells with virus to make interferon, and then purify the interferon from these cultures. That's what we've been doing for the last five or six years. It's a very slow, tedious, large-scale project. A second way would be to hope that one could get enough interferon to chemically characterize the molecule, and then, knowing the structure, knowing the amino acid sequence of the chain of amino acids, to be able to do the classical process of organic synthesis to make the molecule from individual amino acids. As you'll see, it's not a small molecule, so this becomes a very large task, a very difficult task, but it is a possibility and it's the one that my own associates and I have selected. The third became possible only in the last few years and is now beginning to become quite exciting, and this is the possibility of taking the human gene for interferon, putting it into a bacterial cell, an E. coli cell for example, and allowing the bacterium to make the interferon for us, and then isolating the interferon either from the bacterial cell or from the medium in which the bacterium swims. As you all undoubtedly know from the press and elsewhere, within some hundred kilometers of here, Charles Weissmann at the University of Zurich and his colleagues have already managed to make some white cell interferon in E. coli by genetic engineering, and there's been a group in Japan, Taniguchi, who have done this with fibroblast interferon, and there are two or three other groups now who have clones of bacterial cells that can make one of several different kinds of interferon; the human chromosome seems to have perhaps as many as 10 different interferon genes, so that a number of different interferons are made in different cells. So this is the third possibility for making large amounts. Now I thought I might, to give you some impression of the potency of this protein and also of the minute amounts that one obtains, describe how you assay the interferon, how you test for interferon. The standard technique is as follows.
You take a porcelain plate with 8 by 12 holes, 96 depressions, and in each of these depressions you grow fibroblasts, human cells that grow in sheets. You grow about a million cells in each of these 96 wells, and then you expose each of these wells to an interferon solution, first undiluted and then, say, 1 to 2, 1 to 4, 1 to 8, 1 to 16, and allow it to sit overnight; you shake out the interferon, then you put a standard virus suspension into each hole, and the next day you look at the plate under the microscope and determine the dilution at which half the cells have been protected by the interferon against cell destruction. That dilution is taken as one unit. It's a very, very crude test. It's accurate only to about three tenths of a log, about plus or minus 50 to 100 percent. So it's quite inaccurate and takes two to three days, but at the moment that is the only relatively safe way of assaying for interferon. So one unit would be the amount to protect half of a million cells against the standard viral dose. Now when we grow a thousand liters of human white cells in culture and infect these cells with virus when they're grown up to about three times ten to the sixth, three million cells per milliliter, out of a thousand liters of cell culture we are very lucky if we can have a total amount of perhaps 20 milligrams, and after isolation perhaps two milligrams, generally more like a tenth of that, like 100 micrograms, so that the amounts one obtains are very disappointingly small. Furthermore it turns out that one milligram is equal to about two times ten to the eighth units. Now if you were treating patients, it's become standard practice to administer not less than a million units per day to a patient. So if you had, say, one milligram of interferon you could give about 200 days of treatment to one person, and it works out that if you had one gram of interferon in pure form you would be able to treat a thousand patients for perhaps 200 days. Of course what we need is enough to treat 10 million patients for 100 days. So obviously the amounts required are much larger than that. I'd like to show a few slides of our own studies on the purification of interferon to give you some idea of the difficulties that are involved. The next slide please. This is it. What we did originally, this was seven or eight years ago, we decided that perhaps the most efficient selective purification method would be to use immunological techniques. So we took some of the interferon that we obtained from Finland and purified it on a Sephadex gel filtration column. It's simply a column that separates on the basis of size. And if we look at the left side, the dark circles represent the peak of activity, the interferon activity coming off this column. The white circles are total protein. So most of the impurities go out first and then comes the interferon with only a small amount of protein. It's still at this point only perhaps half of 1% pure. But it's good enough to give as an antigen to animals to prepare antibody. So we gave one or two micrograms every two weeks for 16 to 18 weeks to sheep. And it turns out that interferon is a very good antigen and made rather high titers of anti-interferon in three or four months' time. So we then had an antibody which would catch interferon quite efficiently.
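A minimal sketch of the dose arithmetic quoted above, using the talk's round numbers (about 2 x 10^8 units per milligram and a million units per patient per day); the helper function and its name are mine.

```python
# Interferon dose arithmetic using the talk's round numbers.
UNITS_PER_MG = 2e8          # ~2 x 10^8 units per milligram of pure interferon
DOSE_UNITS_PER_DAY = 1e6    # ~1 million units per patient per day

def patient_days(mass_mg: float) -> float:
    """Patient-days of treatment supplied by a given mass of pure interferon."""
    return mass_mg * UNITS_PER_MG / DOSE_UNITS_PER_DAY

print(patient_days(1.0))       # 1 mg  -> ~200 patient-days
print(patient_days(1000.0))    # 1 g   -> ~200,000 patient-days, e.g. 1000 patients for 200 days

# The stated goal of 10 million patients for 100 days would need roughly:
needed_mg = 1e7 * 100 * DOSE_UNITS_PER_DAY / UNITS_PER_MG
print(f"~{needed_mg / 1e6:.0f} kg of pure interferon")   # ~5 kg
```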
The antibodies produced by the sheep, of course, are not only against interferon but also against all the other protein impurities that are in the material that was injected into the sheep. And we tried then to purify this antibody. The next slide is a slide showing that if you use the following technique you can make much better antibody. We prepared what we called a cocktail column. What we did was to take all of the protein impurities that we could think of that might be in the material that we gave to the sheep, serum proteins, egg proteins, virus proteins and so on, attach them to a column, and then we passed the antibody through that column. All the impurities caught the antibodies against the impurities. And what went through the column in the beginning contained the anti-interferon, and the anti-impurities could then be taken off with acid, and the column could be washed, and then you could repeat the process many times. And eventually one could get an antibody preparation which was mainly free of antibodies against the impurities. Now this purified antibody could then be attached to another column. We chemically attached the antibody to a Sepharose column, and passing crude interferon through the column then permits the interferon in that crude substance to be caught by the antibodies, and the impurities go through. You can wash the impurities off. The next slide: you see that most of the protein goes through the column without losing much interferon, and you wash and wash and wash, and finally you can take the interferon off the column, once again at a low pH, a higher acidity, and you get out some material which has now been purified between 500 and 5000 times in this one step. So this is a way of getting from very large volumes down to very small volumes fairly quickly. The only way we felt that we could make very large quantities eventually was to use a human cell which would grow well in tissue culture. And fortunately, Strander and his colleagues in Sweden had examined many kinds of B lymphocytes for their capacity to produce interferon when stimulated with viruses. And one particular kind of B lymphocyte is known as Namalwa. That's a pet name for this cell. It was originally isolated from a Burkitt lymphoma, a virus-produced lymphoma. The next slide I think shows a picture of this cell. It's not terribly pretty. It grows very well in solution. You grow it in a rich salt solution with vitamins and amino acids and so on. And unfortunately, you get the best growth when you add 10% fetal calf serum, which is now becoming extremely expensive. And we have to develop some new techniques for growing on a large scale that will be a lot less expensive. The next slide shows some properties of this lymphoblastoid interferon that we are working with. This shows you that it's very stable, both to temperature and to acidity. For example, at almost boiling temperature, at 97 degrees centigrade, one can allow the material to sit for 10 minutes and still have essentially the same activity, so that it's extremely stable to heat and low pH. Next slide. It's however quite heterogeneous in the sense of its electrical properties. It turns out that interferon is a glycoprotein. It's a globular protein molecule on which carbohydrate is also attached. And these carbohydrate side chains contain acidic residues, sialic acid residues, which are present in different numbers on different molecules; sometimes there are three, sometimes four, sometimes two.
So that if you do an isoelectric focusing experiment, you get quite a large heterogeneity in the isoelectric points. These peaks that you see are different forms of interferon differing by their charge due to the difference in sugar. However, if you remove the sugar, thereby removing the negative charge from all of these side chains of carbohydrate, the material becomes much more homogeneous. This shows first of all that the heterogeneity is due to the carbohydrate side chains. And secondly, that the carbohydrate can be removed without losing activity, because it's still active in tests. The next slide indicates the results of treating partially purified lymphoblastoid interferon with a mixture of enzymes which chew off carbohydrate, which we obtained from the bacterium Diplococcus pneumoniae. The molecular weight of the interferon molecule shifts to a smaller weight when you take off carbohydrate. And if, as shown in the next slide, you plot the results on a standard type of figure showing the relationship between weight and behavior on a column, against some standard proteins of known molecular weight, you can see that the interferon after treatment with enzymes has shifted to a lower molecular weight. As a matter of fact, one can chop off about 4,000 daltons of molecular weight, and the weight goes from about 22,000 to about 18,500, having taken off most of the carbohydrate. It's still active, and upon injection into, let's say, a rat, you can show that it is maintained in the circulation at least as long as the normal untreated interferon, so that it's not destroyed quickly. Consequently, we feel that it is realistic to think of synthesizing the protein part of the interferon and not having to worry about the carbohydrate part, because that would be essentially impossible. There's no organic chemistry at the moment that permits the systematic synthesis of carbohydrate side chains on proteins, but apparently we don't need that, so that's a very lucky thing. The problem now is to make large amounts. The next slide is the kinetics of growth, the kinetics of production of interferon by these lymphoblastoid cells. What we do is to grow the cells up in large tanks until they reach about 2 million per milliliter, and they're infected with Newcastle disease virus, which induces the synthesis of interferon by these cells, and they secrete the interferon out into the medium that they're growing in. There's a slight lag at the beginning; at the end of about 20 hours you have reached the maximum production of interferon. Then the cells are removed by taking the whole contents of the tank through a large cream separator; actually a cream separator like you use on a farm turns out to be the most convenient way of removing the cells. The fluid comes out and we precipitate all of the protein including the interferon, and that's the starting crude material. The next slide shows that one can actually save a little money, and it is important in these experiments to do that. Fetal calf serum at the moment costs, I think, on the order of $200 per liter. It's extremely expensive. We use 10% fetal calf serum in growing these cells. In 1000 liters we have 100 liters at $200 per liter. It's a little bit rich. However, it turns out that the lymphoblastoid cell produces interferon much more efficiently at a lower cell concentration than it does at a higher cell concentration. You see, if you infect with Newcastle disease virus at 2 times 10 to the 6th cells per mil, you get in this case 1800 units per mil.
If you dilute down to 0.4 times 10 to the 6th, we get 7,000. Simply by adding salt solution, you increase the efficiency of production of interferon. What we do routinely, instead of growing up 1000 liters, is grow up 250 liters and then dilute to 1000 with salt solution. The next slide shows the same thing. At the top are the undiluted cells and their efficiency in producing interferon. At the bottom is the efficiency of the cells after diluting with salt solution containing glutamine, which is an essential component during the production. Basically we get 4 times as much interferon as we would otherwise, simply by diluting the cells. The next slide. This is a typical kind of slide that you can't read, simply summarizing all of the steps that we go through. We start, as I mentioned before, by taking off the cells in the cream separator. Then we precipitate all the proteins in that clear fluid with trichloroacetic acid. Then we go through a number of steps: first to remove the trichloroacetic acid, then this antibody column that I spoke of, which catches the interferon, a number of other steps including sizing on columns and ion exchange separations, and finally a polyacrylamide gel separation in SDS, a sodium dodecyl sulfate slab gel purification. By this time we're down to very small amounts. In this particular slide the total recovery was only 6%, which is very bad. We now have this up to about 15%, and I hope we'll soon have perhaps 50% with some new tricks that we are beginning to use. But it's still a very slow and tedious process. As you can see, at this point we have on the order of two-tenths of a milligram of protein from 200 liters of cells. The specific activity of the pure material, this is the 18,500 molecular weight species, is about 2.2 times 10 to the eighth units, 2 to 4 times 10 to the eighth units per milligram. This is the purity of such material. There is one other component, the 21.5 k, which is very likely the same as the 18.5 k with carbohydrate still attached, which makes it heavier by that much. The next slide: this is an analytical polyacrylamide slab gel with the interferon activity plotted as black points at the bottom, and at the top the pattern of the gel stained with Coomassie blue to indicate the position of the components. You see the dark band corresponding to the main interferon peak, and another smaller band with activity to the left. If you then cut out this main band and rerun it on another slab, the next slide, you see on the right a pure sample of human lymphoblastoid interferon against some known proteins as markers on the left, to give some idea of the molecular weight. The first pure material was produced by Ernest Knight at DuPont about two years ago from fibroblast interferon. This is our lymphoblastoid interferon. Both varieties have about the same molecular weight and the same specific activity; that is to say, for both, one milligram is approximately two or three times 10 to the eighth units. The next slide. Once again, protein chemists always show amino acid analysis. Not terribly useful, but this is simply the amino acid analysis done with the micro method for amino acid analysis on some pure interferon. It's perhaps only interesting to point out that there is quite a high amount of hydrophobic amino acids like leucine, phenylalanine, valine, and a few others. The next slide is more of the same, but here what we've done is to take the amino acid analyses from fibroblast, lymphoblastoid, leukocyte, and mouse interferon.
Amino acid analyses have been done on pure specimens of all of these. This is simply to show that as you go from a mouse to a leukocyte to a fibroblast to a lymphoblastoid cell, the amino acid composition is about the same. Each type has about the same amount of lysine or histidine or arginine and so on. Last fall, in November, we felt we had done enough on purification to accumulate material that could be put into the amino acid sequenator, the machine that has been designed over the last years by Edman and his followers, that will tell you something about the structure of a protein, one residue at a time. The next slide shows results which were produced actually by Hood and Hunkapiller, two scientists at Caltech in California who have one of these very sensitive machines for determining sequence. We sent material to them, together with Knight at DuPont with his fibroblast interferon, and Lengyel and his group at Yale who had mouse interferon, samples that these people could degrade. It turned out that the mouse came in two or three varieties, A, B, and C. We show here some results on A and C, simply to indicate, particularly here at the bottom, that the mouse Ehrlich ascites interferon called band C is rather similar to the material that we obtained from lymphoblastoid cells. There's quite a high degree of homology in structure. This is only the first 20 amino acids, but it is a beginning; from the amino terminus it's the first 20 residues, and there's quite a high degree of similarity. It does at least tell us that the genes for interferon in the human and in the mouse must be quite similar, at least for the first 20 amino acids. It means that this is probably quite an old gene. It can be found in animals as far back as fish and turtles and frogs, so that the interferon gene has been around for quite a while. We were very proud of ourselves at this point, because after six or seven years we had finally achieved some beginnings of sequence. We really had a protein that was pure and it was characterizable. So we were working along, making more material to have the rest of the sequence finished, when suddenly the genetic revolution occurred in Zurich and elsewhere. One of the nice things about genetic engineering and producing material in E. coli cells is that you can obtain the DNA from the chromosome of the E. coli cell, and it's much easier to determine the structure of a DNA fragment than it is to determine the structure of a protein, so that when Charles Weissmann, for example, obtained a clone of E. coli that produced interferon, he was able very quickly to take this material and, by some techniques whose details I won't go into, because first of all I'm not terribly familiar with the field and it's quite complicated to explain, he was able to work out the total sequence of the gene that determined the interferon that was made by those cells, from RNA message, messenger RNA isolated from white cells. He used leukocyte message to prepare the original plasmid, which he put into the E. coli cell for translation. The next slide shows a very confusing slide; it's not as confusing as it looks, actually. On this slide, what I've done is to put down Charles Weissmann and his colleagues' sequence for one of the leukocyte interferons as a protein; that is to say, that's the center of the lines, the one that begins with the CYS in the box at the upper left.
The center line, all the way through, is Weissmann's leukocyte interferon; knowing the DNA strand sequence and knowing the genetic code, you can simply read off the amino acid that corresponds to each triplet of nucleotide bases in that DNA sequence and translate the DNA structure into a protein structure. The top line represents a similar translation of the DNA structure that was determined by Taniguchi and his colleagues, the Japanese group, from fibroblast DNA. This is another interferon gene; in their case, they selected the gene from fibroblasts. So that's on the top line, and on the bottom line is the sequence that we now have from our own lymphoblastoid material. As you see, there are some empty spaces that are not finished, but we hope in the next months to complete this. The important thing is that when you go from fibroblast to leukocyte to a specific white cell like lymphoblastoid, there is a large amount of similarity. I have enclosed those areas where the same sequence occurs in all three species. And there's obviously a great deal of homology from one cell type to another. I've also cross-hatched the areas which include the cysteine residues, the amino acids that are responsible for making cross-links through SS bridges in the protein when it folds up. And there's one that I forgot to cross-hatch down at 140. There's a CYS-ALA-TRP, cysteine-alanine-tryptophan, which occurs both in the fibroblast sequence and the leukocyte sequence. We unfortunately do not have it yet in our structure, although we know that we have one tryptophan in the molecule and we hope to have that section soon. The fact that that section and the section at the upper right have been so carefully maintained in these three varieties suggests that these may be particularly important cysteines that perhaps form an SS bridge. But this kind of detailed chemical consideration will really have to await the isolation of larger amounts. So what to do if you have this? For Taniguchi and Weissmann, the future is very clear. They have to produce E. coli systems, the E. coli cells themselves and plasmids to insert into them, that are better and better and can produce more and more interferon. And then they have to purify the interferon away from the proteins of the bacterium sufficiently that it becomes pure enough to be able to give to human beings. You can't give human patients impure proteins, because one develops antibodies against them and you have to avoid such phenomena as anaphylaxis and so on. So although it sounds phenomenal in Time magazine, and I think these people have done a marvelous job and will probably eventually produce the interferon that we need, they still have a way to go in terms of purification and preparing better vectors for their work. In our own case, since we're not quite finished with our own sequence and since we're interested in synthesis, we have started to synthesize slowly the sequence from Weissmann's leukocyte interferon structure, the middle line here. And we're doing this both by the new solid phase synthetic techniques developed by Merrifield and his colleagues and also by classical peptide synthesis, fragment condensation, which is an enormous job and may take too many years to think about unless we can think of some tricks, or perhaps some combination of both techniques. The other thing we would like to do is to grow enough material, or to get enough material from Weissmann or Taniguchi, to do some careful structure-function work.
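As an illustration of the step described above, reading off an amino acid for each triplet of nucleotide bases, here is a minimal, generic DNA-to-protein translation sketch in Python using the standard genetic code; the short input fragment is a made-up example, not an interferon sequence.

```python
# Standard genetic code, with codons ordered TTT, TTC, TTA, TTG, TCT, ... over bases T, C, A, G.
BASES = "TCAG"
AMINO_ACIDS = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODONS = [a + b + c for a in BASES for b in BASES for c in BASES]
CODON_TABLE = dict(zip(CODONS, AMINO_ACIDS))

def translate(dna: str) -> str:
    """Translate a coding-strand DNA sequence into one-letter amino acid codes ('*' = stop)."""
    dna = dna.upper()
    usable = len(dna) - len(dna) % 3
    return "".join(CODON_TABLE[dna[i:i + 3]] for i in range(0, usable, 3))

# A short made-up fragment, just to show the mechanics (not an interferon gene):
print(translate("ATGGCTTGGAAATAA"))   # -> MAWK*
```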
That is to say, to try to determine whether one can chew away parts of the molecule and still keep the activity, because if one could cut away half the molecule and make it 80 amino acids instead of 160, then it would become synthetically not so difficult. So that's our own hope at the moment. And perhaps we can even use the E. coli interferon for our structure-function work. I think it would be interesting, I'm sure, to someone like Dorothy Hodgkin or Bill Lipscomb sitting here to think that if the crystallographic three-dimensional structure of interferon is ever worked out, it will probably be on synthetic material, because we'll probably never have enough of the other kind to do it, but perhaps we will. It would be a nice first to try. One final point that might be of interest to some of you. We've known for some time that interferon and cholera toxin compete for the same receptors on the surface of cells. We can add interferon and compete for the recognition by adding cholera toxin simultaneously, and vice versa. So I asked the people who put out this dictionary of protein sequences, Margaret Dayhoff and her colleagues, who have a computer full of all the known sequences, to compare the leukocyte sequence of Weissmann with all other known protein sequences, of which there must be hundreds. And she called back and said in a very unbelieving voice that there seemed to be very few similarities, but that there was one rather interesting one. Next slide, please. A rather interesting similarity between the interferon structure and the sequence of cholera toxin, which I thought was very nice, because I didn't tell her in advance; it came out of the computer without any hinting. It turns out that at the very bottom here there's a number called an alignment score, a score of approximately 4.1, which is considered rather good. The alignment score for, say, myoglobin and the beta chain of hemoglobin in man is something like nine. So this is really a rather high level of homology in sequence. And you can see that in this section here there's quite a large number of identical amino acids. Could I go back to the previous slide, please? So that as a beginning, we have started at the bottom right, and we are working backwards, and it turns out that this section that resembles cholera toxin runs from about 150 back to about 120. In that region are these similarities. It'll be very interesting to see whether this peptide, which we now have essentially finished, will be competitive with cholera toxin, and perhaps may serve, for example, as an antigenic site that we can use for further purification of our anti-interferon antibodies. And it might also give us some approach to understanding the process by which interferon recognizes the cell surface. I mentioned in an abstract that I sent here before we came that I would say something about the mechanism of action. I'm afraid that's very difficult to do. All one can say at the moment is that if you give interferon to cells, it is recognized by the cell and then induces the cell to make two or three enzymes, which are either absent or present in very small amounts before the interferon appears. The enzymes that are produced are all involved in the control of message translation, that is to say in protein synthesis, the translation of RNA message into protein.
And interferon appears, therefore, at the moment from the work of a number of laboratories, to be involved in the inhibition of the translation of the viral message that comes into the cell. That's about all we know about the mechanism of action. Most of the rest is phenomenological, and I hope that we'll be able to say something specific about not only how it works, but against which diseases it is helpful if at all. And I feel quite confident myself that it will be very helpful against certainly some viral diseases. Cancer is still a knock on the wood question. Thank you. Thank you.
|
In 1980, the biochemist Christian Anfinsen participated for the first time in a Lindau meeting. Listening to his introduction, one can hear that he liked the lecturing situation: A Nobel Laureate and a select audience of students and young researchers. As Anfinsen puts it, the main function could be to make it clear that people who win the Nobel Prize are not really different from other people (except, in his view, certain exceptions such as Albert Schweitzer, Linus Pauling and a few others!). Many Nobel Laureates coming to Lindau for the first time repeat (more or less) their Nobel lectures that, according to the Statutes of the Nobel Foundation, should be “on a subject relevant to the work for which the prize has been awarded”. But Anfinsen chose to talk on his work on the human interferon and what medical applications it could have. The idea was that interferon could be important in curing viral diseases (maybe even cancer) and the problem was to get enough interferon to be able to make large-scale medical studies. Anfinsen discusses two approaches, first the traditional one of a biochemist (with a lot of money): Use 1000 litres of human white blood cells, infect them with a virus, use biochemistry to produce about 100 micrograms of interferon (which only amounts to a limited amount of doses for a patient). The second approach discussed was the modern one: Produce the interferon protein molecule using genetic engineering, since it is probably easier to find the structure of the interferon gene than that of the protein it produces. He may not have known it, but at the same time as Anfinsen gave his talk, the Nobel Committee for Chemistry discussed the 1980 Nobel Prize. In the early autumn it was decided that one of the fathers of genetic engineering, Paul Berg, should receive one of the two prizes. Today his technique is in fact, as Anfinsen discussed, used to produce several kinds of interferon for medical purposes! Anders Bárány
|
10.5446/52585 (DOI)
|
Thank you very much. I feel a little out of place when we hear about molecules with molecular weights of 20,000 or 100,000, and here is a molecule of molecular weight three. You're all familiar with the fact that elementary molecules are normally diatomic, like molecular hydrogen H2, molecular nitrogen N2, O2, F2, Cl2 and so on. But there are some exceptions to this statement. You are also well aware of the fact that oxygen also forms a triatomic system, O3, and phosphorus forms a four-atomic system, P4. More recently, now already some 20 years ago, triatomic nitrogen was observed as a free radical. It is physically stable in its ground state but is a transient molecule, obtained for example by the photolysis of HN3. Four-atomic hydrogen has been discussed in connection with van der Waals interactions, and indeed very good evidence has been produced by Professor Welsh at the University of Toronto for the existence at very low temperatures of a molecule that consists of two hydrogen molecules, that is (H2)2 or H4. Until last year H3, triatomic hydrogen, was not known and hardly discussed. However, H3 plus is a very well-known system in any discharge. The molecular ion H3 plus was first discovered back in 1907 by J. J. Thomson, when the first mass spectrum of hydrogen coming from an electric discharge was obtained. Now H3 plus turns out to be a very stable system, and no structure determinations had been made until about a month and a half ago. I mean experimental structure determinations; theoretical work, which is possible for as simple a molecule as H3 plus, has been done, and it has fairly clearly shown that H3 plus is a system like this, that is, a system consisting of an equilateral triangle. There is a three-fold axis of symmetry in the system; that has come out of these theoretical considerations. And indeed, as for the binding energy in the system H3 plus (I'm not talking yet about H3): in H3 plus the binding energy, that is the energy that is required to remove one proton from this system, is almost the same as the dissociation energy of the hydrogen molecule, so it's a very substantial dissociation energy; it's a very stable system, the H3 plus. Now in order to get experimental information about H3 plus and other molecules, one of the ways that the spectroscopist uses is to study the spectrum of such molecules, and if we want to determine the structure of H3 plus experimentally, then we should try and find a spectrum of H3 plus. We have looked, my own subgroup shall I say in Ottawa, for the emission spectrum of H3 plus, and independently of us Takeshi Oka in our lab has looked for the absorption spectrum, and he has very recently been successful, about six weeks ago I would say, in finding such a spectrum for the first time; but this is not the subject of my talk.
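As a rough illustration of what such a spectrum pins down, here is a minimal Python sketch of the rotational constants a rigid equilateral H3 plus would have, using the relation B = h/(8π²cI) that the talk introduces later; the bond length of about 0.87 ångström is an assumed theoretical value, not a number from the talk.

```python
import math

# Assumed geometry: a rigid equilateral triangle of three protons with side ~0.87 angstrom
# (a theoretical value for H3+; the talk itself quotes no number).
m_H = 1.6735e-27          # kg, mass of a hydrogen atom
side = 0.87e-10           # m
h = 6.626e-34             # J s
c_cm = 2.998e10           # speed of light in cm/s, so the constants come out in cm^-1

# Moments of inertia of an equilateral X3 about its centre of mass:
I_C = 3 * m_H * (side / math.sqrt(3))**2   # about the threefold axis, perpendicular to the plane
I_B = I_C / 2.0                            # about any in-plane axis (planar body: I_C = I_A + I_B)

def rot_const_cm(inertia):
    return h / (8 * math.pi**2 * c_cm * inertia)

print(f"B ~ {rot_const_cm(I_B):.0f} cm^-1, C ~ {rot_const_cm(I_C):.0f} cm^-1")
# Roughly B ~ 44 cm^-1 and C ~ 22 cm^-1 for this rigid model; line spacings of this size
# are what an observed spectrum turns into an experimental H-H distance.
```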
Now before I can tell you about the spectrum of H3 neutral H3 I would like to give you very brief summary of the of molecular spectroscopy if I may put it that way that is of course not really possible in the matter of a few minutes but just to remind some of you and to introduce others who are not familiar with it to the way in which a molecular structure is determined from a molecular spectra I'm showing you a few slides so the first slide just reminds you of the fact that the energy of a molecule can be subdivided into three parts the electronic energy the vibrational energy and the rotational energy in spectroscopy we usually divide these energies by the quantity h times c and we work in term values which are given in units of centimeter to the minus one or reciprocal centimeters and I don't need to explain how the vibrational energy depends on the vibrational quantum numbers because I need I will not need it for the special purpose of this talk but I would like to emphasize the rotational term value or the rotational energy which is the same except for a constant factor namely the rotational energy is given by a constant the so-called rotational constant times j times j plus one where j is the quantum number of the total angular momentum of the molecule j takes the value 0 1 2 and so on now this constant be b sub v it's given is given here is simply a combination of fundamental constants and the average of 1 over r squared the distance of the two nuclei in the particular molecule and we have to take the average of that and then we find that this rotational constant depends slightly on the vibration if the vibrations are have large amplitude since they are an harmonic you find that this b v depends on the vibrational quantum number but again that is something we will not need for the further considerations the main point is that you remember that the rotational constant in the equilibrium position is inversely proportional to the moment of inertia of the molecule which is mu times r e squared where mu is the reduced mass and on sub e is the inter nuclear distance the distance between the two nuclei in the molecule this is the important relation so when we the moment we get b from the spectrum we get the inter nuclear distance or if you like the moment of inertia now the next slide well this repeats what I had just said here and adds the following point when we have a transition from one state to another in the molecule then we take we may have a change in electronic energy giving rise to this difference we may have a change in vibrational energy giving new v and we have a change in the rotational energy giving this new r so that the spectral the frequency of the spectral lines may be represented as a sum of three terms of which the term new sub e is the most important when we are talking about an electronic spectrum and where the term new sub r is interesting because it gives us information about the structure of the molecule well that is shown here at the bottom now the next slide just shows in an energy level diagram the kind of transition that we are talking about we have an electronic state up here with its various vibrational levels and in each of these vibrational levels we have a series of rotational levels and you notice that the spacing of the rotational levels is quadratic it ink this the spacing between succeeding levels increases linearly therefore the energy increases quadratically j times j plus one and the same for the lower state of the particular transition that we are 
considering the various vibrational levels which are nearly but not quite equidistant and the various rotational levels in each of these vibrational levels and the all the the manifold of transitions from these various levels up here to the various levels down here gives what we call a band system in the spectrum of the molecule and now I think in the next slide I have just one or two very simple examples of such spectrum this is a very old spectrum of the CN free radical CN where you can see particularly clearly here the regular structure with a gap here and then it goes on like that this is what we call an r branch to the short wavelength side and a P branch to the long wavelength side of the new zero which is the transition which corresponds to the change of electronic state as well as possibly vibrational state now the point that I would like to emphasize is the fact that the spacing of these lines here in the neighborhood of this so-called band origin this spacing is simply equal to two times this constant B to be that's a very rough statement but by and large it helps in understanding the following considerations this this is to be so if we measure the spacing between these two lines we get immediately the moment of inertia of the molecule or if we know the reduced mass we get the inter nuclear distance in the molecule this is one case now I didn't mention that the r branch corresponds to a change of angular momentum by plus by plus one the P branch corresponds to the change of angular momentum by minus one the next slide shows another old example aluminum hydride the aluminum hydride molecule in which we have here again an r branch here a Q branch now which starts at right at the band center and then underneath a P branch which is indicated here these lines belong to the P branch and again you might say that if I take this spacing here divided by two I have a rough value for the constant B which tells me the inter nuclear distance in this particular molecule now in the case of the hydrogen molecule ordinary hydrogen H2 the spectrum of course has been studied for about a hundred years and in this case the spectrum doesn't look as simple as it does in aluminum hydride or cyanogen in the previous slide but in the next slide you see two small sections of spectra of in this case deuterium and in this case hydrogen and you see there are just many many lines and the regularity is by no means as obvious as it is in the case of CN or aluminum hydride the reason is that in molecular hydrogen the moment of inertia is very small and therefore the value of B is very large therefore the separation of successive lines is large maybe one line is here and another line is here and in between are all sorts of other bands and that makes for a great complication of the spectrum but this spectrum has been analyzed in the visible region for example and other parts of the spectrum by a succession of workers some 50 years ago Richardson Finkelberg Meckler Weitzel and others have clarified the interpretation of the spectrum and by this study they have shown that this spectrum that you see here and adjacent parts of it correspond to the transitions between excited electronic states in the H2 molecule they are not related to the ground state of the H2 molecule in order to establish the structure of H2 in its ground state you have to go to the to the ultraviolet in the vacuum ultraviolet there's a very similar kind of spectrum which when analyzed gives you the structure of H2 in its ground state for example you 
find that the inter nuclear distance in the hydrogen molecule H2 is 0.741 angstrom units and that piece of information can be obtained with a very great deal of precision now before I can now come to the actual topic of my talk I have to tell you a few things about polyatomic molecules and their spectra of course linear molecules are very similar to diatomic molecules and I don't want to go into that now because I will not need it but nonlinear molecules unlike diatomic molecules have nonzero moments of inertia about any axis through them for example if you take could we have light a little bit if you take this system here and any more complicated system you can see that about any axis of rotation this axis or this axis any other axis you find the moment of inertia is not zero while in a diatomic molecule if this were the diatomic molecule here the moment of inertia about this axis is zero so the polyatomic molecule is somewhat more complicated but there are some polyatomic molecules which we call symmetric top molecules in which two of the so-called principal moments of inertia are equal and that is indeed the case in this system here where the moment of inertia about this axis here is the same as the moment of inertia about this axis and any of the other axes and that means any axis in the plane of the molecule gives the same moment of inertia and that is the characteristic of a symmetric top the moment of inertia about the axis perpendicular to this plane is different in the simple molecule like this it would simply be the sum of the moments of inertia in the other two directions and we distinguish in this case here oblate symmetric tops from prolate symmetric tops prolate symmetric tops are those in which the moment of inertia for example this stick here has a very small moment of inertia about this this axis here but a large moment of inertia about this axis or that axis this is what we call a prolate symmetric top this is an example of an oblate symmetric top since we are interested in this molecule I only give you the energy levels for a an oblate symmetric top but the formula are almost identically only change names a little bit so the next slide shows for an oblate symmetric top the rotational energy or term value if you like is given by this formula here forget about this because that's only a very small correction due to centrifugal force which I won't discuss here but you see this term here is exactly the same as in the case of a diatomic molecule only that J is now the total angle momentum which may not be in the plane of the molecule or right angles to it it may have any direction to the plane of the molecule and then there is an additional term here which is due to this fact that we have three or three moments of inertia different from zero and this K that you see here K is the quantum number of the component of J in the direction at right angles to the plane of the molecule so J K is the component of J in the direction of the symmetry axis it is multiplied by a constant C minus B where B is the same constant as this and C is a very similar constant only it refers to the moment of inertia about this axis the axis perpendicular to the plane of the molecule so B and C correspond to the two different moments of inertia of this system now what I've written down here refers to a non degenerate state but in a molecule you may also have so-called degenerate states for example in a diatomic molecule take away one of these you can have an electronic angular momentum about this axis 
the electrons go around like that or they go around like that in this alternative whether they go either way this leads to a double degeneracy but such a degeneracy arises also in the case of various states of symmetric top molecules if you imagine this molecule now as a symmetric top and you imagine an electron is moving around this way it can also move around that way and therefore there would be a degenerate state double doubly degenerate state and in such a state the energy formula has an additional term compared to this here and that is given here and this additional term is this here minus all plus 2 C zeta K where K and C I have already defined here K being the angular momentum the component of the angular momentum in the symmetry axis and the zeta this zeta here is the electronic angular momentum just as in a diatomic molecule you have what is called lambda the electronic angular momentum about this axis so in a symmetric top molecule we can have an electronic angular momentum about this axis and this electronic angular momentum can interact with the rotational angular momentum and that gives rise to this additional term in the energy formula which was first discussed I believe some 50 or 48 years ago by Teller and Tisa now the next slide here are the energy levels of let's only look at the right-hand side of this diagram of a summit of an oblate symmetric top molecule we have a number of values of K K was 0 1 2 3 4 and so on and here we have the values of J so for each value of K we have a set of rotational levels very similar to the rotational levels of a diatomic molecule but we have instead of one of these sets we have a number of sets and all these sets are similar by just shifting them a little bit they would come into coincidence if it weren't for the effect of these terms that I neglected namely the centrifugal distortion terms so we have these different sets now let's look at the next slide this was what I just said was for the non degenerate electronic state when the electronic angular momentum or the vibrational angular momentum for that matter about this axis is is 0 but when we have a degenerate electronic state because we have these two possible directions of the angular momentum can either go this way or this way we find that there will be a splitting and this splitting is given here it's due to this minus or plus sign in front of 2k zeta k 2c zeta k and you have this splitting which is the same for all these levels here is larger here because it's proportional to K and still larger here and still larger here I don't want to go into great detail about that because time would not permit to explain everything that one could say about the structure of this molecule but I did want to mention the complication that arises even in a simple system like tyatomic hydrogen when you introduce degeneracy and yarn teller effect and all sorts of other things that can be considered now the next slide gives you the selection rules for asymmetric top molecule for the rotational quantum number J you have the same selection rule as in a diatomic molecule or for that matter in any atomic system or even a nucleus for so-called dipole radiation delta J the change of J is either 0 or plus 1 or minus 1 and in the spectrum that gives rise to three kinds of so-called branches the Q branches with delta J equals 0 the R branches with delta J plus 1 and the P branches with delta J equal minus 1 in a so-called parallel band that is to say when the dipole moment of the transition is parallel to the 
symmetry axis then the selection rule for K is that delta K is equal to 0 when however the dipole moment of the transition is perpendicular to the symmetry axis in this direction then the selection rule is delta K is equal to plus or minus 1 now let us first consider this kind of parallel band and we have to consider it if we want to interpret the spectrum that I'm going to show presently the next slide here are just two of these sets of rotational levels that I mentioned earlier here for K equal 0 K equal 1 K equal 2 and now we have to apply the rule that we can either have delta J is equal to plus 1 like here delta J equal to minus 1 like here this is the upper state the upper electronic state presumably and the Q branch doesn't occur here but here you have J equal 1 here J equal 1 here you get Q1 line and more lines for higher J values and the same here now you can see that because these levels coincide with these levels and these levels coincide with these levels except for shift since in this case where delta K is equal to 0 that is we go from this stack to this stack from that stack to that stack from that stack to that stack all the subbands will be superimposed and that is shown in the next slide oh could we just try the one after that yes I come back to the other one in a moment here are the for K equal to 0 you have one subband K equal 1 K equal 2 K equal 3 and you always see this here the R branch here the P branch and here the Q branch now what has been done here is assuming that the moment of inertia and therefore the rotational constants are different in the upper and the lower state and therefore there is a shading the it widens out here and narrows down this way but even so the same statement applies that I made earlier for a diatomic molecule namely that the spacing between these lines near the band origin so-called is roughly 2B where B is this moment of is the rotational constant that corresponds to the moment of inertia about an axis like that now you have to imagine that all these subbands are superimposed and that is what you see here and if you don't have enough resolution either because the lines are broad or because your instrument is not powerful enough then you will get just a simple series of lines lines in quotation mark and here and what you will see is then what you see in the next slide for a simple case that was found some 20 years almost 20 years ago it was more than 20 years ago CD3 CD3 unlike CH3 which has a just a broad spectrum here that cannot be there's nothing to analyze here you have this fine structure and this is exactly the P branch where each of these lines really consists of a number of lines of different K values of different K values now if we could just go back two slides 2 Lichtbilder zurück bitte in molecules that have identical nuclei there is a very important phenomenon that occurs and that is the intensity alternation and in a symmetric top molecule of this particular symmetry B3H symmetry you find that when K is equal to zero alternate lines are missing in H3 and they have the ratio 10 to 1 in D3 and that applies whether it's H3 or in H3 or CH3 in all such molecules we have this intensity alternation for K equals zero and we have another intensity alternation in K namely that we have strong weak weak strong and the same in D3 but with another intensity ratio I don't want to go into that further now let us just look forward again the next slide now I just wanted to point out no no one back no there I just wanted to put it out I'm not 
quite tall enough for K equals zero you see this intensity alternation there's a strong line that a very weak line that a strong line very weak line and so on now if you superimpose this all then you then it is clear that if you have always one line very weak here alternately that there will be a slight intensity alternation in this group of lines if you haven't resolved them and that is what I think you can see in the next slide I think it's just visible that you see here strong weak strong weak strong weak not terribly strong because it's only one line out of several that is weak or that's absent or weak or strong and but it was in that way that for the first time in CD3 it was established that CD3 is a planar molecule a planar molecule I don't want to go any further into that now let's go to the next slide what I've just discussed where parallel bands of symmetric top molecules you have this stack combines with this stack this stack with this stack and so on if we now come to perpendicular bands the situation becomes more complicated because we go from this stack to this stack and from this stack to that stack and so on and these several sub bands no longer coincide but they rather look like shown in the next slide which is a again a very old slide due to Denison many years ago here you have these very sub bands which no longer coincide because you don't keep K the same in the upper and lower state but you get a series of sub bands whose Q branches which are unresolved here form a series of lines like that well one could say a lot more about this but I think time would not be sufficient to go into all that and I will proceed immediately to the next slide the spacing between these sub bands is given by this expression to a minus b I should have written here C minus B which is a negative quantity I don't think there's a complication in degenerate states but I think I will omit that complication for the purpose of this lecture so let us go to the next slide now I come really finally to the real subject of my talk as I mentioned at the beginning we were actually engaged in experiments with the air that had the aim of finding an emission spectrum of h3 plus and in doing so we found a number of features which seem to agree very nicely in the infrared spectrum of a discharge through an discharge tube of this sort with what had been predicted from theory in this discharge tube we have a cathode so-called hollow cathode and we have an anode and we fill this with hydrogen or deuterium and a discharge is struck between these two electrodes we have here the negative glow which withdraws into the hollow a hollow cathode glow and then we have here an anode glow the positive column and we look at the spectrum of the cathode glow through this window and we look at the spectrum of the anode glow through this window and we found in this way in the in the hollow cathode which is assumed to be rich in molecular ions we found several spectral lines that were candidates for h3 plus but we weren't sure unfortunately we didn't publish that because they were not the lines that were six or seven weeks ago found by ochre in our lab in the absorption spectrum of a similar kind of discharge which was done with very great refinement and where he actually found the spectrum of h3 plus in absorption but at the time we didn't know that and we thought we must first establish whether in this discharge there is rotational equilibrium and so we decided we take some spectra of this discharge in the ordinary region where we 
know as you saw in a previous slide the many line spectrum of ordinary hydrogen auditorium and see whether from this spectrum of ordinary hydrogen auditorium we can establish the rotational temperature and when we did this we obtained a spectrum that I think is shown in the next slide and you see here this is the spectrum in the same region as I showed in an earlier slide of the anode glow and the normal set of spectral lines of ordinary in this case deuterium deuterium is shown this is nothing new but when we looked at the spectrum of the cathode glow we found some of these broad diffuse features here and here and here and here on the first spectrum that we obtained it wasn't as clear as it is on this particularly when we cooled with liquid nitrogen it came out much stronger and it took several months before we realized what really this spectrum means now I like to remind you of what I said earlier in connection with diatomic molecules that the spacing between successive lines in the neighborhood of the so-called band origin this spacing here is very close is is 2b where b is this rotational constant which is simply the reciprocal of the moment inertia except for a constant factor and when I looked at this separation here I just measured it with the with a ruler and converted to reciprocal centimeters it came out to be I think it was well it doesn't really quite matter 44 reciprocal centimeters 44 and then I try to go backward now and try to identify the molecule that is responsible for this spectrum here from the value of this constant b that I found of course the first assumption the very first assumption when we first saw that was that we simply had to do with some kind of artifact some how the wrong order had come through or some foolish thing like that but when I got finally we repeated and repeated and always came up again and then I looked at this distance 44 centimeter minus 1 I tried to see what could it be now the most natural thing of course would have been to say well it's some unknown spectrum of ordinary hydrogen ordinary molecular hydrogen but in this case deuterium of course but we knew the b values of deuterium in the ground state the b value of deuterium is 60 and in the excited states the only states that come into play in this spectral region because we cannot involve the ground state here in the excited states the b values are all of the order of 30 in ordinary hydrogen in deuterium they would be half that in other words 15 here we have 2b equal 44 therefore b equal 22 and 22 is not equal to 15 not anywhere near that and so it was rather baffling why we should find something with a b value of 22 now you could say well couldn't it be some hydride shall we say some impurity OH or CH or some other hydride well indeed hydrides have fairly large b values but when they are deuterated the b value all the b values the maximum b value that has ever been found for a deuteride is 10 and not 22 so that didn't work either and then fortunately it I remembered that in the calculations of the structure of h3 plus the b values that were predicted from theory from up-in-issue theory were 43 for h3 plus and correspondingly the half of that 21.5 for d3 plus and I noticed that here we measure a b value of 22 and predicted for d3 plus is 21 and a half well if we had been foolish of what we could have said here we have the spectrum of d3 plus but it was immediately clear to me that we didn't have the spectrum of d3 plus because there's no way in which you can account for a spectrum of d3 
plus in this particular spectral region because we know that d3 plus is unstable in well I shouldn't say we know but from theory it had been predicted that all excited states of d3 plus are unstable moreover if we excite d3 plus to an excited state we take one of the electrons that holds it together and there are only two in d3 plus out of the lowest orbital and put it into an excited orbital and then it would have a very much smaller binding energy and this spectrum is more the spectrum between two states that have nearly the same binding energy because it's fairly uniform spacing on the left and the right hand side of this central part of the band so it was clear to me it couldn't be d3 plus and yet the b value was the same now what could that be well the assumption that we have confirmed by many further observations is simply that we have here neutral d3 in which an electron is moving in a Rydberg orbital that is an orbital with a principal quantum number higher than in the ground state somewhere around here this Rydberg orbital this Rydberg electron does not change the moment of inertia it does not change the binding and therefore the b value is the same as if this Rydberg electron were not there that is the same as in d3 plus and since Rydberg states have much the same rotational constants in various Rydberg orbitals it is explained that the upper and lower state are similar the other point is and that was important you see that these lines why why are these lines so broad here well there's a whole field in molecular spectroscopy that deals with the phenomenon of predissociation and I cannot undertake to describe that in detail here but I can only remind you that the predissociation is caused by a radiationless transition from some excited state into the continuum of another state and the reason why these lines are broad here must clearly be that the lower state not the upper state of this particular transition is predissociated it cannot be the upper state because if the molecule were predissociating in the upper state it wouldn't emit light it would dissociate and you wouldn't see the spectrum and that is a very strong conclusion because when you have this kind of line width the lifetime that is connected with this radiationless transition is of the order of 10 to the minus 11 or 10 to the minus 12 seconds so there's no time for the emission of light so the assumption then was that what we see here is a spectrum of h3 or d3 rather in this case in which a transition takes place shall I say from n equal 3 to n equal 2 where n is the principal quantum number simply analogous to the ordinary H alpha transition in atomic hydrogen and indeed the spectrum is not very far from the H alpha line of atomic hydrogen and the lower state the lower state predissociates because the ground state of d3 or h3 is of course unstable everybody knows that if you take a hydrogen atom and bring it up to a hydrogen molecule it will not be attracted we have a repulsive state which in fact is a degenerate state but never mind about that now let's look at the next slide now that's not all that of course we have as evidence here's the same spectrum we've seen a moment ago but here's the corresponding spectrum in h3 and you can just see that there's one line here much weaker line here just this background here not the sharp lines the sharp lines are all due to h2 or d2 whichever case may be and there's another one here now the important thing that is an important confirmation
of the idea that this is really h3 is the intensity alternation namely you see that you have strong weak strong weak and then it fades out the Boltzmann factor overcomes the intensity alternation but the more important thing still is that in h3 the first line is strong while in d3 the first line is weak in other words in h3 the odd lines are strong and in d3 the even lines are strong and that is precisely what I showed in an earlier theoretical slide precisely what we expect in a molecule like this for the rotation about this axis because we exchange two hydrogen nuclei and but it's a little more complicated because we have a triatomic system and one finds a from theory that the intensity alternation in hydrogen is such that alternate levels for k equals zero are missing while in uterium alternate levels are very weak and on top of that the in one case they are the odd levels and the other the even levels now I also mentioned earlier that each of these lines if this is really a symmetric top molecule as I think it is each of these lines really consists of a number of components and only the components with k equals zero are the ones that are missing or have low intensity so that you cannot expect that alternate lines are completely out they are there but they are weak because one of the lines contributing is is missing or is very weak now let's look at the next slide in addition and naturally when we first had found this band at 5600 angstrom units we tried to see whether might be something else somewhere along the spectral regions and low and behold at 7100 angstroms we found a very strong feature here much stronger than the band that you saw a moment ago in this in D3 and this in H3 that is in a hollow cathode discharge in hydrogen or uterium now the structure of this is not as clear as the structure of the band that you saw in the preceding slide and the reason for that is clearly that this is a perpendicular band that was clear to me right from the beginning but it took quite a long while in the help of Dr. 
Jim Watson of Southampton University to really confirm unambiguously that this is a perpendicular band of 8 of the system D3 or H3 whichever the case may be I don't want to go through the whole argument because it would take much too long but I think the next slide might show you no next one please oh perhaps I should omit that next slide please and the next one hope I get that yeah no let's forget now about the perpendicular band because the the explanation while we feel it's completely unique and unambiguous is somewhat involved because it involves yarn-teller interaction lambda type doubling lambda type resonance and all these things that make the the spectrum rather complicated and would take an extra hour to explain these these irregularities if you like to call them that anyway it has been completely explained but what I would like to show you very much is the spectrum that you see here this slide that you see here is not what we directly observe in the spectrum we have used a trick in order to isolate that part of the spectrum that belongs to H3 or D3 from the part that belongs to H2 or D2 and we have done it by taking two pictures we take the original and we take a negative of the anode spectrum as you may recall in the spectrum of the anode there's no sign of H3 or D3 therefore if we now take the negative of that put it on top of the cathode glow spectrum of H3 or D3 and then copy through that we eliminate all the lines that are either due to H2 or D2 and only the lines of H3 or D3 remain and that is what you see here and now you see here a but an additional band that is very much nicer than the first one that we found namely you have here a cue branch and you have a P branch here and you haven't this goes on here this is the cue branch again the same as that and you have here the our branch and the important thing is now here the lines are sharp or almost sharp and here you see that the so-called K structure is resolved you recall that a parallel band of a symmetric top is a superposition of subbands in the original band that we found these subbands were not resolved because the lines were broad but here the lines are sharp and therefore the subbands are resolved and more than that you see here for example for J for J well it's called in here because this is the angular momentum without spin if equal three you have three sublines zero one and two these are the K values for the next line you have only one two three the zero line is missing or very weak so we don't see it here the zero line is there so you have a hundred percent intensity alternation here in the K equals zero subband here it's there there's absent here it's there there's absent and so on if they were so on on the basis of this spectrum here there can be absolutely no doubt that we have here a spectrum of a system that has a three-fold axis of symmetry like this system here and I think the next slide please well this is a corresponding thing for h3 which is not quite as clear cut because they are predissociation sets in that limits the number of K values I don't want to go further into that next slide please and I think I will forget the discussion of this to the next slide please this I think I should pass say this is a photometer curve of the perpendicular band at 7100 angstrom units these spikes here you must disregard because they correspond to deuterium this is what you might have expected naively and this is what Dr. 
Watson finally obtained by introducing all these corrections and interactions and you see that and then introducing the line width and you see that this spectrum is a very good reproduction of this spectrum here of this perpendicular band but I don't want to go further into that the next slide please and this is an infrared band I think I will not go into that next slide please my time is gradually coming to an end these are the results in the form of the values of this constant B and C if you just look at B these are the B values of the various electronic states of D3 in here of h3 and for comparison we've given here the theoretical values for D3 plus and h3 plus this slide was made a little before dr. ocher got the spectrum we have now an experimental value he has an experimental value for this quantity but it's only a half a percent higher than this is forty three point six I believe or something like that what you do see is that the B values of all these redberg states of D3 are very close to the B value of the ground state of D3 plus and the same for h3 and h3 plus the next slide please I should say a few words if I may go on for a few minutes longer about the electronic states that we see the orbitals in this symmetry D3 8 of an electron well in in the limiting case of the united atom if you bring these three nuclei together then you would have a 1s electron 2s 2p 3s 3p and so on but when you now separate them then while in a 1s electron simply gives an a1 prime electron according to the notation which I cannot explain now here the 2p electron is is has a six-fold degeneracy and it's resolved in e prime and an a2 double prime well no I should say threefold degeneracy and you have an e prime and a2 double prime electron or orbital that you get out of that and the same out of this one for 3d you get this and if you now consider the electronic structure of this h3 or D3 then you have first of you have a total of three electrons the first two electrons go into this lowest orbital 1s a1 prime the other electron goes into any of these orbitals here and depending on which one it goes to we have different electronic states and they give them rise to the spectrum and so I think the next slide shows an energy level diagram and shows the transitions that we have found for n equal to we have actually three states one of them isn't shown here we have the 2s state which is a double it a1 prime and a 2p state which is double a2 double prime but there's also derived from that an e degenerate state which should be shown here but it is a continuous state it's a repulsive state it is the ground state of the h3 or D3 system the excited states are shown here for n equal 3 you have 3p here and here these two states this is the upper state of the degenerate of the perpendicular band this is the upper state of the parallel band that was first found and then we have this new system here which has this upper state and the other round state here these lines are much sharper because the predetermination of this state is strongly forbidden while the predetermination of this is allowed and so these two transitions are brought I have broad lines these two transitions here have sharp lines we have also observed infrared transitions this one here which combine upper or lower upper levels of these other transitions and everything fits together in a an almost perfect way so that there can be absolutely no doubt that we have this kind of energy level diagram let's just look at the next slide well this is somewhat 
similar to what I just showed for D3 all I want to point out is the inter nuclear distance observed here for these various excited states that is we can determine the distance between the deuterons or protons in the H3 plus molecule and as you see they are similar to the ones in D3 plus or H3 plus some are smaller some are larger indicating whether the orbitals involved are slightly bonding or slightly antibonding next slide please yeah perhaps this may be the last slide I will show this is just to indicate the correlation between the energy levels of H3 on the one hand and H plus H2 on the other hand I've shown here H3 if I gradually increase the principal quantum number eventually I come to H3 plus and an electron and in the same way if I ionize the hydrogen atom here I eventually come to H plus plus H2 plus an electron if I now bring H plus and H2 together I get H3 plus plus an electron and this is the energy I mentioned at the beginning the dissociation energy of H3 plus which is large and the important thing is now that all the excited states of this system lie above this limit the first excited state arises when you have the principal quantum number equal to 2 and that is already above this level and similarly with all the others in other words all these excited states of H3 have no other way to go but go up here and it's only the ground state which is unstable which goes here but of course we can have predissociation from any of these levels into this level here but this predissociation is strong only for the lowest ones according to the Franck-Condon principle now one might ask what is the determining factor in obtaining such a spectrum it is clearly the fact that this so-called proton affinity proton affinity of molecular hydrogen is large and there are other systems in which this is the case and maybe I can just show the next slide if it's the one I think it is the proton affinity of molecular hydrogen is 4.4 of water 7.1 of NH3 is 8.8 of CH4 is 5.3 eV in all these systems we have very stable ions these ions here and we can therefore expect if we add an electron to these systems we should get Rydberg states which are stable in one of these cases we have indeed we believe found such Rydberg states and that is the system NH4 plus if we add an electron to that we get Rydberg states and they give rise to a spectrum to two spectra one of which has been known for 108 years the so-called Schuster band of ammonia and the other has been known since 1955 through the work of Schüler but I think time does not permit me to go into further detail about that now there's one further remark that I am anxious to make because I'm naturally always asked well what is it good for where can it be applied this sort of thing well I don't know whether that can be applied to any chemical reactions that is for the future to show but there is one place where it has long been assumed that H3 plus molecules are not only fairly abundant but also fairly important and that is in the interstellar medium H3 plus is present in the interstellar medium by common consent of theoretical astrophysicists although nobody has seen H3 plus yet in the spectrum because it's difficult to see but Dr.
Oka has now supplied the basis for establishing by the spectrum the presence of H3 plus in the interstellar medium but suppose that H3 plus is really there and that is generally believed as I say and it is considered to be the origin of the formation of all these fancy molecules that have been found in the interstellar medium if it is there it is subject to recombination with an electron if there are H3 plus ions there must also be electrons in fact for many other reasons there are now if H3 plus and an electron recombine then we have the phenomenon of dissociative recombination and in this process we first form a highly excited state of H3 neutral it cascades down emits the spectrum that I have been reporting and then dissociates into H2 plus H by this phenomenon of dissociative recombination and I have every hope that in the not too distant future my astronomical friends will observe the spectrum that we have found in the laboratory in the spectra of suitable interstellar clouds thank you very much
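The quantitative core of the identification argument in the lecture above is simple arithmetic: the line spacing near a band origin is roughly 2B, and B is inversely proportional to the moment of inertia, so a measured spacing fixes an internuclear distance. The following Python sketch is an editorial illustration, not part of the lecture; the H2 distance of 0.741 angstrom is quoted in the transcript, but the equilateral-triangle geometry with a side of 0.87 angstrom for H3+/D3+ is an assumed illustrative value, and all constants and function names are my own.

```python
# Rough numerical sketch (not from the lecture) of the 2B-spacing argument.
import math

H_PLANCK = 6.62607015e-34   # Planck constant, J s
C_CM = 2.99792458e10        # speed of light in cm/s, so B comes out in cm^-1
AMU = 1.66053907e-27        # atomic mass unit, kg
M_H, M_D = 1.00783 * AMU, 2.01410 * AMU
ANGSTROM = 1e-10            # m

def b_from_inertia(i_moment):
    """Rotational constant B (cm^-1) from a moment of inertia (kg m^2)."""
    return H_PLANCK / (8.0 * math.pi**2 * C_CM * i_moment)

def b_diatomic(m1, m2, r_ang):
    """Diatomic molecule: I = mu * r^2 with mu the reduced mass."""
    mu = m1 * m2 / (m1 + m2)
    return b_from_inertia(mu * (r_ang * ANGSTROM) ** 2)

def b_equilateral(m, r_ang):
    """Equilateral X3 (planar oblate top): in-plane moment of inertia I_B = m * r^2 / 2."""
    return b_from_inertia(m * (r_ang * ANGSTROM) ** 2 / 2.0)

b_h2 = b_diatomic(M_H, M_H, 0.741)   # r(H2) = 0.741 angstrom, quoted in the lecture
print(f"H2 ground state: B ~ {b_h2:.1f} cm^-1, line spacing ~2B ~ {2 * b_h2:.1f} cm^-1")

# Equilateral H3+/D3+ with an assumed side of 0.87 angstrom (illustrative only):
print(f"H3+: B ~ {b_equilateral(M_H, 0.87):.1f} cm^-1")
print(f"D3+: B ~ {b_equilateral(M_D, 0.87):.1f} cm^-1")

# This prints roughly 61, 44 and 22 cm^-1, which is the arithmetic behind the
# identification in the lecture: a ~44 cm^-1 spacing in the deuterium discharge
# means B ~ 22, far from excited D2 (~15) or any deuteride (<10), but matching
# D3+, hence a Rydberg state of neutral D3 built on the D3+ core geometry.
```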
|
Some Nobel laureates receive their prizes so late in life that they let the prize mark the end of their scientific activities. Not so Gerhard Herzberg, who almost ten years after receiving the Nobel Prize in Chemistry came to Lindau with a first-class scientific discovery. It is well known that two hydrogen atoms H may join into a diatomic molecule H2, but also that if a third hydrogen atom approaches the H2, it will be repelled. This seems to imply that it would be impossible to form a triatomic molecule H3. But in his lecture, Herzberg tells the fascinating story of how he and his team a few months before the Lindau meeting actually discovered H3 in their laboratory! Almost as in a section of the TV-series Forensic Files, he first states the problem, then in some detail describes the methods to be used and finally solves the problem using the technical equipment of his laboratory. The story starts with a slightly simpler molecule, the electrically charged ion H3+. Traces of this molecular ion were seen in the laboratory already in the early 1900’s by Nobel laureate J.J. Thomson and later on in different kinds of particle accelerators. But it wasn’t until almost 90 years later that the important discovery was made by Takeshi Oka, a colleague of Herzberg, that this ion is abundant in space, where it plays an important role in the interstellar medium. There it may meet an electron e- and form a loosely bound so-called Rydberg system (H3+ + e-). But this is just the molecule H3, even if it doesn’t appear in its lowest energy state. Precisely this process was found in Herzberg’s laboratory just before he came to Lindau. He ends his lecture by mentioning the process of dissociative recombination, which describes the formation and decay of the Rydberg system. Interestingly enough, the details of the formation and decay of H3 have now been studied in detail in accelerator experiments starting only a few years after Herzberg’s discovery. Anders Bárány
|
10.5446/52586 (DOI)
|
I'll be illustrating some of the remarks he has made in the course of this lecture. As you have heard already from other speakers, discoveries often get lost in the literature. Well perhaps Dixing really showed you how to find them however far back you had to go. And sometimes observations that should lead on to great developments get made and somehow not used, with gaps of 10, 15, 20 years before they are really finally put to good use and everybody works on the problems revealed, as in the case of interferon yesterday. And these gaps are in themselves quite interesting and they occurred in the course of the story of the X-ray analysis of proteins. Protein crystals were observed in plants and in animal tissues during the course of the 19th century and there are many nice drawings of them made by botanists and others who examined biological tissues. The first one on my first slide was made by Professor Schimper, I think from this part of Germany, pictures of crystals observed in plant cells, and you can see he was looking at them through microscopes and there is one particularly here. Is there a pointer? Which you can see he's viewed through nicols in different directions, which shows pleochroism, and some of them are protein crystals. I'm meaning this one here where he's obviously got his nicols in different directions, and another one on the next slide also taken from more than 100 years ago from Preyer's book on the blood crystals. A very lovely photograph of hemoglobin crystals, I think they are dog hemoglobin, showing that he was viewing them through the microscope using nicols, turning them around so that the crystals appeared different coloured in different directions. Even at that time it was realised that proteins, molecules, whatever they were in these crystals were large. On the next slide there is a little early analysis made, figured in Preyer's book, giving a shot at the molecular weight of hemoglobin. It isn't quite right in any direction because the analysis is too high but it gives a large figure of 13,000. It should be more like 17,000 and then the actual molecule is four times that. But it was known that there were large molecules in these crystals in the 19th century and observations were made by both Schimper and Preyer that showed that to get these beautiful pictures of crystals you must keep the crystals covered with liquid and that they dried or shrank when removed from their mother liquor. Now the next discovery was the discovery made in Munich by von Laue, Friedrich and Knipping, illustrated by the next slide. Oh sorry, this is just another one that shows you a crystal actually growing, a crystal of hemoglobin actually growing in a red blood cell, and you can see that the hemoglobin crystal is occupying almost the whole of one of the blood cells. In that particular cell probably the wall was damaged and so the crystal started to grow, whereas in the next one you can see the normal appearance. And now the photograph which was taken by von Laue, Friedrich and Knipping in 1912 in Munich by passing x-rays through copper sulphate, and it shows that x-rays have wavelengths of the order of magnitude of diffracting units in the crystal and that these units must be the atoms arranged in a regular arrangement in three dimensions to produce these effects.
Von Lowy, Friedrich and Nipping didn't go on to work on this crystal, its structure wasn't solved for more than 20 years, it seemed quite complicated in those days. The cubic crystal, zinc sulphide and took very beautiful photographs but the actually first use made of these x-ray diffraction photographs was made by a very young man W.L. Bragg aged 24 in England who showed how to work, how to use the diffraction effects to find the relative positions of the atoms in space in sodium chloride. He was helped by the structure having been suggested to him by Barlow who in fact published a proposed structure in 1885 quite correctly again a long time before. I illustrate this structure on the next slide by an actual section in the electron density in the crystals of sodium chloride. The electrons scatter the x-rays and because they are grouped into atoms in a regular array in three dimensions, the interference is partly destructive and from the spectra one can form a Fourier series as W.H. Bragg first suggested in 1915 the spectra provide the components, the terms of the Fourier series, from the scattering separates the terms, they have to be recombined to give you back the pattern which produced them. The recombination is usually done mathematically by calculating the contribution of every term observed to every position in the crystal by a mathematical formula. For this you have to know the amplitudes of the waves which you can easily measure and also their relative phases which are lost in the process but can sometimes be easily recovered. The calculation though suggested in 1915 was not in fact made till 1926 by Havie Kirsten America and Duane who suggested it pointed out that the phases were one known in sodium chloride from Bragg's work but two could have been inferred because the heavier atom would dominate the effects. Or alternatively as Bragg had used to begin with the differences between sodium chloride and potassium chloride where one iron varied in density would give a direct method of finding the phase relations. Then the picture can be combined and the electron density plotted at any density intervals you liked to show the arrangement of the atoms. Now when the experiments on x-ray diffraction were first made passing x-rays through crystals it was natural for different people in different parts of the world to repeat the experiments. The WL Bragg was one but they were also repeated in Japan and in Japan for the first time in 1913 immediately afterwards x-rays were put through protein silk fibres. I think the next slide should show you a photograph of silk fibroin. This is a photograph in which the reflections are very fuzzy. Good photographs were obtained first actually in Berlin in 1921-22 by again a very young man as he was then Rudolph Brill who I hope is still alive and living here in Munich as he was a year or two ago. He took the photographs for his dissertation, helped in the interpretation by Michael Pellani and Herman Mark who was slightly older in the same laboratory. The interpretation was that in these fibres there must be long chains of proteins as indicated by Emil Fischer's experiments and that these chains were not quite regular that the amino acids might not repeat quite regularly. This is not a perfect crystalline photograph but one in which the essential intervals shown by these fuzzy spots are the intervals in the chains. 
The next slide shows Professor Mark and Myers idea of what the protein chains, amino acid chains should be like to give the actual observed distances between these fuzzy reflections on the silk fibroin photographs. The next slide shows the extended zigzag chains, alternately glycine and alanine in the fibre structure running through the unit cells. In this period of the 1920s there were a number of experiments in which crystals were actually prepared in the laboratory from newly isolated enzymes and hormones, urease, somnerned northrop pepsin insulin by JJ Abel in America. It was natural for young crystallographers in the 1920s to try to put x-rays through these crystals too. In the laboratory of W.H. Bragg at the Royal Institution several attempts were made to get x-ray photographs of insulin, hemoglobin, one of the enzymes, a destined plant hormone and they all got nothing but somewhat vague blurs. Two of the young men present were Astbray and JD Bernal and when they left the laboratory of the Royal Institution, Bernal for Cambridge and Astbray for Leeds to work on wool fibres at York, they were both very anxious to work on proteins and they corresponded with one another and their correspondence exists in the Cambridge University Library where I found it and Astbray described how he wrote to northrop for pepsin crystals and northrop sent him ones and he got absolutely nothing on the photographs except as sort of got two rather different ones. He took two rather diffuse reflections rather like some of the silk fibra in ones and he took fibre photographs as well. In fact that particular silk fibra in photograph is taken by Astbray and not by Brill or any one of the earlier workers. He found that protein fibres in general tended to give two patterns, one when it was unstretched with two reflections which he called the alpha pattern and then if you pulled it out it gave the pattern that suggested stretched chains, the beta pattern. But he was very anxious to work on crystals and to collaborate with Bernal. The only thing he complained of in his letters was he would like to start a serious collaboration. If only you were not such a soft hearted chap and taking on problems for all sorts of other people and the problem of course that JD Bernal was taking on at that particular moment was the structure of the sterols. He had just put X-ray photographs, X-rays through calciferal crystals and shown from his results that the Vilearn Vindhau's formulae could not be correct and so opened the way for a whole new passage in sterol chemistry. But Astbray wrote, why not ask for hemoglobin crystals? A dare is the bloke. And Bernal I think hesitated a little but suddenly the crystals were brought him in his hand. They were brought from Uppsala where they had been grown by a young man called John Philpot who was a biochemist learning how to purify proteins with tizalius. John Philpot enjoyed skiing. He went off skiing in the mountains for a fortnight, leaving his crystals growing in the fridge. When he came back he found his tubes of purified pepsin full of the most marvellous large crystals about 2 millimetres long. And as good fortune for the advance of science would have it, they are passed through the laboratory, Glenn Millican, the son of Ray Electron Millican who was working in Cambridge on fast reactions and he was shown the crystals and he said, I know a man in Cambridge who would give his eyes for those crystals. 
And Philpot happened to know the same man, John Desmond Bernal, because he had earlier been involved in the isolation of Vitamin D at the Medical Research Institute in our country and so he very willingly handed him a tube of the crystals which Millican stuck in his coat pocket right way up. Crystal still in their mother liquor and took them back to Cambridge. The year was then 1933 and when Bernal saw the crystals, of course he immediately did, he looked at them first within the tube under the microscope and saw they were brightly shining, brightly birefringent. And he took one out of it being in a hurry to see what was happening just with a needle out of the tube and took an x-ray photograph of it and got exactly what Asprey had got and he perhaps rather less, because he was a less skillful in general experimenter, hardly anything on the photograph and he thought this must be wrong. Went back and looked at the crystals, brightened their mother liquor and it suddenly struck him that they needed their mother liquor round them to keep their actual form. And he was lucky in another way because he was working at that time also on the problem of ice and water and he had in the laboratory a student Helen McGaw taking x-ray photographs of ice crystals which she grew in little fine Lindemann glass tubes and kept at low temperatures. So Bernal took just one of her little fine ward tubes about half a millimetre cross and fished out peps in crystal within its mother liquor, concealed the tube at two ends and put x-rays through it and immediately got an x-ray photograph with reflections, ever so many reflections all over the photograph. Now the next slide should, if I'm remembering right, well first we have the people. So here is JD Bernal much later on in life and talking to Katie Dornberger, a student who was working at that time with VM Goldschmidt in Goettingen but came to work with him and came to work on some of the protein problems later and myself and I must say this photograph was taken in relatively old age when Kate had just become a young woman. She had just become the director of a small institute for x-ray diffraction studies in Berlin East. Now the next slide shows another character in the story whom I will mention, A. L. Patterson in his laboratory with two students and the next slide shows the photograph. This isn't the original peps in photograph. Here are some peps in crystals soothing about in their mother liquor and above is a photograph taken of them by Professor Tom Blundle in Birkbeck College London showing very, very many reflections along these parallel lines on an x-ray photograph. The original photographs we think must have perished at Birkbeck since the laboratory, part of the laboratory was destroyed during bombing during the war but this at least illustrates the character of the picture quite different from silk fibro in a definite crystal repeat. You can very easily measure one lattice constant 67 angstroms from the separation of the lines and the other one in fact we got it wrong when first we measured it. We got it about half the size it really is. It's really nearly 300 angstroms corresponding to the long dimension of the peps in crystals. I was at that time working with Bernal and it was only a sort of bit of bad luck but perhaps it was good luck for science that I was not in the laboratory the day the first crystals came in. I was having a bad cold or something and Bernal made all of the first observations himself. 
I'm always a little afraid I might have got more on the first photograph since it was possible to get more reflections from the dried crystals than Bernal actually did and so delayed the observation that it was absolutely necessary to keep these crystals in their mother liquor. I went on to take most of the rest of the photographs but I do some of the calculations which we didn't carry very far because our first measurements indicated that we had a very large unit cell that it could correspond to there being 12 peps in molecules in this cell. Each of weight about 40,000 several thousand atoms each you see within each molecule and that this was it was beyond our possible means at that time to think that we could work out the structures of such molecules. And the, yet the reflections extended to about one and a half angstroms. It was clear that they were sufficient to show us atoms if ever we could form an electron density pattern from them and look at it. At the time I was under pressure to return to Oxford to a college teaching appointment that should lead to a permanent appointment. I was most unwilling to go but everyone in Cambridge said difficult to get university jobs in this time. Of course you must take it so reluctantly I went back to Oxford. Bernal had got a small grant to support me of 200 a year which she gave to another young person and I think could I have the next slide. And this was Isidor fan Cwcchan who then was visiting from America who had come over with WL Bragg and was working with WL Bragg at Manchester and heard about the work at Cambridge and wanted to join in. And he came in to take the next protein crystal chymotrypsin from Northrop and then passed over to work on virus crystals but played a very important part in the development of the subject and particularly later in America. And the next year there came another young person to work with Bernal in Cambridge and I think he's shown on the next slide again much older than he was Max Perutz and Max Perutz came from Vienna wanting to work with Hopkins but Mark had forgotten to ask Hopkins. To have Max Perutz as a research student when he visited Cambridge in 1935 because he was so excited by the work that Bernal was doing and sent him instead to work with Bernal saying someone who really needs you. And Max said but I don't know any crystallography and Mark said you will learn my boy which he did the hard way for many years to come. The other one in the picture is John Kendrew who Professor Hoppe mentioned and he doesn't come into the story for very much longer. Now what happened to me in Oxford working going back to begin work all on my own was that Sir Robert Robinson who was then Professor of Organic Chemistry was given a small present of the first insulin crystals obtained by the firm Brutes in our country following a prescription for growing insulin crystals given by DA Scott in America that it was necessary to add zinc to the preparation. And they gave Robinson 10 milligrams in a little tube and he hadn't any use for them and knew the work that we had done in Cambridge taking x-ray photographs of pepsin crystals so he said why don't you try to photograph these. And they were microcrystalline but very bright by refrigerant and so I looked up all the preparations and grew the crystals finally not very well by Scott's method large enough to take x-ray photographs of and I made a horrible mistake. I decided that it didn't matter whether they were wet or dry and it was easier to handle them dry. 
I dried them like a good organic chemist pouring metal alcohol over them and then took x-ray photographs of them and these very dry looking crystals as you can see are the crystals. Not looking very good single crystals but they are and up at the top is the little x-ray photograph they gave. Well they gave an x-ray photograph spots on the film and I developed the first x-ray photograph about 10 o'clock at night and waited in the lab while I fixed it and washed it and then walked out absolutely dazed very excited. Little spots on this photograph down through the centre of Oxford away from my lodgings and about midnight I was accosted by a policeman who said where are you going so I said not very truthfully back to college and turned round and went back. But I woke up in the morning next morning about six and I was suddenly extremely worried and I went to the, I thought perhaps those spots, perhaps those crystals aren't really protein crystals at all but something else, some impurity, some breakdown product in the preparation. And I went round very quickly before breakfast at the laboratory and picked one out of the tube and tried protein tests on it and I tried the xanthoproteic reaction which consists in dropping first a drop of concentrated nitric acid and it turns yellow and then a drop of ammonia and it turns brown which it did to my great relief and I went back. Happily to the labot to breakfast. Now I perhaps should tell all those who were young here why I knew that reaction so well was because when I was rather young and still at school I had a laboratory and did experiments on my own and I was doing experiments suggested by Parsons fundamentals of biochemistry and completing something or other one Sunday in a nice new silk frog and of course one shouldn't ever do this kind of thing and I accidentally dropped a spot of nitric acid on this by the front of my dress and seized the nearest alkali which was ammonia and put it on it where of course it was much worse. I was pretty upset. My mother comforted me and said she could cover it all with a frill which she did and so this particular reaction is indelibly engraved in my mind and I was very pleased when I could test it once again with the insulin crystals. Now what happened after that I didn't really remember until I was back reading the Bernal files at Cambridge but it's obvious that directly after breakfast I rang up the Cambridge lab to tell them that I had taken these insulin photographs and got the very sad news that Bernal was at home with a temperature of 104. So then I wrote a little letter to his wife saying please tell him when he's well enough that I have these insulin photographs and I gave the rough dimensions of the crystal unit cell and there on the next slide. A wrong is the real form of the crystals, 74.8 across 30.9 high and within the crystal there's roughly 36,000 molecular weight of protein and it should formally be divided into three which by the crystal symmetry to give you 12,000 molecular weight. For the insulin molecules in the unit cell. Bernal recovered and wrote me a letter which begins, dear Dorothy, ZN 0.52%, CO 0.49. CD 0.74%. This gives rather less than three in each case. I am going back to Cambridge, I forget when, I will send you some cadmium stuff. Any crystallographer can see what he was saying in this letter. This sync according to DA Scott's observations is replaceable by other elements of which cadmium is the heaviest so far observed. 
You should try and see if the cadmium crystals show changes in the intensities of the X-ray reflections and then you might be able to use the method of isomorphous replacement to determine phase constants for the different reflections and really see the atoms in your crystal. Terribly premature, I'm afraid. Again, I didn't remember all the details. I found the letter in which I said, sorry I can't get anything like good crystals from the cadmium material. It must be very impure. I'm having a terrible time with scholarship examining for the college and I did make one or two abortive efforts but I had a feeling that cadmium wasn't really heavy enough to do what was wanted and that anyway if I did a little calculation on the number of atoms that there were in insulin it was too large a problem for myself to set out to work on at the age of 24 and that I must try to solve some simpler structure first. I tried out the idea of the isomorphous replacement game actually in little calculations at the Royal Institution in a notebook on cholesterol, chloride and bromide while I was taking the insulin photograph which is shown on the previous slide because it's a very, I took that particular photograph for show for publication. In the Royal Society using the very big x-ray tube at the Royal Institution for the purpose. So I didn't go on with insulin but Max Proutes went on with hemoglobin. Can I have the next slide please? I went on as I said with sterols and with the sterols I explored the possible use of both heavy atoms and isomorphous replacement for showing electron densities. This is one of the experiments we did and I show it because I have to introduce another character in the story and this is A.L. Patterson. A.L. Patterson I showed on that earlier slide was one of the young men, A.L. Patterson, Asprey and Bernal who were all.
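The transcript above describes the two ingredients of the method that later solved protein structures: the electron density is a Fourier series whose amplitudes are measured but whose phases are lost, and an isomorphous heavy-atom replacement (chloride against bromide, zinc against cadmium insulin) lets those phases be inferred from intensity differences. The following is a toy, one-dimensional, centrosymmetric sketch of that idea, with invented atom positions, electron counts and function names; it is not Hodgkin's or Bernal's data, and real protein phasing is three-dimensional and usually needs several derivatives.

```python
# Toy 1-D, centrosymmetric sketch of isomorphous replacement (hypothetical numbers).
import math

N = 200                               # grid points across the 1-D "cell"
LIGHT = [(0.15, 6.0), (0.30, 8.0)]    # (fractional position, electrons); mirrored at -x
HEAVY = (0.40, 30.0)                  # added heavy atom at a known position
H_MAX = 15                            # number of Fourier terms kept

def structure_factor(h, atoms):
    """F(h) for a centrosymmetric model: each atom sits at both +x and -x."""
    return sum(2.0 * z * math.cos(2.0 * math.pi * h * x) for x, z in atoms)

def density(signed_f, x):
    """Fourier synthesis rho(x) from signed amplitudes (arbitrary scale)."""
    return signed_f[0] + 2.0 * sum(
        signed_f[h] * math.cos(2.0 * math.pi * h * x) for h in range(1, H_MAX + 1))

# "Measured" amplitudes for the native crystal P and the heavy-atom derivative PH:
f_p = [structure_factor(h, LIGHT) for h in range(H_MAX + 1)]
amp_p = [abs(f) for f in f_p]
amp_ph = [abs(structure_factor(h, LIGHT + [HEAVY])) for h in range(H_MAX + 1)]

# Phase recovery: F_PH = F_P + F_H, and F_H is computable because the heavy-atom
# position is known.  In the centrosymmetric case a phase is just a sign, so we
# pick the sign of F_P that reproduces the measured |F_PH|.
signed = []
for h in range(H_MAX + 1):
    f_h = structure_factor(h, [HEAVY])
    sign = min((1.0, -1.0), key=lambda s: abs(abs(s * amp_p[h] + f_h) - amp_ph[h]))
    signed.append(sign * amp_p[h])
assert all(abs(s - t) < 1e-6 for s, t in zip(signed, f_p))   # signs fully recovered

# Summing the series with the recovered signs brings back the light-atom positions:
rho = [density(signed, i / N) for i in range(N)]
peaks = [round(i / N, 2) for i in range(N)
         if rho[i] > rho[i - 1] and rho[i] > rho[(i + 1) % N] and rho[i] > 0.5 * max(rho)]
print("density maxima near x =", peaks)   # expect ~0.15, 0.30, 0.70, 0.85
```

The centrosymmetric simplification is deliberate: with phases reduced to plus or minus signs, a single heavy-atom derivative is enough in this noiseless toy, which mirrors why the earliest heavy-atom and isomorphous-replacement arguments were made on centrosymmetric projections.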
|
Dorothy Crowfoot Hodgkin lectured at the Lindau meetings five times and repeated her basic story several times. The story is about the development of X-ray diffraction as a method to determine the structure of biologically important organic molecules, such as insulin. This time she tells a very personal version of the story, with a lot of photographs that, sadly enough, are not in the archive of the Lindau meetings. Some photographs are of beautiful organic crystals, the growing of which is an art in itself. Other photographs are of her mentors and colleagues. Her foremost mentor, John D. Bernal, plays a large role in her lecture and she has even gone through his correspondence that is kept in an archive in Cambridge. Bernal, who was born in Ireland in 1901, was the first to show clearly that even organic molecules can give rise to well defined X-ray diffraction diagrams. This discovery was made in Cambridge in 1934, using crystals of pepsin. According to Dorothy Crowfoot Hodgkin, his important discovery was that the crystals had to be kept in their mother liquor. This is because they contain water and may become deformed if dried. Bernal put the crystals, only a millimetre large, in a small glass tube that had been sealed at the ends. I was particularly interested in hearing that the crystals for this groundbreaking experiment had been grown where I was born and went to school and university, in Uppsala, Sweden. The pepsin crystals were grown by a visitor to the laboratory of The Svedberg, the Swedish Nobel Laureate in Chemistry 1926. Svedberg’s invention, the ultracentrifuge, evidently was a strong attractor for scientists from all over the world interested in sorting large organic molecules. So one of Bernal’s friends happened to pass by and saw the crystals and brought them back to him. This kind of story is by no means unique and shows the importance of scientific exchange and travel. Anders Bárány
|
10.5446/52593 (DOI)
|
I am going to speak about vitamin C and cancer. I hadn't planned to work on cancer, work that I began, in fact, ten years ago, nor to work on vitamin C or other vitamins, which I began around 16 years ago. Instead, I got into these fields by accident, or through some concatenation of circumstances that just led me into these fields. I started to work on hemoglobin in 1936, after having worked on simpler molecules for 14 years. And then the next year, I began some work, together with my students, of course, in the field of immunology. Then in 1945, I had an idea that sickle cell anemia might be a disease of a molecule rather than a disease of a cell. Dr. Harvey Itano, a young physician, came to study with me, and he and I began to check up on this idea. In 1949, together with two others, Singer and Wells, we published a paper, Sickle Cell Anemia, a Molecular Disease. After being with me eight years, Dr. Itano was ordered, as an officer of the Public Health Service, to move to Bethesda. And I decided that I should give up work on the hereditary hemolytic anemias. It didn't seem very sensible of me to be competing with such an able and vigorous young investigator as Dr. Itano. So I thought, why shouldn't I look at other diseases to see whether or not they are molecular diseases? And they might as well be important diseases, because nobody else was working in the field, except that by that time there were a good number of hematologists studying the hemoglobin anemias. I thought, well, I might work on cancer or I might work on mental disease. And I rejected cancer for two reasons. One, it seemed to me that it was just too complicated a field for me to be involved in. And second, many investigators were carrying on studies in the field of cancer, whereas in 1954 very few people were working on mental disease. It was the mental diseases, our studies on schizophrenia and mental retardation, that got me into vitamins. After about ten years of work in this field, I ran across some papers, publications by Dr. Hoffer and Dr. Osmond in Saskatoon, Saskatchewan, Canada. They made a report that really astonished me. They said that they were giving large doses of nicotinic acid or nicotinamide to schizophrenic patients. I knew, of course, that a little bit of nicotinic acid or nicotinamide must be ingested day after day, five milligrams, perhaps a little pinch, to keep a person from dying of pellagra. So I knew that these substances, the pellagra-preventing factor, are very powerful substances. Five milligrams a day will keep a person from dying of the disease. And yet these substances are so lacking in toxicity that Hoffer and Osmond were giving a thousand or ten thousand times this physiologically effective amount, or pharmacologically effective therapeutic amount, to schizophrenic patients. We know just how low their toxicity is: people have taken a hundred grams a day or even more without any serious side effects. I thought how astonishing that there are substances of this sort that have physiological activity over a tremendous range of concentration, a thousand-fold or ten-thousand-fold range of concentration. In fact, I found that Milner had been giving large doses of vitamin C in a double-blind experiment to schizophrenic patients. And he reported that several grams, perhaps a thousand times the amount that will prevent scurvy in most people, several grams of ascorbic acid, is also effective, more effective than a placebo, for these schizophrenic patients.
Well, the idea that such substances exist, which are effective in one way or another over a tremendous range of concentrations, caused me to decide that this field of medicine deserved a name. I invented the word orthomolecular to describe it. Orthomolecular means the use of the right molecules in the right amounts. The right molecules are the molecules that are normally present in the human body. And the right amounts are the amounts that put people in the best of health. Well, I was interested originally in infectious diseases in relation to vitamin C, but in 1971 Charlie Huggins asked me to speak at the dedication of his new cancer research laboratory. I thought I must say something about cancer. So I remembered that I had read a book published in 1966 by Ewan Cameron, a surgeon in a hospital in Scotland — a general surgeon who had, however, all of his life been interested in cancer. He formulated a general argument in this book, Hyaluronidase and Cancer. This argument was this. He said, the body has protective mechanisms, the immune system for example. If we could potentiate these natural protective mechanisms, they might provide additional protection against cancer. As Professor Dedeves pointed out, most patients with cancer have circulating cancer cells, but not all of them develop metastases. Those whose immune systems are functioning well have a smaller chance of developing metastatic cancer than those whose immune systems are not functioning well. Well, I thought, we know one thing about vitamin C: it is required for the synthesis of collagen. If, then, persons with cancer were to be given larger amounts of vitamin C, they would be stimulated to produce more collagen fibrils in the intercellular cement that holds the cells of normal tissues together. These tissues might become strengthened in this way — as Cameron had pointed out, without mention of vitamin C — to such an extent that they could resist infiltration by the growing malignant tumor. Dr. Cameron saw a newspaper account of my talk and wrote asking how much vitamin C to give. I replied that he should give the patients 10 grams a day, 10,000 milligrams, 200 times the usually recommended amount. He began cautiously with one patient in the Vale of Leven Hospital, Loch Lomondside, Scotland, and was astonished by the response of that patient, to such an extent that he gave 10 grams of vitamin C a day to a second terminal cancer patient — an untreatable patient receiving no treatment other than the vitamin C and narcotics to control pain. And then a third and fourth and more and more patients, as he became more and more convinced of the value of this substance for patients with cancer. May I have the first slide now. Well, I think — and perhaps this is the message that I should emphasize — that the people in the field of nutrition, the scientists in the field of nutrition, which up to 15 years ago seemed to me to be a terribly boring subject, have been off on the wrong track. They have said that a vitamin is an organic compound needed in small amounts to prevent death from the corresponding deficiency disease. And they have striven very vigorously over a period of 40 or 50 years to find out just how much of each of these substances is needed to keep people from dying. I believe that the problem that should be attacked is that of finding the intake that would put people in the best of health, not just the amount that will keep them from dying.
The nutritionists who refer to their recommended dietary allowances describe them by saying that these are the amounts that will prevent most people from developing the corresponding deficiency disease — most people in ordinary good health. What they should say is that they will prevent most people who are in what is ordinary poor health from dying of the corresponding deficiency disease. To be in what ought to be ordinary good health, they need to be ingesting the optimum amounts, the proper amounts, of these valuable substances. Next slide. There's an interesting difference between vitamin C and the other vitamins. The other vitamins — thiamine, pyridoxine, riboflavin, vitamin A and so on — are required by essentially all animal species. Vitamin C is not required by most animal species. 99% or more of animal species synthesize ascorbate. They do not rely on dietary sources of the substance. If I ask why these animals continue to synthesize vitamin C, even though they may be getting large amounts by ordinary standards in their diet, several grams a day for an animal the size of a man, the answer surely is that they continue to synthesize ascorbate because the amounts they get in their diet are not enough to put them in the best of health, not enough to put them in the fittest condition, in the environments in which they live. Next slide. Man is one of the few unfortunate species of animals who are in rather poor health generally because of not having as much ascorbate as corresponds to the best of health. When I looked at 150 raw natural plant foods, taking the amounts that would give 2,500 kilocalories of energy, I found that for thiamine and other vitamins there was perhaps three times or five times as much of the vitamin as is now recommended and as you get in a modern diet on the average, but 50 times as much vitamin C as is recommended. And I thought this is an indication that larger amounts of vitamin C are needed, because animals are getting these larger amounts but continue to make ascorbate. Next slide, the next slide please. Another interesting fact is that the committee that recommends the diet, the food, for experimental monkeys recommends 70 times as much vitamin C. Monkeys also require exogenous vitamin C — they are primates, and all of the primates require this vitamin — many times the amount recommended for human beings. I think that this is understandable. Experimental monkeys are very valuable. If you've spent months carrying out studies with them and then suddenly your monkeys die, it's a real tragedy, so that a great effort has been made to find out how much vitamin C will put the monkeys in the best of health. No one has gone to the effort to carry out corresponding studies for human beings. Next slide. Well, I mentioned that most species of animals synthesize ascorbate. The amount they synthesize depends on the size of the animal. Small animals produce a small amount, large ones a large amount — proportional to body weight; not to surface area, the two-thirds power of the body weight, but to body weight. And the amount produced by the different animal species is between 40 and 400 times the usually recommended intake for human beings. The average is 10 grams per day per 70 kilograms of body weight. That's why I wrote to Ewan Cameron to say that 10 grams a day is the amount that he should try. I might say that the pharmacologists sometimes say that 50 milligrams of vitamin C per day is a physiological intake.
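The body-weight extrapolation above can be restated as a small calculation. The sketch below only re-expresses the figures quoted in the lecture — 10 grams per day per 70 kilograms, and a 40- to 400-fold range over a recommended human intake — and the 50 mg reference intake used in it is an assumed round number, not a value taken from the talk.

# A minimal sketch of the per-body-weight scaling described above.
# The 50 mg "recommended intake" is an assumed round figure for illustration;
# the 10 g per 70 kg average and the 40-400x range are the lecture's own numbers.

def scaled_synthesis(rate_mg_per_kg_day, body_mass_kg=70.0):
    """Scale a per-kilogram ascorbate synthesis rate to a given body mass."""
    return rate_mg_per_kg_day * body_mass_kg

implied_rate = 10_000 / 70                      # ~143 mg per kg per day
print(f"implied average rate: {implied_rate:.0f} mg/kg/day")

assumed_rda_mg = 50                             # assumption, not from the lecture
low, high = 40 * assumed_rda_mg, 400 * assumed_rda_mg
print(f"40-400x range, per day: {low/1000:.0f}-{high/1000:.0f} g")
print(f"scaled back to 70 kg: {scaled_synthesis(implied_rate):.0f} mg/day")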
And that 10 grams a day is a pharmacological intake, that the vitamin is being used as a drug. I would say that 10 grams a day is the proper physiological intake, and 100 grams a day might be called a pharmacological intake. And people have taken that amount. People have received 125 or 150 grams of sodium ascorbate a day by intravenous infusion, to control serious diseases, without any side effects, and have taken similar amounts by mouth. Next slide. So this is the conclusion that I have reached. I've already stated it. Next slide, please. Well, vitamin C is required for synthesizing collagen, and I think it might well strengthen the normal tissues. Next slide. The reason that it is required is that collagen is formed from procollagen by hydroxylation of prolyl and lysyl residues. And there are other hydroxylation reactions. This one and other similar reactions do not take place except with the use of vitamin C. Next slide. Well, the value of an increased intake of vitamin C under several circumstances has been known for a long time. For over 40 years the better surgical textbooks have recommended that all surgical patients be given a gram or two or three grams of vitamin C per day in order to facilitate wound healing, the healing of broken bones and of burns, and to take care of peptic ulcers and periodontal disease. Physicians have known, and dentists too have known, about this. They don't all practice it, but it's been known. Next slide. Here is a reference to Ewan Cameron, my associate for 10 years now in this work, and to his book Hyaluronidase and Cancer. Next slide. And I argued then, in 1971, as stated on this slide, that the increased synthesis of collagen might strengthen normal tissues to a significant extent. Next slide. Since then, a large amount of information has been gathered about the relation between intake of vitamin C and various aspects of the immune protective mechanisms. Vallance, Feigen, Yonemoto and others have shown that antibodies, IgG and IgM, are produced in larger amounts with an increased intake of ascorbate. Feigen showed that the component of complement involving collagen-like sequences of amino acids, as shown by Professor Porter, is produced in larger amounts. The blastogenesis of lymphocytes occurs at a greater rate, and the activation of cytotoxic macrophages has been shown to be increased. Interferon production is reported to be greater with a greater intake of vitamin C. I might mention that there's been a tremendous amount of interest in interferon for the treatment of cancer. One important point here is that to treat a patient with interferon costs about a thousand times as much as to treat him with ascorbic acid. Dr. Cameron says to people who ask about interferon: take ascorbic acid and synthesize your own interferon. Next slide, please. This is the only study on vitamin C that has been carried out in the National Cancer Institute of the United States. Yonemoto, Chretien and Fehniger gave vitamin C, five grams a day for three days, to volunteers. The rate of blastogenesis of lymphocytes under antigenic stimulation doubled with this intake. When 10 grams a day for three days was given, the rate tripled. And when 18 grams a day for three days was given, the rate quadrupled. It's known that a high rate of blastogenesis of lymphocytes in a cancer patient is correlated with a better prognosis, longer survival, than a low rate of blastogenesis. Next slide. Ascorbate seems to have a significant prophylactic value.
Here I've listed seven studies relating to vitamin C and cancer — studies in which, when a number of environmental or nutritional factors were correlated with the morbidity from cancer, vitamin C turned out to have the highest correlation coefficient, negative of course: to be the nutritional factor that seemed to be most strongly related to morbidity from cancer. Next slide. An interesting study was carried out by Dr. DeCosse and his associates. DeCosse is now the head of surgery at the Memorial Sloan-Kettering Cancer Center in New York City. He found that three grams a day of vitamin C given to patients with familial polyposis caused the polyps to disappear in half of the patients. I've suggested that he give 10 grams a day in the new trial which is under way, but he is sticking with three grams a day because of his worry about toxic side effects of large doses. Well, there just aren't any toxic side effects of large doses of vitamin C. Talk about kidney stones has essentially no basis whatever — no cases in the medical literature. Damage to the liver doesn't occur, although it's sometimes mentioned, without references. Next slide, please. Bruce in Toronto has used the Ames method of testing for mutagens to study fecal material, the contents of the lower intestinal tract. Many mutagens show up. And when various mutagens have been tested, with much difficulty, for carcinogenic activity, about ninety percent of them have been found to be carcinogens as well. When vitamin C is given by mouth to patients, the number of mutagens in the fecal material is much smaller. This is presumably a mechanism for preventing cancer of the lower gastrointestinal tract. I find, when I take 10 grams a day, which is the amount that I do take, that half of the vitamin C, 5 grams, remains in the contents of the gastrointestinal tract, presumably then providing protection for this tract — and in particular, of course, preventing the formation of nitrosamines, which are a cause of gastric cancer and other cancers, and also destroying other mutagens in the material in the gastrointestinal tract. 5 grams gets into the bloodstream. Of this, 30 percent, 1.5 grams, is eliminated in the urine and provides protection of the urinary tract on the way out. And the other 3.5 grams works throughout the human body. Next slide. Here are the clinical trials, all that have been carried out and reported as yet. I made a mistake when I wrote the copy for this slide: Cameron's last study involves 300 terminal cancer patients treated with ascorbate compared with 2,000 matched controls in the same hospital. With these studies, 100 against 1,000 matched controls, there was essentially a random distribution of patients between Dr. Cameron on the one hand and the other surgeons and physicians in the same hospital on the other. Over a period of time, Cameron was giving vitamin C to the patients with terminal, untreatable cancer, and the other surgeons and physicians were not, so that we had a sort of randomized allocation of patients to the two groups. Morishige and Murata are associates of our institute in California — Murata has worked there two summers — and their work is carried out in a hospital in Japan. Next slide. The first thing that Dr. Cameron and his collaborators, the surgeons working with him, noticed was that the patients feel better when they receive ascorbate.
Cancer patients usually are pretty miserable, don't feel well; they have poor appetites and don't eat well. These patients lost their cachexia, they began to feel lively, feel well, have good appetites, and then there were other responses that were noted. And later on, of course, it was found — next slide — it was found that they survive longer than the controls. This slide shows survival times of the 100 patients with untreatable cancer who received ascorbate and the 1,000 matched controls, 10 matched to each of the ascorbate-treated patients, who had also reached the untreatable stage and to whom no therapy was administered except morphine or diamorphine to control pain. After the date of untreatability, the controls lived on the average 54 days, and the ascorbate-treated patients lived on the average about a year — much longer. Of the controls, only 3 in 1,000, 3 tenths of a percent, survived over 400 days, 4 tenths of a percent over a year after untreatability, whereas 16 percent of the ascorbate-treated patients continued to survive. And these patients continued to survive for a long time, as much now as 8 years after having been considered to be terminal, with an expected survival time of only a couple of months. Next slide. These are results of similar observations made in the Fukuoka Torikai Hospital in Fukuoka, Japan. Again, there's a low-ascorbate group receiving less than 5 grams of vitamin C per day — in Cameron's patients in Scotland, the low-ascorbate group received very little, perhaps 50 milligrams a day — and a high-ascorbate group receiving more than 5 grams a day, an average of about 15 grams a day. The survival curves are essentially the same as for the study in Scotland. The next slide. These graphs represent a breakdown of that comparison of the first 100 ascorbate-treated patients in Scotland and the thousand matched controls. Here we have 17 patients with cancer of the colon who had reached the untreatable stage, compared with 170 matched controls. The controls died off pretty rapidly. The ascorbate-treated patients lived on — a number of them here, a third of them or more, over a year. One of them was still alive in 1978. With cancer of the stomach, bronchus, and breast, the situation is rather similar. Next slide. Cancer of the kidney, rectum, bladder, ovary: there doesn't seem to be much difference in the response to ascorbate of patients with different kinds of primary cancer. There are some statistically significant differences, but it's a difference between living a year longer on the average and living seven months longer. In general — in Scotland, adult patients with solid tumors, gastrointestinal tumors and so on, who have reached the untreatable stage do not receive chemotherapy, so that these patients in general had not been treated with chemotherapy — I would say that the evidence indicates that to give them ascorbic acid leads to greater survival time, as well as better well-being during the period of survival, than treatment with chemotherapy does, the standard sorts of chemotherapy that are used now. Next slide. Another trial was carried out by Creagan, Moertel and others at the Mayo Clinic. The difference between that trial and the trials in Scotland and Japan is that 88% of the patients, 123 patients in the Mayo Clinic trial, had received chemotherapy, as against only 4% in the Vale of Leven Hospital. We argued, Dr.
Cameron and I, before the Mayo Clinic trial was begun, that they should not use patients who had had their immune systems badly damaged by courses of chemotherapy, because of our feeling that vitamin C works largely by stimulating the immune system. Next slide. Well, our conclusions — and I just quote the last sentences in our book, Cancer and Vitamin C: with the possible exception of during intense chemotherapy, we strongly advocate the use of supplemental ascorbate in the management of all cancer patients, from as early in the illness as possible. Next slide. We believe that this simple measure would improve the overall results of cancer treatment quite dramatically, not only by making the patients more resistant to their illness, but also by protecting them against some of the serious and occasionally fatal complications of the cancer treatment itself. Next. We are quite convinced that in the not too distant future supplemental ascorbate will have an established place in all cancer treatment regimes. Next slide. Now, the advantages: it is an orthomolecular substance — every human being has vitamin C in his body so long as he continues to live. It has very low toxicity and no serious side effects; it makes the patients feel much better; very low cost. It's compatible with most or all other methods of treatment, the exception being chemotherapy. Of course, I think that chemotherapy should be used with childhood cancer, leukemia, Hodgkin's disease, possibly together with vitamin C. But my own feeling is that vitamin C for adults with solid tumors is probably preferable to chemotherapy. Well, I can end up by saying that in the United States the medical profession as an organized group has not accepted these ideas. Individual physicians, I would judge, have, because we get hundreds of letters and telephone calls from individual physicians who have developed cancer asking for more information. Thank you. Thank you.
|
After being awarded the 1954 Nobel Prize in Chemistry "for his research into the nature of the chemical bond and its application to the elucidation of the structure of complex substances", Linus Pauling turned his attention to a range of medical issues, including topics like the mechanisms of general anaesthesia or the molecular mechanisms of sickle cell anaemia, both of which he discussed in his 1964 Lindau lecture (LINK). He also briefly worked on mental diseases. This brought him to the vitamins, some of which were evaluated as drug candidates for mental patients. What ensued was some of his probably most controversial work, which culminated in the claim that vitamin C megadoses (around 10 g per day) would be suited to treat cancer and support health in general. Pauling himself took at least 10 g of vitamin C per day for more than 20 years. Aged 93, he died of prostate cancer. Before his death, he claimed that vitamin C had delayed the onset of his disease significantly. In the present lecture, Pauling clearly outlines the rationale that led him to support vitamin C megadoses. He begins by pointing out the importance of vitamin C for collagen biosynthesis (collagen is a structural protein responsible for the integrity of skin, hair, muscles, tendons and other tissues) and hypothesizes that vitamin C could inhibit the metastasis of cancer by generally strengthening tissues due to improved collagen synthesis. His second argument concerns the curious fact that the vast majority of animals are able to biosynthesize vitamin C and are thus not dependent on an intake via food. Humans and other primates, bats and guinea pigs are some of the exceptions. Pauling extrapolates that a typical animal weighing 70 kg would produce 10 g of vitamin C per day. This is much more than a human will normally take up via food; hence, according to Pauling, heavy supplementation is necessary. This line of thought is the source of his famous 10 g per day megadosage recommendation. However, to date (2013), it has not been proven that vitamin C megadoses are suited to cancer therapy, and the studies Pauling describes in his talk have been shown to contain systematic flaws. In this context, the remarks made towards the end of the talk are highly problematic. Based on the assumptions that (i) vitamin C helps to fight cancer by stimulating the immune system and that (ii) vitamin C can hence not act if the immune system is suppressed by chemotherapy, Pauling recommends treating adult cancer patients with vitamin C only and omitting chemotherapy altogether. Luckily, this highly questionable recommendation was never adopted by conventional medicine. The Linus Pauling Institute at Oregon State University, founded in 1973 by Pauling and colleagues, today distances itself from the claim that vitamin C is effective in cancer therapy and recommends a rather low daily intake of 400 mg based on the “currently available epidemiological, biochemical, and clinical evidence” [1]. David Siegel [1] http://lpi.oregonstate.edu/infocenter/paulingrec.html
|
10.5446/52596 (DOI)
|
Bacteria have been the favorite tool of geneticists and molecular biologists for a number of years because of their simplicity. This simplicity is both in structure and in their genetics, and also in their life cycle. They're not only the simplest cellular organisms but extremely small in size, and with other properties — rapid division and so forth — that make them ideal objects for study in the laboratory. If I could have the first slide, I'll show a scanning electron micrograph of a typical bacterial cell, one that we study in our own laboratory, Haemophilus influenzae. Now for the purposes of my lecture, let me describe very briefly some of the characteristics of these bacteria. First of all, bacteria can be divided into two major categories based on the structure of their cell envelope. We have the gram positive bacteria, which have an outer boundary consisting of a cytoplasmic membrane and a rather thick cell wall. On the other hand, the gram negative bacteria have, in addition to these two layers, an outer membrane. So that when we talk about gene transfer, we have to think of the problem of getting genes, that is DNA, out of one cell, through these layers, and into a recipient cell. Now the bacterium is also rather simple inside. There's no true nucleus. The chromosome is in direct contact with the cytoplasm, and genetic expression occurs in a coupled fashion, with the genes being transcribed and then directly and immediately translated into protein. The chromosome is a single molecule of some million or so base pairs, and the nucleotide sequence of course carries the complete genetic program, which specifies not only the structure of the bacterial cell but also the complete life cycle. Perhaps we could show the first slide. The nucleotide sequence itself, which carries this genetic information, is organized into a series of contiguous units which we call genes, and in a typical bacterial cell there might be some 3,000 such genes, which play out the genetic program of this organism every 20 or 30 minutes as the cell divides. Here is the Haemophilus influenzae cell that we study, and you should keep this in mind for future reference. These cells are only about one or two microns in size. Now bacteria in general, if one looks in nature, grow in extremely large populations, and they also divide very rapidly, in a matter of 20 or 30 minutes per doubling time. In addition, these large populations collectively carry an enormous variety of mutations, so that in adapting to their environment they can select for favorable mutations; and in addition they've developed a variety of means of exchanging genes between cells, so that they can play with various combinations of mutations in order to develop the most adaptable organism. So that many modern biologists, for these reasons, have considered that bacteria may be the most highly evolved organisms. This is not to mean that they're the most complex in structure, but genetically the most evolved. They have a genome with the highest density of genetic information of any that we know, other than the viruses. If I could go to the next slide, then let me go over the known mechanisms for gene transfer in bacteria. This is the subject of the lecture. We have transformation, transduction and conjugation. In transformation, a donor cell in the population releases its DNA into the medium, by lysis or perhaps in some cases by a secretion mechanism, and other cells in the population act as competent recipients.
That is, they have become able to take up DNA from the medium into the cell. In transduction you have a similar transfer from donor to recipient, except that the vector for transfer is a virus, which in a small fraction of cases will package a piece of the bacterial DNA rather than phage DNA. In conjugation we have a highly specialized mechanism in which there is an actual bridge between the donor and recipient cells, and there is a plasmid-mediated linear transport of the donor chromosome, or a copy of the donor chromosome, into the recipient. Now in all these cases, once the DNA gets into the recipient cell, if it contains homologous sequences it can be recombined into the recipient chromosome to form recombinants, and this occurs ordinarily by several different biochemical pathways, which all cells carry, that enable them to recombine homologous sequences. My lecture will deal only with the transfer mechanism and not with what happens in the cell after the DNA gets in, as this is fairly uniform. All right, let me then talk in a little more detail about each of these mechanisms, and I'll start with transduction, which in many ways is the most universal transfer mechanism, because as far as we know all bacterial cells can be infected by at least some viruses which are capable of the transduction mechanism. Let's go to the next slide, and I want to concentrate really on the most salient features for this particular comparison. Here we have the essential mechanism for generalized transduction. In this case the virus itself replicates its DNA in the form of a tandem polymer, in which individual viral chromosomes have been joined end to end, either by recombination or in the process of a rolling circle type of replication, and the mature virus is formed by packaging genome units sequentially along this polymer, much as you would take an eggshell and stuff a long string into it. But the important feature is that the headful packaging starts from a particular site, identified by a nucleotide sequence which is called the pac sequence, which occurs in each genome; more or less randomly, the packaging will start at one of these sites and then continue along, packaging slightly more than a complete genome unit without the requirement for additional recognition of the pac site as you go down. It's only the initial one which seeds the packaging mechanism. Now this works very accurately in the cell, so that the vast majority of packaged pieces of DNA are viral. However, there are mistakes built into the mechanism. Bacterial DNA itself contains a few sites which are very similar, but perhaps not identical, to these sites, and this fools the packaging mechanism, so that occasionally viral packaging will start on the bacterial chromosome and proceed sequentially, thus forming particles which contain bacterial DNA and thus are transducing particles. The other way that mistakes are made is by actual mutations within the viral protein, present on the phage head, which recognizes this site. And we know that these mutations play a role in the formation of transducing particles, because in certain single bursts one sees a variety of transducing particles formed, originating from a number of sites in the bacterial cell. Now if I could go to the next slide, I show the mechanism of specialized transduction.
In this case we have a virus which is integrated as prophage into the bacterial chromosome with neighboring genes on either side and when the virus induces at some subsequent time and begins to replicate it must excise itself from the bacterial chromosome. Normally it would do this with great accuracy so that the excision occurs precisely at the two ends of the virus. But about one percent of the time an error is made and the virus loops out and excises so as to lose some of the viral genes and gain some of the neighboring bacterial genes. Thus you form a recombinant molecule containing virus and bacterial DNA. This can be packaged into a viral coat and then can subsequently infect another cell in the population and transfer these donor genes into the recipient cell. That's nature's way of doing recombinant DNA and it's been known for some 20 to 30 years. Now let me emphasize in both cases that the transducing particles arise as a byproduct of the normal replication and life cycle of these bacterial viruses. And the question arises as to whether nature could have evolved a more accurate process so that you would not make so many mistakes. But on the other hand you have to ask the question as to whether nature has found it more desirable to design a certain number of errors into the system for the benefit of the host cell. Because ultimately the host cell must survive in the environment if the virus itself is to live. Let's go to the next slide which will give us a brief picture of conjugation. This is actually quite a complicated mechanism and it's one that is not yet clearly understood although we have a general picture of it. Here is the classic F factor mating cycle and it starts with a bacterial cell typically E. coli containing the F factor which is a plasmid in supercoil form as shown by this figure of eight. Now the plasmid itself can replicate inside the cell and maintain itself in this cell and its daughter cells. But in addition it has a mechanism to spread horizontally in the population and this is the conjugation mechanism. The plasmid specifies, genetically specifies the synthesis of a tube-like pilus on the cell surface which contains specific recognition proteins at its tip that can interact with a so-called female cell which does not carry the plasmid. So you have here a donor or male cell and a recipient or female cell in contact. And when contact is made a single break occurs at a specific origin in the plasmid and thus relaxes the supercoil. We also have to imagine that the plasmid molecule is attached probably in the vicinity of that nick at the base of the pilus. Then over a period of a few minutes the pilus retracts by a mechanism that's not understood and the cells are drawn together so that an actual contact or bridge can be made. Replication of the plasmid then ensues from the original break in this one strand and you have a linear transfer of the plasmid into the female cell. And by DNA replication in the recipient one forms a double helix and the molecule is rejoined to generate again the supercoil molecule and the cycle is complete. So that one has now, the plasmid has now successfully transferred itself into a previously non-plasmid carrying cell in the population. In this way one can have a very rapid infective process in a cell population if one adds only a few fertile donor cells to a culture of female recipients within a few hours the entire population can be converted to plasmid carrying cells. On this slide we see the genetic structure of the F-plasmid. 
The complexity of the conjugation process is mirrored in the number of genes that are required to specify the process. There are some 19 genes that have been located in this segment of DNA, called tra, for transfer genes. And these genes not only specify the pilus structure but also specify a new and independent replication from this transfer origin, this replication mechanism being quite separate from the replication mechanism which maintains the plasmid in a particular cell. Now what I've described so far is simply the normal reproductive process for this plasmid. But the plasmid can also transfer bacterial genes under rare circumstances, and it does this by literally incorporating the entire bacterial chromosome into a particular site on the plasmid. And the incorporation occurs at particular sequences which occur also in the bacterial chromosome, so that one has homology between the plasmid and the bacterial chromosome at specific points, allowing a genetic recombination and cointegration of the two structures together. Then when the transfer occurs, by the mechanism I've described, the bacterial chromosome, which is now part of this plasmid, is simultaneously transferred in a linear fashion. And so we have then the very useful transfer of bacterial genes, which is of great benefit to the bacterial cells themselves. And one can again use the same argument: although the mechanism is primarily for the benefit of the plasmid, it also has designed into it features which enable the bacteria to gain benefit and survival, which is useful to the plasmid in a secondary way, because the bacterial cell is a host for that plasmid. Now let me move on to transformation, on the next slide. And here I want to describe the mechanism separately for the gram positive cells as opposed to the gram negative cells, because we have in these two types of bacteria a difference in the cell envelope which apparently has made it necessary for nature to evolve two separate mechanisms that are designed to allow naked DNA to penetrate the cell envelope and undergo recombination. The gram positive transformation mechanism has been studied for some 50 years, and only in the past few years have we begun to understand, through the work of a number of laboratories, the details of the mechanism. But I can make it very clear that it is not understood at a biochemical level. What we have really is a molecular description of some of the events. The slide here illustrates transformation for pneumococcus, which is the most widely studied organism, but the features hold also for the other gram positive organisms, for example Bacillus subtilis and a number of the other streptococcal strains. There are two stages to the transformation process. First we have competence development, which involves an induction of certain changes in the cell which allow it to become permeable to, or to transport, DNA. As the pneumococcal cells grow in a broth medium, they elaborate an activator molecule which is secreted into the medium. As the population of cells increases in density, the amount of activator molecule builds up in concentration until it reaches a critical level at which the entire population of cells is induced to competence. The induction mechanism itself involves the binding of the activator to a membrane receptor, much as a hormone would act in a eukaryotic organism. This process, by unknown means, causes the induction of a series of a dozen or so genes which specify the changes necessary for competence — if we could go back to that slide a bit.
Go back to the other slide. Yes, thank you. These genes specify a protein that exposes certain binding proteins on the membrane surface, and also a number of proteins which act internally to facilitate transfer of the DNA. Now here we have a competent cell which has bound a large DNA molecule, and I show sequentially some of the steps in the uptake mechanism. The first thing that occurs is that this binding protein interacts with the DNA and produces a nick in one strand. This black protein then completes the break, and in some cases DNA comes off, but some of the strands are then taken in to a space which is still outside the cytoplasmic membrane. Here's the cell wall. The DNA is converted to a single strand, and here we see the process further along. In addition, there is a protein, which has been induced by this process, that binds to the single strand of DNA in much the same way that a viral molecule would be packaged. This has a structure which is highly protected against nucleases and is stable enough to be isolated as a DNA-protein complex in cesium chloride gradients and by several types of chromatography. Then, by mechanisms which are not understood, we have an incorporation of this single strand into the chromosome to form a transformant. Now if I could move quickly to the gram negative cells, as shown on the next slide. These are bacteria that my own laboratory is studying, and we became interested in the whole process of how Haemophilus bacteria take up DNA because of a discovery by a colleague, John Scocca, that Haemophilus has the ability to recognize its own DNA during transformation. If you, for example, take a mixture of several types of DNA — Haemophilus DNA, E. coli DNA, calf thymus DNA and so on — the cells will selectively take up only the Haemophilus molecules and incorporate them. This is shown by radioactive labelling experiments here: uptake of Haemophilus DNA as opposed to the absence of uptake of a variety of foreign DNAs. It was clear from these experiments that the cells somehow, at the surface, could identify which molecule was which, and we decided to determine what that recognition mechanism involves — if I could have the next slide. Now to do this we took a pure piece of Haemophilus DNA that we obtained by molecular cloning, using the recombinant DNA techniques. That piece of DNA was then cleaved into a dozen or so fragments using a restriction enzyme, AluI, and the fragments were all labeled with radioactive phosphorus. And here is the mixture. That mixture then was incubated with competent cells, and they were allowed to take up DNA. We found that they took up only two fragments out of this entire mixture. They recognized those two fragments but not the others. So the natural question is: what is there about the sequence in those two fragments that is different from the others? To obtain that answer we continued to use restriction enzymes to break these fragments into smaller and smaller pieces, again asking which pieces are taken up, so that finally we obtained, as shown on the next slide, four different fragments which were small enough that we could easily determine their nucleotide sequence. And when we then allowed a computer to scan these sequences, it told us that they all contained an 11 base pair sequence in common. And it also told us that the probability of that event occurring randomly was about 10 to the minus 11. Well, if I could have the next slide. We continued then to study this presumed DNA uptake sequence, which is shown here.
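The computer scan described above amounts to a search for a substring common to all of the uptake-positive fragments. The sketch below is not Smith's actual analysis; it is a minimal brute-force version, and the fragment sequences and the planted 11-base motif in them are hypothetical placeholders rather than the real Haemophilus sequences. The final line only restates the simple fact that a fixed 11-mer occurs at a given position in random DNA with probability 4 to the minus 11, roughly 2.4 x 10^-7, which is the kind of calculation behind the quoted odds.

# A minimal sketch of a common-substring scan over DNA fragments.
# Fragments and the planted motif below are hypothetical placeholders.

def kmers(seq, k):
    """All substrings of length k in seq."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def longest_shared_kmer(fragments):
    """Return the longest substring present in every fragment."""
    best = ""
    for k in range(1, min(len(f) for f in fragments) + 1):
        shared = set.intersection(*(kmers(f, k) for f in fragments))
        if not shared:
            break
        best = sorted(shared)[0]
    return best

fragments = [
    "TTGCATGCCGTTAGCTCCGA",
    "CCATGCCGTTAGCGGTTACG",
    "GATCATGCCGTTAGCTTGCA",
    "ATGCCGTTAGCGCCTTAGGA",
]
print(longest_shared_kmer(fragments))   # the planted 11-base motif
print(0.25 ** 11)                       # chance of a fixed 11-mer at one position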
We found that the sequence is present on all fragments which can be bound and taken up by the bacteria, whereas a number of available sequenced molecules that do not contain that 11 base pair site are not taken up. We also found that if we modified certain bases in this sequence, that would affect the uptake — implying that the cell was actually interacting with this site in a direct way. And finally, we were able to obtain, by collaboration with Saran Narang in Ottawa, a chemically synthesized 11 base pair sequence which could be inserted into any foreign DNA molecule and then conferred on it the ability to be taken up, thus completing the proof that this sequence must be present and is the necessary and sufficient condition for uptake — or at least for catalyzing uptake. Could I have the next slide, please? Well, at this point we were very curious as to how the cell recognized the sequence in order to initiate uptake. So we began to look at the proteins on the membrane. And first let me point out that competence induction itself involves the new synthesis of a number of proteins which are implanted in the membrane. And this occurs in a synthetic medium over a period of about 90 minutes. If the cells are then returned to a rich medium, they lose the competence very quickly. So it's a true induction, and then a de-induction mechanism, which is called into play presumably only when the bacteria need to genetically recombine. The next slide shows our attempt to purify the membrane receptor. Here we've introduced S35 label during the induction step and then extracted the membranes and solubilized the membrane proteins with detergent, put them over a DNA affinity column, and we find that there is a small fraction of the proteins, about 3 or 4 percent, which is specifically bound to Haemophilus DNA and can be eluted. This fraction contains about 6 different polypeptide chains, which we believe to be part of the transport mechanism for DNA. And one or two of those polypeptide chains probably are involved in the specific recognition of the 11 base pair sequence. If I could have the next slide. I show here evidence that the receptor in competent cells is in the outer membrane, because if you separate outer membrane fragments from inner membrane fragments by a suitable density gradient, the specific binding activity is predominantly in the outer membrane fraction. The next slide shows a summary of what we believe to be the initial steps in the transformation process. One has then the outer membrane receptor, present probably in only a few copies on the cell membrane, and donor DNA containing 11 base pair sites. These interact in a reversible fashion, which we've been able to demonstrate, to form a complex which then at some point becomes irreversible, and transport proceeds into the cell. Now here is really the unknown and more difficult part of the problem. How does this highly negatively charged molecule actually penetrate the membrane? The receptor-DNA interaction is only the trigger for this process. And we're only now beginning to get some notion as to how this occurs. I see that my time has run out. The next slide shows a scanning electron micrograph of competent cells. And if you remember the first slide I showed, the surface was smooth. As the cells develop competence, they form little vesicular blebs on the surface, about 100 nanometers in diameter, which is sufficient in size to contain fairly large molecules of DNA.
If the cells are exposed to DNA, the vesicles disappear and in some cases can be visualized inside the cell. If the competent cells are returned to a rich medium to promote de-induction, the vesicles are released into the medium and we can harvest these vesicles. This is work done by a former student, Dr. Robert Deich. The vesicles can be harvested, and they by themselves will take up DNA in a specific fashion, so that it becomes tightly bound and is resistant to nucleases. In addition, if one looks at the protein content of the membranes of these vesicles, it is highly enriched for the six proteins that we had previously purified from whole membranes. So we are building a circumstantial case for the involvement of these vesicles in the transport process. On the last slide I show a schematic version of how the transport might take place. Here we have vesicular transport, in which DNA could perhaps bind to the vesicle, which would then evaginate and package the DNA, much as a virus would package DNA, and then transport it to the inner membrane, where, by fusion, the DNA could be injected into the cell. I would like to discuss this with Dr. Luria, perhaps, to see if it is a feasible mechanism. On the other hand, we believe that this is a likely mechanism for Haemophilus or other gram negative bacteria, whereas for the gram positive bacteria we think it's more likely that there is...
|
Hamilton Smith received his Nobel Prize in Physiology or Medicine in December 1978. Unfortunately, the 1978 Lindau meeting had been dedicated to physiology or medicine, so the first possibility for Smith to participate as a Nobel Laureate and lecture at Lindau was three years later, in 1981. He grasped this possibility and presented a very clear lecture (with slides) on the mechanisms of gene transfer in bacteria. Compared with his Nobel Lecture in Stockholm 3 years earlier, which is a more technical account of the discovery and use of restriction enzymes, the Lindau lecture sounds like a very pedagogical overview in the frame of a university course on bacterial genetics. The young scientists and students in the audience certainly must have appreciated the level of the lecture. Hamilton Smith himself must also have appreciated the situation, because he returned to Lindau for all the remaining medicine meetings of the 20th Century and has continued his participation into the 21st, also for the interdisciplinary meetings! A factor, which probably is non-negligible in attracting Nobel Laureates to come to Lindau, is the presence of other Nobel Laureates. Werner Arber, who shared his medicine prize with Smith (and Daniel Nathans), participated together with Smith until 1990. Subsequently, he became involved in the organising body, the Council for the Lindau Nobel Laureate Meetings and thus participated in all meetings, irrespective of their subject. So Smith could be sure that in Lindau, he could, at least meet Arber. Actually, if you listen to Smith’s lecture you may notice that he would like to discuss certain matters with another Nobel Laureate, Salvador Luria, who only participated in one Lindau meeting (1981). Another Nobel Laureate probably listening to Smith’s lecture was Rosalyn Yalow, who received her Nobel Prize a year before Smith and who had the use of radioactive isotopes as tracers in common with him! Anders Bárány
|
10.5446/52597 (DOI)
|
Well, students and colleagues, my scientific life is a little bit different from that of some of my associates. I think most of the work I've carried out has been aimed at solving a definite problem, usually a clinical problem. Three years ago I talked about some of the problems of tropical diseases and the development of developing areas of the world. That's one facet of interest. Another facet has been the common viral communicable diseases of children, and I've chosen today to talk about the one virus that I deliberately set out to isolate and had the pleasure of succeeding with. Some of our other isolations had a much greater element of the serendipitous, but that's another story. Being the last presenter on this program, at a point when you're saturated with knowledge and perhaps slightly hypoglycemic, I think you'll probably judge my presentation by the brevity of its content, as well as the subject matter per se. I hope to get high marks on both counts. Well, why should we be interested in varicella? It's a continuing problem. As the introduction mentioned, there's no vaccine, but there's one on the horizon, and it's going to challenge all of us to have it developed successfully, and it may well be that it'll be your generation, with the techniques of genetic engineering, that will produce a good immunizing material for this disease. I'm presenting varicella because of my personal interest, but also because it's a problem of increasing social significance. Here we have a relatively benign disease — may I have the first slide, please? — which in children manifests itself with a rash. They aren't very sick. Yet, as man, the physician, manipulates the human host, we now see increasingly an iatrogenic type of varicella that may be lethal. Now, this is not iatrogenic in the bad sense, as Professor Krebs alluded to yesterday. I'm talking about the modification of the human host, say, to treat cancer with cytotoxic or immunosuppressive drugs, and in the course of that modification, one has the unhappy complication of severe varicella. Now we typically have an individual with a primary attack of the virus that manifests successive crops of lesions on the skin, little vesicles that rapidly go on to become papular and then pustular. One little point I want you to keep in mind is: where does the virus come from that spreads around the community so fast? And I'm going to very briefly summarize the present state of our knowledge, go a little bit into the historical background, and then point out some of these special problems, the unique characteristics, that have made this virus so difficult to work with. From your standpoint, you all ought to be interested in it more or less on a selfish basis. You all expect to achieve 60 or 70 years of age? No doubt, you all will. 50% or more will have another encounter with the chickenpox virus that you had as a child, and then you will get a severe, painful, incapacitating attack of herpes zoster — the Gürtelrose — perhaps with very severe post-zoster neurologic pain. This is one of the reasons that this disease is becoming more important, because we are an aging population. And the next slide, please. The prevalence of zoster is directly related to increasing age. These are some data that Hope-Simpson collected in the course of a study of a panel practice in England, and it was his estimation that if a cohort of 1,000 people were to live to age 85, half would have had one attack of zoster, and 10 would have experienced two attacks. Next slide, please.
We mentioned that medical progress, per se, turns varicella into a lethal entity. This was first recognized and reported by us in 1956, when a child with malignancy being treated with cytotoxic and immunosuppressive drugs did not get the single little series of crops — two or three or four — of vesicular lesions, but instead continued over a matter of four or five weeks to have vesicular lesions and then went on and died. Next slide, please. A situation like that is a systemic disease, and one has to think of varicella as a systemic process. There are lesions all through the body — here, pox in the lungs. Next slide, please. And if you focus down on one of those little areas, you'll see the typical intranuclear inclusion bodies characteristic of varicella and indeed of all of the members of the herpes group. Well, it's this situation that gives us great concern now about varicella virus. In one series of children treated for malignancy, varicella as a complication had a mortality rate of 7%. In patients who received marrow transplants, varicella infection flares up in some 50% of all such transplanted children, with a 12% mortality rate. So as medicine progresses, we're creating new problems, and we need new preventive measures and methods of treatment. Well, now just a brief word about the history. As long ago as 1888, in Vienna, von Bókay suggested on epidemiologic grounds that varicella and zoster might be related. He saw outbreaks of varicella occurring in children exposed to adults with zoster. Now the next slide, please. And then the first sort of entree into modern virology came when Dr. Tyzzer, my predecessor at Harvard, in 1903 studied the evolution of varicella lesions in the skin. And this is one of his slides, stained back in 1903, beautifully preserved, pictured now, and it shows the characteristic features of the herpes group: multinucleated giant cells, intranuclear inclusions. Next slide, please. Now that's a vesicle on the first day of the rash. That vesicle has lots of virus in it, maybe 10 to the 7th, 10 to the 8th infectious particles per ml, in fact, if you were to titrate it out. Here's another of Dr. Tyzzer's preparations. By day two or three, when inflammatory cells begin to migrate in, there is practically no virus there. Virus is very transient. Next slide, please. And to show you some of the meticulous drawings he made at that time — here, typical intranuclear inclusions. He didn't know he was dealing with a virus at that time, but he laid down the basic classical description of the cytopathology of the herpes group agents. Well, things moved very slowly for many years. During the 20s and 30s, biologists and workers tried to get virus into animals and reluctantly had to conclude that there weren't any susceptible laboratory animals. About 1941, a fellow intern and I got the idea that maybe one could use human cell cultures, and we started out to inoculate vesicle fluid into cultures of human cells. That had to be put aside. In 1948 I was able to return to it, and we first got inclusions like this in suspended cell cultures. And then, the next slide, when we switched to sheets of fibroblasts inoculated with vesicle fluid — either from varicella or from zoster, it didn't make any difference — we got plaques of swollen cells that got larger and larger, seventh day, tenth day, and the virus seemed to spread from cell to contiguous cell. Next slide, please. And if one stains the edge of one of those plaques, even at this power, you can see all the multinucleated giant cells, intranuclear inclusions.
Here are normal fibroblasts. Next slide, please. And if you go up under higher power, under oil, at the edge of such an advancing lesion, here are normal fibroblast nuclei, and here are nice big characteristic intranuclear inclusions. Well, with that as a starter, we were rapidly able to work out serologic techniques, immunologic procedures, and show that the viruses of zoster and varicella were alike. And we coined the name varicella-zoster virus. But the virus growing in the tissue cultures proved to be most frustrating. Unlike other agents, the virus remains cell-associated, and very little would come out into the fluid phase of the tissue culture. And this situation obtains to the present. The art has developed some. We can get virus out by sonicating infected cells, concentrating, say, with polyethylene glycol, and maybe get as many as 10 to the 6 to 10 to the 7 infectious particles per ml, but still a titer much inferior to the little vesicle in the skin. Now the next slide, please. Peculiarly, the best yields from the tissue culture system yield less than one cell-free, infectious virus particle per infected cell. Now, if one's going to think about using a killed viral vaccine, and you're going to need huge masses of antigen, then we're a long ways, technically, from getting the antigenic mass that we're going to need. Right now, the killed vaccine is out the window. But what's happening? What happens in the skin is different from growing the virus in vitro. We've worked for years now on trying different cells and trying to improve the system. Is my time up, or are we in the same position as Dr. D. D. was yesterday? Another 10 minutes or so? Okay. At least I was prepared. Well, with serologic tests now and tests for immunity or susceptibility, we can look at children and know whether they need a vaccine, or whether perhaps they might get short-term passive prevention by the use of hyperimmune gamma globulin that will prevent chickenpox. But we know so little about this agent. The next slide, please. It's been notorious that when a case of chickenpox appears on a hospital ward, it spreads rapidly through the ward to all susceptibles. And here is a summary of a paper published last year by LeClair and co-workers, from a study done in Boston: index case here on the ward, and then here were the contact cases. Next slide, please. When they looked at this epidemiologically, here was the index case. Nine out of 10 susceptible patients got it there, three out of five over there, one out of one, and so on. They took chemical smoke, followed the airflow, and believe it or not, there was a flow outside through window air conditioners as well as a flow inside. So where did the virus come from? Actually, in spite of thousands of attempts to isolate virus from the throat washings, respiratory secretions, of patients with varicella, I know of only one success. Now, there's virus in the vesicles, but that is a very labile agent. And I'm not sure that it was vesicle virus that infected all these other susceptibles. Surprisingly, in the tropics, varicella is an adult disease. Apparently, the virus is relatively labile, and its spread from person to person is delayed. You would think that in a poorly sanitated, dense, little, heavily populated village hut in the tropics all the kids would get varicella very soon, but they don't. It's an adult disease. So here are mysteries. We don't know how this thing really gets around. Well, there are other problems. Let me mention latency. Next slide, please. 
All of the members of the herpes virus group have this propensity for latency. Herpes simplex virus can be recovered from ganglia of man and experimental animals in between periods of viral clinical activity. We assume that the same thing obtains for herpes zoster, that it hides out in the sensory ganglia. If the patient has zoster, one can look at that ganglion and find nerve cells degenerating, find the satellite cells with typical inclusions, but in hundreds of attempts to detect or rescue virus from the ganglia at autopsy of people dying of a variety of things, when they did not have acute zoster, they've all been negative. So we don't know how this virus stays latent. And that's one of the interesting challenges. We're learning a little more about the host response and what happens to the host that releases the virus from latency. And we have phenomena in the child, the infant. Next slide, please. Here is zoster in a baby two and a half months old. The mother of that baby had varicella during the pregnancy, and here is a shortened latent period: this baby's immune responses were immature, and it developed zoster a few months after the episode of varicella in utero. Next slide, please. Parenthetically, within the last five years, next, please, we have learned to recognize that, like congenital rubella, there is now a congenital varicella syndrome. It's rare, but if a mother should happen to have varicella in the first or second trimester, the baby may be damaged, and there are characteristic deep deforming scars that develop where the lesions were. Now, if we look at the human host, say, in middle age, and look at patients who have diseases such as Hodgkin's disease, then we very clearly can determine that cellular immune responses to varicella virus are depressed. And most recently, next slide, please, Miller did a study on adults, looked at the blastogenic lymphocytic response to varicella antigen, white cells taken from older people, white cells taken from younger middle-aged people, and showed very clearly that the person over 60 years of age has a relative incapacity to react at the cellular level. In other words, this is an indication of what I like to call immunologic senility. Now, one of the real problems has been, right along, that there's been no animal model for varicella. So there's nothing to do safety testing in, nothing to work out the pathogenesis of this virus in, no non-human host. And it's not a surprise that, because of the peculiar nature of this agent, it was not until 1974, 25 years after we isolated the virus, that the first varicella vaccine was gingerly tried out and published about, by Takahashi and his co-workers in Japan. Since then, two or three other groups have been working with varicella vaccines; these are live viral vaccines, but lacking a human host, I mean lacking an animal host, one has to test out whether it is attenuated or not in children. That, of course, is a scary thing. Fortunately, some of the candidate vaccines look extremely promising, but there remains the problem perhaps of latency and of zoster developing years later. Then there's another new development. Many of the herpes group viruses have some transforming or oncogenic potential. And last year, Gelb and his co-workers published a paper in which they showed that hamster cells underwent transformation on exposure in vitro to varicella-zoster virus. Injection of the transformed cells into hamsters produced malignant tumors. 
Now we have no evidence at all that there's any relationship between varicella-zoster virus and malignancy in man. But the idea of putting an attenuated virus of uncertain parentage into children, when there might be some oncogenic potential, gives one pause. Well, this is where we now stand. I think it'll be years before a definitive assessment of benefits and risks of such a vaccine can be made, and therefore alternative approaches to live virus vaccines should be explored concurrently. The rapid developments in the area of molecular biology and genetic engineering should expedite the development and production of a subunit vaccine that would be devoid of the possible risks of a live viral product. I commend this problem to you and to your generation. Varicella, the last of the common communicable diseases of children, assumes increasing social significance and deserves priority of attention. Thank you.
|
Thomas Huckle Weller won the Nobel Prize in Physiology or Medicine in 1954, sharing the prize with John F. Enders and Frederick C. Robbins. Five years previously, the three scientists had demonstrated that the poliomyelitis virus could be cultured in tissues outside of nerve cells, which galvanized the race for the polio vaccine. Time was not wasted. At the time of the Nobel Prize announcement, the vaccine was already subjected to clinical trials, leading to large-scale vaccination of schoolchildren in 1955 [1]. Weller remained in the field of clinical virology, and had a particular interest in viruses that usually occur in childhood; rubella, Coxsackie, or varicella zoster, known as chickenpox. During this brief lecture, Weller focused on a “problem of increasing social significance”, but one that rarely makes the headlines despite its prevalence. Once the symptoms of varicella clear, usually in a child, the body never completely rids itself of the virus. Varicella lies dormant in nerve cells. By the time the child reaches middle age, there is a considerable chance the virus will reactivate in the form of “severe, painful herpes zoster”, in the words of Weller. It is estimated that half of those who live to the age of 85 will develop herpes zoster, also commonly known as shingles, while around 1% will have had the disease twice. Weller cited these statistics in 1981, and they are still relevant today [2]. Yet the incidence of herpes zoster is increasing, and this is simply a result of a longer life expectancy. Weller and his colleagues isolated the varicella virus in 1952, effectively showing that varicella and herpes zoster are caused by the same virus. The symptoms are different, however, and while varicella is often mild and quickly forgotten, zoster may result in various neurologic complications, which may linger for months or even years. The disease is particularly serious in the immunocompromised, such as patients with cancer or who are HIV positive, or those who have undergone organ transplants. Soon after this lecture was delivered, a vaccine against varicella became available, yet it is usually only advised for specific groups, such as young children in child care facilities, and not part of national immunisation programmes. In 2006, the first herpes zoster vaccine was introduced, which is recommended in many countries for adults over the age of 60 [3]. Despite the fact that there are vaccines for both varicella and zoster, there is still much debate over whether these are illnesses that warrant enough of our attention to rationalise widespread vaccination. It seems that most of Weller’s lecture could be delivered unchanged today. Hanna Kurlanda-Witek [1] Baicus, A. (2012) History of Polio Vaccination. World J Virol. 1(4), pp. 108-114. [2] Cohen, J.I. (2013) Herpes Zoster. N Engl J Med 369(3), pp. 255-263. [3] Warren-Gash, C., Forbes, H., and Breuer, J. (2017) Varicella and herpes zoster vaccine development: lessons learned. Expert Rev Vaccines 16(12), pp. 1191-1201.
|
10.5446/52598 (DOI)
|
Mr. Prime Minister, Count and Countess Bernadotte, Mr. Chairman, ladies and gentlemen, this talk is a sequel and in a sense an end piece to the talk I gave you three years ago, when I discussed results of a study over about eight years that my wife and I together have conducted. That study was characterized by the application of some methods developed in animal ethology to abnormal children, disturbed children, who do not speak and therefore had to be studied by their nonverbal behaviour, just as animals have to be. I reported on what our studies, the application of these methods, made us conclude or interpret about the autistic condition, about what it was like to be autistic, and also about its ontogeny, the genesis, what made, or could make, children autistic. And I want to report today on what I call the end piece, on how it might be possible to cure them, how we have found signs that we can cure, against expectation, a number and perhaps a high proportion of these unfortunate children. Now when I speak of autistic children, then I don't care too much about this label, but I'm speaking of children that you can recognize by the following observable behaviours. One is what we call social avoidance. They do not want to make any contact with any other human being, not even their own mother. Second, reluctance to explore an unfamiliar physical world. They do not explore. They shrink back from anything that is unfamiliar to them. They do not develop speech, or if they begin to develop it, it regresses again and they become totally mute. They resist any change in space or in time. Any new situation, any slight change in a familiar situation creates tantrums, and also changes in routine, changes in what is done in time. They show a continuous repetition of very simple, often very bizarre, strange mannerisms or stereotypies. Time after time the same. They scratch their ear, they may make curious movements with their hands, half looking at them, they may spin pirouettes around, and similar very simple and very monotonous, endlessly repeated movements. They become in general retarded, although almost all of them, and perhaps all of them, have what are called islets of good performance. In some things they can be very good. And further, they usually show sleeping difficulties. They sleep very poorly. They are generally very over-aroused. And some of these points have to do with each other. Our conclusions about being autistic can be summarized in an expression of a few words. They live in a continuous emotional imbalance which is dominated by hyperanxiety. They are anxious, they are apprehensive of almost everything. If this is seen as central to the whole syndrome that I have described, then the syndrome becomes very clearly understandable. It's a very plausible way of looking at the syndrome. You find, namely, that part of the behavior consists of incipient, half-hearted social and exploratory approaches. But at the same time they shrink back, and the behavior consists of equally incipient and not complete withdrawal from persons and from new situations. And you also see in the behavior the direct expressions of the motivational conflict between these two incompatible things. You can't approach and avoid at the same time. And whereas in normal children the avoidance is initially there but gradually wanes, in these autistic children it stays there. And they are in perpetual conflict. They are not just merely anxious. 
At the same time they are very willing and even want to approach situations and persons, but they can't bring themselves to do it. And they are in this internal balance between the two, which we call an emotional imbalance. As to becoming autistic, which is a quite distinct question, how does a child become autistic? We claim, without being able now to elaborate that, but we can do that in our informal discussions, that there is only a very minor genetic component. It is possible that there are genetic differences between children connected with this, and that some children are more vulnerable than other children. But that is all. And we know that from studies of identical twins of whom some are autistic: of a pair, one may be autistic and the other not autistic. To that extent this points to a non-genetic component. And there are more discordant identical twins with regard to autism than there are concordant identical twins. And even those that are concordant can be partly genetically determined, can be partly determined by the environment, as I can also argue, but not now. That I did last time. Secondly, there is no evidence at all for gross structural damage, for instance gross brain damage, not even for minimal brain damage. If there is minimal brain damage, it's quite equally possible that that is the consequence of having been autistic for a long time as that it is the cause. It's a correlation that is sometimes found. But even the minimal brain damage, which is a fashionable word nowadays, has been inferred, not found. But, contrary to what is at the moment quite widely held among the circle of experts, there is quite some evidence of very early traumatization, of external environmental influences in early youth, which may undo the normal development of a sense of security, which controls both the approaches to other people and the exploratory approaches to unfamiliar environments, from both of which a normal child learns so much. We have traced together something over 20 possible early influences that may happen, some before birth, some during birth, some shortly after birth, usually well before the child is 30 months old, which can make a child derail into this autistic deviation of the normal development. What they have in common is that these early conditions hinder the early affiliation between mother and infant. All socialization, the development of social relationships in any human being, begins with the formation of a strong emotional bond by which the child is tied to the mother and the mother to the child, and in which mother and child interaction builds a kind of upward spiral for both, for the child mainly, but also for the mother. The second thing we find about these external traumatizing influences is that they are all characteristic of modern social conditions, at least in our type of modern industrialized, urbanized societies. Societies where many people work under great stress, which extrapolates itself, radiates onto the mother-infant and the father-infant and the family-infant, and ultimately the community-infant relationships. Now these conclusions all ran counter to prevailing opinion. Even at the moment, most, almost all, the experts on autism have come to a standstill. They all believe in what they call organic or structural damage, which is irreversible, and they believe in the impossibility to rehabilitate these children. 
These children are given up and they end up in mental institutions, and are often then suddenly diagnosed as schizophrenics. And that is still true; the latest summaries, as late as in the British Medical Journal in late 1980, said there is no known treatment. And now I'm going to turn to this problem of treatment. Since we spotted that there was a possible environmentally controlled etiology, an origin, of the disorder, and having also seen a number of recoveries of autistic children, which should not happen if they were ineducable, we began to believe in the educability and the rehabilitation of autistic children. And we began to think of the possibilities of therapies from two ends. First of all, we looked at our own analysis, our understanding as we saw it of what makes a child autistic and what the autistic situation is like, and deduced from that what could be expected to cure an autistic child. But of course what you expect is very much dependent on what theory, what hypothesis you adhere to. So our hypothesis could be wrong, and we had to look, in the empirical, typically medical way, the proof of the pudding being in the eating, at autistic children that do recover. And if so, how? What has happened? What has made them recover? And for that, we had two sources of information. One was that we looked at the quite considerable number of autistic children who had recovered without doctors ever knowing about it. We studied what we called the do-it-yourself mothers, and how they have proceeded. And we've seen in a number of cases, and these had to be studied in depth, that these procedures of the do-it-yourself mothers had quite a lot in common. What they did was actually what you would expect would cure autistic children, namely they made this anxiety wane. And how did they do that? By being super-mothers. But we hadn't got beyond that. We had seen, yes, these children must lose this anxiety, and once they begin to feel as secure as normal children, they will explore, they will accept instruction, they will even seek instruction, they will practice, they will do all kinds of things. And you need not then teach special skills; they teach themselves, so to speak, in the interaction with the environment. But that's where we had got stuck. And the breakthrough came when we got in touch with a New York psychiatrist, Dr. Martha Welch, who had in a rather empirical way come to a conclusion that we at the beginning would have shrunk back from. She said to the mothers, she persuaded the mothers of autistic children, to go back to square one, so to speak, and to treat their children, even if the child were 10 or 12 years old, in the same way you treat a baby or a toddler in distress, or overtired or ill, or an upset child: by holding the child, if necessary forcibly but lovingly, with all the non-verbal expressions and verbal expressions that belong to that, until the child gives up its initial struggle. It does struggle originally, but you must go on until the child turns around and becomes positive, socially positive, to the mother again. And then, in the course of one session, which in the beginning may last a couple of hours, the child will begin to socialize, it will also feel that it has a secure home base, and it will set out and explore. And now I should like to show you a few slides, in which I show you two sequences from two sessions supervised by Dr. Welch. The first shows the beginning and the end of a session. 
Sorry, could I have the first slide back, please? That's the first; this is the second. Here you see the beginning of a session, the boy resisting very much and furiously, the therapist, Dr. Welch, sitting in the background, rather unconcerned, because she knows this is the beginning of many sessions; you have to repeat these sessions. The next, please. Next is the end of this session, where the boy and the mother do socialize with each other. I needn't point out how different the facial expressions and the body expressions are. Now the next six slides show six successive stages in a treatment, one session of a series of sessions in another child. You see the child, the mother, and the mother's mother. Now, in the first you saw the child in a very resistant condition; here the child is still not very happy but begins to look at the face of the mother. The therapist begins now to study what has happened, because after the familiar beginning, now it begins to be interesting. The next slide, please. Now the child begins to explore the face of the mother; first it looked only at the face of the mother, but now it begins to touch it. The expression in both child and mother and, interestingly, in the therapist has changed. All the details of this we can discuss in detail in our informal discussions. Next slide, please. Now the child begins to interact with the mother's mother, who has been supporting the mother morally, and you see again that the face of the therapist shows delight in this development. Next slide, please. Next, the child, having got a secure home base, ventures forth away from it and socializes with the therapist, and you see again the typical non-verbal expression of this friendly interaction. And the last slide, the next one, shows that the child ventures out into the room and begins to explore objects in the room. Now, thank you, that was the last slide. That shows the typical course of one session. These sessions have to be repeated a number of times, at decreasing and increasing intervals, but the mother is taught how to do this, and the interesting thing is that as soon as the child begins, and it's usually the child who begins, to react first to this forced holding, as soon as the child begins to react in a friendly way to the mother, then the mother receives that as a reward that reinforces her maternal behavior, which may have been suppressed for a long time, because having an autistic child is, as you can imagine, a very miserable experience; and that makes the mother behave in a more motherly manner. That again encourages the child, and the downward spiral is reversed into the upward spiral which is normal. I put it now in very simple words; I needn't stress that this is a very complex set of interactions. So complex that even in the quite exhaustive literature on child behavior, even on nonverbal child behavior, a great number of the behavior patterns that you see in autistic children and in recovering autistic children haven't even been described yet, and yet you see them repeated time and again. What is interesting in this is that, since the autistic child has become more and more retarded the longer it has been autistic, the more and more it has to catch up on, and the longer the interaction between mother and child must go on to become normal; and in the run of this upward spiral the child runs very rapidly through a process that a normal child takes five, six years for, and the autistic child in this way catches up on its retardation. 
Very fast, showing partly how much it has learned latently, by unobtrusively observing what's going on around it, and partly by now wanting to learn, and learning with such an incredible rapidity that they often end up well above the average normal child. That's another interesting point which we may discuss informally: so many of these autistic children who have recovered show that they are in some way special children. They have a very special social sensitivity, perhaps a little more timid than other children, but they usually have very special gifts, either intellectual or artistic or social gifts. One thing, for instance, by which do-it-yourself mothers have helped autistic children to recover even faster is to engage their help for even more unfortunate children. Suddenly the autistic child, who has always felt anxious and closed in, is given the message: you can help somebody else. And that does a great deal for the self-esteem, and that gives a boost to this emotional development on which they have to catch up. I use the word emotional in the subjective sense, more or less as equivalent to what we would call motivational in the more objective sense. What fascinates us is the success rate. We have now studied, together with Dr. Welch and Dr. Zappella of Siena in Italy, and in the cases of do-it-yourself mothers, about 29 cases on which we can report, and of these 29 cases 25 have recovered fully or practically fully; not quite fully, because the treatment hasn't been going on for a long time. Now, of the four failures, two of them are because the mothers gave up, and it's very interesting but also very worrying that, when the child makes the first friendly approach to the mother, there are some mothers who have been so traumatized themselves, either by having had an autistic child for so long or by earlier conditions such as their own early childhood, that they shrink back from the friendly early approaches of the child, which they have so rejected by then that they find it difficult, if not impossible, to bond again with it. The other two failures were due to sheer physical circumstances. The patients lived so far from the psychiatrist that they could see him only a couple of times, and not enough times for the psychiatrist to teach the mother, and then the parents, more fully how to conduct these sessions; and it's quite possible that they will still recover if this difficulty can be overcome. What worries us about this whole story is that it has become more and more clear that, although we can now cure autism (we don't know what proportion of children can be cured, but the indications are, with information going way beyond these 29 children, that it may well be possible for the majority of children, or rather child-mother dyads), since all the autism-inducing factors that we have been able to find are in the social environment, in the modern social environment, what is a real cure for autism is, seen in a wider context, only a symptom cure for our society. 
If autism is, as we believe, a consequence of psychosocial stress, to use the word that is widely used, then it is a societal disease, and then the treatment of autism alone would be a symptom cure in this wider context. And although I speak of children with this particular syndrome of autism, we suspect, and are fairly sure, that there is a much larger proportion of children who are labelled in a different way; and, you know, labelling, diagnosing, in psychiatry is still very much an art, and no two psychiatrists really agree on a diagnosis except in such rare cases where the syndrome is well described. We consider this a kind of test exercise of the application of the ethological method to emotional disturbances, and we begin to sense that emotional disturbances, particularly in childhood, are very common at the moment and may very well be on the increase. One indication of that is that the country where autism causes the most worry and gets the most attention is not a western country but a westernised country: Japan. The news we get from Japan is just frightening, and it looks very much as if these mental disorders in childhood, which will lead, if not cured, to mental disorders in the adults who have to take part in adult life and even have to take part in running our society, may well be on the increase in the very countries which believe, or have believed until recently, that they are making great progress in civilization. There is a heavy price, obviously, to pay for the modern industrialised, competitive, anonymous society. I have no time to elaborate, as I have said, but we have fortunately at last been able, my wife and I, to write up this whole story in the form of a book, which is in the press now, and in which we take our time not only in explaining these methods, which take some explaining to psychiatrists who have been raised in such a totally different way of thinking than biologists, but in which we also describe in detail the case histories on which, inevitably, we have to base our conclusions. I would like to point out in conclusion that although, as an experimental zoologist, I would love to be able to run experiments with proper controls, that is in this case absolutely impossible. A, because of the fantastic complexity of the phenomena, of which most of the aspects haven't even been described yet. B, because the cures take such a long time, and from the moment you even suspect that you have spotted the factors that may bring on or perpetuate such a disorder, you are in the same position as the pharmacologist who begins to suspect that this or that drug may help in the curing of this or that disease: you are very reluctant to deliberately withhold treatment from a whole population, and you would really have to have enormous samples of thousands or tens of thousands of children whom you would deliberately expose to these terribly damaging, fatal external influences. What we have learned from this, we think, from this little incursion into a little corner of psychiatry, is that psychiatry, as a pre-science, in many aspects a pseudoscience, is very much in need of an injection from even such a primitive science as ethology still is. Thank you for your attention.
|
This is the second of the two lectures that Nikolaas Tinbergen held in Lindau. Tinbergen pioneered experimental investigations on top of the “watching and wondering” which characterized most of ethology during the first half of the 20th century. But both lectures concern what can be learnt concerning autism by applying the methods of ethology, i.e. watching animal behaviour. In his first lecture at the Lindau Meeting 1978, he described the research project that he and his wife Elisabeth had started around 1970 and which was reported on already in his Nobel Lecture in Stockholm 1973. The main conclusion was that autism is not connected with the genes, nor is it an effect of brain damage. Instead a more psychoanalytical hypothesis was put forward, that autism derives from a hyperanxiety developed in the child through early loss of contact with the mother. In his second lecture, the present one, Tinbergen reported that they now had found a cure for autism. This cure originated with an American psychiatrist, Martha Welch, who had pioneered a therapy where the mother holds the autistic child for extended periods of time and thereby establishes contact with it. Tinbergen refers to an investigation where he and his wife found that almost all autistic children responded to the cure. Inspired by this result, they wrote a book together entitled “Autistic Children: New Hope for a Cure”, published in 1983. As of today, a detailed understanding of autism is still missing, but it seems clear that the hypothesis of the two Tinbergens is not the whole story and that there may be some genetic aspects of autism. Their optimism about having found a cure also seems to have been somewhat premature. Anders Bárány
|
10.5446/52599 (DOI)
|
These nice little translation devices unfortunately have one flaw: you can have English translated into German, or German into English, but unfortunately there is no translation for broken German, and for that reason I am afraid I must give my talk in English. So, today I am going to speak about the abundant elements, the abundant elements in the galaxy. By that I mean, since there is so little sand and glass up there, really only oxygen and carbon. The abundant elements begin, of course, with the Big Bang, if there was a Big Bang, and by the end of that time the light elements up to mass 4 had been produced. Beyond mass 4 one needs helium burning in stars, and what matters there is that the big stars are very much hotter than the small ones. This has to do with a very simple physical principle, namely that elephants have trouble not overheating because they have so little surface area. When a star is very, very big, it has much more trouble, because it has so little surface area for its volume. A very simple physical calculation, which you can all do, is that each of you produces as much energy per gram as the Sun does, but the sheer bulk of the Sun makes it very much hotter: the Sun has very little surface area for the very large volume inside. And as the stars get bigger, the problem becomes progressively more severe. They get so hot that they burn not only the hydrogen into helium (there are only a few immutable particles, protons and neutrons, electrons perhaps, and, if you are very fussy, positrons; anything beyond that you can, at least for the purposes of this talk, take up socially with this morning's speakers), but the helium as well. With these very simple reactions one can, in these big stars, immediately make carbon-12 and oxygen-16. Very quickly this material is back in interstellar space, where new generations of stars are formed. In the new generations of stars there is, with each generation, a certain fraction of smaller stars, stars that are even more frugal than our own Sun; and we have enough economic trouble without the Sun making it worse. In these matters the astronomers can forecast better than the economists: our Sun will go on for billions of years, and it hardly matters whether those are German or American billions. Under these circumstances we find that very quickly, as the generations of stars go by, what goes into forming new stars, what we can trace, is not all of the output of the stars but really only the tracer, the interstellar gas. And with this understanding, the fraction of the material, the fraction of the contribution from the big stars, diminishes as one goes in toward the center of the galaxy; I have already said it the other way around, and I believe I now have it right: the fraction of the gas from big stars increases as one goes out from the center. Another thing one learns very early in childhood is that one is better than somebody else. The people the physicists think they are better than are the chemists; that has already been said at this meeting. What one would like to do, then, is not to have to bother with the chemistry at all. 
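As a brief aside on the energy-per-gram remark made a moment ago, here is a rough back-of-the-envelope check using standard round numbers for the Sun and for a resting person; these figures are mine and are not quoted in the talk:

\[
\frac{L_\odot}{M_\odot} \approx \frac{3.8\times10^{26}\,\mathrm{W}}{2.0\times10^{30}\,\mathrm{kg}} \approx 2\times10^{-4}\ \mathrm{W\,kg^{-1}},
\qquad
\frac{P_\text{person}}{M_\text{person}} \approx \frac{100\,\mathrm{W}}{70\,\mathrm{kg}} \approx 1.4\ \mathrm{W\,kg^{-1}}.
\]

Gram for gram, a person does indeed outproduce the Sun by a factor of several thousand; the Sun is nevertheless far hotter simply because it has so little surface area relative to its enormous mass.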
If, for example, you want to tag a nucleus that comes from a high-mass star against a nucleus from a low-mass star, the nicest way to do it is to forget the chemistry; that is, to compare nuclei which have the same chemistry and the same properties. So when one studies the elements, and since, as I said, apart from the sand it really comes down to just two things, carbon and oxygen, or rather oxygen and carbon, which are the most abundant, what one does is simply compare the massive-star isotopes of oxygen with the low-mass-star isotopes, or at any rate study the isotopes one against another in such a way that one can trace the stellar processing from the inner parts of the galaxy outward, and then also learn something about one particular spot in the galaxy which holds a particular interest for us, namely our own solar system. For this I have chosen five isotopes. Oxygen, as you know, has three stable isotopes: oxygen-16, the most abundant, which comes from helium burning, and oxygen-17 and 18. In the case of carbon there are carbon-12, a big-star product, and carbon-13, a further product of a secondary process, namely taking helium-burning material into a low-mass star and putting hydrogen onto it. If one looks at the ratio of carbon-12 to carbon-13 as one goes out through the galaxy, the idea is that there are many low-mass stars. The low-mass stars begin with their own stellar winds and expel their own products, and in their output they produce a good component of hydrogen-burned material, that is, material in which they have put a proton onto, in this case, a four-alpha-particle or a three-alpha-particle nucleus; and one expects this kind of general trend. Out here at ten kiloparsecs, that is, some 30,000 light years from the galactic center, there is yet another number, and the ratio at the dot is the number we expect for the Earth. In this case the line does not have to go through the dot, because the Earth formed from the pre-solar nebula five billion years ago (US billions, that is, 5 times 10 to the 9 years), and at that time the interstellar medium presumably did not yet contain as much low-mass-star product as it does now. So we expect a natural progression in this simple picture, from a very high abundance at the center of the galaxy, sorry, a bit less here, and then considerably less here. This interval reflects a time variation, the fact that the stars have not been at it as long, and this one reflects the fact that the inner part of the galaxy is more agitated and denser, so that stars are made faster. So there have been more generations of stars here than here, and more generations of stars here than in the material we had for lunch. In that material, as I mentioned earlier, the carbon-13, in whatever it was you had for lunch, was about one percent, unless it was an extraterrestrial cake. So much for those two. Now there is a fifth isotope, oxygen-18, and oxygen-18 is a more speculative one, because the normal, simple nuclear-physics processes in the low-mass stars that make these two do not make it. 
There is a suggestion that another low-mass-star process, not the same process but a nova explosion, a process other than the ordinary stellar wind, is responsible for oxygen-18. The first question one asks is whether oxygen-17 and oxygen-18 come from the same stock, from the same population; that is, whether they come from the same population of stars. Not from the same star, mind you: one low-mass star may, for example, give off oxygen-17, and another, perhaps a second star, may then explode in a process out of the ordinary. But as long as the two populations are spread through the galaxy in the same way, there is no time variation and no spatial variation between them; in that case the line runs like this. If, on the other hand, oxygen-18 has something to do with the big stars, then one can get a slope, a gradient. So, of course, the nice thing about this game is its certainty: when you start out there are only two possibilities, and you can settle between them with a single experiment. In this particular case it is an especially easy experiment if one uses carbon monoxide, because all one has to do is look at one species; nature, as Bob said a little earlier, provides a great deal of carbon monoxide. If one simply compares the oxygen-18 isotope of CO with the oxygen-17 isotope of CO, both of these species are so underabundant in space that the problems of line formation, of optical depth, do not come into the lines. That is a very straightforward measurement, and all one has to do is make it, which I did a year or two ago. As it happens, the young woman who took most of the data for me in this experiment is in the audience; for those of you with social interests, see me later, her name is Lauren. So, anyway, what happened is that we have these data, and I think everything is fine; all we have to do is reduce the data and then see on which of these two lines the points lie. And here is the result. What one sees is a very horizontal line, exactly what one expects if, as the nuclear physicists had surmised, oxygen-17 and oxygen-18 have a common origin in the populations that are their sources. There is one further point out here; that is actually the envelope of one particular star, a star that is losing hydrogen-burned material, and that star, as we know, does not have the process for oxygen-18. But presumably, because the points for the average population in interstellar space are flat, there are in effect equal numbers of stars making oxygen-18 and oxygen-17. So far this is very comfortable. The horizontal line is comfortable; what is uncomfortable for us is that the run from the center of the galaxy to the outer part of the galaxy does not work for the solar system. That is, since there is no spatial difference between oxygen-17 and 18 in their relative abundances, there is also no temporal difference: as time goes on, you get the two in the same proportion. And that means that, judging from our own point, there is something funny, something funny about the place we live in. 
Presumably, when we come to the formation of the solar system, there was a rather unusual nuclear composition, amounting, in terms of the physical quantities involved, to several parts per million of the mass of the solar nebula; so that somehow this outer part of the galaxy had a somewhat different nucleosynthetic history. Some nuclear process, let me put it that way, at the beginning of our evolution, long before mankind ever existed, some nuclear explosion or some very energetic nuclear process somehow coincided with the birth of the solar system. If that was a necessary coincidence, then perhaps there are not many planets, because presumably there are not that many coincidences; or perhaps it was not very important, and there would have been a solar system even without a nuclear explosion. We can only speculate about that. But it tells us that the simple picture I had a moment ago does not work; it does not work in the simplest, most straightforward situation we can imagine. The next question, of course, is whether we have all the relative abundances right. We do not know whether the Earth is enriched in oxygen-18, or whether the oxygen-17 is in this case depleted, relative to the interstellar medium. But one gets into more trouble when one takes these rarer isotopes and now compares them with the abundant species, because then one runs into line formation. That is the second topic I want to take up this morning. I am glad Bob spoke first, because he has already explained much of this to you, namely that the most abundant species here is saturated. With a little mathematics one can convert the ratio of the observed temperature to the temperature of the carbon monoxide at the same velocity into an opacity. And as long as one does not try to look at the carbon-12 directly, but uses it as a tool to correct the opacity of the other species, one can then work, if not with this species, at least with this one, this one, and then still rarer ones. As it happens, I have no time today to show the data one can now take on the rarest species, the ones with both a carbon-13 and an oxygen-18 nucleus in the same molecule. Such measurements are possible, but because this line is partially saturated, one has to be rather careful. As Bob said, when one goes out into the wings of the lines, where the line is no longer saturated, the carbon-12 to carbon-13 ratio one measures is much larger. So, a few years ago, when we began this endless series of experiments to understand interstellar isotopes, or, to put it another way, a few years ago, when we had much less information and therefore understood the problem much better, what we did in the experiment was to throw away the centers of the lines. In other words, we made no measurements in the region between these two points, where the line is saturated, but only out on either side. That seemed logically consistent, although it means one ends up making the measurements only in the places where this ratio comes out high. 
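A minimal sketch of the kind of opacity correction described here, written under common textbook assumptions rather than as the exact procedure used in the talk: treat the heavily saturated 12CO brightness in each velocity channel as a proxy for the excitation temperature, convert each rarer isotopologue's brightness into an optical depth, and then take the channel-by-channel ratio of two rare species. The function names, the synthetic Gaussian spectra, and the particular choice of species are illustrative assumptions.

```python
import numpy as np

def channel_opacity(t_rare, t_main):
    """Optical depth of a rare CO isotopologue in each velocity channel,
    assuming the saturated main 12CO line traces the excitation
    temperature: t_rare = t_main * (1 - exp(-tau))."""
    frac = np.clip(t_rare / t_main, 1e-6, 1.0 - 1e-6)  # guard against noise
    return -np.log(1.0 - frac)

def corrected_ratio(t_a, t_b, t_main):
    """Opacity-corrected abundance ratio of two rarer isotopologues
    (for example 13C16O over 12C18O) as a function of velocity."""
    return channel_opacity(t_a, t_main) / channel_opacity(t_b, t_main)

# Synthetic brightness temperatures (kelvin) on a common velocity grid:
velocity = np.linspace(-5.0, 5.0, 41)             # km/s
t_12co = 30.0 * np.exp(-(velocity / 3.0) ** 2)    # saturated main line
t_13co = 4.0 * np.exp(-(velocity / 2.5) ** 2)     # rarer species
t_c18o = 0.6 * np.exp(-(velocity / 2.5) ** 2)     # still rarer species
ratio_13_18 = corrected_ratio(t_13co, t_c18o, t_12co)
```

Because both rare lines are nearly optically thin, the correction stays small, which is exactly why comparing two rare isotopologues sidesteps the saturation problem that plagues the main line.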
We had the method worked out at the time, but what one normally does in such a paper is to bury it in the text and say it is not a problem. And the problem we have with it, something Bob also mentioned, is that we must consider the model of a collapsing cloud. This single symbol is supposed to be the observer. The observer looks at a cloud which is collapsing onto a core; perhaps the core itself is cold, but the outside of the core is falling in. So as one looks at this core, one sees gas here, red-shifted and blue-shifted gas; that is, the radiation coming from here arrives at one wing of the line, and the radiation coming from there arrives at the other wing of the line. That would not be so bad, except that the chemists get the last laugh, because in the outer parts of this cloud there is a rather peculiar set of conditions. What one has in the outer parts is a rather high temperature, a fairly high density, and a small amount of ionized material. And under these conditions the ordinary hand-waving argument which I, as a simple physicist, would like to make, namely that when I drink a liter of beer the carbon-13 and the carbon-12 behave the same because they are chemically identical, no longer holds. In interstellar space, on the other hand, the stability of the carbon monoxide molecule does depend on which isotopes it is built from. The molecule, to use a little freshman physics again, vibrates, with its bond acting as a spring. The zero-point vibrational energy depends on the masses of the constituents: if one makes those masses a little heavier, the zero-point vibrational energy is a little lower, and by an amount utterly negligible in everyday experience the molecule becomes a little more stable. This difference is so small that for neutral molecules an exchange happens only at extremely high temperatures; there is something called activation energy, and everything depends on the height of the activation barrier. On the other hand, if one of the two constituents is ionized, if you take a carbon monoxide molecule, carbon-12 oxygen-16, and a charged carbon-13, it can react without an activation barrier, replace the carbon-12 atom in the molecule, and release a little bit of energy. In the parts of the cloud where we were making our determination, then, there was somehow extra carbon-13 in the carbon monoxide we were measuring. In other words, carbon monoxide was acting somewhat like a biological stain: it took up the carbon-13 and gave us better contrast, it was easier to observe, but in this case it did not give us the right answer. Well, thanks to the splendid equipment which exists now at Bell Labs, we're able to make these measurements again. And having been fooled in oxygen-18, I decided to take on the most simple-minded project I could. And that was, instead of doing what had been done before, which is to measure a great number of clouds at one position each, to take just two clouds and measure so extremely carefully at several positions in each cloud that one could actually measure the ratio as a function of velocity. That is, what one could do is obtain these spectra and get them so noise-free that, having corrected them with the carbon-12 for opacity, one could actually plot the ratio as a function of velocity. Obviously on the edges it gets noisy, but in the middle that might be possible. 
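The fractionation argument above can be written down compactly; the harmonic-oscillator formula is standard, and the size of the energy difference quoted at the end is a commonly cited literature value rather than a number given in the talk. Modelling the CO bond as a spring of force constant $k$ with reduced mass $\mu$:

\[
E_0=\tfrac{1}{2}\hbar\omega=\tfrac{1}{2}\hbar\sqrt{k/\mu},\qquad
\mu(^{12}\mathrm{C}^{16}\mathrm{O})=\frac{12\cdot16}{28}\,\mathrm{u}\approx6.86\,\mathrm{u}
\;<\;
\mu(^{13}\mathrm{C}^{16}\mathrm{O})=\frac{13\cdot16}{29}\,\mathrm{u}\approx7.17\,\mathrm{u},
\]

so $E_0(^{13}\mathrm{CO})<E_0(^{12}\mathrm{CO})$, and the barrier-free ion-molecule exchange

\[
^{13}\mathrm{C}^{+}+{}^{12}\mathrm{CO}\;\rightleftharpoons\;{}^{12}\mathrm{C}^{+}+{}^{13}\mathrm{CO}+\Delta E,
\qquad
\Delta E=E_0(^{12}\mathrm{CO})-E_0(^{13}\mathrm{CO})\approx k_B\cdot 35\,\mathrm{K},
\]

is slightly exothermic, which is enough to drive extra carbon-13 into CO in the cold, partly ionized edges of a cloud.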
And then plot those ratios as a function of velocity for several positions in the cloud. And this is in fact the data that I was able to obtain last winter on one of these two clouds, NGC 2264. Now these spectra are really remarkable in several ways. The first is that each of them has a central minimum. There are ears on the side, wings which increase, and except for the center, the wings are quite high. As one moves out, the central minimum is constant over a number of positions. Now that's important that the central minimum is constant, because had I done the saturation correction wrong, then presumably if this was some kind of saturation effect, because all this data is independent, there are very large differences in relative intensities, one would expect it to jump around. But the fact that if in the center of these one gets exactly the same number in the minimum at the central velocity, one presumably is measuring something. But then, when one has those wings on the side, we're measuring something else. As we move off, the wings get closer together, and also the minimum between the wings rises. Now let's try to understand that in the picture I showed you a moment ago. We look in the center of the, now let us assume that the edge of the cloud, the edge of the cloud has extra carbon 13 in it. Then if one looks through the center of the cloud, what one is going to see is at the high velocity edge and the low velocity edge, one is going to see an enhancement relative to the middle. Furthermore, as one looks off to the side of the cloud, the projected velocities get closer to each other, and so the line is expected to narrow. Also as one gets off to the side, the unfractionated core, the material which hasn't been messed up, diminishes, and so that moves up as well. So as one moves either to the side spatially or front to back in velocity, one sees exactly the same effect. Finally, if the core of the cloud itself in the middle, and Bob mentioned that these clouds are not in freefall, if the core in exactly the center is agitated, that is to say there is a high velocity component in the middle somewhat, then a line which goes exactly through the core, ought to be messed up somewhat at the high velocities, and so the ears, the ears then in the middle ought to be diminished, and in fact they are. They are lower here because in addition to the high velocity wings we are getting from the outside of the cloud here and here, also in other words, there is an underlying high velocity agitated feature of unfractionated gas which lowers the average ratio. So from this data we have, for the first time, unmistakable proof that in fact these clouds are collapsing. And furthermore, we have a mechanism now for tracing the chemistry as well as the nuclear physics of these objects. One of the experiments I did some years ago had to do with tracing deuterium in the galaxy, and under those circumstances deuterium is tremendously affected by fractionation in these clouds. And it will be fun to go back as I plan to next year and trace them again and see if in fact the deuterium has similar behavior. The other cloud is a little less interesting, exactly the same thing happens. Here unfortunately the cloud is about six times as far away, and as soon as one moves away from the center, one immediately starts seeing a rise, the ears aren't very big, this cloud is very much agitated, but on the other hand somewhat the same symmetry that one saw before, one sees in this second example as well. 
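To make the geometric argument concrete, here is a small toy calculation (my own illustration, not anything from the talk) of where the two "ears" should sit in velocity. For a thin spherical shell of radius R collapsing at speed v_infall, a line of sight at impact parameter b intersects the near and far sides of the shell where the infall velocity projects to plus or minus v_infall times the square root of 1 - (b/R)^2, so the two wing features move toward zero velocity, and the profile narrows, as the beam moves off-center, just as described above.

```python
import math

def shell_wing_velocities(b, radius=1.0, v_infall=2.0):
    """Line-of-sight velocities (km/s) of the near and far sides of a thin
    spherical shell of the given radius, collapsing at v_infall, observed
    at impact parameter b (same length units as radius)."""
    mu = math.sqrt(max(0.0, 1.0 - (b / radius) ** 2))
    # Near side falls away from the observer (redshift), far side toward it.
    return v_infall * mu, -v_infall * mu

# The wing separation shrinks as the line of sight moves off-center:
for b in (0.0, 0.5, 0.9):
    red, blue = shell_wing_velocities(b)
    print(f"impact parameter {b:.1f}: wings at {red:+.2f} and {blue:+.2f} km/s")
```

With the same picture, an unfractionated, somewhat agitated core contributes gas near zero velocity, which is why the minimum between the ears is lowest on lines of sight through the center.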
So having done that, let's see, I'll turn that off. I don't know if Bob took a drink, but we're friends. So, I began this talk with a remark about one apparatus. This other simple apparatus here, which I do trust, tells me that my time is now up, and I would just like to use this last little bit of sand to take the opportunity to offer my warmest thanks to Count Bernadotte and also to the committee for this outstanding and truly congenial meeting. Thank you very much.
|
Arno Penzias and his co-Nobel Laureate Robert Wilson came to Lindau for the first time in 1982. In principle, they could have attended the previous physics meeting, in 1979, the year after their prize year. If so, there would have been no fewer than three talks on radio astronomy at that meeting. This shows that, more than 300 years after Galileo Galilei constructed his optical telescope in 1609, observations in the radio-wave part of the spectrum of electromagnetic radiation had become an important way to study the physics going on in the Universe. At the Lindau meeting, Penzias and Wilson reported on different aspects of a joint project using Bell Laboratory’s excellent equipment. Wilson’s talk was scheduled first, which made Arno Penzias refer to it a couple of times for more general information. In the project, radio observations were made of the abundance of the molecule carbon monoxide, CO, in the interstellar medium. With the precision obtained they could actually not only measure the normal molecule CO, but also molecules made up of different isotopes of C and O. While Wilson used their observations to learn about molecular clouds, the scientific subject of Penzias was how this information on the abundance of isotopes of the elements C and O can be used to infer what goes on in space. He first gives a short review of the production of the elements from the Big Bang to recent times. Since only the lightest elements were produced originally, the heavier ones, such as C and O, were produced in stars and blown out into space after supernova explosions. The most interesting result that Penzias describes is probably that the variation of the isotopes in our solar system is not the same as in interstellar space. This seems to imply that there were some special nuclear processes going on when our planetary system was formed. It can also be noted that Penzias both starts and ends his talk in German, at the end thanking Count Lennart Bernadotte for a very nice meeting! Anders Bárány
|
10.5446/52600 (DOI)
|
I am going to report about the achievements of space research, 25 years of space research, and its application to the more distant regions of space. And I will concentrate on the change it has produced in plasma physics. I think this is an illustration of what Professor Nagel said: that you should not believe in what is accepted today, because that may very well change very rapidly. First of all, you may ask, is plasma physics of very much importance in astrophysics? If you read the usual textbooks in astrophysics, you don't think this is the case. But in reality, the stars consist of plasmas and the interstellar medium also of plasmas. And it seems that the universe consists to more than 99% of plasma; in fact, at least by volume, to more than 99.9999999999% of plasma. So plasma physics should not be considered to be completely irrelevant to research on how the universe is structured. To be more specific, plasma physics extends from the laboratory, typically, on a logarithmic scale, one tenth of a meter for a normal experiment, up to the magnetosphere, the magnetic field surrounding the Earth and the other planets and the Sun, that is, about 10 to the 8 meters. And then we go up; this is a jump of 9 orders of magnitude. By another jump, by a factor of 1 billion, you come up to the galactic phenomena, and the third jump, this is a cosmic triple jump, brings you up to the Hubble distance, which is what the Big Bang believers call the size of the universe. This is 27 orders of magnitude, and laser fusion has extended it downwards by 5 orders of magnitude more. There are reasons to believe that the basic properties of a plasma are the same in the whole region. This is by no means certain. We can trust that it is so in the laboratory and in the magnetosphere, because there we have reliable measurements. We are out to this limit; outside this the field is necessarily more speculative, because what is called high-quality diagnostics, that is, an investigation of the properties of a plasma, is possible in the laboratory and as far out as the spacecraft go, but it is not possible further out. Whether we should accept that plasma changes its properties beyond the outer reach of spacecraft or not, this is a thing which we cannot prove, but I think there are good reasons to suppose that it does not. What has happened in this field during the last years? It is especially that space research has made the magnetosphere accessible to detailed analysis by high-quality instruments, which are sent out here, going up and down, up and down, and sending signals to the Earth which have been interpreted in detail. At the same time, laboratory research has made a great step forward, to a large extent favored by the fusion work which is going on; so far the fusion research has not given us any energy, but it has given us very valuable information which can be used for clarifying the structure of the universe. There has also been much work spent on the translation between laboratory results and the magnetosphere. And the result of this is that we have got a drastic change in our concept of what plasmas in space are like. In reality there are half a dozen different respects in which this change has taken place, and I'm going to select a few of them and try to discuss them in more detail. One of the important things is that our concept of the structure of interplanetary, interstellar and intergalactic space has changed drastically. 
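A quick check of the scales quoted here, taking the Hubble distance to be of order c/H0 with a present-day value of the Hubble constant (the specific numbers are mine, not Alfvén's):

\[
\log_{10}\frac{c/H_0}{0.1\,\mathrm{m}}\approx\log_{10}\frac{1.3\times10^{26}\,\mathrm{m}}{10^{-1}\,\mathrm{m}}\approx 27,
\]

and the five further orders of magnitude quoted for laser fusion take the laboratory end of the range down to the micrometre scale, around 10^-6 m.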
Fifty years ago it was generally believed that the space between the stars, between the planets, satellites, comets and so on was absolutely empty. From that point of view it is rather empty, but not absolutely empty; and the little matter which is in between is very important. Twenty-five years ago attention was drawn to the interstellar and interplanetary matter, and this was then considered to be a homogeneous nebulous gas with dust in it. Space research has given us a new view, which you can call the space age concept of space, namely that space is highly structured; it is penetrated by a network of electric currents, and this is something which is of importance in all fields of plasma physics. We know that this is so out to this limit, and there are good reasons to suppose that the whole universe has this structure — highly structured, penetrated by electric currents. More specifically, what does structured mean? It means that we have discovered a number of phenomena which are strongly inhomogeneous. There are electric double layers, which you find everywhere in space. These were not believed to exist up to something like five years ago. Now they are very popular. A few weeks ago there was a symposium in Denmark where there were 50 of the most prominent people working in this field, who discussed the properties of double layers. What is a double layer? If we have an electric current in this direction, then under certain conditions we have a density of the plasma which is fairly homogeneous. This is the density, and this line gives the electric potential, the voltage, which increases slowly; there is an electric field which drives the current through the plasma. However, when a double layer is produced, the conditions are changed like this. Here is the voltage: it makes a sudden jump and then has another constant value, and the density changes in a corresponding way. Such double layers have been well known in the laboratory since the time of Langmuir, about 50 years ago, but it was denied that they could be of any importance in space until they were actually discovered. There are such double layers at a height of something like one or two Earth radii above the Earth. This is a picture of the Earth and these are magnetic field lines. Here in the equatorial plane, at a distance of 5 or 6 Earth radii, you have a plasma flow, a sunward plasma flow. This is seen from the night side, and that produces an electromotive force here. This produces electric currents which flow along magnetic field lines to the Earth, then through the ionosphere and back again to the equatorial plane. So we are here discussing an astrophysical problem not in terms of magnetic fields, as has usually been done, but to the same extent in terms of electric currents, and these electric currents may produce double layers. Here is a double layer at one or two Earth radii, and that means that we have a sudden jump in the voltage there, in which the auroral electrons are accelerated. Here we have the electromotive force, here the auroral electrons are accelerated, and the energy is transferred by the circuit. This is not a theoretical, hypothetical picture; it is something which is actually measured by spacecraft which have penetrated it many times. Of course there are many details which are still obscure. These double layers may have voltage differences of kilovolts, which you have here; in solar flares it is megavolts or gigavolts, and they may be still higher. 
Then the currents produce filaments. In cosmical physics we are accustomed to the Newtonian attraction, the general gravitation, which typically aims at producing spheres, like stars and planets and so on. However, we also have electromagnetic forces, and these electromagnetic forces tend to produce filaments. The basic phenomenon has been known for a very long time — it is actually Ampère's law that two parallel currents attract each other — and this produces electric pinches and filamentary structures. Such filamentary structures are common in the universe. Can I have the slides here? We have good reasons to suppose that whenever you have observed a filament, that is just an indication that we have electric currents, pinching electric currents, there. Can I have the first slide? Yes, this is the Sun, this is the solar corona, and if you sharpen the picture a little you will see that this has thin, thin, thin filaments in every direction. The sun goddess actually has beautiful hair, which you see here, and these filaments are likely to be due to electric currents which, through the so-called pinch effect, produce filaments. Next slide. Here is a comet, and here you see striations, filaments of the same kind. The tail of a comet is obviously a plasma phenomenon; this was first pointed out by Professor Biermann here in Germany. Next slide. Here are photographs of interstellar space far out in the galaxy; you see thin filaments everywhere. Next slide. Here are other filaments of the same kind. Next slide. Here is an ordinary cosmic cloud which seems to be a homogeneous structure, but if you subject it to what is called contrast enhancement technology you get this picture — that is, you put it into a computer and ask the computer to look for contrast — and then you see it is penetrated by filaments, which is a strong indication of filamentary structure, that there are electric currents also there. And I have one more slide. Here you have dark lanes which probably are also due to filaments. These are just some arbitrary examples to show you how important the formation of filaments is and what it is likely to be due to. Then come surface currents in space, which are also very dramatic. To me this was actually the most important, the most shocking discovery: namely, that if you go out from the Earth and measure the magnetic field — this is now the distance from the Earth, this is the magnetic field — you observe that it decreases approximately as r to the minus 3, as it should do, out to something like 7 or 8 Earth radii. Then it may suddenly change its sign, and this happens very abruptly, in a very, very short distance, some 100 kilometers — less than the distance from here to Paris, for example. And what the spacecraft records here is that you have a constant value with some fluctuation, and then suddenly it jumps over in this way. This demonstrates that there is a thin current layer which separates the Earth's plasma, controlled by the Earth's magnetic field, from the plasma controlled by the solar magnetic field. And such layers are found in many places — at Jupiter, Saturn, quite a few other places, comets and so on. We have something like 10 different cases where we have such thin, thin layers. They separate regions which may have different magnetizations: inside it goes like that and outside it goes like this. 
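Two of the relations invoked above can be written down explicitly; the following is a standard textbook restatement rather than anything shown in the lecture. The attractive force per unit length between two parallel currents, which drives the pinching into filaments, and the dipole fall-off of the Earth's field out to the boundary layer are

$$
\frac{F}{L} = \frac{\mu_0 I_1 I_2}{2\pi d},
\qquad
B(r) \approx B_0 \left(\frac{R_E}{r}\right)^{3},
$$

where $d$ is the separation of the two current filaments, $B_0$ the equatorial surface field and $R_E$ the Earth radius; the abrupt change at 7 or 8 Earth radii is the departure from this $r^{-3}$ law at the thin current layer just described.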
The regions may also have different temperature, different density, different chemical composition, and if we go further out in space it may be that a similar layer separates regions of ordinary matter from antimatter, if we extrapolate it. The awkward thing with such a layer is that you cannot observe it until you penetrate it. I attended the arrival of the space probe at Saturn, and it was dramatic, dramatic: no one saw it, and then suddenly everybody in the big hall saw it — here it comes. And this makes it awkward, because if you go out to the interplanetary, the interstellar and the intergalactic regions, you may have similar structures there, and they cannot be observed. Now, it is very unpleasant to introduce such a concept if you cannot observe it. But it is still more unpleasant to me to postulate that at the outer edge of the reach of the spacecraft, space changes its properties. And this has far reaching consequences for astrophysics in general, and not least for cosmology. Now, if we try to apply all this and see what changes it makes for astrophysics in general — I think many people are mostly interested in the application to cosmology, and I have tried to concentrate on it. I don't think there is time enough to present a new cosmology here. The application is first of all that space has a cellular structure, and this means that the existence of antimatter is not excluded. There are a number of very nice arguments against the existence of antimatter in the universe, but these are all based on a concept which we know now is not valid. So we cannot exclude the existence of antimatter, and the universe may very well be symmetric with regard to ordinary matter and antimatter. Then comes an analysis of the redshift. The redshift demonstrates — without any question it must be a Doppler shift, I think it is impossible to avoid that — something about the universe, or, to be more correct, to use the old term, the metagalaxy, that means all the galaxies we can observe; it is a synonym for what the Big Bang believers call the universe. If you plot the redshift, that is the velocity of the galaxies, and here the distance to the galaxies, you get this famous Hubble diagram, and people conclude that this proves that there is a linear relation between the expansion and the distance, and that the deviations from a straight line here are due to observational errors. This may very well be so, but it is not necessary to make such a conclusion. If you take each individual point and extrapolate backwards in time — you see here that this is now, and from here you extrapolate backwards in time, and this is the distance from the Earth, or in the reference system — you see that these do not necessarily coincide in one point. They spread here over a large region, and it does not exclude that everything could converge in one point, but it does not prove it. It proves that the metagalaxy at present is expanding and that it was once, something like 10 billion years ago, about one tenth of the present size — that is, about one billion light years — but more than that is not proved at all. Furthermore, it has been discovered that space has a hierarchical structure. 
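The backward extrapolation described here can be restated in one line; this is simply the standard reading of a linear Hubble diagram, added for orientation and not something written out in the lecture. With

$$
v = H_0\, d,
$$

a galaxy extrapolated straight back at its present velocity was near the origin a time $t \approx d/v = 1/H_0$ ago — of the order of 10 to 20 billion years for the values of $H_0$ in use at the time — and the point of the argument above is precisely that the individual backward extrapolations need not all meet in a single point.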
The hierarchical model was introduced by Charlier, long before the Big Bang, around the beginning of this century, and it said that stars are aggregated into galaxies, galaxies into what we now should call clusters of galaxies, clusters of galaxies into superclusters, and superclusters into some larger units; and if this hierarchical structure follows some general law, then we can satisfy certain conditions — the Olbers objection and the Seeliger objection to an infinite universe. This was not believed until 1970, when de Vaucouleurs — he is a very famous observer — demonstrated that this really is true, that the galaxies and so on are arranged into a hierarchical structure, and this is how de Vaucouleurs' diagram actually looks. It is plotted in different coordinates: this is the size of a structure and this is the mass of it. You see that the stars are here; this limit, which is very important, is the Laplace–Schwarzschild limit, and it means actually that on the other side, up here, we have black holes. You see here that stars go down, and neutron stars may approach the Schwarzschild limit rather closely, but if we go out to galaxies and clusters of galaxies and so on, they are very far from the Schwarzschild limit. This is actually given in escape velocity — it is actually two orders of magnitude here — so they are four or five orders of magnitude in density from the Schwarzschild limit. It means that the general theory of relativity comes in here as a correction which is negligible, ten to the minus four or ten to the minus five, as far out as we know. Then, if we extrapolate to the metagalaxy using the same formalism, it comes here, four orders of magnitude in density from the Schwarzschild limit. So the hierarchical structure of space, which de Vaucouleurs introduced in 1970, was not believed until, at the end of the 1970s, Peebles and collaborators made a more sophisticated statistical analysis and did confirm it. So you can say that the hierarchical structure of space is now an observationally confirmed structure, and there is a large void region here which makes it very unlikely that space is closed, which it would have to be if it were on the other side of the Laplace–Schwarzschild limit. I think this is approaching its end. This is the Sand Reckoner, which is the title Archimedes gave to one of his most famous books. And if I should conclude this, I think that we should not take the generally accepted Big Bang hypothesis as confirmed by observations. Instead, I should like to quote once again what Professor Nagel said, and I think that space research has given us so much new information about what the structures in space are like that it is, as far as I can see, unavoidable that this will shake the basic concepts of astrophysics in a rather drastic way. So if I should conclude this by giving a piece of advice to the 500 students here, it is that those of you who are interested in astrophysics should not take the curriculum in the general theory of relativity but instead a very good course in modern plasma physics. Thank you.
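The Laplace–Schwarzschild limit referred to in the diagram can be stated in the usual textbook form; the formula below is added only for reference and is not part of the lecture. A mass M of size R sits on the limit when its escape velocity reaches the speed of light,

$$
v_{\mathrm{esc}} = \sqrt{\frac{2GM}{R}} = c
\quad\Longleftrightarrow\quad
R = R_S = \frac{2GM}{c^{2}},
$$

so being two orders of magnitude below the limit in escape velocity corresponds, at a given size R, to roughly four orders of magnitude in mass, and hence in the mean density $\rho \sim M/R^{3}$, which is how the distances from the limit are quoted above.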
|
Three years before the present lecture, in 1979, Hannes Alfvén gave a talk on ”Observations and Cosmology” at the Lindau Meeting. In his talk, he rejected the Big Bang theory and instead advocated a model of the Universe symmetric in matter and anti-matter. In 1982 he came back to Lindau, this time with a more general lecture title about space research and its results. In particular he wanted to describe the implications for cosmology of the discoveries made in 25 years of space research. In his introduction, Alfvén quotes the Chairman of the session, Bengt Nagel, at that time the Scientific Secretary to the Nobel Committee for Physics, as saying ”one should not believe in what is believed today, because that may very well change rapidly”. Even though I was not present at the lecture, I think that I can warrant that this is a correct quotation. Nagel was my predecessor as secretary to the Nobel Committee and it is true that he used to say things like that. After the introduction, as a true plasma physicist, Alfvén then spends most of his lecture describing and explaining electric currents and magnetic fields in space. In particular the emphasis is on the then recently discovered electric double layers. These were well known from laboratory plasma physics, but had only recently been discovered by spacecraft exploring our solar system. He then makes a rather large extrapolation from these interplanetary discoveries to space in general and in particular to a hierarchically structured universe, with stars, galaxies, galactic clusters and superclusters, etc., etc. If this extrapolation is accepted, there is a mechanism that would give rise to a cellular structure in space, where the matter content of each cell would be separated from that in the surrounding cells by double layers. So the main result for Alfvén turns out to be a physical mechanism that would allow a universe symmetric in matter and anti-matter, as described by him in 1979. Always somewhat of a showman, at the very end of his lecture, Alfvén again quotes the Chairman and ends by giving as advice to the 500 young students in the audience: Don’t go for a curriculum of General Relativity but choose Plasma Physics instead! Anders Bárány
|
10.5446/52601 (DOI)
|
Ladies and gentlemen, I don't understand German very well, but I heard something in that introduction about prognostication. The crystal ball is not a device that reads the future. It's actually an experimental device that looks into the wonderful and marvelous laws of nature and tries to find out as precisely as possible what some of those things are. I'm going to be talking about the crystal ball experiment today, and this experiment involves a very large number of physicists. When it began, it involved a collaboration of about 32 physicists, and there's been a discontinuity recently, which I will tell you about a little later, and it's now an experiment that is being conducted by about 80 physicists. So obviously, what is being studied is a complicated thing, and obviously my talk is going to be complicated in some way as well. For that reason, I decided that I would give some kind of an introduction to the students here in case they don't know what scintillations are or what sodium iodide crystals are. And anyway, it gives me a great excuse to talk about sodium iodide, which I invented in Princeton in 1948, and I've been working on that material in one way or another ever since, so I'm kind of wedded to it. I'm also wedded to the study of gamma rays. I've always liked gamma rays, and when I first went to Princeton, I started to study means of detecting gamma rays, and I worked on a device that is called a conduction counter. It was an insulator, actually, and at low temperatures it became a counting device. After a little while, it turned out that there were some difficulties with using it as a counting device because it did require low temperature, although nowadays it operates very much like a germanium lithium-drifted type of detector. So in some sense, perhaps I may go back to it someday. But in the meantime, perhaps some of you know that in 1947, right after the war, a German physicist by the name of Kallmann invented the scintillation counter, which used large lumps of naphthalene crystal. And that crystal enabled gamma rays to be measured in quantity and also slightly in energy. At that time, if you would go around to any laboratory in the United States, it would smell of mothballs, because naphthalene is the thing that makes mothballs, and all the nuclear physicists were using that material. And I tried it, too. But since I had had some experience with conduction-type counters, I knew the literature, and in particular the German literature of Hilsch and Pohl, who had worked on the luminescence of crystals for many years, and particularly on the alkali halides. And so I looked up a few of the articles that the Pohl group had written, and I recognized that one could make a scintillation counter of some of the alkali halides, and within a day or two I had made sodium iodide scintillate, and also cesium iodide, and rubidium iodide, and several other things like that. And so I will show you a little bit about that early history. And I feel that I was very lucky in finding that material because it has turned out to be enormously useful in medicine and geology and all branches of science. And I am particularly pleased that it's had so many humanitarian applications. Now in 1950, I went to Stanford and started to work on electron scattering because there was a very fine accelerator there. 
But I carried along this information on sodium iodide, and I started some graduate students working on the problem of detecting high energy gamma rays, because the accelerator was going to produce high energy gamma rays. And so with a graduate student by the name of Asher Kantz, who has disappeared from view — I don't know where he is; he went to Afghanistan once, and I never heard from him since — with Asher Kantz, we made measurements that could establish how large detectors could work in totally absorbing a high energy gamma ray or a high energy electron. And this work led eventually to theories which predict now how showers — electromagnetic showers, which is what gamma rays and electrons produce in large crystals — can be estimated, and how you can estimate things like resolution and so on. So I want to tell you a few of those things before I tell you about the crystal ball, because the crystal ball is made of sodium iodide. The crystal ball is a beautiful device, and it merits some discussion on its own. So I would like to do that now by starting with some slides. If I can have the first slide — erstes Bild. I guess I'll turn this off. Now this first slide, I want to use that again later. Okay. Yeah. This first slide shows the energy level diagram of positronium. And I've shown this rather than the energy level diagram of hydrogen, because this bears a resemblance to the mesons that I'm going to discuss, like charmonium and bottomonium, and so on — upsilonium. The point is that Bohr in 1913 gave us the idea that electrons could jump between energy levels. In other words, we could have transitions which would give us photons. Now the same kind of thing happens in high energy physics. There are energy levels, and there are transitions between those levels or among those levels, and one can thereby find the structure of those mesons or particles or whatever they are by studying those transitions, which lead back to the energy level diagram. And positronium consists of a positron and an electron circulating around each other, with this energy level diagram. And there's a great resemblance, as I've said, to much higher energy systems. In particular, I'd like to call your attention to a set of seven levels: this one, this one, this one, these three, and that one up at the top. And later you will see that this level corresponds to a state in a meson discovered by Sam Ting and Burt Richter, which decays into three gluons. This one decays into three gamma rays. And I'll come back to that in a little while. So this is kind of an example of what we're going to see later on. The physics is not too different in some respects. It's the same kind of physics. All you have to do is change the names. And I hope to show you something of that sort. Now let me make a little diversion to sodium iodide. May I have the next slide, please? Actually Willis Lamb reminded me that I talked about sodium iodide here probably in 1965, but no one else will remember. And so I hope you'll excuse me if I do it again. I did it at that time in connection with tests of quantum electrodynamics, which Sam Ting has improved on very greatly. But this is the original sodium iodide detector with the sodium iodide at this end of the tube, an evacuated quartz tube, fastened on to a small photomultiplier tube, a very small tube. Next slide, please. Here is the active material. And you see the dimensions of this are very small. This is only a half an inch in size. 
The material had to be kept inside a quartz tube because it's terribly hygroscopic. And that's the way you have to work with this material. You have to preserve it from contact with the air. Next slide, please — nächstes Bild. A little bit later, I grew some crystals of sodium iodide in this form. And you see they have good geometry. They're beautiful crystals. And I knew that something beautiful had to come out of this. And so we actually set up an apparatus which is shown at the top — not important — Jack McIntyre, a graduate student, and I. And we made some measurements on the Compton effect. And we verified the Klein-Nishina formula at energies in the cobalt-60 range. Nächstes Bild. Then we prepared this nice crystal, instead of that amorphous kind of crystal you saw before, and used radioactive elements. In particular, this one is gold 198 with a 411 kilovolt line. We immediately got this remarkable plot on the oscilloscope. This is a photopeak. And this is the Compton distribution that corresponds to the absorption and scattering of 411 kilovolt gamma rays in sodium iodide. Next slide, please. We also did cobalt-60, right then, just about at the same time, and observed the two photopeaks corresponding to the two gamma rays in cobalt-60 and the double Compton edge corresponding to those two different gamma rays. Now we also realized that there was some escape of the Compton gamma rays, the scattered gamma rays, in the crystal. And if you would make a big crystal, you could probably capture the second gamma ray. So at some time later, we made bigger crystals. This one is about a foot across, 13 inches, something like that. And in that crystal, you could see the two gamma rays with the Compton edge greatly suppressed, and here you even see the addition of those two. This led to the idea that by going to large crystals, you could capture all the energy in a gamma ray or an electron or a positron — showering particles. Next slide, please. This shows the showering phenomenon in a very crude way, where an electron comes in and makes bremsstrahlung, and then you get pairs, and then with the pairs you get gamma rays, and then the gamma rays make pairs and so on. And if the material is big enough, you can capture a very large fraction of the whole shower and thereby measure the energy of the gamma ray. And this is the work that Asher Kantz and I were engaged in when I went from Princeton to Stanford. Next slide, please. Sometime later, we had the Harshaw Chemical Company grow larger crystals for us. This one weighs 1,000 pounds and is 30 inches in diameter and almost a foot thick. This is Barry Hughes, who was very glad to receive that crystal when we first got it. And that crystal has been used in subsequent experiments to test electrodynamics to a high order of accuracy. Next slide, please. The experiments I'm going to talk about today have been carried out at the two mile accelerator at Stanford. And this is a representation, a picture of the accelerator along here, and this is the end station area where the experiments are done. Next slide, please. Here is the SPEAR storage ring facility. And our experiments were done in this little laboratory here. This is the SSRL, the synchrotron radiation project, this and this, where I and my collaborators are also carrying out some medical experiments. Next slide, please. Now after the J-psi was discovered, we tried with some of those large crystals to detect gamma rays from the decay of the psi. 
And we were thwarted to some extent because the multiplicity involved in the decay was too great and too many particles would enter the large crystal at the same time. And so the first gamma ray measurement was actually made at DESY in the decay of the psi. But we soon followed up with a system, with an experiment, in which we used a large number of sodium iodide crystals around the region where positrons and electrons collide. And at the psi prime, which you'll see in just a moment, we detected some interesting things. May I have the next slide? This is an experiment done in collaboration with Princeton and Johns Hopkins. And this is the gamma ray spectrum of the psi that we found. And I would say that there is no structure in that. But at the psi prime, which at that time was called psi 3684, you see that there is some evidence of some kind of line structure. So we pursued that, but realized that it would be good to have better modularity. Next slide, please. And so we devised the idea of the crystal ball. And the crystal ball comes originally from a Platonic figure called the icosahedron, which has 20 equilateral triangular faces. And if you divide up those faces into four parts, you get a system like this. And then if you further divide each one of these small parts into nine parts, as shown here, you end up with 720 individual modules or crystals. Note that in order to do an experiment, you have to have some holes in this device. And here is such a hole, and there is one on the other side, and there's a tunnel in between. This is the original design of the crystal ball. Next slide, please. This is a sodium iodide crystal, which is placed at some distance from the intersection region where electrons and positrons collide. There is a photomultiplier, which is put on the end, so as to receive the light from that individual module. All modules are optically isolated from each other. Next slide, please. This is the way the assembly was made at the Harshaw Chemical Company, by Harshaw personnel and by our own people. And you see the individual crystals being assembled. This was not an easy job, because you can think of what might happen if you put all of these in and then the last one won't go in, or the last several won't go in. So these all had to be surveyed, and the surveying equipment is shown there, but it doesn't show up very well in the slide. May I have the next slide, please? This is the completed hemisphere for the crystal ball. The next slide, please. And this shows the hemisphere with a cap that will go down on it and all the holes in there for the photomultipliers. And you see the surveying equipment here that had to be used in putting that together. That was quite an elaborate job. Next slide, please. And this is the way the crystal ball was put together at SLAC at the SPEAR storage ring. Of course, these two hemispheres join each other. They join together. And in here, one has the beam pipe. Here are some end cap crystals of sodium iodide. And in here are wire chambers and proportional chambers that allow you to identify and track charged particles and so distinguish those from gamma rays, which do not leave a trace as they go through. Next slide, please. Here's the actual device itself. You can see it's very complicated. This is Ian Kirkbride, who has a very personal relationship with the crystal ball. He goes wherever it goes. He doesn't let it get out of his sight. And he makes sure that it's in good condition. 
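The module count quoted above follows directly from the subdivision just described: each of the 20 triangular faces of the icosahedron is divided into 4 parts, and each of those into 9, so that

$$
20 \times 4 \times 9 = 720
$$

crystals make up the full geometry, from which the entrance and exit tunnels then remove a number of modules.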
This represents the one half and this the other half, and they're joined together. Next slide, please. I couldn't hear that. But this is the tunnel inside the crystal ball before the end caps are put on. And this shows the beam pipe going into the interior of the crystal ball. Next slide, please. And this shows the end cap crystals here as they are located around and just outside the tunnel. Now the next slide, please. Now I want to show you how some events are recorded in this device. This is the so-called Bhabha event, in which e plus and e minus of high energy are scattered into e plus and e minus. And each leaves a shower in the crystal ball. Of course we have made a flat representation of all the crystals. This is one electromagnetic shower and this is another electromagnetic shower. I'm sorry that the focus is not very good, but I think you can get the idea. And we can identify where the center of the shower occurred, and we can also add up all the individual energies in the crystals and get totals. And for example, here are two total energies. Next slide, please. This one shows three gammas recorded. Here's one gamma, here's another gamma, and here's another gamma. Next slide, please. Here's another one where there is a pi plus and a pi minus and three gammas. Here, here, here, there and there. Those are the two pions. They do not make showers the way electrons or gamma rays do. Next slide, please. The previous slide had an orientation, a representation of this kind: the three gamma rays here and the two pions which left tracks in the wire chambers and proportional chambers. Next slide, please. Now the resolution in energy of this crystal ball is given on this table. For cesium 137, which is at 600-something kilovolts, six tenths of an MeV, the resolution is 20%. If you know what you can do with this material, you can get a resolution of about 5% with a good crystal, a good individual crystal. But because one has to put all these modules together and add their outputs in various ways, the resolution deteriorates, so that for 0.667 MeV you get only 20%. For an energy of 1,842 MeV you get 4.7% for the full width at half maximum. Now I will make a prediction here that this kind of resolution will be improved by factors of 100 or 1,000 someday, and in the not too distant future. And in that case we'll see incredible detail which we can only guess at now. It certainly won't be done with sodium iodide — with something else, I don't know what. Next slide, please. Now this is one of the results we obtained, and this curve has become rather famous, I would say. If you remember the energy level diagram in positronium, you will see that in charmonium, in which this is the lowest level — I wonder if that wouldn't come out more clearly on some of the transparencies. Let me try a transparency of that. I think I have one. Yes, could I have this thing set up again here? You see the energy level diagram for charmonium, that is the J-psi system, and you see the same set of energy levels: this one, this one, this, these three and that one. And the transitions between them are indicated and are numbered, and these spectral lines here refer to that energy level diagram. And you see that there is the same kind of splitting that occurred in positronium. Here it is in charmonium, where the energies are multiplied by orders of magnitude. Also shown are two little peaks here which have been investigated in great detail, because these levels were not known. 
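Since the resolutions above are quoted as full width at half maximum, it may help to recall the standard relation for a Gaussian response (a textbook identity, not something from the talk):

$$
\mathrm{FWHM} = 2\sqrt{2\ln 2}\,\sigma \approx 2.355\,\sigma,
$$

so the quoted 4.7% at 1,842 MeV corresponds to a sigma of roughly 2%.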
That is, the theorists who do QCD tell us that there should be states like these, just as one sees in positronium, and there should be transitions from here to here, from here to there and from there to there. And if you identify them with the numbers there, this one is number eight, which corresponds to that transition at about 640 MeV — not kilovolts, MeV — and this one corresponds to this little transition here at about 90 MeV. Now this transition right here is not shown in the psi prime spectrum because it would occur from the psi to that level. And so I think that the next slide shows that. I don't have a transparency of that. Next slide please. Well, before I get to that, this shows the branching ratios for the various transitions from those P-state levels. And I'd like to make a remark here that you see accuracies on the order of 9% or maybe 6% or something like that. And I've always had a strong interest in precision, and 9% and 6% really bother me, and I hope that someday somebody will get much better precision, because you want to compare those numbers with what you can calculate from QCD, from quantum chromodynamics. I do not believe that quantum chromodynamics at the present time can make a calculation that is better than about 20%. But on the other hand, if we were to have these transition rates observed to within a half a percent or a tenth of a percent, it would really make a test of QCD. And I hope that that will occur. Now, the next slide please. Now here are some other results. I'm just showing you a few results at random. The crystal ball has obtained many, many results, and a lot of them are, as I said at the beginning, complicated. It would take a very long time to explain them. But the top figure is a Dalitz plot of decays of the psi. And from this you can see that the eta meson is formed and the eta prime meson is formed. We verified that also by working with decay processes in which we did not have accompanying quantum electrodynamic transitions, and you see the same kind of thing here. So these are three gamma decays. The eta goes into two gammas. Next slide please. Here are the branching ratios that were observed for those transitions, the J-psi going into three gammas: here's the transition into the eta — a gamma eta state — gamma eta prime, and the ratio. And these measurements early on eliminated a couple of other states that had previously been found. We did not find these states, and I think it's now recognized that any such states do not exist, at least at levels of that sort. Next slide please. Now we also did make experiments of the kind Sam Ting talked about, the R measurements, and this is just a sample of our own measurements. These of course are to be collected with those of many other groups, and from these one will try to tell how many quarks are produced and what the total cross sections are. May I see the next slide please? Yes. I would like to just look at my transparencies for a moment. Oh, there was one small matter that I did leave out. That very small transition from the psi to the eta-c state was detected by us, and that is shown here. This is the decay of the psi, and that's at about 120 MeV. That was detected, and in an exclusive reaction it was detected also there. Now let me just say a few more words about some of our other findings. In both the discussion of Steve Weinberg and that of Sam Ting, the word gluon appeared — or did it appear in your talk, Steve? Yes, it did. 
Nobody talked about combinations of gluons, and theorists have predicted that there should be states in which gluons interact with each other — they're not like photons in that respect; they're different in that respect. So, inspired by the work of the theorists, we have looked at transitions of the psi into gluons, possible gluons, to see if there could be two gluons interacting with each other and a gamma ray emitted at the same time along with those. That is what has been predicted, and in fact that three gluon type of decay is similar to the one that occurred in positronium, but you could also have two gluons and a gamma ray. And so we looked at the end of the psi spectrum, and sure enough there was a lot of structure there. And I'm not going to go into the details of this because the arguments are rather intricate and complicated. However, as a result of making those measurements, and following a very nice presentation by Don Coyne in our group, we believe that there are two possible candidates for combinations of gluons — a two-gluon state, in other words. And this indicates one of those possibilities, a resonance at about 1640 MeV. In this case, what we look for is a five gamma ray decay, because each of these eta mesons decays into two gamma rays, so two and two make four, and then the original gamma ray makes five. And if you do that study carefully — we believe that this is an indication of a possible gluonium state; some people call it a glueball. And this has been called by our group the theta state. This is another one, and I won't go into details. This is called the iota. The evidence for that is not quite as clear as for the theta. It would be very advantageous if we had more events to look at. We ended up with about two million events for the psi decay. If one could increase the luminosity of storage rings by a factor of five or ten, we would begin to get the kind of accuracy that spectroscopists would like to see. Now I'd like to go back to the slides very briefly. One before this please. The one before this? Yes. This shows the crystal ball being moved out of SLAC. The idea was that the collaboration had done quite a bit of work at SLAC, and the DORIS machine at DESY in Hamburg was going to be improved to give an increase of luminosity by a factor of ten, which is just the kind of thing that I was talking about. And the energy was to be increased also, so that one could make a study of the upsilon meson, which was a new meson involving the bottom quark. And so in April of this year, there's one half of the crystal ball being moved for the first time, with Ian Kirkbride right there, watching it all the time, on its way to DORIS in Hamburg. The next slide shows how we did that. This is the back end of a U.S. Air Force plane, the C-5 plane, which is an enormous military plane. And it was our idea that we should use the military for a useful purpose, so we loaded a whole truck here, as you can see, inside this plane. And the crystal ball is in there, including dehydrating equipment and also electronic equipment. Now, it turned out that the C-5 made a beautiful trip and made a beautiful landing at Rhein-Main Base. And there was a little trouble in getting the truck off, but they did get the truck off successfully. And then the truck made its way to Hamburg and broke down about halfway. So the airplane trip was successful, but the land trip wasn't. And so it took an extra day.
|
This is the last of four reports on Robert Hofstadter’s post-Nobel project: construction and operation of total absorption detectors for gamma ray spectroscopy. The earlier reports were given in 1968, 1971 and 1973. The title of this talk, “The Crystal Ball Experiment” refers to experiments performed with a detector in the shape of a ball of crystals. The “ball” is really an icosahedron consisting of more than 700 individual crystal modules enclosing a collision chamber connected to two beam pipes through which the colliding particles enter. The crystals are looked at from the outside by photomultipliers, connected to each other so that gamma rays emanating from the collisions can be detected in coincidence. During the decade since the 1973 lecture, one can say that his project had been a total success. As Hofstadter remarks, the number of physicists working on the Crystal Ball project at the very beginning was 32. It had increased to as many as 80 at the time of the lecture. The Crystal Ball was first set up at the SPEAR colliding beam facility at SLAC, where it in particular looked at transitions between energy levels of charmonium, the quark-antiquark system discovered in 1974 by the 1976 Nobel Laureates B. Richter and S.C.C. Ting. In 1968, Hofstadter used the blackboard for pedagogical explanations, but this time he has so many interesting results that the lecture becomes a slide show and his “next slide please” is repeated N times, where N is a large number. At the end of his lecture he shows pictures from the transport of the Crystal Ball detector from SLAC to the DESY laboratory in Hamburg, Germany, where it was installed at the DORIS colliding beam facility. There both intensity and energy of the beams were larger than at SPEAR and the plan was to study gamma rays from transitions in systems containing the bottom quark. As far as I am aware, the Crystal Ball is still used, but now (2012) it has moved to Mainz. Anders Bárány
|
10.5446/52603 (DOI)
|
It is a great honor to be invited to address this gathering, and it is also a very fine development that different professions, which are needed together, in collaboration, in order to face up to the many problems that modern technology and modern society have presented us with, can meet in this way. So I am glad for two reasons, the second being to be allowed to participate in interdisciplinary activity. I am worried that our universities are not designed and organized to make it easy to find these collaborations, but outside the universities there are other such opportunities, and in particular the meetings in Lindau have that character. Now, my topic is long range projections of alternative energy futures, which is one particular study in the field of energy modeling, itself a development of the last 10 to 12 years. I forgot to ask the chairman to let me know when I am five minutes before the end of my talk, so please tell me when I have only five minutes left. The development of energy modeling took place in the United States, in Europe, Mexico, India and other countries. The aim of that technique is to visualize alternative energy futures. The nature of the problem makes one need to draw on various fields of knowledge, and actually on some speculation: technological knowledge, drawing on physics, chemistry, biology, engineering; behavioral knowledge, drawing on economics in regard to the behavior of consumers and of producers faced with the market; and the resource availability problems, for which we need to turn to geologists and mining engineering. There is in this type of work a speculative element, and that is inevitable in models that look far ahead into the future. For that reason the conclusions reached have the following logical form: if such and such, then so and so. And the ifs must be emphasized. If you find that someone reports on a model study, saying 'those are my conclusions', and that person only gives the thens and not the ifs, then that is unsatisfactory. Now I would like to use one recent modeling study in which I was involved to illustrate the type of work along these lines. There was in the United States a very substantial and widely ramifying study of energy futures and the energy present carried out by the Committee on Nuclear and Alternative Energy Systems, a committee set up by a combination of the National Academy of Sciences and the National Academy of Engineering. There was an overview committee that was responsible for the ultimate publication, a very substantial book that has come out of that work. There were also panels to provide analytical work — into the mill, so to say, of the deliberations of the overview committee — and I was chairman of a panel called the Modeling Resource Group. Now, the word resource in that name refers to human resources: experts, resources of expertise. The assignment of that group was to compare the answers given to the same set of questions by three different models that were already available before the work was started. My emphasis in this report is on the methods and on the kinds of questions that can be answered. And therefore the ifs that we did use — and we had several alternative ifs side by side — should be revised as time goes on, and any results, if I may use that word, of this particular study would have to be revised as the ifs are revised. Now — can we dim the top light a bit? Is that enough? I have to switch this on. This diagram is in a way a layout of the study. 
We did consider three groups of variables: the driving variables, or realization variables — excuse me, the word realization was used for something that is not subject to policy decisions. It is there already, or it is caused by circumstances and decisions that the energy related policies do not have an effect on, or a sizable effect on. Now, we had there the GNP growth rate as one — you may also say exogenous — variable, and that was estimated on the basis of population projections, labor force participation and such factors at 3.2 percent per annum from the beginning year of the study, which was 1975, up to 2010. Then another realization variable was the cost levels of energy technologies. At that point we did not have the subsequent experience, and so we had the following figures, in terms of dollars of 1975. The figures are for the capital cost of electricity generation: for coal-fired generation we had $520 of 1975 per kilowatt electric; for the light water reactor the number goes up to 650; advanced converter reactor, 715; fast breeder reactor, 810 — again dollars of 1975 per kilowatt electric; and solar central station, 1730 in the same unit. Now these numbers have all gone up since then, so I report on the study as made at that time without trying to bring any numbers up to date. We also had resource stock availability numbers on there for oil and gas. It was, in the United States, a quantity of 1,720 quads, where one quad is 10 to the 15th BTU, and this was oil and gas at a cost of extraction up to $2 of 1975 per million BTU. For uranium it was 3.7 times 10 to the 6th tons of U3O8 at a cost of extraction not exceeding $30 in the same 1975 dollars. Now then, it says there on the right: demand elasticity with regard to price and with regard to income. It turned out at the end of the study that this was an extremely important parameter, and I will come back later to its precise definition. It is a measure of the response of demand to a given increase of price and a given increase of income. I have a slide later that will indicate both the numbers used and the definitions of the concepts. In any case, the three important models that we used differed in their price elasticity, and that was in fact helpful, because it indicated the importance of that parameter. Now, the models that were used: we had 3 plus 4 — 7, no, 6 — models actually in the study, but the only ones looking far enough into the future were DESOM, ETA and the Nordhaus model, and I will refer to those in more detail later on. Then we had policy variables. First of all, the base case, which is going on pretty much with the developments as they are presently called for — that is called the base case — and then we had other cases that are obtained from the base case by policies that were put in because they were much under discussion at the time. A nuclear moratorium was not then being considered as a policy; since then something in that direction has taken place, not really as a policy but as a result of mishaps and fears, so it is desirable to have this in the study regardless of whether or how such a curtailment might come about. It was defined as follows: a nuclear moratorium applied to all nuclear reactors was one case; another case was to apply it to all except the light water reactors already in service for quite some time; and so these two cases were distinguished. 
Also, limits on the use of coal and oil shale — coal because of the acid rain, and oil shale as a result of water use and water deterioration as a result of use — and in both cases, or in the case of all fossil fuels, there is also a long range concern with the CO2 content of the atmosphere. So that is the reason why that is in there also, and these limits were defined — we won't give the figures — by drawing some curves that would level off to an asymptote in the case of coal, and another curve in the case of shale oil, and the limits would be that at no point would the annual rate of production of coal or shale oil exceed that curve. Now then, finally, we had a third category of variables that we called blend variables, which are called blend variables because they combine the properties of realization variables and of policy variables. The discount rates, as we have found to our dismay in the United States, are also subject to policy, but in the absence of a specific, determined policy they are still a reflection of the behavior of parties in the capital market of the economy, and so that is a blend variable. We ended up using 13 percent for the pre-tax discount rate applied to investment and pricing decisions of the energy producing business firms, the industry, and 6 percent post-tax applied to the consumers' relative weight given to future benefits as compared with present benefits; for that we had a 6 percent discount rate. Then there was a ceiling on quantity and price of imports of fuels, that I will not dwell on, and an estimate of the commercial availability, if wanted, of advanced converter reactors, fast breeder reactors or solar electricity generation, all assumed to be available at and from the year 2000 if that were to be aimed for. Then the technique: the idea on the economic side was that optimization can be looked at as a simulation of the behavior of competitive market systems. I like to put it this way: there is in the world neither perfect competition nor perfect planning, but if there were, then they would be equivalent — the perfect planning would be guided by prices similar to those produced by perfect competition, or producible by perfect competition as well — so that we have this fictitious image of efficiency in the use of resources, and it does not matter how it is obtained, assuming it can be obtained. It was meant to be actually an approximation of how our not quite perfect system of markets operates. So you could say we used Adam Smith's invisible hand, but with foresight. Now then, the price and income elasticity of demand for energy — I am now ready to say some more about that. Here are the numbers that we in fact used, and now I also want to define the price elasticity, which was the more important of the two. Let X be the demand for energy and P the price of energy, in terms of some aggregate over the various forms of energy — just one number. Then the elasticity is defined as the derivative of the log of the quantity demanded with respect to the log of the price in the market, and that is, as you see from the use of logs, a dimensionless quantity. 
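In symbols, the definition just given reads as follows (written out here for reference; the symbol eta for the elasticity follows the usage later in the talk):

$$
\eta \;=\; \frac{d\,\ln X}{d\,\ln P} \;=\; \frac{P}{X}\,\frac{dX}{dP},
$$

a dimensionless number, negative for a normal demand response; a constant-elasticity demand curve accordingly has the form $X = A\,P^{\eta}$.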
Now, if policies that constrain energy supply apply strongly, then a high price increase is needed to constrain demand accordingly, and if the absolute — yes, I should have said that normally the price elasticity of demand is a negative number, because as the price goes up the quantity demanded goes down. Now I go back to my sentence: a high price increase is then needed to constrain demand accordingly, and if in absolute value the elasticity is low, then less remains to spend on other goods. If the absolute value of the elasticity is high, then only a smaller price increase results, the amounts spent on other goods are less affected, and the total of GNP, of gross national product, is less affected. This sort of simple, straightforward reasoning indicates that eta, the price elasticity of demand — that negative number — is a very critical parameter, and I would like to mention also — I will first describe the elasticities that have been used in the study. Here, DESOM was a model developed at Brookhaven laboratory, and for reasons that were connected with the purpose of that model it did not have the consumers' response dependent on price; it just projected the curve of consumers' demand as a function of time into the future up to 2000, and for that reason the price elasticity of demand was really very small. The only sensitivity or response to price was in the choice of the particular energy technology — conversion, transport or whatever — so a certain sensitivity to price came out before the energy reached the consumer. Then the other two models were quite similar in structure. The Energy Technology Assessment model of Alan Manne at Stanford University distinguished electric and non-electric demands, and it came to an estimate of minus point two five, so an absolute value of the elasticity of one fourth. The Nordhaus model had more subdivisions: one, residential and commercial users; two, industrial; three, transportation; four, specific electric services that could only be done by electric power; and he came out with an eta of minus point four. There was a study done subsequently by the Energy Modeling Forum — an organization based at Stanford University but operating nationwide, with a nationwide following and participants — of the elasticity values that had been produced by various studies by econometric methods, statistical methods, as against judgmental estimates. Now, the Modeling Forum had found a range of minus point four to minus point seven for those estimates obtained by statistical and econometric methods; the minus point two five in the Manne model was labeled as a judgmental estimate. But since the estimate had — we didn't use that term at that time — some judgmental aspect to it, at the request of other members in the group we asked Manne, who was himself a member of the Modeling Resource Group, and so was Nordhaus, and so were two people from Brookhaven, and he willingly did add a second case of minus point five. Oh, I have five minutes — thank you. Then I will jump now to the results that came out. Here we have to look carefully at the definitions of what is on the axes. 
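Why the elasticity matters so much can be made concrete with a small numerical illustration; the following Python sketch is only an assumed constant-elasticity calculation, and the 20 percent demand reduction is an arbitrary example value, not a number taken from any of the three models discussed.

```python
# A minimal sketch of the constant-elasticity arithmetic implied above.
# Illustrative only: the 20% demand cut is an assumed example value,
# not a result from the Modeling Resource Group study.

def price_factor_for_demand_cut(eta: float, demand_ratio: float) -> float:
    """Return P_new / P_old needed so that X_new / X_old = demand_ratio,
    assuming a constant-elasticity demand curve X = A * P**eta (eta < 0)."""
    # From X ~ P**eta it follows that demand_ratio = price_factor**eta.
    return demand_ratio ** (1.0 / eta)

demand_ratio = 0.8  # a 20% cut in the quantity of energy demanded (example)
for eta in (-0.25, -0.4, -0.5, -0.7):
    factor = price_factor_for_demand_cut(eta, demand_ratio)
    print(f"eta = {eta:5.2f}: price must rise by a factor of {factor:4.2f}")

# With eta = -0.25 the price must rise by a factor of about 2.4, while with
# eta = -0.7 a factor of about 1.4 suffices -- which is why the model with
# the smallest elasticity in absolute value shows the largest GNP effect.
```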
I think I first must list the policies here. The policies were those that I have already enumerated: namely, the base case; then the moratorium only on advanced converter and breeder reactors; then the moratorium on all reactors; and then the coal and shale limits — those were the first assumptions — and then certain combinations: the moratorium together with the coal and shale limits. For those, the following diagrams indicate what was found, and let us now read carefully what is on these axes. Here we have E, the ratio of aggregate energy consumption in 2010 projected for policy i — that is, one of that list of five — and that ratio is set off here, so the more drastic the policy, the further down on this axis the corresponding point will be. The other axis is the ratio of cumulative discounted GNP for 1975 to 2010, projected again for the same policy; and which model it is, is indicated by the particular symbol that marks the dot. Here are the three DESOM responses to policy, and that is the only one, until we get there, where there is a direct response to constraining measures with regard to consumption. We note then that all the other points somehow hug the vertical line, and that indicates that where the elasticities — which are no longer there on the screen — are moderately high, from minus point four on, the points remain there, and the effects on GNP of the constraints on energy use are not severe. However, this point here — this is the ETA point, at minus point two five price elasticity — a constraint in quantity has an effect this much, and then also the GNP is constrained. So here we see very clearly before us, from these measurements, that it depends on the price elasticity of demand that is perceived, that is produced, by the model. Now — I have miscalculated my time somewhat, but let me just indicate my summary of what was learned from this study. First of all, the smallness of the effect on GNP as long as you don't get below, let's say, minus point five or minus point four in the price elasticity — and it is a matter of econometric work to improve the assurance that we can have in reading these estimates. But I do read out of it, maybe a little ahead of myself, that the principal conclusion I draw is that there is some time left to overcome the problems of widespread concern with the safety of nuclear reactors, including the breeder reactors. And this is important, because if we rely mostly on fossil fuels we have to deal also with their side effects — the acid rain or the CO2 in the atmosphere; the acid rain mostly from coal, but the CO2 problem as much from oil and gas, if those are to be the mainstay. So I read out of this study, first of all — it is a first study of its kind, therefore provisional and not to be dogmatic about — but second, also, that it indicates that there is enough time to try out alternative methods of energy generation, or rather the mix thereof. We are not under the sword of Damocles. Thank you.
|
Since the very beginning in 1951, the Lindau meetings were dedicated to medicine, chemistry and physics. But when the Royal Swedish Academy of Sciences and the Nobel Foundation in 1968 agreed to take on a new prize in economic sciences to the memory of Alfred Nobel, the Laureates of this prize were also invited to Lindau. Ragnar Frisch, one of the two co-Laureates who received the new prize when it was given for the first time in 1969, lectured at the Lindau meeting already in 1971. During the 1970s, several of the new economic Laureates visited the meetings, but the second to give a formal lecture was Tjalling Koopmans in 1982. By the time he came to Lindau he had regained his gold medal, which was mixed up with the medal of his co-Laureate Leonid Kantorovich in 1975 and which spent four years in the Soviet Union before returning to the west! Koopmans was a former theoretical physicist who seemed to feel at home with the physicists at the 1982 meeting. Among many other things, he had acted as chairman of the Modelling Resource Group of the Committee on Nuclear and Alternative Energy Systems of the National Academy of Sciences of the US. This committee had the task of making long-range projections well into the 21st century. One of Koopmans’ specialities was the application of the techniques of optimization over time, in this case as applied to the field of the supply of energy, and this was the topic of his talk in Lindau. It is a pity that we don’t have access to his viewgraphs, but from the spoken word it is clear that many of the questions that are at the forefront today were already present some 30 years ago. This goes, e.g., for the side effects of using fossil fuels, which is discussed by Koopmans mainly in economic terms. In particular the emission of CO2 into the atmosphere, leading to the greenhouse effect, was already there in this 1982 lecture! Anders Bárány
|
10.5446/52605 (DOI)
|
In 1970, Arno Penzias, Keith Jefferts and I put together a millimeter wave spectral line receiver which we took to Kitt Peak to look for carbon monoxide in interstellar space. The main ingredient of the receiver was a front end that had been developed for communications purposes at Bell Labs, but we integrated it with pieces that the National Radio Astronomy Observatory provided and made the complete spectral line receiver. When we finally got to Kitt Peak, got it all installed on the antenna and were ready to go, the source on our list which was up was the Orion Nebula. This is a fairly ordinary ionized hydrogen region. It happens to be relatively nearby, so that it is a nice thing for people on earth to study. It's excited by some hot stars in this region which are so bright that on this picture, which has been exposed for the nebulosity, the ionized gas, you can't really see them. They're usually referred to as O or B stars, and they may have masses as much as 50 times that of the Sun, but a star with a large mass burns its material much faster than a low mass star. So such a star will have a luminosity perhaps 20,000 times that of the Sun and a lifetime of only a few million years. So one knows that such a region is a relatively young object. Well, at that time the chemistry of the interstellar medium was really fairly simple. If we ignore possible massive neutrinos, which don't enter into chemistry anyway, 75% of the material was thought to be hydrogen, maybe a quarter of it helium, and those are things which were formed in the Big Bang most likely, and something under 1% of such regions was typically thought to be dust. That's the only readily observable component of the heavy elements, the things that were the result of previous generations of star production. At that time the OH radical had been discovered and was distributed somewhat with neutral hydrogen atoms. A couple of years before, ammonia had been found, water vapor and formaldehyde, but the ammonia and water vapor were in very special dense regions. About here in the Orion Nebula there were a couple of infrared emitting regions that didn't have optical counterparts, the so-called Kleinmann-Low and Becklin-Neugebauer Nebulae. So this was a somewhat interesting region, but we had no way of knowing that there might be any carbon monoxide in it. In fact, formaldehyde, which was the only thing that was known which might have made one expect carbon monoxide, was not present in that region. Well, when we got the receiver going, there was a mode in which it would operate for testing purposes, in which it continuously looked at the spectrum with about a two-second time constant so that you could see things change. It wasn't for serious observing, but as soon as we had the antenna pointed at the Orion Nebula, something like this, with of course not this signal-to-noise ratio, immediately appeared on the screen. After some months of working, not knowing whether one was going to see anything, it was quite a moment of discovery. After we looked at it more carefully, we developed a picture sort of like this of the central spectrum, and I probably ought to tell you a little bit more about what that is. Carbon monoxide, of course, is a diatomic molecule. You have a carbon and an oxygen, and they can spin around one another. And the first rotational transition frequency is at 115 GHz or 2.6 millimeters. And that's what we were looking at.
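For reference, the line frequency quoted here follows from the standard rigid-rotor formula for CO (textbook numbers, not taken from the talk itself):

```latex
% Rigid rotor: E_J = h B J(J+1), so the J -> J-1 rotational line sits at
\nu_{J \to J-1} \;=\; 2\,B\,J ,
% with B(^{12}\mathrm{C}^{16}\mathrm{O}) \approx 57.6\ \mathrm{GHz}.
% For the lowest transition, J = 1 \to 0:
\nu \;\approx\; 115.3\ \mathrm{GHz}, \qquad
\lambda \;=\; c/\nu \;\approx\; 2.6\ \mathrm{mm}.
```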
There's a small dipole moment so that as the thing rotates, it tends to radiate radio photons. Well, if we were just looking at the intrinsic radiation of carbon monoxide, we would see only one frequency of radiation. But of course, I can't imitate Art Schawlow's good explanation of the Doppler effect, but that's what's going on in the source. Some of the molecules are coming toward us, some are going away, and the ones that are going away radiate at a lower frequency, and the ones that are coming toward us radiate at a higher frequency. And so this frequency displacement can be expressed as a velocity. Now, why is the carbon monoxide radiating at all? Something keeps hitting it. We now know that there are a lot of hydrogen molecules in this same region. And the temperature of the gas there is around 70 degrees. And when a carbon monoxide molecule encounters some other molecule, typically a hydrogen molecule, it can be set spinning again. So if it has radiated away its rotational energy, it will be started up again. Radio photons, unlike optical photons, are very easy to generate. You don't need thousands of degrees. At 2.6 millimeters, all you need is a few degrees above absolute zero, a few Kelvin. So in this region in which we're looking, the temperature is actually only about 70 Kelvin. At that sort of temperature, the sound speed is about half a kilometer per second. So it's clear that the velocities in this thing, even in the core of this line, are quite supersonic. And that is the beginning, the first manifestation of a problem. What is it that is causing the broadening of this line? Another thing which shows up on this spectrum is the fact that this is not a simple, narrow spectral feature, but has emission which continues out to very large velocities. That's perhaps the hint of a solution to the first problem I mentioned, but I'll get back to it in a few minutes. If we look at somewhat later results in which we've measured three isotopes of carbon monoxide, substituting less abundant species first for the carbon and then for the oxygen, you start to see the problem in a little more detail. We have carbon monoxide on the same scale as 13CO, which should be maybe a 60th of the abundance. On the earth, it's about a 90th of the abundance of the carbon monoxide. So here we have it appearing at maybe, oh, one fifth or a sixth of the intensity. And so what's the explanation of that? Probably there's so much carbon monoxide that we can't see it all. It's saturating. The molecules on the near side of the cloud are absorbing the radiation from the molecules on the far side of the cloud, and we can only see radiation from part of it. Or in thermodynamic terms, the actual temperature of the cloud corresponds to this peak. And then we know that the isotope ratios are not too messed up, because the 13CO and the C18O are in somewhat of a correspondence, although Arno will go into that in much more detail, I expect. Well, if one actually had a simple saturation model of such a region with sort of isotropic, homogeneous, I guess I mean, turbulence causing the line broadening, then if you multiply a point about here by 90 or by 60, one would come up way above the picture, but the thermodynamic limit would make it still come out about there. If we come out in the wing somewhat, down about a half as much, we're still going to be up there because we're still essentially saturated. Only when we get way out, where this line has gone down very much, would we expect the line shape to drop down.
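As a back-of-the-envelope check of the two numbers used above, the Doppler conversion and the thermal sound speed at 70 K (my own figures, assuming molecular hydrogen gas):

```latex
% Doppler shift expressed as a velocity:
\frac{\Delta\nu}{\nu_0} \;=\; \frac{v}{c}
\quad\Longrightarrow\quad
v \;=\; c\,\frac{\Delta\nu}{\nu_0}.

% Isothermal sound speed in H_2 at T \approx 70\ \mathrm{K}:
c_s \;\approx\; \sqrt{\frac{k_B T}{m_{\mathrm{H_2}}}}
\;=\; \sqrt{\frac{(1.38\times10^{-23}\ \mathrm{J/K})(70\ \mathrm{K})}
               {2\times 1.67\times10^{-27}\ \mathrm{kg}}}
\;\approx\; 0.5\ \mathrm{km\,s^{-1}},
% so line widths of several km/s are strongly supersonic.
```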
So we would expect a very square-topped line under such conditions. Obviously, that's not what's going on. Another possibility is large scale motions in the whole cloud. A likely thing in such a cloud might be collapse. However, if the cloud were collapsing at the velocities involved, the lifetime would only be a few million years, and those clouds would be much less abundant than we see. Well, what was immediately clear, especially from looking at the 13CO, is that there is a large amount of carbon monoxide in this cloud. And when one gets into details of the excitation, and how many hydrogen molecules it takes to excite the carbon monoxide, there is a tremendous mass in this cloud. If I may have the first slide, we can see a larger picture of the region that's been made in carbon monoxide. Here we have the extent of the Orion Nebula, which we were looking at before. These are the three stars in Orion's belt. So you can see the constellation here as it appears on the sky. And now you can see the extent of carbon monoxide in the vicinity of the Orion Nebula. All right, I think that'll do for the slide. It's clear that the nebula itself is just the tip of the iceberg. There's a lot of material behind there, which is associated with it. In fact, the explanation which has developed is that the Orion Nebula has been created out of the molecular cloud, the so-called giant molecular cloud, which is behind it. This, in essence, is a new type of object that was discovered with carbon monoxide. The picture I showed a moment ago, covering degrees of sky, one might term a giant molecular cloud complex, since it actually can be subdivided into separate clouds. The picture, though, is that somewhere in such a cloud, when the densities get great, star production can start. And if a few high density, a few large mass stars are created, they can turn on and start producing a lot of ultraviolet flux. This can ionize the remaining gas around the clouds, heat it up so that it looks the way the nebula does, and blow apart that part of the cloud. There are various scenarios people have suggested for what might happen following that. A shock wave from such an event could propagate into the cloud, causing another generation of stars to form. Or there may be other reasons why stars would form in the other parts of the cloud. But in any case, it seems to be common for stars to form in such clouds. In fact, for a number of years, we found molecular clouds just by looking for the ionized hydrogen regions caused by the bright stars which had been formed from them. Well, let us go back to the question of the stars, go back to this picture and the high velocities which we see there. If we were to look away from the center of the cloud, this part of the line would stay about the same, but the component at very large velocities would disappear. So it's a very localized phenomenon. In recent years, it's been realized that this is a new phenomenon: high velocity flows in these clouds. And apparently there are stars there which are in the process either of blowing away their envelopes or of having a tremendous stellar wind. You're probably aware of the wind of the sun, which is just a wind. This is a stellar tornado or something, because it has extremely high velocities and a lot of mass. One of the problems I alluded to before with turbulence in the cloud is partly that supersonic turbulence tends to dissipate very quickly, or at least to dissipate a lot of energy.
So you need a lot of energy input to keep supersonic turbulence going. Well, here we have a new source of energy input in such a cloud. There are in fact six such flows within the Orion Nebula. It's not clear that the sum of that is enough to keep the cloud stirred up, but certainly that energy input is contributing to keeping the cloud going. I'd like to make a comment to the students at this point: it's very important that you understand your apparatus if you're doing experiments. And once you understand it and know what it's doing, it's important that you pay attention to things like this that you weren't really looking for. Because this was known, it existed in 1970, but it was only in 1977 that someone else looked at a similar spectrum and realized that there was new information there. Well, let's take another look at a different cloud. This is the so-called Horsehead Nebula, named after the obvious horsehead here. This is a negative photograph. The stars appear black, and this white region is actually an absorption region. There's a lot of dust in there. And in this case, we have a dusty cloud on this side of the picture and an ionized hydrogen region over on this side. Now, if one looks at this in carbon monoxide, I will superimpose the brightness of the carbon monoxide on the optical picture. I'll take it away so you can see the optical picture again, and then superimpose it. What we're seeing is that the original dark cloud contains the molecules, and the ionized hydrogen region is eating away at that. This particular region has a series, not just the single horsehead, but a series of bumps on it, which are spaced at just about the spacing one might expect from Rayleigh-Taylor instability. I mean, we have a light, low-density gas on this side which is impinging on a high-density gas over here. So it's like the overburden of a heavy liquid on a light liquid. And these are either instabilities from that or regions of extra density. Some of the pressure in this region is caused by the rocket effect. The ultraviolet radiation from the stars in this region is hitting the surface of the molecular cloud, causing it to evaporate and putting pressure on it. Well, what about cloud lifetimes? We see numerous examples of these clouds. What can we learn about how long they might live? If they were to live a short time and suddenly turn all their material into stars, then there would be many more stars around in our galaxy. However, there is a question then of whether they might live a long time and have only short periods of star formation, or whether they live a short time, because in spiral galaxies such as the Andromeda Nebula shown here one can see a large contrast, optically, between things that one might call arm regions and interarm regions. The star formation seems to be very much concentrated in the arm regions. Well, at the bottom I have a blow-up of this little region, but let me show you a carbon monoxide picture in more detail. Here we have the same region, and you can see that the carbon monoxide is concentrated in an arc there. On the other side you can see a blow-up of the photograph, and then I will help you by putting this on top, and you can see that the dusty region where the molecules would be expected shows carbon monoxide. The other regions, the part in the center, show very little.
So it's clear that the molecular clouds, at least in M31, the Andromeda Nebula, are highly concentrated in the arms of the galaxy. In fact, there's at least a five-to-one contrast between the arm region and the interarm region, and in the interarm region, if there were just a single giant molecular cloud in our beam anywhere in this region, we would see more than we do. So it seems that at least in M31 the giant molecular clouds and the so-called spiral arms really go together. There's a difference, however. M31 looks a lot like what we think our galaxy looks like, but the carbon monoxide shown in this picture is only about a tenth as much as one would expect if one were seeing our own galaxy from the same perspective. So we can't take the association completely directly. Well, a similar sort of thing can be seen at least in parts of our own galaxy. This is some carbon monoxide which has been selected from a region where we think there's a spiral arm. You see that there are large extents of carbon monoxide. One might at least call this a cloud complex, another big cloud. There are numerous big clouds in this region. Let's see, this is at 34 degrees; a few degrees away, at 36 degrees, where we expect an interarm region, we see only a few very minor clouds. Again, the giant molecular clouds and giant molecular cloud complexes, at least in that part of our own galaxy, don't seem to be present between the arms, but are present on the arms or in the arms. That goes along with the picture that probably the giant molecular clouds are what really define the arms from an observationalist's point of view. Thus one is left with the problem of the lifetime of the clouds, how they are formed and destroyed. One expects that the same material is going to be used in the next generation of arms. I should have explained earlier that the material in a spiral arm does not stay there. The linear velocity in a galaxy is about constant, so that the inner part rotates with a much higher angular velocity than the outer part. Any structure would wind up and become very flat very quickly. In order to maintain anything which is not circular, it is required that the material move through the structure, that it be a density wave or something else which is not actually a material structure. I've given an overview, I think, of molecular clouds. I would like to close with a discussion of something I've been studying recently, and that is the structure of small dark clouds. May I have the first, the second slide, please? We're going to see a picture of what's known as a dark cloud. The next slide, please, yes. The dark cloud is known because it's relatively nearby, it's full of dust, and it obscures all the stars in the background. So it's not very dramatic, but you can see a region that's somewhat elliptical in which there are not very many stars to see compared to the rest of the region. Well, what happens when we look in carbon monoxide? Here is the 12CO, the common carbon monoxide, and it's a slightly funny plot in that it has a spatial coordinate this way and a velocity coordinate that way. We've taken a cut across the cloud and plotted the intensity and velocity as we go along that cut. And what you see is that as you go across the cloud this way, the velocity is changing. That's the picture of a rotation. This is a rotation which is counter to the general rotation of our galaxy. Now, when we look in the abundant species of carbon monoxide, we see only the outer part of the cloud, as I explained before.
If I now show you a slightly different presentation of the same sort of thing with C18O, you can see that the velocity now is horizontal and the same spatial offsets are vertical, but what we've shown is several spectra instead of a contour plot. And you can see the opposite slope. We're now looking at a species which is perhaps a five-hundredth as abundant as the first one. We're looking way into the cloud, and we're seeing a rotation in the opposite direction. If we look at 13 carbon monoxide, we can see both the rotation of the major part of the cloud and that of the minor part of the cloud. The core of this cloud, the part that's rotating in the same direction as the galaxy as a whole, contains perhaps 300 solar masses, and the outer part maybe 8,000. The axis along which I took these cuts is the small axis of the cloud. That is, the rotation axis seems to be the long axis of the cloud. It's about 15 light years by 30 light years. The typical picture of something which has collapsed has a pancake which is rotating around an axis perpendicular to the pancake, because it's easier to collapse along the rotation axis than perpendicular to it, in order to conserve angular momentum. In this case, the general galactic magnetic field is perpendicular to the direction of rotation. That may have caused the collapse of the cloud perpendicular to the field to be slowed. There's enough ionization, even though the cloud is basically neutral, that the material has a hard time crossing the magnetic field. The magnetic field may have been wound up by the cloud as it collapses and rotates. That is, like the ice skater, as the cloud comes in, it will spin more rapidly. Perhaps the core of the cloud spun much more rapidly than the outer part, wound up the magnetic field, storing energy just like in a rubber band. That then stopped the rotation and has accelerated it in the other direction. Or perhaps the core disconnected itself by completely winding up its magnetic field, and the outer part has done that. In any case, it's an interesting puzzle in cloud structure to see this cloud, which is rotating in both directions. Thank you.
|
Robert Wilson came to Lindau in 1982 together with his co-Nobel Laureate, Bell Labs colleague and astronomy research collaborator Arno Penzias. Their two talks did not explicitly touch upon their Nobel discovery, but reported from ongoing radio astronomy research concerning the interstellar medium, the space between the stars. Radio astronomy started up around 1950, but in the beginning the radio astronomers in general “saw” only a diffuse radiation of varying intensity. Then there was the important and promising discovery of a sharp line at 21 cm wavelength, emanating from hydrogen, the most abundant element in the Universe. With the invention of the maser amplifier by Charles Townes in 1954 (rewarded by a Nobel Prize in Physics ten years later), a very rapid development took place and from around 1960 characteristic radiation from a whole series of molecules was detected. Bell Labs were at the forefront in constructing amplifiers using the new technique and Robert Wilson in his lecture describes results from very precise observations of the molecule CO, carbon monoxide, in interstellar space. When this molecule rotates, it emits radio waves at around 2.6 mm wavelength, with small variations depending on what isotopes of C and O enters into the molecule. It turned out that “empty space” was not empty at all, but filled with molecular clouds of varying sizes and densities. Wilson describes how his measurements of three isotopes of CO laid the foundation of a new picture of, e.g., the Orion nebula. Behind the nebula he detected a giant molecular cloud, which made this well known and well studied nebula appear as only the tip of an iceberg! Anders Bárány
|
10.5446/52606 (DOI)
|
Good morning. Even though I have worked in Germany close to 20 years, my knowledge of the German language is exceedingly limited. So since more people understand English than Chinese, I will give this lecture in English. What I would like to do today is to go over some experiments on photons, leptons, quarks, and gluons. The study of the proton has progressed a lot since 1911, when Rutherford first measured the size of the atom and found it to be approximately 10 to the minus 8 centimeters. In the 1930s, through the work of Yukawa, the study of the nuclear force extended the range to 10 to the minus 13 centimeters. In 1953 came the fundamentally important experiment by Hofstadter, which measured the size of the proton. In 1963, Gell-Mann and Zweig postulated that inside the proton there are three kinds of elementary particles known as quarks, and from this one can explain the spectroscopy of the particles known at that time. Then, in 1968, came a very important experiment by Taylor and collaborators at SLAC, which measured electron scattering from a proton at a given momentum transfer Q squared; the cross section for this inelastic process is proportional to the point-like cross section, known as the Mott cross section, times the energy difference between initial and final energy, times the form factor W2. The measurement of this cross section shows that W2, as a function of the energy loss nu, goes down as 1 over nu, independent of Q squared. That means W2 goes as 1 over nu, and therefore sigma is essentially the point cross section, sigma Mott. That means you begin to see, at large momentum transfer and relatively small angles, scattering from points inside, and it shows therefore that indeed there are point-like structures inside the proton. If you plot the same curve as a function of Q squared, but normalized to the Mott cross section, you see the elastic cross section of course goes down because of the structure of the proton. The inelastic cross section at different energies is essentially almost constant, and it shows you are now probing the points inside the proton. This of course is a really fundamentally important experiment. From this one can explain a lot of phenomena, once you know that protons are made out of point-like particles and you assume there are three kinds of point-like particles known as quarks. One example is the experiment in which an energetic photon on a nuclear target produces two kinds of elementary particles: one we call the rho, which is a resonance of two pions, and another called the omega, which is a resonance of three pions. Two pions and three pions of course do not interfere in the final state, because of isotopic spin. But since the rho and omega have the same quantum numbers as the photon, and when the energy is high enough you can ignore the mass, this is really like elastic scattering of a photon; and therefore the rho and omega, with a small probability, one part in ten to the eight of the time, go to an electron-positron pair, and these of course do interfere in the final state. And this is the plotted mass spectrum as a function of the electron-positron mass, and you see the rho, which is a broad peak, with a coherent interference with the omega. The shape and exact size indeed can be explained by assuming three quarks, and the relative interaction of the three quarks does explain all the phenomena. The next question is of course how many quarks exist in nature besides the three known quarks.
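The scaling behaviour described above for the SLAC measurement can be summarized in standard deep-inelastic notation (the conventional textbook form, not necessarily the symbols used on the original slide):

```latex
% Inelastic e-p scattering: a point-like (Mott) factor times structure
% functions,
\frac{d^2\sigma}{d\Omega\,dE'} \;\propto\; \sigma_{\mathrm{Mott}}
\left[\, W_2(Q^2,\nu) \;+\; 2\,W_1(Q^2,\nu)\,\tan^2\!\tfrac{\theta}{2} \,\right].

% The observation that W_2 falls as 1/\nu independently of Q^2 is
% Bjorken scaling:
\nu\,W_2(Q^2,\nu) \;\longrightarrow\; F_2(x), \qquad x \;=\; \frac{Q^2}{2M\nu},
% i.e. the photon scatters off point-like constituents of the proton.
```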
Then in 1974 an experiment was carried out, one at SLAC by Richter's group, another at Brookhaven by my collaborators and myself, in which protons on a beryllium target produce electron-positron pairs. This is plotted here: on this axis is the mass of the electron-positron pair, against the relative yield. The yield is essentially flat, except at a mass of 3.1 billion electron volts, where you have a very sharp peak which exists for a very short time and decays into an electron-positron pair; and you measure this electron-positron pair, which we call the J particle. This particle has a rather long lifetime, about 10,000 times longer than all the other known particles. It is very heavy, three times the mass of the proton, and after the discovery of this particle a family of particles was discovered which, by emission and absorption of gamma rays, are related to this particle. Because of the long lifetime and heavy mass, and because of the close family associated with this particle, it cannot be explained by the known three quarks, and therefore a fourth quark is necessary. Once you have four quarks, you can naturally ask why there should be only four. Where is the fifth one, the sixth one, the seventh one and so forth? To look for more quarks, another experiment was carried out at the highest energy accelerator, the 3,000 billion electron volt (in the laboratory system) intersecting storage ring at CERN, where you have a 30 billion electron volt proton and a 30 billion electron volt proton collide with each other. The process therefore will be proton-proton goes to a pair of heavy electrons, muons, mu plus mu minus, plus X. In the quark model this process can be visualized very simply in the following way. One proton consists of many quarks, and the other proton consists of many quarks and also anti-quarks. So an anti-quark and a quark can annihilate into a photon, which goes to a mu minus mu plus. If this is the case, the cross-section can be visualized as a point-like cross-section of quark, anti-quark going to mu plus mu minus, which is this term, and a distribution function of quarks and anti-quarks, which is this term. In other words, if you express it in terms of the variable tau, which is m squared divided by s, the total center of mass energy squared, the scaled cross-section is then nothing but a function of tau only. Now this quantity, the quark and anti-quark distribution function, has been measured, again very precisely, in 1975: nu W2 as a function of x, at SLAC. With this, then, if the theory is right, you can in principle calculate what would be the behavior of proton-proton goes to mu plus mu minus, and whether you will see scaling or not. So this is the measured cross-section as a function of tau. If the theory is correct, then independent of energy, if you express it in this quantity, the square root of tau, all the cross-sections should be on top of each other. Indeed, whether it is 62 GeV or 44 GeV, the cross-sections, when you express them in this form, are exactly the same. The measured cross-sections themselves are of course quite different; they are only the same when you express them in units of tau. So this is a direct test of the fact that protons are made out of point-like particles, without any further assumptions of nuclear physics. The next question of course is how many quark and anti-quark particles exist. This is the directly measured cross-section as a function of the mass of mu plus mu minus.
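The scaling statement just made can be written schematically in standard Drell-Yan notation (a sketch rather than the exact expression on the slide):

```latex
% Quark-antiquark annihilation ("Drell-Yan"): q \bar q -> gamma* -> mu+ mu-.
% The pair-mass spectrum is a point-like QED factor times the quark and
% antiquark distributions (plus the term with x_1 and x_2 interchanged):
\frac{d\sigma}{dm} \;\propto\; \frac{1}{m^{3}}
\sum_q e_q^{2} \int_0^1\! dx_1 \int_0^1\! dx_2\;
q(x_1)\,\bar q(x_2)\;\delta\!\left(x_1 x_2 - \tau\right),
\qquad \tau \equiv \frac{m^{2}}{s},

% so the scaled quantity  m^{3}\, d\sigma/dm  is a function of tau only:
m^{3}\,\frac{d\sigma}{dm} \;=\; F(\tau),
% which is why the 44 GeV and 62 GeV data coincide when plotted
% against sqrt(tau).
```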
This enhancement comes from the fourth quark, which I call the J, which is the c c-bar, the fourth quark; and this additional peak is the confirmation of the existence of the fifth quark, which is called the Upsilon, which comes from the b b-bar quark. So now we know there are five. The question of course is, where is the sixth one, the seventh one, and so forth? To look for the sixth one means you have to travel once more, and this time a serious experiment was carried out at the highest energy electron-positron accelerator in the world, the 38 billion electron volt colliding beam accelerator located in Hamburg, Germany. There positrons and electrons were accelerated with the existing DESY synchrotron and then made to collide in four intersection regions. In each intersection region you have e plus e minus colliding with a total energy of 48 GeV, and you can look at whether there are new quarks or new particles. The detector for this type of experiment has become rather large. This is one of the detectors. This is the size of a person; the e plus e minus collisions occur here, surrounded by thousands of channels of detectors in the field of a superconducting magnet, and also with liquid argon calorimeters to measure the energy. So practically all the modern technology is used: fast electronics, fast computers, low temperature cryogenics, everything necessary is being used, and it is not cheap. So the first question I would like to discuss is the existence of free quarks. Free quarks have been reported at Stanford, and this is a search for free quarks: a measurement of energy loss as a function of apparent momentum in a detector, where a kaon would have this curve, a proton this curve, the deuteron, the triton; and a charge one-third or charge two-thirds quark with a mass of 5 GeV should be here. And therefore this shows that no free quark has been found at PETRA. The next question is, what is the size of the quark? To determine the size of the quark one can plot the measured electron-positron to hadron cross-section compared with the electron-positron to mu plus mu minus cross-section. This ratio is essentially the sum of the quark charges squared, as a function of energy. So the first three quarks u, d, s give R about 2; u, d, s, c give R about 3.5; u, d, s, c and b give R about 3.9. And now we assume that between quarks the force is transmitted by another particle, which we call the gluon, of which I will speak a little more. With the gluon there is a radiative correction and R changes a little bit. From the flatness you know that the quarks are point-like, and indeed a simple comparison with the theory would say their dimension is 10 to the minus 18 meters, which means it is one part in ten thousand of a typical atomic nucleus. It also shows there are no sharp structures. So now we have mentioned the search for the free quark and the size of the quark; let us now concentrate on the search for the sixth quark. The sixth quark we call the top quark. People give different names; of course you can use your own names. The first three are u, d, s; the fourth one is called charm, the fifth one is called beauty, and this one is called top. This is the latest measurement on the search for the sixth quark. What is plotted here is the same unit R, obtained by scanning the electron-positron energy in 20 MeV intervals. If there is a sharp new particle, of course there should be a sharp resonance, and this you have seen here.
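The plateau values quoted above can be checked against the simple quark-counting formula (my arithmetic; the slightly higher numbers in the talk include the small gluon correction):

```latex
% R = sigma(e+e- -> hadrons) / sigma(e+e- -> mu+mu-) = 3 * sum of e_q^2:
R_{uds} \;=\; 3\!\left[\left(\tfrac{2}{3}\right)^2
 + \left(\tfrac{1}{3}\right)^2 + \left(\tfrac{1}{3}\right)^2\right] \;=\; 2,
\qquad
R_{udsc} \;=\; 2 + 3\left(\tfrac{2}{3}\right)^2 \;=\; \tfrac{10}{3} \approx 3.3,
\qquad
R_{udscb} \;=\; \tfrac{10}{3} + 3\left(\tfrac{1}{3}\right)^2 \;=\; \tfrac{11}{3} \approx 3.7.
% With the O(alpha_s/pi) gluon correction these become roughly 2, 3.5 and 3.9.
```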
Every time there is a sharp new particle there is a spike; when you see the b, there is a sharp spike here. The question is, what about the sharp spikes in between, in here? And the latest experimental result does not seem to indicate the existence of a sharp peak. And this by itself is somewhat puzzling, because the first three quarks give particles of mass 1 GeV, c gives a mass of 3 GeV, b gives a mass of 9 GeV, so you have 1, 3, 9. You would imagine the next one should be 27, and we have gone to 37. The search was also extended in here, and nothing was there. And you can search for new quarks in many ways, and this is another way of searching. What is plotted here is a variable which I call thrust, which is nothing but, in the center of mass system, the sum of the parallel momentum compared with the total momentum. And so for this, of course, you have a certain distribution. And this distribution, these measured points, agrees with the simple five-quark-plus-gluon theory, known as QCD; the theory agrees with the data. If there exists a sixth quark, then because the sixth quark is produced more or less at rest, when it decays it decays somewhat isotropically, and therefore there will be more yield at low thrust. And indeed, if you have a charge one-third quark you have this curve, if you have a charge two-thirds quark you have this curve, and there is clearly nothing. This is another way to exclude a sixth quark. Now let me discuss a little bit the physics of gluons. Gluons are particles which transmit the forces between quarks. The first, I think, important contribution PETRA has made is the discovery of three-jet events, which I will explain to you in a minute, compared with quantum chromodynamics, the physics of quarks and gluons. Do not let this slide scare you. This slide says the following. You have e plus e minus annihilate into a photon, which goes to a quark and an anti-quark. The quark and anti-quark pick up other quarks and anti-quarks from the sea and combine themselves into hadrons, and then produce pi's and K's. Because the e plus e minus have high momentum, this quark and anti-quark are produced very energetically, and therefore the particles are very much collimated and therefore known as jets. So normally you have two jets. If there are gluons, then a gluon can be emitted from a quark, and the gluon by itself can decay again into a quark and anti-quark and then can fragment into another jet. And therefore, if there exists a gluon, when the energy is high enough you will see three-jet events. And when the energy is even higher you can see four-jet events and multi-jet events. This is the observation of three-jet events: an e plus e minus collision, and this is the energy distribution of the event when you are looking perpendicular to the event plane. You will see two lobes. That's because initially you have e plus e minus on a line, so whatever is produced along this line has to be conserved; this is nothing but conservation of momentum. If you are looking from the top of the event you will see one lobe, a second lobe and a third lobe: the quark, anti-quark and gluon jets. The size and the distribution of the gluon jet agree with our knowledge, with our theory known as quantum chromodynamics. That is the first indication of the physics of the gluon in e plus e minus. Much more study has been carried out, and this is an example of the first three-jet events. Here is a plot of the transverse momentum distribution for the hadrons. If you only have two quarks, the event of course is more collimated.
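The thrust variable described above has a standard definition (the conventional formula, which matches the verbal description in the talk):

```latex
% Thrust: maximize the summed longitudinal momentum along an axis n.
T \;=\; \max_{\hat n}\;
\frac{\sum_i \left|\, \vec p_i \cdot \hat n \,\right|}
     {\sum_i \left|\, \vec p_i \,\right|} ,
% T -> 1 for two back-to-back collimated jets,
% T -> 1/2 for a perfectly isotropic event,
% so a heavy quark produced near threshold and decaying isotropically
% piles up at low thrust.
```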
If you have three jets, the event of course has larger perpendicular momentum, and indeed there is the gluon distribution compared with the data. There is the low energy data, at 12 GeV, when you have only two quarks; and at 30 GeV, when you have three jets, two quarks plus a gluon, this is the event distribution. You can also study the spin of the gluon. To do that you transform into a system where the anti-quark and gluon are back to back, and you measure this angular distribution. These are the measured events as a function of this angle: if the gluon has spin zero you have this distribution, and if the gluon has spin one you have this distribution. Clearly the gluon has spin one. Then you can also measure the strength of the coupling between gluons and quarks. Now, when you have e plus e minus go to a photon, go to q q-bar, and a gluon is produced, you produce three jets. That is some of the time; most of the time they go to two jets. The ratio between two-jet and three-jet events is clearly a measurement of this coupling constant. The only two free parameters in this theory are, one, the coupling constant, and the other, the momentum distribution of the quarks. And so this curve is the rate of two-jet versus three-jet events, and this curve is the rate of the momentum distribution out of the production plane of the gluon, of the three-jet events. The intersection here shows the quark momentum spread is about 300 MeV, like the hadrons, and the coupling constant is about 0.2. And you can make even more detailed studies of the physics of gluons and quarks, and that is to study events with a muon in the final state. When you have e plus e minus producing quarks, the heaviest quark so far is the b quark; b b-bar is produced at a mass of 9 GeV. The b quark of course can decay to a c quark plus a muon, and therefore if you tag a muon it gives you some handle on which quark you select. And so you have one jet, another jet, and you tag a muon; that just gives you a more sensitive way to isolate the processes. The first thing you can do is search; it gives you a more sensitive way to search for new quarks. This again is a thrust distribution of all the events that have one muon produced in them. The points are the measurement. The green line is the five-quark model with the gluon, clearly in agreement with the data. If you had a charge one-third quark at 8 GeV, or a charge two-thirds quark at 8 GeV, you would have a distribution like this. And that again shows there are no additional new quarks. You can also refine your measurement to study the rate at which the b quark decays into c quarks. You measure the transverse momentum distribution of the muons: when you have the ordinary quarks u, d, s, because they are very light, the transverse momentum distribution is peaked at small values. The c quark is a little bit heavier, so the transverse momentum distribution is a little bit broader. The b quark is the heaviest, so you have an even broader transverse momentum distribution. So measurement of the transverse momentum distribution enables you to identify this process. And indeed, this is the measured transverse momentum distribution, and from this distribution you get a b quark branching ratio of about 8%, which is a rather small number. Another thing you can do in studying the physics of quarks and gluons associated with the muon is a precision study of quantum chromodynamics. And you do this in the following way. You have e plus e minus produce a q q-bar, which fragments into hadrons. And if this q decays to another q prime, of course, it emits a muon. And then by tagging this muon, you have a more sensitive way to isolate various processes.
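Returning for a moment to the two-jet versus three-jet ratio mentioned above, the logic can be sketched as follows (schematic only; the exact coefficient depends on the jet definition and is not taken from the talk):

```latex
% Gluon emission is an order-alpha_s process, so, schematically,
\frac{\sigma(e^+e^- \to 3\ \mathrm{jets})}{\sigma(e^+e^- \to 2\ \mathrm{jets})}
\;\propto\; \frac{\alpha_s(Q^2)}{\pi},
% and the fit quoted in the talk gives
\alpha_s \;\approx\; 0.2 \quad \text{at } \sqrt{s} \approx 30\ \mathrm{GeV},
% together with a quark fragmentation momentum spread of about 300 MeV.
```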
And indeed, the green points are the measurement of the thrust distribution without a muon. If the model is exactly correct, muon and hadron events should not be different. And the red points are the measurement with a muon. This clearly shows they are the same, and it shows the theory again is correct in this sensitive test. So much for quarks and gluons. The next topic I would like to touch on is the measurement of the size of the muon, electron, and tau leptons; in other words, a test of quantum electrodynamics. Let me summarize by saying that the measurements on the muon, electron and tau show that quantum electrodynamics is correct. And if you want to express the radius or size: the muon, electron and tau are smaller than 10 to the minus 16 centimeters. And this shows the measurement, as a function of energy, of the cross-section of e plus e minus goes to tau plus tau minus, compared with the prediction of electrodynamics. This agreement enables you to parameterize the size of the tau. The tau, let me remind you, is twice the mass of the proton, but its measured size, up to now, is a thousand times smaller than the size of the proton. The next question you can ask is how many leptons exist. We now know there is an electron, there is a mu, there is a tau. The tau has a mass of 1.8 billion electron volts. The question is how many more of this type of family exist. This is the measurement of the number of events as a function of a heavy lepton mass. The solid curve is the prediction of the number of events if a heavy lepton exists. The red curve is a 95% confidence level upper limit. From this one can see that between 2 GeV and 16 GeV there are no more heavy leptons. The next question I will deal with is the theory of Weinberg and Salam and the effect of the Z0. The first important experiment was carried out in 1978, again by the group of Taylor, on polarized electron scattering from a nucleus. When you have an electron with a polarization of about 40% scattering from a nucleus, two terms contribute: one is the photon, another is the Z0, which carries the weak force. And so because of the Z0 you have a parity violation effect. And so you measure an asymmetry, which is the difference between the right-handed cross-section minus the left-handed cross-section, compared to the sum. And this is the measurement of the precession of the spin in the analyzing magnet. This is the experimental asymmetry, normalized after you remove the polarization and Q squared dependence, as a function of incident energy, which is measured in the analyzing magnet; and you see the spin precession, in units of 10 to the minus 5, and it shows parity indeed is violated in this process. A direct comparison with the Weinberg-Salam theory is shown here. This is the measured asymmetry normalized to Q squared versus a quantity which is called y, and this is the Weinberg angle, and this is the data. The data say the angle is about 0.2. And these are other possible models, which clearly are thrown out by this measurement. There is, of course, another way to study the weak neutral current, and that is in the time-like region. And that is by comparing the mu plus mu minus production from e plus e minus, which has a photon term and a Z0 term. And then if you measure the forward minus backward distribution of mu plus mu minus, you will see an asymmetry. The theory of Weinberg and Salam gives minus 9.2 percent; the measurement by the TASSO group is minus 16 plus or minus 3.2 percent. So this is the data compared with the theory, and this, the pure electromagnetic interaction, is clearly thrown out.
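For reference, the asymmetry being discussed is defined in the standard way (the definition is the conventional one; the numerical predictions quoted in the talk come from the Weinberg-Salam model):

```latex
% Forward-backward charge asymmetry in e+e- -> mu+mu-:
A_{FB} \;=\; \frac{N_F - N_B}{N_F + N_B},
% where "forward" means the mu- emerges in the hemisphere of the
% incoming e-.  The asymmetry arises from gamma-Z0 interference and
% grows with s; pure one-photon exchange gives A_FB = 0 at lowest order.
```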
A more precise measurement has been carried out recently by my group, and that is shown here. Measurement of the forward-backward asymmetry is a very difficult process because the effect is very small. And therefore the first thing you ought to do is calibrate your detector, to make sure your detector has no asymmetry. This is the measurement of the cosmic ray asymmetry, and it shows the detector is symmetrical to 1 percent. Measurement of mu plus mu minus at low energy, 14 GeV and 22 GeV, shows no asymmetry, again in agreement with the theory. Measurement at high energy clearly shows an asymmetry. And since you now know the systematic error is less than 1 percent, there is some confidence in quoting the errors: minus 8.4 plus or minus 2.1 percent; the Weinberg-Salam prediction is minus 7.6 percent. And this also sets a limit on the Z0 mass of larger than 51 GeV. In the next months' time, PETRA will go to 45 GeV with a luminosity of 10 to the 31. With this, one should be able to measure the charge asymmetry within one year to an accuracy of 13 plus or minus 1.8 percent. And therefore you will be able to find the mass of the Z0 to 25 percent accuracy. And that should therefore be the program for DESY for the next few years' time. Of course you can always continue the search for the top quark up to 45 GeV, the highest energy of PETRA. Professor Glashow's current prediction is 38 plus or minus 2 GeV. Theory is always a little bit ahead of experiment; of course it is interesting if it is correct. After that, at this moment there is a very important plan at DESY to use the existing electron-positron colliding beam machine as a pre-accelerator for a large energy electron-proton accelerator. And that means you have a proton in one direction and an electron in another direction, colliding here, and you can perform experiments. If this accelerator is realized, you can then pretty much see what particle physics will be until the end of this century. Until the end of this century, in the United States you have a proton-antiproton collider, 2,000 GeV; the basic reaction of course is quark-antiquark. Since each quark carries about one-third of the energy, the total quark-antiquark energy is 660 GeV. The distance you can probe is about 5 times 10 to the minus 18 centimeters. You study the strong interaction. In Geneva, as I said, there is an e minus e plus machine which has 130 GeV and 130 GeV. In this case you have a pure lepton interaction. The total center of mass energy is 260 GeV, 10 to the minus 17 centimeters; you essentially study the electroweak interaction. In Hamburg, if this project goes ahead, you have electron-proton, which is just a cross between these two; that is what makes it so interesting. That one is lepton-lepton, this one is quark-quark, and this is lepton-quark. And this will probe 100 GeV in the center of mass, the distance is 3 times 10 to the minus 17 centimeters, and you will study the electroweak and quark interactions combined. And that means for particle physics, at least if all these projects go ahead, one can certainly do much interesting physics until the turn of this century, at which time I will be retired anyway. Thank you. Thank you.
|
This is Samuel Ting’s second lecture held at the Lindau Meetings, three years after the first one. The two lectures are connected and tell more or less the same story, the story of a travelling high-energy physicist, who moves from one accelerator to another in search of higher and higher energies. Ting seems immediately to have understood the idea behind the Lindau Meetings and, as a number of other Nobel Laureates, fallen in love with it. Understanding the idea partly means that he knows that most of the students and young researchers in the audience are different from meeting to meeting. So he could in principle tell more or less the same story every time. But Ting, even as a Nobel Laureate, is an extremely active physicist and he cannot resist telling the latest news from his work. So after an historic introduction, involving a long list of Nobel Laureates such as Rutherford, Yukawa, Hofstadter, Gell-Mann and Taylor, he concentrates on his own work. This involves the discovery of the 4th quark, the charm quark, for which he received the 1976 Nobel Prize in Physics together with Burton Richter. He then moves on to the work he has done looking for more quarks, since at the time of the lecture the number of quarks was still an open question. The 5th quark had been discovered, but Ting could not find any signs of the 6th quark. So he instead brings up the question of how large the quarks are and concludes that according to his experiments they are pointlike. At the PETRA accelerator in Hamburg, where very high energy beams of electrons and positrons were made to collide, he worked in a team which discovered the so-called three jet events. These are signatures of the existence of the carrier of the strong force, the gluon. The discovery of the gluon has not been recognised with a Nobel Prize in Physics, maybe because there were so many collaborators in the experiments. So far (2012) the physics prize has not been given to whole groups. Anders Bárány
|
10.5446/52608 (DOI)
|
Physicists naturally try to see phenomena in simple terms. You might say that the primary justification for doing elementary particle physics with all its expense and difficulty is the opportunity it gives us of seeing all of nature in somewhat simpler terms. Great progress had been made a few years ago, say, from the late 1960s to the mid-1970s, in clarifying the nature of the elementary particles and their interactions. Then starting about the mid-1970s, we began to confront a series of problems of much greater difficulty. And I would have to say that very little progress has been made. I would like, first of all, today to remind you of what the present understanding of elementary particle physics is, as it was already formulated by the mid-1970s. And then for the larger part of my talk, discuss the attempts that have been made since the mid-1970s to go beyond this to the next level of simplicity. The present understanding of elementary particle physics, I would say, is based on three chief elements. First of all, there is a picture of the interactions of the elementary particles as being all extremely simple, similar to the one interaction that was earlier well understood, the electromagnetic interaction. You know that the electromagnetic interaction is carried by a massless particle of spin 1, the photon, which, for example, is exchanged between the electron and the proton in the hydrogen atom. In the present understanding of elementary particle physics, there are 12 "photons", and I should do this, meaning that the word is put in quotation marks. There are 12 photons which carry all the forces that we know of between the elementary particles. These 12 photons comprise, first of all, the familiar old photon, which is emitted, say, by an electron or by any charged particle, and then three siblings, three family members called intermediate vector bosons, a W minus, a W plus and a Z zero, which are emitted, for example, when leptons change their charge, when electrons turn into neutrinos or neutrinos turn into electrons, and the neutral one, the Z zero, is emitted by electrons and neutrinos when they don't change their charge. Similarly, the W and the Z are also emitted by quarks when they do or do not change their charge, respectively. In addition to the four, quote, photons, unquote, of the electro-weak interactions, there are eight similar particles known as gluons, that Sam Ting has already mentioned, which are emitted when quarks change not their charge, but a different property which has been humorously named their color, so that a green quark may turn into a red quark emitting a red-green gluon. There are three colors, and hence there are eight gluons. You may ask, why not nine gluons? And I will tell you if you ask me privately later. Now, in electromagnetism, we have not only the general idea of a photon, but a very clear picture of a symmetry principle of nature which determines the properties of the photon and determines in particular all of its interactions, the principle known as gauge invariance. In a sense, from the point of view of the theoretical physicist, the photon is the way it is because gauge invariance requires it to be that way. The twelve photons of the elementary particle interactions, as we know them, are also governed by a principle of gauge invariance, but the group of gauge transformations is larger, and it is known mathematically as SU3 cross SU2 cross U1.
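The count of twelve "photons" is just the number of generators of the gauge group, which can be written out for clarity (standard group-theory counting, added here and not part of the spoken lecture):

```latex
% Number of gauge bosons = dimension of the gauge group:
\dim SU(3) = 3^2 - 1 = 8 \ (\text{gluons}), \qquad
\dim SU(2) = 2^2 - 1 = 3, \qquad
\dim U(1) = 1,
% total 8 + 3 + 1 = 12.  The photon and the Z^0 are mixtures of the
% neutral SU(2) boson and the U(1) boson.
```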
The SU2 cross U1 is a four-dimensional group which governs the four particles of the electro-weak interactions. The W and the Z transmit the weak nuclear force which gives rise to radioactive beta decay. So this whole set of interactions is called the electro-weak interactions, and the SU3 governs the eight gluons which give rise to the strong nuclear forces. This is sometimes called the 3-2-1 theory. The theory of the eight gluons by itself is what is known as quantum chromodynamics. The electric charge of the electron, say, is in fact just a peculiar weighted average of coupling constants, G and G prime, associated with these two groups, SU2 and U1. G and G prime play the same role for these groups of gauge transformations that the electric charge played in the older theory of quantum electrodynamics, and the electric charge, in fact, is given by a simple formula in terms of them. And similarly, there is another coupling constant. A coupling constant is just a number that tells you the strength with which these particles are emitted and absorbed. There's another coupling constant that tells us how strongly gluons are emitted, say, when quarks change their color, known as G sub s for the group SU3. Now this is a very pretty picture, especially since, based as it is on familiar old ideas of gauge invariance, it requires us really to learn very little new, which is always to be preferred. But there is an obvious difficulty with it. That is that gauge invariance requires that the vector particles, the spin-1 particles that transmit the force, have zero mass. Just as, for example, electromagnetic gauge invariance requires the photon to have zero mass. Of course, the W and the Z do not have zero mass; they have masses which are so large that no one so far has been able to produce them, although we have strong reasons to think we know where they are. The explanation for this is now universally believed to be that the gauge symmetry, although precise and exact, in no sense approximate, is subject to a phenomenon known as spontaneous symmetry breaking. That is, these are symmetries of the underlying equations, but they are not realized in the physical phenomena. This is a lesson that elementary particle physicists learned from solid state physicists, who understood it much earlier than we did, that symmetries can be present at the deepest level and yet not apparent in the phenomena. The symmetries are just as fundamental as if they were not broken, but they are much harder to see. Because the electro-weak symmetry of SU2 cross U1 is spontaneously broken, the W and the Z have masses. The W mass is greater than 40 GeV, the Z mass is greater than 80 GeV, and the precise values are determined by an angle, which just basically tells you the ratio of these two coupling constants. The angle is measured in a great variety of experiments, and on the basis of that, we believe the W will be at a mass of about 80 GeV and the Z will be at a mass of about 90 GeV. Of course, we anxiously await confirmation of that. The second of the three ingredients or the three elements on which our present understanding is based is the menu of elementary particles. I won't dwell on this. There are six different varieties of quarks, of which five have been discovered and the sixth is anxiously awaited. Each one of these varieties, sometimes called flavors of quarks, according to quantum chromodynamics, comes in three colors, so that altogether there are 18 quarks.
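The mass estimates quoted earlier in this passage follow from the tree-level relations of the standard electroweak theory (textbook formulas, not taken verbatim from the lecture):

```latex
% Electric charge as a weighted average of the two couplings:
e \;=\; \frac{g\,g'}{\sqrt{g^2 + g'^2}} \;=\; g\sin\theta_W,
\qquad \tan\theta_W = \frac{g'}{g}.

% Tree-level mass predictions:
m_W \;=\; \left(\frac{\pi\alpha}{\sqrt{2}\,G_F}\right)^{\!1/2}
\frac{1}{\sin\theta_W}
\;\approx\; \frac{37.3\ \mathrm{GeV}}{\sin\theta_W},
\qquad
m_Z \;=\; \frac{m_W}{\cos\theta_W},
% so sin^2(theta_W) ~ 0.22 gives m_W ~ 80 GeV and m_Z ~ 90 GeV.
```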
And then in parallel to the quarks, there are doublets of leptons, the neutrino and the electron; and then the muon, behaving like a heavier electron, has its own associated neutrino, and the tau lepton has its associated neutrino. Multiple processes are most simply seen in terms of these quarks and leptons. So for example, when a neutron decays, a state which was originally an up quark and two down quarks of three different colors turns into two up quarks and a down quark of three different colors, a W minus being emitted which then turns into an electron and an anti-neutrino. This menu of elementary particles is in no sense forced on us by theory; except for the structure of doublets of quarks and leptons and color triplets of quarks, the fact that there are six flavors is just taken from experiment. And it has to be regarded as just an independent empirical foundation of our present understanding. The third of the foundations of the present understanding of physics is more mathematical but I think equally important, the idea of renormalizability. Renormalizability is very simply the requirement that the physical laws must have the property that whenever you calculate a physically relevant quantity, you don't get nonsense. You don't get a divergent integral. You get an integral which converges, that is, a finite number. I think we'll all agree that that's a desirable quality of a physical theory. The physical theories that satisfy that requirement are actually very easy to distinguish. Suppose an interaction of some sort has a coupling constant G, like the coupling constants G and G prime and Gs that I discussed earlier, and suppose that coupling constant has the dimensions of mass to some power minus D, let's say a negative power, with D positive. And when I talk about dimensions, I will always be adopting the physicist's system of units in which Planck's constant and the speed of light are one. Then because the coupling constant has the dimensions of a negative power of mass, the more powers of the coupling constant you have in the matrix element for any physical process, the more powers of momentum, which has the dimensions of a positive power of mass, mass to the first power, you will have to have in the integrals; so that as you go to higher and higher order in the coupling constant, you get more and more powers of momentum in the integrals, and the integrals will therefore diverge worse and worse. That's a bad thing. That's not a renormalizable theory. The allowed interactions, the renormalizable theories, are therefore those with coupling constants which are not negative powers of mass, but which are either dimensionless, like the electric charge of the electron, or a positive power of mass, like, for example, any mass. A physically satisfactory theory ought to be one which contains only such coupling constants. Now that is enormously predictive, because the energy densities or the Lagrangians that determine physical laws always have a fixed dimensionality, mass to the fourth power. And I remind you that I'm using units in which Planck's constant and the speed of light are one, so energy has the unit of mass and length has the unit of inverse mass. So therefore, if a coupling constant appears in an energy density and it multiplies an operator, a function f, with the dimensionality mass to some power d, then the dimensionality of the coupling constant must be just the dimensionality of the Lagrangian minus that of the operator, namely 4 minus d.
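The power counting just described can be made concrete with a worked example (standard dimensional analysis in natural units; the example is mine, not Weinberg's):

```latex
% In units with hbar = c = 1 the Lagrangian density has dimension
% [L] = mass^4.  If a coupling g multiplies an operator O of dimension d,
[g] \;=\; \mathrm{mass}^{\,4-d},
% with [boson field] = 1, [fermion field] = 3/2, [derivative] = 1.

% Example: the Fermi interaction  G_F\,(\bar\psi\psi)(\bar\psi\psi)
% has d = 4 x 3/2 = 6, so [G_F] = mass^{-2}, a negative power of mass,
% hence non-renormalizable.  The gauge couplings g, g', g_s multiply
% dimension-4 operators and are dimensionless, hence renormalizable.
```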
So therefore, in order to keep the dimensionality of the coupling constant positive or zero, we must have the dimensionality of the operators 4 or less. But almost everything has positive dimensionality. Fields have dimensionality one for boson fields, or three halves for spinor fields. Derivatives, space-time derivatives, have dimensionality one. And therefore, as you make an interaction more and more complicated, its dimensionality inevitably increases. But the allowed interactions have dimensionality only 4 or less. And therefore, the principle of renormalizability limits the complexity of interactions that are allowed in physics. This is just what physicists need. They need something to tell them, do not think about all conceivable theories, think about a limited set of simple theories. The limited set of simple theories that we allow ourselves to think about are those with interactions whose dimensionalities are 4 or less, and which therefore are sharply limited in the number of fields and derivatives that they can have. In fact, so powerful are these limitations that principles A, B, and C determine a physical theory uniquely, except for a few free parameters. The free parameters are things like the electric charge of the electron, the Fermi coupling constant of beta decay, the mixing angle between the Z and the photon, a scale parameter of quantum chromodynamics, which tells us where the strong coupling constant begins to become very large. And of course, all the quark and lepton masses and masses for other particles called Higgs bosons that I haven't mentioned. But aside from this fairly limited set of free parameters, not as limited as we would like, but still not an enormous number of free parameters, the physical theory of elementary particles in their observed interactions is completely determined. And not only determined, but it agrees as far as we know with experiment. One of the features of this understanding, which I think is perhaps not as widely emphasized as I would like, and to me it seems one of the most satisfactory aspects of what we know about physics, is that the conservation laws of physics that were painfully deduced from experiment in the 1930s, 1940s, 1950s and 1960s are now understood as consequences, often approximate consequences, of these deeper principles. The theory as constrained by gauge invariance and by renormalizability and other fundamental principles cannot be complicated enough to violate these symmetry principles. So for example, as long as you assume that certain quark masses are small, the strong interactions must obey the symmetries of isotopic spin invariance, chirality, and the eightfold way of Gell-Mann and Ne'eman, which were previously deduced on the basis of data. And whatever the values of the quark masses, the strong and the electromagnetic interactions must conserve the quantities known as strangeness, charge conjugation invariance, and with certain qualifications parity and time reversal invariance. And independently of the values of the quark masses and without any qualifications at all, the strong, weak, and electromagnetic interactions must conserve baryon and lepton number. There is no way of writing down a theory complicated enough to violate these conservation laws that would still be consistent with the principles that I've described. How much time do I have? Not bad. This understanding of the physical origin of the symmetry principles leads us to a further reflection.
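A compact way to restate the dimensional bookkeeping behind this argument, in units where Planck's constant and the speed of light are one (this is a standard textbook summary rather than a formula written down in the lecture), is:
\[ [\mathcal{L}] = M^4,\qquad [\phi_{\rm boson}] = M,\qquad [\psi_{\rm fermion}] = M^{3/2},\qquad [\partial_\mu] = M, \]
\[ \mathcal{L} \supset g\,\mathcal{O},\qquad [\mathcal{O}] = M^{d}\ \Longrightarrow\ [g] = M^{\,4-d}, \]
so the requirement that no coupling constant carry a negative power of mass is simply the requirement that every interaction operator have dimensionality d of 4 or less.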
We now understand why, let us say, strangeness is conserved. Strangeness, the property that distinguishes a K-meson from a Pi-meson or a hyperon from a nucleon, is not conserved because the laws of nature contain on some fundamental level a principle of strangeness. Strangeness is conserved as a more or less accidental consequence of the theory of strong interactions known as quantum chromodynamics. The theories simply can't be complicated enough to violate the principle of strangeness conservation. Because strangeness conservation can be understood without invoking strangeness as a fundamental physical conservation law, we are led to reflect that perhaps it is not a fundamental symmetry, and perhaps when you widen your scope beyond the strong interactions, you will see that strangeness is not conserved. That's in fact, of course, true, and it's been known to be true since the first days that people started talking about strangeness conservation. The weak interactions don't conserve strangeness. For example, a hyperon is able to decay into an ordinary proton or neutron violating the conservation of the strangeness quantum number that distinguishes the two. In exactly the same way, because we now understand that baryon and lepton number, by the way, baryon number is just the number which counts the number of quarks. It's one-third for each quark, and lepton number is a number which counts the number of leptons. It's one for each lepton. The conservation of baryon and lepton number, for example, prohibits processes like the proton decaying into a positron and a pi-zero, which would otherwise be allowed. Because the conservation of baryon and lepton number is understood as a dynamical consequence of the present theory of electro-weak and strong interactions and the principle of renormalizability, there is no reason to doubt that when we go to a wider context, this conservation law will be found to be violated, because it is not needed as a fundamental conservation law. It is understood without it's being needed on a fundamental level. A great deal of attention has been given to this possibility that baryon and lepton number are not conserved. Suppose for example that there are exotic particles with masses much greater than the w or the z. Let me take the mass, capital N, as just some characteristic mass scale, for a new world of exotic particles that have not yet been discovered. By exotic, I mean rather precisely particles with different quantum numbers for the gauge symmetries, SU3 cross SU2 cross U1, than the known quarks and leptons and gauge bosons. The world that we know of is just the world of those particles that have much smaller masses than this new scale, capital M. That world is described since we're not looking at all of physics, we're only looking at part of physics, not by a fundamental field theory, but what's called an effective field theory. We should describe our present physics in terms of an effective Lagrangian. That effective Lagrangian, since it's not the ultimate theory of physics, might be expected to contain non-renormalizable as well as renormalizable terms, in the same way that when Euler and Heisenberg in the mid-1930s wrote down an effective Lagrangian for the scattering of light by light at energies much lower than the mass of the electron, they wrote down a non-renormalizable theory, because they weren't working on a fundamental level, but only with an effective theory that was valid as a low energy approximation. 
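Schematically, the effective field theory being invoked here can be organized as (the notation, with generic coefficients c_i and a single heavy scale M, is the conventional modern one and is not taken verbatim from the talk):
\[ \mathcal{L}_{\rm eff} \;=\; \mathcal{L}_{\,d\le 4} \;+\; \sum_i \frac{c_i}{M^{\,d_i-4}}\,\mathcal{O}_i \qquad (d_i > 4), \]
so that at energies E much less than M the non-renormalizable terms are suppressed by powers of E/M, which is why low-energy physics looks renormalizable even if the underlying theory is not.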
The effective theory should contain non-renormalizable terms, and as I indicated before, these are terms whose coupling constant has the dimensionalities of a negative power of mass, that is, we have operators O with dimensionality D and coupling constants with dimensionality 1 over mass to the D minus 4. And what mass would this be? Well it would have to be the mass of the fundamental scale, of the particles that have been eliminated from the theory, the same way the electron is eliminated from electrodynamics in the Euler-Heisenberg theory of the scattering of light by light. This tells us then that the reason that physics appears to us to be dominated by renormalizable interactions at low energies is not because the non-renormalizable interactions aren't there, but because they're greatly suppressed by negative powers of some enormous mass. And we should expect in the physics of low energies to find not only the renormalizable interactions of the known electro-weak and strong interactions, but much weaker, subtler effects due to non-renormalizable interactions suppressed by very large masses in the denominator of the coupling constant. There has been work by myself and Wilczek and Zee to catalog all possible interactions of this sort up to dimension 6 or 7 operators. The lowest dimensional operators that can produce baryon violation turn out to be dimension 6 operators, and hence according to the general rules I've given you they are suppressed by two powers of a super large mass. A catalog has been made of these dimension 6 operators of the form quark-quark-quark-lepton. A catalog has been made of all of these interactions, and it turns out that they all satisfy the principle that although they violate baryon and lepton conservation, they violate them equally, so that, for example, the proton can decay into an anti-lepton, and the neutron can also decay into an anti-lepton (the neutron can decay into e plus pi minus), but the neutron cannot decay into a lepton (the neutron cannot decay into e minus pi plus). And there are other consequences of the simple structure of these operators, something like a delta I equals one-half rule. The decay rate of the proton into a positron is one half the decay rate of the neutron into a positron. We can say all these things with great confidence without being at all sure that protons and neutrons actually decay, that is, decay at an observable rate. The decay rate of the proton, let us say, will be suppressed in the matrix element by two powers of a superlarge mass, and there is sure to be a coupling constant factor like the fine structure constant. You square the matrix element and multiply it by a phase space factor, the proton mass to the fifth power, to get a decay rate. From this, the big unknown in this formula for the proton decay rate is, of course, the super-heavy mass scale. We know the proton is extremely stable; its lifetime is longer than 10 to the 30th years, and therefore this mass must be very large indeed. It must be larger than about 10 to the 14th GeV. There are other effects that violate known symmetries. Lepton number is violated by an operator that has dimensionality not six but only five, and this produces neutrino masses of the order of 300 GeV squared divided by the super-heavy mass scale. That's a very low neutrino mass. That's less than one electron volt if the mass scale is greater than 10 to the 14th GeV.
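The orders of magnitude quoted here follow from dimensional estimates of roughly the following form (the numerical prefactors are schematic; only the powers of the super-heavy mass M are the point):
\[ \Gamma_p \;\sim\; \alpha^2\,\frac{m_p^5}{M^4}, \qquad \tau_p \gtrsim 10^{30}\ \text{yr} \;\Longrightarrow\; M \gtrsim 10^{14}\ \text{GeV}, \]
\[ m_\nu \;\sim\; \frac{(300\ \text{GeV})^2}{M} \;\lesssim\; 1\ \text{eV} \quad\text{for}\quad M \gtrsim 10^{14}\ \text{GeV}, \]
the first line coming from the two-powers-of-M suppression of the dimension-6 baryon-violating operators, and the second from the single power of M suppressing the dimension-5 lepton-number-violating operator.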
Now, there is other debris which might be expected to be found in the low-energy world, and I simply won't have time to discuss this. In a sense, gravity itself can be regarded as the debris in our low-energy effective field theory of a more fundamental theory that describes physics at a mass scale above 10 to the 14th GeV. Why in the world should there be a mass scale so much larger, 12 orders of magnitude larger than the highest masses that we're used to considering? A possible answer comes from the general idea of grand unification. Grand unification is very simply the idea that the strong and electro-weak gauge groups are all parts of a larger gauge group, which is here simply denoted G. Just as the electro-weak gauge group is spontaneously broken down to the electromagnetic gauge group, giving masses to the W and the Z, and that's why the W and the Z are so heavy and have not yet been discovered, the grand gauge group G is assumed to be broken at a very high energy scale, capital M, into its ingredients, SU3 cross SU2 cross U1. One coupling constant will turn out hopefully to generate the strong and the electro-weak coupling constants in the same way that the two electro-weak coupling constants combine together to give the electric charge. Another hope here, equally important, is that the quarks and leptons would be unified into a single family so that we wouldn't have red, white, and blue quarks and, separately, leptons, but we would have one quartet for each flavor of quarks and leptons. Models which realized some of these ideas were proposed beginning in 1973, starting with the work of Pati and Salam and then Georgi and Glashow, and then Fritzsch and Minkowski, and then many other people. But an obvious difficulty with any model of this sort is the fact that the strong coupling constant, as its name implies, is much stronger than the electro-weak couplings. Sam Ting told us that the fine structure constant for the strong interactions is about 0.2, and we all know the fine structure constant for the electromagnetic interactions is 1 over 137. How can two such different strengths of force arise from the same underlying coupling constant G sub G? The answer, which is now, I think, the most popular, was proposed in 1974 by Georgi, Quinn, and myself. Our answer was that these coupling constants are indeed related to a fundamental coupling constant, but they're related only at a super-large mass scale, M. The strong and electro-weak couplings, which are indicated here as these three curves, are not really constants. They're functions of the energy at which they're measured. This is well known in quantum electrodynamics. The property of asymptotic freedom means that the strong coupling constant shrinks with energy. The coupling constants of the electro-weak force, one of them shrinks, one of them increases. One imagines there might be a point at which they all come together at some very high energy. Indeed there is such a point, but since this variation with energy is very slow, it's only logarithmic, the energy at which the coupling constants come together is exponentially large. It's given by the formula that the logarithm of the ratio of this fundamental mass scale to the W mass is something like 4 pi square over 11 e square, where e is the electric charge, with a correction due to the strong interactions. And that comes out to be about 4 to 8 times 10 to the 14th GeV. So we see now why there has to be a scale of energies greater than 10 to the 14th GeV.
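The logarithmic estimate of Georgi, Quinn and Weinberg referred to here has the schematic form (the strong-interaction correction is indicated only symbolically, since its precise expression is not reproduced in the transcript):
\[ \ln\frac{M}{m_W} \;\approx\; \frac{4\pi^2}{11\,e^2} \;=\; \frac{\pi}{11\,\alpha} \quad (\text{plus strong-coupling corrections}), \]
and because the running of the couplings is only logarithmic, the unification scale M comes out exponentially large, of the order of 10 to the 14th GeV as quoted.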
It's to give the strong interactions time to get as weak as the electro-weak interactions. These three curves are coming together at one point. It's not easy to get three curves to intersect at a single point, and in fact the way it's done is by carefully adjusting the data at the low energy end to make them aim in the right direction so that they'll all hit at the same point. That careful adjustment of the data, I can put it in a slightly more complimentary way, is saying that we predict, and this was done by Georgi, Quinn and me, certain ratios of these coupling constants, which can be expressed as a prediction of the value of the mixing angle between the Z and the photon. That prediction was in conflict with experiment in 1974, in agreement with experiment now, and it's the experiment that changed, not the theory. There are a great many problems. I would say, in fact, that this prediction of this mixing angle is the only tangible, quantitative success so far of grand unification. There are a number of problems with further progress. One problem is that we have had no convincing explanation of the pattern of quark and lepton masses. There's been no convincing explanation, I mean, more than a theory in which you have enough free parameters to arrange things the way you want, but something that really gives you a feeling you understand it. There's been no convincing explanation of the pattern of generations, that we have not only an up-down electron generation and a charm-strange muon generation, but a third generation, maybe a fourth generation. We don't know why any of that's true. That's the most puzzling problem of all. And we have no fundamental explanation of the hierarchy of forces, that is, that there is a ratio of 12 orders of magnitude between the symmetry-breaking scale of the grand gauge group and the electro-weak gauge group. We know that that's true, or we think we know it's true, because the strong force at low energy is so much stronger than the electro-weak forces, but where this number of 10 to the 12 comes from is endlessly speculated about. There are many, many ideas, but there's no one idea that really convinces people. And finally, there's no one model that stands out as the obvious model. There are many candidate models of grand unification, but since all of them leave A, B, and C un-understood we can't really attach our allegiance to any one of these models. There is a further development which started in 1974, which has occupied a large part of the attention of theoretical physicists in the succeeding years. This is a symmetry called supersymmetry, invented, although there were precursors, by Wess and Zumino and then carried on by Salam and Strathdee and many other people. Supersymmetry is a symmetry which operates in a sense orthogonally to the symmetries that I've been discussing up till now. The electro-weak gauge symmetry, for example, connects the neutrino and the electron, both particles of the same spin but different charge. Supersymmetry connects particles of different spin, but the same charge, flavor, color, etc. For example, supersymmetry would connect the electron, which has spin a half, with another particle that might have spin zero or spin one. It had been thought that such symmetries were impossible, and in fact they're almost impossible.
There is a theorem by Haag, Lopuszanski, and Sohnius, a terribly important theorem, that tells us that the kind of supersymmetry which was invented out of whole cloth, just out of the richness of their imagination, by Wess and Zumino turns out to be unique. It is the only mathematically allowed way of having a symmetry that connects particles of different spin. And therefore, without too much arbitrariness, we can fasten our attention on a particular kind of symmetry, which is simply called supersymmetry, and explore the consequences of it. And we know that whether it's right or wrong, there isn't anything else that could unify particles of different spin. Now we don't see any trace in nature of supermultiplets of this sort. That is, the electron does not seem to have a partner of spin zero or spin one. Well, that in itself should not convince us the idea is wrong. We're used by now to the idea that symmetries can be true at a fundamental level and yet spontaneously broken. Supersymmetry must be spontaneously broken. In fact, there is not a trace in nature of any supermultiplet that is visible to us. Sometimes it's said that supersymmetry is the symmetry that unifies every known particle with an unknown particle. Supersymmetry is surely spontaneously broken. The big question is where is it broken? And I've been giving a lot of attention to this question lately, and I think I will close by just summarizing my conclusions. You might think perhaps that supersymmetry is broken at the same sort of energies at which the electro-weak gauge symmetry is broken, that is, energies like the W mass, of the order of 100 GeV. There are many reasons why that doesn't work. The partners of the quarks, which are scalar particles, spin zero particles, and hence are called squarks, would give very fast proton decay. This could be avoided by inventing new symmetries. The squarks and the sleptons would be too light. That is, they would be light enough to have been observed, and as I already told you, they haven't been observed. This also can be avoided by inventing new symmetries. In particular, Fayet has explored this possibility. These unwanted new symmetries and other new symmetries that you can't avoid in such theories lead to light spin zero particles called Goldstone bosons, which are apparently impossible to get rid of. Glennis Ferrara and I have been exploring their properties. We have come to the conclusion, or I should say I have come to the conclusion, I'm not sure Glennis agrees, that supersymmetry broken at these low energies is really hopeless. Another possibility is that supersymmetry is broken at medium-high energy. That is, supersymmetry is broken at energies which are much larger than the W mass, but much smaller than the mass scale at which the grand unified symmetry is broken. Now supersymmetry-
|
To the general public, Steven Weinberg is probably best known as author of the book “The First Three Minutes”, an extremely well written and popular account of the beginning of our Universe according to the Big Bang hypothesis. As a Nobel Laureate of 1979, together with Sheldon Glashow and Abdus Salam, he accepted the first invitation to come to a Lindau Meeting, where he gave a lecture which was at least as well composed as his book. Over the years, I have had the pleasure of listening to Weinberg at several physics meetings and my impression has been the same every time. Weinberg received the prize for his part of the theory that unifies the weak and the electromagnetic interactions. His lecture leads up to a discussion of what has become known as GUT, Grand Unified Theory, a theory which unifies all three interactions of the Standard Model: electromagnetic, weak and strong. But to lay the ground for this discussion, Weinberg first gives an exceptionally clear account of the Standard Model, its particles, interactions and conservation laws. In particular he stresses the fact that many of the conservation laws are not conservation laws of nature, they are rather conservation laws of our present theories. This reminds me of Niels Bohr’s statement “Physics does not tell us how nature is, it tells us what we can say about nature”. Also in Weinberg’s lecture, the concept of supersymmetry is introduced, a concept which would give rise to a whole set of new particles. Today, in 2012, the Large Hadron Collider at CERN is actively searching for these hypothetical new particles and the whole physics community is eagerly waiting for the LHC to reach its top energy so as to become able to answer some of the questions raised by theoreticians such as Steven Weinberg. Anders Bárány
|
10.5446/52611 (DOI)
|
I am very happy to be here once again in Lindau. I have attended every one of the physics meetings and I have given a set of lectures at these meetings discussing basic questions in physics. I would like to continue in that way to discuss basic questions in physics but today I would like to shorten my talk a little because I want to tell you also about an important discovery that has been made in the place where I work. That is in the physics department of Florida State University at Tallahassee. This is the discovery of some new elements and I am very happy to be able to tell you about it. The question of basic beliefs for physicists is very important for those who are doing research. They must each have some beliefs which they hold on to and discuss and maybe criticize and sometimes they find that one of these basic beliefs is wrong. It then becomes a prejudice and they say one has to discard this belief and set up a new physics without it. That is the way an important discovery comes about. It might even be a revolutionary discovery. Now Einstein's important discoveries were of this kind. Einstein criticized the notion of absolute time and simultaneity. Previously people had taken it for granted that there had to be an absolute time. In fact it was very essential for some of the laws of physics as they were then understood. In particular Newton's law of gravitation which was based on action at a distance and that could be only understood with reference to an absolute time. Newton's action at a distance was criticized by philosophers on the grounds that a body cannot act in a different place from where it is. However it was found pretty soon that the action at a distance was not really an essential feature of Newton's theory. One could reformulate Newton's theory in terms of a field using Poisson's equation, a field which spreads out and which requires action only from one point to a neighboring point. There was thus an alternative way of describing Newton's law of gravitation. On the one hand the action at a distance, on the other hand action through a field. Now you might think that with these two alternative forms both of which are mathematically equivalent and which both give the same results applied to any example, you might think that these two forms are equally good. But that is not necessarily the case when one has two forms of a theory which give equivalent results because one of those forms may suggest improvements and developments which the other form does not suggest. In the case of Newton there was the action at a distance form for his law which did not suggest any possibilities of improvement or development. On the other hand the action through a field did allow developments and ultimately led to Einstein's reformulation of the law of gravitation. Einstein was led from his deep thinking to suppose that action, to suppose that the idea of simultaneity has to be abandoned. He was able to figure out that there's really no experimental way of telling when two events are simultaneous. The best one can do is to send signals with a velocity of light. Now light travels extremely fast and the time taken for light to travel from one place to another is quite unimportant so long as one keeps two phenomena on the earth. But astronomically the time taken for light to travel from one place to another may be quite large and owing to the delay produced by the finite time intervals one cannot give a meaning to events being simultaneous when they take place a long way apart. 
Einstein was led to a new idea of geometry in which the notion of simultaneity was abandoned and instead he had as his absolute, instead of absolute time, the velocity of light. That led to Einstein's special theory of relativity which is best understood as supposing that space and time form a four-dimensional world which has a geometry slightly different from Euclid's geometry. A difference which is expressed very simply in this way. In Newton's geometry we have the theorem of Pythagoras. The square of the length of the hypotenuse is equal to this distance squared plus this distance squared. You replace that by having a geometry in which this distance squared is equal to this one squared minus this one squared. You bring in a minus sign, and that leads to a geometry which is in many ways analogous to Euclid's geometry but is essentially different in some ways and it is called the geometry of Minkowski space. On the basis of this new geometry Einstein set up a theory where there is no absolute time, no absolute simultaneity but he still had the problem of bringing in gravitation. In order to do that he had to make another drastic change in our basic ideas namely he had to suppose that space and time are curved instead of being flat. The flatness which one had assumed automatically right from the beginning when people first studied space at all, that flatness is just a prejudice which has to be given up. One has to think of space and time as forming a curved manifold and this provides the basis for gravitation. The reason why a body falls is because it keeps as closely as it can to a straight line but the space is curved and that leads to the body taking a path something like that. This was a revolutionary idea. I don't think people altogether grasp the importance of it because it has the immediate effect that antigravity is impossible. Many people have imagined there could be antigravity in which bodies fall upwards instead of downwards but that is quite impossible in terms of Einstein's ideas because there is no question of a force pulling a body up or down. Every body moves as closely as it can to a path which is a straight line in the curved space and that involves the particle, the body, moving like this. Well that shows the important ways in which Einstein influenced our thoughts. Now perhaps I should tell you about my own basic ideas. I started physics at a time when people were working with Bohr orbits and I was very excited about Bohr orbits and thought that they were fundamental in nature and that they would explain everything in the atomic world. One just had to understand how the Bohr orbits interact with each other. I was working on this for two or three years and my ideas led nowhere because I had the wrong basic ideas. The advance was made by Heisenberg who had different basic ideas. Heisenberg had the very good basic idea that one should base one's theory on quantities which are closely connected to observation. Now we cannot observe Bohr orbits at all and for that reason the variables that describe Bohr orbits could not be important for Heisenberg. Instead one has to work with physical quantities which are each associated with two Bohr orbits. Heisenberg set up a new theory in terms of these quantities related to two Bohr orbits. These quantities are understood mathematically as the elements of a matrix and that led him to a new matrix mechanics.
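In symbols, the change being described (using the modern notation rather than anything written on the blackboard) is the replacement of the Euclidean rule by the Minkowski one:
\[ \text{Euclid:}\quad s^2 = x^2 + y^2, \qquad\qquad \text{Minkowski:}\quad s^2 = c^2 t^2 - x^2 - y^2 - z^2, \]
the single change of sign between the time and space terms being the essential difference between the two geometries.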
It was a mechanics in which we have to depart from the usual equations of mathematics which assume that all the dynamic of variables commute with each other. It required a setting up of a new dynamics in which the product u times v of two variables need not be the same as v times u. Well it was quite a revelation to me when this discovery of Heisenberg was set up and it showed how wrong I was previously and I felt that one had to adopt different basic ideas and it seems to me now that the best basic idea one can have in physics is to suppose that physical laws must be based on beautiful equations. That is the only really important requirement. The underlying equations should have great mathematical beauty. One should look for relations in physics having great mathematical beauty. Now De Broglie had previously already shown the success of this basic idea. Simply from considerations of mathematical beauty he was led to assume a relationship between particles and waves. This relationship is very hard to explain without mathematical terms but it means physically that each particle has its motion guided by waves and the connection between the particle and the waves is provided by De Broglie's equations which he discovered just from considerations of mathematical beauty. It was a development of De Broglie's ideas which led Schrodinger to his wave mechanics and that was a form of quantum mechanics which was found to be equivalent to Heisenberg's form. We then had quite a satisfactory quantum mechanics coming from this work of Heisenberg and Schrodinger. A quite satisfactory mechanics in many ways. Very powerful, very beautiful but it had the limitation that it was not a relativistic theory. It did not apply to particles moving with large velocities. In order to set up a theory which should combine the ideas of quantum mechanics and relativity one was faced with a serious difficulty. The general quantum mechanics of Heisenberg and Schrodinger involved working with equations which are linear in the operator d by dt, the operator of time differentiation. When one tries to set up a relativistic theory one is led to equations involving d by dt squared. Now if one works with these equations involving d by dt squared one is led to a quantum mechanics in which one can calculate the probabilities of events taking place but those probabilities are not necessarily positive. A probability which is not positive of course is just nonsense. In order to have a theory which gives you only positive probabilities you have to use equations containing d by dt not d by dt squared. Now at the time when I was working on this in 1927 most physicists were quite happy to use the equations with d by dt squared but I was very unhappy about it because it meant abandoning some of the basic assumptions of quantum mechanics which really required one to use d by dt. I remember that I was pretty much alone among physicists at that time in being so dissatisfied with the current state of the theory and it was this discontent of mine which led me to try to find an equation involving d by dt which should still be relativistic and suitable for particles with high velocity and I did find such an equation and the use of this equation was very satisfactory in that it led to a mechanics in which all the probabilities are positive. It also incidentally led to equations for the electron which provide the spin and the magnetic moment of the electron. That was an unexpected bonus. 
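Two formulas, in modern notation, summarize the points being made here (neither is written out in the spoken text): the noncommutativity of Heisenberg's variables, and the fact that the relativistic equation Dirac found is first order in the time derivative,
\[ qp - pq = i\hbar, \qquad\qquad i\hbar\,\frac{\partial\psi}{\partial t} = \left(c\,\boldsymbol{\alpha}\cdot\mathbf{p} + \beta m c^2\right)\psi, \]
in contrast to the earlier relativistic wave equation, which is second order in d by dt and which is what led to probabilities that need not be positive.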
I was not at that time looking for a way of describing the spin. I thought that it would be necessary first of all to describe the simplest kind of particle, the particle with no spin and only after one had a satisfactory theory of the particle with no spin would one be able to introduce later on the spin. This again shows how one's ideas may lead one astray. It turned out that the discussion of the particle with a spin of a half a quantum was really simpler than the discussion of the particle with no spin. Well there was a theory which worked very well for the electron in that it gave you results in which all the probabilities are positive but there were still difficulties left with it because this theory allowed negative values for the energy as well as positive values for the energy. One had to have an interpretation for the negative energies because the mathematics did not allow one to eliminate them from the theory and I was able to think of a way of accounting for these negative energy states by changing one's ideas of the vacuum. One had previously always thought of the vacuum as a region of empty space. It now seemed that one had to replace that idea. One could take as the definition of a vacuum a region of space where the energy is a minimum. That would require one to have all the negative energy states occupied by electrons. There cannot be more than one electron in any state according to the exclusion principle of Pauli. So one could just put one electron into each negative energy state and that is all that one could do with the negative energy states. We then had a new picture of the vacuum where all the negative energy states are occupied. All the positive energy states are unoccupied. Then one had to consider the possibility of an unoccupied negative energy state, a hole in the distribution of negative energies. This, it seemed, would be a particle. The hole would appear as a particle with a positive energy and with a positive charge. How is such a hole to be interpreted? Well at that time people believed there were only two basic particles in nature. That was really a prejudice. There was a need for two particles because there are two kinds of electricity, positive electricity, negative electricity, and there had to be one particle for each of those two kinds of electricity. There were the protons for the positive electricity, the electrons for the negative electricity. So it seemed to me that one would have to interpret a hole as a proton. Now right from the beginning I felt that the hole ought to have the same mass as the electron, but there was a big difference in the mass of the proton and the mass of the electron and that provided a difficulty which I could not understand at the time. Of course the explanation came a year or two later. One just realized that it was a prejudice that there are only two particles. One should really have more and the hole should be interpreted as a new particle having the same mass as the electron but having a positive charge. These particles are now known as positrons. The climate of opinion about new particles has changed very much since those days. One was prejudiced against new particles at that time. Nowadays people are only too willing to postulate new particles as soon as there is any evidence for them experimentally or as soon as there is any theoretical reason why it would help to bring in new particles. There are rather too many basic new particles nowadays. 
Well a piece of work independent of that, which I did at that time, was to set up a theory of magnetic monopoles. According to these standard electrodynamics of Maxwell one has magnetic charge appearing always in the form of doublets. Any piece of matter which has magnetic properties would have positive magnetic charge in one place and an equal negative magnetic charge in another place. But simply from a study of mathematical beauty of equations I was led to suppose that there might be monopoles, particles with a magnetic charge of one kind only and the strength of the monopole was fixed by the theory somewhere around sixty seven and a half times the strength of the electric charge on the electron. This theory of magnetic monopoles has been lying dormant for a long time. People looked for them experimentally and could not find them but the situation was changed about a year ago when a set of a team of four workers, Price and others, claimed to have discovered a magnetic monopole. They claimed to have discovered a magnetic monopole coming in from outer space among the cosmic rays and the way they did their experiment was to send up a whole stack of sheets of lexon. Lexon is a kind of plastic and energetic particles passing through the lexon do some damage which is shown up if one etches the lexon plates. They had sent up these lexon plates with balloons to a high altitude so as to get the cosmic rays before they had been disturbed much by passage through the atmosphere and they found one example of a particle which they interpreted as due to a magnetic monopole. The reason for this interpretation is that a magnetic monopole produces ionization which is pretty well independent of the velocity of the particle while an ordinary charged particle produces ionization which increases as the particle is slowed up. They had evidence of a particle which produced about the same ionization at the top of a stack of plates as at the bottom just in the way that a magnetic monopole should do and the charge of this monopole was about two units of what my theory gave as the unit for a magnetic monopole. They published this work they were rather rushed into publication by circumstances which couldn't altogether control and they published it pretty quickly without considering all the implications of it and this work was very strongly criticized by other physicists. Chief among them was Alvarez. Alvarez proposed an alternative explanation for this particle. He thought that it had started out as a platinum nucleus and that in passing through the sheets of lexon it had lost it had undergone changes losing some of its charge and as a result of those changes its ionization had not increased in the way that an ordinary charged particle would have its ionization increasing and they thought that they could account for Price's results in that way. Most physicists tended to assume that Price was wrong and the reason for that is that many other searches had been made for magnetic monopoles. Very extensive searches had been made in many places. People had studied all places which they could think of on the earth's surface going close to the magnetic poles and they had also studied at the sea and the sediment at the bottom of the sea. They had also studied the moon rocks and all these studies showed no magnetic monopoles. 
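The figure of about sixty-seven and a half quoted near the beginning of this passage comes from Dirac's quantization condition relating electric and magnetic charge; in the usual units it reads (this restatement is mine, not the speaker's):
\[ e\,g = \frac{n\hbar c}{2},\quad n = 1, 2, \dots \qquad\Longrightarrow\qquad \frac{g_{\min}}{e} = \frac{\hbar c}{2 e^2} = \frac{1}{2\alpha} \approx \frac{137}{2} \approx 68.5, \]
so the smallest allowed magnetic charge is roughly 68 to 69 times the electron's electric charge, which is evidently the number being recalled here.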
Now here comes Price and his co-workers who have one example which they claim is a monopole, going against all the other evidence, and for that reason most physicists were led to believe that Price was wrong and that magnetic monopoles do not exist. There is in any case a difficulty with the explanation of Alvarez. Price's original paper did not give the complete results from the analysis of all his Lexan plates. There were two plates which were kept in reserve and the analysis of the etching of these two plates gave further results which do not fit in with Alvarez's explanation. It becomes very difficult to understand this track of Price by any explanation at all. It seems that one should perhaps assume that the original particle was something heavier than a platinum nucleus and even then there are some difficulties because Price had set up an apparatus to detect the Cherenkov radiation which should be produced by any rapidly moving particle such as would be required by an explanation of the type that Alvarez was proposing, and this Cherenkov radiation was not detected. Well that is the situation about the magnetic monopole at the present time. If you just take Price's work by itself it seems to provide fairly strong evidence that he does have a magnetic monopole there and it is difficult to fit in this evidence with any other explanation, but there is the problem of how one can understand the failure to find magnetic monopoles anywhere else in spite of the intensive searches that have been made. It seems that the only way out of this difficulty is for Price and his co-workers to try and find some more particles of this nature and that I believe is what he is doing. He sees now that there is no need to send up his Lexan plates in balloons to a high altitude. One can just spread the Lexan plates on the ground because these particles have enough energy to come through the atmosphere without being very much disturbed. I believe that these experiments are underway at the present time and we shall have to wait for them to get further results before we can understand this situation about the magnetic monopoles. I have to leave this subject undecided. I would like now to pass on to this new work which has been done in Tallahassee concerned with the discovery of new elements. Every element has an atomic number which is the number of protons in the nucleus. It is number one for hydrogen, two for helium and so on, and among the natural elements that series continues up to number 92 which is uranium. Now some elements beyond uranium have been manufactured by man using neutrons produced by reactors. Man has produced element number 94, plutonium, and a good many other elements around there. Elements even going up beyond the hundred. Elements 101, 102, 103 have been produced but these very big ones have a very short lifetime. They are very unstable. Now theoretical people have been studying atomic nuclei for a long time and they have set up a theory according to which the neutrons and the protons in the nucleus form shells, something like the shells formed by the electrons moving around outside the nucleus. And when you have a closed shell you have an especially stable nucleus. Nuclei for which the shells are nearly closed will also be more stable than those for which they depart very much from closed shells. Now there might be some nuclei present still heavier than the ones which have been made by man going up to 103.
There might be such nuclei which could be stable or long-lived if we had them in the neighborhood of a closed shell. If you take a nucleus with 126 protons, that would have a closed shell of protons and you might expect such a nucleus to be especially stable or more long-lived than nuclei with smaller numbers. Physicists have been wondering whether such nuclei, super heavy nuclei as they are called, do exist or not. The theories about the shells of protons and neutrons are not good enough for one to be able to decide whether these exist or not. People have been trying for many years to make these super heavy nuclei with the help of the heavy machines, trying to get ions of elements to stick together by making them run into each other. But they have failed. Now if you cannot make them with your machines there's still a possibility that these super heavy elements might exist in nature in very small amounts. And there is now evidence that super heavy nuclei do exist in a mineral called monazite. I've written it on the blackboard there at the top. This is a mineral that consists mainly of a phosphate of cerium. Cerium is one of the rare earth elements and this monazite also contains mixed with it phosphates of other rare earths. It is a crystalline rock and some monazite also contains uranium and thorium. Now there exist some old mica deposits. Mica is a transparent material. Now there exists this old mica containing inclusions of small quantities, microscopic quantities, of monazite. Now if you have a piece of mica with an inclusion of monazite, you would have something like this. Here's your sheet of mica and here is a speck of monazite in it. If this monazite contains uranium or thorium, the uranium or thorium will emit alpha rays and the alpha rays will damage the mica around and make halos. So you get a halo around the inclusion like this. The uranium and the thorium emit alpha rays with an energy of seven or eight million volts and that energy is shown by the length of the track that one has to proceed along to get to this ring here. So one has a halo around the inclusion with a radius corresponding to the energy of the alpha rays. Now many of these halos have been found but also a few halos have been found which are much bigger than the normal ones. Halos where the alpha rays extend out to a much greater distance. These are called giant halos. What is the explanation for them? They have been studied by a man called Gentry who works in Oak Ridge. You will probably not have heard of him but you will hear very much of him in the future. He has considered whether there is any other explanation for these giant halos and he has come to the conclusion that no other explanation is possible. They have been formed by some element unknown to us which emits more energetic alpha rays than any alpha rays emitted by the known elements. And as a result of this emission of the alpha rays at some distant time in the past, these giant halos were formed. This mica in which the giant halos are found is about a thousand million years old and maybe the giant halos were formed at that time in the past. Gentry has been studying these halos for seven years. Now there's a man called Cahill, who also you will not have heard of probably but whom you will hear about very much in the future, who had the idea that in these giant halos there may be some super heavy elements still surviving in the inclusion here. Perhaps not the elements which produced the halo in the first place but some products of that element.
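The connection between the energy of the alpha rays and the radius of the halo they produce can be illustrated with the old Geiger range-energy rule for alpha particles (a rough empirical formula, quoted here only as an illustration and not as the analysis Gentry actually used; for a solid like mica the range is roughly scaled down by density):
\[ R_{\rm air}\ (\text{cm}) \;\approx\; 0.31\,E^{3/2}, \qquad E\ \text{in MeV}, \]
so on this rule an alpha particle of 14 MeV travels roughly (14/7)^{3/2}, or nearly three times, as far as one of 7 MeV, which is why a giant halo implies alpha energies well beyond those of uranium and thorium.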
There might be small quantities of super heavy elements surviving in the halo, and he set about trying to find them. How can one find new elements? A good way would be to bombard this region with protons and examine the x-rays which come off. X-rays are very characteristic for each element. One can calculate just what the x-rays should be if one knows the atomic number of the element and this will be a good way of searching for new elements. Cahill works in California at Davis and his laboratory did not have the necessary equipment and facilities for doing this experimental work. For that reason Cahill came to Tallahassee where we do have a Van de Graaff machine, a tandem Van de Graaff capable of producing high-energy protons, and we have other equipment which would be needed for this kind of work. Cahill has been working for six months in Tallahassee. Could I have the first slide please? This is a picture of one of the giant halos. In the middle there's a small particle of monazite and this is all damage which has been done in the mica by the emission of alpha rays of greater energy than any alpha rays which are produced by known elements. Could I have the second slide, please. This is a picture of the X-rays which were obtained by Cahill and other workers at Tallahassee from an inclusion producing a normal halo, a halo coming from the alpha rays of uranium and thorium. You notice that there are essentially two peaks in this picture with a valley in between. This peak corresponds to the L X-rays from uranium and thorium. This peak here comes from the K X-rays produced by various other elements, rare earths and so on. In between these two peaks there is a valley and this valley is where one would expect to find the L radiation of super heavy elements if they exist. This valley is the important region. Now Cahill and his co-workers proceeded to examine some giant halos which were sent to Tallahassee by Gentry from Oak Ridge. I heard that one of the specimens got lost in the post and one specimen I heard got dropped on the floor and they could never find it again although they searched all day. However there were six specimens of giant halos which they had to work with. Could I have the next slide please. This is very similar to the previous one, with the valley between the two peaks, but now there is some structure in the valley shown by these irregularities here. And this structure would indicate small amounts of some new elements. One must take a closer look at this structure. This part and this part don't interest us at all. Next slide, please. This is just an amplified picture of that valley. The dotted line is a sort of smoothed out background and you notice very prominently a peak here which corresponds to element number 126. 126, we think, would be a specially stable element because it has a closed shell of protons, and there is the evidence for the existence of element 126. Now this evidence was obtained perhaps two months ago. They did not immediately rush into publication claiming to have discovered a new element. They thought it was best first to study these results and see whether there could be any other explanation for that peak. What they did was to consider all the known elements and see whether any of them could provide x-rays which would just produce such a peak. Well there are such elements, two of them, tellurium and indium. It might be that there was some tellurium or some indium in the specimen giving an x-ray line just there.
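The statement that one can calculate just what the X-rays of a new element should be is essentially Moseley's law; in its simplest hydrogen-like form, with the usual screening constants (an approximation used here only to indicate the scale, not the detailed atomic calculation actually employed):
\[ E_{K\alpha} \;\approx\; 13.6\ \text{eV}\times\tfrac{3}{4}\,(Z-1)^2, \qquad E_{L\alpha} \;\approx\; 13.6\ \text{eV}\times\tfrac{5}{36}\,(Z-7.4)^2, \]
which for Z = 126 puts the L radiation in the neighbourhood of 25 to 30 keV, that is, in the valley between the L X-rays of uranium and thorium and the K X-rays of the rare earths.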
But the spectra of tellurium and indium are both well known, and if they have a spectral line here, they should also have spectral lines in certain other places, and looking in these other places, one finds that there is no tellurium present at all, and as for indium, there might be a small amount of indium, but in any case it will be very small and it could not provide more than 5% of this peak here. It seems that there's no other explanation for that peak except for some of the element 126 existing in the inclusion which makes the giant halo. This is some more evidence of the same nature. You see here very well indicated the difference between a giant halo, given by this curve here, and an ordinary halo produced by uranium and thorium here. A very pronounced difference, and this difference is shown with all the specimens of the giant halo. Here underneath are the spectral lines of all the known elements which could lie in this region, and one has to examine these peaks and see whether any other element could be producing the peak. In addition to the peak corresponding to element 126 occurring here, there's also a peak corresponding to element 124 here and a peak corresponding to element 116. And in each of these cases it seems that there is no other element which could provide those x-rays. There is reason to exclude the possibility of other elements by examining whether these other elements are present from the x-rays which these other elements would produce. So there's quite strong evidence for the existence of these three super heavy atoms. There is also some weaker evidence for the existence of other super heavy atoms. Three more in fact. The next slide, please. This shows the same results illustrated in a different way. The background radiation has now been subtracted and that gives a zero line, the middle line here and the middle line here and the middle line here. Now there is an upper and a lower line which show the statistical variations which one would expect in the background. Anything going outside those statistical variations would have a probability of being some definite physical event. The top curve there refers to an ordinary halo and there is nothing there to indicate the presence of new elements. There happens to be this dip here which is presumably a statistical fluctuation which has gone outside the normal range of statistical fluctuations. Here and here are the corresponding pictures for two of the giant halos and you see very definitely how there are these peaks going well beyond the region of statistical fluctuations, and it is these peaks which provide the evidence for the new elements. Well that is the evidence which has been obtained in Tallahassee, and the people who have been doing this work are sufficiently confident that they have a proof for the existence of new elements and that there is no alternative explanation. They are now sufficiently confident so that they have sent in this work for publication. It's going to be published in Physical Review Letters. That is a journal which gives prompt publication to important new work and this will appear in the July the fifth number. Theoretical people have also been busy in Tallahassee revising the theories of atomic shells. They have further evidence now with the existence of these stable super heavy elements. They have to be pretty stable because they have survived in this old mica.
The mica is about a thousand million years old and these super heavy elements probably have a lifetime also somewhere of the order of a thousand million years. There's a man, Philpott, a theoretical worker in Tallahassee, and a few others who have been studying these atomic shells in the light of the new data. They find that these super heavy elements cannot themselves be the cause of the giant halos. If they emit alpha rays, the alpha rays would only have an energy of five or six million volts, and that would not be nearly enough to produce the twelve or fourteen million volts which would be needed for a giant halo. The super heavy elements which are now observed are then the results of the disintegration of some still heavier element, around 160. With regard to the atomic weights, there is no experimental evidence for the atomic weights of these new elements, but Philpott and the other theoretical workers think that number 126 would have an atomic weight of around 350, much heavier of course than any atomic weight of the known elements. I would like to say in conclusion that I have not been engaged in this work myself. I have just had the good fortune to be working in the same institute where this work was going on and I have been able to follow all the developments and I'm able to see the caution with which the people are proceeding with their new results, and I see that they will not publish anything without having strong conviction that it is right. Also I'm very much indebted to this group of workers for providing me with a preprint of their paper which is going to appear on July the 5th and also for giving me these slides which I have been able to show to you. I might mention that you are the first people to see these pictures apart from those who are actively engaged in it. Here is the evidence for the new elements. Thank you.
|
The title of Paul Dirac’s lecture is a very general one, but in Lindau he immediately announced that he would use some of the time to describe a still unpublished discovery of so-called super-heavy elements, elements heavier than uranium. These should recently have been found at his home university in Florida in certain minerals. This announcement fits very well into Dirac’s general theme of beliefs and prejudices, since we know today that the announcement turned out to be false and that the first known super-heavy elements eventually were produced in large accelerators. Dirac’s main point, though, is that a scientist may have certain beliefs, but that these easily might become prejudices to be discarded as soon as the beliefs are shown to be wrong. He tells of his own belief in the 1920’s that the Bohr orbits of the electrons in atoms would explain everything in atomic physics. This belief (turned into a prejudice) he discarded when Heisenberg showed that it made no sense to think about one Bohr orbit, but that everything actually was based on so called matrix elements connecting two Bohr orbits. Another of his examples concerns the existence of so-called magnetic monopoles. From ordinary experience we know that a magnet always has a north pole and a south pole. But the equations describing electromagnetism seem to allow also magnets that have only one pole. These hypothetical particles were studied by Dirac already in the 1930’s. In his lecture he describes the (then) recent experimental search for monopoles in the cosmic radiation at high altitudes. A balloon experiment had detected a particle track in a plastic detector and the experimenters interpreted the track as resulting from a magnetic monopole. The result was published, but met strong criticism. At the time of the lecture, Dirac still kept an open mind, but we know today that (again) the discovery was false. Even today there has been no trace of magnetic monopoles. Anders Bárány
|
10.5446/52615 (DOI)
|
Ladies and gentlemen, it seems to me that there are two major frontiers, major barriers to our understanding of physics at the present time. One of these occurs on the submicroscopic scale, and we can refer to this as the inner space of fundamental particle physics. This region we can study in the laboratory using high-energy particle accelerators, although the techniques are becoming ever more complex and costly. The other frontier occurs at the other end of the scale of length. It occurs in outer space and concerns the large-scale physics, such problems as the very origins of space and time themselves. The life history of galaxies, the evolution of the universe. This region cannot be studied in the laboratory. We have to extend physics beyond the walls of terrestrial laboratories, and we have simply to observe the interplay of phenomena on a cosmic scale. The disadvantage of this is that we cannot, of course, arrange our experiments as in a laboratory. We cannot adjust the conditions. This is the major disadvantage. On the other hand, the advantage is that we have at our disposal a range of parameters far beyond anything which can be achieved here on Earth. So our problem is to extend physics in this indirect fashion and to extrapolate what we know in the laboratory to the limits of space and time. Many phenomena occur which are quite different from anything we have previously understood. The effects of curvature of space-time in general relativity predicted by Einstein are just detectable with the best modern techniques in our own laboratories. The effects of space-curvature can become quite dominant when we examine nature on a large distance scale. Space and time can, given the concentrations of mass one finds in outer space, close up. We can create bubbles in space-time which are regions in which phenomena are completely removed from our own universe. There is an event horizon. I refer of course to what we popularly know as black holes. Recently, there have been very exciting advances in this field. There has been made a connection between quantum electrodynamics and general relativity which seems to me to be a close parallel to the advance we heard about from the lips of its inventor, Professor Dirac, when he connected quantum theory with the more limited special relativity of Einstein. We know what advances followed from that link. It seems to me that corresponding advances in the understanding of the large-scale universe may follow from the link between quantum physics and general relativity which is now being made by such scientists as Stephen Hawking. These are features of large-scale physics which I am sure will be most fruitful and we shall be hearing much about these in the years to come. However, in my talk today, I wish to be more humble and to consider problems involved when matter reaches conditions under compression which make it far different from the matter we know in the laboratory. Forces like the gravitation one can achieve inside massive stars are sufficient to compress material until atoms, the simple atoms, are literally crushed out of existence and we achieve a new state of matter in which the density can reach values as large as 1,000 million tons per cubic centimeter. The possibility of condensed matter in this state was first considered as early as 1932, soon after the discovery of the neutron particle by Chadwick. To my understanding, the first discussion was made in Copenhagen when news of this discovery reached
the Russian scientist who was there at the time, Landau, and Landau first proposed condensed matter of this kind in a discussion with Niels Bohr and Rosenfeld. However, it was some years later that the possibility was extended and the astronomers Baade and Zwicky predicted that perhaps one would find matter of this sort in space in the debris left behind from the explosion of a supernova. Theorists attempted to construct models of what a star would be like if atoms were crushed out of existence and we had what was essentially a ball of material in which the density approximates that of the atomic nucleus. These models were discussed in the 1930s by such scientists as Oppenheimer and Volkoff, but I think the subject really was of academic interest until quite recently because there was no evidence that such material really did exist. The first evidence that stars of this kind could occur was obtained when we made the discovery of pulsars in Cambridge in 1967. Now just to remind you of the basic features which lead us to the conclusion that matter of this enormous density is a reality, I should just like to run over the basic evidence which the pulsars give us. May I have the first slide please? The pulsar phenomenon, as you all know, is that in the radiation at radio wavelengths one can receive with a radio telescope, from certain objects in space, a regular succession of pulses as you see displayed there. Radiation of this kind was of course totally unpredicted and it seemed at first very strange that there should be objects in the sky which produce radiation in a very regular succession of pulses, typically one second apart and of duration a few hundredths of a second. The remarkable feature of these sources which was realized very quickly was the incredible accuracy with which these pulses are maintained. Careful measurements show that in almost all cases, except perhaps one or two where the extent of the effect is not large enough to detect, one can measure a slowing down in time of the pulse rate. It takes generally a long time for anything noticeable to happen but if you were to wait in the case of this source I display on the slide, if you were to wait approximately 100 million years and make observations again, the pulse rate would be roughly half what it is today. Now in most cases one detects these strange objects with radio telescopes but there is one example, a famous example, where a pulsar can actually be seen and I would just like to show you that because after all seeing is believing. May I have the next slide. This object that you see is one of the most famous supernovae known to astronomers. What you see is the remains of a star which has exploded, and that explosion was witnessed by Chinese astronomers; it was witnessed and documented in the year 1054, when the star was visible in broad daylight. What we see now is an expanding cloud and the object of interest is this star here near the centre of the nebula; there is a pair of stars and it is the bottom right hand star which is the pulsar. That star has been known for many years to astronomers but only after the pulsar discovery was it realised that this star here is flashing its light with great regularity at a rate of 30 cycles per second. If one puts a stroboscope inside a telescope one can display this effect and, may I have the next slide, with a stroboscopic technique this is an enlarged picture.
These are the two stars at the centre of the nebula; here the bottom right hand star is the pulsar, and we see here one frame, and another frame taken a few milliseconds later, and this star here has totally vanished, its radiation is completely extinguished. So that star is, we know, flashing at a rate of approximately 30 flashes per second. Now how does one account in broad terms for this phenomenon? Well, in the early days, and I won't bore you with early theories, there were many ideas, but now the only theory which has stood the test of time is that this radiation can only be explained if we have a star which is really acting like a lighthouse beacon, a star which is rotating rapidly on its axis and producing a well-defined beam of radiation. May I have the next slide please? The type of phenomenon which we believe accounts best for the pulses is a situation perhaps like this: you have some star which produces a well-defined beam of radiation; that beam, as the star rotates, circulates around the celestial sphere, and any observer who lies in the right belt of latitude with respect to that star will observe a flash each time the star rotates. This is a rather simplified model; it is probably more realistic to suppose that the star really is producing a beam along some axis which perhaps does not coincide with the rotation axis, and that beam produces flashes for any observer in two regions of space. Now the basic problem is to find some star which can spin sufficiently fast to account for the observed flashes. This is the fundamental problem, and the difficulty is to find a star which will hold itself together at the enormous rate of rotation which this phenomenon demands. All stars are bound by gravitational forces and if you spin a star on its axis too fast then it simply flies to pieces like an exploding flywheel, and until the pulsar discovery the most compact star known to astronomers was the white dwarf star, that's a star approximately as large as the earth, and stars of such matter, stars of such a kind, can rotate once every few seconds without disrupting, but they cannot spin fast enough to account for the most rapid pulsars that we observe. The only possible candidate is a star in which one finds matter in the neutron state, which I shall be discussing, and such a star can spin at any speed up to several thousand revolutions per second. The gravitation is sufficiently strong to hold it together and that is the only known star, theoretical star, which could produce the pulsar phenomenon. Well there is much evidence, and I won't go into this in detail, there is much evidence to confirm this conclusion which was reached; it was indeed suggested, when the pulsar discovery was made, that such a star was responsible, but there were many other possibilities too. The conclusion that the star must be a neutron star was confirmed within roughly two years of the discovery and this theory is now generally accepted. I want you to understand something about this strange behavior of matter when it reaches the densities one finds in the neutron star and to do this I would like to come back to some elementary physics because the properties of matter under extreme compression can really be predicted with some precision from what we already know of the behavior of fundamental particles. May I have the next slide?
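As a rough check of these rotation limits (an illustrative sketch, assuming roughly a solar mass packed into the radii quoted, which are conventional values rather than numbers given in the lecture): a star of mass M and radius R flies apart once the centrifugal acceleration at its equator exceeds its surface gravity,

\[
\Omega^{2}R \gtrsim \frac{GM}{R^{2}}
\quad\Longrightarrow\quad
\Omega_{\max} \approx \sqrt{\frac{GM}{R^{3}}}.
\]

For a white dwarf with M about 2 x 10^30 kg and R about 6,000 km this gives a maximum of roughly 0.8 radians per second, a shortest period of several seconds; for a neutron star with R about 10 km the same estimate gives of order 10^4 radians per second, a few thousand revolutions per second, which is why only neutron-star densities can accommodate the fastest pulsars.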
I have sketched on this slide very schematically the behavior of matter, common matter, under compression, and we start with matter as we know it, perhaps a lump of iron or something of that kind, and on a simple model we know that atoms are composed of a nucleus of heavy particles at the center and we have electrons moving in the region surrounding that nucleus, moving under the laws of quantum mechanics. (This phenomenon, incidentally, is well known to radio astronomers. It is the kind of difficulty we have in our experiments; that noise I have frequently heard on a radio telescope, and it can come from agricultural machinery at a distance of 20 miles.) To return to common matter, the distance at which one finds electrons from the nucleus is determined by the most elementary property of fundamental particles, that particles need to be associated with a quantum wavelength, and in the very first suggestion by De Broglie of course we remember that the quantum wavelength depends upon the speed at which the particle is moving. The wavelength of a particle is determined by Planck's constant divided by the momentum of the particle. Now in ordinary matter the electrons are at a distance roughly speaking of one angstrom from the nucleus and this defines the density of common material around us in the world. The atoms in the solid are roughly as close together as the electron orbits will permit and it is the value of Planck's constant which essentially determines the state of matter as we see it around us, and it is perhaps hard to believe that common matter is virtually empty space. There is an enormous amount of empty space within an atom because the fundamental particles are a great distance apart. Now what happens when we put such matter under some compression and force it together? Now these experiments one can only do on a very small scale in a terrestrial laboratory, but if you allow cosmical forces to apply an ever increasing pressure to ordinary material one can bring about some remarkable changes. As you squeeze the particles, as you squeeze the atoms together, then of course the electrons have less space in which to move, and from quantum mechanics we see that we have to shorten the wavelength of those particles to fit them into the available space, and as we shorten the wavelength we must increase the velocity of the particles; that is the elementary fact of quantum mechanics. So as we squeeze these atoms together the electrons will move faster. If we squeeze the matter until it has a density which we can never achieve in the laboratory, but say a density of one ton per cubic centimeter, at that density the electrons require such a small wavelength to fit into the available space that they are moving so fast that they are no longer trapped in orbit about particular nuclei. The electrons move freely amongst the nuclei, all the electrons in any atom one can consider, they are dissociated from one particular nucleus, and this state of matter we call degenerate matter. A few electrons, degenerate electrons, we understand in the physics of ordinary metals, but under this compression all the electrons will become degenerate, and when matter has reached a density of one million tons per cubic centimeter then we have the state in which all the electrons move freely through the material. Now the electrons need to move very fast to do this, the electrons have speeds which are approaching the velocity of light itself, and if we compress the material still further then a remarkable change takes place.
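To make the wavelength argument concrete, a minimal sketch (the symbols and the sample spacing are chosen for illustration; they are not the lecturer's own figures): the de Broglie relation, and the momentum forced on an electron confined to a region of size d, are

\[
\lambda = \frac{h}{p}, \qquad p \gtrsim \frac{h}{d},
\]

so squeezing the electron spacing down from the atomic scale of about 10^-10 m by a factor of a hundred raises the minimum momentum, and hence the electron speed, by the same factor. For a degenerate electron gas of number density n the same scaling appears as

\[
p_{F} = \hbar\,(3\pi^{2}n)^{1/3},
\]

so the tighter the packing, the faster the electrons must move, until their speeds approach that of light.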
The compressed matter at this stage is largely composed of positively and negatively charged particles, with of course some neutrons present also. The fundamental particle, the neutron, is, in a simple model, basically composed of a proton and an electron with opposite charges. When one creates free neutrons in a reactor, such as the reactor we heard about at Grenoble earlier this week, the normal condition is that the neutron will rapidly change, within a few minutes, into two charged particles, the proton and the electron, plus other particles to maintain the balance. Now this lifetime, as I say, is short, a few minutes only, but in matter which is sufficiently compressed it is energetically more favorable for protons and electrons to combine to form neutrons. In other words the reaction proceeds from left to right. Nature always chooses, if it can, a state of minimum energy, and it is not difficult to calculate that the energy of compressed material is less when you combine the protons and the electrons into neutron particles. This is because the high energy that you require to fit electrons into a small volume causes the energy on this side to be greater than the energy on this side unless you have the conversion of protons and electrons to neutrons. So progressive compression of this material will eventually lead you to an equilibrium state in which matter contains mainly stable neutrons. There is of course a fraction of charged particles present, but that fraction will be small in general. Now the neutron material has a density which is approximately that of the atomic nucleus, and so we reach finally a density of 100 or perhaps even a thousand million tons per cubic centimeter, and that is the density of this material. We find it hard to visualize this number. Physicists are quite happy to juggle large numbers and of course they don't usually try to imagine what the numbers mean. It's not a physical thing to do, but in explanations one likes to know roughly how big things are, and I can only say that if one had a spoonful of neutron star material, a spoonful of neutron material of this kind would contain enough matter to build all the ships that are currently sailing the ocean. Now this is elementary physics essentially and is the result of an idealized experiment where we might just simply compress material on a laboratory bench. Now we witness phenomena of this kind when we look into the sky. This basic physics has great relevance to the evolution and the life cycle of common stars. May I have the next slide please? In a slightly oversimplified way one can sketch quickly what is likely to happen to most of the stars that we see in the sky. If we take a common star like the sun, which is sketched at the top here, any reasonably small star is of course a nuclear fusion reactor in which we are fusing the simple nuclei to more complex ones with the release of energy. The star has the size we see because it is a balance between the pressure generated inside by the fusion reaction, which tends to expand the star, and the force of gravity, which tends to compress the star. So a star is a battle ground between the outward pressure and the inward force of gravity and that determines the size at which we see it.
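As a sketch of the energy bookkeeping behind that proton-electron combination (using standard rest masses, not numbers given in the lecture), the free-neutron decay and its inverse, the capture reaction, are

\[
n \;\rightarrow\; p + e^{-} + \bar{\nu}_{e},
\qquad
p + e^{-} \;\rightarrow\; n + \nu_{e},
\]

and since

\[
(m_{n} - m_{p} - m_{e})\,c^{2} \approx 0.8\ \mathrm{MeV},
\]

the capture becomes energetically favourable once the top-most electrons carry more than roughly 0.8 MeV of kinetic energy, which the compression described above easily supplies; the neutrons are then the lower-energy, and hence stable, state.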
As the fusion reaction proceeds and the fuel is depleted, eventually we are left with the ashes of the nuclear fusion process, and gravity, the force of gravity, will then squash the star irrevocably, and we end up, in the case of a light star, with a small object in which we have degenerate matter with a density of about one ton per cubic centimeter, and this is the white dwarf star, and we see many such stars in the heavens. But if the star is heavier to begin with than the sun we cannot have such a peaceful passage to old age and retirement. In the case of a heavy star what happens is that within the interior, where of course the reaction takes place, as the fuel is depleted and gravity compresses it we can have that conversion of electrons and protons into neutrons, and when this takes place we remove suddenly the cause of the pressure within the star which maintains it at its equilibrium size. So it is rather like having a balloon and pricking it with a pin: in the case of a heavy star, as the conversion to neutrons takes place, there is a dramatic decrease of internal pressure and the inside of the star can simply collapse rather suddenly. It collapses from the size of a white dwarf to something much smaller in a time of the order of one second or less. So the final evolution of a star can be quite dramatic. The collapsing fusion products will only reach equilibrium when the material has become almost pure neutrons at a density of 1000 million tons per cubic centimeter. In that condition it occupies only a small volume very close to the center of the star, and since the material has really fallen from the edge of a star to the center it is moving very fast; on the scale of the collapsing core, in a star say 10 times as massive as the sun, the collapsing material is falling inwards at nearly the velocity of light, and when it collides as it were at the center here the situation can be quite complex, but there is an enormous release of energy and of course it generates a shock wave which propagates outwards from the center and will blow off into space the remaining material which has not yet had sufficient time to collapse. Now this is believed, in very brief outline, to be the theory behind exploding stars, the supernova process. So we expect neutron material to be left behind at the center of a stellar explosion. That's one possible end to a fairly massive star. Stars that are heavier still can have even more dramatic evolutionary cycles and finally can collapse into these bubbles of space-time that I mentioned, the black holes of general relativity, but in that case the star would collapse until it disappeared from view entirely, and it is only some quantum effect of the type which is now being considered, which I mentioned, which can save matter from collapsing to zero volume. This is a region of physics which is not yet well understood. This, however, is where we are concerned at present: we expect to find a neutron star, neutron material, left at the center of a stellar explosion, and you can regard this ball of neutrons as the ashes of one type of stellar evolution. The material will be completely inert and it will simply retain the physical properties which it had at the moment of formation, but there is nothing left there to burn any longer. There is no possibility of further fusion reactions. Well we can imagine what a cold neutron star might be like. May I have the lights for a moment?
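A rough check of that collapse time (illustrative numbers only, assuming a core of about a solar mass starting from white-dwarf dimensions, values not stated in the lecture): the free-fall timescale is of order

\[
t_{\mathrm{ff}} \sim \sqrt{\frac{R^{3}}{GM}}
\approx \sqrt{\frac{(5\times10^{6}\ \mathrm{m})^{3}}{(6.7\times10^{-11})(2\times10^{30}\ \mathrm{kg})}}
\approx 1\ \mathrm{s},
\]

consistent with the statement that the core collapses in of the order of one second or less.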
If we take a neutron star containing roughly the mass of a body as large as the sun, then when it collapses to the neutron star configuration it will have a radius of some 10 kilometers only. It will be an extremely small object and the most dramatic feature that one first considers is of course the intense gravitation which will surround such a region of space. If that represents the surface of a neutron star, then we have the gravitational acceleration, which I'll call g-neutron. The gravitational acceleration is approximately 10 to the 11 times stronger than the gravitational acceleration at the Earth. This would be very noticeable if one got close enough to the neutron star to make experiments. If you lift an object through a height of 1 centimeter and allow it to fall then it will reach the surface of the neutron star here. The velocity after falling 1 centimeter is going to be something like 400,000 kilometers per hour. The weight of a tiny object like a feather would be hundreds of tons. The effects of space curvature which are normally only just detectable in the laboratory become quite prominent. It's not of course very safe to walk about on a neutron star even if it's totally cold, because the gravitation will cause your weight to be several millions of tons and so of course you're spread very thinly over the surface. We can perhaps attempt to escape such effects by being in orbit in a spacecraft around a neutron star. We customarily consider astronauts as weightless when they are following their geodesic in four dimensional space and are in orbit about some mass. Well, this is a possibility. The spacecraft would have to orbit a neutron star roughly 1000 times a second in order to maintain a stable orbit, but we can also consider the effects of space curvature on the astronaut himself. The fact we tend to forget is this: suppose we have our astronaut here, supposing he's just got outside his spacecraft to look around, as of course has been done already in orbit around the Earth. What we have to remember is that it is only the center of mass of the spaceman which follows this weightless geodesic. Now the effects of space curvature are quite noticeable because in an extended object there is a different gravitational acceleration at your head and at your feet, and the effect of this is to put a force between your head and your toes. This is just one phenomenon of space curvature, and in the particular example of a neutron star and a typical specimen of humanity here, then this force is something like 300,000 tons. So space curvature becomes not a marginal effect but extremely unpleasant. Well I'm not joking entirely here, because in the early days of neutron star physics it was considered that there might be remnant particles of the original system in orbit about the neutron star and this might have led to some of our pulsing phenomena. It is not of course possible to have large regions of ordinary material anywhere near a neutron star. Space curvature simply disrupts the material by immensely strong tidal forces. Well, on a slightly more scientific level, may I have the next slide please now. The state of matter under extreme compression is of course an extrapolation of the field of solid state physics, and solid state physicists have been very busy in attempting to work out all the possibilities of the structure of neutron stars. Now there's still much to be learned here but the general picture seems fairly clear.
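The figures just quoted can be checked with elementary mechanics; a minimal sketch, assuming the conventional values M of about 2.8 x 10^30 kg (1.4 solar masses) and R of about 10 km, which are assumptions rather than numbers stated in the lecture:

\[
g = \frac{GM}{R^{2}} \approx 1.9\times10^{12}\ \mathrm{m\,s^{-2}} \approx 2\times10^{11}\,g_{\oplus},
\qquad
v = \sqrt{2gh}\,\Big|_{h = 1\ \mathrm{cm}} \approx 2\times10^{5}\ \mathrm{m\,s^{-1}},
\]

i.e. several hundred thousand kilometres per hour after a one-centimetre fall, the order of the figure quoted. For the tidal force on an orbiting astronaut, the head-to-foot difference in acceleration at orbital radius r is roughly 2GM Delta r / r^3; taking r of about 17 km (the radius of a once-per-millisecond orbit) and Delta r of about 1 m acting on roughly half of a 70 kg body,

\[
F \approx \frac{2GM\,\Delta r}{r^{3}}\times 35\ \mathrm{kg} \approx 3\times10^{9}\ \mathrm{N},
\]

a few hundred thousand tonnes-force, the order of the 300,000 tons mentioned.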
We have a ball of neutrons which is roughly 10 kilometers in radius, but of course near the surface the gravitational compression is not yet strong enough to compress all the material into the neutron configuration, and we would expect near the surface a shell of the most stable nucleus known, which is Fe-56. We expect therefore a skin of iron around the neutron star. As we descend through that skin we come then to a shell of initially degenerate material where the density is still not high enough for many neutrons to be present. We have a very rigid lattice of material here, with probably a melting point of greater than 10 to the 10 degrees Kelvin, a very rigid material of degenerate electrons flowing between a rigid lattice of positively charged nuclei. As we descend lower down more and more neutrons are formed, the nuclei become neutron rich, the nuclei eventually become unstable, and as one proceeds far enough down into the material one then reaches virtually pure neutrons. Now the properties of this neutron material are interesting because neutrons are particles we call fermions, but it is believed that they will behave at any temperature below roughly 10 to the 9 degrees Kelvin very much like helium at room temperature, I'm sorry, like helium near the absolute zero of temperature. The fermions will pair up to form bosons in much the same way as in superconductors and we shall get a quantum fluid, a neutron fluid in which the neutrons have essentially paired, and we get then a liquid which has the quantum properties of liquid helium but it does have also the enormously high density. Now that is generally understood; as one goes further into the center and the pressure rises the problems multiply, and we do not really seem to understand yet the potential, the neutron-neutron potential, sufficiently well to predict in detail what is going to happen further in. There is the possibility that the neutrons themselves form a rigid lattice, there is the possibility also that rather more exotic states of matter are found, the pion field can become highly coherent, one can have what is called a pion condensate in this region here, but these possibilities at the moment one can barely distinguish because one doesn't understand fully the neutron interaction. Nearer the center one gets graver problems, hyperons may be long-lived particles near the center of a neutron star, these are of course the exotic particles of high-energy physics which normally last a negligible time in a terrestrial laboratory, and it is conceivable that they would be long-lived particles at the center of a neutron star, a sufficiently massive neutron star. The quantum state of these particles is problematic because they are overlapping to such an extent that the meson fields surrounding the fundamental particles are overlapping and interfering, and the general quantum state of such a situation of course is quite unknown. Well for the moment then the important parts of a neutron star are its rigid shell, which is fairly light because the material is not yet too dense, and then a liquid region inside, and this model is generally believed to be correct. Now may I have the next picture please. Now of course a neutron star is formed from a real astronomical body and it is not made ideally under controlled conditions in the lab.
So it is endowed already with stellar properties which it had originally, and if we collapse a star, say with properties something like the sun, where we have a magnetic field of roughly say 100 gauss, 10 to the minus 2 tesla, a radius of 10 to the 6 kilometers and a rotation rate of roughly one revolution per month. So if we collapse such a star to the neutron configuration then it is probable that there will be no leakage of flux. The magnetic flux in astrophysics is a quantity which is usually conserved, so that we shall arrive finally with an extremely powerful magnetic field at the surface of the star, perhaps 10 to the 8 tesla, and if the star maintains also its angular momentum, if that is conserved in the collapse and not given to escaping debris from the explosion, then the star can be spinning at up to 10 to the 4 revolutions per second. These are the properties which a neutron star must retain and which make it detectable, because as I said earlier a neutron star is not burning; it is only endowed with the energy of collapse, which is largely turned into rotational kinetic energy, and its magnetic flux. So we expect this lump of matter to cool down steadily in space. Well there is some evidence that this general model is more or less correct from observation. May I have the next slide please? Occasionally one observes in certain pulsars that the rate of the pulses shows a characteristic jump. This is the famous pulsar in the Crab Nebula, the only pulsar to emit visible light, and here you see the steady slowing down represented as an increase of period with time, calendar time. Well schematically what has been observed is this type of phenomenon, where the pulse rate suddenly shows a small increase over a few days and then relaxes to its previous slightly slower value. Now how does one understand this kind of phenomenon? I should emphasise that this is grossly exaggerated on the slide. When the pulse rate changes by perhaps one part in 10 to the 8 this is a notable event for pulsar observers; the pulses are normally so regular that the slightest change becomes a noticeable phenomenon. Now how does one explain a sudden change in the pulse rate like this? Well clearly you can't suddenly speed up a star which is spinning on its own in space; there must be some rearrangement of the material within it to cause this effect. Now may I have the next slide. The type of model, and I'm not saying this is exactly correct, but the type of model that's been put forward to account for these phenomena is sketched schematically here. When the neutron star is spinning rapidly you expect it to be an oblate spheroid because of the outward forces at the equator. So you have a rigid shell of material surrounding a liquid core and the whole thing will be spinning as a solid body, although the interaction between the quantum fluid and the outer crust is of some interest. The linkage here is via, we believe, quantised magnetic field lines which thread the star and transfer momentum from one part of the star to another. The momentum transfer probably takes place by the scattering of electrons, the residual small percentage of electrons, from these quantised vortex field lines. As the star spins more slowly when it loses energy it wants to revert to a more spherical shape, and since the crust is rigid it can only do this when the crust suddenly cracks like the shell of an egg.
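A quick sketch of how the collapse figures above follow from the conservation laws just mentioned (the solar-type starting values are those quoted in the lecture; the arithmetic is illustrative): with the magnetic flux, roughly B R^2, and the angular momentum, roughly proportional to M R^2 Omega, both conserved, shrinking from R_i of about 10^6 km to R_f of about 10 km multiplies both B and Omega by (R_i / R_f)^2 = 10^10:

\[
B_{f} \approx 10^{-2}\ \mathrm{T}\times10^{10} = 10^{8}\ \mathrm{T},
\qquad
\Omega_{f} \approx \frac{1\ \mathrm{rev}}{\mathrm{month}}\times10^{10} \approx 4\times10^{3}\ \mathrm{rev\,s^{-1}},
\]

which is where the 10^8 tesla and the up-to-10^4 revolutions per second come from.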
So progressively from time to time, as the stored elastic energy overcomes the strength of the material, the crust will crack and suddenly become more spherical, and when it does this the outer crust will spin a little faster to conserve its angular momentum, but finally there will be a coupling of momentum between the outside and the inside and the star will spin again at approximately its original speed. Well effects of this kind are detectable and give us some confidence that the solid state physics which has defined our model of a neutron star is mainly correct, but this is a complex subject and I cannot spend too long on it. Perhaps the most serious problems are that we don't really understand the plasma physics of the space surrounding a pulsar. It would be nice to know after all what it is that generates the radiation that makes these neutron stars detectable, and this is a major problem about which we as yet understand very little. Could I have the next slide? Speaking as a physicist, what we need to do of course is to solve Maxwell's equations around a rotating magnetised sphere, and that rotating magnetised sphere might initially have commenced its life in a total vacuum. It scarcely needs to be stressed that a normal atmosphere surrounding a neutron star one would expect to be non-existent. The scale height of the atmosphere in the intense gravitational field that I have mentioned would be roughly one centimetre for some reasonable temperature, so that one really expects no atmosphere; one would expect a neutron star to be surrounded virtually by a vacuum, but a vacuum does not generate electromagnetic radiation very easily and we must have some type of plasma surrounding this star. Well the basic ideas I can only sketch, because as yet there is no exact solution to this problem. When one spins a magnetised sphere in vacuo, that of course on a laboratory scale is a purely classical experiment. The magnetised sphere, if it is spun in the simplest symmetrical fashion with the magnetic axis aligned with the rotation axis, will generate polarisation charges within itself; charge will redistribute and the sphere will be surrounded by what we call a quadrupole electrostatic field. That is well understood. But when we try to extend the simple solution to a neutron star the numbers get slightly out of control. The electric field between the pole and the equator of a neutron star is obtained by integrating a field strength of perhaps 10 to the 14 volts per metre. One can then have something like 10 to the 11 megavolts between the pole and the equator of a neutron star, and this means that the surface electric effects at the star will produce forces on any charged particles one has which can exceed even the immense gravitational fields which I have discussed. So the possibility is that particles are literally wrenched from the surface of a neutron star and flung into space by electrostatic forces. Once this happens one runs into the usual astrophysical situation, which is that charged particles in space tend to move with the magnetic field. This is the situation of the frozen-in magnetic field which one so frequently finds, so that charged particles which are in space surrounding the neutron star are essentially tied to the magnetic field lines and will spin with them. Now if one attempts to solve this problem one can write down the equations of electrodynamics and leave out of the equations any effects of the inertia of the mass of the system.
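The size of that voltage can be sketched from the elementary unipolar-induction estimate (a rough order-of-magnitude check, assuming the Crab-like values of about 200 radians per second for the rotation, 10^8 tesla for the surface field and 10 km for the radius used above):

\[
\Delta V \sim \Omega B R^{2} \approx (200)(10^{8})(10^{4})^{2}\ \mathrm{V} \approx 2\times10^{18}\ \mathrm{V},
\]

that is, of order 10^11 to 10^12 megavolts, with a mean surface field Delta V / R of order 10^14 volts per metre, the same orders as the figures quoted.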
Leaving out the inertia is a reasonable approximation close to the star, and one arrives at the sort of situation I have here. One generates a plasma surrounding the star in which the charges are virtually separated into two regions depending on the sense of rotation. One can have a region of positive charge near the equator, one can have a region of negative charge around the poles, and this generates a plasma with which we're not familiar. When we create plasma in the laboratory the charge balance is always perfect or nearly perfect. The idea of a plasma in which the charges are completely separated is rather unusual to us. Well we have this atmosphere in which the charges, as I mentioned, are as it were tied down to the magnetic field like beads on a wire, and as the star rotates the atmosphere must rotate with it. But one soon reaches a grave problem, because in the case for example of the neutron star in the Crab Nebula, at something over 1000 kilometers from the star one reaches a region where the material tied to the magnetic field is trying to sweep through space at a speed approaching the velocity of light. Now this runs into relativistic physics and the situation is not understood, but probably the kind of effect that happens is that at this critical radius, which we call the velocity of light cylinder, material can no longer be tied to the star and must escape, so one probably has an efflux of particles as a stellar wind, probably at a speed close to the velocity of light, beyond that distance, and within that distance one has material which co-rotates, spins with the star and forms a closed system. Well somehow within this type of model we have to explain the generation of the radiation which makes these objects detectable. Just one feature while we have the slide present: some of these magnetic field lines intersect the velocity of light cylinder, and they are the field lines which come from the magnetic poles. From here particles can always escape from the star because they are threaded on lines which never return to the star, so near the poles one can have escaping particles along these so-called open field lines which intersect the velocity of light cylinder. Well now we have a wealth of information available, in fact we have too much information available, there are too many facts about pulsars to be explained, and I would just like to show you some of the features which observers can feed into this situation in an attempt to understand the physics of this complicated relativistic plasma. May I have the next slide? This is an expanded version in time of a succession of pulses from the first pulsar to be discovered. The time scale here is some tens of milliseconds from one side to the other, and you see successive flashes of radiation are displayed one beneath the other, and you see the profile of the flash of radiation from this particular neutron star. It is like a lighthouse in which somebody is tampering with the mechanism: each time it goes round one gets a flash of slightly different character and the situation can change dramatically from one second to another second. These peaks here in the emission are reminiscent of an alpine range and this kind of irregularity is typical of all pulsars. Could I have the next slide please? One can measure also such features as the polarization of the radiation, and an important feature shows up clearly in this example of a pulse. Here you see a very rapid pulsar in the southern skies.
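The critical distance mentioned, the velocity-of-light cylinder, follows directly from the co-rotation condition (assuming the Crab's roughly 30 rotations per second):

\[
R_{c} = \frac{c}{\Omega} = \frac{cP}{2\pi}
\approx \frac{(3\times10^{8}\ \mathrm{m\,s^{-1}})(0.033\ \mathrm{s})}{2\pi}
\approx 1.6\times10^{6}\ \mathrm{m},
\]

about 1,600 km, which is the "something over 1000 kilometers" quoted for the Crab pulsar.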
The total pulse from this southern pulsar lasts a few milliseconds only, and here is shown the direction of the electric vector in the radiated field; from the leading edge of the pulse to the trailing edge it shows a very characteristic rotation. The radiation in this case at radio frequencies is approximately 100% linearly polarized, and that of course gives us some strong clues as to the kind of radiation process which must be taking place in the plasma close to the star. The properties of pulsar radiation change so quickly that attempts have been made to display in a more elaborate way the character of the pulses. May I have the next slide please? Will that focus a little? Here you see a plot in colour of a succession of pulses, and one is displaying different characteristics. This slide was sent to me by the operators of the large radio telescope at Arecibo in the United States, Puerto Rico, and here is a sample of pulses, one under the other, and the first slide shows you how the intensity varies. The intensity is adjusted on a colour scale there, with white being most intense and violet the least intense, so that this is essentially the pulse profile measured across here. Then various polarisation parameters are plotted out successively, and you can see that it really is a very complicated situation. Apart from the changing intensity and polarisation there are features in the radiation which appear to drift. It shows up quite nicely on this band of radiation here. A polarisation feature which in this case lasts a few milliseconds drifts across the pulse, as if it were somebody in the lighthouse moving something across the lens which is forming the beam. This phenomenon is well known, it occurs in many, many examples and it's called the phenomenon of drifting sub-pulse features. Well we do not understand these at all, all we can do is make some guesses as to the correct explanation. And to finish I will just mention very briefly the kind of model on which astrophysicists are currently working. May I have the next slide please? The type of model which appears to fit a fair quantity of observational data is that we have a neutron star with a rotation axis which is vertical and a magnetic axis which is oblique. As one rotates this system then the overall character of the plasma surrounding the star will be much the same as in my simplified sketch earlier, but of course one has now a fluctuating magnetic field, fluctuating at the rotation rate of the pulsar, and this adds further complications. But the kind of model which seems to fit the observation is that we have charged particles escaping along the field lines which can reach to infinity, so those particles can escape from the star altogether, and if some mechanism can be found to create particles in bunches, then as they accelerate outwards along this curving magnetic field near the magnetic pole of the star they will launch radiation which is akin to the synchrotron radiation one is very familiar with in high-energy particle beams. This will generate beams of radiation which point along the general direction of the magnetic axis, and this can lead to the kind of pulse shapes which we certainly observe. But pictures of this kind are drawn with much, much imagination, and what we have to do is work out in some detail why the electric charges should be bunched, just how many one needs of course to make the radiation fields, these sorts of calculations are fairly straightforward, and whether the radiation has the right polarisation.
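The characteristic swing of the polarisation angle described above is nowadays usually summarised by the rotating-vector model of Radhakrishnan and Cooke, which is not spelled out in the lecture but makes the geometric picture concrete: if alpha is the inclination of the magnetic axis to the rotation axis and zeta the angle between the rotation axis and the line of sight, the position angle psi varies with rotational phase phi as

\[
\tan(\psi-\psi_{0}) =
\frac{\sin\alpha\,\sin(\phi-\phi_{0})}
{\sin\zeta\,\cos\alpha-\cos\zeta\,\sin\alpha\,\cos(\phi-\phi_{0})},
\]

so that as the magnetic pole sweeps past the line of sight the angle traces the smooth S-shaped rotation seen from the leading to the trailing edge of the pulse.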
It turns out that this synchrotron type radiation does have the sense of polarisation to account for the drifting angle of polarisation which I mentioned earlier. But some arrangement where electrons move extremely coherently, as in a radio transmitter, some arrangement of that kind, is essential to explain our radiation. As I said earlier, pulsars are not burning any fuel; they only have this: they are, as it were, a freely running electrical dynamo, and the radiation they emit must come from purely electrodynamic processes. May I have the last slide please? These models, as always, tend to become exotic, and the kind of exotic idea which is being mentioned now is that one perhaps has a pair creation process, electron-positron pairs, occurring near the magnetic pole of a star. The sort of situation would be as follows: the star is rotating about a vertical axis and charged particles in the magnetosphere are escaping from the polar cap region where such escape is possible. The escaping particles can perhaps leave at a rate which is not easily maintained by a flux of further particles from beneath the star; that is a statement which I could amplify, but unfortunately it would take us too far from the general scope of this lecture. One might end with a near vacuum with zero charge, and escaping charges higher up. In this case one will build up across this gap the voltage which occurs between the pole and the equator of the star, which as I mentioned can be something like 10 to the 10, 10 to the 11 megavolts. A very powerful field exists here. Well a stray particle in that field will be accelerated rapidly to relativistic speed; it will emit gamma radiation, and in the powerful magnetic field that exists here that gamma radiation can create positron-electron pairs.
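As a final sketch of the energetics of this pair-creation picture (standard particle-physics numbers, not figures taken from the lecture): a gamma-ray photon can materialise into an electron-positron pair only if

\[
E_{\gamma} \geq 2m_{e}c^{2} \approx 1.02\ \mathrm{MeV},
\]

with the strong magnetic field absorbing the excess momentum; particles accelerated through a gap of 10^10 to 10^11 megavolts reach energies of 10^16 eV or more, so the gamma-rays they emit lie far above this threshold and a cascade of pairs can develop.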
|
When the Nobel Prizes started to be given at the beginning of the 20th Century, there were a number of procedural questions still open. One of the questions generated considerable discussion within the Royal Swedish Academy of Sciences, namely the question if discoveries or inventions in astronomy could be rewarded with the Nobel Prize in physics. The outcome of the discussions was a working rule saying that this could be the case only if the discoveries or inventions were important for physics. The prize to Hewish and Ryle in 1974 was the first one to be given in astronomy and it is interesting to note that the Academy uses the term “radio astrophysics” instead of “radio astronomy”. In his talk at Lindau, which was also his first, Hewish picks up this physics thread and stresses the fact that observations of the Universe may give insight into physics under extreme conditions that could never be obtained in the laboratory. In particular, the gravitational forces in space may compress matter to extreme densities, thus forming strange objects such as the “bubbles in space-time” that we today call black holes. The discovery in which Hewish played a decisive role was made in 1967 and consisted in the finding of a new kind of stellar object, later given the name pulsar. The observational fingerprint consists of a regular and rapid succession of very short radio signals. The discovery eventually led to the understanding that matter may be compressed to such an extent that the electrons and protons of the ordinary atoms recombine to form neutrons. This may happen as the result of a supernova explosion and leaves a rapidly spinning neutron star behind, acting as a light-house. If the Earth happens to be swept over by the light-house beam, the pulsar may be registered and studied. It is a sad fact that the PhD student who made the actual discovery was not given a share of the prize. But it is positive that 20 years later, when a similar discovery was rewarded in 1993, the professor and the PhD student shared the prize. Anders Bárány
|
10.5446/51975 (DOI)
|
We would like to approach today's topic by sharing with you some of the key factors for competence building in a digital environment. We will also try to transfer these factors to the basic teaching of classics and take into account existing preconditions and challenges that can be anticipated. A short survey among students from several German universities attending various subjects in ancient studies yielded a uniform picture of the basic teaching of classics before the Covid-19 crisis. The conventional combination of lectures, seminars and exercises is the most common. Lectures and seminars follow a routinely repetitive structure and focus on the auditory and visual teaching of content. Students work on a seminar topic by analyzing more specific subtopics in self-study and presenting their results in lecture form, followed by producing a written record. Digitally supported forms of teaching are almost completely avoided. Some core competencies of humanities work are trained. They are the questioning of research opinions and facts through their evaluation, scientific writing, discussion and presentation, but so-called future skills as postulated by the Stifter Verband in cooperation with McKinsey, such as problem solving, creativity, collaboration, digital interaction and learning, are neglected. As a result, students are insufficiently prepared for their future scientific practice and even less for a job in a professional environment. We have to ask ourselves, do traditional teaching formats achieve the best possible learning outcome? How can digital teaching and learning formats help us to gear our studies towards competence? Since the beginning of the COVID-19 crisis, lecturers have been forced to switch entirely to digital teaching within a few weeks. This has presented them with numerous challenges, ranging from technical difficulties and lacking knowledge of digital formats, methods and tools to a fundamental skepticism about digital formats. Quite often, attempts have been made to translate the same type of traditional teaching one-to-one into the virtual world, which is currently leading to a certain amount of frustration among students and lecturers and naturally also to the desire to return to the status quo as soon as possible. So not only in view of pandemic-driven needs, but especially in view of the potential and added values of digital teaching and learning formats, we now have to shift the focus back towards the didactically effective planning, designing and teaching of courses. The continuing education program of MUSEAN can serve as a good example for this, but before we start looking at certain MUSEAN elements that might be beneficial for basic university education as well, we'd like to give you a quick overview of the project, its objectives, its target group and the program structure. By funding the project MUSEAN Weiterbildung und Netzwerk, the Federal Ministry of Education and Research has been supporting the establishment of an academic continuing education program for the museum and cultural sector since 2014. It is part of the program Advancement through Education, Open Universities, running from October 2014 till July 2020. So we'll come to an end in a few weeks. MUSEAN pursues two goals: on the one hand, the development of further education courses is intended to support a targeted professionalization of museums and their staff. On the other hand, the development of a museum of the future is to be promoted through an ongoing discourse among an emerging network of museum experts.
Accordingly, our target group includes people who work in various areas of museums and cultural institutions, from volunteers to service personnel to museum directors, with very heterogeneous prior knowledge and professional experience, regionally scattered throughout the German-speaking world, with highly specific needs and interests in continuing education. The museum curriculum is therefore designed as a small-scale modular building block system that you can see here. The modular system is based on the main tasks of museums: collecting, exhibiting, mediating and educating, managing and marketing. The courses focus on highly relevant and pressing topics, for instance sustainability management, digital strategies or heritage interpretation, but also on basic topics such as public relations, exhibition planning or curating. The modules can be studied as individual courses or cumulatively as a certificate or diploma of advanced studies according to Swiss UNI. The program is designed in such a way that, by allowing the participants to choose flexibly from more than 50 courses, they can tailor their own further education to their own level of knowledge and interests as well as to their own professional and private situation. In addition to their work, our participants dedicate approximately five hours per week over a period of five weeks to a topic, mainly online in a mentored group setting. The courses have been developed with the professional support of museum experts and scientists. The courses are designed in a blended learning format. This means that most of the courses, so 80 to 100 percent, can be studied online on the digital learning platform. Mostly, our asynchronous phases are interconnected with synchronous contact periods with the entire group, either independent of location via web conference and/or at a museum as a place of observation and action, so to speak, in a laboratory-like setting. The didactic design or learning design is built upon a constructivist understanding of education and focuses on problem- and task-based learning with a high emphasis on practice transfer. This means that, with a hub of expert input, interactive reflection, discussion and advice, the participants can acquire new knowledge based on problems from their everyday work and meaningful, relevant tasks, enabling them to transfer new insights into practice. To give you an example, participants of the course exhibition planning are given the task to submit a plan for a real or fictitious, small-scale, special exhibition, combined with a presentation or written conceptualization for their fellow students. To accomplish their tasks, students use various multimedia resources, such as the so-called e-lectures that you can see here, interactive videos, texts or learning modules, as well as real-life materials. Depending on the learning goal of the task, they work either alone, in tandems or in small groups, using the various communication and collaboration tools. I have just shown you some examples of communication and collaboration tools that the students use working on the ILIAS learning platform, like, for example, a whiteboard in a web conference system.
Self-directed learning is key, but in each course, the students are intensively accompanied and supported by a tandem consisting of a subject matter expert and an e-tutor throughout the five-week period. I would like to show you an example from the course Objects and Knowledge, which I'm conducting together with Professor Dr. Alfvonenhoff. One of the aims of the course is to raise awareness of the various fields of meaning of objects and to use this knowledge specifically in the exhibition concept. The course reflects the interactions between the way objects are handled in museum presentations and the knowledge that is conveyed to visitors about the objects through the method of staging chosen. For this purpose, the students read various texts and discuss essential terms in small group rooms. In the left picture, you can see an example of a discussion in the EtherPad. Each person writes in a different color. The tool can be used synchronously and asynchronously. As soon as the students agreed on a definition, it was transferred to a glossary that you can see in the right picture. At the end, the glossary can be printed and used as a basis for further work. The second part of the course was about selecting objects and analyzing their different meanings in a blog post. Here you can see the Ilias blog tool, which allows you to switch between the preview on the left-hand side and the entire post, including images, on the right-hand side. The fellow students comment and give further suggestions. Again, small teams develop two concepts for each object to show how it could be exhibited in a museum. For the Bluejack, the development of two proposals shows entirely different approaches to the object. In an online meeting, the participants discussed which meanings were superficially conveyed in which version and which ones were less important. MUSEAN is considered best practice in continuing education. So what are the key factors that have contributed to this success? As a starter, a look at some key figures might be helpful. Since 2016, a total of 380 students have participated in 160 courses in test and regular study operation. The majority of our students have consecutively completed several courses. Among them, almost 120 have obtained a certificate of advanced studies and almost 20 even a diploma of advanced studies. The number of dropouts is extremely low, especially compared to pure online formats such as MOOCs. A total of 70 experts, 12 e-tutors and the MUSEAN team, about eight part-time staff, have been involved in developing and conducting the courses throughout the entire project phase. Since 2016, the program has been regularly and extensively evaluated, both quantitatively and qualitatively, at course level and program level. The statements of participants, subject matter experts and e-tutors are regularly triangulated and feed into the process of continuous quality improvement. So I would like to share with you some of the main results of a recent comprehensive survey carried out among 24 CAS graduates at the end of last year. The aim of the survey was to specifically assess the sustainability of the program and to examine the transfer success as well as the longer-term effects of participation on the professional situation of the graduates. The quality of the CAS program was rated good or very good by 87.5% of the graduates. From the graduates' point of view, the CAS program is particularly suitable for career starters.
Reasons given for recommendation included the high degree of flexibility and diversity, the topicality and relevance of the topics, the possibility of gaining insight into other museum areas and expanding competence, as well as networking and exchange. In addition, the expansion of professional, methodological and digital skills and the linking of theory and practice play a particular role for the participants. According to the survey results, the vast majority of participants were able to expand digital and non-digital key competencies as listed here. In terms of practice transfer, most of the participants were able to integrate their study contents into their professional activities, use them as a starting point for developing new ideas or adapt them to different situations. Most of the participants found their participation in the continuing education program beneficial for their career development. As a result, graduates feel more confident in their working environment and also indicate that their work quality has improved, and the majority are more satisfied with their work. Both from the many evaluations carried out and from the experience gained during the development and delivery of more than 50 courses, we can now derive a number of factors that have proven to be crucial for the learning outcomes of the participants. At program level, the first thing to mention is the high degree of modularity, which enables flexible and learner-centered studying. Study contents are relevant to individual professional goals, they are up to date and they are dynamically adaptable. Competence orientation, including, or let's say across, all levels of Bloom's taxonomy, the inclusion of future skills and constructive alignment are crucial for learning success. Theory-practice transfer has to be ensured through problem- and task-based learning. The didactic design is based on a constructivist understanding of education in which learning from and with each other in learning communities is encouraged. The blended learning setting itself is designed in such a way that teaching and learning activities are coherently arranged in face-to-face and online phases and the chosen mix of methods, media and social forms is beneficial for learning. Last but not least, lecturers are prepared for their changing role and the requirements in a blended learning setting, with expert authors and teachers as learning initiators and coaches and e-tutors as learning facilitators. So what conclusions can we draw from a successful continuing education program for basic university education? Which benefits could be made available for teaching, study and even research, and which obstacles and challenges might have to be tackled? Let's first take a closer look at the benefits of adding digital elements to classroom teaching, particularly in the disciplines of classical studies. At present, depending on the size of the institute, of course, prospective antiquity scholars have rather little opportunity to influence their profile within their specialization, for example by choosing seminar topics that interest them, due to the very limited number of seminars available. If, for example, students had access to additional recorded lectures from their own institute or from others, they could more easily acquire fundamental knowledge in subjects that their study program does not offer.
The formats can facilitate learning for all types of learners and significantly increase the range of learning activities, encompassing not only acquisition, but also investigation, discussion, collaboration, practice and production. Through the use of multimedia, tasks and tools, various senses can be addressed and students can study the material at their own pace. As a result, different thought processes are triggered and long-term memory retention is supported. Multimedia also allows for creative learning outcomes such as videos, podcasts, presentations, virtual exhibitions and others. That means learner-generated content that can be shared with others and also reused by others. And also e-portfolios can serve purposes of reflection and synthesis of whatever has been learned. Flipping the classroom by shifting acquisition activities into a preparatory online self-learning phase can give room for joined in-depth discussions or alternatively a discussion in the seminar room which is restricted by attention and time pressure can also be continued online. At the same time, skills that will be beneficial for students in their professional lives such as digital interaction and collaborative working are trained incidentally. This is particularly important in classical studies where only a few elements prepare the students for their professional future. But in order to maintain its attractiveness as a subject, the classics must not turn a blind eye to students' employability. Digital skills are already required in a variety of future professions, whether in museums, publishing houses or in teaching. If we now attempt to establish a program like Muzion just as it is in basic humanities, our attempt would probably not be crowned by success. The reasons for this are as so often multi-causal and lie in completely different framework conditions. Let us take a glimpse at the actors involved. Why for participants in a continuing education program, the priority of their studies always ranks third after work and private life? For students, studying is their main occupation. As canonical study programs for students are geographically and chronologically tied to accommodate teaching staff and certain infrastructure, the university, the advantages of digital teaching to study independently of time and place are not relevant. Why complicated online dating when it is easier and, besides, socially more rewarding to meet on campus during the day? As soon as digital learning units are fixed in the examination regulations, it must be ensured that every student has the necessary equipment at their disposal. But can we demand this from students with financial weaker backgrounds? Or can we make universities responsible for providing the equipment needed? Fortunately, students of today's generation are much more familiar with digital devices, tools and structures. However, this does not equally apply to the teaching staff. Due to the usual high-age demographics among university professors, many of them do not belong to the generation of digiternators. Therefore, they are rarely familiar with the potential of digital formats and tools. They usually lack digital learning concepts and ideas about appropriate tasks beyond uploading documents. Accordingly, their attitude towards digital teaching methods often remain skeptical and they regard traditional classroom teaching as the only true form of teaching. 
To develop skills and to convey the added value of digital or blended learning and teaching formats among lecturers, special training programs are needed. In addition, tight examination regulations often leave little space to accommodate smaller learning modules or alternative forms of assessment in the curriculum with a suitable number of credit points. Where a relaxation of the examination regulations cannot be achieved, the only option is to force the digital learning task into this corset. Moreover, the subject-specific conventions determine research practice. Archaeological research still seems to be strongly influenced by individual protagonists, institutes and regions trying to differentiate themselves through their expertise, which secures jobs, legitimizes research positions and their recognition among colleagues. Behind this lies a collective research habit. These conventions, which can not only be found in research but also in teaching, are well established and can only be changed slowly. It is therefore crucial that digital formats are used in a targeted and structured manner. Elements of traditional classroom teaching should be complemented synergistically with digital elements and formats. However, all these points challenge the implementation quite a lot and it might not be as easy as it sounds. Adaptation at different levels is required. But in view of better educated students who can survive in the job market of the future, in view of lecturers who themselves keep on learning and in view of an entire research discipline that can ensure its quality also nationwide, make content sustainably available and network more easily, the effort is worthwhile. And I would like to stress once more, it's not a question of either classroom or digital teaching. It is a question of how to sensibly unite the two. So will we return to the old forms of teaching in post-pandemic times or will we seize this opportunity? Thank you very much for your attention and interest and please feel free to ask questions.
|
The lecture was held at the online conference "Teaching Classics in the Digital Age" on 15 June 2020.
|
10.5446/52497 (DOI)
|
Hi and welcome to this next session. My name is Kevin Valdek and together with Ulf Björkengren we will speak about designing an open communication framework for the connected car. This is a project within the GENIVI Alliance, which is a non-profit organization. It's an open project, meaning that all the work and things that we will show today are open source and available, and the project itself is also open and meeting minutes are online, so if you see anything that's interesting today there are definitely good opportunities to contribute. As we're talking about a communication framework, maybe one of the first things is to clarify what kind of data is communicated, so we will look at the kind of needs or requirements that we had in mind when starting to design this framework. Then we'll go into the different components, not too much in depth as it is a short talk, but we'll talk about the different components that we have in this framework, and the second part of the presentation will be a demonstration, especially focusing on the vehicle server which is running in the vehicle and the client that is retrieving the data, so it's more focused on the part where the vehicle is sending data. So the general needs that we had to consider were that there are a lot of different types of data that can be generated and of course different use cases for using that data, but one important thing is that we can categorize it into personalized services and big data services. If you look at what the needs were, of course we have to be able to retrieve data from the vehicle, that's the basics. We have to have some kind of subscription mechanism, so when new data is becoming available we can get an event and fetch that data. We also have to consider that the vehicle could be offline for longer periods of time; it still has to collect data and it has to be possible to retrieve that data. We could also consider streaming APIs and also customized jobs, where we assign different jobs to the vehicle, saying that okay, now in this next journey the focus should be on this type of data, because obviously the vehicle is generating tons and tons of data and we need to be able to specify what data should really be transmitted. And when it comes to big data services we can also consider things like creating some kind of analytics or histograms, and other types of advanced services like media streaming, which is also a source of data in the vehicle and could be used in some use cases. To enable all of this, the first thing we considered was: okay, we can select different components and we can build different services, but in essence we also need to have some kind of common language, and in this case it's a common data model. So that was the first thing we set out to use, and we decided on using the Vehicle Signal Specification. The Vehicle Signal Specification is an open specification that has been developed within GENIVI for several years, and the main things that were considered when starting to build it were that it has to be easy to read and write for a human, but it also has to be computer processable, so we can build different types of abstraction layers and it can be used directly in different systems, and so the YAML format was selected as the format to specify it. 
Here's an example of how it looks: in the gray box we have Vehicle.Body.Windshield.Front.WasherFluid.Level, so quite a long path, but this is how it's built up as a tree structure with different paths, where it starts from the vehicle as the main object and goes downwards to all different types of sensors, actuators and attributes. So it supports attributes, which are fixed values, signals, which are dynamic values that can change at any time, and also actuators, so different kinds of components that can perform simple functions. And here are some examples of how it looks: in the YAML format we have the data type defined, the unit, the type and a description, and of course this is easily extendable to other types of data as well. Now the communication framework itself: here's a kind of splash view of all of it. Starting from the bottom right we have the software that is running inside of the vehicle, we have the OEM cloud, so the vehicle manufacturer cloud, in the top right, and on the left-hand side we have third parties, so of course the communication framework is a basis to support different services using that data, and typically those can be third-party services, and in the bottom right we have the core of it, where we actually produce the data in the vehicle and capture that data. In this we also have this gray box showing that the data model, VSS, as mentioned, is used. We also have a GitHub link, so you can download the presentation right now or afterwards, follow the links and get the repositories. And here in this red box is where we are focusing this talk a bit, definitely the demonstration, where we have the data server in the vehicle that provides the data to the vehicle client. But before the demonstration we can look at the components, starting from the in-vehicle components. The first thing we have here is the state storage. It's a simple mechanism of getting input from all the different sensors and networks in the vehicle and then outputting that data to the data server that has the interface to the cloud. In our proof of concept implementation, which we finished by the end of last year, so just a few months ago, we have built this PoC with a relational database using SQLite. The manager implementation is done in Go, meaning that it sets up the database and takes care of the configuration, and then in the end the data server, which needs to get access to data from the state storage, is using SQL directly. In the GitHub repository we have all the documentation of how that is set up and also how it is being interfaced, so that's the kind of simple way of handling the latest cached or known data values of the vehicle. Next we have the data feeder. In the proof of concept implementation we first looked at different vehicle simulators and how they could be used; in the end we decided on a somewhat simpler approach, where we have another database running in the vehicle which contains data that has been captured earlier in different ways, let's say real trip data, and when we play that simulator it just feeds the previously captured data into the state storage. So it's a simple way for us to use data we collect on the streets: just put it in the vehicle, run it, and then constantly, as the timestamps update, it sends new data updates directly to the state storage that can later be transferred to the cloud. 
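To make the state storage idea a bit more concrete, here is a minimal sketch in C++ using the SQLite C API. The table name and the single-table layout (path, value, timestamp) are assumptions made only for illustration; they are not the actual schema of the CCS proof of concept, whose manager is written in Go.

```cpp
#include <sqlite3.h>
#include <cstdio>

int main() {
    sqlite3* db = nullptr;
    if (sqlite3_open("state_storage.db", &db) != SQLITE_OK) {
        std::fprintf(stderr, "cannot open db: %s\n", sqlite3_errmsg(db));
        return 1;
    }

    // Hypothetical single-table layout: one row holding the latest data point per VSS path.
    const char* ddl =
        "CREATE TABLE IF NOT EXISTS vss_signals ("
        "  path  TEXT PRIMARY KEY,"
        "  value TEXT,"
        "  ts    TEXT);";
    char* err = nullptr;
    if (sqlite3_exec(db, ddl, nullptr, nullptr, &err) != SQLITE_OK) {
        std::fprintf(stderr, "ddl failed: %s\n", err);
        sqlite3_free(err);
        sqlite3_close(db);
        return 1;
    }

    // A data feeder would upsert the latest value for a path roughly like this.
    const char* upsert =
        "INSERT INTO vss_signals (path, value, ts) "
        "VALUES ('Vehicle.Speed', '42', '2021-02-06T10:00:00Z') "
        "ON CONFLICT(path) DO UPDATE SET value=excluded.value, ts=excluded.ts;";
    if (sqlite3_exec(db, upsert, nullptr, nullptr, &err) != SQLITE_OK) {
        std::fprintf(stderr, "upsert failed: %s\n", err);
        sqlite3_free(err);
    }

    sqlite3_close(db);
    return 0;
}
```

A data server like the one described next would then read the latest cached value for a path with a plain SELECT.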
The in-vehicle data server this will be one of the key focus points in the demonstration it's using the W3C VISS v2 protocol so and we have done the implementation here that we have the protocol implemented exactly as it should be in the vehicle it is a data server so it's data server because of course the vehicle is one that possesses the the true source of data it constantly being updated new type of data and then it exposes that to the the cloud so it can be fetched. So when it gets a request the data server simply looks in the state storage if that data is available in the PUC or proof of concept we return dummy data if nothing is found and also this this data server is implemented in go and fully available on github so you can check it out also and and how that works. So on the cloud side we have first off the vehicle client so the vehicle client of course is the one connecting to the data server so of course it's also using the VISS v2 protocol and then pulse data in our proof of concept implementation and on a pre-autical basis the protocol itself supports subscriptions so the vehicle or the vehicle client can really subscribe to different events based on what the interest is in a given moment that what type of data to monitor what type of data to look after and as soon as data is being retrieved that data is then being stored in the in the database on the cloud side. In the proof of concept we have implemented the database also the relational database also SQLite similar to the in-vehicle state storage it's using go language as the language to kind of build a manager to set it up and configure it and but it can be accessed in two different ways either through SQL directly or through REST interface so there's different ways of interacting with them with the database to store this data from the vehicle client side. Then lastly the last component that we'll go into is the API for third parties so pretty simple it's of course any type of API could be exposed to third parties in this case we use the GraphQL API in the proof of concept so we have a GraphQL server that exposes a GraphQL schema using the vehicle signal specification so the same data model we have in the vehicle it's in the end exposed to the third parties and then it interfaces the the database in the cloud to get the latest data that is then being exposed so that's kind of the that's the general framework and how we have defined the different components so we are not locked into anything specific so if you look at these different implementation that we have done in Go we have selected SQLite and so on it's more about yeah just a selection for the proof of concept of course any relational database could be used any language could be used to write the write the adapters the goal language doesn't have to be used but the simple way of demonstrating this and it's all also openly available on github could be used as a reference implementation for anyone using it but it can also be used for by anyone or by you if you want to check it out or even in a hobby project or put it on some kind of raspberry and run it in the vehicle easy to set it up and get this type of concept running with a data server and a vehicle client on the on the cloud side. So with that we'll go to a demonstration I will hand over to Ulf who will show how the vehicle server and client works in practice. 
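Before the hands-on part, it may help to see roughly what such requests look like on the wire. The two payloads below are only an approximation of the VISS v2 draft as described in this talk (an action, a VSS path, a client-chosen request ID, and a time-based subscription filter); the exact field names and the filter layout are assumptions, not the normative specification.

```cpp
#include <cstdio>

int main() {
    // Read a single signal: an action, a VSS path and a client-chosen request ID.
    const char* getRequest = R"({
  "action": "get",
  "path": "Vehicle.Acceleration.Longitudinal",
  "requestId": "232"
})";

    // A time-based subscription asking for a notification every 3 seconds.
    // The filter structure here is a guess at the draft spec, for illustration only.
    const char* subscribeRequest = R"({
  "action": "subscribe",
  "path": "Vehicle.Acceleration.Longitudinal",
  "filter": { "type": "timebased", "value": { "period": "3000" } },
  "requestId": "233"
})";

    // A client would send strings like these over the WebSocket connection to the
    // in-vehicle data server; here we only print them.
    std::printf("%s\n\n%s\n", getRequest, subscribeRequest);
    return 0;
}
```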
So the automotive group in the WCC is currently working on a second generation of the VISS specification which is a specification of an in-vehicle server working to serve in vehicle or off vehicle clients. The specification consists of two documents the core documents documents shown here which contains the higher level messaging layer description and access control model and so forth and there is also a transport document that describes the supported transport protocols, HTTP and web sockets and their respective payloads. These documents are still not made public however the plan is to do so to get the first working draft made public later on this spring. You can in the meantime if you like access the same information in html files at the automotive WCC automotive github if you like. There is also a github where an implementation of the server is being developed. Let's see we can have a quick look at the architecture of this server. You see here it consists of a server core part to which managers for the different transport protocols can register web socket and HTTP currently it can very easily be scaled into supporting more protocols that can register to the server core but currently we follow the specification with these two. On the south side of the server core there are the service managers that also register into the server core and then do the actual processing of the request to access data in the vehicle subsystem below. The server core is responsible for checking that the requests are valid and are found in the VSS tree which is the data model being used here and then the routing of the messages between the different service managers and transport managers in order to get the response back to the clients at the top. The actual directory structure of on this github follows a mirror this architecture so there is a server directory where you can find the different components the HTTP manager and web socket manager and the server core of course and so forth. And there is also a client directory where you can find simple test clients written in Java script that can be used to test the server. We have a HTTP client and we have a web socket client. We have also a web socket client where we test to compress the payloads that's outside of this actual specification but that's tested here. There is also a file containing templates for payloads request payloads that can be used by the by the clients. So let's try and see if we can run this in the root of the repo you can find a shell script that you can start the server with. So let's see first tries to stop it and there was nothing so now it's starting and it says that all the six different processes of which four constitutes the actual server and two are part of the access control system the access grant token server and access token server if one wants to play with the access control system also specified. So we have now started and then let's see we can go sorry let's see here so we can go to the directory where we have the test clients and we can start the web socket uncompressed client. 
It's very simple but you first have to insert the correct IP address and it's connected and now we can let's see go to the um file with the templates and we can copy one of the commands and insert here and we'll send it to the server and we'll see that it couldn't find this path so if you send something incorrect the server will tell you let's see if we can find some other so we'll try with this one instead so we'll copy that and go back here and insert that and send it to the server and we can see here that we got the correct response back we sent a get and we had a request ID which is not shown here but the same and then we have the the data coming back so from the vehicle acceleration longitudinal path we have a data point of the value of 1002 in the timestamp of I guess today no it's an old one well that the timestamp says when the value was captured so this is a value capture a couple of days ago. We can try some other requests according to the specification to for example to request multiple signal values in one request as specified so we copy that and we'll see what happens we insert that we couldn't find that either which was due to a bug you can see here that it's it's checked for historic data which is a feature also supported in the specification and we didn't ask for that but it couldn't find any because and that's correct because there isn't any so here's a bug that we have to fix the historic data is very recently added to the server so let's see if we can try something else like this request which is a subscription request that is time-based so that it will return a value signal a value every third second so let's see if we have more success with that one and it persists that it worked and then it starts now to return values every third second from the and it's you see the the data part here the path that we asked for and then the data points of value and the timestamp the value is just the dummy value that is counted up on the timestamp this is for the current time I believe now yes this so it works and I let's see if we can also unsubscribe to it so that it should stop again we'll try that unsubscribe response there and it seems like there are no more notifications sent to us it stopped so this shows that the server can indeed respond correctly but not correctly with everything right now but it will very soon when the bug is fixed to the different requests we have tested now web sockets and you can also start up the HTTP part actually you don't have to start anything up it's already sitting there ready to to respond to any any requests so but you have to start up the test HTTP test client and then apply IP address and then one could use the different templates here of course the paths can be changed and everything can be changed and tested these are just templates to to help test a tester there are also some templates for for payloads if one wants to play with the access control system so to for requests to the access grant token server something looking like this should be used and then for for the in the second step to the access token server the token received from this request should be one part of the payload and then some other parts here as as as shown so that that can also be tested and and and it works according to and follows the specification so finally I was I just want to say that what we've what I show now is is the example of the data server the viz version 2 server and a client not not the client here in the ccs architecture but the test client so we have 
tested this part here for the other components that you can find here like the state storage and and the actual database the open vehicle data set and the server that can be used to insert values or retrieve values from it and this vehicle client that reads data into the ovds server can be found on this github where we have under the ovds directory a server that as I said is responsible for reading and writing into the actual database and this is the client that is used in the in the architecture image that we just looked at here is also the state storage the the database the in vehicle database that is working as a buffer between the the vis server and the underlying vehicle system there are some other parts like live simulator that can be used to to simulate vehicle data given that it has a database of the ovds format that it can read so then it can then replay that and write it into the state storage and so forth all of this is used in the was used in the demo of the ccs architecture that was demoed at the old members meeting genievi all members meeting a couple of months ago if I recall correctly so I think this concludes my presentation so I live back to Kevin thank you Ulf great so we also have a video online about the full end turn demonstration so considering all the different components in the framework the link is in the presentation so feel free to follow that just a few last notes and so this work or the project is definitely work in progress so constant it's not a wrapped up finished framework just yet continues to evolve if you want to join we have different telcos twice per week I want to talk about vehicle data more on mondays and then the communication framework also one Asia friendly time on mondays and then on Wednesdays we have another time slot there's a mailing list there's a wiki on the bottom of this page you can see the links to the wiki you can find all different resources for this project that's it so thanks from both of us I hope this was interesting we're both available for to chat if you have any questions just let us know thank you you yeah not that doesn't so questions that we missed at least yeah all right I'll ask a few questions now so we didn't really get any questions in the uh and the chat window here so I just want to know uh we we what about the um what about the uh flash how do you how are you is there a lot of impact on the flash memory with are you storing a lot of data on the vehicle you you know up until now we have not used real vehicles we are so we we run the server only on a on a laptop or some some some computer so so uh in in the genievi ccs project where a lot of of this development is is is used the first phase of that project is is just sim simulated vehicle simulated vehicle data and then there is a second phase where we plan actually to to try to use real vehicles and vehicle data so so but we are not there yet so so therefore I have at least no information or or experience from from flash usage or or load and things like that right now okay yeah especially with the recent uh and recently announced tesla recall where their emce mc controller is uh failing yes right yeah I don't really have any any answer to that since we have not come to to that point so we have gained an experience from from such issues right right and then I'm not sure if we already got on the uh well it looks like you
|
The connected car has been around some time but we are still waiting for a large breakthrough when it comes to third party services powered by vehicle data. The fragmentation of different technical solutions makes it difficult for 3rd parties or developers to work with easily accessible vehicle APIs. To tackle this, the GENIVI Cloud & Connected Services project is designing an end-to-end communication framework starting from the data transfer from embedded systems in the vehicles and spanning to cloud based APIs. The framework is built on open protocols and is demonstrated with open-source reference code with the aim of simplifying implementation work for both car manufacturers and 3rd party developers. This presentation will detail the work results to date and will be co-presented by Kevin Valdek from HIGH MOBILITY and Ulf Bjorkengren from Geotab. We’ll have a look at the key technical challenges and considerations that are necessary to make in order to create a successful framework. Designing a useful API for developers starts in the vehicle where topics like the data model is already considered. Further on, we’ll show a Proof-of-Concept implementation that anyone can try out. The Proof-of-Concept implements a data server in the vehicle that exposes an API to data clients, such as backend servers. The data transfer interface follows the W3C Vehicle API protocol and the Proof-of-Concept brings in additional considerations in the cloud to make APIs available to 3rd party developers. The GENIVI Alliance is a non-profit automotive industry alliance that develops standard approaches for integrating operating systems and middleware present in the centralized and connected vehicle cockpit. The Cloud & Connected Services project is performed by a work group that is open for anyone to join and to contribute to.
|
10.5446/52499 (DOI)
|
Hi everyone. In this talk we are going to give an overview of the open source Vulkan driver for the Raspberry Pi 4 that we have been developing at Igalia, targeting the V3D GPU on that board. The name of the driver is V3DV, which is the same name already used for the OpenGL driver, V3D, with a V at the end for Vulkan. The driver lives in the Mesa repository, and it benefits a lot from the Mesa infrastructure. For example, we reuse the compiler stack, NIR, which is shared with the other drivers, and most of the common bits, although we needed to do some changes; mostly we reuse the compiler backend that is used for the OpenGL driver. We also use the same kernel DRM driver as the OpenGL driver, and one of our goals when doing all of this was to not affect the existing OpenGL driver. To explain the evolution of the development of V3DV, let me go through the main milestones. We started to work on this driver at the end of 2019, but at the beginning it was mostly analysis and investigation, trying to understand what we needed to do in order to start the development. Once we finished that analysis we started to write the driver, and some months later we were able to render our first triangle, the typical colored triangle that is the first milestone when writing a new driver. By May we had some of the Sascha Willems Vulkan demos working, which are very popular demos, a bit more complex than the simple triangle. Once we had implemented all the features needed up to that point, by October we moved the development to Mesa upstream. 
Once the driver was upstream, we also wanted to find more applications using Vulkan that we could test on the Raspberry Pi, and ways to compare results against the existing OpenGL driver. As of today, the driver essentially implements the Vulkan 1.0 API. We also tested the driver on 64-bit: we started with the official Raspberry Pi OS, which is 32-bit, but now there is a 64-bit version and we have tested that too. At the very beginning of the project, the first goal was getting to render something on the hardware, and we used the Vulkan CTS a lot to verify, step by step, the features of the driver. The thing is that we used the CTS for more than just conformance: the CTS is the Khronos test suite that you use to check the driver against the spec, and we were also using it to guide the creation of the driver. At the beginning we had to be careful about which tests we could rely on, because for a test to pass you usually need many other features already in place; for example, if you want to render something and verify that it was rendered correctly, a lot of the driver already needs to work, and at the start of the development we didn't have all of that. In the later stages, when we had most of the features implemented, the CTS was really guiding the development of the driver. For example, we had a subset of the CTS for quick testing, because the CTS is really big, with a huge number of tests, and executing every test after every change would not be practical. So we keep a subset of around 10,000 tests, which started smaller and grew as we implemented more features. With those 10,000 tests we can run the whole subset in around 10 to 15 minutes, which is convenient: you make a change, you run the subset, you check that everything still passes, and you continue working. Of course, before merging anything we want to verify that all the tests pass, so periodically we run the full test suite, and as you can imagine that takes much longer, especially when some tests fail and need to be re-run. At the beginning of the project a full run could take something like 7 to 10 hours, but now it is around 4. 
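As a side note, besides the CTS, a tiny standalone Vulkan program is a convenient sanity check that the loader actually picks up a driver; the snippet below is a generic sketch, not code from the talk.

```cpp
#include <vulkan/vulkan.h>
#include <cstdio>
#include <vector>

int main() {
    // Create a minimal Vulkan instance (no extensions, no layers).
    VkApplicationInfo appInfo{};
    appInfo.sType = VK_STRUCTURE_TYPE_APPLICATION_INFO;
    appInfo.pApplicationName = "vulkan-smoke-check";
    appInfo.apiVersion = VK_API_VERSION_1_0;

    VkInstanceCreateInfo createInfo{};
    createInfo.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
    createInfo.pApplicationInfo = &appInfo;

    VkInstance instance = VK_NULL_HANDLE;
    if (vkCreateInstance(&createInfo, nullptr, &instance) != VK_SUCCESS) {
        std::fprintf(stderr, "vkCreateInstance failed\n");
        return 1;
    }

    // List every physical device the loader found, with the device name and
    // the Vulkan API version it reports.
    uint32_t count = 0;
    vkEnumeratePhysicalDevices(instance, &count, nullptr);
    std::vector<VkPhysicalDevice> devices(count);
    vkEnumeratePhysicalDevices(instance, &count, devices.data());

    for (VkPhysicalDevice dev : devices) {
        VkPhysicalDeviceProperties props{};
        vkGetPhysicalDeviceProperties(dev, &props);
        std::printf("%s (Vulkan %u.%u.%u)\n", props.deviceName,
                    VK_VERSION_MAJOR(props.apiVersion),
                    VK_VERSION_MINOR(props.apiVersion),
                    VK_VERSION_PATCH(props.apiVersion));
    }

    vkDestroyInstance(instance, nullptr);
    return 0;
}
```

Built with something like g++ check.cpp -lvulkan, on a Raspberry Pi 4 with the driver installed it should list the V3D GPU and the 1.0 API version it exposes.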
Another thing that we care about is the reliability of the test runs. The thing is that if you run the tests and some of them fail, but they don't fail consistently, you need to run them again and figure out whether the failures are real or the run was just flaky, so we also spent effort on making the runs on the board stable. So, about the status of the driver as of today: we have Vulkan 1.0, with all the mandatory features complete. We also support some optional features; in general we add them when we see that they are easy to support on this hardware. For the Vulkan 1.0 CTS, the Khronos test suite, we are passing the tests. And as I mentioned before, we also have the Quake ports, which can use either the OpenGL renderer or the Vulkan renderer, for example Quake 3, so we can use the same application to compare both drivers. For the performance of Quake 3, the comparison between the Vulkan renderer and the OpenGL renderer did not show a big difference, and the differences that we did see point to things that we could do differently in the driver, so this is something to keep working on, for the Quake ports and for the other applications we have tried. The issues that we know about are that there are some slow paths, for example when we do transfer operations, between buffers, between images, etc. We know that in some cases we are not going for the faster paths, and we know that the TFU, the texture formatting unit, could be used more. We did some work on that, but there is still room for improvement. So, some implementation challenges that we found during this year. One of the things that you usually expect from a driver is that everything is executed on the GPU. 
But for our case that is not possible in some cases, so in those cases we need to use the CPU. That means that it makes the implementation more complex, there are some less efficient implementations, and it also means that you need to do some extra coordination. In the case of the coordination, it means doing some flushes and some waits. There is also an issue about linear images on the Raspberry Pi, because the V3D hardware cannot sample from linear images. That means that for now we don't support sampling from the swapchain images. The thing is that it would be possible to make that work when we are running inside a compositor, but we think that it would be complex to support, because first of all, from the point of view of a developer using our driver, it would be strange that this feature is not available in full screen but is available in windowed mode. So for now we are not doing that, we are not supporting sampling on the swapchain. We could revisit this in the future. As I mentioned at the beginning of the presentation, we are reusing the abstraction for the window system, but we found that for our case it's not optimal, because the optimal path requires a PCI GPU and PCI bus info, but on the Raspberry Pi we don't have a PCI device, so we can't use that. So right now we have a merge request with some changes proposed; the reviews and the discussion are ongoing, so we hope that we will be able to improve this soon this year. So, about the future plans. In the short term we need more real-world testing. I mentioned several times the Quake ports, and right now those are the most complex applications that we have tested the driver on. I mean, right now we have tested our driver with the CTS tests, but the CTS is the test suite from Khronos, mostly about regression checking, about verifying that specific features are working; it's really focused on the features, these are not real applications. We also tested some small demos that we have in-house and then we have these Quake ports. We also have some people in the community that tested with the PSP emulator, etc., but we need more real-world testing, especially to verify that the driver is covering all the corner cases needed. For the short to medium term, as I mentioned before, we could explore using the TFU better to improve the transfer operations. We are working on improving the window system platform support. We also have input attachments on the to-do list. Input attachments are a feature that is specific to Vulkan: basically, when you are working on an application that has several passes to create the final image, in some specific cases you can just include the output of a previous pass as an input of the current pass. In theory, for a tile architecture like the Raspberry Pi's, that would be an improvement over not using the input attachment, because you could just access the data in the tile buffer as we are rendering. But right now our implementation is not doing that, it's basically treating it as a texture, so there is room for improvement there. There are also, as I mentioned before, right now we have 1.0 plus some optional features, but obviously we could start to work on other optional features and extensions. 
Probably we should focus first on features and extensions that were made core in 1.1, but we also have some feedback from the community and we already have some issues and some bug reports about missing extensions that are needed for specific applications. Finally, for the medium term, and as I mentioned before, we also need to assess the driver performance and try to find ways to improve it, because the initial focus of the driver was getting the features done for 1.0, but obviously there are some ways to improve performance; this is related to real-world testing, we need more applications to test this. Unfortunately for Vulkan there are not a lot of benchmarks for Linux, so we will also try to find any benchmark that we could use for this performance work. On the long term we probably have 1.1 in mind; I have a question mark there because, although we have this in mind, we don't have a clear timeline for that, so initially we are only going to start to work on optional features that are already part of core 1.1. The other thing that we have on our list is to improve the code reuse with the OpenGL driver. The thing is that one of the objectives when we started to work on the Vulkan driver was not affecting the OpenGL driver. Some of the things are similar, and we were modifying them for the Vulkan driver, but we didn't want to do an early refactoring, because we didn't want to do a refactoring now and then realize that, okay, but now this Vulkan feature needs other things, and we need to rework it again and again. So now that we have the Vulkan driver done, we know everything that we need; there are some things that are really similar between the Vulkan driver and the OpenGL driver, so probably we could do some refactoring to reuse them, so we don't have the same thing duplicated. We could also probably port some features to the OpenGL driver. The thing is that for the OpenGL driver we support 3.1, and some of the features are optional for 3.1 but are mandatory for Vulkan 1.0, so there are some features that we implemented first for the Vulkan driver, like for example robustBufferAccess or sample rate shading, and it would be nice to port those features to the OpenGL driver. So, about how to contribute. Obviously we are only two people, so we welcome any contribution from the community, and we are trying to create a suitable context to enable contributions. Some people keep mentioning that the hardware specifications are not available to the general public, but for example we already have an OpenGL driver that somehow serves as the documentation for the hardware; among other things, it includes the tables for all the operations that the hardware allows. Also, in our code we have a lot of FIXMEs for things that could be improved, for example things that we think could be done in a different way to get better performance, or, for optional features, a FIXME in the place where we know that feature should be implemented, saying: if you want to do that feature, do it here. So, for example, if anyone wants to contribute, you just need to start to look for those FIXMEs and pending features, list them, and say, okay, I can work on this or I can work on other things. 
As I mentioned before, there are several optional features pending. In fact, for example, I remember a bug report about someone asking for a feature and asking, okay, if I want to contribute this, what do I need to do, and we replied, okay, if you want to do this, I think that what you should do is this, this and this. That one was somewhat complex, but I think that there are other optional features that could be more reasonable for a newcomer. And as I mentioned several times, we also need more testing and performance feedback. For example, as I mentioned before, the community got the PSP emulator working and that was really nice, and they also provided some feedback about that testing. We also had some people testing this with another emulator and they got it working. So this kind of testing with more applications, and it doesn't need to be emulators, any Vulkan applications, it would be really nice to get some testing from anyone, as well as performance feedback and also bug reports. So, for anyone that wants to contribute and provide feedback, there is a #videocore channel on Freenode IRC, and for anyone that wants to create a bug report, on the Mesa GitLab they are called issues, so you can create GitLab issues, ask questions there, or send merge requests if you were able to create some patches. I want to end this presentation with special thanks. We want to thank the Mesa community because, as I said before, we are reusing a lot of what is already in Mesa: we are reusing NIR, we are reusing the SPIR-V translator, the window system integration bits, etc. We also want to thank the existing Mesa Vulkan drivers, because even if our hardware needs things to be done differently, we got a lot of inspiration from those Vulkan drivers; that includes the Intel driver, ANV, or Freedreno's Turnip. Finally, we want to really thank Eric Anholt, who wrote and maintained the V3D OpenGL driver; he knew and knows a lot about this specific hardware, and he replied to a lot of questions and reviewed a lot of our patches, so we want to thank him. And finally we also want to thank Emmett, who was a contact from Broadcom, the company providing the hardware, because he has replied to a lot of questions related to the hardware and how it works. So that's all; now we can take questions, and thank you very much for being here.
|
Igalia has been developing a new open source Mesa driver for the Raspberry Pi 4 since December 2019. This talk will discuss the development story and current status of the driver, provide a high level overview of the major design elements, discuss some of the challenges we found in bringing specific aspects of Vulkan 1.0 to the V3D GPU platform and finally, talk about future plans and how to contribute to the on-going development effort.
|
10.5446/52500 (DOI)
|
Hi everyone, welcome to our talk. I'm Simon and I will present the first half of our talk and Christian will present the second half. We are from Apex.AI and we both work on an open source project called Eclipse iceoryx. Okay, let's step back two years. We had a plan to open source iceoryx and back then some colleagues said: what? You want to open source safety software? Random people will contribute? Yeah, some even said safety software needs to be closed source, and back then we just thought, okay, challenge accepted. First off, I want to introduce you to the agenda: give a short motivation, then a further introduction to the topic and motivate you with an example why formal processes can be beneficial, give you an overview over typical automotive software development, present the goals that we had in mind when adapting the workflow for Eclipse iceoryx, talk about tools, what tools we are using, and then present in detail the workflow, for example when it comes to community contributions. At the end, we will conclude with some lessons learned and give you a brief outlook. Okay, what drives us? What is important to us? Well, we want to tear things apart and we want to look inside the black box, then we can start building trust, and at Apex.AI we want to build software that does not fail, so open source is the perfect choice. Recently, I've read this very interesting quote from a board member of Bosch, named Harald Kröger; when he talked about automotive software development, he said the following: I don't think it's sensible that everyone works alone on this challenge, and our answer is use more force. Okay, we work at Apex.AI. Apex.AI's main product is Apex.OS, which is a fork of the Robot Operating System, ROS 2; it's currently being certified and you can think of it as the Red Hat Enterprise Linux for ROS 2. Christian and I work as developers on Eclipse iceoryx, which is a zero-copy middleware for safety applications; it's currently being integrated into Apex.OS but can also be used standalone. The main benefit of iceoryx is the latency, and the runtime is independent of the transmitted message size. If you want to learn more about iceoryx, I can recommend you to watch our last year's talk where we introduced iceoryx in detail. Okay, so let's jump right into an example: why are formal processes beneficial? Here's an example: we have a bool called myBool and it's not being initialized, and depending on the value of the bool we then print out true or false. I will give you some seconds to think about what the possible outputs could be. All right, here's the answer: the compiler can say, hey, I'm initializing it with true, so then the output could be true, or the other way around and the output will be false. But hey, myBool is in an undefined state, so the compiler can also say this is undefined behavior, so it can say I'm outputting both true and false; but the compiler can also say, hey, this is undefined behavior, I'm optimizing this away, and then there's no output at all. So all in all, undefined behavior means anything can happen. And in order to prevent mistakes that can happen on a daily basis like this, formal processes can be very beneficial. Okay, next up I want to introduce you to the V-model. This is the development model most likely used in the automotive industry, and Automotive SPICE is an interpretation of the V-model; it stands for Automotive Software Process Improvement and Capability Determination. 
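Picking up the uninitialized bool example from a moment ago, a minimal sketch of it in C++ could look like this (the variable name is chosen here purely for illustration):

```cpp
#include <iostream>

int main() {
    bool myBool; // deliberately left uninitialized: reading it is undefined behavior

    // Anything may happen here: "true", "false", both branches in some builds,
    // or the compiler may optimize the read away entirely.
    if (myBool) {
        std::cout << "true" << std::endl;
    } else {
        std::cout << "false" << std::endl;
    }
    return 0;
}
```

Because reading an indeterminate value is undefined behavior, the compiler is free to produce any of the four outcomes described above, which is exactly why coding rules and static analysis forbid this pattern.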
And I will start here on the top left corner, you typically start with your software requirement analysis. So for example, what should your software do in case of a middleware, for example, two executables should exchange data. Then you break down these requirements into a software architectural design, and then the different models that come from this design, you typically implement them in code, for example C++, and then additionally create software detail design. When you've finished your implementation, you start with the verification of what you have done. But typically this goes hand in hand with the development. So first off here on the bottom, you verify your unit construction or your code, it is called the software unit verification. So what is in there is the review, this can be a static code analysis, and also unit tests. If you then take these modules that you have created, integrate them together, and you test them together against the software architectural design. And at the very end, you do a software qualification test and validate your requirements against the requirements that you have written. For example, in case of a middleware, you would test if two executables now can exchange data between each of them. Okay, on the next slides, I want to introduce you to the ISO 26262 standard. It's a functional safety standard for road vehicles. So what is in there? What does it contain? First off, it contains a formal process like the one you've seen on the last slide. It also contains a formal definition for errors that might happen during a nomination operation of a car. A formal definition of risk with relation to the possible errors of risk assessment and denitigation is also included. And it also enforces an independent safety assessment. A term you will find very often when you read through the ISO standard is the term ASIL. It stands for automotive safety integrity level and comes in five different levels. QM being lowest and ASILD being highest. QM stands for quality management and for example, an item, QM item could be an entertainment device. Whereas on the other side, ASILD could be, for example, automated driving. Okay, if we dig further into the ISO standard, we can also see that safety is defined as an absence of unreasonable risk. And they use this formula here or define risk as a function of frequency of occurrence, controllability and severity. And the frequency has two parts. So it has the part exposure, how often is the car in a situation where such a hazard could occur? And as well on the other side, a failure rate. So the probability of a system to fail. And this is not considered an a risk assessment, but ASIL is used instead. Let me give you a short example here. A braking system, exposure. So when you're driving with a car, it is very likely that you will break. Let's jump to controllability. If the brakes start to fail, it will be very difficult to control your car. And now regarding severity, if the brake stops working, fatal injuries are very likely. Okay. Now we had a look at the processes. Now I want to talk about the goals that we had in mind when we created the workflow for ISORIX, considering, of course, the V model and the ISO standard. First of all, we want to make developers happy and be also transparent to the community. We want to be helpful to newbies and in general, just encourage knowledge sharing and make life easy for external contributors. 
We also have the idea to work as much as possible in the open, like any other open source project out there. Revulable requests in the open, do the planning, the discussions, every thing should be transparent. On the other side, we also wanted to shape the workflow after established guidelines. For example, do one use the Bosch AI. Okay. Now, how are you going to do, what kind of tools will you use when implementing such a V model? If you look at your typical open source project, what you typically will get is you get the code, you will get some basic unit tests with it, and most often also some design documents on how and why things got implemented. That's a great start for sure, but you can always use more tools in order to improve the quality of your code. And on the next slide, I want to show you what kind of tools we are using for ISORIX or what kind of tools or what kind of tools actually we are planning to use for ISORIX. Okay, I will walk you through this slide. Here on the top left corner, you can see that we are planning to use a tool called JAMA for the requirements. And when it comes to, or for the design, for the code, we are not planning to use any additional tools. And when it comes to unit tests, we are planning to use the tool called VectorCast. This is also certified. And it's a commercial tool. I know there's some downsides to commercial tools, because some people cannot buy these licenses or cannot afford it. So this is why we planned for the static code analysis. We are planning this together with Perforce Helix QAC to bring the scan results out into the open with the continuous integration, for example, in order to not make people buy a specific license when wanting to contribute. What we're planning for integration tests further up the V is the usage of a tool called Frida. We also want to use a tool developed at EPEX AI. It is a performance test tool and also want to adapt the tracing tool for IZL-RICS. It is called LTT and G. But what is very important is the code base alone does not qualify for the usage in a SACI system. So what you need in addition is a safety manual. And the safety manual guides you through the process. Well, it talks about things like what hardware do you have to use, what compiler version, for example. Okay, and another thing to note here is not all of these artifacts will be publicly available. So for example, the safety manual or extender tests will not be publicly available for commercial reasons. Okay, great. We're done with the first part of the talk. Now, Christian will talk about the bottom of the V, the code and the unit tests and talk about how to do things in practice. Thank you Simon for the introduction. Now we take a closer look at the tools and processes at our hands. The first thing we have to realize is that the tools and processes are not enough. Because not every programmer knows all the rules and we need some kind of safety net in place. This is our static code analysis tool. It will catch most of the issues you can find in the code, but not everything. And with every feature we implement, we also introduce a quite excessive test. But here also the same rule applies. Some kind of bug or corner case slips through your mind, you do not test it and then you introduce a bug with a new feature. And now we had the idea, we need some kind of programming paradigms to reduce these bugs and errors even further. One of the paradigms is enter and then or else paradigm, which I will introduce in the next upcoming slide. 
The next thing we did was, for instance, implement some STL constructs ourselves, like vector, list, expected, optional and so on. For instance, in a safety-critical system we have to avoid memory fragmentation, because you have to guarantee to the user that if memory is available, he can acquire it. Additionally, we do not want to use exceptions in our safety-critical system. And we definitely want to avoid undefined behavior, because with undefined behavior anything can happen, and this is something you do not want in a safety-critical system. The next thing we also enforce is boundary checks. The STL has a paradigm which says you do not pay for what you do not use. Here, for instance, we say safety first and then performance; therefore we enforce boundary checks to reduce the amount of errors even further. We also implemented some extensive test strategies which go beyond the functional safety standard. We follow MC/DC patterns, for instance, but we go even further to really catch most of the issues we can think of in the code. But let's take a look at a code example. Let's say you want to implement a function where you show the travel distance in your head-up display. The first thing you have to do is call receivePosition. receivePosition will return an optional value, and this means that it's all right to not receive a position; you want to communicate this to the developer, therefore we decided to use an optional here. But the optional has a downside: you have to verify that currentPosition contains a value, that you really actually received a position, before you can access it. And now the question is, what happens if no position was received? In this line, for instance, we want to calculate the travel distance with the start position and we dereference currentPosition. But if it does not contain any value, you access valid memory, but with arbitrary content, and therefore you have an object in an undefined state. And when you put it into the distance function, you have undefined behavior, and this is something we definitely want to avoid in a safety-critical system. The question is, how can we avoid this? Our answer was the and_then / or_else paradigm. In our internal C++14 implementation we added two additional methods called and_then and or_else. Both are provided with a lambda, which is called in the and_then case when the object contains a value, or in the or_else case if the object does not contain any value, and in the and_then case you can then access the underlying value directly. Let's take a look at how it looks in practice. Here again we would call receivePosition and then directly concatenate it with the and_then call. Here we provide our lambda where we calculate the travel distance, and we get direct access to the underlying value called position, so we can use it in the travel distance calculation. And in the other case, when receivePosition does not receive any position update, we just print that no position update was received and then we can do some kind of error handling. In a side-by-side comparison it looks like this: on the left side you see the discussed algorithm, and on the right side you see the classical approach where you have to verify first that currentPosition contains any kind of value, and then you can calculate the travel distance, or in the else case you say no position update was received.
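The following is a minimal, self-contained sketch of the pattern just described. The optional type, its and_then/or_else signatures and the helper functions are illustrative stand-ins written for this example; they do not claim to match the actual iceoryx C++14 implementation.

    #include <functional>
    #include <iostream>

    struct Position { double x; double y; };

    // Illustrative optional with and_then / or_else, in the spirit of the talk.
    template <typename T>
    class Optional {
      public:
        Optional() : m_hasValue(false) {}
        Optional(const T& value) : m_hasValue(true), m_value(value) {}

        // Call the lambda with the contained value only if one is present.
        Optional& and_then(const std::function<void(T&)>& callable) {
            if (m_hasValue) { callable(m_value); }
            return *this;
        }

        // Call the lambda only if no value is present.
        Optional& or_else(const std::function<void()>& callable) {
            if (!m_hasValue) { callable(); }
            return *this;
        }

      private:
        bool m_hasValue;
        T m_value{};
    };

    // Hypothetical stand-ins for the functions on the slides.
    Optional<Position> receivePosition() { return Optional<Position>{}; } // no update received
    double calculateTravelDistance(const Position&, const Position&) { return 0.0; }

    int main() {
        Position start{0.0, 0.0};
        receivePosition()
            .and_then([&](Position& current) {
                std::cout << "travel distance: " << calculateTravelDistance(start, current) << '\n';
            })
            .or_else([] {
                std::cout << "no position update received\n"; // error handling goes here
            });
    }

The important property is that the only place the position can be touched is inside the and_then lambda, so the dereference-without-check mistake from the classical approach cannot be written.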
But there you always have the case that you can access currentPosition, dereference it and access the underlying value, even though there is no underlying value present, and then you will always run into the undefined behavior case. When you use the and_then / or_else path, you avoid this. So now you may think contributing to such a safety-critical project might be a challenge, because you do not know all the rules. But iceoryx is part of the Eclipse Foundation. We have some rules and some workflows in the Eclipse handbook, but we also have eight committers; the majority is from Apex.AI and some are from Robert Bosch. They will guide you through your pull request. They will review it, will pay attention that you do not use the heap, do not throw exceptions and so on, and follow all these rules. And when they have guided you through this, they approve your pull request, and if you have two approvals, you can merge it into the repository. Sometimes this is a little bit more restrictive than a common open source project; the reason is that we are working on a safety-critical system. In practice it looks like this: you create a pull request, and then you have a pull request review checklist like you see on the right side. There you can verify that every commit message, for instance, follows our rules, and that all the other rules are applied, and you can check them off. This template checklist is also automatically added whenever you create a pull request. Additionally, when your pull request is created, our static code analysis tools run on it and verify against Adaptive AUTOSAR guidelines, MISRA and so on, that you follow all these rules. Additionally, our continuous integration in GitHub runs, and there we verify that macOS, Windows, Linux and so on are supported. Our clang sanitizers are running too, so that we try to find memory leaks and so on. And also, we treat every C++ warning as an error, and therefore it's very hard for you to introduce errors. So enough safety nets are in place and you can safely introduce a new feature or your bug fix into the iceoryx repository. And with every new feature you introduce, you also have to introduce some kind of tests. Let's take a look at this. Let's say, for instance, you have implemented the calculate algorithm here. It's very simplistic: you subtract a from result if both numbers, which are provided as arguments here, are greater than zero, and then you do something with the result and return it. The first thing you are aiming for is 100% line coverage, and this you can achieve if you call calculate with two positive numbers, two and two for instance in this case. But this is not enough for a safety-critical system. We are, for instance, also aiming for full branch coverage, and it's possible, for instance, if a and b are not greater than zero, that we do not go through the if statement, and therefore we have to test for this, and we have to call it, for instance, with minus two and two. But in a safety-critical system this is still not enough. We are aiming for MC/DC coverage, which means modified condition/decision coverage, and this means that every condition must be evaluated twice, once as true and once as false. This means we have to call this function once with a greater than zero and once with a less than zero, and the same goes for b. Therefore we have to call it with two additional parameter combinations, as sketched below.
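Here is a sketch of the calculate example and the coverage-driven calls, reconstructed from the description in the talk; the exact constants and arithmetic are illustrative and not the code shown on the slide.

    // Reconstructed for illustration: subtract a from result when both
    // arguments are positive, then use result in the returned expression.
    double calculate(double a, double b) {
        double result = 1.0;
        if (a > 0.0 && b > 0.0) {
            result -= a;            // corner case: a == 1.0 drives result to zero
        }
        return a + b / result;      // result == 0.0 leads to a division by zero
    }

    int main() {
        calculate(2.0, 2.0);        // 100 % line coverage: both arguments positive
        calculate(-2.0, 2.0);       // branch coverage: the if statement is not taken
        calculate(-2.0, -2.0);      // MC/DC: a evaluated as false ...
        calculate(2.0, -2.0);       // ... and b evaluated as false while a is true
        // Cases that the coverage metrics alone do not force you to write:
        // calculate(1.0, 2.0);                      // corner case: division by zero
        // calculate(1.7976931348623157e308, 1.0);   // limit: floating point overflow
        return 0;
    }

Even with all four coverage-driven calls passing, the two commented-out cases still misbehave, which is exactly the point made next.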
We call calculate(-2, -2) and calculate(2, -2). And now you could say, okay, I've tested this function quite extensively, now everything is fine and stable. But it's not: here you see that all these processes are limited. We still have a bug in there. It's some kind of corner case. It's possible that a is greater than zero, for instance when it's one, and then in this line in the if statement we subtract it from result, and we end up with a value of zero. And later, in the return statement, we divide by result, which is zero. This leads, not to undefined behavior, but to a critical state, and we have to deal with this. To help our developers deal with such situations, we have a guideline to always think, when you write tests, of these cases: zero, one, many, corner cases and limits. Zero-one-many is for instance interesting when you're dealing with containers: you want to test the container when it's empty, when it contains one value and when it contains several values. The corner case, in this example, would be the call where result ends up as zero; then we have a division by zero. But we did not test the limits here, and there hides another bug: what happens if a is the maximum allowed number, the largest number possible as a floating point? Then this again leads to a problem in the return statement, because we take the largest number possible and add something to it, and then we have a floating point overflow, and we do not know how to react here. So here we showed you how, even with all the tools and ideas and processes we have in place, we can still encounter bugs and still not have tested everything. So okay, what did we learn here? The first thing is that open source is quite a good idea to go for in a safety-critical system. It reduces costs, because you have a lot of contributors, you have a lot of users of your product or of your software, and they have quite nice ideas for new features, they find bugs, and this reduces your costs and increases the code quality massively. It also shows some kind of transparency to the community: what they are dealing with, what will be in their future car, what kind of software will be running there. Additionally, it is quite feasible to develop safety software in the open. Yes, there are challenges we have to face, but it's not so challenging that it's infeasible to develop safe open source software. And we also encountered the case that some vendors are very supportive and are offering their commercial tools for free, to advertise them in the iceoryx project. What we also learned, in the testing case and also with the previous example: certification does not mean that it's safe. It's a necessary step we have to undertake, and we have things like MC/DC testing and so on, and these are the best practices in the industry, but they are not enough. We need more: we need skilled programmers, we need some kind of programming paradigms, and we need really good ideas on how to test code, and this needs experience and experienced programmers. And when you contribute to the iceoryx project, these experienced programmers and contributors will guide you through the pull request. What's coming next for iceoryx?
We are planning our 1.0 release in the second quarter of 2021, where we introduce end-to-end communication, and we have a new functional C++ API; as you have seen in the example, functional interfaces can make the API safer and less error prone, and this is what we are going for. Additionally, we have a C API, which can be very interesting for everyone who wants to implement, for instance, a Python binding or another language binding. And we have macOS support. Additionally, iceoryx is no longer restricted to inter-process communication on one machine: you can also run iceoryx instances on different computers and connect them via Eclipse Cyclone DDS. And in 2022 we want to go for the ISO 26262 certification and release it. So that's it. Are there any questions? Okay. So thank you very much to Simon and Christian for that very interesting talk. So we have a question on whether the STL constructs are open source. Would you like to address that, Simon? Hi, everyone. Sure. It was partly answered in the chat already. So yes, they are all open source currently, and we deliver them as an extra CMake package, so you can just grab it and, for example, use it in your open source project. To discuss it briefly: there are several constructs in there that are back-ported, so to speak. Everything is C++14, and things from C++17 or even C++20 are integrated backwards, sort of reimplemented in C++14. This is things like a variant, a type-safe union, for example, or, as Christian mentioned, the optional. Also this error handling concept, where Rust inspired us, which is called expected, similar to the proposed std::expected. And also several lock-free algorithms, for example a multi-producer, multi-consumer queue, which is both thread-safe and lock-free. This is, I would say, quite cool, and definitely have a look if you're interested in this. Okay. One other thing that occurred to me: most of your examples in the slides are related to automotive applications. Are there any people you know of using your techniques outside of automotive? Pretty sure. So as I said before, the iceoryx utils, so our STL constructs, these are inspired by Rust when it comes to expected. Also, we recently added a Haskell-inspired pattern which is called newtype. So we're definitely looking to make C++ neater and to improve C++, and in general just trying to build software or constructs where the user can hardly make mistakes. You cannot always avoid it, but this is the goal: to just write good and safe quality software. Okay. Excellent. Somebody just asked the question: do you use real-time scheduling in your system, does this fit? Yeah, sure, it does. Also, we are supporting QNX besides Linux and currently macOS, so it's definitely supposed to work in a real-time system. But what we currently
|
At FOSDEM 2020 we introduced Eclipse iceoryx, a true zero-copy middleware for safety-critical applications like automated driving. At FOSDEM 2021 we will give an overview of what needs to be considered when writing safety software in the open, share our experience regarding the development workflow and present the progress of the Eclipse iceoryx certification. Developing software in the automotive industry can be tedious. Old compilers, outdated toolchains, resource-constrained hardware. “Only use something which has been proven in use” most safety engineers would argue. Well, hardly anyone would object; no one wants to jeopardise people's lives when bringing a car on the road. The question we asked ourselves quite often in the last year: How can one combine the momentum and the freedom of an open source project while not compromising on quality and safety? Apex.AI has extensive knowledge of the design and implementation of safety-critical applications written in modern C++ and is focused on certifying the Robot Operating System (ROS 2) according to the international standard for functional safety, ISO 26262. We will present an overview of the typical automotive software development process and discuss our modifications in the development workflow that we created for Eclipse iceoryx. Furthermore, we will share the key architectural design decisions, give examples of safe vs. unsafe code and conclude with a brief insight into the lessons learned.
|
10.5446/52501 (DOI)
|
[The opening minutes of this talk are unintelligible in the recording.] I've done my dues, I've built enough web apps and services. Luckily not doing that anymore. I now do contracting around lots of niche and interesting and fun things, around things like real-time systems, real-time programming, media stuff, a lot of stuff with GStreamer and interaction with the JVM. Alongside all of that, I've occasionally had some time to make some interesting and fun things as an artist. Again, most of them are using Java: some fun interactive things outdoors at a larger scale, and some embedded little things for museums at a slightly smaller scale. Occasionally I go into nightclubs and make breakbeat music by writing Java lambdas, because why wouldn't you? Talking about liveness in programming: there's a thing called live coding, which you may or may not have heard of. People go into clubs, like I say, and do visuals, audio. There are people coding Lisp to make visuals; why would you not? There are tools like Sonic Pi by Sam Aaron, a Ruby-based DSL for doing live programming, specifically music. What interests me particularly is these two worlds of live coding and interactive programming. If you look at the Wikipedia definitions of both, you'll see that they go, well, this is not this, and this is not this, and yet every example is the same. What interests me is how we make general-purpose live programming environments. One of the environments that comes out of the live coding scene and is becoming more generic is called Extempore, by Andrew Sorensen, which includes a language called xtlang, a Scheme dialect for coding natively through LLVM, not garbage collected, with functions working on memory regions, things like that, designed for low-latency systems programming. Then, if we're talking about liveness and reasons for liveness, it's about exploring code bases. You've got things like Pharo, which is a contemporary take on Smalltalk. Obviously Java was partly influenced by Smalltalk; maybe we haven't taken everything we could from it yet. So it's interesting to look at evolving tools, which is how hopefully this talk fits into this room. Also, you've got systems like Erlang that allow you to hot-swap code while it's running. These two things, in my mind, are very interrelated. Initially, in that artist hobby obsession part of life, I was writing a live hybrid visual IDE called PraxisLIVE. At the heart of it is a system called PraxisCORE, which is basically a live, re-codeable actor system.
I say that was in that one, and this is starting now to bring the two sides of my life together, which is the fun bits and the actually making the living bits. I'm mainly going to look at Practice Core. Forrest of Actors runtime will look a little bit at that, what that means. Design for hot code-relating in real-time systems, or real-time in Java, but you know, soft real-time systems, and explore how its architect just supports this idea of live-ness. There are lots of ways we might want live-ness, to explore code we're working with, to explore the behaviour of a system we're working with, to understand the changes that we make to the system, and to immediately see their reactions, say if you're working with something that has a physical reaction, physical computing, to change code and see literally without very little latency what the changes have. To be able to code the flow of thought, not waiting two minutes for your thing to reload, metaprogramming, that way the code is out, or hot-patching, and because it's fun. So quick demo of some of the features and to show you what I'm talking about. So we're going to access live here. Hopefully working. Yeah, there you are. Just live video, you know, proof it's live. Hello world. So we'll ignore that tab for now. So we have what is probably quite familiar to you as a node-based graph, you've seen node-based programming systems before, I'm sure. We have a range of components here that we can change properties of. Exactly, we're sending, we've got a camera in there, we're sending some text to it, hello world. What is different in this system? So each of these represents one of the actors in the underlying practice score system. Is that we can right click, edit code. Let's zoom in a little bit. So this is wrapping a graphics library in this particular example called processing. Very simple API for doing drawing. So I can just write some more code in here. So I've typed some code. As soon as I hit save, you'll see it flash. And it's immediately injected into the system. So another thing I can do is... So we can make something that interacts with something. You'll see up here I've got some annotated fields over on the right top right hand side, the various properties. So this actor system has specific types of messaging or conventions basically for messaging. So we have a properties is one convention. They're also what you know typical function and triggering actions so that you'd be more familiar with in their typical actor system. So how do I add property here? We just annotate something. We can give it a range here and we'll call it size. And then down here I can do this. So immediately I have this ability to interact with this. Bearing in mind one thing we'll get onto. This is system, the IDE and the runtime are in separate processes. So this could be running on a separate machine entirely. So we're flinging things automatically across a network connection. So we've done that. The other thing we can do is... I'll change that into a property type. I need to get out of here. And I'll do it quick. So we'll call this blip. We'll make a little action. And we'll take size and we'll do 0 and n. OK. So I've added a little method into that. It's just going to animate this size. We've now seen that we've got a button has appeared. So I'll just do this and interact with it. 
One of the interesting things about this system, and I've realised I've definitely got more than 20 minute talk here, is that we don't just treat all the parameters and fields in here as a property. One of the properties of every actor is its own code. So we can send strings of code to an actor and it will in turn look up and bring in and compile and change its own behaviour and configuration. So I'm just going to add one more. So three dimensional things flying around in here. So what's different in the forest of actors approach as opposed to a standard actor system is each graph here is actually single threaded and lock free. So we can do real time processing through chain of actors that are in other ways encapsulated but everything happens synchronously. But then we can have as many graphs as we want on as many machines as we want more locally. So here possibly running if you're doing low latency stuff, one particular graph in a separate VM to control its garbage collection, pass things around. So here I've got a file listener. I'll come back to this, this idea of ref in a minute, but we've got some code that's wrapping a watch service. And I can just find the file here. So and you'll see that it's appeared down here. If I find that component. So you'll see the property is exposed. Now if I find that code itself and just open it in a standard text editor. So we can immediately change things based on whatever. We can have actors that change other actors and change them all running at runtime. Right. I'll go back to the slides for it. So we have a forest of actors architecture. Have a pipeline of actors looks like this. These are all communicating within a single thread context. But then we have multiple graphs that exist. And in a way take the scheduling part of the actor model and do that and the encapsulation part down into the tree. Now I have a range of other acts. I didn't actually show you this, but in the hook there's seven or eight other actors that are part of the system. So there's service registration. So an actor receives some code and say, right, I want something that will turn this into class. So the compiler background loading. And then we can split that however we want. So the actors receive their code as a string. Got code now what? So the key things this is doing is splitting typical Java object is behavior state and identity in one thing. And we split that into three. So a code component is the class that exists as the actor. It's the identity that lasts the entirety of the life of the actor itself. Code context is something that knows the state of the code you had and the state of the code you want and knows how to transform one into the other. So we're using basically a form of dependency injection to move state across. And code delegate is something that the user is writing their code and extending. There's a little code connector which is used different, which knows how to connect all the annotations together and build that recipe for you. So a chain of messages started. The actor says, right, I've got some new code. I need the context for this code. We look at the context. That may not have the byte code for that string already. So that looks up a second service, which is the compiler. The reason for splitting those two things up is that instantiation needs to happen on the same VM. Compilation doesn't. So very often you can send it off. So I wasn't extending a particular class here. So all the strings you send are actually a class of body. 
So the wrapping of that, I guess a bit like JShell does in terms of imports and certain configuration around it to compile it into a unit that's done for you. And all the way through this process, nothing hits the file system, so it's entirely in memory. So we're not using HotSwap or JShell. Both useful tools in the area. But it's a really simple way of getting code into a Java application. And that's just to use a class loader for every code iteration. So one of the benefits of doing it like this is we can ensure that the user's code is replicable. So it seems incremental. You see the code flash as I change it. You're developing something incremental in the ID. But we're trying for something that's transactional. It knows what you had, where you're going. So this idea, remove code should never have happened. So now you've got just happened to history. If you have a replable history, you just run the entire thing to ensure you get to the same state, potentially. For this idea, so if we had a piece of code like this, which is doing some filtering, which I didn't get around to showing you, but we take an input string and we map it to something else, whatever and send it out. We have this code, suffix as property, and then we do this. The only logical thing in an application for us to do that is that that connection doesn't exist and never did. So we can't keep the previous connection in place. We don't want the suffix to change. That's something we can, it's a state we can keep. So the environment tries very hard to look at what needs to be reset and what doesn't to be logical. And there's a lot of built-in support. Rather than thinking about the hotswap side of things, this is using an API to do the kind of transaction of change, to make it feel as natural as possible to use. And then of course, you need to interact with other Java code events and some things maybe, not maps. So, yeah, there's a ref type as a fallback. So here you define basically how you initialise it. So you supply, say, a constructor. And so, yeah, this is quite a complex API that hooks into all these things if you need to define something custom. But here we've just got lift ref init, pass it an error less constructor. Next time that gets injected in, that constructor will not be called. But if you haven't defined an initialisation and you try and apply to it to do something with it, it will still give you an error message. So you at least have to explicitly tell it what you think it should do. And also has this wonderful little bit of generic switch. It's a little bit confusing, but actually allows us to do a quite simple thing, which is to actually add and remove listeners automatically. And then we have generic data types, three minutes. I guess you're not going to hear me actually do some music on this one. So we'll get that to that. So we can pass generic data through a pipeline as well. So this is a fairly recent addition. At the end of that data poll, we define a sync and we say what should happen when you create it, when you need to create a new object, when it needs to be cleared, when it needs to accumulate. So you've got two nodes going into one. So you define that all at the sync point, so you can then pass through arbitrary Java types all the way through the active graph in one thread. What's interesting about that then is that we can do functions as well. 
So there's a way of sharing and even accumulating functions to apply to a big data set by literally drawing the lines between apply this and then this and run and it will just accumulate that down. So that was meant to be demo time too, but I guess this talk is more than 20 minutes long. So please do, if you look on there, there are various videos around of me doing various versions of other things with it, including the going into nightclubs and coding DSP lambdas to make breakbeat music. But I want to do something that was a little bit more about what's going on under the hood. So I hope you found that interesting. The project practicelive.org, there's a blog post that kind of covers some of this, just in time programming. Everything, open source on GitHub. That's me. If you're interested in conversations further, talking further, I'd love to talk to you or grab a business card and email me or whatever. Thank you very much. Any questions? Is the photograph of the forest, is that gooey older than you also? So you could actually create forests from another control? So the question was, do we have to use the gooey to control the graph or to create graph? No. One thing I realised I didn't have time to show you was actually terminal interaction to create. So practicecore and practicelive. Practicelive is the IDE based on the netbin's IDE. Practicecore is a completely separate application, which is controlled in different ways. So the command line is one where you could build it up or you can absolutely send messages. So all the add this component, whatever, build the graph connect, is all done again by message passing to the actors. So parent actor, you say add this type, connect these, connect this, it's all done. So you can control that through anything. So there is a command line interface to do that, but you can expose that in whatever way you want. So you mean, so you could, okay, the question was can we translate back and forth between CLI things and the gooey? At the moment, the way the IDE is written, it doesn't maintain a knowledge of things it hasn't created. But that's definitely something that I'm currently working on. There's no particular reason it has to do that, though we can sync back and forth. So it will notice if you deleted an actor on the command line or something else deleted it, it would disappear. But if the IDE hasn't created it, it doesn't sync to it, so it doesn't. What are you using as a communication mechanism between the IDE and the JVM or between JVM? Okay, the question was what the underlying communication mechanism. So internally in a particular JVM you have multiple plurals and different threads, that's a concurrent queue and whatever. At the moment going between, so there's only one class that handles all the messages going through an L. And at the moment I'm just using open sound control, which is a very simple binary mechanism. But nothing else in the environment needs to know, everything comes through basically a proxy actor in one system to a proxy actor in the other. So to replace that, it's replaced one class. So I have been looking at possibly something around GEROMQ or anything that can... Potentially there could be multiple implementations. The same as the actor system, it could be designed such that it can run on top of another existing distributed system. So it's much possible nothing in the graph knows how a message is. It just says, I'm sending a message, you deal with it. Thank you. I think we need the time to take the equipment. Yeah. 
Thank you.
|
TerosHDL is an open source project focused on the development and integration of EDA tools in an IDE. It is currently based on VSCode and Atom. The goal of TerosHDL is bringing all the facilities of software code tools to HDL development: linter, code completion, simulator management, automated documentation, snippets… We will introduce TerosHDL for VSCode with its multiple features. In the new release the architecture has been completely rebuilt, reducing some dependencies and clarifying the code. Some of the new features are: - Verilog/SystemVerilog support. - Linter. - Dependencies viewer. - State machine viewer. - More beautiful documentation.
|
10.5446/52502 (DOI)
|
Yo, and welcome to my presentation about Secure Boot user space tooling. We'll talk a little bit about what the current state is, what I've been trying to do to improve it, and why Secure Boot is difficult. So my name is Morten Linderud, I go by the nickname of Foxboron. I work as a security engineer at Defendable, and I've been a free and open source software developer since around 2013. Since 2016 I've been contributing to the Arch Linux distribution, where I've been doing security advisory work, tracking security vulnerabilities. I also do reproducible builds, figuring out how we can do bit-for-bit identical builds of packages, and I also do a lot of packaging in the Go space and the container space. I also care a lot about supply chain security, and that's where my main interest in Secure Boot stems from. So Secure Boot is terrible, but it's a shame, because Secure Boot is terrible for the wrong reasons, and a lot of this stems from the complexity of the current tooling. If you want to figure out how to encrypt a disk, you'd figure out that, oh, I just format it and then open it. And that's easy: if you do a new installation, it's trivial to remember those commands, it's sort of self-explanatory, and it's not really hard. If you want to figure out how to enroll your own Secure Boot keys, you end up with the Gentoo wiki, Rod Smith's Controlling Secure Boot guide, or the Arch Wiki Secure Boot page. And all of these are fine, but they're extremely long articles, they contain a lot of implementation details, and it's easy to make mistakes. So if you were to set up Secure Boot, you'd need three keys: a platform key (PK), which is sort of the main thing that controls the platform; a key exchange key (KEK), which authorizes the key that can sign your EFI executables; and you also want a database key (db), which is the key that does the signing itself. And to do this, you make three keys with OpenSSL. You also make a DER copy, which, while making this presentation, I couldn't remember why you do it, but apparently you need the DER version as well, and people copy-paste all of these commands. All of these commands are virtually the same across the Gentoo and Arch Wiki pages, because they're all copied from the Rod Smith guide; you can see this because there's a 3,650-day expiry on the keys, and usually they all have RSA 2048-bit strength. Then you create a GUID, which should be sort of random. You take the PEM certificates and you make an EFI signature list with them, and then you sign the ESLs: the PK needs to sign itself, the PK needs to sign the KEK signature list, and the KEK needs to sign the database signature list. And already at this point you might be asking, what's an EFI signature list? Well, it's explained in the documentation or the man pages, but you shouldn't really need to know how this works. And then you need to enroll the keys, which is usually done with efi-updatevar. You can also use the KeyTool EFI executable, and some motherboards also support it in their BIOS setup utilities. And the next part is to sign a kernel, and that's sort of where some people get tripped up, because efitools does not support kernel signing. You need something that implements PE/COFF executable signing; you need another set of tools to do the signing.
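For reference, the copy-pasted sequence described above looks roughly like this. This is a sketch of the commonly documented OpenSSL plus efitools steps, shown only for the PK; the KEK and db keys repeat the same pattern, with the KEK signature list signed by the PK and the db signature list signed by the KEK.

    openssl req -newkey rsa:2048 -nodes -keyout PK.key \
        -new -x509 -sha256 -days 3650 -subj "/CN=My Platform Key/" -out PK.crt
    openssl x509 -outform DER -in PK.crt -out PK.cer       # the DER copy
    GUID=$(uuidgen --random)
    cert-to-efi-sig-list -g "$GUID" PK.crt PK.esl          # wrap the cert in an EFI signature list
    sign-efi-sig-list -k PK.key -c PK.crt PK PK.esl PK.auth   # the PK signs itself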
So a lot of people use sbsigntools, which is maintained by Canonical, and that allows you to sign a kernel, and you reboot and hopefully it works. So that was 17 commands and two packages to set up Secure Boot, which is a lot; I think we can do a lot better. And there's no really good reason why it is like this: sbsigntools sort of implements most of what efitools does, frankly, in a better way. It also does signing, because it implements a PE/COFF library, which is used to sign the EFI executables. And it also actually supports an undocumented key enrollment tool, which is called sbkeysync; it has no man page, but it exists. And again, efitools doesn't provide any signing. There's also pesign, which is maintained by Red Hat. I don't think it does all of the things efitools and sbsigntools do, but it implements yet another PE/COFF library, because in C, I guess, it's not that easy to distribute source libraries, so people either write their own or have to figure out how to distribute the shared objects or the dynamic library for this. And if you want to have a revocation list, people have been using dbxtool, but this is now deprecated and merged into the fwupd firmware update tooling. So there are a lot of separate tools that do a lot of things, and it's not easy to get a grasp of them if you're setting this up for the first time. So my conclusion has basically been that we can do better. We can do a lot better, actually. So that brings me to the demo, which I have today, which is basically a quick QEMU run. We use OVMF with the Secure Boot parameters and a kernel. So what we currently have is sbctl: we can run status, and then we see we are in setup mode, which allows us to enroll keys, but we don't have Secure Boot enabled. So we're going to quickly fix that by creating some keys. So this is create-keys; it does all of the commands that you previously did. And I'm also going to enroll the keys. So now the keys are synced. If you run status again, we'll see that we are now out of setup mode and Secure Boot is still disabled. So what we're going to do is sign our kernel, and we're also going to just save this for the future. And then we have signed a kernel. Now we can also list all of the files we have done, and this helps us keep all of the files signed. So now we have signed the kernel, we have enrolled all our keys, so we are now just going to prove to you it works. So this is QEMU again; this time we're entering the EFI shell. So: Boot Manager, EFI Internal Shell, full screen. And we'll quickly go to this. So we have the kernel we have signed. If we try to run the unsigned kernel.efi, we see that the command gives access denied, because kernel.efi is not actually signed. However, if we run the signed EFI binary and specify the root device as sda, we see that we have actually booted Linux. sbctl status, and you now see that setup mode is disabled, Secure Boot is enabled, and everything is fine and dandy. So that was a quick demonstration of how it could be that much better. So sbctl, which is secure boot control: I originally wrote EFI-roller three, four years ago, which was a huge bash script that just generates all the keys and enrolls them. It works, but bash is a little bit limiting when things get complicated fast. So sbctl does all of this; it does signing and key management.
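The demo above corresponds roughly to the following sbctl invocations. Treat this as a sketch: the exact flags and output differ between sbctl versions, and the kernel path is just an example.

    sbctl status                        # shows setup mode / secure boot state
    sbctl create-keys                   # generate PK, KEK and db keys
    sbctl enroll-keys                   # enroll them into the firmware while in setup mode
    sbctl sign -s /boot/vmlinuz-linux   # sign the kernel and remember it in the signing database
    sbctl list-files                    # list everything sbctl keeps signed
    sbctl verify                        # check that the tracked files are signed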
It does key enrollment, and it also keeps track of what to sign. You can also use this along with EFI stub generation, so you can have the initramfs and the kernel signed together, which means people are not able to modify the signed image after you're done, because that's still a threat if you only sign the kernel. There are a few missing features that sbctl does not implement. That's key rotation; it doesn't have configuration files; and preferably we'd have hardware tokens, because most of the guides just expect you to have plain keys, no passwords, no hardware tokens. And it works, but if you don't encrypt your disk, then you don't have that much of a security gain. I've had a lot of help from Érico, ericonr, who is a Void Linux packager and developer, working on sbctl. And it works, I use it daily; my computer now runs sbctl, signing kernels. And it's mostly quality-of-life features left, I think. But there's still a slight problem: even though we have written all this in Go and such, I still shell out to sbsigntools to accomplish all of these tasks, and we're not using bash anymore, we're using a systems language. So why are we still shelling out? That's sort of what tickled my fancy, to try to figure out: is there an easier way to do this? So go-uefi is sort of my continuation of the project. And that's basically me, at the start of Corona, picking up the UEFI specification and starting to read it. So it implements a lot of the structures from the UEFI specification; mostly the things relevant to Secure Boot are what I've been doing, because the end goal is to have a Go library that supports sbctl and Secure Boot and such. It implements Microsoft Authenticode, which is what's used for UEFI signing, so we can checksum the kernels, we can also sign them, and it mostly works. It also implements a subset of PKCS#7, which is needed for verifying the files and ensuring we can sign stuff. And sort of how it works is that we try to provide a top-level API we can use for PE/COFF checksumming. We can read in a certificate and a key, and we can just sign the PE/COFF executable and write it back out again. And this works; I've booted my computer on this before, so that was very fun the first time it worked. We can also do EFI variable authentication, which is used to push signed updates to the variables. So we can convert a string to a GUID, we can then read some X.509 certificate file, we can make a new signature list and define which GUID this is. Then, with a new bytes buffer, we can write the signature list and just sign the variable. This also works. So I've re-implemented most of the sbsigntools binaries in Go, just to ensure things work, and they do, which is a bit fun. So all of this is very much a work in progress; it's not done at all. I'm missing a nice top-level API, so nothing is really stable. The PKCS#7 code doesn't parse ASN.1 structures that well in some cases, so there are still edge cases where it's unable to verify signatures. That's a little bit bad; I think it should probably be fixed. I also want integration tests, so we can ensure that the API and all the stuff actually works against the UEFI specification. I've been sort of hammering away at doing this with the vmtest library from Anatol, who is another Arch Linux developer, and TianoCore EDK2, I think it's called. It's also currently Linux only, because we only really care about the efivar files for the tests.
So this is not OS X or Windows compatible, but it works. It's a nice library; it fulfills its goals, but it needs a lot of work before being stable. The end goal is to move sbctl away from shelling out to sbsigntools and use go-uefi instead. So that's sort of my short introduction to these tools, and to trying to figure out how we can do better. Both of these, all of this, is public code, it's all open source. I've also tried to do a sort of secure.dev website to try to document the relevant things I've learned while working on this, but that's a project that's really not well underway yet. If you're curious about this work or have questions, you have two of my emails here. You can also ping me as Foxboron on Freenode as well. So, Q&A I guess. I hope this was enlightening, and I hope people are interested in making Secure Boot easier for users. Thank you!
|
Utilizing secure boot should be simple. Our current tooling is badly integrated, abstractions leaking and the code bases are not reusable. Functionality is spread between several projects and not one covers all your needs. This amounts to a confusing landscape. sbctl and go-uefi is a tool, and a low-level UEFI library, that attempts to push the secure boot landscape forward. In this talk I'll do a short introduction of secure boot and the tooling people normally use. We will look at the different use cases each of them provide and missing functionality. Then I'll do a short demonstration of sbctl and go-uefi. The goal is to try provoke some ideas how we can make secure boot more accessible for users. Currently the tooling assumes some familiarity with secure boot implementation details (signature lists, PK/KEK/db keys and so on) and that shouldn't be needed to have a fairly basic secure boot setup.
|
10.5446/52503 (DOI)
|
Hi, hello, welcome to my presentation on OpenBMC, an introduction and porting guide. My name is Anupani Sonia, I have worked at Intel for the past 12 years in firmware, and before that I have a total industry experience of 15 years. Okay, today's topic: I will give a brief introduction to OpenBMC and key features in OpenBMC like systemd and Yocto, then I will make a first build from the community OpenBMC repo and demo it for you, and then I will also make a customized build for a new platform, a clone based on the Intel Wolf Pass, built from the Intel BMC repo, and then I will show you how properties of the platform are modified in entity-manager, so it's like a demo of porting to a new platform. Okay, going to the introduction: OpenBMC is a Linux Foundation project, it is open source, and its target is to produce a customizable, open-source firmware stack for baseboard management controllers. Most people know about BMCs; the description I have given here you can read, and there is already one talk available from last year, FOSDEM 2020, which you can refer to as well. These are some of the key features of OpenBMC; I am just highlighting here how it is different from a traditional BMC firmware. It uses a set of modern technologies: for the Linux distribution it uses Yocto, the language choices are modern C++, Python and JSON, the IPC mechanism is systemd and D-Bus, and it natively supports protocols like Redfish and IPMI. Redfish is the feature which provides security and scalability better than IPMI, and IPMI, as you might be aware, has been around for a long time. So this is not a complete key feature list, I am just highlighting the key building blocks in OpenBMC. These are also some of the links to getting started; all point to github.com/openbmc, the docs repo or wiki links, which you can read. Coming back to Yocto: Yocto is something most people in the embedded Linux community are aware of. It is an open project that delivers a set of tools to create a custom embedded Linux distribution. For desktop and server platforms there are a lot of Linux distributions, some of them like Fedora, Ubuntu, Debian and so on, whereas for the embedded Linux space creating a custom distribution is cumbersome. So Yocto comes into the picture and provides a set of tools for that. If you take an analogy, the final embedded Linux image is a cake which has multiple layers like this: a bottom layer, a middle layer and then a top layer which has the cream and the chocolate. Similarly we can make an analogy for the OS: the bottom layer is like BSPs and kernels, then applications and middleware, those kinds of layers. We group the functionalities into layers and we define the layers like this. These layers are prefixed with meta- in Yocto's terminology, some examples being meta-poky, meta-aspeed and meta-phosphor. Anything named phosphor in OpenBMC is the reference implementation of a particular OpenBMC service, so all the available services in OpenBMC you can find under meta-phosphor. And Poky is the reference embedded Linux distribution given by Yocto, which most OpenBMC builds will inherit for the default features. And BitBake is like a Python-based task scheduler which does the actual baking of this final cake.
So it's like execution engine it goes through and forces all the layers and the recipes and then you find out the dependencies in what order it can combine with BigBake this individual layers and then how it can combine and create into the final wires. And let's do our first open VMC build from our community repo. So instructions are available here. So let me check what is the distribution of my machine is 18.0.4, 18.0.4 which is okay I think.0.0.4 is already available but it's not there. So let me clone the community repo which is GitHub.com. That's open VMCs. That's open VMCs.bit. Now it's cloned and then let me get inside this open VMC and then let me go into some standard release in this repo. So if I give GitHub it shows all the releases in this I am picking the release 2.9.0 and I am creating a branch out of it. It's called my first build. So then I am building image. So when I give dot space setup sourcing the setup it shows so many options available already in the community. So the community introduction reading dot empty page here is giving instructions for building the IBM Romulus image. Now I am giving instructions for building the WolfPos image which is Intel server platform. So this looks like this. This is one of the server platform. So this is the code for that WolfPos platform. So I am just saying setup my WolfPos platform and then keep the output in this directory called the build. So once I give this it will say like machines s2600 call in this like this. It will tell all this information. So that's it. Now you have to provide a very powerful feature of this web UI or content toaster which is useful to manage this big process and to understand the yacht also it's very helpful. So I can start this easy. I have to just give source toaster. There is some of the dependencies that you can see from here to install the necessary packages. So once I install like that toaster is ready to use. So I am starting a toaster in this particular address. This is the address in this server's machine and in this particular port. So it will take some 5 to 10 minutes first time. So I mean why once it's ready I can directly give a command to pick up the fcfmpossible image which will come back. So let's wait for some time. Okay now this toaster web server is started at this particular address 10.19 to 1.104 which I am going to this port and then across the meter. You can see the web UI for this build process. So let me start the build and then I can see in real time how it is going through. So the build is going to start. Okay our build is completed and I can see from the toaster environment. This is the image it's called build and then it's for the machine is 2600 wef and the project command and this I can see if I click this I can see what is the configuration of this water under the layers participating in this image. How many tasks it's run and task executed I can click and see what are all the tasks and recipes, packages and all the information is available here. Okay let me check the file actually built. It's found a build template which is the intd file and let me download the qmo one store and then I can now start this new little image using qmo. This is like how it really looked like when the first image boots which really awesome to see this in the qmo also in that way. So we built the image and we built the image and then now I am logging into the image. I can see what is the way it's really send the distribution of those things. 
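For reference, the build demonstrated above boils down to roughly these commands. The release tag and machine name are the ones used in the demo; the image target is OpenBMC's standard obmc-phosphor-image, and details may differ between releases.

    git clone https://github.com/openbmc/openbmc.git
    cd openbmc
    git checkout -b my-first-build 2.9.0
    . setup s2600wf build            # select the Wolf Pass (s2600wf) machine, output in ./build
    bitbake obmc-phosphor-image      # bake the BMC firmware image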
Yeah this is all about qmo and coming to the system D concept it's mostly the way of it it's like I'm just going through fast. It's an IPC mechanism and it's a D bus and it have two type of buses system bus and the system bus and the system bus is for user space and the system bus is for connecting from the current space to user space communication. And these services are demons. System D is the one which is used in some Linux distributions which is to during the boot up which is taught the services and so all the services are demons is like equivalent to process. So these process have their unique name or unique number. So in this case client process have colon one seven zero two and service process have colon one four. So they are integrated using this D bus underline the connection which is like this they use to communicate to other. And three it is they need to know what is the target service name and what is the target object it is trying to reach and what is the names case or interface in which this object is residing. So four type of operations are four type of messages are possible to be done in the D bus. So like this plane can send a message in working method on this target object for which this way can either send a successful response which is called method return or it may give a error response which is called error. So if there is no activity there can be a synchronous event can be messages can be sent from the service process for any kind of events to notify who you are on subscribe to this. So these are the four messages available in this and this interface is the concept which is make one two namespaces in the C++ so that object parts are uniquely identified using this interface mechanism. The conventions are interfaces are separated by you know what elements are separated by dot whereas the object part is the naming place it's like point system or starting from the C++ or C++ like this is the entity manager which is like a runtime configuration application which passes the configuration files which are in the decent format and produces the best representation of the files on D bus using this namespace. So it also creates the system that doesn't file for persistence. So ideally what it has is now all the entities in the board like PSU or this base board or whatever the entities you name it. So those properties like our base board say have some 1090 sensors and the 1080C sensors like that for example and those sensors have cross holes and scaling and those kind of properties. So this kind of properties needs to be exposed by some entity so that can be consumed by some other entity. So entity manager does the job basically it identify what are all the properties of this board I mean this and all represented in this JSON format like PSU is represented in properties are encoded in JSON and similarly base board are encoded in JSON. So like it have all the base boards or PSUs in some common directory and then based on the true directory during the boot up it will identify which JSON is appropriate for this board and then load the properties from in a boot up. 
So these properties are exposed in the system D service object interfaces format and then that will be consumed by other applications like D bus sensors which is like independent application which monitors the sensors and it gets the values but to refer to the threshold value to raise alarm or what it does it subscribes to the entity manager and then for the particular interface from that only it get to know this sensor threshold value is this one. So by separating the data and mechanism and this entity manager does the job of forcing the JSON files and just exposing the system D names system D address space whereas this D bus sensor does the job of monitoring the sensors and then referring to the entity manager for the threshold and other properties. This clearly distinguishes for modularity and extensivity of this design. So this is what I mentioned just now so many JSON files are there which represent very different entities which entity manager will go through each JSON file and check whether it is matching with the current proof detected and if it is detecting load of the properties found in this design file and expose it as a system D properties which can be consumed by D bus sensor or whatever the system D compliant services. Now let me take you to the real example that I have a first platform this is the action with first platform configuration that the JSON file this is available in this community repo. So I have another clone of the first platform say which I want to create a BMC format stack or support for that open BMC format stack support. So the only difference between this first platform and my clone first platform my customer first platform is this ADC sensor at the index one. So this has a scaling factor of 5407 and this threshold is 3.647 that is my clone will pass have these values differences like this. So in this case how I can do about it is I will just copy this rule pass this WFTBase board of JSON file as forced from 2021.json file and directly modify this name and scale factor and values and then the probe statement also here it takes it takes work to go through this. So in JSON file there are two sections one is called this exposes which is grouped under this and then there is this probe section. So this probe is here nothing but the system in this interface is why is the open BMC product of true device in this interface if it counts this particular key and this value pair it's like a wild card start here WFT. So here it's like in the food if it is detecting your food device which have the product name ending with WFT it's basically assume there's a full pass board and this interface is exposed by another demo called the food device and that does the job of only monitoring the food devices and then identifying the food properties and then exposing it in the deepest. So as soon as it gets the notification this entity manager subscribes for this kind of notification and as soon as it gets it it's such as for this and it exposes it. So to try this we should have a prove the name ending with forced down. So if we have such a thing it will expose the properties like this. So let us see how it is doing it. Also to expose any wolf us this thing three levels of customization is needed. Level one is for the creating the Apple layers we can create a custom layers and then second thing is modifying the device tree in the kernel level. So usually if it is a ST 2 500 there is always much changes. 
Also, to add support for a new platform like this, three levels of customization are needed. Level one is the Yocto layers: we can create a custom layer. The second is modifying the device tree at the kernel level; usually, if it is an AST2500, there are always some such changes, and there can be examples which I'm not covering here. And the third one is the user-space side, which is what I have been describing. You can see an example of how to create and modify such a layer in the documentation. The bblayers.conf sample is the one that describes the layers included in the build. So first of all you need to create a custom layer, then in layer.conf you describe this layer, say for the FOSDEM platform, and then there is local.conf; for example, you need to create a machine configuration for the FOSDEM platform, and the machine configuration file in the layer has to be renamed accordingly. Once you are done, you can build this new image, and you can also build it from the Intel BMC repo, which is another repo provided by Intel for features which are yet to be upstreamed; you can try that as well. It carries the real Wolf Pass platform, and you can find all the supported platforms under it. Upon including the layer and building, you can find the WFT Baseboard JSON file as well as the FOSDEM 2021 JSON file, and the only difference between these two files is that particular sensor. Now let me modify the FRU accordingly. The baseboard FRU has the board product name set to S2600WFT, which is what the probe refers to: the product name ending with WFT is why it is identified as the Wolf Pass board. Now let me modify this to the FOSDEM value and see how it behaves. Let me edit this FRU product name to end with FOSDEM; I am using the ipmitool command to edit it. I have edited it, and now it shows FOSDEM, and if I go and look at the properties they should reflect the new values. As you can see, entity-manager is now printing this property, the P3V3 sensor, from the FOSDEM 2021 configuration. If I introspect this particular object using the introspect command, you can see the new scale factor value, like this. So basically, in this way, we can create our custom JSON files and custom probes, and based on the probe it exposes all the JSON properties. So yeah, that's all; sorry for taking too much of your time. Thank you, and on to questions and answers. Thanks. We'll be online shortly, I think. Yes. So we are waiting for questions. Unfortunately we have just three minutes left. There was one question about the architecture which is used on your platform; could you reply to this question? I saw that. Go ahead. Yeah, I can try that. The question was about the architecture used on the platform which you are testing. Yes, I can answer that question. Okay, go ahead. Okay, Intel uses the ASPEED AST2500 chip, and it is used on other companies' boards as well. It is based on the ARM architecture, single core, and it is the most prevalent in the industry; you can find support for it, like the kernel device tree and all those things, in the community OpenBMC repo, the ASPEED documentation is also available, and it is integrated into the mainline Linux kernel. Yeah, that is about the AST2500, used on the Wolf Pass board and other platforms. Did you test different platforms, or only Intel ones? I am not sure about other companies' platforms, like IBM's, but this chip is like a dominant player in the BMC space. Basically the BMC needs to do some manageability operations on its ARM core, like the video engine and interfacing with the BIOS, things like that.
So this particular chip is very good at packing all of those functionalities together, along with a small number of other functions in there as well.
|
OpenBMC is an Open Source Software project started in an effort to create a secure, scalable, open source firmware stack for BMCs. Apart from the usual benefits arising from its Open Source nature, OpenBMC brings in additional advantages like a.) a state-of-the-art build system based on Yocto (an embedded Linux distribution builder) which simplifies the process of building a customized Linux, b.) a robust manageability framework based on Redfish (4 pillars: REST, JSON, HTTPS, OData v4), c.) superior modularity with the D-Bus IPC mechanism, which is known for its well defined interfaces, d.) the ability to customize the code, e.) support for IPMI, etc.
|
10.5446/52504 (DOI)
|
Hello everyone, my name is Daniel Kiper, I work for Oracle, I'm a software developer and the GRUB upstream maintainer. Today I would like to present a GRUB project status update. Let's take a look at the agenda now. At the beginning I would like to introduce the GRUB upstream maintainers, then we'll discuss what has been happening in the project for the last two years and what is happening right now, later we'll move to the main pain points for the project, and we'll be happy to reply to your questions. So, nothing has changed since last year: there are still three GRUB maintainers, two of them work for Oracle, Alex and I, and Vladimir works for Google and he's the most experienced maintainer among us. We have two additional maintainers who look after specific code in GRUB: Alexander Graf takes care of the RISC-V code in GRUB and Leif looks after the ARM and EFI code. So, what has been happening in the project for the last two years? Alexander Graf introduced initial RISC-V support to GRUB and also added some initial Travis CI support. Colin Watson improved the Gnulib integration in GRUB; currently it happens when you run the bootstrap script. Eric Snowberg from Oracle introduced a new IEEE 1275 disk driver. The disk driver is currently used on SPARC platforms, but we are also discussing the addition of this driver to the POWER platforms. Modules were also added which allow us to read MSR registers on Intel x86 and AMD machines. Another contributor added native DHCPv4 support; this code was based on an earlier version of that development, so currently GRUB is able to use the BOOTP protocol and DHCP. John was looking at fixing a.out output generation for SPARC64. The problem was that recent binutils packages dropped a.out output generation for SPARC64, which means that we would not be able to build GRUB for these machines. So John was looking for a solution for this issue and he quite quickly realized that it is possible to generate the a.out output manually, so he added some assembly code which puts all the pieces together, and this way we still have support for SPARC64 machines. Michael Chang from SUSE fixed some issues which disallowed the use of the GCC 9 and GCC 10 compilers to compile GRUB. Also, recently we found some issues with the GCC 11 compiler; currently we are working on fixing those. Patrick Steinhardt and Daniel Axtens also discovered some issues when you use Clang 10 to build GRUB and proposed some fixes. So currently we are able to use almost the latest compilers to build GRUB. Patrick Steinhardt introduced initial support for LUKS2, and there is still some development happening around the driver for this format. And we also released GRUB 2.04; these releases contain many fixes and cleanups, especially fixes for the boothole bug. Last year 3mdeb and Oracle organized a GRUB mini-summit, held in November. We discussed the most important technical issues and current developments which are happening in GRUB. We also discussed some licensing and legal issues which are to some extent problematic for the project. So let's take a look at the boothole issue. This was the major security issue which was reported last year. It was reported to us by Mickey and Jesse, who at the time were working for Eclypsium. The issue was discovered in GRUB 2's security parts. After looking at this issue we quickly realized that we had to do a larger GRUB security review and that we had to cover the most important parts of the code in GRUB.
So we started looking at different pieces of code in GRUB. We also started using Coverity and other static analysis to discover issues in the GRUB code. This work allowed us to find many integer overflows, some use-after-free issues, and the like. So we tried to fix all of them, and we succeeded in fixing all known issues. Also, during discussions about these issues among the maintainers and various distros, we quickly realized that the fix would not be complete without fixing the shim and introducing some larger changes there. It also required some discussions about revocation of shims and the signing process. We discussed this for a very long time; it required a lot of discussion between different parties, and we hammered out at least three signing schemes which allowed us to improve the situation around signing and revocation. So this was very challenging, and I think that the part related to the shim and the signing was much more challenging than fixing the bugs in GRUB, because we had to find a solution which can be implemented by various distros which have different limitations of resources, etc. So we posted all GRUB fixes at the CRD, the coordinated release date. There were 28 patches, so quite a big number of fixes; as I remember correctly, we had seven CVEs. And to give you a hint of how big an undertaking it was, it is worth mentioning that around 100 people from 18 companies and organizations worked together to mitigate all the issues. So this was a very challenging project. It took us four months to fix all the issues for GRUB, for the shim and for the Linux kernel, and it is also worth mentioning that it was done within a limited time period. At the end of this slide I'm listing some articles which are worth reading. At the beginning are articles prepared by Mickey and Jesse about the boothole itself; they discuss UEFI Secure Boot and also the boothole issue itself. The second link points to my email which introduces all the fixes to GRUB and also contains a list of links to the articles which were available at the CRD date. And the final link points to the blog post which I prepared with my colleagues from Oracle. It describes the boothole work mostly from an organizational point of view: we discussed how we dealt with communication, how we were looking at revocations, and some other issues. So I think that it is worth looking at these articles, taking some things from them, and seeing how this process looks. So, what's happening right now? Currently we are preparing for the 2.06 release. Unfortunately it is strongly delayed, mostly due to the boothole issues; this is the problem. At this point we are in code freeze. I was going to release RC1 in December and I had merged all the patches in December, but just before releasing RC1 I realized that we have some issues with the translations, so I decided to postpone this work until January. But I announced that the code in the Git repository is ready for testing, so right now, as far as I can tell, some people are testing the GRUB code. I hope that I will be able to cut RC1 soon, not later than the beginning of February, and then I hope that we will be able to release GRUB 2.06 a few weeks after that. Currently we are also quite closely cooperating with the TrenchBoot project, and Intel and Oracle are working on the Intel TXT implementation. Maybe it is also worth mentioning what the TrenchBoot project is: it is a DRTM implementation. Currently we are focusing on x86 platforms, but we are also planning to add this functionality to other architectures which support DRTM.
So, as I said, Intel and Oracle are focusing on the Intel TXT implementation, for GRUB and for the Linux kernel. I posted RFC patches at the beginning of May, and Intel has now taken over this work from me and is currently working on improving the GRUB code from the RFC patches. We are going to release the next version of these patches for GRUB after the 2.06 release. The 3mdeb company is working on the AMD SKINIT implementation; in general it will be based on the Intel TXT implementation, and both implementations will share the common code to some extent. Currently Red Hat is forward-porting to GRUB upstream the patches from the Red Hat and Fedora GRUB. After 2.06 I think that we will be able to drop around 50 custom patches from the Red Hat and Fedora distributions. I think that is a very big achievement for the project, and we are going to continue this work to limit the number of custom patches in various distributions to the required minimum. So I am going to encourage other distributions to do the same work and in this way ease the maintenance of GRUB for upstream and also for downstream. Last year I, together with my colleagues from Oracle, prepared the firmware and bootloader specification. This project is rooted in the TrenchBoot project. During work on that project we realized that we need some more information in the operating system about the early boot phase, so we started thinking about how to transfer this information from the bootloader to the operating system. But we quickly realized that this can be a feature which is useful not only for the TrenchBoot project. So we posted an RFC of the specification on the GRUB list and we started discussing it. I think there is a lot of interest in having this, in having a specification for this thing, so that is very nice. At the end of last year I posted the second version of the specification; it was around the GRUB mini-summit, where I also discussed the second version. I got some feedback, and I am going to take this feedback into account and release the third version in the following months. I hope that I will be able to speed up work on this in the following weeks. Ard and Atish are working on the UEFI LoadFile protocol based initrd loader for Linux. This is very interesting work, because currently we have two different UEFI boot protocols, for x86 and for the other UEFI architectures. This project allows us to unify the boot protocols for all architectures, so this way we will have just one boot protocol for all architectures. Daniel Axtens works on support for appended signatures. The goal is to have a mechanism which allows us to have something similar to UEFI Secure Boot on platforms which don't have UEFI. It is a very interesting project, I think. At this point it is targeted at POWER platforms, but I think that later we will be able to use this also on other architectures. And Red Hat plans to use Linux kexec to load other OSes from GRUB. This can potentially also be a very interesting project, because if we have this functionality then we no longer need to work on specific drivers in GRUB for the hardware and, for example, for the file systems. So I think that it would be a good idea to have this in GRUB. Finally, we are planning to admit officially that GRUB upstream does not support small MBR gaps on x86 BIOS targets. The problem is that the core image size keeps increasing, because we are adding more and more code to this part of GRUB. So this is the problem.
Finally, it is very difficult or impossible to stop increasing the size of the core even if we try to move all the functionality to the modules. It is not possible simply to stop increasing the size of this image. So at some point we will not be able to install core image in such small and near grabs. But I think that we are quite close to this point. So we decided at this point to add some warnings if grab install detects some specific configuration. For example, if somebody tries to install grab with ZFS or battery fast support. So at this point it puts an warning also there is an update to documentation. All the patches currently are in upstream and probably during next release cycle will disallow installation of such complicated setups to small and near gaps. And also we are working on many more interesting features. So I think that we will be able to merge them pretty soon. So what are the pinpoints in the project? As I said earlier, the GAP-2.6 release is strongly delayed due to the boothole security work. This is not what I wanted to happen. I hope that we will be able to fix this thing and release in the following weeks. Not later in month or two I think. We also looking at increasing the patches review throughput and decreasing response delay for image. This is quite challenging for us but I am working on it. And we are trying to catch up on the emails. I think that it would help us if more people would be looking at the patches on the grab mailing list. For example, if you see spots on the grab mailing list which fit in your area of interest or expertise, then please comment these patches. If you are not sure that your comments are okay, just mention about that. And then we are happy to jump in and to comment what we think about this or that as a girl maintenance. Anyway, I think it is worth having a discussion around the patches, not only maintenance looking at them. This way I think that the project will have better developments I think. We are also working on improving overall cooperation with distributions and other parts. I think that we achieved almost this goal currently. We are in touch with Fedora, Adébian and Ubuntu. We are closely cooperating with maintenance of grab for these distributions. And this allows us to get feedback on what they think about upstream developments, etc. So this communication helps us a lot. But there is still some area for improvement. Another issue which is popping up from time to time on the mailing list is that some people start posting the patches and drop the work in the middle. The problem is that maintenance later spent their time reviewing these patches and nothing comes out of it. So simply we waste our time and to lose quite interesting features, fixes and cleanups. And additionally, usually maintenance are not able to take over this work or can do it much, much later. So simply as I said we lose these patches from upstream. So please treat us seriously and do not do that. Please finish your work if possible. If you are not able to finish this work, please tell about that on GrabDevile. Maybe somebody will be able to take over this work from you. Additionally, another issue which appears on the mailing list from time to time, people do not carefully read my comments. And simply they repost new version of the patches without taking all the requests into account. And later they complain that their work is delayed or something like that. I understand that authors may not fully agree with our comments. 
And if that happens, just say about that and we can discuss various technical approaches for the issues which you or we reported. So please do not do that, just say something if you think that our comment is incorrect or we did a mistake or you think that it will be better to do this or that in different way. We are open for discussion. And simply silent commissions do not help. And last but not least, if you work on a new feature, please do not work on that feature using GrabUpstream code. If you do otherwise and use Grab from specific distributions, you simply increase the backlog and you make the difficult GrabUpstream maintenance life much more difficult and also distribution maintenance much more difficult. So we can encourage you to work on the patches or using GrabUpstream in Git repository. This is the best approach. And we are happy to help as much as possible if you spot something which hinders your work. So I think that's it at this point and I'm happy to reply for your questions. Thank you. Thank you Daniel for your presentation. I can see that you have started answering questions. So I think we can go with this slide. Yeah. Thanks a lot for joining. I will try to, there are a lot of questions on the chat. I will try to reply to all of them shortly. Just before we try to release, I know this is a pain for all of you and for me. Unfortunately, the problem is most related to the boot hole issue, which made a lot of difficulties for development process. At this point, we are trying to fix it. And I hope that in the following weeks, I will be able to release 2.06 Rc1. The code is ready, as I said earlier, in the Git repository. I'm going to fix final to assertive fix, which is quite important at this point. And I hope that it will happen in 2 or 3 weeks. And then after that, I'm going to improve the release process. Also I'm going to clear the current development backlog. There is a lot of parts just waiting in the queue due to delays in the release. The plan is more or less to release more frequently. I mean more frequently, the discussion is around between half a year up to one year, no longer than one year. This is the plan. So additional questions were related to support. Currently, support is in the repository. It is initial, as far as I can tell, support but functional and can be used. There are at least two folks who are very interested in development support. Lacks to support, so I expect more patches after the point of 6 release. There was also a question about Red Hat project, which I mentioned during my presentation. The Red Hat project is related to running the grab on top of Linux kernel more or less. The idea is, first of all, that we want to have something which allows us to not write drivers once again for the grab. At this point, currently, if something new appears, we have to add the driver to the grab. Especially, this is important, for example, if you want to run grab from the Corboud. In these cases, usually there are no drivers. You don't have BIOS or EFI and you are not able to use this infrastructure. So you have to have drivers, for example, for disk, for USB devices, etc. So this creates some problems. On EFI, BIOS is much easier because in general, grab tries to use these interfaces and does not interact with hardware directly. So as I said, in general, we want to not write the drivers once again. Additionally, there are many questions about the backwater compatibility for the grab. 
So I know that Linux boot exists and other solutions exist in the wide, but for some people, compatibility with older systems is quite important. So that's why the discussion about this project emerged and I hope that we'll be working on this at some point. Nice to hear that many people are using grab in their solutions. So this is very important for me and it means that my work is important for other people. What else? Are there any plans moving to GitHub, GitLab, Garrett? This is... First of all, we have to deal with current development backwok. So my plan is to merge all the patches which currently on the list at this point after 2.06 release and then I hope that we'll be able to consider such a solution. We were... If you remember, I was mentioning this solution in my earlier presentations, but as I said, we are busy with other things which currently takes my time. So this is the problem. But as I said, after the release, we are going to clear the backlock and then we'll be taking a look at some solutions which allows us to use GitHub or something like that. What else? I'm looking at other questions. There is DJI question. Is there a way that regular contributors can help the patch review process? Yes. This is an important question and I try to... If I see that some people are active on the mail in case, then I try to convince them and experience in some areas. I try to convince them to take a look at the patches. The good example is Lux2 development. Patrick Steinhardt jumped in and provided first initial implementation of Lux2 and that quite quickly realized that his knowledge in the Lux2 area is huge. So when new patches were posted to the Lux code by another guy, I asked him to do reviews of this code and it helps me greatly because I'm not always experienced in all areas of the graph. If there is another pair of eyes looking at the code, it helps to review the code and spots bugs and other issues in it. So this is important. So if you are experienced developer in some areas and if you spot the patches on the graph which you can comment, you can jump in and comment. If you are afraid that something or you are not sure that something which you are commenting will be in line with developer, maintainers idea just underline it. This is okay and then we will try to confirm it is okay or not as soon as possible. I saw a slide. I could have missed it so maybe you mentioned it and I missed it. I saw a slide that you talked about 62 sector and beer gap or something like that. Doesn't the responsibility of the falls on FGD default values when you are partitioning a disk? Deliver a gap between first sector and first... Yes, this is important. Yes, this is true that F disk and G disk should leave a larger gap at the beginning of the disk. This is very important for MBR. This is not important for GPT. This is not the case for GPT because in case of GPT we are using something which is bias boot partition and usually currently the newest partitioning tool set this size of the partition at least one megabyte. The problem of MBR gap is that if you have older system which used older F disk which reserved usually many years ago 62 sectors for MBR gap then there is no easy way to migrate to newer gap. That is why I was saying that if we remove support for smaller gaps then older systems may not have a way to migrate to the latest gap. That is why I was mentioning that it will be issue especially for downstream not in particular for the gap upstream itself but for downstream. 
We try to cooperate with downstream projects and think about distributions quite closely. We are considering all such critical changes very carefully and in this particular case we are facing out support for small MBR gaps gradually. So this is important. What next? About grab being important is this? I have been using grab. I am not so sure. I am not sure if we have any more questions. Helping automate bits of code review is even better than part of CP creating. Yes that is true. Thank you. That is important. We are also going to work on some tools which is the reviews especially for code developers because for example the coding style for the grabs is quite weird and some people who start working on grab are not used to this coding style and confuse some things. So it can be boring if my turn is very often remind about this or that. So understand that we are going to provide some tools which like a check patch PL in the grab to check the format of the coding style in the C files. So this is the plan. What else? Since I have this excuse to have lower default values especially when dealing with non-technical users in the field. Yes as I said we are aware that removing support for smaller gaps will be painful for distributions user and we have to do that very carefully. So adding to that we added some warnings to the code right now at this point and also there is a documentation explaining the issue. Let's take a look. I am my tenure of patchwork. Note that I do a very active job which does management of patches and mailing list. I could set up that and hook up into Snowpatch. Use that to run automated tests but with so much tests it is broken. It is tricky to know. It is a good point. At this point the test shoot in the grab is very broken. I have asked one guy to take a look at some tests and as far as I can tell he is looking at it. He is looking at it. But I am aware that first of all we have to fix that. At this point currently when I push the patches to the Git tree all patches are both tested for all architectures and platforms. So at least we are sure that we do not break our own pilots on any platforms. Additionally I am going to introduce cover these guns from time to time because it will also find some issues at patch development later and then we will not introduce various issues like use after 3 or something like that in the grab code. So I have three minutes left for other questions. I am open to reply them and also I will be staying in grab project status update channel after the talk. So if we have some questions we forgot to grab please do not hesitate to join this channel. I can see any new questions in the chat. Once again thank you for your presentation. Daniel mentioned that the presentation channel will be open for your questions after the presentation. Once again it is very important to hear that you are using grab and it can convince me that it makes sense to work on this project. Once again for questions and for health attendants.
|
The presentation will discuss the current state of GRUB upstream development and cooperation with distributions. The first part of the presentation will focus on last year's, current and future GRUB development efforts. The second part will discuss cooperation between GRUB upstream and the distros. In general it will show the current progress in the project and the main pain points. One of the goals of the presentation is to solicit some help from the community. The maintainers are quite busy and are not able to solve all issues themselves, so help from others is greatly appreciated. At the end of the presentation a Q&A session is planned.
|
10.5446/52505 (DOI)
|
All right. Hi, everyone. Hi. Thanks for attending my talk, listening to my talk here. My name is Chris. And today I'd like to talk a little bit about firmware testing or more specific about open source firmware testing and how I think we could build an ecosystem around it. Myself, so I am also a firmware developer working in elements, doing work for customers, but also personally interested in firmware development and firmware testing actually started interested in me like one and a half years ago when I joined here. And it's actually, of course, not the most, the most, whoo, topic, right? Where everyone is super excited that we talk about testing. However, proper testing can make your life much, much easier. And I did experience that a couple of times now that when you work and when you try to find in bugs in your code, because our features are not working, but actually the master tree is broken. That is something that takes some time to find out. And I think we can do better than we're actually doing right now. And yeah, I'd like to talk a little bit. Okay, short agenda. So I will talk a little bit about open source firmware testing, what's current state, what's going on there. Our idea, how can we build an ecosystem around it? What can we up with? Also sketch the solution that I'm working on that I already worked with and how you can get involved, right? I can't do it alone or we can do it alone. So I'd love people to get involved in that. And we can push that a little bit more forward. Okay, open source firmware testing. Firmware, obviously, runs most of the time directly on hardware. It behaves differently on different architectures, different socks, different boards. Everything behaves differently depending on the components which are on the board. Most of the time firmware does some back and forth talk between the components and that always differs and it's quite hard to put that on an abstract layer. That makes firmware testing quite hard. Because you, it worked directly on hardware or also hardware behaves differently depending on kind of what you have and what components are on there and these kind of things. You cannot really simulate everything, right? So that means, firmware testing is kind of hard. There's no unified approach to that. I know that there are a couple of systems out there who do work with firmware testing and who implemented firmware testing frameworks. However, we did not agree on a common way yet, which is fine, of course. We can do whatever they want. However, there's no unified approach to that. Most of the time as we are working with hardware and it is hardware, it's quite complex to set up, right? So if you have some piece of software that you can virtualize in a machine or whatsoever, it's easier because you control everything. Hardware, if hardware is involved directly, that always tends to get complex. The firmware itself is like the first code that runs most of the time, that runs on the platform or on hardware. And so you have to set up a couple of things to flash the board to exchange that piece of software all the time to run a new test. So testing is complex, right? And there's also no centralized entity for firmware testing. As I said, I know there are a couple of projects out there who do firmware testing. Also a couple of companies out there who do firmware testing. 
However, there is not a single point of contact where I can go to, where I can check out, okay, what projects do I have, what's the test coverage of these kind of projects, what hardware support it, what boards are supported, what sockets are supported, and these kind of things. So there's nothing centralized, which we can leverage or where we can start from. And there's also firmware itself, like the firmware ecosystem that we're living in. So many projects out there that it's quite diverse. We get, we get all booed, we get core booed, which is my main project. We get you booed, we get open source, UFI, EDK2 stuff. So to be clear here, when I talk about firmware, it's everything's x86, right? Of course, the same principles also apply to other architectures, but I do mainly work with x86. And these kind of projects are quite diverse. They have different ways how they test firmware, how they manage the project. It's either a GitHub project or it's running on Garrett or GitLab or something else. I don't know. So that's quite a diverse field. Also attached to these projects, there's quite amount of different continuous integration systems. Core boot, for example, uses their own Jenkins CI that does build the code for you. Or boot, for example, I think they use Circle CI and GitHub. You would get lab under 100% sure. So that's quite diverse, right? So there's a lot of pieces moving around in that ecosystem. Also, most of the projects, not even all of them, but most of them do build testing. So one could say there's like a loose build testing around the open source firmware ecosystem. And most or none of them are very, very little actually do testing on real hardware, at least publicly known. I bet like many companies do firmware testing in their own basement for their own firmware that they have, all right, or for their own product. However, everything is closed source, of course, it's kept secret and for themselves. So there's nothing shared among the community. Coming back to the testing. As I said, nearly every project does build testing, right? It's easy to set up. So setting up a GitLab runner, which does basically just build the code that you wrote. That's quite easy to do. It's also like the minimal amount of confidence that you have when you write code. I mean, most of you might be programmers and if you write C code, for example, and it builds, that's good. Okay, maybe a couple of warnings, but still it builds. That's fine. However, it doesn't say anything about the functionality. So that's like the lowest barrier that you can actually jump over that you say, okay, I can compile my code to something. However, there is more and there should be more. I know that a couple of projects, the unit testing, I see you, you would, has a couple of paper scripts that do unit testing on the code. I think code started to implement unit tests. So that's a step forward. However, in the firmware world unit testing is also again quite complex because you got again a lot of layers that you need to simulate and a lot of sub functions that you have to write in these kind of things. So that's quite tedious to do. Of course, there are also things like functional testing, performance testing, regressions testing and all these kind of things that you can actually do. However, you will, one has always kept in mind or has to keep in mind that firmware is actually the most privileged code that's running on your system. And a lot of these things do have security implications. 
Like, if you run something in your OS, you implicitly assume at least that your firmware is secure and stable underneath. So that brings us to the point that there actually is a need for more OSF testing, for more firmware testing in general, but especially for open source testing. It should be made more accessible so everyone should be able to set up their own firmware testing infrastructure or to run their own paper tests. And also the results that you get should be shared, right? It doesn't make sense that 10 people run the same 10 tests over and over on the same hardware again. When we could achieve more from share results and 10 different people run on five different boards, which cover two different socks, right? So we get a wider spectrum that we can actually cover if we share results. So in my opinion, what we need is an open ecosystem. We need somehow centralized reporting infrastructure where every test system that is out there in the wild, in the basements or whatsoever, can actually report. We need one centralized point of contact where these results can be gathered. Of course, these results should be shared with everyone. They should be open. And people should be able to leverage these results and build their own solutions on top of this. Imagine you have a centralized server or a centralized reporting infrastructure and you can actually gather all the test results that are there for code boot. What you could easily do and what code already has, but that's statically more or less, is that you can build up like a board status page to say, okay, these kind of boards and socks and architectures are supported right now. The latest stable commit that has been tested is this and these are kind of things. It gives you way much stability and security in the project itself. However, of course, you have that centralized reporting infrastructure, but what you actually need, you need a decentralized testing infrastructure that feeds the results into that centralized reporting infrastructure. So that means run it in your own infrastructure, right? Don't change anything. Run your own code. Of course, I know that in the last couple of years, a lot of people came up with their own solutions on how to do firmware testing and what tests to run and these kind of things and integrated all super clever in their infrastructure. And I don't want to change that, right? That is nothing that can be changed now or maybe in the future. However, I just want to leverage the results that the people having and put them in a centralized space so that people on top can actually make something out of it. As I said, sharing results with others. So post them to this centralized reporting infrastructure. So, basically that everyone can set boards to the QA system and that we get a wider test spectrum on the on the open source firmware project. So if we set that up this or if you would set that up the system that we have for a centralized reporting infrastructure and decentralized testing infrastructure. There are a couple of pros and cons that come with that, right? Obviously, we have better test coverage, which is always good. We can measure the impact of changers. We can catch regressions that that comes up. We can keep un-maintained boards in the tree. And if I talk about un-maintained, I don't mean the board itself or the testing infrastructure. In Kobut, we talk about, sorry, I'm coming from the Kobut project. 
In Kobut, we talk about un-maintained code if no one ever feels responsible anymore for that part of code. Often, or what happens from time to time is that I know some specific sock or something like some old into whatsoever sock, right? Certain people were responsible for that kind of code. However, they step back from that responsibility and said, okay, I don't have any much time for that anymore. I cannot maintain the code anymore. I cannot keep care of the code. What we could do is hook up a board which tests that code and have that running our testing infrastructure. What would be the outcome is actually that that board would be still in the testing infrastructure. Everything would be gathered at a centralized page place and we can leverage these results from that board and keep that un-maintained board actually in the tree. Because we see, okay, it's running in the testing infrastructure and we do see the results. We can leverage that. And if errors come up, maybe someone will take care of it. The likelihood that someone takes care of it if tests are already running on that kind of board is much, much higher than it is. If you don't know what's up with that code. And of course we can combine multiple efforts here. Of course, these kind of things, these kind of big changes always come with cons, right? We have to agree on a common test interface. So if we talk about we had a decentralized testing infrastructure and a centralized reporting infrastructure, we somehow have to agree on an interface like on language that we speak, right? How do we push the results to that infrastructure? How would they look like? What should they contain? What not? And these kind of things. So there needs to be a specification around that. It's integration work, obviously. If you need to adapt to that new ecosystem, or if you want to adapt your own testing infrastructure to that ecosystem, it's always integration work. And of course, bringing new players to the table as we're trying to do right now, that is always more fragmentation, at least on the short term. The idea, ideally, would be that in long term, you have that centralized infrastructure and that is actually less fragmentation, at least on the result side. You can still do whatever you want on the testing side. However, there's a standardized way how to post the results to a server, right? And how to gather them. So that would be more fragmentation on the short term. However, long term, it would be less. On the OSFC 2020, so two months ago, we already came up with some kind of solution or parts of a solution. We did implement a test system, or a couple of tests system that do various tests, right, boot and build performance testing, functional testing whatsoever. These test systems, when they're ready, they do push their results to an S3 bucket. This S3 bucket here was meant as a centralized reporting infrastructure. So that's open, accessible by everyone. All the results are lying there in an adjacent machine readable format. So you can just pass them whatever you want, and you can attach consumers to this kind of S3 bucket. So whenever a new report comes in, you can take that report and consume that into whatever you're doing with it. So that could either be like a corporate status page that could be in terms of Tianok or that GitHub, people that they have, maybe you have an internal CI system, right, in the company. And you say, okay, I do see that there are test results coming in for socks or architectures that are interesting for us. 
For example, we are testing here a couple of Z on the speed like Skydeck SP or Cooper Lake SP. And if you're interested in these kind of results, you can actually grab these, the reports from us, they're running on an hourly basis and integrate them into your own CI system. So that was the first step that we did here. However, that wasn't enough. So I thought about the use case in Cobalt, right? As I said, I'm coming from a course of Cobalt specific world or centric world and like 80% of my time I'm working on that project. And what we have here is we have a Garrett system. And that Garrett system actually, yeah, you can upload patches there and once the patches are submitted, you put these kind of, so once the patches are submitted, our internal test system takes that patch and test it. So what I wanted to set up here a little bit is this extent the solution that we have already was the test setup and this free bucket with a thing that I call right now, like a working title. I call it firmware test results server. It's not very catchy. I got to be honest here, but it's basically what it is. So you the workflows a little bit like this, you still have your test setup that test setup when it starts. It kind of posts to the firmware test results server. Hey, yeah, I'm running a test now. I'm running it on this commit. This worker I'm working on that project. And that's like a roughly description, right? It's a JSON object and it's a it's a normal HTTP post. Here, as you can see, it says, okay, I'm working number two project ID one is actually co good. I'm working on this hash right now. So that's a commit I want to test. And I got a time out. That's actually quite important. And my description is I'm running also PTO pass skydegas speed boot chests with a Linux boot payload. The time out is here because if you run these kind of tests, you will notice quite fast that from time to time systems tend to break. And also your internal test system might break right. I know they might be power cut off internet might drop whatsoever. And that time out is actually meant here that in case something happens. The firmware test result server knows that okay, I wait for 242 40 minutes. If within the 240 minutes, nothing happens. I just assume that this test failed. So test it up first step post the job start to job start to then end point job start and say with that JSON object object that I just showed you okay I'm starting I'm starting my job now. Second step is run the job. So do whatever you want right. So run your building boot test run your functional test your regression testing boot time testing whatsoever do whatever you want there. I don't care yet. When the test system is done. It does push the results to their street bucket right so it makes the results accessible for everyone because it's an open my street bucket. Step four would be that you post to job done so somehow you have to indicate to the firmware test results server okay I'm done with the work that I did and you can move on here. It's again it's a json object that I post to again we had to work ID to because that's that's the idea of the worker project ID one which is called running on the same hash still. My test was successful. So I already indicate what the overall success of my test is. And I attached link to report. Right now it can whatever you can be whatever you want right so it can be your internal actual sheet it can be a JSON file that can be on a web page whatever you want. 
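As a rough illustration of the client side of this flow (announce the job, run it, publish the report, mark it done), something like the following could be used from a test setup. The server URL, endpoint paths, bucket name and field names are assumptions derived from the JSON objects described above, not the actual API of the firmware test results server.

```python
#!/usr/bin/env python3
"""Sketch of a test setup talking to a firmware-test-results style server.

Server URL, endpoint paths, bucket and field names are illustrative
assumptions based on the JSON objects described in the talk."""
import boto3
import requests

BASE_URL = "https://results.example.org"   # hypothetical server
BUCKET = "osf-test-reports"                # hypothetical public S3 bucket
COMMIT = "deadbeefcafe"                    # commit under test

# Step 1: announce the job; the server assumes failure once the timeout expires.
job = {
    "worker_id": 2,
    "project_id": 1,   # 1 = coreboot in the talk's example
    "commit_hash": COMMIT,
    "timeout_minutes": 240,
    "description": "coreboot boot test with a LinuxBoot payload",
}
requests.post(f"{BASE_URL}/job/start", json=job, timeout=30).raise_for_status()

# Step 2: run the actual test (build, flash, boot, ...) and produce report.json.
success = True   # stand-in for the real verdict of the test run

# Step 3: publish the machine-readable report to the open S3 bucket.
boto3.client("s3").upload_file("report.json", BUCKET, f"coreboot/{COMMIT}/report.json")
report_url = f"https://{BUCKET}.s3.amazonaws.com/coreboot/{COMMIT}/report.json"

# Step 4: tell the server we are done, with the overall verdict and report link.
done = {"worker_id": 2, "project_id": 1, "commit_hash": COMMIT,
        "success": success, "report_link": report_url}
requests.post(f"{BASE_URL}/job/done", json=done, timeout=30).raise_for_status()
```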
The long term is my deal would be that in that report link. You link to a publicly available JSON file where that fear where test results server can actually make a nice looking report out of it. All right. So once the fear test results server. Fetches a result from the from the reporting that you that you have it does actually post to the corporate Garrett server. How that looked like I will show you later. Maybe diving a little bit more into that fear test results that I talked about. This scheme more or less gets sketches how it looks internally. You get the two endpoints job start and job done. When you when you post something to job start. It actually adds a new entry that entry goes into the database. And it also will be forward to something that I call guardian that guardian has basically two jobs. The first job is check if if if the job that you actually posted right now, and which has a timeout check of that timeout is still valid right so it's basically just a timer and it continuously checks OK so timeout so that it's standard timeout so that it. The other thing is it tracks the status of that entry. So as long as that done flag is not set in that entry. It continuously check the timeout. Once the done flag is set by calling the job done endpoint. The job done endpoint actually updates the entry. The guardian how I call it actually sees that the job is done and it forwards that job to something called Bruce. Bruce is the one actually which does the feedback to the Garrett system. Right now it's only the Garrett system that I integrated. However long term it could be anything else it could be presenting a website. It could be posting to a GitHub repo. It could be anything. Bruce also has a couple of responsibilities that it takes me to take care of. First one is get the job and see where it has to post to. Second one is that Bruce does actually check if more jobs are running on the same same hash. Why am I doing this. Obviously, I don't I want to have like a like a comprehensive result right of. If you have one commit hash, I want to have all the test results on the one commit hash. So what I what I do is. Oh, how I imagined it. Let's say like this. Okay. You start working on the commit hash and other test systems also jump in and say, okay, I'm also testing the same commit, but maybe on different hardware. Bruce knows what other workers are actually working on these kind of commit. And if if all of these are done, it actually posted the Garrett system and has with a comprehensive results where all where all the test results are actually written down together. What I did in the last two months is that I that I'm that I'm working on and I'm still working on that. I'm putting in two of our internal test systems into that into that firmware firmware test results over that I just showed you right. So as a test setup here, I attached to system. The first system is the lava QA system and the second one is a contest system that we presented on the OSFC 2020. Talking about the lava QA system. That is actually the current hardware QA system of code, right, or at least available to the public. It does add to common comment after beneath every submit every submitted patch. Having like a comprehensive result on okay, these tests have been run and they failed or passed or whatsoever. Also, it runs on different hardware. So we have a T 500 attached right now to it. We got an HP Z 500 something we got another HP attached to it and a couple of QM or targets because they are easy to go. 
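The aggregation behaviour described for Bruce can be pictured with a small sketch: keep track of which workers are still running on a given commit, and only once all of them have reported post one combined comment back to Gerrit. This is a simplified illustration of the behaviour described above, not the actual implementation; the data structures and the post_to_gerrit helper are made up.

```python
#!/usr/bin/env python3
"""Simplified sketch of the 'Bruce' aggregation idea: one combined report
per commit once every worker testing that commit has finished.

Data structures and the post_to_gerrit() helper are illustrative only."""
from collections import defaultdict

# commit hash -> {worker_id: result dict, or None while still running}
jobs_per_commit = defaultdict(dict)

def job_started(commit, worker_id):
    jobs_per_commit[commit][worker_id] = None

def job_finished(commit, worker_id, result):
    jobs_per_commit[commit][worker_id] = result
    maybe_publish(commit)

def maybe_publish(commit):
    results = jobs_per_commit[commit]
    if any(r is None for r in results.values()):
        return  # at least one worker is still running on this commit
    lines = [f"worker {wid}: {'PASS' if r['success'] else 'FAIL'} ({r['report_link']})"
             for wid, r in sorted(results.items())]
    post_to_gerrit(commit, "\n".join(lines))
    del jobs_per_commit[commit]

def post_to_gerrit(commit, comment):
    # Placeholder: the real service would use Gerrit's REST API here.
    print(f"comment on {commit}:\n{comment}")
```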
However, it's quite complex to set up. As I said, firmware testing is complete itself. So also setting that up is quite complex. There are four or five parts moving parts actually in that whole system. You have to solve the hardware and these kind of things right so if you want to attach a new board into that system, it's quite complex. And it only reports directly to the Garrett right now. So it doesn't share if results in a machine readable format or so it's publicly accessible so you can go on lava.90 sector by oh. However, it only reports to Garrett to get it only. If you check out lava.90 sector by oh, you do see that overview of all the tests that have been run. What kind of hardware and what kind of test it was and if it has been triggered by Garrett or any lava health checks right. The technical system that I want to attach is contest. As I said, there was a longer talk from Andrea and me about contest on the OSTFC 2020. So if you want to check that out, contest is a general proposed testing framework. It's open source. Obviously, you can check it out. I got the links on the last slide. So if you want to dig into that, feel free. It's modular. So you already have like a basic framework. But if you need special behaviors, you can they can be plugged in as as plugins, right. It's validating to minimize runtime errors. It's configuration based so the job is more as a description in text and text form. And it can either run on the device directly or it can be orchestrated. The thing that we did for the OSTFC 2020 was like this. And this is actually or this was the starting point of the whole, okay, let's build up an open ecosystem around it. What we did is we had a we had a Delta Lake and OCP Delta Lake server. It's a four node OCP server. And on one slot, there was a contest server running on that. The other three slots were there for testing, right. We had a rescue pie as a client and controller and that pie more or less submit submit jobs or job requests via Jason to the contest server that contest server does acquire or lock the target. It does start building the corporate builds, and then it works with with the individual slots to flash them and to test the firmware. Once the result or once the test is done, the results actually get get posted back to the to the to the recipe pie and the recipe pie does publish the report to the street bucket. All right. As I said, the lava to a that we already set up does make these kind of test report. And this would also be the outcome of the firmware test results that I wrote. That all works right now. So that is already implemented. You can also check out the code on GitHub. It's open source. And however, it's not live yet. Why it's not live yet, because it's still running on the test. I don't want to break systems and we got a working system that that is working fine and cool. So I don't want to break that. So what I what I did is it's running right now live, but in the test mode. So it's not really posting to the Garrett system. I assume that I will switch that the next couple of weeks. Okay. And if you want to get involved into contest feel free, you can check out the GitHub page, Facebook incubator contest. Also, there's a contest channel on the OSF. Oh, as FW slack. You can get your invite at slack dot OSFW dot def. Also, you can contact me directly on Twitter if you want to. Also, as I said, contest people is there. The fewer test results are you can check it out there. It's quite minimal right now, but it's there. 
LabrQA that I talked about. Also, please check it out. And thank you everyone for listening. Thanks for having me and enjoy FOSTA.
|
With the advancement of open source firmware projects, we need a reliable quality assurance process to automate firmware-level testing. In this talk I'd like to show how we are building up an ecosystem for open-source firmware testing and show by example how we integrated one project into that ecosystem. This talk aims to give a status update on what was shown at OSFC 2020, and also to encourage people to get involved and participate in open-source firmware testing. All code shown is open-source and available by the time of FOSDEM'21.
|
10.5446/52508 (DOI)
|
Hello and welcome to my talk about the EDK2 implementation of UEFI on the RISC-V platform. First I'll give a short introduction about us, who implemented this, and then, for those of you who don't know, an introduction to EDK2 and RISC-V, what those are. To give some context, we'll look at how booting on RISC-V has evolved over time, and we'll do the same with this particular implementation. Then we'll get into the details of how the EDK2 implementation boots on RISC-V, especially the details that change compared to regular EDK2 on other architectures. To prove it works, I'll do a demo of booting to Linux, then explain where we're currently at, what we want to do next, what this project is going to enable, and finally how you can help if you would like. So, we are Abner and me; we both implemented this and we are both UEFI firmware engineers for ProLiant servers at HPE. Abner is a senior engineer and he's been doing UEFI for many years, so he was the lead for this project. I joined last year after I graduated and this is really my first UEFI project, and it was a very big learning opportunity for both of us, especially me, because it required changes in the entire UEFI EDK2 boot flow, from the beginning through all of the stages until the operating system starts. Disclaimer: while we did work on this on company time, we are not speaking about a strategic direction of HPE; we're just speaking about the technical details of the implementation. Okay, EDK2. Well, UEFI is really only an interface specification. It specifies something about the implementation, but the important thing is mostly how the operating system interacts with the firmware, and EDK2 in particular is the reference implementation of UEFI that many vendors use as a basis to make their own implementation for their systems. UEFI was initially developed about 20 years ago for Intel's new Itanium processors, but those are not really in use anymore; now UEFI is mainstream for 64-bit x86 systems and has mostly replaced the old legacy BIOSes, and in recent years it has been getting adoption on ARM, where standards have been written for how to use UEFI on ARM. TianoCore, the logo, is the overall project name for EDK2 and related things. RISC-V is a free and open RISC instruction set architecture; it's about 10 years old and tries to be simple and legacy-free. The name RISC already says it's a reduced instruction set, and not only the instruction set but other things they also want to keep simple. Importantly, it has three privilege modes: machine mode, which is for firmware; supervisor mode, which is like ring 0 on x86; and user mode, which is like ring 3 on x86. Similarly to x86, the boot starts without an MMU, in machine mode, and importantly for our implementation, firmware can stay resident after boot, and higher layers, for example the operating system, can call into the firmware and ask it to do something. On x86 there's something similar with SMM, which also stays resident after boot, but Itanium has something even more similar, because SAL, the System Abstraction Layer, has a specification for how the higher layers interact with this resident firmware. For RISC-V this is the Supervisor Binary Interface, SBI, which defines how supervisor mode can call into machine mode. Booting on RISC-V started in 2015 with BBL, the Berkeley Boot Loader. I'm not sure if this was the first time this was functional, but it's the first time it got the name, and at the moment it's not really used much for booting actual systems, only for research.
In 2016 Abner published the EDK2 prototype, which could boot on a custom QEMU machine model that he made. In 2017 he got patches accepted to be able to boot on RISC-V. In 2018 the UEFI interface that U-Boot supports was also made to work with RISC-V, even though I don't think it could boot an operating system yet, because bootloader and operating system support was missing. The same year coreboot got its RISC-V port, and a year later coreboot was forked into oreboot, which is the same but with the C removed and replaced with Rust. Last year we upstreamed a modified EDK2 implementation which now uses OpenSBI, and it can boot on the regular QEMU HiFive Unleashed target and also on the HiFive Unleashed board. We started this implementation in 2015 at HPE, and a year later, as I mentioned, we presented a prototype at the UEFI Forum Plugfest, which is their conference. Last year we upstreamed it to EDK2 and it was able to boot to the UEFI shell on the HiFive Unleashed. We also had a bunch of patches to boot Linux, but we haven't cleaned them up and upstreamed them yet; that's what we're going to do this year, booting via the Linux EFI stub. Additionally we want to port more boards this year, for example the BeagleV board, which is much more affordable and hopefully will allow other people to work on this more easily on actual hardware rather than QEMU. Okay, let's look at the implementation. Anybody who knows about implementing UEFI has probably seen this diagram, which shows the different phases UEFI goes through to boot. So let's look in detail at how it actually works in the RISC-V implementation of EDK2. When the HiFive Unleashed boots — when you turn it on and apply power — it starts running in M-mode without an MMU, like I mentioned. There's a so-called zeroth-stage bootloader which is embedded in the hardware in a mask ROM, or on QEMU is hardcoded in the source code. It doesn't do much — maybe 10 or 20 instructions — and then jumps to a predefined address. This is where we put our firmware. SEC is the first phase of UEFI; some of it has to be written in assembly, and it's fully custom for RISC-V, meaning it doesn't share any code with the existing EDK2. First it sets up the scratch register and the structures that OpenSBI expects, and to be able to transition to C code it sets up a stack in temporary RAM and also a trap handler, which preserves the registers, calls the OpenSBI trap handler, and then restores the registers again. The last assembly instruction starts the C function of SEC and passes it the current hart ID and the scratch pointer we just set up. In C we add a private region to the scratch base that OpenSBI expects, and we store some information there about the machine that we're going to use in later stages. Then we initialize OpenSBI: on the booting hart we continue, and the other non-booting harts are parked so they can be started later. UEFI isn't really multithreaded, so we just use one hart. When we initialize OpenSBI, as I mentioned, we pass it the pointer to the scratch base, which includes the device tree that the operating system is going to use later. We need to tell it what privilege mode we're going to initialize with, and we give it a pointer to some platform-specific functions: how to initialize the platform, how to use the console, and so on.
We also register some additional SBI calls, in addition to the ones defined in the standard and provided by OpenSBI, because we need some functions of OpenSBI but want to avoid linking later stages directly against the OpenSBI library — we want to run them in S-mode with fewer privileges. Really the main point of SEC is to be able to run PEI, so the last thing we do is find the PEI entry point in the firmware volume, switch to S-mode and enable the MMU, though there's no real translation happening yet, just a one-to-one mapping. When jumping to PEI we pass information about the boot firmware volume, because that's where the other phases are contained, and it also needs to know where the temporary RAM and the stack are to continue executing. In PEI most of the code is shared with other architectures, so I won't go into details about that code; rather, I'll explain the differences for RISC-V. We discover the RAM — which is hardcoded in our case for the platforms we enabled — and migrate the code from temporary RAM to actual RAM. Then PEI starts dispatching the modules. One of the ones we added takes the device tree that we previously put in the scratch space and stores it in a hand-off block (HOB) for use in later phases. We discover some additional processor features, also store them in a HOB, and those will later be put into SMBIOS for the operating system to discover. To launch the next phase, DXE, we build a new stack, switch to it, and execute the DXE Initial Program Loader. Again, DXE has most of its code shared with every other architecture, so it starts dispatching the DXE modules. The ones we added install a timer interrupt handler and a protocol to control the timer — to set the timer and be notified. We also install the runtime services; most of them aren't implemented yet, but we're working on that. Then, as I mentioned, we saved some processor-specific information, and the next DXE extracts that information and puts it into SMBIOS tables. The ones we add are Type 4 for information about the CPU, Type 7 for information about the caches, and Type 44, which we had to create for RISC-V because we need some information about the CPU that the original Type 4 table doesn't include — so we updated the SMBIOS specification. The last DXE we added is the one that installs the device tree. It's also extracted from a HOB given by PEI, and one thing we need to do is insert the hart ID of the current booting hart, because that's required by the Linux boot protocol on RISC-V. Once that's done, we insert it into the EFI system configuration table with a specific GUID which Linux knows about and can fetch. The next stage is the BDS stage, where in our case we launch the UEFI shell. The changes we made here to boot Linux haven't been upstreamed yet, as I mentioned. To be able to launch Linux you need some preparation, because we don't have a disk driver for the SD card on the HiFive Unleashed. So we use a workaround: we embed the EFI stub kernel and the initial RAM disk in a disk image, and store that disk image in the flash firmware image directly with the EDK2 code. Then in the UEFI shell there's a command we created to load this disk image into memory, load the appropriate drivers for the partitions and the file system, and turn it into a RAM disk.
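Stepping back to the device-tree DXE described a moment ago, here is a minimal sketch of the idea: patch the boot hart ID into /chosen and publish the FDT through the EFI configuration table. gBS->InstallConfigurationTable() and the device tree table GUID are standard UEFI/EDK2 interfaces; the libfdt-style calls, the "boot-hartid" property name, and the function structure are my own illustrative assumptions rather than the actual upstream module code.

```c
/* Sketch of a DXE driver step that fixes up the device tree with the boot
 * hart ID and installs it into the EFI configuration table. */
#include <Uefi.h>
#include <Library/UefiBootServicesTableLib.h>
#include <libfdt.h>

/* Well-known device tree table GUID (EFI_DTB_TABLE_GUID). */
STATIC EFI_GUID mFdtTableGuid = {
  0xb1b621d5, 0xf19c, 0x41a5,
  { 0x83, 0x0b, 0xd9, 0x15, 0x2c, 0x69, 0xaa, 0xe0 }
};

EFI_STATUS
EFIAPI
InstallBootFdt (
  IN VOID    *Fdt,        /* FDT blob taken from the PEI hand-off block */
  IN UINT32  BootHartId   /* hart we are booting on, saved during SEC   */
  )
{
  INT32 ChosenOffset;

  /* Insert the boot hart ID into /chosen so the Linux EFI stub can find it. */
  ChosenOffset = fdt_path_offset (Fdt, "/chosen");
  if (ChosenOffset < 0) {
    return EFI_NOT_FOUND;
  }
  if (fdt_setprop_u32 (Fdt, ChosenOffset, "boot-hartid", BootHartId) != 0) {
    return EFI_DEVICE_ERROR;
  }

  /* Publish the device tree; the OS looks this GUID up in the system table. */
  return gBS->InstallConfigurationTable (&mFdtTableGuid, Fdt);
}
```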
Back in the UEFI shell workflow: you then run the initrd command to load the initial RAM disk onto a handle so that the kernel can later get it from there. The final step is just to execute the EFI stub. The EFI stub changes weren't implemented by us; they were implemented by Atish Patra from Western Digital. We heard he was implementing it and asked him to collaborate with us, so together we tested and finalized his patches and our patches to make Linux boot from EDK2 as an EFI application. What the stub does is take the device tree from the configuration table where we previously put it, extract the initrd from the device path where we installed the LoadFile protocol, and then execute the actual kernel. The kernel has some requirements, like the MMU needing to be disabled — which, as you remember, we previously enabled — and then it just jumps to the kernel, giving it the hart ID and the FDT, the device tree. Here's a quick overview of what I just explained. I won't go over it again, but you can refer to it later as a reference when looking at my slides; you can see how the device tree is handed over from stage to stage and finally goes to the Linux kernel. You also see that OpenSBI is present everywhere, because in some modules we use ecalls to interact with it, but only in SEC is it directly linked to our code. Let's look at what booting to Linux looks like. Starting QEMU. Now we're in the shell, loading the embedded RAM disk from a specific file GUID, and it's loaded. The file system drivers are loaded. Now we can map it; you see the file system is present, and it shows that the Linux kernel and the initial RAM disk are there. So we can execute the kernel — because we have no disk, we tell it to use a root file system in RAM and pass it the initrd. In this case we use the deprecated initrd= command line parameter for the kernel instead of the initrd UEFI shell command. Now that we've booted to userspace, we can log in and see the kernel running on RISC-V. Let's check that we actually booted with UEFI. As you have seen, we can boot to the UEFI shell and launch UEFI applications — for example we were able to launch GRUB — and we can boot Linux by launching it as a UEFI application. Currently we fully support the HiFive Unleashed and its QEMU model, booting all the way to Linux. On the Freedom U500 FPGA we can boot to the UEFI shell, and on other platforms too, but those don't boot to Linux yet. Other platforms we want to enable are the virt machine of QEMU, because there we can use VirtIO devices and have an actual disk. We also have the Andes AE350 FPGA, and hopefully soon we will receive a BeagleV and port to that as well. Overall, we had to amend the UEFI specification — that happened about three years ago, I think — and also SMBIOS, with Type 44. The EDK2 port has been merged upstream, while the Linux boot patches are still missing. The Linux EFI stub was ported by Atish and has been merged since 5.10. Recently I also ported the UEFI Self-Certification Test to RISC-V, and it was confirmed to be working on the U-Boot implementation of UEFI; those patches still need some cleanup and then they will be upstreamed too. Some goals for this implementation: I mentioned we installed the runtime services, and one important one that's not working yet is ResetSystem, to restart or power down the system. SBI very recently — a few weeks ago — gained the ability to reset the system, so I'm currently implementing that and will upstream it.
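For reference, here is a hedged sketch of what backing a reset with the new SBI System Reset (SRST) extension could look like. The extension ID 0x53525354, function ID 0, and the reset type values come from the SBI specification; the helper names and how this would be wired into ResetSystem are illustrative assumptions, not the actual EDK2 code.

```c
/* Sketch: requesting a reset via the SBI System Reset (SRST) extension.
 * EID 0x53525354 ("SRST"), FID 0 = sbi_system_reset(reset_type, reset_reason).
 * reset_type: 0 = shutdown, 1 = cold reboot, 2 = warm reboot; reason 0 = none. */

#define SBI_EXT_SRST            0x53525354UL
#define SBI_SRST_RESET_SHUTDOWN 0UL
#define SBI_SRST_RESET_COLD     1UL
#define SBI_SRST_RESET_WARM     2UL

static long sbi_system_reset(unsigned long reset_type, unsigned long reset_reason)
{
    register unsigned long a0 __asm__("a0") = reset_type;
    register unsigned long a1 __asm__("a1") = reset_reason;
    register unsigned long a6 __asm__("a6") = 0;            /* FID 0 */
    register unsigned long a7 __asm__("a7") = SBI_EXT_SRST; /* "SRST" */

    __asm__ volatile("ecall"
                     : "+r"(a0), "+r"(a1)
                     : "r"(a6), "r"(a7)
                     : "memory");

    return (long)a0; /* only reached if the call failed */
}

/* A ResetSystem-style runtime service could map its reset type onto these
 * values and issue the ecall. */
static void platform_cold_reset(void)
{
    (void)sbi_system_reset(SBI_SRST_RESET_COLD, 0);
    for (;;)
        ; /* should not return */
}
```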
The second goal is to upstream the changes to boot Linux, which are mainly the fix-up of the device tree and storing it in the configuration table. To be able to use a disk on the HiFive Unleashed we need to write a driver for the SD card. Currently we use an older version of the GNU toolchain to compile our code; newer versions emit a new relocation type, so to make the build work with the new toolchain we need to implement those relocations in the EDK2 tools. One major platform we want to make work is the virt platform of RISC-V on QEMU, because, like I said, it has VirtIO devices and it would be like OVMF for x86. With that we can add boot tests to the EDK2 CI to make sure nobody breaks RISC-V by accident, and we can boot with an actual disk. We would like to port to the BeagleV because it would be more affordable for everyone. To increase security guarantees, it would probably also be nice to have Secure Boot working on RISC-V. If you want to help, you can of course port more boards — that shouldn't be very hard; if you want to do so, please talk to us and we'll point you at the files that need modification. We have a fork on GitHub and have made issues there, so you can take a look at those on your own and see if you can fix them or help us — but ask how you can help, since we're already trying to fix some of them and you shouldn't duplicate our work. And spread the word: everybody knows about UEFI, but it's not popular on RISC-V yet. We would like to make RISC-V boot like the rest of the current mainstream industry, and that's why we have done this port. Every x86 system pretty much boots with UEFI — every one of your laptops or servers, if they're x86, and some ARM systems as well. Currently UEFI is used for consumer and server systems and U-Boot for embedded, so those should also be the main ways of booting on RISC-V. We would like to follow in those footsteps and what's been done recently to make booting boring, by implementing the UEFI interface specification between the operating system and the firmware, so that the operating system doesn't need to care about the hardware — it just interfaces with the firmware in a defined way. Additionally we would like to encourage discussion and thinking about RISC-V desktops and servers. Thanks for listening. Please let us know if you have any questions, and you can check out our development on GitHub: we have a documentation repository and two development repositories where our work and the issues are, and the upstream repositories are also available. If you want, you can also send us an email. I have some additional slides with references and explanations of terms for your reference. Thanks. Okay, I believe we are live. So hello everyone, we have around 15 minutes for Q&A. I don't see many questions yet, but there was a lot of conversation about the firmware staying resident while the OS is running, so maybe you would like to comment on that. Yeah, it seems like everybody got hung up on that single topic, and answers have also been provided. So, why would you have the firmware running after the OS has booted? Like people said, you can avoid some platform-specific drivers. Of course it's controversial, but I didn't want to get into that debate in this talk. And it's not so bad in this case, because EDK2 installs OpenSBI, and even without changing the source code EDK2 installs the interrupt handlers and the exception handlers, so we can change the behavior.
And there's even a way, as far as I'm aware, to run Linux in M-mode without SBI, so the operating system has full control over the machine. If you have any other questions, please let me know. Yeah, I don't see any yet. We can wait. Okay, I see one question: the UEFI PI specification's management mode (MM) now supports x86 SMM and ARM TrustZone — have you thought about having an MM port for RISC-V, for the kind of machine-mode code that implements SBI? That's a very good question. I haven't looked at the MM specification in PI, so I don't know how it unifies SMM and ARM TrustZone, but it would be a good idea to think about how SBI could also be unified with it. Okay, what are the next steps? The next steps are to make it more robust and to upstream the changes that make it possible to boot Linux, because at the moment, like I mentioned, I have the workaround since there's no disk driver yet. The very first ones would be to implement the relocations for the latest GCC toolchain and to implement an OVMF-like platform for the virt QEMU machine. Okay, and have you taken into account another bootloader, like GRUB? Yes, I have tried GRUB; it already has support for RISC-V. I don't remember what the problem was, but with a disk I think that shouldn't be hard. What's the status of ACPI? I think we haven't made any changes for that, so, just like MM, we haven't thought about it yet — but there could probably be some wrapper for SBI that we expose via ACPI. Okay. Someone notes that the GRUB port for RISC-V UEFI still needs to be completed, so it's not quite as finished as I thought. Okay, yeah. And when do you imagine EDK2 will be fully supported on all common RISC-V platforms? What does "common RISC-V platforms" mean? I think the most common until now was the HiFive Unleashed, so we support that one, and it can boot to Linux with the same demo that I showed. Recently announced was the BeagleV, which is going to be a more affordable board — I think around 150 euros — and we have already requested that board and are going to try to port it. Okay, I don't see any more questions; I hope I didn't miss any. Daniel Kiper, the maintainer of GRUB, says he's going to merge the changes for GRUB. I could take this moment to thank Leif Lindholm, because he has reviewed many of our patches and helped us get the EDK2 port upstream, and Heinrich Schuchardt has also reviewed my patches for the SCT on RISC-V. There's a question whether we have thought about porting it to 32-bit RISC-V instead of just 64-bit. Some of the infrastructure in EDK2 is ready for that, but we haven't attempted it — maybe there are some other things to change, but some of the build system is already in place. We don't have any plan to do that, though. Thank you. Okay, thank you again! Somebody mentioned that FreeBSD has full support for UEFI and that it's the default and preferred method, so I guess it would be nice to try whether FreeBSD works on RISC-V already. The comment doesn't mention that, but it would be interesting to try. So, apparently, specifically for RISC-V, FreeBSD supports booting from UEFI. I didn't know about that, so I guess they used U-Boot to test it. I also saw that Haiku has been trying to boot on RISC-V with UEFI, so we'll check this out. Okay, we have two minutes left; if anyone has any questions, the time is now.
|
RISC-V is a relatively new ISA and platform, which has been evolving rapidly. A few Linux distributions already have good support and have compiled most of their packages for it. The boot process has been neglected and only recently did everyone start using the widely used embedded bootloader U-Boot instead of a custom research bootloader. We have ported the EDK2 reference implementation of UEFI to make the boot process more like current desktops and servers. This talk explains how we did that, how it works and how we got Linux to boot. We also want to explain what's left to do and how we can move in the direction of a RISC-V server platform.
|
10.5446/52511 (DOI)
|
Hello, this is Secure Boot without UEFI: booting VMs on PowerPC. My name is Daniel Axtens, I'm a Linux security engineer for IBM, and I live in Canberra, Australia, so it's really great to have the opportunity to present to you virtually at FOSDEM 21 — a big thank you to the dev room organisers and the FOSDEM organisers for the opportunity. What I'm going to talk to you about today is how we do secure boot on Power. So: what is Power, who uses it and what for, and what is the boot environment like — because it's not UEFI. Then, what are appended signatures, because they're a big component of how we want to build out the system. From that we'll build a fairly standard secure boot chain: we'll take our trusted firmware and verify GRUB, and we'll take our trusted GRUB and verify Linux. And then, because we like open source and upstream, we'll talk about how we're going with getting our patches upstream. So what is this Power thing? Previously PowerPC was best known for being the chips inside Apple Macs. Since then Apple has moved to Intel-based machines and now increasingly to their own Apple Silicon. Today the Power architecture is best known as what powers some large enterprise-focused servers that IBM and some partners build. On the right here you have an example, which is one of the larger systems but not the largest you can get: up to four sockets, up to 48 cores, each of which can have multiple hardware threads, and up to 16 terabytes of RAM — that's quite a bit. The big thing to point out here, though, is this built-in IBM PowerVM. PowerVM is a firmware hypervisor — a type 1 hypervisor — and a full suite of system management software. So rather than booting your x86 machine, booting Linux, and running a bunch of KVM guests inside of Linux, here PowerVM — the PHYP hypervisor — comes up as firmware and manages your logical partitions directly. There's no operating system in the Linux sense that runs the virtual machines. On these machines all of the custom workloads run as these PowerVM-virtualized virtual machines, which we call logical partitions, and what we want to do is securely boot a Linux logical partition, a Linux LPAR. Also worth pointing out: we do have bare-metal machines under the OpenPOWER brand, where you do actually get your Linux directly — you don't have PowerVM, you don't have the proprietary hypervisor — and then you can run KVM, run your KVM guests, and have that sort of hypervisor. And indeed, if you're booting QEMU guests there, you'll have a similar environment to what you would have booting a Linux LPAR under PowerVM. So let's look a little more at the boot environment. The boot environment for these machines is based on a specification called Open Firmware, which is IEEE 1275, and that defines the runtime environment, the services that are provided to you, and so on. How that applies and works on an IBM Power machine is fleshed out in a document called the Power Architecture Platform Reference, or PAPR — that's our reference here. Configuration is based on device tree rather than something like ACPI; that's how you find out about your platform. One noteworthy thing is that PAPR defines the bootloader as a 32-bit big-endian ELF binary. This is in contrast to UEFI, where it is defined as a PE-format binary, and that will be an important distinction.
If you're running GRUB, the platform we call this — and this is the same whether you're booting an old Mac or a Power guest — is powerpc-ieee1275. One of the things firmware does — on a PowerVM system this will be partition firmware, PFW; on a Linux system running a KVM guest this will be something called SLOF — is find the bootloader. When you're first installing your system, you put in a CD, the CD has a file system, and firmware will look on it for a file called bootinfo.txt in a ppc directory. That file is described as being sort of SGML — an XML kind of thing, angle brackets and tags — and it gives you a path to your 32-bit big-endian ELF binary on disk. Then your installer runs and creates this thing called a PReP partition, which is usually something like 4 or 8 megabytes, and the bootloader — the GRUB binary — is usually just stuck directly into that partition. There's no file system, just raw bytes: the ELF header starts at byte 0. This is obviously quite different to UEFI, where you have your EFI system partition, but there's a file system on it and your bootloader lives at well-known paths within it. So the PReP partition will come back a few times as we think about how we do secure boot on Power. The next big concept is appended signatures. Appended signatures come originally from Linux land as a way to sign kernel modules, and they have a lot of properties we like. One of the big ones is that they're really simple: you calculate the signature over the entire unmodified data block — completely oblivious to its structure, format and contents — you wrap the cryptographic signature up in a PKCS#7 or CMS signed-data message, you stick some fixed-size metadata on the end of that, and you stick a magic string after that. You construct it top to bottom; to parse it, you go bottom to top. Building it: take your data, sign it, put the signature in the message, append the metadata, append the magic string. Parsing it: check for the magic string and peel it off, which gives you the fixed-size metadata; peel that off, which tells you this is a PKCS#7 message and what its size is; use that information to grab the PKCS#7 message, and what's left is your unmodified data. Then you can check the signature of the unmodified data against what's in the PKCS#7 message. It's existing crypto, which is really nice, and because it's used in the Linux kernel with existing crypto, we've got existing tools to work with it: sign-file and extract-module-sig from the kernel can sign and parse these signatures, and once you extract the cryptographic material you can just verify it with OpenSSL, which is lovely. Not only are these used to sign Linux kernel modules, they can also be used to sign entire kernels — at least entire ELF-format kernels — and then when you're kexec-ing, the IMA subsystem of the kernel can actually verify these appended signatures. This is actually how OpenPOWER secure boot works: rather than having a firmware hypervisor, firmware loads a little Linux environment, containing a kernel and some user space, that goes and looks through your disks for kernels.
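As an aside before continuing, here is a sketch of that bottom-to-top parse and an OpenSSL-based verification of the detached PKCS#7 signature. The magic string and the module_signature metadata layout match the Linux kernel's appended-signature format; error handling is trimmed and the function name is my own, so treat this as a minimal sketch rather than the code from the actual patches.

```c
/* Sketch: parse a Linux-style appended signature "bottom to top" and verify
 * it against certificates already loaded into an X509_STORE. */
#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>          /* ntohl */
#include <openssl/bio.h>
#include <openssl/pkcs7.h>
#include <openssl/x509_vfy.h>

#define MODULE_SIG_STRING "~Module signature appended~\n"

struct module_signature {       /* fixed-size metadata before the magic */
    uint8_t  algo, hash, id_type;
    uint8_t  signer_len, key_id_len;
    uint8_t  pad[3];
    uint32_t sig_len;           /* big-endian length of the PKCS#7 blob */
};

/* Returns 1 if buf (length len) carries a valid signature by a trusted cert. */
static int verify_appended_sig(const uint8_t *buf, size_t len, X509_STORE *trusted)
{
    const size_t magic_len = sizeof(MODULE_SIG_STRING) - 1;
    struct module_signature ms;
    size_t sig_len, data_len;
    const uint8_t *p7der;
    PKCS7 *p7;
    BIO *data;
    int ok;

    if (len < magic_len + sizeof(ms) ||
        memcmp(buf + len - magic_len, MODULE_SIG_STRING, magic_len) != 0)
        return 0;                                   /* no appended signature */

    memcpy(&ms, buf + len - magic_len - sizeof(ms), sizeof(ms));
    sig_len = ntohl(ms.sig_len);
    if (sig_len > len - magic_len - sizeof(ms))
        return 0;
    data_len = len - magic_len - sizeof(ms) - sig_len;
    p7der    = buf + data_len;

    p7 = d2i_PKCS7(NULL, &p7der, (long)sig_len);    /* parse the PKCS#7 blob */
    if (p7 == NULL)
        return 0;

    data = BIO_new_mem_buf(buf, (int)data_len);     /* the unmodified payload */
    ok = PKCS7_verify(p7, NULL, trusted, data, NULL, PKCS7_BINARY) == 1;

    BIO_free(data);
    PKCS7_free(p7);
    return ok;
}
```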
Coming back to that OpenPOWER scheme: if you're secure booting, you load your keys into that little Linux environment and it verifies the appended signatures on the kernel you're trying to boot to run your workload. So this already exists and is deployed, and that's quite helpful. So far we've looked at PAPR and Power, the boot environment there — especially PReP partitions — and we've looked at appended signatures, which are constructed over unmodified data and stuck on the end. With these things we then go and create a secure boot chain. As I said, if you're running PowerVM, the first thing that runs in your logical partition is the partition firmware. We trust that because it comes from a hardware secure boot chain, so for our purposes here we trust it implicitly. PFW loads GRUB — currently there is no verification there. GRUB loads Linux, and there's no verification of that either. We want to close both of those locks; we want to verify both of those steps. The easy one to start with is GRUB verifying Linux. As we said, there are already Linux kernels being signed with appended signatures, so all we really need to do is teach GRUB to verify appended signatures against an X.509 certificate we provide it with. We've already got some GRUB features and concepts that make this not too challenging. There is the GRUB verifiers interface, which is used for the existing PGP verifier and also for UEFI secure boot verification. We borrow the concept of embedding your key material into your core GRUB image — that's done for the PGP verifier, and we borrow the way they do it. This means we can write quite a small amount of actual crypto code, which is good, because I try to be humble about my ability to write security-critical crypto code in C. PKCS#7 and X.509 are both based on ASN.1, and the world does not need another C-based ASN.1 parser, so we import libtasn1 to do that parsing. ASN.1 is not self-documenting, unlike JSON, so you need a file that tells you how to parse a particular thing encoded in ASN.1; we just borrow that from GnuTLS. And we already have code that can do the actual maths for the signature verification — all of the libgcrypt code is already there. So all we really need to do is write some code to pull out the relevant bits of the PKCS#7 message — the signature itself, the public key material, the common name of the certificate — and then write the plumbing: the verifier, and the code to embed X.509 certificates in the core image and pull them out again. This ends up weighing in, before you count tests, docs and imported code, at only about 2,000 lines of code, which is not too bad. That is all we need for GRUB to be capable of checking appended signatures on the Linux kernels it's loading, which is really nice. We've sent the code to the mailing list to do that, and that closes our link from GRUB to Linux. So that's the easy part. The more complicated part is going from PFW to GRUB, or from SLOF to GRUB — actually verifying the integrity of the GRUB core image itself. A few design constraints we were considering as we tried to figure out how to do this: a really big one for us is backwards compatibility. We want to be able to run guests that support secure boot on older systems, and we want to be able to migrate older guests to newer systems. We don't want everything to collapse.
This means we can't change the ELF binary format we're using, we can't change PReP partitions, and we especially can't move to UEFI. So backwards compatibility is a constraint for us. Another thing we want is to make this as un-novel as possible: we don't want to invent new crypto formats, and we don't want people to have to manage multiple different sets of keys, multiple management processes, multiple signing infrastructures. We want to keep as much familiarity as possible for people coming from UEFI, and as much commonality between the firmware-to-GRUB step and the GRUB-to-Linux step. We also want to support multiple signers — we'll talk more about that later. So what we would like to do — what we think is a nice way of solving this — is to sign GRUB with an appended signature as well. That has some complexities: if we use an appended signature, which would be lovely, how does firmware know that it's there? As we've said, GRUB is loaded from the PReP partition, and the PReP partition is raw bytes — there's no file system, and because there's no file system there's no file size and no clear end of file. So if you want to go to the end of the file to check for your module-signature-appended magic string, you can't, because end-of-file is not a concept in a PReP partition. We also can't read the ELF headers, say "okay, the ELF binary ends here", and skip a fixed amount to find the magic string, because the length of a PKCS#7 message depends on a few things: the number of signers inside the message, the keys, and the hash types you're using. So it's not as if it will always be, say, 4k after the end of the ELF binary. Something we could do, but would much rather not, is get to the end of the ELF binary and just keep crawling through the PReP partition until we either run out of partition or find a module-signature magic string. The reason I'm not keen on that is that it makes it really difficult to distinguish between a binary with an invalid signature and an unsigned binary that has simply been dd'd into the PReP partition on top of the remnants of an old signed binary. In the latter case you'll keep scanning, find the old signature, and report that the signature is invalid — but a much more helpful error message would be "you've put in a binary that's unsigned". If you keep scanning, you lose the ability to differentiate between those two cases, so we'd rather not do that. What we think works better is to use the properties of the ELF format itself. We want all of our data to be within the ELF structures, so firmware knows basically where to find it, while keeping the properties of appended signatures that we really like — in particular that the appended signature is totally oblivious to the content of the data being signed; it doesn't need to know anything about it. So what we propose is to create an ELF note — a section at the end of the binary designed to contain this data. We give the note a type, which happens to spell out "ASig" if you interpret it as ASCII, and we give it a name, appended signature.
The description of that ELF note — an ELF note contains a type, a name and a description — is where the appended signature data lives. We make sure this is always the final part of the ELF binary, so that what we end up with is an ELF binary that simultaneously contains all of the data within the ELF structures and has a valid appended signature, because that signature happens to sit at the end of the file. We do some clever tricks with the padding: we're required to have four-byte alignment for these notes, so we put the padding at the beginning, so that the appended-signature magic is always right at the end of the file. And this actually works — it's quite nice, and we can parse it in SLOF and in PFW. The complexity for us comes with multiple signatures. This is all great for a single signature, but sometimes you want more than one. A couple of reasons why you might: one is key rotation — a distro might simply want to rotate out its old keys. Another reason for key rotation is that at the moment we don't have a key management system; we have a fixed key in firmware, and maybe you want to rotate the keys embedded in firmware, in which case you might want something signed for both the old firmware and the new firmware. In that case both signatures are made at one point in time and the signer has access to both sets of key material. The other use case that was brought to us as we've been talking about this is someone with, say, a production infrastructure and a test infrastructure: in production they only trust their own keys, but they also want to keep the existing distro signature because they use that in their test infrastructure. Here the signatures are made at two different points in time, and the second signer doesn't have access to the first signer's key material. For PKCS#7 and appended signatures broadly, this is not a problem: the PKCS#7 format can hold multiple signatures — multiple SignerInfos, multiple digests encrypted by different people with different issuers — and you just adjust the size of the PKCS#7 message and everything carries on. Tools to create the sort of message described in the second use case are a bit lacking at the moment, but you can definitely create two signatures at the same time. However, we get into a bit of a pickle when we think about how this applies to the ELF note structure we want to create. In particular, the ELF note contains the size of the signature — it's not on the diagram, but the note contains the contents of the description and the size of the description, and the size of the description is the size of this signature block. That means everything works fine if you're signing everything at once: if you're the distro creating a set of signatures at one point in time with both sets of key material, you know how big it will be. Where we get into problems is the second use case, adding a signature after the fact: that changes the size of the PKCS#7 message, which means you either start spilling out past the end of the binary or you need to change the size of the note, which will break the original signature.
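As an aside, here is a rough sketch of the note layout being described — the standard ELF note header from <elf.h> followed by the signature blob in the descriptor, placed as the very last content of the file. The exact type value, the note name, and the helper function are illustrative assumptions based on the description in the talk, not copied from the actual GRUB patches.

```c
/* Sketch of the "appended signature" ELF note described above.  Layout per
 * the standard ELF note format: Elf64_Nhdr | name (padded to 4 bytes) |
 * descriptor, where the descriptor carries the PKCS#7 appended-signature
 * blob and any leading alignment padding. */
#include <elf.h>
#include <stdint.h>
#include <string.h>

#define APPENDED_SIG_NOTE_NAME "Appended-Signature"   /* illustrative */
#define APPENDED_SIG_NOTE_TYPE 0x41536967u            /* reads "ASig" in ASCII */

/* Given a pointer to a note header, decide whether it is the appended-
 * signature note and, if so, return its descriptor and length. */
static const uint8_t *find_appended_sig(const Elf64_Nhdr *nhdr, size_t *desc_len)
{
    const char *name = (const char *)(nhdr + 1);
    size_t name_padded = ((size_t)nhdr->n_namesz + 3u) & ~3u;

    if (nhdr->n_type != APPENDED_SIG_NOTE_TYPE ||
        nhdr->n_namesz != sizeof(APPENDED_SIG_NOTE_NAME) ||
        strcmp(name, APPENDED_SIG_NOTE_NAME) != 0)
        return NULL;

    *desc_len = nhdr->n_descsz;
    /* The descriptor may start with alignment padding; the appended-signature
     * magic string then ends up as the very last bytes of the whole ELF file. */
    return (const uint8_t *)name + name_padded;
}
```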
Coming back to that sizing problem: it's a bit tricky to work around. We could use a more complex format where, for example, you don't hash the size of the description, so you can keep changing it without breaking the signature — but then we've broken some of the properties of the appended-signature format that were attractive in the first place. The appended signature is supposed to be oblivious to the data being signed and to be made using standard tools, and if we have to start prodding around inside our ELF binary in order to create or verify the signature, we've lost the properties that made appended signatures great in the first place. We really don't want to end up reinventing Authenticode for ELF, but worse — and that's the real challenge here. Our proposed solution — and we're open to better ideas if you have them — is to pre-allocate a bunch of this space. What we found is that we can create the PKCS#7 message block with the original signature plus some extra padding, and record the size of that PKCS#7 message plus padding inside the appended signature. When you go to parse the PKCS#7 message, if you tell OpenSSL there's 32k of message and your PKCS#7 message only uses 4k, OpenSSL will read the 4k, throw out the rest, and everything is happy. This maintains a bunch of the properties we really like: the signature still sits at the end of the file, it still doesn't need to know anything about the content being signed, and it means there is one scheme that works everywhere — whether your file comes via ppc/bootinfo.txt, from your PReP partition, or across the network. It's not perfect: there's a limit to the number of signatures you can add, but that's probably not a problem in practice, because signatures aren't that big and if you allocate 32k you should be able to store dozens. It is an abuse of the size field and the metadata, but that doesn't seem to be a practical problem either, and in theory this lets us satisfy all of our goals. So, in summary: appended signatures everywhere. We build GRUB with an appended signature by adding this ELF note to the GRUB core image that says "we have an appended signature, it is here, it is this big". To allow GRUB to verify Linux, we build in an X.509 certificate and teach GRUB how to verify the appended signature on the Linux kernel with that embedded certificate. A nice thing is that this should be portable: nothing about it requires the Power platform — if you run something else that isn't UEFI, you can absolutely use this too. That allows us to close the loop from PFW to GRUB and from GRUB to Linux. We've been working on this upstream for a while, and all of the parts you should need are there: the code to sign GRUB with an appended signature is pretty straightforward, and verifying appended signatures from GRUB is a little more involved, but it's all there and it works. There's also a little bit of extra code you can use if you want all of this to be controlled by a secure-boot property advertised by firmware. That part is potentially a bit more controversial, because it could be used to implement the sort of lock-in that people have historically been concerned about with secure boot.
That's not a practical problem on Power systems, because you can turn off secure boot — partition secure boot — in firmware. But yeah, there it is. And if you follow those links, there is also a patched QEMU and a patched SLOF that allow you to test the entire end-to-end system using only open source software. So thank you very much for the opportunity to present this to you, and I look forward to all of your very intelligent questions. Thank you very much. So we can start — feel free to ask questions. When do you expect SLOF to have this upstream, and does it add more time to boot? That's a good question. There are patches for SLOF on GitHub if you follow the tree of links in the emails, and I'll also post a link in the chat in a moment. Does it create a delay in booting? It depends on how large your image is. I haven't measured it, so I can't say for sure. I certainly haven't noticed any delay in loading GRUB, but it might add a couple of seconds when loading a Linux kernel — then again, it could just be that loading a Linux kernel takes a while anyway. It certainly hasn't been a particularly egregious delay; it just doesn't take long to boot a kernel — I don't know, I haven't measured. Sorry, here's a link to my GitHub, which contains a branch with the patches. Do you have any other questions? There's another one: I might have missed it during the talk, but where does Power have its root of trust? No, you didn't miss it, because I didn't think to say it. Power has a hardware root of trust: there are keys baked into the chips that are used to verify hostboot, and it differs between OpenPOWER and PowerVM. There are a bunch of good public documents about it on the IBM website — here is one from my history; that one is for OpenPOWER and is part two of a series, and part one is there somewhere as well. That explains it for OpenPOWER, and if you search, PowerVM also has a similar explanation, and it starts with keys stored in hardware, baked into the chip. So I'm just waiting to see if there's another question — I see someone's typing — and if not, I've got a terminal window set up and we can experiment. We've got a live demo. We have about two minutes, I guess — that might be cutting it a bit fine, but let's see what we can do. There's a comment about the hardware root of trust — that it's implemented with keys from OTP fuses or whatever the motherboard provides, that firmware can write some keys to the processor, which then checks the firmware. I've got to say I'm not 100% sure how that works, because I'm in the fortunate position of being able to trust that firmware has got it right, and I don't particularly need to care about it. But all of the documentation is on IBM's website and I would commend it to you, because it's very good — I have read it before, I just can't remember all the details. This works better if I can spell; I can't remember my own variable names. Anyway, this is set up to actually verify, but because I've forgotten the variable name, that's not a very impressive demonstration. You can see it's tried to load something and failed — that was probably a module — and then it has successfully verified this kernel and is booting it. So that's nice.
And I think that's probably about all we're going to have time for, but we do have a dev room — an extra room to chat in that will be posted in a moment — so you can ask me more questions there. So that run has secure boot off, and it won't have printed any message about trying to load something, because everything will have loaded successfully. This will load a Red Hat kernel and start booting it, and even that takes a couple of seconds to get going, so it's quite possible that verifying doesn't add an appreciable delay. What we could do, if we wanted, is drop in an unsigned GRUB image — so we could do a sudo grub-install to sda1 or vda1, I can't remember which. Then if we reboot and turn on secure boot mode, we should find that firmware refuses to load GRUB, which indeed it does, which is good. Then it tries to TFTP and that fails. So yes: we're in secure boot mode, but no appended signature was found. And if I then disable secure boot mode, I should be able to boot. Which — oh, that's not good. Well, I booted, but something else has gone horribly, horribly wrong: I did not sync my disk before rebooting and I have corrupted my file system. So that's live demos for you. Oh man, that's quite something. Well, thank you all for the opportunity, and as I said, if there are other questions, I'm very happy to answer them in the chat, and there'll be a room especially for this talk quite soon. We actually had a problem with our test suite where we did something similar, and it turned out that we needed to make sure we shut down properly. Trying to think if I can — I don't think I can rescue that in a timely fashion. Oh well. I will say that the SLOF code is not quite production ready, just to go back to the question that was asked before. The SLOF code is primarily a demonstration and testing tool. It only verifies when you are loading an image — oh, actually, I can't remember how it ended up: I thought it only checked if you loaded an image off the PReP partition, but I think I actually put the code in the ELF loader, so it will check properly. A problem you will have, though, is that it's not hardened at all. For example, there are no checks that you aren't doing something nefarious in your ELF binary that upsets the ELF parser and tries to gain RCE through some sort of memory corruption in the ELF parser or any other part of SLOF. The productionised version is planned for partition firmware. But still, you can experiment with it and it does work, as we have demonstrated. We have four minutes for the question and answer section in the main dev room, so feel free to ask more questions if you have any. Just thinking back over the questions asked before: as well as the SLOF changes, you may remember that I toggled whether a secure-boot device tree property was passed to QEMU, and that relies on some QEMU patches — I will post the link, although you can sort of infer it from my username. The tree for that is — let me just get the correct branch — I believe this one here, but again it's in the email threads I've sent to the grub mailing list. And that one is the QEMU tree. So: you build that QEMU, QEMU passes the secure-boot device tree property, SLOF checks the property and, if it's present, verifies GRUB; GRUB checks whether the property is present and, if it is, requires appended signatures.
But you can also build it without the patch that checks the device tree property, and that way it's not possible to lock anyone out of their system. And secure boot mode can be disabled in firmware — you can disable it through the hypervisor management console — so there isn't the same sort of concern about freedom that people had years ago with UEFI secure boot. But we're conscious of that as an issue. Beyond that — no, I don't think I can repair that VM immediately. We're at the one-minute mark, so if there are any further questions, feel free to ask me.
|
Much of the Secure and Trusted Boot ecosystem is built around UEFI. However, not all platforms implement UEFI, including IBM's Power machines. In this talk, I will talk about my team's ongoing work on secure boot of virtual machines on Power. This is an important use case, as many Power machines ship with a firmware hypervisor, and all user workloads run as virtual machines or "Logical Partitions" (LPARs). Linux Virtual Machines on Power boot via an OpenFirmware (IEEE1275) implementation which is loaded by the hypervisor. The OpenFirmware implementation then loads grub from disk, and grub then loads Linux. To secure this, we propose to: - Teach grub how to verify Linux-module-style "appended signatures". Distro kernels for Power are already signed with these signatures for use with the OpenPower 'host' secure boot scheme. - Sign grub itself with an appended signature, allowing firmware to verify grub.
|
10.5446/52513 (DOI)
|
Hello, and welcome to my talk about the status of open source firmware on AMD platforms in 2021. My name is Piotr Król, I'm the founder of 3mdeb Embedded Systems Consulting, a Polish consulting company. I've been 12 years in business, 6 years doing open source firmware, and I'm also a C-level manager in various other companies. On the community side, I do coreboot contribution and some maintainership, I'm a frequent conference speaker and organizer, I train various organizations, and I'm a former Intel BIOS software engineer. So 3mdeb, as I said, is a Polish consulting company. We have been a coreboot licensed service provider since 2016 and we are members of the leadership of the coreboot project. We have also been UEFI adopters for the last two years. We are Linux Foundation official consultants for the fwupd/LVFS project, as well as Yocto Project participants and embedded Linux experts. We also love to evangelize about open source firmware, and we definitely want to fight to have as much open source firmware as possible. So what will I talk about today? First I'll go through some definitions to refresh some knowledge you may already have. Then: what's the status of AMD platforms in coreboot, a little bit about AGESA and its history, how AMD support looks right now and what the future will be. Maybe I'll try to answer the question of whether it's just about Chromebooks or also about other platforms, and say a little about platform maintenance and some platforms that were dropped. Let's start with definitions. What is AGESA? In short, it is initialization code for the processor, memory, and a couple of other components inside the CPU. We can easily call it the FSP for AMD — FSP being the Firmware Support Package released by Intel. Obviously AGESA and FSP are binary blobs, binary-only components; of course, people under NDA can sometimes get the source code, based on special relations or on fulfilling some business policy with the silicon vendor. Typically this is not monolithic: it contains various components like platform initialization, silicon initialization, some drivers, and some external interface implementations. Unfortunately, despite AGESA being compliant with the UEFI spec and based on the reference implementation from the TianoCore project, which is called EDK2, it still does not support open source toolchains like GCC or LLVM. I asked people from AMD's open source firmware group directly about that, and they said they were working on modifications to improve that and to support GCC. The second thing I want to talk about is the AMD security processor. It is a co-processor in the chipset which performs operations similar to the Intel ME: security functions — for example an fTPM implementation and crypto support — CPU bring-up, and similar things. There is a really great talk about it from the Chaos Communication Congress, which you can find on YouTube. For the processor names, code names, and architecture names that I refer to in this presentation, you can look at Wikipedia. What does AMD say about AGESA? They say that AGESA roughly consists of the processor core subsystem, PEIMs — which are modules of the PEI UEFI phase — and DXE drivers, which are components of the DXE phase of UEFI. AGESA produces AGESA PPIs and AGESA protocols, and those two things are used by external UEFI firmware, or other firmware, to communicate with AGESA and get information about, for example, the progress of memory initialization, CPU initialization, and interconnect initialization.
As we can see on this diagram, the UEFI firmware talks to the AGESA PPIs and AGESA protocols, and inside AGESA we have Infinity Fabric (interconnect) initialization, memory initialization, and CPU initialization. That is what AGESA is. AGESA v9 is a closed-source implementation; we really only know about two relevant versions. V5 is the famous open source AGESA from the coreboot tree — some devices still use it — and v9 is the closed-source one dedicated to family 17h, like Ryzen or EPYC. It uses UEFI interfaces and integrates only with EDK2, the UEFI reference implementation. At some point it turned out that version 9 could not meet the firmware requirements presented by Google for Chromebooks, so the interfaces between AGESA and the external world had to be modified to provide an abstraction that coreboot can consume — obviously coreboot will not implement the UEFI spec, which is why another abstraction was needed here. That abstraction already existed: Intel FSP. Intel created the FSP specification, which defines an abstraction on top of code delivered using UEFI EDK2, and since 2014 FSP has created a well-established environment for integrating various open source firmware projects like coreboot and U-Boot. So when AMD wanted Ryzen and EPYC inside Chromebooks, that meant v9 needed to get FSP support. How was this FSP adaptation made? It was done for family 17h, Ryzen and later processors. Google was the lead partner for AMD in creating the Picasso FSP, which is precisely AGESA with an FSP interface. It is compatible not only with Chromebooks but with all Picasso-based systems. Of course, there is the question of whether we can get those systems, whether we can start to port them, whether they are not locked down — but at least, if some OEM would like to create open source firmware for a Picasso-based platform, it is possible now that all the Picasso patches are in coreboot. After the FSP adaptation, AGESA v9 conforms to the FSP 2.0 specification. That's good, because the specification is open and we know the interface; of course we would like to get rid of any proprietary code, but right now, in the x86 world, that does not seem feasible. New AMD systems typically do not initialize DRAM on the main processor — DRAM initialization happens before the main processor starts and is done by the AMD security processor — so there is no need for stages like cache-as-RAM, treating cache as RAM. That simplifies things a little, but of course gives us less control over what's going on, and we still have to deal with a secret processor behind a closed-source component. This effectively eliminates the FSP-T stage, the temporary RAM initialization phase. V9 also contains a few additional hand-off blocks (HOBs), which are binary objects used to transfer information from the secret world to the outside world for consumption. There are more details in Kerry Brown's talk from the Open Source Firmware Conference 2019, although that one is a little outdated because other design decisions have been made since. The main product for which this port was done was the Google Zork Chromebook.
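For context, the FSP 2.0 specification that AGESA v9 now conforms to defines a small set of C-callable entry points that a bootloader like coreboot invokes. Below is a hedged sketch of the memory-init call: the function-pointer signature follows the public FSP 2.0 External Architecture Specification, while the helper and parameter names are illustrative and how the entry offset is located inside the FSP-M binary is simplified.

```c
/* Sketch of calling the FSP 2.0 FspMemoryInit entry point, as a coreboot-like
 * bootloader would.  NB: the real call must use the EFIAPI (MS) calling
 * convention on x86-64; that detail is omitted here for brevity. */
#include <stdint.h>
#include <stddef.h>

typedef uint32_t efi_status_t;          /* EFI_STATUS */

/* Per FSP 2.0: EFI_STATUS (EFIAPI *FSP_MEMORY_INIT)(VOID *FspmUpdDataPtr,
 *                                                   VOID **HobListPtr);     */
typedef efi_status_t (*fsp_memory_init_fn)(void *fspm_upd, void **hob_list);

efi_status_t run_fsp_memory_init(uintptr_t fspm_base, size_t entry_offset,
                                 void *fspm_upd, void **hob_list_out)
{
    /* entry_offset would normally be read from the FSP_INFO_HEADER inside
     * the FSP-M image; it is passed in here to keep the sketch short. */
    fsp_memory_init_fn fsp_memory_init =
        (fsp_memory_init_fn)(fspm_base + entry_offset);

    /* The UPD ("updatable product data") block carries board-specific
     * settings; the returned HOB list describes the memory map. */
    return fsp_memory_init(fspm_upd, hob_list_out);
}
```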
A key point of that port was the creation of resource allocator v4, which introduced improvements in the efficiency of device memory allocation: there are no gaps in the memory map, because instead of allocating memory after all the reservations, the allocator looks for any gap between memory ranges that a given device can use. As a result, in 4.14 — the next release — allocator v4 will also have to support family 14h to 16h, because otherwise those platforms will be dropped. So people who maintain or own family 14h to 16h platforms should care about this and support the migration to the new resource allocator. Of course we, as the PC Engines firmware maintainers, will make it happen for our code base. From the source code point of view, from the reviews, we can see that there is no contribution from the amd.com domain, but the people contributing from their own Gmail or other accounts are hired by AMD, and that's no secret. The stats show that a lot of AMD code was removed and a little added; there are still a lot of patches pending, and what was added is mostly skeleton code for new platforms like Picasso or Cezanne. What's under review? AMD Cezanne support — this is AMD Ryzen 5000. Of course, the mapping between the technical name and the commercial name is hard to do, and the next item is a clear example of that: we also have patches for AMD Majolica — or I don't know how to read that — which is an FP6 APU, and we don't know how to map it to a product name. From the coreboot leadership meeting we heard that maybe AMD servers will also get open source firmware support. In short, the community asks when we can get something usable, and the short answer is that we have no idea, but work is in progress. Because of the groundbreaking changes to the architecture — as I said, we don't have DRAM initialization on the main processor; the AMD security processor does it — and because Picasso is now under finalization, once that is done, enabling the new processors that do no DRAM initialization on the main CPU will be much faster. Of course, we see that AMD recruits firmware developers from the coreboot community, so if you're looking for a job that may let you work on coreboot, it may be worth considering. But 3mdeb is also looking for developers, so feel free to join us. In the case of other processors, like EPYC, we saw in Ron Minnich's presentation at OSFC 2020 that he implemented oreboot support from scratch. It's written in Rust and is fully open source so far, but there are limitations to what was done — it's not exactly a fully featured system. It was done on an AMD EPYC customer reference board, which most people cannot get. If we want to try that code, we have to get some platform from the market, and the question is whether we can find such a platform — still expensive — that is not vendor-locked. Also, consulting companies like 3mdeb that might obtain this kind of board have to justify the expense, and to be honest that's hard without business goals. And there are still no people who want to leverage Ron's work. You have to know that Ron's code initializes a minimal set of low-speed interfaces, which of course allows booting Linux, but that is very limited usage.
And if you're thinking about a fully featured system, definitely there is more work that has to be done, and some coordination, since this is a little bit bigger project. Another thing that AMD contributed is support for OpenBMC. So AMD EPYC processors are probably getting support for OpenBMC; you can find meta-amd in the OpenBMC project repositories on GitHub. This is done based on the EPYC Ethanol-X customer reference server platform. As in this screenshot — of course, this screenshot comes from a Talos machine, so this is a different platform — you can see that there is the Phosphor web UI with AngularJS and Node.js. The Phosphor UI will migrate to Vue.js soon, but that is maybe a little bit different story. You can watch the video from OSFC 2020, which presents this effort. What about the future? There was a lot of work done by Kyösti Mälkki and Michał from 3mdeb, mostly cleanups and fixes, and quite a lot of code landed in the repository for the given platforms. This was multiprocessor initialization tables, interrupt tables, and also some ACPI code, which still has to be improved, also to take care of power consumption improvements and this kind of stuff. The platforms are still kept alive and can be maintained in the coreboot project. From the 3mdeb side, we integrated AGESA v9 in a version that is without the FSP interface, and we did that integration using EDK2. We are planning to release that integration code, because right now the situation is that using the GCC compiler you cannot use AGESA plus EDK2 — it's a problem because there are some conflicts in the interfaces. We cleaned up that stuff and would like to contribute it to the EDK2 project, but this is still a best effort; we are planning to do that in 2021, and of course I don't know if this will happen. We're doing this under the Dasharo safety-critical brand — if you don't know what Dasharo is, feel free to Google that. And this was done on an embedded platform from DFI; this was a COM Express module that we enabled. Problems we faced with that: of course, the problem with the MS ABI. This code was tested only with Visual Studio Express 2010 — that's really bad — and of course GCC didn't work. But we cleaned that code up and want to publish whatever is there. There were some function definition mismatches and various other minor problems, and of course some fixes could be contributed back to AGESA v9; we're talking with AMD about whether this even makes sense, but knowing the history of the FSP contributions and how they propagate, this will probably not be a successful effort. Platform maintenance: some platforms like the ASUS KCMA-D8 were dropped. Those were unmaintained, there was no owner of the port, nobody was interested, and there were a lot of bugs around those platforms — that's why there was a decision to drop them. Of course, there is a branch which still has a version we can go back to in order to re-port the platform to newer coreboot. It makes sense for coreboot to drop and not maintain that stuff, because they cannot move forward because of some old code which nobody can test, to be honest. The huge problem with those platforms is that they were blob-free, fully libre hardware: there was no PSP, there was no microcode required to boot the platform, and that's why those were very clean ones. What did 3mdeb try in order to keep the platforms in the source tree? For example, we tried to get funds from NLnet, but unfortunately this was rejected because the platform is very old and they don't want to spend money on old hardware.
Definitely the situation looks better when it comes to DRTM support — we are the TrenchBoot maintainers for AMD. We also, together with Insurgo, tried at 3mdeb to revive the platform, but there was not much interest in the community, not much demand for the platform, so Insurgo could not justify the expense of sponsoring a full port of the platform to coreboot or a full re-upstreaming of the platform. At this point, huge kudos to Thierry for going above and beyond to support that platform. Thierry, I know you did a lot of work, and thank you for that. There is still a little chance that the Free Software Foundation will engage to bring up the platform and sponsor this effort. But who knows — if you have any means of reaching them, please let me know, since at this point they did not reply to my initial emails. So the last hope is that 3mdeb will organize and coordinate monthly hackathons. These hackathons have to be paid, because we have to spend our resources to provide support for them, but this would be a way smaller amount than if we did the port ourselves. So if anyone is interested in taking part in that effort, please let me know, and maybe we can bring the KGPE-D16 back to coreboot upstream. Some references, and that's it. Thank you very much.
|
This is the continuation of the "Status of AMD platform in coreboot" presented last year in the Open Source Firmware, BMC and Bootloader devroom. The talk will cover the news around AMD support in the Open Source Firmware ecosystem from the past year. You will hear, among other things, about: FSF RYF KGPE-D16 platform revival, AMD Ryzen R1000/V1000 series AGESA integration into open source TianoCore EDK2, TrenchBoot new features and updates, current support of AMD Picasso and Cezanne SoCs in coreboot, and pure open source on the AMD Rome platform in oreboot. The history of AMD cooperation in the coreboot project reaches back to 2007, when the first contribution appeared for the Geode LX processors. AMD's open-source support continued for many years until now (with some breaks). This presentation will briefly introduce the history of AMD and coreboot, the evolution of the code and processors, the creation of CIMX and AGESA, and so on. It will also show the gradual change in AMD's attitude to open source and the introduction of binary platform initialization. Binary blobs, very much disliked by the open-source community, started to cause problems and raised the need for workarounds to support basic processor features. Soon after that, AMD stopped supporting the coreboot community. Moreover, recent coreboot releases started to enforce certain requirements on the features supported by the silicon code base. Aging platforms kept losing interest and many of them (including fully open ones) are being dropped from the main tree. Nowadays AMD has released the newest AGESA with the cooperation of hired coreboot developers, but only for Google and their Chromebooks based on Ryzen processors. 3mdeb is trying hard within this ecosystem, showing that the AGESA can be integrated into Open Source Firmware like TianoCore EDK2 on the example of the AMD Ryzen R1000/V1000 processors. Even the FSF RYF KGPE-D16 platform is experiencing its second youth by being revived for the main coreboot tree. If you are curious about these activities and many more, like TrenchBoot new features, AMD Picasso and Cezanne SoC support in coreboot, or pure open source on AMD Rome in oreboot, this presentation is for you.
|
10.5446/52515 (DOI)
|
Hi, and welcome to a talk about Enarx. We are an open source project written in Rust and WebAssembly, and we want to talk to you a bit about the project and why we made some of the choices we did. So first a little bit about Enarx. What we're trying to do is use trusted execution environments — you shouldn't be surprised by that, given the room we're appearing in — SGX, SEV, we hope to support TDX in the future, and whatever else turns up, for confidential workloads. We make it easy, and possible, for users to develop and deploy workloads, and we have some very strong security design principles. We aim to be cloud native, so the longer term aim is to be able to deploy via Kubernetes or OpenShift or whatever, or via the command line if you wish. And of course, we're open source. The project is not production ready yet, but we've got a great demo to show you. And we're part of the Confidential Computing Consortium, which is itself part of the Linux Foundation. So the first thing we're trying to address is the three types of isolation. Isolation type one is workload from workload, and we're pretty good at that; I think everyone knows that there are opportunities with containers and VMs to do that. Second is host-from-workload isolation. Again, that's pretty much state of the art, we know how to do it. The difficult one is protecting workloads from the host. Our view is that you shouldn't trust the host at all; the only thing you're going to need to trust is the CPU, and we don't need to trust anything else other than that. It's important for lots and lots of different sectors and different types of hosts, whether it's on the edge, in the cloud, or wherever. So enough of that, let's get on to some interesting stuff. What are the problems with TEEs? Well, first of all, you've already got SGX and SEV, and more are coming. If you deploy for different platforms, currently that generally means you need separate development, and that's a pain. Obviously, that comes typically with different SDKs, so that significantly restricts the languages you're going to be able to write in. A really big one is different attestation models. Now, attestation is difficult and not talked about as much as it should be, to be frank. But it's really very important: if you're not attesting your TEE instance, then you can't be sure that what you're running is as safe and secure as possible. The attestation models are different for the different types of TEE, and that's tricky. And of course, you've got different vendors — so what if different vulnerabilities come out at different times? How do you deal with that? So if you actually just want to deploy workloads as a company or an individual, you have to deal with all of these things. And we believe, in Enarx, that we should make this as easy as possible for you, but without making compromises in terms of security. So here's a bit about Enarx. Here are our design principles; we take these very seriously indeed, and I'm going to talk about how Rust and WebAssembly and open source apply to all of these. First of all, we want you to have the minimal trusted computing base: the more you have to trust, the more there is that can go wrong. The second is minimal trust relationships. That's not just the size, that's the number of relationships you have to have. Do you want to have to trust the host provider — that's the CSP maybe — and the OS provider, a middleware provider? How many different relationships do you need to trust?
We believe that as few as possible is good. We want you to be able to deploy very simply and portably: we don't want you to have to choose upfront whether you're deploying on an SEV box, so that's an AMD box, or an Intel box running SGX or TDX, or in the future maybe an IBM box running PEF or anything else. We don't think you should have to choose that; we think you should just be able to compile once and deploy it. The fourth one I could go into in some detail, but we probably don't have time now — maybe we'll get a couple of questions on it. We decided to put the network stack outside the TCB. That's because, well, historically there has been lots of trouble, lots of vulnerabilities discovered over the years, with network stacks. And if you're going to be encrypting everything on the way out, which is what we enforce, you can significantly reduce your concerns by putting the network stack outside the TCB. Then, security at rest, in transit and in use. TEEs, we tend to talk about them providing the third one, security in use; security at rest is the storage, and security in transit is the network. Now, I talked just before about how, if you encrypt all network flows going in and out of your TEE, then you can actually have some control over that. We made the decision to enforce encryption of everything going in and out, network and storage. So we make it as difficult as possible for you to do the wrong thing with your application, with your workload, in a TEE. Auditability, auditability, auditability — hugely important, obviously. If you're going to trust this stuff, you need to be able to audit it, which leads us to seven: if it's open source, it's all auditable, and that's the decision we made. Open standards: we don't want to be forcing you to do new things, we want you to be using existing standards. We want memory safety and we want no back doors. So, WebAssembly. I'm not going to talk in detail about WebAssembly. It has great support across pretty much all browsers now, very good and growing support across multiple vendor platforms — that's silicon platforms — and it's portable, that's the thing, and JIT-able. WASI, which is the WebAssembly System Interface, is designed to cover a headless type of WebAssembly, so obviously not needed for a browser, and that's what we're using. We're using Wasmtime as the runtime for that. Completely open source, of course, with very few trust relationships to worry about there. You get the deployment-time portability, and because it's open source, it's auditable. Also, WASI is being standardised now, and WebAssembly is already standardised, under the W3C. Rust. Well, Rust is lovely. It gives us the opportunity to do all the other stuff we need. Wasmtime is written in Rust. Obviously Rust itself doesn't give you the deployment-time portability, but it allows us to compile up our binaries to provide that for you. However, we do get all the auditability and open source, and a huge thing here, of course, is memory safety. Rust is very strong on memory safety, which is a vital requirement if you're not going to be leaking information all over the place in your workloads. Why open source? Well, I don't think this conference is going to be surprised that we chose open source. Obviously, minimum trusted computing base: we want what we do need to trust to be as small as possible, and we can do that by choosing the bits we use. I guess I could put minimum trust relationships in there too, depending on how you look at distributed trust relationships.
But hey, certainly auditability, open source, open standards. And the last one: we are committed in the project to allowing no backdoors, and if everything is open source, then other people can look in as part of the auditability and check for that. So that's a key thing as far as we're concerned. That's the intro; let me just give you a bit of a view of how the world looks. We talk about Keeps. A trusted execution environment with all the stuff in it that we want in it, we call, in Enarx, a Keep. If you think of a castle, the keep is the bit where you keep all the safest pieces. What's important at the bottom is what I talked about: runtime portability. At the top you've got your application, and we believe you shouldn't care, you shouldn't need to worry about, what you're running on underneath. So we provide the WASI and WebAssembly layers and separate runtimes and shims, et cetera, based on whether you're in a process-based Keep like SGX or a VM-based Keep like SEV. But as far as the application is concerned, the properties — the security properties and the runtime — should look the same. As far as the application is concerned, it's running on WASI. Now, here we get back to the WebAssembly question: how do you take an application and compile it to WebAssembly and WASI? Well, the answer is that it's extremely, extremely simple. Most of the major languages out there already have compile-time targets for WebAssembly. So let's say you're writing something in Rust. You do a "cargo build --target wasm32-wasi", you press return, and as long as you've got everything installed, you get a wasm binary out, a .wasm file. It is really that simple. It's very similar for C and C++, and you can compile from Java, Go, .NET — lots and lots of different languages can compile directly to WebAssembly. So WebAssembly is what gives us this portability and allows us to build the same runtime view and execution environment on top of some pretty radically different back ends. Not only do we have process versus VM, which are pretty big changes, but you'll see we've got PEF, and that's an IBM POWER-based approach. It's not available yet, but there's already some design information and architecture out there from IBM, and we absolutely plan to support that. So this gives us a lot of opportunities. Architectural views: on the left, in the sort of big white box, is what we're calling the Keep — that's the picture we just saw before. The Enarx runtime is all those bits below, and the application is the bit at the top. On the host, which is the thing you don't trust, there is something called the Enarx host agent. Now that is provided by Enarx, but you do not need to trust it; from the point of view of the client — and we talk about trust from the client's, the tenant's, point of view — it is an untrusted component. The client does need to trust the CPU and the firmware, but those are cryptographically signed, and you need to trust whatever is actually running your code, of course. On the right hand side we've got the Enarx client agent, and that is trusted, so that needs to run either in a TEE itself — although then you're going to need to do some bootstrapping — or on some trusted hardware. And that can be addressed by a CLI or, in the future, an orchestrator such as OpenShift or Kubernetes or maybe OpenStack, to deploy your application via the host. We don't have a huge amount of time to talk about the detail.
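To make the compile-once idea concrete, here is a minimal sketch of that Rust-to-WASI step — a plain hello-world crate, nothing Enarx-specific, with the build commands shown as comments (this assumes the wasm32-wasi target has been added via rustup):

    // src/main.rs — ordinary Rust, no WASI- or Enarx-specific code needed.
    // Build with:
    //   rustup target add wasm32-wasi
    //   cargo build --release --target wasm32-wasi
    // The result is target/wasm32-wasi/release/<crate>.wasm, which any
    // WASI runtime can execute.
    fn main() {
        println!("Hello from a WASI module!");
    }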
We have a more complex view, which we're very happy to go through in the Q&A or maybe offline, and we can talk about how you can engage later on. I'm also not going to go into this in huge amounts of detail just for reasons of time, but the important thing is that the Enarx client agent talks to the CPU and the firmware via the Enarx host agent, which, as I said, is untrusted and basically acts as a proxy. The Enarx client agent gets a measurement from the CPU and firmware of the Keep without the application in it. So before it's entirely provisioned, it gets the Keep with just the Enarx runtime, and the Enarx client agent attests it and checks whether that measurement is correctly signed. If and only if that is the case, it will take the workload, the application, encrypted under a one-time session key associated with that particular Keep — that's the TEE instance — and that goes straight into the Keep, where it is decrypted within the Keep. At no point is the workload or any data visible to the host agent or the host; the only place it is visible is in the CPU and firmware, which of course need to be doing that work. So the attestation is very important. This is another view of it. Again, it's fairly simplified, and exactly how it works depends on the attestation model from the hardware vendor, but this is a basic view of it. So where are we? What's the state of the project? You're about to see an end-to-end demo of SEV on AMD. SGX is imminent — we are very, very close to that. We hope to have a standalone proof-of-concept framework very soon, allowing you to actually try it out yourself. This is where we've wanted to get to right from the beginning: you can just take a WebAssembly workload and run it. There will be some restricted networking and storage capabilities, but you will at least be able to run some proof of concept. Next, of course, is documentation. That's very important; we are a little behind on that, because we've been running very hard to get this demo out. Speaking of which, let's watch a demo. First of all, we're going to create a WebAssembly demo. We've just cloned it, and it's a very simple Rust file here, which creates a random number, formats it up, waits 20 seconds, and then displays it. So let's build that — remember I said using the wasm32 target. Here we go, and it has compiled, fantastic. Now of course we need to deploy that. This is the client piece that you saw, and this is on a different machine, a machine called Rapsion, and it is going to be deploying to yet another machine. So first of all we're going to say: deploy that wasm file we just built, and we're going to deploy it to another machine called rome.sev — there we go, this is our SEV machine — to a server listening on port 3030. And here we have it, we're going to start that up, and that's the machine, and it is starting up. Excellent. So let's deploy it. Before we do that, we're going to need some way to see the output from that Keep, and we set the Keep up to pipe standard in and out to journalctl. But also we're going to do a bad thing: we're going to try and look inside that process and see the secret that we created. Now first we're going to create a nil Keep. So this is not running in any sense in a Keep or TEE — it's just a standard wasm process — and we found a secret. Let's see, the secret should be coming up. And there it is. So when we run it plain, as just a wasm binary, we found it.
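As a guess at what that demo workload might look like — this is not the actual Enarx demo file, just a sketch with the same behaviour, assuming the rand crate (whose getrandom backend supports wasm32-wasi):

    // Sketch of the demo workload: make a random "secret", hold it in memory
    // for a while so someone can try to fish it out, then print it.
    use std::{thread, time::Duration};

    fn main() {
        let secret: u32 = rand::random();            // rand = "0.8" in Cargo.toml
        let message = format!("SECRET={:08x}", secret);
        thread::sleep(Duration::from_secs(20));      // linger with the secret in memory
        println!("{}", message);
    }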
So let's change the type we're going to use. Now we're just going to create a KVM one. This might be a standard KVM VM that you're running in a cloud. So can your cloud provider look into your standard cloud workloads? Well, let's find out. This one is a KVM one — we can see that at the top right. We're going to run the same process and see if we can look inside it, see if we're any better protected than we were just running a standard thing. Oh, have we found a secret? And we found the secret. In other words, and unsurprisingly, we were not protected. OK, so this time what we're going to do is create an SEV one, i.e. a full trusted execution environment. We're going to run it and we're going to see whether we can find the secret. Run it down here. We see on the top right that a secret has been created, and we can see it's an SEV one if we look a few lines up as well — that's the keep loader, brackets, SEV. So we're searching, we're searching, we're searching. Are we going to find anything? Hopefully not. And we didn't, which is excellent. We failed to find the secret — good news for us at least. OK, so that was the demo. A quick bit about Enarx: it is an entirely open project. That's vital to us — not just the code, but the wiki, the design, issues and PRs, and our chat, which is hosted by Rocket Chat; thank you very much indeed. Our CI/CD resources are available to members of the project in good standing, I guess — thank you to Equinix Metal for that. Our standards are open, we have a commitment to diversity, and we follow the Contributor Covenant code of conduct. A little bit about the different repositories you might find. We've got the keep loader, which handles the execution, loading and management of the Keep; the wasm loader, which actually manages the workload itself — that's what's running Wasmtime; the keep manager, which is the bit that is untrusted, although parts of the keep loader are untrusted as well, and which manages multiple Keeps per host; and the client, which is the bit that provisions the workload — that's the bit we've just seen running. Then there are some infrastructure and glue pieces. There are some platform specifics: sev, which is obviously SEV-related, and sgx, which is SGX-related. And then there's koine, which is a shared communications piece, and ciborium, which is for CBOR encoding. Not all of this code is available as of the time we're recording this demo, but hopefully it will be very, very soon. How can you get involved? Well, first of all, we're desperate for you to get involved. We're desperate. We very much want you to get involved. You can follow us on social media — you'll see links on the contact slide. You can download, compile if you want to, run, test, report. We hope to provide some binaries fairly soon to allow people who don't want to have to do all the compilation to play with it as well. So whether you're someone who actually just wants to run a workload or someone who wants to get involved with the project, we should be able to help you either way. We absolutely want people to audit our designs and our implementations — that's hugely important, and they need to be documented. And of course, there are other things like community building and outreach, all of those things any successful project needs. What do we need from people who want to get involved in the project itself? Well, most important, we want people who are ready to learn.
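For anyone curious how the "look inside the process" step of such a demo can be done on a Linux host, here is a rough sketch — my own illustration, not the Enarx demo tooling — that walks /proc/<pid>/maps and scans readable regions of /proc/<pid>/mem for a marker string. It assumes ptrace-level access to the target process (e.g. running as root or with a permissive Yama ptrace_scope):

    // Naive memory scan of another process: returns true if `needle`
    // occurs in any readable mapping. Linux-only, needs ptrace rights.
    use std::fs::File;
    use std::io::{BufRead, BufReader, Read, Seek, SeekFrom};

    fn scan_for(pid: u32, needle: &[u8]) -> std::io::Result<bool> {
        let maps = BufReader::new(File::open(format!("/proc/{}/maps", pid))?);
        let mut mem = File::open(format!("/proc/{}/mem", pid))?;
        for line in maps.lines() {
            let line = line?;
            let mut fields = line.split_whitespace();
            let range = fields.next().unwrap_or("");
            let perms = fields.next().unwrap_or("");
            if !perms.starts_with('r') {
                continue; // skip unreadable mappings
            }
            let mut bounds = range.split('-');
            let lo = u64::from_str_radix(bounds.next().unwrap_or("0"), 16).unwrap_or(0);
            let hi = u64::from_str_radix(bounds.next().unwrap_or("0"), 16).unwrap_or(0);
            if hi <= lo {
                continue;
            }
            let mut buf = vec![0u8; (hi - lo) as usize];
            mem.seek(SeekFrom::Start(lo))?;
            if mem.read_exact(&mut buf).is_err() {
                continue; // some regions (e.g. [vvar]) refuse reads
            }
            if buf.windows(needle.len()).any(|w| w == needle) {
                return Ok(true);
            }
        }
        Ok(false)
    }

With an SEV-protected Keep, the same scan should come up empty, because the guest memory is encrypted with a key the host never sees — which is exactly what the demo shows.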
If you have SEV or SGX experience, or WebAssembly experience, particularly in compilers or WASI, that's fantastic. We have microkernel and syscall pieces, so Linux systems programming, and networking and storage; Kubernetes and OpenShift integration skills are going to be needed soon. Oh, and security auditing and research. Everything is written in Rust with the odd little bit of assembly language here and there, but that's where we are. So this is the last slide — that's where you can find us: our website, our code itself, and chat, of course. And then there's Twitter, LinkedIn, and YouTube. The YouTube URL is horrible, but if you search for Enarx, you'll find it very simply. The license for everything is Apache 2.0, and the language is Rust with a smattering of x86 assembly. Thank you very much indeed for your time, and we look forward to any questions you might have. Thanks a lot. Bye.
|
If you’re designing a project where security is uppermost, but you want to make it easy to use and compatible with multiple platforms (existing and future), what principles should you follow, and how do they translate into an architecture and actual code? We’ll present the 10 security design principles of the Enarx project, and discuss why they led us to where we are today: a Rust-based open source project with a WebAssembly run-time.
|
10.5446/52516 (DOI)
|
Hello everyone. My name is Mohsen and I'm going to talk to you about what we have recently been doing to support asynchronous I/O in SGX enclaves. Over the past few years we have been developing a Rust platform for writing code that runs in SGX enclaves. This Enclave Development Platform, or EDP, is available as open source software and we have been using it at Fortanix as the backbone of multiple security-oriented products. Producing SGX enclaves has never been easier: all you need to do is compile your Rust code for the Fortanix SGX target, as you can see here. This will produce an enclave that can be run with our ready-to-use enclave runner, or a custom enclave runner of your choosing if you need special user space extensions. EDP is purposefully designed for network services, and that allows us to limit the API surface needed from the untrusted user space component. We have only 16 usercalls in EDP, which is a very small subset of what's needed to support a general purpose application framework. For example, Linux has hundreds of system calls, and there are enclave platforms that expose all or a significant portion of those system calls to the enclave. A significant source of performance overhead in SGX is context switching. The example shown here compares the time it takes to do a single system call in Linux versus a single usercall in EDP, and as you can see there is a very significant difference. So to improve performance we simply need to avoid context switching as much as possible. In a network service where there are many concurrently running threads, the operating system needs to switch between the threads so they can all access the CPU. In EDP, the enclave runner can have a smaller number of threads than the enclave itself and yet service all the enclave threads. We have implemented this M:N threading model in EDP's enclave runner to reduce the cost of OS context switching. But what about SGX context switches? There is some published academic work on this. The first paper listed here uses shared memory buffers to send ecalls and ocalls between the enclave and the user space instead of SGX context switching; there is also a busy-wait algorithm to optimize performance when there are many consecutive ecalls and ocalls. The second paper takes a different approach: they implement M:N threading inside the enclave. We have implemented an approach similar to the first paper for EDP and integrated it into Rust's async I/O ecosystem. This makes it possible to compile code written in Rust using the async/await syntax with EDP and leverage the performance benefits of fewer context switches as well. Now let's briefly talk about async I/O in Rust. Rust provides an interface called Future which represents any computation that will finish at some point and produce an output. In order to make progress on a future, you need to have a runtime component that polls the future. When polled, the future either returns Pending, in which case the future will be polled again once it can make progress, or it returns Ready with an output value. This is a very powerful abstraction because it allows a single thread to poll hundreds or even thousands of futures and drive them to completion. But how would you actually implement the Future trait? Well, fortunately, in most cases you don't need to explicitly implement the Future trait: Rust provides a syntax that allows the programmer to write code that is very similar to non-async code.
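For illustration only — this is not EDP code — here is about the smallest hand-written implementation of the Future trait, showing the Pending/Ready contract described above; an executor (such as Tokio's, or block_on from the futures crate) would drive it to completion after two polls:

    use std::future::Future;
    use std::pin::Pin;
    use std::task::{Context, Poll};

    // A toy future: returns Pending on the first poll, Ready on the second.
    struct TwoStep {
        polled_once: bool,
    }

    impl Future for TwoStep {
        type Output = u32;

        fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
            if self.polled_once {
                Poll::Ready(42)
            } else {
                self.polled_once = true;
                // A real future stores the waker and wakes it when the I/O it is
                // waiting for completes; here we simply ask to be polled again.
                cx.waker().wake_by_ref();
                Poll::Pending
            }
        }
    }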
The syntax consists of async code blocks and functions and the await keyword, which is applied to values that implement the Future trait to get their output value. This example code shows how you might write code that uses the async/await syntax. As you can see, it looks very similar to something you might normally write, except for the async and await keywords. At compile time, Rust will desugar this function into a type that implements the Future trait. This process in itself is a very interesting topic, but it has been discussed extensively elsewhere, so I won't go into the details of how the compiler transforms your code. Note that when writing async code, you should not use types that block the current thread. For example, the TcpStream type provided in Rust's standard library does not support async I/O. However, there are Rust libraries that provide versions of these fundamental types that can work asynchronously. One such library is Tokio, and I'll talk more about Tokio later on. All right, let's get back to EDP and see what it takes to be able to compile and run async I/O code in EDP enclaves. First we need to talk about how we perform blocking I/O in EDP enclaves. An enclave needs to do I/O, say, read from a TCP socket. It cannot directly call the read system call, because all interactions with the operating system have to go through the user space component — this is a fundamental restriction imposed by Intel SGX. So instead, the enclave will perform an enclave exit to switch execution context to the user space and request the I/O operation. This is what we call a usercall. The user space then in turn calls the appropriate system call and returns the results back to the enclave by doing another context switch to enter the enclave. So there are four context switches to satisfy a single I/O operation in the traditional blocking I/O model. As we established earlier, these context switches are expensive, and we should try to minimize them as much as possible. The solution we have implemented for EDP is to use FIFO queues shared between the enclave and the user space to send usercalls and return values. When the enclave wants to perform a usercall, it sends a usercall descriptor through a shared memory FIFO queue. At this point, the enclave can continue execution and do other useful work instead of blocking for the results. The enclave runner, which is the user space component in charge of handling usercalls sent by the enclave, will receive the usercall through the queue, perform the requested usercall, and then send back the results to the enclave through a separate FIFO queue. So in addition to avoiding SGX context switching, this model also enables us to perform asynchronous I/O: the enclave submits an I/O call without blocking for the results and is notified once the results are ready. This is similar to I/O completion ports in Microsoft Windows. Now, if you recall, I mentioned a Rust library called Tokio, which provides fundamental types that can be used in asynchronous programs. Tokio uses another library called MIO. MIO is essentially a thin abstraction layer over the epoll API, which is available in Linux and many other similar operating systems. So in order to get Tokio to compile in EDP, we had to port MIO to EDP. Let's see how we did that. epoll is an API that is used to monitor multiple file descriptors to see if it's possible to perform I/O operations on any of them.
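The slide with the example code isn't reproduced in this transcript, so here is a stand-in sketch of what such async/await code typically looks like, using Tokio's TcpStream (the address and message are made up):

    use tokio::io::{AsyncReadExt, AsyncWriteExt};
    use tokio::net::TcpStream;

    // Looks like ordinary sequential code, but each .await yields to the
    // runtime instead of blocking the thread.
    async fn fetch_greeting() -> std::io::Result<Vec<u8>> {
        let mut stream = TcpStream::connect("127.0.0.1:7000").await?;
        stream.write_all(b"hello\n").await?;
        let mut buf = vec![0u8; 1024];
        let n = stream.read(&mut buf).await?;
        buf.truncate(n);
        Ok(buf)
    }

    // Cargo.toml: tokio = { version = "1", features = ["full"] }
    #[tokio::main]
    async fn main() -> std::io::Result<()> {
        let reply = fetch_greeting().await?;
        println!("got {} bytes", reply.len());
        Ok(())
    }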
Once a file descriptor is ready for I/O, epoll generates a readiness event so the application can actually try to perform the I/O operation. Now, as you can imagine, this is different from the I/O completion model we have designed for EDP, so the challenge was to bridge the gap between these models. Fortunately, this was already done when MIO was first ported to Windows, and we used a similar approach. It's interesting to note that the current MIO implementation for Windows has somewhat moved away from I/O completion ports, since people have found hidden Windows APIs that provide readiness events. They did this to improve performance, because using I/O completion ports requires copying buffers in order to be correct with respect to Rust's lifetimes and borrow checking system. However, for EDP, we need to copy buffers to user space memory in any implementation, because the enclave memory is encrypted and cannot be accessed by user space. That's why we have opted to use this model, which also alleviates the SGX context switching overhead. I thought it might be interesting to elaborate more on how we actually translate the I/O completion model to the I/O readiness model. Let's consider the case of reading from a TCP socket. On the left side, you can see how epoll can be used to read from a socket. Once the socket is connected, it cannot be read until a read readiness event is generated by epoll. Once the socket is ready, the program can call read to read bytes from the socket. Once all the available bytes have been read, the read call will return an error to indicate that there are no more bytes available immediately and the program needs to wait for another readiness event. On the right side, you can see how we implemented this using our async queues in EDP. Once the socket is connected, we immediately send a read usercall through the FIFO queue and enter the pending-read state. Once the user space sends back the results of that usercall, we generate a readable readiness event and transition to the readable state. At this point, the program can call read on the socket and receive bytes from the chunk of bytes that we have received. Once all of those bytes are delivered, we issue another read usercall, transition to the pending-read state, and repeat the whole process. Writes are a bit more complicated. As you can see here, writes are very similar to reads on the epoll side, but that's not the case in EDP. Once the socket is connected, we immediately generate a writable readiness event and transition to the writable state. Once the program calls write on the socket, we send a write usercall using our FIFO queue and transition to the pending-write state. At this point, we're still waiting to receive the results of that first write, but the program is free to call write on the socket again, in which case we buffer those bytes and acknowledge that they will be sent. This can go on until the user space buffer for the socket is filled, in which case we will return a WouldBlock error indicating to the program that it cannot write further to the socket at this point. Once the write usercall results are received from the user space, we mark the bytes as written and check the buffer. If there are more bytes that need to be sent, we issue another write usercall and continue until the buffer is empty, at which point we transition back to the writable state and generate a writable readiness event so the program can continue writing. Other usercalls, such as connect and accept, are also implemented in a similar fashion.
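A toy sketch of the read-side state machine just described — my paraphrase, not the actual EDP code; the submit_read closure stands in for pushing a read usercall descriptor onto the shared FIFO queue, and the write side would be analogous with an outgoing buffer and a WouldBlock return when that buffer is full:

    use std::io;

    // States of the read path as described above.
    enum ReadState {
        PendingRead,       // a read usercall is in flight
        Readable(Vec<u8>), // bytes received, waiting to be consumed
    }

    struct AsyncReadHalf {
        state: ReadState,
    }

    impl AsyncReadHalf {
        // The enclave runner returned the result of the in-flight read usercall.
        fn on_usercall_result(&mut self, bytes: Vec<u8>) {
            self.state = ReadState::Readable(bytes);
            // ...and a "readable" readiness event would be emitted here.
        }

        // The application calls read(); behave like a non-blocking socket.
        fn on_app_read(
            &mut self,
            buf: &mut [u8],
            mut submit_read: impl FnMut(),
        ) -> io::Result<usize> {
            let (n, drained) = match &mut self.state {
                ReadState::PendingRead => {
                    return Err(io::Error::from(io::ErrorKind::WouldBlock));
                }
                ReadState::Readable(bytes) => {
                    let n = buf.len().min(bytes.len());
                    buf[..n].copy_from_slice(&bytes[..n]);
                    bytes.drain(..n);
                    (n, bytes.is_empty())
                }
            };
            if drained {
                submit_read(); // queue the next read usercall right away
                self.state = ReadState::PendingRead;
            }
            Ok(n)
        }
    }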
Overall, this seems like a reasonable method to implement MIO's abstraction layer in EDP using our async queues. To validate the performance benefits of our approach, we ran a few tests to compare blocking I/O versus async I/O on both Linux and EDP. In our first test, we wrote a small program that listens for HTTP requests with an input encoded in JSON and echoes the message back to the client, encoded similarly. We wrote two versions of this code: one version uses traditional blocking I/O and the other uses async I/O. We compiled and ran these two programs both on Linux and as an SGX enclave with EDP, and we measured how many queries per second each combination can achieve with various numbers of concurrent client connections. As you might imagine, running the programs natively on Linux gives higher performance compared to SGX, since there is no SGX overhead. Blocking versus async doesn't seem to differ a lot there, but that's just an artifact of the simplicity of our test program. In the SGX case, you can see that the performance of the blocking version drops noticeably for more than 2000 connections, but the async version provides steady performance even for a higher number of connections. To make the test a little more realistic, we added another service to the mix: a hash service that listens on a TCP port, computes the hash of input messages, and sends back the results. Then we modified the echo program to connect to the hash service for every request and include the hash result in its response to each client. This is more like a web service that might connect to a database to fulfill every HTTP request it receives. Here are the results. As you can see, there is a noticeable difference between the blocking and async versions on Linux: the async version can achieve higher throughput. There is also a drop in performance of the blocking version after 2000 connections, just as we saw before. Note that the blocking version needs to handle each client connection in a separate thread, so 5000 connections means 5000 threads competing for resources, while the async I/O version only uses a few threads regardless of the number of client connections. That can explain the superior performance of the async version. The SGX numbers tell a similar story, although the overall numbers are smaller compared to Linux because of SGX overheads. The blocking version also has a drop in performance at the 2000 connection mark. One more thing we should mention here is that both the blocking and async versions are run with the same enclave runner that implements the M:N threading model we discussed before, therefore the blocking version is also benefiting from reduced OS context switching in EDP. There are other benefits to using async I/O in EDP. For one, in SGX version 1, the enclave needs to have a fixed number of thread control structures (TCSs), which is determined at enclave build time. This limits how many enclave threads can execute concurrently: if you build an enclave with 100 TCSs, you can only have up to 100 concurrent threads. Traditional blocking I/O network applications usually dedicate one thread per client connection, so an EDP enclave would need to be built with a huge number of TCSs to be able to handle thousands of concurrent connections. On the other hand, by using async I/O in EDP, the enclave only needs a handful of threads and TCSs to be able to handle thousands of connections. Consequently, enclaves can initialize faster and consume less stack space.
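The benchmark program itself isn't shown in the recording; as a rough stand-in for the shape of the async variant, here is a simplified newline-delimited JSON echo over plain TCP — not HTTP, and not the actual Fortanix benchmark — using Tokio and serde_json, with one lightweight task per connection driven by a handful of threads:

    use tokio::io::{AsyncBufReadExt, AsyncWriteExt, BufReader};
    use tokio::net::TcpListener;

    #[derive(serde::Serialize, serde::Deserialize)]
    struct Msg {
        message: String,
    }

    // Cargo.toml: tokio = { version = "1", features = ["full"] },
    //             serde = { version = "1", features = ["derive"] }, serde_json = "1"
    #[tokio::main]
    async fn main() -> std::io::Result<()> {
        let listener = TcpListener::bind("0.0.0.0:8080").await?;
        loop {
            let (socket, _) = listener.accept().await?;
            // One task per connection; a few OS (or enclave) threads drive all of them.
            tokio::spawn(async move {
                let (rd, mut wr) = socket.into_split();
                let mut lines = BufReader::new(rd).lines();
                while let Ok(Some(line)) = lines.next_line().await {
                    if let Ok(msg) = serde_json::from_str::<Msg>(&line) {
                        let mut out = serde_json::to_vec(&msg).unwrap();
                        out.push(b'\n');
                        if wr.write_all(&out).await.is_err() {
                            break;
                        }
                    }
                }
            });
        }
    }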
Also, there will be no limit on the number of concurrent connections that your service can handle just because of a fixed number of TCSs. Well, that's all for me. Thank you for listening, and I hope you found this presentation both useful and interesting. My colleague Dr. Beekman will answer your questions. Thank you.
|
Enclave technologies such as SGX generally have a relatively high context-switching cost. This is particularly noticeable when doing (network) I/O. In this talk we'll present the first non-LibOS implementation of an asynchronous I/O model for SGX. This gives you a language-native async I/O programming experience in Rust, outperforming any other way to build SGX network services.
|
10.5446/52518 (DOI)
|
So Secure Boot means the verification of the first binary loaded by the boot ROM before executing it. The typical workflow can look as follows. At first, we need to generate the key pair, so a private and a public portion of a given key. In a secure location, the private key can be used to sign the binary. The public part of the key is then fused into the SoC. Once we confirm that the verification works correctly, we can lock the platform.
The SDK for these devices describes how to use the secure boot feature in chapter 6.1, which is named Secure Boot. To get the reference manual and the signing tool, registration is also required. Now we will talk about the Marvell Armada SoCs, but the details about the secure boot features in this case are not publicly available. There is, for example, a secure boot application note for the Armada 7K and Armada 8K families, and the application note describes the secure boot process in great detail. You can also learn about secure boot from the overview in the Technical Reference Manual, in chapter 32, Device Secure Boot. The next one is NVIDIA with its Tegra SoC family. There is a tool which is used for code signing and fuse programming, but the tool itself requires an NDA to receive as well. We can't say much about the secure boot in these systems; the public documents only mention that the secure boot feature is there but provide not much detail. The high-level secure boot overview is truly high level.
SHA-256 is used as the hash function for the digital signature. Usually the firmware decryption feature is also present, but it was not in the scope of our presentation to discuss it. Feel free to contact us if you believe we can help you in any way; we are always open to cooperate and discuss. Thank you.
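To make the generic flow concrete, here is a rough, vendor-neutral sketch of what a boot ROM's verification step conceptually does: hash the public key embedded in the image, compare it against the hash fused into the SoC, then check the image signature. This is only an illustration, not any vendor's actual ROM code; the sha2 crate is assumed, and verify_signature is a hypothetical placeholder for the vendor-specific signature check:

    use sha2::{Digest, Sha256};

    // Hypothetical placeholder for the vendor-specific signature check
    // (e.g. RSA or ECDSA over the image body).
    fn verify_signature(_public_key: &[u8], _image: &[u8], _signature: &[u8]) -> bool {
        unimplemented!("vendor-specific")
    }

    /// Conceptual first-stage check: is the embedded public key the one whose
    /// SHA-256 hash was fused into the SoC, and does it sign the image?
    fn verify_first_binary(
        fused_key_hash: &[u8; 32],
        public_key: &[u8],
        image: &[u8],
        signature: &[u8],
    ) -> bool {
        let key_hash = Sha256::digest(public_key);
        key_hash.as_slice() == &fused_key_hash[..]
            && verify_signature(public_key, image, signature)
    }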
|
In the ARM world, Secure Boot is typically a BootROM feature, which allows for verification of the loaded binaries (firmware, bootloader, Linux kernel) prior to executing them. The main idea is to prevent the untrusted code from running on our platform. The general approach is similar across vendors, but there is no standardization in this area. During this talk we will review the Secure Boot features in ARM SoCs from some of the most popular vendors. Not only will we analyze the Secure Boot presence or its features, but we will also focus on the tools and documentation availability. It is a known fact that often such documentation requires a signed NDA with an SoC vendor, which makes it difficult for regular users to use.
|