"Data Structures and Algorithms in C++". Brook/Cole. Pacific Grove, CA. 2001. Second edition. - "Tree Transversal" (math.northwestern.edu) ## External links - Storing Hierarchical Data in a Database with traversal examples in PHP - Managing Hierarchical Data in MySQL - Working with Graphs in MySQL - See tree traversal implemented in various programming language on Rosetta Code - Tree traversal without recursion - Tree Traversal Algorithms - Binary Tree Traversal - Tree Traversal In Data Structure Category:Trees (data structures) Category:Articles with example pseudocode Category:Graph algorithms Category:Recursion Category:Iteration in programming de:Binärbaum#Traversierung ja:木構造 (データ構造)
https://en.wikipedia.org/wiki/Tree_traversal
An electrostatic generator, or electrostatic machine, is an electrical generator that produces static electricity, or electricity at high voltage and low continuous current. The knowledge of static electricity dates back to the earliest civilizations, but for millennia it remained merely an interesting and mystifying phenomenon, without a theory to explain its behavior and often confused with magnetism. By the end of the 17th century, researchers had developed practical means of generating electricity by friction, but the development of electrostatic machines did not begin in earnest until the 18th century, when they became fundamental instruments in the study of the new science of electricity. Electrostatic generators operate by using manual (or other) power to transform mechanical work into electric energy, or by using electric currents. Manual electrostatic generators develop electrostatic charges of opposite signs on two conductors, using only electric forces, and work by using moving plates, drums, or belts to carry electric charge to a high potential electrode. ## Description Electrostatic machines are typically used in science classrooms to safely demonstrate electrical forces and high voltage phenomena. The elevated potential differences achieved have also been used for a variety of practical applications, such as operating X-ray tubes, particle accelerators, spectroscopy, medical applications, sterilization of food, and nuclear physics experiments. Electrostatic generators such as the
https://en.wikipedia.org/wiki/Electrostatic_generator
The elevated potential differences achieved have also been used for a variety of practical applications, such as operating X-ray tubes, particle accelerators, spectroscopy, medical applications, sterilization of food, and nuclear physics experiments. Electrostatic generators such as the Van de Graaff generator, and variations such as the Pelletron, also find use in physics research. Electrostatic generators can be divided into categories depending on how the charge is generated: - Friction machines use the triboelectric effect (electricity generated by contact or friction) - Influence machines use electrostatic induction - Others ### Friction machines #### History The first electrostatic generators are called friction machines because of the friction in the generation process. A primitive form of frictional machine was invented around 1663 by Otto von Guericke, using a sulphur globe that could be rotated and rubbed by hand. It may not actually have been rotated during use and was not intended to produce electricity (but rather to demonstrate cosmic virtues), yet it inspired many later machines that used rotating globes. Isaac Newton suggested the use of a glass globe instead of a sulphur one. About 1706 Francis Hauksbee improved the basic design, with his frictional electrical machine that enabled a glass sphere to be rotated rapidly against a woollen cloth.
https://en.wikipedia.org/wiki/Electrostatic_generator
Isaac Newton suggested the use of a glass globe instead of a sulphur one. About 1706 Francis Hauksbee improved the basic design, with his frictional electrical machine that enabled a glass sphere to be rotated rapidly against a woollen cloth. Generators were further advanced when, about 1730, Prof. Georg Matthias Bose of Wittenberg added a collecting conductor (an insulated tube or cylinder supported on silk strings). Bose was the first to employ the "prime conductor" in such machines, this consisting of an iron rod held in the hand of a person whose body was insulated by standing on a block of resin. In 1746, William Watson's machine had a large wheel turning several glass globes, with a sword and a gun barrel suspended from silk cords for its prime conductors. Johann Heinrich Winckler, professor of physics at Leipzig, substituted a leather cushion for the hand. During 1746, Jan Ingenhousz invented electric machines made of plate glass. Experiments with the electric machine were largely aided by the invention of the Leyden Jar. This early form of the capacitor, with conductive coatings on either side of the glass, can accumulate a charge of electricity when connected with a source of electromotive force.
https://en.wikipedia.org/wiki/Electrostatic_generator
Experiments with the electric machine were largely aided by the invention of the Leyden Jar. This early form of the capacitor, with conductive coatings on either side of the glass, can accumulate a charge of electricity when connected with a source of electromotive force. The electric machine was soon further improved by Andrew (Andreas) Gordon, a Scotsman and professor at Erfurt, who substituted a glass cylinder in place of a glass globe; and by Giessing of Leipzig who added a "rubber" consisting of a cushion of woollen material. The collector, consisting of a series of metal points, was added to the machine by Benjamin Wilson about 1746, and in 1762, John Canton of England (also the inventor of the first pith-ball electroscope) improved the efficiency of electric machines by sprinkling an amalgam of tin over the surface of the rubber. In 1768, Jesse Ramsden constructed a widely used version of a plate electrical generator. In 1783, Dutch scientist Martin van Marum of Haarlem designed a large electrostatic machine of high quality with glass disks 1.65 meters in diameter for his experiments. Capable of producing voltage with either polarity, it was built under his supervision by John Cuthbertson of Amsterdam the following year. The generator is currently on display at the Teylers Museum in Haarlem.
https://en.wikipedia.org/wiki/Electrostatic_generator
Capable of producing voltage with either polarity, it was built under his supervision by John Cuthbertson of Amsterdam the following year. The generator is currently on display at the Teylers Museum in Haarlem. In 1785, N. Rouland constructed a silk-belted machine that rubbed two grounded tubes covered with hare fur. Edward Nairne developed an electrostatic generator for medical purposes in 1787 that had the ability to generate either positive or negative electricity, the first of these being collected from the prime conductor carrying the collecting points and the second from another prime conductor carrying the friction pad. The Winter machine possessed higher efficiency than earlier friction machines. In the 1830s, Georg Ohm possessed a machine similar to the Van Marum machine for his research (which is now at the Deutsches Museum, Munich, Germany). In 1840, the Woodward machine was developed by improving the 1768 Ramsden machine, placing the prime conductor above the disk(s). Also in 1840, the Armstrong hydroelectric machine was developed, using steam as a charge carrier. #### Friction operation The presence of surface charge imbalance means that the objects will exhibit attractive or repulsive forces. This surface charge imbalance, which leads to static electricity, can be generated by touching two differing surfaces together and then separating them due to the phenomenon of the triboelectric effect.
https://en.wikipedia.org/wiki/Electrostatic_generator
The presence of surface charge imbalance means that the objects will exhibit attractive or repulsive forces. This surface charge imbalance, which leads to static electricity, can be generated by touching two differing surfaces together and then separating them due to the phenomenon of the triboelectric effect. Rubbing two non-conductive objects can generate a great amount of static electricity. This is not the result of friction; two non-conductive surfaces can become charged by just being placed one on top of the other. Since most surfaces have a rough texture, it takes longer to achieve charging through contact than through rubbing. Rubbing objects together increases the amount of adhesive contact between the two surfaces. Insulators, i.e., substances that do not conduct electricity, are usually good at both generating and holding a surface charge. Some examples of these substances are rubber, plastic, glass, and pith. Conductive objects in contact generate charge imbalance too, but retain the charges only if insulated. The charge that is transferred during contact electrification is stored on the surface of each object. Note that the presence of electric current does not detract from the electrostatic forces nor from the sparking, corona discharge, or other phenomena. Both phenomena can exist simultaneously in the same system.
https://en.wikipedia.org/wiki/Electrostatic_generator
Note that the presence of electric current does not detract from the electrostatic forces nor from the sparking, corona discharge, or other phenomena. Both phenomena can exist simultaneously in the same system. ### Influence machines #### History Frictional machines were, in time, gradually superseded by the second class of instrument mentioned above, namely, influence machines. These operate by electrostatic induction and convert mechanical work into electrostatic energy by the aid of a small initial charge which is continually being replenished and reinforced. The first suggestion of an influence machine appears to have grown out of the invention of Volta's electrophorus. The electrophorus is a single-plate capacitor used to produce imbalances of electric charge via the process of electrostatic induction. The next step was when Abraham Bennet, the inventor of the gold leaf electroscope, described a "doubler of electricity" (Phil. Trans., 1787) as a device similar to the electrophorus, but one that could amplify a small charge by means of repeated manual operations with three insulated plates, in order to make it observable in an electroscope. In 1788, William Nicholson proposed his rotating doubler, which can be considered the first rotating influence machine. His instrument was described as "an instrument which by turning a winch produces the two states of electricity without friction or communication with the earth". (Phil.
https://en.wikipedia.org/wiki/Electrostatic_generator
His instrument was described as "an instrument which by turning a winch produces the two states of electricity without friction or communication with the earth". (Phil. Trans., 1788, p. 403) Nicholson later described a "spinning condenser" apparatus as a better instrument for measurements. Erasmus Darwin, W. Wilson, G. C. Bohnenberger, and (later, 1841) J. C. E. Péclet developed various modifications of Bennet's 1787 device. Francis Ronalds automated the generation process in 1816 by adapting a pendulum bob as one of the plates, driven by clockwork or a steam engine – he created the device to power his electric telegraph. Others, including T. Cavallo (who developed the "Cavallo multiplier", a charge multiplier using simple addition, in 1795), John Read, Charles Bernard Desormes, and Jean Nicolas Pierre Hachette, developed various further forms of rotating doublers. In 1798, the German scientist and preacher Gottlieb Christoph Bohnenberger described the Bohnenberger machine, along with several other doublers of the Bennet and Nicholson types, in a book. The most interesting of these were described in the "Annalen der Physik" (1801). Giuseppe Belli, in 1831, developed a simple symmetrical doubler which consisted of two curved metal plates between which revolved a pair of plates carried on an insulating stem.
https://en.wikipedia.org/wiki/Electrostatic_generator
The most interesting of these were described in the "Annalen der Physik" (1801). Giuseppe Belli, in 1831, developed a simple symmetrical doubler which consisted of two curved metal plates between which revolved a pair of plates carried on an insulating stem. It was the first symmetrical influence machine, with identical structures for both terminals. This apparatus was reinvented several times: by C. F. Varley, who patented a high-power version in 1860; by Lord Kelvin (the "replenisher") in 1868; and, more recently, by A. D. Moore (the "dirod"). Lord Kelvin also devised a combined influence machine and electromagnetic machine, commonly called a mouse mill, for electrifying the ink in connection with his siphon recorder, and a water-drop electrostatic generator (1867), which he called the "water-dropping condenser". ##### Holtz machine Between 1864 and 1880, W. T. B. Holtz constructed and described a large number of influence machines which were considered the most advanced developments of the time. In one form, the Holtz machine consisted of a glass disk mounted on a horizontal axis which could be made to rotate at a considerable speed by a multiplying gear, interacting with induction plates mounted in a fixed disk close to it. In 1865, August J. I. Toepler developed an influence machine that consisted of two disks fixed on the same shaft and rotating in the same direction.
https://en.wikipedia.org/wiki/Electrostatic_generator
In one form, the Holtz machine consisted of a glass disk mounted on a horizontal axis which could be made to rotate at a considerable speed by a multiplying gear, interacting with induction plates mounted in a fixed disk close to it. In 1865, August J. I. Toepler developed an influence machine that consisted of two disks fixed on the same shaft and rotating in the same direction. In 1868, the Schwedoff machine had a curious structure to increase the output current. Also in 1868, several mixed friction-influence machines were developed, including the Kundt machine and the Carré machine. In 1866, the Piche machine (or Bertsch machine) was developed. In 1869, H. Julius Smith received an American patent for a portable and airtight device that was designed to ignite powder. Also in 1869, sectorless machines in Germany were investigated by Poggendorff. The action and efficiency of influence machines were further investigated by F. Rossetti, A. Righi, and Friedrich Kohlrausch. E. E. N. Mascart, A. Roiti, and E. Bouchotte also examined the efficiency and current-producing power of influence machines. In 1871, sectorless machines were investigated by Musaeus. In 1872, Righi's electrometer was developed and was one of the first antecedents of the Van de Graaff generator. In 1873, Leyser developed the Leyser machine, a variation of the Holtz machine.
https://en.wikipedia.org/wiki/Electrostatic_generator
In 1872, Righi's electrometer was developed and was one of the first antecedents of the Van de Graaff generator. In 1873, Leyser developed the Leyser machine, a variation of the Holtz machine. In 1880, Robert Voss (a Berlin instrument maker) devised a form of machine in which he claimed that the principles of Toepler and Holtz were combined. The same structure also became known as the Toepler–Holtz machine. ##### Wimshurst machine In 1878, the British inventor James Wimshurst began his studies of electrostatic generators, improving the Holtz machine into a powerful version with multiple disks. The classical Wimshurst machine, which became the most popular form of influence machine, was reported to the scientific community by 1883, although machines with very similar structures had previously been described by Holtz and Musaeus. In 1885, one of the largest-ever Wimshurst machines was built in England (it is now at the Chicago Museum of Science and Industry). The Wimshurst machine is a fairly simple machine; it works, as do all influence machines, by electrostatic induction of charges, which means that it uses even the slightest existing charge to create and accumulate more charges, and repeats this process for as long as the machine is in action.
https://en.wikipedia.org/wiki/Electrostatic_generator
In 1885, one of the largest-ever Wimshurst machines was built in England (it is now at the Chicago Museum of Science and Industry). The Wimshurst machine is a fairly simple machine; it works, as do all influence machines, by electrostatic induction of charges, which means that it uses even the slightest existing charge to create and accumulate more charges, and repeats this process for as long as the machine is in action. Wimshurst machines are composed of: two insulated disks attached to pulleys rotating in opposite directions, with small conductive (usually metal) plates on their outward-facing sides; two double-ended brushes that serve as charge stabilizers and are also where induction happens, creating the new charges to be collected; two pairs of collecting combs, which, as the name implies, collect the electrical charge produced by the machine; two Leyden jars, the capacitors of the machine; and a pair of electrodes, for the transfer of charges once they have been sufficiently accumulated. The simple structure and components of the Wimshurst machine make it a common choice for homemade electrostatic experiments and demonstrations; as previously mentioned, these characteristics contributed to its popularity. In 1887, Weinhold modified the Leyser machine with a system of vertical metal bar inductors with wooden cylinders close to the disk to avoid polarity reversals.
https://en.wikipedia.org/wiki/Electrostatic_generator
The simple structure and components of the Wimshurst machine make it a common choice for homemade electrostatic experiments and demonstrations; as previously mentioned, these characteristics contributed to its popularity. In 1887, Weinhold modified the Leyser machine with a system of vertical metal bar inductors with wooden cylinders close to the disk to avoid polarity reversals. M. L. Lebiez described the Lebiez machine, which was essentially a simplified Voss machine (L'Électricien, April 1895, pp. 225–227). In 1893, Louis Bonetti patented a machine with the structure of the Wimshurst machine, but without metal sectors in the disks. See also: - (Anon.) (April 14, 1894) "Machines d'induction électrostatique sans secteurs" (Electrostatic induction machines without sectors), La Nature, 22 (1089): 305–306. - English translation of La Nature article (above): (Anon.) (May 26, 1894) "Electrostatic induction machines without sectors," Scientific American, 70 (21): 325–326. - S. M. Keenan (August 1897) "Sectorless Wimshurst machines," American Electrician, 9 (8): 316–317 - Instructions for building a Bonetti machine - G. Pellissier (1891)
https://en.wikipedia.org/wiki/Electrostatic_generator
"Electrostatic induction machines without sectors," Scientific American, 70 (21) : 325-326. - S. M. Keenan (August 1897) "Sectorless Wimshurst machines," American Electrician, 9 (8) : 316–317 - Instructions for building a Bonetti machine - G. Pellissier (1891) "Théorie de la machine de Wimshurst" (Theory of Wimshurt's machine), Journal de Physique théoretique et appliquée, 2nd series, 10 (1) : 414–419. On p. 418, French lighting engineer Georges Pellissier describes what is essentially a Bonetti machine: " ...  la machine de Wimshurst pourrait, en effet, être construite avec des plateaux de verre unis et des peignes au lieu de brosses aux extrémités des conducteurs diamétraux. L'amorçage au départ devrait être fait à l'aide d'une source étrangère, placée, par example, en face de A1, à l'extérieur." (... Wimshurst's machine could, in effect, be constructed with plain glass plates and with combs in place of brushes at the ends of the diametrical conductors. The initial charging could be done with the aid of an external source placed, for example, opposite and outside of [section] A1 [of the glass disk].)
https://en.wikipedia.org/wiki/Electrostatic_generator
The initial charging could be done with the aid of an external source placed, for example, opposite and outside of [section] A1 [of the glass disk].) Pellissier then states that "the role of the metallic sectors of the Wimshurst machine seems to be primarily, in effect, to facilitate its automatic starting and to reduce the influence of atmospheric humidity." This machine is significantly more powerful than the sectored version, but it must usually be started with an externally applied charge. ##### Pidgeon machine In 1898, the Pidgeon machine was developed with a unique setup by W. R. Pidgeon. On October 28 that year, Pidgeon presented this machine to the Physical Society after several years of investigation into influence machines (beginning at the start of the decade). The device was later reported in the Philosophical Magazine (December 1898, pg. 564) and the Electrical Review (Vol. XLV, pg. 748). A Pidgeon machine possesses fixed electrostatic inductors arranged in a manner that increases the electrostatic induction effect (and its electrical output is at least double that of typical machines of this type [except when it is overtaxed]). The essential features of the Pidgeon machine are, one, the combination of the rotating support and the fixed support for inducing charge, and, two, the improved insulation of all parts of the machine (but more especially of the generator's carriers).
https://en.wikipedia.org/wiki/Electrostatic_generator
[except when it is overtaxed]). The essential features of the Pidgeon machine are, first, the combination of the rotating support and the fixed support for inducing charge, and, second, the improved insulation of all parts of the machine (especially the generator's carriers). Pidgeon machines are a combination of a Wimshurst machine and a Voss machine, with special features adapted to reduce the amount of charge leakage. Pidgeon machines excite themselves more readily than the best of these types of machines. In addition, Pidgeon investigated higher-current "triplex" section machines (or "double machines with a single central disk") with enclosed sectors (and went on to receive British Patent 22517 (1899) for this type of machine). Multiple disk machines and "triplex" electrostatic machines (generators with three disks) were also developed extensively around the turn of the 20th century. In 1900, F. Tudsbury discovered that enclosing a generator in a metallic chamber containing compressed air or, better, carbon dioxide greatly improved its output, owing to the increase in the breakdown voltage of the compressed gas and the reduction of leakage across the plates and insulating supports. In 1903, Alfred Wehrsen patented an ebonite rotating disk possessing embedded sectors with button contacts at the disk surface.
https://en.wikipedia.org/wiki/Electrostatic_generator
In 1900, F. Tudsbury discovered that enclosing a generator in a metallic chamber containing compressed air or, better, carbon dioxide greatly improved its output, owing to the increase in the breakdown voltage of the compressed gas and the reduction of leakage across the plates and insulating supports. In 1903, Alfred Wehrsen patented an ebonite rotating disk possessing embedded sectors with button contacts at the disk surface. In 1907, Heinrich Wommelsdorf reported a variation of the Holtz machine using this disk and inductors embedded in celluloid plates (DE154175; "Wehrsen machine"). Wommelsdorf also developed several high-performance electrostatic generators, of which the best known were his "Condenser machines" (1920). These were single disk machines, using disks with embedded sectors that were accessed at the edges. ### Van de Graaff The Van de Graaff generator was invented by American physicist Robert J. Van de Graaff in 1929 at MIT as a particle accelerator. The first model was demonstrated in October 1929. In the Van de Graaff machine, an insulating belt transports electric charge to the interior of an insulated hollow metal high voltage terminal, where it is transferred to the terminal by a "comb" of metal points.
https://en.wikipedia.org/wiki/Electrostatic_generator
The first model was demonstrated in October 1929. In the Van de Graaff machine, an insulating belt transports electric charge to the interior of an insulated hollow metal high voltage terminal, where it is transferred to the terminal by a "comb" of metal points. The advantage of the design was that since there was no electric field in the interior of the terminal, the charge on the belt could continue to be discharged onto the terminal regardless of how high the voltage on the terminal was. Thus the only limit to the voltage on the machine is ionization of the air next to the terminal. This occurs when the electric field at the terminal exceeds the dielectric strength of air, about 30 kV per centimeter. Since the highest electric field is produced at sharp points and edges, the terminal is made in the form of a smooth hollow sphere; the larger the diameter the higher the voltage attained. The first machine used a silk ribbon bought at a five and dime store as the charge transport belt. In 1931 a version able to produce 1,000,000 volts was described in a patent disclosure. The Van de Graaff generator was a successful particle accelerator, producing the highest energies until the late 1930s when the cyclotron superseded it. The voltage on open air Van de Graaff machines is limited to a few million volts by air breakdown.
https://en.wikipedia.org/wiki/Electrostatic_generator
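To make the size–voltage relationship concrete, here is a rough back-of-the-envelope estimate under stated idealizations (an isolated spherical terminal far from other conductors, and the roughly 30 kV/cm breakdown field quoted above). The surface field of a sphere of radius R held at potential V is E = V/R, so the attainable voltage scales with terminal size:

```latex
E_{\text{surface}} = \frac{V}{R}
\quad\Longrightarrow\quad
V_{\max} \approx E_{\text{breakdown}}\, R
\approx \left(3\times 10^{6}\ \mathrm{V/m}\right)\times\left(1\ \mathrm{m}\right)
\approx 3\ \mathrm{MV}
```

which is consistent with the "few million volts" limit of open-air machines; enclosing the terminal in pressurized insulating gas raises the breakdown field and therefore the attainable voltage.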
The Van de Graaff generator was a successful particle accelerator, producing the highest energies until the late 1930s when the cyclotron superseded it. The voltage on open air Van de Graaff machines is limited to a few million volts by air breakdown. Higher voltages, up to about 25 megavolts, were achieved by enclosing the generator inside a tank of pressurized insulating gas. This type of Van de Graaff particle accelerator is still used in medicine and research. Other variations were also invented for physics research, such as the Pelletron, which uses a chain with alternating insulating and conducting links for charge transport. Small Van de Graaff generators are commonly used in science museums and science education to demonstrate the principles of static electricity. A popular demonstration is to have a person touch the high voltage terminal while standing on an insulated support; the high voltage charges the person's hair, causing the strands to stand out from the head. ### Others Not all electrostatic generators use the triboelectric effect or electrostatic induction. Electric charges can be generated by electric currents directly. Examples are ionizers and ESD guns. ## Applications ### Gridded ion thruster
https://en.wikipedia.org/wiki/Electrostatic_generator
## Applications ### Gridded ion thruster ### EWICON An electrostatic vaneless ion wind generator, the EWICON, has been developed by the School of Electrical Engineering, Mathematics and Computer Science at Delft University of Technology (TU Delft). It stands near Mecanoo, an architecture firm. The main developers were Johan Smit and Dhiradj Djairam. Other than the wind, it has no moving parts. It is powered by the wind carrying away charged particles from its collector. The design suffers from poor efficiency. ### Dutch Windwheel The technology developed for EWICON has been reused in the Dutch Windwheel. ### Air ioniser ## Fringe science and devices These generators have been used, sometimes inappropriately and with some controversy, to support various fringe science investigations. In 1911, George Samuel Piggott received a patent for a compact double machine enclosed within a pressurized box for his experiments concerning radiotelegraphy and "antigravity". Much later (in the 1960s), a machine known as "Testatika" was built by a German engineer, Paul Suisse Bauman, and promoted by a Swiss community, the Methernithans. Testatika is an electromagnetic generator based on the 1898 Pidgeon electrostatic machine, said to produce "free energy" available directly from the environment.
https://en.wikipedia.org/wiki/Electrostatic_generator
The Internet protocol suite, commonly known as TCP/IP, is a framework for organizing the set of communication protocols used in the Internet and similar computer networks according to functional criteria. The foundational protocols in the suite are the Transmission Control Protocol (TCP), the User Datagram Protocol (UDP), and the Internet Protocol (IP). Early versions of this networking model were known as the Department of Defense (DoD) model because the research and development were funded by the United States Department of Defense through DARPA. The Internet protocol suite provides end-to-end data communication specifying how data should be packetized, addressed, transmitted, routed, and received. This functionality is organized into four abstraction layers, which classify all related protocols according to each protocol's scope of networking. An implementation of the layers for a particular application forms a protocol stack. From lowest to highest, the layers are the link layer, containing communication methods for data that remains within a single network segment (link); the internet layer, providing internetworking between independent networks; the transport layer, handling host-to-host communication; and the application layer, providing process-to-process data exchange for applications. The technical standards underlying the Internet protocol suite and its constituent protocols are maintained by the Internet Engineering Task Force (IETF). The Internet protocol suite predates the OSI model, a more comprehensive reference framework for general networking systems. ## History
https://en.wikipedia.org/wiki/Internet_protocol_suite
The Internet protocol suite predates the OSI model, a more comprehensive reference framework for general networking systems. ## History ### Early research Initially referred to as the DOD Internet Architecture Model, the Internet protocol suite has its roots in research and development sponsored by the Defense Advanced Research Projects Agency (DARPA) in the late 1960s. After DARPA initiated the pioneering ARPANET in 1969, Steve Crocker established a "Networking Working Group" which developed a host-host protocol, the Network Control Program (NCP). In the early 1970s, DARPA started work on several other data transmission technologies, including mobile packet radio, packet satellite service, local area networks, and other data networks in the public and private domains. In 1972, Bob Kahn joined the DARPA Information Processing Technology Office, where he worked on both satellite packet networks and ground-based radio packet networks, and recognized the value of being able to communicate across both. In the spring of 1973, Vinton Cerf joined Kahn with the goal of designing the next protocol generation for the ARPANET to enable internetworking. They drew on the experience from the ARPANET research community, the International Network Working Group, which Cerf chaired, and researchers at Xerox PARC.
https://en.wikipedia.org/wiki/Internet_protocol_suite
In the spring of 1973, Vinton Cerf joined Kahn with the goal of designing the next protocol generation for the ARPANET to enable internetworking. They drew on the experience from the ARPANET research community, the International Network Working Group, which Cerf chaired, and researchers at Xerox PARC. By the summer of 1973, Kahn and Cerf had worked out a fundamental reformulation, in which the differences between local network protocols were hidden by using a common internetwork protocol, and, instead of the network being responsible for reliability, as in the existing ARPANET protocols, this function was delegated to the hosts. Cerf credits Louis Pouzin and Hubert Zimmermann, designers of the CYCLADES network, with important influences on this design. The new protocol was implemented as the Transmission Control Program in 1974 by Cerf, Yogen Dalal and Carl Sunshine. Initially, the Transmission Control Program (the Internet Protocol did not then exist as a separate protocol) provided only a reliable byte stream service to its users, not datagrams. Several versions were developed through the Internet Experiment Note series. As experience with the protocol grew, collaborators recommended division of functionality into layers of distinct protocols, allowing users direct access to datagram service.
https://en.wikipedia.org/wiki/Internet_protocol_suite
Several versions were developed through the Internet Experiment Note series. As experience with the protocol grew, collaborators recommended division of functionality into layers of distinct protocols, allowing users direct access to datagram service. Advocates included Bob Metcalfe and Yogen Dalal at Xerox PARC; Danny Cohen, who needed it for his packet voice work; and Jonathan Postel of the University of Southern California's Information Sciences Institute, who edited the Request for Comments (RFCs), the technical and strategic document series that has both documented and catalyzed Internet development. Postel stated, "We are screwing up in our design of Internet protocols by violating the principle of layering." Encapsulation of different mechanisms was intended to create an environment where the upper layers could access only what was needed from the lower layers. A monolithic design would be inflexible and lead to scalability issues. In version 4, written in 1978, Postel split the Transmission Control Program into two distinct protocols, the Internet Protocol as connectionless layer and the Transmission Control Protocol as a reliable connection-oriented service. For records of discussions leading up to the TCP/IP split, see the series of Internet Experiment Notes at the Internet Experiment Notes Index. The design of the network included the recognition that it should provide only the functions of efficiently transmitting and routing traffic between end nodes and that all other intelligence should be located at the edge of the network, in the end nodes.
https://en.wikipedia.org/wiki/Internet_protocol_suite
For records of discussions leading up to the TCP/IP split, see the series of Internet Experiment Notes at the Internet Experiment Notes Index. The design of the network included the recognition that it should provide only the functions of efficiently transmitting and routing traffic between end nodes and that all other intelligence should be located at the edge of the network, in the end nodes. This end-to-end principle was pioneered by Louis Pouzin in the CYCLADES network, based on the ideas of Donald Davies. Using this design, it became possible to connect other networks to the ARPANET that used the same principle, irrespective of other local characteristics, thereby solving Kahn's initial internetworking problem. A popular expression is that TCP/IP, the eventual product of Cerf and Kahn's work, can run over "two tin cans and a string." As a joke, the IP over Avian Carriers formal protocol specification was created in 1999 and successfully tested two years later; ten years later still, it was adapted for IPv6. DARPA contracted with BBN Technologies, Stanford University, and University College London to develop operational versions of the protocol on several hardware platforms. During development of the protocol, the version number of the packet routing layer progressed from version 1 to version 4, the latter of which was installed in the ARPANET in 1983.
https://en.wikipedia.org/wiki/Internet_protocol_suite
DARPA contracted with BBN Technologies, Stanford University, and University College London to develop operational versions of the protocol on several hardware platforms. During development of the protocol, the version number of the packet routing layer progressed from version 1 to version 4, the latter of which was installed in the ARPANET in 1983. It became known as Internet Protocol version 4 (IPv4), the version still in use on the Internet, alongside its current successor, Internet Protocol version 6 (IPv6). ### Early implementation In 1975, a two-network IP communications test was performed between Stanford and University College London. In November 1977, a three-network IP test was conducted between sites in the US, the UK, and Norway. Several other IP prototypes were developed at multiple research centers between 1978 and 1983. A computer called a router is provided with an interface to each network. It forwards network packets back and forth between them. Originally a router was called a gateway, but the term was changed to avoid confusion with other types of gateways. ### Adoption In March 1982, the US Department of Defense declared TCP/IP as the standard for all military computer networking. In the same year, NORSAR/NDRE and Peter Kirstein's research group at University College London adopted the protocol.
https://en.wikipedia.org/wiki/Internet_protocol_suite
### Adoption In March 1982, the US Department of Defense declared TCP/IP as the standard for all military computer networking. In the same year, NORSAR/NDRE and Peter Kirstein's research group at University College London adopted the protocol. The migration of the ARPANET from NCP to TCP/IP was officially completed on flag day January 1, 1983, when the new protocols were permanently activated. In 1985, the Internet Advisory Board (later Internet Architecture Board) held a three-day TCP/IP workshop for the computer industry, attended by 250 vendor representatives, promoting the protocol and leading to its increasing commercial use. In 1985, the first Interop conference focused on network interoperability by broader adoption of TCP/IP. The conference was founded by Dan Lynch, an early Internet activist. From the beginning, large corporations, such as IBM and DEC, attended the meeting. IBM, AT&T and DEC were the first major corporations to adopt TCP/IP, this despite having competing proprietary protocols. In IBM, from 1984, Barry Appelman's group did TCP/IP development. They navigated the corporate politics to get a stream of TCP/IP products for various IBM systems, including MVS, VM, and OS/2. At the same time, several smaller companies, such as FTP Software and the Wollongong Group, began offering TCP/IP stacks for DOS and Microsoft Windows.
https://en.wikipedia.org/wiki/Internet_protocol_suite
They navigated the corporate politics to get a stream of TCP/IP products for various IBM systems, including MVS, VM, and OS/2. At the same time, several smaller companies, such as FTP Software and the Wollongong Group, began offering TCP/IP stacks for DOS and Microsoft Windows. The first VM/CMS TCP/IP stack came from the University of Wisconsin. Some of the early TCP/IP stacks were written single-handedly by a few programmers. Jay Elinsky and Oleg Vishnepolsky of IBM Research wrote TCP/IP stacks for VM/CMS and OS/2, respectively. In 1984, Donald Gillies at MIT wrote ntcp, a multi-connection TCP that ran atop the IP/PacketDriver layer maintained by John Romkey at MIT in 1983–84. Romkey leveraged this TCP in 1986 when FTP Software was founded. Starting in 1985, Phil Karn created a multi-connection TCP application for ham radio systems (KA9Q TCP). The spread of TCP/IP was fueled further in June 1989, when the University of California, Berkeley agreed to place the TCP/IP code developed for BSD UNIX into the public domain. Various corporate vendors, including IBM, included this code in commercial TCP/IP software releases.
https://en.wikipedia.org/wiki/Internet_protocol_suite
The spread of TCP/IP was fueled further in June 1989, when the University of California, Berkeley agreed to place the TCP/IP code developed for BSD UNIX into the public domain. Various corporate vendors, including IBM, included this code in commercial TCP/IP software releases. For Windows 3.1, the dominant PC operating system among consumers in the first half of the 1990s, Peter Tattam's release of the Trumpet Winsock TCP/IP stack was key to bringing the Internet to home users. Trumpet Winsock allowed TCP/IP operations over a serial connection (SLIP or PPP). The typical home PC of the time had an external Hayes-compatible modem connected via an RS-232 port with an 8250 or 16550 UART which required this type of stack. Later, Microsoft would release their own TCP/IP add-on stack for Windows for Workgroups 3.11 and a native stack in Windows 95. These events helped cement TCP/IP's dominance over other protocols on Microsoft-based networks, which included IBM's Systems Network Architecture (SNA), and on other platforms such as Digital Equipment Corporation's DECnet, Open Systems Interconnection (OSI), and Xerox Network Systems (XNS).
https://en.wikipedia.org/wiki/Internet_protocol_suite
Later, Microsoft would release their own TCP/IP add-on stack for Windows for Workgroups 3.11 and a native stack in Windows 95. These events helped cement TCP/IP's dominance over other protocols on Microsoft-based networks, which included IBM's Systems Network Architecture (SNA), and on other platforms such as Digital Equipment Corporation's DECnet, Open Systems Interconnection (OSI), and Xerox Network Systems (XNS). Nonetheless, for a period in the late 1980s and early 1990s, engineers, organizations and nations were polarized over the issue of which standard, the OSI model or the Internet protocol suite, would result in the best and most robust computer networks. ### Formal specification and standards The technical standards underlying the Internet protocol suite and its constituent protocols have been delegated to the Internet Engineering Task Force (IETF). The characteristic architecture of the Internet protocol suite is its broad division into operating scopes for the protocols that constitute its core functionality. The defining specifications of the suite are RFC 1122 and RFC 1123, which broadly outline four abstraction layers (as well as the related protocols): the link layer, IP layer, transport layer, and application layer, along with support protocols. These have stood the test of time, as the IETF has never modified this structure. As such a model of networking, the Internet protocol suite predates the OSI model, a more comprehensive reference framework for general networking systems.
https://en.wikipedia.org/wiki/Internet_protocol_suite
These have stood the test of time, as the IETF has never modified this structure. As such a model of networking, the Internet protocol suite predates the OSI model, a more comprehensive reference framework for general networking systems. ## Key architectural principles The end-to-end principle has evolved over time. Its original expression put the maintenance of state and overall intelligence at the edges, and assumed the Internet that connected the edges retained no state and concentrated on speed and simplicity. Real-world needs for firewalls, network address translators, web content caches and the like have forced changes in this principle. The robustness principle states: "In general, an implementation must be conservative in its sending behavior, and liberal in its receiving behavior. That is, it must be careful to send well-formed datagrams, but must accept any datagram that it can interpret (e.g., not object to technical errors where the meaning is still clear)." "The second part of the principle is almost as important: software on other hosts may contain deficiencies that make it unwise to exploit legal but obscure protocol features. " Encapsulation is used to provide abstraction of protocols and services. Encapsulation is usually aligned with the division of the protocol suite into layers of general functionality. In general, an application (the highest level of the model) uses a set of protocols to send its data down the layers.
https://en.wikipedia.org/wiki/Internet_protocol_suite
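As a concrete illustration of this layering, the following minimal Python sketch builds, but does not send, an application payload encapsulated first in a UDP header (transport layer) and then in an IPv4 header (internet layer). The addresses, ports, and payload are placeholder values chosen for the example, and a real stack performs this inside the operating system rather than in user code.

```python
import socket
import struct

def ipv4_header_checksum(header: bytes) -> int:
    """One's-complement sum of 16-bit words (the IPv4 header checksum)."""
    total = 0
    for i in range(0, len(header), 2):
        total += (header[i] << 8) | header[i + 1]
    while total > 0xFFFF:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

# Application layer: the payload some process wants to send (placeholder bytes).
app_data = b"hello from the application layer"

# Transport layer: wrap the payload in a UDP header; ports identify processes.
src_port, dst_port = 40000, 53
udp_header = struct.pack("!HHHH", src_port, dst_port, 8 + len(app_data), 0)  # checksum 0 = omitted (allowed over IPv4)
udp_datagram = udp_header + app_data

# Internet layer: wrap the UDP datagram in an IPv4 header; addresses identify hosts.
src_ip = socket.inet_aton("192.0.2.1")   # documentation/example addresses (TEST-NET-1)
dst_ip = socket.inet_aton("192.0.2.2")
fields = struct.pack(
    "!BBHHHBBH4s4s",
    0x45,                      # version 4, header length 5 x 32-bit words
    0,                         # DSCP / ECN
    20 + len(udp_datagram),    # total length of header plus payload
    0x1234,                    # identification
    0,                         # flags / fragment offset
    64,                        # time to live
    17,                        # protocol number 17 = UDP
    0,                         # checksum placeholder, filled in below
    src_ip,
    dst_ip,
)
checksum = ipv4_header_checksum(fields)
ip_packet = fields[:10] + struct.pack("!H", checksum) + fields[12:] + udp_datagram

print("application data:", len(app_data), "bytes")
print("UDP datagram    :", len(udp_datagram), "bytes")
print("IPv4 packet     :", len(ip_packet), "bytes")
```

The link layer would then add its own framing (for example, an Ethernet header and trailer) before the packet is placed on the wire.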
Encapsulation is usually aligned with the division of the protocol suite into layers of general functionality. In general, an application (the highest level of the model) uses a set of protocols to send its data down the layers. The data is further encapsulated at each level. An early pair of architectural documents, RFC 1122 and RFC 1123, titled Requirements for Internet Hosts, emphasizes architectural principles over layering. RFC 1122/23 are structured in sections referring to layers, but the documents refer to many other architectural principles, and do not emphasize layering. They loosely define a four-layer model, with the layers having names, not numbers, as follows: - The application layer is the scope within which applications, or processes, create user data and communicate this data to other applications on another or the same host. The applications make use of the services provided by the underlying lower layers, especially the transport layer which provides reliable or unreliable pipes to other processes. The communications partners are characterized by the application architecture, such as the client–server model and peer-to-peer networking. This is the layer in which all application protocols, such as SMTP, FTP, SSH, HTTP, operate. Processes are addressed via ports which essentially represent services. -
https://en.wikipedia.org/wiki/Internet_protocol_suite
This is the layer in which all application protocols, such as SMTP, FTP, SSH, HTTP, operate. Processes are addressed via ports which essentially represent services. - The transport layer performs host-to-host communications on either the local network or remote networks separated by routers. It provides a channel for the communication needs of applications. UDP is the basic transport layer protocol, providing an unreliable connectionless datagram service. The Transmission Control Protocol provides flow-control, connection establishment, and reliable transmission of data. - The internet layer exchanges datagrams across network boundaries. It provides a uniform networking interface that hides the actual topology (layout) of the underlying network connections. It is therefore also the layer that establishes internetworking. Indeed, it defines and establishes the Internet. This layer defines the addressing and routing structures used for the TCP/IP protocol suite. The primary protocol in this scope is the Internet Protocol, which defines IP addresses. Its function in routing is to transport datagrams to the next host, functioning as an IP router, that has the connectivity to a network closer to the final data destination. - The link layer defines the networking methods within the scope of the local network link on which hosts communicate without intervening routers.
https://en.wikipedia.org/wiki/Internet_protocol_suite
Its function in routing is to transport datagrams to the next host, functioning as an IP router, that has the connectivity to a network closer to the final data destination. - The link layer defines the networking methods within the scope of the local network link on which hosts communicate without intervening routers. This layer includes the protocols used to describe the local network topology and the interfaces needed to effect the transmission of internet layer datagrams to next-neighbor hosts. ## Link layer The protocols of the link layer operate within the scope of the local network connection to which a host is attached. This regime is called the link in TCP/IP parlance and is the lowest component layer of the suite. The link includes all hosts accessible without traversing a router. The size of the link is therefore determined by the networking hardware design. In principle, TCP/IP is designed to be hardware independent and may be implemented on top of virtually any link-layer technology. This includes not only hardware implementations but also virtual link layers such as virtual private networks and networking tunnels. The link layer is used to move packets between the internet layer interfaces of two different hosts on the same link. The processes of transmitting and receiving packets on the link can be controlled in the device driver for the network card, as well as in firmware or by specialized chipsets.
https://en.wikipedia.org/wiki/Internet_protocol_suite
The link layer is used to move packets between the internet layer interfaces of two different hosts on the same link. The processes of transmitting and receiving packets on the link can be controlled in the device driver for the network card, as well as in firmware or by specialized chipsets. These perform functions, such as framing, to prepare the internet layer packets for transmission, and finally transmit the frames to the physical layer and over a transmission medium. The TCP/IP model includes specifications for translating the network addressing methods used in the Internet Protocol to link-layer addresses, such as media access control (MAC) addresses. All other aspects below that level, however, are implicitly assumed to exist and are not explicitly defined in the TCP/IP model. The link layer in the TCP/IP model has corresponding functions in Layer 2 of the OSI model. ## Internet layer Internetworking requires sending data from the source network to the destination network. This process is called routing and is supported by host addressing and identification using the hierarchical IP addressing system. The internet layer provides an unreliable datagram transmission facility between hosts located on potentially different IP networks by forwarding datagrams to an appropriate next-hop router for further relaying to its destination. The internet layer has the responsibility of sending packets across potentially multiple networks.
https://en.wikipedia.org/wiki/Internet_protocol_suite
The internet layer provides an unreliable datagram transmission facility between hosts located on potentially different IP networks by forwarding datagrams to an appropriate next-hop router for further relaying to its destination. The internet layer has the responsibility of sending packets across potentially multiple networks. With this functionality, the internet layer makes possible internetworking, the interworking of different IP networks, and it essentially establishes the Internet. The internet layer does not distinguish between the various transport layer protocols. IP carries data for a variety of different upper layer protocols. These protocols are each identified by a unique protocol number: for example, Internet Control Message Protocol (ICMP) and Internet Group Management Protocol (IGMP) are protocols 1 and 2, respectively. The Internet Protocol is the principal component of the internet layer, and it defines two addressing systems to identify network hosts and to locate them on the network. The original address system of the ARPANET and its successor, the Internet, is Internet Protocol version 4 (IPv4). It uses a 32-bit IP address and is therefore capable of identifying approximately four billion hosts. This limitation was eliminated in 1998 by the standardization of Internet Protocol version 6 (IPv6) which uses 128-bit addresses. IPv6 production implementations emerged in approximately 2006. ## Transport layer The transport layer establishes basic data channels that applications use for task-specific data exchange.
https://en.wikipedia.org/wiki/Internet_protocol_suite
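As a small illustration of the protocol numbers and address widths mentioned above, the sketch below uses constants exposed by Python's standard socket module, which mirror the IANA protocol-number registry on common platforms:

```python
import socket

# Upper-layer protocols carried by IP are identified by a protocol number.
print(socket.IPPROTO_ICMP, socket.IPPROTO_IGMP)  # 1 2
print(socket.IPPROTO_TCP, socket.IPPROTO_UDP)    # 6 17

# Address-space sizes follow directly from the address widths.
print(f"IPv4 (32-bit):  {2**32:,} addresses")            # about 4.3 billion
print(f"IPv6 (128-bit): {float(2**128):.3e} addresses")  # about 3.4e38
```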
IPv6 production implementations emerged in approximately 2006. ## Transport layer The transport layer establishes basic data channels that applications use for task-specific data exchange. The layer establishes host-to-host connectivity in the form of end-to-end message transfer services that are independent of the underlying network and independent of the structure of user data and the logistics of exchanging information. Connectivity at the transport layer can be categorized as either connection-oriented, implemented in TCP, or connectionless, implemented in UDP. The protocols in this layer may provide error control, segmentation, flow control, congestion control, and application addressing (port numbers). For the purpose of providing process-specific transmission channels for applications, the layer establishes the concept of the network port. This is a numbered logical construct allocated specifically for each of the communication channels an application needs. For many types of services, these port numbers have been standardized so that client computers may address specific services of a server computer without the involvement of service discovery or directory services. Because IP provides only a best-effort delivery, some transport-layer protocols offer reliability.
https://en.wikipedia.org/wiki/Internet_protocol_suite
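As a brief illustration of standardized service ports, Python's socket module can query the local services database, which follows the IANA assignments on most systems (the exact entries available depend on the platform):

```python
import socket

# Service name -> well-known port number (from the system services database).
print(socket.getservbyname("http", "tcp"))    # 80
print(socket.getservbyname("domain", "udp"))  # 53, i.e. DNS
print(socket.getservbyname("smtp", "tcp"))    # 25

# And back again: port number -> service name.
print(socket.getservbyport(443, "tcp"))       # 'https'
```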
For many types of services, these port numbers have been standardized so that client computers may address specific services of a server computer without the involvement of service discovery or directory services. Because IP provides only a best-effort delivery, some transport-layer protocols offer reliability. TCP is a connection-oriented protocol that addresses numerous reliability issues in providing a reliable byte stream: - data arrives in-order - data has minimal error (i.e., correctness) - duplicate data is discarded - lost or discarded packets are resent - includes traffic congestion control The newer Stream Control Transmission Protocol (SCTP) is also a reliable, connection-oriented transport mechanism. It is message-stream-oriented, not byte-stream-oriented like TCP, and provides multiple streams multiplexed over a single connection. It also provides multihoming support, in which a connection end can be represented by multiple IP addresses (representing multiple physical interfaces), such that if one fails, the connection is not interrupted. It was developed initially for telephony applications (to transport SS7 over IP). Reliability can also be achieved by running IP over a reliable data-link protocol such as the High-Level Data Link Control (HDLC). The User Datagram Protocol (UDP) is a connectionless datagram protocol. Like IP, it is a best-effort, unreliable protocol.
https://en.wikipedia.org/wiki/Internet_protocol_suite
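The following minimal localhost sketch (a toy demonstration, not a production pattern) shows the connection-oriented byte-stream service that the TCP properties listed above provide: the client writes three separate chunks, and the server reads them back, in order, as one continuous stream, since TCP preserves ordering but not message boundaries.

```python
import socket
import threading

# Toy localhost demo of TCP's connection-oriented, ordered byte stream.
EXPECTED = b"part1part2part3"

def serve(listener: socket.socket) -> None:
    conn, _addr = listener.accept()          # connection establishment (handshake)
    with conn:
        data = b""
        while len(data) < len(EXPECTED):     # a stream: reads may return any-sized pieces
            data += conn.recv(1024)
        print("server received:", data)      # b'part1part2part3', in order

listener = socket.create_server(("127.0.0.1", 0))   # port 0: let the OS pick a free port
port = listener.getsockname()[1]
thread = threading.Thread(target=serve, args=(listener,))
thread.start()

with socket.create_connection(("127.0.0.1", port)) as client:
    for chunk in (b"part1", b"part2", b"part3"):
        client.sendall(chunk)                # three writes, one ordered byte stream

thread.join()
listener.close()
```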
The User Datagram Protocol (UDP) is a connectionless datagram protocol. Like IP, it is a best-effort, unreliable protocol. Reliability is addressed through error detection using a checksum algorithm. UDP is typically used for applications such as streaming media (audio, video, Voice over IP, etc.) where on-time arrival is more important than reliability, or for simple query/response applications like DNS lookups, where the overhead of setting up a reliable connection is disproportionately large. Real-time Transport Protocol (RTP) is a datagram protocol that is used over UDP and is designed for real-time data such as streaming media. The applications at any given network address are distinguished by their TCP or UDP port. By convention, certain well-known ports are associated with specific applications. The TCP/IP model's transport or host-to-host layer corresponds roughly to the fourth layer in the OSI model, also called the transport layer. QUIC is rapidly emerging as an alternative transport protocol. Whilst it is technically carried via UDP packets it seeks to offer enhanced transport connectivity relative to TCP. HTTP/3 works exclusively via QUIC. ## Application layer The application layer includes the protocols used by most applications for providing user services or exchanging application data over the network connections established by the lower-level protocols.
https://en.wikipedia.org/wiki/Internet_protocol_suite
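By contrast, here is a minimal loopback sketch of UDP's connectionless datagram service (the payload is a placeholder; no real DNS query is involved): no connection is established, and each sendto() call travels as an individual datagram.

```python
import socket

receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))            # port 0: let the OS pick a free port
addr = receiver.getsockname()

sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"placeholder query", addr)  # no handshake, just one datagram

data, peer = receiver.recvfrom(2048)
print(f"received {len(data)} bytes from {peer}")

sender.close()
receiver.close()
```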
HTTP/3 works exclusively via QUIC. ## Application layer The application layer includes the protocols used by most applications for providing user services or exchanging application data over the network connections established by the lower-level protocols. This may include some basic network support services such as routing protocols and host configuration. Examples of application layer protocols include the Hypertext Transfer Protocol (HTTP), the File Transfer Protocol (FTP), the Simple Mail Transfer Protocol (SMTP), and the Dynamic Host Configuration Protocol (DHCP). Data coded according to application layer protocols are encapsulated into transport layer protocol units (such as TCP streams or UDP datagrams), which in turn use lower layer protocols to effect actual data transfer. The TCP/IP model does not consider the specifics of formatting and presenting data and does not define additional layers between the application and transport layers as in the OSI model (presentation and session layers). According to the TCP/IP model, such functions are the realm of libraries and application programming interfaces. The application layer in the TCP/IP model is often compared to a combination of the fifth (session), sixth (presentation), and seventh (application) layers of the OSI model. Application layer protocols are often associated with particular client–server applications, and common services have well-known port numbers reserved by the Internet Assigned Numbers Authority (IANA).
https://en.wikipedia.org/wiki/Internet_protocol_suite
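Encapsulation of application data into transport-layer units can be illustrated by writing an HTTP/1.1 request by hand into a TCP byte stream; the operating system segments it into TCP/IP packets without the application's involvement. This is a sketch only, assuming outbound network access and using example.com purely as a convenient target:

```python
# Sketch: an HTTP/1.1 request is just application-layer bytes handed to a TCP
# socket; TCP, IP, and the link layer below are treated as a black box.
# (Requires outbound network access; example.com is merely a convenient host.)
import socket

request = (
    "GET / HTTP/1.1\r\n"
    "Host: example.com\r\n"
    "Connection: close\r\n"
    "\r\n"
).encode("ascii")                      # application-layer protocol data

with socket.create_connection(("example.com", 80)) as sock:   # transport layer
    sock.sendall(request)              # encapsulated into TCP segments by the OS
    response = b""
    while chunk := sock.recv(4096):
        response += chunk

print(response.split(b"\r\n", 1)[0].decode())   # status line, e.g. "HTTP/1.1 200 OK"
```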
The application layer in the TCP/IP model is often compared to a combination of the fifth (session), sixth (presentation), and seventh (application) layers of the OSI model. Application layer protocols are often associated with particular client–server applications, and common services have well-known port numbers reserved by the Internet Assigned Numbers Authority (IANA). For example, the HyperText Transfer Protocol uses server port 80 and Telnet uses server port 23. Clients connecting to a service usually use ephemeral ports, i.e., port numbers assigned only for the duration of the transaction at random or from a specific range configured in the application. At the application layer, the TCP/IP model distinguishes between user protocols and support protocols. Support protocols provide services to a system of network infrastructure. User protocols are used for actual user applications. For example, FTP is a user protocol and DNS is a support protocol. Although the applications are usually aware of key qualities of the transport layer connection such as the endpoint IP addresses and port numbers, application layer protocols generally treat the transport layer (and lower) protocols as black boxes which provide a stable network connection across which to communicate. The transport layer and lower-level layers are unconcerned with the specifics of application layer protocols. Routers and switches do not typically examine the encapsulated traffic, rather they just provide a conduit for it.
https://en.wikipedia.org/wiki/Internet_protocol_suite
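A small sketch of the well-known versus ephemeral port convention follows. The service-name lookups read the local services database (e.g. /etc/services on Unix-like systems) and may fail on hosts without those entries; binding to port 0 asks the operating system for an ephemeral port:

```python
# Sketch: well-known server ports versus ephemeral client ports.
import socket

# Service-name lookups come from the local services database (e.g. /etc/services)
for name in ("http", "telnet", "smtp", "domain"):
    print(f"{name:7s} -> port {socket.getservbyname(name, 'tcp')}")   # 80, 23, 25, 53

# A client normally does not choose its own port: binding to port 0 asks the
# OS to assign an ephemeral one for the duration of the connection.
cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.bind(("127.0.0.1", 0))
print("ephemeral client port:", cli.getsockname()[1])
cli.close()
```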
The transport layer and lower-level layers are unconcerned with the specifics of application layer protocols. Routers and switches do not typically examine the encapsulated traffic; rather, they just provide a conduit for it. However, some firewall and bandwidth throttling applications use deep packet inspection to interpret application data. An example is the Resource Reservation Protocol (RSVP). It is also sometimes necessary for applications affected by NAT to consider the application payload.
## Layering evolution and representations in the literature
The Internet protocol suite evolved through research and development funded over a period of time. In this process, the specifics of protocol components and their layering changed. In addition, parallel research and commercial interests from industry associations competed with design features. In particular, efforts in the International Organization for Standardization led to a similar goal, but with a wider scope of networking in general. Efforts to consolidate the two principal schools of layering, which were superficially similar but diverged sharply in detail, led independent textbook authors to formulate abridging teaching tools. The following table shows various such networking models. The number of layers varies between three and seven.
https://en.wikipedia.org/wiki/Internet_protocol_suite
| Arpanet Reference Model (RFC 871) | Internet Standard (RFC 1122) | Internet model (Cisco Academy) | TCP/IP 5-layer reference model (Kozierok, Comer) | TCP/IP 5-layer reference model (Tanenbaum) | TCP/IP protocol suite or Five-layer Internet model (Forouzan, Kurose) | TCP/IP model (Stallings) | OSI model (ISO/IEC 7498-1:1994) |
|---|---|---|---|---|---|---|---|
| Three layers | Four layers | Four layers | Four+one layers | Five layers | Five layers | Five layers | Seven layers |
| Application/Process | Application | Application | Application | Application | Application | Application | Application |
| | | | | | | | Presentation |
| | | | | | | | Session |
| Host-to-host | Transport | Transport | Transport | Transport | Transport | Host-to-host or transport | Transport |
| | Internet | Internetwork | Internet | Internet | Network | Internet | Network |
| Network interface | Link | Network interface | Data link (Network interface) | Data link | Data link | Network access | Data link |
| | | | (Hardware) | Physical | Physical | Physical | Physical |

Some of the networking models are from textbooks, which are secondary sources that may conflict with the intent of RFC 1122 and other IETF primary sources.
## Comparison of TCP/IP and OSI layering
The three top layers in the OSI model, i.e. the application layer, the presentation layer and the session layer, are not distinguished separately in the TCP/IP model which only has an application layer above the transport layer.
https://en.wikipedia.org/wiki/Internet_protocol_suite
Some of the networking models are from textbooks, which are secondary sources that may conflict with the intent of RFC 1122 and other IETF primary sources.
## Comparison of TCP/IP and OSI layering
The three top layers in the OSI model, i.e. the application layer, the presentation layer and the session layer, are not distinguished separately in the TCP/IP model which only has an application layer above the transport layer. While some pure OSI protocol applications, such as X.400, also combined them, there is no requirement that a TCP/IP protocol stack must impose monolithic architecture above the transport layer. For example, the NFS application protocol runs over the External Data Representation (XDR) presentation protocol, which, in turn, runs over a protocol called Remote Procedure Call (RPC). RPC provides reliable record transmission, so it can safely use the best-effort UDP transport. Different authors have interpreted the TCP/IP model differently, and disagree whether the link layer, or any aspect of the TCP/IP model, covers OSI layer 1 (physical layer) issues, or whether TCP/IP assumes a hardware layer exists below the link layer.
https://en.wikipedia.org/wiki/Internet_protocol_suite
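How a higher layer can add its own reliability on top of best-effort UDP, in the spirit of the RPC example above, can be sketched with a simple stop-and-wait request/reply scheme. This is an illustration of the idea only, not the actual ONC RPC wire format; the transaction-id framing, server address, and payload below are invented for the example:

```python
# Illustrative sketch (not the actual ONC RPC wire format): a request/reply
# layer adds its own reliability over best-effort UDP by tagging each record
# with a transaction id and retransmitting until a matching reply arrives.
import socket

def call(sock, server_addr, xid, payload, retries=5, timeout=1.0):
    sock.settimeout(timeout)
    record = xid.to_bytes(4, "big") + payload        # record = id + body
    for _ in range(retries):
        sock.sendto(record, server_addr)             # request may be lost
        try:
            data, _ = sock.recvfrom(65535)
        except socket.timeout:
            continue                                 # lost request or reply: retry
        if data[:4] == record[:4]:                   # match reply to request id
            return data[4:]
    raise TimeoutError(f"no reply after {retries} attempts")

# Usage sketch, assuming some UDP responder at a hypothetical address:
# sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# reply = call(sock, ("192.0.2.1", 5000), xid=1, payload=b"read /etc/motd")
```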
RPC provides reliable record transmission, so it can safely use the best-effort UDP transport. Different authors have interpreted the TCP/IP model differently, and disagree whether the link layer, or any aspect of the TCP/IP model, covers OSI layer 1 (physical layer) issues, or whether TCP/IP assumes a hardware layer exists below the link layer. Several authors have attempted to incorporate the OSI model's layers 1 and 2 into the TCP/IP model since these are commonly referred to in modern standards (for example, by IEEE and ITU). This often results in a model with five layers, where the link layer or network access layer is split into the OSI model's layers 1 and 2. The IETF protocol development effort is not concerned with strict layering. Some of its protocols may not fit cleanly into the OSI model, although RFCs sometimes refer to it and often use the old OSI layer numbers. The IETF has repeatedly stated that Internet Protocol and architecture development is not intended to be OSI-compliant. RFC 3439, referring to the internet architecture, contains a section entitled: "Layering Considered Harmful". For example, the session and presentation layers of the OSI suite are considered to be included in the application layer of the TCP/IP suite.
https://en.wikipedia.org/wiki/Internet_protocol_suite
RFC 3439, referring to the internet architecture, contains a section entitled: "Layering Considered Harmful". For example, the session and presentation layers of the OSI suite are considered to be included in the application layer of the TCP/IP suite. The functionality of the session layer can be found in protocols like HTTP and SMTP and is more evident in protocols like Telnet and the Session Initiation Protocol (SIP). Session-layer functionality is also realized with the port numbering of the TCP and UDP protocols, which are included in the transport layer of the TCP/IP suite. Functions of the presentation layer are realized in the TCP/IP applications with the MIME standard in data exchange. Another difference is in the treatment of routing protocols. The OSI routing protocol IS-IS belongs to the network layer, and does not depend on CLNS for delivering packets from one router to another, but defines its own layer-3 encapsulation. In contrast, OSPF, RIP, BGP and other routing protocols defined by the IETF are transported over IP, and, for the purpose of sending and receiving routing protocol packets, routers act as hosts. As a consequence, routing protocols are included in the application layer.
https://en.wikipedia.org/wiki/Internet_protocol_suite
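As a concrete illustration of presentation-layer functionality living inside an application, MIME handling can be done entirely with an application library. The sketch below uses Python's standard email package; the addresses and attachment are placeholders:

```python
# Sketch: presentation-layer concerns (character sets, content types, binary
# encodings) handled inside the application via MIME, using Python's standard
# email package.  Addresses and the attachment are placeholders.
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "alice@example.org"
msg["To"] = "bob@example.org"
msg["Subject"] = "MIME as presentation-layer functionality"
msg.set_content("Plain text body, declared with a charset by MIME.")
msg.add_attachment(bytes(range(8)),                    # arbitrary binary data
                   maintype="application", subtype="octet-stream",
                   filename="blob.bin")                # base64-encoded for transport

print(msg.get_content_type())          # multipart/mixed
print(msg.as_string()[:300])           # headers + encoded parts, ready for SMTP
```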
In contrast, OSPF, RIP, BGP and other routing protocols defined by the IETF are transported over IP, and, for the purpose of sending and receiving routing protocol packets, routers act as hosts. As a consequence, routing protocols are included in the application layer. Some authors, such as Tanenbaum in Computer Networks, describe routing protocols in the same layer as IP, reasoning that routing protocols inform decisions made by the forwarding process of routers. IETF protocols can be encapsulated recursively, as demonstrated by tunnelling protocols such as Generic Routing Encapsulation (GRE). GRE uses the same mechanism that OSI uses for tunnelling at the network layer.
## Implementations
The Internet protocol suite does not presume any specific hardware or software environment. It only requires that hardware and a software layer exist that are capable of sending and receiving packets on a computer network. As a result, the suite has been implemented on essentially every computing platform. A minimal implementation of TCP/IP includes the following: Internet Protocol (IP), Address Resolution Protocol (ARP), Internet Control Message Protocol (ICMP), Transmission Control Protocol (TCP), User Datagram Protocol (UDP), and Internet Group Management Protocol (IGMP).
https://en.wikipedia.org/wiki/Internet_protocol_suite
As a result, the suite has been implemented on essentially every computing platform. A minimal implementation of TCP/IP includes the following: Internet Protocol (IP), Address Resolution Protocol (ARP), Internet Control Message Protocol (ICMP), Transmission Control Protocol (TCP), User Datagram Protocol (UDP), and Internet Group Management Protocol (IGMP). In addition to IP, ICMP, TCP, and UDP, Internet Protocol version 6 requires Neighbor Discovery Protocol (NDP), ICMPv6, and Multicast Listener Discovery (MLD), and is often accompanied by an integrated IPSec security layer.
https://en.wikipedia.org/wiki/Internet_protocol_suite
Nuclear fission products are the atomic fragments left after a large atomic nucleus undergoes nuclear fission. Typically, a large nucleus like that of uranium fissions by splitting into two smaller nuclei, along with a few neutrons, the release of heat energy (kinetic energy of the nuclei), and gamma rays. The two smaller nuclei are the fission products. (See also Fission products (by element)). About 0.2% to 0.4% of fissions are ternary fissions, producing a third light nucleus such as helium-4 (90%) or tritium (7%). The fission products themselves are usually unstable and therefore radioactive. Due to being relatively neutron-rich for their atomic number, many of them quickly undergo beta decay. This releases additional energy in the form of beta particles, antineutrinos, and gamma rays. Thus, fission events normally result in beta and additional gamma radiation that begins immediately after, even though this radiation is not produced directly by the fission event itself. The produced radionuclides have varying half-lives, and therefore vary in radioactivity. For instance, strontium-89 and strontium-90 are produced in similar quantities in fission, and each nucleus decays by beta emission.
https://en.wikipedia.org/wiki/Nuclear_fission_product
The produced radionuclides have varying half-lives, and therefore vary in radioactivity. For instance, strontium-89 and strontium-90 are produced in similar quantities in fission, and each nucleus decays by beta emission. But 90Sr has a 30-year half-life, and 89Sr a 50.5-day half-life. Thus in the 50.5 days it takes half the 89Sr atoms to decay, emitting the same number of beta particles as there were decays, less than 0.4% of the 90Sr atoms have decayed, emitting only 0.4% of the betas. The radioactive emission rate is highest for the shortest lived radionuclides, although they also decay the fastest. Additionally, less stable fission products are less likely to decay to stable nuclides, instead decaying to other radionuclides, which undergo further decay and radiation emission, adding to the radiation output. It is these short lived fission products that are the immediate hazard of spent fuel, and the energy output of the radiation also generates significant heat which must be considered when storing spent fuel. As there are hundreds of different radionuclides created, the initial radioactivity level fades quickly as short lived radionuclides decay, but never ceases completely as longer lived radionuclides make up more and more of the remaining unstable atoms.
https://en.wikipedia.org/wiki/Nuclear_fission_product
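This comparison can be checked with the exponential decay law, using the half-lives quoted above; a back-of-the-envelope sketch rather than a precise nuclear-data calculation:

$$ N(t) = N_0 \, 2^{-t/T_{1/2}} $$

For 89Sr, t = 50.5 days is exactly one half-life, so the fraction decayed is 1 − 2⁻¹ = 50%. For 90Sr over the same 50.5 days, with T½ ≈ 30 years ≈ 1.1 × 10⁴ days, the fraction decayed is 1 − 2^(−50.5/11000) ≈ 0.3%, consistent with the "less than 0.4%" figure above.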
It is these short lived fission products that are the immediate hazard of spent fuel, and the energy output of the radiation also generates significant heat which must be considered when storing spent fuel. As there are hundreds of different radionuclides created, the initial radioactivity level fades quickly as short lived radionuclides decay, but never ceases completely as longer lived radionuclides make up more and more of the remaining unstable atoms. In fact the short lived products are so predominant that 87 percent decay to stable isotopes within the first month after removal from the reactor core. ## Formation and decay The sum of the atomic mass of the two atoms produced by the fission of one fissile atom is always less than the atomic mass of the original atom. This is because some of the mass is lost as free neutrons, and once kinetic energy of the fission products has been removed (i.e., the products have been cooled to extract the heat provided by the reaction), then the mass associated with this energy is lost to the system also, and thus appears to be "missing" from the cooled fission products.
https://en.wikipedia.org/wiki/Nuclear_fission_product
## Formation and decay The sum of the atomic mass of the two atoms produced by the fission of one fissile atom is always less than the atomic mass of the original atom. This is because some of the mass is lost as free neutrons, and once kinetic energy of the fission products has been removed (i.e., the products have been cooled to extract the heat provided by the reaction), then the mass associated with this energy is lost to the system also, and thus appears to be "missing" from the cooled fission products. Since the nuclei that can readily undergo fission are particularly neutron-rich (e.g. 61% of the nucleons in uranium-235 are neutrons), the initial fission products are often more neutron-rich than stable nuclei of the same mass as the fission product (e.g. stable zirconium-90 is 56% neutrons compared to unstable strontium-90 at 58%). The initial fission products therefore may be unstable and typically undergo beta decay to move towards a stable configuration, converting a neutron to a proton with each beta emission. (Most fission products do not decay via alpha decay.)
https://en.wikipedia.org/wiki/Nuclear_fission_product
The initial fission products therefore may be unstable and typically undergo beta decay to move towards a stable configuration, converting a neutron to a proton with each beta emission. (Most fission products do not decay via alpha decay.) A few neutron-rich and short-lived initial fission products decay by ordinary beta decay (this is the source of perceptible half life, typically a few tenths of a second to a few seconds), followed by immediate emission of a neutron by the excited daughter-product. This process is the source of so-called delayed neutrons, which play an important role in control of a nuclear reactor. The first beta decays are rapid and may release high energy beta particles or gamma radiation. However, as the fission products approach stable nuclear conditions, the last one or two decays may have a long half-life and release less energy. ## Radioactivity over time Fission products have half-lives of 90 years (samarium-151) or less, except for seven long-lived fission products that have half lives of 211,100 years (technetium-99) or more. Therefore, the total radioactivity of a mixture of pure fission products decreases rapidly for the first several hundred years (controlled by the short-lived products) before stabilizing at a low level that changes little for hundreds of thousands of years (controlled by the seven long-lived products).
https://en.wikipedia.org/wiki/Nuclear_fission_product
## Radioactivity over time Fission products have half-lives of 90 years (samarium-151) or less, except for seven long-lived fission products that have half lives of 211,100 years (technetium-99) or more. Therefore, the total radioactivity of a mixture of pure fission products decreases rapidly for the first several hundred years (controlled by the short-lived products) before stabilizing at a low level that changes little for hundreds of thousands of years (controlled by the seven long-lived products). This behavior of pure fission products with actinides removed, contrasts with the decay of fuel that still contains actinides. This fuel is produced in the so-called "open" (i.e., no nuclear reprocessing) nuclear fuel cycle. A number of these actinides have half lives in the missing range of about 100 to 200,000 years, causing some difficulty with storage plans in this time-range for open cycle non-reprocessed fuels. Proponents of nuclear fuel cycles which aim to consume all their actinides by fission, such as the Integral Fast Reactor and molten salt reactor, use this fact to claim that within 200 years, their fuel wastes are no more radioactive than the original uranium ore. Fission products primarily emit beta radiation, while actinides primarily emit alpha radiation. Many of each also emit gamma radiation.
https://en.wikipedia.org/wiki/Nuclear_fission_product
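The fast early drop followed by a long, nearly flat tail can be sketched by summing a few exponentials. The nuclides and half-lives below are those mentioned in this article, but the relative initial activities are invented, order-of-magnitude values chosen only for illustration, not evaluated decay data:

```python
# Schematic sketch of the total activity of a fission-product mixture as a sum
# of exponentials.  Half-lives are the ones quoted in this article; the
# relative initial activities are invented, order-of-magnitude values chosen
# only to show the fast early drop and the long flat technetium-99 tail.
import math

components = [            # (nuclide, half-life in years, assumed relative activity)
    ("Ce-144", 0.78,     1.0),
    ("Sr-90",  30.0,     0.05),
    ("Cs-137", 30.0,     0.05),
    ("Tc-99",  211_100,  1e-6),
]

def total_activity(t_years):
    return sum(a0 * math.exp(-math.log(2) * t_years / t_half)
               for _, t_half, a0 in components)

for t in (0, 1, 10, 100, 1_000, 100_000):
    print(f"t = {t:>7} y   relative activity = {total_activity(t):.3e}")
```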
Fission products primarily emit beta radiation, while actinides primarily emit alpha radiation. Many of each also emit gamma radiation.
## Yield
Each fission of a parent atom produces a different set of fission product atoms. However, while an individual fission is not predictable, the fission products are statistically predictable. The amount of any particular isotope produced per fission is called its yield, typically expressed as percent per parent fission; therefore, yields total to 200%, not 100%. (The true total is in fact slightly greater than 200%, owing to rare cases of ternary fission.) While fission products include every element from zinc through the lanthanides, the majority of the fission products occur in two peaks. One peak occurs at about strontium to ruthenium (atomic masses 85 through 105), while the other peak is at about tellurium to neodymium (atomic masses 130 through 145). The yield is somewhat dependent on the parent atom and also on the energy of the initiating neutron. In general, the higher the energy of the state that undergoes nuclear fission, the more likely it is that the two fission products have similar mass. Hence, as the neutron energy increases and/or the energy of the fissile atom increases, the valley between the two peaks becomes shallower.
https://en.wikipedia.org/wiki/Nuclear_fission_product
In general the higher the energy of the state that undergoes nuclear fission, the more likely that the two fission products have similar mass. Hence, as the neutron energy increases and/or the energy of the fissile atom increases, the valley between the two peaks becomes more shallow. For instance, the curve of yield against mass for 239Pu has a more shallow valley than that observed for 235U when the neutrons are thermal neutrons. The curves for the fission of the later actinides tend to make even more shallow valleys. In extreme cases such as 259Fm, only one peak is seen; this is a consequence of symmetric fission becoming dominant due to shell effects. The adjacent figure shows a typical fission product distribution from the fission of uranium. Note that in the calculations used to make this graph, the activation of fission products was ignored and the fission was assumed to occur in a single moment rather than a length of time. In this bar chart results are shown for different cooling times (time after fission). Because of the stability of nuclei with even numbers of protons and/or neutrons, the curve of yield against element is not a smooth curve but tends to alternate. Note that the curve against mass number is smooth.
https://en.wikipedia.org/wiki/Nuclear_fission_product
Because of the stability of nuclei with even numbers of protons and/or neutrons, the curve of yield against element is not a smooth curve but tends to alternate. Note that the curve against mass number is smooth. ## Production Small amounts of fission products are naturally formed as the result of either spontaneous fission of natural uranium, which occurs at a low rate, or as a result of neutrons from radioactive decay or reactions with cosmic ray particles. The microscopic tracks left by these fission products in some natural minerals (mainly apatite and zircon) are used in fission track dating to provide the cooling (crystallization) ages of natural rocks. The technique has an effective dating range of 0.1 Ma to >1.0 Ga depending on the mineral used and the concentration of uranium in that mineral. About 1.5 billion years ago in a uranium ore body in Africa, a natural nuclear fission reactor operated for a few hundred thousand years and produced approximately 5 tonnes of fission products. These fission products were important in providing proof that the natural reactor had occurred. Fission products are produced in nuclear weapon explosions, with the amount depending on the type of weapon. The largest source of fission products is from nuclear reactors. In current nuclear power reactors, about 3% of the uranium in the fuel is converted into fission products as a by-product of energy generation.
https://en.wikipedia.org/wiki/Nuclear_fission_product
The largest source of fission products is from nuclear reactors. In current nuclear power reactors, about 3% of the uranium in the fuel is converted into fission products as a by-product of energy generation. Most of these fission products remain in the fuel unless there is fuel element failure or a nuclear accident, or the fuel is reprocessed.
### Power reactors
Commercial nuclear fission reactors are operated in the otherwise self-extinguishing prompt subcritical state. Certain fission products decay over seconds to minutes, producing additional delayed neutrons crucial to sustaining criticality. An example is bromine-87, with a half-life of about a minute. Operating in this delayed critical state, power changes slowly enough to permit human and automatic control. Analogous to fire dampers varying the movement of wood embers towards new fuel, control rods are moved as the nuclear fuel burns up over time. It is impossible for a reactor to explode like a nuclear weapon; these weapons contain very special materials in very particular configurations, neither of which are present in a nuclear reactor. In a nuclear power reactor, the main sources of radioactivity are fission products along with actinides and activation products. Fission products are most of the radioactivity for the first several hundred years, while actinides dominate roughly 10³ to 10⁵ years after fuel use.
https://en.wikipedia.org/wiki/Nuclear_fission_product
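A sketch of why delayed neutrons make such control possible, using the commonly quoted delayed-neutron fraction β ≈ 0.0065 for thermal fission of uranium-235 (a typical textbook value, assumed here):

$$ k_{\text{prompt}} = k_{\text{eff}}\,(1 - \beta) $$

At delayed criticality, k_eff = 1, so k_prompt ≈ 0.9935 < 1: the chain reaction cannot be sustained by prompt neutrons alone, and power therefore responds on the timescale of the delayed-neutron precursors (seconds to about a minute, e.g. bromine-87) rather than on the prompt-neutron generation time of roughly 10⁻⁵ to 10⁻⁴ seconds, slow enough for mechanical control rods to act.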
In a nuclear power reactor, the main sources of radioactivity are fission products along with actinides and activation products. Fission products are most of the radioactivity for the first several hundred years, while actinides dominate roughly 10³ to 10⁵ years after fuel use. Most fission products are retained near their points of production. They are important to reactor operation not only because some contribute delayed neutrons useful for reactor control, but also because some are neutron poisons that inhibit the nuclear reaction. Buildup of neutron poisons is a key factor in how long a given fuel element can be kept in the reactor. Fission product decay also generates heat that continues even after the reactor has been shut down and fission stopped. This decay heat requires removal after shutdown; loss of this cooling damaged the reactors at Three Mile Island and Fukushima. If the fuel cladding around the fuel develops holes, fission products can leak into the primary coolant. Depending on the chemistry, they may settle within the reactor core or travel through the coolant system; chemistry control systems are provided to remove them. In a well-designed power reactor running under normal conditions, coolant radioactivity is very low. The isotope responsible for most of the gamma exposure in fuel reprocessing plants (and the Chernobyl site in 2005) is caesium-137.
https://en.wikipedia.org/wiki/Nuclear_fission_product
In a well-designed power reactor running under normal conditions, coolant radioactivity is very low. The isotope responsible for most of the gamma exposure in fuel reprocessing plants (and the Chernobyl site in 2005) is caesium-137. Iodine-129 is a major radioactive isotope released from reprocessing plants. In nuclear reactors, both caesium-137 and strontium-90 are found in locations away from the fuel because they are formed by the beta decay of noble gases (xenon-137, with a 3.8-minute half-life, and krypton-90, with a 32-second half-life), which enables them to be deposited away from the fuel, e.g. on control rods.
#### Nuclear reactor poisons
Some fission products decay with the release of delayed neutrons, important to nuclear reactor control. Other fission products, such as xenon-135 and samarium-149, have a high neutron absorption cross section. Since a nuclear reactor must balance neutron production and absorption rates, fission products that absorb neutrons tend to "poison" or shut the reactor down; this is controlled with burnable poisons and control rods. Build-up of xenon-135 during shutdown or low-power operation may poison the reactor enough to impede restart or interfere with normal control of the reaction during restart or restoration of full power.
https://en.wikipedia.org/wiki/Nuclear_fission_product
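The post-shutdown build-up of xenon-135 comes from the decay of its iodine-135 precursor while the neutron flux that normally burns the xenon off is absent. The sketch below solves the two-member decay chain with typical textbook half-lives (about 6.6 h for 135I and 9.1 h for 135Xe) and an assumed, purely illustrative initial iodine-to-xenon inventory ratio; neutron burn-off is ignored:

```python
# Sketch of the iodine-135 -> xenon-135 decay chain after shutdown, which is
# why Xe-135 poisoning peaks hours after power is reduced.  Half-lives
# (~6.6 h for I-135, ~9.1 h for Xe-135) are typical textbook values; the
# initial iodine-to-xenon ratio is an assumed illustration, and burn-off of
# Xe-135 by the (now absent) neutron flux is ignored.
import math

LAMBDA_I  = math.log(2) / 6.6    # decay constant of I-135, per hour
LAMBDA_XE = math.log(2) / 9.1    # decay constant of Xe-135, per hour

def xenon(t, i0=3.0, xe0=1.0):
    """Bateman solution: relative Xe-135 inventory t hours after shutdown."""
    grow_in = i0 * LAMBDA_I / (LAMBDA_XE - LAMBDA_I) * (
        math.exp(-LAMBDA_I * t) - math.exp(-LAMBDA_XE * t))
    return xe0 * math.exp(-LAMBDA_XE * t) + grow_in

for t in (0, 5, 10, 20, 40, 80):
    print(f"{t:>3} h after shutdown: Xe-135 = {xenon(t):.2f} (relative units)")
```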
Since a nuclear reactor must balance neutron production and absorption rates, fission products that absorb neutrons tend to "poison" or shut the reactor down; this is controlled with burnable poisons and control rods. Build-up of xenon-135 during shutdown or low-power operation may poison the reactor enough to impede restart or interfere with normal control of the reaction during restart or restoration of full power. This played a major role in the Chernobyl disaster. ### Nuclear weapons Nuclear weapons use fission as either the partial or the main energy source. Depending on the weapon design and where it is exploded, the relative importance of the fission product radioactivity will vary compared to the activation product radioactivity in the total fallout radioactivity. The immediate fission products from nuclear weapon fission are essentially the same as those from any other fission source, depending slightly on the particular nuclide that is fissioning. However, the very short time scale for the reaction makes a difference in the particular mix of isotopes produced from an atomic bomb. For example, the 134Cs/137Cs ratio provides an easy method of distinguishing between fallout from a bomb and the fission products from a power reactor. Almost no caesium-134 is formed by nuclear fission (because xenon-134 is stable).
https://en.wikipedia.org/wiki/Nuclear_fission_product
For example, the 134Cs/137Cs ratio provides an easy method of distinguishing between fallout from a bomb and the fission products from a power reactor. Almost no caesium-134 is formed by nuclear fission (because xenon-134 is stable). The 134Cs is formed by the neutron activation of the stable 133Cs which is formed by the decay of isotopes in the isobar (A = 133). So in a momentary criticality, by the time that the neutron flux becomes zero too little time will have passed for any 133Cs to be present. While in a power reactor plenty of time exists for the decay of the isotopes in the isobar to form 133Cs, the 133Cs thus formed can then be activated to form 134Cs only if the time between the start and the end of the criticality is long. According to Jiri Hala's textbook, the radioactivity in the fission product mixture in an atom bomb is mostly caused by short-lived isotopes such as iodine-131 and barium-140. After about four months, cerium-141, zirconium-95/niobium-95, and strontium-89 represent the largest share of radioactive material.
https://en.wikipedia.org/wiki/Nuclear_fission_product
According to Jiri Hala's textbook, the radioactivity in the fission product mixture in an atom bomb is mostly caused by short-lived isotopes such as iodine-131 and barium-140. After about four months, cerium-141, zirconium-95/niobium-95, and strontium-89 represent the largest share of radioactive material. After two to three years, cerium-144/praseodymium-144, ruthenium-106/rhodium-106, and promethium-147 are responsible for the bulk of the radioactivity. After a few years, the radiation is dominated by strontium-90 and caesium-137, whereas in the period between 10,000 and a million years it is technetium-99 that dominates.
### Application
Some fission products (such as 137Cs) are used in medical and industrial radioactive sources. The 99TcO4− (pertechnetate) ion can react with steel surfaces to form a corrosion-resistant layer. In this way these metal oxo anions act as anodic corrosion inhibitors, rendering the steel surface passive. The formation of 99TcO2 on steel surfaces is one effect which will retard the release of 99Tc from nuclear waste drums and nuclear equipment which has become lost prior to decontamination (e.g. nuclear submarine reactors which have been lost at sea).
https://en.wikipedia.org/wiki/Nuclear_fission_product
In this way these metal oxo anions act as anodic corrosion inhibitors, rendering the steel surface passive. The formation of 99TcO2 on steel surfaces is one effect which will retard the release of 99Tc from nuclear waste drums and nuclear equipment which has become lost prior to decontamination (e.g. nuclear submarine reactors which have been lost at sea). In a similar way, the release of radio-iodine in a serious power reactor accident could be retarded by adsorption on metal surfaces within the nuclear plant. Much of the other work on the iodine chemistry that would occur during a severe accident has been done.
## Decay
For fission of uranium-235, the predominant radioactive fission products include isotopes of iodine, caesium, strontium, xenon and barium. The threat becomes smaller with the passage of time. Locations where radiation fields once posed immediate mortal threats, such as much of the Chernobyl Nuclear Power Plant on day one of the accident and the ground zero sites of U.S. atomic bombings in Japan (6 hours after detonation), are now relatively safe because the radioactivity has decreased to a low level. Many of the fission products decay through very short-lived isotopes to form stable isotopes, but a considerable number of the radioisotopes have half-lives longer than a day.
https://en.wikipedia.org/wiki/Nuclear_fission_product
Locations where radiation fields once posed immediate mortal threats, such as much of the Chernobyl Nuclear Power Plant on day one of the accident and the ground zero sites of U.S. atomic bombings in Japan (6 hours after detonation) are now relatively safe because the radioactivity has decreased to a low level. Many of the fission products decay through very short-lived isotopes to form stable isotopes, but a considerable number of the radioisotopes have half-lives longer than a day. The radioactivity in the fission product mixture is initially mostly caused by short lived isotopes such as 131I and 140Ba; after about four months 141Ce, 95Zr/95Nb and 89Sr take the largest share, while after about two or three years the largest share is taken by 144Ce/144Pr, 106Ru/106Rh and 147Pm. Later 90Sr and 137Cs are the main radioisotopes, being succeeded by 99Tc. In the case of a release of radioactivity from a power reactor or used fuel, only some elements are released; as a result, the isotopic signature of the radioactivity is very different from an open air nuclear detonation, where all the fission products are dispersed. ## Fallout countermeasures The purpose of radiological emergency preparedness is to protect people from the effects of radiation exposure after a nuclear accident or bomb. Evacuation is the most effective protective measure.
https://en.wikipedia.org/wiki/Nuclear_fission_product
## Fallout countermeasures
The purpose of radiological emergency preparedness is to protect people from the effects of radiation exposure after a nuclear accident or bomb. Evacuation is the most effective protective measure. However, if evacuation is impossible or even uncertain, then local fallout shelters and other measures provide the best protection.
### Iodine
At least three isotopes of iodine are important: 129I, 131I (radioiodine) and 132I. Open air nuclear testing and the Chernobyl disaster both released iodine-131. The short-lived isotopes of iodine are particularly harmful because the thyroid collects and concentrates iodide – radioactive as well as stable. Absorption of radioiodine can lead to acute, chronic, and delayed effects. Acute effects from high doses include thyroiditis, while chronic and delayed effects include hypothyroidism, thyroid nodules, and thyroid cancer. It has been shown that the active iodine released from Chernobyl and Mayak has resulted in an increase in the incidence of thyroid cancer in the former Soviet Union. One measure which protects against the risk from radio-iodine is taking a dose of potassium iodide (KI) before exposure to radioiodine. The non-radioactive iodide "saturates" the thyroid, causing less of the radioiodine to be stored in the body.
https://en.wikipedia.org/wiki/Nuclear_fission_product
One measure which protects against the risk from radio-iodine is taking a dose of potassium iodide (KI) before exposure to radioiodine. The non-radioactive iodide "saturates" the thyroid, causing less of the radioiodine to be stored in the body. Administering potassium iodide reduces the effects of radio-iodine by 99% and is a prudent, inexpensive supplement to fallout shelters. A low-cost alternative to commercially available iodine pills is a saturated solution of potassium iodide. Long-term storage of KI is normally in the form of reagent-grade crystals. The administration of known goitrogen substances can also be used as a prophylaxis in reducing the bio-uptake of iodine (whether it be the nutritional non-radioactive iodine-127 or radioactive iodine, radioiodine - most commonly iodine-131, as the body cannot discern between different iodine isotopes). Perchlorate ions, a common water contaminant in the USA due to the aerospace industry, have been shown to reduce iodine uptake and thus are classified as a goitrogen. Perchlorate ions are a competitive inhibitor of the process by which iodide is actively deposited into thyroid follicular cells.
https://en.wikipedia.org/wiki/Nuclear_fission_product
Perchlorate ions, a common water contaminant in the USA due to the aerospace industry, have been shown to reduce iodine uptake and thus are classified as a goitrogen. Perchlorate ions are a competitive inhibitor of the process by which iodide is actively deposited into thyroid follicular cells. Studies involving healthy adult volunteers determined that at levels above 0.007 milligrams per kilogram per day (mg/(kg·d)), perchlorate begins to temporarily inhibit the thyroid gland's ability to absorb iodine from the bloodstream ("iodide uptake inhibition", thus perchlorate is a known goitrogen). The reduction of the iodide pool by perchlorate has dual effects – reduction of excess hormone synthesis and hyperthyroidism, on the one hand, and reduction of thyroid inhibitor synthesis and hypothyroidism on the other. Perchlorate remains very useful as a single-dose application in tests measuring the discharge of radioiodide accumulated in the thyroid as a result of many different disruptions in the further metabolism of iodide in the thyroid gland.
https://en.wikipedia.org/wiki/Nuclear_fission_product
The reduction of the iodide pool by perchlorate has dual effects – reduction of excess hormone synthesis and hyperthyroidism, on the one hand, and reduction of thyroid inhibitor synthesis and hypothyroidism on the other. Perchlorate remains very useful as a single dose application in tests measuring the discharge of radioiodide accumulated in the thyroid as a result of many different disruptions in the further metabolism of iodide in the thyroid gland. Treatment of thyrotoxicosis (including Graves' disease) with 600–2,000 mg potassium perchlorate (430-1,400 mg perchlorate) daily for periods of several months or longer was once common practice, particularly in Europe, and perchlorate use at lower doses to treat thyroid problems continues to this day. Although 400 mg of potassium perchlorate divided into four or five daily doses was used initially and found effective, higher doses were introduced when 400 mg/day was discovered not to control thyrotoxicosis in all subjects. Current regimens for treatment of thyrotoxicosis (including Graves' disease), when a patient is exposed to additional sources of iodine, commonly include 500 mg potassium perchlorate twice per day for 18–40 days.
https://en.wikipedia.org/wiki/Nuclear_fission_product
Although 400 mg of potassium perchlorate divided into four or five daily doses was used initially and found effective, higher doses were introduced when 400 mg/day was discovered not to control thyrotoxicosis in all subjects. Current regimens for treatment of thyrotoxicosis (including Graves' disease), when a patient is exposed to additional sources of iodine, commonly include 500 mg potassium perchlorate twice per day for 18–40 days. Prophylaxis with perchlorate-containing water at concentrations of 17 ppm, which corresponds to a personal intake of 0.5 mg/kg-day if one is 70 kg and consumes 2 litres of water per day, was found to reduce baseline radioiodine uptake by 67%. This is equivalent to ingesting a total of just 35 mg of perchlorate ions per day. In another related study, where subjects drank just 1 litre of perchlorate-containing water per day at a concentration of 10 ppm, i.e. 10 mg of perchlorate ions were ingested daily, an average 38% reduction in the uptake of iodine was observed.
https://en.wikipedia.org/wiki/Nuclear_fission_product
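The dose arithmetic above can be made explicit (treating 1 ppm in water as 1 mg per litre and using the stated 70 kg body mass and 2 litres per day):

$$ 17~\tfrac{\text{mg}}{\text{L}} \times 2~\tfrac{\text{L}}{\text{day}} \approx 35~\tfrac{\text{mg}}{\text{day}}, \qquad \frac{35~\text{mg/day}}{70~\text{kg}} \approx 0.5~\text{mg/kg-day} $$

and, for the second study, 10 mg/L × 1 L/day = 10 mg of perchlorate ions per day.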
In another related study, where subjects drank just 1 litre of perchlorate-containing water per day at a concentration of 10 ppm, i.e. 10 mg of perchlorate ions were ingested daily, an average 38% reduction in the uptake of iodine was observed. However, since the average perchlorate absorption in perchlorate plant workers subjected to the highest exposure has been estimated at approximately 0.5 mg/kg-day, as in the above paragraph, a 67% reduction of iodine uptake would be expected. Studies of chronically exposed workers, though, have thus far failed to detect any abnormalities of thyroid function, including the uptake of iodine. This may well be attributable to sufficient daily exposure to or intake of healthy iodine-127 among the workers and the short 8-hour biological half-life of perchlorate in the body. Purposefully adding perchlorate ions to a populace's water supply at dosages of 0.5 mg/kg-day, or a water concentration of 17 ppm, would therefore be grossly inadequate for truly blocking the uptake of iodine-131.
https://en.wikipedia.org/wiki/Nuclear_fission_product
This may well be attributable to sufficient daily exposure to or intake of healthy iodine-127 among the workers and the short 8-hour biological half-life of perchlorate in the body. Purposefully adding perchlorate ions to a populace's water supply at dosages of 0.5 mg/kg-day, or a water concentration of 17 ppm, would therefore be grossly inadequate for truly blocking the uptake of iodine-131. Perchlorate ion concentrations in a region's water supply would need to be much higher, at least 7.15 mg/kg of body weight per day, or a water concentration of 250 ppm, assuming people drink 2 litres of water per day, to be truly beneficial to the population in preventing bioaccumulation when exposed to a radioiodine environment, independent of the availability of iodate or iodide drugs. The continual distribution of perchlorate tablets or the addition of perchlorate to the water supply would need to continue for no less than 80–90 days, beginning immediately after the initial release of radioiodine was detected. After 80–90 days have passed, released radioactive iodine-131 would have decayed to less than 0.1% of its initial quantity, at which time the danger from biouptake of iodine-131 is essentially over.
https://en.wikipedia.org/wiki/Nuclear_fission_product
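Both figures follow from simple arithmetic, a sketch using the 8.05-day iodine-131 half-life quoted later in this article and the same 70 kg, 2-litres-per-day assumptions as above:

$$ \frac{250~\text{mg/L} \times 2~\text{L/day}}{70~\text{kg}} \approx 7.15~\text{mg/kg-day}, \qquad 2^{-90/8.05} \approx 2^{-11.2} \approx 4 \times 10^{-4} $$

so roughly 0.04% of the released iodine-131 remains after 90 days, below the 0.1% figure mentioned above.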
The continual distribution of perchlorate tablets or the addition of perchlorate to the water supply would need to continue for no less than 80–90 days, beginning immediately after the initial release of radioiodine was detected. After 80–90 days have passed, released radioactive iodine-131 would have decayed to less than 0.1% of its initial quantity, at which time the danger from biouptake of iodine-131 is essentially over. In the event of a radioiodine release, the ingestion of prophylactic potassium iodide, if available, or even iodate, would rightly take precedence over perchlorate administration, and would be the first line of defense in protecting the population from a radioiodine release. However, in the event of a radioiodine release too massive and widespread to be controlled by the limited stock of iodide and iodate prophylaxis drugs, the addition of perchlorate ions to the water supply, or the distribution of perchlorate tablets, would serve as a cheap, efficacious second line of defense against carcinogenic radioiodine bioaccumulation. The ingestion of goitrogenic drugs is, much like potassium iodide, not without its dangers, such as hypothyroidism.
https://en.wikipedia.org/wiki/Nuclear_fission_product
However, in the event of a radioiodine release too massive and widespread to be controlled by the limited stock of iodide and iodate prophylaxis drugs, the addition of perchlorate ions to the water supply, or the distribution of perchlorate tablets, would serve as a cheap, efficacious second line of defense against carcinogenic radioiodine bioaccumulation. The ingestion of goitrogenic drugs is, much like potassium iodide, not without its dangers, such as hypothyroidism. In all these cases, however, despite the risks, the prophylaxis benefits of intervention with iodide, iodate, or perchlorate outweigh the serious cancer risk from radioiodine bioaccumulation in regions where radioiodine has sufficiently contaminated the environment.
### Caesium
The Chernobyl accident released a large amount of caesium isotopes which were dispersed over a wide area. 137Cs is an isotope which is of long-term concern as it remains in the top layers of soil. Plants with shallow root systems tend to absorb it for many years. Hence grass and mushrooms can carry a considerable amount of 137Cs, which can be transferred to humans through the food chain. One of the best countermeasures in dairy farming against 137Cs is to mix up the soil by ploughing it deeply.
https://en.wikipedia.org/wiki/Nuclear_fission_product
Hence grass and mushrooms can carry a considerable amount of 137Cs, which can be transferred to humans through the food chain. One of the best countermeasures in dairy farming against 137Cs is to mix up the soil by ploughing it deeply. This has the effect of putting the 137Cs out of reach of the shallow roots of the grass, hence the level of radioactivity in the grass will be lowered. Also, the removal of the top few centimetres of soil and its burial in a shallow trench will reduce the dose to humans and animals, as the gamma rays from 137Cs will be attenuated by their passage through the soil. The deeper and more remote the trench is, the better the degree of protection. Fertilizers containing potassium can be used to dilute caesium and limit its uptake by plants. In livestock farming, another countermeasure against 137Cs is to feed Prussian blue to animals. This compound acts as an ion-exchanger. The cyanide is so tightly bonded to the iron that it is safe for a human to consume several grams of Prussian blue per day. The Prussian blue reduces the biological half-life (different from the nuclear half-life) of the caesium. The physical or nuclear half-life of 137Cs is about 30 years.
https://en.wikipedia.org/wiki/Nuclear_fission_product
The Prussian blue reduces the biological half-life (different from the nuclear half-life) of the caesium. The physical or nuclear half-life of 137Cs is about 30 years. Caesium in humans normally has a biological half-life of between one and four months. An added advantage of the Prussian blue is that the caesium which is stripped from the animal in the droppings is in a form which is not available to plants. Hence it prevents the caesium from being recycled. The form of Prussian blue required for the treatment of animals, including humans, is a special grade. Attempts to use the pigment grade used in paints have not been successful.
### Strontium
The addition of lime to soils which are poor in calcium can reduce the uptake of strontium by plants. Likewise, in areas where the soil is low in potassium, the addition of a potassium fertilizer can discourage the uptake of caesium into plants. However, such treatments with either lime or potash should not be undertaken lightly, as they can alter the soil chemistry greatly, resulting in a change in the plant ecology of the land.
## Health concerns
For the introduction of radionuclides into an organism, ingestion is the most important route. Insoluble compounds are not absorbed from the gut and cause only local irradiation before they are excreted.
https://en.wikipedia.org/wiki/Nuclear_fission_product
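Why the biological half-life is the lever that matters for caesium-137 follows from the standard effective half-life relation; a sketch using the figures above, with the biological half-life taken as roughly 70 days:

$$ \frac{1}{T_{\text{eff}}} = \frac{1}{T_{\text{phys}}} + \frac{1}{T_{\text{bio}}}, \qquad T_{\text{eff}} \approx \frac{70 \times 11\,000}{70 + 11\,000}~\text{days} \approx 70~\text{days} $$

With a 30-year physical half-life (about 1.1 × 10⁴ days), the effective half-life is essentially the biological one, so anything that shortens biological retention, such as Prussian blue, shortens the effective half-life almost one-for-one.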
## Health concerns
For the introduction of radionuclides into an organism, ingestion is the most important route. Insoluble compounds are not absorbed from the gut and cause only local irradiation before they are excreted. Soluble forms, however, show a wide range of absorption percentages.

| Isotope | Radiation | Half-life | GI absorption |
|---|---|---|---|
| Strontium-90/yttrium-90 | β | 28 years | 30% |
| Caesium-137 | β, γ | 30 years | 100% |
| Promethium-147 | β | 2.6 years | 0.01% |
| Cerium-144 | β, γ | 285 days | 0.01% |
| Ruthenium-106/rhodium-106 | β, γ | 1.0 years | 0.03% |
| Zirconium-95 | β, γ | 65 days | 0.01% |
| Strontium-89 | β | 51 days | 30% |
| Ruthenium-103 | β, γ | 39.7 days | 0.03% |
| Niobium-95 | β, γ | 35 days | 0.01% |
| Cerium-141 | β, γ | 33 days | 0.01% |
| Barium-140/lanthanum-140 | β, γ | 12.8 days | 5% |
| Iodine-131 | β, γ | 8.05 days | 100% |
| Tritium | β | 12.3 years | 100% |
https://en.wikipedia.org/wiki/Nuclear_fission_product