https://en.wikipedia.org/wiki/Noise%20barrier
A noise barrier (also called a soundwall, noise wall, sound berm, sound barrier, or acoustical barrier) is an exterior structure designed to protect inhabitants of sensitive land use areas from noise pollution. Noise barriers are the most effective method of mitigating roadway, railway, and industrial noise sources – other than cessation of the source activity or use of source controls. In the case of surface transportation noise, other methods of reducing the source noise intensity include encouraging the use of hybrid and electric vehicles, improving automobile aerodynamics and tire design, and choosing low-noise paving material. Extensive use of noise barriers began in the United States after noise regulations were introduced in the early 1970s. History Noise barriers have been built in the United States since the mid-twentieth century, when vehicular traffic burgeoned. The first noise barrier was built along Interstate 680 in Milpitas, California. In the late 1960s, analytic acoustical technology emerged to mathematically evaluate the efficacy of a noise barrier design adjacent to a specific roadway. By the 1990s, noise barriers that included use of transparent materials were being designed in Denmark and other western European countries. The best of these early computer models considered the effects of roadway geometry, topography, vehicle volumes, vehicle speeds, truck mix, road surface type, and micro-meteorology. Several U.S. research groups developed variations of the computer modeling techniques: Caltrans Headquarters in Sacramento, California; the ESL Inc. group in Sunnyvale, California; the Bolt, Beranek and Newman group in Cambridge, Massachusetts; and a research team at the University of Florida. Possibly the earliest published work that scientifically designed a specific noise barrier was the study for the Foothill Expressway in Los Altos, California. Numerous case studies across the U.S. soon addressed dozens of different existing and planned highways. Most were co
https://en.wikipedia.org/wiki/National%20Internet%20registry
A national Internet registry (or NIR) is an organization under the umbrella of a regional Internet registry with the task of coordinating IP address allocations and other Internet resource management functions at a national level within a country or economic unit. NIRs operate primarily in the Asia Pacific region, under the authority of APNIC, the regional Internet registry for that region. The following NIRs are currently operating in the APNIC region: IDNIC-APJII (Indonesia Network Information Centre-Asosiasi Penyelenggara Jasa Internet Indonesia) CNNIC, China Internet Network Information Center JPNIC, Japan Network Information Center KRNIC, Korea Internet & Security Agency TWNIC, Taiwan Network Information Center VNNIC, Vietnam Internet Network Information Center Indian Registry for Internet Names and Numbers The following NIRs are currently operating in the Latin American (LACNIC) region: NIC Mexico NIC.br There are no NIRs operating in the RIPE NCC region. See also Country code top-level domain Geolocation software Internet governance Local Internet registry References External links APNIC website NIC Mexico website NIC Chile website Regional Internet registries Internet Assigned Numbers Authority Internet Standards Internet governance
https://en.wikipedia.org/wiki/Pi-system
In mathematics, a π-system (or pi-system) on a set Ω is a collection P of certain subsets of Ω such that P is non-empty, and if A, B ∈ P then A ∩ B ∈ P. That is, P is a non-empty family of subsets of Ω that is closed under non-empty finite intersections. The importance of π-systems arises from the fact that if two probability measures agree on a π-system, then they agree on the σ-algebra generated by that π-system. Moreover, if other properties, such as equality of integrals, hold for the π-system, then they hold for the generated σ-algebra as well. This is the case whenever the collection of subsets for which the property holds is a λ-system. π-systems are also useful for checking independence of random variables. This is desirable because in practice, π-systems are often simpler to work with than σ-algebras. For example, it may be awkward to work with σ-algebras generated by infinitely many sets σ(E₁, E₂, …). So instead we may examine the union of all σ-algebras generated by finitely many sets σ(E₁, …, Eₙ). This forms a π-system that generates the desired σ-algebra. Another example is the collection of all intervals of the real line, along with the empty set, which is a π-system that generates the very important Borel σ-algebra of subsets of the real line. Definitions A π-system is a non-empty collection of sets P that is closed under non-empty finite intersections, which is equivalent to P containing the intersection of any two of its elements. If every set in this π-system is a subset of Ω then it is called a π-system on Ω. For any non-empty family ℰ of subsets of Ω there exists a π-system, called the π-system generated by ℰ, that is the unique smallest π-system on Ω containing every element of ℰ. It is equal to the intersection of all π-systems containing ℰ, and can be explicitly described as the set of all possible non-empty finite intersections of elements of ℰ. A non-empty family of sets has the finite intersection property if and only if the π-system it generates does not contain the empty set as an element. Examples For any real numbers a and b, the intervals (−∞, a] form a
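As an illustration (not part of the article above), the definition and the generated π-system can be restated compactly in LaTeX notation; the symbols P, Ω and ℰ are conventional choices assumed here rather than taken from the excerpt:

\text{A non-empty family } P \subseteq 2^{\Omega} \text{ is a } \pi\text{-system on } \Omega \text{ if } A, B \in P \implies A \cap B \in P.
\mathcal{I}(\mathcal{E}) = \{\, E_1 \cap \cdots \cap E_n : n \ge 1,\ E_i \in \mathcal{E} \,\} \quad \text{(the } \pi\text{-system generated by } \mathcal{E}\text{)}.
\mathcal{E} \text{ has the finite intersection property} \iff \varnothing \notin \mathcal{I}(\mathcal{E}).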
https://en.wikipedia.org/wiki/Seqlock
A seqlock (short for sequence lock) is a special locking mechanism used in Linux for supporting fast writes of shared variables between two parallel operating system routines. The semantics stabilized as of version 2.5.59, and they are present in the 2.6.x stable kernel series. The seqlocks were developed by Stephen Hemminger and originally called frlocks, based on earlier work by Andrea Arcangeli. The first implementation was in the x86-64 time code where it was needed to synchronize with user space where it was not possible to use a real lock. It is a reader–writer consistent mechanism which avoids the problem of writer starvation. A seqlock consists of storage for saving a sequence number in addition to a lock. The lock is to support synchronization between two writers and the counter is for indicating consistency in readers. In addition to updating the shared data, the writer increments the sequence number, both after acquiring the lock and before releasing the lock. Readers read the sequence number before and after reading the shared data. If the sequence number is odd on either occasion, a writer had taken the lock while the data was being read and it may have changed. If the sequence numbers are different, a writer has changed the data while it was being read. In either case readers simply retry (using a loop) until they read the same even sequence number before and after. The reader never blocks, but it may have to retry if a write is in progress; this speeds up the readers in the case where the data was not modified, since they do not have to acquire the lock as they would with a traditional read–write lock. Also, writers do not wait for readers, whereas with traditional read–write locks they do, leading to potential resource starvation in a situation where there are a number of readers (because the writer must wait for there to be no readers). Because of these two factors, seqlocks are more efficient than traditional read–write locks for the situation
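A minimal sketch of the reader/writer protocol described above, written with C++ atomics rather than the Linux kernel's actual seqlock API (seqlock_t, write_seqlock(), read_seqbegin(), and so on); the type and function names are illustrative, and a production version would need per-field atomics or memory fences, since copying data that a writer may be concurrently modifying is formally a data race in the C++ memory model.

#include <atomic>
#include <mutex>

struct SharedData { long a = 0; long b = 0; };

std::atomic<unsigned> sequence{0}; // even: no write in progress; odd: write in progress
std::mutex writer_lock;            // serializes writers only; readers never touch it
SharedData data;

void write_data(long a, long b) {
    std::lock_guard<std::mutex> guard(writer_lock); // exclude other writers
    sequence.fetch_add(1);                          // count becomes odd: write begins
    data.a = a;                                     // update the shared data
    data.b = b;
    sequence.fetch_add(1);                          // count becomes even again: write done
}

SharedData read_data() {
    SharedData copy;
    unsigned before, after;
    do {
        before = sequence.load();                   // sample the counter before reading
        copy = data;                                // read the shared data
        after = sequence.load();                    // sample the counter after reading
    } while (before != after || (before & 1u));     // retry if a write overlapped the read
    return copy;
}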
https://en.wikipedia.org/wiki/Neuroimmunology
Neuroimmunology is a field combining neuroscience, the study of the nervous system, and immunology, the study of the immune system. Neuroimmunologists seek to better understand the interactions of these two complex systems during development, homeostasis, and response to injuries. A long-term goal of this rapidly developing research area is to further develop our understanding of the pathology of certain neurological diseases, some of which have no clear etiology. In doing so, neuroimmunology contributes to development of new pharmacological treatments for several neurological conditions. Many types of interactions involve both the nervous and immune systems, including the physiological functioning of the two systems in health and disease, malfunction of either or both systems that leads to disorders, and the physical, chemical, and environmental stressors that affect the two systems on a daily basis. Background Neural targets that control thermogenesis, behavior, sleep, and mood can be affected by pro-inflammatory cytokines which are released by activated macrophages and monocytes during infection. Within the central nervous system production of cytokines has been detected as a result of brain injury, during viral and bacterial infections, and in neurodegenerative processes. From the U.S. National Institutes of Health: "Despite the brain's status as an immune privileged site, an extensive bi-directional communication takes place between the nervous and the immune system in both health and disease. Immune cells and neuroimmune molecules such as cytokines, chemokines, and growth factors modulate brain function through multiple signaling pathways throughout the lifespan. Immunological, physiological and psychological stressors engage cytokines and other immune molecules as mediators of interactions with neuroendocrine, neuropeptide, and neurotransmitter systems. For example, brain cytokine levels increase following stress exposure, while treatments designed to
https://en.wikipedia.org/wiki/Behavioral%20modeling
The behavioral approach to systems theory and control theory was initiated in the late 1970s by J. C. Willems as a result of resolving inconsistencies present in classical approaches based on state-space, transfer function, and convolution representations. This approach is also motivated by the aim of obtaining a general framework for system analysis and control that respects the underlying physics. The main object in the behavioral setting is the behavior – the set of all signals compatible with the system. An important feature of the behavioral approach is that it does not distinguish a priori between input and output variables. Apart from putting system theory and control on a rigorous basis, the behavioral approach unified the existing approaches and brought new results on controllability for nD systems, control via interconnection, and system identification. Dynamical system as a set of signals In the behavioral setting, a dynamical system is a triple Σ = (𝕋, 𝕎, ℬ), where 𝕋 is the "time set" – the time instances over which the system evolves, 𝕎 is the "signal space" – the set in which the variables whose time evolution is modeled take on their values, and ℬ ⊆ 𝕎^𝕋 the "behavior" – the set of signals that are compatible with the laws of the system (𝕎^𝕋 denotes the set of all signals, i.e., functions from 𝕋 into 𝕎). w ∈ ℬ means that w is a trajectory of the system, while w ∉ ℬ means that the laws of the system forbid the trajectory w to happen. Before the phenomenon is modeled, every signal in 𝕎^𝕋 is deemed possible, while after modeling, only the outcomes in ℬ remain as possibilities. Special cases: 𝕋 = ℝ – continuous-time systems; 𝕋 = ℤ – discrete-time systems; 𝕎 = ℝ^q – most physical systems; 𝕎 a finite set – discrete event systems. Linear time-invariant differential systems System properties are defined in terms of the behavior. The system is said to be "linear" if 𝕎 is a vector space and ℬ is a linear subspace of 𝕎^𝕋, "time-invariant" if the time set consists of the real or natural numbers and for a
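A compact restatement of the definitions above in LaTeX notation; the symbols Σ, 𝕋, 𝕎, ℬ and the shift operator σ^t follow the usual conventions of the behavioral literature and are assumed here rather than quoted from the excerpt:

\Sigma = (\mathbb{T}, \mathbb{W}, \mathcal{B}), \qquad \mathcal{B} \subseteq \mathbb{W}^{\mathbb{T}},
\text{linearity: } \mathcal{B} \text{ is a linear subspace of } \mathbb{W}^{\mathbb{T}}, \qquad
\text{time-invariance: } \sigma^{t}\mathcal{B} \subseteq \mathcal{B} \ \text{for all } t \in \mathbb{T}, \ \text{where } (\sigma^{t} w)(t') = w(t' + t),
\text{linear time-invariant differential systems: } \mathcal{B} = \{\, w \in \mathcal{C}^{\infty}(\mathbb{R}, \mathbb{R}^{q}) : R\!\left(\tfrac{d}{dt}\right) w = 0 \,\} \ \text{for a polynomial matrix } R.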
https://en.wikipedia.org/wiki/NTLM
In a Windows network, NT (New Technology) LAN Manager (NTLM) is a suite of Microsoft security protocols intended to provide authentication, integrity, and confidentiality to users. NTLM is the successor to the authentication protocol in Microsoft LAN Manager (LANMAN), an older Microsoft product. The NTLM protocol suite is implemented in a Security Support Provider, which combines the LAN Manager authentication protocol, NTLMv1, NTLMv2 and NTLM2 Session protocols in a single package. Whether these protocols are used or can be used on a system is governed by Group Policy settings, for which different versions of Windows have different default settings. NTLM passwords are considered weak because they can be brute-forced very easily with modern hardware. Protocol NTLM is a challenge–response authentication protocol which uses three messages to authenticate a client in a connection-oriented environment (connectionless is similar), and a fourth additional message if integrity is desired. First, the client establishes a network path to the server and sends a NEGOTIATE_MESSAGE advertising its capabilities. Next, the server responds with a CHALLENGE_MESSAGE which is used to establish the identity of the client. Finally, the client responds to the challenge with an AUTHENTICATE_MESSAGE. The NTLM protocol uses one or both of two hashed password values, both of which are also stored on the server (or domain controller), and which through a lack of salting are password equivalent, meaning that if you grab the hash value from the server, you can authenticate without knowing the actual password. The two are the LM hash (a DES-based function applied to the first 14 characters of the password converted to the traditional 8-bit PC charset for the language), and the NT hash (MD4 of the little-endian UTF-16 Unicode password). Both hash values are 16 bytes (128 bits) each. The NTLM protocol also uses one of two one-way functions, depending on the NTLM version; NT LanMan and
https://en.wikipedia.org/wiki/Structure%20%28mathematical%20logic%29
In universal algebra and in model theory, a structure consists of a set along with a collection of finitary operations and relations that are defined on it. Universal algebra studies structures that generalize the algebraic structures such as groups, rings, fields and vector spaces. The term universal algebra is used for structures of first-order theories with no relation symbols. Model theory has a different scope that encompasses more arbitrary first-order theories, including foundational structures such as models of set theory. From the model-theoretic point of view, structures are the objects used to define the semantics of first-order logic, cf. also Tarski's theory of truth or Tarskian semantics. For a given theory in model theory, a structure is called a model if it satisfies the defining axioms of that theory, although it is sometimes disambiguated as a semantic model when one discusses the notion in the more general setting of mathematical models. Logicians sometimes refer to structures as "interpretations", whereas the term "interpretation" generally has a different (although related) meaning in model theory, see interpretation (model theory). In database theory, structures with no functions are studied as models for relational databases, in the form of relational models. History In the context of mathematical logic, the term "model" was first applied in 1940 by the philosopher Willard Van Orman Quine, in a reference to mathematician Richard Dedekind (1831–1916), a pioneer in the development of set theory. Since the 19th century, one main method for proving the consistency of a set of axioms has been to provide a model for it. Definition Formally, a structure can be defined as a triple 𝒜 = (A, σ, I) consisting of a domain A, a signature σ, and an interpretation function I that indicates how the signature is to be interpreted on the domain. To indicate that a structure has a particular signature σ, one can refer to it as a σ-structure. Domain The domain of a struct
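As a sketch (not in the excerpt), the triple and one concrete instance can be written out; the symbols 𝒜, A, σ, I are the conventional choices:

\mathcal{A} = (A, \sigma, I): \quad I(f) : A^{n} \to A \ \text{for each } n\text{-ary function symbol } f \in \sigma, \qquad I(R) \subseteq A^{m} \ \text{for each } m\text{-ary relation symbol } R \in \sigma.
\text{Example: the ordered field of real numbers is the } \sigma\text{-structure with } \sigma = \{+, \cdot, 0, 1, <\},\ A = \mathbb{R}, \ \text{and } I \text{ the usual operations, constants and order relation.}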
https://en.wikipedia.org/wiki/The%20Castle%20%28video%20game%29
The Castle is a video game released by ASCII Corporation in 1986 for the FM-7 and X1 computers. It was later ported to the MSX and NEC branded personal computers, and got a single console port for the SG-1000. The game is set within a castle containing 100 rooms, most of which contain one or more puzzles. It was followed by Castlequest (Castle Excellent in Japan). Both games are early examples of the Metroidvania genre. Gameplay The object of the game is to navigate through the Castle to rescue the Princess. The player can push certain objects throughout the game to accomplish progress. In some rooms, the prince can only advance to the next room by aligning cement blocks, Honey Jars, Candle Cakes, and Elevator Controlling Block. Additionally, the player's progress is blocked by many doors requiring a key of the same color to unlock, and a key is removed from the player's inventory upon use. The prince must be standing on a platform next to the door to be able to unlock it, and cannot simply jump or fall and press against the door. The player can navigate the castle with the help of a map that can be obtained early in the game. The map will provide the player with a matrix of 10x10 rooms and will highlight the room in which the princess is located and the rooms that he had visited. The player must also avoid touching enemies like Knights, Bishops, Wizards, Fire Spirits, Attack Cats and Phantom Flowers. References External links 1986 video games HAL Laboratory games Metroidvania games MSX games SG-1000 games NEC PC-6001 games NEC PC-8801 games NEC PC-9801 games FM-7 games Sharp X1 games Video games developed in Japan Video games set in castles Single-player video games
https://en.wikipedia.org/wiki/Robot%20software
Robot software is the set of coded commands or instructions that tell a mechanical device and electronic system, known together as a robot, what tasks to perform. Robot software is used to perform autonomous tasks. Many software systems and frameworks have been proposed to make programming robots easier. Some robot software aims at developing intelligent mechanical devices. Common tasks include feedback loops, control, pathfinding, data filtering, locating and sharing data. Introduction While it is a specific type of software, it is still quite diverse. Each manufacturer has their own robot software. While the vast majority of software is about manipulation of data and seeing the result on-screen, robot software is for the manipulation of objects or tools in the real world. Industrial robot software Software for industrial robots consists of data objects and lists of instructions, known as program flow (list of instructions). For example, "Go to Jig1" is an instruction to the robot to go to the positional data named Jig1. Of course, programs can also contain implicit data, for example "Tell axis 1 to move 30 degrees". Data and program usually reside in separate sections of the robot controller memory. One can change the data without changing the program and vice versa. For example, one can write a different program using the same Jig1, or one can adjust the position of Jig1 without changing the programs that use it; a sketch of this separation is given below. Examples of programming languages for industrial robots Due to the highly proprietary nature of robot software, most manufacturers of robot hardware also provide their own software. While this is not unusual in other automated control systems, the lack of standardization of programming methods for robots does pose certain challenges. For example, there are over 30 different manufacturers of industrial robots, so there are also over 30 different robot programming languages required. There are enough similarities between the different robots that it is possib
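The sketch referred to above: industrial robot languages are proprietary and vendor-specific, so this C++ analogue uses invented names (Position, moveTo, moveAxis, Jig1) purely to illustrate how positional data can live apart from the program flow; it is not any vendor's actual syntax.

#include <iostream>
#include <map>
#include <string>

// Positional data lives in its own table, separate from the program flow,
// so "Jig1" can be re-taught without touching the instructions that use it.
struct Position { double x, y, z; };

std::map<std::string, Position> positions = {
    {"Jig1", {250.0, 100.0, 40.0}},
    {"Home", {0.0, 0.0, 500.0}},
};

void moveTo(const std::string& name) {            // analogue of "Go to Jig1"
    const Position& p = positions.at(name);
    std::cout << "moving to " << name << " (" << p.x << ", " << p.y << ", " << p.z << ")\n";
}

void moveAxis(int axis, double degrees) {         // analogue of "Tell axis 1 to move 30 degrees"
    std::cout << "axis " << axis << " += " << degrees << " deg\n";
}

int main() {
    moveTo("Home");
    moveTo("Jig1");      // re-teaching Jig1 only changes the data table, not this program
    moveAxis(1, 30.0);
}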
https://en.wikipedia.org/wiki/Binary%20moment%20diagram
A binary moment diagram (BMD) is a generalization of the binary decision diagram (BDD) to linear functions over domains such as booleans (like BDDs), but also to integers or to real numbers. They can deal with Boolean functions with complexity comparable to BDDs, but also some functions that are dealt with very inefficiently in a BDD are handled easily by BMDs, most notably multiplication. The most important properties of BMDs are that, like with BDDs, each function has exactly one canonical representation, and many operations can be efficiently performed on these representations. The main features that differentiate BMDs from BDDs are the use of linear instead of pointwise diagrams, and weighted edges. The rules that ensure the canonicity of the representation are: Decisions over variables higher in the ordering may only point to decisions over variables lower in the ordering. No two nodes may be identical (in normalization, all references to one of these nodes should be replaced by references to the other). No node may have all decision parts equivalent to 0 (links to such nodes should be replaced by links to their always part). No edge may have weight zero (all such edges should be replaced by direct links to 0). Weights of the edges should be coprime. Without this rule or some equivalent of it, it would be possible for a function to have many representations, for example 2x + 2 could be represented as 2 · (1 + x) or 1 · (2 + 2x). Pointwise and linear decomposition In pointwise decomposition, like in BDDs, at each branch point we store the results of all branches separately. An example of such a decomposition for the integer function 2x + y is shown in the sketch below. In linear decomposition we provide instead a default value and a difference. It can easily be seen that the latter (linear) representation is much more efficient in the case of additive functions, as when we add many elements the latter representation will have only O(n) elements, while the former (pointwise), even
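The sketch referred to above, for the integer function f(x, y) = 2x + y on Boolean arguments; the cofactor notation f|_{x=0} is a standard convention assumed here:

\text{Pointwise (BDD-style): store every branch result separately: } f(0,0)=0,\quad f(0,1)=1,\quad f(1,0)=2,\quad f(1,1)=3.
\text{Linear (moment) decomposition: store a default value and a difference per variable: } f(x,y) = f|_{x=0} + x\,\bigl(f|_{x=1} - f|_{x=0}\bigr) = y + 2x.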
https://en.wikipedia.org/wiki/Selection%20coefficient
In population genetics, a selection coefficient, usually denoted by the letter s, is a measure of differences in relative fitness. Selection coefficients are central to the quantitative description of evolution, since fitness differences determine the change in genotype frequencies attributable to selection. The following definition of s is commonly used. Suppose that there are two genotypes A and B in a population with relative fitnesses w_A and w_B respectively. Then, choosing genotype A as our point of reference, we have w_A = 1 and w_B = 1 + s, where s measures the fitness advantage (s > 0) or disadvantage (s < 0) of B. For example, the lactose-tolerant allele spread from very low frequencies to high frequencies in less than 9000 years since the advent of farming, with an estimated selection coefficient of 0.09–0.19 for a Scandinavian population. Though this selection coefficient might seem like a very small number, over evolutionary time, the favored alleles accumulate in the population and become more and more common, potentially reaching fixation. See also Evolutionary pressure References Population genetics Evolutionary biology
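As an illustration (not from the article), under the simplifying assumption of genic selection with non-overlapping generations, the definition translates into a per-generation change in the ratio of the two genotypes:

w_A = 1, \qquad w_B = 1 + s, \qquad \frac{n_B(t+1)}{n_A(t+1)} = (1+s)\,\frac{n_B(t)}{n_A(t)}.
\text{E.g. } s = 0.1: \ \text{after } 100 \text{ generations the } B\!:\!A \text{ ratio has grown by a factor } (1.1)^{100} \approx 1.4 \times 10^{4}.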
https://en.wikipedia.org/wiki/Hong%20Kong%20Mathematical%20High%20Achievers%20Selection%20Contest
Hong Kong Mathematical High Achievers Selection Contest (HKMHASC, Traditional Chinese: 香港青少年數學精英選拔賽) is a yearly mathematics competition for students in or below Secondary 3 in Hong Kong. It has been jointly organized by Po Leung Kuk and the Hong Kong Association of Science and Mathematics Education since the academic year 1998-1999. In recent years, more than 250 secondary schools have participated. Format and Scoring Each participating school may send at most 5 students to the contest. There is one paper, divided into Part A and Part B, with two hours given. Part A is usually made up of 14 - 18 easier questions, carrying one mark each. In Part A, only answers are required. Part B is usually made up of 2 - 4 problems of varying difficulty, which may carry different numbers of marks, ranging from 4 to 8. In Part B, workings are required and marked. No calculators or other calculation aids (e.g. printed mathematical tables) are allowed. Awards and Further Training Awards are given according to the total mark. The top 40 contestants are given the First Honour Award (一等獎), the next 80 the Second Honour Award (二等獎), and the next 120 the Third Honour Award (三等獎). Moreover, the top 4 receive individual titles, namely Champion and 1st, 2nd and 3rd Runner-up. Group Awards are given to schools according to the sum of marks of each school's 3 highest-scoring contestants. The first 4 are given the honour of Champion and 1st, 2nd and 3rd Runner-up. The honour of Top 10 (首十名最佳成績) is given to the 5th-10th, and the Group Merit Award (團體優異獎) is given to the next 10. First Honour Award achievers receive further training. The eight students with the best performance are chosen to participate in the Invitational World Youth Mathematics Inter-City Competition (IWYMIC). List of Past Champions (1999-2019) 98-99: Queen Elizabeth School, Ying Wa College 99-00: Queen's College 00-01: La Salle College 01-02: St. Paul's College 02-03: Queen's College 03-04: La Salle College 04-05: La
https://en.wikipedia.org/wiki/Intracrine
Intracrine refers to a hormone that acts inside a cell, regulating intracellular events. In simple terms it means that the cell stimulates itself by cellular production of a factor that acts within the cell. Steroid hormones act through intracellular (mostly nuclear) receptors and, thus, may be considered to be intracrines. In contrast, peptide or protein hormones, in general, act as endocrines, autocrines, or paracrines by binding to their receptors present on the cell surface. Several peptide/protein hormones or their isoforms also act inside the cell through different mechanisms. These peptide/protein hormones, which have intracellular functions, are also called intracrines. The term 'intracrine' is thought to have been coined to represent peptide/protein hormones that also have intracellular actions. To better understand intracrine, we can compare it to paracrine, autocrine and endocrine. In the autocrine system, hormones secreted by a cell bind to autocrine receptors on that same cell. In the paracrine system, hormones released by a cell act on nearby cells and change their functioning. In the endocrine system, hormones from a cell affect another cell that is distant from the one that released the hormone. Paracrine physiology has been understood for decades, and the effects of paracrine hormones have been observed when, for example, an obesity-associated tumor is affected by local adipocytes even though it is not in direct contact with the fat pads concerned. Endocrine physiology, on the other hand, is a growing field in which a new area, called intracrinology, has been explored. In intracrinology, the sex steroids produced locally exert their action in the same cell where they are produced. The biological effects produced by intracellular actions are referred to as intracrine effects, whereas those produced by binding to cell surface receptors are called endocrine, autocrin
https://en.wikipedia.org/wiki/Charge%20sharing
Charge sharing is an effect of signal degradation through transfer of charges from one electronic domain to another. Charge sharing in semiconductor radiation detectors In pixelated semiconductor radiation detectors - such as photon-counting or hybrid pixel detectors - charge sharing refers to the diffusion of electrical charges with a negative impact on image quality. Formation of charge sharing In the active detector layer of photon detectors, incident photons are converted to electron-hole pairs via the photoelectric effect. The resulting charge cloud is accelerated towards the readout electronics by an applied bias voltage. Because of thermal energy and repulsion due to the electric fields inside such a device, the charge cloud diffuses, effectively getting larger in lateral size. In pixelated detectors, this effect can lead to the detection of parts of the initial charge cloud in neighbouring pixels. As the probability of this cross talk increases towards the pixel edges, it is more prominent in detectors with smaller pixel size. Furthermore, fluorescence of the detector material above its K-edge can lead to additional charge carriers that add to the effect of charge sharing. Especially in photon counting detectors, charge sharing can lead to errors in the signal count. Problems of charge sharing Especially in photon counting detectors, the energy of an incident photon is correlated with the net sum of the charge in the primary charge cloud. This kind of detector often uses thresholds to be able to operate above a certain noise level but also to discriminate incident photons of different energies. If a certain part of the charge cloud diffuses to the read-out electronics of a neighbouring pixel, this results in the detection of two events with lower energy than the primary photon. Furthermore, if the resulting charge in one of the affected pixels is smaller than the threshold, the event is discarded as noise. In general, this leads to the underestimation
https://en.wikipedia.org/wiki/Zero-product%20property
In algebra, the zero-product property states that the product of two nonzero elements is nonzero. In other words, if ab = 0 then a = 0 or b = 0. This property is also known as the rule of zero product, the null factor law, the multiplication property of zero, the nonexistence of nontrivial zero divisors, or one of the two zero-factor properties. All of the number systems studied in elementary mathematics — the integers ℤ, the rational numbers ℚ, the real numbers ℝ, and the complex numbers ℂ — satisfy the zero-product property. In general, a ring which satisfies the zero-product property is called a domain. Algebraic context Suppose P is an algebraic structure. We might ask, does P have the zero-product property? In order for this question to have meaning, P must have both additive structure and multiplicative structure. Usually one assumes that P is a ring, though it could be something else, e.g. the set of nonnegative integers with ordinary addition and multiplication, which is only a (commutative) semiring. Note that if P satisfies the zero-product property, and if S is a subset of P, then S also satisfies the zero-product property: if a and b are elements of S such that ab = 0, then either a = 0 or b = 0, because a and b can also be considered as elements of P. Examples A ring in which the zero-product property holds is called a domain. A commutative domain with a multiplicative identity element is called an integral domain. Any field is an integral domain; in fact, any subring of a field is an integral domain (as long as it contains 1). Similarly, any subring of a skew field is a domain. Thus, the zero-product property holds for any subring of a skew field. If p is a prime number, then the ring of integers modulo p has the zero-product property (in fact, it is a field). The Gaussian integers are an integral domain because they are a subring of the complex numbers. In the strictly skew field of quaternions, the zero-product property holds. This ring is not an integral domain, because the multiplication is not
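A short formal restatement with a counterexample (standard, though not spelled out in the excerpt):

ab = 0 \;\Longrightarrow\; a = 0 \ \text{or} \ b = 0.
\text{Failure in a non-domain: in } \mathbb{Z}/6\mathbb{Z},\ 2 \cdot 3 \equiv 0 \pmod{6} \ \text{although } 2 \neq 0 \ \text{and } 3 \neq 0; \ \text{by contrast, } \mathbb{Z}/p\mathbb{Z} \ \text{for prime } p \ \text{is a field and has the property.}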
https://en.wikipedia.org/wiki/BORGChat
BORGChat is a LAN messaging software program. It has achieved a relative state of popularity and is considered to be a complete LAN chat program. It has been superseded by commercial products which allow voice chat, video conferencing, central monitoring and administration. An extension called "BORGVoice" adds voice chat capabilities to BORGChat; the extension remains in alpha stage. History BORGChat was first published by Ionut Cioflan (nickname "IOn") in 2002. The name comes from the Borg race from Star Trek: the Borg is a massive society of cybernetic automatons abducted and assimilated from thousands of species. The Borg collective improves itself by consuming technologies; in a similar way, BORGChat aims to "assimilate". Features The software supports the following features: Public and private chat rooms (channels), support for user-created chat rooms Avatars with user information and online alerts Sending private messages Sending files and pictures, with pause and bandwidth management Animated smileys (emoticons) and sound effects (beep) View computers and network shares Discussion logs in the LAN Message filter, ignore messages from other users Message board with Bulletin Board Code (bold, italic, underline) Multiple chat status modes: Available/Busy/Away with customizable messages Multi-language support (with the possibility of adding more languages): English, Romanian, Swedish, Spanish, Polish, Slovak, Italian, Bulgarian, German, Russian, Turkish, Ukrainian, Slovenian, Czech, Danish, French, Latvian, Portuguese, Urdu, Dutch, Hungarian, Serbian, Macedonian. See also Synchronous conferencing Comparison of LAN messengers References External links Official BORGChat website LAN messengers Online chat
https://en.wikipedia.org/wiki/Content-addressable%20storage
Content-addressable storage (CAS), also referred to as content-addressed storage or fixed-content storage, is a way to store information so it can be retrieved based on its content, not its name or location. It has been used for high-speed storage and retrieval of fixed content, such as documents stored for compliance with government regulations. Content-addressable storage is similar to content-addressable memory. CAS systems work by passing the content of the file through a cryptographic hash function to generate a unique key, the "content address". The file system's directory stores these addresses and a pointer to the physical storage of the content. Because an attempt to store the same file will generate the same key, CAS systems ensure that the files within them are unique, and because changing the file will result in a new key, CAS systems provide assurance that the file is unchanged. CAS became a significant market during the 2000s, especially after the introduction of the 2002 Sarbanes–Oxley Act which required the storage of enormous numbers of documents for long periods and retrieved only rarely. Ever-increasing performance of traditional file systems and new software systems have eroded the value of legacy CAS systems, which have become increasingly rare after roughly 2018. However, the principles of content addressability continue to be of great interest to computer scientists, and form the core of numerous emerging technologies, such as peer-to-peer file sharing, cryptocurrencies, and distributed computing. Description Location-based approaches Traditional file systems generally track files based on their filename. On random-access media like a floppy disk, this is accomplished using a directory that consists of some sort of list of filenames and pointers to the data. The pointers refer to a physical location on the disk, normally using disk sectors. On more modern systems and larger formats like hard drives, the directory is itself split into many
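A minimal sketch of the store-and-retrieve-by-content-address flow described above. Real CAS systems derive the address with a cryptographic hash such as SHA-256; the C++ standard library provides none, so std::hash stands in here purely for illustration (it is not collision-resistant and would not be acceptable in a real system), and the class and method names are invented.

#include <iostream>
#include <string>
#include <unordered_map>

// Toy content-addressable store: the key is derived from the content itself.
class ContentStore {
    std::unordered_map<std::size_t, std::string> blobs_;
public:
    // Storing returns the content address; identical content yields the same address.
    std::size_t put(const std::string& content) {
        std::size_t address = std::hash<std::string>{}(content);   // stand-in for a cryptographic hash
        blobs_.emplace(address, content);                          // duplicates collapse onto one entry
        return address;
    }
    const std::string& get(std::size_t address) const { return blobs_.at(address); }
};

int main() {
    ContentStore store;
    std::size_t a1 = store.put("compliance document, revision 1");
    std::size_t a2 = store.put("compliance document, revision 1");  // same content, same address
    std::cout << std::boolalpha << (a1 == a2) << '\n';              // true: content is deduplicated
    std::cout << store.get(a1) << '\n';
}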
https://en.wikipedia.org/wiki/Phenomics
Phenomics is the systematic study of traits that make up a phenotype. The term was coined by UC Berkeley and LBNL scientist Steven A. Garan. As such, it is a transdisciplinary area of research that involves biology, data sciences, engineering and other fields. Phenomics is concerned with the measurement of phenomes, where a phenome is the set of physical and biochemical traits that can be produced by a given organism over the course of development and in response to genetic mutation and environmental influences. It is also important to remember that an organism's phenotype changes with time. The relationship between phenotype and genotype enables researchers to understand and study pleiotropy. Phenomics concepts are used in functional genomics, pharmaceutical research, metabolic engineering, agricultural research, and increasingly in phylogenetics. Technical challenges involve improving, both qualitatively and quantitatively, the capacity to measure phenomes. Applications Plant sciences In plant sciences, phenomics research occurs in both field and controlled environments. Field phenomics encompasses the measurement of phenotypes that occur in both cultivated and natural conditions, whereas controlled-environment phenomics research involves the use of glass houses, growth chambers, and other systems where growth conditions can be manipulated. The University of Arizona's Field Scanner in Maricopa, Arizona is a platform developed to measure field phenotypes. Controlled-environment systems include the Enviratron at Iowa State University, the Plant Cultivation Hall under construction at IPK, and platforms at the Donald Danforth Plant Science Center, the University of Nebraska-Lincoln, and elsewhere. Standards, methods, tools, and instrumentation A Minimal Information About a Plant Phenotyping Experiment (MIAPPE) standard is available and in use among many researchers collecting and organizing plant phenomics data. A diverse set of computer vision methods exist
https://en.wikipedia.org/wiki/Elliott%20Avedon%20Museum%20and%20Archive%20of%20Games
The Elliott Avedon Museum and Archive of Games was a public board game museum housed at the University of Waterloo, in Waterloo, Ontario, Canada. It was established in 1971 as the Museum and Archive of Games, and renamed in 2000 in honour of its founder and first curator. It housed over 5,000 objects and documents related to games. It was administered by the Faculty of Applied Health Sciences, and was found within B.C. Matthews Hall, near the north end of the main campus. The museum had both physical and virtual exhibits about a diversity of board games and related objects. The resources of the museum contributed to the university's program in Recreation and Leisure Studies. The University closed the museum in 2009 and transferred the physical collection to the Canadian Museum of Civilization (now known as the Canadian Museum of History) however information about the collection, which includes over 5000 objects and a large number of archival documents about games, is still hosted on the University website. There are over 700 web pages of virtual exhibits which includes videos, photographs, diagrams, other graphics, and textual information about games. See also History of games History of video games References External links Elliott Avedon Museum & Archive of Games website Obituary for Elliott Avedon University museums in Canada Virtual museums Archives in Canada Museums in Waterloo, Ontario University of Waterloo Amusement museums in Canada Defunct museums in Canada 2009 disestablishments in Ontario 1971 establishments in Ontario Museums established in 1971 Board game websites
https://en.wikipedia.org/wiki/Particle%20aggregation
Particle agglomeration refers to the formation of assemblages in a suspension and represents a mechanism leading to the functional destabilization of colloidal systems. During this process, particles dispersed in the liquid phase stick to each other, and spontaneously form irregular particle assemblages, flocs, or agglomerates. This phenomenon is also referred to as coagulation or flocculation and such a suspension is also called unstable. Particle agglomeration can be induced by adding salts or other chemicals referred to as coagulant or flocculant. Particle agglomeration can be a reversible or irreversible process. Particle agglomerates defined as "hard agglomerates" are more difficult to redisperse to the initial single particles. In the course of agglomeration, the agglomerates will grow in size, and as a consequence they may settle to the bottom of the container, which is referred to as sedimentation. Alternatively, a colloidal gel may form in concentrated suspensions which changes its rheological properties. The reverse process whereby particle agglomerates are re-dispersed as individual particles, referred to as peptization, hardly occurs spontaneously, but may occur under stirring or shear. Colloidal particles may also remain dispersed in liquids for long periods of time (days to years). This phenomenon is referred to as colloidal stability and such a suspension is said to be functionally stable. Stable suspensions are often obtained at low salt concentrations or by addition of chemicals referred to as stabilizers or stabilizing agents. The stability of particles, colloidal or otherwise, is most commonly evaluated in terms of zeta potential. This parameter provides a readily quantifiable measure of interparticle repulsion, which is the key inhibitor of particle aggregation. Similar agglomeration processes occur in other dispersed systems too. In emulsions, they may also be coupled to droplet coalescence, and not only lead to sedimentation but also to crea
https://en.wikipedia.org/wiki/Thermal%20physics
Thermal physics is the combined study of thermodynamics, statistical mechanics, and kinetic theory of gases. This umbrella subject is typically designed for physics students and functions to provide a general introduction to each of the three core heat-related subjects. Other authors, however, define thermal physics loosely as a summation of only thermodynamics and statistical mechanics. Thermal physics can be seen as the study of systems with a large number of atoms; it unites thermodynamics with statistical mechanics. Overview Thermal physics, generally speaking, is the study of the statistical nature of physical systems from an energetic perspective. Starting with the basics of heat and temperature, thermal physics analyzes the first law of thermodynamics and second law of thermodynamics from the statistical perspective, in terms of the number of microstates corresponding to a given macrostate. In addition, the concept of entropy is studied via quantum theory. A central topic in thermal physics is the canonical probability distribution. Photons and phonons are studied, showing that the oscillations of electromagnetic fields and of crystal lattices have much in common: waves form a basis for both, provided one incorporates quantum theory. Other topics studied in thermal physics include: chemical potential, the quantum nature of an ideal gas, i.e. in terms of fermions and bosons, Bose–Einstein condensation, Gibbs free energy, Helmholtz free energy, chemical equilibrium, phase equilibrium, the equipartition theorem, entropy at absolute zero, and transport processes such as mean free path, viscosity, and conduction. See also Heat transfer physics Information theory Philosophy of thermal and statistical physics Thermodynamic instruments References Further reading External links Thermal Physics Links on the Web Physics education Thermodynamics
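For concreteness (not quoted from the article), the two formulas behind the statistical viewpoint mentioned above are the Boltzmann entropy, where Ω(E) is the number of microstates compatible with a macrostate of energy E, and the canonical probability distribution:

S = k_B \ln \Omega(E), \qquad P(s) = \frac{e^{-E_s / k_B T}}{Z}, \qquad Z = \sum_{s} e^{-E_s / k_B T}.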
https://en.wikipedia.org/wiki/Idle%20animation
Idle animations are animations within video games that occur when the player character is not performing any actions. They serve to give games personality, act as an Easter egg for the player, or add realism. History One of the earliest games to feature an idle animation was Android Nim in 1978. The androids blink, look around, and seemingly talk to one another until the player gives an order. Two other early examples are Maziacs and The Pharaoh's Curse, both released in 1983. Idle animations grew in usage throughout the 16-bit era. They were incorporated to give personality to games and their characters, as they are the only in-game actions aside from cutscenes where the characters act independently of the player's input. The length and detail of an idle animation can depend on the interaction between the player and the character; for example, third-person player idle animations tend to be longer to avoid looking robotic on repeated viewing. In modern 3D games, idle animations are used to add realism. In games targeted at younger audiences, idle animations are more likely to be complex or humorous, whereas games targeted at older audiences tend to include more basic idle animations. Examples Maziacs - The sprite character will tap his feet, blink, and sit down. Sonic the Hedgehog - Sonic will impatiently tap his foot when the player does not move. Donkey Kong Country 2: Diddy's Kong Quest - Diddy Kong juggles a few balls after a few seconds without input. Super Mario 64 - Mario looks around and eventually will fall asleep. Grand Theft Auto: San Andreas - Carl "CJ" Johnson will sing songs including "Nuthin' But A 'G' Thang" and "My Lovin' (You're Never Gonna Get It)". Red Dead Redemption 2 - When left on a horse for a while, Arthur Morgan will pet the horse. References External links Idle animations at Giant Bomb, games with idle animations Video game development Animation techniques
https://en.wikipedia.org/wiki/Carlson%27s%20theorem
In mathematics, in the area of complex analysis, Carlson's theorem is a uniqueness theorem which was discovered by Fritz David Carlson. Informally, it states that two different analytic functions which do not grow very fast at infinity cannot coincide at the integers. The theorem may be obtained from the Phragmén–Lindelöf theorem, which is itself an extension of the maximum-modulus theorem. Carlson's theorem is typically invoked to defend the uniqueness of a Newton series expansion. Carlson's theorem has generalized analogues for other expansions. Statement Assume that f satisfies the following three conditions. The first two conditions bound the growth of f at infinity, whereas the third one states that f vanishes on the non-negative integers. f is an entire function of exponential type, meaning that |f(z)| ≤ C e^{τ|z|} for all z ∈ ℂ and some real values C, τ < ∞. There exists c < π such that |f(iy)| ≤ C e^{c|y|} for all real y. f(n) = 0 for every non-negative integer n. Then f is identically zero. Sharpness First condition The first condition may be relaxed: it is enough to assume that f is analytic in Re z > 0, continuous in Re z ≥ 0, and satisfies the above growth bounds for some real values C, τ < ∞. Second condition To see that the second condition is sharp, consider the function f(z) = sin(πz). It vanishes on the integers; however, it grows exponentially on the imaginary axis with a growth rate of c = π, and indeed it is not identically zero. Third condition A result, due to Rubel, relaxes the condition that f vanish on the integers. Namely, Rubel showed that the conclusion of the theorem remains valid if f vanishes on a subset A ⊂ {0, 1, 2, …} of upper density 1, meaning that limsup_{n→∞} |A ∩ {0, 1, …, n−1}| / n = 1. This condition is sharp, meaning that the theorem fails for sets A of upper density smaller than 1. Applications Suppose f(z) is a function that possesses all finite forward differences Δⁿf(0). Consider then the Newton series g(z) = Σ_{n≥0} binom(z, n) Δⁿf(0), with binom(z, n) the binomial coefficient and Δⁿf(0) the n-th forward difference. By construction, one then has that f(k) = g(k) for all non-negative integers k, so that the difference h(k) = f(k) − g(k) vanishes there. This is one of the conditions of Carlson's theorem; if h obeys the othe
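As an illustration of the Newton series application, the display below is reconstructed here rather than taken verbatim from the excerpt:

g(z) = \sum_{n=0}^{\infty} \binom{z}{n}\, \Delta^{n} f(0), \qquad g(k) = f(k) \ \text{for } k = 0, 1, 2, \dots
\text{If } h = f - g \ \text{also satisfies the two growth conditions of the theorem, then } h \equiv 0, \ \text{i.e. the Newton series expansion of } f \ \text{is unique.}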
https://en.wikipedia.org/wiki/%CE%A9-consistent%20theory
In mathematical logic, an ω-consistent (or omega-consistent, also called numerically segregative) theory is a theory (collection of sentences) that is not only (syntactically) consistent (that is, does not prove a contradiction), but also avoids proving certain infinite combinations of sentences that are intuitively contradictory. The name is due to Kurt Gödel, who introduced the concept in the course of proving the incompleteness theorem. Definition A theory T is said to interpret the language of arithmetic if there is a translation of formulas of arithmetic into the language of T so that T is able to prove the basic axioms of the natural numbers under this translation. A T that interprets arithmetic is ω-inconsistent if, for some property P of natural numbers (defined by a formula in the language of T), T proves P(0), P(1), P(2), and so on (that is, for every standard natural number n, T proves that P(n) holds), but T also proves that there is some natural number n such that P(n) fails. This may not generate a contradiction within T because T may not be able to prove for any specific value of n that P(n) fails, only that there is such an n. In particular, such n is necessarily a nonstandard integer in any model for T (Quine has thus called such theories "numerically insegregative"). T is ω-consistent if it is not ω-inconsistent. There is a weaker but closely related property of Σ1-soundness. A theory T is Σ1-sound (or 1-consistent, in another terminology) if every Σ01-sentence provable in T is true in the standard model of arithmetic N (i.e., the structure of the usual natural numbers with addition and multiplication). If T is strong enough to formalize a reasonable model of computation, Σ1-soundness is equivalent to demanding that whenever T proves that a Turing machine C halts, then C actually halts. Every ω-consistent theory is Σ1-sound, but not vice versa. More generally, we can define an analogous concept for higher levels of the arithmetical hierarchy
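The ω-inconsistency pattern described above, written schematically (with n̄ denoting the numeral for n; this display is an illustration, not part of the excerpt):

T \vdash P(\overline{0}),\quad T \vdash P(\overline{1}),\quad T \vdash P(\overline{2}),\ \dots \qquad \text{and yet} \qquad T \vdash \exists x\, \neg P(x).
\text{No finite subset of these theorems is contradictory, since } T \text{ need not prove } \neg P(\overline{n}) \text{ for any particular } n.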
https://en.wikipedia.org/wiki/Light-addressable%20potentiometric%20sensor
A light-addressable potentiometric sensor (LAPS) is a sensor that uses light (e.g. LEDs) to select what will be measured. Light can activate carriers in semiconductors. History An example is the pH-sensitive LAPS (range pH 4 to pH 10) that uses LEDs in combination with (semi-conducting) silicon and a pH-sensitive Ta2O5 (SiO2; Si3N4) insulator. The LAPS has several advantages over other types of chemical sensors. The sensor surface is completely flat; no structures, wiring or passivation are required. At the same time, the "light-addressability" of the LAPS makes it possible to obtain a spatially resolved map of the distribution of the ion concentration in the specimen. The spatial resolution of the LAPS is an important factor and is determined by the beam size and the lateral diffusion of photocarriers in the semiconductor substrate. By illuminating parts of the semiconductor surface, electron-hole pairs are generated and a photocurrent flows. The LAPS is a semiconductor-based chemical sensor with an electrolyte-insulator-semiconductor (EIS) structure. Under a fixed bias voltage, the AC (kHz range) photocurrent signal varies depending on the solution. A two-dimensional mapping of the surface from the LAPS is possible by using a scanning laser beam. Optoelectronics Sensors
https://en.wikipedia.org/wiki/Sort%20%28C%2B%2B%29
sort is a generic function in the C++ Standard Library for doing comparison sorting. The function originated in the Standard Template Library (STL). The specific sorting algorithm is not mandated by the language standard and may vary across implementations, but the worst-case asymptotic complexity of the function is specified: a call to sort must perform no more than O(N log N) comparisons when applied to a range of N elements. Usage The sort function is included from the <algorithm> header of the C++ Standard Library, and carries three arguments: sort(RandomAccessIterator first, RandomAccessIterator last, Compare comp). Here, RandomAccessIterator is a templated type that must be a random access iterator, and first and last must define a sequence of values, i.e., last must be reachable from first by repeated application of the increment operator to first. The third argument, also of a templated type, denotes a comparison predicate. This comparison predicate must define a strict weak ordering on the elements of the sequence to be sorted. The third argument is optional; if not given, the "less-than" (<) operator is used, which may be overloaded in C++. This code sample sorts a given array of integers (in ascending order) and prints it out.

#include <algorithm>
#include <cstddef>
#include <iostream>
#include <iterator>

int main() {
    int array[] = { 23, 5, -10, 0, 0, 321, 1, 2, 99, 30 };
    std::sort(std::begin(array), std::end(array));
    for (std::size_t i = 0; i < std::size(array); ++i) {
        std::cout << array[i] << ' ';
    }
    std::cout << '\n';
}

The same functionality using a std::vector container, using its begin and end methods to obtain iterators:

#include <algorithm>
#include <cstddef>
#include <iostream>
#include <vector>

int main() {
    std::vector<int> vec = { 23, 5, -10, 0, 0, 321, 1, 2, 99, 30 };
    std::sort(vec.begin(), vec.end());
    for (std::size_t i = 0; i < vec.size(); ++i) {
        std::cout << vec[i] << ' ';
    }
    std::cout << '\n';
}

Genericity sort is specified generically, so that it can work on any random-access container and any way of determining that an element x of such a container should be placed before another element y. Although generically specified, sort is not easil
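The excerpt mentions the optional third argument but does not show it in use; the short example below (assembled here, not taken from the article) passes a lambda that defines a strict weak ordering, sorting in descending order:

#include <algorithm>
#include <iostream>
#include <vector>

int main() {
    std::vector<int> vec = { 23, 5, -10, 0, 0, 321, 1, 2, 99, 30 };
    // The comparator must define a strict weak ordering; "greater than" does.
    std::sort(vec.begin(), vec.end(), [](int a, int b) { return a > b; });
    for (int x : vec) {
        std::cout << x << ' ';
    }
    std::cout << '\n';
}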
https://en.wikipedia.org/wiki/NAT%20Port%20Mapping%20Protocol
NAT Port Mapping Protocol (NAT-PMP) is a network protocol for establishing network address translation (NAT) settings and port forwarding configurations automatically without user effort. The protocol automatically determines the external IPv4 address of a NAT gateway, and provides means for an application to communicate the parameters for communication to peers. Apple introduced NAT-PMP in 2005 as part of the Bonjour specification, as an alternative to the more common ISO Standard Internet Gateway Device Protocol implemented in many NAT routers. The protocol was published as an informational Request for Comments (RFC) by the Internet Engineering Task Force (IETF) in RFC 6886. NAT-PMP runs over the User Datagram Protocol (UDP) and uses port number 5351. It has no built-in authentication mechanisms because forwarding a port typically does not allow any activity that could not also be achieved using STUN methods. The benefit of NAT-PMP over STUN is that it does not require a STUN server and a NAT-PMP mapping has a known expiration time, allowing the application to avoid sending inefficient keep-alive packets. NAT-PMP is the predecessor to the Port Control Protocol (PCP). See also Port Control Protocol (PCP) Internet Gateway Device Protocol (UPnP IGD) Universal Plug and Play (UPnP) NAT traversal STUN Zeroconf References Apple Inc. services Network protocols Network address translation
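As a sketch of the protocol's simplicity, the helper below builds the 12-byte "map a UDP port" request defined in RFC 6886 (version 0, opcode 1, a reserved field, the internal port, a suggested external port, and the requested lifetime, all in network byte order); the function name and the choice of returning a byte vector are illustrative, and actually sending the packet over UDP to the gateway's port 5351 is left out.

#include <cstdint>
#include <vector>

// Builds the NAT-PMP "create UDP port mapping" request body (opcode 1) per RFC 6886.
// The caller would send these bytes over UDP to the default gateway, port 5351.
std::vector<std::uint8_t> makeMappingRequest(std::uint16_t internalPort,
                                             std::uint16_t suggestedExternalPort,
                                             std::uint32_t lifetimeSeconds) {
    std::vector<std::uint8_t> p;
    auto put16 = [&p](std::uint16_t v) { p.push_back(v >> 8); p.push_back(v & 0xFF); };
    auto put32 = [&p](std::uint32_t v) {
        p.push_back(v >> 24); p.push_back((v >> 16) & 0xFF);
        p.push_back((v >> 8) & 0xFF); p.push_back(v & 0xFF);
    };
    p.push_back(0);                 // version 0
    p.push_back(1);                 // opcode 1 = map UDP (2 would be TCP)
    put16(0);                       // reserved, must be zero
    put16(internalPort);            // internal port to be mapped
    put16(suggestedExternalPort);   // suggested external port (0 lets the gateway choose)
    put32(lifetimeSeconds);         // requested mapping lifetime in seconds
    return p;                       // 12 bytes total
}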
https://en.wikipedia.org/wiki/The%20Last%20Blade%202
The Last Blade 2 is a video game developed and released by SNK in 1998. Like its predecessor, The Last Blade, it is a weapons-based versus fighting game originally released to arcades via the Neo Geo MVS arcade system, although it has since been released for various other platforms. Gameplay Gameplay elements remain the same as their predecessor with some minor adjustments. An "EX" mode was added to play, which is a combination of "Speed" and "Power". The mood is grimmer than its predecessor through the introduction to the game. The characters are colored slightly darker, and the game's cut-scenes are made longer to emphasize the importance of the plot. Characters are no longer equal, hosting greater differences in strengths and weaknesses than before. Plot The game is set one year after the events of the first game. Long before humanity existed, death was an unknown, equally distant concept. The "Messenger from Afar" was born when death first came to the world. With time, the Sealing Rite was held to seal Death behind Hell's Gate. At that time, two worlds were born, one near and one far, beginning the history of life and death. Half a year has passed since Suzaku's madness, and the underworld is still linked by a great portal. Our world has been called upon. Legends of long ago told of the sealing of the boundary between the two worlds. The Sealing Rite would be necessary to hold back the spirits of that far away world. Characters Three new characters were introduced: Hibiki Takane: daughter of a famed swordsmith, she is searching for the silver-haired man that requested the final blade her father would ever make. Setsuna: a being believed to be the "Messenger from Afar", he requested a blade to be forged by Hibiki's father and is out to slay the Sealing Maiden. Kojiroh Sanada: Shinsengumi captain of Unit Zero; investigating the Hell's Portal. Kojiroh is actually Kaori, his sister, who assumed his identity after his death to carry on his work. Home versi
https://en.wikipedia.org/wiki/Microecosystem
Microecosystems can exist in locations which are precisely defined by critical environmental factors within small or tiny spaces. Such factors may include temperature, pH, chemical milieu, nutrient supply, presence of symbionts or solid substrates, gaseous atmosphere (aerobic or anaerobic) etc. Some examples Pond microecosystems These microecosystems with limited water volume are often only of temporary duration and hence colonized by organisms which possess a drought-resistant spore stage in the lifecycle, or by organisms which do not need to live in water continuously. The ecosystem conditions applying at a typical pond edge can be quite different from those further from shore. Extremely space-limited water ecosystems can be found in, for example, the water collected in bromeliad leaf bases and the "pitchers" of Nepenthes. Animal gut microecosystems These include the buccal region (especially cavities in the gingiva), rumen, caecum etc. of mammalian herbivores or even invertebrate digestive tracts. In the case of mammalian gastrointestinal microecology, microorganisms such as protozoa, bacteria, as well as curious incompletely defined organisms (such as certain large structurally complex Selenomonads, Quinella ovalis "Quin's Oval", Magnoovum eadii "Eadie's Oval", Oscillospira etc.) can exist in the rumen as incredibly complex, highly enriched mixed populations, (see Moir and Masson images ). This type of microecosystem can adjust rapidly to changes in the nutrition or health of the host animal (usually a ruminant such as cow, sheep, goat etc.); see Hungate's "The Rumen and its microbes 1966). Even within a small closed system such as the rumen there may exist a range of ecological conditions: Many organisms live freely in the rumen fluid whereas others require the substrate and metabolic products supplied by the stomach wall tissue with its folds and interstices. Interesting questions are also posed concerning the transfer of the strict anaerobe organisms in t
https://en.wikipedia.org/wiki/Fault%20Simulator
DevPartner Fault Simulator is a software development tool used to simulate application errors. It helps developers and quality assurance engineers write, test and debug those parts of the software responsible for handling fault situations which can occur within applications. The target application, where faults are simulated, behaves as if those faults were the result of a real software or hardware problem which the application could face. DevPartner Fault Simulator works with applications written for Microsoft Windows and .NET platforms and is integrated with the Microsoft Visual Studio development environment. DevPartner Fault Simulator belonged to the DevPartner family of products offered by Compuware. At some point before the product line was sold to Micro Focus in 2009, the product was retired. See also NuMega Software testing tools
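DevPartner's own interface is proprietary, but the general idea of fault simulation, forcing rare failures so that an application's error-handling code actually executes, can be sketched generically. The following minimal Python sketch (purely hypothetical, not DevPartner's API) wraps a function so that it sometimes raises a simulated failure, letting the caller's recovery path be exercised:

import functools
import random

def inject_fault(exception, probability=0.3, seed=None):
    """Decorator that randomly raises `exception` instead of calling the
    wrapped function, so callers' error-handling paths can be exercised."""
    rng = random.Random(seed)

    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            if rng.random() < probability:
                raise exception
            return func(*args, **kwargs)
        return wrapper
    return decorator

# Hypothetical function under test: pretend it reads a configuration file.
@inject_fault(OSError("simulated disk failure"), probability=0.5, seed=42)
def read_config(path):
    return {"path": path, "retries": 3}

def load_with_fallback(path):
    # The code being validated: does it cope when the fault fires?
    try:
        return read_config(path)
    except OSError:
        return {"path": path, "retries": 0}   # fall back to a safe default

if __name__ == "__main__":
    print([load_with_fallback("app.ini") for _ in range(4)])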
https://en.wikipedia.org/wiki/IEC%2061508
IEC 61508 is an international standard published by the International Electrotechnical Commission (IEC) consisting of methods on how to apply, design, deploy and maintain automatic protection systems called safety-related systems. It is titled Functional Safety of Electrical/Electronic/Programmable Electronic Safety-related Systems (E/E/PE, or E/E/PES). IEC 61508 is a basic functional safety standard applicable to all industries. It defines functional safety as: “part of the overall safety relating to the EUC (Equipment Under Control) and the EUC control system which depends on the correct functioning of the E/E/PE safety-related systems, other technology safety-related systems and external risk reduction facilities.” The fundamental concept is that any safety-related system must work correctly or fail in a predictable (safe) way. The standard has two fundamental principles: An engineering process called the safety life cycle is defined based on best practices in order to discover and eliminate design errors and omissions. A probabilistic failure approach to account for the safety impact of device failures. The safety life cycle has 16 phases which roughly can be divided into three groups as follows: Phases 1–5 address analysis Phases 6–13 address realisation Phases 14–16 address operation. All phases are concerned with the safety function of the system. The standard has seven parts: Parts 1–3 contain the requirements of the standard (normative) Part 4 contains definitions Parts 5–7 are guidelines and examples for development and thus informative. Central to the standard are the concepts of probabilistic risk for each safety function. The risk is a function of frequency (or likelihood) of the hazardous event and the event consequence severity. The risk is reduced to a tolerable level by applying safety functions which may consist of E/E/PES, associated mechanical devices, or other technologies. Many requirements apply to all technologies but there is
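As a rough numerical illustration of the standard's probabilistic approach (a sketch with made-up figures, not a substitute for the standard's own tables), the snippet below computes the risk-reduction factor a safety function must deliver and maps an average probability of failure on demand to the usual low-demand safety integrity level (SIL) bands:

def required_risk_reduction(unmitigated_frequency, tolerable_frequency):
    """Risk-reduction factor the safety function must provide
    (frequencies in hazardous events per year)."""
    return unmitigated_frequency / tolerable_frequency

def sil_for_pfd(pfd_avg):
    """Map an average probability of failure on demand to a low-demand
    SIL band (illustrative; the standard's own tables are authoritative)."""
    bands = [(1e-5, 1e-4, 4), (1e-4, 1e-3, 3), (1e-3, 1e-2, 2), (1e-2, 1e-1, 1)]
    for lo, hi, sil in bands:
        if lo <= pfd_avg < hi:
            return sil
    return None  # outside the tabulated range

# Hypothetical example: a hazard expected 0.1 times/year, tolerable 1e-5/year.
rrf = required_risk_reduction(0.1, 1e-5)      # -> 10000
print(rrf, sil_for_pfd(1.0 / rrf))            # PFD of 1e-4 falls in the SIL 3 band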
https://en.wikipedia.org/wiki/Difference%20polynomials
In mathematics, in the area of complex analysis, the general difference polynomials are a polynomial sequence, a certain subclass of the Sheffer polynomials, which include the Newton polynomials, Selberg's polynomials, and the Stirling interpolation polynomials as special cases. Definition The general difference polynomial sequence is given by where is the binomial coefficient. For , the generated polynomials are the Newton polynomials The case of generates Selberg's polynomials, and the case of generates Stirling's interpolation polynomials. Moving differences Given an analytic function , define the moving difference of f as where is the forward difference operator. Then, provided that f obeys certain summability conditions, then it may be represented in terms of these polynomials as The conditions for summability (that is, convergence) for this sequence is a fairly complex topic; in general, one may say that a necessary condition is that the analytic function be of less than exponential type. Summability conditions are discussed in detail in Boas & Buck. Generating function The generating function for the general difference polynomials is given by This generating function can be brought into the form of the generalized Appell representation by setting , , and . See also Carlson's theorem Bernoulli polynomials of the second kind References Ralph P. Boas, Jr. and R. Creighton Buck, Polynomial Expansions of Analytic Functions (Second Printing Corrected), (1964) Academic Press Inc., Publishers New York, Springer-Verlag, Berlin. Library of Congress Card Number 63-23263. Polynomials Finite differences Factorial and binomial topics
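The formulas referred to above did not survive extraction. As a partial illustration, the best-known special case can be stated explicitly: for the Newton polynomials the expansion becomes Newton's forward-difference series. The LaTeX below sketches only that special case; the general parametrised definitions should be taken from the cited literature.

% Newton polynomials and the corresponding forward-difference
% expansion of an analytic function f (special case only)
\[
  p_n(z) = \binom{z}{n} = \frac{z(z-1)\cdots(z-n+1)}{n!},
  \qquad
  f(z) = \sum_{n=0}^{\infty} \binom{z}{n}\,\Delta^{n} f(0),
\]
where $\Delta f(x) = f(x+1) - f(x)$ is the forward difference operator; the expansion is valid, for example, for $f$ of exponential type less than $\log 2$.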
https://en.wikipedia.org/wiki/Stirling%20polynomials
In mathematics, the Stirling polynomials are a family of polynomials that generalize important sequences of numbers appearing in combinatorics and analysis, which are closely related to the Stirling numbers, the Bernoulli numbers, and the generalized Bernoulli polynomials. There are multiple variants of the Stirling polynomial sequence considered below most notably including the Sheffer sequence form of the sequence, , defined characteristically through the special form of its exponential generating function, and the Stirling (convolution) polynomials, , which also satisfy a characteristic ordinary generating function and that are of use in generalizing the Stirling numbers (of both kinds) to arbitrary complex-valued inputs. We consider the "convolution polynomial" variant of this sequence and its properties second in the last subsection of the article. Still other variants of the Stirling polynomials are studied in the supplementary links to the articles given in the references. Definition and examples For nonnegative integers k, the Stirling polynomials, Sk(x), are a Sheffer sequence for defined by the exponential generating function The Stirling polynomials are a special case of the Nørlund polynomials (or generalized Bernoulli polynomials) each with exponential generating function given by the relation . The first 10 Stirling polynomials are given in the following table: {| class="wikitable" !k !! Sk(x) |- | 0 || |- | 1 || |- | 2 || |- | 3 || |- | 4 || |- | 5 || |- | 6 || |- | 7 || |- | 8 || |- | 9 || |} Yet another variant of the Stirling polynomials is considered in (see also the subsection on Stirling convolution polynomials below). In particular, the article by I. Gessel and R. P. Stanley defines the modified Stirling polynomial sequences, and where are the unsigned Stirling numbers of the first kind, in terms of the two Stirling number triangles for non-negative integers . For fixed , both and are polynomials of the input each of d
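The exponential generating function mentioned above was likewise lost in extraction. Under the commonly used convention (an assumption here, since conventions differ between sources), it can be written as follows, together with the first few polynomials obtained by expanding it:

% Sheffer-sequence form of the Stirling polynomials S_k(x),
% with the first three cases obtained from the expansion
\[
  \left( \frac{t}{1 - e^{-t}} \right)^{x+1}
  = \sum_{k=0}^{\infty} S_k(x)\,\frac{t^k}{k!},
  \qquad
  S_0(x) = 1, \quad
  S_1(x) = \tfrac{1}{2}(x+1), \quad
  S_2(x) = \tfrac{1}{12}\,(3x^2 + 5x + 2).
\]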
https://en.wikipedia.org/wiki/Source%20code%20escrow
Source code escrow is the deposit of the source code of software with a third-party escrow agent. Escrow is typically requested by a party licensing software (the licensee), to ensure maintenance of the software instead of abandonment or orphaning. The software's source code is released to the licensee if the licensor files for bankruptcy or otherwise fails to maintain and update the software as promised in the software license agreement. Necessity of escrow As the continued operation and maintenance of custom software is critical to many companies, they usually desire to make sure that it continues even if the licensor becomes unable to do so, such as because of bankruptcy. This is most easily achieved by obtaining a copy of the up-to-date source code. The licensor, however, will often be unwilling to agree to this, as the source code will generally represent one of their most closely guarded trade secrets. As a solution to this conflict of interest, source code escrow ensures that the licensee obtains access to the source code only when the maintenance of the software cannot otherwise be assured, as defined in contractually agreed-upon conditions. Escrow agreements Source code escrow takes place in a contractual relationship, formalized in a source code escrow agreement, between at least three parties: one or several licensors, one or several licensees, the escrow agent. The service provided by the escrow agent – generally a business dedicated to that purpose and independent from either party – consists principally in taking custody of the source code from the licensor and releasing it to the licensee only if the conditions specified in the escrow agreement are met. Source code escrow agreements provide for the following: They specify the subject and scope of the escrow. This is generally the source code of a specific software, accompanied by everything that the licensee requires to independently maintain the software, such as documentation, software tool
https://en.wikipedia.org/wiki/ToonTalk
ToonTalk is a computer programming system intended to be programmed by children. The "Toon" part stands for cartoon. The system's presentation is in the form of animated characters, including robots that can be trained by example. It is one of the few successful implementations outside academia of the concurrent constraint logic programming paradigm. It was created by Kenneth M. Kahn in 1995, and implemented as part of the ToonTalk IDE, a software package distributed worldwide between 1996 and 2009. Since 2009, its specification is scholarly published and its implementation is freely available. Beginning 2014 a JavaScript HTML5 version of ToonTalk called ToonTalk Reborn for the Web has been available. It runs on any modern web browser and differs from the desktop version of ToonTalk in a few ways. ToonTalk programs can run on any DOM element and various browser capabilities (audio, video, style sheets, speech input and output, and browser events) are available to ToonTalk programs. Web services such as Google Drive are integrated. ToonTalk Reborn is free and open source. Beyond its life as a commercial product, ToonTalk evolved via significant academic use in various research projects, notably at the London Knowledge Lab and the Institute of Education - projects Playground and WebLabs, which involved research partners from Cambridge (Addison Wesley Longman through their Logotron subsidiary), Portugal (Cnotinfor and the University of Lisbon), Sweden (Royal Institute of Technology), Slovakia (Comenius University), Bulgaria (Sofia University), Cyprus (University of Cyprus), and Italy (Institute for Educational Technology of the Consiglio Nazionale delle Ricerche). It was also source of academic interest in Sweden, where Mikael Kindborg proposed a static representation of ToonTalk programs and in Portugal, where Leonel Morgado studied its potential to enable computer programming by preliterate children. ToonTalk was influenced by the Janus computer programming lan
https://en.wikipedia.org/wiki/Lunar%20distance%20%28navigation%29
In celestial navigation, lunar distance, also called a lunar, is the angular distance between the Moon and another celestial body. The lunar distances method uses this angle and a nautical almanac to calculate Greenwich time if so desired, or by extension any other time. That calculated time can be used in solving a spherical triangle. The theory was first published by Johannes Werner in 1524, before the necessary almanacs had been published. A fuller method was published in 1763 and used until about 1850 when it was superseded by the marine chronometer. A similar method uses the positions of the Galilean moons of Jupiter. Purpose In celestial navigation, knowledge of the time at Greenwich (or another known place) and the measured positions of one or more celestial objects allows the navigator to calculate latitude and longitude. Reliable marine chronometers were unavailable until the late 18th century and not affordable until the 19th century. After the method was first published in 1763 by British Astronomer Royal Nevil Maskelyne, based on pioneering work by Tobias Mayer, for about a hundred years (until about 1850) mariners lacking a chronometer used the method of lunar distances to determine Greenwich time as a key step in determining longitude. Conversely, a mariner with a chronometer could check its accuracy using a lunar determination of Greenwich time. The method saw usage all the way up to the beginning of the 20th century on smaller vessels that could not afford a chronometer or had to rely on this technique for correction of the chronometer. Method Summary The method relies on the relatively quick movement of the moon across the background sky, completing a circuit of 360 degrees in 27.3 days (the sidereal month), or 13.2 degrees per day. In one hour it will move approximately half a degree, roughly its own angular diameter, with respect to the background stars and the Sun. Using a sextant, the navigator precisely measures the angle between the m
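A minimal numerical sketch of the final step, recovering Greenwich time from a measured distance, is given below. The almanac values are hypothetical, and the real procedure first requires "clearing" the observed distance of refraction and parallax, which is omitted here:

def gmt_from_lunar_distance(d_obs, d0, d1, t0_hours, t1_hours):
    """Linearly interpolate Greenwich time from a cleared lunar distance.

    d0, d1 : tabulated lunar distances (degrees) at Greenwich times t0, t1
    d_obs  : observed distance, already 'cleared' of refraction and parallax
    """
    fraction = (d_obs - d0) / (d1 - d0)
    return t0_hours + fraction * (t1_hours - t0_hours)

# Hypothetical almanac entries three hours apart: the Moon-star distance
# changes by about 1.5 degrees, roughly half a degree per hour.
t = gmt_from_lunar_distance(d_obs=42.75, d0=42.00, d1=43.50,
                            t0_hours=15.0, t1_hours=18.0)
print(f"Greenwich time is approximately {t:.2f} h")   # about 16.50 h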
https://en.wikipedia.org/wiki/Liesegang%20rings
Liesegang rings () are a phenomenon seen in many, if not most, chemical systems undergoing a precipitation reaction under certain conditions of concentration and in the absence of convection. Rings are formed when weakly soluble salts are produced from reaction of two soluble substances, one of which is dissolved in a gel medium. The phenomenon is most commonly seen as rings in a Petri dish or bands in a test tube; however, more complex patterns have been observed, such as dislocations of the ring structure in a Petri dish, helices, and "Saturn rings" in a test tube. Despite continuous investigation since rediscovery of the rings in 1896, the mechanism for the formation of Liesegang rings is still unclear. History The phenomenon was first noticed in 1855 by the German chemist Friedlieb Ferdinand Runge. He observed them in the course of experiments on the precipitation of reagents in blotting paper. In 1896 the German chemist Raphael E. Liesegang noted the phenomenon when he dropped a solution of silver nitrate onto a thin layer of gel containing potassium dichromate. After a few hours, sharp concentric rings of insoluble silver dichromate formed. It has aroused the curiosity of chemists for many years. When formed in a test tube by diffusing one component from the top, layers or bands of precipitate form, rather than rings. Silver nitrate–potassium dichromate reaction The reactions are most usually carried out in test tubes into which a gel is formed that contains a dilute solution of one of the reactants. If a hot solution of agar gel also containing a dilute solution of potassium dichromate is poured in a test tube, and after the gel solidifies a more concentrated solution of silver nitrate is poured on top of the gel, the silver nitrate will begin to diffuse into the gel. It will then encounter the potassium dichromate and will form a continuous region of precipitate at the top of the tube. After some hours, the continuous region of precipitation is followed
https://en.wikipedia.org/wiki/Data%20architecture
Data architecture consists of models, policies, rules, and standards that govern which data is collected and how it is stored, arranged, integrated, and put to use in data systems and in organizations. Data is usually one of several architecture domains that form the pillars of an enterprise architecture or solution architecture.

Overview
A data architecture aims to set data standards for all its data systems as a vision or a model of the eventual interactions between those data systems. Data integration, for example, should be dependent upon data architecture standards, since data integration requires data interactions between two or more data systems. A data architecture, in part, describes the data structures used by a business and its computer application software. Data architectures address data in storage, data in use, and data in motion; descriptions of data stores, data groups, and data items; and mappings of those data artifacts to data qualities, applications, locations, etc.

Essential to realizing the target state, data architecture describes how data is processed, stored, and used in an information system. It provides criteria for data processing operations that make it possible to design data flows and also to control the flow of data in the system. The data architect is typically responsible for defining the target state, aligning during development, and then following up to ensure enhancements are done in the spirit of the original blueprint.

During the definition of the target state, the data architecture breaks a subject down to the atomic level and then builds it back up to the desired form. The data architect breaks the subject down by going through three traditional architectural stages:
Conceptual - represents all business entities.
Logical - represents the logic of how the entities are related.
Physical - the realization of the data mechanisms for a specific type of functionality.
The "data" column of the Zachman Framework for enterprise architec
https://en.wikipedia.org/wiki/Informational%20self-determination
The term informational self-determination was first used in the context of a German constitutional ruling relating to personal information collected during the 1983 census. The German term is informationelle Selbstbestimmung. It is formally defined as "the authority of the individual to decide himself, on the basis of the idea of self-determination, when and within what limits information about his private life should be communicated to others." Freedom of speech, protection of privacy, right to active private life, right to education, protection of personal data, and the right to public sector information all fall under the umbrella of informational self-determination. On that occasion, the German Federal Constitutional Court ruled that: “[...] in the context of modern data processing, the protection of the individual against unlimited collection, storage, use and disclosure of his/her personal data is encompassed by the general personal rights of the German constitution. This basic right warrants in this respect the capacity of the individual to determine in principle the disclosure and use of his/her personal data. Limitations to this informational self-determination are allowed only in case of overriding public interest.” Informational self-determination is often considered similar to the right to privacy but has unique characteristics that distinguish it from the "right to privacy" in the United States tradition. Informational self-determination reflects Westin's description of privacy: “The right of the individual to decide what information about himself should be communicated to others and under what circumstances” (Westin, 1970). In contrast, the "right to privacy" in the United States legal tradition is commonly considered to originate in Warren and Brandeis' article, which focuses on the right to "solitude" (i.e., being "left alone") and in the Constitution's Fourth Amendment, which protects persons and their belongings from warrantless search. Views fr
https://en.wikipedia.org/wiki/UKNC
UKNC is a Soviet PDP-11-compatible educational microcomputer, aimed at teaching school informatics courses. It is also known as Elektronika MS-0511. UKNC stands for Educational Computer by Scientific Centre.

Hardware
Processor: KM1801VM2 1801-series CPU @ 8 MHz, 16-bit data bus, 17-bit address bus
Peripheral processor: KM1801VM2 @ 6.25 MHz
CPU RAM: 64 KiB
PPU RAM: 32 KiB
ROM: 32 KiB
Video RAM: 96 KiB (3 planes of 32 KiB each; each 3-bit pixel had a bit in each plane)
Graphics: max 640×288 with 8 colors in one line (16 or 53 colors on the whole screen); it is possible to set an individual palette, resolution (80, 160, 320, or 640 dots per line) and memory address for each of the 288 screen lines; no text mode
Keyboard: 88 keys (MS-7007), JCUKEN layout
Built-in LAN controller
Built-in controller for a common or special tape recorder with computer control (intended for data storage, although 5-inch floppy drives were usually used instead)

One unique part of the design is the use of a peripheral processing unit (PPU). Management of peripheral devices (display, audio, and so on) was offloaded to the PPU, which can also run user programs.

The computer was released in 3 sub-models: 0511, 0511.1 and 0511.2. The 0511.1 model, intended for home use, has a power supply for 220 V AC, while the others use 42 V AC. The 0511.2 features new firmware with extended functionality and revised markings on the keyboard's gray keys compared to the initial version. There is no active cooling, and at least the 0511.2 variant tends to overheat and halt after several hours of operation.

The design of the case, the layout of the keyboard, and the location and shape of the expansion slots were inspired by the Yamaha MSX system, which was purchased by the Soviet Union in the early 1980s for use in schools. The same case, with changed markings, is found on the IBM PC clone called Elektronika MS-1502. The same case and keyboard are found on another educational computer called Rusich (i8085 based)
https://en.wikipedia.org/wiki/Cycle%20stealing
In computing, traditionally cycle stealing is a method of accessing computer memory (RAM) or bus without interfering with the CPU. It is similar to direct memory access (DMA) for allowing I/O controllers to read or write RAM without CPU intervention. Clever exploitation of specific CPU or bus timings can permit the CPU to run at full speed without any delay if external devices access memory not actively participating in the CPU's current activity and complete the operations before any possible CPU conflict. Cycle stealing was common in older platforms, first on supercomputers which used complex systems to time their memory access, and later on early microcomputers where cycle stealing was used both for peripherals as well as display drivers. It is more difficult to implement in modern platforms because there are often several layers of memory running at different speeds, and access is often mediated by the memory management unit. In the cases where the functionality is needed, modern systems often use dual-port RAM which allows access by two systems, but this tends to be expensive. In older references, the term is also used to describe traditional DMA systems where the CPU stops during memory transfers. In this case the device is stealing cycles from the CPU, so it is the opposite sense of the more modern usage. In the smaller models of the IBM System/360 and System/370, the control store contains microcode for both the processor architecture and the channel architecture. When a channel needs service, the hardware steals cycles from the CPU microcode in order to run the channel microcode. Common implementations Some processors were designed to allow cycle stealing, or at least supported it easily. This was the case for the Motorola 6800 and MOS 6502 systems due to a design feature which meant the CPU only accessed memory every other clock cycle. Using RAM that was running twice as fast as the CPU clock allowed a second system to interleave its accesses between t
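The two senses of the term can be contrasted with a toy calculation (the figures are invented for illustration): with interleaved access the display controller costs the CPU nothing, whereas halting the CPU for each DMA access removes those cycles from useful work.

# Toy comparison of the two senses of "cycle stealing" described above.
# A 1 MHz CPU and a display controller share memory that can service
# two accesses per CPU cycle, as on 6502/6800-style designs.
CPU_CYCLES = 1_000_000          # one second of CPU time
VIDEO_FETCHES = 640_000         # memory reads the display needs per second

def cpu_work_interleaved():
    # Video uses the half of each cycle the CPU never uses: no CPU cycles lost.
    return CPU_CYCLES

def cpu_work_dma_halt():
    # Older usage: the controller halts the CPU for each of its accesses.
    return CPU_CYCLES - VIDEO_FETCHES

print("interleaved access, useful CPU cycles:", cpu_work_interleaved())
print("CPU halted per DMA access, useful CPU cycles:", cpu_work_dma_halt())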
https://en.wikipedia.org/wiki/Model%20elimination
Model elimination is the name attached to a pair of proof procedures invented by Donald W. Loveland, the first of which was published in 1968 in the Journal of the ACM. Their primary purpose is to carry out automated theorem proving, though they can readily be extended to logic programming, including the more general disjunctive logic programming. Model elimination is closely related to resolution while also bearing characteristics of a tableaux method. It is a progenitor of the SLD resolution procedure used in the Prolog logic programming language. While somewhat eclipsed by attention to, and progress in, resolution theorem provers, model elimination has continued to attract the attention of researchers and software developers. Today there are several theorem provers under active development that are based on the model elimination procedure. References Loveland, D. W. (1968) Mechanical theorem-proving by model elimination. Journal of the ACM, 15, 236—251. Automated theorem proving Logical calculi Logic in computer science
https://en.wikipedia.org/wiki/Stochastic%20modelling%20%28insurance%29
This page is concerned with stochastic modelling as applied to the insurance industry. For other stochastic modelling applications, please see Monte Carlo method and Stochastic asset models. For the mathematical definition, please see Stochastic process.

"Stochastic" means being or having a random variable. A stochastic model is a tool for estimating probability distributions of potential outcomes by allowing for random variation in one or more inputs over time. The random variation is usually based on fluctuations observed in historical data for a selected period using standard time-series techniques. Distributions of potential outcomes are derived from a large number of simulations (stochastic projections) which reflect the random variation in the input(s). Its application initially started in physics. It is now being applied in engineering, the life sciences, the social sciences, and finance. See also Economic capital.

Valuation
Like any other company, an insurer has to show that its assets exceed its liabilities to be solvent. In the insurance industry, however, assets and liabilities are not known with certainty. They depend on how many policies result in claims, inflation from now until the claim, investment returns during that period, and so on. So the valuation of an insurer involves a set of projections, looking at what is expected to happen, and thus coming up with the best estimate for assets and liabilities, and therefore for the company's level of solvency.

Deterministic approach
The simplest way of doing this, and indeed the primary method used, is to look at best estimates. The projections in financial analysis usually use the most likely rate of claim, the most likely investment return, the most likely rate of inflation, and so on. The projections in engineering analysis usually use both the most likely rate and the most critical rate. The result provides a point estimate - the best single estimate of what the company's current solvency position is, or m
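The contrast between a deterministic best estimate and a stochastic projection can be illustrated with a small Monte Carlo sketch. All parameters below are invented for illustration; a real valuation model would be far richer:

import random

def project_surplus(n_policies=2_000, premium=1_000.0, claim_prob=0.05,
                    mean_claim=15_000.0, invest_mu=0.04, invest_sigma=0.08,
                    rng=None):
    """One stochastic projection of an insurer's end-of-year surplus
    (all parameters are made up for illustration)."""
    rng = rng or random.Random()
    assets = n_policies * premium
    assets *= 1.0 + rng.gauss(invest_mu, invest_sigma)          # investment return
    n_claims = sum(rng.random() < claim_prob for _ in range(n_policies))
    liabilities = sum(rng.expovariate(1.0 / mean_claim) for _ in range(n_claims))
    return assets - liabilities

rng = random.Random(1)
runs = sorted(project_surplus(rng=rng) for _ in range(1_000))
best_estimate = 2_000 * 1_000 * 1.04 - 2_000 * 0.05 * 15_000    # deterministic point estimate
print("deterministic best estimate:", best_estimate)
print("median of simulations:", round(runs[500]))
print("5th percentile (a 'bad' scenario):", round(runs[50]))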
https://en.wikipedia.org/wiki/Indiscernibles
In mathematical logic, indiscernibles are objects that cannot be distinguished by any property or relation defined by a formula. Usually only first-order formulas are considered.

Examples
If a, b, and c are distinct and {a, b, c} is a set of indiscernibles, then, for example, for each binary formula φ, the truth values φ(a, b), φ(b, a), φ(a, c), φ(c, a), φ(b, c) and φ(c, b) must all coincide.

Historically, the identity of indiscernibles was one of the laws of thought of Gottfried Leibniz.

Generalizations
In some contexts one considers the more general notion of order-indiscernibles, and the term sequence of indiscernibles often refers implicitly to this weaker notion. In our example of binary formulas, to say that the triple (a, b, c) of distinct elements is a sequence of indiscernibles implies only that the truth values agree on order-preserving pairs, that is, φ(a, b) ↔ φ(a, c) ↔ φ(b, c).

Applications
Order-indiscernibles feature prominently in the theory of Ramsey cardinals, Erdős cardinals, and zero sharp.

See also
Identity of indiscernibles
Rough set

References
Model theory
https://en.wikipedia.org/wiki/System%20Contention%20Scope
In computer science, System Contention Scope is one of two thread-scheduling schemes used in operating systems. This scheme is used by the kernel to decide which kernel-level thread to schedule onto a CPU, wherein all threads in the system (as opposed to only the user-level threads within a single process, as in the Process Contention Scope scheme) compete for the CPU. Operating systems that use only the one-to-one model, such as Windows, Linux, and Solaris, schedule threads using only System Contention Scope.

References
Operating system kernels
Processor scheduling algorithms
https://en.wikipedia.org/wiki/Emergent%20design
Emergent design is a phrase coined by David Cavallo to describe a theoretical framework for the implementation of systemic change in education and learning environments. This examines how choice of design methodology contributes to the success or failure of education reforms through studies in Thailand. It is related to the theories of situated learning and of constructionist learning. The term constructionism was coined by Seymour Papert under whom Cavallo studied. Emergent design holds that education systems cannot adapt effectively to technology change unless the education is rooted in the existing skills and needs of the local culture. Applications The most notable non-theoretical application of the principles of emergent design is in the OLPC, whose concept work is supported in Cavallo's paper "Models of growth — towards fundamental change in learning environment". Emergent design in agile software development Emergent design is a consistent topic in agile software development, as a result of the methodology's focus on delivering small pieces of working code with business value. With emergent design, a development organization starts delivering functionality and lets the design emerge. Development will take a piece of functionality A and implement it using best practices and proper test coverage and then move on to delivering functionality B. Once B is built, or while it is being built, the organization will look at what A and B have in common and refactor out the commonality, allowing the design to emerge. This process continues as the organization continually delivers functionality. At the end of an agile release cycle, development is left with the smallest set of the design needed, as opposed to the design that could have been anticipated in advance. The end result is a simpler design with a smaller code base, which is more easily understood and maintained and naturally has less room for defects. Emergent design for social change Emergent design is al
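A toy illustration of the refactoring step described above (with purely hypothetical functions): feature A and feature B are delivered independently, the duplicated validation then becomes visible, and extracting it lets the shared design emerge.

# Step 1: feature A is delivered on its own.
def register_customer(record):
    if not record.get("email") or "@" not in record["email"]:
        raise ValueError("invalid email")
    return {"type": "customer", **record}

# Step 2: feature B is delivered; the same validation shows up again.
def register_supplier(record):
    if not record.get("email") or "@" not in record["email"]:
        raise ValueError("invalid email")
    return {"type": "supplier", **record}

# Step 3: the duplication is refactored out, and the shared design "emerges".
def _validated(record):
    if not record.get("email") or "@" not in record["email"]:
        raise ValueError("invalid email")
    return record

def register(kind, record):
    return {"type": kind, **_validated(record)}

print(register("customer", {"email": "a@example.com"}))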
https://en.wikipedia.org/wiki/Gentzen%27s%20consistency%20proof
Gentzen's consistency proof is a result of proof theory in mathematical logic, published by Gerhard Gentzen in 1936. It shows that the Peano axioms of first-order arithmetic do not contain a contradiction (i.e. are "consistent"), as long as a certain other system used in the proof does not contain any contradictions either. This other system, today called "primitive recursive arithmetic with the additional principle of quantifier-free transfinite induction up to the ordinal ε0", is neither weaker nor stronger than the system of Peano axioms. Gentzen argued that it avoids the questionable modes of inference contained in Peano arithmetic and that its consistency is therefore less controversial. Gentzen's theorem Gentzen's theorem is concerned with first-order arithmetic: the theory of the natural numbers, including their addition and multiplication, axiomatized by the first-order Peano axioms. This is a "first-order" theory: the quantifiers extend over natural numbers, but not over sets or functions of natural numbers. The theory is strong enough to describe recursively defined integer functions such as exponentiation, factorials or the Fibonacci sequence. Gentzen showed that the consistency of the first-order Peano axioms is provable over the base theory of primitive recursive arithmetic with the additional principle of quantifier-free transfinite induction up to the ordinal ε0. Primitive recursive arithmetic is a much simplified form of arithmetic that is rather uncontroversial. The additional principle means, informally, that there is a well-ordering on the set of finite rooted trees. Formally, ε0 is the first ordinal such that , i.e. the limit of the sequence It is a countable ordinal much smaller than large countable ordinals. To express ordinals in the language of arithmetic, an ordinal notation is needed, i.e. a way to assign natural numbers to ordinals less than ε0. This can be done in various ways, one example provided by Cantor's normal form theorem.
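The defining equations of ε0 referred to above were lost in extraction; in standard notation they read:

% epsilon_0 as the limit of the tower of omegas, i.e. the least
% ordinal fixed by ordinal exponentiation with base omega
\[
  \varepsilon_0 = \sup\{\,\omega,\ \omega^{\omega},\ \omega^{\omega^{\omega}},\ \ldots\,\},
  \qquad
  \omega^{\varepsilon_0} = \varepsilon_0 ,
\]
and ε0 is the least ordinal satisfying the second equation.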
https://en.wikipedia.org/wiki/Directivity
In electromagnetics, directivity is a parameter of an antenna or optical system which measures the degree to which the radiation emitted is concentrated in a single direction. It is the ratio of the radiation intensity in a given direction from the antenna to the radiation intensity averaged over all directions. Therefore, the directivity of a hypothetical isotropic radiator is 1, or 0 dBi. An antenna's directivity is greater than its gain by an efficiency factor, radiation efficiency. Directivity is an important measure because many antennas and optical systems are designed to radiate electromagnetic waves in a single direction or over a narrow-angle. By the principle of reciprocity, the directivity of an antenna when receiving is equal to its directivity when transmitting. The directivity of an actual antenna can vary from 1.76 dBi for a short dipole to as much as 50 dBi for a large dish antenna. Definition The directivity, , of an antenna is defined for all incident angles of an antenna. The term "directive gain" is deprecated by IEEE. If an angle relative to the antenna is not specified, then directivity is presumed to refer to the axis of maximum radiation intensity. Here and are the zenith angle and azimuth angle respectively in the standard spherical coordinate angles; is the radiation intensity, which is the power per unit solid angle; and is the total radiated power. The quantities and satisfy the relation that is, the total radiated power is the power per unit solid angle integrated over a spherical surface. Since there are 4π steradians on the surface of a sphere, the quantity represents the average power per unit solid angle. In other words, directivity is the radiation intensity of an antenna at a particular coordinate combination divided by what the radiation intensity would have been had the antenna been an isotropic antenna radiating the same amount of total power into space. Directivity, if a direction is not specified, is the
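The defining formulas were lost in extraction; in the usual textbook notation (an assumption about the symbols, though the content itself is standard), directivity and total radiated power are:

% U(theta, phi) is the radiation intensity (power per unit solid angle),
% P_rad the total radiated power
\[
  D(\theta,\varphi) = \frac{U(\theta,\varphi)}{P_{\mathrm{rad}}/4\pi},
  \qquad
  P_{\mathrm{rad}} = \oint U \, d\Omega
  = \int_{0}^{2\pi}\!\int_{0}^{\pi} U(\theta,\varphi)\,\sin\theta \, d\theta \, d\varphi .
\]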
https://en.wikipedia.org/wiki/Monkey%20patch
Monkey patching is a technique used to dynamically update the behavior of a piece of code at run-time. A monkey patch (also spelled monkey-patch, MonkeyPatch) is a way to extend or modify the runtime code of dynamic languages (e.g. Smalltalk, JavaScript, Objective-C, Ruby, Perl, Python, Groovy, etc.) without altering the original source code. Etymology The term monkey patch seems to have come from an earlier term, guerrilla patch, which referred to changing code sneakily – and possibly incompatibly with other such patches – at runtime. The word guerrilla, nearly homophonous with gorilla, became monkey, possibly to make the patch sound less intimidating. An alternative etymology is that it refers to “monkeying about” with the code (messing with it). Despite the name's suggestion, the "monkey patch" is sometimes the official method of extending a program. For example, web browsers such as Firefox and Internet Explorer used to encourage this, although modern browsers (including Firefox) now have an official extensions system. Definitions The definition of the term varies depending upon the community using it. In Ruby, Python, and many other dynamic programming languages, the term monkey patch only refers to dynamic modifications of a class or module at runtime, motivated by the intent to patch existing third-party code as a workaround to a bug or feature which does not act as desired. Other forms of modifying classes at runtime have different names, based on their different intents. For example, in Zope and Plone, security patches are often delivered using dynamic class modification, but they are called hot fixes. Applications Monkey patching is used to: Replace methods / classes / attributes / functions at runtime, e.g. to stub out a function during testing; Modify/extend behaviour of a third-party product without maintaining a private copy of the source code; Apply the result of a patch at runtime to the state in memory, instead of the source code on disk; Distr
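A minimal Python sketch of the pattern (the third-party class here is hypothetical): a method is replaced at run time to stub out network access during a test, and then restored so the patch's scope stays limited.

# Minimal illustration of a monkey patch: at run time, a method of a
# (hypothetical) third-party class is replaced without touching its source.
class WeatherClient:                       # stands in for third-party code
    def fetch(self, city):
        raise RuntimeError("would hit the network")

def fake_fetch(self, city):
    return {"city": city, "temp_c": 21}    # canned response for the test

original = WeatherClient.fetch            # keep a reference to restore later
WeatherClient.fetch = fake_fetch          # the monkey patch itself

try:
    assert WeatherClient().fetch("Oslo") == {"city": "Oslo", "temp_c": 21}
finally:
    WeatherClient.fetch = original        # undo the patch to limit its scope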
https://en.wikipedia.org/wiki/Identity%20transform
The identity transform is a data transformation that copies the source data into the destination data without change. The identity transformation is considered an essential process in creating a reusable transformation library. By creating a library of variations of the base identity transformation, a variety of data transformation filters can be easily maintained. These filters can be chained together in a format similar to UNIX shell pipes. Examples of recursive transforms The "copy with recursion" permits, changing little portions of code, produce entire new and different output, filtering or updating the input. Understanding the "identity by recursion" we can understand the filters. Using XSLT The most frequently cited example of the identity transform (for XSLT version 1.0) is the "copy.xsl" transform as expressed in XSLT. This transformation uses the xsl:copy command to perform the identity transformation: <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"> <xsl:template match="@*|node()"> <xsl:copy> <xsl:apply-templates select="@*|node()"/> </xsl:copy> </xsl:template> </xsl:stylesheet> This template works by matching all attributes (@*) and other nodes (node()), copying each node matched, then applying the identity transformation to all attributes and child nodes of the context node. This recursively descends the element tree and outputs all structures in the same structure they were found in the original file, within the limitations of what information is considered significant in the XPath data model. Since node() matches text, processing instructions, root, and comments, as well as elements, all XML nodes are copied. A more explicit version of the identity transform is: <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform"> <xsl:template match="@*|*|processing-instruction()|comment()"> <xsl:copy> <xsl:apply-templates select="*|@*|text()|processing-instruction()|c
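The same "copy with recursion" idea can be sketched outside XSLT. The following Python sketch (illustrative only) copies a nested structure unchanged by default, while a single overridable hook turns the identity transform into a family of filters:

# A recursive identity transform over nested dicts/lists. The default
# visit hook returns each value unchanged (the identity transform);
# variations override it to build filtering or updating transforms.
def identity(node, visit=lambda key, value: value):
    if isinstance(node, dict):
        return {k: identity(visit(k, v), visit) for k, v in node.items()}
    if isinstance(node, list):
        return [identity(visit(None, v), visit) for v in node]
    return node

doc = {"title": "report", "meta": {"draft": True}, "rows": [1, 2, 3]}

assert identity(doc) == doc               # pure copy: the identity transform

# A "filter" variation: uppercase every string value, copy everything else.
shout = identity(doc, visit=lambda k, v: v.upper() if isinstance(v, str) else v)
print(shout)                              # {'title': 'REPORT', ...}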
https://en.wikipedia.org/wiki/Relaxation%20labelling
Relaxation labelling is a technique used in image processing and computer vision. Its goal is to assign a label to each pixel of a given image or to each node of a given graph.

See also
Digital image processing

References

Further reading

Computer vision
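A minimal sketch of one classic probabilistic relaxation scheme (in the style of Rosenfeld, Hummel and Zucker; the update rule and the tiny example are illustrative, not a definitive formulation): each node holds a probability vector over labels, and neighbouring nodes iteratively reinforce compatible labels.

import numpy as np

def relaxation_labelling(p, r, neighbours, iterations=10):
    """Iterative probabilistic relaxation.

    p          : (n_nodes, n_labels) initial label probabilities
    r          : (n_labels, n_labels) compatibility coefficients in [-1, 1]
    neighbours : list of neighbour index lists, one per node
    """
    p = p.copy()
    for _ in range(iterations):
        q = np.zeros_like(p)
        for i, nbrs in enumerate(neighbours):
            for j in nbrs:
                q[i] += p[j] @ r.T          # support for each label of node i
            q[i] /= max(len(nbrs), 1)
        p = p * (1.0 + q)                    # reinforce supported labels
        p /= p.sum(axis=1, keepdims=True)    # renormalise to probabilities
    return p

# Tiny example: 3 nodes in a chain, 2 labels, like labels support each other.
r = np.array([[1.0, -1.0], [-1.0, 1.0]])
p0 = np.array([[0.9, 0.1], [0.5, 0.5], [0.2, 0.8]])
print(relaxation_labelling(p0, r, neighbours=[[1], [0, 2], [1]]))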
https://en.wikipedia.org/wiki/PAH%20world%20hypothesis
The PAH world hypothesis is a speculative hypothesis that proposes that polycyclic aromatic hydrocarbons (PAHs), known to be abundant in the universe, including in comets, and assumed to be abundant in the primordial soup of the early Earth, played a major role in the origin of life by mediating the synthesis of RNA molecules, leading into the RNA world. However, as yet, the hypothesis is untested. Background The 1952 Miller–Urey experiment demonstrated the synthesis of organic compounds, such as amino acids, formaldehyde and sugars, from the original inorganic precursors the researchers presumed to have been present in the primordial soup (but is no longer considered likely). This experiment inspired many others. In 1961, Joan Oró found that the nucleotide base adenine could be made from hydrogen cyanide (HCN) and ammonia in a water solution. Experiments conducted later showed that the other RNA and DNA nucleobases could be obtained through simulated prebiotic chemistry with a reducing atmosphere. The RNA world hypothesis shows how RNA can become its own catalyst (a ribozyme). In between there are some missing steps such as how the first RNA molecules could be formed. The PAH world hypothesis was proposed by Simon Nicholas Platts in May 2004 to try to fill in this missing step. A more thoroughly elaborated idea has been published by Ehrenfreund et al. Polycyclic aromatic hydrocarbons Polycyclic aromatic hydrocarbons are the most common and abundant of the known polyatomic molecules in the visible universe, and are considered a likely constituent of the primordial sea. PAHs, along with fullerenes (or "buckyballs"), have been recently detected in nebulae. In April 2019, scientists, working with the Hubble Space Telescope, reported the confirmed detection of the large and complex ionized molecules of buckminsterfullerene (C60) in the interstellar medium spaces between the stars. (Fullerenes are also implicated in the origin of life; according to astronomer Letizi
https://en.wikipedia.org/wiki/Aleksandr%20Korkin
Aleksandr Nikolayevich Korkin (; – ) was a Russian mathematician. He made contribution to the development of partial differential equations, and was second only to Chebyshev among the founders of the Saint Petersburg Mathematical School. Among others, his students included Yegor Ivanovich Zolotarev. Some publications References External links Korkin's Biography , the St. Petersburg University Pages (in Russian, but with an image) 1837 births 1908 deaths People from Vologda Oblast People from Vologda Governorate 19th-century mathematicians from the Russian Empire Mathematical analysts
https://en.wikipedia.org/wiki/John%20Herivel
John William Jamieson Herivel (29 August 1918 – 18 January 2011) was a British science historian and World War II codebreaker at Bletchley Park. As a codebreaker concerned with Cryptanalysis of the Enigma, Herivel is remembered chiefly for the discovery of what was soon dubbed the Herivel tip or Herivelismus. Herivelismus consisted of the idea, the Herivel tip and the method of establishing whether it applied using the Herivel square. It was based on Herivel's insight into the habits of German operators of the Enigma cipher machine that allowed Bletchley Park to easily deduce part of the daily key. For a brief but critical period after May 1940, the Herivel tip in conjunction with "cillies" (another class of operator error) was the main technique used to solve Enigma. After the war, Herivel became an academic, studying the history and philosophy of science at Queen's University Belfast, particularly Isaac Newton, Joseph Fourier, Christiaan Huygens. In 1956, he took a brief leave of absence from Queen's to work as a scholar at the Dublin Institute for Advanced Studies. In retirement, he wrote an autobiographical account of his work at Bletchley Park entitled Herivelismus and the German Military Enigma. Recruitment to Bletchley Park John Herivel was born in Belfast, and attended Methodist College Belfast from 1924 to 1936. In 1937 he was awarded a Kitchener Scholarship to study mathematics at Sidney Sussex College, Cambridge, where his supervisor was Gordon Welchman. Welchman recruited Herivel to the Government Code and Cypher School (GC&CS) at Bletchley Park. Welchman worked with Alan Turing in the newly formed Hut 6 section created to solve Army and Air Force Enigma. Herivel, then aged 21, arrived at Bletchley on 29 January 1940, and was briefed on Enigma by Alan Turing and Tony Kendrick. Enigma At the time that Herivel started work at Bletchley Park, Hut 6 was having only limited success with Enigma-enciphered messages, mostly from the Luftwaffe Enigma network
https://en.wikipedia.org/wiki/Cheekwood%20Botanical%20Garden%20and%20Museum%20of%20Art
Cheekwood is a historic estate on the western edge of Nashville, Tennessee that houses the Cheekwood Estate & Gardens. Formerly the residence of Nashville's Cheek family, the Georgian-style mansion was opened as a botanical garden and art museum in 1960. History Christopher Cheek founded a wholesale grocery business in Nashville in the 1880s. His son, Leslie Cheek, joined him as a partner, and by 1915 was president of the family-owned company. Leslie's wife, Mabel Wood, was a member of a prominent Clarksville, Tennessee, family. Meanwhile, Joel Owsley Cheek, Leslie's cousin, had developed an acclaimed blend of coffee that was marketed through Nashville's finest hotel, the Maxwell House Hotel. Cheek's extended family, including Leslie and Mabel Cheek, were investors. In 1928, the Postum Cereals Company (now General Foods) purchased Maxwell House's parent company, Cheek-Neal Coffee, for more than $40 million. After the sale of the family business, Leslie Cheek bought of woodland in West Nashville for a country estate. He hired New York residential and landscape architect Bryant Fleming to design the house and gardens, and gave him full control over every detail of the project, including interior furnishings. The resulting limestone mansion and extensive formal gardens were completed in 1932. The estate design was inspired by the grand English manors of the 18th century. Leslie Cheek died just two years after moving into the mansion. Mabel Cheek and their daughter, Huldah Cheek Sharp, lived at Cheekwood until the 1950s, when Huldah Sharp and her husband offered the property as a site for a botanical garden and art museum. The Exchange Club of Nashville, the Horticultural Society of Middle Tennessee and other civic groups led the redevelopment of the property aided by funds raised from the sale of the former building of the defunct Nashville Museum of Art. The new Cheekwood museum opened in 1960. Art museum Cheekwood's art collection was founded in 1959
https://en.wikipedia.org/wiki/Quick%20Wertkarte
Quick was an electronic purse system available on Austrian bank cards to allow small purchases to be made without cash. The history of the Quick system goes back to 1996. Quick was discontinued on July 31, 2017. The system was aimed at small retailers such as bakeries, cafés, drink, and parking automats (but even small discount shops such as Billa accept it) and intended for purchases of less than €400. The card was inserted into a handheld Quick reader by the merchant who enters the transaction amount for the customer. The customer then confirms the purchase by pushing a button on the keypad, the exact amount debited from the card within a few seconds. As well as the multipurpose bank card version, anonymous cards (also smart cards) are available for the use of people without bank accounts, such as children and tourists. At ATMs, one can transfer money for free between bank cards and the Quick chip (either on a standalone smart card, or contained in the bank card). The scheme was operated by Europay Austria and most of the Maestro cards in use contain Quick support, but new ones are not issued without it. See also Octopus card Moneo External links The official Quick site (in German) Quick, Austria’s electronic purse Banking in Austria Smart cards Payment cards
https://en.wikipedia.org/wiki/Thermoplastic%20olefin
Thermoplastic olefin, thermoplastic polyolefin (TPO), or olefinic thermoplastic elastomers refer to polymer/filler blends usually consisting of some fraction of a thermoplastic, an elastomer or rubber, and usually a filler. Outdoor applications such as roofing frequently contain TPO because it does not degrade under solar UV radiation, a common problem with nylons. TPO is used extensively in the automotive industry. Materials Thermoplastics Thermoplastics may include polypropylene (PP), polyethylene (PE), block copolymer polypropylene (BCPP), and others. Fillers Common fillers include, though are not restricted to talc, fiberglass, carbon fiber, wollastonite, and MOS (Metal Oxy Sulfate). Elastomers Common elastomers include ethylene propylene rubber (EPR), EPDM (EP-diene rubber), ethylene-octene (EO), ethylbenzene (EB), and styrene ethylene butadiene styrene (SEBS). Currently there are a great variety of commercially available rubbers and BCPP's. They are produced using regioselective and stereoselective catalysts known as metallocenes. The metallocene catalyst becomes embedded in the polymer and cannot be recovered. Creation Components for TPO are blended together at 210 - 270 °C under high shear. A twin screw extruder or a continuous mixer may be employed to achieve a continuous stream, or a Banbury compounder may be employed for batch production. A higher degree of mixing and dispersion is achieved in the batch process, but the superheat batch must immediately be processed through an extruder to be pelletized into a transportable intermediate. Thus batch production essentially adds an additional cost step. Structure The geometry of the metallocene catalyst will determine the sequence of chirality in the chain, as in, atactic, syndiotactic, isotactic, as well as average block length, molecular weight and distribution. These characteristics will in turn govern the microstructure of the blend. As in metal alloys the properties of a TPO product depend gr
https://en.wikipedia.org/wiki/Windows%20CardSpace
Windows CardSpace (codenamed InfoCard) is a discontinued identity selector app by Microsoft. It stores references to digital identities of the users, presenting them as visual information cards. CardSpace provides a consistent UI designed to help people to easily and securely use these identities in applications and web sites where they are accepted. Resistance to phishing attacks and adherence to Kim Cameron's "7 Laws of Identity" were goals in its design. CardSpace is a built-in component of Windows 7 and Windows Vista, and has been made available for Windows XP and Windows Server 2003 as part of the .NET Framework 3.x package. Overview When an information card-enabled application or website wishes to obtain information about the user, it requests a particular set of claims. The CardSpace UI then appears, switching the display to the CardSpace service, which displays the user's stored identities as visual cards. The user selects a card to use, and the CardSpace software contacts the issuer of the identity to obtain a digitally signed XML token that contains the requested information. CardSpace also allows users to create personal (also known as self-issued) information cards, which can contain one or more of 14 fields of identity information such as full name and address. Other transactions may require a managed information card; these are issued by a third-party identity provider that makes the claims on the person's behalf, such as a bank, employer, or a government agency. Windows CardSpace is built on top of the Web services protocol stack, an open set of XML-based protocols, including WS-Security, WS-Trust, WS-MetadataExchange and WS-SecurityPolicy. This means that any technology or platform that supports these protocols can integrate with CardSpace. To accept information cards, a web developer needs to declare an HTML <OBJECT> tag that specifies the claims the website is demanding and implement code to decrypt the returned token and extract the claim valu
https://en.wikipedia.org/wiki/Selective%20soldering
Selective soldering is the process of selectively soldering components to printed circuit boards and molded modules that could be damaged by the heat of a reflow oven or wave soldering in a traditional surface-mount technology (SMT) or through-hole technology assembly processes. This usually follows an SMT oven reflow process; parts to be selectively soldered are usually surrounded by parts that have been previously soldered in a surface-mount reflow process, and the selective-solder process must be sufficiently precise to avoid damaging them. Processes Assembly processes used in selective soldering include: Selective aperture tooling over wave solder: These tools mask off areas previously soldered in the SMT reflow soldering process, exposing only those areas to be selectively soldered in the tool's aperture or window. The tool and printed circuit board (PCB) assembly are then passed over wave soldering equipment to complete the process. Each tool is specific to a PCB assembly. Mass selective dip solder fountain: A variant of selective-aperture soldering in which specialized tooling (with apertures to allow solder to be pumped through it) represent the areas to be soldered. The PCB is then presented over the selective-solder fountain; all selective soldering of the PCB is soldered simultaneously as the board is lowered into the solder fountain. Each tool is specific to a PCB assembly. Miniature wave selective solder : This typically uses a round miniature pumped solder wave, similar to the end of a pencil or crayon, to sequentially solder the PCB. The process is slower than the two previous methods, but more accurate. The PCB may be fixed, and the wave solder pot moved underneath the PCB; alternately, the PCB may be articulated over a fixed wave or solder bath to undergo the selective-soldering process. Unlike the first two examples, this process is toolless. Laser Selective Soldering System: A new system, able to import CAD-based board layouts and use that da
https://en.wikipedia.org/wiki/Aleksandar%20Totic
Aleksandar Totic is one of the original developers of the Mosaic browser. He cofounded Netscape Communications Corporation and was a partner there. He was born in Belgrade, Serbia, on 23 September 1966. He moved to America after his degree from Kuwait was not recognized by the Yugoslav government, and he currently lives in Palo Alto, California.

External links
Mosaic - The First Global Web Browser

Software engineers
Serbian computer scientists
Computer programmers
Living people
Year of birth missing (living people)
Place of birth missing (living people)
https://en.wikipedia.org/wiki/Instinctive%20drift
Instinctive drift, alternately known as instinctual drift, is the tendency of an animal to revert to unconscious and automatic behaviour that interferes with learned behaviour from operant conditioning. Instinctive drift was coined by Keller and Marian Breland, former students of B.F. Skinner at the University of Minnesota, describing the phenomenon as "a clear and utter failure of conditioning theory." B.F. Skinner was an American psychologist and father of operant conditioning (or instrumental conditioning), which is learning strategy that teaches the performance of an action either through reinforcement or punishment. It is through the association of the behaviour and the reward or consequence that follows that depicts whether an animal will maintain a behaviour, or if it will become extinct. Instinctive drift is a phenomenon where such conditioning erodes and an animal reverts to its natural behaviour. B.F. Skinner B.F. Skinner was an American behaviourist inspired by John Watson's philosophy of behaviorism. Skinner was captivated with systematically controlling behaviour to result in desirable or beneficial outcomes. This passion led Skinner to become the father of operant conditioning. Skinner made significant contributions to the research concepts of reinforcement, punishment, schedules of reinforcement, behaviour modification and behaviour shaping. The mere existence of the instinctive drift phenomenon challenged Skinner's initial beliefs on operant conditioning and reinforcement. Operant conditioning Skinner described operant conditioning as strengthening behaviour through reinforcement. Reinforcement can consist of positive reinforcement, in which a desirable stimulus is added; negative reinforcement, in which an undesirable stimulus is taken away; positive punishment, in which an undesirable stimulus is added; and negative punishment, in which a desirable stimulus is taken away. Through these practices, animals shape their behaviour and are motivated
https://en.wikipedia.org/wiki/Genetic%20analysis
Genetic analysis is the overall process of studying and researching in fields of science that involve genetics and molecular biology. There are a number of applications that are developed from this research, and these are also considered parts of the process. The base system of analysis revolves around general genetics. Basic studies include identification of genes and inherited disorders. This research has been conducted for centuries on both a large-scale physical observation basis and on a more microscopic scale. Genetic analysis can be used generally to describe methods both used in and resulting from the sciences of genetics and molecular biology, or to applications resulting from this research. Genetic analysis may be done to identify genetic/inherited disorders and also to make a differential diagnosis in certain somatic diseases such as cancer. Genetic analyses of cancer include detection of mutations, fusion genes, and DNA copy number changes. History of genetic analysis Much of the research that set the foundation of genetic analysis began in prehistoric times. Early humans found that they could practice selective breeding to improve crops and animals. They also identified inherited traits in humans that were eliminated over the years. The many genetic analyses gradually evolved over time. Mendelian research Modern genetic analysis began in the mid-1800s with research conducted by Gregor Mendel. Mendel, who is known as the "father of modern genetics", was inspired to study variation in plants. Between 1856 and 1863, Mendel cultivated and tested some 29,000 pea plants (i.e., Pisum sativum). This study showed that one in four pea plants had purebred recessive alleles, two out of four were hybrid and one out of four were purebred dominant. His experiments led him to make two generalizations, the Law of Segregation and the Law of Independent Assortment, which later became known as Mendel's Laws of Inheritance. Lacking the basic understanding of heredity,
https://en.wikipedia.org/wiki/Two-way%20finite%20automaton
In computer science, in particular in automata theory, a two-way finite automaton is a finite automaton that is allowed to re-read its input. Two-way deterministic finite automaton A two-way deterministic finite automaton (2DFA) is an abstract machine, a generalized version of the deterministic finite automaton (DFA) which can revisit characters already processed. As in a DFA, there are a finite number of states with transitions between them based on the current character, but each transition is also labelled with a value indicating whether the machine will move its position in the input to the left, right, or stay at the same position. Equivalently, 2DFAs can be seen as read-only Turing machines with no work tape, only a read-only input tape. 2DFAs were introduced in a seminal 1959 paper by Rabin and Scott, who proved them to have equivalent power to one-way DFAs. That is, any formal language which can be recognized by a 2DFA can be recognized by a DFA which only examines and consumes each character in order. Since DFAs are obviously a special case of 2DFAs, this implies that both kinds of machines recognize precisely the class of regular languages. However, the equivalent DFA for a 2DFA may require exponentially many states, making 2DFAs a much more practical representation for algorithms for some common problems. 2DFAs are also equivalent to read-only Turing machines that use only a constant amount of space on their work tape, since any constant amount of information can be incorporated into the finite control state via a product construction (a state for each combination of work tape state and control state). Formal description Formally, a two-way deterministic finite automaton can be described by the following 8-tuple (Q, Σ, L, R, δ, s, t, r), where Q is the finite, non-empty set of states, Σ is the finite, non-empty set of input symbols, L is the left endmarker, R is the right endmarker, δ : Q × (Σ ∪ {L, R}) → Q × {left, right} is the transition function, s ∈ Q is the start state, t ∈ Q is the end (accept) state, and r ∈ Q is the reject state. In addition, the following tw
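To make the formal description above concrete, here is a minimal simulator sketch in Python (the dictionary-based transition encoding, the endmarker characters and the example machine are illustrative choices, not taken from the article):

```python
# Minimal 2DFA simulator sketch. Transitions map (state, symbol) to
# (next_state, move), where move is -1 (left), +1 (right) or 0 (stay).
LEFT_END, RIGHT_END = "<", ">"

def run_2dfa(transitions, start, accept, reject, word, max_steps=10_000):
    tape = [LEFT_END] + list(word) + [RIGHT_END]
    state, pos = start, 1                      # head starts on the first input symbol
    for _ in range(max_steps):
        if state == accept:
            return True
        if state == reject:
            return False
        state, move = transitions[(state, tape[pos])]
        pos = min(max(pos + move, 0), len(tape) - 1)
    return False                               # give up if the machine loops

# Example machine: accept words over {a, b} that end in 'a'
# (walk right to the end marker, step back once, inspect the last symbol).
t = {
    ("scan", "a"): ("scan", +1),
    ("scan", "b"): ("scan", +1),
    ("scan", RIGHT_END): ("back", -1),
    ("back", "a"): ("acc", 0),
    ("back", "b"): ("rej", 0),
    ("back", LEFT_END): ("rej", 0),            # empty input
}
print(run_2dfa(t, "scan", "acc", "rej", "abba"))   # True
print(run_2dfa(t, "scan", "acc", "rej", "abab"))   # False
```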
https://en.wikipedia.org/wiki/Autostereoscopy
Autostereoscopy is any method of displaying stereoscopic images (adding binocular perception of 3D depth) without requiring the viewer to wear special headgear, glasses, or any other device over the eyes. Because headgear is not required, it is also called "glasses-free 3D" or "glassesless 3D". There are two broad approaches currently used to accommodate motion parallax and wider viewing angles: eye-tracking, and multiple views so that the display does not need to sense where the viewer's eyes are located. Examples of autostereoscopic display technology include lenticular lenses and parallax barriers, and may include integral imaging, but notably do not include volumetric or holographic displays. Technology Many organizations have developed autostereoscopic 3D displays, ranging from experimental displays in university departments to commercial products, and using a range of different technologies. The method of creating autostereoscopic flat panel video displays using lenses was mainly developed in 1985 by Reinhard Boerner at the Heinrich Hertz Institute (HHI) in Berlin. Prototypes of single-viewer displays were already being presented in the 1990s, by Sega AM3 (Floating Image System) and the HHI. Nowadays, this technology has been developed further mainly by European and Japanese companies. One of the best-known 3D displays developed by HHI was the Free2C, a display with very high resolution and very good comfort achieved by an eye tracking system and a seamless mechanical adjustment of the lenses. Eye tracking has been used in a variety of systems in order to limit the number of displayed views to just two, or to enlarge the stereoscopic sweet spot. However, as this limits the display to a single viewer, it is not favored for consumer products. Currently, most flat-panel displays employ lenticular lenses or parallax barriers that redirect imagery to several viewing regions; however, this manipulation requires reduced image resolutio
https://en.wikipedia.org/wiki/Square-free%20word
In combinatorics, a squarefree word is a word (a sequence of symbols) that does not contain any squares. A square is a word of the form XX, where X is not empty. Thus, a squarefree word can also be defined as a word that avoids the pattern XX. Finite squarefree words Binary alphabet Over a binary alphabet {0, 1}, the only squarefree words are the empty word ε, 0, 1, 01, 10, 010, and 101. Ternary alphabet Over a ternary alphabet {0, 1, 2}, there are infinitely many squarefree words. It is possible to count the number c(n) of ternary squarefree words of length n; this number grows exponentially in n. An upper bound on the growth rate can be found via Fekete's lemma and approximation by automata, and a lower bound by finding a substitution that preserves squarefreeness. Alphabet with more than three letters Since there are infinitely many squarefree words over three-letter alphabets, this implies there are also infinitely many squarefree words over an alphabet with more than three letters. Growth rates of k-ary squarefree words have been computed for larger alphabet sizes k as well. 2-dimensional words Consider a map from ℕ × ℕ to an alphabet Σ; such a map is called a 2-dimensional word, and a line of a 2-dimensional word is an ordinary (1-dimensional) word obtained by reading its entries along a straight line. Carpi proves that there exists a 2-dimensional word over a 16-letter alphabet such that every line of it is squarefree. A computer search shows that there are no 2-dimensional words over a 7-letter alphabet such that every line is squarefree. Generating finite squarefree words Shur proposes an algorithm called R2F (random-t(w)o-free) that can generate a squarefree word of length n over any alphabet with three or more letters. This algorithm is based on a modification of entropy compression: it randomly selects letters from a k-letter alphabet to generate a k-ary squarefree word. algorithm R2F is input: alphabet size k, word length n output: a k-ary squarefree word w of length n. choose the first letter of w uniformly at random, set to fol
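As a hands-on illustration (this is a simple backtracking/restart generator, not the R2F algorithm described above), the following Python sketch grows a ternary squarefree word letter by letter, checking only the suffixes of the current word, since any new square must end at the last position:

```python
import random

def has_square_suffix(w):
    """Return True if some suffix of w is a square xx with x non-empty."""
    n = len(w)
    return any(w[n - 2 * L:n - L] == w[n - L:] for L in range(1, n // 2 + 1))

def random_squarefree(length, alphabet="abc", rng=random):
    """Generate a squarefree word by random extension, restarting on dead ends."""
    while True:
        w = ""
        while len(w) < length:
            choices = [c for c in alphabet if not has_square_suffix(w + c)]
            if not choices:            # rare dead end: start over
                break
            w += rng.choice(choices)
        if len(w) == length:
            return w

print(random_squarefree(20))           # a random ternary squarefree word (output varies)
```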
https://en.wikipedia.org/wiki/Refractometer
A refractometer is a laboratory or field device for the measurement of an index of refraction (refractometry). The index of refraction is calculated from the observed refraction angle using Snell's law. For mixtures, the index of refraction then allows one to determine the concentration using mixing rules such as the Gladstone–Dale relation and Lorentz–Lorenz equation. Refractometry Standard refractometers measure the extent of light refraction (as part of a refractive index) of transparent substances in either a liquid or solid state; this is then used to identify a liquid sample, analyze the sample's purity, and determine the amount or concentration of dissolved substances within the sample. As light passes into the liquid from the air it slows down and appears to 'bend'; the severity of the 'bend' depends on the amount of substance dissolved in the liquid (for example, the amount of sugar dissolved in a glass of water). Types There are four main types of refractometers: traditional handheld refractometers, digital handheld refractometers, laboratory or Abbe refractometers (named after the instrument's inventor, Ernst Abbe, and based on his original critical-angle design) and inline process refractometers. There is also the Rayleigh refractometer, used (typically) for measuring the refractive indices of gases. In laboratory medicine, a refractometer is used to measure the total plasma protein in a blood sample and urine specific gravity in a urine sample. In drug diagnostics, a refractometer is used to measure the specific gravity of human urine. In gemology, the gemstone refractometer is one of the fundamental pieces of equipment used in a gemological laboratory. Gemstones are transparent minerals and can therefore be examined using optical methods. Refractive index is a material constant, dependent on the chemical composition of a substance. The refractometer is used to help identify gem materials by measuring their refractive index, on
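The relationship between the measured angles and the index follows directly from Snell's law; the snippet below (with made-up angles, not values from the article) recovers the index of a liquid sample from the incidence and refraction angles at an air interface:

```python
import math

# Snell's law: n1 * sin(theta1) = n2 * sin(theta2)
def refractive_index(n1, theta1_deg, theta2_deg):
    """Index of the second medium, given the first index and both angles."""
    return n1 * math.sin(math.radians(theta1_deg)) / math.sin(math.radians(theta2_deg))

# Illustrative numbers: light entering a liquid from air (n1 ~ 1.000)
n_sample = refractive_index(n1=1.000, theta1_deg=45.0, theta2_deg=32.0)
print(f"n = {n_sample:.3f}")   # ~1.334, close to the index of water
```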
https://en.wikipedia.org/wiki/Comparison%20function
In applied mathematics, comparison functions are several classes of continuous functions, which are used in stability theory to characterize the stability properties of control systems, such as Lyapunov stability, uniform asymptotic stability, etc. Let ℝ₊ denote the half-line [0, ∞), and consider continuous functions acting from ℝ₊ to ℝ₊. The most important classes of comparison functions are: class K (continuous, strictly increasing functions α with α(0) = 0); class K∞ (unbounded class-K functions); class L (continuous, decreasing functions converging to zero); and class KL (functions β(s, t) that are of class K in s for each fixed t and of class L in t for each fixed s). Functions of class P (continuous functions that are zero at zero and positive elsewhere) are also called positive-definite functions. One of the most important properties of comparison functions is given by Sontag's KL-Lemma, named after Eduardo Sontag. It says that for each β ∈ KL and any λ > 0 there exist α1, α2 ∈ K∞ such that α1(β(s, t)) ≤ α2(s)·e^(−λt) for all s, t ≥ 0. Many further useful properties of comparison functions can be found in the literature. Comparison functions are primarily used to obtain quantitative restatements of stability properties such as Lyapunov stability, uniform asymptotic stability, etc. These restatements are often more useful than the qualitative definitions of stability properties given in ε–δ language. As an example, consider an ordinary differential equation ẋ = f(x), where f is locally Lipschitz. Then: (1) it is globally stable if and only if there is a σ ∈ K∞ so that for any initial condition x₀ and for any t ≥ 0 it holds that |x(t)| ≤ σ(|x₀|); (2) it is globally asymptotically stable if and only if there is a β ∈ KL so that for any initial condition x₀ and for any t ≥ 0 it holds that |x(t)| ≤ β(|x₀|, t). The comparison-functions formalism is widely used in input-to-state stability theory. References Types of functions Stability theory
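As a small numerical illustration of how a class-KL bound certifies asymptotic stability (the system and the particular β below are chosen for convenience; they are not from the article), consider the scalar system ẋ = −x, whose solution is x(t) = x₀·e^(−t):

```python
import numpy as np

def beta(s, t):
    # class KL: increasing in s with beta(0, t) = 0, decreasing to 0 in t
    return s * np.exp(-t)

x0 = 3.0
t = np.linspace(0.0, 5.0, 51)
x = x0 * np.exp(-t)                    # exact solution of x' = -x, x(0) = x0

# |x(t)| <= beta(|x0|, t) along the whole trajectory, certifying global
# asymptotic stability; sigma(s) = s is a class-K bound for plain stability.
assert np.all(np.abs(x) <= beta(abs(x0), t) + 1e-12)
print("KL bound holds along the trajectory")
```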
https://en.wikipedia.org/wiki/Logical%20constant
In logic, a logical constant or constant symbol of a language L is a symbol that has the same semantic value under every interpretation of L. Two important types of logical constants are logical connectives and quantifiers. The equality predicate (usually written '=') is also treated as a logical constant in many systems of logic. One of the fundamental questions in the philosophy of logic is "What is a logical constant?"; that is, what special feature of certain constants makes them logical in nature? Some symbols that are commonly treated as logical constants are T ("true"), F ("false"), ¬ ("not"), ∧ ("and"), ∨ ("or"), → ("implies"), ∀ ("for all"), ∃ ("there exists") and = ("equals"). Many of these logical constants are sometimes denoted by alternate symbols (for instance, the use of the symbol "&" rather than "∧" to denote the logical and). Defining logical constants is a major part of the work of Gottlob Frege and Bertrand Russell. Russell returned to the subject of logical constants in the preface to the second edition (1937) of The Principles of Mathematics noting that logic becomes linguistic: "If we are to say anything definite about them, [they] must be treated as part of the language, not as part of what the language speaks about." The text of this book uses relations R, their converses and complements as primitive notions, also taken as logical constants in the form aRb. See also Logical connective Logical value Non-logical symbol References External links Stanford Encyclopedia of Philosophy entry on logical constants Concepts in logic Logic symbols Logical truth Philosophical logic Syntax (logic)
https://en.wikipedia.org/wiki/Musix%20GNU%2BLinux
Musix GNU+Linux is a discontinued live CD and DVD Linux distribution for the IA-32 processor family based on Debian. It contained a collection of software for audio production, graphic design, video editing and general-purpose applications. Musix GNU+Linux was one of the few Linux distributions recognized by the Free Software Foundation as being composed completely of free software. The main language used in development discussion and documentation was Spanish. Software Musix 2.0 Musix 2.0 was developed using the live-helper scripts from the Debian-Live project. The first Alpha version of Musix 2.0 was released on 25 March 2009 including two realtime-patched Linux-Libre kernels. On 17 May 2009 the first beta version of Musix 2.0 was released. See also Comparison of Linux distributions dyne:bolic – another free distribution for multimedia enthusiasts GNU/Linux naming controversy List of Linux distributions based on Debian References External links Debian-based distributions Free audio software Operating system distributions bootable from read-only media Knoppix Linux media creation distributions Free software only Linux distributions 2008 software Linux distributions
https://en.wikipedia.org/wiki/Dipsogen
A dipsogen is an agent that causes thirst. (From Greek: δίψα (dipsa), "thirst" and the suffix -gen, "to create".) Physiology Angiotensin II is thought to be a powerful dipsogen, and is one of the products of the renin–angiotensin pathway, a biological homeostatic mechanism for the regulation of electrolytes and water. External links 'Fluid Physiology' by Kerry Brandis (from http://www.anaesthesiamcq.com) Physiology
https://en.wikipedia.org/wiki/Software%20Communications%20Architecture
The Software Communications Architecture (SCA) is an open architecture framework that defines a standard way for radios to instantiate, configure, and manage waveform applications running on their platform. The SCA separates waveform software from the underlying hardware platform, facilitating waveform software portability and re-use to avoid costs of redeveloping waveforms. The latest version is SCA 4.1. Overview The SCA is published by the Joint Tactical Networking Center (JTNC). This architecture was developed to assist in the development of Software Defined Radio (SDR) communication systems, capturing the benefits of recent technology advances which are expected to greatly enhance interoperability of communication systems and reduce development and deployment costs. The architecture is also applicable to other embedded, distributed-computing applications such as Communications Terminals or Electronic Warfare (EW). The SCA has been structured to: Provide for portability of applications software between different SCA implementations, Leverage commercial standards to reduce development cost, Reduce software development time through the ability to reuse design modules, and Build on evolving commercial frameworks and architectures. The SCA is deliberately designed to meet commercial application requirements as well as those of military applications. Since the SCA is intended to become a self-sustaining standard, a wide cross-section of industry has been invited to participate in the development and validation of the SCA. The SCA is not a system specification but an implementation independent set of rules that constrain the design of systems to achieve the objectives listed above. Core Framework The Core Framework (CF) defines the essential "core" set of open software interfaces and profiles that provide for the deployment, management, interconnection, and intercommunication of software application components in an embedded, distributed-computing communication
https://en.wikipedia.org/wiki/Air-to-cloth%20ratio
The air-to-cloth ratio is the volumetric flow rate of air (m³/minute; SI: m³/second) flowing through a dust collector's inlet duct divided by the total cloth area (m²) in the filters. The result is expressed in units of velocity. The air-to-cloth ratio is typically between 1.5 and 3.5 metres per minute, mainly depending on the concentration of dust loading. External links Details on how to calculate air-to-cloth ratio Filters Engineering ratios
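The calculation itself is a single division; a sketch with made-up numbers (not from the article):

```python
def air_to_cloth_ratio(flow_m3_per_min, cloth_area_m2):
    """Volumetric inlet flow divided by total filter cloth area, in m/min."""
    return flow_m3_per_min / cloth_area_m2

# e.g. 600 m^3/min through 250 m^2 of cloth -> 2.4 m/min,
# inside the typical 1.5-3.5 m/min range quoted above
print(air_to_cloth_ratio(flow_m3_per_min=600.0, cloth_area_m2=250.0))
```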
https://en.wikipedia.org/wiki/Monte%20Carlo%20localization
Monte Carlo localization (MCL), also known as particle filter localization, is an algorithm for robots to localize using a particle filter. Given a map of the environment, the algorithm estimates the position and orientation of a robot as it moves and senses the environment. The algorithm uses a particle filter to represent the distribution of likely states, with each particle representing a possible state, i.e., a hypothesis of where the robot is. The algorithm typically starts with a uniform random distribution of particles over the configuration space, meaning the robot has no information about where it is and assumes it is equally likely to be at any point in space. Whenever the robot moves, it shifts the particles to predict its new state after the movement. Whenever the robot senses something, the particles are resampled based on recursive Bayesian estimation, i.e., how well the actual sensed data correlate with the predicted state. Ultimately, the particles should converge towards the actual position of the robot. Basic description Consider a robot with an internal map of its environment. When the robot moves around, it needs to know where it is within this map. Determining its location and rotation (more generally, the pose) by using its sensor observations is known as robot localization. Because the robot may not always behave in a perfectly predictable way, it generates many random guesses of where it is going to be next. These guesses are known as particles. Each particle contains a full description of a possible future state. When the robot observes the environment, it discards particles inconsistent with this observation, and generates more particles close to those that appear consistent. In the end, hopefully most particles converge to where the robot actually is. State representation The state of the robot depends on the application and design. For example, the state of a typical 2D robot may consist of a tuple (x, y, θ) for position (x, y) and orientation θ. For
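A minimal one-dimensional sketch of the predict/update/resample loop is below (the corridor world, noise levels and sensor model are invented for illustration; a real implementation would use the 2D pose (x, y, θ) and a map-based measurement model):

```python
import numpy as np

rng = np.random.default_rng(0)
N, corridor = 1000, 10.0
particles = rng.uniform(0.0, corridor, N)        # uniform prior: pose unknown

def motion_update(p, u, noise=0.1):
    """Shift every particle by the commanded motion u plus noise."""
    return np.clip(p + u + rng.normal(0.0, noise, p.size), 0.0, corridor)

def measurement_update(p, z, noise=0.2):
    """Weight particles by how well they predict the range measurement z, then resample."""
    expected = corridor - p                      # predicted distance to the far wall
    w = np.exp(-0.5 * ((z - expected) / noise) ** 2) + 1e-300
    w /= w.sum()
    return p[rng.choice(p.size, p.size, p=w)]

true_pose = 2.0
for _ in range(10):                              # robot moves 0.5 m per step
    true_pose += 0.5
    z = (corridor - true_pose) + rng.normal(0.0, 0.2)
    particles = motion_update(particles, 0.5)
    particles = measurement_update(particles, z)

print(f"true pose {true_pose:.2f}, estimate {particles.mean():.2f}")
```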
https://en.wikipedia.org/wiki/System%20in%20a%20package
A system in a package (SiP) or system-in-package is a number of integrated circuits (ICs) enclosed in one chip carrier package or encompassing an IC package substrate that may include passive components and perform the functions of an entire system. The ICs may be stacked using package on package, placed side by side, and/or embedded in the substrate. The SiP performs all or most of the functions of an electronic system, and is typically used when designing components for mobile phones, digital music players, etc. Dies containing integrated circuits may be stacked vertically on a substrate. They are internally connected by fine wires that are bonded to the package. Alternatively, with a flip chip technology, solder bumps are used to join stacked chips together. SiPs are like systems on a chip (SoCs) but less tightly integrated and not on a single semiconductor die. Technology SiP dies can be stacked vertically or tiled horizontally, with techniques like chiplets or quilt packaging, unlike less dense multi-chip modules, which place dies horizontally on a carrier. SiPs connect the dies with standard off-chip wire bonds or solder bumps, unlike slightly denser three-dimensional integrated circuits which connect stacked silicon dies with conductors running through the die. Many different 3D packaging techniques have been developed for stacking many fairly standard chip dies into a compact area. SiPs can contain several chips—such as a specialized processor, DRAM, flash memory—combined with passive components—resistors and capacitors—all mounted on the same substrate. This means that a complete functional unit can be built in a multi-chip package, so that few external components need to be added to make it work. This is particularly valuable in space constrained environments like MP3 players and mobile phones as it reduces the complexity of the printed circuit board and overall design. Despite its benefits, this technique decreases the yield of fabrication since any d
https://en.wikipedia.org/wiki/MAFA
MAFA (Mast cell function-associated antigen) is a type II membrane glycoprotein, first identified on the surface of rat mucosal-type mast cells of the RBL-2H3 line. More recently, human and mouse homologues of MAFA have been discovered that are also (or only) expressed by NK and T cells. MAFA is closely linked with the type 1 Fcɛ receptors not only in mucosal mast cells of humans and mice but also in the serosal mast cells of these same organisms. It can function as a channel for calcium ions and can also interact with other receptors to inhibit certain cell processes. Its function is based on its specialized structure, which contains many specialized motifs and sequences that allow its functions to take place. Discovery Experimental discovery MAFA was initially discovered by Enrique Ortega and Israel Pecht in 1988 while studying the type 1 Fcɛ receptors (FcɛRI) and the unknown Ca2+ channels that allowed these receptors to work in the cellular membrane. Ortega and Pecht experimented using a series of monoclonal antibodies on the RBL-2H3 line of rat mast cells. While screening for a specific antibody that would raise a response, they found that the G63 monoclonal antibody inhibited the cellular secretions linked to the FcɛRI receptors in these rat mucosal mast cells. The G63 antibody attached to a specific membrane receptor protein that caused the inhibition process to occur. Specifically, the inhibition occurred through cross-linking of the G63 antibody and the glycoprotein, so that the processes of inflammation mediator formation, Ca2+ intake into the cell, and the hydrolysis of phosphatidylinositides were all stopped. This caused biochemical inhibition of the normal FcɛRI response. The identified receptor protein was then isolated and studied, and it was found that when cross-linked, the protein actually had a conformational change that localized the FcɛRI receptors. Based on these results, both Ortega and Pecht na
https://en.wikipedia.org/wiki/Type%E2%80%93length%E2%80%93value
Within communication protocols, TLV (type-length-value or tag-length-value) is an encoding scheme used for informational elements. A TLV-encoded data stream contains code related to the record type, the record value's length, and finally the value itself. Details The type and length are fixed in size (typically 1–4 bytes), and the value field is of variable size. These fields are used as follows: Type A binary code, often simply alphanumeric, which indicates the kind of field that this part of the message represents; Length The size of the value field (typically in bytes); Value Variable-sized series of bytes which contains data for this part of the message. Some advantages of using a TLV representation are: TLV sequences are easily searched using generalized parsing functions; New message elements which are received at an older node can be safely skipped and the rest of the message can be parsed. This is similar to the way that unknown XML tags can be safely skipped; TLV elements can be placed in any order inside the message body; TLV elements are typically used in binary formats and binary protocols, which makes parsing faster and the data smaller than in comparable text-based protocols. Examples Real-world examples Transport protocols TLS (and its predecessor SSL) use TLV-encoded messages. SSH COPS IS-IS RADIUS Link Layer Discovery Protocol allows for the sending of organizational-specific information as a TLV element within LLDP packets Media Redundancy Protocol allows organizational-specific information Dynamic Host Configuration Protocol (DHCP) uses TLV encoded options RR protocol used in GSM cell phones (defined in 3GPP 04.18). In this protocol each message is defined as a sequence of information elements. Data storage formats IFF Matroska uses TLV for markup tags QTFF (the basis for MPEG-4 containers) Other ubus used for IPC in OpenWrt Other examples Imagine a message to make a telephone call. In a fi
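A toy codec makes the layout concrete; the 1-byte type / 2-byte big-endian length framing below is an arbitrary illustrative choice rather than any particular protocol's wire format:

```python
import struct

def encode_tlv(t, value: bytes) -> bytes:
    return struct.pack("!BH", t, len(value)) + value     # type, length, value

def decode_tlv(buf: bytes):
    items, pos = [], 0
    while pos < len(buf):
        t, length = struct.unpack_from("!BH", buf, pos)
        pos += 3
        items.append((t, buf[pos:pos + length]))
        pos += length                 # unknown types can simply be skipped this way
    return items

msg = encode_tlv(1, b"call") + encode_tlv(2, b"+1-555-0100")
print(decode_tlv(msg))                # [(1, b'call'), (2, b'+1-555-0100')]
```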
https://en.wikipedia.org/wiki/Link%20Layer%20Discovery%20Protocol
The Link Layer Discovery Protocol (LLDP) is a vendor-neutral link layer protocol used by network devices for advertising their identity, capabilities, and neighbors on a local area network based on IEEE 802 technology, principally wired Ethernet. The protocol is formally referred to by the IEEE as Station and Media Access Control Connectivity Discovery specified in IEEE 802.1AB with additional support in IEEE 802.3 section 6 clause 79. LLDP performs functions similar to several proprietary protocols, such as Cisco Discovery Protocol, Foundry Discovery Protocol, Nortel Discovery Protocol and Link Layer Topology Discovery. Information gathered Information gathered with LLDP can be stored in the device management information base (MIB) and queried with the Simple Network Management Protocol (SNMP) as specified in RFC 2922. The topology of an LLDP-enabled network can be discovered by crawling the hosts and querying this database. Information that may be retrieved includes: System name and description Port name and description VLAN name IP management address System capabilities (switching, routing, etc.) MAC/PHY information MDI power Link aggregation Applications The Link Layer Discovery Protocol may be used as a component in network management and network monitoring applications. One such example is its use in data center bridging requirements. The Data Center Bridging Capabilities Exchange protocol (DCBX) is a discovery and capability exchange protocol that is used for conveying capabilities and configuration of the above features between neighbors to ensure consistent configuration across the network. LLDP is used to advertise power over Ethernet capabilities and requirements and negotiate power delivery. Media endpoint discovery extension Media Endpoint Discovery is an enhancement of LLDP, known as LLDP-MED, that provides the following facilities: Auto-discovery of LAN policies (such as VLAN, Layer 2 Priority and Differentiated services (Diffserv) settings) enabling plug and play networking. Device
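LLDP itself is TLV-based: each element of an LLDPDU (the payload of an Ethernet frame with EtherType 0x88CC) starts with a 16-bit header holding a 7-bit type and a 9-bit length. The sketch below walks such a buffer; the hand-built example frame and its subtype codes are illustrative and should be checked against IEEE 802.1AB rather than taken as authoritative:

```python
def iter_lldp_tlvs(payload: bytes):
    """Yield (type, value) pairs from a raw LLDPDU until the End-of-LLDPDU TLV."""
    pos = 0
    while pos + 2 <= len(payload):
        header = int.from_bytes(payload[pos:pos + 2], "big")
        tlv_type, tlv_len = header >> 9, header & 0x1FF
        pos += 2
        if tlv_type == 0:             # type 0 = End of LLDPDU
            break
        yield tlv_type, payload[pos:pos + tlv_len]
        pos += tlv_len

# Hypothetical minimal LLDPDU: Chassis ID (1), Port ID (2), TTL (3), End (0).
pdu = (bytes([(1 << 1), 7, 4]) + bytes.fromhex("aabbccddeeff")   # chassis ID, MAC subtype
       + bytes([(2 << 1), 5, 5]) + b"eth0"                       # port ID, interface-name subtype
       + bytes([(3 << 1), 2]) + (120).to_bytes(2, "big")         # TTL = 120 s
       + bytes([0, 0]))
for tlv_type, value in iter_lldp_tlvs(pdu):
    print(tlv_type, value)
```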
https://en.wikipedia.org/wiki/Value%20%28mathematics%29
In mathematics, value may refer to several, strongly related notions. In general, a mathematical value may be any definite mathematical object. In elementary mathematics, this is most often a number – for example, a real number or an integer such as 42. The value of a variable or a constant is any number or other mathematical object assigned to it. The value of a mathematical expression is the result of the computation described by this expression when the variables and constants in it are assigned values. The value of a function, given the value(s) assigned to its argument(s), is the quantity assumed by the function for these argument values. For example, if the function f is defined by f(x) = 2x² − 3x + 1, then assigning the value 3 to its argument x yields the function value 10, since f(3) = 2·3² − 3·3 + 1 = 10. If the variable, expression or function only assumes real values, it is called real-valued. Likewise, a complex-valued variable, expression or function only assumes complex values. See also Value function Value (computer science) Absolute value Truth value References Elementary mathematics
https://en.wikipedia.org/wiki/Logic%20of%20information
The logic of information, or the logical theory of information, considers the information content of logical signs and expressions along the lines initially developed by Charles Sanders Peirce. In this line of work, the concept of information serves to integrate the aspects of signs and expressions that are separately covered, on the one hand, by the concepts of denotation and extension, and on the other hand, by the concepts of connotation and comprehension. Peirce began to develop these ideas in his lectures "On the Logic of Science" at Harvard University (1865) and the Lowell Institute (1866). See also Charles Sanders Peirce bibliography Information theory Inquiry Philosophy of information Pragmatic maxim Pragmatic theory of information Pragmatic theory of truth Pragmaticism Pragmatism Scientific method Semeiotic Semiosis Semiotics Semiotic information theory Sign relation Sign relational complex Triadic relation References Luciano Floridi, The Logic of Information, presentation, discussion, Télé-université (Université du Québec), 11 May 2005, Montréal, Canada. Luciano Floridi, The logic of being informed, Logique et Analyse. 2006, 49.196, 433–460. External links Peirce, C.S. (1867), "Upon Logical Comprehension and Extension", Eprint Information theory Semiotics Logic Charles Sanders Peirce
https://en.wikipedia.org/wiki/Mass-to-light%20ratio
In astrophysics and physical cosmology the mass-to-light ratio, normally designated with the Greek letter upsilon, Υ, is the quotient between the total mass of a spatial volume (typically on the scales of a galaxy or a cluster) and its luminosity. These ratios are often reported using the value calculated for the Sun as a baseline ratio, which is a constant Υ☉ = 5133 kg/W, equal to the solar mass M☉ divided by the solar luminosity L☉ (Υ☉ = M☉/L☉). The mass-to-light ratios of galaxies and clusters are all much greater than Υ☉ due in part to the fact that most of the matter in these objects does not reside within stars and observations suggest that a large fraction is present in the form of dark matter. Luminosities are obtained from photometric observations, correcting the observed brightness of the object for the distance dimming and extinction effects. In general, unless a complete spectrum of the radiation emitted by the object is obtained, a model must be extrapolated through either power law or blackbody fits. The luminosity thus obtained is known as the bolometric luminosity. Masses are often calculated from the dynamics of the virialized system or from gravitational lensing. Typical mass-to-light ratios for galaxies range from 2 to 10 Υ☉, while on the largest scales, the mass-to-light ratio of the observable universe is approximately 100 Υ☉, in concordance with the current best fit cosmological model. References External links Physical cosmology Astrophysics Ratios
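Converting a measured ratio into solar units is a one-line calculation; the galaxy numbers below are hypothetical, and the exact value of Υ☉ in kg/W depends slightly on the adopted solar mass and luminosity (the article quotes 5133 kg/W):

```python
M_SUN_KG = 1.989e30               # solar mass
L_SUN_W = 3.828e26                # solar luminosity
UPSILON_SUN = M_SUN_KG / L_SUN_W  # ~5.2e3 kg/W baseline ratio

# Hypothetical galaxy: ~1e12 solar masses of matter, ~1e11 solar luminosities
mass_kg = 1.989e42
luminosity_w = 3.828e37
upsilon = (mass_kg / luminosity_w) / UPSILON_SUN
print(f"mass-to-light ratio = {upsilon:.1f} solar units")   # 10.0
```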
https://en.wikipedia.org/wiki/Cray%20CS6400
The Cray Superserver 6400, or CS6400, is a discontinued multiprocessor server computer system produced by Cray Research Superservers, Inc., a subsidiary of Cray Research, and launched in 1993. The CS6400 was also sold as the Amdahl SPARCsummit 6400E. The CS6400 (codenamed SuperDragon during development) superseded the earlier SPARC-based Cray S-MP system, which was designed by Floating Point Systems. However, the CS6400 adopted the XDBus packet-switched inter-processor bus also used in Sun Microsystems' SPARCcenter 2000 (Dragon) and SPARCserver 1000 (Baby Dragon or Scorpion) Sun4d systems. This bus originated in the Xerox Dragon multiprocessor workstation designed at Xerox PARC. The CS6400 was available with either 60 MHz SuperSPARC-I or 85 MHz SuperSPARC-II processors; maximum RAM capacity was 16 GB. Other features shared with the Sun servers included use of the same SuperSPARC microprocessor and Solaris operating system. However, the CS6400 could be configured with four to 64 processors on quad XDBusses at 55 MHz, compared with the SPARCcenter 2000's maximum of 20 on dual XDBusses at 40 or 50 MHz and the SPARCserver 1000's maximum of 8 on a single XDBus. Unlike the Sun SPARCcenter 2000 and SPARCserver 1000, each CS6400 is equipped with an external System Service Processor (SSP), a SPARCstation fitted with a JTAG interface to communicate with the CS6400 to configure its internal bus control card. The other systems have a JTAG interface, but it is not used for this purpose. While the CS6400 only requires the SSP to be used for configuration changes (e.g. a CPU card is pulled for maintenance), some derivative designs, in particular the Sun Enterprise 10000, are useless without their SSP. Upon Silicon Graphics' acquisition of Cray Research in 1996, the Superserver business (by now the Cray Business Systems Division) was sold to Sun. This included Starfire, the CS6400's successor then under development, which became the Sun Enterprise 10000. References External
https://en.wikipedia.org/wiki/Certificate%20policy
A certificate policy (CP) is a document which aims to state what the different entities of a public key infrastructure (PKI) are, along with their roles and their duties. This document is published in the PKI perimeter. When in use with X.509 certificates, a specific field can be set to include a link to the associated certificate policy. Thus, during an exchange, any relying party has access to the assurance level associated with the certificate, and can decide on the level of trust to put in the certificate. RFC 3647 The reference document for writing a certificate policy is RFC 3647. The RFC proposes a framework for the writing of certificate policies and Certification Practice Statements (CPS). The points described below are based on the framework presented in the RFC. Main points Architecture The document should describe the general architecture of the related PKI, present the different entities of the PKI and any exchange based on certificates issued by this very same PKI. Certificate uses An important point of the certificate policy is the description of the authorized and prohibited certificate uses. When a certificate is issued, it can be stated in its attributes what use cases it is intended to fulfill. For example, a certificate can be issued for digital signature of e-mail (aka S/MIME), encryption of data, authentication (e.g. of a Web server, as when one uses HTTPS) or further issuance of certificates (delegation of authority). Prohibited uses are specified in the same way. Naming, identification and authentication The document also describes how certificate names are to be chosen, and besides, the associated needs for identification and authentication. When a certification application is filled, the certification authority (or, by delegation, the registration authority) is in charge of checking the information provided by the applicant, such as their identity. This is to make sure that the CA does not take part in identity theft. Key generation The ge
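In practice the link mentioned above lives in the X.509 certificatePolicies extension. The sketch below uses the third-party Python cryptography package (API names as I recall them; the certificate file name is hypothetical) to list the policy OIDs and any CPS-pointer qualifiers of a certificate:

```python
from cryptography import x509

# Load a PEM-encoded certificate from disk (hypothetical file name).
with open("server.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

# The certificatePolicies extension carries one or more policy OIDs,
# optionally with qualifiers such as a CPS URI pointing at the policy text.
ext = cert.extensions.get_extension_for_class(x509.CertificatePolicies)
for policy in ext.value:
    print("policy OID:", policy.policy_identifier.dotted_string)
    for qualifier in policy.policy_qualifiers or []:
        print("  qualifier:", qualifier)
```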
https://en.wikipedia.org/wiki/Sort%20%28Unix%29
In computing, sort is a standard command line program of Unix and Unix-like operating systems, that prints the lines of its input or concatenation of all files listed in its argument list in sorted order. Sorting is done based on one or more sort keys extracted from each line of input. By default, the entire input is taken as the sort key. Blank space is the default field separator. The command supports a number of command-line options that can vary by implementation. For instance the "-r" flag will reverse the sort order. History A command that invokes a general sort facility was first implemented within Multics. Later, it appeared in Version 1 Unix. This version was originally written by Ken Thompson at AT&T Bell Laboratories. By Version 4 Thompson had modified it to use pipes, but sort retained an option to name the output file because it was used to sort a file in place. In Version 5, Thompson invented "-" to represent standard input. The version of sort bundled in GNU coreutils was written by Mike Haertel and Paul Eggert. This implementation employs the merge sort algorithm. Similar commands are available on many other operating systems, for example a sort command is part of ASCII's MSX-DOS2 Tools for MSX-DOS version 2. The command has also been ported to the IBM i operating system. Syntax sort [OPTION]... [FILE]... With no FILE, or when FILE is -, the command reads from standard input. Parameters Examples Sort a file in alphabetical order $ cat phonebook Smith, Brett 555-4321 Doe, John 555-1234 Doe, Jane 555-3214 Avery, Cory 555-4132 Fogarty, Suzie 555-2314 $ sort phonebook Avery, Cory 555-4132 Doe, Jane 555-3214 Doe, John 555-1234 Fogarty, Suzie 555-2314 Smith, Brett 555-4321 Sort by number The -n option makes the program sort according to numerical value. The du command produces output that starts with a number, the file size, so its output can be piped to sort -n to produce a list of files sorted by (ascending) fil
https://en.wikipedia.org/wiki/Brian%20Conrad
Brian Conrad (born November 20, 1970) is an American mathematician and number theorist, working at Stanford University. Previously, he taught at the University of Michigan and at Columbia University. Conrad and others proved the modularity theorem, also known as the Taniyama-Shimura Conjecture. He proved this in 1999 with Christophe Breuil, Fred Diamond and Richard Taylor, while holding a joint postdoctoral position at Harvard University and the Institute for Advanced Study in Princeton, New Jersey. Conrad received his bachelor's degree from Harvard in 1992, where he won a prize for his undergraduate thesis. He did his doctoral work under Andrew Wiles and went on to receive his Ph.D. from Princeton University in 1996 with a dissertation titled Finite Honda Systems And Supersingular Elliptic Curves. He was also featured as an extra in Nova's The Proof. His identical twin brother Keith Conrad, also a number theorist, is a professor at the University of Connecticut. References External links Homepage at Stanford University On the modularity of elliptic curves over Q - Proof of Taniyama-Shimura coauthored by Conrad. Brian Conrad, Fred Diamond, Richard Taylor: Modularity of certain potentially Barsotti-Tate Galois representations, Journal of the American Mathematical Society 12 (1999), pp. 521–567. Also contains the proof C. Breuil, B. Conrad, F. Diamond, R. Taylor : On the modularity of elliptic curves over Q: wild 3-adic exercises, Journal of the American Mathematical Society 14 (2001), 843–939. 20th-century American mathematicians 21st-century American mathematicians Number theorists Harvard University staff Princeton University alumni University of Michigan faculty Scientists from New York City 1970 births Living people Harvard College alumni Fermat's Last Theorem Mathematicians from New York (state) American identical twins Recipients of the Presidential Early Career Award for Scientists and Engineers
https://en.wikipedia.org/wiki/Transgene
A transgene is a gene that has been transferred naturally, or by any of a number of genetic engineering techniques, from one organism to another. The introduction of a transgene, in a process known as transgenesis, has the potential to change the phenotype of an organism. Transgene describes a segment of DNA containing a gene sequence that has been isolated from one organism and is introduced into a different organism. This non-native segment of DNA may either retain the ability to produce RNA or protein in the transgenic organism or alter the normal function of the transgenic organism's genetic code. In general, the DNA is incorporated into the organism's germ line. For example, in higher vertebrates this can be accomplished by injecting the foreign DNA into the nucleus of a fertilized ovum. This technique is routinely used to introduce human disease genes or other genes of interest into strains of laboratory mice to study the function or pathology involved with that particular gene. The construction of a transgene requires the assembly of a few main parts. The transgene must contain a promoter, which is a regulatory sequence that will determine where and when the transgene is active, an exon, a protein coding sequence (usually derived from the cDNA for the protein of interest), and a stop sequence. These are typically combined in a bacterial plasmid and the coding sequences are typically chosen from transgenes with previously known functions. Transgenic or genetically modified organisms, be they bacteria, viruses or fungi, serve many research purposes. Transgenic plants, insects, fish and mammals (including humans) have been bred. Transgenic plants such as corn and soybean have replaced wild strains in agriculture in some countries (e.g. the United States). Transgene escape has been documented for GMO crops since 2001 with persistence and invasiveness. Transgenetic organisms pose ethical questions and may cause biosafety problems. History The idea of shaping a
https://en.wikipedia.org/wiki/Prism%20%28chipset%29
The Prism brand is used for wireless networking integrated circuit (commonly called "chips") technology from Conexant for wireless LANs. They were formerly produced by Intersil Corporation. Legacy 802.11b products (Prism 2/2.5/3) The open-source HostAP driver supports the IEEE 802.11b Prism 2/2.5/3 family of chips. Wireless adaptors which use the Prism chipset are known for compatibility, and are preferred for specialist applications such as packet capture. No win64 drivers are known to exist. Intersil firmware WEP WPA (TKIP), after update WPA2 (CCMP), after update Lucent/Agere WEP WPA (TKIP in hardware) 802.11b/g products (Prism54, ISL38xx) The chipset has undergone a major redesign for 802.11g compatibility and cost reduction, and newer "Prism54" chipsets are not compatible with their predecessors. Intersil initially provided a Linux driver for the first Prism54 chips which implemented a large part of the 802.11 stack in the firmware. However, further cost reductions caused a new, lighter firmware to be designed and the amount of on-chip memory to shrink, making it impossible to run the older version of the firmware on the latest chips. In the meantime, the PRISM business was sold to Conexant, which never published information about the newer firmware API that would enable a Linux driver to be written. However, a reverse engineering effort eventually made it possible to use the new Prism54 chipsets under the Linux and BSD operating systems. See also HostAP driver for prism chipsets External links PRISM solutions at Conexant GPL drivers and firmware for the ISL38xx-based Prism chipsets (mostly reverse engineered) Wireless networking hardware
https://en.wikipedia.org/wiki/QuickWin
QuickWin was a library from Microsoft that made it possible to compile command line MS-DOS programs as Windows 3.1 applications, displaying their output in a window. Since the release of Windows NT, Microsoft has included support for console applications in the Windows operating system itself via the Windows Console, eliminating the need for QuickWin. But Intel Visual Fortran still uses that library. Borland's equivalent in Borland C++ 5 was called EasyWin. There is a program called QuickWin on CodeProject, which does a similar thing. See also Command-line interface References Computer libraries
https://en.wikipedia.org/wiki/Sharon%20R.%20Long
Sharon Rugel Long (born March 2, 1951) is an American plant biologist. She is the Steere-Pfizer Professor of Biological Science in the Department of Biology at Stanford University, and the Principal Investigator of the Long Laboratory at Stanford. Long studies the symbiosis between bacteria and plants, in particular the relationship of nitrogen-fixing bacteria to legumes. Her work has applications for energy conservation and sustainable agriculture. She is a 1992 MacArthur Fellows Program recipient, and became a Member of the National Academy of Sciences in 1993. Early life and education Sharon Rugel Long was born on March 2, 1951, to Harold Eugene and Florence Jean (Rugel) Long. She attended George Washington High School in Denver, Colorado. Long spent a year at Harvey Mudd College before becoming one of the first women to attend Caltech in September 1970. She completed a double major in biochemistry and French literature in the Independent Studies Program, and obtained her B.S. in 1973. Long went on to study biochemistry and genetics at Yale, receiving her Ph.D. in 1979. She began her research on plants and symbiosis while a postdoc in Frederick M. Ausubel's lab at Harvard University. Career and research Long joined the Stanford University faculty in 1982 as an assistant professor, rising to associate professor in 1987, and full professor in 1992. From 1994 to 2001 she was also an Investigator of the Howard Hughes Medical Institute. She currently holds the Steere-Pfizer chair in Biological Sciences at Stanford. From 1993 to 1996 she was part of the National Research Council's Committee on Undergraduate Science Education. She served as Dean of Humanities and Sciences at Stanford University from 2001 to 2007. In September 2008 she was identified as one of five science advisors for Democratic presidential candidate Barack Obama. In 2011, she was appointed to the President's Committee on the National Medal of Science by President Obama. Long identified and cloned genes that allow bact
https://en.wikipedia.org/wiki/Takt%20time
Takt time, or simply takt, is a manufacturing term to describe the required product assembly duration that is needed to match the demand. Often confused with cycle time, takt time is a tool used to design work, and it measures the average time interval between the start of production of one unit and the start of production of the next unit when items are produced sequentially. For calculations, it is the time available to produce parts divided by the number of parts demanded in that time interval. The takt time is based on customer demand; if a process or a production line is unable to produce at takt time, either demand leveling, additional resources, or process re-engineering is needed to ensure on-time delivery. For example, if the customer demand is 10 units per week, then, given a 40-hour workweek and steady flow through the production line, the average duration between production starts should be 4 hours, ideally. This interval is further reduced to account for things like machine downtime and scheduled employee breaks. Etymology Takt time is a borrowing of the Japanese word takuto taimu, which in turn was borrowed from the German word Taktzeit, meaning 'cycle time'. The word was likely introduced to Japan by German engineers in the 1930s. The word originates from the Latin word "tactus" meaning "touch, sense of touch, feeling". Some earlier meanings include: (16th century) "beat triggered by regular contact, clock beat", then in music "beat indicating the rhythm" and (18th century) "regular unit of note values". History Takt time has played an important role in production systems even before the industrial revolution, from 16th-century shipbuilding in Venice and mass production of the Model T by Henry Ford to the synchronization of airframe movement in the German aviation industry, and many more. Cooperation between the German aviation industry and Mitsubishi brought takt to Japan, where Toyota incorporated it into the Toyota Production System (TPS). James P. Womack and Daniel T. Jones in The Machine T
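The arithmetic in the example above is simply available time divided by demand; a small sketch (the 4-hour downtime figure is invented to show how the interval shrinks):

```python
def takt_time_hours(available_hours, demand_units):
    """Average interval between unit starts needed to meet demand."""
    return available_hours / demand_units

# The example above: 10 units demanded in a 40-hour week -> 4.0 hours per unit.
print(takt_time_hours(available_hours=40.0, demand_units=10))

# Subtracting downtime and breaks (illustrative 4 hours) tightens the takt.
print(takt_time_hours(available_hours=40.0 - 4.0, demand_units=10))   # 3.6
```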
https://en.wikipedia.org/wiki/Rocker%20box
A rocker box (also known as a cradle) is a gold mining implement for separating alluvial placer gold from sand and gravel which was used in placer mining in the 19th century. It consists of a high-sided box, which is open on one end and on top, and was placed on rockers. The inside bottom of the box is lined with riffles and usually a carpet (called Miner's Moss) similar to a sluice box. On top of the box is a classifier sieve (usually with half-inch or quarter-inch openings) which screens out larger pieces of rock and other material, allowing only finer sand and gravel through. Between the sieve and the lower sluice section is a baffle, which acts as another trap for fine gold and also ensures that the aggregate material being processed is evenly distributed before it enters the sluice section. It sits at an angle and points towards the closed back of the box. Traditionally, the baffle consisted of a flexible apron made of canvas or a similar material, which had a sag of about an inch and a half in the center to act as a collection pocket for fine gold. Later rockers (including most modern ones) dispensed with the flexible apron and used a pair of solid wood or metal baffle boards. These are sometimes covered with carpet to trap fine gold. The entire device sits on rockers at a slight gradient, which allows it to be rocked side to side. Today, the rocker box is not used as extensively as the sluice, but it is still an effective method of recovering gold in areas where there is not enough available water to operate a sluice effectively. Like a sluice box, the rocker box has riffles and a carpet in it to trap gold. It was designed to be used in areas with less water than a sluice box requires. The mineral processing involves pouring water out of a small cup and then rocking the small sluice box like a cradle, thus the name rocker box or cradle. Rocker boxes must be manipulated carefully, to prevent losing the gold. Although big and difficult to move, the rocker can pic
https://en.wikipedia.org/wiki/Flux%20balance%20analysis
Flux balance analysis (FBA) is a mathematical method for simulating metabolism in genome-scale reconstructions of metabolic networks. In comparison to traditional methods of modeling, FBA is less intensive in terms of the input data required for constructing the model. Simulations performed using FBA are computationally inexpensive and can calculate steady-state metabolic fluxes for large models (over 2000 reactions) in a few seconds on modern personal computers. The related method of metabolic pathway analysis seeks to find and list all possible pathways between metabolites. FBA finds applications in bioprocess engineering to systematically identify modifications to the metabolic networks of microbes used in fermentation processes that improve product yields of industrially important chemicals such as ethanol and succinic acid. It has also been used for the identification of putative drug targets in cancer and pathogens, rational design of culture media, and host–pathogen interactions. The results of FBA can be visualized using flux maps, diagrams in which the thickness of each arrow is proportional to the steady-state flux carried by the corresponding reaction (for example, the fluxes through the reactions of glycolysis). FBA formalizes the system of equations describing the concentration changes in a metabolic network as the product of the stoichiometric matrix S (the matrix of stoichiometric coefficients) and the vector v of the unsolved fluxes, set equal to a vector of zeros representing the system at steady state. Linear programming is then used to calculate a solution of fluxes corresponding to the steady state. History Some of the earliest work in FBA dates back to the early 1980s. Papoutsakis demonstrated that it was possible to construct flux balance equations using a metabolic map. It was Watson, however, who first introduced the idea of using linear programming and an objective function to solve for the fluxes in
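The optimization described above is an ordinary linear program; the toy three-reaction network below (invented for illustration, solved with scipy's linprog) maximizes the export flux v3 subject to S·v = 0 and an uptake cap on v1:

```python
import numpy as np
from scipy.optimize import linprog

# Reactions: v1: -> A (uptake, capped at 10), v2: A -> B, v3: B -> (objective)
S = np.array([[1, -1,  0],      # metabolite A balance
              [0,  1, -1]])     # metabolite B balance

c = [0, 0, -1]                  # linprog minimizes, so maximize v3 via -v3
bounds = [(0, 10), (0, None), (0, None)]
res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)

print(res.x)                    # steady-state flux distribution: [10. 10. 10.]
print(-res.fun)                 # optimal objective flux: 10.0
```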
https://en.wikipedia.org/wiki/Filopodia
Filopodia (: filopodium) are slender cytoplasmic projections that extend beyond the leading edge of lamellipodia in migrating cells. Within the lamellipodium, actin ribs are known as microspikes, and when they extend beyond the lamellipodia, they're known as filopodia. They contain microfilaments (also called actin filaments) cross-linked into bundles by actin-bundling proteins, such as fascin and fimbrin. Filopodia form focal adhesions with the substratum, linking them to the cell surface. Many types of migrating cells display filopodia, which are thought to be involved in both sensation of chemotropic cues, and resulting changes in directed locomotion. Activation of the Rho family of GTPases, particularly cdc42 and their downstream intermediates, results in the polymerization of actin fibers by Ena/Vasp homology proteins. Growth factors bind to receptor tyrosine kinases resulting in the polymerization of actin filaments, which, when cross-linked, make up the supporting cytoskeletal elements of filopodia. Rho activity also results in activation by phosphorylation of ezrin-moesin-radixin family proteins that link actin filaments to the filopodia membrane. Filopodia have roles in sensing, migration, neurite outgrowth, and cell-cell interaction. To close a wound in vertebrates, growth factors stimulate the formation of filopodia in fibroblasts to direct fibroblast migration and wound closure. In macrophages, filopodia act as phagocytic tentacles, pulling bound objects towards the cell for phagocytosis. In infections Filopodia are also used for movement of bacteria between cells, so as to evade the host immune system. The intracellular bacteria Ehrlichia are transported between cells through the host cell filopodia induced by the pathogen during initial stages of infection. Filopodia are the initial contact that human retinal pigment epithelial (RPE) cells make with elementary bodies of Chlamydia trachomatis, the bacteria that causes Chlamydia. Viruses have been
https://en.wikipedia.org/wiki/EN%2014214
EN 14214 is a standard published by the European Committee for Standardization that describes the requirements and test methods for FAME - the most common type of biodiesel. The technical definition of biodiesel is a fuel suitable for use in compression ignition (diesel) engines that is made of fatty acid monoalkyl esters derived from biologically produced oils or fats including vegetable oils, animal fats and microalgal oils. When biodiesel is produced from these types of oil using methanol fatty acid methyl esters (FAME) are produced. Biodiesel fuels can also be produced using other alcohols, for example using ethanol to produce fatty acid ethyl esters, however these types of biodiesel are not covered by EN 14214 which applies only to methyl esters i.e. biodiesel produced using methanol. This European Standard exists in three official versions - English, French, German. The current version of the standard was published in November 2008 and supersedes EN 14214:2003. Differences exist between the national versions of the EN 14214 standard. These differences relate to cold weather requirements and are detailed in the national annex of each standard. It is broadly based on the earlier German standard DIN 51606. The ASTM and EN standards both recommend very similar methods for the GC based analyses. Blends are designated as "B" followed by a number indicating the percentage biodiesel. For example: B100 is pure biodiesel. B99 is 99% biodiesel, 1% petrodiesel. B20 is 20% biodiesel and 80% fossil diesel. Specifications See also ASTM D6751 — the standard used in USA and Canada EN EN 590 List of EN standards References External links CEN homepage Country specific CFPP requirements according to national annexes of EN 14214 14214 Biodiesel
https://en.wikipedia.org/wiki/AVR32
AVR32 is a 32-bit RISC microcontroller architecture produced by Atmel. The microcontroller architecture was designed by a handful of people educated at the Norwegian University of Science and Technology, including lead designer Øyvind Strøm and CPU architect Erik Renno in Atmel's Norwegian design center. Most instructions are executed in a single-cycle. The multiply–accumulate unit can perform a 32-bit × 16-bit + 48-bit arithmetic operation in two cycles (result latency), issued once per cycle. It does not resemble the 8-bit AVR microcontroller family, even though they were both designed at Atmel Norway, in Trondheim. Some of the debug-tools are similar. Support for AVR32 has been dropped from Linux as of kernel 4.12; Atmel has switched mostly to M variants of the ARM architecture. Architecture The AVR32 has at least two micro-architectures, the AVR32A and AVR32B. These differ in the instruction set architecture, register configurations and the use of caches for instructions and data. The AVR32A CPU cores are for inexpensive applications. They do not provide dedicated hardware registers for shadowing the register file, status and return address in interrupts. This saves chip area at the expense of slower interrupt-handling. The AVR32B CPU cores are designed for fast interrupts. They have dedicated registers to hold these values for interrupts, exceptions and supervisor calls. The AVR32B cores also support a Java virtual machine in hardware. The AVR32 instruction set has 16-bit (compact) and 32-bit (extended) instructions, similar to e.g. some ARM, with several specialized instructions not found in older ARMv5 or ARMv6 or MIPS32. Several U.S. patents are filed for the AVR32 ISA and design platform. Just like the AVR 8-bit microcontroller architecture, the AVR32 was designed for high code density (packing much function in few instructions) and fast instructions with few clock cycles. Atmel used the independent benchmark consortium EEMBC to benchmark the a
https://en.wikipedia.org/wiki/Luis%20Castiglioni
Luis Alberto Castiglioni Soria (born 31 July 1962) is a Paraguayan politician. He was Vice President of Paraguay for the Colorado Party from 2003 to 2007. Career Castiglioni was born in Itacurubí del Rosario and obtained a qualification in civil engineering from the Catholic University of Asunción. His national political career began in 1984 as leader of Colorado party's juvenile wing. In 2003 Nicanor Duarte chose him as his running mate in the 2003 presidential election. Castiglioni served as Vice President of Paraguay from 15 August 2003 to October 2007, when he resigned in order to pursue the presidency. He was a candidate for the Colorado Party's nomination in the April 2008 presidential election. Initial results in the December 2007 party primary election showed rival candidate Blanca Ovelar, who is backed by President Nicanor Duarte, narrowly defeating Castiglioni; however, the result was disputed, leading to a recount. On 21 January 2008, the Colorado Party electoral commission announced that Ovelar had won with 45.04% of the vote against 44.5% for Castiglioni. Castiglioni said that he would never accept defeat, claiming to have proof that 30,000 votes in his favor were "stolen", and said that he would take the matter to court. References 1962 births Living people People from San Pedro Department, Paraguay Paraguayan people of Italian descent Colorado Party (Paraguay) politicians Vice presidents of Paraguay Foreign Ministers of Paraguay Government ministers of Paraguay Members of the Chamber of Deputies of Paraguay Members of the Senate of Paraguay Paraguayan engineers Universidad Católica Nuestra Señora de la Asunción alumni
https://en.wikipedia.org/wiki/Nuclear%20gene
A nuclear gene is a gene whose physical DNA nucleotide sequence is located in the cell nucleus of a eukaryote. The term is used to distinguish nuclear genes from genes found in mitochondria or chloroplasts. The vast majority of genes in eukaryotes are nuclear. Endosymbiotic theory Mitochondria and plastids evolved from free-living prokaryotes into current cytoplasmic organelles through endosymbiotic evolution. Mitochondria are thought to be necessary for eukaryotic life to exist. They are known as the cell's powerhouses because they provide the majority of the energy, or ATP, required by the cell. The mitochondrial genome (mtDNA) is replicated separately from the host genome. Human mtDNA codes for 13 proteins, most of which are involved in oxidative phosphorylation (OXPHOS). The nuclear genome encodes the remaining mitochondrial proteins, which are then transported into the mitochondria. The genomes of these organelles have become far smaller than those of their free-living predecessors. This is mostly due to the widespread transfer of genes from the prokaryote progenitors to the nuclear genome, followed by their elimination from the organelle genomes. Over evolutionary timescales, the continuous entry of organelle DNA into the nucleus has provided novel nuclear genes. Endosymbiotic organelle interactions Though separated from one another within the cell, nuclear genes and those of mitochondria and chloroplasts can affect each other in a number of ways. Nuclear genes play major roles in the expression of chloroplast genes and mitochondrial genes. Additionally, gene products of mitochondria can themselves affect the expression of genes within the cell nucleus. This can be done through metabolites as well as through certain peptides translocating from the mitochondria to the nucleus, where they can then affect gene expression. Structure Eukaryotic genomes have distinct higher-order chromatin structures that are closely packaged and functionally related to gene expression. Chroma
https://en.wikipedia.org/wiki/How%20to%20Solve%20it%20by%20Computer
How to Solve it by Computer is a computer science book by R. G. Dromey, first published by Prentice-Hall in 1982. It is occasionally used as a textbook, especially in India. It is an introduction to the whys of algorithms and data structures. Features of the book include: the design factors associated with problems; the creative process behind coming up with innovative solutions for algorithms and data structures; and the line of reasoning behind the constraints, factors and the design choices made. The fundamental algorithms presented in the book are mostly given in pseudocode and/or Pascal notation. See also How to Solve It, by George Pólya, the author's mentor and inspiration for writing the book. References 1982 non-fiction books Algorithms Computer science books Heuristics Problem solving Prentice Hall books
https://en.wikipedia.org/wiki/Ogden%27s%20lemma
In the theory of formal languages, Ogden's lemma (named after William F. Ogden) is a generalization of the pumping lemma for context-free languages. Statement We will use underlines to indicate "marked" positions. Special cases Ogden's lemma is often stated in the following form, which can be obtained by "forgetting about" the grammar and concentrating on the language itself: If a language L is context-free, then there exists some number p ≥ 1 (where p may or may not be a pumping length) such that for any string s of length at least p in L and every way of "marking" p or more of the positions in s, s can be written as s = uvwxy with strings u, v, w, x and y, such that vx has at least one marked position, vwx has at most p marked positions, and uv^n wx^n y is in L for all n ≥ 0. In the special case where every position is marked, Ogden's lemma is equivalent to the pumping lemma for context-free languages. Ogden's lemma can be used to show that certain languages are not context-free in cases where the pumping lemma is not sufficient. An example is the language {a^i b^j c^k d^l : i = 0 or j = k = l}. Example applications Non-context-freeness The special case of Ogden's lemma is often sufficient to prove that some languages are not context-free. For example, {a^n b^n c^n : n ≥ 1} is a standard example of a non-context-free language (p. 128). Similarly, one can prove the "copy twice" language is not context-free by using Ogden's lemma on . And the example given in the last section is not context-free by using Ogden's lemma on . Inherent ambiguity Ogden's lemma can be used to prove the inherent ambiguity of some languages, as implied by the title of Ogden's paper. Example: Let . The language is inherently ambiguous. (Example from page 3 of Ogden's paper.) Similarly, is inherently ambiguous, and for any CFG of the language, letting p be the constant for Ogden's lemma, we find that has at least different parses. Thus has an unbounded degree of inherent ambiguity. Undecidability The proof can be extended to show that deciding whether a CFG is inherently ambiguous is undecida
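To make the special case with every position marked concrete, the LaTeX fragment below sketches the standard argument that {a^n b^n c^n : n ≥ 1} is not context-free. The variable names follow the statement above; this is a sketch of the usual textbook argument, not Ogden's original proof.

    \documentclass{article}
    \usepackage{amsmath}
    \begin{document}
    Let $L = \{a^n b^n c^n : n \ge 1\}$ and suppose $L$ is context-free with constant $p$
    as in Ogden's lemma. Take $s = a^p b^p c^p \in L$ and mark every position, so the
    lemma reduces to the ordinary pumping lemma: $s = uvwxy$ with $|vx| \ge 1$,
    $|vwx| \le p$, and $u v^n w x^n y \in L$ for all $n \ge 0$.
    Since $|vwx| \le p$, the substring $vwx$ cannot contain both an $a$ and a $c$,
    so at least one of the three symbol counts is unchanged by pumping.
    Pumping up to $u v^2 w x^2 y$ strictly increases the length and hence some other
    symbol count, so the three counts are no longer all equal and the resulting string
    is not in $L$. This contradiction shows that $L$ is not context-free.
    \end{document}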
https://en.wikipedia.org/wiki/Bounce%20Address%20Tag%20Validation
In computing, Bounce Address Tag Validation (BATV) is a method, defined in an Internet Draft, for determining whether the bounce address specified in an e-mail message is valid. It is designed to reject backscatter, that is, bounce messages sent to forged return addresses. Overview The basic idea is to send all e-mail with a return address that includes a timestamp and a cryptographic token that cannot be forged. Any e-mail that is returned as a bounce without a valid signature can then be rejected. E-mail that is being bounced back should have an empty (null) return address so that bounces are never created for a bounce, thereby preventing messages from bouncing back and forth forever. BATV replaces an envelope sender like mailbox@example.com with prvs=tag-value=mailbox@example.com, where prvs, called "Simple Private Signature", is just one of the possible tagging schemes; it is actually the only one fully specified in the draft. The BATV draft gives a framework that other possible techniques can fit into. Other types of implementations, such as using public key signatures that can be verified by third parties, are mentioned but left undefined. The overall framework is loose and flexible enough that similar systems such as the Sender Rewriting Scheme can fit into it. History Sami Farin proposed an Anti-Bogus Bounce System in 2003 in news.admin.net-abuse.email, which used the same basic idea of putting a hard-to-forge hash in a message's bounce address. In late 2004, Goodman et al. proposed a much more complex "Signed Envelope Sender" that included a hash of the message body and was intended to address a wide variety of forgery threats, including bounces from forged mail. Several months later, Levine and Crocker proposed BATV under its current name and in close to its current form. Problems The draft anticipates some problems with running BATV. Some mailing list managers (e.g. ezmlm) still key on the bounce address, and will not recognize it after BATV mangling.
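The tagging and validation steps can be pictured with a short sketch. The Python below is a simplified illustration only: the exact prvs tag layout, key management and expiry rules in the BATV draft differ, and the secret key, field widths and seven-day window used here are assumptions made for the example.

    # Simplified BATV-style bounce-address tagging (illustrative; not the draft's exact prvs format).
    import hashlib
    import hmac
    import time

    SECRET = b"site-local secret key"  # hypothetical per-site key, kept private

    def sign_address(local_part, domain, day=None):
        """Return a tagged envelope sender such as 'prvs=DDDSSSSSS=mailbox@example.com'."""
        if day is None:
            day = int(time.time() // 86400) % 1000  # coarse expiry stamp (day number mod 1000)
        mac = hmac.new(SECRET, f"{day:03d}{local_part}@{domain}".encode(), hashlib.sha1)
        sig = mac.hexdigest()[:6]  # truncated signature
        return f"prvs={day:03d}{sig}={local_part}@{domain}"

    def verify_bounce_recipient(address, max_age_days=7):
        """Check whether a bounce addressed to `address` carries a valid, unexpired tag."""
        if not address.startswith("prvs="):
            return False  # untagged recipient: this bounce is not for mail we signed
        tag, _, rest = address[len("prvs="):].partition("=")
        if len(tag) != 9 or not tag[:3].isdigit():
            return False
        day, sig = int(tag[:3]), tag[3:]
        local_part, _, domain = rest.partition("@")
        expected = hmac.new(SECRET, f"{day:03d}{local_part}@{domain}".encode(),
                            hashlib.sha1).hexdigest()[:6]
        age = (int(time.time() // 86400) % 1000 - day) % 1000
        return hmac.compare_digest(sig, expected) and age <= max_age_days

    # Example: outgoing mail uses the tagged sender; an incoming bounce to it verifies,
    # while a bounce to the bare address is rejected as backscatter.
    tagged = sign_address("mailbox", "example.com")
    assert verify_bounce_recipient(tagged)
    assert not verify_bounce_recipient("mailbox@example.com")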