https://en.wikipedia.org/wiki/Bertrand%20Russell
Bertrand Arthur William Russell, 3rd Earl Russell (18 May 1872 – 2 February 1970) was a British mathematician, philosopher, logician, and public intellectual. He had a considerable influence on mathematics, logic, set theory, linguistics, artificial intelligence, cognitive science, computer science, and various areas of analytic philosophy, especially philosophy of mathematics, philosophy of language, epistemology, and metaphysics. He was one of the early 20th century's most prominent logicians and a founder of analytic philosophy, along with his predecessor Gottlob Frege, his friend and colleague G. E. Moore, and his student and protégé Ludwig Wittgenstein. Russell and Moore led the British "revolt against idealism". Together with his former teacher A. N. Whitehead, Russell wrote Principia Mathematica, a milestone in the development of classical logic and a major attempt to reduce the whole of mathematics to logic (see Logicism). Russell's article "On Denoting" has been considered a "paradigm of philosophy". Russell was a pacifist who championed anti-imperialism and chaired the India League. He went to prison for his pacifism during World War I, but also saw the war against Adolf Hitler's Nazi Germany as a necessary "lesser of two evils". In the wake of World War II, he welcomed American global hegemony in preference to either Soviet hegemony or no (or ineffective) world leadership, even if it were to come at the cost of America using its nuclear weapons. He would later criticise Stalinist totalitarianism, condemn the United States' involvement in the Vietnam War, and become an outspoken proponent of nuclear disarmament. In 1950, Russell was awarded the Nobel Prize in Literature "in recognition of his varied and significant writings in which he champions humanitarian ideals and freedom of thought". He was also the recipient of the De Morgan Medal (1932), Sylvester Medal (1934), Kalinga Prize (1957), and Jerusalem Prize (1963).
https://en.wikipedia.org/wiki/Botany
Botany, also called plant science (or plant sciences), plant biology or phytology, is the science of plant life and a branch of biology. A botanist, plant scientist or phytologist is a scientist who specialises in this field. The term "botany" comes from the Ancient Greek word βοτάνη (botanē) meaning "pasture", "herbs", "grass", or "fodder"; βοτάνη is in turn derived from βόσκειν (boskein), "to feed" or "to graze". Traditionally, botany has also included the study of fungi and algae by mycologists and phycologists respectively, with the study of these three groups of organisms remaining within the sphere of interest of the International Botanical Congress. Nowadays, botanists (in the strict sense) study approximately 410,000 species of land plants of which some 391,000 species are vascular plants (including approximately 369,000 species of flowering plants), and approximately 20,000 are bryophytes. Botany originated in prehistory as herbalism with the efforts of early humans to identify – and later cultivate – plants that were edible, poisonous, and possibly medicinal, making it one of the first endeavours of human investigation. Medieval physic gardens, often attached to monasteries, contained plants possibly having medicinal benefit. They were forerunners of the first botanical gardens attached to universities, founded from the 1540s onwards. One of the earliest was the Padua botanical garden. These gardens facilitated the academic study of plants. Efforts to catalogue and describe their collections were the beginnings of plant taxonomy, and led in 1753 to the binomial system of nomenclature of Carl Linnaeus that remains in use to this day for the naming of all biological species. In the 19th and 20th centuries, new techniques were developed for the study of plants, including methods of optical microscopy and live cell imaging, electron microscopy, analysis of chromosome number, plant chemistry and the structure and function of enzymes and other proteins. In the last two decades of the 20th ce
https://en.wikipedia.org/wiki/Bactericide
A bactericide or bacteriocide, sometimes abbreviated Bcidal, is a substance which kills bacteria. Bactericides are disinfectants, antiseptics, or antibiotics. However, material surfaces can also have bactericidal properties based solely on their physical surface structure, as for example biomaterials like insect wings. Disinfectants: The most used disinfectants are those applying:
- active chlorine (i.e., hypochlorites, chloramines, dichloroisocyanurate and trichloroisocyanurate, wet chlorine, chlorine dioxide, etc.);
- active oxygen (peroxides, such as peracetic acid, potassium persulfate, sodium perborate, sodium percarbonate, and urea perhydrate);
- iodine (povidone-iodine, Lugol's solution, iodine tincture, iodinated nonionic surfactants);
- concentrated alcohols (mainly ethanol, 1-propanol, also called n-propanol, and 2-propanol, called isopropanol, and mixtures thereof; further, 2-phenoxyethanol and 1- and 2-phenoxypropanols are used);
- phenolic substances (such as phenol, also called "carbolic acid", cresols such as thymol, and halogenated (chlorinated, brominated) phenols such as hexachlorophene, triclosan, trichlorophenol, tribromophenol, pentachlorophenol, and salts and isomers thereof);
- cationic surfactants, such as some quaternary ammonium cations (benzalkonium chloride, cetyl trimethylammonium bromide or chloride, didecyldimethylammonium chloride, cetylpyridinium chloride, benzethonium chloride) and others, and non-quaternary compounds such as chlorhexidine, glucoprotamine, and octenidine dihydrochloride;
- strong oxidizers, such as ozone and permanganate solutions;
- heavy metals and their salts, such as colloidal silver, silver nitrate, mercury chloride, phenylmercury salts, copper sulfate, and copper oxide-chloride. Heavy metals and their salts are the most toxic and environment-hazardous bactericides, and therefore their use is strongly discouraged or prohibited;
- strong acids (phosphoric, nitric, sulfuric, amidosulfuric, toluenesulfonic acids), pH < 1, and alkali
https://en.wikipedia.org/wiki/Bipedalism
Bipedalism is a form of terrestrial locomotion where a tetrapod moves by means of its two rear (or lower) limbs or legs. An animal or machine that usually moves in a bipedal manner is known as a biped, meaning 'two feet' (from Latin bis 'double' and pes 'foot'). Types of bipedal movement include walking or running (a bipedal gait) and hopping. Several groups of modern species are habitual bipeds whose normal method of locomotion is two-legged. In the Triassic period some groups of archosaurs (a group that includes crocodiles and dinosaurs) developed bipedalism; among the dinosaurs, all the early forms and many later groups were habitual or exclusive bipeds; the birds are members of a clade of exclusively bipedal dinosaurs, the theropods. Within mammals, habitual bipedalism has evolved multiple times, with the macropods, kangaroo rats and mice, springhare, hopping mice, pangolins and hominin apes (australopithecines, including humans) as well as various other extinct groups evolving the trait independently. A larger number of modern species intermittently or briefly use a bipedal gait. Several lizard species move bipedally when running, usually to escape from threats. Many primate and bear species will adopt a bipedal gait in order to reach food or explore their environment, though there are a few cases where they walk on their hind limbs only. Several arboreal primate species, such as gibbons and indriids, exclusively walk on two legs during the brief periods they spend on the ground. Many animals rear up on their hind legs while fighting or copulating. Some animals commonly stand on their hind legs to reach food, keep watch, threaten a competitor or predator, or pose in courtship, but do not move bipedally. Etymology: The word is derived from the Latin words bi(s) 'two' and ped- 'foot', as contrasted with quadruped 'four feet'. Advantages: Limited and exclusive bipedalism can offer a species several advantages. Bipedalism raises the head; this allows a greater f
https://en.wikipedia.org/wiki/Bioinformatics
Bioinformatics is an interdisciplinary field of science that develops methods and software tools for understanding biological data, especially when the data sets are large and complex. Bioinformatics uses biology, chemistry, physics, computer science, computer programming, information engineering, mathematics and statistics to analyze and interpret biological data. The subsequent process of analyzing and interpreting data is referred to as computational biology. Computational, statistical, and computer programming techniques have been used for computer simulation analyses of biological queries. They include reused specific analysis "pipelines", particularly in the field of genomics, such as the identification of genes and single nucleotide polymorphisms (SNPs). These pipelines are used to better understand the genetic basis of disease, unique adaptations, desirable properties (esp. in agricultural species), or differences between populations. Bioinformatics also includes proteomics, which tries to understand the organizational principles within nucleic acid and protein sequences. Image and signal processing allow extraction of useful results from large amounts of raw data. In the field of genetics, it aids in sequencing and annotating genomes and their observed mutations. Bioinformatics includes text mining of biological literature and the development of biological and gene ontologies to organize and query biological data. It also plays a role in the analysis of gene and protein expression and regulation. Bioinformatics tools aid in comparing, analyzing and interpreting genetic and genomic data and more generally in the understanding of evolutionary aspects of molecular biology. At a more integrative level, it helps analyze and catalogue the biological pathways and networks that are an important part of systems biology. In structural biology, it aids in the simulation and modeling of DNA, RNA, proteins as well as biomolecular interactions.
https://en.wikipedia.org/wiki/Cell%20%28biology%29
The cell is the basic structural and functional unit of all forms of life. Every cell consists of cytoplasm enclosed within a membrane, and contains many macromolecules such as proteins, DNA and RNA, as well as many small molecules of nutrients and metabolites. The term comes from the Latin word cellula, meaning 'small room'. Cells can acquire specialized functions and carry out various tasks such as replication, DNA repair, protein synthesis, and motility. Most plant and animal cells are only visible under a light microscope, with dimensions between 1 and 100 micrometres. Electron microscopy gives a much higher resolution showing greatly detailed cell structure. Organisms can be classified as unicellular (consisting of a single cell such as bacteria) or multicellular (including plants and animals). Most unicellular organisms are classed as microorganisms. The study of cells and how they work has led to many other studies in related areas of biology, including: discovery of DNA, cancer systems biology, aging and developmental biology. Cell biology is the study of cells, which were discovered by Robert Hooke in 1665, who named them for their resemblance to cells inhabited by Christian monks in a monastery. Cell theory, first developed in 1839 by Matthias Jakob Schleiden and Theodor Schwann, states that all organisms are composed of one or more cells, that cells are the fundamental unit of structure and function in all living organisms, and that all cells come from pre-existing cells. Cells emerged on Earth about 4 billion years ago. Discovery: With continual improvements made to microscopes over time, magnification technology became advanced enough to discover cells. This discovery is largely attributed to Robert Hooke, and began the scientific study of cells, known as cell biology. When observing a piece of cork under the microscope, he was able to see pores. This was shocking at the time as i
https://en.wikipedia.org/wiki/Dots%20and%20boxes
Dots and boxes is a pencil-and-paper game for two players (sometimes more). It was first published in the 19th century by French mathematician Édouard Lucas, who called it la pipopipette. It has gone by many other names, including dots and dashes, game of dots, dot to dot grid, boxes, and pigs in a pen. The game starts with an empty grid of dots. Usually two players take turns adding a single horizontal or vertical line between two unjoined adjacent dots. A player who completes the fourth side of a 1×1 box earns one point and takes another turn. A point is typically recorded by placing a mark that identifies the player in the box, such as an initial. The game ends when no more lines can be placed. The winner is the player with the most points. The board may be a grid of any size. When short on time, or to learn the game, a 2×2 board (3×3 dots) is suitable. A 5×5 board, on the other hand, is good for experts. Strategy: For most novice players, the game begins with a phase of more-or-less randomly connecting dots, where the only strategy is to avoid adding the third side to any box. This continues until all the remaining (potential) boxes are joined together into chains – groups of one or more adjacent boxes in which any move gives all the boxes in the chain to the opponent. At this point, players typically take all available boxes, then open the smallest available chain to their opponent. For example, a novice player faced with a situation like position 1 in the diagram on the right, in which some boxes can be captured, may take all the boxes in the chain, resulting in position 2. But with their last move, they have to open the next, larger chain, and the novice loses the game. A more experienced player faced with position 1 will instead play the double-cross strategy, taking all but 2 of the boxes in the chain and leaving position 3. The opponent will take these two boxes and then be forced to open the next chain. By achieving position 3, player A wins. The same double-cros
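To make the box-scoring rule concrete, here is a minimal C sketch (not from any published implementation): it tracks the drawn horizontal and vertical edges of a 2×2-box board (3×3 dots) and tests whether a given box has all four sides, the condition under which a player scores and moves again. All names and the board size are illustrative.

```c
/* Minimal dots-and-boxes bookkeeping for a 2x2-box board (3x3 dots).
 * h[r][c] is the horizontal edge above box row r in box column c;
 * v[r][c] is the vertical edge to the left of box column c in row r. */
#include <stdio.h>
#include <stdbool.h>

#define N 2                      /* boxes per side */

bool h[N + 1][N];                /* horizontal edges */
bool v[N][N + 1];                /* vertical edges   */

/* A box is complete when all four of its sides have been drawn. */
bool box_complete(int r, int c) {
    return h[r][c] && h[r + 1][c] && v[r][c] && v[r][c + 1];
}

int main(void) {
    /* Draw the four sides of the top-left box; the player who adds
     * the fourth side would score a point and take another turn. */
    h[0][0] = h[1][0] = v[0][0] = v[0][1] = true;
    printf("top-left box complete: %s\n", box_complete(0, 0) ? "yes" : "no");
    return 0;
}
```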
https://en.wikipedia.org/wiki/Broadcast%20domain
A broadcast domain is a logical division of a computer network, in which all nodes can reach each other by broadcast at the data link layer. A broadcast domain can be within the same LAN segment or it can be bridged to other LAN segments. In terms of current popular technologies, any computer connected to the same Ethernet repeater or switch is a member of the same broadcast domain. Further, any computer connected to the same set of interconnected switches/repeaters is a member of the same broadcast domain. Routers and other higher-layer devices form boundaries between broadcast domains. The notion of broadcast domain should be contrasted with that of collision domain, which would be all nodes on the same set of inter-connected repeaters, divided by switches and learning bridges. Collision domains are generally smaller than, and contained within, broadcast domains. While some data-link-layer devices are able to divide the collision domains, broadcast domains are only divided by layer 3 network devices such as routers or layer 3 switches. Separating VLANs divides broadcast domains as well. Further explanation The distinction between broadcast and collision domains comes about because simple Ethernet and similar systems use a shared transmission system. In simple Ethernet (without switches or bridges), data frames are transmitted to all other nodes on a network. Each receiving node checks the destination address of each frame, and simply ignores any frame not addressed to its own MAC address or the broadcast address. Switches act as buffers, receiving and analyzing the frames from each connected network segment. Frames destined for nodes connected to the originating segment are not forwarded by the switch. Frames destined for a specific node on a different segment are sent only to that segment. Only broadcast frames are forwarded to all other segments. This reduces unnecessary traffic and collisions. In such a switched network, transmitted frames may not be re
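The membership rule described above (repeaters and switches join nodes into one broadcast domain, while routers do not forward broadcasts and so separate them) can be computed with a union-find pass over the layer-2 links. The following C sketch uses a hypothetical six-node topology; node and domain numbering are arbitrary assumptions, not any standard notation.

```c
/* Sketch: grouping nodes into broadcast domains from layer-2 links.
 * Only links through repeaters/switches are listed; a router port
 * bounds the domain, so it never appears as a link here. */
#include <stdio.h>

#define NODES 6
int parent[NODES];

int find(int x) {                          /* union-find root lookup */
    while (parent[x] != x) x = parent[x] = parent[parent[x]];
    return x;
}
void unite(int a, int b) { parent[find(a)] = find(b); }

int main(void) {
    for (int i = 0; i < NODES; i++) parent[i] = i;

    /* Hypothetical topology: 0-1-2 share a switch, 3-4 share another;
     * node 5 sits on the far side of a router, so no L2 link reaches it. */
    unite(0, 1); unite(1, 2); unite(3, 4);

    for (int i = 0; i < NODES; i++)
        printf("node %d is in broadcast domain %d\n", i, find(i));
    return 0;
}
```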
https://en.wikipedia.org/wiki/Biconditional%20introduction
In propositional logic, biconditional introduction is a valid rule of inference. It allows for one to infer a biconditional from two conditional statements. The rule makes it possible to introduce a biconditional statement into a logical proof. If $P \to Q$ is true, and if $Q \to P$ is true, then one may infer that $P \leftrightarrow Q$ is true. For example, from the statements "if I'm breathing, then I'm alive" and "if I'm alive, then I'm breathing", it can be inferred that "I'm breathing if and only if I'm alive". Biconditional introduction is the converse of biconditional elimination. The rule can be stated formally as: $\dfrac{P \to Q,\; Q \to P}{\therefore P \leftrightarrow Q}$ where the rule is that wherever instances of "$P \to Q$" and "$Q \to P$" appear on lines of a proof, "$P \leftrightarrow Q$" can validly be placed on a subsequent line. Formal notation: The biconditional introduction rule may be written in sequent notation: $(P \to Q), (Q \to P) \vdash (P \leftrightarrow Q)$ where $\vdash$ is a metalogical symbol meaning that $P \leftrightarrow Q$ is a syntactic consequence when $P \to Q$ and $Q \to P$ are both in a proof; or as the statement of a truth-functional tautology or theorem of propositional logic: $((P \to Q) \land (Q \to P)) \to (P \leftrightarrow Q)$ where $P$ and $Q$ are propositions expressed in some formal system.
https://en.wikipedia.org/wiki/Biconditional%20elimination
Biconditional elimination is the name of two valid rules of inference of propositional logic. It allows for one to infer a conditional from a biconditional. If $P \leftrightarrow Q$ is true, then one may infer that $P \to Q$ is true, and also that $Q \to P$ is true. For example, if it's true that I'm breathing if and only if I'm alive, then it's true that if I'm breathing, I'm alive; likewise, it's true that if I'm alive, I'm breathing. The rules can be stated formally as: $\dfrac{P \leftrightarrow Q}{\therefore P \to Q}$ and $\dfrac{P \leftrightarrow Q}{\therefore Q \to P}$ where the rule is that wherever an instance of "$P \leftrightarrow Q$" appears on a line of a proof, either "$P \to Q$" or "$Q \to P$" can be placed on a subsequent line. Formal notation: The biconditional elimination rule may be written in sequent notation: $(P \leftrightarrow Q) \vdash (P \to Q)$ and $(P \leftrightarrow Q) \vdash (Q \to P)$ where $\vdash$ is a metalogical symbol meaning that $P \to Q$, in the first case, and $Q \to P$ in the other are syntactic consequences of $P \leftrightarrow Q$ in some logical system; or as the statement of a truth-functional tautology or theorem of propositional logic: $(P \leftrightarrow Q) \to (P \to Q)$ and $(P \leftrightarrow Q) \to (Q \to P)$ where $P$ and $Q$ are propositions expressed in some formal system. See also Logical biconditional
https://en.wikipedia.org/wiki/Base%20pair
A base pair (bp) is a fundamental unit of double-stranded nucleic acids consisting of two nucleobases bound to each other by hydrogen bonds. They form the building blocks of the DNA double helix and contribute to the folded structure of both DNA and RNA. Dictated by specific hydrogen bonding patterns, "Watson–Crick" (or "Watson–Crick–Franklin") base pairs (guanine–cytosine and adenine–thymine) allow the DNA helix to maintain a regular helical structure that is subtly dependent on its nucleotide sequence. The complementary nature of this base-paired structure provides a redundant copy of the genetic information encoded within each strand of DNA. The regular structure and data redundancy provided by the DNA double helix make DNA well suited to the storage of genetic information, while base-pairing between DNA and incoming nucleotides provides the mechanism through which DNA polymerase replicates DNA and RNA polymerase transcribes DNA into RNA. Many DNA-binding proteins can recognize specific base-pairing patterns that identify particular regulatory regions of genes. Intramolecular base pairs can occur within single-stranded nucleic acids. This is particularly important in RNA molecules (e.g., transfer RNA), where Watson–Crick base pairs (guanine–cytosine and adenine–uracil) permit the formation of short double-stranded helices, and a wide variety of non–Watson–Crick interactions (e.g., G–U or A–A) allow RNAs to fold into a vast range of specific three-dimensional structures. In addition, base-pairing between transfer RNA (tRNA) and messenger RNA (mRNA) forms the basis for the molecular recognition events that result in the nucleotide sequence of mRNA becoming translated into the amino acid sequence of proteins via the genetic code. The size of an individual gene or an organism's entire genome is often measured in base pairs because DNA is usually double-stranded. Hence, the number of total base pairs is equal to the number of nucleotides in one of the strands (wit
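As a small illustration of the Watson–Crick pairing rules just described, the C sketch below derives the complementary strand of a short DNA sequence. It is a toy example under simplifying assumptions (uppercase bases only, no reverse orientation or ambiguity codes); real sequence tools handle far more cases.

```c
/* Sketch: Watson-Crick complement of a DNA strand, showing how the
 * pairing rules (A-T, G-C) make each strand a redundant copy of the
 * other. The example sequence is arbitrary. */
#include <stdio.h>

char complement(char base) {
    switch (base) {
        case 'A': return 'T';   /* adenine pairs with thymine  */
        case 'T': return 'A';
        case 'G': return 'C';   /* guanine pairs with cytosine */
        case 'C': return 'G';
        default:  return 'N';   /* unknown base                */
    }
}

int main(void) {
    const char *strand = "ATGCGT";
    printf("5'-%s-3'\n3'-", strand);
    for (const char *p = strand; *p; p++)
        putchar(complement(*p));
    printf("-5'\n");
    return 0;
}
```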
https://en.wikipedia.org/wiki/Beatmatching
Beatmatching or pitch cue is a disc jockey technique of pitch shifting or time stretching an upcoming track to match its tempo to that of the currently playing track, and to adjust them such that the beats (and, usually, the bars) are synchronized—e.g. the kicks and snares in two house records hit at the same time when both records are played simultaneously. Beatmatching is a component of beatmixing which employs beatmatching combined with equalization, attention to phrasing and track selection in an attempt to make a single mix that flows together and has a good structure. The technique was developed to keep people from leaving the dancefloor at the end of a song. These days it is considered basic among disc jockeys (DJs) in electronic dance music genres, and it is standard practice in clubs to keep a constant beat through the night, even if DJs change in the middle. Technique: The beatmatching technique consists of the following steps:
- While a record is playing, start a second record playing, but only monitored through headphones, not being fed to the main PA system.
- Use the gain (or trim) control on the mixer to match the levels of the two records.
- Restart and slip-cue the new record at the right time, on beat with the record currently playing.
- If the beat on the new record hits before the beat on the current record, then the new record is too fast; reduce the pitch and manually slow the speed of the new record to bring the beats back in sync. If the beat on the new record hits after the beat on the current record, then the new record is too slow; increase the pitch and manually increase the speed of the new record to bring the beats back in sync.
- Continue this process until the two records are in sync with each other. It can be difficult to sync the two records perfectly, so manual adjustment of the records is necessary to maintain the beat synchronization.
- Gradually fade in parts of the new track while fading out the old track. While in the mix, en
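The pitch adjustment in these steps is simple arithmetic: the percentage by which the incoming record must be sped up or slowed down is the ratio of the two tempos. A minimal C sketch, with made-up BPM values chosen purely for illustration:

```c
/* Sketch: the arithmetic behind matching tempos with a pitch fader.
 * To play a track recorded at track_bpm so it runs at target_bpm,
 * offset the pitch control by the percentage computed below. */
#include <stdio.h>

double pitch_percent(double track_bpm, double target_bpm) {
    return (target_bpm / track_bpm - 1.0) * 100.0;
}

int main(void) {
    double incoming = 126.0;    /* BPM of the cued record       */
    double playing  = 128.0;    /* BPM of the record on air     */
    printf("set pitch to %+.2f%%\n", pitch_percent(incoming, playing));
    /* prints: set pitch to +1.59% */
    return 0;
}
```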
https://en.wikipedia.org/wiki/Backplane
A backplane (or "backplane system") is a group of electrical connectors in parallel with each other, so that each pin of each connector is linked to the same relative pin of all the other connectors, forming a computer bus. It is used to connect several printed circuit boards together to make up a complete computer system. Backplanes commonly use a printed circuit board, but wire-wrapped backplanes have also been used in minicomputers and high-reliability applications. A backplane is generally differentiated from a motherboard by the lack of on-board processing and storage elements. A backplane uses plug-in cards for storage and processing. Usage Early microcomputer systems like the Altair 8800 used a backplane for the processor and expansion cards. Backplanes are normally used in preference to cables because of their greater reliability. In a cabled system, the cables need to be flexed every time that a card is added or removed from the system; this flexing eventually causes mechanical failures. A backplane does not suffer from this problem, so its service life is limited only by the longevity of its connectors. For example, DIN 41612 connectors (used in the VMEbus system) have three durability grades built to withstand (respectively) 50, 400 and 500 insertions and removals, or "mating cycles". To transmit information, Serial Back-Plane technology uses a low-voltage differential signaling transmission method for sending information. In addition, there are bus expansion cables which will extend a computer bus to an external backplane, usually located in an enclosure, to provide more or different slots than the host computer provides. These cable sets have a transmitter board located in the computer, an expansion board in the remote backplane, and a cable between the two. Active versus passive backplanes Backplanes have grown in complexity from the simple Industry Standard Architecture (ISA) (used in the original IBM PC) or S-100 style where all the connectors
https://en.wikipedia.org/wiki/Biological%20warfare
Biological warfare, also known as germ warfare, is the use of biological toxins or infectious agents such as bacteria, viruses, insects, and fungi with the intent to kill, harm or incapacitate humans, animals or plants as an act of war. Biological weapons (often termed "bio-weapons", "biological threat agents", or "bio-agents") are living organisms or replicating entities (i.e. viruses, which are not universally considered "alive"). Entomological (insect) warfare is a subtype of biological warfare. Offensive biological warfare in international armed conflicts is a war crime under the 1925 Geneva Protocol and several international humanitarian law treaties. In particular, the 1972 Biological Weapons Convention (BWC) bans the development, production, acquisition, transfer, stockpiling and use of biological weapons. In contrast, defensive biological research for prophylactic, protective or other peaceful purposes is not prohibited by the BWC. Biological warfare is distinct from warfare involving other types of weapons of mass destruction (WMD), including nuclear warfare, chemical warfare, and radiological warfare. None of these are considered conventional weapons, which are deployed primarily for their explosive, kinetic, or incendiary potential. Biological weapons may be employed in various ways to gain a strategic or tactical advantage over the enemy, either by threats or by actual deployments. Like some chemical weapons, biological weapons may also be useful as area denial weapons. These agents may be lethal or non-lethal, and may be targeted against a single individual, a group of people, or even an entire population. They may be developed, acquired, stockpiled or deployed by nation states or by non-national groups. In the latter case, or if a nation-state uses it clandestinely, it may also be considered bioterrorism. Biological warfare and chemical warfare overlap to an extent, as the use of toxins produced by some living organisms is considered under the prov
https://en.wikipedia.org/wiki/Bilinear%20map
In mathematics, a bilinear map is a function combining elements of two vector spaces to yield an element of a third vector space, and is linear in each of its arguments. Matrix multiplication is an example. Definition: Vector spaces: Let $V$, $W$ and $X$ be three vector spaces over the same base field $F$. A bilinear map is a function $B : V \times W \to X$ such that for all $w \in W$, the map $v \mapsto B(v, w)$ is a linear map from $V$ to $X$, and for all $v \in V$, the map $w \mapsto B(v, w)$ is a linear map from $W$ to $X$. In other words, when we hold the first entry of the bilinear map fixed while letting the second entry vary, the result is a linear operator, and similarly for when we hold the second entry fixed. Such a map satisfies the following properties. For any $\lambda \in F$, $B(\lambda v, w) = B(v, \lambda w) = \lambda B(v, w)$. The map is additive in both components: if $v_1, v_2 \in V$ and $w_1, w_2 \in W$, then $B(v_1 + v_2, w) = B(v_1, w) + B(v_2, w)$ and $B(v, w_1 + w_2) = B(v, w_1) + B(v, w_2)$. If $V = W$ and we have $B(v, w) = B(w, v)$ for all $v, w \in V$, then we say that $B$ is symmetric. If $X$ is the base field $F$, then the map is called a bilinear form, which are well-studied (for example: scalar product, inner product, and quadratic form). Modules: The definition works without any changes if instead of vector spaces over a field $F$, we use modules over a commutative ring $R$. It generalizes to $n$-ary functions, where the proper term is multilinear. For non-commutative rings $R$ and $S$, a left $R$-module $M$ and a right $S$-module $N$, a bilinear map is a map $B : M \times N \to T$ with $T$ an $(R, S)$-bimodule, and for which, for any $n$ in $N$, $m \mapsto B(m, n)$ is an $R$-module homomorphism, and for any $m$ in $M$, $n \mapsto B(m, n)$ is an $S$-module homomorphism. This satisfies $B(r \cdot m, n) = r \cdot B(m, n)$ and $B(m, n \cdot s) = B(m, n) \cdot s$ for all $m$ in $M$, $n$ in $N$, $r$ in $R$ and $s$ in $S$, as well as $B$ being additive in each argument. Properties: An immediate consequence of the definition is that $B(v, w) = 0_X$ whenever $v = 0_V$ or $w = 0_W$. This may be seen by writing the zero vector $0_V$ as $0 \cdot 0_V$ (and similarly for $0_W$) and moving the scalar $0$ "outside", in front of $B$, by linearity. The set $L(V, W; X)$ of all bilinear maps is a linear subspace of the space (viz. vector space, module) of all maps from $V \times W$ into $X$. If $V$, $W$, $X$ are finite-dimensional, then so is $L(V, W; X)$. For $X = F$, that is, bilinear forms, the dimension of th
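Since matrix multiplication is cited as the standard example, the following C sketch checks numerically that the 2×2 matrix product B(M, N) = MN is linear in its first argument, i.e. that (aM1 + M2)N = a(M1 N) + M2 N. The matrices and the scalar are arbitrary test values, not anything canonical.

```c
/* Sketch: numeric check that 2x2 matrix multiplication is linear
 * in its first argument. */
#include <stdio.h>
#include <math.h>

typedef double Mat[2][2];

void mul(Mat a, Mat b, Mat out) {        /* out = a * b */
    for (int i = 0; i < 2; i++)
        for (int j = 0; j < 2; j++)
            out[i][j] = a[i][0] * b[0][j] + a[i][1] * b[1][j];
}

int main(void) {
    Mat m1 = {{1, 2}, {3, 4}}, m2 = {{0, 5}, {6, 7}}, n = {{2, 1}, {8, 9}};
    double a = 3.0;

    Mat combo, lhs, p1, p2;
    for (int i = 0; i < 2; i++)          /* combo = a*m1 + m2 */
        for (int j = 0; j < 2; j++)
            combo[i][j] = a * m1[i][j] + m2[i][j];

    mul(combo, n, lhs);                  /* B(a*m1 + m2, n) */
    mul(m1, n, p1);                      /* B(m1, n)        */
    mul(m2, n, p2);                      /* B(m2, n)        */

    double dev = 0.0;                    /* total elementwise mismatch */
    for (int i = 0; i < 2; i++)
        for (int j = 0; j < 2; j++)
            dev += fabs(lhs[i][j] - (a * p1[i][j] + p2[i][j]));

    printf("deviation from linearity in the first argument: %g\n", dev);
    return 0;                            /* prints 0 */
}
```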
https://en.wikipedia.org/wiki/Buffer%20overflow
In programming and information security, a buffer overflow or buffer overrun is an anomaly whereby a program writes data to a buffer beyond the buffer's allocated memory, overwriting adjacent memory locations. Buffers are areas of memory set aside to hold data, often while moving it from one section of a program to another, or between programs. Buffer overflows can often be triggered by malformed inputs; if one assumes all inputs will be smaller than a certain size and the buffer is created to be that size, then an anomalous transaction that produces more data could cause it to write past the end of the buffer. If this overwrites adjacent data or executable code, this may result in erratic program behavior, including memory access errors, incorrect results, and crashes. Exploiting the behavior of a buffer overflow is a well-known security exploit. On many systems, the memory layout of a program, or the system as a whole, is well defined. By sending in data designed to cause a buffer overflow, it is possible to write into areas known to hold executable code and replace it with malicious code, or to selectively overwrite data pertaining to the program's state, therefore causing behavior that was not intended by the original programmer. Buffers are widespread in operating system (OS) code, so it is possible to make attacks that perform privilege escalation and gain unlimited access to the computer's resources. The famed Morris worm in 1988 used this as one of its attack techniques. Programming languages commonly associated with buffer overflows include C and C++, which provide no built-in protection against accessing or overwriting data in any part of memory and do not automatically check that data written to an array (the built-in buffer type) is within the boundaries of that array. Bounds checking can prevent buffer overflows, but requires additional code and processing time. Modern operating systems use a variety of techniques to combat malicious buffer overflows
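A minimal C sketch of the bug class described above, built around the classic unchecked strcpy pattern; the buffer size and input string are illustrative, and actually calling the unsafe variant with oversized input is undefined behaviour.

```c
/* Sketch: strcpy() performs no bounds checking, so input longer than
 * the destination buffer overwrites adjacent memory. The safer
 * variant bounds the copy explicitly. */
#include <stdio.h>
#include <string.h>

void unsafe(const char *input) {
    char buf[8];
    strcpy(buf, input);            /* no length check: overflows if   */
    printf("%s\n", buf);           /* strlen(input) >= sizeof buf     */
}

void safer(const char *input) {
    char buf[8];
    strncpy(buf, input, sizeof buf - 1);  /* copy at most 7 chars     */
    buf[sizeof buf - 1] = '\0';           /* and always terminate     */
    printf("%s\n", buf);
}

int main(void) {
    safer("this string is far longer than eight bytes");
    /* unsafe(...) with the same argument would corrupt the stack */
    return 0;
}
```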
https://en.wikipedia.org/wiki/Bioterrorism
Bioterrorism is terrorism involving the intentional release or dissemination of biological agents. These agents include bacteria, viruses, insects, fungi, and/or toxins, and may be in a naturally occurring or a human-modified form, in much the same way as in biological warfare. Further, modern agribusiness is vulnerable to anti-agricultural attacks by terrorists, and such attacks can seriously damage an economy as well as consumer confidence. The latter destructive activity is called agrobioterrorism and is a subtype of agro-terrorism. Definition: Bioterrorism is the deliberate release of viruses, bacteria, toxins, or other harmful agents to cause illness or death in people, animals, or plants. These agents are typically found in nature, but could be mutated or altered to increase their ability to cause disease, make them resistant to current medicines, or to increase their ability to be spread into the environment. Biological agents can be spread through the air, water, or in food. Biological agents are attractive to terrorists because they are extremely difficult to detect and do not cause illness for several hours to several days. Some bioterrorism agents, like the smallpox virus, can be spread from person to person and some, like anthrax, cannot. Bioterrorism may be favored because biological agents are relatively easy and inexpensive to obtain, can be easily disseminated, and can cause widespread fear and panic beyond the actual physical damage. Military leaders, however, have learned that, as a military asset, bioterrorism has some important limitations; it is difficult to use a bioweapon in a way that only affects the enemy and not friendly forces. A biological weapon is useful to terrorists mainly as a method of creating mass panic and disruption to a state or a country. However, technologists such as Bill Joy have warned of the potential power which genetic engineering might place in the hands of future bio-terrorists. The use of agents that do not cause harm
https://en.wikipedia.org/wiki/BCS%20theory
In physics, the Bardeen–Cooper–Schrieffer (BCS) theory (named after John Bardeen, Leon Cooper, and John Robert Schrieffer) is the first microscopic theory of superconductivity since Heike Kamerlingh Onnes's 1911 discovery. The theory describes superconductivity as a microscopic effect caused by a condensation of Cooper pairs. The theory is also used in nuclear physics to describe the pairing interaction between nucleons in an atomic nucleus. It was proposed by Bardeen, Cooper, and Schrieffer in 1957; they received the Nobel Prize in Physics for this theory in 1972. History Rapid progress in the understanding of superconductivity gained momentum in the mid-1950s. It began with the 1948 paper, "On the Problem of the Molecular Theory of Superconductivity", where Fritz London proposed that the phenomenological London equations may be consequences of the coherence of a quantum state. In 1953, Brian Pippard, motivated by penetration experiments, proposed that this would modify the London equations via a new scale parameter called the coherence length. John Bardeen then argued in the 1955 paper, "Theory of the Meissner Effect in Superconductors", that such a modification naturally occurs in a theory with an energy gap. The key ingredient was Leon Cooper's calculation of the bound states of electrons subject to an attractive force in his 1956 paper, "Bound Electron Pairs in a Degenerate Fermi Gas". In 1957 Bardeen and Cooper assembled these ingredients and constructed such a theory, the BCS theory, with Robert Schrieffer. The theory was first published in April 1957 in the letter, "Microscopic theory of superconductivity". The demonstration that the phase transition is second order, that it reproduces the Meissner effect and the calculations of specific heats and penetration depths appeared in the December 1957 article, "Theory of superconductivity". They received the Nobel Prize in Physics in 1972 for this theory. In 1986, high-temperature superconductivity was discov
https://en.wikipedia.org/wiki/Brewing
Brewing is the production of beer by steeping a starch source (commonly cereal grains, the most popular of which is barley) in water and fermenting the resulting sweet liquid with yeast. It may be done in a brewery by a commercial brewer, at home by a homebrewer, or communally. Brewing has taken place since around the 6th millennium BC, and archaeological evidence suggests that emerging civilizations, including ancient Egypt, China, and Mesopotamia, brewed beer. Since the nineteenth century the brewing industry has been part of most western economies. The basic ingredients of beer are water and a fermentable starch source such as malted barley. Most beer is fermented with a brewer's yeast and flavoured with hops. Less widely used starch sources include millet, sorghum and cassava. Secondary sources (adjuncts), such as maize (corn), rice, or sugar, may also be used, sometimes to reduce cost, or to add a feature, such as adding wheat to aid in retaining the foamy head of the beer. The most common starch source is ground cereal or "grist"; the proportion of the starch or cereal ingredients in a beer recipe may be called grist, grain bill, or simply mash ingredients. Steps in the brewing process include malting, milling, mashing, lautering, boiling, fermenting, conditioning, filtering, and packaging. There are three main fermentation methods: warm, cool and spontaneous. Fermentation may take place in an open or closed fermenting vessel; a secondary fermentation may also occur in the cask or bottle. There are several additional brewing methods, such as Burtonisation, double dropping, and Yorkshire Square, as well as post-fermentation treatment such as filtering, and barrel-ageing. History: Brewing has taken place since around the 6th millennium BC, and archaeological evidence suggests emerging civilizations including China, ancient Egypt, and Mesopotamia brewed beer. Descriptions of various beer recipes can be found in cuneiform (the oldest known writing) from ancie
https://en.wikipedia.org/wiki/Brownian%20motion
Brownian motion is the random motion of particles suspended in a medium (a liquid or a gas). This motion pattern typically consists of random fluctuations in a particle's position inside a fluid sub-domain, followed by a relocation to another sub-domain. Each relocation is followed by more fluctuations within the new closed volume. This pattern describes a fluid at thermal equilibrium, defined by a given temperature. Within such a fluid, there exists no preferential direction of flow (as in transport phenomena). More specifically, the fluid's overall linear and angular momenta remain null over time. The kinetic energies of the molecular Brownian motions, together with those of molecular rotations and vibrations, sum up to the caloric component of a fluid's internal energy (the equipartition theorem). This motion is named after the botanist Robert Brown, who first described the phenomenon in 1827, while looking through a microscope at pollen of the plant Clarkia pulchella immersed in water. In 1900, almost eighty years later, the French mathematician Louis Bachelier modeled the stochastic process now called Brownian motion in his doctoral thesis, The Theory of Speculation (Théorie de la spéculation), prepared under the supervision of Henri Poincaré. Then, in 1905, theoretical physicist Albert Einstein published a paper where he modeled the motion of the pollen particles as being moved by individual water molecules, making one of his first major scientific contributions. The direction of the force of atomic bombardment is constantly changing, and at different times the particle is hit more on one side than another, leading to the seemingly random nature of the motion. This explanation of Brownian motion served as convincing evidence that atoms and molecules exist and was further verified experimentally by Jean Perrin in 1908. Perrin was awarded the Nobel Prize in Physics in 1926 "for his work on the discontinuous structure of matter". The many-body interactions th
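A one-dimensional random walk is a common discrete stand-in for the motion described above: each step adds an independent random displacement, mimicking bombardment that hits the particle from varying sides. A short C sketch, with step count and scale chosen arbitrarily:

```c
/* Sketch: 1-D random walk as a toy model of Brownian motion. */
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    srand(42);                         /* fixed seed for reproducibility */
    double x = 0.0;
    for (int step = 0; step < 1000; step++) {
        /* random displacement uniform in [-0.5, 0.5] */
        x += (double)rand() / RAND_MAX - 0.5;
        if (step % 250 == 249)
            printf("after %4d steps: x = %+.3f\n", step + 1, x);
    }
    return 0;
}
```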
https://en.wikipedia.org/wiki/Backward%20compatibility
Backward compatibility (sometimes known as backwards compatibility) is a property of an operating system, software, real-world product, or technology that allows for interoperability with an older legacy system, or with input designed for such a system, especially in telecommunications and computing. Modifying a system in a way that does not allow backward compatibility is sometimes called "breaking" backward compatibility. Such breaking usually incurs various types of costs, such as switching cost. A complementary concept is forward compatibility. A design that is forward-compatible usually has a roadmap for compatibility with future standards and products. Usage In hardware A simple example of both backward and forward compatibility is the introduction of FM radio in stereo. FM radio was initially mono, with only one audio channel represented by one signal. With the introduction of two-channel stereo FM radio, many listeners had only mono FM receivers. Forward compatibility for mono receivers with stereo signals was achieved by sending the sum of both left and right audio channels in one signal and the difference in another signal. That allows mono FM receivers to receive and decode the sum signal while ignoring the difference signal, which is necessary only for separating the audio channels. Stereo FM receivers can receive a mono signal and decode it without the need for a second signal, and they can separate a sum signal to left and right channels if both sum and difference signals are received. Without the requirement for backward compatibility, a simpler method could have been chosen. Full backward compatibility is particularly important in computer instruction set architectures, one of the most successful being the x86 family of microprocessors. Their full backward compatibility spans back to the 16-bit Intel 8086/8088 processors introduced in 1978. (The 8086/8088, in turn, were designed with easy machine-translatability of programs written for its prede
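The sum/difference scheme in the FM example is easy to state in code: a mono receiver plays only the sum L+R, while a stereo receiver recovers both channels from the sum and difference signals. A C sketch with arbitrary sample values (real FM stereo also involves a pilot tone and subcarrier, omitted here):

```c
/* Sketch of backward/forward compatibility via sum and difference:
 * mono sets decode the sum, stereo sets reconstruct both channels. */
#include <stdio.h>

int main(void) {
    double left = 0.8, right = 0.2;      /* original audio channels  */

    double sum  = left + right;          /* what a mono set plays    */
    double diff = left - right;          /* extra signal for stereo  */

    /* a stereo receiver reconstructs both channels */
    double l_out = (sum + diff) / 2.0;
    double r_out = (sum - diff) / 2.0;

    printf("mono hears %.2f; stereo recovers L=%.2f R=%.2f\n",
           sum, l_out, r_out);
    return 0;
}
```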
https://en.wikipedia.org/wiki/Bacterial%20conjugation
Bacterial conjugation is the transfer of genetic material between bacterial cells by direct cell-to-cell contact or by a bridge-like connection between two cells. This takes place through a pilus. It is a parasexual mode of reproduction in bacteria. It is a mechanism of horizontal gene transfer, as are transformation and transduction, although these two other mechanisms do not involve cell-to-cell contact. Classical E. coli bacterial conjugation is often regarded as the bacterial equivalent of sexual reproduction or mating since it involves the exchange of genetic material. However, it is not sexual reproduction, since no exchange of gametes occurs, and indeed no generation of a new organism: instead an existing organism is transformed. During classical E. coli conjugation the donor cell provides a conjugative or mobilizable genetic element that is most often a plasmid or transposon. Most conjugative plasmids have systems ensuring that the recipient cell does not already contain a similar element. The genetic information transferred is often beneficial to the recipient. Benefits may include antibiotic resistance, xenobiotic tolerance or the ability to use new metabolites. Other elements can be detrimental and may be viewed as bacterial parasites. Conjugation in Escherichia coli by spontaneous zygogenesis and in Mycobacterium smegmatis by distributive conjugal transfer differ from the better studied classical E. coli conjugation in that these cases involve substantial blending of the parental genomes. History: The process was discovered by Joshua Lederberg and Edward Tatum in 1946. Mechanism (steps shown in the conjugation diagram): 1. The donor cell produces a pilus. 2. The pilus attaches to the recipient cell and brings the two cells together. 3. The mobile plasmid is nicked and a single strand of DNA is then transferred to the recipient cell. 4. Both cells synthesize a complementary strand to produce a double-stranded circular plasmid and also reproduce pili; both cells are now viable donors for the F-factor.
https://en.wikipedia.org/wiki/BIOS
In computing, BIOS (Basic Input/Output System, also known as the System BIOS, ROM BIOS, BIOS ROM or PC BIOS) is firmware used to provide runtime services for operating systems and programs and to perform hardware initialization during the booting process (power-on startup). The BIOS firmware comes pre-installed on an IBM PC or IBM PC compatible's system board and exists in some UEFI-based systems to maintain compatibility with operating systems that do not support UEFI native operation. The name originates from the Basic Input/Output System used in the CP/M operating system in 1975. The BIOS, originally proprietary to the IBM PC, has been reverse-engineered by some companies (such as Phoenix Technologies) looking to create compatible systems. The interface of that original system serves as a de facto standard. The BIOS in modern PCs initializes and tests the system hardware components (Power-on self-test), and loads a boot loader from a mass storage device which then initializes a kernel. In the era of DOS, the BIOS provided BIOS interrupt calls for the keyboard, display, storage, and other input/output (I/O) devices that standardized an interface to application programs and the operating system. More recent operating systems do not use the BIOS interrupt calls after startup. Most BIOS implementations are specifically designed to work with a particular computer or motherboard model, by interfacing with various devices, especially the system chipset. Originally, BIOS firmware was stored in a ROM chip on the PC motherboard. In later computer systems, the BIOS contents are stored on flash memory so it can be rewritten without removing the chip from the motherboard. This allows easy, end-user updates to the BIOS firmware so new features can be added or bugs can be fixed, but it also creates a possibility for the computer to become infected with BIOS rootkits. Furthermore, a BIOS upgrade that fails could brick the motherboard. The last version of Microsoft Windows to offi
https://en.wikipedia.org/wiki/Bose%E2%80%93Einstein%20condensate
In condensed matter physics, a Bose–Einstein condensate (BEC) is a state of matter that is typically formed when a gas of bosons at very low densities is cooled to temperatures very close to absolute zero (−273.15 °C or −459.67 °F). Under such conditions, a large fraction of bosons occupy the lowest quantum state, at which microscopic quantum mechanical phenomena, particularly wavefunction interference, become apparent macroscopically. This state was first predicted, generally, in 1924–1925 by Albert Einstein, crediting a pioneering paper by Satyendra Nath Bose on the new field now known as quantum statistics. In 1995, the first Bose–Einstein condensate was created by Eric Cornell and Carl Wieman of the University of Colorado Boulder using rubidium atoms; later that year, Wolfgang Ketterle of MIT produced a BEC using sodium atoms. In 2001 Cornell, Wieman and Ketterle shared the Nobel Prize in Physics "for the achievement of Bose-Einstein condensation in dilute gases of alkali atoms, and for early fundamental studies of the properties of the condensates." History: Bose first sent a paper to Einstein on the quantum statistics of light quanta (now called photons), in which he derived Planck's quantum radiation law without any reference to classical physics. Einstein was impressed, translated the paper himself from English to German and submitted it for Bose to the Zeitschrift für Physik, which published it in 1924. (The Einstein manuscript, once believed to be lost, was found in a library at Leiden University in 2005.) Einstein then extended Bose's ideas to matter in two other papers. The result of their efforts is the concept of a Bose gas, governed by Bose–Einstein statistics, which describes the statistical distribution of identical particles with integer spin, now called bosons. Bosons, particles that include the photon as well as atoms such as helium-4, are allowed to share a quantum state. Einstein proposed that cooling bosonic atoms to a very low temperature wou
https://en.wikipedia.org/wiki/B%20%28programming%20language%29
B is a programming language developed at Bell Labs circa 1969 by Ken Thompson and Dennis Ritchie. B was derived from BCPL, and its name may be a contraction of BCPL. Thompson's coworker Dennis Ritchie speculated that the name might be based on Bon, an earlier, but unrelated, programming language that Thompson designed for use on Multics. B was designed for recursive, non-numeric, machine-independent applications, such as system and language software. It was a typeless language, with the only data type being the underlying machine's natural memory word format, whatever that might be. Depending on the context, the word was treated either as an integer or a memory address. As machines with ASCII processing became common, notably the DEC PDP-11 that arrived at Bell Labs, support for character data stuffed in memory words became important. The typeless nature of the language was seen as a disadvantage, which led Thompson and Ritchie to develop an expanded version of the language supporting new internal and user-defined types, which became the C programming language. History: Circa 1969, Ken Thompson and later Dennis Ritchie developed B, basing it mainly on the BCPL language Thompson had used in the Multics project. B was essentially the BCPL system stripped of any component Thompson felt he could do without, in order to make it fit within the memory capacity of the minicomputers of the time. The BCPL to B transition also included changes made to suit Thompson's preferences (mostly along the lines of reducing the number of non-whitespace characters in a typical program). Much of the typical ALGOL-like syntax of BCPL was rather heavily changed in this process. The assignment operator := reverted to the = of Rutishauser's Superplan, and the equality operator = was replaced by ==. Thompson added "two-address assignment operators" using x =+ y syntax to add y to x (in C the operator is written +=). This syntax came from Douglas McIlroy's implementation of TMG, in which B
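To illustrate the operator heritage mentioned above, the C sketch below shows the modern += spelling; B's historical =+ form appears only in a comment, since it is not valid modern C.

```c
/* Sketch: B's "two-address assignment" and its C descendant. */
#include <stdio.h>

int main(void) {
    int x = 10, y = 5;

    /* In B (and very early C): x =+ y;  meaning "add y into x". */
    x += y;                  /* modern C spelling of the same operation */

    printf("x = %d\n", x);   /* prints: x = 15 */
    return 0;
}
```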
https://en.wikipedia.org/wiki/Beer%E2%80%93Lambert%20law
The Beer–Lambert law is commonly applied to chemical analysis measurements to determine the concentration of chemical species that absorb light. It is often referred to as Beer's law. In physics, the Bouguer–Lambert law is an empirical law which relates the extinction or attenuation of light to the properties of the material through which the light is travelling. It had its first use in astronomical extinction. The fundamental law of extinction (the process is linear in the intensity of radiation and amount of radiatively active matter, provided that the physical state is held constant) is sometimes called the Beer–Bouguer–Lambert law or the Bouguer–Beer–Lambert law or merely the extinction law. The extinction law is also used in understanding attenuation in physical optics, for photons, neutrons, or rarefied gases. In mathematical physics, this law arises as a solution of the BGK equation. History: Bouguer–Lambert law: This law is based on observations made by Pierre Bouguer before 1729. It is often attributed to Johann Heinrich Lambert, who cited Bouguer's Essai d'optique sur la gradation de la lumière (Claude Jombert, Paris, 1729) – and even quoted from it – in his Photometria in 1760. Lambert expressed the law, which states that the loss of light intensity when it propagates in a medium is directly proportional to intensity and path length, in the mathematical form used today. Lambert began by assuming that the intensity $I$ of light traveling into an absorbing body would be given by the differential equation $-\frac{dI}{dx} = \mu I$, which is compatible with Bouguer's observations. The constant of proportionality $\mu$ was often termed the "optical density" of the body. Integrating to find the intensity at a distance $x$ into the body, one obtains $\ln\frac{I_0}{I(x)} = \int_0^x \mu(x')\,dx'$. For a homogeneous medium, this reduces to $\ln\frac{I_0}{I(x)} = \mu x$, from which follows the exponential attenuation law $I(x) = I_0 e^{-\mu x}$. Beer's law: Much later, in 1852, the German scientist August Beer studied another attenuation relation. In the introduction to his classic paper, he wrote: "The absorption of light dur
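In its chemical-analysis form, Beer's law relates absorbance to concentration as A = εlc, so a measured absorbance yields the concentration directly. A C sketch with made-up values for the molar absorptivity and path length (not data for any particular compound):

```c
/* Sketch: solving Beer's law A = epsilon * l * c for concentration. */
#include <stdio.h>

int main(void) {
    double absorbance = 0.75;      /* measured, dimensionless        */
    double epsilon    = 15000.0;   /* molar absorptivity, L/(mol*cm) */
    double path       = 1.0;       /* cuvette path length, cm        */

    double conc = absorbance / (epsilon * path);   /* mol/L */
    printf("concentration = %.2e mol/L\n", conc);  /* 5.00e-05 */
    return 0;
}
```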
https://en.wikipedia.org/wiki/Bean
A bean is the seed of several plants in the family Fabaceae, which are used as vegetables for human or animal food. They can be cooked in many different ways, including boiling, frying, and baking, and are used in many traditional dishes throughout the world. Terminology: The word "bean" and its Germanic cognates (e.g. German Bohne) have existed in common use in West Germanic languages since before the 12th century, referring to broad beans, chickpeas, and other pod-borne seeds. This was long before the New World genus Phaseolus was known in Europe. With the Columbian exchange of domestic plants between Europe and the Americas, use of the word was extended to pod-borne seeds of Phaseolus, such as the common bean and the runner bean, and the related genus Vigna. The term has long been applied generally to many other seeds of similar form, such as Old World soybeans, peas, other vetches, and lupins, and even to those with slighter resemblances, such as coffee beans, vanilla beans, castor beans, and cocoa beans. Thus the term "bean" in general usage can refer to a host of different species. Seeds called "beans" are often included among the crops called "pulses" (legumes), although the words are not always interchangeable (usage varies by plant variety and by region). Both terms, beans and pulses, are usually reserved for grain crops and thus exclude those legumes that have tiny seeds and are used exclusively for non-grain purposes (forage, hay, and silage), such as clover and alfalfa. The United Nations Food and Agriculture Organization defines "BEANS, DRY" (item code 176) as applicable only to species of Phaseolus. This is one of various examples of how narrower word senses enforced in trade regulations or botany often coexist in natural language with broader senses in culinary use and general use; other common examples are the narrow and broader senses of the word "nut", and the fact that tomatoes are fruit, botanically speaking, but are oft
https://en.wikipedia.org/wiki/Breast
The breast is one of two prominences located on the upper ventral region of a primate's torso. Both females and males develop breasts from the same embryological tissues. In females, it serves as the mammary gland, which produces and secretes milk to feed infants. Subcutaneous fat covers and envelops a network of ducts that converge on the nipple, and these tissues give the breast its size and shape. At the ends of the ducts are lobules, or clusters of alveoli, where milk is produced and stored in response to hormonal signals. During pregnancy, the breast responds to a complex interaction of hormones, including estrogens, progesterone, and prolactin, that mediate the completion of its development, namely lobuloalveolar maturation, in preparation for lactation and breastfeeding. Humans are the only animals with permanent breasts. At puberty, estrogens, in conjunction with growth hormone, cause permanent breast growth in female humans. This happens only to a much lesser extent in other primates—breast development in other primates generally only occurs with pregnancy. Along with their major function in providing nutrition for infants, female breasts have social and sexual characteristics. Breasts have been featured in ancient and modern sculpture, art, and photography. They can figure prominently in the perception of a woman's body and sexual attractiveness. A number of cultures associate breasts with sexuality and tend to regard bare breasts in public as immodest or indecent. Breasts, especially the nipples, are an erogenous zone. Etymology and terminology: The English word breast derives from the Old English word brēost ('breast, bosom'), from Proto-Germanic *breustam ('breast'), from the Proto-Indo-European base *bhreus- ('to swell, to sprout'). The breast spelling conforms to the Scottish and North English dialectal pronunciations. The Merriam-Webster Dictionary states that "Middle English , [comes] from Old English ; akin to Old High German ..., Old Irish [belly], [and] Russian "; the fir
https://en.wikipedia.org/wiki/Biotechnology
Biotechnology is a multidisciplinary field that involves the integration of natural sciences and engineering sciences in order to achieve the application of organisms, cells, parts thereof and molecular analogues for products and services. The term biotechnology was first used by Károly Ereky in 1919, to refer to the production of products from raw materials with the aid of living organisms. The core principle of biotechnology involves harnessing biological systems and organisms, such as bacteria, yeast, and plants, to perform specific tasks or produce valuable substances. Biotechnology has had a significant impact on many areas of society, from medicine to agriculture to environmental science. One of the key techniques used in biotechnology is genetic engineering, which allows scientists to modify the genetic makeup of organisms to achieve desired outcomes. This can involve inserting genes from one organism into another, creating new traits or modifying existing ones. Other important techniques used in biotechnology include tissue culture, which allows researchers to grow cells and tissues in the lab for research and medical purposes, and fermentation, which is used to produce a wide range of products such as beer, wine, and cheese. The applications of biotechnology are diverse and have led to the development of essential products like life-saving drugs, biofuels, genetically modified crops, and innovative materials. It has also been used to address environmental challenges, such as developing biodegradable plastics and using microorganisms to clean up contaminated sites. Biotechnology is a rapidly evolving field with significant potential to address pressing global challenges and improve the quality of life for people around the world; however, despite its numerous benefits, it also poses ethical and societal challenges, such as questions around genetic modification and intellectual property rights. As a result, there is ongoing debate and regulation surroundin
https://en.wikipedia.org/wiki/Backbone%20cabal
The backbone cabal was an informal organization of large-site news server administrators of the worldwide distributed newsgroup-based discussion system Usenet. It existed from about 1983 at least into the 2000s. The cabal was created in an effort to facilitate reliable propagation of new Usenet posts. While in the 1970s and 1980s many news servers only operated during night time to save on the cost of long-distance communication, servers of the backbone cabal were available 24 hours a day. The administrators of these servers gained sufficient influence in the otherwise anarchic Usenet community to be able to push through controversial changes, for instance the Great Renaming of Usenet newsgroups during 1987. History Mary Ann Horton recruited membership in and designed the original physical topology of the Usenet Backbone in 1983. Gene "Spaf" Spafford then created an email list of the backbone administrators, plus a few influential posters. This list became known as the Backbone Cabal and served as a "political (i.e. decision making) backbone". Other prominent members of the cabal were Brian Reid, Bob Allisat, Chuq von Rospach and Rick Adams. In internet culture During most of its existence, the cabal (sometimes capitalized) steadfastly denied its own existence; those involved would often respond "There is no Cabal" (sometimes abbreviated as "TINC"). The result of this policy was an aura of mystery, even a decade after the cabal mailing list disbanded in late 1988 following an internal fight.
https://en.wikipedia.org/wiki/Burnt-in%20timecode
Burnt-in timecode (often abbreviated to BITC by analogy to VITC) is a human-readable on-screen version of the timecode information for a piece of material superimposed on a video image. BITC is sometimes used in conjunction with "real" machine-readable timecode, but more often used in copies of original material on to a non-broadcast format such as VHS, so that the VHS copies can be traced back to their master tape and the original time codes easily located. Many professional VTRs can "burn" (overlay) the tape timecode onto one of their outputs. This output (which usually also displays the setup menu or on-screen display) is known as the super out or monitor out. The character switch or menu item turns this behaviour on or off. The character function is also used to display the timecode on the preview monitors in linear editing suites. Videotapes that are recorded with timecode numbers overlaid on the video are referred to as window dubs, named after the "window" that displays the burnt-in timecode on-screen. When editing was done using magnetic tapes that were subject to damage from excessive wear, it was common to use a window dub as a working copy for the majority of the editing process. Editing decisions would be made using a window dub, and no specialized equipment was needed to write down an edit decision list which would then be replicated from the high-quality masters. Timecode can also be superimposed on video using a dedicated overlay device, often called a "window dub inserter". This inputs a video signal and its separate timecode audio signal, reads the timecode, superimposes the timecode display over the video, and outputs the combined display (usually via composite), all in real time. Stand-alone timecode generator / readers often have the window dub function built-in. Some consumer cameras, in particular DV cameras, can "burn" (overlay) the tape timecode onto the composite output. This output typically is semi-transparent and may include ot
https://en.wikipedia.org/wiki/Bra%E2%80%93ket%20notation
Bra–ket notation, also called Dirac notation, is a notation for linear algebra and linear operators on complex vector spaces together with their dual space both in the finite-dimensional and infinite-dimensional case. It is specifically designed to ease the types of calculations that frequently come up in quantum mechanics. Its use in quantum mechanics is quite widespread. Bra–ket notation was created by Paul Dirac in his 1939 publication A New Notation for Quantum Mechanics. The notation was introduced as an easier way to write quantum mechanical expressions. The name comes from the English word "bracket". Quantum mechanics In quantum mechanics, bra–ket notation is used ubiquitously to denote quantum states. The notation uses angle brackets, ⟨ and ⟩, and a vertical bar |, to construct "bras" and "kets". A ket is of the form |v⟩. Mathematically it denotes a vector, v, in an abstract (complex) vector space V, and physically it represents a state of some quantum system. A bra is of the form ⟨f|. Mathematically it denotes a linear form f: V → ℂ, i.e. a linear map that maps each vector in V to a number in the complex plane ℂ. Letting the linear functional ⟨f| act on a vector |v⟩ is written as ⟨f|v⟩. Assume that on V there exists an inner product (·,·) with antilinear first argument, which makes V an inner product space. Then with this inner product each vector φ ≡ |φ⟩ can be identified with a corresponding linear form, by placing the vector in the anti-linear first slot of the inner product: (φ,·) ≡ ⟨φ|. The correspondence between these notations is then (φ,ψ) ≡ ⟨φ|ψ⟩. The linear form ⟨φ| is a covector to |φ⟩, and the set of all covectors forms a subspace of the dual vector space V∗, to the initial vector space V. The purpose of this linear form can now be understood in terms of making projections onto the state |φ⟩, to find how linearly dependent two states are, etc. For the vector space ℂⁿ, kets can be identified with column vectors, and bras with row vectors. Combinations of bras, kets, and linear operators are interpreted using matrix multiplic
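As a concrete illustration of the row/column identification just described (an illustrative sketch, not part of the original article), the following NumPy snippet represents kets as column vectors, bras as their conjugate transposes, and realizes ⟨φ|ψ⟩ and |ψ⟩⟨φ| as matrix products:

```python
import numpy as np

# Kets as column vectors in C^2; a bra is the conjugate transpose of its ket.
ket_psi = np.array([[1 + 1j], [2 - 1j]])   # |psi>
ket_phi = np.array([[0 + 1j], [1 + 0j]])   # |phi>

bra_phi = ket_phi.conj().T                 # <phi| = (|phi>)^dagger

# Inner product <phi|psi>: a complex scalar, antilinear in the bra argument.
inner = (bra_phi @ ket_psi).item()

# Outer product |psi><phi|: a 2x2 linear operator on C^2.
outer = ket_psi @ bra_phi

print(inner, outer.shape)
```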
https://en.wikipedia.org/wiki/Blue
Blue is one of the three primary colours in the RYB colour model (traditional colour theory), as well as in the RGB (additive) colour model. It lies between violet and cyan on the spectrum of visible light. The term blue generally describes colors perceived by humans observing light with a dominant wavelength between approximately 450 and 495 nanometres. Most blues contain a slight mixture of other colours; azure contains some green, while ultramarine contains some violet. The clear daytime sky and the deep sea appear blue because of an optical effect known as Rayleigh scattering. An optical effect called the Tyndall effect explains blue eyes. Distant objects appear more blue because of another optical effect called aerial perspective. Blue has been an important colour in art and decoration since ancient times. The semi-precious stone lapis lazuli was used in ancient Egypt for jewellery and ornament and later, in the Renaissance, to make the pigment ultramarine, the most expensive of all pigments. In the eighth century Chinese artists used cobalt blue to colour fine blue and white porcelain. In the Middle Ages, European artists used it in the windows of cathedrals. Europeans wore clothing coloured with the vegetable dye woad until it was replaced by the finer indigo from America. In the 19th century, synthetic blue dyes and pigments gradually replaced organic dyes and mineral pigments. Dark blue became a common colour for military uniforms and later, in the late 20th century, for business suits. Because blue has commonly been associated with harmony, it was chosen as the colour of the flags of the United Nations and the European Union. In the United States and Europe, blue is the colour that both men and women are most likely to choose as their favourite, with at least one recent survey showing the same across several other countries, including China, Malaysia, and Indonesia. Past surveys in the US and Europe have found that blue is the colour most commonly associ
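The inverse-fourth-power wavelength dependence of Rayleigh scattering can be illustrated numerically; the two wavelengths below are representative values chosen for illustration:

```python
# Rayleigh scattering intensity scales roughly as 1/wavelength^4, which is
# why shorter (bluer) wavelengths dominate the colour of the daytime sky.
blue_nm, red_nm = 450.0, 650.0
ratio = (red_nm / blue_nm) ** 4
print(f"450 nm light is scattered about {ratio:.1f}x more than 650 nm light")
```

This prints a ratio of roughly 4.4.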
https://en.wikipedia.org/wiki/Bugzilla
Bugzilla is a web-based general-purpose bug tracking system and testing tool originally developed and used by the Mozilla project, and licensed under the Mozilla Public License. Released as open-source software by Netscape Communications in 1998, it has been adopted by a variety of organizations for use as a bug tracking system for both free and open-source software and proprietary projects and products. Bugzilla is used, among others, by the Mozilla Foundation, WebKit, Linux kernel, FreeBSD, KDE, Apache, Eclipse and LibreOffice. Red Hat uses it, but is gradually migrating its products to Jira. It is also self-hosting. History Bugzilla was originally devised by Terry Weissman in 1998 for the nascent Mozilla.org project, as an open source application to replace the in-house system then in use at Netscape Communications for tracking defects in the Netscape Communicator suite. Bugzilla was originally written in Tcl, but Weissman decided to port it to Perl before its release as part of Netscape's early open-source code drops, in the hope that more people would be able to contribute to it, given that Perl seemed to be a more popular language at the time. Bugzilla 2.0 was the result of that port to Perl, and the first version was released to the public via anonymous CVS. In April 2000, Weissman handed over control of the Bugzilla project to Tara Hernandez. Under her leadership, some of the regular contributors were coerced into taking more responsibility, and Bugzilla development became more community-driven. In July 2001, facing distraction from her other responsibilities in Netscape, Hernandez handed control to Dave Miller, who was still in charge. Bugzilla 3.0 was released on May 10, 2007 and brought a refreshed UI, an XML-RPC interface, custom fields and resolutions, mod_perl support, shared saved searches, and improved UTF-8 support, along with other changes. Bugzilla 4.0 was released on February 15, 2011 and Bugzilla 5.0 was released in July 2015. Timeli
https://en.wikipedia.org/wiki/Block%20cipher
In cryptography, a block cipher is a deterministic algorithm that operates on fixed-length groups of bits, called blocks. Block ciphers are the elementary building blocks of many cryptographic protocols. They are ubiquitous in the storage and exchange of data, where such data is secured and authenticated via encryption. A block cipher uses blocks as an unvarying transformation. Even a secure block cipher is suitable for the encryption of only a single block of data at a time, using a fixed key. A multitude of modes of operation have been designed to allow their repeated use in a secure way to achieve the security goals of confidentiality and authenticity. However, block ciphers may also feature as building blocks in other cryptographic protocols, such as universal hash functions and pseudorandom number generators. Definition A block cipher consists of two paired algorithms, one for encryption, $E$, and the other for decryption, $D$. Both algorithms accept two inputs: an input block of size $n$ bits and a key of size $k$ bits; and both yield an $n$-bit output block. The decryption algorithm $D$ is defined to be the inverse function of encryption, i.e., $D = E^{-1}$. More formally, a block cipher is specified by an encryption function $E_K(P) := E(K, P) : \{0,1\}^k \times \{0,1\}^n \to \{0,1\}^n$ which takes as input a key $K$, of bit length $k$ (called the key size), and a bit string $P$, of length $n$ (called the block size), and returns a string $C$ of $n$ bits. $P$ is called the plaintext, and $C$ is termed the ciphertext. For each $K$, the function $E_K(P)$ is required to be an invertible mapping on $\{0,1\}^n$. The inverse for $E$ is defined as a function $D_K(C) := D(K, C) = E_K^{-1}(C)$, taking a key $K$ and a ciphertext $C$ to return a plaintext value $P$, such that $\forall K: D_K(E_K(P)) = P$. For example, a block cipher encryption algorithm might take a 128-bit block of plaintext as input, and output a corresponding 128-bit block of ciphertext. The exact transformation is controlled using a second input – the secret key. Decryption is similar: the decryption algorithm takes, in this example, a 128-bit block of ciphertext together with the secret key, and yields
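The invertibility requirement can be made concrete with a deliberately insecure toy cipher (a hypothetical 8-bit example for illustration only, not a real design): for every key K, encryption must be a bijection on the block space, so decryption recovers every plaintext.

```python
# Toy 8-bit "block cipher": XOR with the key, then rotate left by 3.
# Both steps are invertible, so E_K is a bijection on {0,1}^8.
BLOCK_BITS = 8
MASK = (1 << BLOCK_BITS) - 1

def rol(x, r):
    return ((x << r) | (x >> (BLOCK_BITS - r))) & MASK

def ror(x, r):
    return ((x >> r) | (x << (BLOCK_BITS - r))) & MASK

def encrypt(key, plaintext):          # E_K
    return rol(plaintext ^ key, 3)

def decrypt(key, ciphertext):         # D_K = E_K^{-1}
    return ror(ciphertext, 3) ^ key

key = 0b10110101
assert all(decrypt(key, encrypt(key, p)) == p for p in range(2 ** BLOCK_BITS))
```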
https://en.wikipedia.org/wiki/Wireless%20broadband
Wireless broadband is a telecommunications technology that provides high-speed wireless Internet access or computer networking access over a wide area. The term encompasses both fixed and mobile broadband. The term broadband Originally the word "broadband" had a technical meaning, but became a marketing term for any kind of relatively high-speed computer network or Internet access technology. According to the 802.16-2004 standard, broadband means "having instantaneous bandwidths greater than 1 MHz and supporting data rates greater than about 1.5 Mbit/s." The US Federal Communications Commission (FCC) has since revised the definition to mean download speeds of at least 25 Mbit/s and upload speeds of at least 3 Mbit/s. Technology and speeds A wireless broadband network is an outdoor fixed and/or mobile wireless network providing point-to-multipoint or point-to-point terrestrial wireless links for broadband services. Wireless networks can feature data rates exceeding 1 Gbit/s. Many fixed wireless networks are exclusively half-duplex (HDX); however, some licensed and unlicensed systems can also operate at full-duplex (FDX), allowing communication in both directions simultaneously. Outdoor fixed wireless broadband networks commonly utilize a priority TDMA-based protocol in order to divide communication into timeslots. This timeslot technique eliminates many of the issues common to the 802.11 Wi-Fi protocol in outdoor networks, such as the hidden node problem. Few wireless Internet service providers (WISPs) provide download speeds of over 100 Mbit/s; most broadband wireless access (BWA) services are estimated to have a range of 50 km (31 mi) from a tower. Technologies used include Local Multipoint Distribution Service (LMDS) and Multichannel Multipoint Distribution Service (MMDS), as well as heavy use of the industrial, scientific and medical (ISM) radio bands. One particular access technology was standardized by IEEE 802.16, with products known as WiMAX. WiMAX is highly popular in
https://en.wikipedia.org/wiki/Booch%20method
The Booch method is a method for object-oriented software development. It is composed of an object modeling language, an iterative object-oriented development process, and a set of recommended practices. The method was authored by Grady Booch when he was working for Rational Software (acquired by IBM), published in 1992 and revised in 1994. It was widely used in software engineering for object-oriented analysis and design and benefited from ample documentation and support tools. The notation aspect of the Booch methodology was superseded by the Unified Modeling Language (UML), which features graphical elements from the Booch method along with elements from the object-modeling technique (OMT) and object-oriented software engineering (OOSE). Methodological aspects of the Booch method have been incorporated into several methodologies and processes, the primary such methodology being the Rational Unified Process (RUP). Content of the method The Booch notation is characterized by cloud shapes to represent classes and distinguishes the following kinds of diagrams: class diagrams, object diagrams, state transition diagrams, interaction diagrams, module diagrams, and process diagrams. The process is organized around a macro and a micro process. The macro process identifies the following cycle of activities: Conceptualization: establish core requirements; Analysis: develop a model of the desired behavior; Design: create an architecture; Evolution: for the implementation; Maintenance: for evolution after the delivery. The micro process is applied to new classes, structures or behaviors that emerge during the macro process. It is made of the following cycle: Identification of classes and objects; Identification of their semantics; Identification of their relationships; Specification of their interfaces and implementation
https://en.wikipedia.org/wiki/Bilinear%20transform
The bilinear transform (also known as Tustin's method, after Arnold Tustin) is used in digital signal processing and discrete-time control theory to transform continuous-time system representations to discrete-time and vice versa. The bilinear transform is a special case of a conformal mapping (namely, a Möbius transformation), often used to convert a transfer function of a linear, time-invariant (LTI) filter in the continuous-time domain (often called an analog filter) to a transfer function of a linear, shift-invariant filter in the discrete-time domain (often called a digital filter although there are analog filters constructed with switched capacitors that are discrete-time filters). It maps positions on the imaginary axis, Re[s] = 0, in the s-plane to the unit circle, |z| = 1, in the z-plane. Other bilinear transforms can be used to warp the frequency response of any discrete-time linear system (for example to approximate the non-linear frequency resolution of the human auditory system) and are implementable in the discrete domain by replacing a system's unit delays with first-order all-pass filters. The transform preserves stability and maps every point of the frequency response of the continuous-time filter, $H_a(j\omega_a)$, to a corresponding point in the frequency response of the discrete-time filter, $H_d(e^{j\omega_d T})$, although to a somewhat different frequency, as shown in the Frequency warping section below. This means that for every feature that one sees in the frequency response of the analog filter, there is a corresponding feature, with identical gain and phase shift, in the frequency response of the digital filter but, perhaps, at a somewhat different frequency. This is barely noticeable at low frequencies but is quite evident at frequencies close to the Nyquist frequency. Discrete-time approximation The bilinear transform is a first-order Padé approximant of the natural logarithm function that is an exact mapping of the z-plane to the s-plane. When the Laplace transform is performed on a discret
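A minimal sketch of applying the transform in practice, assuming SciPy is available; the 50 Hz first-order low-pass prototype and 1 kHz sample rate are arbitrary illustration values:

```python
import numpy as np
from scipy import signal

fs = 1000.0                      # sample rate, Hz (assumed)
wc = 2 * np.pi * 50.0            # analog cutoff, rad/s (assumed)

# Analog prototype H(s) = wc / (s + wc), then Tustin discretization.
b_analog, a_analog = [wc], [1.0, wc]
b_digital, a_digital = signal.bilinear(b_analog, a_analog, fs=fs)
print(b_digital, a_digital)      # coefficients of the digital filter H(z)
```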
https://en.wikipedia.org/wiki/Black%20hole
A black hole is a region of spacetime where gravity is so strong that nothing, including light and other electromagnetic waves, has enough energy to escape it. The theory of general relativity predicts that a sufficiently compact mass can deform spacetime to form a black hole. The boundary of no escape is called the event horizon. Although it has a great effect on the fate and circumstances of an object crossing it, it has no locally detectable features according to general relativity. In many ways, a black hole acts like an ideal black body, as it reflects no light. Moreover, quantum field theory in curved spacetime predicts that event horizons emit Hawking radiation, with the same spectrum as a black body of a temperature inversely proportional to its mass. This temperature is of the order of billionths of a kelvin for stellar black holes, making it essentially impossible to observe directly. Objects whose gravitational fields are too strong for light to escape were first considered in the 18th century by John Michell and Pierre-Simon Laplace. In 1916, Karl Schwarzschild found the first modern solution of general relativity that would characterize a black hole. David Finkelstein, in 1958, first published the interpretation of "black hole" as a region of space from which nothing can escape. Black holes were long considered a mathematical curiosity; it was not until the 1960s that theoretical work showed they were a generic prediction of general relativity. The discovery of neutron stars by Jocelyn Bell Burnell in 1967 sparked interest in gravitationally collapsed compact objects as a possible astrophysical reality. The first black hole known was Cygnus X-1, identified by several researchers independently in 1971. Black holes of stellar mass form when massive stars collapse at the end of their life cycle. After a black hole has formed, it can grow by absorbing mass from its surroundings. Supermassive black holes of millions of solar masses (M☉) may form by absorbing
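As a back-of-the-envelope illustration (not from the article), the Schwarzschild radius r_s = 2GM/c^2 gives the event-horizon radius of a non-rotating black hole; the constants below are rounded:

```python
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2 (rounded)
c = 2.998e8        # speed of light, m/s (rounded)
M_SUN = 1.989e30   # solar mass, kg (rounded)

def schwarzschild_radius(mass_kg):
    """Event-horizon radius of a non-rotating black hole, in metres."""
    return 2 * G * mass_kg / c ** 2

print(schwarzschild_radius(M_SUN) / 1e3)        # ~3 km for one solar mass
print(schwarzschild_radius(4e6 * M_SUN) / 1e9)  # ~12 million km for 4e6 solar masses
```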
https://en.wikipedia.org/wiki/Beta%20decay
In nuclear physics, beta decay (β-decay) is a type of radioactive decay in which an atomic nucleus emits a beta particle (fast energetic electron or positron), transforming into an isobar of that nuclide. For example, beta decay of a neutron transforms it into a proton by the emission of an electron accompanied by an antineutrino; or, conversely a proton is converted into a neutron by the emission of a positron with a neutrino in so-called positron emission. Neither the beta particle nor its associated (anti-)neutrino exist within the nucleus prior to beta decay, but are created in the decay process. By this process, unstable atoms obtain a more stable ratio of protons to neutrons. The probability of a nuclide decaying due to beta and other forms of decay is determined by its nuclear binding energy. The binding energies of all existing nuclides form what is called the nuclear band or valley of stability. For either electron or positron emission to be energetically possible, the energy release (see below) or Q value must be positive. Beta decay is a consequence of the weak force, which is characterized by relatively lengthy decay times. Nucleons are composed of up quarks and down quarks, and the weak force allows a quark to change its flavour by emission of a W boson leading to creation of an electron/antineutrino or positron/neutrino pair. For example, a neutron, composed of two down quarks and an up quark, decays to a proton composed of a down quark and two up quarks. Electron capture is sometimes included as a type of beta decay, because the basic nuclear process, mediated by the weak force, is the same. In electron capture, an inner atomic electron is captured by a proton in the nucleus, transforming it into a neutron, and an electron neutrino is released. Description The two types of beta decay are known as beta minus and beta plus. In beta minus (β−) decay, a neutron is converted to a proton, and the process creates an electron and an electron antineutrino;
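The Q-value condition can be illustrated for free-neutron decay; the rest energies below are rounded standard values, and the antineutrino rest energy is neglected:

```python
# Q = (m_n - m_p - m_e) c^2 for n -> p + e- + antineutrino, in MeV.
M_N = 939.565   # neutron rest energy, MeV (rounded)
M_P = 938.272   # proton rest energy, MeV (rounded)
M_E = 0.511     # electron rest energy, MeV (rounded)

Q = M_N - M_P - M_E
print(f"Q = {Q:.3f} MeV")   # ~0.782 MeV > 0, so the decay is energetically possible
```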
https://en.wikipedia.org/wiki/Binomial%20coefficient
In mathematics, the binomial coefficients are the positive integers that occur as coefficients in the binomial theorem. Commonly, a binomial coefficient is indexed by a pair of integers $n \ge k \ge 0$ and is written $\binom{n}{k}$. It is the coefficient of the $x^k$ term in the polynomial expansion of the binomial power $(1+x)^n$; this coefficient can be computed by the multiplicative formula $\binom{n}{k} = \frac{n(n-1)(n-2)\cdots(n-k+1)}{k!}$, which using factorial notation can be compactly expressed as $\binom{n}{k} = \frac{n!}{k!(n-k)!}$. For example, the fourth power of $1+x$ is $(1+x)^4 = 1 + 4x + 6x^2 + 4x^3 + x^4$, and the binomial coefficient $\binom{4}{2} = 6$ is the coefficient of the $x^2$ term. Arranging the numbers $\binom{n}{0}, \binom{n}{1}, \ldots, \binom{n}{n}$ in successive rows for $n = 0, 1, 2, \ldots$ gives a triangular array called Pascal's triangle, satisfying the recurrence relation $\binom{n}{k} = \binom{n-1}{k-1} + \binom{n-1}{k}$. The binomial coefficients occur in many areas of mathematics, and especially in combinatorics. The symbol $\binom{n}{k}$ is usually read as "$n$ choose $k$" because there are $\binom{n}{k}$ ways to choose an (unordered) subset of $k$ elements from a fixed set of $n$ elements. For example, there are $\binom{4}{2} = 6$ ways to choose 2 elements from $\{1, 2, 3, 4\}$, namely $\{1,2\}$, $\{1,3\}$, $\{1,4\}$, $\{2,3\}$, $\{2,4\}$ and $\{3,4\}$. The binomial coefficients can be generalized to $\binom{z}{k}$ for any complex number $z$ and integer $k \ge 0$, and many of their properties continue to hold in this more general form. History and notation Andreas von Ettingshausen introduced the notation $\binom{n}{k}$ in 1826, although the numbers were known centuries earlier (see Pascal's triangle). In about 1150, the Indian mathematician Bhaskaracharya gave an exposition of binomial coefficients in his book Līlāvatī. Alternative notations include $C(n, k)$, $_nC_k$, $^nC_k$, $C^n_k$, $C^k_n$, and $C_{n,k}$, in all of which the $C$ stands for combinations or choices. Many calculators use variants of the $C$ notation because they can represent it on a single-line display. In this form the binomial coefficients are easily compared to $k$-permutations of $n$, written as $P(n, k)$, etc. Definition and interpretations For natural numbers (taken to include 0) $n$ and $k$, the binomial coefficient $\binom{n}{k}$ can be defined as the coefficient of the monomial $X^k$ in the expansion of $(1+X)^n$. The same coefficient also occurs (if $k \le n$) in the binomial formula $(x+y)^n = \sum_{k=0}^{n} \binom{n}{k} x^k y^{n-k}$ (valid for any elements $x$, $y$ of a commutative ring), which
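A small Python check (illustrative only) of the factorial formula against the Pascal's-triangle recurrence:

```python
import math

def pascal(n, k):
    """Binomial coefficient via the recurrence C(n,k) = C(n-1,k-1) + C(n-1,k)."""
    if k < 0 or k > n:
        return 0
    if k == 0 or k == n:
        return 1
    return pascal(n - 1, k - 1) + pascal(n - 1, k)

assert math.comb(4, 2) == 6
assert all(pascal(n, k) == math.comb(n, k)
           for n in range(10) for k in range(n + 1))
```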
https://en.wikipedia.org/wiki/Binomial%20theorem
In elementary algebra, the binomial theorem (or binomial expansion) describes the algebraic expansion of powers of a binomial. According to the theorem, it is possible to expand the polynomial $(x+y)^n$ into a sum involving terms of the form $ax^by^c$, where the exponents $b$ and $c$ are nonnegative integers with $b+c=n$, and the coefficient $a$ of each term is a specific positive integer depending on $n$ and $b$. For example, for $n=4$, $(x+y)^4 = x^4 + 4x^3y + 6x^2y^2 + 4xy^3 + y^4$. The coefficient $a$ in the term of $ax^by^c$ is known as the binomial coefficient $\binom{n}{b}$ or $\binom{n}{c}$ (the two have the same value). These coefficients for varying $n$ and $b$ can be arranged to form Pascal's triangle. These numbers also occur in combinatorics, where $\binom{n}{b}$ gives the number of different combinations of $b$ elements that can be chosen from an $n$-element set. Therefore $\binom{n}{b}$ is often pronounced as "$n$ choose $b$". History Special cases of the binomial theorem were known since at least the 4th century BC when Greek mathematician Euclid mentioned the special case of the binomial theorem for exponent 2. Greek mathematician Diophantus cubed various binomials, including $x-1$. Indian mathematician Aryabhata's method for finding cube roots, from around 510 CE, suggests that he knew the binomial formula for exponent 3. Binomial coefficients, as combinatorial quantities expressing the number of ways of selecting $k$ objects out of $n$ without replacement, were of interest to ancient Indian mathematicians. The earliest known reference to this combinatorial problem is the Chandaḥśāstra by the Indian lyricist Pingala (c. 200 BC), which contains a method for its solution. The commentator Halayudha from the 10th century AD explains this method. By the 6th century AD, the Indian mathematicians probably knew how to express this as a quotient $\frac{n!}{(n-k)!k!}$, and a clear statement of this rule can be found in the 12th century text Lilavati by Bhaskara. The first formulation of the binomial theorem and the table of binomial coefficients, to our knowledge, can be found in a work by Al-Karaji, quoted by Al-Samaw'al in his "al-Bahir". Al-Karaji described
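A quick numerical spot-check of the theorem (illustrative only; the sample points are arbitrary), using math.comb for the binomial coefficients:

```python
import math

def binomial_expansion(x, y, n):
    """Right-hand side of the binomial theorem: sum of C(n,k) x^(n-k) y^k."""
    return sum(math.comb(n, k) * x ** (n - k) * y ** k for k in range(n + 1))

for x, y, n in [(2, 3, 4), (1.5, -0.5, 7), (-2, 5, 3)]:
    assert math.isclose(binomial_expansion(x, y, n), (x + y) ** n)
```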
https://en.wikipedia.org/wiki/Blissymbols
Blissymbols or Blissymbolics is a constructed language conceived as an ideographic writing system called Semantography consisting of several hundred basic symbols, each representing a concept, which can be composed together to generate new symbols that represent new concepts. Blissymbols differ from most of the world's major writing systems in that the characters do not correspond at all to the sounds of any spoken language. Semantography was published by Charles K. Bliss in 1949 and found use in the education of people with communication difficulties. History Semantography was invented by Charles K. Bliss (1897–1985), born Karl Kasiel Blitz to a Jewish family in Chernivtsi (then Czernowitz, Austria-Hungary), which had a mixture of different nationalities that "hated each other, mainly because they spoke and thought in different languages." Bliss graduated as a chemical engineer at the Vienna University of Technology, and joined an electronics company. After the Nazi annexation of Austria in 1938, Bliss was sent to concentration camps but his German wife Claire managed to get him released, and they finally became exiles in Shanghai, where Bliss had a cousin. Bliss devised the symbols while a refugee at the Shanghai Ghetto and Sydney, from 1942 to 1949. He wanted to create an easy-to-learn international auxiliary language to allow communication between different linguistic communities. He was inspired by Chinese characters, with which he became familiar at Shanghai. Bliss published his system in Semantography (1949, exp. 2nd ed. 1965, 3rd ed. 1978.) It had several names. As the "tourist explosion" took place in the 1960s, a number of researchers were looking for new standard symbols to be used at roads, stations, airports, etc. Bliss then adopted the name Blissymbolics in order that no researcher could plagiarize his system of symbols. Since the 1960s/1970s, Blissymbols have become popular as a method to teach disabled people to communicate. In 1971 Shirley Mc
https://en.wikipedia.org/wiki/Bessel%20function
Bessel functions, first defined by the mathematician Daniel Bernoulli and then generalized by Friedrich Bessel, are canonical solutions $y(x)$ of Bessel's differential equation $x^2 \frac{d^2y}{dx^2} + x \frac{dy}{dx} + (x^2 - \alpha^2)y = 0$ for an arbitrary complex number $\alpha$, which represents the order of the Bessel function. Although $\alpha$ and $-\alpha$ produce the same differential equation, it is conventional to define different Bessel functions for these two values in such a way that the Bessel functions are mostly smooth functions of $\alpha$. The most important cases are when $\alpha$ is an integer or half-integer. Bessel functions for integer $\alpha$ are also known as cylinder functions or the cylindrical harmonics because they appear in the solution to Laplace's equation in cylindrical coordinates. Spherical Bessel functions with half-integer $\alpha$ are obtained when solving the Helmholtz equation in spherical coordinates. Applications of Bessel functions The Bessel function is a generalization of the sine function. It can be interpreted as the vibration of a string with variable thickness, variable tension (or both conditions simultaneously); vibrations in a medium with variable properties; vibrations of the disc membrane, etc. Bessel's equation arises when finding separable solutions to Laplace's equation and the Helmholtz equation in cylindrical or spherical coordinates. Bessel functions are therefore especially important for many problems of wave propagation and static potentials. In solving problems in cylindrical coordinate systems, one obtains Bessel functions of integer order ($\alpha = n$); in spherical problems, one obtains half-integer orders ($\alpha = n + \tfrac{1}{2}$). For example: Electromagnetic waves in a cylindrical waveguide Pressure amplitudes of inviscid rotational flows Heat conduction in a cylindrical object Modes of vibration of a thin circular or annular acoustic membrane (such as a drumhead or other membranophone) or thicker plates such as sheet metal (see Kirchhoff–Love plate theory, Mindlin–Reissner plate theory) Diffusion problems on a lattice Solutions to the radial S
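For numerical work, a sketch assuming SciPy is available, whose special.jv evaluates Bessel functions of the first kind of given order:

```python
import numpy as np
from scipy import special

x = np.linspace(0.0, 20.0, 5)
print(special.jv(0, x))        # J_0, order alpha = 0
print(special.jv(1, x))        # J_1, order alpha = 1

# First positive zero of J_0 (~2.405), which sets e.g. the fundamental
# vibration mode of a circular drumhead.
print(special.jn_zeros(0, 1))
```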
https://en.wikipedia.org/wiki/Boolean%20satisfiability%20problem
In logic and computer science, the Boolean satisfiability problem (sometimes called propositional satisfiability problem and abbreviated SATISFIABILITY, SAT or B-SAT) is the problem of determining if there exists an interpretation that satisfies a given Boolean formula. In other words, it asks whether the variables of a given Boolean formula can be consistently replaced by the values TRUE or FALSE in such a way that the formula evaluates to TRUE. If this is the case, the formula is called satisfiable. On the other hand, if no such assignment exists, the function expressed by the formula is FALSE for all possible variable assignments and the formula is unsatisfiable. For example, the formula "a AND NOT b" is satisfiable because one can find the values a = TRUE and b = FALSE, which make (a AND NOT b) = TRUE. In contrast, "a AND NOT a" is unsatisfiable. SAT is the first problem that was proven to be NP-complete; see Cook–Levin theorem. This means that all problems in the complexity class NP, which includes a wide range of natural decision and optimization problems, are at most as difficult to solve as SAT. There is no known algorithm that efficiently solves each SAT problem, and it is generally believed that no such algorithm exists; yet this belief has not been proved mathematically, and resolving the question of whether SAT has a polynomial-time algorithm is equivalent to the P versus NP problem, which is a famous open problem in the theory of computing. Nevertheless, as of 2007, heuristic SAT-algorithms are able to solve problem instances involving tens of thousands of variables and formulas consisting of millions of symbols, which is sufficient for many practical SAT problems from, e.g., artificial intelligence, circuit design, and automatic theorem proving. Definitions A propositional logic formula, also called Boolean expression, is built from variables, operators AND (conjunction, also denoted by ∧), OR (disjunction, ∨), NOT (negation, ¬), and parentheses. A
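A brute-force satisfiability checker makes the definition concrete; this sketch (exponential in the number of variables, so usable only on tiny instances) encodes CNF clauses as lists of signed variable indices, a common convention:

```python
from itertools import product

def satisfiable(num_vars, clauses):
    """Return a satisfying assignment (tuple of bools) or None.

    Each clause is a list of literals: +i means x_i, -i means NOT x_i.
    """
    for bits in product([False, True], repeat=num_vars):
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return bits
    return None

print(satisfiable(2, [[1], [-2]]))   # "a AND NOT b" -> (True, False)
print(satisfiable(1, [[1], [-1]]))   # "a AND NOT a" -> None
```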
https://en.wikipedia.org/wiki/Bidirectional%20text
A bidirectional text contains two text directionalities, right-to-left (RTL) and left-to-right (LTR). It generally involves text containing different types of alphabets, but may also refer to boustrophedon, which is changing text direction in each row. Many computer programs fail to display bidirectional text correctly. For example, this page is mostly LTR English script, and here is the RTL Hebrew name Sarah: שרה, spelled sin (ש) on the right, resh (ר) in the middle, and heh (ה) on the left. Some so-called right-to-left scripts such as the Persian script and Arabic are mostly, but not exclusively, right-to-left—mathematical expressions, numeric dates and numbers bearing units are embedded from left to right. That also happens if text from a left-to-right language such as English is embedded in them; or vice versa, if Arabic is embedded in a left-to-right script such as English. Bidirectional script support Bidirectional script support is the capability of a computer system to correctly display bidirectional text. The term is often shortened to "BiDi" or "bidi". Early computer installations were designed only to support a single writing system, typically for left-to-right scripts based on the Latin alphabet only. Adding new character sets and character encodings enabled a number of other left-to-right scripts to be supported, but did not easily support right-to-left scripts such as Arabic or Hebrew, and mixing the two was not practical. Right-to-left scripts were introduced through encodings like ISO/IEC 8859-6 and ISO/IEC 8859-8, storing the letters (usually) in writing and reading order. It is possible to simply flip the left-to-right display order to a right-to-left display order, but doing this sacrifices the ability to correctly display left-to-right scripts. With bidirectional script support, it is possible to mix characters from different scripts on the same page, regardless of writing direction. In particular, the Unicode standard provides foundations for c
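The per-character direction classes consumed by a BiDi implementation can be inspected with Python's unicodedata module; a small sketch:

```python
import unicodedata

# 'L' = left-to-right, 'R' = right-to-left, 'AL' = Arabic letter,
# 'EN' = European number: the classes the BiDi algorithm works from.
for ch in ["A", "\u05e9", "\u0627", "7"]:   # Latin A, Hebrew shin, Arabic alef, digit
    print(f"U+{ord(ch):04X} {unicodedata.name(ch)}: {unicodedata.bidirectional(ch)}")
```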
https://en.wikipedia.org/wiki/Bernoulli%27s%20inequality
In mathematics, Bernoulli's inequality (named after Jacob Bernoulli) is an inequality that approximates exponentiations of $1+x$. It is often employed in real analysis. It has several useful variants: Integer exponent Case 1: $(1+x)^r \ge 1+rx$ for every integer $r \ge 1$ and real number $x \ge -1$. The inequality is strict if $x \ne 0$ and $r \ge 2$. Case 2: $(1+x)^r \ge 1+rx$ for every integer $r \ge 0$ and every real number $x \ge -2$. Case 3: $(1+x)^r \ge 1+rx$ for every even integer $r \ge 0$ and every real number $x$. Real exponent $(1+x)^r \ge 1+rx$ for every real number $r \ge 1$ and $x \ge -1$. The inequality is strict if $x \ne 0$ and $r \ne 1$. $(1+x)^r \le 1+rx$ for every real number $0 \le r \le 1$ and $x \ge -1$. History Jacob Bernoulli first published the inequality in his treatise "Positiones Arithmeticae de Seriebus Infinitis" (Basel, 1689), where he used the inequality often. According to Joseph E. Hofmann, Über die Exercitatio Geometrica des M. A. Ricci (1963), p. 177, the inequality is actually due to Sluse in his Mesolabum (1668 edition), Chapter IV "De maximis & minimis". Proof for integer exponent The first case has a simple inductive proof: Suppose the statement is true for $r = k$: $(1+x)^k \ge 1+kx$. Then it follows that $(1+x)^{k+1} = (1+x)(1+x)^k \ge (1+x)(1+kx) = 1+(k+1)x+kx^2 \ge 1+(k+1)x$, since $1+x \ge 0$ and $kx^2 \ge 0$. Bernoulli's inequality can be proved for case 2, in which $r$ is a non-negative integer and $x \ge -2$, using mathematical induction in the following form: we prove the inequality for $r \in \{0, 1\}$, and from validity for some $r$ we deduce validity for $r+2$. For $r = 0$, $(1+x)^0 \ge 1+0x$ is equivalent to $1 \ge 1$, which is true. Similarly, for $r = 1$ we have $(1+x)^1 = 1+x \ge 1+x$. Now suppose the statement is true for $r = k$: $(1+x)^k \ge 1+kx$. Then it follows that $(1+x)^{k+2} = (1+x)^k(1+x)^2 \ge (1+kx)(1+x)^2 = 1+(k+2)x + x^2(kx+2k+1) \ge 1+(k+2)x$, since $x^2 \ge 0$ as well as $kx+2k+1 = k(x+2)+1 \ge 0$ for $x \ge -2$ and $k \ge 0$. By the modified induction we conclude the statement is true for every non-negative integer $r$. By noting that if $x < -2$, then $1+rx$ is negative for $r \ge 1$ while $(1+x)^r \ge 0$ for even $r$, this gives case 3. Generalizations Generalization of exponent The exponent $r$ can be generalized to an arbitrary real number as follows: if $x \ge -1$, then $(1+x)^r \ge 1+rx$ for $r \le 0$ or $r \ge 1$, and $(1+x)^r \le 1+rx$ for $0 \le r \le 1$. This generalization can be proved by comparing derivatives. The strict versions of these inequalities require $x \ne 0$ and $r \notin \{0, 1\}$. Generalization of base Instead of $1+x$ the inequality holds also in the form $(1+x_1)(1+x_2)\cdots(1+x_r) \ge 1+x_1+x_2+\cdots+x_r$ where $x_1, x_2, \ldots, x_r$ are real numbers, all greater than $-1$, all with the same sign. Bernoulli's inequality is a spec
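A numeric spot-check of the integer-exponent case over a small grid (illustrative only; a finite check is of course not a proof):

```python
# Check (1 + x)^r >= 1 + r*x for integer r >= 0 and x >= -2 (case 2 above).
assert all((1 + x) ** r >= 1 + r * x
           for r in range(8)
           for x in [-2.0, -1.5, -1.0, -0.5, 0.0, 0.5, 3.0])
```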
https://en.wikipedia.org/wiki/Bastard%20Operator%20From%20Hell
The Bastard Operator From Hell (BOFH) is a fictional rogue computer operator created by Simon Travaglia, who takes out his anger on users (who are "lusers" to him) and others who pester him with their computer problems, uses his expertise against his enemies and manipulates his employer. Several people have written stories about BOFHs, but only those by Simon Travaglia are considered canonical. The BOFH stories were originally posted in 1992 to Usenet by Travaglia, with some being reprinted in Datamation. Since 2000 they have been published regularly in The Register (UK). Several collections of the stories have been published as books. By extension, the term is also used to refer to any system administrator who displays the qualities of the original. The early accounts of the BOFH took place in a university; later the scenes were set in an office workplace. In 2000 (BOFH 2k), the BOFH and his pimply-faced youth (PFY) assistant moved to a new company. Other characters The PFY (Pimply-Faced Youth; real name Stephen), the assistant to the BOFH, possesses a temperament similar to the BOFH's and often either teams up with or plots against him. The Boss (often portrayed as having no IT knowledge but believing otherwise; identity changes as successive bosses are sacked, leave, are committed, or have nasty "accidents") CEO of the company – The PFY's uncle Brian from 1996 until 2000, when the BOFH and PFY moved to a new company. The help desk operators, referred to as the "Helldesk" and often scolded for giving out the BOFH's personal number. The Boss's secretary, Sharon. The security department George, the cleaner (an invaluable source of information to the BOFH and PFY) Games BOFH is a text adventure game written by Howard A. Sherman and published in 2002. It is available via Malinche.
https://en.wikipedia.org/wiki/Plague%20%28disease%29
Plague is an infectious disease caused by the bacterium Yersinia pestis. Symptoms include fever, weakness and headache. Usually this begins one to seven days after exposure. There are three forms of plague, each affecting a different part of the body and causing associated symptoms. Pneumonic plague infects the lungs, causing shortness of breath, coughing and chest pain; bubonic plague affects the lymph nodes, making them swell; and septicemic plague infects the blood and can cause tissues to turn black and die. The bubonic and septicemic forms are generally spread by flea bites or handling an infected animal, whereas pneumonic plague is generally spread between people through the air via infectious droplets. Diagnosis is typically by finding the bacterium in fluid from a lymph node, blood or sputum. Those at high risk may be vaccinated. Those exposed to a case of pneumonic plague may be treated with preventive medication. If infected, treatment is with antibiotics and supportive care. Typically antibiotics include a combination of gentamicin and a fluoroquinolone. The risk of death with treatment is about 10% while without it is about 70%. Globally, about 600 cases are reported a year. In 2017, the countries with the most cases include the Democratic Republic of the Congo, Madagascar and Peru. In the United States, infections occasionally occur in rural areas, where the bacteria are believed to circulate among rodents. It has historically occurred in large outbreaks, with the best known being the Black Death in the 14th century, which resulted in more than 50 million deaths in Europe. Signs and symptoms There are several different clinical manifestations of plague. The most common form is bubonic plague, followed by septicemic and pneumonic plague. Other clinical manifestations include plague meningitis, plague pharyngitis, and ocular plague. General symptoms of plague include fever, chills, headaches, and nausea. Many people experience swelling in their lymph
https://en.wikipedia.org/wiki/Baudot%20code
The Baudot code is an early character encoding for telegraphy invented by Émile Baudot in the 1870s. It was the predecessor to the International Telegraph Alphabet No. 2 (ITA2), the most common teleprinter code in use before ASCII. Each character in the alphabet is represented by a series of five bits, sent over a communication channel such as a telegraph wire or a radio signal by asynchronous serial communication. The symbol rate measurement is known as baud, and is derived from the same name. History Baudot code (ITA1) (The original article tabulates the code at this point: columns I, II, III, IV, and V give the five-bit code, and the Let. and Fig. columns give the letter and figure assignments for the Continental and UK versions.) Baudot developed his first multiplexed telegraph in 1872 and patented it in 1874. In 1876, he changed from a six-bit code to a five-bit code, as suggested by Carl Friedrich Gauss and Wilhelm Weber in 1834, with equal on and off intervals, which allowed for transmission of the Roman alphabet, and included punctuation and control signals. The code itself was not patented (only the machine) because French patent law does not allow concepts to be patented. Baudot's 5-bit code was adapted to be sent from a manual keyboard, and no teleprinter equipment was ever constructed that used it in its original form. The code was entered on a keyboard which had just five piano-type keys and was operated using two fingers of the left hand and three fingers of the right hand. Once the keys had been pressed, they were locked down until mechanical contacts in a distributor unit passed over the sector connected to that particular keyboard, at which time the keyboard was unlocked ready for the next character to be entered, with an audible click (known as the "cadence signal") to warn the operator. Operators had to maintain a steady rhythm, and the usual speed of operation was 30 words per minute. The table "shows the allocation of th
https://en.wikipedia.org/wiki/Bestiary
A bestiary (from bestiarum vocabulum) is a compendium of beasts. Originating in the ancient world, bestiaries were made popular in the Middle Ages in illustrated volumes that described various animals and even rocks. The natural history and illustration of each beast was usually accompanied by a moral lesson. This reflected the belief that the world itself was the Word of God and that every living thing had its own special meaning. For example, the pelican, which was believed to tear open its breast to bring its young to life with its own blood, was a living representation of Jesus. Thus the bestiary is also a reference to the symbolic language of animals in Western Christian art and literature. History The bestiary — the medieval book of beasts — was among the most popular illuminated texts in northern Europe during the Middle Ages (about 500–1500). Medieval Christians understood every element of the world as a manifestation of God, and bestiaries largely focused on each animal's religious meaning. Much of what is in the bestiary came from the ancient Greeks and their philosophers. The earliest bestiary in the form in which it was later popularized was an anonymous 2nd-century Greek volume called the Physiologus, which itself summarized ancient knowledge and wisdom about animals found in the writings of classical authors, such as Aristotle's Historia Animalium and various works by Herodotus, Pliny the Elder, Solinus, Aelian and other naturalists. Following the Physiologus, Saint Isidore of Seville (Book XII of the Etymologiae) and Saint Ambrose expanded the religious message with reference to passages from the Bible and the Septuagint. They and other authors freely expanded or modified pre-existing models, constantly refining the moral content without interest or access to much more detail regarding the factual content. Nevertheless, the often fanciful accounts of these beasts were widely read and generally believed to be true. A few observations found in bestiaries, su
https://en.wikipedia.org/wiki/British%20Standards
British Standards (BS) are the standards produced by the BSI Group, which is incorporated under a royal charter and which is formally designated as the national standards body (NSB) for the UK. The BSI Group produces British Standards under the authority of the charter, which lays down the production of standards as one of the BSI's objectives. British Standards are formally defined in a 2002 memorandum of understanding between the BSI and the United Kingdom Government. Products and services which BSI certifies as having met the requirements of specific standards within designated schemes are awarded the Kitemark. History BSI Group began in 1901 as the Engineering Standards Committee, led by James Mansergh, to standardize the number and type of steel sections, in order to make British manufacturers more efficient and competitive. Over time the standards developed to cover many aspects of tangible engineering, and then engineering methodologies including quality systems, safety and security. British Standards creation The BSI Group as a whole does not produce British Standards, as standards work within the BSI is decentralized. The governing board of BSI establishes a Standards Board. The Standards Board does little apart from setting up sector boards (a sector in BSI parlance being a field of standardization such as ICT, quality, agriculture, manufacturing, or fire). Each sector board, in turn, constitutes several technical committees. It is the technical committees that, formally, approve a British Standard, which is then presented to the secretary of the supervisory sector board for endorsement of the fact that the technical committee has indeed completed a task for which it was constituted. Standards The standards produced are titled British Standard XXXX[-P]:YYYY where XXXX is the number of the standard, P is the number of the part of the standard (where the standard is split into multiple parts) and YYYY is the year in which the standard came into effect. BSI Gro
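The naming convention just described is regular enough to parse mechanically; a sketch with hypothetical example titles (real catalogues contain further variants):

```python
import re

# "BS XXXX[-P]:YYYY": standard number, optional part, year of effect.
PATTERN = re.compile(r"^BS (?P<number>\d+)(?:-(?P<part>\d+))?:(?P<year>\d{4})$")

for title in ["BS 5950-1:2000", "BS 1363:1995"]:    # illustrative titles
    print(PATTERN.match(title).groupdict())
```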
https://en.wikipedia.org/wiki/Body%20mass%20index
Body mass index (BMI) is a value derived from the mass (weight) and height of a person. The BMI is defined as the body mass divided by the square of the body height, and is expressed in units of kg/m2, resulting from mass in kilograms (kg) and height in metres (m). The BMI may be determined first by measuring its components by means of a weighing scale and a stadiometer. The multiplication and division may be carried out directly, by hand or using a calculator, or indirectly using a lookup table (or chart). The table displays BMI as a function of mass and height and may show other units of measurement (converted to metric units for the calculation). The table may also show contour lines or colours for different BMI categories. The BMI is a convenient rule of thumb used to broadly categorize a person as underweight, normal weight, overweight, or obese based on tissue mass (muscle, fat, and bone) and height. Major adult BMI classifications are underweight (under 18.5 kg/m2), normal weight (18.5 to 24.9), overweight (25 to 29.9), and obese (30 or more). When used to predict an individual's health, rather than as a statistical measurement for groups, the BMI has limitations that can make it less useful than some of the alternatives, especially when applied to individuals with abdominal obesity, short stature, or high muscle mass. BMIs under 20 and over 25 have been associated with higher all-cause mortality, with the risk increasing with distance from the 20–25 range. History Adolphe Quetelet, a Belgian astronomer, mathematician, statistician, and sociologist, devised the basis of the BMI between 1830 and 1850 as he developed what he called "social physics". Quetelet himself never intended for the index, then called the Quetelet Index, to be used as a means of medical assessment. Instead, it was a component of his study of l'homme moyen, or the average man. Quetelet thought of the average man as a social ideal, and developed the body mass index as a means of discovering the socially ideal human person. According to Lars Grue
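The definition translates directly into code; a minimal sketch using the adult categories listed above (for illustration, not medical advice):

```python
def bmi(mass_kg, height_m):
    """BMI = mass (kg) divided by the square of height (m), in kg/m^2."""
    return mass_kg / height_m ** 2

def classify(value):
    if value < 18.5:
        return "underweight"
    if value < 25:
        return "normal weight"
    if value < 30:
        return "overweight"
    return "obese"

b = bmi(70.0, 1.75)                      # ~22.9 kg/m^2
print(f"{b:.1f} kg/m^2 -> {classify(b)}")
```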
https://en.wikipedia.org/wiki/BeOS
BeOS is an operating system for personal computers first developed by Be Inc. in 1990. It was first written to run on BeBox hardware. BeOS was positioned as a multimedia platform that could be used by a substantial population of desktop users and a competitor to Classic Mac OS and Microsoft Windows. It was ultimately unable to achieve a significant market share, and did not prove commercially viable for Be Inc. The company was acquired by Palm, Inc. Today BeOS is mainly used, and derivatives developed, by a small population of enthusiasts. The open-source operating system Haiku is a continuation of BeOS concepts and most of the application level compatibility. The latest version, Beta 4 released December 2022, still retains BeOS 5 compatibility in its x86 32-bit images. History Initially designed to run on AT&T Hobbit-based hardware, BeOS was later modified to run on PowerPC-based processors: first Be's own systems, later Apple Computer's PowerPC Reference Platform and Common Hardware Reference Platform, with the hope that Apple would purchase or license BeOS as a replacement for its aging Classic Mac OS. Toward the end of 1996, Apple was still looking for a replacement to Copland in their operating system strategy. Amidst rumours of Apple's interest in purchasing BeOS, Be wanted to increase their user base, to try to convince software developers to write software for the operating system. Be courted Macintosh clone vendors to ship BeOS with their hardware. Apple CEO Gil Amelio started negotiations to buy Be Inc., but negotiations stalled when Be CEO Jean-Louis Gassée wanted $300 million; Apple was unwilling to offer any more than $125 million. Apple's board of directors decided NeXTSTEP was a better choice and purchased NeXT in 1996 for $429 million, bringing back Apple co-founder Steve Jobs. In 1997, Power Computing began bundling BeOS (on a CD for optional installation) with its line of PowerPC-based Macintosh clones. These systems could dual boot either
https://en.wikipedia.org/wiki/Biome
A biome () is a biogeographical unit consisting of a biological community that has formed in response to the physical environment in which they are found and a shared regional climate. Biomes may span more than one continent. Biome is a broader term than habitat and can comprise a variety of habitats. While a biome can cover small areas, a microbiome is a mix of organisms that coexist in a defined space on a much smaller scale. For example, the human microbiome is the collection of bacteria, viruses, and other microorganisms that are present on or in a human body. A biota is the total collection of organisms of a geographic region or a time period, from local geographic scales and instantaneous temporal scales all the way up to whole-planet and whole-timescale spatiotemporal scales. The biotas of the Earth make up the biosphere. Etymology The term was suggested in 1916 by Clements, originally as a synonym for biotic community of Möbius (1877). Later, it gained its current definition, based on earlier concepts of phytophysiognomy, formation and vegetation (used in opposition to flora), with the inclusion of the animal element and the exclusion of the taxonomic element of species composition. In 1935, Tansley added the climatic and soil aspects to the idea, calling it ecosystem. The International Biological Program (1964–74) projects popularized the concept of biome. However, in some contexts, the term biome is used in a different manner. In German literature, particularly in the Walter terminology, the term is used similarly as biotope (a concrete geographical unit), while the biome definition used in this article is used as an international, non-regional, terminology—irrespectively of the continent in which an area is present, it takes the same biome name—and corresponds to his "zonobiome", "orobiome" and "pedobiome" (biomes determined by climate zone, altitude or soil). In Brazilian literature, the term "biome" is sometimes used as synonym of biogeographic pr
https://en.wikipedia.org/wiki/Behavior
Behavior (American English) or behaviour (British English) is the range of actions and mannerisms made by individuals, organisms, systems or artificial entities in some environment. These systems can include other systems or organisms as well as the inanimate physical environment. It is the computed response of the system or organism to various stimuli or inputs, whether internal or external, conscious or subconscious, overt or covert, and voluntary or involuntary. Taking a behavior informatics perspective, a behavior consists of actor, operation, interactions, and their properties. This can be represented as a behavior vector. Models Biology Although disagreement exists as to how to precisely define behavior in a biological context, one common interpretation based on a meta-analysis of scientific literature states that "behavior is the internally coordinated responses (actions or inactions) of whole living organisms (individuals or groups) to internal or external stimuli". A broader definition of behavior, applicable to plants and other organisms, is similar to the concept of phenotypic plasticity. It describes behavior as a response to an event or environment change during the course of the lifetime of an individual, differing from other physiological or biochemical changes that occur more rapidly, and excluding changes that are a result of development (ontogeny). Behaviors can be either innate or learned from the environment. Behavior can be regarded as any action of an organism that changes its relationship to its environment. Behavior provides outputs from the organism to the environment. Human behavior The endocrine system and the nervous system likely influence human behavior. Complexity in the behavior of an organism may be correlated to the complexity of its nervous system. Generally, organisms with more complex nervous systems have a greater capacity to learn new responses and thus adjust their behavior. Animal behavior Ethology is the scientifi
https://en.wikipedia.org/wiki/Biosphere
The biosphere (from Greek βίος bíos "life" and σφαῖρα sphaira "sphere"), also known as the ecosphere (from Greek οἶκος oîkos "environment" and σφαῖρα), is the worldwide sum of all ecosystems. It can also be termed the zone of life on Earth. The biosphere (which is technically a spherical shell) is virtually a closed system with regard to matter, with minimal inputs and outputs. Regarding energy, it is an open system, with photosynthesis capturing solar energy at a rate of around 100 terawatts. By the most general biophysiological definition, the biosphere is the global ecological system integrating all living beings and their relationships, including their interaction with the elements of the lithosphere, cryosphere, hydrosphere, and atmosphere. The biosphere is postulated to have evolved, beginning with a process of biopoiesis (life created naturally from matter, such as simple organic compounds) or biogenesis (life created from living matter), at least some 3.5 billion years ago. In a general sense, biospheres are any closed, self-regulating systems containing ecosystems. This includes artificial biospheres such as Biosphere 2 and BIOS-3, and potentially ones on other planets or moons. Origin and use of the term The term "biosphere" was coined in 1875 by geologist Eduard Suess, who defined it as the place on Earth's surface where life dwells. While the concept has a geological origin, it is an indication of the effect of both Charles Darwin and Matthew F. Maury on the Earth sciences. The biosphere's ecological context comes from the 1920s (see Vladimir I. Vernadsky), preceding the 1935 introduction of the term "ecosystem" by Sir Arthur Tansley (see ecology history). Vernadsky defined ecology as the science of the biosphere. It is an interdisciplinary concept for integrating astronomy, geophysics, meteorology, biogeography, evolution, geology, geochemistry, hydrology and, generally speaking, all life and Earth sciences. Narrow definition Geochemists define the biosphere as
https://en.wikipedia.org/wiki/Biological%20membrane
A biological membrane, biomembrane or cell membrane is a selectively permeable membrane that separates the interior of a cell from the external environment or creates intracellular compartments by serving as a boundary between one part of the cell and another. Biological membranes, in the form of eukaryotic cell membranes, consist of a phospholipid bilayer with embedded, integral and peripheral proteins used in communication and transportation of chemicals and ions. The bulk of lipids in a cell membrane provides a fluid matrix for proteins to rotate and laterally diffuse for physiological functioning. Proteins are adapted to the high membrane fluidity environment of the lipid bilayer with the presence of an annular lipid shell, consisting of lipid molecules bound tightly to the surface of integral membrane proteins. The cell membranes are different from the isolating tissues formed by layers of cells, such as mucous membranes, basement membranes, and serous membranes. Composition Asymmetry The lipid bilayer consists of two layers: an outer leaflet and an inner leaflet. The components of bilayers are distributed unequally between the two surfaces to create asymmetry between the outer and inner surfaces. This asymmetric organization is important for cell functions such as cell signaling. The asymmetry of the biological membrane reflects the different functions of the two leaflets of the membrane. As seen in the fluid membrane model of the phospholipid bilayer, the outer leaflet and inner leaflet of the membrane are asymmetrical in their composition. Certain proteins and lipids rest only on one surface of the membrane and not the other. Both the plasma membrane and internal membranes have cytosolic and exoplasmic faces, and this orientation is maintained during membrane trafficking: proteins, lipids, and glycoconjugates facing the lumen of the ER and Golgi get expressed on the extracellular side of the plasma membrane. In eukaryotic cells, new phospholipids are manufactur
https://en.wikipedia.org/wiki/Bohr%20model
In atomic physics, the Bohr model or Rutherford–Bohr model of the atom, presented by Niels Bohr and Ernest Rutherford in 1913, consists of a small, dense nucleus surrounded by orbiting electrons. It is analogous to the structure of the Solar System, but with attraction provided by electrostatic force rather than gravity, and with the electron energies quantized (assuming only discrete values). In the history of atomic physics, it followed, and ultimately replaced, several earlier models, including Joseph Larmor's Solar System model (1897), Jean Perrin's model (1901), the cubical model (1902), Hantaro Nagaoka's Saturnian model (1904), the plum pudding model (1904), Arthur Haas's quantum model (1910), the Rutherford model (1911), and John William Nicholson's nuclear quantum model (1912). The improvement over the 1911 Rutherford model mainly concerned the new quantum mechanical interpretation introduced by Haas and Nicholson, while forsaking any attempt to explain radiation according to classical physics. The model's key success lay in explaining the Rydberg formula for hydrogen's spectral emission lines. While the Rydberg formula had been known experimentally, it did not gain a theoretical basis until the Bohr model was introduced. Not only did the Bohr model explain the reasons for the structure of the Rydberg formula, it also provided a justification for the fundamental physical constants that make up the formula's empirical results. The Bohr model is a relatively primitive model of the hydrogen atom, compared to the valence shell model. As a theory, it can be derived as a first-order approximation of the hydrogen atom using the broader and much more accurate quantum mechanics and thus may be considered to be an obsolete scientific theory. However, because of its simplicity, and its correct results for selected systems (see below for application), the Bohr model is still commonly taught to introduce students to quantum mechanics or energy level diagrams before mov
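The model's quantitative core is easy to sketch numerically. The following Python snippet is not from the article; the constants are standard reference values, and it simply evaluates the quantized energy levels E_n = E_1/n^2 and the Rydberg formula for an emission wavelength:

E1_EV = -13.605693  # hydrogen ground-state energy in electronvolts
R_H = 1.0967758e7   # Rydberg constant for hydrogen, in 1/m

def energy_level(n: int) -> float:
    """Energy of the n-th Bohr orbit in eV (E_n = E_1 / n^2)."""
    return E1_EV / n**2

def emission_wavelength_nm(n_upper: int, n_lower: int) -> float:
    """Photon wavelength for a drop from n_upper to n_lower,
    via 1/lambda = R_H * (1/n_lower^2 - 1/n_upper^2)."""
    inv_lambda = R_H * (1 / n_lower**2 - 1 / n_upper**2)
    return 1e9 / inv_lambda  # metres -> nanometres

print(energy_level(1))               # -13.605693 eV, the ground state
print(emission_wavelength_nm(3, 2))  # ~656 nm, the red Balmer H-alpha line

The second result reproduces the kind of spectral-line prediction that made the model's explanation of the Rydberg formula its key success.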
https://en.wikipedia.org/wiki/Blitz%20BASIC
Blitz BASIC is the programming language dialect of the first Blitz compilers, devised by New Zealand-based developer Mark Sibly. Being derived from BASIC, Blitz syntax was designed to be easy to pick up for beginners first learning to program. The languages are game-programming oriented but are often general-purpose enough to be used for most types of application. The Blitz language evolved as new products were released, with recent incarnations offering support for more advanced programming techniques such as object-orientation and multithreading. This led to the languages losing their BASIC moniker in later years. History The first iteration of the Blitz language was created for the Amiga platform and published by the Australian firm Memory and Storage Technology. After Sibly returned to New Zealand, Blitz BASIC 2 was published several years later (around 1993, according to a contemporary press release) by Acid Software (a local Amiga game publisher). Since then, Blitz compilers have been released on several platforms. Following the demise of the Amiga as a commercially viable platform, the Blitz BASIC 2 source code was released to the Amiga community. Development continues to this day under the name AmiBlitz. BlitzBasic Idigicon published BlitzBasic for Microsoft Windows in October 2000. The language included a built-in API for performing basic 2D graphics and audio operations. Following the release of Blitz3D, BlitzBasic is often synonymously referred to as Blitz2D. Recognition of BlitzBasic increased when a limited range of "free" versions were distributed in popular UK computer magazines such as PC Format. This resulted in a legal dispute between the developer and publisher which was eventually resolved amicably. BlitzPlus In February 2003, Blitz Research Ltd. released BlitzPlus, also for Microsoft Windows. It lacked the 3D engine of Blitz3D, but did bring new features to the 2D side of the language by implementing limited Microsoft Windows control support for creating nati
https://en.wikipedia.org/wiki/Blood%20alcohol%20content
Blood alcohol content (BAC), also called blood alcohol concentration or blood alcohol level, is a measurement of alcohol intoxication used for legal or medical purposes; it is expressed as mass of alcohol per volume of blood. For example, a BAC of 0.10 (0.10% or one tenth of one percent) means that there is 0.10 g of alcohol for every 100 mL of blood, which is the same as 21.7 mmol/L. A BAC of 0.0 is sober; in different countries the maximum permitted BAC when driving ranges from about 0.02% to 0.08%; BAC levels over 0.08% are considered impaired; above 0.40% is potentially fatal. Effects by alcohol level As BAC increases, the short-term effects of alcohol become more perceptible. At low levels of impairment (BAC 0.01–0.05%), people may experience mild relaxation and reduced social inhibition, along with impaired judgment and coordination. At moderate levels of impairment (BAC 0.06–0.20%), effects can include emotional swings, impaired vision, hearing, speech, and motor skills. Beginning at a BAC greater than 0.2%, people may experience urinary incontinence, vomiting, and symptoms of alcohol intoxication. At a BAC greater than 0.3%, people may experience total loss of consciousness and show signs of severe alcohol intoxication. A BAC of 0.4% or higher is potentially fatal and can result in a coma or respiratory failure. Estimation Direct measurement Blood samples for BAC analysis are typically obtained by taking a venous blood sample from the arm. A variety of methods exist for determining blood-alcohol concentration in a blood sample. Forensic laboratories typically use headspace-gas chromatography combined with mass spectrometry or flame ionization detection, as this method is accurate and efficient. Hospitals typically use enzyme multiplied immunoassay, which measures the co-enzyme NADH. This method is more subject to error but may be performed rapidly in parallel with other blood sample measurements. By breathalyzer The amount of alcohol on the breath can be me
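The unit conversion stated above is simple enough to check in code. A minimal Python sketch (the molar mass of ethanol is a standard reference value, not from the article):

ETHANOL_MOLAR_MASS = 46.07  # g/mol, standard reference value

def bac_percent_to_mmol_per_litre(bac_percent: float) -> float:
    """Convert a BAC in percent (grams per 100 mL) to mmol/L of ethanol."""
    grams_per_litre = bac_percent * 10.0  # 0.10 g/100 mL == 1.0 g/L
    return grams_per_litre / ETHANOL_MOLAR_MASS * 1000.0

print(round(bac_percent_to_mmol_per_litre(0.10), 1))  # 21.7, as stated above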
https://en.wikipedia.org/wiki/Bucket%20argument
Isaac Newton's rotating bucket argument (also known as Newton's bucket) was designed to demonstrate that true rotational motion cannot be defined as the relative rotation of the body with respect to the immediately surrounding bodies. It is one of five arguments from the "properties, causes, and effects" of "true motion and rest" that support his contention that, in general, true motion and rest cannot be defined as special instances of motion or rest relative to other bodies, but instead can be defined only by reference to absolute space. Alternatively, these experiments provide an operational definition of what is meant by "absolute rotation", and do not pretend to address the question of "rotation relative to what?" General relativity dispenses with absolute space, replacing physics whose cause is external to the system with the concept of geodesics of spacetime. Background These arguments, and a discussion of the distinctions between absolute and relative time, space, place and motion, appear in a scholium at the end of the Definitions section in Book I of Newton's work, The Mathematical Principles of Natural Philosophy (1687) (not to be confused with General Scholium at the end of Book III), which established the foundations of classical mechanics and introduced his law of universal gravitation, which yielded the first quantitatively adequate dynamical explanation of planetary motion. Despite their embrace of the principle of rectilinear inertia and the recognition of the kinematical relativity of apparent motion (which underlies whether the Ptolemaic or the Copernican system is correct), natural philosophers of the seventeenth century continued to consider true motion and rest as physically separate descriptors of an individual body. The dominant view Newton opposed was devised by René Descartes, and was supported (in part) by Gottfried Leibniz. It held that empty space is a metaphysical impossibility because space is nothing other than the extension of matte
https://en.wikipedia.org/wiki/Background%20radiation
Background radiation is a measure of the level of ionizing radiation present in the environment at a particular location which is not due to deliberate introduction of radiation sources. Background radiation originates from a variety of sources, both natural and artificial. These include both cosmic radiation and environmental radioactivity from naturally occurring radioactive materials (such as radon and radium), as well as man-made medical X-rays, fallout from nuclear weapons testing and nuclear accidents. Definition Background radiation is defined by the International Atomic Energy Agency as "Dose or the dose rate (or an observed measure related to the dose or dose rate) attributable to all sources other than the one(s) specified." So a distinction is made between the dose which is already in a location, which is defined here as being "background", and the dose due to a deliberately introduced and specified source. This is important where radiation measurements are taken of a specified radiation source, where the existing background may affect this measurement. An example would be measurement of radioactive contamination in a gamma radiation background, which could increase the total reading above that expected from the contamination alone. However, if no radiation source is specified as being of concern, then the total radiation dose measurement at a location is generally called the background radiation, and this is usually the case where an ambient dose rate is measured for environmental purposes. Background dose rate examples Background radiation varies with location and time. Natural background radiation Radioactive material is found throughout nature. Detectable amounts occur naturally in soil, rocks, water, air, and vegetation, from which it is inhaled and ingested into the body. In addition to this internal exposure, humans also receive external exposure from radioactive materials that remain outside the body a
https://en.wikipedia.org/wiki/Beta%20sheet
The beta sheet (β-sheet, also β-pleated sheet) is a common motif of the regular protein secondary structure. Beta sheets consist of beta strands (β-strands) connected laterally by at least two or three backbone hydrogen bonds, forming a generally twisted, pleated sheet. A β-strand is a stretch of polypeptide chain typically 3 to 10 amino acids long with its backbone in an extended conformation. The supramolecular association of β-sheets has been implicated in the formation of the fibrils and protein aggregates observed in amyloidosis, Alzheimer's disease and other proteinopathies. History The first β-sheet structure was proposed by William Astbury in the 1930s. He proposed the idea of hydrogen bonding between the peptide bonds of parallel or antiparallel extended β-strands. However, Astbury did not have the necessary data on the bond geometry of the amino acids in order to build accurate models, especially since he did not then know that the peptide bond was planar. A refined version was proposed by Linus Pauling and Robert Corey in 1951. Their model incorporated the planarity of the peptide bond, which they previously explained as resulting from keto-enol tautomerization. Structure and orientation Geometry The majority of β-strands are arranged adjacent to other strands and form an extensive hydrogen bond network with their neighbors in which the N−H groups in the backbone of one strand establish hydrogen bonds with the C=O groups in the backbone of the adjacent strands. In the fully extended β-strand, successive side chains point straight up and straight down in an alternating pattern. Adjacent β-strands in a β-sheet are aligned so that their Cα atoms are adjacent and their side chains point in the same direction. The "pleated" appearance of β-strands arises from tetrahedral chemical bonding at the Cα atom; for example, if a side chain points straight up, then the bonds to the C′ must point slightly downwards, since its bond angle is approximately 109.5°. The pl
https://en.wikipedia.org/wiki/Blue%20whale
The blue whale (Balaenoptera musculus) is a marine mammal and a baleen whale. Reaching a maximum confirmed length of 29.9 m (98 ft) and weighing up to 199 tonnes, it is the largest animal known ever to have existed. The blue whale's long and slender body can be of various shades of greyish-blue dorsally and somewhat lighter underneath. Four subspecies are recognized: B. m. musculus in the North Atlantic and North Pacific, B. m. intermedia in the Southern Ocean, B. m. brevicauda (the pygmy blue whale) in the Indian Ocean and South Pacific Ocean, and B. m. indica in the Northern Indian Ocean. There is also a population in the waters off Chile that may constitute a fifth subspecies. In general, blue whale populations migrate between their summer feeding areas near the poles and their winter breeding grounds near the tropics. There is also evidence of year-round residencies, and partial or age/sex-based migration. Blue whales are filter feeders; their diet consists almost exclusively of krill. They are generally solitary or gather in small groups, and have no well-defined social structure other than mother-calf bonds. The fundamental frequency for blue whale vocalizations ranges from 8 to 25 Hz and the production of vocalizations may vary by region, season, behavior, and time of day. Orcas are their only natural predators. The blue whale was once abundant in nearly all the Earth's oceans until the end of the 19th century. It was hunted almost to the point of extinction by whalers until the International Whaling Commission banned all blue whale hunting in 1966. The International Union for Conservation of Nature has listed blue whales as Endangered as of 2018. It continues to face numerous man-made threats such as ship strikes, pollution, ocean noise and climate change. Taxonomy Nomenclature The genus name, Balaenoptera, means winged whale while the species name, musculus, could mean "muscle" or a diminutive form of "mouse", possibly a pun by Carl Linnaeus when he named the species in Systema N
https://en.wikipedia.org/wiki/Naive%20set%20theory
Naive set theory is any of several theories of sets used in the discussion of the foundations of mathematics. Unlike axiomatic set theories, which are defined using formal logic, naive set theory is defined informally, in natural language. It describes the aspects of mathematical sets familiar in discrete mathematics (for example Venn diagrams and symbolic reasoning about their Boolean algebra), and suffices for the everyday use of set theory concepts in contemporary mathematics. Sets are of great importance in mathematics; in modern formal treatments, most mathematical objects (numbers, relations, functions, etc.) are defined in terms of sets. Naive set theory suffices for many purposes, while also serving as a stepping stone towards more formal treatments. Method A naive theory in the sense of "naive set theory" is a non-formalized theory, that is, a theory that uses natural language to describe sets and operations on sets. The words and, or, if ... then, not, for some, for every are treated as in ordinary mathematics. As a matter of convenience, use of naive set theory and its formalism prevails even in higher mathematics – including in more formal settings of set theory itself. The first development of set theory was a naive set theory. It was created at the end of the 19th century by Georg Cantor as part of his study of infinite sets and developed by Gottlob Frege in his Grundgesetze der Arithmetik. Naive set theory may refer to several very distinct notions. It may refer to: an informal presentation of an axiomatic set theory, e.g. as in Naive Set Theory by Paul Halmos; early or later versions of Georg Cantor's theory and other informal systems; or decidedly inconsistent theories (whether axiomatic or not), such as a theory of Gottlob Frege that yielded Russell's paradox, and theories of Giuseppe Peano and Richard Dedekind. Paradoxes The assumption that any property may be used to form a set, without restriction, leads to paradoxes. One common example is Rus
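The informal set operations described here map directly onto the built-in set type of a language such as Python; a small illustration with invented example sets:

A = {1, 2, 3, 4}
B = {3, 4, 5}

print(A | B)        # union: {1, 2, 3, 4, 5}
print(A & B)        # intersection: {3, 4}
print(A - B)        # difference: {1, 2}
print({3, 4} <= A)  # subset test: True

Such programmatic sets are finite collections of concrete elements, so the unrestricted comprehension that leads to the paradoxes discussed below cannot even be expressed in them.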
https://en.wikipedia.org/wiki/B%C3%A9zout%27s%20identity
In mathematics, Bézout's identity (also called Bézout's lemma), named after Étienne Bézout who proved it for polynomials, is the following theorem: if a and b are integers with greatest common divisor d, then there exist integers x and y such that ax + by = d. Here the greatest common divisor of a and b is taken to be d. The integers x and y are called Bézout coefficients for (a, b); they are not unique. A pair of Bézout coefficients can be computed by the extended Euclidean algorithm, and this pair is, in the case of integers, one of the two pairs such that |x| ≤ |b/d| and |y| ≤ |a/d|; equality occurs only if one of a and b is a multiple of the other. As an example, the greatest common divisor of 15 and 69 is 3, and 3 can be written as a combination of 15 and 69 as 15 × (−9) + 69 × 2 = 3, with Bézout coefficients −9 and 2. Many other theorems in elementary number theory, such as Euclid's lemma or the Chinese remainder theorem, result from Bézout's identity. A Bézout domain is an integral domain in which Bézout's identity holds. In particular, Bézout's identity holds in principal ideal domains. Every theorem that results from Bézout's identity is thus true in all principal ideal domains. Structure of solutions If a and b are not both zero and one pair of Bézout coefficients (x, y) has been computed (for example, using the extended Euclidean algorithm), all pairs can be represented in the form (x − k·b/d, y + k·a/d), where k is an arbitrary integer, d is the greatest common divisor of a and b, and the fractions simplify to integers. If a and b are both nonzero, then exactly two of these pairs of Bézout coefficients satisfy |x| < |b/d| and |y| < |a/d|, and equality may occur only if one of a and b divides the other. This relies on a property of Euclidean division: given two non-zero integers c and d, if d does not divide c, there is exactly one pair (q, r) such that c = dq + r and 0 < r < |d|, and another one such that c = dq + r and −|d| < r < 0. The two pairs of small Bézout's coefficients are obtained from the given one (x, y) by choosing for k in the above formula either of the two integers next to x/(b/d). The extended Euclidean algorithm always produces one of these two minimal pairs. Example Let a = 15 and b = 69, then d = 3. Then the following Bézout's identities a
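A pair of Bézout coefficients can be computed with the extended Euclidean algorithm mentioned above. A minimal Python sketch (the function name is invented here):

def extended_gcd(a: int, b: int):
    """Return (g, x, y) such that a*x + b*y == g == gcd(a, b)."""
    old_r, r = a, b
    old_x, x = 1, 0
    old_y, y = 0, 1
    while r != 0:
        q = old_r // r
        # Maintain the invariants old_r = a*old_x + b*old_y and r = a*x + b*y
        old_r, r = r, old_r - q * r
        old_x, x = x, old_x - q * x
        old_y, y = y, old_y - q * y
    return old_r, old_x, old_y

print(extended_gcd(15, 69))  # (3, -9, 2), since 15*(-9) + 69*2 == 3

This reproduces the example above: gcd(15, 69) = 3 with Bézout coefficients −9 and 2, one of the two minimal pairs the algorithm is guaranteed to produce.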
https://en.wikipedia.org/wiki/Bovril
Bovril is the trademarked name of a thick and salty meat extract paste, similar to a yeast extract, developed in the 1870s by John Lawson Johnston. It is sold in a distinctive bulbous jar and as cubes and granules. Bovril is owned and distributed by Unilever UK. Its appearance is similar to the British Marmite and its Australian equivalent Vegemite; however, unlike these products, Bovril is not vegetarian. Bovril can be made into a drink (referred to in the UK as a "beef tea") by diluting with hot water or, less commonly, with milk. It can be used as a flavouring for soups, broth, stews or porridge, or as a spread, especially on toast in a similar fashion to Marmite and Vegemite. Etymology The first part of the product's name comes from Latin bos, meaning "ox". Johnston took the -vril suffix from Edward Bulwer-Lytton's then-popular novel, The Coming Race (1871), the plot of which revolves around a superior race of people, the Vril-ya, who derive their powers from an electromagnetic substance named "Vril". Therefore, Bovril indicates great strength obtained from an ox. History In 1870, in the Franco-Prussian War, Napoleon III ordered one million cans of beef to feed his troops. The task of providing this went to John Lawson Johnston, a Scottish butcher living in Canada. Large quantities of beef were available across the British Dominions and South America, but transport and storage were problematic. Therefore, Johnston created a product known as 'Johnston's Fluid Beef', later called Bovril, to meet Napoleon's needs. By 1888, over 3,000 UK public houses, grocers and dispensing chemists were selling Bovril. In 1889, Bovril Ltd was formed to develop Johnston's business further. During the 1900 Siege of Ladysmith in the Second Boer War, a Bovril-like paste was produced from horsemeat within the garrison. Nicknamed Chevril (a portmanteau of Bovril and cheval, French for horse) it was made by boiling down horse or mule meat to a jelly and serving it as a beef tea-like m
https://en.wikipedia.org/wiki/Bernoulli%20number
In mathematics, the Bernoulli numbers B_n are a sequence of rational numbers which occur frequently in analysis. The Bernoulli numbers appear in (and can be defined by) the Taylor series expansions of the tangent and hyperbolic tangent functions, in Faulhaber's formula for the sum of m-th powers of the first n positive integers, in the Euler–Maclaurin formula, and in expressions for certain values of the Riemann zeta function. The values of the first 20 Bernoulli numbers are given in the adjacent table. Two conventions are used in the literature, denoted here by B_n^− and B_n^+; they differ only for n = 1, where B_1^− = −1/2 and B_1^+ = +1/2. For every odd n > 1, B_n = 0. For every even n > 0, B_n is negative if n is divisible by 4 and positive otherwise. The Bernoulli numbers are special values of the Bernoulli polynomials B_n(x), with B_n^− = B_n(0) and B_n^+ = B_n(1). The Bernoulli numbers were discovered around the same time by the Swiss mathematician Jacob Bernoulli, after whom they are named, and independently by Japanese mathematician Seki Takakazu. Seki's discovery was posthumously published in 1712 in his work Katsuyō Sanpō; Bernoulli's, also posthumously, in his Ars Conjectandi of 1713. Ada Lovelace's note G on the Analytical Engine from 1842 describes an algorithm for generating Bernoulli numbers with Babbage's machine. As a result, the Bernoulli numbers have the distinction of being the subject of the first published complex computer program. Notation The superscript ± used in this article distinguishes the two sign conventions for Bernoulli numbers. Only the n = 1 term is affected: B_n^− with B_1^− = −1/2 is the sign convention prescribed by NIST and most modern textbooks. B_n^+ with B_1^+ = +1/2 was used in the older literature, and (since 2022) by Donald Knuth following Peter Luschny's "Bernoulli Manifesto". In the formulas below, one can switch from one sign convention to the other with the relation B_n^+ = (−1)^n B_n^−, or for integer n = 2 or greater, simply ignore it. Since B_n = 0 for all odd n > 1, and many formulas only involve even-index Bernoulli numbers, a few authors write "B_n" instead o
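The standard recurrence sum over k from 0 to m of C(m+1, k)·B_k = 0 (for m ≥ 1, with B_0 = 1) gives a short exact-arithmetic generator for the B^− convention. The Python sketch below is a direct implementation of that recurrence, not a reconstruction of Lovelace's algorithm:

from fractions import Fraction
from math import comb

def bernoulli(n_max: int):
    """First Bernoulli numbers B_0..B_{n_max} in the B^- convention (B_1 = -1/2)."""
    B = [Fraction(1)]
    for m in range(1, n_max + 1):
        # Solve sum_{k=0}^{m} C(m+1, k) * B_k = 0 for B_m.
        s = sum(comb(m + 1, k) * B[k] for k in range(m))
        B.append(-s / (m + 1))
    return B

print(bernoulli(8))
# [Fraction(1, 1), Fraction(-1, 2), Fraction(1, 6), Fraction(0, 1),
#  Fraction(-1, 30), Fraction(0, 1), Fraction(1, 42), Fraction(0, 1),
#  Fraction(-1, 30)]

The output shows the pattern described above: odd indices beyond 1 vanish, and the nonzero even-index values alternate in sign.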
https://en.wikipedia.org/wiki/Zebrafish
The zebrafish (Danio rerio) is a freshwater fish belonging to the minnow family (Cyprinidae) of the order Cypriniformes. Native to India and South Asia, it is a popular aquarium fish, frequently sold under the trade name zebra danio (and thus often called a "tropical fish" although both tropical and subtropical). It is also found in private ponds. The zebrafish is an important and widely used vertebrate model organism in scientific research. It has been used extensively in biomedicine and developmental biology, and in behavioral research on neurobehavioral phenomena and on cognitive and affective disorders, including substance abuse. Taxonomy The zebrafish is a derived member of the genus Brachydanio, of the family Cyprinidae. It has a sister-group relationship with Danio aesculapii. Zebrafish are also closely related to the genus Devario, as demonstrated by a phylogenetic tree of close species. Distribution Range The zebrafish is native to freshwater habitats in South Asia where it is found in India, Pakistan, Bangladesh, Nepal and Bhutan. The northern limit is in the South Himalayas, ranging from the Sutlej river basin in the Pakistan–India border region to the state of Arunachal Pradesh in northeast India. Its range is concentrated in the Ganges and Brahmaputra River basins, and the species was first described from the Kosi River (lower Ganges basin) of India. Its range further south is more local, with scattered records from the Western and Eastern Ghats regions. It has frequently been said to occur in Myanmar (Burma), but this is entirely based on pre-1930 records and likely refers to close relatives only described later, notably Danio kyathit. Likewise, old records from Sri Lanka are highly questionable and remain unconfirmed. Zebrafish have been introduced to California, Connecticut, Florida and New Mexico in the United States, presumably by deliberate release
https://en.wikipedia.org/wiki/Bistability
In a dynamical system, bistability means the system has two stable equilibrium states. A bistable structure can be resting in either of two states. An example of a mechanical device which is bistable is a light switch. The switch lever is designed to rest in the "on" or "off" position, but not between the two. Bistable behavior can occur in mechanical linkages, electronic circuits, nonlinear optical systems, chemical reactions, and physiological and biological systems. In a conservative force field, bistability stems from the fact that the potential energy has two local minima, which are the stable equilibrium points. These rest states need not have equal potential energy. By mathematical arguments, a local maximum, an unstable equilibrium point, must lie between the two minima. At rest, a particle will be in one of the minimum equilibrium positions, because that corresponds to the state of lowest energy. The maximum can be visualized as a barrier between them. A system can transition from one state of minimal energy to the other if it is given enough activation energy to penetrate the barrier (compare activation energy and Arrhenius equation for the chemical case). After the barrier has been reached, assuming the system has damping, it will relax into the other minimum state in a time called the relaxation time. Bistability is widely used in digital electronics devices to store binary data. It is the essential characteristic of the flip-flop, a circuit which is a fundamental building block of computers and some types of semiconductor memory. A bistable device can store one bit of binary data, with one state representing a "0" and the other state a "1". It is also used in relaxation oscillators, multivibrators, and the Schmitt trigger. Optical bistability is an attribute of certain optical devices where two resonant transmission states are possible and stable, dependent on the input. Bistability can also arise in biochemical systems, where it creates digi
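The bit-storage role of bistability can be sketched with a software model of a cross-coupled NOR (SR) latch. The Python toy below is purely illustrative and does not describe any particular circuit:

def nor(a: int, b: int) -> int:
    return int(not (a or b))

def settle(s: int, r: int, q: int = 0, q_bar: int = 1):
    """Iterate the two cross-coupled NOR gates until the feedback settles."""
    for _ in range(4):  # a few steps suffice for this tiny loop
        q, q_bar = nor(r, q_bar), nor(s, q)
    return q, q_bar

q, qb = settle(s=1, r=0)                 # "set" input drives the latch to 1
q, qb = settle(s=0, r=0, q=q, q_bar=qb)  # inputs removed: the state is held
print(q)  # 1 -- one of the two stable states persists, storing one bit

With both inputs at 0 the loop has two stable fixed points, (q, q_bar) = (1, 0) and (0, 1); which one the system occupies records the stored bit, exactly the flip-flop behavior described above.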
https://en.wikipedia.org/wiki/Berry%20paradox
The Berry paradox is a self-referential paradox arising from an expression like "The smallest positive integer not definable in under sixty letters" (a phrase with fifty-seven letters). Bertrand Russell, the first to discuss the paradox in print, attributed it to G. G. Berry (1867–1928), a junior librarian at Oxford's Bodleian Library. Russell called Berry "the only person in Oxford who understood mathematical logic". The paradox was called "Richard's paradox" by Jean-Yves Girard. Overview Consider the expression: "The smallest positive integer not definable in under sixty letters." Since there are only twenty-six letters in the English alphabet, there are finitely many phrases of under sixty letters, and hence finitely many positive integers that are defined by phrases of under sixty letters. Since there are infinitely many positive integers, this means that there are positive integers that cannot be defined by phrases of under sixty letters. If there are positive integers that satisfy a given property, then there is a smallest positive integer that satisfies that property; therefore, there is a smallest positive integer satisfying the property "not definable in under sixty letters". This is the integer to which the above expression refers. But the above expression is only fifty-seven letters long, therefore it is definable in under sixty letters, and is not the smallest positive integer not definable in under sixty letters, and is not defined by this expression. This is a paradox: there must be an integer defined by this expression, but since the expression is self-contradictory (any integer it defines is definable in under sixty letters), there cannot be any integer defined by it. Perhaps another helpful analogy to Berry's Paradox would be the phrase "indescribable feeling". If the feeling is indeed indescribable, then no description of the feeling would be true. But if the word "indescribable" communicates something about the feeling, then it may be cons
https://en.wikipedia.org/wiki/Chess
Chess is a board game for two players, called White and Black, each controlling an army of chess pieces, with the objective to checkmate the opponent's king. It is sometimes called international chess or Western chess to distinguish it from related games such as xiangqi (Chinese chess) and shogi (Japanese chess). The recorded history of chess goes back at least to the emergence of a similar game, chaturanga, in seventh-century India. The rules of chess as they are known today emerged in Europe at the end of the 15th century, with standardization and universal acceptance by the end of the 19th century. Today, chess is one of the world's most popular games played by millions of people worldwide. Chess is an abstract strategy game that involves no hidden information and no elements of chance. It is played on a chessboard with 64 squares arranged in an 8×8 grid. At the start, each player controls sixteen pieces: one king, one queen, two rooks, two bishops, two knights, and eight pawns. White moves first, followed by Black. The game is won by checkmating the opponent's king, i.e. threatening it with inescapable capture. There are also several ways a game can end in a draw. Organized chess arose in the 19th century. Chess competition today is governed internationally by FIDE (the International Chess Federation). The first universally recognized World Chess Champion, Wilhelm Steinitz, claimed his title in 1886; Ding Liren is the current World Champion. A huge body of chess theory has developed since the game's inception. Aspects of art are found in chess composition, and chess in its turn influenced Western culture and the arts, and has connections with other fields such as mathematics, computer science, and psychology. One of the goals of early computer scientists was to create a chess-playing machine. In 1997, Deep Blue became the first computer to beat the reigning World Champion in a match when it defeated Garry Kasparov. Today's chess engines are significantly stronger than th
https://en.wikipedia.org/wiki/Combinatorics
Combinatorics is an area of mathematics primarily concerned with counting, both as a means and an end in obtaining results, and certain properties of finite structures. It is closely related to many other areas of mathematics and has many applications ranging from logic to statistical physics and from evolutionary biology to computer science. Combinatorics is well known for the breadth of the problems it tackles. Combinatorial problems arise in many areas of pure mathematics, notably in algebra, probability theory, topology, and geometry, as well as in its many application areas. Many combinatorial questions have historically been considered in isolation, giving an ad hoc solution to a problem arising in some mathematical context. In the later twentieth century, however, powerful and general theoretical methods were developed, making combinatorics into an independent branch of mathematics in its own right. One of the oldest and most accessible parts of combinatorics is graph theory, which by itself has numerous natural connections to other areas. Combinatorics is used frequently in computer science to obtain formulas and estimates in the analysis of algorithms. A mathematician who studies combinatorics is called a combinatorialist. Definition The full scope of combinatorics is not universally agreed upon. According to H.J. Ryser, a definition of the subject is difficult because it crosses so many mathematical subdivisions. Insofar as an area can be described by the types of problems it addresses, combinatorics is involved with: the enumeration (counting) of specified structures, sometimes referred to as arrangements or configurations in a very general sense, associated with finite systems, the existence of such structures that satisfy certain given criteria, the construction of these structures, perhaps in many ways, and optimization: finding the "best" structure or solution among several possibilities, be it the "largest", "smallest" or satisfying some other optimality cr
https://en.wikipedia.org/wiki/Calculus
Calculus is the mathematical study of continuous change, in the same way that geometry is the study of shape, and algebra is the study of generalizations of arithmetic operations. It has two major branches, differential calculus and integral calculus; the former concerns instantaneous rates of change, and the slopes of curves, while the latter concerns accumulation of quantities, and areas under or between curves. These two branches are related to each other by the fundamental theorem of calculus, and they make use of the fundamental notions of convergence of infinite sequences and infinite series to a well-defined limit. Infinitesimal calculus was developed independently in the late 17th century by Isaac Newton and Gottfried Wilhelm Leibniz. Later work, including codifying the idea of limits, put these developments on a more solid conceptual footing. Today, calculus has widespread uses in science, engineering, and social science. Etymology In mathematics education, calculus denotes courses of elementary mathematical analysis, which are mainly devoted to the study of functions and limits. The word calculus is Latin for "small pebble" (the diminutive of calx, meaning "stone"), a meaning which still persists in medicine. Because such pebbles were used for counting out distances, tallying votes, and doing abacus arithmetic, the word came to mean a method of computation. In this sense, it was used in English at least as early as 1672, several years before the publications of Leibniz and Newton. In addition to differential calculus and integral calculus, the term is also used for naming specific methods of calculation and related theories that seek to model a particular concept in terms of mathematics. Examples of this convention include propositional calculus, Ricci calculus, calculus of variations, lambda calculus, and process calculus. Furthermore, the term "calculus" has variously been applied in ethics and philosophy, for such systems as Bentham's felicific cal
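The two branches and the fundamental theorem linking them can be illustrated numerically. The Python sketch below is illustrative (the example function is invented): it estimates a slope by a central difference and an area by the trapezoidal rule:

def derivative(f, x, h=1e-6):
    """Central-difference estimate of the instantaneous rate of change."""
    return (f(x + h) - f(x - h)) / (2 * h)

def integral(f, a, b, n=100000):
    """Trapezoidal estimate of the area under f between a and b."""
    h = (b - a) / n
    interior = sum(f(a + i * h) for i in range(1, n))
    return h * (0.5 * (f(a) + f(b)) + interior)

f = lambda x: x ** 2
print(derivative(f, 3.0))     # ~6.0, the slope of x^2 at x = 3
print(integral(f, 0.0, 3.0))  # ~9.0, the area under x^2 from 0 to 3
# Consistent with the fundamental theorem: x^3/3 evaluated from 0 to 3 is 9.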
https://en.wikipedia.org/wiki/Cytoplasm
In cell biology, the cytoplasm describes all material within a eukaryotic cell, enclosed by the cell membrane, except for the cell nucleus. The material inside the nucleus and contained within the nuclear membrane is termed the nucleoplasm. The main components of the cytoplasm are the cytosol (a gel-like substance), the organelles (the cell's internal sub-structures), and various cytoplasmic inclusions. The cytoplasm is about 80% water and is usually colorless. The submicroscopic ground cell substance, or cytoplasmic matrix, that remains after the exclusion of the cell organelles and particles is groundplasm. It is the hyaloplasm of light microscopy, a highly complex, polyphasic system in which all resolvable cytoplasmic elements are suspended, including the larger organelles such as the ribosomes, mitochondria, plant plastids, lipid droplets, and vacuoles. Many cellular activities take place within the cytoplasm, such as many metabolic pathways, including glycolysis, photosynthesis, and processes such as cell division. The concentrated inner area is called the endoplasm and the outer layer is called the cell cortex, or ectoplasm. Movement of calcium ions in and out of the cytoplasm is a signaling activity for metabolic processes. In plants, movement of the cytoplasm around vacuoles is known as cytoplasmic streaming. History The term was introduced by Rudolf von Kölliker in 1863, originally as a synonym for protoplasm, but later it has come to mean the cell substance and organelles outside the nucleus. There has been certain disagreement on the definition of cytoplasm, as some authors prefer to exclude from it some organelles, especially the vacuoles and sometimes the plastids. Physical nature It remains uncertain how the various components of the cytoplasm interact to allow movement of organelles while maintaining the cell's structure. The flow of cytoplasmic components plays an important role in many cellular functions which are dependent on the permeabi
https://en.wikipedia.org/wiki/Central%20processing%20unit
A central processing unit (CPU)—also called a central processor or main processor—is the most important processor in a given computer. Its electronic circuitry executes instructions of a computer program, such as arithmetic, logic, controlling, and input/output (I/O) operations. This role contrasts with that of external components, such as main memory and I/O circuitry, and specialized coprocessors such as graphics processing units (GPUs). The form, design, and implementation of CPUs have changed over time, but their fundamental operation remains almost unchanged. Principal components of a CPU include the arithmetic–logic unit (ALU) that performs arithmetic and logic operations, processor registers that supply operands to the ALU and store the results of ALU operations, and a control unit that orchestrates the fetching (from memory), decoding and execution (of instructions) by directing the coordinated operations of the ALU, registers, and other components. Most modern CPUs are implemented on integrated circuit (IC) microprocessors, with one or more CPUs on a single IC chip. Microprocessor chips with multiple CPUs are multi-core processors. The individual physical CPUs, processor cores, can also be multithreaded to create additional virtual or logical CPUs. An IC that contains a CPU may also contain memory, peripheral interfaces, and other components of a computer; such integrated devices are variously called microcontrollers or systems on a chip (SoC). Array processors or vector processors have multiple processors that operate in parallel, with no unit considered central. Virtual CPUs are an abstraction of dynamically aggregated computational resources. History Early computers such as the ENIAC had to be physically rewired to perform different tasks, which caused these machines to be called "fixed-program computers". The "central processing unit" term has been in use since as early as 1955. Since the term "CPU" is generally defined as a device for software (c
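The coordinated fetch, decode, and execute cycle described above can be caricatured in a few lines of Python. The three-instruction machine below is entirely invented for illustration and corresponds to no real instruction set:

def run(program, registers):
    pc = 0                           # program counter (control-unit state)
    while pc < len(program):
        op, *args = program[pc]      # fetch and decode the instruction
        if op == "LOAD":             # LOAD reg, constant
            registers[args[0]] = args[1]
            pc += 1
        elif op == "ADD":            # ADD dst, src -- an ALU operation
            registers[args[0]] += registers[args[1]]
            pc += 1
        elif op == "JNZ":            # JNZ reg, target -- conditional jump
            pc = args[1] if registers[args[0]] != 0 else pc + 1
    return registers

# Sum 5 + 4 + 3 + 2 + 1 with a countdown loop:
prog = [("LOAD", "r0", 0), ("LOAD", "r1", 5), ("LOAD", "r2", -1),
        ("ADD", "r0", "r1"), ("ADD", "r1", "r2"), ("JNZ", "r1", 3)]
print(run(prog, {})["r0"])  # 15

The loop makes the division of labor concrete: the control flow (pc, dispatch) plays the role of the control unit, the arithmetic on registers plays the role of the ALU.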
https://en.wikipedia.org/wiki/Code
In communications and information processing, code is a system of rules to convert information—such as a letter, word, sound, image, or gesture—into another form, sometimes shortened or secret, for communication through a communication channel or storage in a storage medium. An early example is the invention of language, which enabled a person, through speech, to communicate what they thought, saw, heard, or felt to others. But speech limits the range of communication to the distance a voice can carry and limits the audience to those present when the speech is uttered. The invention of writing, which converted spoken language into visual symbols, extended the range of communication across space and time. The process of encoding converts information from a source into symbols for communication or storage. Decoding is the reverse process, converting code symbols back into a form that the recipient understands, such as English or Spanish. One reason for coding is to enable communication in places where ordinary plain language, spoken or written, is difficult or impossible. One example is semaphore, where the configuration of flags held by a signaler or the arms of a semaphore tower encodes parts of the message, typically individual letters and numbers. Another person standing a great distance away can interpret the flags and reproduce the words sent. Theory In information theory and computer science, a code is usually considered as an algorithm that uniquely represents symbols from some source alphabet, by encoded strings, which may be in some other target alphabet. An extension of the code for representing sequences of symbols over the source alphabet is obtained by concatenating the encoded strings. Before giving a mathematically precise definition, this is a brief example: the mapping C = {a → 0, b → 01, c → 011} is a code whose source alphabet is the set {a, b, c} and whose target alphabet is the set {0, 1}. Using the extension of the code, the encoded string 0011001 can be grouped into codewords a
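The mapping above (restored from context, as the original specifics were lost in extraction) can be exercised in code: encoding is concatenation of codewords, and decoding can exploit the fact that every codeword here is a 0 followed by a run of 1s. A Python sketch:

CODE = {"a": "0", "b": "01", "c": "011"}  # assumed mapping, see lead-in

def encode(message: str) -> str:
    """The extension of the code: concatenate the codewords."""
    return "".join(CODE[ch] for ch in message)

def decode(bits: str) -> str:
    """Split just before each '0', since every codeword starts with '0'."""
    inverse = {v: k for k, v in CODE.items()}
    out, word = [], ""
    for bit in bits:
        if bit == "0" and word:  # a new codeword starts here
            out.append(inverse[word])
            word = ""
        word += bit
    out.append(inverse[word])
    return "".join(out)

print(decode("0011001"))  # acab
print(encode("acab"))     # 0011001

Under this mapping, 0011001 groups into the codewords 0, 011, 0, 01 and decodes uniquely, which is what makes the mapping a code in the sense just defined.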
https://en.wikipedia.org/wiki/Carl%20Linnaeus
Carl Linnaeus (23 May 1707 – 10 January 1778), also known after ennoblement in 1761 as Carl von Linné, was a Swedish biologist and physician who formalised binomial nomenclature, the modern system of naming organisms. He is known as the "father of modern taxonomy". Many of his writings were in Latin; his name is rendered in Latin as Carolus Linnaeus and, after his 1761 ennoblement, as Carolus a Linné. Linnaeus was the son of a curate and he was born in Råshult, the countryside of Småland, in southern Sweden. He received most of his higher education at Uppsala University and began giving lectures in botany there in 1730. He lived abroad between 1735 and 1738, where he studied and also published the first edition of his Systema Naturae in the Netherlands. He then returned to Sweden where he became professor of medicine and botany at Uppsala. In the 1740s, he was sent on several journeys through Sweden to find and classify plants and animals. In the 1750s and 1760s, he continued to collect and classify animals, plants, and minerals, while publishing several volumes. By the time of his death in 1778, he was one of the most acclaimed scientists in Europe. Philosopher Jean-Jacques Rousseau sent him the message: "Tell him I know no greater man on Earth." Johann Wolfgang von Goethe wrote: "With the exception of Shakespeare and Spinoza, I know no one among the no longer living who has influenced me more strongly." Swedish author August Strindberg wrote: "Linnaeus was in reality a poet who happened to become a naturalist." Linnaeus has been called Princeps botanicorum (Prince of Botanists) and "The Pliny of the North". He is also considered one of the founders of modern ecology. In botany and zoology, the abbreviation L. is used to indicate Linnaeus as the authority for a species' name. In older publications, the abbreviation "Linn." is found. Linnaeus's remains constitute the type specimen for the species Homo sapiens following the International Code of Zoological Nomenclature, since the sole specimen that he is known to have examined w
https://en.wikipedia.org/wiki/Cipher
In cryptography, a cipher (or cypher) is an algorithm for performing encryption or decryption—a series of well-defined steps that can be followed as a procedure. An alternative, less common term is encipherment. To encipher or encode is to convert information into cipher or code. In common parlance, "cipher" is synonymous with "code", as they are both a set of steps that encrypt a message; however, the concepts are distinct in cryptography, especially classical cryptography. Codes generally substitute different length strings of characters in the output, while ciphers generally substitute the same number of characters as are input. A code maps one meaning with another. Words and phrases can be coded as letters or numbers. Codes typically have direct meaning from input to key. Codes primarily function to save time. Ciphers are algorithmic. The given input must follow the cipher's process to be solved. Ciphers are commonly used to encrypt written information. Codes operated by substituting according to a large codebook which linked a random string of characters or numbers to a word or phrase. For example, "UQJHSE" could be the code for "Proceed to the following coordinates." When using a cipher the original information is known as plaintext, and the encrypted form as ciphertext. The ciphertext message contains all the information of the plaintext message, but is not in a format readable by a human or computer without the proper mechanism to decrypt it. The operation of a cipher usually depends on a piece of auxiliary information, called a key (or, in traditional NSA parlance, a cryptovariable). The encrypting procedure is varied depending on the key, which changes the detailed operation of the algorithm. A key must be selected before using a cipher to encrypt a message. Without knowledge of the key, it should be extremely difficult, if not impossible, to decrypt the resulting ciphertext into readable plaintext. Most modern ciphers can be categorized in several wa
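A minimal concrete instance of a key-driven cipher (as opposed to a codebook lookup) is the classical Caesar shift. The Python sketch below is illustrative only and is of course not a secure cipher:

import string

def caesar(text: str, key: int) -> str:
    """Shift each letter of the alphabet by `key` positions; the key
    parameterizes the algorithm, as described above."""
    rotated = string.ascii_uppercase[key:] + string.ascii_uppercase[:key]
    table = str.maketrans(string.ascii_uppercase, rotated)
    return text.upper().translate(table)

ciphertext = caesar("PROCEED TO THE FOLLOWING COORDINATES", 3)
print(ciphertext)              # SURFHHG WR WKH IROORZLQJ FRRUGLQDWHV
print(caesar(ciphertext, -3))  # decryption is encryption with the inverted key

Note the contrast with the codebook example above: the cipher transforms the plaintext character by character under a key, rather than substituting a whole phrase with a stored string such as "UQJHSE".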
https://en.wikipedia.org/wiki/Common%20descent
Common descent is a concept in evolutionary biology applicable when one species is the ancestor of two or more species later in time. According to modern evolutionary biology, all living beings could be descendants of a unique ancestor commonly referred to as the last universal common ancestor (LUCA) of all life on Earth. Common descent is an effect of speciation, in which multiple species derive from a single ancestral population. The more recent the ancestral population two species have in common, the more closely are they related. The most recent common ancestor of all currently living organisms is the last universal ancestor, which lived about 3.9 billion years ago. The two earliest pieces of evidence for life on Earth are graphite found to be biogenic in 3.7 billion-year-old metasedimentary rocks discovered in western Greenland and microbial mat fossils found in 3.48 billion-year-old sandstone discovered in Western Australia. All currently living organisms on Earth share a common genetic heritage, though the suggestion of substantial horizontal gene transfer during early evolution has led to questions about the monophyly (single ancestry) of life. 6,331 groups of genes common to all living animals have been identified; these may have arisen from a single common ancestor that lived 650 million years ago in the Precambrian. Universal common descent through an evolutionary process was first proposed by the British naturalist Charles Darwin in the concluding sentence of his 1859 book On the Origin of Species. History The idea that all living things (including things considered non-living by science) are related is a recurring theme in many indigenous worldviews across the world. Later on, in the 1740s, the French mathematician Pierre Louis Maupertuis arrived at the idea that all organisms had a common ancestor, and had diverged through random variation and natural selection. In Essai de cosmologie (1750), Maupertuis noted: May we not say that, in the fortuito
https://en.wikipedia.org/wiki/Constellation
A constellation is an area on the celestial sphere in which a group of visible stars forms a perceived pattern or outline, typically representing an animal, mythological subject, or inanimate object. The origins of the earliest constellations likely go back to prehistory. People used them to relate stories of their beliefs, experiences, creation, or mythology. Different cultures and countries invented their own constellations, some of which lasted into the early 20th century before today's constellations were internationally recognized. The recognition of constellations has changed significantly over time. Many changed in size or shape. Some became popular, only to drop into obscurity. Some were limited to a single culture or nation. Naming constellations also helped astronomers and navigators identify stars more easily. Twelve (or thirteen) ancient constellations belong to the zodiac (straddling the ecliptic, which the Sun, Moon, and planets all traverse). The origins of the zodiac remain historically uncertain; its astrological divisions became prominent around 400 BC in Babylonian or Chaldean astronomy. Constellations appear in Western culture via Greece and are mentioned in the works of Hesiod, Eudoxus and Aratus. The traditional 48 constellations, consisting of the Zodiac and 36 more (now 38, following the division of Argo Navis into three constellations) are listed by Ptolemy, a Greco-Roman astronomer from Alexandria, Egypt, in his Almagest. The formation of constellations was the subject of extensive mythology, most notably in the Metamorphoses of the Latin poet Ovid. Constellations in the far southern sky were added from the 15th century until the mid-18th century when European explorers began traveling to the Southern Hemisphere. Due to Roman and European transmission, each constellation has a Latin name. In 1922, the International Astronomical Union (IAU) formally accepted the modern list of 88 constellations, and in 1928 adopted official constellation bounda
https://en.wikipedia.org/wiki/Character%20encoding
Character encoding is the process of assigning numbers to graphical characters, especially the written characters of human language, allowing them to be stored, transmitted, and transformed using digital computers. The numerical values that make up a character encoding are known as "code points" and collectively comprise a "code space", a "code page", or a "character map". Early character codes associated with the optical or electrical telegraph could only represent a subset of the characters used in written languages, sometimes restricted to upper case letters, numerals and some punctuation only. The low cost of digital representation of data in modern computer systems allows more elaborate character codes (such as Unicode) which represent most of the characters used in many written languages. Character encoding using internationally accepted standards permits worldwide interchange of text in electronic form. History The history of character codes illustrates the evolving need for machine-mediated character-based symbolic information over a distance, using once-novel electrical means. The earliest codes were based upon manual and hand-written encoding and cyphering systems, such as Bacon's cipher, Braille, international maritime signal flags, and the 4-digit encoding of Chinese characters for a Chinese telegraph code (Hans Schjellerup, 1869). With the adoption of electrical and electro-mechanical techniques these earliest codes were adapted to the new capabilities and limitations of the early machines. The earliest well-known electrically transmitted character code, Morse code, introduced in the 1840s, used a system of four "symbols" (short signal, long signal, short space, long space) to generate codes of variable length. Though some commercial use of Morse code was via machinery, it was often used as a manual code, generated by hand on a telegraph key and decipherable by ear, and persists in amateur radio and aeronautical use. Most codes are of fixed per-char
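Modern languages expose these notions directly; in Python, ord gives a character's code point and str.encode maps characters into bytes under a chosen encoding (the example string is invented here):

text = "Añ€"                     # three characters, three code points
print([ord(ch) for ch in text])  # [65, 241, 8364]
print(text.encode("utf-8"))      # b'A\xc3\xb1\xe2\x82\xac'
print(text.encode("utf-8").decode("utf-8") == text)  # True: lossless round trip

The UTF-8 output shows a variable-length code at work: the three code points occupy one, two, and three bytes respectively, which is how Unicode can cover most written languages while keeping common characters compact.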
https://en.wikipedia.org/wiki/Computer%20data%20storage
Computer data storage is a technology consisting of computer components and recording media that are used to retain digital data. It is a core function and fundamental component of computers. The central processing unit (CPU) of a computer is what manipulates data by performing computations. In practice, almost all computers use a storage hierarchy, which puts fast but expensive and small storage options close to the CPU and slower but less expensive and larger options further away. Generally, the fast technologies are referred to as "memory", while slower persistent technologies are referred to as "storage". Even the first computer designs, Charles Babbage's Analytical Engine and Percy Ludgate's Analytical Machine, clearly distinguished between processing and memory (Babbage stored numbers as rotations of gears, while Ludgate stored numbers as displacements of rods in shuttles). This distinction was extended in the Von Neumann architecture, where the CPU consists of two main parts: The control unit and the arithmetic logic unit (ALU). The former controls the flow of data between the CPU and memory, while the latter performs arithmetic and logical operations on data. Functionality Without a significant amount of memory, a computer would merely be able to perform fixed operations and immediately output the result. It would have to be reconfigured to change its behavior. This is acceptable for devices such as desk calculators, digital signal processors, and other specialized devices. Von Neumann machines differ in having a memory in which they store their operating instructions and data. Such computers are more versatile in that they do not need to have their hardware reconfigured for each new program, but can simply be reprogrammed with new in-memory instructions; they also tend to be simpler to design, in that a relatively simple processor may keep state between successive computations to build up complex procedural results. Most modern computers are von Neumann
https://en.wikipedia.org/wiki/Chemical%20equilibrium
In a chemical reaction, chemical equilibrium is the state in which both the reactants and products are present in concentrations which have no further tendency to change with time, so that there is no observable change in the properties of the system. This state results when the forward reaction proceeds at the same rate as the reverse reaction. The reaction rates of the forward and backward reactions are generally not zero, but they are equal. Thus, there are no net changes in the concentrations of the reactants and products. Such a state is known as dynamic equilibrium. Historical introduction The concept of chemical equilibrium was developed in 1803, after Berthollet found that some chemical reactions are reversible. For any reaction mixture to exist at equilibrium, the rates of the forward and backward (reverse) reactions must be equal. In the following chemical equation, arrows point both ways to indicate equilibrium. A and B are reactant chemical species, S and T are product species, and α, β, σ, and τ are the stoichiometric coefficients of the respective reactants and products: α A + β B ⇌ σ S + τ T The equilibrium concentration position of a reaction is said to lie "far to the right" if, at equilibrium, nearly all the reactants are consumed. Conversely the equilibrium position is said to be "far to the left" if hardly any product is formed from the reactants. Guldberg and Waage (1865), building on Berthollet's ideas, proposed the law of mass action: forward rate = k+ {A}^α {B}^β and backward rate = k− {S}^σ {T}^τ, where A, B, S and T are active masses and k+ and k− are rate constants. Since at equilibrium forward and backward rates are equal, k+ {A}^α {B}^β = k− {S}^σ {T}^τ, and the ratio of the rate constants, K = k+/k− = {S}^σ {T}^τ / ({A}^α {B}^β), is also a constant, now known as an equilibrium constant. By convention, the products form the numerator. However, the law of mass action is valid only for concerted one-step reactions that proceed through a single transition state and is not valid in general because rate equations do not, in general, follow the stoichiometry of the reaction
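The mass-action expression reconstructed above is straightforward to evaluate. The Python sketch below uses invented species names and concentrations purely for illustration:

def equilibrium_constant(conc, coeff):
    """K = product over species of concentration**coefficient,
    with coefficients negative for reactants and positive for products."""
    K = 1.0
    for species, nu in coeff.items():
        K *= conc[species] ** nu
    return K

# Hypothetical equilibrium A + 2 B <=> S with invented concentrations:
coeff = {"A": -1, "B": -2, "S": +1}
conc = {"A": 0.50, "B": 0.20, "S": 0.040}
print(equilibrium_constant(conc, coeff))  # 0.040 / (0.50 * 0.20**2) = 2.0

Encoding reactants with negative coefficients makes the convention explicit in one product: the species in the numerator (products) carry positive exponents, those in the denominator (reactants) negative ones.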
https://en.wikipedia.org/wiki/Combination
In mathematics, a combination is a selection of items from a set that has distinct members, such that the order of selection does not matter (unlike permutations). For example, given three fruits, say an apple, an orange and a pear, there are three combinations of two that can be drawn from this set: an apple and a pear; an apple and an orange; or a pear and an orange. More formally, a k-combination of a set S is a subset of k distinct elements of S. So, two combinations are identical if and only if each combination has the same members. (The arrangement of the members in each set does not matter.) If the set has n elements, the number of k-combinations, denoted by C(n, k) or C_k^n, is equal to the binomial coefficient (n choose k), which can be written using factorials as n! / (k! (n − k)!) whenever k ≤ n, and which is zero when k > n. This formula can be derived from the fact that each k-combination of a set S of n members has k! permutations, so P(n, k) = C(n, k) · k!, or C(n, k) = P(n, k) / k!. The set of all k-combinations of a set S is often denoted by (S choose k). A k-combination is a combination of n things taken k at a time without repetition. To refer to combinations in which repetition is allowed, the terms k-combination with repetition, k-multiset, or k-selection, are often used. If, in the above example, it were possible to have two of any one kind of fruit there would be 3 more 2-selections: one with two apples, one with two oranges, and one with two pears. Although the set of three fruits was small enough to write a complete list of combinations, this becomes impractical as the size of the set increases. For example, a poker hand can be described as a 5-combination (k = 5) of cards from a 52 card deck (n = 52). The 5 cards of the hand are all distinct, and the order of cards in the hand does not matter. There are 2,598,960 such combinations, and the chance of drawing any one hand at random is 1 / 2,598,960. Number of k-combinations The number of k-combinations from a given set S of n elements is often denoted in elementary combinatorics texts by C(n, k), or by a va
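The counts quoted above are easy to verify; Python's standard library includes the binomial coefficient directly. (The multiset line uses the standard C(n + k − 1, k) identity for combinations with repetition, which is background knowledge rather than from this excerpt.)

from math import comb, factorial

print(comb(3, 2))   # 3 two-fruit combinations, as in the fruit example
print(comb(52, 5))  # 2598960 five-card poker hands
print(factorial(52) // (factorial(5) * factorial(47)))  # same, via n!/(k!(n-k)!)
print(comb(3 + 2 - 1, 2))  # 6 two-fruit selections when repetition is allowed
                           # (the 3 distinct pairs plus the 3 "doubles" above)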
https://en.wikipedia.org/wiki/Software
Software is a set of computer programs and associated documentation and data. This is in contrast to hardware, from which the system is built and which actually performs the work. At the lowest programming level, executable code consists of machine language instructions supported by an individual processor—typically a central processing unit (CPU) or a graphics processing unit (GPU). Machine language consists of groups of binary values signifying processor instructions that change the state of the computer from its preceding state. For example, an instruction may change the value stored in a particular storage location in the computer—an effect that is not directly observable to the user. An instruction may also invoke one of many input or output operations, for example, displaying some text on a computer screen, causing state changes that should be visible to the user. The processor executes the instructions in the order they are provided, unless it is instructed to "jump" to a different instruction or is interrupted by the operating system. Today, most personal computers, smartphone devices, and servers have processors with multiple execution units, or multiple processors performing computation together, so computing has become a much more concurrent activity than in the past. The majority of software is written in high-level programming languages. They are easier and more efficient for programmers because they are closer to natural languages than machine languages. High-level languages are translated into machine language using a compiler, an interpreter, or a combination of the two. Software may also be written in a low-level assembly language that has a strong correspondence to the computer's machine language instructions and is translated into machine language using an assembler. History An algorithm for what would have been the first piece of software was written by Ada Lovelace in the 19th century, for the planned Analytical Engine. She created proofs to show
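As an illustration of the execution model just described, here is a Python sketch of a toy machine that steps through instructions in order unless told to jump; the machine, its opcodes, and the sample program are invented for the example and do not correspond to any real instruction set.

```python
# Each instruction changes the machine's state (its registers), and
# execution proceeds in order unless a jump changes the program counter.
program = [
    ("SET", "x", 0),            # 0: store 0 in register x
    ("ADD", "x", 1),            # 1: add 1 to register x
    ("JUMP_IF_LT", "x", 3, 1),  # 2: if x < 3, jump back to instruction 1
    ("PRINT", "x"),             # 3: output register x
]

registers = {}
pc = 0  # program counter
while pc < len(program):
    op, *args = program[pc]
    if op == "SET":
        registers[args[0]] = args[1]
    elif op == "ADD":
        registers[args[0]] += args[1]
    elif op == "JUMP_IF_LT":
        reg, bound, target = args
        if registers[reg] < bound:
            pc = target
            continue  # jump: skip the normal pc + 1
    elif op == "PRINT":
        print(registers[args[0]])  # prints 3
    pc += 1
```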
https://en.wikipedia.org/wiki/Computer%20programming
Computer programming or coding is the composition of sequences of instructions, called programs, that computers can follow to perform tasks. It involves designing and implementing algorithms, step-by-step specifications of procedures, by writing code in one or more programming languages. Programmers typically use high-level programming languages that are more easily intelligible to humans than machine code, which is directly executed by the central processing unit. Proficient programming usually requires expertise in several different subjects, including knowledge of the application domain, details of programming languages and generic code libraries, specialized algorithms, and formal logic. Auxiliary tasks accompanying and related to programming include analyzing requirements, testing, debugging (investigating and fixing problems), implementation of build systems, and management of derived artifacts, such as programs' machine code. While these are sometimes considered programming, often the term software development is used for this larger overall process – with the terms programming, implementation, and coding reserved for the writing and editing of code per se. Sometimes software development is known as software engineering, especially when it employs formal methods or follows an engineering design process. History Programmable devices have existed for centuries. As early as the 9th century, a programmable music sequencer was invented by the Persian Banu Musa brothers, who described an automated mechanical flute player in the Book of Ingenious Devices. In 1206, the Arab engineer Al-Jazari invented a programmable drum machine where a musical mechanical automaton could be made to play different rhythms and drum patterns, via pegs and cams. In 1801, the Jacquard loom could produce entirely different weaves by changing the "program" – a series of pasteboard cards with holes punched in them. Code-breaking algorithms have also existed for centuries. In the 9th cent
https://en.wikipedia.org/wiki/Colloid
A colloid is a mixture in which one substance consisting of microscopically dispersed insoluble particles is suspended throughout another substance. Some definitions specify that the particles must be dispersed in a liquid, while others extend the definition to include substances like aerosols and gels. The term colloidal suspension refers unambiguously to the overall mixture (although a narrower sense of the word suspension is distinguished from colloids by larger particle size). A colloid has a dispersed phase (the suspended particles) and a continuous phase (the medium of suspension). The dispersed phase particles have a diameter of approximately 1 nanometre to 1 micrometre. Some colloids are translucent because of the Tyndall effect, which is the scattering of light by particles in the colloid. Other colloids may be opaque or have a slight color. Colloidal suspensions are the subject of interface and colloid science. This field of study was introduced in 1845 by Francesco Selmi and expanded by Michael Faraday and Thomas Graham, who coined the term colloid in 1861. Classification of colloids Colloids can be classified according to the states of the dispersed phase and the dispersion medium. Homogeneous mixtures with a dispersed phase in this size range may be called colloidal aerosols, colloidal emulsions, colloidal suspensions, colloidal foams, colloidal dispersions, or hydrosols. Hydrocolloids Hydrocolloids describe certain chemicals (mostly polysaccharides and proteins) that are colloidally dispersible in water. Having thus become effectively "soluble", they change the rheology of water by raising the viscosity and/or inducing gelation. They may provide other interactive effects with other chemicals, in some cases synergistic, in others antagonistic. These attributes make hydrocolloids very useful chemicals: in many areas of technology, from foods through pharmaceuticals, personal care and industrial applications, they can provide stabilization, destabilization and separation, gelation, flow control, crystallization cont
https://en.wikipedia.org/wiki/Country%20code
A country code is a short alphanumeric identification code for countries and dependent areas. Its primary use is in data processing and communications. Several identification systems have been developed. The term country code frequently refers to ISO 3166-1 alpha-2, as well as the telephone country code, which is embodied in the E.164 recommendation by the International Telecommunication Union (ITU). ISO 3166-1 The standard ISO 3166-1 defines short identification codes for most countries and dependent areas: ISO 3166-1 alpha-2: two-letter code ISO 3166-1 alpha-3: three-letter code ISO 3166-1 numeric: three-digit code The two-letter codes are used as the basis for other codes and applications, for example, for ISO 4217 currency codes with deviations, for country code top-level domain names (ccTLDs) on the Internet: list of Internet TLDs. Other applications are defined in ISO 3166-1 alpha-2. ITU country codes In telecommunication, a country code, or international subscriber dialing (ISD) code, is a telephone number prefix used in international direct dialing (IDD) and for destination routing of telephone calls to a country other than the caller's. A country or region with an autonomous telephone administration must apply for membership in the International Telecommunication Union (ITU) to participate in the international public switched telephone network (PSTN). Country codes are defined by the ITU-T section of the ITU in standards E.123 and E.164. Country codes constitute the international telephone numbering plan, and are dialed only when calling a telephone number in another country. They are dialed before the national telephone number. International calls require at least one additional prefix, the international call prefix, to be dialed before the country code to connect the call to international circuits. When printing telephone numbers this is indicated by a plus-sign (+) in front of a complete international telephone number, per recommendation E.164 by the
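For illustration, a minimal Python sketch of how the different code systems for one country might be looked up side by side; the four-entry mapping is a hand-written excerpt for the example, where a real application would use a complete, maintained table.

```python
# A tiny excerpt pairing ISO 3166-1 alpha-2 codes with the alpha-3 and
# numeric codes and the ITU-T E.164 telephone country code.
COUNTRY_CODES = {
    "US": {"alpha3": "USA", "numeric": "840", "e164": "+1"},
    "GB": {"alpha3": "GBR", "numeric": "826", "e164": "+44"},
    "DE": {"alpha3": "DEU", "numeric": "276", "e164": "+49"},
    "JP": {"alpha3": "JPN", "numeric": "392", "e164": "+81"},
}

def describe(alpha2: str) -> str:
    entry = COUNTRY_CODES[alpha2]
    return (f"{alpha2}: alpha-3 {entry['alpha3']}, "
            f"numeric {entry['numeric']}, dialing prefix {entry['e164']}")

print(describe("DE"))  # DE: alpha-3 DEU, numeric 276, dialing prefix +49
```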
https://en.wikipedia.org/wiki/Cladistics
Cladistics is an approach to biological classification in which organisms are categorized in groups ("clades") based on hypotheses of most recent common ancestry. The evidence for hypothesized relationships is typically shared derived characteristics (synapomorphies) that are not present in more distant groups and ancestors. However, from an empirical perspective, common ancestors are inferences based on a cladistic hypothesis of relationships of taxa whose character states can be observed. Theoretically, a last common ancestor and all its descendants constitute a (minimal) clade. Importantly, all descendants stay in their overarching ancestral clade. For example, if the terms worms or fishes were used within a strict cladistic framework, these terms would include humans. Many of these terms are normally used paraphyletically, outside of cladistics, e.g. as a 'grade'; such groupings are fruitless to delineate precisely, especially when including extinct species. Radiation results in the generation of new subclades by bifurcation, but in practice sexual hybridization may blur very closely related groupings. As a hypothesis, a clade can be rejected only if some groupings were explicitly excluded. It may then be found that the excluded group did actually descend from the last common ancestor of the group, and thus emerged within the group. ("Evolved from" is misleading, because in cladistics all descendants stay in the ancestral group.) Upon finding that the group is paraphyletic this way, either such excluded groups should be admitted to the clade, or the group should be abolished. Branches down to the divergence to the next significant (e.g. extant) sister are considered stem-groupings of the clade, but in principle each level stands on its own, to be assigned a unique name. For a fully bifurcated tree, adding a group to a tree also adds an additional (named) clade, and a new level on that branch. In particular, extinct groups are always placed on a side-branch, not d
https://en.wikipedia.org/wiki/Physical%20cosmology
Physical cosmology is a branch of cosmology concerned with the study of cosmological models. A cosmological model, or simply cosmology, provides a description of the largest-scale structures and dynamics of the universe and allows study of fundamental questions about its origin, structure, evolution, and ultimate fate. Cosmology as a science originated with the Copernican principle, which implies that celestial bodies obey identical physical laws to those on Earth, and Newtonian mechanics, which first allowed those physical laws to be understood. Physical cosmology, as it is now understood, began with the development in 1915 of Albert Einstein's general theory of relativity, followed by major observational discoveries in the 1920s: first, Edwin Hubble discovered that the universe contains a huge number of external galaxies beyond the Milky Way; then, work by Vesto Slipher and others showed that the universe is expanding. These advances made it possible to speculate about the origin of the universe, and allowed the establishment of the Big Bang theory, by Georges Lemaître, as the leading cosmological model. A few researchers still advocate a handful of alternative cosmologies; however, most cosmologists agree that the Big Bang theory best explains the observations. Dramatic advances in observational cosmology since the 1990s, including the cosmic microwave background, distant supernovae and galaxy redshift surveys, have led to the development of a standard model of cosmology. This model requires the universe to contain large amounts of dark matter and dark energy whose nature is currently not well understood, but the model gives detailed predictions that are in excellent agreement with many diverse observations. Cosmology draws heavily on the work of many disparate areas of research in theoretical and applied physics. Areas relevant to cosmology include particle physics experiments and theory, theoretical and observational astrophysics, general relativity, quant
https://en.wikipedia.org/wiki/Condensed%20matter%20physics
Condensed matter physics is the field of physics that deals with the macroscopic and microscopic physical properties of matter, especially the solid and liquid phases which arise from electromagnetic forces between atoms. More generally, the subject deals with condensed phases of matter: systems of many constituents with strong interactions among them. More exotic condensed phases include the superconducting phase exhibited by certain materials at extremely low cryogenic temperature, the ferromagnetic and antiferromagnetic phases of spins on crystal lattices of atoms, and the Bose–Einstein condensate found in ultracold atomic systems. Condensed matter physicists seek to understand the behavior of these phases by experiments to measure various material properties, and by applying the physical laws of quantum mechanics, electromagnetism, statistical mechanics, and other physics theories to develop mathematical models. The diversity of systems and phenomena available for study makes condensed matter physics the most active field of contemporary physics: one third of all American physicists self-identify as condensed matter physicists, and the Division of Condensed Matter Physics is the largest division at the American Physical Society. The field overlaps with chemistry, materials science, engineering and nanotechnology, and relates closely to atomic physics and biophysics. The theoretical physics of condensed matter shares important concepts and methods with that of particle physics and nuclear physics. A variety of topics in physics such as crystallography, metallurgy, elasticity, magnetism, etc., were treated as distinct areas until the 1940s, when they were grouped together as solid-state physics. Around the 1960s, the study of physical properties of liquids was added to this list, forming the basis for the more comprehensive specialty of condensed matter physics. The Bell Telephone Laboratories was one of the first institutes to conduct a research program in con
https://en.wikipedia.org/wiki/Conversion%20of%20units
Conversion of units is the conversion between different units of measurement for the same quantity, typically through multiplicative conversion factors which change the measured quantity value without changing its effects. Unit conversion is often easier within the metric system or the SI than in other systems, due to the regular base-10 relationships among units and the prefixes that increase or decrease by 3 powers of 10 at a time. Overview The process of conversion depends on the specific situation and the intended purpose. This may be governed by regulation, contract, technical specifications or other published standards. Engineering judgment may include such factors as: The precision and accuracy of measurement and the associated uncertainty of measurement. The statistical confidence interval or tolerance interval of the initial measurement. The number of significant figures of the measurement. The intended use of the measurement including the engineering tolerances. Historical definitions of the units and their derivatives used in old measurements; e.g., international foot vs. US survey foot. Some conversions from one system of units to another need to be exact, without increasing or decreasing the precision of the first measurement. This is sometimes called soft conversion. It does not involve changing the physical configuration of the item being measured. By contrast, a hard conversion or an adaptive conversion may not be exactly equivalent. It changes the measurement to convenient and workable numbers and units in the new system. It sometimes involves a slightly different configuration, or size substitution, of the item. Nominal values are sometimes allowed and used. Factor-label method The factor-label method, also known as the unit-factor method or the unity bracket method, is a widely used technique for unit conversions using the rules of algebra. The factor-label method is the sequential application of conversion factors expressed as fractions and arranged so that an
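As a sketch of the factor-label idea, here is a small Python example converting a speed by multiplying through a chain of unit fractions. The conversion factors shown (1 mi = 1609.344 m, 1 h = 3600 s) are standard exact definitions, but the function itself is invented for illustration.

```python
from fractions import Fraction

# Factor-label method: multiply by conversion factors written as
# fractions equal to 1, so units cancel step by step.
# Example: 60 mi/h * (1609.344 m / 1 mi) * (1 h / 3600 s) = m/s
MILE_IN_METRES = Fraction(1609344, 1000)  # exact: 1 mi = 1609.344 m
HOUR_IN_SECONDS = Fraction(3600)

def mph_to_mps(mph: Fraction) -> Fraction:
    return mph * MILE_IN_METRES / HOUR_IN_SECONDS

speed = mph_to_mps(Fraction(60))
print(float(speed))  # 26.8224 m/s, exact because Fractions are used
```

Using exact rational arithmetic here mirrors the "soft conversion" requirement above: the conversion introduces no rounding of its own.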
https://en.wikipedia.org/wiki/Computational%20linguistics
Computational linguistics is an interdisciplinary field concerned with the computational modelling of natural language, as well as the study of appropriate computational approaches to linguistic questions. In general, computational linguistics draws upon linguistics, computer science, artificial intelligence, mathematics, logic, philosophy, cognitive science, cognitive psychology, psycholinguistics, anthropology and neuroscience, among others. Since the 2020s, computational linguistics has become a near-synonym of either natural language processing or language technology, with deep learning approaches, such as large language models, outperforming the specific approaches previously used in the field. Origins The field overlapped with artificial intelligence since the efforts in the United States in the 1950s to use computers to automatically translate texts from foreign languages, particularly Russian scientific journals, into English. Since rule-based approaches were able to make arithmetic (systematic) calculations much faster and more accurately than humans, it was expected that lexicon, morphology, syntax and semantics could likewise be captured using explicit rules. After the failure of rule-based approaches, David Hays coined the term computational linguistics in order to distinguish the field from AI and co-founded both the Association for Computational Linguistics (ACL) and the International Committee on Computational Linguistics (ICCL) in the 1970s and 1980s. What started as an effort to translate between languages evolved into a much wider field of natural language processing. Annotated corpora Meticulous study of the English language required an annotated text corpus. The Penn Treebank was one of the most used corpora. It consisted of IBM computer manuals, transcribed telephone conversations, and other texts, together containing over 4.5 million words of American English, annotated using both part-of-speech tagging and syntactic bracketing. Japanese
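To make the annotation format concrete, here is a small Python sketch that reads a Penn-Treebank-style bracketed sentence into a nested structure; the sentence and the parser are simplified, invented examples in the general style of such annotation, not a real corpus excerpt or an official tool.

```python
# Parse a simplified Penn-Treebank-style bracketing such as
# (S (NP (DT The) (NN cat)) (VP (VBD sat))) into nested lists.
import re

def parse(treebank_string):
    tokens = re.findall(r"\(|\)|[^\s()]+", treebank_string)
    def read(pos):
        assert tokens[pos] == "("
        label = tokens[pos + 1]       # phrase or part-of-speech label
        pos += 2
        children = []
        while tokens[pos] != ")":
            if tokens[pos] == "(":
                child, pos = read(pos)     # nested constituent
            else:
                child, pos = tokens[pos], pos + 1  # word token
            children.append(child)
        return [label] + children, pos + 1
    tree, _ = read(0)
    return tree

tree = parse("(S (NP (DT The) (NN cat)) (VP (VBD sat)))")
print(tree)
# ['S', ['NP', ['DT', 'The'], ['NN', 'cat']], ['VP', ['VBD', 'sat']]]
```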
https://en.wikipedia.org/wiki/Claude%20Shannon
Claude Elwood Shannon (April 30, 1916 – February 24, 2001) was an American mathematician, electrical engineer, computer scientist and cryptographer known as the "father of information theory". He is credited alongside George Boole for laying the foundations of the Information Age. As a 21-year-old master's degree student at the Massachusetts Institute of Technology (MIT), he wrote his thesis demonstrating that electrical applications of Boolean algebra could construct any logical numerical relationship. Shannon contributed to the field of cryptanalysis for national defense of the United States during World War II, including his fundamental work on codebreaking and secure telecommunications, writing a paper which would be considered one of the foundational pieces of modern cryptography. His mathematical theory of information laid the foundations for the field of information theory, with his famous paper being called the "Magna Carta of the Information Age" by Scientific American. He also made contributions to artificial intelligence. His achievements are said to be on par with those of Albert Einstein and Alan Turing in their fields. Biography Childhood The Shannon family lived in Gaylord, Michigan, and Claude was born in a hospital in nearby Petoskey. His father, Claude Sr. (1862–1934), was a businessman and, for a while, a judge of probate in Gaylord. His mother, Mabel Wolf Shannon (1890–1945), was a language teacher, who also served as the principal of Gaylord High School. Claude Sr. was a descendant of New Jersey settlers, while Mabel was a child of German immigrants. Shannon's family was active in their Methodist Church during his youth. Most of the first 16 years of Shannon's life were spent in Gaylord, where he attended public school, graduating from Gaylord High School in 1932. Shannon showed an inclination towards mechanical and electrical things. His best subjects were science and mathematics. At home, he constructed such devices as models of planes,
https://en.wikipedia.org/wiki/Continuum%20hypothesis
In mathematics, specifically set theory, the continuum hypothesis (abbreviated CH) is a hypothesis about the possible sizes of infinite sets. It states that there is no set whose cardinality is strictly between that of the integers and the real numbers, or equivalently, that any subset of the real numbers is either finite, is countably infinite, or has the cardinality of the real numbers. In Zermelo–Fraenkel set theory with the axiom of choice (ZFC), this is equivalent to the following equation in aleph numbers: $2^{\aleph_0} = \aleph_1$, or even shorter with beth numbers: $\beth_1 = \aleph_1$. The continuum hypothesis was advanced by Georg Cantor in 1878, and establishing its truth or falsehood is the first of Hilbert's 23 problems presented in 1900. The answer to this problem is independent of ZFC, so that either the continuum hypothesis or its negation can be added as an axiom to ZFC set theory, with the resulting theory being consistent if and only if ZFC is consistent. This independence was proved in 1963 by Paul Cohen, complementing earlier work by Kurt Gödel in 1940. The name of the hypothesis comes from the term the continuum for the real numbers. History Cantor believed the continuum hypothesis to be true and for many years tried in vain to prove it. It became the first on David Hilbert's list of important open questions that was presented at the International Congress of Mathematicians in the year 1900 in Paris. Axiomatic set theory was at that point not yet formulated. Kurt Gödel proved in 1940 that the negation of the continuum hypothesis, i.e., the existence of a set with intermediate cardinality, could not be proved in standard set theory. The second half of the independence of the continuum hypothesis – i.e., unprovability of the nonexistence of an intermediate-sized set – was proved in 1963 by Paul Cohen. Cardinality of infinite sets Two sets are said to have the same cardinality or cardinal number if there exists a bijection (a one-to-one correspondence) between them. Intuitively, for two sets S and T to have the same cardinality means that it is possible to "pair off" elements of S with elements of T in such a fashion that every element of S is paired off with exactly one element of T and vice
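As a short worked illustration of cardinality via bijection (a standard textbook example, not taken from the article): the even natural numbers have the same cardinality as all natural numbers, even though they form a proper subset.

```latex
% The map f pairs every natural number with a distinct even number,
% and every even number is hit exactly once, so f is a bijection and
% |\mathbb{N}| = |2\mathbb{N}| = \aleph_0.
f : \mathbb{N} \to 2\mathbb{N}, \qquad f(n) = 2n
```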
https://en.wikipedia.org/wiki/Cryptanalysis
Cryptanalysis (from the Greek kryptós, "hidden", and analýein, "to analyze") refers to the process of analyzing information systems in order to understand hidden aspects of the systems. Cryptanalysis is used to breach cryptographic security systems and gain access to the contents of encrypted messages, even if the cryptographic key is unknown. In addition to mathematical analysis of cryptographic algorithms, cryptanalysis includes the study of side-channel attacks that do not target weaknesses in the cryptographic algorithms themselves, but instead exploit weaknesses in their implementation. Even though the goal has been the same, the methods and techniques of cryptanalysis have changed drastically through the history of cryptography, adapting to increasing cryptographic complexity, ranging from the pen-and-paper methods of the past, through machines like the British Bombes and Colossus computers at Bletchley Park in World War II, to the mathematically advanced computerized schemes of the present. Methods for breaking modern cryptosystems often involve solving carefully constructed problems in pure mathematics, the best-known being integer factorization. Overview In encryption, confidential information (called the "plaintext") is sent securely to a recipient by the sender first converting it into an unreadable form ("ciphertext") using an encryption algorithm. The ciphertext is sent through an insecure channel to the recipient. The recipient decrypts the ciphertext by applying an inverse decryption algorithm, recovering the plaintext. To decrypt the ciphertext, the recipient requires secret knowledge shared with the sender, usually a string of letters, numbers, or bits, called a cryptographic key. The concept is that even if an unauthorized person gets access to the ciphertext during transmission, without the secret key they cannot convert it back to plaintext. Encryption has been used throughout history to send important military, diplomatic and commercial mess
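To ground the plaintext/ciphertext/key vocabulary, a minimal Python sketch of a toy XOR cipher; it is shown only to illustrate the terms, is not from the article, and is emphatically not a secure algorithm.

```python
# Toy symmetric cipher: XOR each plaintext byte with a repeating key.
# Illustrates plaintext -> ciphertext -> plaintext round-tripping with
# a shared secret key. XOR with a short repeating key is NOT secure;
# it is a classic target of elementary cryptanalysis.
from itertools import cycle

def xor_cipher(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ k for b, k in zip(data, cycle(key)))

key = b"secret"
plaintext = b"attack at dawn"
ciphertext = xor_cipher(plaintext, key)  # unreadable without the key
recovered = xor_cipher(ciphertext, key)  # XOR is its own inverse
assert recovered == plaintext
print(ciphertext.hex())
```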
https://en.wikipedia.org/wiki/Compiler
In computing, a compiler is a computer program that translates computer code written in one programming language (the source language) into another language (the target language). The name "compiler" is primarily used for programs that translate source code from a high-level programming language to a low-level programming language (e.g. assembly language, object code, or machine code) to create an executable program. There are many different types of compilers which produce output in different useful forms. A cross-compiler produces code for a different CPU or operating system than the one on which the cross-compiler itself runs. A bootstrap compiler is often a temporary compiler, used for compiling a more permanent or better optimised compiler for a language. Related software includes decompilers, programs that translate from a low-level language to a higher-level one; source-to-source compilers or transpilers, programs that translate between high-level languages; and language rewriters, usually programs that translate the form of expressions without a change of language. A compiler-compiler is a compiler that produces a compiler (or part of one), often in a generic and reusable way so as to be able to produce many differing compilers. A compiler is likely to perform some or all of the following operations, often called phases: preprocessing, lexical analysis, parsing, semantic analysis (syntax-directed translation), conversion of input programs to an intermediate representation, code optimization and machine-specific code generation. Compilers generally implement these phases as modular components, promoting efficient design and correctness of transformations of source input to target output. Program faults caused by incorrect compiler behavior can be very difficult to track down and work around; therefore, compiler implementers invest significant effort to ensure compiler correctness. Compilers are not the only language processor us
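As a sketch of the phases just listed, a toy Python example that compiles arithmetic expressions to an invented stack language; the grammar, the PUSH/ADD/MUL opcodes, and the whole pipeline are made up for illustration, standing in for real lexical analysis, parsing, and code generation.

```python
import re

def lex(source):                      # lexical analysis: text -> tokens
    return re.findall(r"\d+|[+*()]", source)

def parse(tokens):                    # parsing: tokens -> syntax tree
    def expr(pos):
        node, pos = term(pos)
        while pos < len(tokens) and tokens[pos] == "+":
            right, pos = term(pos + 1)
            node = ("+", node, right)
        return node, pos
    def term(pos):
        node, pos = atom(pos)
        while pos < len(tokens) and tokens[pos] == "*":
            right, pos = atom(pos + 1)
            node = ("*", node, right)
        return node, pos
    def atom(pos):
        if tokens[pos] == "(":
            node, pos = expr(pos + 1)
            return node, pos + 1      # skip the closing ")"
        return int(tokens[pos]), pos + 1
    tree, _ = expr(0)
    return tree

def codegen(node):                    # code generation: tree -> stack code
    if isinstance(node, int):
        return [("PUSH", node)]
    op, left, right = node
    return codegen(left) + codegen(right) + [("ADD" if op == "+" else "MUL",)]

for instr in codegen(parse(lex("2 + 3 * 4"))):
    print(instr)  # PUSH 2, PUSH 3, PUSH 4, MUL, ADD
```

The three functions mirror the modular-phase design described above: each phase consumes the previous phase's output, so any one of them can be tested or replaced independently.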
https://en.wikipedia.org/wiki/Key%20size
In cryptography, key size or key length refers to the number of bits in a key used by a cryptographic algorithm (such as a cipher). Key length defines the upper-bound on an algorithm's security (i.e. a logarithmic measure of the fastest known attack against an algorithm), because the security of all algorithms can be violated by brute-force attacks. Ideally, the lower-bound on an algorithm's security is by design equal to the key length (that is, the algorithm's design does not detract from the degree of security inherent in the key length). Most symmetric-key algorithms are designed to have security equal to their key length. However, after design, a new attack might be discovered. For instance, Triple DES was designed to have a 168-bit key, but an attack of complexity 2^112 is now known (i.e. Triple DES now only has 112 bits of security, and of the 168 bits in the key the attack has rendered 56 'ineffective' towards security). Nevertheless, as long as the security (understood as "the amount of effort it would take to gain access") is sufficient for a particular application, then it does not matter if key length and security coincide. This is important for asymmetric-key algorithms, because no such algorithm is known to satisfy this property; elliptic curve cryptography comes the closest with an effective security of roughly half its key length. Significance Keys are used to control the operation of a cipher so that only the correct key can convert encrypted text (ciphertext) to plaintext. All commonly-used ciphers are based on publicly known algorithms or are open source and so it is only the difficulty of obtaining the key that determines the security of the system, provided that there is no analytic attack (i.e. a "structural weakness" in the algorithms or protocols used), and assuming that the key is not otherwise available (such as via theft, extortion, or compromise of computer systems). The widely accepted notion that the security of the system should depend o
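As a back-of-the-envelope illustration of why key length bounds brute-force security, a small Python sketch; the guessing rate of 10^12 keys per second is an arbitrary assumption chosen for the example, not a statement about any real hardware.

```python
# Rough brute-force cost: a k-bit key has 2**k possible values, so an
# exhaustive search tries 2**(k-1) keys on average. The rate below is
# an arbitrary illustrative assumption.
SECONDS_PER_YEAR = 31_557_600  # Julian year

def average_bruteforce_years(key_bits, guesses_per_second=10**12):
    return 2 ** (key_bits - 1) / guesses_per_second / SECONDS_PER_YEAR

for bits in (56, 112, 128, 256):
    print(f"{bits:3d}-bit key: ~{average_bruteforce_years(bits):.3e} years")

# Each extra bit doubles the cost, so while 56-bit (DES-sized) keys fall
# in well under a year at this rate, the 112 bits of effective security
# left to Triple DES after the attack above is still far beyond reach.
```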