https://en.wikipedia.org/wiki/Berlin%20Wall
The Berlin Wall was a guarded concrete barrier that encircled West Berlin of the Federal Republic of Germany (FRG; West Germany) from 1961 to 1989, separating it from East Berlin and the German Democratic Republic (GDR; East Germany). Construction of the Berlin Wall was commenced by the government of the GDR on 13 August 1961. It included guard towers placed along large concrete walls, accompanied by a wide area (later known as the "death strip") that contained anti-vehicle trenches, beds of nails and other defenses. The primary intention for the Wall's construction was to prevent East German citizens from fleeing to the West. The Soviet Bloc propaganda portrayed the Wall as protecting its population from "fascist elements conspiring to prevent the will of the people" from building a communist state in the GDR. The authorities officially referred to the Berlin Wall as the Anti-Fascist Protection Rampart. The West Berlin city government sometimes referred to it as the "Wall of Shame", a term coined by mayor Willy Brandt in reference to the Wall's restriction on freedom of movement. Along with the separate and much longer inner German border, which demarcated the border between East and West Germany, it came to symbolize physically the Iron Curtain that separated the Western Bloc and Soviet satellite states of the Eastern Bloc during the Cold War. Before the Wall's erection, 3.5 million East Germans circumvented Eastern Bloc emigration restrictions and defected from the GDR, many by crossing over the border from East Berlin into West Berlin; from there they could then travel to West Germany and to other Western European countries. Between 1961 and 1989, the Wall prevented almost all such emigration. During this period, over 100,000 people attempted to escape, and over 5,000 people succeeded in escaping over the Wall, with an estimated death toll of those murdered by East German authorities ranging from 136 to more than 200 in and around Berlin. In 1989, a
https://en.wikipedia.org/wiki/Bluetooth
Bluetooth is a short-range wireless technology standard that is used for exchanging data between fixed and mobile devices over short distances and building personal area networks (PANs). In the most widely used mode, transmission power is limited to 2.5 milliwatts, giving it a very short range of up to about 10 metres (33 ft). It employs UHF radio waves in the ISM bands, from 2.402 GHz to 2.48 GHz. It is mainly used as an alternative to wire connections, to exchange files between nearby portable devices and connect cell phones and music players with wireless headphones. Bluetooth is managed by the Bluetooth Special Interest Group (SIG), which has more than 35,000 member companies in the areas of telecommunication, computing, networking, and consumer electronics. The IEEE standardized Bluetooth as IEEE 802.15.1, but no longer maintains the standard. The Bluetooth SIG oversees development of the specification, manages the qualification program, and protects the trademarks. A manufacturer must meet Bluetooth SIG standards to market it as a Bluetooth device. A network of patents applies to the technology, which are licensed to individual qualifying devices. Around 4.7 billion Bluetooth integrated circuit chips are shipped annually. Etymology The name "Bluetooth" was proposed in 1997 by Jim Kardach of Intel, one of the founders of the Bluetooth SIG. The name was inspired by a conversation with Sven Mattisson who related Scandinavian history through tales from Frans G. Bengtsson's The Long Ships, a historical novel about Vikings and the 10th-century Danish king Harald Bluetooth. Upon discovering a picture of the runestone of Harald Bluetooth in the book A History of the Vikings by Gwyn Jones, Kardach proposed Bluetooth as the codename for the short-range wireless program which is now called Bluetooth. According to Bluetooth's official website, Bluetooth is the Anglicised version of the Scandinavian Blåtand/Blåtann (or in Old Norse blátǫnn). It was the epithet of King Harald Bluetooth, who united th
https://en.wikipedia.org/wiki/Bluetooth%20Special%20Interest%20Group
The Bluetooth Special Interest Group (Bluetooth SIG) is the standards organization that oversees the development of Bluetooth standards and the licensing of the Bluetooth technologies and trademarks to manufacturers. The SIG is a not-for-profit, non-stock corporation founded in September 1998. The SIG is headquartered in Kirkland, Washington. The SIG does not make, manufacture or sell Bluetooth-enabled products. Introduction Bluetooth technology provides a way to exchange information between wireless devices such as PDAs, laptops, computers, printers and digital cameras via a secure, low-cost, globally available short-range radio frequency band. Originally developed by Ericsson, Bluetooth technology is now used in many different products by many different manufacturers. These manufacturers must be either Associate or Promoter members of the Bluetooth SIG (see below) before they are granted early access to the Bluetooth specifications, but published Bluetooth specifications are available online via the Bluetooth SIG website, bluetooth.com. The SIG owns the Bluetooth word mark, figure mark and combination mark. These trademarks are licensed out for use to companies that are incorporating Bluetooth wireless technology into their products. To become a licensee, a company must become a member of the Bluetooth SIG. The SIG also manages the Bluetooth SIG Qualification program, a certification process required for any product using Bluetooth wireless technology and a pre-condition of the intellectual property license for Bluetooth technology. The main tasks for the SIG are to publish the Bluetooth specifications, protect the Bluetooth trademarks and evangelize Bluetooth wireless technology. In 2016, the SIG introduced a new visual and creative identity to support Bluetooth technology as the catalyst for the Internet of Things (IoT). This change included an updated logo, a new tagline and deprecation of the Bluetooth Smart and Bluetooth Smart Ready logos. At its incept
https://en.wikipedia.org/wiki/Binary-coded%20decimal
In computing and electronic systems, binary-coded decimal (BCD) is a class of binary encodings of decimal numbers where each digit is represented by a fixed number of bits, usually four or eight. Sometimes, special bit patterns are used for a sign or other indications (e.g. error or overflow). In byte-oriented systems (i.e. most modern computers), the term unpacked BCD usually implies a full byte for each digit (often including a sign), whereas packed BCD typically encodes two digits within a single byte by taking advantage of the fact that four bits are enough to represent the range 0 to 9. The precise four-bit encoding, however, may vary for technical reasons (e.g. Excess-3). The ten states representing a BCD digit are sometimes called tetrades (the nibble typically needed to hold them is also known as a tetrade) while the unused don't-care states are named pseudo-tetrades, pseudo-decimals or pseudo-decimal digits. BCD's main virtue, in comparison to binary positional systems, is its more accurate representation and rounding of decimal quantities, as well as its ease of conversion into conventional human-readable representations. Its principal drawbacks are a slight increase in the complexity of the circuits needed to implement basic arithmetic as well as slightly less dense storage. BCD was used in many early decimal computers, and is implemented in the instruction set of machines such as the IBM System/360 series and its descendants, Digital Equipment Corporation's VAX, the Burroughs B1700, and the Motorola 68000-series processors. BCD per se is not as widely used as in the past, and is unavailable or limited in newer instruction sets (e.g., ARM; x86 in long mode). However, decimal fixed-point and decimal floating-point formats are still important and continue to be used in financial, commercial, and industrial computing, where the subtle conversion and fractional rounding errors that are inherent in binary floating point formats cannot be tolerated. Background BCD takes
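As a small illustration of the packed form just described, the following C sketch (helper names invented for the example) packs a two-digit decimal value into one byte, one digit per nibble, and unpacks it again:

```c
#include <stdio.h>
#include <stdint.h>

/* Pack a two-digit decimal value (0-99) into one byte of packed BCD:
   the tens digit goes in the high nibble, the ones digit in the low nibble. */
static uint8_t bcd_pack(unsigned value) {
    return (uint8_t)(((value / 10) << 4) | (value % 10));
}

/* Recover the decimal value from a packed BCD byte. */
static unsigned bcd_unpack(uint8_t bcd) {
    return (bcd >> 4) * 10 + (bcd & 0x0F);
}

int main(void) {
    unsigned value = 42;
    uint8_t bcd = bcd_pack(value);   /* 0x42: each hex digit is a decimal digit */
    printf("%u packs to 0x%02X, unpacks to %u\n", value, (unsigned)bcd, bcd_unpack(bcd));
    return 0;
}
```

The value 42 packs to the byte 0x42, which is exactly the property that makes packed BCD easy to read in hex dumps.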
https://en.wikipedia.org/wiki/Binomial%20distribution
In probability theory and statistics, the binomial distribution with parameters n and p is the discrete probability distribution of the number of successes in a sequence of n independent experiments, each asking a yes–no question, and each with its own Boolean-valued outcome: success (with probability p) or failure (with probability q = 1 − p). A single success/failure experiment is also called a Bernoulli trial or Bernoulli experiment, and a sequence of outcomes is called a Bernoulli process; for a single trial, i.e., n = 1, the binomial distribution is a Bernoulli distribution. The binomial distribution is the basis for the popular binomial test of statistical significance. The binomial distribution is frequently used to model the number of successes in a sample of size n drawn with replacement from a population of size N. If the sampling is carried out without replacement, the draws are not independent and so the resulting distribution is a hypergeometric distribution, not a binomial one. However, for N much larger than n, the binomial distribution remains a good approximation, and is widely used. Definitions Probability mass function In general, if the random variable X follows the binomial distribution with parameters n ∈ ℕ and p ∈ [0,1], we write X ~ B(n, p). The probability of getting exactly k successes in n independent Bernoulli trials (with the same rate p) is given by the probability mass function: Pr(X = k) = C(n, k) p^k (1 − p)^(n − k) for k = 0, 1, 2, ..., n, where C(n, k) = n! / (k! (n − k)!) is the binomial coefficient, hence the name of the distribution. The formula can be understood as follows: k successes occur with probability p^k and n − k failures occur with probability (1 − p)^(n − k). However, the k successes can occur anywhere among the n trials, and there are C(n, k) different ways of distributing k successes in a sequence of n trials. In creating reference tables for binomial distribution probability, usually the table is filled in up to n/2 values. This is because for k > n/2, the probability can be calculated by its complement a
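A short C sketch of the probability mass function given above; the function names are illustrative, not from any particular library (compile with -lm for the math routines):

```c
#include <stdio.h>
#include <math.h>

/* Binomial coefficient C(n, k), computed iteratively in floating point. */
static double binomial_coefficient(int n, int k) {
    double c = 1.0;
    for (int i = 1; i <= k; i++)
        c = c * (n - k + i) / i;
    return c;
}

/* Probability of exactly k successes in n trials with success probability p:
   C(n, k) * p^k * (1 - p)^(n - k). */
static double binomial_pmf(int k, int n, double p) {
    return binomial_coefficient(n, k) * pow(p, k) * pow(1.0 - p, n - k);
}

int main(void) {
    /* Probability of exactly 3 heads in 5 fair coin tosses. */
    printf("P(X = 3) for X ~ B(5, 0.5): %.5f\n", binomial_pmf(3, 5, 0.5));
    return 0;
}
```

For five fair coin tosses it prints C(5, 3) / 2^5 = 10/32 = 0.31250.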
https://en.wikipedia.org/wiki/Braille
Braille is a tactile writing system used by people who are visually impaired. It can be read either on embossed paper or by using refreshable braille displays that connect to computers and smartphone devices. Braille can be written using a slate and stylus, a braille writer, an electronic braille notetaker or with the use of a computer connected to a braille embosser. Braille is named after its creator, Louis Braille, a Frenchman who lost his sight as a result of a childhood accident. In 1824, at the age of fifteen, he developed the braille code based on the French alphabet as an improvement on night writing. He published his system, which subsequently included musical notation, in 1829. The second revision, published in 1837, was the first binary form of writing developed in the modern era. Braille characters are formed using a combination of six raised dots arranged in a 3 × 2 matrix, called the braille cell. The number and arrangement of these dots distinguishes one character from another. Since the various braille alphabets originated as transcription codes for printed writing, the mappings (sets of character designations) vary from language to language, and even within one; in English Braille there are 3 levels of braille: uncontracted braille, a letter-by-letter transcription used for basic literacy; contracted braille, an addition of abbreviations and contractions used as a space-saving mechanism; and grade 3, various non-standardized personal stenography that is less commonly used. In addition to braille text (letters, punctuation, contractions), it is also possible to create embossed illustrations and graphs, with the lines either solid or made of series of dots, arrows, and bullets that are larger than braille dots. A full braille cell includes six raised dots arranged in two columns, each column having three dots. The dot positions are identified by numbers from one to six. There are 64 possible combinations, including no dots at all for a word spa
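The 64 combinations of the six-dot cell map naturally onto bit patterns. The sketch below assumes the Unicode braille-patterns convention (code points starting at U+2800, with dot i setting bit i − 1) to print a cell from a list of raised dots; the function name is made up for the example:

```c
#include <stdio.h>

/* Print the braille cell with the given raised dots (numbered 1-6) as UTF-8.
   Assumes the Unicode braille patterns block: code point U+2800 + bit mask,
   where dot i corresponds to bit (i - 1). */
static void print_cell(const int *dots, int ndots) {
    unsigned mask = 0;
    for (int i = 0; i < ndots; i++)
        mask |= 1u << (dots[i] - 1);

    /* UTF-8 encoding of code point 0x2800 + mask (three bytes, since the
       whole six-dot range 0x2800-0x283F sits in the U+0800-U+FFFF band). */
    unsigned cp = 0x2800 + mask;
    putchar(0xE0 | (cp >> 12));
    putchar(0x80 | ((cp >> 6) & 0x3F));
    putchar(0x80 | (cp & 0x3F));
}

int main(void) {
    int dots_l[] = {1, 2, 3};   /* dots 1-2-3: the letter "l" in English braille */
    print_cell(dots_l, 3);
    putchar('\n');
    return 0;
}
```

Dots 1-2-3 give code point U+2807, the cell for the letter "l" in English braille.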
https://en.wikipedia.org/wiki/Bijection
A bijection is a function that is both injective (one-to-one) and surjective (onto). In other words, every element in the codomain of the function is mapped to by exactly one element in the domain of the function. Equivalently, a bijection is a binary relation between two sets, such that each element of one set is paired with exactly one element of the other set, and each element of the other set is paired with exactly one element of the first set. A bijection is also called a bijective function, one-to-one correspondence, or invertible function. The term one-to-one correspondence must not be confused with one-to-one function, which refers to an injective function. A bijection from a set X to a set Y has an inverse function from Y to X. There exists a bijection between two sets if and only if they have the same cardinal number, which, in the case of finite sets, is simply the number of their elements. A bijective function from a set to itself is also called a permutation, and the set of all permutations of a set forms its symmetric group. Some bijections with further properties have received specific names, which include automorphisms, isomorphisms, homeomorphisms, diffeomorphisms, permutation groups, and most geometric transformations. Galois correspondences are bijections between sets of mathematical objects of apparently very different nature. Definition For a pairing between X and Y (where Y need not be different from X) to be a bijection, four properties must hold: each element of X must be paired with at least one element of Y, no element of X may be paired with more than one element of Y, each element of Y must be paired with at least one element of X, and no element of Y may be paired with more than one element of X. Satisfying properties (1) and (2) means that a pairing is a function with domain X. It is more common to see properties (1) and (2) written as a single statement: Every element of X is paired with exactl
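For finite sets, the four properties above reduce to a simple count: a map from {0, ..., n−1} to {0, ..., n−1} is a bijection exactly when every target value is hit exactly once. A minimal C sketch with array-encoded functions (names invented for the example):

```c
#include <stdio.h>
#include <stdbool.h>

/* Return true if map[], viewed as a function {0,...,n-1} -> {0,...,n-1},
   is a bijection: every target value is hit exactly once. */
static bool is_bijection(const int *map, int n) {
    int hits[16] = {0};                        /* n is assumed to be at most 16 here */
    for (int i = 0; i < n; i++) {
        if (map[i] < 0 || map[i] >= n)
            return false;                      /* value outside the codomain */
        hits[map[i]]++;
    }
    for (int i = 0; i < n; i++)
        if (hits[i] != 1)
            return false;                      /* missed (not surjective) or repeated (not injective) */
    return true;
}

int main(void) {
    int f[] = {2, 0, 3, 1};   /* a permutation of {0,1,2,3}: bijective */
    int g[] = {2, 0, 2, 1};   /* 2 is hit twice and 3 never: not bijective */
    printf("f is %sa bijection\n", is_bijection(f, 4) ? "" : "not ");
    printf("g is %sa bijection\n", is_bijection(g, 4) ? "" : "not ");
    return 0;
}
```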
https://en.wikipedia.org/wiki/Biochemistry
Biochemistry or biological chemistry is the study of chemical processes within and relating to living organisms. A sub-discipline of both chemistry and biology, biochemistry may be divided into three fields: structural biology, enzymology, and metabolism. Over the last decades of the 20th century, biochemistry has become successful at explaining living processes through these three disciplines. Almost all areas of the life sciences are being uncovered and developed through biochemical methodology and research. Biochemistry focuses on understanding the chemical basis which allows biological molecules to give rise to the processes that occur within living cells and between cells, in turn relating greatly to the understanding of tissues and organs, as well as organism structure and function. Biochemistry is closely related to molecular biology, which is the study of the molecular mechanisms of biological phenomena. Much of biochemistry deals with the structures, bonding, functions, and interactions of biological macromolecules, such as proteins, nucleic acids, carbohydrates, and lipids. They provide the structure of cells and perform many of the functions associated with life. The chemistry of the cell also depends upon the reactions of small molecules and ions. These can be inorganic (for example, water and metal ions) or organic (for example, the amino acids, which are used to synthesize proteins). The mechanisms used by cells to harness energy from their environment via chemical reactions are known as metabolism. The findings of biochemistry are applied primarily in medicine, nutrition and agriculture. In medicine, biochemists investigate the causes and cures of diseases. Nutrition studies how to maintain health and wellness and also the effects of nutritional deficiencies. In agriculture, biochemists investigate soil and fertilizers, with the goal of improving crop cultivation, crop storage, and pest control. In recent decades, biochemical principles a
https://en.wikipedia.org/wiki/Boolean%20algebra%20%28structure%29
In abstract algebra, a Boolean algebra or Boolean lattice is a complemented distributive lattice. This type of algebraic structure captures essential properties of both set operations and logic operations. A Boolean algebra can be seen as a generalization of a power set algebra or a field of sets, or its elements can be viewed as generalized truth values. It is also a special case of a De Morgan algebra and a Kleene algebra (with involution). Every Boolean algebra gives rise to a Boolean ring, and vice versa, with ring multiplication corresponding to conjunction or meet ∧, and ring addition to exclusive disjunction or symmetric difference (not disjunction ∨). However, the theory of Boolean rings has an inherent asymmetry between the two operators, while the axioms and theorems of Boolean algebra express the symmetry of the theory described by the duality principle. History The term "Boolean algebra" honors George Boole (1815–1864), a self-educated English mathematician. He introduced the algebraic system initially in a small pamphlet, The Mathematical Analysis of Logic, published in 1847 in response to an ongoing public controversy between Augustus De Morgan and William Hamilton, and later as a more substantial book, The Laws of Thought, published in 1854. Boole's formulation differs from that described above in some important respects. For example, conjunction and disjunction in Boole were not a dual pair of operations. Boolean algebra emerged in the 1860s, in papers written by William Jevons and Charles Sanders Peirce. The first systematic presentation of Boolean algebra and distributive lattices is owed to the 1890 Vorlesungen of Ernst Schröder. The first extensive treatment of Boolean algebra in English is A. N. Whitehead's 1898 Universal Algebra. Boolean algebra as an axiomatic algebraic structure in the modern axiomatic sense begins with a 1904 paper by Edward V. Huntington. Boolean algebra came of age as serious mathematics with the work of Marshall S
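The correspondence between Boolean algebras and Boolean rings mentioned above can be written out explicitly. As a sketch of the standard translations (not tied to any particular source), with ring multiplication giving meet and symmetric difference giving ring addition:

```latex
% From a Boolean ring (R, +, \cdot, 0, 1) with x^2 = x (hence x + x = 0):
x \lor y = x + y + x \cdot y, \qquad x \land y = x \cdot y, \qquad \lnot x = 1 + x
% From a Boolean algebra back to a Boolean ring (addition = symmetric difference):
x + y = (x \land \lnot y) \lor (\lnot x \land y), \qquad x \cdot y = x \land y
```

As a quick check, x ∨ x = x + x + x·x = x, as expected, since x + x = 0 in a ring of characteristic 2.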
https://en.wikipedia.org/wiki/Bandwidth%20%28signal%20processing%29
Bandwidth is the difference between the upper and lower frequencies in a continuous band of frequencies. It is typically measured in hertz, and depending on context, may specifically refer to passband bandwidth or baseband bandwidth. Passband bandwidth is the difference between the upper and lower cutoff frequencies of, for example, a band-pass filter, a communication channel, or a signal spectrum. Baseband bandwidth applies to a low-pass filter or baseband signal; the bandwidth is equal to its upper cutoff frequency. Bandwidth in hertz is a central concept in many fields, including electronics, information theory, digital communications, radio communications, signal processing, and spectroscopy and is one of the determinants of the capacity of a given communication channel. A key characteristic of bandwidth is that any band of a given width can carry the same amount of information, regardless of where that band is located in the frequency spectrum. For example, a 3 kHz band can carry a telephone conversation whether that band is at baseband (as in a POTS telephone line) or modulated to some higher frequency. However, wide bandwidths are easier to obtain and process at higher frequencies because the fractional bandwidth is smaller. Overview Bandwidth is a key concept in many telecommunications applications. In radio communications, for example, bandwidth is the frequency range occupied by a modulated carrier signal. An FM radio receiver's tuner spans a limited range of frequencies. A government agency (such as the Federal Communications Commission in the United States) may apportion the regionally available bandwidth to broadcast license holders so that their signals do not mutually interfere. In this context, bandwidth is also known as channel spacing. For other applications, there are other definitions. One definition of bandwidth, for a system, could be the range of frequencies over which the system produces a specified level of performance. A less strict and more practica
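A small worked example of the passband quantities above, with made-up cutoff frequencies: the bandwidth is the difference of the cutoffs, and the fractional bandwidth divides that by the centre frequency, which is why a band of fixed width looks relatively narrower at higher frequencies:

```c
#include <stdio.h>

int main(void) {
    /* Example band-pass cutoff frequencies, in hertz (illustrative values). */
    double f_low = 300.0, f_high = 3300.0;

    double bandwidth = f_high - f_low;          /* passband bandwidth */
    double f_center = (f_high + f_low) / 2.0;   /* arithmetic centre frequency */
    double fractional = bandwidth / f_center;   /* dimensionless fractional bandwidth */

    printf("bandwidth = %.0f Hz, centre = %.0f Hz, fractional = %.2f\n",
           bandwidth, f_center, fractional);
    return 0;
}
```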
https://en.wikipedia.org/wiki/Biopolymer
Biopolymers are natural polymers produced by the cells of living organisms. Like other polymers, biopolymers consist of monomeric units that are covalently bonded in chains to form larger molecules. There are three main classes of biopolymers, classified according to the monomers used and the structure of the biopolymer formed: polynucleotides, polypeptides, and polysaccharides. Polynucleotides, such as RNA and DNA, are long polymers of nucleotides. Polypeptides include proteins and shorter polymers of amino acids; some major examples include collagen, actin, and fibrin. Polysaccharides are linear or branched chains of sugar carbohydrates; examples include starch, cellulose, and alginate. Other examples of biopolymers include natural rubbers (polymers of isoprene), suberin and lignin (complex polyphenolic polymers), cutin and cutan (complex polymers of long-chain fatty acids), melanin, and polyhydroxyalkanoates (PHAs). In addition to their many essential roles in living organisms, biopolymers have applications in many fields including the food industry, manufacturing, packaging, and biomedical engineering. Biopolymers versus synthetic polymers A major defining difference between biopolymers and synthetic polymers can be found in their structures. All polymers are made of repetitive units called monomers. Biopolymers often have a well-defined structure, though this is not a defining characteristic (example: lignocellulose): The exact chemical composition and the sequence in which these units are arranged is called the primary structure, in the case of proteins. Many biopolymers spontaneously fold into characteristic compact shapes (see also "protein folding" as well as secondary structure and tertiary structure), which determine their biological functions and depend in a complicated way on their primary structures. Structural biology is the study of the structural properties of biopolymers. In contrast, most synthetic polymers have much simpler and more random (or st
https://en.wikipedia.org/wiki/BASIC
BASIC (Beginners' All-purpose Symbolic Instruction Code) is a family of general-purpose, high-level programming languages designed for ease of use. The original version was created by John G. Kemeny and Thomas E. Kurtz at Dartmouth College in 1963. They wanted to enable students in non-scientific fields to use computers. At the time, nearly all computers required writing custom software, which only scientists and mathematicians tended to learn. In addition to the programming language, Kemeny and Kurtz developed the Dartmouth Time Sharing System (DTSS), which allowed multiple users to edit and run BASIC programs simultaneously on remote terminals. This general model became popular on minicomputer systems like the PDP-11 and Data General Nova in the late 1960s and early 1970s. Hewlett-Packard produced an entire computer line for this method of operation, introducing the HP2000 series in the late 1960s and continuing sales into the 1980s. Many early video games trace their history to one of these versions of BASIC. The emergence of microcomputers in the mid-1970s led to the development of multiple BASIC dialects, including Microsoft BASIC in 1975. Due to the tiny main memory available on these machines, often 4 KB, a variety of Tiny BASIC dialects were also created. BASIC was available for almost any system of the era, and became the de facto programming language for home computer systems that emerged in the late 1970s. These PCs almost always had a BASIC interpreter installed by default, often in the machine's firmware or sometimes on a ROM cartridge. BASIC declined in popularity in the 1990s, as more powerful microcomputers came to market and programming languages with advanced features (such as Pascal and C) became tenable on such computers. In 1991, Microsoft released Visual Basic, combining an updated version of BASIC with a visual forms builder. This reignited use of the language and "VB" remains a major programming language in the form of VB.NET, while a hobbyist
https://en.wikipedia.org/wiki/Butterfly%20effect
In chaos theory, the butterfly effect is the sensitive dependence on initial conditions in which a small change in one state of a deterministic nonlinear system can result in large differences in a later state. The term is closely associated with the work of mathematician and meteorologist Edward Norton Lorenz. He noted that the butterfly effect is derived from the metaphorical example of the details of a tornado (the exact time of formation, the exact path taken) being influenced by minor perturbations such as a distant butterfly flapping its wings several weeks earlier. Lorenz originally used a seagull causing a storm but was persuaded to make it more poetic with the use of a butterfly and tornado by 1972. He discovered the effect when he observed runs of his weather model with initial condition data that were rounded in a seemingly inconsequential manner. He noted that the weather model would fail to reproduce the results of runs with the unrounded initial condition data. A very small change in initial conditions had created a significantly different outcome. The idea that small causes may have large effects in weather was earlier acknowledged by French mathematician and engineer Henri Poincaré. American mathematician and philosopher Norbert Wiener also contributed to this theory. Lorenz's work placed the concept of instability of the Earth's atmosphere onto a quantitative base and linked the concept of instability to the properties of large classes of dynamic systems which are undergoing nonlinear dynamics and deterministic chaos. The butterfly effect concept has since been used outside the context of weather science as a broad term for any situation where a small change is supposed to be the cause of larger consequences. History In The Vocation of Man (1800), Johann Gottlieb Fichte says "you could not remove a single grain of sand from its place without thereby ... changing something throughout all parts of the immeasurable whole". Chaos theory and the se
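A small numerical sketch of this sensitivity using the Lorenz system that Lorenz studied, with the classic parameters σ = 10, ρ = 28, β = 8/3 and two initial conditions differing by one part in 10^8; the crude Euler integration is only meant to show the divergence qualitatively:

```c
#include <stdio.h>
#include <math.h>

/* One Euler step of the Lorenz system dx/dt = s(y-x), dy/dt = x(r-z)-y,
   dz/dt = xy - bz, with the classic parameters s=10, r=28, b=8/3. */
static void lorenz_step(double *x, double *y, double *z, double dt) {
    const double s = 10.0, r = 28.0, b = 8.0 / 3.0;
    double dx = s * (*y - *x);
    double dy = *x * (r - *z) - *y;
    double dz = *x * *y - b * *z;
    *x += dt * dx;
    *y += dt * dy;
    *z += dt * dz;
}

int main(void) {
    /* Two trajectories whose initial x differs by one part in 10^8. */
    double x1 = 1.0, y1 = 1.0, z1 = 1.0;
    double x2 = 1.0 + 1e-8, y2 = 1.0, z2 = 1.0;
    const double dt = 0.001;

    for (int step = 0; step <= 40000; step++) {
        if (step % 5000 == 0) {
            double d = sqrt((x1 - x2) * (x1 - x2) + (y1 - y2) * (y1 - y2) +
                            (z1 - z2) * (z1 - z2));
            printf("t = %5.1f  separation = %.3e\n", step * dt, d);
        }
        lorenz_step(&x1, &y1, &z1, dt);
        lorenz_step(&x2, &y2, &z2, dt);
    }
    return 0;
}
```

The printed separation grows from 10^-8 to order one within a few tens of model time units, after which the two trajectories are effectively unrelated.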
https://en.wikipedia.org/wiki/Bletchley%20Park
Bletchley Park is an English country house and estate in Bletchley, Milton Keynes (Buckinghamshire) that became the principal centre of Allied code-breaking during the Second World War. The mansion was constructed during the years following 1883 for the financier and politician Sir Herbert Leon in the Victorian Gothic, Tudor, and Dutch Baroque styles, on the site of older buildings of the same name. During World War II, the estate housed the Government Code and Cypher School (GC&CS), which regularly penetrated the secret communications of the Axis Powers, most importantly the German Enigma and Lorenz ciphers. The GC&CS team of codebreakers included Alan Turing, Gordon Welchman, Hugh Alexander, Bill Tutte, and Stuart Milner-Barry. The nature of the work at Bletchley remained secret until many years after the war. According to the official historian of British Intelligence, the "Ultra" intelligence produced at Bletchley shortened the war by two to four years, and without it the outcome of the war would have been uncertain. The team at Bletchley Park devised automatic machinery to help with decryption, culminating in the development of Colossus, the world's first programmable digital electronic computer. Codebreaking operations at Bletchley Park came to an end in 1946 and all information about the wartime operations was classified until the mid-1970s. After the war it had various uses, including as a teacher-training college and local GPO headquarters. By 1990 the huts in which the codebreakers worked were being considered for demolition and redevelopment. The Bletchley Park Trust was formed in February 1992 to save large portions of the site from development. More recently, Bletchley Park has been open to the public, featuring interpretive exhibits and huts that have been rebuilt to appear as they did during their wartime operations. It receives hundreds of thousands of visitors annually. The separate National Museum of Computing, which includes a working replica Bom
https://en.wikipedia.org/wiki/Brian%20Kernighan
Brian Wilson Kernighan (; born January 30, 1942) is a Canadian computer scientist. He worked at Bell Labs and contributed to the development of Unix alongside Unix creators Ken Thompson and Dennis Ritchie. Kernighan's name became widely known through co-authorship of the first book on the C programming language (The C Programming Language) with Dennis Ritchie. Kernighan affirmed that he had no part in the design of the C language ("it's entirely Dennis Ritchie's work"). He authored many Unix programs, including ditroff. Kernighan is coauthor of the AWK and AMPL programming languages. The "K" of K&R C and of AWK both stand for "Kernighan". In collaboration with Shen Lin he devised well-known heuristics for two NP-complete optimization problems: graph partitioning and the travelling salesman problem. In a display of authorial equity, the former is usually called the Kernighan–Lin algorithm, while the latter is known as the Lin–Kernighan heuristic. Kernighan has been a professor of computer science at Princeton University since 2000 and is the director of undergraduate studies in the department of computer science. In 2015, he co-authored the book The Go Programming Language. Early life and education Kernighan was born in Toronto. He attended the University of Toronto between 1960 and 1964, earning his bachelor's degree in engineering physics. He received his Ph.D. in electrical engineering from Princeton University in 1969, completing a doctoral dissertation titled "Some graph partitioning problems related to program segmentation" under the supervision of Peter G. Weiner. Career and research Kernighan has held a professorship in the department of computer science at Princeton since 2000. Each fall he teaches a course called "Computers in Our World", which introduces the fundamentals of computing to non-majors. Kernighan was the software editor for Prentice Hall International. His "Software Tools" series spread the essence of "C/Unix thinking" with makeovers for
https://en.wikipedia.org/wiki/Borsuk%E2%80%93Ulam%20theorem
In mathematics, the Borsuk–Ulam theorem states that every continuous function from an n-sphere into Euclidean n-space maps some pair of antipodal points to the same point. Here, two points on a sphere are called antipodal if they are in exactly opposite directions from the sphere's center. Formally: if f : S^n → ℝ^n is continuous then there exists an x ∈ S^n such that f(−x) = f(x). The case n = 1 can be illustrated by saying that there always exists a pair of opposite points on the Earth's equator with the same temperature. The same is true for any circle. This assumes the temperature varies continuously in space, which is, however, not always the case. The case n = 2 is often illustrated by saying that at any moment, there is always a pair of antipodal points on the Earth's surface with equal temperatures and equal barometric pressures, assuming that both parameters vary continuously in space. Since temperature, pressure or other such physical variables do not necessarily vary continuously, the predictions of the theorem are unlikely to be true in some necessary sense (as following from a mathematical necessity). The Borsuk–Ulam theorem has several equivalent statements in terms of odd functions. Recall that S^n is the n-sphere and B^n is the n-ball: If g : S^n → ℝ^n is a continuous odd function, then there exists an x ∈ S^n such that g(x) = 0. If h : B^n → ℝ^n is a continuous function which is odd on S^(n−1) (the boundary of B^n), then there exists an x ∈ B^n such that h(x) = 0. History The first historical mention of the statement of the Borsuk–Ulam theorem predates its first proof, which was given by Karol Borsuk in 1933; the formulation of the problem was attributed there to Stanisław Ulam. Since then, many alternative proofs have been found by various authors. Equivalent statements The following statements are equivalent to the Borsuk–Ulam theorem. With odd functions A function g is called odd (aka antipodal or antipode-preserving) if for every x: g(−x) = −g(x). The Borsuk–Ulam theorem is equivalent to the following statement: A continuous odd function fro
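For the circle case n = 1, the temperature illustration can be made precise with the intermediate value theorem; the following is a standard sketch of that argument. Let T be a continuous temperature function on the circle and set

```latex
g(\theta) = T(\theta) - T(\theta + \pi), \qquad
g(\theta + \pi) = T(\theta + \pi) - T(\theta) = -g(\theta)
```

Since g(0) and g(π) have opposite signs (or one of them is zero), the intermediate value theorem yields a θ* with g(θ*) = 0, i.e. T(θ*) = T(θ* + π): an antipodal pair at the same temperature.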
https://en.wikipedia.org/wiki/Binary%20prefix
A binary prefix is a unit prefix that indicates a multiple of a unit of measurement by an integer power of two. The most commonly used binary prefixes are kibi (symbol Ki, meaning 2^10 = 1024), mebi (Mi, 2^20 = 1,048,576), and gibi (Gi, 2^30 = 1,073,741,824). They are most often used in information technology as multipliers of bit and byte, when expressing the capacity of storage devices or the size of computer files. The binary prefixes "kibi", "mebi", etc. were defined in 1999 by the International Electrotechnical Commission (IEC), in the IEC 60027-2 standard (Amendment 2). They were meant to replace the metric (SI) decimal power prefixes, such as "kilo" ("k", 10^3 = 1000), "mega" ("M", 10^6 = 1,000,000) and "giga" ("G", 10^9 = 1,000,000,000), that were commonly used in the computer industry to indicate the nearest powers of two. For example, a memory module whose capacity was specified by the manufacturer as "2 megabytes" or "2 MB" would hold 2 × 2^20 = 2,097,152 bytes, instead of 2 × 10^6 = 2,000,000. On the other hand, a hard disk whose capacity is specified by the manufacturer as "10 gigabytes" or "10 GB", holds 10 × 10^9 = 10,000,000,000 bytes, or a little more than that, but less than 10 × 2^30 = 10,737,418,240 bytes, and a file whose size is listed as "2.3 GB" may have a size closer to 2.3 × 2^30 ≈ 2,469,606,195 bytes or to 2.3 × 10^9 = 2,300,000,000 bytes, depending on the program or operating system providing that measurement. This kind of ambiguity is often confusing to computer system users and has resulted in lawsuits. The IEC 60027-2 binary prefixes have been incorporated in the ISO/IEC 80000 standard and are supported by other standards bodies, including the BIPM, which defines the SI system, the US NIST, and the European Union. Prior to the 1999 IEC standard, some industry organizations, such as the Joint Electron Device Engineering Council (JEDEC), attempted to redefine the terms kilobyte, megabyte, and gigabyte, and the corresponding symbols KB, MB, and GB in the binary sense, for use in storage capacity measurements. However, other computer industry sectors (such as magnetic storage)
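The decimal-versus-binary ambiguity described above is easy to reproduce; the C sketch below compares the byte counts that the SI and IEC interpretations give for the same nominal figures:

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    /* Decimal (SI) and binary (IEC) multipliers. */
    const uint64_t MB = 1000ULL * 1000, GB = 1000ULL * 1000 * 1000;
    const uint64_t MiB = 1024ULL * 1024, GiB = 1024ULL * 1024 * 1024;

    printf("\"2 MB\"   decimal: %llu bytes, binary: %llu bytes\n",
           (unsigned long long)(2 * MB), (unsigned long long)(2 * MiB));
    printf("\"10 GB\"  decimal: %llu bytes, binary: %llu bytes\n",
           (unsigned long long)(10 * GB), (unsigned long long)(10 * GiB));
    printf("\"2.3 GB\" decimal: %.0f bytes, binary: %.0f bytes\n",
           2.3 * (double)GB, 2.3 * (double)GiB);
    return 0;
}
```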
https://en.wikipedia.org/wiki/BQP
In computational complexity theory, bounded-error quantum polynomial time (BQP) is the class of decision problems solvable by a quantum computer in polynomial time, with an error probability of at most 1/3 for all instances. It is the quantum analogue to the complexity class BPP. A decision problem is a member of BQP if there exists a quantum algorithm (an algorithm that runs on a quantum computer) that solves the decision problem with high probability and is guaranteed to run in polynomial time. A run of the algorithm will correctly solve the decision problem with a probability of at least 2/3. Definition BQP can be viewed as the languages associated with certain bounded-error uniform families of quantum circuits. A language L is in BQP if and only if there exists a polynomial-time uniform family of quantum circuits {Q_n : n ∈ ℕ}, such that: for all n ∈ ℕ, Q_n takes n qubits as input and outputs 1 bit; for all x in L, Pr(Q_|x|(x) = 1) ≥ 2/3; for all x not in L, Pr(Q_|x|(x) = 1) ≤ 1/3. Alternatively, one can define BQP in terms of quantum Turing machines. A language L is in BQP if and only if there exists a polynomial quantum Turing machine that accepts L with an error probability of at most 1/3 for all instances. Similarly to other "bounded error" probabilistic classes, the choice of 1/3 in the definition is arbitrary. We can run the algorithm a constant number of times and take a majority vote to achieve any desired probability of correctness less than 1, using the Chernoff bound. The complexity class is unchanged by allowing error as high as 1/2 − n^(−c) on the one hand, or requiring error as small as 2^(−n^c) on the other hand, where c is any positive constant, and n is the length of input. A complete problem for Promise-BQP Similar to the notion of NP-completeness and other complete problems, we can define a complete problem as a problem that is in Promise-BQP and that every problem in Promise-BQP reduces to it in polynomial time. Here is an intuitive problem that is complete for efficient quantum computation, which stems
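The majority-vote amplification mentioned above can be quantified with a standard Hoeffding/Chernoff bound. As a sketch: if each of k independent runs is correct with probability at least 2/3, then

```latex
\Pr[\text{majority of } k \text{ runs is wrong}]
  \le \exp\!\left(-2k\left(\tfrac{2}{3} - \tfrac{1}{2}\right)^{2}\right)
  = e^{-k/18}
```

so O(m) repetitions suffice to push the error probability below 2^(−m), which is why the constant 1/3 in the definition is arbitrary.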
https://en.wikipedia.org/wiki/Bioleaching
Bioleaching is the extraction or liberation of metals from their ores through the use of living organisms. Bioleaching is one of several applications within biohydrometallurgy and several methods are used to treat ores or concentrates containing copper, zinc, lead, arsenic, antimony, nickel, molybdenum, gold, silver, and cobalt. Bioleaching falls into two broad categories. The first is the use of microorganisms to oxidize refractory minerals to release valuable metals such as gold and silver. Most commonly the minerals that are the target of oxidization are pyrite and arsenopyrite. The second category is leaching of sulphide minerals to release the associated metal, for example, leaching of pentlandite to release nickel, or the leaching of chalcocite, covellite or chalcopyrite to release copper. Process Bioleaching can involve numerous ferrous iron and sulfur oxidizing bacteria, including Acidithiobacillus ferrooxidans (formerly known as Thiobacillus ferrooxidans) and Acidithiobacillus thiooxidans (formerly known as Thiobacillus thiooxidans). As a general principle, in one proposed method of bacterial leaching known as Indirect Leaching, Fe3+ ions are used to oxidize the ore. This step is entirely independent of microbes. The role of the bacteria is further oxidation of the ore, but also the regeneration of the chemical oxidant Fe3+ from Fe2+. For example, bacteria catalyse the breakdown of the mineral pyrite (FeS2) by oxidising the sulfur and metal (in this case ferrous iron, Fe2+) using oxygen. This yields soluble products that can be further purified and refined to yield the desired metal. Pyrite leaching (FeS2): In the first step, disulfide is spontaneously oxidized to thiosulfate by ferric ion (Fe3+), which in turn is reduced to give ferrous ion (Fe2+): (1) FeS2 + 6 Fe3+ + 3 H2O → 7 Fe2+ + S2O3^2− + 6 H+ (spontaneous) The ferrous ion is then oxidized by bacteria using oxygen: (2) 4 Fe2+ + O2 + 4 H+ → 4 Fe3+ + 2 H2O (iron oxidizers) Thiosulfate is also oxidized by bacteria to give sulfate: (3) S2O3^2− + 2 O2 + H2O → 2 SO4^2− + 2 H+ (sulfur oxidizers)
https://en.wikipedia.org/wiki/Botany
Botany, also called plant science (or plant sciences), plant biology or phytology, is the science of plant life and a branch of biology. A botanist, plant scientist or phytologist is a scientist who specialises in this field. The term "botany" comes from the Ancient Greek word botanē, meaning "pasture", "herbs", "grass", or "fodder"; botanē is in turn derived from boskein, "to feed" or "to graze". Traditionally, botany has also included the study of fungi and algae by mycologists and phycologists respectively, with the study of these three groups of organisms remaining within the sphere of interest of the International Botanical Congress. Nowadays, botanists (in the strict sense) study approximately 410,000 species of land plants of which some 391,000 species are vascular plants (including approximately 369,000 species of flowering plants), and approximately 20,000 are bryophytes. Botany originated in prehistory as herbalism with the efforts of early humans to identify – and later cultivate – plants that were edible, poisonous, and possibly medicinal, making it one of the first endeavours of human investigation. Medieval physic gardens, often attached to monasteries, contained plants possibly having medicinal benefit. They were forerunners of the first botanical gardens attached to universities, founded from the 1540s onwards. One of the earliest was the Padua botanical garden. These gardens facilitated the academic study of plants. Efforts to catalogue and describe their collections were the beginnings of plant taxonomy, and led in 1753 to the binomial system of nomenclature of Carl Linnaeus that remains in use to this day for the naming of all biological species. In the 19th and 20th centuries, new techniques were developed for the study of plants, including methods of optical microscopy and live cell imaging, electron microscopy, analysis of chromosome number, plant chemistry and the structure and function of enzymes and other proteins. In the last two decades of the 20th ce
https://en.wikipedia.org/wiki/Cell%20%28biology%29
The cell is the basic structural and functional unit of all forms of life. Every cell consists of cytoplasm enclosed within a membrane, and contains many macromolecules such as proteins, DNA and RNA, as well as many small molecules of nutrients and metabolites. The term comes from the Latin word meaning 'small room'. Cells can acquire specialized functions and carry out various tasks such as replication, DNA repair, protein synthesis, and motility. Cells are capable of specialization and of movement. Most plant and animal cells are only visible under a light microscope, with dimensions between 1 and 100 micrometres. Electron microscopy gives a much higher resolution showing greatly detailed cell structure. Organisms can be classified as unicellular (consisting of a single cell such as bacteria) or multicellular (including plants and animals). Most unicellular organisms are classed as microorganisms. The study of cells and how they work has led to many other studies in related areas of biology, including: discovery of DNA, cancer systems biology, aging and developmental biology. Cell biology is the study of cells, which were discovered by Robert Hooke in 1665, who named them for their resemblance to cells inhabited by Christian monks in a monastery. Cell theory, first developed in 1839 by Matthias Jakob Schleiden and Theodor Schwann, states that all organisms are composed of one or more cells, that cells are the fundamental unit of structure and function in all living organisms, and that all cells come from pre-existing cells. Cells emerged on Earth about 4 billion years ago. Discovery With continual improvements made to microscopes over time, magnification technology became advanced enough to discover cells. This discovery is largely attributed to Robert Hooke, and began the scientific study of cells, known as cell biology. When observing a piece of cork under the microscope, he was able to see pores. This was shocking at the time as i
https://en.wikipedia.org/wiki/Dots%20and%20boxes
Dots and boxes is a pencil-and-paper game for two players (sometimes more). It was first published in the 19th century by French mathematician Édouard Lucas, who called it la pipopipette. It has gone by many other names, including dots and dashes, game of dots, dot to dot grid, boxes, and pigs in a pen. The game starts with an empty grid of dots. Usually two players take turns adding a single horizontal or vertical line between two unjoined adjacent dots. A player who completes the fourth side of a 1×1 box earns one point and takes another turn. A point is typically recorded by placing a mark that identifies the player in the box, such as an initial. The game ends when no more lines can be placed. The winner is the player with the most points. The board may be of any size grid. When short on time, or to learn the game, a 2×2 board (3×3 dots) is suitable. A 5×5 board, on the other hand, is good for experts. Strategy For most novice players, the game begins with a phase of more-or-less randomly connecting dots, where the only strategy is to avoid adding the third side to any box. This continues until all the remaining (potential) boxes are joined together into chains – groups of one or more adjacent boxes in which any move gives all the boxes in the chain to the opponent. At this point, players typically take all available boxes, then open the smallest available chain to their opponent. For example, a novice player faced with a situation like position 1 in the diagram on the right, in which some boxes can be captured, may take all the boxes in the chain, resulting in position 2. But with their last move, they have to open the next, larger chain, and the novice loses the game. A more experienced player faced with position 1 will instead play the double-cross strategy, taking all but 2 of the boxes in the chain and leaving position 3. The opponent will take these two boxes and then be forced to open the next chain. By achieving position 3, the more experienced player wins. The same double-cros
https://en.wikipedia.org/wiki/Broadcast%20domain
A broadcast domain is a logical division of a computer network, in which all nodes can reach each other by broadcast at the data link layer. A broadcast domain can be within the same LAN segment or it can be bridged to other LAN segments. In terms of current popular technologies, any computer connected to the same Ethernet repeater or switch is a member of the same broadcast domain. Further, any computer connected to the same set of interconnected switches/repeaters is a member of the same broadcast domain. Routers and other higher-layer devices form boundaries between broadcast domains. The notion of broadcast domain should be contrasted with that of collision domain, which would be all nodes on the same set of inter-connected repeaters, divided by switches and learning bridges. Collision domains are generally smaller than, and contained within, broadcast domains. While some data-link-layer devices are able to divide the collision domains, broadcast domains are only divided by layer 3 network devices such as routers or layer 3 switches. Separating VLANs divides broadcast domains as well. Further explanation The distinction between broadcast and collision domains comes about because simple Ethernet and similar systems use a shared transmission system. In simple Ethernet (without switches or bridges), data frames are transmitted to all other nodes on a network. Each receiving node checks the destination address of each frame, and simply ignores any frame not addressed to its own MAC address or the broadcast address. Switches act as buffers, receiving and analyzing the frames from each connected network segment. Frames destined for nodes connected to the originating segment are not forwarded by the switch. Frames destined for a specific node on a different segment are sent only to that segment. Only broadcast frames are forwarded to all other segments. This reduces unnecessary traffic and collisions. In such a switched network, transmitted frames may not be re
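A toy C sketch of the rule described above, under a deliberately simplified model (invented device names and topology): endpoints joined only through layer-2 devices are merged into one broadcast domain, while a router bounds the domain:

```c
#include <stdio.h>

#define NDEV 6

/* Toy model: layer-2 devices (switches, repeaters) extend a broadcast domain,
   layer-3 devices (routers) bound it. Names and topology are invented. */
enum kind { HOST, SWITCH, ROUTER };

static const char *name[NDEV] = {"pc1", "pc2", "sw1", "rtr1", "sw2", "pc3"};
static const enum kind kind_of[NDEV] = {HOST, HOST, SWITCH, ROUTER, SWITCH, HOST};

/* Physical links between devices, as index pairs into name[]. */
static const int link_tbl[][2] = {{0, 2}, {1, 2}, {2, 3}, {3, 4}, {4, 5}};

static int parent[NDEV];

static int find(int x) { return parent[x] == x ? x : (parent[x] = find(parent[x])); }

int main(void) {
    for (int i = 0; i < NDEV; i++) parent[i] = i;

    /* Merge the endpoints of a link only if neither endpoint is a router. */
    int nlinks = (int)(sizeof link_tbl / sizeof link_tbl[0]);
    for (int i = 0; i < nlinks; i++) {
        int a = link_tbl[i][0], b = link_tbl[i][1];
        if (kind_of[a] != ROUTER && kind_of[b] != ROUTER)
            parent[find(a)] = find(b);
    }

    /* Devices sharing a root belong to the same broadcast domain. */
    for (int i = 0; i < NDEV; i++)
        if (kind_of[i] != ROUTER)
            printf("%s is in the broadcast domain rooted at %s\n", name[i], name[find(i)]);
    return 0;
}
```

Here pc1, pc2 and sw1 end up in one domain and sw2 and pc3 in another, because the only path between them crosses the router.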
https://en.wikipedia.org/wiki/Beatmatching
Beatmatching or pitch cue is a disc jockey technique of pitch shifting or time stretching an upcoming track to match its tempo to that of the currently playing track, and to adjust them such that the beats (and, usually, the bars) are synchronized—e.g. the kicks and snares in two house records hit at the same time when both records are played simultaneously. Beatmatching is a component of beatmixing which employs beatmatching combined with equalization, attention to phrasing and track selection in an attempt to make a single mix that flows together and has a good structure. The technique was developed to keep people from leaving the dancefloor at the end of a song. These days it is considered basic among disc jockeys (DJs) in electronic dance music genres, and it is standard practice in clubs to keep a constant beat through the night, even if DJs change in the middle. Technique The beatmatching technique consists of the following steps: While a record is playing, start a second record playing, but only monitored through headphones, not being fed to the main PA system. Use gain (or trim) control on the mixer to match the levels of the two records. Restart and slip-cue the new record at the right time, on beat with the record currently playing. If the beat on the new record hits before the beat on the current record, then the new record is too fast; reduce the pitch and manually slow the speed of the new record to bring the beats back in sync. If the beat on the new record hits after the beat on the current record, then the new record is too slow; increase the pitch and manually increase the speed of the new record to bring the beats back in sync. Continue this process until the two records are in sync with each other. It can be difficult to sync the two records perfectly, so manual adjustment of the records is necessary to maintain the beat synchronization. Gradually fade in parts of the new track while fading out the old track. While in the mix, en
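The pitch adjustment implied by the steps above amounts to a simple tempo ratio; a C sketch with made-up BPM figures:

```c
#include <stdio.h>

/* Percentage pitch adjustment needed to match an incoming track's tempo
   to the track currently playing. */
static double pitch_adjust_percent(double current_bpm, double incoming_bpm) {
    return (current_bpm / incoming_bpm - 1.0) * 100.0;
}

int main(void) {
    double playing = 128.0;   /* tempo of the record on the main output */
    double incoming = 125.0;  /* tempo of the record cued in the headphones */
    printf("Adjust the incoming record's pitch by %+.2f%% to beatmatch.\n",
           pitch_adjust_percent(playing, incoming));
    return 0;
}
```

For these figures the incoming record must be sped up by 2.4 percent; a negative result would mean slowing it down.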
https://en.wikipedia.org/wiki/Big%20Dig
The Central Artery/Tunnel Project (CA/T Project), commonly known as the Big Dig, was a megaproject in Boston that rerouted the Central Artery of Interstate 93 (I-93), the chief highway through the heart of the city, into the 1.5-mile (2.4 km) Thomas P. O'Neill Jr. Tunnel. The project also included the construction of the Ted Williams Tunnel (extending I-90 to Logan International Airport), the Leonard P. Zakim Bunker Hill Memorial Bridge over the Charles River, and the Rose Kennedy Greenway in the space vacated by the previous I-93 elevated roadway. Initially, the plan was also to include a rail connection between Boston's two major train terminals. Planning began in 1982; the construction work was carried out between 1991 and 2006; and the project concluded on December 31, 2007, when the partnership between the program manager and the Massachusetts Turnpike Authority ended. The Big Dig was the most expensive highway project in the United States, and was plagued by cost overruns, delays, leaks, design flaws, charges of poor execution and use of substandard materials, criminal charges and arrests, and the death of one motorist. The project was originally scheduled to be completed in 1998 at an estimated cost of $2.8 billion (in 1982 dollars, US$7.4 billion adjusted for inflation). However, the project was completed in December 2007 at a cost of over $8.08 billion (in 1982 dollars, $21.5 billion adjusted for inflation, meaning a cost overrun of about 190%). The Boston Globe estimated that the project will ultimately cost $22 billion, including interest, and that it would not be paid off until 2038. As a result of a death, leaks, and other design flaws, Bechtel and Parsons Brinckerhoff—the consortium that oversaw the project—agreed to pay $407 million in restitution and several smaller companies agreed to pay a combined sum of approximately $51 million. The Rose Fitzgerald Kennedy Greenway is a series of parks and public spaces which were the final part of
https://en.wikipedia.org/wiki/Backplane
A backplane (or "backplane system") is a group of electrical connectors in parallel with each other, so that each pin of each connector is linked to the same relative pin of all the other connectors, forming a computer bus. It is used to connect several printed circuit boards together to make up a complete computer system. Backplanes commonly use a printed circuit board, but wire-wrapped backplanes have also been used in minicomputers and high-reliability applications. A backplane is generally differentiated from a motherboard by the lack of on-board processing and storage elements. A backplane uses plug-in cards for storage and processing. Usage Early microcomputer systems like the Altair 8800 used a backplane for the processor and expansion cards. Backplanes are normally used in preference to cables because of their greater reliability. In a cabled system, the cables need to be flexed every time that a card is added or removed from the system; this flexing eventually causes mechanical failures. A backplane does not suffer from this problem, so its service life is limited only by the longevity of its connectors. For example, DIN 41612 connectors (used in the VMEbus system) have three durability grades built to withstand (respectively) 50, 400 and 500 insertions and removals, or "mating cycles". To transmit information, Serial Back-Plane technology uses a low-voltage differential signaling transmission method for sending information. In addition, there are bus expansion cables which will extend a computer bus to an external backplane, usually located in an enclosure, to provide more or different slots than the host computer provides. These cable sets have a transmitter board located in the computer, an expansion board in the remote backplane, and a cable between the two. Active versus passive backplanes Backplanes have grown in complexity from the simple Industry Standard Architecture (ISA) (used in the original IBM PC) or S-100 style where all the connectors
https://en.wikipedia.org/wiki/Buffer%20overflow
In programming and information security, a buffer overflow or buffer overrun is an anomaly whereby a program writes data to a buffer beyond the buffer's allocated memory, overwriting adjacent memory locations. Buffers are areas of memory set aside to hold data, often while moving it from one section of a program to another, or between programs. Buffer overflows can often be triggered by malformed inputs; if one assumes all inputs will be smaller than a certain size and the buffer is created to be that size, then an anomalous transaction that produces more data could cause it to write past the end of the buffer. If this overwrites adjacent data or executable code, this may result in erratic program behavior, including memory access errors, incorrect results, and crashes. Exploiting the behavior of a buffer overflow is a well-known security exploit. On many systems, the memory layout of a program, or the system as a whole, is well defined. By sending in data designed to cause a buffer overflow, it is possible to write into areas known to hold executable code and replace it with malicious code, or to selectively overwrite data pertaining to the program's state, therefore causing behavior that was not intended by the original programmer. Buffers are widespread in operating system (OS) code, so it is possible to make attacks that perform privilege escalation and gain unlimited access to the computer's resources. The famed Morris worm in 1988 used this as one of its attack techniques. Programming languages commonly associated with buffer overflows include C and C++, which provide no built-in protection against accessing or overwriting data in any part of memory and do not automatically check that data written to an array (the built-in buffer type) is within the boundaries of that array. Bounds checking can prevent buffer overflows, but requires additional code and processing time. Modern operating systems use a variety of techniques to combat malicious buffer overflows
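A minimal C sketch of the failure mode described above: the unchecked strcpy writes past an eight-byte buffer when given a longer input, while the bounded copy does not. Whether the overflow crashes, corrupts neighbouring data, or appears to work depends entirely on the platform, since the behaviour is undefined:

```c
#include <stdio.h>
#include <string.h>

int main(void) {
    const char *input = "This string is much longer than eight bytes";

    char safe_buffer[8];
    /* Bounded copy: never writes more than sizeof(safe_buffer) bytes. */
    snprintf(safe_buffer, sizeof safe_buffer, "%s", input);
    printf("truncated copy: \"%s\"\n", safe_buffer);

    char unsafe_buffer[8];
    /* Unchecked copy: strcpy keeps writing until the terminating NUL,
       overrunning unsafe_buffer and clobbering adjacent stack memory.
       This is undefined behaviour, shown here only to illustrate the bug. */
    strcpy(unsafe_buffer, input);          /* buffer overflow! */
    printf("overflowed copy: \"%s\"\n", unsafe_buffer);

    return 0;
}
```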
https://en.wikipedia.org/wiki/Brownian%20motion
Brownian motion is the random motion of particles suspended in a medium (a liquid or a gas). This motion pattern typically consists of random fluctuations in a particle's position inside a fluid sub-domain, followed by a relocation to another sub-domain. Each relocation is followed by more fluctuations within the new closed volume. This pattern describes a fluid at thermal equilibrium, defined by a given temperature. Within such a fluid, there exists no preferential direction of flow (as in transport phenomena). More specifically, the fluid's overall linear and angular momenta remain null over time. The kinetic energies of the molecular Brownian motions, together with those of molecular rotations and vibrations, sum up to the caloric component of a fluid's internal energy (the equipartition theorem). This motion is named after the botanist Robert Brown, who first described the phenomenon in 1827, while looking through a microscope at pollen of the plant Clarkia pulchella immersed in water. In 1900, almost eighty years later, the French mathematician Louis Bachelier modeled the stochastic process now called Brownian motion in his doctoral thesis, The Theory of Speculation (Théorie de la spéculation), prepared under the supervision of Henri Poincaré. Then, in 1905, theoretical physicist Albert Einstein published a paper where he modeled the motion of the pollen particles as being moved by individual water molecules, making one of his first major scientific contributions. The direction of the force of atomic bombardment is constantly changing, and at different times the particle is hit more on one side than another, leading to the seemingly random nature of the motion. This explanation of Brownian motion served as convincing evidence that atoms and molecules exist and was further verified experimentally by Jean Perrin in 1908. Perrin was awarded the Nobel Prize in Physics in 1926 "for his work on the discontinuous structure of matter". The many-body interactions th
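A crude numerical sketch of the diffusive character of this motion: averaging many one-dimensional random walks shows the mean squared displacement growing roughly linearly with the number of steps, the proportionality to time being the quantitative core of Einstein's 1905 treatment:

```c
#include <stdio.h>
#include <stdlib.h>

/* Mean squared displacement of many independent 1-D random walks after
   a given number of unit steps. */
static double mean_squared_displacement(int steps, int walkers) {
    double total = 0.0;
    for (int w = 0; w < walkers; w++) {
        long position = 0;
        for (int s = 0; s < steps; s++)
            position += (rand() % 2) ? 1 : -1;   /* +1 or -1 with equal probability */
        total += (double)position * (double)position;
    }
    return total / walkers;
}

int main(void) {
    srand(42);
    for (int steps = 100; steps <= 1000; steps += 300)
        printf("steps = %4d   <x^2> = %.1f\n",
               steps, mean_squared_displacement(steps, 20000));
    return 0;
}
```

With unit steps the expected value of <x^2> equals the step count, so the printed figures should land near 100, 400, 700 and 1000.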
https://en.wikipedia.org/wiki/Backward%20compatibility
Backward compatibility (sometimes known as backwards compatibility) is a property of an operating system, software, real-world product, or technology that allows for interoperability with an older legacy system, or with input designed for such a system, especially in telecommunications and computing. Modifying a system in a way that does not allow backward compatibility is sometimes called "breaking" backward compatibility. Such breaking usually incurs various types of costs, such as switching cost. A complementary concept is forward compatibility. A design that is forward-compatible usually has a roadmap for compatibility with future standards and products. Usage In hardware A simple example of both backward and forward compatibility is the introduction of FM radio in stereo. FM radio was initially mono, with only one audio channel represented by one signal. With the introduction of two-channel stereo FM radio, many listeners had only mono FM receivers. Forward compatibility for mono receivers with stereo signals was achieved by sending the sum of both left and right audio channels in one signal and the difference in another signal. That allows mono FM receivers to receive and decode the sum signal while ignoring the difference signal, which is necessary only for separating the audio channels. Stereo FM receivers can receive a mono signal and decode it without the need for a second signal, and they can separate a sum signal to left and right channels if both sum and difference signals are received. Without the requirement for backward compatibility, a simpler method could have been chosen. Full backward compatibility is particularly important in computer instruction set architectures, one of the most successful being the x86 family of microprocessors. Their full backward compatibility spans back to the 16-bit Intel 8086/8088 processors introduced in 1978. (The 8086/8088, in turn, were designed with easy machine-translatability of programs written for its prede
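A small C sketch of the sum/difference idea described above (illustrative only; real FM stereo places the difference signal on a 38 kHz subcarrier): a mono receiver uses only the sum, while a stereo receiver recovers left and right from sum and difference.

```c
#include <stdio.h>

/* Backward/forward compatibility via sum and difference signals.           */
/* Transmit M = L + R (what mono receivers decode) and S = L - R.           */
/* A stereo receiver reconstructs L = (M + S) / 2 and R = (M - S) / 2.      */
int main(void) {
    double left = 0.8, right = 0.3;          /* original audio samples      */

    double sum  = left + right;              /* mono-compatible signal      */
    double diff = left - right;              /* extra signal, ignored by    */
                                             /* mono receivers              */

    double l_out = (sum + diff) / 2.0;       /* stereo receiver output      */
    double r_out = (sum - diff) / 2.0;

    printf("mono hears : %.2f\n", sum);
    printf("stereo L/R : %.2f / %.2f\n", l_out, r_out);
    return 0;
}
```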
https://en.wikipedia.org/wiki/Bacterial%20conjugation
Bacterial conjugation is the transfer of genetic material between bacterial cells by direct cell-to-cell contact or by a bridge-like connection between two cells. This takes place through a pilus. It is a parasexual mode of reproduction in bacteria. It is a mechanism of horizontal gene transfer as are transformation and transduction although these two other mechanisms do not involve cell-to-cell contact. Classical E. coli bacterial conjugation is often regarded as the bacterial equivalent of sexual reproduction or mating since it involves the exchange of genetic material. However, it is not sexual reproduction, since no exchange of gamete occurs, and indeed no generation of a new organism: instead an existing organism is transformed. During classical E. coli conjugation the donor cell provides a conjugative or mobilizable genetic element that is most often a plasmid or transposon. Most conjugative plasmids have systems ensuring that the recipient cell does not already contain a similar element. The genetic information transferred is often beneficial to the recipient. Benefits may include antibiotic resistance, xenobiotic tolerance or the ability to use new metabolites. Other elements can be detrimental and may be viewed as bacterial parasites. Conjugation in Escherichia coli by spontaneous zygogenesis and in Mycobacterium smegmatis by distributive conjugal transfer differ from the better studied classical E. coli conjugation in that these cases involve substantial blending of the parental genomes. History The process was discovered by Joshua Lederberg and Edward Tatum in 1946. Mechanism Conjugation diagram Donor cell produces pilus. Pilus attaches to recipient cell and brings the two cells together. The mobile plasmid is nicked and a single strand of DNA is then transferred to the recipient cell. Both cells synthesize a complementary strand to produce a double stranded circular plasmid and also reproduce pili; both cells are now viable donor for the F-f
https://en.wikipedia.org/wiki/BIOS
In computing, BIOS (, ; Basic Input/Output System, also known as the System BIOS, ROM BIOS, BIOS ROM or PC BIOS) is firmware used to provide runtime services for operating systems and programs and to perform hardware initialization during the booting process (power-on startup). The BIOS firmware comes pre-installed on an IBM PC or IBM PC compatible's system board and exists in some UEFI-based systems to maintain compatibility with operating systems that do not support UEFI native operation. The name originates from the Basic Input/Output System used in the CP/M operating system in 1975. The BIOS originally proprietary to the IBM PC has been reverse engineered by some companies (such as Phoenix Technologies) looking to create compatible systems. The interface of that original system serves as a de facto standard. The BIOS in modern PCs initializes and tests the system hardware components (Power-on self-test), and loads a boot loader from a mass storage device which then initializes a kernel. In the era of DOS, the BIOS provided BIOS interrupt calls for the keyboard, display, storage, and other input/output (I/O) devices that standardized an interface to application programs and the operating system. More recent operating systems do not use the BIOS interrupt calls after startup. Most BIOS implementations are specifically designed to work with a particular computer or motherboard model, by interfacing with various devices especially system chipset. Originally, BIOS firmware was stored in a ROM chip on the PC motherboard. In later computer systems, the BIOS contents are stored on flash memory so it can be rewritten without removing the chip from the motherboard. This allows easy, end-user updates to the BIOS firmware so new features can be added or bugs can be fixed, but it also creates a possibility for the computer to become infected with BIOS rootkits. Furthermore, a BIOS upgrade that fails could brick the motherboard. The last version of Microsoft Windows to offi
https://en.wikipedia.org/wiki/B%20%28programming%20language%29
B is a programming language developed at Bell Labs circa 1969 by Ken Thompson and Dennis Ritchie. B was derived from BCPL, and its name may possibly be a contraction of BCPL. Thompson's coworker Dennis Ritchie speculated that the name might be based on Bon, an earlier, but unrelated, programming language that Thompson designed for use on Multics. B was designed for recursive, non-numeric, machine-independent applications, such as system and language software. It was a typeless language, with the only data type being the underlying machine's natural memory word format, whatever that might be. Depending on the context, the word was treated either as an integer or a memory address. As machines with ASCII processing became common, notably the DEC PDP-11 that arrived at Bell, support for character data stuffed in memory words became important. The typeless nature of the language was seen as a disadvantage, which led Thompson and Ritchie to develop an expanded version of the language supporting new internal and user-defined types, which became the C programming language. History Circa 1969, Ken Thompson and later Dennis Ritchie developed B basing it mainly on the BCPL language Thompson used in the Multics project. B was essentially the BCPL system stripped of any component Thompson felt he could do without in order to make it fit within the memory capacity of the minicomputers of the time. The BCPL to B transition also included changes made to suit Thompson's preferences (mostly along the lines of reducing the number of non-whitespace characters in a typical program). Much of the typical ALGOL-like syntax of BCPL was rather heavily changed in this process. The assignment operator := reverted to the = of Rutishauser's Superplan, and the equality operator = was replaced by ==. Thompson added "two-address assignment operators" using x =+ y syntax to add y to x (in C the operator is written +=). This syntax came from Douglas McIlroy's implementation of TMG, in which B
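A tiny C illustration of the operator history mentioned above, a sketch rather than B source (B itself declared everything as machine words with auto): B spelled the compound assignment x =+ y, which C later reversed to x += y.

```c
#include <stdio.h>

int main(void) {
    int x = 5, y = 3;

    /* B's "two-address assignment operator" was written  x =+ y;          */
    /* C settled on the reversed spelling used today:                      */
    x += y;                     /* add y to x in place */

    printf("x = %d\n", x);      /* prints x = 8 */
    return 0;
}
```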
https://en.wikipedia.org/wiki/Breast
The breast is one of two prominences located on the upper ventral region of a primate's torso. Both females and males develop breasts from the same embryological tissues. In females, it serves as the mammary gland, which produces and secretes milk to feed infants. Subcutaneous fat covers and envelops a network of ducts that converge on the nipple, and these tissues give the breast its size and shape. At the ends of the ducts are lobules, or clusters of alveoli, where milk is produced and stored in response to hormonal signals. During pregnancy, the breast responds to a complex interaction of hormones, including estrogens, progesterone, and prolactin, that mediate the completion of its development, namely lobuloalveolar maturation, in preparation of lactation and breastfeeding. Humans are the only animals with permanent breasts. At puberty, estrogens, in conjunction with growth hormone, cause permanent breast growth in female humans. This happens only to a much lesser extent in other primates—breast development in other primates generally only occurs with pregnancy. Along with their major function in providing nutrition for infants, female breasts have social and sexual characteristics. Breasts have been featured in ancient and modern sculpture, art, and photography. They can figure prominently in the perception of a woman's body and sexual attractiveness. A number of cultures associate breasts with sexuality and tend to regard bare breasts in public as immodest or indecent. Breasts, especially the nipples, are an erogenous zone. Etymology and terminology The English word breast derives from the Old English word ('breast, bosom') from Proto-Germanic (breast), from the Proto-Indo-European base (to swell, to sprout). The breast spelling conforms to the Scottish and North English dialectal pronunciations. The Merriam-Webster Dictionary states that "Middle English , [comes] from Old English ; akin to Old High German ..., Old Irish [belly], [and] Russian "; the fir
https://en.wikipedia.org/wiki/Outline%20of%20biology
Biology – The natural science that studies life. Areas of focus include structure, function, growth, origin, evolution, distribution, and taxonomy. History of biology History of anatomy History of biochemistry History of biotechnology History of ecology History of genetics History of evolutionary thought: The eclipse of Darwinism – Catastrophism – Lamarckism – Orthogenesis – Mutationism – Structuralism – Vitalism Modern (evolutionary) synthesis History of molecular evolution History of speciation History of medicine History of model organisms History of molecular biology Natural history History of plant systematics Overview Biology Science Life Properties: Adaptation – Energy processing – Growth – Order – Regulation – Reproduction – Response to environment Biological organization: atom – molecule – cell – tissue – organ – organ system – organism – population – community – ecosystem – biosphere Approach: Reductionism – emergent property – mechanistic Biology as a science: Natural science Scientific method: observation – research question – hypothesis – testability – prediction – experiment – data – statistics Scientific theory – scientific law Research method List of research methods in biology Scientific literature List of biology journals: peer review Chemical basis Outline of biochemistry Atoms and molecules matter – element – atom – proton – neutron – electron– Bohr model – isotope – chemical bond – ionic bond – ions – covalent bond – hydrogen bond – molecule Water: properties of water – solvent – cohesion – surface tension – Adhesion – pH Organic compounds: carbon – carbon-carbon bonds – hydrocarbon – monosaccharide – amino acids – nucleotide – functional group – monomer – adenosine triphosphate (ATP) – lipids – oil – sugar – vitamins – neurotransmitter – wax Macromolecules: polysaccharide: cellulose – carbohydrate – chitin – glycogen – starch proteins: primary structure – secondary structure – tertiary structure – conformation – native state – protein fo
https://en.wikipedia.org/wiki/Biotechnology
Biotechnology is a multidisciplinary field that involves the integration of natural sciences and engineering sciences in order to achieve the application of organisms, cells, parts thereof and molecular analogues for products and services. The term biotechnology was first used by Károly Ereky in 1919, to refer to the production of products from raw materials with the aid of living organisms. The core principle of biotechnology involves harnessing biological systems and organisms, such as bacteria, yeast, and plants, to perform specific tasks or produce valuable substances. Biotechnology had a significant impact on many areas of society, from medicine to agriculture to environmental science. One of the key techniques used in biotechnology is genetic engineering, which allows scientists to modify the genetic makeup of organisms to achieve desired outcomes. This can involve inserting genes from one organism into another, creating new traits or modifying existing ones. Other important techniques used in biotechnology include tissue culture, which allows researchers to grow cells and tissues in the lab for research and medical purposes, and fermentation, which is used to produce a wide range of products such as beer, wine, and cheese. The applications of biotechnology are diverse and have led to the development of essential products like life-saving drugs, biofuels, genetically modified crops, and innovative materials. It has also been used to address environmental challenges, such as developing biodegradable plastics and using microorganisms to clean up contaminated sites. Biotechnology is a rapidly evolving field with significant potential to address pressing global challenges and improve the quality of life for people around the world; however, despite its numerous benefits, it also poses ethical and societal challenges, such as questions around genetic modification and intellectual property rights. As a result, there is ongoing debate and regulation surroundin
https://en.wikipedia.org/wiki/Backbone%20cabal
The backbone cabal was an informal organization of large-site news server administrators of the worldwide distributed newsgroup-based discussion system Usenet. It existed from about 1983 at least into the 2000s. The cabal was created in an effort to facilitate reliable propagation of new Usenet posts. While in the 1970s and 1980s many news servers only operated during night time to save on the cost of long-distance communication, servers of the backbone cabal were available 24 hours a day. The administrators of these servers gained sufficient influence in the otherwise anarchic Usenet community to be able to push through controversial changes, for instance the Great Renaming of Usenet newsgroups during 1987. History Mary Ann Horton recruited membership in and designed the original physical topology of the Usenet Backbone in 1983. Gene "Spaf" Spafford then created an email list of the backbone administrators, plus a few influential posters. This list became known as the Backbone Cabal and served as a "political (i.e. decision making) backbone". Other prominent members of the cabal were Brian Reid, Bob Allisat, Chuq von Rospach and Rick Adams. In internet culture During most of its existence, the cabal (sometimes capitalized) steadfastly denied its own existence; those involved would often respond "There is no Cabal" (sometimes abbreviated as "TINC"'). The result of this policy was an aura of mystery, even a decade after the cabal mailing list disbanded in late 1988 following an internal fight. References Further reading Henry Edward Hardy, 1993. The Usenet System, ITCA Teleconferencing Yearbook 1993, ITCA Research Committee, International Teleconferencing Association, Washington, DC. pp 140–151, esp. subheading "The Great Renaming" and "The Breaking of the Backbone Cartel". External links Cabal Conspiracy FAQ (archived May 2013) Lumber Cartel The Eric Conspiracy  This article incorporates text from the corresponding entry in the Jargon File, which is i
https://en.wikipedia.org/wiki/Burnt-in%20timecode
Burnt-in timecode (often abbreviated to BITC by analogy to VITC) is a human-readable on-screen version of the timecode information for a piece of material superimposed on a video image. BITC is sometimes used in conjunction with "real" machine-readable timecode, but more often used in copies of original material on to a non-broadcast format such as VHS, so that the VHS copies can be traced back to their master tape and the original time codes easily located. Many professional VTRs can "burn" (overlay) the tape timecode onto one of their outputs. This output (which usually also displays the setup menu or on-screen display) is known as the super out or monitor out. The character switch or menu item turns this behaviour on or off. The character function is also used to display the timecode on the preview monitors in linear editing suites. Videotapes that are recorded with timecode numbers overlaid on the video are referred to as window dubs, named after the "window" that displays the burnt-in timecode on-screen. When editing was done using magnetic tapes that were subject to damage from excessive wear, it was common to use a window dub as a working copy for the majority of the editing process. Editing decisions would be made using a window dub, and no specialized equipment was needed to write down an edit decision list which would then be replicated from the high-quality masters. Timecode can also be superimposed on video using a dedicated overlay device, often called a "window dub inserter". This inputs a video signal and its separate timecode audio signal, reads the timecode, superimposes the timecode display over the video, and outputs the combined display (usually via composite), all in real time. Stand-alone timecode generator / readers often have the window dub function built-in. Some consumer cameras, in particular DV cameras, can "burn" (overlay) the tape timecode onto the composite output. This output typically is semi-transparent and may include ot
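A sketch of the arithmetic behind a burnt-in timecode display, assuming a hypothetical non-drop-frame format at an integer frame rate: it converts a running frame count into the HH:MM:SS:FF string that a character generator would overlay.

```c
#include <stdio.h>

/* Convert a frame count to a non-drop-frame timecode string HH:MM:SS:FF.   */
/* 'fps' is assumed to be an integer rate such as 24, 25 or 30.             */
static void frames_to_timecode(long frame, int fps, char out[16]) {
    int ff = (int)(frame % fps);
    long total_seconds = frame / fps;
    int ss = (int)(total_seconds % 60);
    int mm = (int)((total_seconds / 60) % 60);
    int hh = (int)(total_seconds / 3600);
    snprintf(out, 16, "%02d:%02d:%02d:%02d", hh, mm, ss, ff);
}

int main(void) {
    char tc[16];
    frames_to_timecode(90125, 25, tc);   /* 90125 frames at 25 fps          */
    printf("%s\n", tc);                  /* prints 01:00:05:00              */
    return 0;
}
```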
https://en.wikipedia.org/wiki/Bra%E2%80%93ket%20notation
Bra–ket notation, also called Dirac notation, is a notation for linear algebra and linear operators on complex vector spaces together with their dual space both in the finite-dimensional and infinite-dimensional case. It is specifically designed to ease the types of calculations that frequently come up in quantum mechanics. Its use in quantum mechanics is quite widespread. Bra-ket notation was created by Paul Dirac in his 1939 publication A New Notation for Quantum Mechanics. The notation was introduced as an easier way to write quantum mechanical expressions. The name comes from the English word "Bracket". Quantum mechanics In quantum mechanics, bra–ket notation is used ubiquitously to denote quantum states. The notation uses angle brackets, ⟨ and ⟩, and a vertical bar |, to construct "bras" ⟨f| and "kets" |v⟩. A ket is of the form |v⟩. Mathematically it denotes a vector, v, in an abstract (complex) vector space V, and physically it represents a state of some quantum system. A bra is of the form ⟨f|. Mathematically it denotes a linear form f : V → ℂ, i.e. a linear map that maps each vector in V to a number in the complex plane ℂ. Letting the linear functional ⟨f| act on a vector |v⟩ is written as ⟨f|v⟩ ∈ ℂ. Assume that on V there exists an inner product (·,·) with antilinear first argument, which makes V an inner product space. Then with this inner product each vector φ ≡ |φ⟩ can be identified with a corresponding linear form, by placing the vector in the anti-linear first slot of the inner product: (φ,·) ≡ ⟨φ|. The correspondence between these notations is then (φ,ψ) ≡ ⟨φ|ψ⟩. The linear form ⟨φ| is a covector to |φ⟩, and the set of all covectors forms a subspace of the dual vector space V∗, to the initial vector space V. The purpose of this linear form ⟨φ| can now be understood in terms of making projections on the state |ψ⟩, to find how linearly dependent two states are, etc. For the vector space ℂ^n, kets can be identified with column vectors, and bras with row vectors. Combinations of bras, kets, and linear operators are interpreted using matrix multiplic
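The display formulas dropped from this extract follow the standard conventions; a compact restatement in LaTeX (standard definitions, not quoted verbatim from the article):

```latex
% Kets, bras, and the inner product in Dirac notation (standard definitions)
\[
  |v\rangle \in V, \qquad
  \langle f| : V \to \mathbb{C}, \qquad
  \langle f | v \rangle \in \mathbb{C}.
\]
% Identification of a vector \phi with its bra via the inner product
% (antilinear in the first argument):
\[
  \langle \phi | \;\equiv\; (\phi,\cdot\,), \qquad
  (\phi,\psi) \;\equiv\; \langle \phi | \psi \rangle .
\]
```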
https://en.wikipedia.org/wiki/Bugzilla
Bugzilla is a web-based general-purpose bug tracking system and testing tool originally developed and used by the Mozilla project, and licensed under the Mozilla Public License. Released as open-source software by Netscape Communications in 1998, it has been adopted by a variety of organizations for use as a bug tracking system for both free and open-source software and proprietary projects and products. Bugzilla is used, among others, by the Mozilla Foundation, WebKit, Linux kernel, FreeBSD, KDE, Apache, Eclipse and LibreOffice. Red Hat uses it, but is gradually migrating its product to use Jira. It is also self-hosting. History Bugzilla was originally devised by Terry Weissman in 1998 for the nascent Mozilla.org project, as an open source application to replace the in-house system then in use at Netscape Communications for tracking defects in the Netscape Communicator suite. Bugzilla was originally written in Tcl, but Weissman decided to port it to Perl before its release as part of Netscape's early open-source code drops, in the hope that more people would be able to contribute to it, given that Perl seemed to be a more popular language at the time. Bugzilla 2.0 was the result of that port to Perl, and the first version was released to the public via anonymous CVS. In April 2000, Weissman handed over control of the Bugzilla project to Tara Hernandez. Under her leadership, some of the regular contributors were coerced into taking more responsibility, and Bugzilla development became more community-driven. In July 2001, facing distraction from her other responsibilities in Netscape, Hernandez handed control to Dave Miller, who was still in charge . Bugzilla 3.0 was released on May 10, 2007 and brought a refreshed UI, an XML-RPC interface, custom fields and resolutions, mod_perl support, shared saved searches, and improved UTF-8 support, along with other changes. Bugzilla 4.0 was released on February 15, 2011 and Bugzilla 5.0 was released in July 2015. Timeli
https://en.wikipedia.org/wiki/Block%20cipher
In cryptography, a block cipher is a deterministic algorithm that operates on fixed-length groups of bits, called blocks. Block ciphers are the elementary building blocks of many cryptographic protocols. They are ubiquitous in the storage and exchange of data, where such data is secured and authenticated via encryption. A block cipher uses blocks as an unvarying transformation. Even a secure block cipher is suitable for the encryption of only a single block of data at a time, using a fixed key. A multitude of modes of operation have been designed to allow their repeated use in a secure way to achieve the security goals of confidentiality and authenticity. However, block ciphers may also feature as building blocks in other cryptographic protocols, such as universal hash functions and pseudorandom number generators. Definition A block cipher consists of two paired algorithms, one for encryption, E, and the other for decryption, D. Both algorithms accept two inputs: an input block of size n bits and a key of size k bits; and both yield an n-bit output block. The decryption algorithm D is defined to be the inverse function of encryption, i.e., D = E⁻¹. More formally, a block cipher is specified by an encryption function E_K(P) := E(K, P) : {0,1}^k × {0,1}^n → {0,1}^n, which takes as input a key K, of bit length k (called the key size), and a bit string P, of length n (called the block size), and returns a string C of n bits. P is called the plaintext, and C is termed the ciphertext. For each K, the function E_K(P) is required to be an invertible mapping on {0,1}^n. The inverse for E is defined as a function D_K(C) := D(K, C) = E_K⁻¹(C), taking a key K and a ciphertext C to return a plaintext value P, such that ∀K: D_K(E_K(P)) = P. For example, a block cipher encryption algorithm might take a 128-bit block of plaintext as input, and output a corresponding 128-bit block of ciphertext. The exact transformation is controlled using a second input – the secret key. Decryption is similar: the decryption algorithm takes, in this example, a 128-bit block of ciphertext together with the secret key, and yields
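A toy illustration in C of the E/D pairing described above, using a hypothetical 8-bit "block" and 8-bit key with XOR as the keyed transformation. It is not a real or secure cipher; it only demonstrates that decryption is defined as the inverse of encryption, D_K(E_K(P)) = P.

```c
#include <stdio.h>
#include <stdint.h>

/* Toy "block cipher" on 8-bit blocks with an 8-bit key.                    */
/* XOR with the key is its own inverse, so D is literally E here.           */
/* Real block ciphers (AES, DES, ...) use many keyed rounds instead.        */
static uint8_t toy_encrypt(uint8_t key, uint8_t plaintext)  { return plaintext ^ key; }
static uint8_t toy_decrypt(uint8_t key, uint8_t ciphertext) { return ciphertext ^ key; }

int main(void) {
    uint8_t key = 0x5A;
    uint8_t plaintext = 0x3C;

    uint8_t c = toy_encrypt(key, plaintext);
    uint8_t p = toy_decrypt(key, c);

    printf("P=0x%02X  E_K(P)=0x%02X  D_K(E_K(P))=0x%02X\n", plaintext, c, p);
    /* p == plaintext for every key: D_K is the inverse of E_K              */
    return 0;
}
```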
https://en.wikipedia.org/wiki/Wireless%20broadband
Wireless broadband is a telecommunications technology that provides high-speed wireless Internet access or computer networking access over a wide area. The term encompasses both fixed and mobile broadband. The term broadband Originally the word "broadband" had a technical meaning, but became a marketing term for any kind of relatively high-speed computer network or Internet access technology. According to the 802.16-2004 standard, broadband means "having instantaneous bandwidths greater than 1 MHz and supporting data rates greater than about 1.5 Mbit/s." The Federal Communications Commission (FCC) recently re-defined the definition to mean download speeds of at least 25 Mbit/s and upload speeds of at least 3 Mbit/s. Technology and speeds A wireless broadband network is an outdoor fixed and/or mobile wireless network providing point-to-multipoint or point-to-point terrestrial wireless links for broadband services. Wireless networks can feature data rates exceeding 1 Gbit/s. Many fixed wireless networks are exclusively half-duplex (HDX), however, some licensed and unlicensed systems can also operate at full-duplex (FDX) allowing communication in both directions simultaneously. Outdoor fixed wireless broadband networks commonly utilize a priority TDMA based protocol in order to divide communication into timeslots. This timeslot technique eliminates many of the issues common to 802.11 Wi-Fi protocol in outdoor networks such as the hidden node problem. Few wireless Internet service providers (WISPs) provide download speeds of over 100 Mbit/s; most broadband wireless access (BWA) services are estimated to have a range of from a tower. Technologies used include Local Multipoint Distribution Service (LMDS) and Multichannel Multipoint Distribution Service (MMDS), as well as heavy use of the industrial, scientific and medical (ISM) radio bands and one particular access technology was standardized by IEEE 802.16, with products known as WiMAX. WiMAX is highly popular in
https://en.wikipedia.org/wiki/Booch%20method
The Booch method is a method for object-oriented software development. It is composed of an object modeling language, an iterative object-oriented development process, and a set of recommended practices. The method was authored by Grady Booch when he was working for Rational Software (acquired by IBM), published in 1992 and revised in 1994. It was widely used in software engineering for object-oriented analysis and design and benefited from ample documentation and support tools. The notation aspect of the Booch methodology was superseded by the Unified Modeling Language (UML), which features graphical elements from the Booch method along with elements from the object-modeling technique (OMT) and object-oriented software engineering (OOSE). Methodological aspects of the Booch method have been incorporated into several methodologies and processes, the primary such methodology being the Rational Unified Process (RUP). Content of the method The Booch notation is characterized by cloud shapes to represent classes and distinguishes the following diagrams: The process is organized around a macro and a micro process. The macro process identifies the following activities cycle: Conceptualization : establish core requirements Analysis : develop a model of the desired behavior Design : create an architecture Evolution: for the implementation Maintenance : for evolution after the delivery The micro process is applied to new classes, structures or behaviors that emerge during the macro process. It is made of the following cycle: Identification of classes and objects Identification of their semantics Identification of their relationships Specification of their interfaces and implementation References External links Class diagrams, Object diagrams, State Event diagrams and Module diagrams. The Booch Method of Object-Oriented Analysis & Design Software design Object-oriented programming Programming principles de:Grady Booch#Booch-Notation
https://en.wikipedia.org/wiki/Bilinear%20transform
The bilinear transform (also known as Tustin's method, after Arnold Tustin) is used in digital signal processing and discrete-time control theory to transform continuous-time system representations to discrete-time and vice versa. The bilinear transform is a special case of a conformal mapping (namely, a Möbius transformation), often used to convert a transfer function of a linear, time-invariant (LTI) filter in the continuous-time domain (often called an analog filter) to a transfer function of a linear, shift-invariant filter in the discrete-time domain (often called a digital filter although there are analog filters constructed with switched capacitors that are discrete-time filters). It maps positions on the axis, , in the s-plane to the unit circle, , in the z-plane. Other bilinear transforms can be used to warp the frequency response of any discrete-time linear system (for example to approximate the non-linear frequency resolution of the human auditory system) and are implementable in the discrete domain by replacing a system's unit delays with first order all-pass filters. The transform preserves stability and maps every point of the frequency response of the continuous-time filter, to a corresponding point in the frequency response of the discrete-time filter, although to a somewhat different frequency, as shown in the Frequency warping section below. This means that for every feature that one sees in the frequency response of the analog filter, there is a corresponding feature, with identical gain and phase shift, in the frequency response of the digital filter but, perhaps, at a somewhat different frequency. This is barely noticeable at low frequencies but is quite evident at frequencies close to the Nyquist frequency. Discrete-time approximation The bilinear transform is a first-order Padé approximant of the natural logarithm function that is an exact mapping of the z-plane to the s-plane. When the Laplace transform is performed on a discret
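The substitution itself, omitted from this extract, in its usual textbook form (with T denoting the sampling period), together with the frequency-warping relation the text refers to:

```latex
% Bilinear (Tustin) transform: substitute for s in H_a(s) to obtain H_d(z),
% where T is the sampling period.
\[
  s \;=\; \frac{2}{T}\,\frac{z - 1}{z + 1},
  \qquad
  z \;=\; \frac{1 + sT/2}{1 - sT/2}.
\]
% Frequency warping: the continuous-time frequency \omega_a that maps onto
% the discrete-time frequency \omega_d satisfies
\[
  \omega_a \;=\; \frac{2}{T}\,\tan\!\left(\frac{\omega_d T}{2}\right).
\]
```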
https://en.wikipedia.org/wiki/Binomial%20coefficient
In mathematics, the binomial coefficients are the positive integers that occur as coefficients in the binomial theorem. Commonly, a binomial coefficient is indexed by a pair of integers n ≥ k ≥ 0 and is written \binom{n}{k}. It is the coefficient of the x^k term in the polynomial expansion of the binomial power (1 + x)^n; this coefficient can be computed by the multiplicative formula \binom{n}{k} = \frac{n(n-1)(n-2)\cdots(n-k+1)}{k(k-1)\cdots 1}, which using factorial notation can be compactly expressed as \binom{n}{k} = \frac{n!}{k!(n-k)!}. For example, the fourth power of 1 + x is (1 + x)^4 = x^4 + 4x^3 + 6x^2 + 4x + 1, and the binomial coefficient \binom{4}{2} = \frac{4!}{2!2!} = 6 is the coefficient of the x^2 term. Arranging the numbers \binom{n}{0}, \binom{n}{1}, ..., \binom{n}{n} in successive rows for n = 0, 1, 2, ... gives a triangular array called Pascal's triangle, satisfying the recurrence relation \binom{n}{k} = \binom{n-1}{k-1} + \binom{n-1}{k}. The binomial coefficients occur in many areas of mathematics, and especially in combinatorics. The symbol \binom{n}{k} is usually read as "n choose k" because there are \binom{n}{k} ways to choose an (unordered) subset of k elements from a fixed set of n elements. For example, there are \binom{4}{2} = 6 ways to choose 2 elements from {1, 2, 3, 4}, namely {1, 2}, {1, 3}, {1, 4}, {2, 3}, {2, 4} and {3, 4}. The binomial coefficients can be generalized to \binom{z}{k} for any complex number z and integer k ≥ 0, and many of their properties continue to hold in this more general form. History and notation Andreas von Ettingshausen introduced the notation \binom{n}{k} in 1826, although the numbers were known centuries earlier (see Pascal's triangle). In about 1150, the Indian mathematician Bhaskaracharya gave an exposition of binomial coefficients in his book Līlāvatī. Alternative notations include C(n, k), nCk, C^n_k, C_k^n and C_{n,k}, in all of which the C stands for combinations or choices. Many calculators use variants of the C notation because they can represent it on a single-line display. In this form the binomial coefficients are easily compared to k-permutations of n, written as P(n, k), etc. Definition and interpretations For natural numbers (taken to include 0) n and k, the binomial coefficient \binom{n}{k} can be defined as the coefficient of the monomial X^k in the expansion of (1 + X)^n. The same coefficient also occurs (if k ≤ n) in the binomial formula (x + y)^n = \sum_{k=0}^{n} \binom{n}{k} x^k y^{n-k} (valid for any elements x, y of a commutative ring), which
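A small C routine implementing the multiplicative formula quoted above; multiplying and dividing at each step keeps every intermediate value equal to a binomial coefficient, which keeps the arithmetic exact and limits (but does not eliminate) overflow for moderate n.

```c
#include <stdio.h>

/* Binomial coefficient C(n, k) via the multiplicative formula              */
/* C(n, k) = n (n-1) ... (n-k+1) / k!, evaluated so that every              */
/* intermediate value is itself a binomial coefficient (hence an integer).  */
static unsigned long long binomial(unsigned n, unsigned k) {
    if (k > n) return 0;
    if (k > n - k) k = n - k;                /* symmetry: C(n,k) = C(n,n-k) */
    unsigned long long result = 1;
    for (unsigned i = 1; i <= k; i++) {
        result = result * (n - k + i) / i;   /* exact: equals C(n-k+i, i)   */
    }
    return result;
}

int main(void) {
    printf("C(4, 2)  = %llu\n", binomial(4, 2));    /* 6   */
    printf("C(10, 3) = %llu\n", binomial(10, 3));   /* 120 */
    return 0;
}
```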
https://en.wikipedia.org/wiki/Binomial%20theorem
In elementary algebra, the binomial theorem (or binomial expansion) describes the algebraic expansion of powers of a binomial. According to the theorem, it is possible to expand the polynomial (x + y)^n into a sum involving terms of the form a x^k y^m, where the exponents k and m are nonnegative integers with k + m = n, and the coefficient a of each term is a specific positive integer depending on n and k. For example, for n = 4, (x + y)^4 = x^4 + 4x^3y + 6x^2y^2 + 4xy^3 + y^4. The coefficient a in the term of a x^k y^m is known as the binomial coefficient \binom{n}{k} or \binom{n}{m} (the two have the same value). These coefficients for varying n and k can be arranged to form Pascal's triangle. These numbers also occur in combinatorics, where \binom{n}{k} gives the number of different combinations of k elements that can be chosen from an n-element set. Therefore \binom{n}{k} is often pronounced as "n choose k". History Special cases of the binomial theorem were known since at least the 4th century BC when Greek mathematician Euclid mentioned the special case of the binomial theorem for exponent 2. Greek mathematician Diophantus cubed various binomials, including x − 1. Indian mathematician Aryabhata's method for finding cube roots, from around 510 CE, suggests that he knew the binomial formula for exponent 3. Binomial coefficients, as combinatorial quantities expressing the number of ways of selecting k objects out of n without replacement, were of interest to ancient Indian mathematicians. The earliest known reference to this combinatorial problem is the Chandaḥśāstra by the Indian lyricist Pingala (c. 200 BC), which contains a method for its solution. The commentator Halayudha from the 10th century AD explains this method. By the 6th century AD, the Indian mathematicians probably knew how to express this as a quotient n!/((n − k)!k!), and a clear statement of this rule can be found in the 12th century text Lilavati by Bhaskara. The first formulation of the binomial theorem and the table of binomial coefficients, to our knowledge, can be found in a work by Al-Karaji, quoted by Al-Samaw'al in his "al-Bahir". Al-Karaji described
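The general statement of the theorem, given here in LaTeX in its standard form since the extract's display equations were dropped:

```latex
% Binomial theorem for a nonnegative integer exponent n
\[
  (x + y)^n \;=\; \sum_{k=0}^{n} \binom{n}{k} x^{\,n-k} y^{\,k}
             \;=\; \sum_{k=0}^{n} \binom{n}{k} x^{\,k} y^{\,n-k},
\]
% for example, with n = 4:
\[
  (x + y)^4 = x^4 + 4x^3y + 6x^2y^2 + 4xy^3 + y^4 .
\]
```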
https://en.wikipedia.org/wiki/Boolean%20satisfiability%20problem
In logic and computer science, the Boolean satisfiability problem (sometimes called propositional satisfiability problem and abbreviated SATISFIABILITY, SAT or B-SAT) is the problem of determining if there exists an interpretation that satisfies a given Boolean formula. In other words, it asks whether the variables of a given Boolean formula can be consistently replaced by the values TRUE or FALSE in such a way that the formula evaluates to TRUE. If this is the case, the formula is called satisfiable. On the other hand, if no such assignment exists, the function expressed by the formula is FALSE for all possible variable assignments and the formula is unsatisfiable. For example, the formula "a AND NOT b" is satisfiable because one can find the values a = TRUE and b = FALSE, which make (a AND NOT b) = TRUE. In contrast, "a AND NOT a" is unsatisfiable. SAT is the first problem that was proven to be NP-complete; see Cook–Levin theorem. This means that all problems in the complexity class NP, which includes a wide range of natural decision and optimization problems, are at most as difficult to solve as SAT. There is no known algorithm that efficiently solves each SAT problem, and it is generally believed that no such algorithm exists; yet this belief has not been proved mathematically, and resolving the question of whether SAT has a polynomial-time algorithm is equivalent to the P versus NP problem, which is a famous open problem in the theory of computing. Nevertheless, as of 2007, heuristic SAT-algorithms are able to solve problem instances involving tens of thousands of variables and formulas consisting of millions of symbols, which is sufficient for many practical SAT problems from, e.g., artificial intelligence, circuit design, and automatic theorem proving. Definitions A propositional logic formula, also called Boolean expression, is built from variables, operators AND (conjunction, also denoted by ∧), OR (disjunction, ∨), NOT (negation, ¬), and parentheses. A
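A brute-force sketch in C of what "satisfiable" means for the article's example formula (a AND NOT b): try every assignment of the two variables and report one that makes the formula TRUE. Real SAT solvers use far more sophisticated search (DPLL, CDCL), but the decision question is the same.

```c
#include <stdio.h>
#include <stdbool.h>

/* The example formula from the text: a AND NOT b.                          */
static bool formula(bool a, bool b) {
    return a && !b;
}

int main(void) {
    /* Exhaustive search over all 2^2 assignments.                          */
    for (int a = 0; a <= 1; a++) {
        for (int b = 0; b <= 1; b++) {
            if (formula(a, b)) {
                printf("satisfiable: a=%s, b=%s\n",
                       a ? "TRUE" : "FALSE", b ? "TRUE" : "FALSE");
                return 0;
            }
        }
    }
    printf("unsatisfiable\n");
    return 0;
}
```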
https://en.wikipedia.org/wiki/Bidirectional%20text
A bidirectional text contains two text directionalities, right-to-left (RTL) and left-to-right (LTR). It generally involves text containing different types of alphabets, but may also refer to boustrophedon, which is changing text direction in each row. Many computer programs fail to display bidirectional text correctly. For example, this page is mostly LTR English script, and here is the RTL Hebrew name Sarah: , spelled sin () on the right, resh () in the middle, and heh () on the left. Some so-called right-to-left scripts such as the Persian script and Arabic are mostly, but not exclusively, right-to-left—mathematical expressions, numeric dates and numbers bearing units are embedded from left to right. That also happens if text from a left-to-right language such as English is embedded in them; or vice versa, if Arabic is embedded in a left-to-right script such as English. Bidirectional script support Bidirectional script support is the capability of a computer system to correctly display bidirectional text. The term is often shortened to "BiDi" or "bidi". Early computer installations were designed only to support a single writing system, typically for left-to-right scripts based on the Latin alphabet only. Adding new character sets and character encodings enabled a number of other left-to-right scripts to be supported, but did not easily support right-to-left scripts such as Arabic or Hebrew, and mixing the two was not practical. Right-to-left scripts were introduced through encodings like ISO/IEC 8859-6 and ISO/IEC 8859-8, storing the letters (usually) in writing and reading order. It is possible to simply flip the left-to-right display order to a right-to-left display order, but doing this sacrifices the ability to correctly display left-to-right scripts. With bidirectional script support, it is possible to mix characters from different scripts on the same page, regardless of writing direction. In particular, the Unicode standard provides foundations for c
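A small C illustration of the "logical order" storage the section describes, using the same Hebrew name: the bytes below encode sin, resh, heh in writing/reading order (UTF-8); whether they are drawn right to left is decided by a bidi-aware renderer, not by the program.

```c
#include <stdio.h>

int main(void) {
    /* "Sarah" in Hebrew, stored in logical (reading) order:                */
    /* sin (U+05E9), resh (U+05E8), heh (U+05D4), encoded as UTF-8.         */
    const unsigned char sarah[] = {
        0xD7, 0xA9,   /* sin  */
        0xD7, 0xA8,   /* resh */
        0xD7, 0x94,   /* heh  */
        0x00
    };

    /* Memory holds the first-read letter first; a bidi-aware terminal      */
    /* renders the sequence right to left when displaying it.               */
    printf("%s\n", (const char *)sarah);
    return 0;
}
```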
https://en.wikipedia.org/wiki/Bastard%20Operator%20From%20Hell
The Bastard Operator From Hell (BOFH) is a fictional rogue computer operator created by Simon Travaglia, who takes out his anger on users (who are "lusers" to him) and others who pester him with their computer problems, uses his expertise against his enemies and manipulates his employer. Several people have written stories about BOFHs, but only those by Simon Travaglia are considered canonical. The BOFH stories were originally posted in 1992 to Usenet by Travaglia, with some being reprinted in Datamation. Since 2000 they have been published regularly in The Register (UK). Several collections of the stories have been published as books. By extension, the term is also used to refer to any system administrator who displays the qualities of the original. The early accounts of the BOFH took place in a university; later the scenes were set in an office workplace. In 2000 (BOFH 2k), the BOFH and his pimply-faced youth (PFY) assistant moved to a new company. Other characters The PFY (Pimply-Faced Youth, the assistant to the BOFH. Real name is Stephen) Possesses a temperament similar to the BOFH, and often either teams up with or plots against him. The Boss (often portrayed as having no IT knowledge but believing otherwise; identity changes as successive bosses are sacked, leave, are committed, or have nasty "accidents") CEO of the company – The PFY's uncle Brian from 1996 until 2000, when the BOFH and PFY moved to a new company. The help desk operators, referred to as the "Helldesk" and often scolded for giving out the BOFH's personal number. The Boss's secretary, Sharon. The security department George, the cleaner (an invaluable source of information to the BOFH and PFY) Books Games BOFH is a text adventure game written by Howard A. Sherman and published in 2002. It is available via Malinche. References in other media The protagonist in Charles Stross's The Laundry Files series of novels named himself Bob Oliver Francis Howard in reference to the BOFH.
https://en.wikipedia.org/wiki/Plague%20%28disease%29
Plague is an infectious disease caused by the bacterium Yersinia pestis. Symptoms include fever, weakness and headache. Usually this begins one to seven days after exposure. There are three forms of plague, each affecting a different part of the body and causing associated symptoms. Pneumonic plague infects the lungs, causing shortness of breath, coughing and chest pain; bubonic plague affects the lymph nodes, making them swell; and septicemic plague infects the blood and can cause tissues to turn black and die. The bubonic and septicemic forms are generally spread by flea bites or handling an infected animal, whereas pneumonic plague is generally spread between people through the air via infectious droplets. Diagnosis is typically by finding the bacterium in fluid from a lymph node, blood or sputum. Those at high risk may be vaccinated. Those exposed to a case of pneumonic plague may be treated with preventive medication. If infected, treatment is with antibiotics and supportive care. Typically antibiotics include a combination of gentamicin and a fluoroquinolone. The risk of death with treatment is about 10% while without it is about 70%. Globally, about 600 cases are reported a year. In 2017, the countries with the most cases include the Democratic Republic of the Congo, Madagascar and Peru. In the United States, infections occasionally occur in rural areas, where the bacteria are believed to circulate among rodents. It has historically occurred in large outbreaks, with the best known being the Black Death in the 14th century, which resulted in more than 50 million deaths in Europe. Signs and symptoms There are several different clinical manifestations of plague. The most common form is bubonic plague, followed by septicemic and pneumonic plague. Other clinical manifestations include plague meningitis, plague pharyngitis, and ocular plague. General symptoms of plague include fever, chills, headaches, and nausea. Many people experience swelling in their lymph
https://en.wikipedia.org/wiki/Baudot%20code
The Baudot code () is an early character encoding for telegraphy invented by Émile Baudot in the 1870s. It was the predecessor to the International Telegraph Alphabet No. 2 (ITA2), the most common teleprinter code in use before ASCII. Each character in the alphabet is represented by a series of five bits, sent over a communication channel such as a telegraph wire or a radio signal by asynchronous serial communication. The symbol rate measurement is known as baud, and is derived from the same name. History Baudot code (ITA1) In the below table, Columns I, II, III, IV, and V show the code; the Let. and Fig. columns show the letters and numbers for the Continental and UK versions; and the sort keys present the table in the order: alphabetical, Gray and UK Baudot developed his first multiplexed telegraph in 1872 and patented it in 1874. In 1876, he changed from a six-bit code to a five-bit code, as suggested by Carl Friedrich Gauss and Wilhelm Weber in 1834, with equal on and off intervals, which allowed for transmission of the Roman alphabet, and included punctuation and control signals. The code itself was not patented (only the machine) because French patent law does not allow concepts to be patented. Baudot's 5-bit code was adapted to be sent from a manual keyboard, and no teleprinter equipment was ever constructed that used it in its original form. The code was entered on a keyboard which had just five piano-type keys and was operated using two fingers of the left hand and three fingers of the right hand. Once the keys had been pressed, they were locked down until mechanical contacts in a distributor unit passed over the sector connected to that particular keyboard, at which time the keyboard was unlocked ready for the next character to be entered, with an audible click (known as the "cadence signal") to warn the operator. Operators had to maintain a steady rhythm, and the usual speed of operation was 30 words per minute. The table "shows the allocation of th
https://en.wikipedia.org/wiki/Bestiary
A bestiary (from bestiarum vocabulum) is a compendium of beasts. Originating in the ancient world, bestiaries were made popular in the Middle Ages in illustrated volumes that described various animals and even rocks. The natural history and illustration of each beast was usually accompanied by a moral lesson. This reflected the belief that the world itself was the Word of God and that every living thing had its own special meaning. For example, the pelican, which was believed to tear open its breast to bring its young to life with its own blood, was a living representation of Jesus. Thus the bestiary is also a reference to the symbolic language of animals in Western Christian art and literature. History The bestiary — the medieval book of beasts — was among the most popular illuminated texts in northern Europe during the Middle Ages (about 500–1500). Medieval Christians understood every element of the world as a manifestation of God, and bestiaries largely focused on each animal's religious meaning. Much of what is in the bestiary came from the ancient Greeks and their philosophers. The earliest bestiary in the form in which it was later popularized was an anonymous 2nd-century Greek volume called the Physiologus, which itself summarized ancient knowledge and wisdom about animals in the writings of classical authors such as Aristotle's Historia Animalium and various works by Herodotus, Pliny the Elder, Solinus, Aelian and other naturalists. Following the Physiologus, Saint Isidore of Seville (Book XII of the Etymologiae) and Saint Ambrose expanded the religious message with reference to passages from the Bible and the Septuagint. They and other authors freely expanded or modified pre-existing models, constantly refining the moral content without interest or access to much more detail regarding the factual content. Nevertheless, the often fanciful accounts of these beasts were widely read and generally believed to be true. A few observations found in bestiaries, su
https://en.wikipedia.org/wiki/Body%20mass%20index
Body mass index (BMI) is a value derived from the mass (weight) and height of a person. The BMI is defined as the body mass divided by the square of the body height, and is expressed in units of kg/m2, resulting from mass in kilograms (kg) and height in metres (m). The BMI may be determined first by measuring its components by means of a weighing scale and a stadiometer. The multiplication and division may be carried out directly, by hand or using a calculator, or indirectly using a lookup table (or chart). The table displays BMI as a function of mass and height and may show other units of measurement (converted to metric units for the calculation). The table may also show contour lines or colours for different BMI categories. The BMI is a convenient rule of thumb used to broadly categorize a person as based on tissue mass (muscle, fat, and bone) and height. Major adult BMI classifications are underweight (under 18.5 kg/m2), normal weight (18.5 to 24.9), overweight (25 to 29.9), and obese (30 or more). When used to predict an individual's health, rather than as a statistical measurement for groups, the BMI has limitations that can make it less useful than some of the alternatives, especially when applied to individuals with abdominal obesity, short stature, or high muscle mass. BMIs under 20 and over 25 have been associated with higher all-cause mortality, with the risk increasing with distance from the 20–25 range. History Adolphe Quetelet, a Belgian astronomer, mathematician, statistician, and sociologist, devised the basis of the BMI between 1830 and 1850 as he developed what he called "social physics". Quetelet himself never intended for the index, then called the Quetelet Index, to be used as a means of medical assessment. Instead, it was a component of his study of , or the average man. Quetelet thought of the average man as a social ideal, and developed the body mass index as a means of discovering the socially ideal human person. According to Lars Grue
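The arithmetic of the definition, as a short C sketch using the adult cut-offs listed above (illustrative only; clinical interpretation carries the caveats the article notes):

```c
#include <stdio.h>

/* BMI = mass in kilograms divided by the square of height in metres.       */
static double bmi(double mass_kg, double height_m) {
    return mass_kg / (height_m * height_m);
}

/* Adult categories quoted in the text above.                               */
static const char *category(double bmi_value) {
    if (bmi_value < 18.5) return "underweight";
    if (bmi_value < 25.0) return "normal weight";
    if (bmi_value < 30.0) return "overweight";
    return "obese";
}

int main(void) {
    double b = bmi(70.0, 1.75);                           /* 70 kg, 1.75 m  */
    printf("BMI = %.1f kg/m^2 (%s)\n", b, category(b));   /* 22.9, normal   */
    return 0;
}
```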
https://en.wikipedia.org/wiki/BeOS
BeOS is an operating system for personal computers first developed by Be Inc. in 1990. It was first written to run on BeBox hardware. BeOS was positioned as a multimedia platform that could be used by a substantial population of desktop users and a competitor to Classic Mac OS and Microsoft Windows. It was ultimately unable to achieve a significant market share, and did not prove commercially viable for Be Inc. The company was acquired by Palm, Inc. Today BeOS is mainly used, and derivatives developed, by a small population of enthusiasts. The open-source operating system Haiku is a continuation of BeOS concepts and most of the application level compatibility. The latest version, Beta 4 released December 2022, still retains BeOS 5 compatibility in its x86 32-bit images. History Initially designed to run on AT&T Hobbit-based hardware, BeOS was later modified to run on PowerPC-based processors: first Be's own systems, later Apple Computer's PowerPC Reference Platform and Common Hardware Reference Platform, with the hope that Apple would purchase or license BeOS as a replacement for its aging Classic Mac OS. Toward the end of 1996, Apple was still looking for a replacement to Copland in their operating system strategy. Amidst rumours of Apple's interest in purchasing BeOS, Be wanted to increase their user base, to try to convince software developers to write software for the operating system. Be courted Macintosh clone vendors to ship BeOS with their hardware. Apple CEO Gil Amelio started negotiations to buy Be Inc., but negotiations stalled when Be CEO Jean-Louis Gassée wanted $300 million; Apple was unwilling to offer any more than $125 million. Apple's board of directors decided NeXTSTEP was a better choice and purchased NeXT in 1996 for $429 million, bringing back Apple co-founder Steve Jobs. In 1997, Power Computing began bundling BeOS (on a CD for optional installation) with its line of PowerPC-based Macintosh clones. These systems could dual boot either
https://en.wikipedia.org/wiki/Behavior
Behavior (American English) or behaviour (British English) is the range of actions and mannerisms made by individuals, organisms, systems or artificial entities in some environment. These systems can include other systems or organisms as well as the inanimate physical environment. It is the computed response of the system or organism to various stimuli or inputs, whether internal or external, conscious or subconscious, overt or covert, and voluntary or involuntary. Taking a behavior informatics perspective, a behavior consists of actor, operation, interactions, and their properties. This can be represented as a behavior vector. Models Biology Although disagreement exists as to how to precisely define behavior in a biological context, one common interpretation based on a meta-analysis of scientific literature states that "behavior is the internally coordinated responses (actions or inactions) of whole living organisms (individuals or groups) to internal or external stimuli". A broader definition of behavior, applicable to plants and other organisms, is similar to the concept of phenotypic plasticity. It describes behavior as a response to an event or environment change during the course of the lifetime of an individual, differing from other physiological or biochemical changes that occur more rapidly, and excluding changes that are a result of development (ontogeny). Behaviors can be either innate or learned from the environment. Behavior can be regarded as any action of an organism that changes its relationship to its environment. Behavior provides outputs from the organism to the environment. Human behavior The endocrine system and the nervous system likely influence human behavior. Complexity in the behavior of an organism may be correlated to the complexity of its nervous system. Generally, organisms with more complex nervous systems have a greater capacity to learn new responses and thus adjust their behavior. Animal behavior Ethology is the scientifi
https://en.wikipedia.org/wiki/Biosphere
The biosphere (from Greek βίος bíos "life" and σφαῖρα sphaira "sphere"), also known as the ecosphere (from Greek οἶκος oîkos "environment" and σφαῖρα), is the worldwide sum of all ecosystems. It can also be termed the zone of life on Earth. The biosphere (which is technically a spherical shell) is virtually a closed system with regard to matter, with minimal inputs and outputs. Regarding energy, it is an open system, with photosynthesis capturing solar energy at a rate of around 100 terawatts. By the most general biophysiological definition, the biosphere is the global ecological system integrating all living beings and their relationships, including their interaction with the elements of the lithosphere, cryosphere, hydrosphere, and atmosphere. The biosphere is postulated to have evolved, beginning with a process of biopoiesis (life created naturally from matter, such as simple organic compounds) or biogenesis (life created from living matter), at least some 3.5 billion years ago. In a general sense, biospheres are any closed, self-regulating systems containing ecosystems. This includes artificial biospheres such as and , and potentially ones on other planets or moons. Origin and use of the term The term "biosphere" was coined in 1875 by geologist Eduard Suess, who defined it as the place on Earth's surface where life dwells. While the concept has a geological origin, it is an indication of the effect of both Charles Darwin and Matthew F. Maury on the Earth sciences. The biosphere's ecological context comes from the 1920s (see Vladimir I. Vernadsky), preceding the 1935 introduction of the term "ecosystem" by Sir Arthur Tansley (see ecology history). Vernadsky defined ecology as the science of the biosphere. It is an interdisciplinary concept for integrating astronomy, geophysics, meteorology, biogeography, evolution, geology, geochemistry, hydrology and, generally speaking, all life and Earth sciences. Narrow definition Geochemists define the biosphere as
https://en.wikipedia.org/wiki/Blitz%20BASIC
Blitz BASIC is the programming language dialect of the first Blitz compilers, devised by New Zealand-based developer Mark Sibly. Being derived from BASIC, Blitz syntax was designed to be easy to pick up for beginners first learning to program. The languages are game-programming oriented but are often found general purpose enough to be used for most types of application. The Blitz language evolved as new products were released, with recent incarnations offering support for more advanced programming techniques such as object-orientation and multithreading. This led to the languages losing their BASIC moniker in later years. History The first iteration of the Blitz language was created for the Amiga platform and published by the Australian firm Memory and Storage Technology. Returning to New Zealand, Blitz BASIC 2 was published several years later (around 1993 according this press release ) by Acid Software (a local Amiga game publisher). Since then, Blitz compilers have been released on several platforms. Following the demise of the Amiga as a commercially viable platform, the Blitz BASIC 2 source code was released to the Amiga community. Development continues to this day under the name AmiBlitz. BlitzBasic Idigicon published BlitzBasic for Microsoft Windows in October 2000. The language included a built-in API for performing basic 2D graphics and audio operations. Following the release of Blitz3D, BlitzBasic is often synonymously referred to as Blitz2D. Recognition of BlitzBasic increased when a limited range of "free" versions were distributed in popular UK computer magazines such as PC Format. This resulted in a legal dispute between the developer and publisher which was eventually resolved amicably. BlitzPlus In February 2003, Blitz Research Ltd. released BlitzPlus also for Microsoft Windows. It lacked the 3D engine of Blitz3D, but did bring new features to the 2D side of the language by implementing limited Microsoft Windows control support for creating nati
https://en.wikipedia.org/wiki/Bunsen%20burner
A Bunsen burner, named after Robert Bunsen, is a kind of ambient air gas burner used as laboratory equipment; it produces a single open gas flame, and is used for heating, sterilization, and combustion. The gas can be natural gas (which is mainly methane) or a liquefied petroleum gas, such as propane, butane, or a mixture. Combustion temperature achieved depends in part on the adiabatic flame temperature of the chosen fuel mixture. History In 1852, the University of Heidelberg hired Bunsen and promised him a new laboratory building. The city of Heidelberg had begun to install coal-gas street lighting, and the university laid gas lines to the new laboratory. The designers of the building intended to use the gas not just for lighting, but also as fuel for burners for laboratory operations. For any burner lamp, it was desirable to maximize the temperature of its flame, and minimize its luminosity (which represented lost heating energy). Bunsen sought to improve existing laboratory burner lamps as regards economy, simplicity, and flame temperature, and adapt them to coal-gas fuel. While the building was under construction in late 1854, Bunsen suggested certain design principles to the university's mechanic, Peter Desaga, and asked him to construct a prototype. Similar principles had been used in an earlier burner design by Michael Faraday, and in a device patented in 1856 by gas engineer R. W. Elsner. The Bunsen/Desaga design generated a hot, sootless, non-luminous flame by mixing the gas with air in a controlled fashion before combustion. Desaga created adjustable slits for air at the bottom of the cylindrical burner, with the flame issuing at the top. When the building opened early in 1855, Desaga had made 50 burners for Bunsen's students. Two years later Bunsen published a description, and many of his colleagues soon adopted the design. Bunsen burners are now used in laboratories around the world. Operation The device in use today safely burns a continuous st
https://en.wikipedia.org/wiki/Blue%20whale
The blue whale (Balaenoptera musculus) is a marine mammal and a baleen whale. Reaching a maximum confirmed length of and weighing up to , it is the largest animal known ever to have existed. The blue whale's long and slender body can be of various shades of greyish-blue dorsally and somewhat lighter underneath. Four subspecies are recognized: B. m. musculus in the North Atlantic and North Pacific, B. m. intermedia in the Southern Ocean, B. m. brevicauda (the pygmy blue whale) in the Indian Ocean and South Pacific Ocean, B. m. indica in the Northern Indian Ocean. There is also a population in the waters off Chile that may constitute a fifth subspecies. In general, blue whale populations migrate between their summer feeding areas near the poles and their winter breeding grounds near the tropics. There is also evidence of year-round residencies, and partial or age/sex-based migration. Blue whales are filter feeders; their diet consists almost exclusively of krill. They are generally solitary or gather in small groups, and have no well-defined social structure other than mother-calf bonds. The fundamental frequency for blue whale vocalizations ranges from 8 to 25 Hz and the production of vocalizations may vary by region, season, behavior, and time of day. Orcas are their only natural predators. The blue whale was once abundant in nearly all the Earth's oceans until the end of the 19th century. It was hunted almost to the point of extinction by whalers until the International Whaling Commission banned all blue whale hunting in 1966. The International Union for Conservation of Nature has listed blue whales as Endangered as of 2018. It continues to face numerous man-made threats such as ship strikes, pollution, ocean noise and climate change. Taxonomy Nomenclature The genus name, Balaenoptera, means winged whale while the species name, musculus, could mean "muscle" or a diminutive form of "mouse", possibly a pun by Carl Linnaeus when he named the species in Systema N
https://en.wikipedia.org/wiki/Naive%20set%20theory
Naive set theory is any of several theories of sets used in the discussion of the foundations of mathematics. Unlike axiomatic set theories, which are defined using formal logic, naive set theory is defined informally, in natural language. It describes the aspects of mathematical sets familiar in discrete mathematics (for example Venn diagrams and symbolic reasoning about their Boolean algebra), and suffices for the everyday use of set theory concepts in contemporary mathematics. Sets are of great importance in mathematics; in modern formal treatments, most mathematical objects (numbers, relations, functions, etc.) are defined in terms of sets. Naive set theory suffices for many purposes, while also serving as a stepping stone towards more formal treatments. Method A naive theory in the sense of "naive set theory" is a non-formalized theory, that is, a theory that uses natural language to describe sets and operations on sets. The words and, or, if ... then, not, for some, for every are treated as in ordinary mathematics. As a matter of convenience, use of naive set theory and its formalism prevails even in higher mathematics – including in more formal settings of set theory itself. The first development of set theory was a naive set theory. It was created at the end of the 19th century by Georg Cantor as part of his study of infinite sets and developed by Gottlob Frege in his Grundgesetze der Arithmetik. Naive set theory may refer to several very distinct notions. It may refer to Informal presentation of an axiomatic set theory, e.g. as in Naive Set Theory by Paul Halmos. Early or later versions of Georg Cantor's theory and other informal systems. Decidedly inconsistent theories (whether axiomatic or not), such as a theory of Gottlob Frege that yielded Russell's paradox, and theories of Giuseppe Peano and Richard Dedekind. Paradoxes The assumption that any property may be used to form a set, without restriction, leads to paradoxes. One common example is Rus
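The restriction mentioned above can be made concrete with Russell's paradox, sketched here in LaTeX; the notation is standard and the derivation assumes only unrestricted comprehension (the ability to form a set from any property):
R = \{\, x : x \notin x \,\}
\text{Then } R \in R \iff R \notin R, \text{ a contradiction, so no such set } R \text{ can exist.}
Axiomatic set theories avoid this by restricting which properties may be used to form sets.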
https://en.wikipedia.org/wiki/B%C3%A9zout%27s%20identity
In mathematics, Bézout's identity (also called Bézout's lemma), named after Étienne Bézout who proved it for polynomials, is the following theorem: Here the greatest common divisor of and is taken to be . The integers and are called Bézout coefficients for ; they are not unique. A pair of Bézout coefficients can be computed by the extended Euclidean algorithm, and this pair is, in the case of integers one of the two pairs such that and equality occurs only if one of and is a multiple of the other. As an example, the greatest common divisor of 15 and 69 is 3, and 3 can be written as a combination of 15 and 69 as with Bézout coefficients −9 and 2. Many other theorems in elementary number theory, such as Euclid's lemma or the Chinese remainder theorem, result from Bézout's identity. A Bézout domain is an integral domain in which Bézout's identity holds. In particular, Bézout's identity holds in principal ideal domains. Every theorem that results from Bézout's identity is thus true in all principal ideal domains. Structure of solutions If and are not both zero and one pair of Bézout coefficients has been computed (for example, using the extended Euclidean algorithm), all pairs can be represented in the form where is an arbitrary integer, is the greatest common divisor of and , and the fractions simplify to integers. If and are both nonzero, then exactly two of these pairs of Bézout coefficients satisfy and equality may occur only if one of and divides the other. This relies on a property of Euclidean division: given two non-zero integers and , if does not divide , there is exactly one pair such that and and another one such that and The two pairs of small Bézout's coefficients are obtained from the given one by choosing for in the above formula either of the two integers next to . The extended Euclidean algorithm always produces one of these two minimal pairs. Example Let and , then . Then the following Bézout's identities a
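A compact LaTeX restatement of the identity, of the 15-and-69 example discussed above, and of the form of all solution pairs, using the standard formulation:
\gcd(a,b) = d \;\Longrightarrow\; \exists\, x, y \in \mathbb{Z} : \; ax + by = d
\gcd(15, 69) = 3, \qquad 15 \cdot (-9) + 69 \cdot 2 = -135 + 138 = 3
\text{All Bézout pairs, given one pair } (x_0, y_0): \quad \left(x_0 + k\,\tfrac{b}{d},\; y_0 - k\,\tfrac{a}{d}\right), \quad k \in \mathbb{Z}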
https://en.wikipedia.org/wiki/Bernoulli%20number
In mathematics, the Bernoulli numbers are a sequence of rational numbers which occur frequently in analysis. The Bernoulli numbers appear in (and can be defined by) the Taylor series expansions of the tangent and hyperbolic tangent functions, in Faulhaber's formula for the sum of m-th powers of the first n positive integers, in the Euler–Maclaurin formula, and in expressions for certain values of the Riemann zeta function. The values of the first 20 Bernoulli numbers are given in the adjacent table. Two conventions are used in the literature, denoted here by and ; they differ only for , where and . For every odd , . For every even , is negative if is divisible by 4 and positive otherwise. The Bernoulli numbers are special values of the Bernoulli polynomials , with and . The Bernoulli numbers were discovered around the same time by the Swiss mathematician Jacob Bernoulli, after whom they are named, and independently by Japanese mathematician Seki Takakazu. Seki's discovery was posthumously published in 1712 in his work Katsuyō Sanpō; Bernoulli's, also posthumously, in his Ars Conjectandi of 1713. Ada Lovelace's note G on the Analytical Engine from 1842 describes an algorithm for generating Bernoulli numbers with Babbage's machine. As a result, the Bernoulli numbers have the distinction of being the subject of the first published complex computer program. Notation The superscript used in this article distinguishes the two sign conventions for Bernoulli numbers. Only the term is affected: with ( / ) is the sign convention prescribed by NIST and most modern textbooks. with ( / ) was used in the older literature, and (since 2022) by Donald Knuth following Peter Luschny's "Bernoulli Manifesto". In the formulas below, one can switch from one sign convention to the other with the relation , or for integer = 2 or greater, simply ignore it. Since for all odd , and many formulas only involve even-index Bernoulli numbers, a few authors write "" instead o
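One standard way to define the sequence, consistent with the two sign conventions described above, uses exponential generating functions; a brief LaTeX sketch with the first few values:
\frac{t}{e^{t} - 1} = \sum_{n=0}^{\infty} B_n^{-} \frac{t^{n}}{n!}, \qquad \frac{t\,e^{t}}{e^{t} - 1} = \sum_{n=0}^{\infty} B_n^{+} \frac{t^{n}}{n!}
B_0 = 1,\quad B_1^{-} = -\tfrac{1}{2},\quad B_1^{+} = +\tfrac{1}{2},\quad B_2 = \tfrac{1}{6},\quad B_3 = 0,\quad B_4 = -\tfrac{1}{30},\quad B_6 = \tfrac{1}{42}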
https://en.wikipedia.org/wiki/Bistability
In a dynamical system, bistability means the system has two stable equilibrium states. A bistable structure can be resting in either of two states. An example of a mechanical device which is bistable is a light switch. The switch lever is designed to rest in the "on" or "off" position, but not between the two. Bistable behavior can occur in mechanical linkages, electronic circuits, nonlinear optical systems, chemical reactions, and physiological and biological systems. In a conservative force field, bistability stems from the fact that the potential energy has two local minima, which are the stable equilibrium points. These rest states need not have equal potential energy. By mathematical arguments, a local maximum, an unstable equilibrium point, must lie between the two minima. At rest, a particle will be in one of the minimum equilibrium positions, because that corresponds to the state of lowest energy. The maximum can be visualized as a barrier between them. A system can transition from one state of minimal energy to the other if it is given enough activation energy to penetrate the barrier (compare activation energy and Arrhenius equation for the chemical case). After the barrier has been reached, assuming the system has damping, it will relax into the other minimum state in a time called the relaxation time. Bistability is widely used in digital electronics devices to store binary data. It is the essential characteristic of the flip-flop, a circuit which is a fundamental building block of computers and some types of semiconductor memory. A bistable device can store one bit of binary data, with one state representing a "0" and the other state a "1". It is also used in relaxation oscillators, multivibrators, and the Schmitt trigger. Optical bistability is an attribute of certain optical devices where two resonant transmissions states are possible and stable, dependent on the input. Bistability can also arise in biochemical systems, where it creates digi
https://en.wikipedia.org/wiki/Berry%20paradox
The Berry paradox is a self-referential paradox arising from an expression like "The smallest positive integer not definable in under sixty letters" (a phrase with fifty-seven letters). Bertrand Russell, the first to discuss the paradox in print, attributed it to G. G. Berry (1867–1928), a junior librarian at Oxford's Bodleian Library. Russell called Berry "the only person in Oxford who understood mathematical logic". The paradox was called "Richard's paradox" by Jean-Yves Girard. Overview Consider the expression: "The smallest positive integer not definable in under sixty letters." Since there are only twenty-six letters in the English alphabet, there are finitely many phrases of under sixty letters, and hence finitely many positive integers that are defined by phrases of under sixty letters. Since there are infinitely many positive integers, this means that there are positive integers that cannot be defined by phrases of under sixty letters. If there are positive integers that satisfy a given property, then there is a smallest positive integer that satisfies that property; therefore, there is a smallest positive integer satisfying the property "not definable in under sixty letters". This is the integer to which the above expression refers. But the above expression is only fifty-seven letters long, therefore it is definable in under sixty letters, and is not the smallest positive integer not definable in under sixty letters, and is not defined by this expression. This is a paradox: there must be an integer defined by this expression, but since the expression is self-contradictory (any integer it defines is definable in under sixty letters), there cannot be any integer defined by it. Perhaps another helpful analogy to Berry's Paradox would be the phrase "indescribable feeling". If the feeling is indeed indescribable, then no description of the feeling would be true. But if the word "indescribable" communicates something about the feeling, then it may be cons
https://en.wikipedia.org/wiki/Combinatorics
Combinatorics is an area of mathematics primarily concerned with counting, both as a means and an end in obtaining results, and certain properties of finite structures. It is closely related to many other areas of mathematics and has many applications ranging from logic to statistical physics and from evolutionary biology to computer science. Combinatorics is well known for the breadth of the problems it tackles. Combinatorial problems arise in many areas of pure mathematics, notably in algebra, probability theory, topology, and geometry, as well as in its many application areas. Many combinatorial questions have historically been considered in isolation, giving an ad hoc solution to a problem arising in some mathematical context. In the later twentieth century, however, powerful and general theoretical methods were developed, making combinatorics into an independent branch of mathematics in its own right. One of the oldest and most accessible parts of combinatorics is graph theory, which by itself has numerous natural connections to other areas. Combinatorics is used frequently in computer science to obtain formulas and estimates in the analysis of algorithms. A mathematician who studies combinatorics is called a . Definition The full scope of combinatorics is not universally agreed upon. According to H.J. Ryser, a definition of the subject is difficult because it crosses so many mathematical subdivisions. Insofar as an area can be described by the types of problems it addresses, combinatorics is involved with: the enumeration (counting) of specified structures, sometimes referred to as arrangements or configurations in a very general sense, associated with finite systems, the existence of such structures that satisfy certain given criteria, the construction of these structures, perhaps in many ways, and optimization: finding the "best" structure or solution among several possibilities, be it the "largest", "smallest" or satisfying some other optimality cr
https://en.wikipedia.org/wiki/Calculus
Calculus is the mathematical study of continuous change, in the same way that geometry is the study of shape, and algebra is the study of generalizations of arithmetic operations. It has two major branches, differential calculus and integral calculus; the former concerns instantaneous rates of change, and the slopes of curves, while the latter concerns accumulation of quantities, and areas under or between curves. These two branches are related to each other by the fundamental theorem of calculus, and they make use of the fundamental notions of convergence of infinite sequences and infinite series to a well-defined limit. Infinitesimal calculus was developed independently in the late 17th century by Isaac Newton and Gottfried Wilhelm Leibniz. Later work, including codifying the idea of limits, put these developments on a more solid conceptual footing. Today, calculus has widespread uses in science, engineering, and social science. Etymology In mathematics education, calculus denotes courses of elementary mathematical analysis, which are mainly devoted to the study of functions and limits. The word calculus is Latin for "small pebble" (the diminutive of calx, meaning "stone"), a meaning which still persists in medicine. Because such pebbles were used for counting out distances, tallying votes, and doing abacus arithmetic, the word came to mean a method of computation. In this sense, it was used in English at least as early as 1672, several years before the publications of Leibniz and Newton. In addition to differential calculus and integral calculus, the term is also used for naming specific methods of calculation and related theories that seek to model a particular concept in terms of mathematics. Examples of this convention include propositional calculus, Ricci calculus, calculus of variations, lambda calculus, and process calculus. Furthermore, the term "calculus" has variously been applied in ethics and philosophy, for such systems as Bentham's felicific cal
https://en.wikipedia.org/wiki/Central%20processing%20unit
A central processing unit (CPU)—also called a central processor or main processor—is the most important processor in a given computer. Its electronic circuitry executes instructions of a computer program, such as arithmetic, logic, controlling, and input/output (I/O) operations. This role contrasts with that of external components, such as main memory and I/O circuitry, and specialized coprocessors such as graphics processing units (GPUs). The form, design, and implementation of CPUs have changed over time, but their fundamental operation remains almost unchanged. Principal components of a CPU include the arithmetic–logic unit (ALU) that performs arithmetic and logic operations, processor registers that supply operands to the ALU and store the results of ALU operations, and a control unit that orchestrates the fetching (from memory), decoding and execution (of instructions) by directing the coordinated operations of the ALU, registers, and other components. Most modern CPUs are implemented on integrated circuit (IC) microprocessors, with one or more CPUs on a single IC chip. Microprocessor chips with multiple CPUs are multi-core processors. The individual physical CPUs, processor cores, can also be multithreaded to create additional virtual or logical CPUs. An IC that contains a CPU may also contain memory, peripheral interfaces, and other components of a computer; such integrated devices are variously called microcontrollers or systems on a chip (SoC). Array processors or vector processors have multiple processors that operate in parallel, with no unit considered central. Virtual CPUs are an abstraction of dynamically aggregated computational resources. History Early computers such as the ENIAC had to be physically rewired to perform different tasks, which caused these machines to be called "fixed-program computers". The "central processing unit" term has been in use since as early as 1955. Since the term "CPU" is generally defined as a device for software (c
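To illustrate the fetch-decode-execute cycle described above, here is a minimal sketch in Python of a toy accumulator machine; the four-instruction set (LOAD, ADD, STORE, HALT) and the memory layout are invented for illustration and do not correspond to any real CPU:
# Toy CPU: fetch-decode-execute loop over a tiny invented instruction set.
memory = {0: ("LOAD", 10), 1: ("ADD", 11), 2: ("STORE", 12), 3: ("HALT", None),
          10: 2, 11: 3, 12: 0}     # program at addresses 0-3, data at 10-12
pc, acc = 0, 0                     # program counter and accumulator registers

while True:
    op, operand = memory[pc]       # fetch (and decode; instructions are pre-parsed tuples here)
    pc += 1
    if op == "LOAD":               # execute: move data from memory into the accumulator
        acc = memory[operand]
    elif op == "ADD":              # execute: ALU addition
        acc = acc + memory[operand]
    elif op == "STORE":            # execute: write the accumulator back to memory
        memory[operand] = acc
    elif op == "HALT":
        break

print(memory[12])                  # prints 5 (2 + 3)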
https://en.wikipedia.org/wiki/Code
In communications and information processing, code is a system of rules to convert information—such as a letter, word, sound, image, or gesture—into another form, sometimes shortened or secret, for communication through a communication channel or storage in a storage medium. An early example is an invention of language, which enabled a person, through speech, to communicate what they thought, saw, heard, or felt to others. But speech limits the range of communication to the distance a voice can carry and limits the audience to those present when the speech is uttered. The invention of writing, which converted spoken language into visual symbols, extended the range of communication across space and time. The process of encoding converts information from a source into symbols for communication or storage. Decoding is the reverse process, converting code symbols back into a form that the recipient understands, such as English or/and Spanish. One reason for coding is to enable communication in places where ordinary plain language, spoken or written, is difficult or impossible. For example, semaphore, where the configuration of flags held by a signaler or the arms of a semaphore tower encodes parts of the message, typically individual letters, and numbers. Another person standing a great distance away can interpret the flags and reproduce the words sent. Theory In information theory and computer science, a code is usually considered as an algorithm that uniquely represents symbols from some source alphabet, by encoded strings, which may be in some other target alphabet. An extension of the code for representing sequences of symbols over the source alphabet is obtained by concatenating the encoded strings. Before giving a mathematically precise definition, this is a brief example. The mapping is a code, whose source alphabet is the set and whose target alphabet is the set . Using the extension of the code, the encoded string 0011001 can be grouped into codewords a
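A small Python sketch of a code and its extension by concatenation; the mapping used here, a to 0, b to 01, c to 011 over the target alphabet {0, 1}, is an assumed example chosen so that the encoded string 0011001 mentioned above groups unambiguously into codewords:
# A hypothetical variable-length code over source alphabet {a, b, c} and target alphabet {0, 1}.
code = {"a": "0", "b": "01", "c": "011"}
inverse = {codeword: symbol for symbol, codeword in code.items()}

def encode(message):
    # Extension of the code: concatenate the codeword of each source symbol.
    return "".join(code[symbol] for symbol in message)

def decode(encoded):
    # Every codeword in this particular code is a '0' followed by a maximal run of '1's,
    # so the encoded string can be split just before each '0'.
    words, current = [], ""
    for bit in encoded:
        if bit == "0" and current:
            words.append(current)
            current = ""
        current += bit
    words.append(current)
    return "".join(inverse[word] for word in words)

print(encode("acab"))       # 0011001
print(decode("0011001"))    # acab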
https://en.wikipedia.org/wiki/Cipher
In cryptography, a cipher (or cypher) is an algorithm for performing encryption or decryption—a series of well-defined steps that can be followed as a procedure. An alternative, less common term is encipherment. To encipher or encode is to convert information into cipher or code. In common parlance, "cipher" is synonymous with "code", as they are both a set of steps that encrypt a message; however, the concepts are distinct in cryptography, especially classical cryptography. Codes generally substitute different length strings of characters in the output, while ciphers generally substitute the same number of characters as are input. A code maps one meaning with another. Words and phrases can be coded as letters or numbers. Codes typically have direct meaning from input to key. Codes primarily function to save time. Ciphers are algorithmic. The given input must follow the cipher's process to be solved. Ciphers are commonly used to encrypt written information. Codes operated by substituting according to a large codebook which linked a random string of characters or numbers to a word or phrase. For example, "UQJHSE" could be the code for "Proceed to the following coordinates." When using a cipher the original information is known as plaintext, and the encrypted form as ciphertext. The ciphertext message contains all the information of the plaintext message, but is not in a format readable by a human or computer without the proper mechanism to decrypt it. The operation of a cipher usually depends on a piece of auxiliary information, called a key (or, in traditional NSA parlance, a cryptovariable). The encrypting procedure is varied depending on the key, which changes the detailed operation of the algorithm. A key must be selected before using a cipher to encrypt a message. Without knowledge of the key, it should be extremely difficult, if not impossible, to decrypt the resulting ciphertext into readable plaintext. Most modern ciphers can be categorized in several wa
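As a minimal illustration of a cipher whose detailed operation is varied by a key, here is a Caesar shift in Python; it is a classical example chosen for brevity, not a secure modern cipher, and the plaintext is the same sample phrase used for the codebook example above:
import string

ALPHABET = string.ascii_uppercase

def caesar(text, key, decrypt=False):
    # The same algorithm encrypts or decrypts; only the direction of the key (shift) changes.
    shift = -key if decrypt else key
    table = str.maketrans(ALPHABET, ALPHABET[shift % 26:] + ALPHABET[:shift % 26])
    return text.upper().translate(table)

ciphertext = caesar("PROCEED TO THE FOLLOWING COORDINATES", key=3)
print(ciphertext)                            # SURFHHG WR WKH IROORZLQJ FRRUGLQDWHV
print(caesar(ciphertext, key=3, decrypt=True))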
https://en.wikipedia.org/wiki/Common%20descent
Common descent is a concept in evolutionary biology applicable when one species is the ancestor of two or more species later in time. According to modern evolutionary biology, all living beings could be descendants of a unique ancestor commonly referred to as the last universal common ancestor (LUCA) of all life on Earth. Common descent is an effect of speciation, in which multiple species derive from a single ancestral population. The more recent the ancestral population two species have in common, the more closely are they related. The most recent common ancestor of all currently living organisms is the last universal ancestor, which lived about 3.9 billion years ago. The two earliest pieces of evidence for life on Earth are graphite found to be biogenic in 3.7 billion-year-old metasedimentary rocks discovered in western Greenland and microbial mat fossils found in 3.48 billion-year-old sandstone discovered in Western Australia. All currently living organisms on Earth share a common genetic heritage, though the suggestion of substantial horizontal gene transfer during early evolution has led to questions about the monophyly (single ancestry) of life. 6,331 groups of genes common to all living animals have been identified; these may have arisen from a single common ancestor that lived 650 million years ago in the Precambrian. Universal common descent through an evolutionary process was first proposed by the British naturalist Charles Darwin in the concluding sentence of his 1859 book On the Origin of Species: History The idea that all living things (including things considered non-living by science) are related is a recurring theme in many indigenous worldviews across the world. Later on, in the 1740s, the French mathematician Pierre Louis Maupertuis arrived at the idea that all organisms had a common ancestor, and had diverged through random variation and natural selection. In Essai de cosmologie (1750), Maupertuis noted: May we not say that, in the fortuito
https://en.wikipedia.org/wiki/Character%20encoding
Character encoding is the process of assigning numbers to graphical characters, especially the written characters of human language, allowing them to be stored, transmitted, and transformed using digital computers. The numerical values that make up a character encoding are known as "code points" and collectively comprise a "code space", a "code page", or a "character map". Early character codes associated with the optical or electrical telegraph could only represent a subset of the characters used in written languages, sometimes restricted to upper case letters, numerals and some punctuation only. The low cost of digital representation of data in modern computer systems allows more elaborate character codes (such as Unicode) which represent most of the characters used in many written languages. Character encoding using internationally accepted standards permits worldwide interchange of text in electronic form. History The history of character codes illustrates the evolving need for machine-mediated character-based symbolic information over a distance, using once-novel electrical means. The earliest codes were based upon manual and hand-written encoding and cyphering systems, such as Bacon's cipher, Braille, international maritime signal flags, and the 4-digit encoding of Chinese characters for a Chinese telegraph code (Hans Schjellerup, 1869). With the adoption of electrical and electro-mechanical techniques these earliest codes were adapted to the new capabilities and limitations of the early machines. The earliest well-known electrically transmitted character code, Morse code, introduced in the 1840s, used a system of four "symbols" (short signal, long signal, short space, long space) to generate codes of variable length. Though some commercial use of Morse code was via machinery, it was often used as a manual code, generated by hand on a telegraph key and decipherable by ear, and persists in amateur radio and aeronautical use. Most codes are of fixed per-char
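A short Python illustration of code points and of the same characters serialized under two different encodings; the sample characters are arbitrary:
# Code points: numbers assigned to characters.
for ch in "A", "é", "中":
    print(ch, hex(ord(ch)))        # 0x41, 0xe9, 0x4e2d

# The same code points serialized as bytes under two different encodings.
print("Aé中".encode("utf-8"))      # b'A\xc3\xa9\xe4\xb8\xad'
print("Aé中".encode("utf-16-le"))  # b'A\x00\xe9\x00-N'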
https://en.wikipedia.org/wiki/Computer%20data%20storage
Computer data storage is a technology consisting of computer components and recording media that are used to retain digital data. It is a core function and fundamental component of computers. The central processing unit (CPU) of a computer is what manipulates data by performing computations. In practice, almost all computers use a storage hierarchy, which puts fast but expensive and small storage options close to the CPU and slower but less expensive and larger options further away. Generally, the fast technologies are referred to as "memory", while slower persistent technologies are referred to as "storage". Even the first computer designs, Charles Babbage's Analytical Engine and Percy Ludgate's Analytical Machine, clearly distinguished between processing and memory (Babbage stored numbers as rotations of gears, while Ludgate stored numbers as displacements of rods in shuttles). This distinction was extended in the Von Neumann architecture, where the CPU consists of two main parts: The control unit and the arithmetic logic unit (ALU). The former controls the flow of data between the CPU and memory, while the latter performs arithmetic and logical operations on data. Functionality Without a significant amount of memory, a computer would merely be able to perform fixed operations and immediately output the result. It would have to be reconfigured to change its behavior. This is acceptable for devices such as desk calculators, digital signal processors, and other specialized devices. Von Neumann machines differ in having a memory in which they store their operating instructions and data. Such computers are more versatile in that they do not need to have their hardware reconfigured for each new program, but can simply be reprogrammed with new in-memory instructions; they also tend to be simpler to design, in that a relatively simple processor may keep state between successive computations to build up complex procedural results. Most modern computers are von Neumann
https://en.wikipedia.org/wiki/Combination
In mathematics, a combination is a selection of items from a set that has distinct members, such that the order of selection does not matter (unlike permutations). For example, given three fruits, say an apple, an orange and a pear, there are three combinations of two that can be drawn from this set: an apple and a pear; an apple and an orange; or a pear and an orange. More formally, a k-combination of a set S is a subset of k distinct elements of S. So, two combinations are identical if and only if each combination has the same members. (The arrangement of the members in each set does not matter.) If the set has n elements, the number of k-combinations, denoted by or , is equal to the binomial coefficient which can be written using factorials as whenever , and which is zero when . This formula can be derived from the fact that each k-combination of a set S of n members has permutations so or . The set of all k-combinations of a set S is often denoted by . A combination is a combination of n things taken k at a time without repetition. To refer to combinations in which repetition is allowed, the terms k-combination with repetition, k-multiset, or k-selection, are often used. If, in the above example, it were possible to have two of any one kind of fruit there would be 3 more 2-selections: one with two apples, one with two oranges, and one with two pears. Although the set of three fruits was small enough to write a complete list of combinations, this becomes impractical as the size of the set increases. For example, a poker hand can be described as a 5-combination (k = 5) of cards from a 52 card deck (n = 52). The 5 cards of the hand are all distinct, and the order of cards in the hand does not matter. There are 2,598,960 such combinations, and the chance of drawing any one hand at random is 1 / 2,598,960. Number of k-combinations The number of k-combinations from a given set S of n elements is often denoted in elementary combinatorics texts by , or by a va
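The binomial coefficient formula and the two examples above (three fruits, poker hands), written out in LaTeX:
\binom{n}{k} = \frac{n!}{k!\,(n-k)!} \quad \text{for } 0 \le k \le n, \qquad \binom{n}{k} = 0 \ \text{for } k > n
\binom{3}{2} = \frac{3!}{2!\,1!} = 3, \qquad \binom{52}{5} = \frac{52!}{5!\,47!} = 2{,}598{,}960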
https://en.wikipedia.org/wiki/Software
Software is a set of computer programs and associated documentation and data. This is in contrast to hardware, from which the system is built and which actually performs the work. At the lowest programming level, executable code consists of machine language instructions supported by an individual processor—typically a central processing unit (CPU) or a graphics processing unit (GPU). Machine language consists of groups of binary values signifying processor instructions that change the state of the computer from its preceding state. For example, an instruction may change the value stored in a particular storage location in the computer—an effect that is not directly observable to the user. An instruction may also invoke one of many input or output operations, for example, displaying some text on a computer screen, causing state changes that should be visible to the user. The processor executes the instructions in the order they are provided, unless it is instructed to "jump" to a different instruction or is interrupted by the operating system. , most personal computers, smartphone devices, and servers have processors with multiple execution units, or multiple processors performing computation together, so computing has become a much more concurrent activity than in the past. The majority of software is written in high-level programming languages. They are easier and more efficient for programmers because they are closer to natural languages than machine languages. High-level languages are translated into machine language using a compiler, an interpreter, or a combination of the two. Software may also be written in a low-level assembly language that has a strong correspondence to the computer's machine language instructions and is translated into machine language using an assembler. History An algorithm for what would have been the first piece of software was written by Ada Lovelace in the 19th century, for the planned Analytical Engine. She created proofs to show
https://en.wikipedia.org/wiki/Computer%20programming
Computer programming or coding is the composition of sequences of instructions, called programs, that computers can follow to perform tasks. It involves designing and implementing algorithms, step-by-step specifications of procedures, by writing code in one or more programming languages. Programmers typically use high-level programming languages that are more easily intelligible to humans than machine code, which is directly executed by the central processing unit. Proficient programming usually requires expertise in several different subjects, including knowledge of the application domain, details of programming languages and generic code libraries, specialized algorithms, and formal logic. Auxiliary tasks accompanying and related to programming include analyzing requirements, testing, debugging (investigating and fixing problems), implementation of build systems, and management of derived artifacts, such as programs' machine code. While these are sometimes considered programming, often the term software development is used for this larger overall process – with the terms programming, implementation, and coding reserved for the writing and editing of code per se. Sometimes software development is known as software engineering, especially when it employs formal methods or follows an engineering design process. History Programmable devices have existed for centuries. As early as the 9th century, a programmable music sequencer was invented by the Persian Banu Musa brothers, who described an automated mechanical flute player in the Book of Ingenious Devices. In 1206, the Arab engineer Al-Jazari invented a programmable drum machine where a musical mechanical automaton could be made to play different rhythms and drum patterns, via pegs and cams. In 1801, the Jacquard loom could produce entirely different weaves by changing the "program" – a series of pasteboard cards with holes punched in them. Code-breaking algorithms have also existed for centuries. In the 9th cent
https://en.wikipedia.org/wiki/Country%20code
A country code is a short alphanumeric identification code for countries and dependent areas. Its primary use is in data processing and communications. Several identification systems have been developed. The term country code frequently refers to ISO 3166-1 alpha-2, as well as the telephone country code, which is embodied in the E.164 recommendation by the International Telecommunication Union (ITU). ISO 3166-1 The standard ISO 3166-1 defines short identification codes for most countries and dependent areas: ISO 3166-1 alpha-2: two-letter code ISO 3166-1 alpha-3: three-letter code ISO 3166-1 numeric: three-digit code The two-letter codes are used as the basis for other codes and applications, for example, for ISO 4217 currency codes with deviations, for country code top-level domain names (ccTLDs) on the Internet: list of Internet TLDs. Other applications are defined in ISO 3166-1 alpha-2. ITU country codes In telecommunication, a country code, or international subscriber dialing (ISD) code, is a telephone number prefix used in international direct dialing (IDD) and for destination routing of telephone calls to a country other than the caller's. A country or region with an autonomous telephone administration must apply for membership in the International Telecommunication Union (ITU) to participate in the international public switched telephone network (PSTN). Country codes are defined by the ITU-T section of the ITU in standards E.123 and E.164. Country codes constitute the international telephone numbering plan, and are dialed only when calling a telephone number in another country. They are dialed before the national telephone number. International calls require at least one additional prefix to be dialed before the country code, to connect the call to international circuits, the international call prefix. When printing telephone numbers this is indicated by a plus-sign (+) in front of a complete international telephone number, per recommendation E.164 by the
https://en.wikipedia.org/wiki/Cladistics
Cladistics (; ) is an approach to biological classification in which organisms are categorized in groups ("clades") based on hypotheses of most recent common ancestry. The evidence for hypothesized relationships is typically shared derived characteristics (synapomorphies) that are not present in more distant groups and ancestors. However, from an empirical perspective, common ancestors are inferences based on a cladistic hypothesis of relationships of taxa whose character states can be observed. Theoretically, a last common ancestor and all its descendants constitute a (minimal) clade. Importantly, all descendants stay in their overarching ancestral clade. For example, if the terms worms or fishes were used within a strict cladistic framework, these terms would include humans. Many of these terms are normally used paraphyletically, outside of cladistics, e.g. as a 'grade', which are fruitless to precisely delineate, especially when including extinct species. Radiation results in the generation of new subclades by bifurcation, but in practice sexual hybridization may blur very closely related groupings. As a hypothesis, a clade can be rejected only if some groupings were explicitly excluded. It may then be found that the excluded group did actually descend from the last common ancestor of the group, and thus emerged within the group. ("Evolved from" is misleading, because in cladistics all descendants stay in the ancestral group). Upon finding that the group is paraphyletic this way, either such excluded groups should be granted to the clade, or the group should be abolished. Branches down to the divergence to the next significant (e.g. extant) sister are considered stem-groupings of the clade, but in principle each level stands on its own, to be assigned a unique name. For a fully bifurcated tree, adding a group to a tree also adds an additional (named) clade, and a new level on that branch. Specifically, also extinct groups are always put on a side-branch, not d
https://en.wikipedia.org/wiki/Condensed%20matter%20physics
Condensed matter physics is the field of physics that deals with the macroscopic and microscopic physical properties of matter, especially the solid and liquid phases which arise from electromagnetic forces between atoms. More generally, the subject deals with condensed phases of matter: systems of many constituents with strong interactions among them. More exotic condensed phases include the superconducting phase exhibited by certain materials at extremely low cryogenic temperature, the ferromagnetic and antiferromagnetic phases of spins on crystal lattices of atoms, and the Bose–Einstein condensate found in ultracold atomic systems. Condensed matter physicists seek to understand the behavior of these phases by experiments to measure various material properties, and by applying the physical laws of quantum mechanics, electromagnetism, statistical mechanics, and other physics theories to develop mathematical models. The diversity of systems and phenomena available for study makes condensed matter physics the most active field of contemporary physics: one third of all American physicists self-identify as condensed matter physicists, and the Division of Condensed Matter Physics is the largest division at the American Physical Society. The field overlaps with chemistry, materials science, engineering and nanotechnology, and relates closely to atomic physics and biophysics. The theoretical physics of condensed matter shares important concepts and methods with that of particle physics and nuclear physics. A variety of topics in physics such as crystallography, metallurgy, elasticity, magnetism, etc., were treated as distinct areas until the 1940s, when they were grouped together as solid-state physics. Around the 1960s, the study of physical properties of liquids was added to this list, forming the basis for the more comprehensive specialty of condensed matter physics. The Bell Telephone Laboratories was one of the first institutes to conduct a research program in con
https://en.wikipedia.org/wiki/Conversion%20of%20units
Conversion of units is the conversion between different units of measurement for the same quantity, typically through multiplicative conversion factors which change the measured quantity value without changing its effects. Unit conversion is often easier within the metric or the SI than in others, due to the regular 10-base in all units and the prefixes that increase or decrease by 3 powers of 10 at a time. Overview The process of conversion depends on the specific situation and the intended purpose. This may be governed by regulation, contract, technical specifications or other published standards. Engineering judgment may include such factors as: The precision and accuracy of measurement and the associated uncertainty of measurement. The statistical confidence interval or tolerance interval of the initial measurement. The number of significant figures of the measurement. The intended use of the measurement including the engineering tolerances. Historical definitions of the units and their derivatives used in old measurements; e.g., international foot vs. US survey foot. Some conversions from one system of units to another need to be exact, without increasing or decreasing the precision of the first measurement. This is sometimes called soft conversion. It does not involve changing the physical configuration of the item being measured. By contrast, a hard conversion or an adaptive conversion may not be exactly equivalent. It changes the measurement to convenient and workable numbers and units in the new system. It sometimes involves a slightly different configuration, or size substitution, of the item. Nominal values are sometimes allowed and used. Factor-label method The factor-label method, also known as the unit-factor method or the unity bracket method, is a widely used technique for unit conversions using the rules of algebra. The factor-label method is the sequential application of conversion factors expressed as fractions and arranged so that an
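A worked factor-label example (the quantities are chosen only for illustration): converting 90 km/h to metres per second by multiplying by conversion factors each equal to one:
\frac{90\ \mathrm{km}}{1\ \mathrm{h}} \times \frac{1000\ \mathrm{m}}{1\ \mathrm{km}} \times \frac{1\ \mathrm{h}}{3600\ \mathrm{s}} = 25\ \mathrm{m/s}
The unwanted units (km and h) cancel, leaving only the target units.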
https://en.wikipedia.org/wiki/Computational%20linguistics
Computational linguistics is an interdisciplinary field concerned with the computational modelling of natural language, as well as the study of appropriate computational approaches to linguistic questions. In general, computational linguistics draws upon linguistics, computer science, artificial intelligence, mathematics, logic, philosophy, cognitive science, cognitive psychology, psycholinguistics, anthropology and neuroscience, among others. Since the 2020s, computational linguistics has become a near-synonym of either natural language processing or language technology, with deep learning approaches, such as large language models, outperforming the specific approaches previously used in the field. Origins The field overlapped with artificial intelligence since the efforts in the United States in the 1950s to use computers to automatically translate texts from foreign languages, particularly Russian scientific journals, into English. Since rule-based approaches were able to make arithmetic (systematic) calculations much faster and more accurately than humans, it was expected that lexicon, morphology, syntax and semantics can be learned using explicit rules, as well. After the failure of rule-based approaches, David Hays coined the term in order to distinguish the field from AI and co-founded both the Association for Computational Linguistics (ACL) and the International Committee on Computational Linguistics (ICCL) in the 1970s and 1980s. What started as an effort to translate between languages evolved into a much wider field of natural language processing. Annotated corpora In order to be able to meticulously study the English language, an annotated text corpus was much needed. The Penn Treebank was one of the most used corpora. It consisted of IBM computer manuals, transcribed telephone conversations, and other texts, together containing over 4.5 million words of American English, annotated using both part-of-speech tagging and syntactic bracketing. Japanese
https://en.wikipedia.org/wiki/Claude%20Shannon
Claude Elwood Shannon (April 30, 1916 – February 24, 2001) was an American mathematician, electrical engineer, computer scientist and cryptographer known as the "father of information theory". He is credited alongside George Boole for laying the foundations of the Information Age. As a 21-year-old master's degree student at the Massachusetts Institute of Technology (MIT), he wrote his thesis demonstrating that electrical applications of Boolean algebra could construct any logical numerical relationship. Shannon contributed to the field of cryptanalysis for national defense of the United States during World War II, including his fundamental work on codebreaking and secure telecommunications, writing a paper which would be considered one of the foundational pieces of modern cryptography. His mathematical theory of information laid the foundations for the field of information theory, with his famous paper being called the "Magna Carta of the Information Age" by Scientific American. He also made contributions to artificial intelligence. His achievements are said to be on par with those of Albert Einstein and Alan Turing in their fields. Biography Childhood The Shannon family lived in Gaylord, Michigan, and Claude was born in a hospital in nearby Petoskey. His father, Claude Sr. (1862–1934), was a businessman and, for a while, a judge of probate in Gaylord. His mother, Mabel Wolf Shannon (1890–1945), was a language teacher, who also served as the principal of Gaylord High School. Claude Sr. was a descendant of New Jersey settlers, while Mabel was a child of German immigrants. Shannon's family was active in their Methodist Church during his youth. Most of the first 16 years of Shannon's life were spent in Gaylord, where he attended public school, graduating from Gaylord High School in 1932. Shannon showed an inclination towards mechanical and electrical things. His best subjects were science and mathematics. At home, he constructed such devices as models of planes,
https://en.wikipedia.org/wiki/Continuum%20hypothesis
In mathematics, specifically set theory, the continuum hypothesis (abbreviated CH) is a hypothesis about the possible sizes of infinite sets. It states that or equivalently, that In Zermelo–Fraenkel set theory with the axiom of choice (ZFC), this is equivalent to the following equation in aleph numbers: , or even shorter with beth numbers: . The continuum hypothesis was advanced by Georg Cantor in 1878, and establishing its truth or falsehood is the first of Hilbert's 23 problems presented in 1900. The answer to this problem is independent of ZFC, so that either the continuum hypothesis or its negation can be added as an axiom to ZFC set theory, with the resulting theory being consistent if and only if ZFC is consistent. This independence was proved in 1963 by Paul Cohen, complementing earlier work by Kurt Gödel in 1940. The name of the hypothesis comes from the term the continuum for the real numbers. History Cantor believed the continuum hypothesis to be true and for many years tried in vain to prove it. It became the first on David Hilbert's list of important open questions that was presented at the International Congress of Mathematicians in the year 1900 in Paris. Axiomatic set theory was at that point not yet formulated. Kurt Gödel proved in 1940 that the negation of the continuum hypothesis, i.e., the existence of a set with intermediate cardinality, could not be proved in standard set theory. The second half of the independence of the continuum hypothesis – i.e., unprovability of the nonexistence of an intermediate-sized set – was proved in 1963 by Paul Cohen. Cardinality of infinite sets Two sets are said to have the same cardinality or cardinal number if there exists a bijection (a one-to-one correspondence) between them. Intuitively, for two sets S and T to have the same cardinality means that it is possible to "pair off" elements of S with elements of T in such a fashion that every element of S is paired off with exactly one element of T and vice
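In standard notation, the hypothesis referred to above can be written as follows:
\text{CH: there is no set } S \text{ with } \aleph_0 < |S| < 2^{\aleph_0}
\text{In ZFC this is equivalent to } 2^{\aleph_0} = \aleph_1, \text{ or, in beth numbers, } \beth_1 = \aleph_1.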
https://en.wikipedia.org/wiki/Cryptanalysis
Cryptanalysis (from the Greek kryptós, "hidden", and analýein, "to analyze") refers to the process of analyzing information systems in order to understand hidden aspects of the systems. Cryptanalysis is used to breach cryptographic security systems and gain access to the contents of encrypted messages, even if the cryptographic key is unknown. In addition to mathematical analysis of cryptographic algorithms, cryptanalysis includes the study of side-channel attacks that do not target weaknesses in the cryptographic algorithms themselves, but instead exploit weaknesses in their implementation. Even though the goal has been the same, the methods and techniques of cryptanalysis have changed drastically through the history of cryptography, adapting to increasing cryptographic complexity, ranging from the pen-and-paper methods of the past, through machines like the British Bombes and Colossus computers at Bletchley Park in World War II, to the mathematically advanced computerized schemes of the present. Methods for breaking modern cryptosystems often involve solving carefully constructed problems in pure mathematics, the best-known being integer factorization. Overview In encryption, confidential information (called the "plaintext") is sent securely to a recipient by the sender first converting it into an unreadable form ("ciphertext") using an encryption algorithm. The ciphertext is sent through an insecure channel to the recipient. The recipient decrypts the ciphertext by applying an inverse decryption algorithm, recovering the plaintext. To decrypt the ciphertext, the recipient requires a secret knowledge from the sender, usually a string of letters, numbers, or bits, called a cryptographic key. The concept is that even if an unauthorized person gets access to the ciphertext during transmission, without the secret key they cannot convert it back to plaintext. Encryption has been used throughout history to send important military, diplomatic and commercial mess
https://en.wikipedia.org/wiki/Compiler
In computing, a compiler is a computer program that translates computer code written in one programming language (the source language) into another language (the target language). The name "compiler" is primarily used for programs that translate source code from a high-level programming language to a low-level programming language (e.g. assembly language, object code, or machine code) to create an executable program. There are many different types of compilers which produce output in different useful forms. A cross-compiler produces code for a different CPU or operating system than the one on which the cross-compiler itself runs. A bootstrap compiler is often a temporary compiler, used for compiling a more permanent or better optimised compiler for a language. Related software includes decompilers, which translate from a low-level language to a higher-level one, and source-to-source compilers (also called transpilers), which translate between high-level languages. A language rewriter is usually a program that translates the form of expressions without a change of language. A compiler-compiler is a compiler that produces a compiler (or part of one), often in a generic and reusable way so as to be able to produce many differing compilers. A compiler is likely to perform some or all of the following operations, often called phases: preprocessing, lexical analysis, parsing, semantic analysis (syntax-directed translation), conversion of input programs to an intermediate representation, code optimization and machine-specific code generation. Compilers generally implement these phases as modular components, promoting efficient design and correctness of transformations of source input to target output. Program faults caused by incorrect compiler behavior can be very difficult to track down and work around; therefore, compiler implementers invest significant effort to ensure compiler correctness. Compilers are not the only language processor us
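To make the phases listed above concrete, here is a deliberately tiny Python sketch that lexes, parses, and generates code for a hypothetical stack machine, handling only expressions of the form number + number + ...; it illustrates the pipeline structure, not any particular compiler:
import re

def lex(source):
    # Lexical analysis: split the character stream into tokens.
    return re.findall(r"\d+|\+", source.replace(" ", ""))

def parse(tokens):
    # Parsing: build a small abstract syntax tree (left-associative '+').
    node = ("num", int(tokens[0]))
    for i in range(1, len(tokens), 2):
        assert tokens[i] == "+"
        node = ("add", node, ("num", int(tokens[i + 1])))
    return node

def codegen(node):
    # Code generation: emit instructions for a hypothetical stack machine.
    if node[0] == "num":
        return [("PUSH", node[1])]
    return codegen(node[1]) + codegen(node[2]) + [("ADD", None)]

print(codegen(parse(lex("1 + 2 + 40"))))
# [('PUSH', 1), ('PUSH', 2), ('ADD', None), ('PUSH', 40), ('ADD', None)]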
https://en.wikipedia.org/wiki/Key%20size
In cryptography, key size or key length refers to the number of bits in a key used by a cryptographic algorithm (such as a cipher). Key length defines the upper-bound on an algorithm's security (i.e. a logarithmic measure of the fastest known attack against an algorithm), because the security of all algorithms can be violated by brute-force attacks. Ideally, the lower-bound on an algorithm's security is by design equal to the key length (that is, the algorithm's design does not detract from the degree of security inherent in the key length). Most symmetric-key algorithms are designed to have security equal to their key length. However, after design, a new attack might be discovered. For instance, Triple DES was designed to have a 168-bit key, but an attack of complexity 2^112 is now known (i.e. Triple DES now only has 112 bits of security, and of the 168 bits in the key the attack has rendered 56 'ineffective' towards security). Nevertheless, as long as the security (understood as "the amount of effort it would take to gain access") is sufficient for a particular application, then it does not matter if key length and security coincide. This is important for asymmetric-key algorithms, because no such algorithm is known to satisfy this property; elliptic curve cryptography comes the closest with an effective security of roughly half its key length. Significance Keys are used to control the operation of a cipher so that only the correct key can convert encrypted text (ciphertext) to plaintext. All commonly-used ciphers are based on publicly known algorithms or are open source and so it is only the difficulty of obtaining the key that determines security of the system, provided that there is no analytic attack (i.e. a "structural weakness" in the algorithms or protocols used), and assuming that the key is not otherwise available (such as via theft, extortion, or compromise of computer systems). The widely accepted notion that the security of the system should depend o
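A short Python illustration of how key length bounds brute-force effort; the key sizes listed are common examples, including the 112-bit effective security mentioned above:
# Worst-case number of candidate keys a brute-force attack must try for a given key length.
for bits in (56, 112, 128, 256):
    print(f"{bits}-bit key: 2**{bits} = {2**bits} candidate keys")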
https://en.wikipedia.org/wiki/Civil%20engineering
Civil engineering is a professional engineering discipline that deals with the design, construction, and maintenance of the physical and naturally built environment, including public works such as roads, bridges, canals, dams, airports, sewage systems, pipelines, structural components of buildings, and railways. Civil engineering is traditionally broken into a number of sub-disciplines. It is considered the second-oldest engineering discipline after military engineering, and it is defined to distinguish non-military engineering from military engineering. Civil engineering can take place in the public sector from municipal public works departments through to federal government agencies, and in the private sector from locally based firms to global Fortune 500 companies. History Civil engineering as a discipline Civil engineering is the application of physical and scientific principles for solving the problems of society, and its history is intricately linked to advances in the understanding of physics and mathematics throughout history. Because civil engineering is a broad profession, including several specialized sub-disciplines, its history is linked to knowledge of structures, materials science, geography, geology, soils, hydrology, environmental science, mechanics, project management, and other fields. Throughout ancient and medieval history most architectural design and construction was carried out by artisans, such as stonemasons and carpenters, rising to the role of master builder. Knowledge was retained in guilds and seldom supplanted by advances. Structures, roads, and infrastructure that existed were repetitive, and increases in scale were incremental. One of the earliest examples of a scientific approach to physical and mathematical problems applicable to civil engineering is the work of Archimedes in the 3rd century BC, including Archimedes' principle, which underpins our understanding of buoyancy, and practical solutions such as Archimedes' screw. B
https://en.wikipedia.org/wiki/Computer%20program
A computer program is a sequence or set of instructions in a programming language for a computer to execute. It is one component of software, which also includes documentation and other intangible components. A computer program in its human-readable form is called source code. Source code needs another computer program to execute because computers can only execute their native machine instructions. Therefore, source code may be translated to machine instructions using the language's compiler. (Assembly language programs are translated using an assembler.) The resulting file is called an executable. Alternatively, source code may execute within the language's interpreter. If the executable is requested for execution, then the operating system loads it into memory and starts a process. The central processing unit will soon switch to this process so it can fetch, decode, and then execute each machine instruction. If the source code is requested for execution, then the operating system loads the corresponding interpreter into memory and starts a process. The interpreter then loads the source code into memory to translate and execute each statement. Running the source code is slower than running an executable. Moreover, the interpreter must be installed on the computer. Example computer program The "Hello, World!" program is used to illustrate a language's basic syntax. The syntax of the language BASIC (1964) was intentionally limited to make the language easy to learn. For example, variables are not declared before being used. Also, variables are automatically initialized to zero. Here is an example computer program, in Basic, to average a list of numbers:
10 INPUT "How many numbers to average?", A
20 FOR I = 1 TO A
30 INPUT "Enter number:", B
40 LET C = C + B
50 NEXT I
60 LET D = C/A
70 PRINT "The average is", D
80 END
Once the mechanics of basic computer programming are learned, more sophisticated and powerful languages are available to build large computer syst
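For comparison, here is a hedged sketch of the same averaging program written in Python; when run under an interpreter such as CPython, the source is loaded and executed statement by statement rather than first being translated into a separate executable.

# The same averaging program in Python, executed directly by an interpreter.
count = int(input("How many numbers to average? "))
total = 0.0
for _ in range(count):
    total += float(input("Enter number: "))
print("The average is", total / count)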
https://en.wikipedia.org/wiki/Complex%20number
In mathematics, a complex number is an element of a number system that extends the real numbers with a specific element denoted i, called the imaginary unit and satisfying the equation i^2 = -1; every complex number can be expressed in the form a + bi, where a and b are real numbers. Because no real number satisfies the above equation, i was called an imaginary number by René Descartes. For the complex number a + bi, a is called the real part, and b is called the imaginary part. The set of complex numbers is denoted by either of the symbols ℂ or C. Despite the historical nomenclature "imaginary", complex numbers are regarded in the mathematical sciences as just as "real" as the real numbers and are fundamental in many aspects of the scientific description of the natural world. Complex numbers allow solutions to all polynomial equations, even those that have no solutions in real numbers. More precisely, the fundamental theorem of algebra asserts that every non-constant polynomial equation with real or complex coefficients has a solution which is a complex number. For example, the equation (x + 1)^2 = -9 has no real solution, since the square of a real number cannot be negative, but has the two nonreal complex solutions -1 + 3i and -1 - 3i. Addition, subtraction and multiplication of complex numbers can be naturally defined by using the rule i^2 = -1 combined with the associative, commutative, and distributive laws. Every nonzero complex number has a multiplicative inverse. This makes the complex numbers a field that has the real numbers as a subfield. The complex numbers also form a real vector space of dimension two, with {1, i} as a standard basis. This standard basis makes the complex numbers a Cartesian plane, called the complex plane. This allows a geometric interpretation of the complex numbers and their operations, and conversely expressing in terms of complex numbers some geometric properties and constructions. For example, the real numbers form the real line which is identified to the horizontal axis of the complex plane. The complex numbers of a
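The arithmetic rules above can be checked with Python's built-in complex type, where the imaginary unit i is written 1j; the snippet below is only an illustrative sketch using the example equation given above.

# Checking the stated rules with Python's complex numbers (1j plays the role of i).
i = 1j
print(i**2)                      # (-1+0j): i squared is -1

z = 3 + 4j
w = 1 - 2j
print(z + w, z * w)              # addition and multiplication use the rule i^2 = -1
print(z * (1 / z))               # every nonzero complex number has a multiplicative inverse

for x in (-1 + 3j, -1 - 3j):     # the two nonreal solutions mentioned above
    print((x + 1)**2)            # (-9+0j) in both cases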
https://en.wikipedia.org/wiki/Cryptozoology
Cryptozoology is a pseudoscience and subculture that searches for and studies unknown, legendary, or extinct animals whose present existence is disputed or unsubstantiated, particularly those popular in folklore, such as Bigfoot, the Loch Ness Monster, Yeti, the chupacabra, the Jersey Devil, or the Mokele-mbembe. Cryptozoologists refer to these entities as cryptids, a term coined by the subculture. Because it does not follow the scientific method, cryptozoology is considered a pseudoscience by mainstream science: it is neither a branch of zoology nor of folklore studies. It was originally founded in the 1950s by zoologists Bernard Heuvelmans and Ivan T. Sanderson. Scholars have noted that the subculture rejected mainstream approaches from an early date, and that adherents often express hostility to mainstream science. Scholars have studied cryptozoologists and their influence (including cryptozoology's association with Young Earth creationism), noted parallels in cryptozoology and other pseudosciences such as ghost hunting and ufology, and highlighted uncritical media propagation of cryptozoologist claims. Terminology, history, and approach As a field, cryptozoology originates from the works of Bernard Heuvelmans, a Belgian zoologist, and Ivan T. Sanderson, a Scottish zoologist. Notably, Heuvelmans published On the Track of Unknown Animals (French Sur la Piste des Bêtes Ignorées) in 1955, a landmark work among cryptozoologists that was followed by numerous other like works. Similarly, Sanderson published a series of books that contributed to the developing hallmarks of cryptozoology, including Abominable Snowmen: Legend Come to Life (1961). Heuvelmans himself traced cryptozoology to the work of Anthonie Cornelis Oudemans, who theorized that a large unidentified species of seal was responsible for sea serpent reports. The term cryptozoology dates from 1959 or before—Heuvelmans attributes the coinage of the term cryptozoology 'the study of hidden animals' (from A
https://en.wikipedia.org/wiki/Category%20theory
Category theory is a general theory of mathematical structures and their relations that was introduced by Samuel Eilenberg and Saunders Mac Lane in the middle of the 20th century in their foundational work on algebraic topology. Category theory is used in almost all areas of mathematics. In particular, numerous constructions of new mathematical objects from previous ones that appear similarly in several contexts are conveniently expressed and unified in terms of categories. Examples include quotient spaces, direct products, completion, and duality. Many areas of computer science also rely on category theory, such as functional programming and semantics. A category is formed by two sorts of objects: the objects of the category, and the morphisms, which relate two objects called the source and the target of the morphism. One often says that a morphism is an arrow that maps its source to its target. Morphisms can be composed if the target of the first morphism equals the source of the second one, and morphism composition has similar properties as function composition (associativity and existence of identity morphisms). Morphisms are often some sort of function, but this is not always the case. For example, a monoid may be viewed as a category with a single object, whose morphisms are the elements of the monoid. The second fundamental concept of category theory is the concept of a functor, which plays the role of a morphism between two categories C1 and C2: it maps objects of C1 to objects of C2 and morphisms of C1 to morphisms of C2 in such a way that sources are mapped to sources and targets are mapped to targets (or, in the case of a contravariant functor, sources are mapped to targets and vice-versa). A third fundamental concept is a natural transformation that may be viewed as a morphism of functors. Categories, objects, and morphisms Categories A category C consists of the following three mathematical entities: A class ob(C), whose elements are called objects; A class
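As an informal aside on the functional-programming connection mentioned above, the Python sketch below treats "list of X" as the object part of a functor and fmap as its morphism part, and checks the two functor laws (identities and composition are preserved) on one example; the helper names are invented for this illustration.

# Informal illustration of the functor laws using Python lists:
# the object part sends a type X to list-of-X, and the morphism part
# sends a function f: X -> Y to fmap(f): list[X] -> list[Y].
def fmap(f):
    return lambda xs: [f(x) for x in xs]

def compose(g, f):
    return lambda x: g(f(x))

identity = lambda x: x
f = lambda n: n + 1          # a morphism int -> int
g = lambda n: n * 2          # another morphism int -> int
xs = [1, 2, 3]

# Functor laws, checked on one example:
assert fmap(identity)(xs) == identity(xs)                        # identities preserved
assert fmap(compose(g, f))(xs) == compose(fmap(g), fmap(f))(xs)  # composition preserved
print(fmap(compose(g, f))(xs))   # [4, 6, 8]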
https://en.wikipedia.org/wiki/Circumference
In geometry, the circumference (from Latin circumferens, meaning "carrying around") is the perimeter of a circle or ellipse. The circumference is the arc length of the circle, as if it were opened up and straightened out to a line segment. More generally, the perimeter is the curve length around any closed figure. Circumference may also refer to the circle itself, that is, the locus corresponding to the edge of a disk. The circumference of a sphere is the circumference, or length, of any one of its great circles. Circle The circumference of a circle is the distance around it, but if, as in many elementary treatments, distance is defined in terms of straight lines, this cannot be used as a definition. Under these circumstances, the circumference of a circle may be defined as the limit of the perimeters of inscribed regular polygons as the number of sides increases without bound. The term circumference is used when measuring physical objects, as well as when considering abstract geometric forms. Relationship with π The circumference of a circle is related to one of the most important mathematical constants. This constant, pi, is represented by the Greek letter π. The first few decimal digits of the numerical value of π are 3.141592653589793 ... Pi is defined as the ratio of a circle's circumference C to its diameter d: π = C/d. Or, equivalently, as the ratio of the circumference to twice the radius: π = C/(2r). The above formula can be rearranged to solve for the circumference: C = πd = 2πr. The ratio of the circle's circumference to its radius is called the circle constant, and is equivalent to 2π. The value 2π is also the amount of radians in one turn. The use of the mathematical constant π is ubiquitous in mathematics, engineering, and science. In Measurement of a Circle written circa 250 BCE, Archimedes showed that this ratio (C/d, since he did not use the name π) was greater than 3 10/71 but less than 3 1/7 by calculating the perimeters of an inscribed and a circumscribed regular polygon of 96 sides. This method for approximating π was used
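The polygon method attributed to Archimedes above can be sketched in a few lines of Python: inscribed and circumscribed regular n-gons around a unit-radius circle give lower and upper bounds on π. Trigonometric functions are used here purely for brevity; Archimedes worked from geometric recurrences, not from trigonometry.

# Bounding pi with inscribed and circumscribed regular polygons around a circle
# of radius 1 (the circle's half-perimeter equals pi).
import math

def pi_bounds(sides):
    half_angle = math.pi / sides
    inscribed = sides * math.sin(half_angle)        # half the perimeter of the inscribed n-gon
    circumscribed = sides * math.tan(half_angle)    # half the perimeter of the circumscribed n-gon
    return inscribed, circumscribed

for n in (6, 12, 24, 48, 96):
    low, high = pi_bounds(n)
    print(f"{n:3d} sides: {low:.6f} < pi < {high:.6f}")
# At 96 sides the interval lies inside Archimedes' bounds 3 10/71 < pi < 3 1/7.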
https://en.wikipedia.org/wiki/Color
Color (American English) or colour (Commonwealth English) is the visual perception based on the electromagnetic spectrum. Though color is not an inherent property of matter, color perception is related to an object's light absorption, reflection, emission spectra and interference. For most humans, colors are perceived in the visible light spectrum with three types of cone cells (trichromacy). Other animals may have a different number of cone cell types or have eyes sensitive to different wavelengths, such as bees, which can distinguish ultraviolet, and thus have a different color sensitivity range. Animal perception of color originates from different light wavelengths or spectral sensitivities in cone cell types, and is then processed by the brain. Colors have perceived properties such as hue, colorfulness (saturation) and luminance. Colors can also be additively mixed (commonly used for actual light) or subtractively mixed (commonly used for materials). If the colors are mixed in the right proportions, because of metamerism, they may look the same as a single-wavelength light. For convenience, colors can be organized in a color space, which, when abstracted as a mathematical color model, can assign each region of color a corresponding set of numbers. As such, color spaces are an essential tool for color reproduction in print, photography, computer monitors and television. The most well-known color models are RGB, CMYK, YUV, HSL and HSV. Because the perception of color is an important aspect of human life, different colors have been associated with emotions, activity, and nationality. Names of color regions in different cultures can have different, sometimes overlapping areas. In visual arts, color theory is used to govern the use of colors in an aesthetically pleasing and harmonious way. The theory of color includes the color complements; color balance; and classification of primary colors (traditionally red, yellow, blue), secondary colors (traditionally or
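Purely as an illustration of color models assigning sets of numbers to colors, the snippet below uses Python's standard colorsys module to express one color in both the RGB and HSV models mentioned above (components scaled to the 0-1 range).

# The same color described by coordinates in two different color models.
import colorsys

r, g, b = 1.0, 0.5, 0.0                      # an orange, in the RGB model
h, s, v = colorsys.rgb_to_hsv(r, g, b)       # the same color in the HSV model
print(f"RGB ({r}, {g}, {b}) -> HSV (hue={h:.3f}, sat={s:.3f}, val={v:.3f})")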
https://en.wikipedia.org/wiki/Computation
A computation is any type of arithmetic or non-arithmetic calculation that is well-defined. Common examples of computations are mathematical equations and computer algorithms. Mechanical or electronic devices (or, historically, people) that perform computations are known as computers. The study of computation is the field of computability, itself a sub-field of computer science. Introduction The notion that mathematical statements should be 'well-defined' had been argued by mathematicians since at least the 1600s, but agreement on a suitable definition proved elusive. A candidate definition was proposed independently by several mathematicians in the 1930s. The best-known variant was formalised by the mathematician Alan Turing, who defined a well-defined statement or calculation as any statement that could be expressed in terms of the initialisation parameters of a Turing Machine. Other (mathematically equivalent) definitions include Alonzo Church's lambda-definability, Herbrand-Gödel-Kleene's general recursiveness and Emil Post's 1-definability. Today, any formal statement or calculation that exhibits this quality of well-definedness is termed computable, while the statement or calculation itself is referred to as a computation. Turing's definition apportioned "well-definedness" to a very large class of mathematical statements, including all well-formed algebraic statements, and all statements written in modern computer programming languages. Despite the widespread uptake of this definition, there are some mathematical concepts that have no well-defined characterisation under this definition. This includes the halting problem and the busy beaver game. It remains an open question as to whether there exists a more powerful definition of 'well-defined' that is able to capture both computable and 'non-computable' statements. Some examples of mathematical statements that are computable include: All statements characterised in modern programming languages, includ
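As a hedged illustration of the Turing-machine formalism referenced above, the Python sketch below simulates a machine whose "initialisation parameters" are a transition table, a start state, and a halting state; the example machine simply flips the bits of its input and halts, and all names are invented for this example.

# A minimal Turing machine simulator.  The machine is fully described by its
# transition table (state, symbol) -> (new state, symbol to write, move).
def run(transitions, tape, state="flip", halt="halt", blank="_"):
    tape = dict(enumerate(tape))     # sparse tape: position -> symbol
    head = 0
    while state != halt:
        symbol = tape.get(head, blank)
        state, write, move = transitions[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape))

FLIP = {
    ("flip", "0"): ("flip", "1", "R"),
    ("flip", "1"): ("flip", "0", "R"),
    ("flip", "_"): ("halt", "_", "R"),
}
print(run(FLIP, "0110"))   # prints 1001_ (the flipped input plus one blank cell)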
https://en.wikipedia.org/wiki/Smart%20host
A smart host or smarthost is an email server via which third parties can send emails and have them forwarded on to the email recipients' email servers. Smarthosts were originally open mail relays, but most providers now require authentication from the sender, to verify that the sender is authorised – for example, an ISP might run a smarthost for their paying customers only. Use in spam control efforts In an effort to reduce email spam originating from their customers' IP addresses, some internet service providers (ISPs) will not allow their customers to communicate directly with recipient mailservers via the default SMTP port number 25. Instead, they will often set up a smarthost to which their customers can direct all their outward mail – or customers could alternatively use one of the commercial smarthost services. Sometimes, even if outward port 25 is not blocked, an individual's or organisation's normal external IP address may have difficulty getting SMTP mail accepted. This could be because that IP was assigned in the past to someone who sent spam from it, or because it appears to be a dynamic address such as those typically used for home connections. Whatever the reason for the "poor reputation" or "blacklisting", they can choose to redirect all their email out to an external smarthost for delivery. Reducing complexity When a host runs its own local mail server, a smart host is often used to transmit all mail to other systems through a central mail server. This is used to ease the management of a single mail server with aliases, security, and Internet access rather than maintaining numerous local mail servers. See also Mail submission agent References Email Internet terminology
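The client side of this arrangement can be sketched with Python's standard smtplib: instead of delivering directly to the recipient's server on port 25, the message is handed to a smarthost on the submission port with authentication. The hostname, credentials, and addresses below are placeholders, not real services.

# Submitting a message to a smarthost over the mail submission port (587) with
# authentication, instead of delivering directly on port 25.
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "alice@example.org"
msg["To"] = "bob@example.net"
msg["Subject"] = "Test via smarthost"
msg.set_content("Hello from behind a blocked port 25.")

with smtplib.SMTP("smarthost.example.org", 587) as smtp:
    smtp.starttls()                                  # encrypt the session
    smtp.login("alice@example.org", "app-password")  # smarthosts usually require auth
    smtp.send_message(msg)                           # the smarthost relays onward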
https://en.wikipedia.org/wiki/Minidish
The Minidish is the tradename used for the small-sized satellite dish used by Freesat and Sky. The term has entered the vocabulary in the UK and Ireland as a generic term for a satellite dish, particularly a small one. The Minidish is an oval, mesh satellite dish capable of reflecting signals broadcast in the upper X band and . Two sizes exist: "Zone 1" dishes are issued in southern and Northern England and parts of Scotland and were 43 cm vertically prior to 2009; newer mark 4 dishes are approximately 50 cm. "Zone 2" dishes are issued elsewhere (Wales, Northern Ireland, Republic of Ireland, Scotland and northern England) and are 57 cm vertically. The Minidish uses a non-standard connector for the LNB, consisting of a peg about in width and in height prior to the mark 4 dishes introduced in 2009, as opposed to the 40 mm collar. This enforces the use of Sky-approved equipment, but also ensures that a suitable LNB is used. Due to the shape of the dish, an LNB with an oval feedhorn is required to get full signal. References Satellite television Radio electronics Sky Group Brands that became generic
https://en.wikipedia.org/wiki/Indexing%20Service
Indexing Service (originally called Index Server) was a Windows service that maintained an index of most of the files on a computer to improve searching performance on PCs and corporate computer networks. It updated indexes without user intervention. In Windows Vista it was replaced by the newer Windows Search Indexer. The IFilter plugins that extend the indexing capabilities to more file formats and protocols are compatible with both the legacy Indexing Service and the newer Windows Search Indexer. History Indexing Service was a desktop search service included with Windows NT 4.0 Option Pack as well as Windows 2000 and later. The first incarnation of the indexing service was shipped in August 1996 as a content search system for Microsoft's web server software, Internet Information Services. Its origins, however, date further back to Microsoft's Cairo operating system project, with the component serving as the Content Indexer for the Object File System. Cairo was eventually shelved, but the content indexing capabilities would go on to be included as a standard component of later Windows desktop and server operating systems, starting with Windows 2000, which includes Indexing Service 3.0. In Windows Vista, the content indexer was replaced with the Windows Search indexer which was enabled by default. Indexing Service is still included with Windows Server 2008 but is not installed or running by default. Indexing Service has been deprecated in Windows 7 and Windows Server 2008 R2. It has been removed from Windows 8. Search interfaces Comprehensive searching is available after initial building of the index, which can take hours or days, depending on the size of the specified directories, the speed of the hard drive, user activity, indexer settings and other factors. Searching using Indexing Service also works on UNC paths and/or mapped network drives if the sharing server indexes the appropriate directory and is aware of the share. Once the indexing service ha
https://en.wikipedia.org/wiki/Polite%20number
In number theory, a polite number is a positive integer that can be written as the sum of two or more consecutive positive integers. A positive integer which is not polite is called impolite. The impolite numbers are exactly the powers of two, and the polite numbers are the natural numbers that are not powers of two. Polite numbers have also been called staircase numbers because the Young diagrams which represent graphically the partitions of a polite number into consecutive integers (in the French notation of drawing these diagrams) resemble staircases. If all numbers in the sum are strictly greater than one, the numbers so formed are also called trapezoidal numbers because they represent patterns of points arranged in a trapezoid. The problem of representing numbers as sums of consecutive integers and of counting the number of representations of this type has been studied by Sylvester, Mason, Leveque, and many other more recent authors. The polite numbers describe the possible numbers of sides of the Reinhardt polygons. Examples and characterization The first few polite numbers are 3, 5, 6, 7, 9, 10, 11, 12, 13, 14, 15, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, ... . The impolite numbers are exactly the powers of two. It follows from the Lambek–Moser theorem that the nth polite number is f(n + 1), where Politeness The politeness of a positive number is defined as the number of ways it can be expressed as the sum of consecutive integers. For every x, the politeness of x equals the number of odd divisors of x that are greater than one. The politeness of the numbers 1, 2, 3, ... is 0, 0, 1, 0, 1, 1, 1, 0, 2, 1, 1, 1, 1, 1, 3, 0, 1, 2, 1, 1, 3, ... . For instance, the politeness of 9 is 2 because it has two odd divisors, 3 and 9, and two polite representations 9 = 2 + 3 + 4 = 4 + 5; the politeness of 15 is 3 because it has three odd divisors, 3, 5, and 15, and (as is famili
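The characterization above can be checked with a short Python sketch that counts the odd divisors greater than one and also enumerates the runs of consecutive integers summing to n; the 9, 15, and power-of-two cases reproduce the examples in the text.

# Politeness as the number of odd divisors greater than 1, together with the
# actual runs of consecutive positive integers that sum to n.
def politeness(n):
    return sum(1 for d in range(3, n + 1, 2) if n % d == 0)

def polite_representations(n):
    # n = k*a + k*(k-1)/2 for k >= 2 consecutive terms starting at a >= 1.
    runs = []
    for k in range(2, n + 1):
        numerator = n - k * (k - 1) // 2
        if numerator < k:            # the start a would drop below 1
            break
        if numerator % k == 0:
            a = numerator // k
            runs.append(list(range(a, a + k)))
    return runs

for n in (9, 15, 16):
    print(n, politeness(n), polite_representations(n))
# 9 2 [[4, 5], [2, 3, 4]]
# 15 3 [[7, 8], [4, 5, 6], [1, 2, 3, 4, 5]]
# 16 0 []   (a power of two is impolite)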
https://en.wikipedia.org/wiki/R10000
The R10000, code-named "T5", is a RISC microprocessor implementation of the MIPS IV instruction set architecture (ISA) developed by MIPS Technologies, Inc. (MTI), then a division of Silicon Graphics, Inc. (SGI). The chief designers are Chris Rowen and Kenneth C. Yeager. The R10000 microarchitecture is known as ANDES, an abbreviation for Architecture with Non-sequential Dynamic Execution Scheduling. The R10000 largely replaces the R8000 in the high-end and the R4400 elsewhere. MTI was a fabless semiconductor company; the R10000 was fabricated by NEC and Toshiba. Previous fabricators of MIPS microprocessors such as Integrated Device Technology (IDT) and three others did not fabricate the R10000 as it was more expensive to do so than the R4000 and R4400. History The R10000 was introduced in January 1996 at clock frequencies of 175 MHz and 195 MHz. A 150 MHz version was introduced in the O2 product line in 1997, but discontinued shortly after due to customer preference for the 175 MHz version. The R10000 was not available in large volumes until later in the year due to fabrication problems at MIPS's foundries. The 195 MHz version was in short supply throughout 1996, and was priced at US$3,000 as a result. On 25 September 1996, SGI announced that R10000s fabricated by NEC between March and the end of July that year were faulty, drawing too much current and causing systems to shut down during operation. SGI recalled 10,000 R10000s that had shipped in systems as a result, which impacted the company's earnings. In 1997, a version of R10000 fabricated in a 0.25 µm process enabled the microprocessor to reach 250 MHz. Users Users of the R10000 include: SGI: Workstations: Indigo2 (IMPACT Generation), Octane, O2 Servers: Challenge, Origin 2000 Supercomputers: Onyx, Onyx2 NEC, in its Cenju-4 supercomputer Siemens Nixdorf, in its servers run under SINIX Tandem Computers, in its Himalaya fault-tolerant servers Description The R10000 is a four-way superscalar design
https://en.wikipedia.org/wiki/R4000
The R4000 is a microprocessor developed by MIPS Computer Systems that implements the MIPS III instruction set architecture (ISA). Officially announced on 1 October 1991, it was one of the first 64-bit microprocessors and the first MIPS III implementation. In the early 1990s, when RISC microprocessors were expected to replace CISC microprocessors such as the Intel i486, the R4000 was selected to be the microprocessor of the Advanced Computing Environment (ACE), an industry standard intended to define a common RISC platform. ACE ultimately failed for a number of reasons, but the R4000 found success in the workstation and server markets. Models There are three configurations of the R4000: the R4000PC, an entry-level model with no support for a secondary cache; the R4000SC, a model with secondary cache but no multiprocessor capability; and the R4000MC, a model with secondary cache and support for the cache coherency protocols required by multiprocessor systems. Description The R4000 is a scalar superpipelined microprocessor with an eight-stage integer pipeline. During the first stage (IF), a virtual address for an instruction is generated and the instruction translation lookaside buffer (TLB) begins the translation of the address to a physical address. In the second stage (IS), translation is completed and the instruction is fetched from an internal 8 KB instruction cache. The instruction cache is direct-mapped and virtually indexed, physically tagged. It has a 16- or 32-byte line size. Architecturally, it could be expanded to 32 KB. During the third stage (RF), the instruction is decoded and the register file is read. The MIPS III defines two register files, one for the integer unit and the other for floating-point. Each register file is 64 bits wide and contains 32 entries. The integer register file has two read ports and one write port, while the floating-point register file has two read ports and two write ports. Execution begins at stage four (EX) for bo
https://en.wikipedia.org/wiki/R5000
The R5000 is a 64-bit, bi-endian, two-issue, in-order superscalar microprocessor that implements the MIPS IV instruction set architecture (ISA). It was developed by Quantum Effect Design (QED) in 1996. The project was funded by MIPS Technologies, Inc. (MTI), also the licensor. MTI then licensed the design to Integrated Device Technology (IDT), NEC, NKK, and Toshiba. The R5000 succeeded the QED R4600 and R4700 as their flagship high-end embedded microprocessor. IDT marketed its version of the R5000 as the 79RV5000, NEC as the VR5000, NKK as the NR5000, and Toshiba as the TX5000. The R5000 was sold to PMC-Sierra when the company acquired QED. Derivatives of the R5000 are still in production today for embedded systems. Users Users of the R5000 in workstation and server computers were Silicon Graphics, Inc. (SGI) and Siemens-Nixdorf. SGI used the R5000 in their O2 and Indy low-end workstations. The R5000 was also used in embedded systems such as network routers and high-end printers. The R5000 also found its way into the arcade gaming industry: R5000-powered mainboards were used by Atari and Midway. Initially the Cobalt Qube and Cobalt RaQ used a derivative model, the RM5230 and RM5231. The Qube 2700 used the RM5230 microprocessor, whereas the Qube 2 used the RM5231. The original RaQ systems were equipped with RM5230 or RM5231 CPUs, but later models used AMD K6-2 chips and eventually Intel Pentium III CPUs for the final models. History The original roadmap called for 200 MHz operation in early 1996 and 250 MHz in late 1996, to be succeeded in 1997 by the R5000A. The R5000 was introduced in January 1996 and failed to achieve 200 MHz, topping out at 180 MHz. When positioned as a low-end workstation microprocessor, the competition included the IBM and Motorola PowerPC 604, the HP PA-7300LC and the Intel Pentium Pro. Description The R5000 is a two-way superscalar design that executes instructions in-order. The R5000 could simultaneously issue an integer and a floating-point instru
https://en.wikipedia.org/wiki/R8000
The R8000 is a microprocessor chipset developed by MIPS Technologies, Inc. (MTI), Toshiba, and Weitek. It was the first implementation of the MIPS IV instruction set architecture. The R8000 is also known as the TFP, for Tremendous Floating-Point, its name during development. History Development of the R8000 started in the early 1990s at Silicon Graphics, Inc. (SGI). The R8000 was specifically designed to provide the performance of circa 1990s supercomputers with a microprocessor instead of a central processing unit (CPU) built from many discrete components such as gate arrays. At the time, the performance of traditional supercomputers was not advancing as rapidly as reduced instruction set computer (RISC) microprocessors. It was predicted that RISC microprocessors would eventually match the performance of more expensive and larger supercomputers at a fraction of the cost and size, making computers with this level of performance more accessible and enabling deskside workstations and servers to replace supercomputers in many situations. First details of the R8000 emerged in April 1992 in an announcement by MIPS Computer Systems detailing future MIPS microprocessors. In March 1992, SGI announced it was acquiring MIPS Computer Systems, which became a subsidiary of SGI called MIPS Technologies, Inc. (MTI) in mid-1992. Development of the R8000 was transferred to MTI, where it continued. The R8000 was expected to be introduced in 1993, but it was delayed until mid-1994. The first R8000, a 75 MHz part, was introduced on 7 June 1994. It was priced at US$2,500 at the time. In mid-1995, a 90 MHz part appeared in systems from SGI. The R8000's high cost and narrow market (technical and scientific computing) restricted its market share, and although it was popular in its intended market, it was largely replaced with the cheaper and generally better performing R10000 introduced January 1996. Users of the R8000 were SGI, who used it in their Power Indigo2 workstation, Power Chal