https://en.wikipedia.org/wiki/Axiom%20of%20regularity
In mathematics, the axiom of regularity (also known as the axiom of foundation) is an axiom of Zermelo–Fraenkel set theory that states that every non-empty set A contains an element that is disjoint from A. In first-order logic, the axiom reads: ∀x (x ≠ ∅ → ∃y (y ∈ x ∧ y ∩ x = ∅)). The axiom of regularity together with the axiom of pairing implies that no set is an element of itself, and that there is no infinite sequence (a_n) such that a_{i+1} is an element of a_i for all i. With the axiom of dependent choice (which is a weakened form of the axiom of choice), this result can be reversed: if there are no such infinite sequences, then the axiom of regularity is true. Hence, in this context the axiom of regularity is equivalent to the sentence that there are no downward infinite membership chains. The axiom was introduced by von Neumann (1925); it was adopted in a formulation closer to the one found in contemporary textbooks by Zermelo (1930). Virtually all results in the branches of mathematics based on set theory hold even in the absence of regularity; see chapter 3 of Kunen (1980). However, regularity makes some properties of ordinals easier to prove; and it not only allows induction to be done on well-ordered sets but also on proper classes that are well-founded relational structures, such as the lexicographical ordering on pairs consisting of a natural number and an ordinal. Given the other axioms of Zermelo–Fraenkel set theory, the axiom of regularity is equivalent to the axiom of induction. The axiom of induction tends to be used in place of the axiom of regularity in intuitionistic theories (ones that do not accept the law of the excluded middle), where the two axioms are not equivalent. In addition to omitting the axiom of regularity, non-standard set theories have indeed postulated the existence of sets that are elements of themselves. Elementary implications of regularity No set is an element of itself Let A be a set, and apply the axiom of regularity to {A}, which is a set by the axiom of pairing. We see that there must be an element of {A} which is disjoint from {A}. Since the
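The first-order form of the axiom and the rest of the argument begun above can be written out as follows; this is the standard textbook derivation, restated here for completeness rather than quoted from the excerpt.

    % Axiom of regularity in first-order form
    \forall x \, \bigl( x \neq \varnothing \rightarrow \exists y \, ( y \in x \wedge y \cap x = \varnothing ) \bigr)
    % Applied to the pair set {A}: the only element of {A} is A itself, so regularity
    % forces A \cap \{A\} = \varnothing. If A \in A held, then A would belong to
    % A \cap \{A\}, contradicting the previous line; hence A \notin A.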
https://en.wikipedia.org/wiki/IBM%20AIX
AIX (Advanced Interactive eXecutive) is a series of proprietary Unix operating systems developed and sold by IBM for several of its computer platforms. Background Originally released for the IBM RT PC RISC workstation in 1986, AIX has supported a wide variety of hardware platforms, including the IBM RS/6000 series and later Power and PowerPC-based systems, IBM System i, System/370 mainframes, PS/2 personal computers, and the Apple Network Server. It is currently supported on IBM Power Systems alongside IBM i and Linux. AIX is based on UNIX System V with 4.3BSD-compatible extensions. It is certified to the UNIX 03 and UNIX V7 marks of the Single UNIX Specification, beginning with AIX versions 5.3 and 7.2 TL5 respectively. Older versions were previously certified to the UNIX 95 and UNIX 98 marks. AIX was the first operating system to have a journaling file system, and IBM has continuously enhanced the software with features such as processor, disk and network virtualization, dynamic hardware resource allocation (including fractional processor units), and reliability engineering ported from its mainframe designs. History Unix started life at AT&T's Bell Labs research center in the early 1970s, running on DEC minicomputers. By 1976, the operating system was in use at various academic institutions, including Princeton, where Tom Lyon and others ported it to the S/370, to run as a guest OS under VM/370. This port would later grow into UTS, a mainframe Unix offering by IBM's competitor Amdahl Corporation. IBM's own involvement in Unix can be dated to 1979, when it assisted Bell Labs in doing its own Unix port to the 370 (to be used as a build host for the 5ESS switch's software). In the process, IBM made modifications to the TSS/370 hypervisor to better support Unix. It took until 1985 for IBM to offer its own Unix on the S/370 platform, IX/370, which was developed by Interactive Systems Corporation and intended by IBM to compete with Amdahl
https://en.wikipedia.org/wiki/AppleTalk
AppleTalk is a discontinued proprietary suite of networking protocols developed by Apple Computer for their Macintosh computers. AppleTalk includes a number of features that allow local area networks to be connected with no prior setup or the need for a centralized router or server of any sort. Connected AppleTalk-equipped systems automatically assign addresses, update the distributed namespace, and configure any required inter-networking routing. AppleTalk was released in 1985 and was the primary protocol used by Apple devices through the 1980s and 1990s. Versions were also released for the IBM PC and compatibles and the Apple IIGS. AppleTalk support was also available in most networked printers (especially laser printers), some file servers, and a number of routers. The rise of TCP/IP during the 1990s led to a reimplementation of most of these types of support on that protocol, and AppleTalk became unsupported as of the release of Mac OS X v10.6 in 2009. Many of AppleTalk's more advanced autoconfiguration features have since been introduced in Bonjour, while Universal Plug and Play serves similar needs. History AppleNet After the release of the Apple Lisa computer in January 1983, Apple invested considerable effort in the development of a local area networking (LAN) system for the machines. Known as AppleNet, it was based on the seminal Xerox XNS protocol stack but running on a custom 1 Mbit/s coaxial cable system rather than Xerox's 2.94 Mbit/s Ethernet. AppleNet was announced early in 1983 with a full introduction at the target price of $500 for plug-in AppleNet cards for the Lisa and the Apple II. At that time, early LAN systems were just coming to market, including Ethernet, Token Ring, Econet, and ARCNET. This was a topic of major commercial effort at the time, dominating shows like the National Computer Conference (NCC) in Anaheim in May 1983. All of the systems were jockeying for position in the market, but even at this time, Ethernet's widespread acce
https://en.wikipedia.org/wiki/Apple%20III
The Apple III (styled as apple ///) is a business-oriented personal computer produced by Apple Computer and released in 1980. Running the Apple SOS operating system, it was intended as the successor to the Apple II series, but was largely considered a failure in the market. It was designed to provide key features business users wanted in a personal computer: a true typewriter-style upper/lowercase keyboard (the Apple II only supported uppercase) and an 80-column display. Work on the Apple III started in late 1978 under the guidance of Dr. Wendell Sander. It had the internal code name of "Sara", named after Sander's daughter. The system was announced on May 19, 1980 and released in late November that year. Serious stability issues required a design overhaul and a recall of the first 14,000 machines produced. The Apple III was formally reintroduced on November 9, 1981. Damage to the computer's reputation had already been done, however, and it failed to do well commercially. Development stopped, and the Apple III was discontinued on April 24, 1984. Its last successor, the III Plus, was dropped from the Apple product line in September 1985. An estimated 65,000–75,000 Apple III computers were sold. The Apple III Plus brought this up to approximately 120,000. Apple co-founder Steve Wozniak stated that the primary reason for the Apple III's failure was that the system was designed by Apple's marketing department, unlike Apple's previous engineering-driven projects. The Apple III's failure led Apple to reevaluate its plan to phase out the Apple II, prompting the eventual continuation of development of the older machine. As a result, later Apple II models incorporated some hardware and software technologies of the Apple III. Overview Design Steve Wozniak and Steve Jobs expected hobbyists to purchase the Apple II, but because of VisiCalc and Disk II, small businesses purchased 90% of the computers. The Apple III was designed to be a business computer and successor. Thoug
https://en.wikipedia.org/wiki/Atari%20ST
The Atari ST is a line of personal computers from Atari Corporation and the successor to the Atari 8-bit family. The initial model, the Atari 520ST, had limited release in April–June 1985 and was widely available in July. It was the first personal computer with a bitmapped color GUI, using a version of Digital Research's GEM from February 1985. The Atari 1040ST, released in 1986 with 1 MB of RAM, was the first home computer with a cost-per-kilobyte of less than US$1. After Jack Tramiel purchased the assets of the Atari, Inc. consumer division to create Atari Corporation, the 520ST was designed in five months by a small team led by Shiraz Shivji. Alongside the Macintosh, Amiga, Apple IIGS, and Acorn Archimedes, the ST is part of a mid-1980s generation of computers with 16- or 32-bit processors, 256 KB or more of RAM, and mouse-controlled graphical user interfaces. "ST" officially stands for "Sixteen/Thirty-two", referring to the Motorola 68000's 16-bit external bus and 32-bit internals. The ST was sold with either Atari's color monitor or less expensive monochrome monitor. Color graphics modes are available only on the former while the highest-resolution mode requires the monochrome monitor. Some models can display the color modes on a TV. In Germany and some other markets, the ST gained a foothold for CAD and desktop publishing. With built-in MIDI ports, it was popular for music sequencing and as a controller of musical instruments among amateur and professional musicians. The primary competitor of the Atari ST was the Amiga from Commodore. The 520ST and 1040ST were followed by the Mega series, the STE, and the portable STacy. In the early 1990s, Atari released three final evolutions of the ST with significant technical differences from the original models: TT030 (1990), Mega STE (1991), and Falcon (1992). Atari discontinued the entire ST computer line in 1993, shifting the company's focus to the Jaguar video game console. Development The Atari ST was born fr
https://en.wikipedia.org/wiki/Analytic%20geometry
In mathematics, analytic geometry, also known as coordinate geometry or Cartesian geometry, is the study of geometry using a coordinate system. This contrasts with synthetic geometry. Analytic geometry is used in physics and engineering, and also in aviation, rocketry, space science, and spaceflight. It is the foundation of most modern fields of geometry, including algebraic, differential, discrete and computational geometry. Usually the Cartesian coordinate system is applied to manipulate equations for planes, straight lines, and circles, often in two and sometimes three dimensions. Geometrically, one studies the Euclidean plane (two dimensions) and Euclidean space. As taught in school books, analytic geometry can be explained more simply: it is concerned with defining and representing geometric shapes in a numerical way and extracting numerical information from shapes' numerical definitions and representations. That the algebra of the real numbers can be employed to yield results about the linear continuum of geometry relies on the Cantor–Dedekind axiom. History Ancient Greece The Greek mathematician Menaechmus solved problems and proved theorems by using a method that had a strong resemblance to the use of coordinates and it has sometimes been maintained that he had introduced analytic geometry. Apollonius of Perga, in On Determinate Section, dealt with problems in a manner that may be called an analytic geometry of one dimension; with the question of finding points on a line that were in a ratio to the others. Apollonius in the Conics further developed a method that is so similar to analytic geometry that his work is sometimes thought to have anticipated the work of Descartes by some 1800 years. His application of reference lines, a diameter and a tangent is essentially no different from our modern use of a coordinate frame, where the distances measured along the diameter from the point of tangency are the abscissas, and the segments parallel to the tangent
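As a small illustration of the "numerical representation of shapes" described above, the standard Cartesian equations of a line and a circle are sketched below; the symbols are generic and not taken from the excerpt.

    % A straight line and a circle in Cartesian coordinates
    y = m x + c                              % line with slope m and y-intercept c
    (x - x_0)^2 + (y - y_0)^2 = r^2          % circle of radius r centred at (x_0, y_0)
    % Substituting the first equation into the second gives a quadratic in x, so the
    % intersection points of the line and the circle can be found purely algebraically.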
https://en.wikipedia.org/wiki/Ad%20hominem
Ad hominem, short for argumentum ad hominem, is a term that refers to several types of arguments, most of which are fallacious. Typically this term refers to a rhetorical strategy where the speaker attacks the character, motive, or some other attribute of the person making an argument rather than attacking the substance of the argument itself. This avoids genuine debate by creating a diversion to some irrelevant but often highly charged issue. The most common form of this fallacy is "A makes a claim x, B asserts that A holds a property that is unwelcome, and hence B concludes that argument x is wrong". The valid types of ad hominem arguments are generally only encountered in specialized philosophical usage. These typically refer to the dialectical strategy of using the target's own beliefs and arguments against them, while not agreeing with the validity of those beliefs and arguments. Ad hominem arguments were first studied in ancient Greece; John Locke revived the examination of ad hominem arguments in the 17th century. Many contemporary politicians routinely use ad hominem attacks, which can be encapsulated in a derogatory nickname for a political opponent. History The various types of ad hominem arguments have been known in the West since at least the ancient Greeks. Aristotle, in his work Sophistical Refutations, detailed the fallaciousness of putting the questioner but not the argument under scrutiny. His description was somewhat different from the modern understanding, referring to a class of sophistry that applies an ambiguously worded question about people to a specific person. The proper refutation, he wrote, is not to debate the attributes of the person (solutio ad hominem) but to address the original ambiguity. Many examples of ancient non-fallacious ad hominem arguments are preserved in the works of the Pyrrhonist philosopher Sextus Empiricus. In these arguments, the concepts and assumptions of the opponents are used as part of a dialectical strategy against t
https://en.wikipedia.org/wiki/Abiotic%20stress
Abiotic stress is the negative impact of non-living factors on the living organisms in a specific environment. The non-living variable must influence the environment beyond its normal range of variation to adversely affect the population performance or individual physiology of the organism in a significant way. Whereas a biotic stress would include living disturbances such as fungi or harmful insects, abiotic stress factors, or stressors, are naturally occurring, often intangible and inanimate factors such as intense sunlight, temperature or wind that may cause harm to the plants and animals in the area affected. Abiotic stress is essentially unavoidable. Abiotic stress affects animals, but plants are especially dependent, if not solely dependent, on environmental factors, so it is particularly constraining. Abiotic stress is the most harmful factor concerning the growth and productivity of crops worldwide. Research has also shown that abiotic stressors are at their most harmful when they occur together, in combinations of abiotic stress factors. Examples Abiotic stress comes in many forms. The most common of the stressors are the easiest for people to identify, but there are many other, less recognizable abiotic stress factors which affect environments constantly. The most basic stressors include high winds, extreme temperatures, drought, flood, cold, heat, nutrient deficiency, and other natural disasters such as tornadoes and wildfires. Lesser-known stressors generally occur on a smaller scale. They include: poor edaphic conditions like rock content and pH levels, high radiation, compaction, contamination, and other, highly specific conditions like rapid rehydration during seed germination. Effects Abiotic stress, as a natural part of every ecosystem, will affect organisms in a variety of ways. Although these effects may be either beneficial or detrimental, the location of the area is crucial in determining the extent of the impact that abiotic stress w
https://en.wikipedia.org/wiki/Chemistry%20of%20ascorbic%20acid
Ascorbic acid is an organic compound with formula C6H8O6, originally called hexuronic acid. It is a white solid, but impure samples can appear yellowish. It dissolves well in water to give mildly acidic solutions. It is a mild reducing agent. Ascorbic acid exists as two enantiomers (mirror-image isomers), commonly denoted "l" (for "levo") and "d" (for "dextro"). The l-isomer is the one most often encountered: it occurs naturally in many foods, and is one form ("vitamer") of vitamin C, an essential nutrient for humans and many animals. Deficiency of vitamin C causes scurvy, formerly a major disease of sailors in long sea voyages. It is used as a food additive and a dietary supplement for its antioxidant properties. The "d" form can be made via chemical synthesis but has no significant biological role. History The antiscorbutic properties of certain foods were demonstrated in the 18th century by James Lind. In 1907, Axel Holst and Theodor Frølich discovered that the antiscorbutic factor was a water-soluble chemical substance, distinct from the one that prevented beriberi. Between 1928 and 1932, Albert Szent-Györgyi isolated a candidate for this substance, which he called "hexuronic acid", first from plants and later from animal adrenal glands. In 1932 Charles Glen King confirmed that it was indeed the antiscorbutic factor. In 1933, sugar chemist Walter Norman Haworth, working with samples of "hexuronic acid" that Szent-Györgyi had isolated from paprika and sent to him the previous year, deduced the correct structure and optical-isomeric nature of the compound, and in 1934 reported its first synthesis. In reference to the compound's antiscorbutic properties, Haworth and Szent-Györgyi proposed the name "a-scorbic acid" for the compound, and later specifically l-ascorbic acid. Because of their work, in 1937 two Nobel Prizes, in Chemistry and in Physiology or Medicine, were awarded to Haworth and Szent-Györgyi, respectively. Independently, ascorbic acid was synthesized
https://en.wikipedia.org/wiki/Audio%20signal%20processing
Audio signal processing is a subfield of signal processing that is concerned with the electronic manipulation of audio signals. Audio signals are electronic representations of sound waves—longitudinal waves which travel through air, consisting of compressions and rarefactions. The energy contained in audio signals or sound power level is typically measured in decibels. As audio signals may be represented in either digital or analog format, processing may occur in either domain. Analog processors operate directly on the electrical signal, while digital processors operate mathematically on its digital representation. History The motivation for audio signal processing began at the beginning of the 20th century with inventions like the telephone, phonograph, and radio that allowed for the transmission and storage of audio signals. Audio processing was necessary for early radio broadcasting, as there were many problems with studio-to-transmitter links. The theory of signal processing and its application to audio was largely developed at Bell Labs in the mid 20th century. Claude Shannon and Harry Nyquist's early work on communication theory, sampling theory and pulse-code modulation (PCM) laid the foundations for the field. In 1957, Max Mathews became the first person to synthesize audio from a computer, giving birth to computer music. Major developments in digital audio coding and audio data compression include differential pulse-code modulation (DPCM) by C. Chapin Cutler at Bell Labs in 1950, linear predictive coding (LPC) by Fumitada Itakura (Nagoya University) and Shuzo Saito (Nippon Telegraph and Telephone) in 1966, adaptive DPCM (ADPCM) by P. Cummiskey, Nikil S. Jayant and James L. Flanagan at Bell Labs in 1973, discrete cosine transform (DCT) coding by Nasir Ahmed, T. Natarajan and K. R. Rao in 1974, and modified discrete cosine transform (MDCT) coding by J. P. Princen, A. W. Johnson and A. B. Bradley at the University of Surrey in 1987. LPC is the basis for p
https://en.wikipedia.org/wiki/Amdahl%27s%20law
In computer architecture, Amdahl's law (or Amdahl's argument) is a formula which gives the theoretical speedup in latency of the execution of a task at fixed workload that can be expected of a system whose resources are improved. It states that "the overall performance improvement gained by optimizing a single part of a system is limited by the fraction of time that the improved part is actually used". It is named after computer scientist Gene Amdahl, and was presented at the American Federation of Information Processing Societies (AFIPS) Spring Joint Computer Conference in 1967. Amdahl's law is often used in parallel computing to predict the theoretical speedup when using multiple processors. For example, if a program needs 20 hours to complete using a single thread, but a one-hour portion of the program cannot be parallelized, so that only the remaining 19 hours (p = 0.95) of execution time can be parallelized, then regardless of how many threads are devoted to a parallelized execution of this program, the minimum execution time is always more than 1 hour. Hence, the theoretical speedup is less than 20 times the single-thread performance, since 1/(1 − p) = 20. Definition Amdahl's law can be formulated in the following way: S_latency(s) = 1 / ((1 − p) + p/s), where S_latency is the theoretical speedup of the execution of the whole task; s is the speedup of the part of the task that benefits from improved system resources; p is the proportion of execution time that the part benefiting from improved resources originally occupied. Furthermore, this formula shows that the theoretical speedup of the execution of the whole task increases with the improvement of the resources of the system and that, regardless of the magnitude of the improvement, the theoretical speedup is always limited by the part of the task that cannot benefit from the improvement (S_latency ≤ 1/(1 − p)). Amdahl's law applies only to the cases where the problem size is fixed. In practice, as more computing resources become available, they tend to get used on larger problems (larger datas
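A minimal numerical sketch of the law as stated above; the 20-hour example with a 1-hour serial portion follows the excerpt, while the function name and the chosen values of s are illustrative.

    # Amdahl's law: theoretical latency speedup for parallel fraction p and speedup factor s
    def amdahl_speedup(p: float, s: float) -> float:
        return 1.0 / ((1.0 - p) + p / s)

    # Worked example from the text: a 20-hour job in which 1 hour cannot be parallelized,
    # so p = 19/20 = 0.95.
    p = 19 / 20
    for s in (2, 8, 64, 1_000_000):
        print(f"s = {s}: speedup ~ {amdahl_speedup(p, s):.2f}")
    # As s grows, the speedup approaches 1 / (1 - p) = 20 but never reaches it.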
https://en.wikipedia.org/wiki/Ayahuasca
Ayahuasca is a South American psychoactive brew, traditionally used by Indigenous cultures and folk healers in the Amazon and Orinoco basins for spiritual ceremonies, divination, and healing a variety of psychosomatic complaints. Originally restricted to areas of Peru, Brazil, Colombia and Ecuador, in the middle of the 20th century it became widespread in Brazil in the context of the appearance of syncretic religions that use ayahuasca as a sacrament, like Santo Daime, União do Vegetal and Barquinha, which blend elements of Amazonian shamanism, Christianity, Kardecist Spiritism, and African-Brazilian religions such as Umbanda, Candomblé and Tambor de Mina, later expanding to several countries across all continents, notably the United States and Western Europe, and, more incipiently, Eastern Europe, South Africa, Australia, and Japan. More recently, new phenomena regarding ayahuasca use have evolved and moved to urban centers in North America and Europe, with the emergence of neoshamanic hybrid rituals and spiritual and recreational drug tourism. Also, anecdotal evidence, studies conducted among ayahuasca consumers and clinical trials suggest that ayahuasca has broad therapeutic potential, especially for the treatment of substance dependence, anxiety, and mood disorders. Thus, currently, despite continuing to be used in a traditional way, ayahuasca is also consumed recreationally worldwide, as well as used in modern medicine. Ayahuasca is commonly made by the prolonged decoction of the stems of the Banisteriopsis caapi vine and the leaves of the Psychotria viridis shrub, although hundreds of species are used in addition or substitution (see "Preparation" below). P. viridis contains N,N-dimethyltryptamine (DMT), a highly psychedelic substance that is orally inactive on its own, and B. caapi is rich in harmala alkaloids, such as harmine, harmaline and tetrahydroharmine (THH), which can act as a monoamine oxidase inhibitor (MAOi), halting liver and gastrointestinal metabolism of DMT, all
https://en.wikipedia.org/wiki/Abbe%20number
In optics and lens design, the Abbe number, also known as the V-number or constringence of a transparent material, is an approximate measure of the material's dispersion (change of refractive index versus wavelength), with high values of V indicating low dispersion. It is named after Ernst Abbe (1840–1905), the German physicist who defined it. The term V-number should not be confused with the normalized frequency in fibers. The Abbe number V_d of a material is defined as V_d = (n_d − 1) / (n_F − n_C), where n_C, n_d, and n_F are the refractive indices of the material at the wavelengths of the Fraunhofer C, d, and F spectral lines (656.3 nm, 587.56 nm, and 486.1 nm respectively). This formulation applies only to the visible range; outside it, different spectral lines must be used. For non-visible spectral lines the term "V-number" is more commonly used. The more general formulation is V = (n_center − 1) / (n_short − n_long), where n_short, n_center, and n_long are the refractive indices of the material at three different wavelengths, the shortest wavelength's index being n_short and the longest's n_long. Abbe numbers are used to classify glass and other optical materials in terms of their chromaticity. For example, the higher dispersion flint glasses have relatively small Abbe numbers whereas the lower dispersion crown glasses have larger Abbe numbers. Values of V_d range from below 25 for very dense flint glasses, around 34 for polycarbonate plastics, up to 65 for common crown glasses, and 75 to 85 for some fluorite and phosphate crown glasses. Abbe numbers are used in the design of achromatic lenses, as their reciprocal is proportional to dispersion (slope of refractive index versus wavelength) in the wavelength region where the human eye is most sensitive. For different wavelength regions, or for higher precision in characterizing a system's chromaticity (such as in the design of apochromats), the full dispersion relation (refractive index as a function of wavelength) is used. Abbe diagram An Abbe diagram, also called 'the glass veil', is pr
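A short sketch of the V_d calculation defined above. The refractive indices used here are approximately those of a common borosilicate crown glass (BK7) and serve only as an illustration.

    # Abbe number: V_d = (n_d - 1) / (n_F - n_C)
    def abbe_number(n_d: float, n_F: float, n_C: float) -> float:
        return (n_d - 1.0) / (n_F - n_C)

    # Illustrative indices, roughly those of BK7 crown glass at the C, d and F lines
    n_C, n_d, n_F = 1.51432, 1.51680, 1.52238
    print(round(abbe_number(n_d, n_F, n_C), 1))   # about 64, i.e. a low-dispersion crown glass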
https://en.wikipedia.org/wiki/Abstract%20data%20type
In computer science, an abstract data type (ADT) is a mathematical model for data types, defined by its behavior (semantics) from the point of view of a user of the data, specifically in terms of possible values, possible operations on data of this type, and the behavior of these operations. This mathematical model contrasts with data structures, which are concrete representations of data, and are the point of view of an implementer, not a user. Formally, an ADT may be defined as a "class of objects whose logical behavior is defined by a set of values and a set of operations"; this is analogous to an algebraic structure in mathematics. What is meant by "behaviour" varies by author, with the two main types of formal specifications for behavior being axiomatic (algebraic) specification and an abstract model; these correspond to axiomatic semantics and operational semantics of an abstract machine, respectively. Some authors also include the computational complexity ("cost"), both in terms of time (for computing operations) and space (for representing values). In practice, many common data types are not ADTs, as the abstraction is not perfect, and users must be aware of issues like arithmetic overflow that are due to the representation. For example, integers are often stored as fixed-width values (32-bit or 64-bit binary numbers), and thus experience integer overflow if the maximum value is exceeded. ADTs are a theoretical concept, in computer science, used in the design and analysis of algorithms, data structures, and software systems, and do not correspond to specific features of computer languages—mainstream computer languages do not directly support formally specified ADTs. However, various language features correspond to certain aspects of ADTs, and are easily confused with ADTs proper; these include abstract types, opaque data types, protocols, and design by contract. ADTs were first proposed by Barbara Liskov and Stephen N. Zilles in 1974, as part of the develo
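A minimal sketch of the distinction drawn above between an ADT (behaviour visible to the user) and a data structure (a concrete representation); the stack operations used here are the classic textbook ones, not something specified in the excerpt.

    from typing import List, Protocol

    class Stack(Protocol):
        # The ADT: only the behaviour is stated, with no commitment to a representation.
        def push(self, item: int) -> None: ...
        def pop(self) -> int: ...
        def is_empty(self) -> bool: ...

    class ListStack:
        # One concrete data structure realising that behaviour.
        def __init__(self) -> None:
            self._items: List[int] = []
        def push(self, item: int) -> None:
            self._items.append(item)
        def pop(self) -> int:
            return self._items.pop()           # raises IndexError on an empty stack
        def is_empty(self) -> bool:
            return not self._items

    s: Stack = ListStack()
    s.push(1); s.push(2)
    print(s.pop(), s.is_empty())               # 2 False

Any other representation (a linked list, say) could replace ListStack without changing client code, which is exactly the user/implementer separation the paragraph describes.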
https://en.wikipedia.org/wiki/Antibody
An antibody (Ab), also known as an immunoglobulin (Ig), is a large, Y-shaped protein used by the immune system to identify and neutralize foreign objects such as pathogenic bacteria and viruses. The antibody recognizes a unique molecule of the pathogen, called an antigen. Each tip of the "Y" of an antibody contains a paratope (analogous to a lock) that is specific for one particular epitope (analogous to a key) on an antigen, allowing these two structures to bind together with precision. Using this binding mechanism, an antibody can tag a microbe or an infected cell for attack by other parts of the immune system, or can neutralize it directly (for example, by blocking a part of a virus that is essential for its invasion). To allow the immune system to recognize millions of different antigens, the antigen-binding sites at both tips of the antibody come in an equally wide variety. In contrast, the remainder of the antibody is relatively constant. In mammals, antibodies occur in a few variants, which define the antibody's class or isotype: IgA, IgD, IgE, IgG, and IgM. The constant region at the trunk of the antibody includes sites involved in interactions with other components of the immune system. The class hence determines the function triggered by an antibody after binding to an antigen, in addition to some structural features. Antibodies from different classes also differ in where they are released in the body and at what stage of an immune response. Together with B and T cells, antibodies comprise the most important part of the adaptive immune system. They occur in two forms: one that is attached to a B cell, and the other, a soluble form, that is unattached and found in extracellular fluids such as blood plasma. Initially, all antibodies are of the first form, attached to the surface of a B cell – these are then referred to as B-cell receptors (BCR). After an antigen binds to a BCR, the B cell activates to proliferate and differentiate into either plasma cells,
https://en.wikipedia.org/wiki/Accelerated%20Graphics%20Port
Accelerated Graphics Port (AGP) is a parallel expansion card standard, designed for attaching a video card to a computer system to assist in the acceleration of 3D computer graphics. It was originally designed as a successor to PCI-type connections for video cards. Since 2004, AGP was progressively phased out in favor of PCI Express (PCIe), which is serial, as opposed to parallel; by mid-2008, PCI Express cards dominated the market and only a few AGP models were available, with GPU manufacturers and add-in board partners eventually dropping support for the interface in favor of PCI Express. Advantages over PCI AGP is a superset of the PCI standard, designed to overcome PCI's limitations in serving the requirements of the era's high-performance graphics cards. The primary advantage of AGP is that it doesn't share the PCI bus, providing a dedicated, point-to-point pathway between the expansion slot(s) and the motherboard chipset. The direct connection also allows for higher clock speeds. The second major change is the use of split transactions, wherein the address and data phases are separated. The card may send many address phases so the host can process them in order, avoiding any long delays caused by the bus being idle during read operations. Third, PCI bus handshaking is simplified. Unlike PCI bus transactions whose length is negotiated on a cycle-by-cycle basis using the FRAME# and STOP# signals, AGP transfers are always a multiple of 8 bytes long, with the total length included in the request. Further, rather than using the IRDY# and TRDY# signals for each word, data is transferred in blocks of four clock cycles (32 words at AGP 8× speed), and pauses are allowed only between blocks. Finally, AGP allows (mandatory only in AGP 3.0) sideband addressing, meaning that the address and data buses are separated so the address phase does not use the main address/data (AD) lines at all. This is done by adding an extra 8-bit "SideBand Address" bus over which the
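A back-of-the-envelope sketch of why the dedicated connection and multiple transfers per clock matter. The 66 MHz base clock and 32-bit bus width come from the published AGP specification rather than from the excerpt, so treat the figures as approximate.

    # Rough AGP peak bandwidth: 32-bit bus at a 66 MHz base clock, with 1, 2, 4 or 8
    # transfers per clock depending on the speed grade (assumed spec values).
    CLOCK_HZ = 66_000_000
    BUS_BYTES = 4
    for multiplier in (1, 2, 4, 8):
        peak_mb_s = CLOCK_HZ * BUS_BYTES * multiplier / 1e6
        print(f"AGP {multiplier}x: ~{peak_mb_s:.0f} MB/s")
    # At 8x, eight transfers per clock over a four-clock block move the 32 words
    # mentioned in the text above.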
https://en.wikipedia.org/wiki/Alois%20Alzheimer
Alois Alzheimer (14 June 1864 – 19 December 1915) was a German psychiatrist and neuropathologist and a colleague of Emil Kraepelin. Alzheimer is credited with identifying the first published case of "presenile dementia", which Kraepelin would later identify as Alzheimer's disease. Early life and education Alzheimer was born in Marktbreit, Bavaria, on 14 June 1864, the son of Anna Johanna Barbara Sabina and Eduard Román Alzheimer. His father served in the office of notary public in the family's hometown. The Alzheimers moved to Aschaffenburg when Alois was still young in order to give their children an opportunity to attend the Royal Humanistic Gymnasium. After graduating with Abitur in 1883, Alzheimer studied medicine at the University of Berlin, the University of Tübingen, and the University of Würzburg. In his final year at university, he was a member of a fencing fraternity, and even received a fine for disturbing the peace while out with his team. In 1887, Alois Alzheimer graduated from Würzburg as Doctor of Medicine. Career The following year, he spent five months assisting mentally ill women before he took a post at the city mental asylum in Frankfurt, the Städtische Anstalt für Irre und Epileptische (Asylum for Lunatics and Epileptics). Emil Sioli, a noted psychiatrist, was the dean of the asylum. Another neurologist, Franz Nissl, began to work in the same asylum with Alzheimer. Together, they conducted research on the pathology of the nervous system, specifically the normal and pathological anatomy of the cerebral cortex. Alzheimer was the co-founder and co-publisher of the journal Zeitschrift für die gesamte Neurologie und Psychiatrie, though he never wrote a book that he could call his own. While at the Frankfurt asylum, Alzheimer also met Emil Kraepelin, one of the best-known German psychiatrists of the time. Kraepelin became a mentor to Alzheimer, and the two worked very closely for the next several years. When Kraepelin moved to Munich to work at the Royal Ps
https://en.wikipedia.org/wiki/Auger%20effect
The Auger effect, or Auger–Meitner effect, is a physical phenomenon in which the filling of an inner-shell vacancy of an atom is accompanied by the emission of an electron from the same atom. When a core electron is removed, leaving a vacancy, an electron from a higher energy level may fall into the vacancy, resulting in a release of energy. For light atoms (Z<12), this energy is most often transferred to a valence electron which is subsequently ejected from the atom. This second ejected electron is called an Auger electron. For atoms with higher atomic numbers, the release of the energy in the form of an emitted photon becomes gradually more probable. Effect Upon ejection, the kinetic energy of the Auger electron corresponds to the difference between the energy of the initial electronic transition into the vacancy and the ionization energy for the electron shell from which the Auger electron was ejected. These energy levels depend on the type of atom and the chemical environment in which the atom was located. Auger electron spectroscopy involves the emission of Auger electrons by bombarding a sample with either X-rays or energetic electrons and measures the intensity of Auger electrons that result as a function of the Auger electron energy. The resulting spectra can be used to determine the identity of the emitting atoms and some information about their environment. Auger recombination is a similar Auger effect which occurs in semiconductors. An electron and electron hole (electron-hole pair) can recombine giving up their energy to an electron in the conduction band, increasing its energy. The reverse effect is known as impact ionization. The Auger effect can impact biological molecules such as DNA. Following the K-shell ionization of the component atoms of DNA, Auger electrons are ejected leading to damage of its sugar-phosphate backbone. Discovery The Auger emission process was observed and published in 1922 by Lise Meitner, an Austrian-Swedish physicist, as a side
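The energy balance described above can be written compactly; the KLL example is the usual textbook illustration and is not taken from the excerpt.

    % Kinetic energy of the Auger electron: the energy released by the electron that
    % fills the core hole, minus the binding (ionization) energy of the shell from
    % which the Auger electron is ejected. For a KLL process, approximately
    E_{\mathrm{kin}} \approx E_K - E_{L_1} - E_{L_{2,3}}
    % where E_K is the K-shell binding energy and the two L terms are the binding
    % energies of the filling and ejected electrons (final-state corrections ignored).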
https://en.wikipedia.org/wiki/Analog%20television
Analog television is the original television technology that uses analog signals to transmit video and audio. In an analog television broadcast, the brightness, colors and sound are represented by amplitude, phase and frequency of an analog signal. Analog signals vary over a continuous range of possible values which means that electronic noise and interference may be introduced. Thus with analog, a moderately weak signal becomes snowy and subject to interference. In contrast, picture quality from a digital television (DTV) signal remains good until the signal level drops below a threshold where reception is no longer possible or becomes intermittent. Analog television may be wireless (terrestrial television and satellite television) or can be distributed over a cable network as cable television. All broadcast television systems used analog signals before the arrival of DTV. Motivated by the lower bandwidth requirements of compressed digital signals, beginning in the 2000s, a digital television transition is proceeding in most countries of the world, with different deadlines for the cessation of analog broadcasts. Several countries have made the switch already, with the remaining countries still in progress mostly in Africa and Asia. Development The earliest systems of analog television were mechanical television systems that used spinning disks with patterns of holes punched into the disc to scan an image. A similar disk reconstructed the image at the receiver. Synchronization of the receiver disc rotation was handled through sync pulses broadcast with the image information. Camera systems used similar spinning discs and required intensely bright illumination of the subject for the light detector to work. The reproduced images from these mechanical systems were dim, very low resolution and flickered severely. Analog television did not begin in earnest as an industry until the development of the cathode-ray tube (CRT), which uses a focused electron beam to tra
https://en.wikipedia.org/wiki/Albrecht%20D%C3%BCrer
Albrecht Dürer (21 May 1471 – 6 April 1528), sometimes spelled in English as Durer, was a German painter, printmaker, and theorist of the German Renaissance. Born in Nuremberg, Dürer established his reputation and influence across Europe in his twenties due to his high-quality woodcut prints. He was in contact with the major Italian artists of his time, including Raphael, Giovanni Bellini, and Leonardo da Vinci, and from 1512 was patronized by Emperor Maximilian I. Dürer's vast body of work includes engravings, his preferred technique in his later prints, altarpieces, portraits and self-portraits, watercolours and books. His woodcut series are more Gothic than the rest of his work. His well-known engravings include the three Meisterstiche (master prints) Knight, Death and the Devil (1513), Saint Jerome in his Study (1514), and Melencolia I (1514). His watercolours mark him as one of the first European landscape artists, while his woodcuts revolutionised the potential of that medium. Dürer's introduction of classical motifs into Northern art, through his knowledge of Italian artists and German humanists, has secured his reputation as one of the most important figures of the Northern Renaissance. This is reinforced by his theoretical treatises, which involve principles of mathematics, perspective, and ideal proportions. Biography Early life (1471–1490) Dürer was born on 21 May 1471, the third child and second son of Albrecht Dürer the Elder and Barbara Holper, who married in 1467 and had eighteen children together. Albrecht Dürer the Elder (originally Albrecht Ajtósi) was a successful goldsmith who by 1455 had moved to Nuremberg from Ajtós, near Gyula in Hungary. He married Holper, his master's daughter, when he himself qualified as a master. One of Albrecht's brothers, Hans Dürer, was also a painter and trained under him. Barbara Holper's family also had roots in Hungary: her mother, Kinga Öllinger, was born in Sopron. Another of Albrecht's brothers, Endres Dürer, took over their
https://en.wikipedia.org/wiki/Aon%20%28company%29
Aon PLC () is a British-American professional services and management consulting firm that offers a range of risk-mitigation products. The firm also provides data and analytics services, strategy consulting through Aon Inpoint and investment banking advisory through Aon Securities. Aon has approximately 50,000 employees across 120 countries. Founded in Chicago by Patrick Ryan, Aon was created in 1982 when the Ryan Insurance Group merged with the Combined Insurance Company of America. In 1987, that company was renamed Aon from aon, a Gaelic word meaning "one". The company is globally headquartered in London with its North America operations based in Chicago at the Aon Center. Aon is listed on the New York Stock Exchange under AON with a market cap of $65 billion in April 2023. History W. Clement Stone's mother bought a small Detroit insurance agency, and in 1918 brought her son into the business. Mr. Stone sold low-cost, low-benefit accident insurance, underwriting and issuing policies on-site. The next year he founded his own agency, the Combined Registry Co. As the Great Depression began, Stone reduced his workforce and improved training. Forced by his son's respiratory illness to winter in the South, Stone moved to Arkansas and Texas. In 1939 he bought American Casualty Insurance Co. of Dallas, Texas. It was consolidated with other purchases as the Combined Insurance Co. of America in 1947. The company continued through the 1950s and 1960s, continuing to sell health and accident policies. In the 1970s, Combined expanded overseas despite being hit hard by the recession. In 1982, after 10 years of stagnation under Clement Stone Jr., the elder Stone, then 79, resumed control until the completion of a merger with Ryan Insurance Co. allowed him to transfer control to Patrick Ryan. Ryan, the son of a Ford dealer in Wisconsin and a graduate of Northwestern University, had started his company as an auto credit insurer in 1964. In 1976, the company bought the insurance
https://en.wikipedia.org/wiki/Analytical%20chemistry
Analytical chemistry studies and uses instruments and methods to separate, identify, and quantify matter. In practice, separation, identification or quantification may constitute the entire analysis or be combined with another method. Separation isolates analytes. Qualitative analysis identifies analytes, while quantitative analysis determines the numerical amount or concentration. Analytical chemistry consists of classical, wet chemical methods and modern, instrumental methods. Classical qualitative methods use separations such as precipitation, extraction, and distillation. Identification may be based on differences in color, odor, melting point, boiling point, solubility, radioactivity or reactivity. Classical quantitative analysis uses mass or volume changes to quantify amount. Instrumental methods may be used to separate samples using chromatography, electrophoresis or field flow fractionation. Then qualitative and quantitative analysis can be performed, often with the same instrument and may use light interaction, heat interaction, electric fields or magnetic fields. Often the same instrument can separate, identify and quantify an analyte. Analytical chemistry is also focused on improvements in experimental design, chemometrics, and the creation of new measurement tools. Analytical chemistry has broad applications to medicine, science, and engineering. History Analytical chemistry has been important since the early days of chemistry, providing methods for determining which elements and chemicals are present in the object in question. During this period, significant contributions to analytical chemistry included the development of systematic elemental analysis by Justus von Liebig and systematized organic analysis based on the specific reactions of functional groups. The first instrumental analysis was flame emissive spectrometry developed by Robert Bunsen and Gustav Kirchhoff who discovered rubidium (Rb) and caesium (Cs) in 1860. Most of the major devel
https://en.wikipedia.org/wiki/Analog%20computer
An analog computer or analogue computer is a type of computer that uses the continuous variation aspect of physical phenomena such as electrical, mechanical, or hydraulic quantities (analog signals) to model the problem being solved. In contrast, digital computers represent varying quantities symbolically and by discrete values of both time and amplitude (digital signals). Analog computers can have a very wide range of complexity. Slide rules and nomograms are the simplest, while naval gunfire control computers and large hybrid digital/analog computers were among the most complicated. Complex mechanisms for process control and protective relays used analog computation to perform control and protective functions. Analog computers were widely used in scientific and industrial applications even after the advent of digital computers, because at the time they were typically much faster, but they started to become obsolete as early as the 1950s and 1960s, although they remained in use in some specific applications, such as aircraft flight simulators, the flight computer in aircraft, and for teaching control systems in universities. Perhaps the most relatable example of an analog computer is the mechanical watch, where the continuous and periodic rotation of interlinked gears drives the second, minute and hour hands of the clock. More complex applications, such as aircraft flight simulators and synthetic-aperture radar, remained the domain of analog computing (and hybrid computing) well into the 1980s, since digital computers were insufficient for the task. Timeline of analog computers Precursors This is a list of examples of early computation devices considered precursors of the modern computers. Some of them may even have been dubbed 'computers' by the press, though they may fail to fit modern definitions. The Antikythera mechanism, a type of device known as an orrery, used to determine the positions of heavenly bodies, was described as an early mechanical analog c
https://en.wikipedia.org/wiki/Acceleration
In mechanics, acceleration is the rate of change of the velocity of an object with respect to time. Acceleration is one of several components of kinematics, the study of motion. Accelerations are vector quantities (in that they have magnitude and direction). The orientation of an object's acceleration is given by the orientation of the net force acting on that object. The magnitude of an object's acceleration, as described by Newton's Second Law, is the combined effect of two causes: the net balance of all external forces acting on that object, whose magnitude is directly proportional to this net resulting force; and that object's mass, depending on the materials out of which it is made, with magnitude inversely proportional to the object's mass. The SI unit for acceleration is the metre per second squared (m/s², or m·s⁻²). For example, when a vehicle starts from a standstill (zero velocity, in an inertial frame of reference) and travels in a straight line at increasing speeds, it is accelerating in the direction of travel. If the vehicle turns, an acceleration occurs toward the new direction and changes its motion vector. The acceleration of the vehicle in its current direction of motion is called a linear (or tangential during circular motions) acceleration, the reaction to which the passengers on board experience as a force pushing them back into their seats. When changing direction, the effecting acceleration is called radial (or centripetal during circular motions) acceleration, the reaction to which the passengers experience as a centrifugal force. If the speed of the vehicle decreases, this is an acceleration in the opposite direction of the velocity vector (mathematically a negative, if the movement is unidimensional and the velocity is positive), sometimes called deceleration or retardation, and passengers experience the reaction to deceleration as an inertial force pushing them forward. Such negative accelerations are often achieved by retrorocket burning in spacecraft. Bot
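In symbols, the definitions behind the statements above are the standard kinematic and Newtonian relations:

    % Acceleration as the time derivative of velocity (second derivative of position)
    \mathbf{a} = \frac{d\mathbf{v}}{dt} = \frac{d^{2}\mathbf{x}}{dt^{2}}
    % Newton's second law: magnitude proportional to the net force and inversely
    % proportional to the mass, with SI unit m/s^2
    \mathbf{a} = \frac{\mathbf{F}_{\mathrm{net}}}{m}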
https://en.wikipedia.org/wiki/Apoptosis
Apoptosis is a form of programmed cell death that occurs in multicellular organisms and in some eukaryotic, single-celled microorganisms such as yeast. Biochemical events lead to characteristic cell changes (morphology) and death. These changes include blebbing, cell shrinkage, nuclear fragmentation, chromatin condensation, DNA fragmentation, and mRNA decay. The average adult human loses between 50 and 70 billion cells each day due to apoptosis. For an average human child between eight and fourteen years old, approximately 20 to 30 billion cells are lost each day. In contrast to necrosis, which is a form of traumatic cell death that results from acute cellular injury, apoptosis is a highly regulated and controlled process that confers advantages during an organism's life cycle. For example, the separation of fingers and toes in a developing human embryo occurs because cells between the digits undergo apoptosis. Unlike necrosis, apoptosis produces cell fragments called apoptotic bodies that phagocytes are able to engulf and remove before the contents of the cell can spill out onto surrounding cells and cause damage to them. Because apoptosis cannot stop once it has begun, it is a highly regulated process. Apoptosis can be initiated through one of two pathways. In the intrinsic pathway the cell kills itself because it senses cell stress, while in the extrinsic pathway the cell kills itself because of signals from other cells. Weak external signals may also activate the intrinsic pathway of apoptosis. Both pathways induce cell death by activating caspases, which are proteases, or enzymes that degrade proteins. The two pathways both activate initiator caspases, which then activate executioner caspases, which then kill the cell by degrading proteins indiscriminately. In addition to its importance as a biological phenomenon, defective apoptotic processes have been implicated in a wide variety of diseases. Excessive apoptosis causes atrophy, whereas an insuffic
https://en.wikipedia.org/wiki/ATM
ATM or atm often refers to: Atmosphere (unit) or atm, a unit of atmospheric pressure Automated teller machine, a cash dispenser or cash machine ATM or atm may also refer to: Computing ATM (computer), a ZX Spectrum clone developed in Moscow in 1991 Adobe Type Manager, a computer program for managing fonts Accelerated Turing machine, or Zeno machine, a model of computation used in theoretical computer science Alternating Turing machine, a model of computation used in theoretical computer science Asynchronous Transfer Mode, a telecommunications protocol used in networking ATM adaptation layer ATM Adaptation Layer 5 Media Amateur Telescope Making, a series of books by Albert Graham Ingalls ATM (2012 film), an American film ATM: Er Rak Error, a 2012 Thai film Azhagiya Tamil Magan, a 2007 Indian film "ATM" (song), a 2018 song by J. Cole from KOD People and organizations Abiding Truth Ministries, anti-LGBT organization in Springfield, Massachusetts, US Association of Teachers of Mathematics, UK Acrylic Tank Manufacturing, US aquarium manufacturer, televised in Tanked ATM FA, a football club in Malaysia A. T. M. Wilson (1906–1978), British psychiatrist African Transformation Movement, South African political party founded in 2018 The a2 Milk Company (NZX ticker symbol ATM) Science Apollo Telescope Mount, a solar observatory ATM serine/threonine kinase, a serine/threonine kinase activated by DNA damage The Airborne Topographic Mapper, a laser altimeter among the instruments used by NASA's Operation IceBridge Transportation Active traffic management, a motorway scheme on the M42 in England Air traffic management, a concept in aviation Altamira Airport, in Brazil (IATA code ATM) Azienda Trasporti Milanesi, the municipal public transport company of Milan Airlines of Tasmania (ICAO code ATM) Catalonia, Spain Autoritat del Transport Metropolità (ATM Àrea de Barcelona), in the Barcelona metropolitan area Autoritat Territorial de la Mobil
https://en.wikipedia.org/wiki/Asynchronous%20Transfer%20Mode
Asynchronous Transfer Mode (ATM) is a telecommunications standard defined by the American National Standards Institute and ITU-T (formerly CCITT) for digital transmission of multiple types of traffic. ATM was developed to meet the needs of the Broadband Integrated Services Digital Network as defined in the late 1980s, and designed to integrate telecommunication networks. It can handle both traditional high-throughput data traffic and real-time, low-latency content such as telephony (voice) and video. ATM provides functionality that uses features of circuit switching and packet switching networks by using asynchronous time-division multiplexing. In the OSI reference model data link layer (layer 2), the basic transfer units are called frames. In ATM these frames are of a fixed length (53 octets) called cells. This differs from approaches such as Internet Protocol (IP) (OSI layer 3) or Ethernet (also layer 2) that use variable-sized packets or frames. ATM uses a connection-oriented model in which a virtual circuit must be established between two endpoints before the data exchange begins. These virtual circuits may be either permanent (dedicated connections that are usually preconfigured by the service provider), or switched (set up on a per-call basis using signaling and disconnected when the call is terminated). The ATM network reference model approximately maps to the three lowest layers of the OSI model: physical layer, data link layer, and network layer. ATM is a core protocol used in the synchronous optical networking and synchronous digital hierarchy (SONET/SDH) backbone of the public switched telephone network and in the Integrated Services Digital Network (ISDN) but has largely been superseded in favor of next-generation networks based on IP technology. Wireless and mobile ATM never established a significant foothold. Protocol architecture To minimize queuing delay and packet delay variation (PDV), all ATM cells are the same small size. Reduction of PDV is p
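A small sketch of the fixed-size cell mentioned above. The split of the 53 octets into a 5-octet header and a 48-octet payload is the standard ATM cell layout; it is stated here as an assumption, since the excerpt only gives the total length.

    # Build a 53-octet ATM cell from a 5-octet header and up to 48 octets of payload.
    HEADER_LEN, PAYLOAD_LEN = 5, 48

    def make_cell(header: bytes, payload: bytes) -> bytes:
        if len(header) != HEADER_LEN:
            raise ValueError("ATM cell header must be exactly 5 octets")
        if len(payload) > PAYLOAD_LEN:
            raise ValueError("payload must fit in 48 octets")
        # Short payloads are padded so that every cell is exactly 53 octets long,
        # which is what keeps queuing delay and delay variation predictable.
        return header + payload.ljust(PAYLOAD_LEN, b"\x00")

    cell = make_cell(bytes(HEADER_LEN), b"hello")
    print(len(cell))   # 53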
https://en.wikipedia.org/wiki/Amphetamine
Amphetamine (contracted from alpha-methylphenethylamine) is a central nervous system (CNS) stimulant that is used in the treatment of attention deficit hyperactivity disorder (ADHD), narcolepsy, and obesity. Amphetamine was discovered as a chemical in 1887 by Lazăr Edeleanu, and then as a drug in the late 1920s. It exists as two enantiomers: levoamphetamine and dextroamphetamine. Amphetamine properly refers to a specific chemical, the racemic free base, which is equal parts of the two enantiomers in their pure amine forms. The term is frequently used informally to refer to any combination of the enantiomers, or to either of them alone. Historically, it has been used to treat nasal congestion and depression. Amphetamine is also used as an athletic performance enhancer and cognitive enhancer, and recreationally as an aphrodisiac and euphoriant. It is a prescription drug in many countries, and unauthorized possession and distribution of amphetamine are often tightly controlled due to the significant health risks associated with recreational use. The first amphetamine pharmaceutical was Benzedrine, a brand which was used to treat a variety of conditions. Currently, pharmaceutical amphetamine is prescribed as racemic amphetamine, Adderall, dextroamphetamine, or the inactive prodrug lisdexamfetamine. Amphetamine increases monoamine and excitatory neurotransmission in the brain, with its most pronounced effects targeting the norepinephrine and dopamine neurotransmitter systems. At therapeutic doses, amphetamine causes emotional and cognitive effects such as euphoria, change in desire for sex, increased wakefulness, and improved cognitive control. It induces physical effects such as improved reaction time, fatigue resistance, and increased muscle strength. Larger doses of amphetamine may impair cognitive function and induce rapid muscle breakdown. Addiction is a serious risk with heavy recreational amphetamine use, but is unlikely to occur from long-term medical use at th
https://en.wikipedia.org/wiki/Asynchronous%20communication
In telecommunications, asynchronous communication is transmission of data, generally without the use of an external clock signal, where data can be transmitted intermittently rather than in a steady stream. Any timing required to recover data from the communication symbols is encoded within the symbols. The most significant aspect of asynchronous communications is that data is not transmitted at regular intervals, thus making possible variable bit rate, and that the transmitter and receiver clock generators do not have to be exactly synchronized all the time. In asynchronous transmission, data is sent one byte at a time, and each byte is framed by a start bit and one or more stop bits. Physical layer In asynchronous serial communication in the physical protocol layer, the data blocks are code words of a certain word length, for example octets (bytes) or ASCII characters, delimited by start bits and stop bits. A variable-length space can be inserted between the code words. No bit synchronization signal is required. This is sometimes called character-oriented communication. Examples include MNP2 and modems older than V.2. Data link layer and higher Asynchronous communication at the data link layer or higher protocol layers is known as statistical multiplexing, for example Asynchronous Transfer Mode (ATM). In this case, the asynchronously transferred blocks are called data packets, for example ATM cells. The opposite is circuit-switched communication, which provides constant bit rate, for example ISDN and SONET/SDH. The packets may be encapsulated in a data frame, with a frame synchronization bit sequence indicating the start of the frame, and sometimes also a bit synchronization bit sequence, typically 01010101, for identification of the bit transition times. Note that at the physical layer, this is considered synchronous serial communication. Examples of packet mode data link protocols that can be or are transferred using synchronous serial communication are HDLC, Ethernet
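As an illustration of start/stop framing, the following Python sketch builds the bit sequence for one byte under the common 8N1 convention (one start bit, eight data bits sent least-significant-bit first, one stop bit). The frame_byte_8n1 helper is a made-up name, and real UARTs handle parity, timing, and idle-line states that are omitted here.

```python
# Minimal sketch of 8N1 asynchronous framing: each byte is sent as a start bit (0),
# eight data bits (least significant bit first), and a stop bit (1). The line idles at 1.

def frame_byte_8n1(byte: int) -> list[int]:
    data_bits = [(byte >> i) & 1 for i in range(8)]  # LSB first
    return [0] + data_bits + [1]                     # start bit, data, stop bit

for b in b"OK":
    print(f"{chr(b)!r}: {frame_byte_8n1(b)}")
# 'O' (0x4F) prints [0, 1, 1, 1, 1, 0, 0, 1, 0, 1]
```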
https://en.wikipedia.org/wiki/Adenylyl%20cyclase
Adenylate cyclase (EC 4.6.1.1, also commonly known as adenyl cyclase and adenylyl cyclase, abbreviated AC) is an enzyme with systematic name ATP diphosphate-lyase (cyclizing; 3′,5′-cyclic-AMP-forming). It catalyzes the following reaction: ATP = 3′,5′-cyclic AMP + diphosphate It has key regulatory roles in essentially all cells. It is the most polyphyletic known enzyme: six distinct classes have been described, all catalyzing the same reaction but representing unrelated gene families with no known sequence or structural homology. The best known class of adenylyl cyclases is class III or AC-III (Roman numerals are used for classes). AC-III occurs widely in eukaryotes and has important roles in many human tissues. All classes of adenylyl cyclase catalyse the conversion of adenosine triphosphate (ATP) to 3',5'-cyclic AMP (cAMP) and pyrophosphate. Magnesium ions are generally required and appear to be closely involved in the enzymatic mechanism. The cAMP produced by AC then serves as a regulatory signal via specific cAMP-binding proteins, either transcription factors, enzymes (e.g., cAMP-dependent kinases), or ion transporters. Classes Class I The first class of adenylyl cyclases occurs in many bacteria including E. coli (as CyaA [unrelated to the Class II enzyme]). This was the first class of AC to be characterized. It was observed that E. coli deprived of glucose produce cAMP that serves as an internal signal to activate expression of genes for importing and metabolizing other sugars. cAMP exerts this effect by binding the transcription factor CRP, also known as CAP. Class I ACs are large cytosolic enzymes (~100 kDa) with a large regulatory domain (~50 kDa) that indirectly senses glucose levels. No crystal structure is yet available for class I AC. Some indirect structural information is available for this class. It is known that the N-terminal half is the catalytic portion, and that it requires two Mg2+ ions. S103, S113, D114, D116 and W118 are the five absolut
https://en.wikipedia.org/wiki/Automated%20theorem%20proving
Automated theorem proving (also known as ATP or automated deduction) is a subfield of automated reasoning and mathematical logic dealing with proving mathematical theorems by computer programs. Automated reasoning over mathematical proof was a major impetus for the development of computer science. Logical foundations While the roots of formalised logic go back to Aristotle, the end of the 19th and early 20th centuries saw the development of modern logic and formalised mathematics. Frege's Begriffsschrift (1879) introduced both a complete propositional calculus and what is essentially modern predicate logic. His Foundations of Arithmetic, published in 1884, expressed (parts of) mathematics in formal logic. This approach was continued by Russell and Whitehead in their influential Principia Mathematica, first published 1910–1913, and with a revised second edition in 1927. Russell and Whitehead thought they could derive all mathematical truth using axioms and inference rules of formal logic, in principle opening up the process to automatisation. In 1920, Thoralf Skolem simplified a previous result by Leopold Löwenheim, leading to the Löwenheim–Skolem theorem and, in 1930, to the notion of a Herbrand universe and a Herbrand interpretation that allowed (un)satisfiability of first-order formulas (and hence the validity of a theorem) to be reduced to (potentially infinitely many) propositional satisfiability problems. In 1929, Mojżesz Presburger showed that the first-order theory of the natural numbers with addition and equality (now called Presburger arithmetic in his honor) is decidable and gave an algorithm that could determine if a given sentence in the language was true or false. However, shortly after this positive result, Kurt Gödel published On Formally Undecidable Propositions of Principia Mathematica and Related Systems (1931), showing that in any sufficiently strong axiomatic system there are true statements that cannot be proved in the system. This topic wa
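As a toy illustration of automated deduction in its simplest form, the sketch below decides propositional validity by brute-force truth-table enumeration. It is not any historical system described above; practical provers rely on far more refined calculi such as resolution, DPLL, or superposition.

```python
# Toy decision procedure for propositional logic: a formula over named variables is a
# theorem (valid) iff it evaluates to True under every assignment. Exhaustive
# truth-table enumeration is the simplest possible automated proof procedure.

from itertools import product

def is_tautology(formula, variables):
    return all(formula(**dict(zip(variables, values)))
               for values in product([False, True], repeat=len(variables)))

# Peirce's law ((p -> q) -> p) -> p, with "a -> b" written as "not a or b":
peirce = lambda p, q: (not ((not (not p or q)) or p)) or p
print(is_tautology(peirce, ["p", "q"]))  # True
```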
https://en.wikipedia.org/wiki/Aberration%20%28astronomy%29
In astronomy, aberration (also referred to as astronomical aberration, stellar aberration, or velocity aberration) is a phenomenon where celestial objects exhibit an apparent motion about their true positions based on the velocity of the observer: It causes objects to appear to be displaced towards the observer's direction of motion. The change in angle is of the order of v/c where c is the speed of light and v the velocity of the observer. In the case of "stellar" or "annual" aberration, the apparent position of a star to an observer on Earth varies periodically over the course of a year as the Earth's velocity changes as it revolves around the Sun, by a maximum angle of approximately 20 arcseconds in right ascension or declination. The term aberration has historically been used to refer to a number of related phenomena concerning the propagation of light in moving bodies. Aberration is distinct from parallax, which is a change in the apparent position of a relatively nearby object, as measured by a moving observer, relative to more distant objects that define a reference frame. The amount of parallax depends on the distance of the object from the observer, whereas aberration does not. Aberration is also related to light-time correction and relativistic beaming, although it is often considered separately from these effects. Aberration is historically significant because of its role in the development of the theories of light, electromagnetism and, ultimately, the theory of special relativity. It was first observed in the late 1600s by astronomers searching for stellar parallax in order to confirm the heliocentric model of the Solar System. However, it was not understood at the time to be a different phenomenon. In 1727, James Bradley provided a classical explanation for it in terms of the finite speed of light relative to the motion of the Earth in its orbit around the Sun, which he used to make one of the earliest measurements of the speed of light. However,
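The quoted 20-arcsecond figure follows directly from the v/c estimate. The short Python check below uses Earth's mean orbital speed and is only a back-of-the-envelope calculation.

```python
# Rough magnitude check of annual aberration: the aberration constant is about v/c
# (in radians) for Earth's orbital speed v ~ 29.8 km/s, converted to arcseconds.

v = 29.78e3          # Earth's mean orbital speed, m/s
c = 299_792_458      # speed of light, m/s
radians_per_arcsec = 1 / 206_265

kappa_arcsec = (v / c) / radians_per_arcsec
print(f"aberration constant ~ {kappa_arcsec:.1f} arcseconds")  # ~20.5
```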
https://en.wikipedia.org/wiki/Optical%20aberration
In optics, aberration is a property of optical systems, such as lenses, that causes light to be spread out over some region of space rather than focused to a point. Aberrations cause the image formed by a lens to be blurred or distorted, with the nature of the distortion depending on the type of aberration. Aberration can be defined as a departure of the performance of an optical system from the predictions of paraxial optics. In an imaging system, it occurs when light from one point of an object does not converge into (or does not diverge from) a single point after transmission through the system. Aberrations occur because the simple paraxial theory is not a completely accurate model of the effect of an optical system on light, rather than due to flaws in the optical elements. An image-forming optical system with aberration will produce an image which is not sharp. Makers of optical instruments need to correct optical systems to compensate for aberration. Aberration can be analyzed with the techniques of geometrical optics. The articles on reflection, refraction and caustics discuss the general features of reflected and refracted rays. Overview With an ideal lens, light from any given point on an object would pass through the lens and come together at a single point in the image plane (or, more generally, the image surface). Real lenses do not focus light exactly to a single point, however, even when they are perfectly made. These deviations from the idealized lens performance are called aberrations of the lens. Aberrations fall into two classes: monochromatic and chromatic. Monochromatic aberrations are caused by the geometry of the lens or mirror and occur both when light is reflected and when it is refracted. They appear even when using monochromatic light, hence the name. Chromatic aberrations are caused by dispersion, the variation of a lens's refractive index with wavelength. Because of dispersion, different wavelengths of light come to focus at differ
https://en.wikipedia.org/wiki/Autocorrelation
Autocorrelation, sometimes known as serial correlation in the discrete time case, is the correlation of a signal with a delayed copy of itself as a function of delay. Informally, it is the similarity between observations of a random variable as a function of the time lag between them. The analysis of autocorrelation is a mathematical tool for finding repeating patterns, such as the presence of a periodic signal obscured by noise, or identifying the missing fundamental frequency in a signal implied by its harmonic frequencies. It is often used in signal processing for analyzing functions or series of values, such as time domain signals. Different fields of study define autocorrelation differently, and not all of these definitions are equivalent. In some fields, the term is used interchangeably with autocovariance. Unit root processes, trend-stationary processes, autoregressive processes, and moving average processes are specific forms of processes with autocorrelation. Auto-correlation of stochastic processes In statistics, the autocorrelation of a real or complex random process is the Pearson correlation between values of the process at different times, as a function of the two times or of the time lag. Let X be a random process, and t be any point in time (t may be an integer for a discrete-time process or a real number for a continuous-time process). Then X_t is the value (or realization) produced by a given run of the process at time t. Suppose that the process has mean μ_t and variance σ_t² at time t, for each t. Then the definition of the auto-correlation function between times t1 and t2 is R_XX(t1, t2) = E[X_{t1} · conj(X_{t2})], where E is the expected value operator and conj (written with a bar over the variable) represents complex conjugation. Note that the expectation may not be well defined. Subtracting the mean before multiplication yields the auto-covariance function between times t1 and t2: K_XX(t1, t2) = E[(X_{t1} − μ_{t1}) · conj(X_{t2} − μ_{t2})]. Note that this expression is not well defined for all time series or processes, because the mean may not exist, or the variance may be zero (for a constant
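As a practical illustration (not part of the formal definition above), the following NumPy sketch estimates the autocorrelation of a noisy sinusoid and recovers its period from the location of the first major peak.

```python
# Estimating the autocorrelation of a noisy periodic signal with NumPy. Peaks in the
# autocorrelation at lags that are multiples of the period reveal the hidden periodicity.

import numpy as np

rng = np.random.default_rng(0)
n, period = 500, 25
t = np.arange(n)
x = np.sin(2 * np.pi * t / period) + 0.5 * rng.standard_normal(n)

x0 = x - x.mean()
acf = np.correlate(x0, x0, mode="full")[n - 1:]   # lags 0 .. n-1
acf /= acf[0]                                     # normalize so acf[0] == 1

print(np.argmax(acf[1:60]) + 1)   # close to the true period of 25
```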
https://en.wikipedia.org/wiki/Aspartame
Aspartame is an artificial non-saccharide sweetener 200 times sweeter than sucrose and is commonly used as a sugar substitute in foods and beverages. It is a methyl ester of the aspartic acid/phenylalanine dipeptide with brand names NutraSweet, Equal, and Canderel. Aspartame was approved by the US Food and Drug Administration (FDA) in 1974, and then again in 1981, after approval was revoked in 1980. Aspartame is one of the most studied food additives in the human food supply. Reviews by over 100 governmental regulatory bodies found the ingredient safe for consumption at the normal acceptable daily intake (ADI) limit. Uses Aspartame is around 180 to 200 times sweeter than sucrose (table sugar). Due to this property, even though aspartame produces 4 kcal (17 kJ) of energy per gram when metabolized, about the same as sucrose, the quantity of aspartame needed to produce a sweet taste is so small that its caloric contribution is negligible. The sweetness of aspartame lasts longer than that of sucrose, so it is often blended with other artificial sweeteners such as acesulfame potassium to produce an overall taste more like that of sugar. Like many other peptides, aspartame may hydrolyze (break down) into its constituent amino acids under conditions of elevated temperature or high pH. This makes aspartame undesirable as a baking sweetener and prone to degradation in products with a high pH, as required for a long shelf life. The stability of aspartame under heating can be improved to some extent by encasing it in fats or in maltodextrin. The stability when dissolved in water depends markedly on pH. At room temperature, it is most stable at pH 4.3, where its half-life is nearly 300 days. At pH 7, however, its half-life is only a few days. Most soft drinks have a pH between 3 and 5, where aspartame is reasonably stable. In products that may require a longer shelf life, such as syrups for fountain beverages, aspartame is sometimes blended with a more stable sweetener, such as saccha
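Assuming simple first-order hydrolysis, the half-life figures quoted above translate into remaining fractions as in this small, illustrative Python check; the three-day half-life used for pH 7 is only a stand-in for "a few days".

```python
# Back-of-the-envelope check on the stability figures above: with first-order hydrolysis,
# the fraction of aspartame remaining after t days is 0.5 ** (t / half_life).

def fraction_remaining(days: float, half_life_days: float) -> float:
    return 0.5 ** (days / half_life_days)

print(round(fraction_remaining(90, 300), 2))  # ~0.81 after 90 days at pH 4.3 (half-life ~300 days)
print(round(fraction_remaining(90, 3), 3))    # essentially 0 if the half-life were ~3 days (pH 7)
```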
https://en.wikipedia.org/wiki/Ames%20test
The Ames test is a widely employed method that uses bacteria to test whether a given chemical can cause mutations in the DNA of the test organism. More formally, it is a biological assay to assess the mutagenic potential of chemical compounds. A positive test indicates that the chemical is mutagenic and therefore may act as a carcinogen, because cancer is often linked to mutation. The test serves as a quick and convenient assay to estimate the carcinogenic potential of a compound because standard carcinogen assays on mice and rats are time-consuming (taking two to three years to complete) and expensive. However, false-positives and false-negatives are known. The procedure was described in a series of papers in the early 1970s by Bruce Ames and his group at the University of California, Berkeley. General procedure The Ames test uses several strains of the bacterium Salmonella typhimurium that carry mutations in genes involved in histidine synthesis. These strains are auxotrophic mutants, i.e. they require histidine for growth, but cannot produce it. The method tests the capability of the tested substance in creating mutations that result in a return to a "prototrophic" state, so that the cells can grow on a histidine-free medium. The tester strains are specially constructed to detect either frameshift (e.g. strains TA-1537 and TA-1538) or point (e.g. strain TA-1531) mutations in the genes required to synthesize histidine, so that mutagens acting via different mechanisms may be identified. Some compounds are quite specific, causing reversions in just one or two strains. The tester strains also carry mutations in the genes responsible for lipopolysaccharide synthesis, making the cell wall of the bacteria more permeable, and in the excision repair system to make the test more sensitive. Larger organisms like mammals have metabolic processes that could potentially turn a chemical considered not mutagenic into one that is or one that is considered mutagenic into one
https://en.wikipedia.org/wiki/Parallel%20ATA
Parallel ATA (PATA), originally AT Attachment, also known as IDE, is a standard interface designed for IBM PC-compatible computers. It was first developed by Western Digital and Compaq in 1986 for compatible hard drives and CD or DVD drives. The connection is used for storage devices such as hard disk drives, floppy disk drives, and optical disc drives in computers. The standard is maintained by the X3/INCITS committee. It uses the underlying AT Attachment (ATA) and AT Attachment Packet Interface (ATAPI) standards. The Parallel ATA standard is the result of a long history of incremental technical development, which began with the original AT Attachment interface, developed for use in early PC AT equipment. The ATA interface itself evolved in several stages from Western Digital's original Integrated Drive Electronics (IDE) interface. As a result, many near-synonyms for ATA/ATAPI and its previous incarnations are still in common informal use, in particular Extended IDE (EIDE) and Ultra ATA (UATA). After the introduction of SATA in 2003, the original ATA was renamed to Parallel ATA, or PATA for short. Parallel ATA cables have a maximum allowable length of 18 in (457 mm). Because of this limit, the technology normally appears as an internal computer storage interface. For many years, ATA provided the most common and the least expensive interface for this application. It has largely been replaced by SATA in newer systems. History and terminology The standard was originally conceived as the "AT Bus Attachment," officially called "AT Attachment" and abbreviated "ATA" because its primary feature was a direct connection to the 16-bit ISA bus introduced with the IBM PC/AT. The original ATA specifications published by the standards committees use the name "AT Attachment". The "AT" in the IBM PC/AT referred to "Advanced Technology" so ATA has also been referred to as "Advanced Technology Attachment". When a newer Serial ATA (SATA) was introduced in 2003, the original ATA was renamed to Parallel ATA, or PATA for short. Phy
https://en.wikipedia.org/wiki/Astrobiology
Astrobiology is a scientific field within the life and environmental sciences that studies the origins, early evolution, distribution, and future of life in the universe by investigating its deterministic conditions and contingent events. As a discipline, astrobiology is founded on the premise that life may exist beyond Earth. Research in astrobiology comprises three main areas: the study of habitable environments in the Solar System and beyond, the search for planetary biosignatures of past or present extraterrestrial life, and the study of the origin and early evolution of life on Earth. The field of astrobiology has its origins in the 20th century with the advent of space exploration and the discovery of exoplanets. Early astrobiology research focused on the search for extraterrestrial life and the study of the potential for life to exist on other planets. In the 1960s and 1970s, NASA began its astrobiology pursuits within the Viking program, which was the first US mission to land on Mars and search for signs of life. This mission, along with other early space exploration missions, laid the foundation for the development of astrobiology as a discipline. Regarding habitable environments, astrobiology investigates potential locations beyond Earth that could support life, such as Mars, Europa, and exoplanets, through research into the extremophiles populating austere environments on Earth, like volcanic and deep sea environments. Research within this topic is conducted utilising the methodology of the geosciences, especially geobiology, for astrobiological applications. The search for biosignatures involves the identification of signs of past or present life in the form of organic compounds, isotopic ratios, or microbial fossils. Research within this topic is conducted utilising the methodology of planetary and environmental science, especially atmospheric science, for astrobiological applications, and is often conducted through remote sensing and in situ missi
https://en.wikipedia.org/wiki/Anthropic%20principle
The anthropic principle, also known as the "observation selection effect", is the hypothesis, first proposed in 1957 by Robert Dicke, that the range of possible observations that could be made about the universe is limited by the fact that observations could happen only in a universe capable of developing intelligent life. Proponents of the anthropic principle argue that it explains why the universe has the age and the fundamental physical constants necessary to accommodate conscious life, since if either had been different, no one would have been around to make observations. Anthropic reasoning is often used to deal with the idea that the universe seems to be finely tuned for the existence of life. There are many different formulations of the anthropic principle. Philosopher Nick Bostrom counts them at thirty, but the underlying principles can be divided into "weak" and "strong" forms, depending on the types of cosmological claims they entail. The weak anthropic principle (WAP), as defined by Brandon Carter, states that the universe's ostensible fine tuning is the result of selection bias (specifically survivorship bias). Most such arguments draw upon some notion of the multiverse for there to be a statistical population of universes from which to select. However, a single vast universe is sufficient for most forms of the WAP that do not specifically deal with fine tuning. Carter distinguished the WAP from the strong anthropic principle (SAP), which considers the universe in some sense compelled to eventually have conscious and sapient life emerge within it. A form of the latter known as the participatory anthropic principle, articulated by John Archibald Wheeler, suggests on the basis of quantum mechanics that the universe, as a condition of its existence, must be observed, thus implying one or more observers. Stronger yet is the final anthropic principle (FAP), proposed by John D. Barrow and Frank Tipler, which views the universe's structure as expressible by bi
https://en.wikipedia.org/wiki/Active%20Directory
Active Directory (AD) is a directory service developed by Microsoft for Windows domain networks. Windows Server operating systems include it as a set of processes and services. Originally, only centralized domain management used Active Directory. However, it ultimately became an umbrella title for various directory-based identity-related services. A domain controller is a server running the Active Directory Domain Service (AD DS) role. It authenticates and authorizes all users and computers in a Windows domain-type network, assigning and enforcing security policies for all computers and installing or updating software. For example, when a user logs into a computer part of a Windows domain, Active Directory checks the submitted username and password and determines whether the user is a system administrator or a non-admin user. Furthermore, it allows the management and storage of information, provides authentication and authorization mechanisms, and establishes a framework to deploy other related services: Certificate Services, Active Directory Federation Services, Lightweight Directory Services, and Rights Management Services. Active Directory uses Lightweight Directory Access Protocol (LDAP) versions 2 and 3, Microsoft's version of Kerberos, and DNS. Robert R. King defined it in the following way: History Like many information-technology efforts, Active Directory originated out of a democratization of design using Requests for Comments (RFCs). The Internet Engineering Task Force (IETF) oversees the RFC process and has accepted numerous RFCs initiated by widespread participants. For example, LDAP underpins Active Directory. Also, X.500 directories and the Organizational Unit preceded the Active Directory concept that uses those methods. The LDAP concept began to emerge even before the founding of Microsoft in April 1975, with RFCs as early as 1971. RFCs contributing to LDAP include RFC 1823 (on the LDAP API, August 1995), RFC 2307, RFC 3062, and RFC 4533. Mi
https://en.wikipedia.org/wiki/Angular%20momentum
In physics, angular momentum (sometimes called moment of momentum or rotational momentum) is the rotational analog of linear momentum. It is an important physical quantity because it is a conserved quantity – the total angular momentum of a closed system remains constant. Angular momentum has both a direction and a magnitude, and both are conserved. Bicycles and motorcycles, flying discs, rifled bullets, and gyroscopes owe their useful properties to conservation of angular momentum. Conservation of angular momentum is also why hurricanes form spirals and neutron stars have high rotational rates. In general, conservation limits the possible motion of a system, but it does not uniquely determine it. The three-dimensional angular momentum for a point particle is classically represented as a pseudovector , the cross product of the particle's position vector (relative to some origin) and its momentum vector; the latter is in Newtonian mechanics. Unlike linear momentum, angular momentum depends on where this origin is chosen, since the particle's position is measured from it. Angular momentum is an extensive quantity; that is, the total angular momentum of any composite system is the sum of the angular momenta of its constituent parts. For a continuous rigid body or a fluid, the total angular momentum is the volume integral of angular momentum density (angular momentum per unit volume in the limit as volume shrinks to zero) over the entire body. Similar to conservation of linear momentum, where it is conserved if there is no external force, angular momentum is conserved if there is no external torque. Torque can be defined as the rate of change of angular momentum, analogous to force. The net external torque on any system is always equal to the total torque on the system; in other words, the sum of all internal torques of any system is always 0 (this is the rotational analogue of Newton's third law of motion). Therefore, for a closed system (where there is no ne
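For a point particle, the definition L = r × p can be evaluated directly; the following NumPy snippet uses arbitrary illustrative values.

```python
# Angular momentum of a point particle as the cross product L = r x p, computed with NumPy.
# The mass, position, and velocity below are arbitrary illustrative numbers.

import numpy as np

m = 2.0                               # mass, kg
r = np.array([1.0, 0.0, 0.0])         # position, m
v = np.array([0.0, 3.0, 0.0])         # velocity, m/s
p = m * v                             # linear momentum
L = np.cross(r, p)

print(L)   # [0. 0. 6.] -> angular momentum along +z, in kg m^2/s
```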
https://en.wikipedia.org/wiki/Plum%20pudding%20model
The plum pudding model is one of several historical scientific models of the atom. First proposed by J. J. Thomson in 1904 soon after the discovery of the electron, but before the discovery of the atomic nucleus, the model tried to account for two properties of atoms then known: that electrons are negatively charged subatomic particles and that atoms have no net electric charge. The plum pudding model has electrons surrounded by a volume of positive charge, like negatively charged "plums" embedded in a positively charged "pudding". Overview It had been known for many years that atoms contain negatively charged subatomic particles. Thomson called them "corpuscles" (particles), but they were more commonly called "electrons", the name G. J. Stoney had coined for the "fundamental unit quantity of electricity" in 1891. It had also been known for many years that atoms have no net electric charge. Thomson held that atoms must also contain some positive charge that cancels out the negative charge of their electrons. Thomson published his proposed model in the March 1904 edition of the Philosophical Magazine, the leading British science journal of the day. In Thomson's view: ... the atoms of the elements consist of a number of negatively electrified corpuscles enclosed in a sphere of uniform positive electrification, ... Thomson's model was the first to assign a specific inner structure to an atom, though his original description did not include mathematical formulas. He had followed the work of William Thomson, who had written a paper proposing a vortex atom in 1867. J. J. Thomson abandoned his 1890 "nebular atom" hypothesis, based on the vortex theory of the atom, in which atoms were composed of immaterial vortices and in which he had suggested there were similarities between the arrangement of vortices and the periodic regularity found among the chemical elements. Thomson based his atomic model on known experimental evidence of the day, and in fact, followed Lord Kelvin's lead again as Kel
https://en.wikipedia.org/wiki/AI-complete
In the field of artificial intelligence, the most difficult problems are informally known as AI-complete or AI-hard, implying that the difficulty of these computational problems, assuming intelligence is computational, is equivalent to that of solving the central artificial intelligence problem—making computers as intelligent as people, or strong AI. To call a problem AI-complete reflects an attitude that it would not be solved by a simple specific algorithm. AI-complete problems are hypothesised to include computer vision, natural language understanding, and dealing with unexpected circumstances while solving any real-world problem. Currently, AI-complete problems cannot be solved with modern computer technology alone, but would also require human computation. This property could be useful, for example, to test for the presence of humans as CAPTCHAs aim to do, and for computer security to circumvent brute-force attacks. History The term was coined by Fanya Montalvo by analogy with NP-complete and NP-hard in complexity theory, which formally describes the most famous class of difficult problems. Early uses of the term are in Erik Mueller's 1987 PhD dissertation and in Eric Raymond's 1991 Jargon File. AI-complete problems AI-complete problems are hypothesized to include: AI peer review (composite natural language understanding, automated reasoning, automated theorem proving, formalized logic expert system) Bongard problems Computer vision (and subproblems such as object recognition) Natural language understanding (and subproblems such as text mining, machine translation, and word-sense disambiguation) Autonomous driving Dealing with unexpected circumstances while solving any real world problem, whether it's navigation or planning or even the kind of reasoning done by expert systems. Machine translation To translate accurately, a machine must be able to understand the text. It must be able to follow the author's argument, so it must have some ability
https://en.wikipedia.org/wiki/Amorphous%20solid
In condensed matter physics and materials science, an amorphous solid (or non-crystalline solid) is a solid that lacks the long-range order that is characteristic of a crystal. The terms "glass" and "glassy solid" are sometimes used synonymously with amorphous solid; however, these terms refer specifically to amorphous materials that undergo a glass transition. Examples of amorphous solids include glasses, metallic glasses, and certain types of plastics and polymers. Etymology The term comes from the Greek a ("without"), and morphé ("shape, form"). Structure Amorphous materials have an internal structure consisting of interconnected structural blocks that can be similar to the basic structural units found in the corresponding crystalline phase of the same compound. Unlike in crystalline materials, however, no long-range order exists. Amorphous materials therefore cannot be defined by a finite unit cell. Statistical methods, such as the atomic density function and radial distribution function, are more useful in describing the structure of amorphous solids. Although amorphous materials lack long-range order, they exhibit localized order on small length scales. Localized order in amorphous materials can be categorized as short- or medium-range order. By convention, short-range order extends only to the nearest-neighbor shell, typically only 1-2 atomic spacings. Medium-range order is then defined as the structural organization extending beyond the short-range order, usually by 1-2 nm. Fundamental properties of amorphous solids Glass transition at high temperatures The freezing from the liquid state to an amorphous solid, the glass transition, is considered one of the important unsolved problems of physics. Universal low-temperature properties of amorphous solids At very low temperatures (below 1-10 K), a large family of amorphous solids exhibits a variety of similar low-temperature properties. Although there are various theoretical models, neither the glass transition nor l
https://en.wikipedia.org/wiki/File%20archiver
A file archiver is a computer program that combines a number of files together into one archive file, or a series of archive files, for easier transportation or storage. File archivers may employ lossless data compression in their archive formats to reduce the size of the archive. Basic archivers just take a list of files and concatenate their contents sequentially into archives. The archive files need to store metadata, at least the names and lengths of the original files, so that proper reconstruction is possible. More advanced archivers store additional metadata, such as the original timestamps, file attributes or access control lists. The process of making an archive file is called archiving or packing. Reconstructing the original files from the archive is termed unarchiving, unpacking or extracting. History An early archiver was the Multics command archive, descended from the CTSS command of the same name, which was a basic archiver and performed no compression. Multics also had a "tape_archiver" command, abbreviated ta, which was perhaps the forerunner of the Unix command tar. Unix archivers The Unix tools ar, tar, and cpio act as archivers but not compressors. Users of the Unix tools use additional compression tools, such as gzip, bzip2, or xz, to compress the archive file after packing, or to remove compression before unpacking the archive file. The filename extensions are successively added at each step of this process. For example, archiving a collection of files with tar and then compressing the resulting archive file with gzip results in a file with a .tar.gz extension. This approach has two goals: It follows the Unix philosophy that each program should accomplish a single task to perfection, as opposed to attempting to accomplish everything with one tool. As compression technology progresses, users may use different compression programs without having to modify or abandon their archiver. The archives use solid compression. When the files are combined, the comp
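The tar-then-gzip pipeline described above can be reproduced with Python's standard tarfile module, as in this minimal sketch (it assumes a file named notes.txt exists in the working directory; the archive and directory names are arbitrary).

```python
# Creating and extracting a gzip-compressed tar archive with Python's standard library,
# mirroring the "archive with tar, then compress with gzip" pipeline described above.

import tarfile

# Pack: the "w:gz" mode applies gzip compression on top of the tar container.
with tarfile.open("example.tar.gz", "w:gz") as archive:
    archive.add("notes.txt")          # assumes a file named notes.txt exists

# Unpack into a directory.
with tarfile.open("example.tar.gz", "r:gz") as archive:
    archive.extractall("unpacked")
```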
https://en.wikipedia.org/wiki/Arbeit%20macht%20frei
Arbeit macht frei is a German phrase meaning "Work sets you free" or "Work makes one free". The slogan originates from an 1873 novel by Lorenz Diefenbach. It is known for appearing on the entrance of Auschwitz and other Nazi concentration camps. Origin The expression comes from the title of an 1873 novel by the German philologist Lorenz Diefenbach, Arbeit macht frei, in which gamblers and fraudsters find the path to virtue through labour. The phrase was also used in French by Auguste Forel, a Swiss entomologist, neuroanatomist and psychiatrist, in a 1920 work. In 1922, an ethnic nationalist "protective" organization of Germans within Austria, based in Vienna, printed membership stamps with the phrase. The phrase is also evocative of the medieval German principle of Stadtluft macht frei ("urban air makes you free"), according to which serfs were liberated after being a city resident for one year and one day. Use by the Nazis In 1933 the first communist prisoners were being rounded up for an indefinite period without charges. They were held in a number of places in Germany. The slogan was first used over the gate of a "wild camp" in the city of Oranienburg, which was set up in an abandoned brewery in March 1933 (it was later rebuilt in 1936 as Sachsenhausen). The slogan was placed at the entrances to a number of Nazi concentration camps. The slogan's use was implemented by Schutzstaffel (SS) officer Theodor Eicke at Dachau concentration camp. From Dachau, it was copied by the Nazi officer Rudolf Höss, who had previously worked there. Höss was appointed to create the original camp at Auschwitz, which became known as Auschwitz (or Camp) 1 and whose intended purpose was to incarcerate Polish political detainees. The Auschwitz I sign was made by prisoner-laborers including master blacksmith Jan Liwacz, and features an upside-down B, which has been interpreted as an act of defiance by the prisoners who made it. In The Kingdom of Auschwitz, Otto Friedrich wrote about Rudolf Höss, regarding his decision to display the mo
https://en.wikipedia.org/wiki/AIM%20%28software%29
AIM (AOL Instant Messenger) was an instant messaging and presence computer program created by AOL, which used the proprietary OSCAR instant messaging protocol and the TOC protocol to allow registered users to communicate in real time. AIM was popular by the late 1990s in the United States and other countries, and was the leading instant messaging application in that region into the following decade. Teens and college students were known to use the messenger's away message feature to keep in touch with friends, often frequently changing their away message throughout a day or leaving a message up with one's computer left on to inform buddies of their ongoings, location, parties, thoughts, or jokes. AIM's popularity declined as AOL subscriptions decreased, falling steeply towards the 2010s as Gmail's Google Talk, SMS, and Internet social networks such as Facebook gained popularity. Its fall has often been compared with that of other once-popular Internet services, such as Myspace. In June 2015, AOL was acquired by Verizon Communications. In June 2017, Verizon combined AOL and Yahoo into its subsidiary Oath Inc. (now called Yahoo). The company discontinued AIM as a service on December 15, 2017. History In May 1997, AIM was released unceremoniously as a stand-alone download for Microsoft Windows. AIM was an outgrowth of "online messages" in the original platform written in PL/1 on a Stratus computer by Dave Brown. At one time, the software had the largest share of the instant messaging market in North America, especially in the United States (with a reported 52% of the total). This does not include other instant messaging software related to or developed by AOL, such as ICQ and iChat. During its heyday, its main competitors were ICQ (which AOL acquired in 1998), Yahoo! Messenger and MSN Messenger. AOL particularly had a rivalry or "chat war" with PowWow and Microsoft, starting in 1999. There were several attempts from Microsoft to simultaneously log into their own and AIM's pro
https://en.wikipedia.org/wiki/Ackermann%20function
In computability theory, the Ackermann function, named after Wilhelm Ackermann, is one of the simplest and earliest-discovered examples of a total computable function that is not primitive recursive. All primitive recursive functions are total and computable, but the Ackermann function illustrates that not all total computable functions are primitive recursive. After Ackermann's publication of his function (which had three non-negative integer arguments), many authors modified it to suit various purposes, so that today "the Ackermann function" may refer to any of numerous variants of the original function. One common version is the two-argument Ackermann–Péter function developed by Rózsa Péter and Raphael Robinson. Its value grows very rapidly; for example, A(4, 2) results in 2^65536 − 3, an integer of 19,729 decimal digits. History In the late 1920s, the mathematicians Gabriel Sudan and Wilhelm Ackermann, students of David Hilbert, were studying the foundations of computation. Both Sudan and Ackermann are credited with discovering total computable functions (termed simply "recursive" in some references) that are not primitive recursive. Sudan published the lesser-known Sudan function, then shortly afterwards and independently, in 1928, Ackermann published his function φ (the Greek letter phi). Ackermann's three-argument function, φ(m, n, p), is defined such that for p = 0, 1, 2, it reproduces the basic operations of addition, multiplication, and exponentiation as φ(m, n, 0) = m + n, φ(m, n, 1) = m·n, and φ(m, n, 2) = m^n, and for p > 2 it extends these basic operations in a way that can be compared to the hyperoperations. (Aside from its historic role as a total-computable-but-not-primitive-recursive function, Ackermann's original function is seen to extend the basic arithmetic operations beyond exponentiation, although not as seamlessly as do variants of Ackermann's function that are specifically designed for that purpose, such as Goodstein's hyperoperation sequence.) In On the Infinite, David Hilbert hypothesized that the Ackermann function was not primitiv
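A direct Python transcription of the two-argument Ackermann–Péter definition shows how quickly the values explode; this naive recursive version is only practical for very small arguments.

```python
# The two-argument Ackermann-Peter function, implemented directly from its recursive
# definition. Even tiny inputs explode: A(4, 2) = 2**65536 - 3 is far out of reach
# for this naive version, so only small arguments are evaluated below.

import sys
sys.setrecursionlimit(100_000)

def ackermann(m: int, n: int) -> int:
    if m == 0:
        return n + 1
    if n == 0:
        return ackermann(m - 1, 1)
    return ackermann(m - 1, ackermann(m, n - 1))

print([ackermann(m, n) for m in range(4) for n in range(4)])
print(ackermann(3, 3))   # 61
```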
https://en.wikipedia.org/wiki/AMOS%20%28programming%20language%29
AMOS BASIC is a dialect of the BASIC programming language for the Amiga computer. Following on from the successful STOS BASIC for the Atari ST, AMOS BASIC was written for the Amiga by François Lionet with Constantin Sotiropoulos and published by Europress Software in 1990. History AMOS competed on the Amiga platform with Acid Software's Blitz BASIC. Both BASICs differed from other dialects on different platforms, in that they allowed the easy creation of fairly demanding multimedia software, with full structured code and many high-level functions to load images, animations, sounds and display them in various ways. The original AMOS was a BASIC interpreter which, whilst working fine, suffered the same disadvantages of any language being run interpretively. By all accounts, AMOS was extremely fast among interpreted languages, being speedy enough that an extension called AMOS 3D could produce playable 3D games even on plain 7 MHz 68000 Amigas. Later, an AMOS compiler was developed that further increased speed. AMOS could also run MC68000 machine code, loaded into a program's memory banks. To simplify animation of sprites, AMOS included the AMOS Animation Language (AMAL), a compiled sprite scripting language which runs independently of the main AMOS BASIC program. It was also possible to control screen and "rainbow" effects using AMAL scripts. AMAL scripts in effect created CopperLists, small routines executed by the Amiga's Agnus chip. After the original version of AMOS, Europress released a compiler (AMOS Compiler), and two other versions of the language: Easy AMOS, a simpler version for beginners, and AMOS Professional, a more advanced version with added features, such as a better integrated development environment, ARexx support, a new user interface API and new flow control constructs. Neither of these new versions was significantly more popular than the original AMOS. AMOS was used mostly to make multimedia software, video games (platformers and graphical ad
https://en.wikipedia.org/wiki/Alpha%20helix
An alpha helix (or α-helix) is a sequence of amino acids in a protein that are twisted into a coil (a helix). The alpha helix is the most common structural arrangement in the secondary structure of proteins. It is also the most extreme type of local structure, and it is the local structure that is most easily predicted from a sequence of amino acids. The alpha helix has a right-handed helix conformation in which every backbone N−H group hydrogen bonds to the backbone C=O group of the amino acid that is four residues earlier in the protein sequence. Other names The alpha helix is also commonly called a: Pauling–Corey–Branson α-helix (from the names of three scientists who described its structure). 3.6₁₃-helix because there are on average 3.6 amino acids in one helical turn, with 13 atoms being involved in the ring formed by the hydrogen bond. Discovery In the early 1930s, William Astbury showed that there were drastic changes in the X-ray fiber diffraction of moist wool or hair fibers upon significant stretching. The data suggested that the unstretched fibers had a coiled molecular structure with a characteristic repeat of ≈ 5.1 Å (0.51 nm). Astbury initially proposed a linked-chain structure for the fibers. He later joined other researchers (notably the American chemist Maurice Huggins) in proposing that: the unstretched protein molecules formed a helix (which he called the α-form), and the stretching caused the helix to uncoil, forming an extended state (which he called the β-form). Although incorrect in their details, Astbury's models of these forms were correct in essence and correspond to modern elements of secondary structure, the α-helix and the β-strand (Astbury's nomenclature was kept), which were developed by Linus Pauling, Robert Corey and Herman Branson in 1951 (see below); that paper showed both right- and left-handed helices, although in 1960 the crystal structure of myoglobin showed that the right-handed form is the common
https://en.wikipedia.org/wiki/Athlon
Athlon is the brand name applied to a series of x86-compatible microprocessors designed and manufactured by AMD. The original Athlon (now called Athlon Classic) was the first seventh-generation x86 processor and the first desktop processor to reach speeds of one gigahertz (GHz). It made its debut as AMD's high-end processor brand on June 23, 1999. Over the years AMD has used the Athlon name with the 64-bit Athlon 64 architecture, the Athlon II, and Accelerated Processing Unit (APU) chips targeting the Socket AM1 desktop SoC architecture, and Socket AM4 Zen microarchitecture. The modern Zen-based Athlon with a Radeon Graphics processor was introduced in 2019 as AMD's highest-performance entry-level processor. Athlon comes from the Ancient Greek (athlon), meaning "(sport) contest", or "prize of a contest", or "place of a contest; arena". With the Athlon name originally used for AMD's high-end processors, AMD currently uses Athlon for budget APUs with integrated graphics. AMD positions the Athlon against its rival, the Intel Pentium. Brand history K7 design and development The first Athlon processor was a result of AMD's development of K7 processors in the 1990s. AMD founder and then-CEO Jerry Sanders aggressively pursued strategic partnerships and engineering talent in the late 1990s, working to build on earlier successes in the PC market with the AMD K6 processor line. One major partnership announced in 1998 paired AMD with semiconductor giant Motorola to co-develop copper-based semiconductor technology, resulting in the K7 project being the first commercial processor to utilize copper fabrication technology. In the announcement, Sanders referred to the partnership as creating a "virtual gorilla" that would enable AMD to compete with Intel on fabrication capacity while limiting AMD's financial outlay for new facilities. The K7 design team was led by Dirk Meyer, who had previously worked as a lead engineer at DEC on multiple Alpha microprocessors. When DEC was sol
https://en.wikipedia.org/wiki/Arithmetic%E2%80%93geometric%20mean
In mathematics, the arithmetic–geometric mean of two positive real numbers x and y is the mutual limit of a sequence of arithmetic means and a sequence of geometric means. Begin the sequences with x and y: a_0 = x and g_0 = y. Then define the two interdependent sequences (a_n) and (g_n) as a_{n+1} = (a_n + g_n)/2 and g_{n+1} = sqrt(a_n · g_n). These two sequences converge to the same number, the arithmetic–geometric mean of x and y; it is denoted by M(x, y), or sometimes by agm(x, y) or AGM(x, y). The arithmetic–geometric mean is used in fast algorithms for exponential and trigonometric functions, as well as some mathematical constants, in particular, computing π. The arithmetic–geometric mean can be extended to complex numbers and when the branches of the square root are allowed to be taken inconsistently, it is, in general, a multivalued function. Example To find the arithmetic–geometric mean of a_0 = 24 and g_0 = 6, iterate as follows: a_1 = (24 + 6)/2 = 15 and g_1 = sqrt(24 · 6) = 12; a_2 = (15 + 12)/2 = 13.5 and g_2 = sqrt(15 · 12) ≈ 13.4164; and so on. The number of digits in which a_n and g_n agree approximately doubles with each iteration. The arithmetic–geometric mean of 24 and 6 is the common limit of these two sequences, which is approximately 13.458171. History The first algorithm based on this sequence pair appeared in the works of Lagrange. Its properties were further analyzed by Gauss. Properties The geometric mean of two positive numbers is never bigger than the arithmetic mean (see inequality of arithmetic and geometric means). As a consequence, for n > 0, (g_n) is an increasing sequence, (a_n) is a decreasing sequence, and g_n ≤ M(x, y) ≤ a_n. These are strict inequalities if x ≠ y. M(x, y) is thus a number between the geometric and arithmetic mean of x and y; it is also between x and y. If r ≥ 0, then M(rx, ry) = r·M(x, y). There is an integral-form expression for M(x, y): M(x, y) = (π/2) / ∫₀^{π/2} dθ / sqrt(x²·cos²θ + y²·sin²θ) = π·(x + y) / (4·K((x − y)/(x + y))), where K(k) is the complete elliptic integral of the first kind: K(k) = ∫₀^{π/2} dθ / sqrt(1 − k²·sin²θ). Indeed, since the arithmetic–geometric process converges so quickly, it provides an efficient way to compute elliptic integrals via this formula. In engineering, it is used for instance in elliptic filter design. The arithmetic–geometric mean is connected to the Jacobi theta functio
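The iteration converges so quickly that a few steps reach machine precision, as this small Python sketch of the AGM computation for 24 and 6 shows; the stopping tolerance is an arbitrary choice.

```python
# Iterating the arithmetic and geometric means until they agree, here for AGM(24, 6).
# The number of correct digits roughly doubles each step, so a handful of iterations
# reaches double precision.

from math import sqrt, isclose

def agm(x: float, y: float) -> float:
    a, g = x, y
    while not isclose(a, g, rel_tol=1e-12):
        a, g = (a + g) / 2, sqrt(a * g)
    return a

print(agm(24, 6))   # approximately 13.45817148
```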
https://en.wikipedia.org/wiki/Asymptote
In analytic geometry, an asymptote of a curve is a line such that the distance between the curve and the line approaches zero as one or both of the x or y coordinates tends to infinity. In projective geometry and related contexts, an asymptote of a curve is a line which is tangent to the curve at a point at infinity. The word asymptote is derived from the Greek ἀσύμπτωτος (asumptōtos) which means "not falling together", from ἀ priv. + σύν "together" + πτωτ-ός "fallen". The term was introduced by Apollonius of Perga in his work on conic sections, but in contrast to its modern meaning, he used it to mean any line that does not intersect the given curve. There are three kinds of asymptotes: horizontal, vertical and oblique. For curves given by the graph of a function y = f(x), horizontal asymptotes are horizontal lines that the graph of the function approaches as x tends to +∞ or −∞. Vertical asymptotes are vertical lines near which the function grows without bound. An oblique asymptote has a slope that is non-zero but finite, such that the graph of the function approaches it as x tends to +∞ or −∞. More generally, one curve is a curvilinear asymptote of another (as opposed to a linear asymptote) if the distance between the two curves tends to zero as they tend to infinity, although the term asymptote by itself is usually reserved for linear asymptotes. Asymptotes convey information about the behavior of curves in the large, and determining the asymptotes of a function is an important step in sketching its graph. The study of asymptotes of functions, construed in a broad sense, forms a part of the subject of asymptotic analysis. Introduction The idea that a curve may come arbitrarily close to a line without actually becoming the same may seem to counter everyday experience. The representations of a line and a curve as marks on a piece of paper or as pixels on a computer screen have a positive width. So if they were to be extended far enough they would seem to merge, at least as far a
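As an illustration, SymPy limits can be used to read off the asymptotes of a rational function; the example function below is chosen arbitrarily.

```python
# Finding asymptotes of f(x) = (2x^2 + 3x + 1)/x with SymPy limits: the behaviour as
# x -> oo gives the oblique asymptote's slope and intercept, and x -> 0 shows the
# vertical asymptote.

import sympy as sp

x = sp.symbols("x")
f = (2 * x**2 + 3 * x + 1) / x

slope = sp.limit(f / x, x, sp.oo)               # 2
intercept = sp.limit(f - slope * x, x, sp.oo)   # 3
print(f"oblique asymptote: y = {slope}*x + {intercept}")

print(sp.limit(f, x, 0, dir="+"))               # oo -> vertical asymptote at x = 0
```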
https://en.wikipedia.org/wiki/Accumulator%20%28computing%29
In a computer's central processing unit (CPU), the accumulator is a register in which intermediate arithmetic logic unit results are stored. Without a register like an accumulator, it would be necessary to write the result of each calculation (addition, multiplication, shift, etc.) to main memory, perhaps only to be read right back again for use in the next operation. Access to main memory is slower than access to a register like an accumulator because the technology used for the large main memory is slower (but cheaper) than that used for a register. Early electronic computer systems were often split into two groups, those with accumulators and those without. Modern computer systems often have multiple general-purpose registers that can operate as accumulators, and the term is no longer as common as it once was. However, to simplify their design, a number of special-purpose processors still use a single accumulator. Basic concept Mathematical operations often take place in a stepwise fashion, using the results from one operation as the input to the next. For instance, a manual calculation of a worker's weekly payroll might look something like: look up the number of hours worked from the employee's time card look up the pay rate for that employee from a table multiply the hours by the pay rate to get their basic weekly pay multiply their basic pay by a fixed percentage to account for income tax subtract that number from their basic pay to get their weekly pay after tax multiply that result by another fixed percentage to account for retirement plans subtract that number from their basic pay to get their weekly pay after all deductions A computer program carrying out the same task would follow the same basic sequence of operations, although the values being looked up would all be stored in computer memory. In early computers, the number of hours would likely be held on a punch card and the pay rate in some other form of memory, perhaps a magnetic drum.
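The payroll sequence above maps naturally onto accumulator-style code, where a single running value is updated step by step; the rates in this Python sketch are made-up illustrative numbers.

```python
# The payroll example above, written as the kind of accumulator-style sequence a simple
# CPU would follow: each step combines the running value in "acc" with one operand.

hours, rate = 40, 25.0
tax_rate, retirement_rate = 0.20, 0.05

acc = hours                     # load the hours worked into the accumulator
acc *= rate                     # basic weekly pay
acc -= acc * tax_rate           # deduct income tax
acc -= acc * retirement_rate    # deduct the retirement contribution
print(acc)                      # pay after all deductions: 760.0
```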
https://en.wikipedia.org/wiki/Arithmetic
Arithmetic () is an elementary part of mathematics that consists of the study of the properties of the traditional operations on numbers—addition, subtraction, multiplication, division, exponentiation, and extraction of roots. In the 19th century, Italian mathematician Giuseppe Peano formalized arithmetic with his Peano axioms, which are highly important to the field of mathematical logic today. History The prehistory of arithmetic is limited to a small number of artifacts that may indicate the conception of addition and subtraction; the best-known is the Ishango bone from central Africa, dating from somewhere between 20,000 and 18,000 BC, although its interpretation is disputed. The earliest written records indicate the Egyptians and Babylonians used all the elementary arithmetic operations: addition, subtraction, multiplication, and division, as early as 2000 BC. These artifacts do not always reveal the specific process used for solving problems, but the characteristics of the particular numeral system strongly influence the complexity of the methods. The hieroglyphic system for Egyptian numerals, like the later Roman numerals, descended from tally marks used for counting. In both cases, this origin resulted in values that used a decimal base but did not include positional notation. Complex calculations with Roman numerals required the assistance of a counting board (or the Roman abacus) to obtain the results. Early number systems that included positional notation were not decimal; these include the sexagesimal (base 60) system for Babylonian numerals and the vigesimal (base 20) system that defined Maya numerals. Because of the place-value concept, the ability to reuse the same digits for different values contributed to simpler and more efficient methods of calculation. The continuous historical development of modern arithmetic starts with the Hellenistic period of ancient Greece; it originated much later than the Babylonian and Egyptian examples. Prior to th
https://en.wikipedia.org/wiki/Advanced%20Power%20Management
Advanced power management (APM) is a technical standard for power management developed by Intel and Microsoft and released in 1992 which enables an operating system running an IBM-compatible personal computer to work with the BIOS (part of the computer's firmware) to achieve power management. Revision 1.2 was the last version of the APM specification, released in 1996. ACPI is the successor to APM. Microsoft dropped support for APM in Windows Vista. The Linux kernel still mostly supports APM, though support for APM CPU idle was dropped in version 3.0. Overview APM uses a layered approach to manage devices. APM-aware applications (which include device drivers) talk to an OS-specific APM driver. This driver communicates to the APM-aware BIOS, which controls the hardware. There is the ability to opt out of APM control on a device-by-device basis, which can be used if a driver wants to communicate directly with a hardware device. Communication occurs both ways; power management events are sent from the BIOS to the APM driver, and the APM driver sends information and requests to the BIOS via function calls. In this way the APM driver is an intermediary between the BIOS and the operating system. Power management happens in two ways; through the above-mentioned function calls from the APM driver to the BIOS requesting power state changes, and automatically based on device activity. In APM 1.0 and APM 1.1, power management is almost fully controlled by the BIOS. In APM 1.2, the operating system can control PM time (e.g. suspend timeout). Power management events There are 12 power events (such as standby, suspend and resume requests, and low battery notifications), plus OEM-defined events, that can be sent from the APM BIOS to the operating system. The APM driver regularly polls for event change notifications. Power Management Events: APM functions There are 21 APM function calls defined that the APM driver can use to query power management statuses, or request pow
https://en.wikipedia.org/wiki/Arithmetic%20function
In number theory, an arithmetic, arithmetical, or number-theoretic function is for most authors any function f(n) whose domain is the positive integers and whose range is a subset of the complex numbers. Hardy & Wright include in their definition the requirement that an arithmetical function "expresses some arithmetical property of n". An example of an arithmetic function is the divisor function whose value at a positive integer n is equal to the number of divisors of n. There is a larger class of number-theoretic functions that do not fit the above definition, for example, the prime-counting functions. This article provides links to functions of both classes. Arithmetic functions are often extremely irregular (see table), but some of them have series expansions in terms of Ramanujan's sum. Multiplicative and additive functions An arithmetic function a is completely additive if a(mn) = a(m) + a(n) for all natural numbers m and n; completely multiplicative if a(mn) = a(m)a(n) for all natural numbers m and n; Two whole numbers m and n are called coprime if their greatest common divisor is 1, that is, if there is no prime number that divides both of them. Then an arithmetic function a is additive if a(mn) = a(m) + a(n) for all coprime natural numbers m and n; multiplicative if a(mn) = a(m)a(n) for all coprime natural numbers m and n. Notation In this article, and mean that the sum or product is over all prime numbers: and Similarly, and mean that the sum or product is over all prime powers with strictly positive exponent (so is not included): The notations and mean that the sum or product is over all positive divisors of n, including 1 and n. For example, if , then The notations can be combined: and mean that the sum or product is over all prime divisors of n. For example, if n = 18, then and similarly and mean that the sum or product is over all prime powers dividing n. For example, if n = 24, then Ω(n), ω(n), νp(n) – prime power decomposit
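To make the multiplicative property concrete, here is a minimal Python sketch (mine, not from the article) that computes the divisor function by naive trial division and checks multiplicativity on a coprime pair:

```python
from math import gcd

def d(n: int) -> int:
    """Divisor function: the number of positive divisors of n (naive trial division)."""
    return sum(1 for k in range(1, n + 1) if n % k == 0)

# d is multiplicative: d(m*n) == d(m) * d(n) whenever gcd(m, n) == 1.
m, n = 9, 10
assert gcd(m, n) == 1
assert d(m * n) == d(m) * d(n)        # d(90) = 12 = 3 * 4

# It is not completely multiplicative: 2 and 4 are not coprime, and
# d(8) = 4 while d(2) * d(4) = 2 * 3 = 6.
print(d(8), d(2) * d(4))              # 4 6
```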
https://en.wikipedia.org/wiki/Ascending%20chain%20condition
In mathematics, the ascending chain condition (ACC) and descending chain condition (DCC) are finiteness properties satisfied by some algebraic structures, most importantly ideals in certain commutative rings. These conditions played an important role in the development of the structure theory of commutative rings in the works of David Hilbert, Emmy Noether, and Emil Artin. The conditions themselves can be stated in an abstract form, so that they make sense for any partially ordered set. This point of view is useful in abstract algebraic dimension theory due to Gabriel and Rentschler. Definition A partially ordered set (poset) P is said to satisfy the ascending chain condition (ACC) if no infinite strictly ascending sequence of elements of P exists. Equivalently, every weakly ascending sequence of elements of P eventually stabilizes, meaning that there exists a positive integer n such that Similarly, P is said to satisfy the descending chain condition (DCC) if there is no infinite descending chain of elements of P. Equivalently, every weakly descending sequence of elements of P eventually stabilizes. Comments Assuming the axiom of dependent choice, the descending chain condition on (possibly infinite) poset P is equivalent to P being well-founded: every nonempty subset of P has a minimal element (also called the minimal condition or minimum condition). A totally ordered set that is well-founded is a well-ordered set. Similarly, the ascending chain condition is equivalent to P being converse well-founded (again, assuming dependent choice): every nonempty subset of P has a maximal element (the maximal condition or maximum condition). Every finite poset satisfies both the ascending and descending chain conditions, and thus is both well-founded and converse well-founded. Example Consider the ring of integers. Each ideal of consists of all multiples of some number . For example, the ideal consists of all multiples of . Let be the ideal consisting
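As an informal illustration of the integer example above (my own sketch, not part of the article), an ascending chain of ideals in the ring of integers can be encoded by its positive generators, since (a) is contained in (b) exactly when b divides a; the chain below stabilizes once the generator stops changing:

```python
# Generators of an ascending chain of ideals in the ring of integers:
# (120) ⊆ (60) ⊆ (12) ⊆ (6) ⊆ (6) ⊆ (6)
generators = [120, 60, 12, 6, 6, 6]

# Containment check: (a) ⊆ (b) holds exactly when b divides a,
# so each generator must divide the one before it.
assert all(prev % nxt == 0 for prev, nxt in zip(generators, generators[1:]))

# The positive generators can only decrease, so any such chain eventually
# stabilizes -- the ascending chain condition for ideals of the integers.
stabilized_at = next(i for i in range(len(generators) - 1)
                     if generators[i] == generators[i + 1])
print(f"chain stabilizes at index {stabilized_at} with generator {generators[stabilized_at]}")
```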
https://en.wikipedia.org/wiki/Amplifier%20figures%20of%20merit
In electronics, the figures of merit of an amplifier are numerical measures that characterize its properties and performance. Figures of merit can be given as a list of specifications that include properties such as gain, bandwidth, noise and linearity, among others listed in this article. Figures of merit are important for determining the suitability of a particular amplifier for an intended use. Gain The gain of an amplifier is the ratio of output to input power or amplitude, and is usually measured in decibels. When measured in decibels it is logarithmically related to the power ratio: G(dB)=10 log(Pout /Pin). RF amplifiers are often specified in terms of the maximum power gain obtainable, while the voltage gain of audio amplifiers and instrumentation amplifiers will be more often specified. For example, an audio amplifier with a gain given as 20 dB will have a voltage gain of ten. The use of voltage gain figure is appropriate when the amplifier's input impedance is much higher than the source impedance, and the load impedance higher than the amplifier's output impedance. If two equivalent amplifiers are being compared, the amplifier with higher gain settings would be more sensitive as it would take less input signal to produce a given amount of power. Bandwidth The bandwidth of an amplifier is the range of frequencies for which the amplifier gives "satisfactory performance". The definition of "satisfactory performance" may be different for different applications. However, a common and well-accepted metric is the half-power points (i.e. frequency where the power goes down by half its peak value) on the output vs. frequency curve. Therefore, bandwidth can be defined as the difference between the lower and upper half power points. This is therefore also known as the bandwidth. Bandwidths (otherwise called "frequency responses") for other response tolerances are sometimes quoted (, etc.) or "plus or minus 1dB" (roughly the sound level difference people usual
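A quick numerical check of the gain relationships just described, as a hedged Python sketch (the example figures are mine): 10·log10 for power ratios, 20·log10 for voltage ratios, so a stated 20 dB voltage gain is a ratio of ten.

```python
import math

def power_gain_db(p_out: float, p_in: float) -> float:
    """Power gain in decibels: G(dB) = 10 * log10(Pout / Pin)."""
    return 10 * math.log10(p_out / p_in)

def voltage_gain_db(v_out: float, v_in: float) -> float:
    """Voltage gain in decibels: 20 * log10 of the voltage ratio."""
    return 20 * math.log10(v_out / v_in)

print(voltage_gain_db(10.0, 1.0))   # 20.0 dB -- a voltage gain of ten
print(power_gain_db(100.0, 1.0))    # 20.0 dB -- a power ratio of one hundred
print(power_gain_db(0.5, 1.0))      # about -3.01 dB -- the half-power point
```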
https://en.wikipedia.org/wiki/Acceptance%20testing
In engineering and its various subdisciplines, acceptance testing is a test conducted to determine if the requirements of a specification or contract are met. It may involve chemical tests, physical tests, or performance tests. In systems engineering, it may involve black-box testing performed on a system (for example: a piece of software, lots of manufactured mechanical parts, or batches of chemical products) prior to its delivery. In software testing, the ISTQB defines acceptance testing as: Acceptance testing is also known as user acceptance testing (UAT), end-user testing, operational acceptance testing (OAT), acceptance test-driven development (ATDD) or field (acceptance) testing. Acceptance criteria are the criteria that a system or component must satisfy in order to be accepted by a user, customer, or other authorized entity. Overview Testing is a set of activities conducted to facilitate discovery and/or evaluation of properties of one or more items under test. Each individual test, known as a test case, exercises a set of predefined test activities, developed to drive the execution of the test item to meet test objectives; including correct implementation, error identification, quality verification and other valued detail. The test environment is usually designed to be identical, or as close as possible, to the anticipated production environment. It includes all facilities, hardware, software, firmware, procedures and/or documentation intended for or used to perform the testing of software. UAT and OAT test cases are ideally derived in collaboration with business customers, business analysts, testers, and developers. It is essential that these tests include both business logic tests as well as operational environment conditions. The business customers (product owners) are the primary stakeholders of these tests. As the test conditions successfully achieve their acceptance criteria, the stakeholders are reassured the development is progressing in the r
https://en.wikipedia.org/wiki/Amygdalin
Amygdalin (from Ancient Greek: 'almond') is a naturally occurring chemical compound found in many plants, most notably in the seeds (kernels) of apricots, bitter almonds, apples, peaches, cherries and plums, and in the roots of manioc. Amygdalin is classified as a cyanogenic glycoside, because each amygdalin molecule includes a nitrile group, which can be released as the toxic cyanide anion by the action of a beta-glucosidase. Eating amygdalin will cause it to release cyanide in the human body, and may lead to cyanide poisoning. Since the early 1950s, both amygdalin and a chemical derivative named laetrile have been promoted as alternative cancer treatments, often under the misnomer vitamin B17 (neither amygdalin nor laetrile is a vitamin). Scientific study has found them to not only be clinically ineffective in treating cancer, but also potentially toxic or lethal when taken by mouth due to cyanide poisoning. The promotion of laetrile to treat cancer has been described in the medical literature as a canonical example of quackery, and as "the slickest, most sophisticated, and certainly the most remunerative cancer quack promotion in medical history". Chemistry Amygdalin is a cyanogenic glycoside derived from the aromatic amino acid phenylalanine. Amygdalin and prunasin are common among plants of the family Rosaceae, particularly the genus Prunus, Poaceae (grasses), Fabaceae (legumes), and in other food plants, including flaxseed and manioc. Within these plants, amygdalin and the enzymes necessary to hydrolyze it are stored in separate locations, and only mix as a result of tissue damage. This provides a natural defense system. Amygdalin is contained in stone fruit kernels, such as almonds, apricot (14 g/kg), peach (6.8 g/kg), and plum (4–17.5 g/kg depending on variety), and also in the seeds of the apple (3 g/kg). Benzaldehyde released from amygdalin provides a bitter flavor. Because of a difference in a recessive gene called Sweet kernel [Sk], much less am
https://en.wikipedia.org/wiki/Amicable%20numbers
Amicable numbers are two different natural numbers related in such a way that the sum of the proper divisors of each is equal to the other number. That is, s(a)=b and s(b)=a, where s(n)=σ(n)-n is equal to the sum of positive divisors of n except n itself (see also divisor function). The smallest pair of amicable numbers is (220, 284). They are amicable because the proper divisors of 220 are 1, 2, 4, 5, 10, 11, 20, 22, 44, 55 and 110, of which the sum is 284; and the proper divisors of 284 are 1, 2, 4, 71 and 142, of which the sum is 220. (A proper divisor of a number is a positive factor of that number other than the number itself. For example, the proper divisors of 6 are 1, 2, and 3.) The first ten amicable pairs are: (220, 284), (1184, 1210), (2620, 2924), (5020, 5564), (6232, 6368), (10744, 10856), (12285, 14595), (17296, 18416), (63020, 76084), and (66928, 66992). . (Also see and ) It is unknown if there are infinitely many pairs of amicable numbers. A pair of amicable numbers constitutes an aliquot sequence of period 2. A related concept is that of a perfect number, which is a number that equals the sum of its own proper divisors, in other words a number which forms an aliquot sequence of period 1. Numbers that are members of an aliquot sequence with period greater than 2 are known as sociable numbers. History Amicable numbers were known to the Pythagoreans, who credited them with many mystical properties. A general formula by which some of these numbers could be derived was invented circa 850 by the Iraqi mathematician Thābit ibn Qurra (826–901). Other Arab mathematicians who studied amicable numbers are al-Majriti (died 1007), al-Baghdadi (980–1037), and al-Fārisī (1260–1320). The Iranian mathematician Muhammad Baqir Yazdi (16th century) discovered the pair (9363584, 9437056), though this has often been attributed to Descartes. Much of the work of Eastern mathematicians in this area has been forgotten. Thābit ibn Qurra's formula was rediscovered by
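The defining condition s(a) = b and s(b) = a is easy to check directly; the following short Python sketch (mine, using a deliberately naive divisor sum) verifies the (220, 284) pair and finds the first few pairs by brute force:

```python
def s(n: int) -> int:
    """Aliquot sum: sum of the proper divisors of n, i.e. s(n) = sigma(n) - n."""
    return sum(k for k in range(1, n) if n % k == 0)

# The smallest amicable pair:
assert s(220) == 284 and s(284) == 220

def amicable_pairs(limit: int):
    """Naively yield amicable pairs (a, b) with a < b and a below `limit`."""
    for a in range(2, limit):
        b = s(a)
        if b > a and s(b) == a:
            yield a, b

print(list(amicable_pairs(10_000)))
# [(220, 284), (1184, 1210), (2620, 2924), (5020, 5564), (6232, 6368)]
```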
https://en.wikipedia.org/wiki/Agar
Agar ( or ), or agar-agar, is a jelly-like substance consisting of polysaccharides obtained from the cell walls of some species of red algae, primarily from "ogonori" (Gracilaria) and "tengusa" (Gelidiaceae). As found in nature, agar is a mixture of two components, the linear polysaccharide agarose and a heterogeneous mixture of smaller molecules called agaropectin. It forms the supporting structure in the cell walls of certain species of algae and is released on boiling. These algae are known as agarophytes, belonging to the Rhodophyta (red algae) phylum. The processing of food-grade agar removes the agaropectin, and the commercial product is essentially pure agarose. Agar has been used as an ingredient in desserts throughout Asia and also as a solid substrate to contain culture media for microbiological work. Agar can be used as a laxative; an appetite suppressant; a vegan substitute for gelatin; a thickener for soups; in fruit preserves, ice cream, and other desserts; as a clarifying agent in brewing; and for sizing paper and fabrics. Etymology The word agar comes from agar-agar, the Malay name for red algae (Gigartina, Eucheuma, Gracilaria) from which the jelly is produced. It is also known as Kanten () (from the phrase kan-zarashi tokoroten () or “cold-exposed agar”), Japanese isinglass, China grass, Ceylon moss or Jaffna moss. Gracilaria edulis or its synonym G. lichenoides is specifically referred to as agal-agal or Ceylon agar. History Macroalgae have been used widely as food by coastal cultures, especially in Southeast Asia. In the Philippines, Gracilaria, known as gulaman (or gulaman dagat) in Tagalog, have been harvested and used as food for centuries, eaten both fresh or sun-dried and turned into jellies. The earliest historical attestation is from the Vocabulario de la lengua tagala (1754) by the Jesuit priests Juan de Noceda and Pedro de Sanlucar, where golaman or gulaman was defined as "una yerva, de que se haze conserva a modo de Halea, naz
https://en.wikipedia.org/wiki/Antioxidant
Antioxidants are compounds that inhibit oxidation (usually occurring as autoxidation), a chemical reaction that can produce free radicals. Autoxidation leads to degradation of organic compounds, including living matter. Antioxidants are frequently added to industrial products, such as polymers, fuels, and lubricants, to extend their usable lifetimes. Foods are also treated with antioxidants to forestall spoilage, in particular the rancidification of oils and fats. In cells, antioxidants such as glutathione, mycothiol or bacillithiol, and enzyme systems like superoxide dismutase, can prevent damage from oxidative stress. Known dietary antioxidants are vitamins A, C, and E, but the term antioxidant has also been applied to numerous other dietary compounds that only have antioxidant properties in vitro, with little evidence for antioxidant properties in vivo. Dietary supplements marketed as antioxidants have not been shown to maintain health or prevent disease in humans. History As part of their adaptation from marine life, terrestrial plants began producing non-marine antioxidants such as ascorbic acid (vitamin C), polyphenols and tocopherols. The evolution of angiosperm plants between 50 and 200 million years ago resulted in the development of many antioxidant pigments – particularly during the Jurassic period – as chemical defences against reactive oxygen species that are byproducts of photosynthesis. Originally, the term antioxidant specifically referred to a chemical that prevented the consumption of oxygen. In the late 19th and early 20th centuries, extensive study concentrated on the use of antioxidants in important industrial processes, such as the prevention of metal corrosion, the vulcanization of rubber, and the polymerization of fuels in the fouling of internal combustion engines. Early research on the role of antioxidants in biology focused on their use in preventing the oxidation of unsaturated fats, which is the cause of rancidity. Antioxidant activit
https://en.wikipedia.org/wiki/Bit
The bit is the most basic unit of information in computing and digital communications. The name is a portmanteau of binary digit. The bit represents a logical state with one of two possible values. These values are most commonly represented as either , but other representations such as true/false, yes/no, on/off, or +/− are also widely used. The relation between these values and the physical states of the underlying storage or device is a matter of convention, and different assignments may be used even within the same device or program. It may be physically implemented with a two-state device. A contiguous group of binary digits is commonly called a bit string, a bit vector, or a single-dimensional (or multi-dimensional) bit array. A group of eight bits is called one byte, but historically the size of the byte is not strictly defined. Frequently, half, full, double and quadruple words consist of a number of bytes which is a low power of two. A string of four bits is a nibble. In information theory, one bit is the information entropy of a random binary variable that is 0 or 1 with equal probability, or the information that is gained when the value of such a variable becomes known. As a unit of information, the bit is also known as a shannon, named after Claude E. Shannon. The symbol for the binary digit is either "bit", per the IEC 80000-13:2008 standard, or the lowercase character "b", per the IEEE 1541-2002 standard. Use of the latter may create confusion with the capital "B" which is the international standard symbol for the byte. History The encoding of data by discrete bits was used in the punched cards invented by Basile Bouchon and Jean-Baptiste Falcon (1732), developed by Joseph Marie Jacquard (1804), and later adopted by Semyon Korsakov, Charles Babbage, Herman Hollerith, and early computer manufacturers like IBM. A variant of that idea was the perforated paper tape. In all those systems, the medium (card or tape) conceptually carried an array of hole p
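The statement that a fair binary variable carries exactly one bit (one shannon) of information can be checked with the binary entropy function; a small Python sketch (mine) follows.

```python
import math

def binary_entropy(p: float) -> float:
    """Entropy, in bits, of a binary variable that is 1 with probability p."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

print(binary_entropy(0.5))   # 1.0  -- a fair coin flip is worth exactly one bit
print(binary_entropy(0.9))   # ~0.47 -- a heavily biased bit conveys less information
```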
https://en.wikipedia.org/wiki/Byte
The byte is a unit of digital information that most commonly consists of eight bits. Historically, the byte was the number of bits used to encode a single character of text in a computer and for this reason it is the smallest addressable unit of memory in many computer architectures. To disambiguate arbitrarily sized bytes from the common 8-bit definition, network protocol documents such as the Internet Protocol () refer to an 8-bit byte as an octet. Those bits in an octet are usually counted with numbering from 0 to 7 or 7 to 0 depending on the bit endianness. The first bit is number 0, making the eighth bit number 7. The size of the byte has historically been hardware-dependent and no definitive standards existed that mandated the size. Sizes from 1 to 48 bits have been used. The six-bit character code was an often-used implementation in early encoding systems, and computers using six-bit and nine-bit bytes were common in the 1960s. These systems often had memory words of 12, 18, 24, 30, 36, 48, or 60 bits, corresponding to 2, 3, 4, 5, 6, 8, or 10 six-bit bytes. In this era, bit groupings in the instruction stream were often referred to as syllables or slab, before the term byte became common. The modern de facto standard of eight bits, as documented in ISO/IEC 2382-1:1993, is a convenient power of two permitting the binary-encoded values 0 through 255 for one byte, as 2 to the power of 8 is 256. The international standard IEC 80000-13 codified this common meaning. Many types of applications use information representable in eight or fewer bits and processor designers commonly optimize for this usage. The popularity of major commercial computing architectures has aided in the ubiquitous acceptance of the 8-bit byte. Modern architectures typically use 32- or 64-bit words, built of four or eight bytes, respectively. The unit symbol for the byte was designated as the upper-case letter B by the International Electrotechnical Commission (IEC) and Institute of Electri
https://en.wikipedia.org/wiki/Bay%20leaf
The bay leaf is an aromatic leaf commonly used as a herb in cooking. It can be used whole, either dried or fresh, in which case it is removed from the dish before consumption, or less commonly used in ground form. The flavor that a bay leaf imparts to a dish has not been universally agreed upon, but most agree it is a subtle addition. Bay leaves come from various plants and are used for their distinctive flavor and fragrance. The most common source is the bay laurel (Laurus nobilis). Other types include California bay laurel, Indian bay leaf, West Indian bay laurel, and Mexican bay laurel. Bay leaves contain essential oils, such as eucalyptol, terpenes, and methyleugenol, which contribute to their taste and aroma. Bay leaves are used in various cuisines around the world, including Indian, Filipino, European, and Caribbean. They are typically used in soups, stews, meat, seafood, and vegetable dishes. The leaves should be removed from the cooked food before eating as they can be abrasive in the digestive tract. Bay leaves are used as an insect repellent in pantries and as an active ingredient in killing jars for entomology. In Eastern Orthodoxy liturgy, they are used to symbolize Jesus' destruction of Hades and freeing of the dead. While some visually similar plants have poisonous leaves, bay leaves are not toxic and can be eaten without harm. However, they remain stiff even after cooking and may pose a choking hazard or cause harm to the digestive tract if swallowed whole or in large pieces. Canadian food and drug regulations set specific standards for bay leaves, including limits on ash content, moisture levels, and essential oil content. Sources Bay leaves come from several plants, such as: Bay laurel (Laurus nobilis, Lauraceae). Fresh or dried bay leaves are used in cooking for their distinctive flavour and fragrance. The leaves should be removed from the cooked food before eating (see safety section below). The leaves are often used to flavour soups, stew
https://en.wikipedia.org/wiki/Bulletin%20board%20system
A bulletin board system (BBS), also called a computer bulletin board service (CBBS), is a computer server running software that allows users to connect to the system using a terminal program. Once logged in, the user can perform functions such as uploading and downloading software and data, reading news and bulletins, and exchanging messages with other users through public message boards and sometimes via direct chatting. In the early 1980s, message networks such as FidoNet were developed to provide services such as NetMail, which is similar to internet-based email. Many BBSes also offer online games in which users can compete with each other. BBSes with multiple phone lines often provide chat rooms, allowing users to interact with each other. Bulletin board systems were in many ways a precursor to the modern form of the World Wide Web, social networks, and other aspects of the Internet. Low-cost, high-performance asynchronous modems drove the use of online services and BBSes through the early 1990s. InfoWorld estimated that there were 60,000 BBSes serving 17 million users in the United States alone in 1994, a collective market much larger than major online services such as CompuServe. The introduction of inexpensive dial-up internet service and the Mosaic web browser offered ease of use and global access that BBS and online systems did not provide, and led to a rapid crash in the market starting in late 1994 to early 1995. Over the next year, many of the leading BBS software providers went bankrupt and tens of thousands of BBSes disappeared. Today, BBSing survives largely as a nostalgic hobby in most parts of the world, but it is still an extremely popular form of communication for Taiwanese youth (see PTT Bulletin Board System). Most surviving BBSes are accessible over Telnet and typically offer free email accounts, FTP services, IRC and all the protocols commonly used on the Internet. Some offer access through packet switched networks or packet radio connection
https://en.wikipedia.org/wiki/Brain
The brain (or encephalon) is an organ that serves as the center of the nervous system in all vertebrate and most invertebrate animals. The brain is the largest cluster of neurons in the body and is typically located in the head, usually near organs for special senses such as vision, hearing and olfaction. It is the most specialized and energy-consuming organ in the body, responsible for complex sensory perception, motor control, endocrine regulation and the development of intelligence. While invertebrate brains arise from paired segmental ganglia (each of which is only responsible for the respective body segment) of the ventral nerve cord, vertebrate brains develop axially from the midline dorsal nerve cord as a vesicular enlargement at the rostral end of the neural tube, with centralized control over all body segments. All vertebrate brains can be embryonically divided into three parts: the forebrain (prosencephalon, subdivided into telencephalon and diencephalon), midbrain (mesencephalon) and hindbrain (rhombencephalon, subdivided into metencephalon and myelencephalon). The spinal cord, which directly interacts with somatic functions below the head, can be considered a caudal extension of the myelencephalon enclosed inside the vertebral column. Together, the brain and spinal cord constitute the central nervous system in all vertebrates. In humans, the cerebral cortex contains approximately 14–16 billion neurons, and the estimated number of neurons in the cerebellum is 55–70 billion. Each neuron is connected by synapses to several thousand other neurons, typically communicating with one another via root-like protrusions called dendrites and long fiber-like extensions called axons, which are usually myelinated and carry trains of rapid micro-electric signal pulses called action potentials to target specific recipient cells in other areas of the brain or distant parts of the body. The prefrontal cortex, which controls executive functions, is particularly well devel
https://en.wikipedia.org/wiki/Bluetooth
Bluetooth is a short-range wireless technology standard that is used for exchanging data between fixed and mobile devices over short distances and building personal area networks (PANs). In the most widely used mode, transmission power is limited to 2.5 milliwatts, giving it a very short range of up to . It employs UHF radio waves in the ISM bands, from 2.402GHz to 2.48GHz. It is mainly used as an alternative to wire connections, to exchange files between nearby portable devices and connect cell phones and music players with wireless headphones. Bluetooth is managed by the Bluetooth Special Interest Group (SIG), which has more than 35,000 member companies in the areas of telecommunication, computing, networking, and consumer electronics. The IEEE standardized Bluetooth as IEEE 802.15.1, but no longer maintains the standard. The Bluetooth SIG oversees development of the specification, manages the qualification program, and protects the trademarks. A manufacturer must meet Bluetooth SIG standards to market it as a Bluetooth device. A network of patents apply to the technology, which are licensed to individual qualifying devices. , 4.7 billion Bluetooth integrated circuit chips are shipped annually. Etymology The name "Bluetooth" was proposed in 1997 by Jim Kardach of Intel, one of the founders of the Bluetooth SIG. The name was inspired by a conversation with Sven Mattisson who related Scandinavian history through tales from Frans G. Bengtsson's The Long Ships, a historical novel about Vikings and the 10th-century Danish king Harald Bluetooth. Upon discovering a picture of the runestone of Harald Bluetooth in the book A History of the Vikings by Gwyn Jones, Kardach proposed Bluetooth as the codename for the short-range wireless program which is now called Bluetooth. According to Bluetooth's official website, Bluetooth is the Anglicised version of the Scandinavian Blåtand/Blåtann (or in Old Norse blátǫnn). It was the epithet of King Harald Bluetooth, who united th
https://en.wikipedia.org/wiki/Bluetooth%20Special%20Interest%20Group
The Bluetooth Special Interest Group (Bluetooth SIG) is the standards organization that oversees the development of Bluetooth standards and the licensing of the Bluetooth technologies and trademarks to manufacturers. The SIG is a not-for-profit, non-stock corporation founded in September 1998. The SIG is headquartered in Kirkland, Washington. The SIG does not make, manufacture or sell Bluetooth-enabled products. Introduction Bluetooth technology provides a way to exchange information between wireless devices such as PDAs, laptops, computers, printers and digital cameras via a secure, low-cost, globally available short-range radio frequency band. Originally developed by Ericsson, Bluetooth technology is now used in many different products by many different manufacturers. These manufacturers must be either Associate or Promoter members of the Bluetooth SIG (see below) before they are granted early access to the Bluetooth specifications, but published Bluetooth specifications are available online via the Bluetooth SIG Website bluetooth.com. The SIG owns the Bluetooth word mark, figure mark and combination mark. These trademarks are licensed out for use to companies that are incorporating Bluetooth wireless technology into their products. To become a licensee, a company must become a member of the Bluetooth SIG. The SIG also manages the Bluetooth SIG Qualification program, a certification process required for any product using Bluetooth wireless technology and a pre-condition of the intellectual property license for Bluetooth technology. The main tasks for the SIG are to publish the Bluetooth specifications, protect the Bluetooth trademarks and evangelize Bluetooth wireless technology. In 2016, the SIG introduced a new visual and creative identity to support Bluetooth technology as the catalyst for the Internet of Things (IoT). This change included an updated logo, a new tagline and deprecation of the Bluetooth Smart and Bluetooth Smart Ready logos. At its incept
https://en.wikipedia.org/wiki/On-base%20percentage
In baseball statistics, on-base percentage (OBP) measures how frequently a batter reaches base. An official Major League Baseball (MLB) statistic since 1984, it is sometimes referred to as on-base average (OBA), as it is rarely presented as a true percentage. Generally defined as "how frequently a batter reaches base per plate appearance", OBP is specifically calculated as the ratio of a batter's times on base (the sum of hits, bases on balls, and times hit by pitch) to the sum of at bats, bases on balls, hit by pitch, and sacrifice flies. OBP does not credit the batter for reaching base on fielding errors, fielder's choice, uncaught third strikes, fielder's obstruction, or catcher's interference. OBP is added to slugging average (SLG) to determine on-base plus slugging (OPS). The OBP of all batters faced by one pitcher or team is referred to as "on-base against". On-base percentage is calculable for professional teams dating back to the first year of National Association of Professional Base Ball Players competition in 1871, because the component values of its formula have been recorded in box scores ever since. History The statistic was invented in the late 1940s by Brooklyn Dodgers statistician Allan Roth with then-Dodgers general manager Branch Rickey. In 1954, Rickey, who was then the general manager of the Pittsburgh Pirates, was featured in a Life Magazine graphic in which the formula for on-base percentage was shown as the first component of an all-encompassing "offense" equation. However, it was not named as on-base percentage, and there is little evidence that Roth's statistic was taken seriously at the time by the baseball community at large. On-base percentage became an official MLB statistic in 1984. Its perceived importance jumped after the influential 2003 book Moneyball highlighted Oakland Athletics general manager Billy Beane's focus on the statistic. Many baseball observers, particularly those influenced by the field of sabermetrics, now c
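The calculation described above translates directly into code; this is a small Python sketch (the season line in the example is invented, not a real player's):

```python
def on_base_percentage(hits: int, walks: int, hit_by_pitch: int,
                       at_bats: int, sac_flies: int) -> float:
    """OBP = (H + BB + HBP) / (AB + BB + HBP + SF)."""
    times_on_base = hits + walks + hit_by_pitch
    opportunities = at_bats + walks + hit_by_pitch + sac_flies
    return times_on_base / opportunities

# Invented season line: 150 hits, 70 walks, 5 hit-by-pitch, 500 at bats, 5 sacrifice flies.
print(round(on_base_percentage(150, 70, 5, 500, 5), 3))   # 0.388
```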
https://en.wikipedia.org/wiki/Binary-coded%20decimal
In computing and electronic systems, binary-coded decimal (BCD) is a class of binary encodings of decimal numbers where each digit is represented by a fixed number of bits, usually four or eight. Sometimes, special bit patterns are used for a sign or other indications (e.g. error or overflow). In byte-oriented systems (i.e. most modern computers), the term unpacked BCD usually implies a full byte for each digit (often including a sign), whereas packed BCD typically encodes two digits within a single byte by taking advantage of the fact that four bits are enough to represent the range 0 to 9. The precise four-bit encoding, however, may vary for technical reasons (e.g. Excess-3). The ten states representing a BCD digit are sometimes called tetrades (the nibble typically needed to hold them is also known as a tetrade) while the unused, don't care-states are named , pseudo-decimals or pseudo-decimal digits. BCD's main virtue, in comparison to binary positional systems, is its more accurate representation and rounding of decimal quantities, as well as its ease of conversion into conventional human-readable representations. Its principal drawbacks are a slight increase in the complexity of the circuits needed to implement basic arithmetic as well as slightly less dense storage. BCD was used in many early decimal computers, and is implemented in the instruction set of machines such as the IBM System/360 series and its descendants, Digital Equipment Corporation's VAX, the Burroughs B1700, and the Motorola 68000-series processors. BCD per se is not as widely used as in the past, and is unavailable or limited in newer instruction sets (e.g., ARM; x86 in long mode). However, decimal fixed-point and decimal floating-point formats are still important and continue to be used in financial, commercial, and industrial computing, where the subtle conversion and fractional rounding errors that are inherent in binary floating point formats cannot be tolerated. Background BCD takes
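A minimal Python sketch (mine) of packed BCD as described above: two decimal digits per byte, one digit per four-bit nibble, using the straightforward 8421 encoding rather than a variant such as Excess-3.

```python
def to_packed_bcd(value: int) -> bytes:
    """Encode a non-negative integer as packed BCD: two decimal digits per byte."""
    digits = [int(ch) for ch in str(value)]
    if len(digits) % 2:
        digits.insert(0, 0)                      # pad to an even number of digits
    return bytes((hi << 4) | lo for hi, lo in zip(digits[0::2], digits[1::2]))

def from_packed_bcd(data: bytes) -> int:
    """Decode packed BCD back to an integer."""
    digits = []
    for byte in data:
        digits += [(byte >> 4) & 0xF, byte & 0xF]
    return int("".join(map(str, digits)))

encoded = to_packed_bcd(1995)
print(encoded.hex())             # '1995' -- each hex nibble is one decimal digit
print(from_packed_bcd(encoded))  # 1995
```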
https://en.wikipedia.org/wiki/Brain%20abscess
Brain abscess (or cerebral abscess) is an abscess within the brain tissue caused by inflammation and collection of infected material coming from local (ear infection, dental abscess, infection of paranasal sinuses, infection of the mastoid air cells of the temporal bone, epidural abscess) or remote (lung, heart, kidney etc.) infectious sources. The infection may also be introduced through a skull fracture following a head trauma or surgical procedures. Brain abscess is usually associated with congenital heart disease in young children. It may occur at any age but is most frequent in the third decade of life. Signs and symptoms Fever, headache, and neurological problems, while classic, only occur in 20% of people with brain abscess. The famous triad of fever, headache and focal neurologic findings is highly suggestive of brain abscess. These symptoms are caused by a combination of increased intracranial pressure due to a space-occupying lesion (headache, vomiting, confusion, coma), infection (fever, fatigue etc.) and focal neurologic brain tissue damage (hemiparesis, aphasia etc.). The most frequent presenting symptoms are headache, drowsiness, confusion, seizures, hemiparesis or speech difficulties together with fever with a rapidly progressive course. Headache is characteristically worse at night and in the morning, as the intracranial pressure naturally increases when in the supine position. This elevation similarly stimulates the medullary vomiting center and area postrema, leading to morning vomiting. Other symptoms and findings depend largely on the specific location of the abscess in the brain. An abscess in the cerebellum, for instance, may cause additional complaints as a result of brain stem compression and hydrocephalus. Neurological examination may reveal a stiff neck in occasional cases (erroneously suggesting meningitis). Pathophysiology Bacterial Anaerobic and microaerophilic cocci and gram-negative and gram-positive anaerobic bacilli are the p
https://en.wikipedia.org/wiki/Binomial%20distribution
In probability theory and statistics, the binomial distribution with parameters n and p is the discrete probability distribution of the number of successes in a sequence of n independent experiments, each asking a yes–no question, and each with its own Boolean-valued outcome: success (with probability p) or failure (with probability ). A single success/failure experiment is also called a Bernoulli trial or Bernoulli experiment, and a sequence of outcomes is called a Bernoulli process; for a single trial, i.e., n = 1, the binomial distribution is a Bernoulli distribution. The binomial distribution is the basis for the popular binomial test of statistical significance. The binomial distribution is frequently used to model the number of successes in a sample of size n drawn with replacement from a population of size N. If the sampling is carried out without replacement, the draws are not independent and so the resulting distribution is a hypergeometric distribution, not a binomial one. However, for N much larger than n, the binomial distribution remains a good approximation, and is widely used. Definitions Probability mass function In general, if the random variable X follows the binomial distribution with parameters n ∈ and p ∈ [0,1], we write X ~ B(n, p). The probability of getting exactly k successes in n independent Bernoulli trials (with the same rate p) is given by the probability mass function: for k = 0, 1, 2, ..., n, where is the binomial coefficient, hence the name of the distribution. The formula can be understood as follows: k successes occur with probability pk and n − k failures occur with probability . However, the k successes can occur anywhere among the n trials, and there are different ways of distributing k successes in a sequence of n trials. In creating reference tables for binomial distribution probability, usually the table is filled in up to n/2 values. This is because for k > n/2, the probability can be calculated by its complement a
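The probability mass function is easy to evaluate directly with the binomial coefficient; a brief Python sketch (numbers mine) follows, including the complement symmetry mentioned for tabulation.

```python
from math import comb

def binomial_pmf(k: int, n: int, p: float) -> float:
    """P(X = k) for X ~ B(n, p): C(n, k) * p**k * (1 - p)**(n - k)."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Exactly 3 successes in 10 fair Bernoulli trials:
print(round(binomial_pmf(3, 10, 0.5), 4))       # 0.1172

# Symmetry used when tabulating: P(k successes at rate p) = P(n - k at rate 1 - p).
assert abs(binomial_pmf(3, 10, 0.3) - binomial_pmf(7, 10, 0.7)) < 1e-12
```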
https://en.wikipedia.org/wiki/Biostatistics
Biostatistics (also known as biometry) is a branch of statistics that applies statistical methods to a wide range of topics in biology. It encompasses the design of biological experiments, the collection and analysis of data from those experiments and the interpretation of the results. History Biostatistics and genetics Biostatistical modeling forms an important part of numerous modern biological theories. Genetic studies have, since their beginning, used statistical concepts to understand observed experimental results. Some geneticists even contributed statistical advances with the development of new methods and tools. Gregor Mendel began the study of genetics by investigating segregation patterns in families of peas and used statistics to explain the collected data. In the early 1900s, after the rediscovery of Mendel's Mendelian inheritance work, there were gaps in understanding between genetics and evolutionary Darwinism. Francis Galton tried to expand Mendel's discoveries with human data and proposed a different model with fractions of the heredity coming from each ancestor, composing an infinite series. He called this the theory of the "Law of Ancestral Heredity". His ideas were strongly contested by William Bateson, who followed Mendel's conclusion that genetic inheritance came exclusively from the parents, half from each of them. This led to a vigorous debate between the biometricians, who supported Galton's ideas, such as Raphael Weldon, Arthur Dukinfield Darbishire and Karl Pearson, and the Mendelians, who supported Bateson's (and Mendel's) ideas, such as Charles Davenport and Wilhelm Johannsen. Later, biometricians could not reproduce Galton's conclusions in different experiments, and Mendel's ideas prevailed. By the 1930s, models built on statistical reasoning had helped to resolve these differences and to produce the neo-Darwinian modern evolutionary synthesis. Resolving these differences also made it possible to define the concept of population genetics and brou
https://en.wikipedia.org/wiki/Binary%20relation
In mathematics, a binary relation associates elements of one set, called the domain, with elements of another set, called the codomain. A binary relation over sets and is a new set of ordered pairs consisting of elements from and from . It is a generalization of the more widely understood idea of a unary function. It encodes the common concept of relation: an element is related to an element , if and only if the pair belongs to the set of ordered pairs that defines the binary relation. A binary relation is the most studied special case of an -ary relation over sets , which is a subset of the Cartesian product An example of a binary relation is the "divides" relation over the set of prime numbers and the set of integers , in which each prime is related to each integer that is a multiple of , but not to an integer that is not a multiple of . In this relation, for instance, the prime number 2 is related to numbers such as −4, 0, 6, 10, but not to 1 or 9, just as the prime number 3 is related to 0, 6, and 9, but not to 4 or 13. Binary relations are used in many branches of mathematics to model a wide variety of concepts. These include, among others: the "is greater than", "is equal to", and "divides" relations in arithmetic; the "is congruent to" relation in geometry; the "is adjacent to" relation in graph theory; the "is orthogonal to" relation in linear algebra. A function may be defined as a special kind of binary relation. Binary relations are also heavily used in computer science. A binary relation over sets and is an element of the power set of Since the latter set is ordered by inclusion (⊆), each relation has a place in the lattice of subsets of A binary relation is called a homogeneous relation when X = Y. A binary relation is also called a heterogeneous relation when it is not necessary that X = Y. Since relations are sets, they can be manipulated using set operations, including union, intersection, and complementation, and satisfyin
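Since a binary relation is just a set of ordered pairs, the "divides" example can be written out for a finite fragment; the Python sketch below (mine) reproduces the specific instances cited above.

```python
# A finite fragment of the "divides" relation between the primes {2, 3}
# and the integers from -4 to 13, represented as a set of ordered pairs.
primes = {2, 3}
integers = range(-4, 14)
divides = {(p, z) for p in primes for z in integers if z % p == 0}

print((2, 6) in divides)    # True:  2 is related to 6
print((2, 9) in divides)    # False: 2 is not related to 9
print((3, 9) in divides)    # True:  3 is related to 9
print((3, 13) in divides)   # False: 3 is not related to 13
```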
https://en.wikipedia.org/wiki/Braille
Braille ( , ) is a tactile writing system used by people who are visually impaired. It can be read either on embossed paper or by using refreshable braille displays that connect to computers and smartphone devices. Braille can be written using a slate and stylus, a braille writer, an electronic braille notetaker or with the use of a computer connected to a braille embosser. Braille is named after its creator, Louis Braille, a Frenchman who lost his sight as a result of a childhood accident. In 1824, at the age of fifteen, he developed the braille code based on the French alphabet as an improvement on night writing. He published his system, which subsequently included musical notation, in 1829. The second revision, published in 1837, was the first binary form of writing developed in the modern era. Braille characters are formed using a combination of six raised dots arranged in a 3 × 2 matrix, called the braille cell. The number and arrangement of these dots distinguishes one character from another. Since the various braille alphabets originated as transcription codes for printed writing, the mappings (sets of character designations) vary from language to language, and even within one; in English Braille there are 3 levels of braille: uncontracted braille a letter-by-letter transcription used for basic literacy; contracted braille an addition of abbreviations and contractions used as a space-saving mechanism; and grade 3 various non-standardized personal stenography that is less commonly used. In addition to braille text (letters, punctuation, contractions), it is also possible to create embossed illustrations and graphs, with the lines either solid or made of series of dots, arrows, and bullets that are larger than braille dots. A full braille cell includes six raised dots arranged in two columns, each column having three dots. The dot positions are identified by numbers from one to six. There are 64 possible combinations, including no dots at all for a word spa
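The count of 64 possible cells follows from two states (raised or flat) for each of the six dot positions; a short Python check (mine):

```python
from itertools import product

# Two states for each of the six dot positions gives 2**6 = 64 patterns,
# including the all-blank cell used as a space.
cells = list(product((0, 1), repeat=6))
print(len(cells))   # 64
print(cells[0])     # (0, 0, 0, 0, 0, 0) -- the blank cell
```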
https://en.wikipedia.org/wiki/Bijection
A bijection is a function that is both injective (one-to-one) and surjective (onto). In other words, every element in the codomain of the function is mapped to by exactly one element in the domain of the function. Equivalently, a bijection is a binary relation between two sets, such that each element of one set is paired with exactly one element of the other set, and each element of the other set is paired with exactly one element of the first set. A bijection is also called a bijective function, one-to-one correspondence, or invertible function. The term one-to-one correspondence must not be confused with one-to-one function, which refers to an injective function (see examples on figures). A bijection from a set X to a set Y has an inverse function from Y to X. There exists a bijection between two sets if and only if they have the same cardinal number, which, in the case of finite sets, is simply the number of their elements. A bijective function from a set to itself is also called a permutation, and the set of all permutations of a set forms its symmetric group. Some bijections with further properties have received specific names, which include automorphisms, isomorphisms, homeomorphisms, diffeomorphisms, permutation groups, and most geometric transformations. Galois correspondences are bijections between sets of mathematical objects of apparently very different nature. Definition For a pairing between X and Y (where Y need not be different from X) to be a bijection, four properties must hold: each element of X must be paired with at least one element of Y, no element of X may be paired with more than one element of Y, each element of Y must be paired with at least one element of X, and no element of Y may be paired with more than one element of X. Satisfying properties (1) and (2) means that a pairing is a function with domain X. It is more common to see properties (1) and (2) written as a single statement: Every element of X is paired with exactl
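For finite sets the four properties can be checked mechanically; the Python sketch below (mine) tests injectivity and surjectivity of a mapping given as a dictionary and builds the inverse of a bijection.

```python
def is_bijection(mapping: dict, codomain: set) -> bool:
    """True if `mapping` is injective and surjective onto `codomain`."""
    values = list(mapping.values())
    injective = len(values) == len(set(values))     # no value hit twice
    surjective = set(values) == codomain            # every codomain element hit
    return injective and surjective

f = {1: "a", 2: "b", 3: "c"}
g = {1: "a", 2: "a", 3: "c"}                        # not injective

print(is_bijection(f, {"a", "b", "c"}))             # True
print(is_bijection(g, {"a", "b", "c"}))             # False

# A bijection has an inverse, obtained by swapping each pair:
print({v: k for k, v in f.items()})                 # {'a': 1, 'b': 2, 'c': 3}
```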
https://en.wikipedia.org/wiki/Binary%20function
In mathematics, a binary function (also called bivariate function, or function of two variables) is a function that takes two inputs. Precisely stated, a function is binary if there exists sets such that where is the Cartesian product of and Alternative definitions Set-theoretically, a binary function can be represented as a subset of the Cartesian product , where belongs to the subset if and only if . Conversely, a subset defines a binary function if and only if for any and , there exists a unique such that belongs to . is then defined to be this . Alternatively, a binary function may be interpreted as simply a function from to . Even when thought of this way, however, one generally writes instead of . (That is, the same pair of parentheses is used to indicate both function application and the formation of an ordered pair.) Examples Division of whole numbers can be thought of as a function. If is the set of integers, is the set of natural numbers (except for zero), and is the set of rational numbers, then division is a binary function . Another example is that of inner products, or more generally functions of the form , where , are real-valued vectors of appropriate size and is a matrix. If is a positive definite matrix, this yields an inner product. Functions of two real variables Functions whose domain is a subset of are often also called functions of two variables even if their domain does not form a rectangle and thus the cartesian product of two sets. Restrictions to ordinary functions In turn, one can also derive ordinary functions of one variable from a binary function. Given any element , there is a function , or , from to , given by . Similarly, given any element , there is a function , or , from to , given by . In computer science, this identification between a function from to and a function from to , where is the set of all functions from to , is called currying. Generalisations The various concepts relating to functi
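The identification of a two-argument function with a function returning a function (currying) is easy to demonstrate; here is a small Python sketch (mine), using division into the rationals as the running example.

```python
from fractions import Fraction

def divide(m: int, n: int) -> Fraction:
    """A binary function: an integer and a nonzero natural number map to a rational."""
    return Fraction(m, n)

def curry(f):
    """Turn a two-argument function into a function that returns a function."""
    return lambda x: lambda y: f(x, y)

print(divide(7, 2))            # 7/2
print(curry(divide)(7)(2))     # 7/2 -- the same value, applied one argument at a time

reciprocal = curry(divide)(1)  # fix the first argument: y -> 1/y
print(reciprocal(4))           # 1/4
```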
https://en.wikipedia.org/wiki/Binary%20operation
In mathematics, a binary operation or dyadic operation is a rule for combining two elements (called operands) to produce another element. More formally, a binary operation is an operation of arity two. More specifically, an internal binary operation on a set is a binary operation whose two domains and the codomain are the same set. Examples include the familiar arithmetic operations of addition, subtraction, and multiplication. Other examples are readily found in different areas of mathematics, such as vector addition, matrix multiplication, and conjugation in groups. An operation of arity two that involves several sets is sometimes also called a binary operation. For example, scalar multiplication of vector spaces takes a scalar and a vector to produce a vector, and scalar product takes two vectors to produce a scalar. Such binary operations may also be called binary functions. Binary operations are the keystone of most structures that are studied in algebra, in particular in semigroups, monoids, groups, rings, fields, and vector spaces. Terminology More precisely, a binary operation on a set is a mapping of the elements of the Cartesian product to : Because the result of performing the operation on a pair of elements of is again an element of , the operation is called a closed (or internal) binary operation on (or sometimes expressed as having the property of closure). If is not a function but a partial function, then is called a partial binary operation. For instance, division of real numbers is a partial binary operation, because one can't divide by zero: is undefined for every real number . In both model theory and classical universal algebra, binary operations are required to be defined on all elements of . However, partial algebras generalize universal algebras to allow partial operations. Sometimes, especially in computer science, the term binary operation is used for any binary function. Properties and examples Typical examples of binary
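The distinction between a closed binary operation and a partial one can be shown in a few lines of Python (mine); integer addition is closed on the integers, while division is undefined at zero.

```python
def add(a: int, b: int) -> int:
    """A closed (internal) binary operation: Z x Z -> Z."""
    return a + b

def divide(x: float, y: float) -> float:
    """A partial binary operation on the reals: undefined when y == 0."""
    if y == 0:
        raise ValueError("x / 0 is undefined; division is only a partial operation")
    return x / y

print(add(2, 3))         # 5 -- again an integer, so the operation is closed
print(divide(3.0, 4.0))  # 0.75
try:
    divide(1.0, 0.0)
except ValueError as exc:
    print(exc)
```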
https://en.wikipedia.org/wiki/Biochemistry
Biochemistry or biological chemistry is the study of chemical processes within and relating to living organisms. A sub-discipline of both chemistry and biology, biochemistry may be divided into three fields: structural biology, enzymology, and metabolism. Over the last decades of the 20th century, biochemistry has become successful at explaining living processes through these three disciplines. Almost all areas of the life sciences are being uncovered and developed through biochemical methodology and research. Biochemistry focuses on understanding the chemical basis which allows biological molecules to give rise to the processes that occur within living cells and between cells, in turn relating greatly to the understanding of tissues and organs, as well as organism structure and function. Biochemistry is closely related to molecular biology, which is the study of the molecular mechanisms of biological phenomena. Much of biochemistry deals with the structures, bonding, functions, and interactions of biological macromolecules, such as proteins, nucleic acids, carbohydrates, and lipids. They provide the structure of cells and perform many of the functions associated with life. The chemistry of the cell also depends upon the reactions of small molecules and ions. These can be inorganic (for example, water and metal ions) or organic (for example, the amino acids, which are used to synthesize proteins). The mechanisms used by cells to harness energy from their environment via chemical reactions are known as metabolism. The findings of biochemistry are applied primarily in medicine, nutrition and agriculture. In medicine, biochemists investigate the causes and cures of diseases. Nutrition studies how to maintain health and wellness and also the effects of nutritional deficiencies. In agriculture, biochemists investigate soil and fertilizers, with the goal of improving crop cultivation, crop storage, and pest control. In recent decades, biochemical principles a
https://en.wikipedia.org/wiki/Boolean%20algebra%20%28structure%29
In abstract algebra, a Boolean algebra or Boolean lattice is a complemented distributive lattice. This type of algebraic structure captures essential properties of both set operations and logic operations. A Boolean algebra can be seen as a generalization of a power set algebra or a field of sets, or its elements can be viewed as generalized truth values. It is also a special case of a De Morgan algebra and a Kleene algebra (with involution). Every Boolean algebra gives rise to a Boolean ring, and vice versa, with ring multiplication corresponding to conjunction or meet ∧, and ring addition to exclusive disjunction or symmetric difference (not disjunction ∨). However, the theory of Boolean rings has an inherent asymmetry between the two operators, while the axioms and theorems of Boolean algebra express the symmetry of the theory described by the duality principle. History The term "Boolean algebra" honors George Boole (1815–1864), a self-educated English mathematician. He introduced the algebraic system initially in a small pamphlet, The Mathematical Analysis of Logic, published in 1847 in response to an ongoing public controversy between Augustus De Morgan and William Hamilton, and later as a more substantial book, The Laws of Thought, published in 1854. Boole's formulation differs from that described above in some important respects. For example, conjunction and disjunction in Boole were not a dual pair of operations. Boolean algebra emerged in the 1860s, in papers written by William Jevons and Charles Sanders Peirce. The first systematic presentation of Boolean algebra and distributive lattices is owed to the 1890 Vorlesungen of Ernst Schröder. The first extensive treatment of Boolean algebra in English is A. N. Whitehead's 1898 Universal Algebra. Boolean algebra as an axiomatic algebraic structure in the modern axiomatic sense begins with a 1904 paper by Edward V. Huntington. Boolean algebra came of age as serious mathematics with the work of Marshall S
https://en.wikipedia.org/wiki/Bandwidth%20%28signal%20processing%29
Bandwidth is the difference between the upper and lower frequencies in a continuous band of frequencies. It is typically measured in hertz, and depending on context, may specifically refer to passband bandwidth or baseband bandwidth. Passband bandwidth is the difference between the upper and lower cutoff frequencies of, for example, a band-pass filter, a communication channel, or a signal spectrum. Baseband bandwidth applies to a low-pass filter or baseband signal; the bandwidth is equal to its upper cutoff frequency. Bandwidth in hertz is a central concept in many fields, including electronics, information theory, digital communications, radio communications, signal processing, and spectroscopy and is one of the determinants of the capacity of a given communication channel. A key characteristic of bandwidth is that any band of a given width can carry the same amount of information, regardless of where that band is located in the frequency spectrum. For example, a 3 kHz band can carry a telephone conversation whether that band is at baseband (as in a POTS telephone line) or modulated to some higher frequency. However, wide bandwidths are easier to obtain and process at higher frequencies because the is smaller. Overview Bandwidth is a key concept in many telecommunications applications. In radio communications, for example, bandwidth is the frequency range occupied by a modulated carrier signal. An FM radio receiver's tuner spans a limited range of frequencies. A government agency (such as the Federal Communications Commission in the United States) may apportion the regionally available bandwidth to broadcast license holders so that their signals do not mutually interfere. In this context, bandwidth is also known as channel spacing. For other applications, there are other definitions. One definition of bandwidth, for a system, could be the range of frequencies over which the system produces a specified level of performance. A less strict and more practica
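As a hedged illustration of the half-power definition (my own synthetic response curve, assuming NumPy is available), the bandwidth is read off as the distance between the two frequencies where the power response falls to half its peak.

```python
import numpy as np

freqs = np.linspace(0.0, 10_000.0, 10_001)                 # Hz, 1 Hz resolution
power = 1.0 / (1.0 + ((freqs - 3_000.0) / 500.0) ** 2)     # synthetic power response, peak at 3 kHz

half_power = power.max() / 2.0
passband = freqs[power >= half_power]
f_low, f_high = passband.min(), passband.max()

print(f"lower half-power point: {f_low:.0f} Hz")            # ~2500 Hz
print(f"upper half-power point: {f_high:.0f} Hz")           # ~3500 Hz
print(f"bandwidth: {f_high - f_low:.0f} Hz")                # ~1000 Hz
```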
https://en.wikipedia.org/wiki/Biopolymer
Biopolymers are natural polymers produced by the cells of living organisms. Like other polymers, biopolymers consist of monomeric units that are covalently bonded in chains to form larger molecules. There are three main classes of biopolymers, classified according to the monomers used and the structure of the biopolymer formed: polynucleotides, polypeptides, and polysaccharides. The polynucleotides, RNA and DNA, are long polymers of nucleotides. Polypeptides include proteins and shorter polymers of amino acids; some major examples include collagen, actin, and fibrin. Polysaccharides are linear or branched chains of sugar carbohydrates; examples include starch, cellulose, and alginate. Other examples of biopolymers include natural rubbers (polymers of isoprene), suberin and lignin (complex polyphenolic polymers), cutin and cutan (complex polymers of long-chain fatty acids), melanin, and polyhydroxyalkanoates (PHAs). In addition to their many essential roles in living organisms, biopolymers have applications in many fields including the food industry, manufacturing, packaging, and biomedical engineering. Biopolymers versus synthetic polymers A major defining difference between biopolymers and synthetic polymers can be found in their structures. All polymers are made of repetitive units called monomers. Biopolymers often have a well-defined structure, though this is not a defining characteristic (example: lignocellulose): The exact chemical composition and the sequence in which these units are arranged is called the primary structure, in the case of proteins. Many biopolymers spontaneously fold into characteristic compact shapes (see also "protein folding" as well as secondary structure and tertiary structure), which determine their biological functions and depend in a complicated way on their primary structures. Structural biology is the study of the structural properties of biopolymers. In contrast, most synthetic polymers have much simpler and more random (or st
https://en.wikipedia.org/wiki/Banach%20space
In mathematics, more specifically in functional analysis, a Banach space (pronounced ) is a complete normed vector space. Thus, a Banach space is a vector space with a metric that allows the computation of vector length and distance between vectors and is complete in the sense that a Cauchy sequence of vectors always converges to a well-defined limit that is within the space. Banach spaces are named after the Polish mathematician Stefan Banach, who introduced this concept and studied it systematically in 1920–1922 along with Hans Hahn and Eduard Helly. Maurice René Fréchet was the first to use the term "Banach space" and Banach in turn then coined the term "Fréchet space". Banach spaces originally grew out of the study of function spaces by Hilbert, Fréchet, and Riesz earlier in the century. Banach spaces play a central role in functional analysis. In other areas of analysis, the spaces under study are often Banach spaces. Definition A Banach space is a complete normed space $(X, \|\cdot\|)$. A normed space is a pair $(X, \|\cdot\|)$ consisting of a vector space $X$ over a scalar field $\mathbb{K}$ (where $\mathbb{K}$ is commonly $\mathbb{R}$ or $\mathbb{C}$) together with a distinguished norm $\|\cdot\| : X \to \mathbb{R}$. Like all norms, this norm induces a translation invariant distance function, called the canonical or (norm) induced metric, defined for all vectors $x, y \in X$ by $d(x, y) := \|x - y\|$. This makes $X$ into a metric space $(X, d)$. A sequence $x_1, x_2, \ldots$ is called Cauchy in $(X, d)$ if for every real $r > 0$ there exists some index $N$ such that $d(x_n, x_m) < r$ whenever $n$ and $m$ are greater than $N$. The normed space $(X, \|\cdot\|)$ is called a Banach space and the canonical metric $d$ is called a complete metric if $(X, d)$ is a complete metric space, which by definition means for every Cauchy sequence $x_1, x_2, \ldots$ in $(X, d)$ there exists some $x \in X$ such that $\lim_{n \to \infty} x_n = x$, where, because $\|x_n - x\| = d(x_n, x)$, this sequence's convergence to $x$ can equivalently be expressed as: $\lim_{n \to \infty} \|x_n - x\| = 0$. The norm $\|\cdot\|$ of a normed space $(X, \|\cdot\|)$ is called a complete norm if $(X, \|\cdot\|)$ is a Banach space. L-semi-inner product For any normed space $(X, \|\cdot\|)$ there exists an L-semi-inner product $[\cdot, \cdot]$ on $X$ such that $\|x\| = \sqrt{[x, x]}$ for all $x \in X$; in general, there may be infinitely many L-semi-inner products that satisfy this condition. L-semi-inner products are a generalizati
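A compact restatement of the completeness condition, together with one standard example that the excerpt does not spell out (the continuous functions on [0, 1] under the supremum norm), in our own notation:

```latex
% Completeness condition for a normed space (our notation), plus one example.
\[
(X,\|\cdot\|)\ \text{is a Banach space}
\iff
\text{for every Cauchy sequence }(x_n)\text{ in }X,\ \exists\,x\in X:\ \lim_{n\to\infty}\|x_n-x\|=0 .
\]
\[
\text{Example: } X = C([0,1]),\qquad \|f\|_\infty = \sup_{t\in[0,1]}|f(t)|,
\]
% which is complete, since uniform limits of continuous functions are continuous.
```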
https://en.wikipedia.org/wiki/Blood
Blood is a body fluid in the circulatory system of humans and other vertebrates that delivers necessary substances such as nutrients and oxygen to the cells, and transports metabolic waste products away from those same cells. Blood in the circulatory system is also known as peripheral blood, and the blood cells it carries, peripheral blood cells. Blood is composed of blood cells suspended in blood plasma. Plasma, which constitutes 55% of blood fluid, is mostly water (92% by volume), and contains proteins, glucose, mineral ions, hormones, carbon dioxide (plasma being the main medium for excretory product transportation), and blood cells themselves. Albumin is the main protein in plasma, and it functions to regulate the colloidal osmotic pressure of blood. The blood cells are mainly red blood cells (also called RBCs or erythrocytes), white blood cells (also called WBCs or leukocytes), and in mammals platelets (also called thrombocytes). The most abundant cells in vertebrate blood are red blood cells. These contain hemoglobin, an iron-containing protein, which facilitates oxygen transport by reversibly binding to this respiratory gas thereby increasing its solubility in blood. In contrast, carbon dioxide is mostly transported extracellularly as bicarbonate ion transported in plasma. Vertebrate blood is bright red when its hemoglobin is oxygenated and dark red when it is deoxygenated. Some animals, such as crustaceans and mollusks, use hemocyanin to carry oxygen, instead of hemoglobin. Insects and some mollusks use a fluid called hemolymph instead of blood, the difference being that hemolymph is not contained in a closed circulatory system. In most insects, this "blood" does not contain oxygen-carrying molecules such as hemoglobin because their bodies are small enough for their tracheal system to suffice for supplying oxygen. Jawed vertebrates have an adaptive immune system, based largely on white blood cells. White blood cells help to resist infections and parasite
https://en.wikipedia.org/wiki/BASIC
BASIC (Beginners' All-purpose Symbolic Instruction Code) is a family of general-purpose, high-level programming languages designed for ease of use. The original version was created by John G. Kemeny and Thomas E. Kurtz at Dartmouth College in 1963. They wanted to enable students in non-scientific fields to use computers. At the time, nearly all computers required writing custom software, which only scientists and mathematicians tended to learn. In addition to the programming language, Kemeny and Kurtz developed the Dartmouth Time Sharing System (DTSS), which allowed multiple users to edit and run BASIC programs simultaneously on remote terminals. This general model became popular on minicomputer systems like the PDP-11 and Data General Nova in the late 1960s and early 1970s. Hewlett-Packard produced an entire computer line for this method of operation, introducing the HP2000 series in the late 1960s and continuing sales into the 1980s. Many early video games trace their history to one of these versions of BASIC. The emergence of microcomputers in the mid-1970s led to the development of multiple BASIC dialects, including Microsoft BASIC in 1975. Due to the tiny main memory available on these machines, often 4 KB, a variety of Tiny BASIC dialects were also created. BASIC was available for almost any system of the era, and became the de facto programming language for home computer systems that emerged in the late 1970s. These PCs almost always had a BASIC interpreter installed by default, often in the machine's firmware or sometimes on a ROM cartridge. BASIC declined in popularity in the 1990s, as more powerful microcomputers came to market and programming languages with advanced features (such as Pascal and C) became tenable on such computers. In 1991, Microsoft released Visual Basic, combining an updated version of BASIC with a visual forms builder. This reignited use of the language and "VB" remains a major programming language in the form of VB.NET, while a hobbyist
https://en.wikipedia.org/wiki/Butterfly%20effect
In chaos theory, the butterfly effect is the sensitive dependence on initial conditions in which a small change in one state of a deterministic nonlinear system can result in large differences in a later state. The term is closely associated with the work of mathematician and meteorologist Edward Norton Lorenz. He noted that the butterfly effect is derived from the metaphorical example of the details of a tornado (the exact time of formation, the exact path taken) being influenced by minor perturbations such as a distant butterfly flapping its wings several weeks earlier. Lorenz originally used a seagull causing a storm but was persuaded to make it more poetic with the use of a butterfly and tornado by 1972. He discovered the effect when he observed runs of his weather model with initial condition data that were rounded in a seemingly inconsequential manner. He noted that the weather model would fail to reproduce the results of runs with the unrounded initial condition data. A very small change in initial conditions had created a significantly different outcome. The idea that small causes may have large effects in weather was earlier acknowledged by French mathematician and engineer Henri Poincaré. American mathematician and philosopher Norbert Wiener also contributed to this theory. Lorenz's work placed the concept of instability of the Earth's atmosphere onto a quantitative base and linked the concept of instability to the properties of large classes of dynamic systems which are undergoing nonlinear dynamics and deterministic chaos. The butterfly effect concept has since been used outside the context of weather science as a broad term for any situation where a small change is supposed to be the cause of larger consequences. History In The Vocation of Man (1800), Johann Gottlieb Fichte says "you could not remove a single grain of sand from its place without thereby ... changing something throughout all parts of the immeasurable whole". Chaos theory and the se
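As a hedged numerical illustration of the rounding experiment described above, the sketch below integrates the Lorenz (1963) system, used here only as a stand-in for Lorenz's weather model, from two initial conditions that differ by a rounding-sized amount; the crude Euler scheme and all parameter values are our own choices.

```python
# Illustrative sketch: sensitive dependence on initial conditions in the
# Lorenz (1963) system. Two runs start from initial conditions that differ
# only by a tiny perturbation of z, yet end up far apart.

def lorenz_step(state, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # One explicit-Euler step of the Lorenz equations (crude but sufficient
    # to show divergence; a serious integration would use a better scheme).
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return (x + dt * dx, y + dt * dy, z + dt * dz)

a = (1.0, 1.0, 1.0)          # reference trajectory
b = (1.0, 1.0, 1.000001)     # same, except a 1e-6 "rounding" of z

for step in range(1, 10001):
    a, b = lorenz_step(a), lorenz_step(b)
    if step % 2000 == 0:
        gap = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
        print(f"t = {step * 0.005:5.1f}  separation = {gap:.6f}")
```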
https://en.wikipedia.org/wiki/Bletchley%20Park
Bletchley Park is an English country house and estate in Bletchley, Milton Keynes (Buckinghamshire) that became the principal centre of Allied code-breaking during the Second World War. The mansion was constructed during the years following 1883 for the financier and politician Sir Herbert Leon in the Victorian Gothic, Tudor, and Dutch Baroque styles, on the site of older buildings of the same name. During World War II, the estate housed the Government Code and Cypher School (GC&CS), which regularly penetrated the secret communications of the Axis Powers, most importantly the German Enigma and Lorenz ciphers. The GC&CS team of codebreakers included Alan Turing, Gordon Welchman, Hugh Alexander, Bill Tutte, and Stuart Milner-Barry. The nature of the work at Bletchley remained secret until many years after the war. According to the official historian of British Intelligence, the "Ultra" intelligence produced at Bletchley shortened the war by two to four years, and without it the outcome of the war would have been uncertain. The team at Bletchley Park devised automatic machinery to help with decryption, culminating in the development of Colossus, the world's first programmable digital electronic computer. Codebreaking operations at Bletchley Park came to an end in 1946 and all information about the wartime operations was classified until the mid-1970s. After the war it had various uses including as a teacher-training college and local GPO headquarters. By 1990 the huts in which the codebreakers worked were being considered for demolition and redevelopment. The Bletchley Park Trust was formed in February 1992 to save large portions of the site from development. More recently, Bletchley Park has been open to the public, featuring interpretive exhibits and huts that have been rebuilt to appear as they did during their wartime operations. It receives hundreds of thousands of visitors annually. The separate National Museum of Computing, which includes a working replica Bom
https://en.wikipedia.org/wiki/Brian%20Kernighan
Brian Wilson Kernighan (; born January 30, 1942) is a Canadian computer scientist. He worked at Bell Labs and contributed to the development of Unix alongside Unix creators Ken Thompson and Dennis Ritchie. Kernighan's name became widely known through co-authorship of the first book on the C programming language (The C Programming Language) with Dennis Ritchie. Kernighan affirmed that he had no part in the design of the C language ("it's entirely Dennis Ritchie's work"). He authored many Unix programs, including ditroff. Kernighan is coauthor of the AWK and AMPL programming languages. The "K" of K&R C and of AWK both stand for "Kernighan". In collaboration with Shen Lin he devised well-known heuristics for two NP-complete optimization problems: graph partitioning and the travelling salesman problem. In a display of authorial equity, the former is usually called the Kernighan–Lin algorithm, while the latter is known as the Lin–Kernighan heuristic. Kernighan has been a professor of computer science at Princeton University since 2000 and is the director of undergraduate studies in the department of computer science. In 2015, he co-authored the book The Go Programming Language. Early life and education Kernighan was born in Toronto. He attended the University of Toronto between 1960 and 1964, earning his bachelor's degree in engineering physics. He received his Ph.D. in electrical engineering from Princeton University in 1969, completing a doctoral dissertation titled "Some graph partitioning problems related to program segmentation" under the supervision of Peter G. Weiner. Career and research Kernighan has held a professorship in the department of computer science at Princeton since 2000. Each fall he teaches a course called "Computers in Our World", which introduces the fundamentals of computing to non-majors. Kernighan was the software editor for Prentice Hall International. His "Software Tools" series spread the essence of "C/Unix thinking" with makeovers for
https://en.wikipedia.org/wiki/Borsuk%E2%80%93Ulam%20theorem
In mathematics, the Borsuk–Ulam theorem states that every continuous function from an n-sphere into Euclidean n-space maps some pair of antipodal points to the same point. Here, two points on a sphere are called antipodal if they are in exactly opposite directions from the sphere's center. Formally: if $f: S^n \to \mathbb{R}^n$ is continuous then there exists an $x \in S^n$ such that: $f(-x) = f(x)$. The case $n = 1$ can be illustrated by saying that there always exist a pair of opposite points on the Earth's equator with the same temperature. The same is true for any circle. This assumes the temperature varies continuously in space, which is, however, not always the case. The case $n = 2$ is often illustrated by saying that at any moment, there is always a pair of antipodal points on the Earth's surface with equal temperatures and equal barometric pressures, assuming that both parameters vary continuously in space. Since temperature, pressure or other such physical variables do not necessarily vary continuously, the predictions of the theorem are not guaranteed to hold as a matter of mathematical necessity. The Borsuk–Ulam theorem has several equivalent statements in terms of odd functions. Recall that $S^n$ is the n-sphere and $B^n$ is the n-ball: If $g : S^n \to \mathbb{R}^n$ is a continuous odd function, then there exists an $x \in S^n$ such that: $g(x) = 0$. If $g : B^n \to \mathbb{R}^n$ is a continuous function which is odd on $S^{n-1}$ (the boundary of $B^n$), then there exists an $x \in B^n$ such that: $g(x) = 0$. History The first historical mention of the statement of the Borsuk–Ulam theorem appears in the work of Lyusternik and Schnirelmann (1930). The first proof was given by Karol Borsuk (1933), where the formulation of the problem was attributed to Stanisław Ulam. Since then, many alternative proofs have been found by various authors. Equivalent statements The following statements are equivalent to the Borsuk–Ulam theorem. With odd functions A function $g$ is called odd (aka antipodal or antipode-preserving) if for every $x$: $g(-x) = -g(x)$. The Borsuk–Ulam theorem is equivalent to the following statement: A continuous odd function fro
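The n = 1 case admits a short numerical demonstration via the intermediate value theorem. The sketch below (the particular function f is arbitrary and ours) bisects g(t) = f(t) − f(t + π), which changes sign between 0 and π, to locate an antipodal pair with equal values.

```python
# Numeric sketch of the n = 1 case discussed above (a hypothetical
# "temperature" f on a circle): g(t) = f(t) - f(t + pi) satisfies
# g(t + pi) = -g(t), so g(0) and g(pi) have opposite signs (or vanish),
# and the intermediate value theorem yields a zero of g, i.e. a pair of
# antipodal points where f takes the same value. Bisection locates it.

import math

def f(theta: float) -> float:
    # Any continuous 2*pi-periodic function will do; this one is arbitrary.
    return math.sin(theta) + 0.5 * math.cos(3 * theta) + 0.2 * math.sin(7 * theta + 1.0)

def g(theta: float) -> float:
    return f(theta) - f(theta + math.pi)

lo, hi = 0.0, math.pi
assert g(lo) * g(hi) <= 0      # guaranteed, since g(pi) = -g(0)
for _ in range(60):            # bisection on [0, pi]
    mid = (lo + hi) / 2
    if g(lo) * g(mid) <= 0:
        hi = mid
    else:
        lo = mid

theta = (lo + hi) / 2
print(f"antipodal pair at theta = {theta:.6f}: "
      f"f(theta) = {f(theta):.6f}, f(theta + pi) = {f(theta + math.pi):.6f}")
```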
https://en.wikipedia.org/wiki/Binary%20prefix
A binary prefix is a unit prefix that indicates a multiple of a unit of measurement by an integer power of two. The most commonly used binary prefixes are kibi (symbol Ki, meaning 2^10 = 1024), mebi (Mi, 2^20 = 1,048,576), and gibi (Gi, 2^30 = 1,073,741,824). They are most often used in information technology as multipliers of bit and byte, when expressing the capacity of storage devices or the size of computer files. The binary prefixes "kibi", "mebi", etc. were defined in 1999 by the International Electrotechnical Commission (IEC), in the IEC 60027-2 standard (Amendment 2). They were meant to replace the metric (SI) decimal power prefixes, such as "kilo" ("k", 10^3 = 1000), "mega" ("M", 10^6 = 1,000,000) and "giga" ("G", 10^9 = 1,000,000,000), that were commonly used in the computer industry to indicate the nearest powers of two. For example, a memory module whose capacity was specified by the manufacturer as "2 megabytes" or "2 MB" would hold 2 × 2^20 = 2,097,152 bytes, instead of 2 × 10^6 = 2,000,000. On the other hand, a hard disk whose capacity is specified by the manufacturer as "10 gigabytes" or "10 GB", holds 10 × 10^9 = 10,000,000,000 bytes, or a little more than that, but less than 10 × 2^30 = 10,737,418,240, and a file whose size is listed as "2.3 GB" may have a size closer to 2.3 × 2^30 ≈ 2,469,606,195 bytes or to 2.3 × 10^9 = 2,300,000,000 bytes, depending on the program or operating system providing that measurement. This kind of ambiguity is often confusing to computer system users and has resulted in lawsuits. The IEC 60027-2 binary prefixes have been incorporated in the ISO/IEC 80000 standard and are supported by other standards bodies, including the BIPM, which defines the SI system, the US NIST, and the European Union. Prior to the 1999 IEC standard, some industry organizations, such as the Joint Electron Device Engineering Council (JEDEC), attempted to redefine the terms kilobyte, megabyte, and gigabyte, and the corresponding symbols KB, MB, and GB in the binary sense, for use in storage capacity measurements. However, other computer industry sectors (such as magnetic storage)
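The discrepancy described above is easy to quantify. A small worked example (our own arithmetic) comparing the decimal and binary readings of "10 GB" and "2 MB":

```python
# Small worked example of the decimal-vs-binary ambiguity described above:
# how many bytes "10 GB" denotes under decimal (SI) prefixes versus how
# many a user might expect if "GB" is read in the binary (GiB) sense.

KILO, MEGA, GIGA = 10**3, 10**6, 10**9          # decimal (SI) prefixes
KIBI, MEBI, GIBI = 2**10, 2**20, 2**30          # binary (IEC) prefixes

decimal_bytes = 10 * GIGA        # "10 GB" as marketed for a hard disk
binary_bytes = 10 * GIBI         # 10 GiB, the binary interpretation

print(f"10 GB  (decimal) = {decimal_bytes:,} bytes")
print(f"10 GiB (binary)  = {binary_bytes:,} bytes")
print(f"shortfall if GB is read as GiB: {binary_bytes - decimal_bytes:,} bytes "
      f"({(binary_bytes - decimal_bytes) / binary_bytes:.1%})")

# A "2 MB" memory module, by contrast, conventionally holds 2 * 2**20 bytes:
print(f"2 MB memory module = {2 * MEBI:,} bytes (not {2 * MEGA:,})")
```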
https://en.wikipedia.org/wiki/BQP
In computational complexity theory, bounded-error quantum polynomial time (BQP) is the class of decision problems solvable by a quantum computer in polynomial time, with an error probability of at most 1/3 for all instances. It is the quantum analogue to the complexity class BPP. A decision problem is a member of BQP if there exists a quantum algorithm (an algorithm that runs on a quantum computer) that solves the decision problem with high probability and is guaranteed to run in polynomial time. A run of the algorithm will correctly solve the decision problem with a probability of at least 2/3. Definition BQP can be viewed as the languages associated with certain bounded-error uniform families of quantum circuits. A language L is in BQP if and only if there exists a polynomial-time uniform family of quantum circuits $\{Q_n : n \in \mathbb{N}\}$, such that: For all $n \in \mathbb{N}$, $Q_n$ takes n qubits as input and outputs 1 bit. For all x in L, $\Pr[Q_{|x|}(x) = 1] \geq \tfrac{2}{3}$. For all x not in L, $\Pr[Q_{|x|}(x) = 0] \geq \tfrac{2}{3}$. Alternatively, one can define BQP in terms of quantum Turing machines. A language L is in BQP if and only if there exists a polynomial quantum Turing machine that accepts L with an error probability of at most 1/3 for all instances. Similarly to other "bounded error" probabilistic classes the choice of 1/3 in the definition is arbitrary. We can run the algorithm a constant number of times and take a majority vote to achieve any desired probability of correctness less than 1, using the Chernoff bound. The complexity class is unchanged by allowing error as high as $1/2 - n^{-c}$ on the one hand, or requiring error as small as $2^{-n^c}$ on the other hand, where c is any positive constant, and n is the length of input. A complete problem for Promise-BQP Similar to the notion of NP-completeness and other complete problems, we can define a complete problem as a problem that is in Promise-BQP and that every problem in Promise-BQP reduces to it in polynomial time. Here is an intuitive problem that is complete for efficient quantum computation, which stems
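The amplification argument mentioned above can be made concrete with a short calculation. The sketch below (our own, assuming a per-run error of exactly 1/3) evaluates the exact probability that a majority vote over k independent runs is wrong, which decays exponentially in k, as the Chernoff bound predicts.

```python
# Sketch of the error-amplification argument mentioned above: a single run
# errs with probability at most 1/3; repeating k times (k odd) and taking
# a majority vote errs only if more than half of the runs err. The exact
# binomial tail below shrinks exponentially in k.

from math import comb

def majority_error(k: int, p_err: float = 1.0 / 3.0) -> float:
    # Probability that at least (k + 1) / 2 of k independent runs are wrong.
    return sum(comb(k, i) * p_err**i * (1 - p_err) ** (k - i)
               for i in range((k // 2) + 1, k + 1))

for k in (1, 5, 15, 35, 75):
    print(f"k = {k:3d} runs: majority-vote error <= {majority_error(k):.3e}")
```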
https://en.wikipedia.org/wiki/Brouwer%20fixed-point%20theorem
Brouwer's fixed-point theorem is a fixed-point theorem in topology, named after L. E. J. (Bertus) Brouwer. It states that for any continuous function $f$ mapping a nonempty compact convex set to itself, there is a point $x_0$ such that $f(x_0) = x_0$. The simplest forms of Brouwer's theorem are for continuous functions from a closed interval in the real numbers to itself or from a closed disk to itself. A more general form than the latter is for continuous functions from a nonempty convex compact subset of Euclidean space to itself. Among hundreds of fixed-point theorems, Brouwer's is particularly well known, due in part to its use across numerous fields of mathematics. In its original field, this result is one of the key theorems characterizing the topology of Euclidean spaces, along with the Jordan curve theorem, the hairy ball theorem, the invariance of dimension and the Borsuk–Ulam theorem. This gives it a place among the fundamental theorems of topology. The theorem is also used for proving deep results about differential equations and is covered in most introductory courses on differential geometry. It appears in unlikely fields such as game theory. In economics, Brouwer's fixed-point theorem and its extension, the Kakutani fixed-point theorem, play a central role in the proof of existence of general equilibrium in market economies as developed in the 1950s by economics Nobel prize winners Kenneth Arrow and Gérard Debreu. The theorem was first studied in view of work on differential equations by the French mathematicians around Henri Poincaré and Charles Émile Picard. Proving results such as the Poincaré–Bendixson theorem requires the use of topological methods. This work at the end of the 19th century opened into several successive versions of the theorem. The case of differentiable mappings of the $n$-dimensional closed ball was first proved in 1910 by Jacques Hadamard and the general case for continuous mappings by Brouwer in 1911. Statement The theorem has several formulati
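The one-dimensional case mentioned above reduces to the intermediate value theorem, which suggests a short numerical sketch (the particular map f is an arbitrary choice of ours): since g(x) = f(x) − x is non-negative at 0 and non-positive at 1, bisection converges to a fixed point.

```python
# Sketch of the simplest case stated above: a continuous map f of the
# closed interval [0, 1] into itself has a fixed point. Since
# g(x) = f(x) - x satisfies g(0) >= 0 and g(1) <= 0, the intermediate
# value theorem applies, and bisection locates a point with f(x) = x.

import math

def f(x: float) -> float:
    # A continuous map of [0, 1] into itself (its values stay in [0, 0.75]).
    return 0.5 * math.cos(3.0 * x) ** 2 + 0.25 * x

def g(x: float) -> float:
    return f(x) - x

lo, hi = 0.0, 1.0
assert g(lo) >= 0.0 and g(hi) <= 0.0
for _ in range(60):
    mid = (lo + hi) / 2.0
    if g(mid) >= 0.0:
        lo = mid
    else:
        hi = mid

x = (lo + hi) / 2.0
print(f"approximate fixed point: x = {x:.9f}, f(x) = {f(x):.9f}")
```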
https://en.wikipedia.org/wiki/Benzoic%20acid
Benzoic acid is a white (or colorless) solid organic compound with the formula C6H5COOH, whose structure consists of a benzene ring (C6H5) with a carboxyl (COOH) substituent. The benzoyl group is often abbreviated "Bz" (not to be confused with "Bn" which is used for benzyl), thus benzoic acid is also denoted as BzOH, since the benzoyl group has the formula C6H5CO–. It is the simplest aromatic carboxylic acid. The name is derived from gum benzoin, which was for a long time its only source. Benzoic acid occurs naturally in many plants and serves as an intermediate in the biosynthesis of many secondary metabolites. Salts of benzoic acid are used as food preservatives. Benzoic acid is an important precursor for the industrial synthesis of many other organic substances. The salts and esters of benzoic acid are known as benzoates. History Benzoic acid was discovered in the sixteenth century. The dry distillation of gum benzoin was first described by Nostradamus (1556), and then by Alexius Pedemontanus (1560) and Blaise de Vigenère (1596). Justus von Liebig and Friedrich Wöhler determined the composition of benzoic acid. They also investigated how hippuric acid is related to benzoic acid. In 1875 Salkowski discovered the antifungal properties of benzoic acid, which was used for a long time in the preservation of benzoate-containing cloudberry fruits. Production Industrial preparations Benzoic acid is produced commercially by partial oxidation of toluene with oxygen. The process is catalyzed by cobalt or manganese naphthenates. The process uses abundant materials, and proceeds in high yield. The first industrial process involved the reaction of benzotrichloride (trichloromethyl benzene) with calcium hydroxide in water, using iron or iron salts as catalyst. The resulting calcium benzoate is converted to benzoic acid with hydrochloric acid. The product contains significant amounts of chlorinated benzoic acid derivatives. For this reason, benzoic acid for human consumption was
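As a rough worked example of the toluene route described above (assuming the idealized overall reaction C6H5CH3 + 3/2 O2 → C6H5COOH + H2O and 100% conversion, which real plants do not achieve), the theoretical mass yield follows directly from the molar masses:

```python
# Back-of-the-envelope sketch (our arithmetic, not from the article) for
# the toluene oxidation route: each mole of toluene can give at most one
# mole of benzoic acid, so the theoretical mass yield is just the ratio of
# molar masses.

ATOMIC_MASS = {"C": 12.011, "H": 1.008, "O": 15.999}   # g/mol

def molar_mass(formula: dict) -> float:
    return sum(ATOMIC_MASS[el] * n for el, n in formula.items())

toluene = molar_mass({"C": 7, "H": 8})               # C6H5CH3
benzoic_acid = molar_mass({"C": 7, "H": 6, "O": 2})  # C6H5COOH

kg_toluene = 1.0
kg_acid_max = kg_toluene * benzoic_acid / toluene    # 1:1 mole ratio

print(f"molar mass of toluene:      {toluene:.2f} g/mol")
print(f"molar mass of benzoic acid: {benzoic_acid:.2f} g/mol")
print(f"theoretical yield: {kg_acid_max:.3f} kg benzoic acid per kg toluene")
```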
https://en.wikipedia.org/wiki/Boltzmann%20distribution
In statistical mechanics and mathematics, a Boltzmann distribution (also called Gibbs distribution) is a probability distribution or probability measure that gives the probability that a system will be in a certain state as a function of that state's energy and the temperature of the system. The distribution is expressed in the form: $p_i \propto \exp\!\left(-\frac{\varepsilon_i}{kT}\right)$, where $p_i$ is the probability of the system being in state $i$, $\exp$ is the exponential function, $\varepsilon_i$ is the energy of that state, and the constant $kT$ of the distribution is the product of the Boltzmann constant $k$ and thermodynamic temperature $T$. The symbol $\propto$ denotes proportionality (see below for the proportionality constant). The term system here has a wide meaning; it can range from a collection of 'sufficient number' of atoms or a single atom to a macroscopic system such as a natural gas storage tank. Therefore the Boltzmann distribution can be used to solve a wide variety of problems. The distribution shows that states with lower energy will always have a higher probability of being occupied. The ratio of probabilities of two states is known as the Boltzmann factor and characteristically only depends on the states' energy difference: $\frac{p_i}{p_j} = \exp\!\left(\frac{\varepsilon_j - \varepsilon_i}{kT}\right)$. The Boltzmann distribution is named after Ludwig Boltzmann who first formulated it in 1868 during his studies of the statistical mechanics of gases in thermal equilibrium. Boltzmann's statistical work is borne out in his paper "On the Relationship between the Second Fundamental Theorem of the Mechanical Theory of Heat and Probability Calculations Regarding the Conditions for Thermal Equilibrium". The distribution was later investigated extensively, in its modern generic form, by Josiah Willard Gibbs in 1902. The Boltzmann distribution should not be confused with the Maxwell–Boltzmann distribution or Maxwell-Boltzmann statistics. The Boltzmann distribution gives the probability that a system will be in a certain state as a function of that state's energy, while the Maxwell-Boltzmann distributions give the probabilities of
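A short worked example of the formula above, for a hypothetical two-level system whose energy gap and temperature are our own choices:

```python
# Worked numerical example of the Boltzmann distribution: occupation
# probabilities of a hypothetical two-level system at room temperature,
# p_i proportional to exp(-E_i / (k_B T)), normalised so the probabilities
# sum to 1. The ratio p_1 / p_0 is the Boltzmann factor for the energy gap.

import math

K_B = 8.617333262e-5          # Boltzmann constant in eV/K
T = 300.0                     # temperature in kelvin (our choice)
energies = [0.0, 0.1]         # eV: ground state and one excited state

weights = [math.exp(-e / (K_B * T)) for e in energies]
Z = sum(weights)                          # partition function (normalisation)
probs = [w / Z for w in weights]

for e, p in zip(energies, probs):
    print(f"E = {e:4.2f} eV  ->  p = {p:.6f}")

factor = math.exp(-(energies[1] - energies[0]) / (K_B * T))
print(f"Boltzmann factor p_1/p_0 = {factor:.6f} "
      f"(matches {probs[1] / probs[0]:.6f})")
```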
https://en.wikipedia.org/wiki/Bioleaching
Bioleaching is the extraction or liberation of metals from their ores through the use of living organisms. Bioleaching is one of several applications within biohydrometallurgy and several methods are used to treat ores or concentrates containing copper, zinc, lead, arsenic, antimony, nickel, molybdenum, gold, silver, and cobalt. Bioleaching falls into two broad categories. The first is the use of microorganisms to oxidize refractory minerals to release valuable metals such as gold and silver. Most commonly the minerals that are the target of oxidization are pyrite and arsenopyrite. The second category is leaching of sulphide minerals to release the associated metal, for example, leaching of pentlandite to release nickel, or the leaching of chalcocite, covellite or chalcopyrite to release copper. Process Bioleaching can involve numerous ferrous iron and sulfur oxidizing bacteria, including Acidithiobacillus ferrooxidans (formerly known as Thiobacillus ferrooxidans) and Acidithiobacillus thiooxidans (formerly known as Thiobacillus thiooxidans). As a general principle, in one proposed method of bacterial leaching known as Indirect Leaching, Fe3+ ions are used to oxidize the ore. This step is entirely independent of microbes. The role of the bacteria is further oxidation of the ore, but also the regeneration of the chemical oxidant Fe3+ from Fe2+. For example, bacteria catalyse the breakdown of the mineral pyrite (FeS2) by oxidising the sulfur and metal (in this case ferrous iron, (Fe2+)) using oxygen. This yields soluble products that can be further purified and refined to yield the desired metal. Pyrite leaching (FeS2): In the first step, disulfide is spontaneously oxidized to thiosulfate by ferric ion (Fe3+), which in turn is reduced to give ferrous ion (Fe2+): (1) FeS2 + 6 Fe3+ + 3 H2O → S2O3^2− + 7 Fe2+ + 6 H+      (spontaneous) The ferrous ion is then oxidized by bacteria using oxygen: (2) 4 Fe2+ + O2 + 4 H+ → 4 Fe3+ + 2 H2O      (iron oxidizers) Thiosulfate is also oxidized by bacteria to give sulfate: (3) S2O3^2− + 2 O2 + H2O → 2 SO4^2− + 2 H+      (sulfur oxidizers)
https://en.wikipedia.org/wiki/Boiling%20point
The boiling point of a substance is the temperature at which the vapor pressure of a liquid equals the pressure surrounding the liquid and the liquid changes into a vapor. The boiling point of a liquid varies depending upon the surrounding environmental pressure. A liquid in a partial vacuum, i.e., under a lower pressure, has a lower boiling point than when that liquid is at atmospheric pressure. Because of this, water boils at about 100 °C under standard pressure at sea level, but at lower temperatures at higher altitudes. For a given pressure, different liquids will boil at different temperatures. The normal boiling point (also called the atmospheric boiling point or the atmospheric pressure boiling point) of a liquid is the special case in which the vapor pressure of the liquid equals the defined atmospheric pressure at sea level, one atmosphere. At that temperature, the vapor pressure of the liquid becomes sufficient to overcome atmospheric pressure and allow bubbles of vapor to form inside the bulk of the liquid. The standard boiling point has been defined by IUPAC since 1982 as the temperature at which boiling occurs under a pressure of one bar. The heat of vaporization is the energy required to transform a given quantity (a mol, kg, pound, etc.) of a substance from a liquid into a gas at a given pressure (often atmospheric pressure). Liquids may change to a vapor at temperatures below their boiling points through the process of evaporation. Evaporation is a surface phenomenon in which molecules located near the liquid's edge, not contained by enough liquid pressure on that side, escape into the surroundings as vapor. On the other hand, boiling is a process in which molecules anywhere in the liquid escape, resulting in the formation of vapor bubbles within the liquid. Saturation temperature and pressure A saturated liquid contains as much thermal energy as it can without boiling (or conversely a saturated vapor contains as little thermal energy as it can without condensing). Saturation te
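The pressure dependence described above can be estimated with the Clausius–Clapeyron relation, which the excerpt does not state explicitly; the sketch below assumes a constant enthalpy of vaporization for water (about 40.7 kJ/mol) and uses the normal boiling point as the reference state, so the results are approximations.

```python
# Hedged estimate (our numbers, not from the article) of how boiling point
# falls with ambient pressure, using the Clausius-Clapeyron relation with a
# constant enthalpy of vaporization for water and the normal boiling point
# (373.15 K at 101.325 kPa) as the reference state.

import math

R = 8.314            # J/(mol K), gas constant
DH_VAP = 40.7e3      # J/mol, approximate enthalpy of vaporization of water
T1, P1 = 373.15, 101.325e3   # reference: normal boiling point at 1 atm

def boiling_point_at(pressure_pa: float) -> float:
    """Estimated boiling temperature (K) at the given ambient pressure."""
    inv_t2 = 1.0 / T1 - (R / DH_VAP) * math.log(pressure_pa / P1)
    return 1.0 / inv_t2

for p_kpa in (101.325, 90.0, 70.0, 50.0):
    t = boiling_point_at(p_kpa * 1e3)
    print(f"P = {p_kpa:7.3f} kPa  ->  estimated boiling point {t - 273.15:5.1f} °C")
```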
https://en.wikipedia.org/wiki/Big%20Bang
The Big Bang event is a physical theory that describes how the universe expanded from an initial state of high density and temperature. Various cosmological models of the Big Bang explain the evolution of the observable universe from the earliest known periods through its subsequent large-scale form. These models offer a comprehensive explanation for a broad range of observed phenomena, including the abundance of light elements, the cosmic microwave background (CMB) radiation, and large-scale structure. The overall uniformity of the Universe, known as the flatness problem, is explained through cosmic inflation: a sudden and very rapid expansion of space during the earliest moments. However, physics currently lacks a widely accepted theory of quantum gravity that can successfully model the earliest conditions of the Big Bang. Crucially, these models are compatible with the Hubble–Lemaître law—the observation that the farther away a galaxy is, the faster it is moving away from Earth. Extrapolating this cosmic expansion backwards in time using the known laws of physics, the models describe an increasingly concentrated cosmos preceded by a singularity in which space and time lose meaning (typically named "the Big Bang singularity"). In 1964 the CMB was discovered, which convinced many cosmologists that the competing steady-state model of cosmic evolution was falsified, since the Big Bang models predict a uniform background radiation caused by high temperatures and densities in the distant past. A wide range of empirical evidence strongly favors the Big Bang event, which is now essentially universally accepted. Detailed measurements of the expansion rate of the universe place the Big Bang singularity at an estimated 13.8 billion years ago, which is considered the age of the universe. There remain aspects of the observed universe that are not yet adequately explained by the Big Bang models. After its initial expansion, the universe cooled sufficiently to allow the formation
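As a back-of-the-envelope illustration of the Hubble–Lemaître law mentioned above, the sketch below assumes a round value H0 ≈ 70 km/s/Mpc (our choice, not a figure from the article); the reciprocal 1/H0, the Hubble time, sets the rough 14-billion-year scale of the quoted age, although the actual age estimate comes from detailed cosmological modelling.

```python
# Numerical illustration of the Hubble-Lemaitre law, v = H0 * d, using an
# assumed round value H0 of roughly 70 km/s per megaparsec. The reciprocal
# 1/H0 (the "Hubble time") gives the rough ~14-billion-year scale of the
# age of the universe quoted above.

H0 = 70.0                      # km/s per megaparsec (assumed round value)
KM_PER_MPC = 3.0857e19         # kilometres in one megaparsec
SECONDS_PER_YEAR = 3.156e7

for d_mpc in (1.0, 100.0, 1000.0):
    v = H0 * d_mpc             # recession velocity in km/s
    print(f"galaxy at {d_mpc:6.0f} Mpc recedes at about {v:8.0f} km/s")

H0_per_second = H0 / KM_PER_MPC
hubble_time_years = 1.0 / H0_per_second / SECONDS_PER_YEAR
print(f"Hubble time 1/H0 is about {hubble_time_years / 1e9:.1f} billion years")
```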