Dataset Viewer
source (string, lengths 32–209) | text (string, lengths 18–1.5k)
---|---
https://en.wikipedia.org/wiki/Arithmetic%20mean
|
In mathematics and statistics, the arithmetic mean, arithmetic average, or just the mean or average (when the context is clear), is the sum of a collection of numbers divided by the count of numbers in the collection. The collection is often a set of results from an experiment, an observational study, or a survey. The term "arithmetic mean" is preferred in some mathematics and statistics contexts because it helps distinguish it from other types of means, such as the geometric mean and the harmonic mean.
In addition to mathematics and statistics, the arithmetic mean is frequently used in economics, anthropology, history, and almost every academic field to some extent. For example, per capita income is the arithmetic average income of a nation's population.
While the arithmetic mean is often used to report central tendencies, it is not a robust statistic: it is greatly influenced by outliers (values much larger or smaller than most others). For skewed distributions, such as the distribution of income for which a few people's incomes are substantially higher than most people's, the arithmetic mean may not coincide with one's notion of "middle". In that case, robust statistics, such as the median, may provide a better description of central tendency.
Definition
Given a data set $X = \{x_1, \ldots, x_n\}$, the arithmetic mean (also mean or average), denoted $\bar{x}$ (read "$x$ bar"), is the mean of the $n$ values $x_1, \ldots, x_n$:
$$\bar{x} = \frac{1}{n} \sum_{i=1}^{n} x_i$$
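A minimal sketch of this definition in Python (the function name and sample values are ours, for illustration only):

```python
def arithmetic_mean(values):
    """Sum of a collection of numbers divided by the count of numbers."""
    return sum(values) / len(values)

print(arithmetic_mean([2, 4, 9]))  # (2 + 4 + 9) / 3 = 5.0
```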
The arithmetic mean is a data set's most commonly used and readily understood measure of central tendency. In statistic
|
https://en.wikipedia.org/wiki/Algorithms%20%28journal%29
|
Algorithms is a monthly peer-reviewed open-access scientific journal of mathematics, covering design, analysis, and experiments on algorithms. The journal is published by MDPI and was established in 2008. The founding editor-in-chief was Kazuo Iwama (Kyoto University). From May 2014 to September 2019, the editor-in-chief was Henning Fernau (Universität Trier). The current editor-in-chief is Frank Werner (Otto-von-Guericke-Universität Magdeburg).
Abstracting and indexing
According to the Journal Citation Reports, the journal has a 2022 impact factor of 2.3.
The journal is abstracted and indexed in:
See also
Journals with similar scope include:
ACM Transactions on Algorithms
Algorithmica
Journal of Algorithms (Elsevier)
References
External links
Computer science journals
Open access journals
MDPI academic journals
English-language journals
Academic journals established in 2008
Mathematics journals
Monthly journals
|
https://en.wikipedia.org/wiki/Algorithm
|
In mathematics and computer science, an algorithm is a finite sequence of rigorous instructions, typically used to solve a class of specific problems or to perform a computation. Algorithms are used as specifications for performing calculations and data processing. More advanced algorithms can use conditionals to divert the code execution through various routes (referred to as automated decision-making) and deduce valid inferences (referred to as automated reasoning), eventually achieving automation. Using human characteristics as descriptors of machines in metaphorical ways was already practiced by Alan Turing with terms such as "memory", "search" and "stimulus".
In contrast, a heuristic is an approach to problem solving that may not be fully specified or may not guarantee correct or optimal results, especially in problem domains where there is no well-defined correct or optimal result.
As an effective method, an algorithm can be expressed within a finite amount of space and time and in a well-defined formal language for calculating a function. Starting from an initial state and initial input (perhaps empty), the instructions describe a computation that, when executed, proceeds through a finite number of well-defined successive states, eventually producing "output" and terminating at a final ending state. The transition from one state to the next is not necessarily deterministic; some algorithms, known as randomized algorithms, incorporate random input.
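As a concrete illustration of the state-transition view above, here is a minimal sketch of Euclid's greatest-common-divisor algorithm in Python (the example is ours, not the article's): starting from the initial input, each loop iteration is a well-defined state transition, and the process terminates in a final state that yields the output.

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: repeatedly replace (a, b) with (b, a mod b)."""
    while b != 0:           # each iteration is one well-defined successive state
        a, b = b, a % b
    return a                # final ending state: b == 0, output is a

print(gcd(48, 18))  # 6
```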
History
Ancien
|
https://en.wikipedia.org/wiki/Asparagales
|
Asparagales (asparagoid lilies) is an order of plants in modern classification systems such as the Angiosperm Phylogeny Group (APG) and the Angiosperm Phylogeny Web. The order takes its name from the type family Asparagaceae and is placed in the monocots amongst the lilioid monocots. The order has only recently been recognized in classification systems. It was first put forward by Huber in 1977 and later taken up in the Dahlgren system of 1985 and then the APG in 1998, 2003 and 2009. Before this, many of its families were assigned to the old order Liliales, a very large order containing almost all monocots with colorful tepals and lacking starch in their endosperm. DNA sequence analysis indicated that many of the taxa previously included in Liliales should actually be redistributed over three orders, Liliales, Asparagales, and Dioscoreales. The boundaries of the Asparagales and of its families have undergone a series of changes in recent years; future research may lead to further changes and ultimately greater stability. In the APG circumscription, Asparagales is the largest order of monocots with 14 families, 1,122 genera, and about 36,000 species.
The order is clearly circumscribed on the basis of molecular phylogenetics, but it is difficult to define morphologically since its members are structurally diverse. Most species of Asparagales are herbaceous perennials, although some are climbers and some are tree-like. The order also contains many geophytes (bulbs, corms, and v
|
https://en.wikipedia.org/wiki/Apiales
|
The Apiales are an order of flowering plants. The families are those recognized in the APG III system. This is typical of the newer classifications, though there is some slight variation and in particular, the Torriceliaceae may also be divided.
Under this definition, well-known members include carrots, celery, parsley, and Hedera helix (English ivy).
The order Apiales is placed within the asterid group of eudicots as circumscribed by the APG III system. Within the asterids, Apiales belongs to an unranked group called the campanulids, and within the campanulids, it belongs to a clade known in phylogenetic nomenclature as Apiidae. In 2010, a subclade of Apiidae named Dipsapiidae was defined to consist of the three orders: Apiales, Paracryphiales, and Dipsacales.
Taxonomy
Under the Cronquist system, only the Apiaceae and Araliaceae were included here, and the restricted order was placed among the rosids rather than the asterids. The Pittosporaceae were placed within the Rosales, and many of the other forms within the family Cornaceae. Pennantia was in the family Icacinaceae. In the classification system of Dahlgren the families Apiaceae and Araliaceae were placed in the order Ariales, in the superorder Araliiflorae (also called Aralianae).
The present understanding of the Apiales is fairly recent and is based upon comparison of DNA sequences by phylogenetic methods. The circumscriptions of some of the families have changed. In 2009, one of the subfamilies of Araliaceae was
|
https://en.wikipedia.org/wiki/Motor%20neuron%20diseases
|
Motor neuron diseases or motor neurone diseases (MNDs) are a group of rare neurodegenerative disorders that selectively affect motor neurons, the cells which control voluntary muscles of the body. They include amyotrophic lateral sclerosis (ALS), progressive bulbar palsy (PBP), pseudobulbar palsy, progressive muscular atrophy (PMA), primary lateral sclerosis (PLS), spinal muscular atrophy (SMA) and monomelic amyotrophy (MMA), as well as some rarer variants resembling ALS.
Motor neuron diseases affect both children and adults. While each motor neuron disease affects patients differently, they all cause movement-related symptoms, mainly muscle weakness. Most of these diseases seem to occur randomly without known causes, but some forms are inherited. Studies into these inherited forms have led to discoveries of various genes (e.g. SOD1) that are thought to be important in understanding how the disease occurs.
Symptoms of motor neuron diseases can be first seen at birth or can come on slowly later in life. Most of these diseases worsen over time; while some, such as ALS, shorten one's life expectancy, others do not. Currently, there are no approved treatments for the majority of motor neuron disorders, and care is mostly symptomatic.
Signs and symptoms
Signs and symptoms depend on the specific disease, but motor neuron diseases typically manifest as a group of movement-related symptoms. They come on slowly, and worsen over the course of more than three months. Various patter
|
https://en.wikipedia.org/wiki/Arsenic
|
Arsenic is a chemical element with the symbol As and atomic number 33. Arsenic occurs in many minerals, usually in combination with sulfur and metals, but also as a pure elemental crystal. Arsenic is a metalloid. It has various allotropes, but only the grey form, which has a metallic appearance, is important to industry.
The primary use of arsenic is in alloys of lead (for example, in car batteries and ammunition). Arsenic is a common n-type dopant in semiconductor electronic devices. It is also a component of the III–V compound semiconductor gallium arsenide. Arsenic and its compounds, especially the trioxide, are used in the production of pesticides, treated wood products, herbicides, and insecticides. These applications are declining with the increasing recognition of the toxicity of arsenic and its compounds.
A few species of bacteria are able to use arsenic compounds as respiratory metabolites. Trace quantities of arsenic are an essential dietary element in rats, hamsters, goats, chickens, and presumably other species. A role in human metabolism is not known. However, arsenic poisoning occurs in multicellular life if quantities are larger than needed. Arsenic contamination of groundwater is a problem that affects millions of people across the world.
The United States' Environmental Protection Agency states that all forms of arsenic are a serious risk to human health. The United States' Agency for Toxic Substances and Disease Registry ranked arsenic as number 1 in its
|
https://en.wikipedia.org/wiki/Arable%20land
|
Arable land (from the Latin arabilis, "able to be ploughed") is any land capable of being ploughed and used to grow crops. Alternatively, for the purposes of agricultural statistics, the term often has a more precise definition:
A more concise definition appearing in the Eurostat glossary similarly refers to actual rather than potential uses: "land worked (ploughed or tilled) regularly, generally under a system of crop rotation". In Britain, arable land has traditionally been contrasted with pasturable land such as heaths, which could be used for sheep-rearing but not as farmland.
Arable land is vulnerable to land degradation, and some types of un-arable land can be enriched to create useful land. Climate change and biodiversity loss are driving pressure on arable land.
By country
According to the Food and Agriculture Organization of the United Nations, in 2013, the world's arable land amounted to 1.407 billion hectares, out of a total of 4.924 billion hectares of land used for agriculture.
Arable land (hectares per person)
Non-arable land
Agricultural land that is not arable according to the FAO definition above includes:
Meadows and pastures: land used as pasture and grazed range, and those natural grasslands and sedge meadows that are used for hay production in some regions.
Permanent cropland that produces crops from woody vegetation, e.g. orchard land, vineyards, coffee plantations, rubber plantations, and land producing nut trees;
Other non-arable land includes land that is
|
https://en.wikipedia.org/wiki/Axon
|
An axon (from Greek ἄξων áxōn, axis), or nerve fiber (or nerve fibre: see spelling differences), is a long, slender projection of a nerve cell, or neuron, in vertebrates, that typically conducts electrical impulses known as action potentials away from the nerve cell body. The function of the axon is to transmit information to different neurons, muscles, and glands. In certain sensory neurons (pseudounipolar neurons), such as those for touch and warmth, the axons are called afferent nerve fibers and the electrical impulse travels along these from the periphery to the cell body and from the cell body to the spinal cord along another branch of the same axon. Axon dysfunction can be the cause of many inherited and acquired neurological disorders that affect both the peripheral and central neurons. Nerve fibers are classed into three types: group A nerve fibers, group B nerve fibers, and group C nerve fibers. Groups A and B are myelinated, and group C are unmyelinated. These groups include both sensory fibers and motor fibers. Another classification groups only the sensory fibers as Type I, Type II, Type III, and Type IV.
An axon is one of two types of cytoplasmic protrusions from the cell body of a neuron; the other type is a dendrite. Axons are distinguished from dendrites by several features, including shape (dendrites often taper while axons usually maintain a constant radius), length (dendrites are restricted to a small region around the cell body while axons can be much longe
|
https://en.wikipedia.org/wiki/Abscess
|
An abscess is a collection of pus that has built up within the tissue of the body. Signs and symptoms of abscesses include redness, pain, warmth, and swelling. The swelling may feel fluid-filled when pressed. The area of redness often extends beyond the swelling. Carbuncles and boils are types of abscess that often involve hair follicles, with carbuncles being larger.
They are usually caused by a bacterial infection. Often many different types of bacteria are involved in a single infection. In many areas of the world, the most common bacterium present is methicillin-resistant Staphylococcus aureus. Rarely, parasites can cause abscesses; this is more common in the developing world. Diagnosis of a skin abscess is usually made based on what it looks like and is confirmed by cutting it open. Ultrasound imaging may be useful in cases in which the diagnosis is not clear. In abscesses around the anus, computed tomography (CT) may be important to look for deeper infection.
Standard treatment for most skin or soft tissue abscesses is to cut the abscess open and drain it. There appears to be some benefit from also using antibiotics. A small amount of evidence supports not packing the cavity that remains with gauze after drainage. Closing this cavity right after draining it, rather than leaving it open, may speed healing without increasing the risk of the abscess returning. Sucking out the pus with a needle is often not sufficient.
Skin abscesses are common and have become more common in recent
|
https://en.wikipedia.org/wiki/Algorithms%20for%20calculating%20variance
|
Algorithms for calculating variance play a major role in computational statistics. A key difficulty in the design of good algorithms for this problem is that formulas for the variance may involve sums of squares, which can lead to numerical instability as well as to arithmetic overflow when dealing with large values.
Naïve algorithm
A formula for calculating the variance of an entire population of size N is:
$$\sigma^2 = \overline{x^2} - \bar{x}^2 = \frac{\sum_{i=1}^{N} x_i^2 - \left(\sum_{i=1}^{N} x_i\right)^2 / N}{N}$$
Using Bessel's correction to calculate an unbiased estimate of the population variance from a finite sample of n observations, the formula is:
$$s^2 = \frac{\sum_{i=1}^{n} x_i^2 - \left(\sum_{i=1}^{n} x_i\right)^2 / n}{n - 1}$$
Therefore, a naïve algorithm to calculate the estimated variance is given by the following:
Let n ← 0, Sum ← 0, SumSq ← 0
For each datum x:
    n ← n + 1
    Sum ← Sum + x
    SumSq ← SumSq + x × x
Var = (SumSq − (Sum × Sum) / n) / (n − 1)
This algorithm can easily be adapted to compute the variance of a finite population: simply divide by n instead of n − 1 on the last line.
Because SumSq and (Sum × Sum) / n can be very similar numbers, cancellation can lead to the precision of the result being much less than the inherent precision of the floating-point arithmetic used to perform the computation. Thus this algorithm should not be used in practice, and several alternative, numerically stable algorithms have been proposed. This is particularly bad if the standard deviation is small relative to the mean.
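One well-known numerically stable alternative is Welford's online algorithm; a minimal Python sketch of it (this is our illustration, not code from the article):

```python
def online_variance(data):
    """Welford's online algorithm: a single pass that avoids the
    catastrophic cancellation of the naive sum-of-squares formula."""
    n = 0
    mean = 0.0
    m2 = 0.0  # running sum of squared deviations from the current mean
    for x in data:
        n += 1
        delta = x - mean
        mean += delta / n
        m2 += delta * (x - mean)
    return m2 / (n - 1)  # sample variance, with Bessel's correction

print(online_variance([4.0, 7.0, 13.0, 16.0]))  # 30.0
```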
Computing shifted data
The variance is invariant with respect to changes in a location parameter, a property which can be used to avoid the catastrophic cancellation in this formula:
$$\operatorname{Var}(X - K) = \operatorname{Var}(X)$$
with $K$ any constant, which leads to the new formula
$$s^2 = \frac{\sum_{i=1}^{n} (x_i - K)^2 - \left(\sum_{i=1}^{n} (x_i - K)\right)^2 / n}{n - 1}$$
the closer $K$ is to the m
|
https://en.wikipedia.org/wiki/Amplitude%20modulation
|
Amplitude modulation (AM) is a modulation technique used in electronic communication, most commonly for transmitting messages with a radio wave. In amplitude modulation, the amplitude (signal strength) of the wave is varied in proportion to that of the message signal, such as an audio signal. This technique contrasts with angle modulation, in which either the frequency of the carrier wave is varied, as in frequency modulation, or its phase, as in phase modulation.
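A rough numerical sketch of this definition in Python: the carrier's amplitude is scaled by the message signal (the sample rate, frequencies, and modulation index are our own example values):

```python
import numpy as np

fs = 48_000                         # sample rate, Hz
t = np.arange(0, 0.01, 1 / fs)      # 10 ms of signal
fc, fm, m = 10_000.0, 1_000.0, 0.5  # carrier freq, message freq, modulation index

message = np.cos(2 * np.pi * fm * t)
carrier = np.cos(2 * np.pi * fc * t)
am = (1 + m * message) * carrier    # amplitude varies in proportion to the message
```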
AM was the earliest modulation method used for transmitting audio in radio broadcasting. It was developed during the first quarter of the 20th century beginning with Roberto Landell de Moura and Reginald Fessenden's radiotelephone experiments in 1900. This original form of AM is sometimes called double-sideband amplitude modulation (DSBAM), because the standard method produces sidebands on either side of the carrier frequency. Single-sideband modulation uses bandpass filters to eliminate one of the sidebands and possibly the carrier signal, which improves the ratio of message power to total transmission power, reduces power handling requirements of line repeaters, and permits better bandwidth utilization of the transmission medium.
AM remains in use in many forms of communication in addition to AM broadcasting: shortwave radio, amateur radio, two-way radios, VHF aircraft radio, citizens band radio, and in computer modems in the form of QAM.
Foundation
In electronics, telecommunications and mechanics, modulation
|
https://en.wikipedia.org/wiki/Acoustic%20theory
|
Acoustic theory is a scientific field that relates to the description of sound waves. It derives from fluid dynamics. See acoustics for the engineering approach.
For sound waves of any magnitude of a disturbance in velocity, pressure, and density we have
$$\frac{\partial \rho}{\partial t} + \nabla \cdot (\rho \mathbf{v}) = 0 \qquad \text{(conservation of mass)}$$
$$\rho\left(\frac{\partial \mathbf{v}}{\partial t} + \mathbf{v} \cdot \nabla \mathbf{v}\right) + \nabla p = 0 \qquad \text{(equation of motion)}$$
In the case that the fluctuations in velocity, density, and pressure are small, we can approximate these as
$$\mathbf{v} = \mathbf{v}', \qquad p = p_0 + p', \qquad \rho = \rho_0 + \rho'$$
where $\mathbf{v}'$ is the perturbed velocity of the fluid, $p_0$ is the pressure of the fluid at rest, $p'$ is the perturbed pressure of the system as a function of space and time, $\rho_0$ is the density of the fluid at rest, and $\rho'$ is the fluctuation in the density of the fluid over space and time.
In the case that the velocity is irrotational ($\nabla \times \mathbf{v} = 0$), we then have the acoustic wave equation that describes the system:
$$\frac{1}{c^2}\frac{\partial^2 \phi}{\partial t^2} - \nabla^2 \phi = 0$$
where we have
$$\mathbf{v}' = \nabla \phi, \qquad c^2 = \left(\frac{\partial p}{\partial \rho}\right)_s$$
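As a toy illustration of this wave equation, a one-dimensional finite-difference sketch in Python (the discretization, grid, and initial pulse are our own choices, not from the article):

```python
import numpy as np

c, length, n = 343.0, 1.0, 200      # sound speed (m/s), domain (m), grid points
dx = length / (n - 1)
dt = 0.5 * dx / c                   # time step chosen below the stability limit
x = np.linspace(0.0, length, n)

phi_prev = np.exp(-((x - 0.5) / 0.05) ** 2)  # initial Gaussian disturbance
phi = phi_prev.copy()                        # zero initial time derivative

for _ in range(300):                # leapfrog update of phi_tt = c^2 phi_xx
    lap = np.zeros(n)
    lap[1:-1] = (phi[2:] - 2 * phi[1:-1] + phi[:-2]) / dx**2
    phi_prev, phi = phi, 2 * phi - phi_prev + (c * dt) ** 2 * lap
```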
Derivation for a medium at rest
Starting with the continuity equation and the Euler equation:
$$\frac{\partial \rho}{\partial t} + \nabla \cdot (\rho \mathbf{u}) = 0$$
$$\frac{\partial \mathbf{u}}{\partial t} + \mathbf{u} \cdot \nabla \mathbf{u} + \frac{1}{\rho}\nabla p = 0$$
If we take small perturbations of a constant pressure and density:
$$\rho = \rho_0 + \rho', \qquad p = p_0 + p'$$
then the equations of the system are
$$\frac{\partial}{\partial t}(\rho_0 + \rho') + \nabla \cdot \big((\rho_0 + \rho')\mathbf{u}\big) = 0$$
$$\frac{\partial \mathbf{u}}{\partial t} + \mathbf{u} \cdot \nabla \mathbf{u} + \frac{1}{\rho_0 + \rho'}\nabla (p_0 + p') = 0$$
Noting that the equilibrium pressure and density are constant, and keeping only terms that are first order in the perturbations, this simplifies to
$$\frac{\partial \rho'}{\partial t} + \rho_0 \nabla \cdot \mathbf{u} = 0$$
$$\frac{\partial \mathbf{u}}{\partial t} + \frac{1}{\rho_0}\nabla p' = 0$$
A moving medium
Starting with the linearized equations for a medium at rest,
$$\frac{\partial \rho'}{\partial t} + \rho_0 \nabla \cdot \mathbf{u} = 0, \qquad \frac{\partial \mathbf{u}}{\partial t} + \frac{1}{\rho_0}\nabla p' = 0,$$
we can have these equations work for a moving medium by setting $\mathbf{u} \to \mathbf{w} + \mathbf{u}$, where $\mathbf{w}$ is the constant velocity that the whole fluid is moving at before being disturbed (equivalent to a moving observer) and $\mathbf{u}$ is the fluid velocity perturbation.
In this case the equations look very similar:
$$\frac{\partial \rho'}{\partial t} + \mathbf{w} \cdot \nabla \rho' + \rho_0 \nabla \cdot \mathbf{u} = 0$$
$$\frac{\partial \mathbf{u}}{\partial t} + \mathbf{w} \cdot \nabla \mathbf{u} + \frac{1}{\rho_0}\nabla p' = 0$$
Note that setting $\mathbf{w} = 0$ returns the equations at rest.
Linearized waves
Starting with the above given equations of motion for a medium at
|
https://en.wikipedia.org/wiki/Amine
|
In chemistry, amines are compounds and functional groups that contain a basic nitrogen atom with a lone pair. Amines are formally derivatives of ammonia (NH₃), wherein one or more hydrogen atoms have been replaced by a substituent such as an alkyl or aryl group (these may respectively be called alkylamines and arylamines; amines in which both types of substituent are attached to one nitrogen atom may be called alkylarylamines). Important amines include amino acids, biogenic amines, trimethylamine, and aniline. Inorganic derivatives of ammonia are also called amines, such as monochloramine (NH₂Cl).
The substituent is called an amino group.
Compounds with a nitrogen atom attached to a carbonyl group, thus having the structure R−C(=O)−NR′R″, are called amides and have different chemical properties from amines.
Classification of amines
Amines can be classified according to the nature and number of substituents on nitrogen. Aliphatic amines contain only H and alkyl substituents. Aromatic amines have the nitrogen atom connected to an aromatic ring.
Amines, alkyl and aryl alike, are organized into three subcategories (see table) based on the number of carbon atoms adjacent to the nitrogen (that is, how many hydrogen atoms of the ammonia molecule are replaced by hydrocarbon groups):
Primary (1°) amines—Primary amines arise when one of three hydrogen atoms in ammonia is replaced by an alkyl or aromatic group. Important primary alkyl amines include methylamine, most amino acids, and the buffering agent
|
https://en.wikipedia.org/wiki/Absolute%20zero
|
Absolute zero is the lowest limit of the thermodynamic temperature scale; a state at which the enthalpy and entropy of a cooled ideal gas reach their minimum value, taken as zero kelvin. The fundamental particles of nature have minimum vibrational motion, retaining only quantum mechanical, zero-point energy-induced particle motion. The theoretical temperature is determined by extrapolating the ideal gas law; by international agreement, absolute zero is taken as −273.15 degrees on the Celsius scale (International System of Units), which equals −459.67 degrees on the Fahrenheit scale (United States customary units or imperial units). The corresponding Kelvin and Rankine temperature scales set their zero points at absolute zero by definition.
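The scale relations quoted above are easy to verify; a quick Python check (the function names are ours):

```python
def celsius_to_fahrenheit(c):
    return c * 9 / 5 + 32

def celsius_to_kelvin(c):
    return c + 273.15

absolute_zero_c = -273.15
print(celsius_to_fahrenheit(absolute_zero_c))  # -459.67
print(celsius_to_kelvin(absolute_zero_c))      # 0.0
```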
It is commonly thought of as the lowest temperature possible, but it is not the lowest enthalpy state possible, because all real substances begin to depart from the ideal gas when cooled as they approach the change of state to liquid, and then to solid; and the sum of the enthalpy of vaporization (gas to liquid) and enthalpy of fusion (liquid to solid) exceeds the ideal gas's change in enthalpy to absolute zero. In the quantum-mechanical description, matter at absolute zero is in its ground state, the point of lowest internal energy.
The laws of thermodynamics indicate that absolute zero cannot be reached using only thermodynamic means, because the temperature of the substance being cooled approaches the temperature of the cooling agent a
|
https://en.wikipedia.org/wiki/Adiabatic%20process
|
In thermodynamics, an adiabatic process (Greek: adiábatos, "impassable") is a type of thermodynamic process that occurs without transferring heat or mass between the thermodynamic system and its environment. Unlike an isothermal process, an adiabatic process transfers energy to the surroundings only as work. As a key concept in thermodynamics, the adiabatic process supports the theory that explains the first law of thermodynamics.
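In terms of the first law of thermodynamics, the statement that an adiabatic process transfers energy only as work reads (a standard formulation, not an equation quoted from this excerpt):
$$\Delta U = Q - W \qquad \xrightarrow{\;Q \,=\, 0\;} \qquad \Delta U = -W,$$
where $\Delta U$ is the change in internal energy and $W$ is the work done by the system on its surroundings.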
Some chemical and physical processes occur too rapidly for energy to enter or leave the system as heat, allowing a convenient "adiabatic approximation". For example, the adiabatic flame temperature uses this approximation to calculate the upper limit of flame temperature by assuming combustion loses no heat to its surroundings.
In meteorology and oceanography, adiabatic expansion produces condensation of moisture or salinity, oversaturating the parcel. Therefore, the excess must be removed. There, the process becomes a pseudo-adiabatic process whereby the liquid water or salt that condenses is assumed to be removed upon formation by idealized instantaneous precipitation. The pseudoadiabatic process is only defined for expansion because a compressed parcel becomes warmer and remains undersaturated.
Description
A process without transfer of heat to or from a system, so that $Q = 0$, is called adiabatic, and such a system is said to be adiabatically isolated. The simplifying assumption frequently made is that a process is adiabatic. For example, the compr
|
https://en.wikipedia.org/wiki/ALGOL
|
ALGOL (; short for "Algorithmic Language") is a family of imperative computer programming languages originally developed in 1958. ALGOL heavily influenced many other languages and was the standard method for algorithm description used by the Association for Computing Machinery (ACM) in textbooks and academic sources for more than thirty years.
In the sense that the syntax of most modern languages is "Algol-like", it was arguably more influential than three other high-level programming languages among which it was roughly contemporary: FORTRAN, Lisp, and COBOL. It was designed to avoid some of the perceived problems with FORTRAN and eventually gave rise to many other programming languages, including PL/I, Simula, BCPL, B, Pascal, and C.
ALGOL introduced code blocks and the begin...end pairs for delimiting them. It was also the first language implementing nested function definitions with lexical scope. Moreover, it was the first programming language which gave detailed attention to formal language definition and through the Algol 60 Report introduced Backus–Naur form, a principal formal grammar notation for language design.
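Nested function definitions with lexical scope, which ALGOL pioneered, survive in most modern languages; a minimal Python illustration of the concept (our example, not ALGOL code):

```python
def make_counter():
    count = 0                 # local to make_counter

    def increment():          # nested function definition
        nonlocal count        # 'count' is resolved lexically, in the enclosing scope
        count += 1
        return count

    return increment

counter = make_counter()
print(counter(), counter())   # 1 2
```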
There were three major specifications, named after the years they were first published:
ALGOL 58 – originally proposed to be called IAL, for International Algebraic Language.
ALGOL 60 – first implemented as X1 ALGOL 60 in 1961. Revised 1963.
ALGOL 68 – introduced new elements including flexible arrays, slices, parallelism, operator identification. Revi
|
https://en.wikipedia.org/wiki/Kolmogorov%20complexity
|
In algorithmic information theory (a subfield of computer science and mathematics), the Kolmogorov complexity of an object, such as a piece of text, is the length of a shortest computer program (in a predetermined programming language) that produces the object as output. It is a measure of the computational resources needed to specify the object, and is also known as algorithmic complexity, Solomonoff–Kolmogorov–Chaitin complexity, program-size complexity, descriptive complexity, or algorithmic entropy. It is named after Andrey Kolmogorov, who first published on the subject in 1963 and is a generalization of classical information theory.
The notion of Kolmogorov complexity can be used to state and prove impossibility results akin to Cantor's diagonal argument, Gödel's incompleteness theorem, and Turing's halting problem.
In particular, no program P computing a lower bound for each text's Kolmogorov complexity can return a value essentially larger than P's own length; hence no single program can compute the exact Kolmogorov complexity for infinitely many texts.
Definition
Consider the following two strings of 32 lowercase letters and digits:
abababababababababababababababab , and
4c1j5b2p0cv4w1x8rx2y39umgw5q85s7
The first string has a short English-language description, namely "write ab 16 times", which consists of 17 characters. The second one has no obvious simple description (using the same character set) other than writing down the string itself, i.e
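A Python sketch makes the contrast concrete: the first string is the output of a very short program, while the second seemingly admits no description shorter than the literal itself (this illustration is ours):

```python
s1 = "ab" * 16                            # a short program suffices: highly compressible
s2 = "4c1j5b2p0cv4w1x8rx2y39umgw5q85s7"   # no obvious program shorter than the string

print(len(s1), len(s2))  # 32 32 -- equal length, very different description lengths
```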
|
https://en.wikipedia.org/wiki/ATP
|
ATP may refer to:
Science, technology and biology
Adenosine triphosphate, an organic chemical used for driving biological processes
ATPase, any enzyme that makes use of adenosine triphosphate
Advanced Technology Program, US government program
Anti-tachycardia pacing, process similar to a pacemaker
Alberta Taciuk process, for extracting oil from shale, etc.
Automated theorem proving, method of proving mathematical theorems by computer programs
Companies and organizations
Association of Tennis Professionals, men's professional tennis governing body
ATP Tour
American Technical Publishers, employee-owned publishing company
Armenia Tree Project, non-profit organization
Association for Transpersonal Psychology
ATP architects engineers, architecture- and engineering office for integrated design
ATP Oil and Gas, defunct US energy company
Entertainment, arts and media
Adenosine Tri-Phosphate (band), Japanese alternative rock/pop band
All Tomorrow's Parties (festival), UK organisation
ATP Recordings, record label
Alberta Theatre Projects, professional, not-for-profit, Canadian theatre company
Associated Talking Pictures, former name of Ealing Studios, a television and film production company
Transport
British Aerospace ATP, airliner
Airline transport pilot license
ATP Flight School, US
ATP (treaty), UN treaty that establishes standards for the international transport of perishable food
Aitape Airport, Papua New Guinea, IATA code
Anti-trespass panels, meant to deter
|
https://en.wikipedia.org/wiki/Abzyme
|
An abzyme (from antibody and enzyme), also called catmab (from catalytic monoclonal antibody), and most often called catalytic antibody or sometimes catab, is a monoclonal antibody with catalytic activity. Abzymes are usually raised in lab animals immunized against synthetic haptens, but some natural abzymes can be found in normal humans (anti-vasoactive intestinal peptide autoantibodies) and in patients with autoimmune diseases such as systemic lupus erythematosus, where they can bind to and hydrolyze DNA. To date abzymes display only weak, modest catalytic activity and have not proved to be of any practical use. They are, however, subjects of considerable academic interest. Studying them has yielded important insights into reaction mechanisms, enzyme structure and function, catalysis, and the immune system itself.
Enzymes function by lowering the activation energy of the transition state of a chemical reaction, thereby enabling the formation of an otherwise less-favorable molecular intermediate between the reactant(s) and the product(s). If an antibody is developed to bind to a molecule that is structurally and electronically similar to the transition state of a given chemical reaction, the developed antibody will bind to, and stabilize, the transition state, just like a natural enzyme, lowering the activation energy of the reaction, and thus catalyzing the reaction. By raising an antibody to bind to a stable transition-state analog, a new and unique type of enzyme is prod
|
https://en.wikipedia.org/wiki/Agarose%20gel%20electrophoresis
|
Agarose gel electrophoresis is a method of gel electrophoresis used in biochemistry, molecular biology, genetics, and clinical chemistry to separate a mixed population of macromolecules such as DNA or proteins in a matrix of agarose, one of the two main components of agar. The proteins may be separated by charge and/or size (isoelectric focusing agarose electrophoresis is essentially size independent), and the DNA and RNA fragments by length. Biomolecules are separated by applying an electric field that moves the charged molecules through the agarose matrix, where they are separated by size.
Agarose gel is easy to cast, has relatively fewer charged groups, and is particularly suitable for separating DNA of size range most often encountered in laboratories, which accounts for the popularity of its use. The separated DNA may be viewed with stain, most commonly under UV light, and the DNA fragments can be extracted from the gel with relative ease. Most agarose gels used are between 0.7–2% dissolved in a suitable electrophoresis buffer.
Properties of agarose gel
Agarose gel is a three-dimensional matrix formed of helical agarose molecules in supercoiled bundles that are aggregated into three-dimensional structures with channels and pores through which biomolecules can pass. The 3-D structure is held together with hydrogen bonds and can therefore be disrupted by heating back to a liquid state. The melting temperature is different from the gelling
|
https://en.wikipedia.org/wiki/Allele
|
An allele is a variation of the same sequence of nucleotides at the same place on a long DNA molecule, as described in leading textbooks on genetics and evolution. The word is a short form of "allelomorph".
"The chromosomal or genomic location of a gene or any other genetic element is called a locus (plural: loci) and alternative DNA sequences at a locus are called alleles."
The simplest alleles are single nucleotide polymorphisms (SNP), but they can also be insertions and deletions of up to several thousand base pairs.
Most alleles observed result in little or no change in the function of the gene product they code for. However, sometimes different alleles can result in different observable phenotypic traits, such as different pigmentation. A notable example of this is Gregor Mendel's discovery that the white and purple flower colors in pea plants were the result of a single gene with two alleles.
Nearly all multicellular organisms have two sets of chromosomes at some point in their biological life cycle; that is, they are diploid. In this case, the chromosomes can be paired. Each chromosome in the pair contains the same genes in the same order, and place, along the length of the chromosome. For a given gene, if the two chromosomes contain the same allele, they, and the organism, are homozygous with respect to that gene. If the alleles are different, they, and the organism, are heterozygous with respect to that gene.
Popular definitions of 'allele' typically refer only t
|
https://en.wikipedia.org/wiki/Autosome
|
An autosome is any chromosome that is not a sex chromosome. The members of an autosome pair in a diploid cell have the same morphology, unlike those in allosomal (sex chromosome) pairs, which may have different structures. The DNA in autosomes is collectively known as atDNA or auDNA.
For example, humans have a diploid genome that usually contains 22 pairs of autosomes and one allosome pair (46 chromosomes total). The autosome pairs are labeled with numbers (1–22 in humans) roughly in order of their sizes in base pairs, while allosomes are labeled with their letters. By contrast, the allosome pair consists of two X chromosomes in females or one X and one Y chromosome in males. Unusual combinations of XYY, XXY, XXX, XXXX, XXXXX or XXYY, among other allosome combinations, are known to occur and usually cause developmental abnormalities.
Autosomes still contain sexual determination genes even though they are not sex chromosomes. For example, the SRY gene on the Y chromosome encodes the transcription factor TDF and is vital for male sex determination during development. TDF functions by activating the SOX9 gene on chromosome 17, so mutations of the SOX9 gene can cause humans with an ordinary Y chromosome to develop as females.
All human autosomes have been identified and mapped by extracting the chromosomes from a cell arrested in metaphase or prometaphase and then staining them with a type of dye (most commonly, Giemsa). These chromosomes are typically viewed as karyograms for
|
https://en.wikipedia.org/wiki/Absorption
|
Absorption may refer to:
Chemistry and biology
Absorption (biology), digestion
Absorption (small intestine)
Absorption (chemistry), diffusion of particles of gas or liquid into liquid or solid materials
Absorption (skin), a route by which substances enter the body through the skin
Absorption (pharmacology), absorption of drugs into the body
Physics and chemical engineering
Absorption (acoustics), absorption of sound waves by a material
Absorption (electromagnetic radiation), absorption of light or other electromagnetic radiation by a material
Absorption air conditioning, a type of solar air conditioning
Absorption refrigerator, a refrigerator that runs on surplus heat rather than electricity
Dielectric absorption, the inability of a charged capacitor to completely discharge when briefly discharged
Mathematics and economics
Absorption (economics), the total demand of an economy for goods and services both from within and without
Absorption (logic), one of the rules of inference
Absorption costing, or total absorption costing, a method for appraising or valuing a firm's total inventory by including all the manufacturing costs incurred to produce those goods
Absorbing element, in mathematics, an element that does not change when it is combined in a binary operation with some other element
Absorption law, in mathematics, an identity linking a pair of binary operations
See also
Adsorption, the formation of a gas or liquid film on a solid surface
CO2 scrubber, device which abs
|
https://en.wikipedia.org/wiki/Andrew%20Tridgell
|
Andrew "Tridge" Tridgell (born 28 February 1967) is an Australian computer programmer. He is the author of and a contributor to the Samba file server, and co-inventor of the rsync algorithm.
He has analysed complex proprietary protocols and algorithms, to allow compatible free and open source software implementations.
Projects
Tridgell was a major developer of the Samba software, analyzing the Server Message Block protocol used for workgroup and network file sharing by Microsoft Windows products. He developed the hierarchical memory allocator talloc, originally as part of Samba.
For his PhD dissertation, he co-developed rsync, including the rsync algorithm, a highly efficient file transfer and synchronisation tool. He was also the original author of rzip, which uses a similar algorithm to rsync. He developed spamsum, based on locality-sensitive hashing algorithms.
He is the author of KnightCap, a reinforcement-learning based chess engine.
Tridgell was also a leader in hacking the TiVo to make it work in Australia, which uses the PAL video format.
In April 2005, Tridgell tried to produce free software (now known as SourcePuller) that interoperated with the BitKeeper source code repository. This was cited as the reason that BitMover revoked a license allowing Linux developers free use of their BitKeeper product. Linus Torvalds, the creator of the Linux kernel, and Tridgell were thus involved in a public debate about the events, in which Tridgell stated that, not having bought
|
https://en.wikipedia.org/wiki/Analysis%20of%20algorithms
|
In computer science, the analysis of algorithms is the process of finding the computational complexity of algorithms—the amount of time, storage, or other resources needed to execute them. Usually, this involves determining a function that relates the size of an algorithm's input to the number of steps it takes (its time complexity) or the number of storage locations it uses (its space complexity). An algorithm is said to be efficient when this function's values are small, or grow slowly compared to a growth in the size of the input. Different inputs of the same size may cause the algorithm to have different behavior, so best, worst and average case descriptions might all be of practical interest. When not otherwise specified, the function describing the performance of an algorithm is usually an upper bound, determined from the worst case inputs to the algorithm.
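As an informal illustration of time complexity, a Python sketch that counts the steps taken by a linear-time and a quadratic-time routine on growing inputs (the examples are ours):

```python
def count_linear(n):
    steps = 0
    for _ in range(n):        # one pass over the input: O(n)
        steps += 1
    return steps

def count_quadratic(n):
    steps = 0
    for _ in range(n):        # nested passes over the input: O(n^2)
        for _ in range(n):
            steps += 1
    return steps

for n in (10, 100, 1000):
    print(n, count_linear(n), count_quadratic(n))
# step counts grow as n and n^2, respectively
```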
The term "analysis of algorithms" was coined by Donald Knuth. Algorithm analysis is an important part of a broader computational complexity theory, which provides theoretical estimates for the resources needed by any algorithm which solves a given computational problem. These estimates provide an insight into reasonable directions of search for efficient algorithms.
In theoretical analysis of algorithms it is common to estimate their complexity in the asymptotic sense, i.e., to estimate the complexity function for arbitrarily large input. Big O notation, Big-omega notation and Big-theta notation are used to this e
|
https://en.wikipedia.org/wiki/Antibody
|
An antibody (Ab), also known as an immunoglobulin (Ig), is a large, Y-shaped protein used by the immune system to identify and neutralize foreign objects such as pathogenic bacteria and viruses. The antibody recognizes a unique molecule of the pathogen, called an antigen. Each tip of the "Y" of an antibody contains a paratope (analogous to a lock) that is specific for one particular epitope (analogous to a key) on an antigen, allowing these two structures to bind together with precision. Using this binding mechanism, an antibody can tag a microbe or an infected cell for attack by other parts of the immune system, or can neutralize it directly (for example, by blocking a part of a virus that is essential for its invasion).
To allow the immune system to recognize millions of different antigens, the antigen-binding sites at both tips of the antibody come in an equally wide variety.
In contrast, the remainder of the antibody is relatively constant. In mammals, antibodies occur in a few variants, which define the antibody's class or isotype: IgA, IgD, IgE, IgG, and IgM.
The constant region at the trunk of the antibody includes sites involved in interactions with other components of the immune system. The class hence determines the function triggered by an antibody after binding to an antigen, in addition to some structural features.
Antibodies from different classes also differ in where they are released in the body and at what stage of an immune response.
Together with B and T
|
https://en.wikipedia.org/wiki/AMD
|
Advanced Micro Devices, Inc., commonly abbreviated as AMD, is an American multinational semiconductor company based in Santa Clara, California, that develops computer processors and related technologies for business and consumer markets.
The company was founded in 1969 by Jerry Sanders and a group of other technology professionals. AMD's early products were primarily memory chips and other components for computers. The company later expanded into the microprocessor market, competing with Intel, its main rival in the industry. In the early 2000s, AMD experienced significant growth and success, thanks in part to its strong position in the PC market and the success of its Athlon and Opteron processors. However, the company faced challenges in the late 2000s and early 2010s, as it struggled to keep up with Intel in the race to produce faster and more powerful processors. In the late 2010s, AMD regained some of its market share thanks to the success of its Ryzen processors which are now widely regarded as superior to Intel products in business applications including cloud applications. AMD's processors are used in a wide range of computing devices, including personal computers, servers, laptops, and gaming consoles. While it initially manufactured its own processors, the company later outsourced its manufacturing, a practice known as going fabless, after GlobalFoundries was spun off in 2009.
AMD's main products include microprocessors, motherboard chipsets, embedded processors,
|
https://en.wikipedia.org/wiki/Audio
|
Audio most commonly refers to sound, as it is transmitted in signal form. It may also refer to:
Sound
Audio signal, an electrical representation of sound
Audio frequency, a frequency in the audio spectrum
Digital audio, representation of sound in a form processed and/or stored by computers or digital electronics
Audio, audible content (media) in audio production and publishing
Semantic audio, extraction of symbols or meaning from audio
Stereophonic audio, method of sound reproduction that creates an illusion of multi-directional audible perspective
Audio equipment
Entertainment
AUDIO (group), an American R&B band of 5 brothers formerly known as TNT Boyz and as B5
Audio (album), an album by the Blue Man Group
Audio (magazine), a magazine published from 1947 to 2000
Audio (musician), British drum and bass artist
"Audio" (song), a song by LSD
Computing
<audio>, an HTML element; see HTML5 audio
See also
Acoustic (disambiguation)
Audible (disambiguation)
Audiobook
Radio broadcasting
Sound recording and reproduction
Sound reinforcement
|
https://en.wikipedia.org/wiki/Apoptosis
|
Apoptosis (from Ancient Greek ἀπόπτωσις, apóptōsis, "falling off") is a form of programmed cell death that occurs in multicellular organisms and in some eukaryotic, single-celled microorganisms such as yeast. Biochemical events lead to characteristic cell changes (morphology) and death. These changes include blebbing, cell shrinkage, nuclear fragmentation, chromatin condensation, DNA fragmentation, and mRNA decay. The average adult human loses between 50 and 70 billion cells each day due to apoptosis. For an average human child between eight and fourteen years old, approximately 20 to 30 billion cells are lost each day.
In contrast to necrosis, which is a form of traumatic cell death that results from acute cellular injury, apoptosis is a highly regulated and controlled process that confers advantages during an organism's life cycle. For example, the separation of fingers and toes in a developing human embryo occurs because cells between the digits undergo apoptosis. Unlike necrosis, apoptosis produces cell fragments called apoptotic bodies that phagocytes are able to engulf and remove before the contents of the cell can spill out onto surrounding cells and cause damage to them.
Because apoptosis cannot stop once it has begun, it is a highly regulated process. Apoptosis can be initiated through one of two pathways. In the intrinsic pathway the cell kills itself because it senses cell stress, while in the extrinsic pathway the cell kills itself because of signals from other cells. Weak external signals may also activate t
|
https://en.wikipedia.org/wiki/Adenylyl%20cyclase
|
Adenylate cyclase (EC 4.6.1.1, also commonly known as adenyl cyclase and adenylyl cyclase, abbreviated AC) is an enzyme with systematic name ATP diphosphate-lyase (cyclizing; 3′,5′-cyclic-AMP-forming). It catalyzes the following reaction:
ATP = 3′,5′-cyclic AMP + diphosphate
It has key regulatory roles in essentially all cells. It is the most polyphyletic known enzyme: six distinct classes have been described, all catalyzing the same reaction but representing unrelated gene families with no known sequence or structural homology. The best known class of adenylyl cyclases is class III or AC-III (Roman numerals are used for classes). AC-III occurs widely in eukaryotes and has important roles in many human tissues.
All classes of adenylyl cyclase catalyse the conversion of adenosine triphosphate (ATP) to 3',5'-cyclic AMP (cAMP) and pyrophosphate. Magnesium ions are generally required and appear to be closely involved in the enzymatic mechanism. The cAMP produced by AC then serves as a regulatory signal via specific cAMP-binding proteins, either transcription factors, enzymes (e.g., cAMP-dependent kinases), or ion transporters.
Classes
Class I
The first class of adenylyl cyclases occur in many bacteria including E. coli (as CyaA [unrelated to the Class II enzyme]). This was the first class of AC to be characterized. It was observed that E. coli deprived of glucose produce cAMP that serves as an internal signal to activate expression of genes for importing and metabolizing
|
https://en.wikipedia.org/wiki/Automated%20theorem%20proving
|
Automated theorem proving (also known as ATP or automated deduction) is a subfield of automated reasoning and mathematical logic dealing with proving mathematical theorems by computer programs. Automated reasoning over mathematical proof was a major impetus for the development of computer science.
Logical foundations
While the roots of formalised logic go back to Aristotle, the end of the 19th and early 20th centuries saw the development of modern logic and formalised mathematics. Frege's Begriffsschrift (1879) introduced both a complete propositional calculus and what is essentially modern predicate logic. His Foundations of Arithmetic, published in 1884, expressed (parts of) mathematics in formal logic. This approach was continued by Russell and Whitehead in their influential Principia Mathematica, first published 1910–1913, and with a revised second edition in 1927. Russell and Whitehead thought they could derive all mathematical truth using axioms and inference rules of formal logic, in principle opening up the process to automatisation. In 1920, Thoralf Skolem simplified a previous result by Leopold Löwenheim, leading to the Löwenheim–Skolem theorem and, in 1930, to the notion of a Herbrand universe and a Herbrand interpretation that allowed (un)satisfiability of first-order formulas (and hence the validity of a theorem) to be reduced to (potentially infinitely many) propositional satisfiability problems.
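Since the paragraph above ends with validity reduced to propositional satisfiability problems, a brute-force satisfiability check in Python makes the idea concrete (a toy sketch of ours, not a practical prover):

```python
from itertools import product

def satisfiable(formula, variables):
    """Try every truth assignment for the given variables."""
    return any(
        formula(dict(zip(variables, values)))
        for values in product([False, True], repeat=len(variables))
    )

# (p or q) and (not p or q) is satisfiable, e.g. with q = True:
print(satisfiable(lambda v: (v["p"] or v["q"]) and (not v["p"] or v["q"]), ["p", "q"]))
```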
In 1929, Mojżesz Presburger showed that the first-order theor
|
https://en.wikipedia.org/wiki/Ames%20test
|
The Ames test is a widely employed method that uses bacteria to test whether a given chemical can cause mutations in the DNA of the test organism. More formally, it is a biological assay to assess the mutagenic potential of chemical compounds. A positive test indicates that the chemical is mutagenic and therefore may act as a carcinogen, because cancer is often linked to mutation. The test serves as a quick and convenient assay to estimate the carcinogenic potential of a compound because standard carcinogen assays on mice and rats are time-consuming (taking two to three years to complete) and expensive. However, false positives and false negatives are known.
The procedure was described in a series of papers in the early 1970s by Bruce Ames and his group at the University of California, Berkeley.
General procedure
The Ames test uses several strains of the bacterium Salmonella typhimurium that carry mutations in genes involved in histidine synthesis. These strains are auxotrophic mutants, i.e. they require histidine for growth but cannot produce it. The method tests the capability of the tested substance to create mutations that result in a return to a "prototrophic" state, so that the cells can grow on a histidine-free medium.
The tester strains are specially constructed to detect either frameshift (e.g. strains TA-1537 and TA-1538) or point (e.g. strain TA-1531) mutations in the genes required to synthesize histidine, so that mutagens acting via different mechanisms ma
|
https://en.wikipedia.org/wiki/ACE%20inhibitor
|
Angiotensin-converting-enzyme inhibitors (ACE inhibitors) are a class of medication used primarily for the treatment of high blood pressure and heart failure. This class of medicine works by causing relaxation of blood vessels as well as a decrease in blood volume, which leads to lower blood pressure and decreased oxygen demand from the heart.
ACE inhibitors inhibit the activity of angiotensin-converting enzyme, an important component of the renin–angiotensin system which converts angiotensin I to angiotensin II, and hydrolyses bradykinin. Therefore, ACE inhibitors decrease the formation of angiotensin II, a vasoconstrictor, and increase the level of bradykinin, a peptide vasodilator. This combination is synergistic in lowering blood pressure. As a result of inhibiting the ACE enzyme in the bradykinin system, ACE inhibitor drugs allow for increased levels of bradykinin, which would normally be degraded. Bradykinin stimulates the production of prostaglandins. This mechanism can explain the two most common side effects seen with ACE inhibitors: angioedema and cough.
Frequently prescribed ACE inhibitors include benazepril, zofenopril, perindopril, trandolapril, captopril, enalapril, lisinopril, and ramipril.
Medical use
ACE inhibitors were initially approved for the treatment of hypertension and can be used alone or in combination with other anti-hypertensive medications. Later, they were found useful for other cardiovascular and kidney diseases including:
Acute myocardial infarction (heart
|
https://en.wikipedia.org/wiki/Anatomical%20Therapeutic%20Chemical%20Classification%20System
|
The Anatomical Therapeutic Chemical (ATC) Classification System is a drug classification system that classifies the active ingredients of drugs according to the organ or system on which they act and their therapeutic, pharmacological and chemical properties. Its purpose is to serve as an aid in monitoring drug use and as a tool for research aimed at improving the quality of medication use. It does not imply drug recommendation or efficacy. It is controlled by the World Health Organization Collaborating Centre for Drug Statistics Methodology (WHOCC), and was first published in 1976.
Coding system
This pharmaceutical coding system divides drugs into different groups according to the organ or system on which they act, their therapeutic intent or nature, and the drug's chemical characteristics. Different brands share the same code if they have the same active substance and indications. Each bottom-level ATC code stands for a pharmaceutically used substance, or a combination of substances, in a single indication (or use). This means that one drug can have more than one code: acetylsalicylic acid (aspirin), for example, has separate codes as a drug for local oral treatment, as a platelet inhibitor, and as an analgesic and antipyretic. One code can also represent more than one active ingredient: for example, a single code covers the combination of perindopril with amlodipine, two active ingredients that have their own codes when prescribed alone.
The ATC classification system is a strict hierarchy, meaning that each code ne
|
https://en.wikipedia.org/wiki/Aerodynamics
|
Aerodynamics (from Greek aero "air" + dynamics) is the study of the motion of air, particularly when affected by a solid object, such as an airplane wing. It involves topics covered in the field of fluid dynamics and its subfield of gas dynamics, and is an important domain of study in aeronautics. The term aerodynamics is often used synonymously with gas dynamics, the difference being that "gas dynamics" applies to the study of the motion of all gases, and is not limited to air. The formal study of aerodynamics began in the modern sense in the eighteenth century, although observations of fundamental concepts such as aerodynamic drag were recorded much earlier. Most of the early efforts in aerodynamics were directed toward achieving heavier-than-air flight, which was first demonstrated by Otto Lilienthal in 1891. Since then, the use of aerodynamics through mathematical analysis, empirical approximations, wind tunnel experimentation, and computer simulations has formed a rational basis for the development of heavier-than-air flight and a number of other technologies. Recent work in aerodynamics has focused on issues related to compressible flow, turbulence, and boundary layers and has become increasingly computational in nature.
History
Modern aerodynamics only dates back to the seventeenth century, but aerodynamic forces have been harnessed by humans for thousands of years in sailboats and windmills, and images and stories of flight appear throughout recorded history, such as the A
|
https://en.wikipedia.org/wiki/Antiderivative
|
In calculus, an antiderivative, inverse derivative, primitive function, primitive integral or indefinite integral of a function $f$ is a differentiable function $F$ whose derivative is equal to the original function $f$. This can be stated symbolically as $F' = f$. The process of solving for antiderivatives is called antidifferentiation (or indefinite integration), and its opposite operation is called differentiation, which is the process of finding a derivative. Antiderivatives are often denoted by capital Roman letters such as $F$ and $G$.
Antiderivatives are related to definite integrals through the second fundamental theorem of calculus: the definite integral of a function over a closed interval where the function is Riemann integrable is equal to the difference between the values of an antiderivative evaluated at the endpoints of the interval.
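For instance (a standard worked example, ours rather than the article's), with $F(x) = \tfrac{x^3}{3}$ an antiderivative of $f(x) = x^2$:
$$\int_1^2 x^2 \, dx = F(2) - F(1) = \frac{8}{3} - \frac{1}{3} = \frac{7}{3}.$$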
In physics, antiderivatives arise in the context of rectilinear motion (e.g., in explaining the relationship between position, velocity and acceleration). The discrete equivalent of the notion of antiderivative is antidifference.
Examples
The function $F(x) = \tfrac{x^3}{3}$ is an antiderivative of $f(x) = x^2$, since the derivative of $\tfrac{x^3}{3}$ is $x^2$. And since the derivative of a constant is zero, $x^2$ will have an infinite number of antiderivatives, such as $\tfrac{x^3}{3}$, $\tfrac{x^3}{3} + 1$, $\tfrac{x^3}{3} - 2$, etc. Thus, all the antiderivatives of $x^2$ can be obtained by changing the value of $C$ in $F(x) = \tfrac{x^3}{3} + C$, where $C$ is an arbitrary constant known as the constant of integration. Essentially, the graphs of antiderivatives of a given function are vertical translatio
|
https://en.wikipedia.org/wiki/Amorphous%20solid
|
In condensed matter physics and materials science, an amorphous solid (or non-crystalline solid) is a solid that lacks the long-range order that is characteristic of a crystal. The terms "glass" and "glassy solid" are sometimes used synonymously with amorphous solid; however, these terms refer specifically to amorphous materials that undergo a glass transition. Examples of amorphous solids include glasses, metallic glasses, and certain types of plastics and polymers.
Etymology
The term comes from the Greek a ("without") and morphé ("shape, form").
Structure
Amorphous materials have an internal structure consisting of interconnected structural blocks that can be similar to the basic structural units found in the corresponding crystalline phase of the same compound. Unlike in crystalline materials, however, no long-range order exists. Amorphous materials therefore cannot be defined by a finite unit cell. Statistical methods, such as the atomic density function and radial distribution function, are more useful in describing the structure of amorphous solids.
Although amorphous materials lack long range order, they exhibit localized order on small length scales. Localized order in amorphous materials can be categorized as short or medium range order. By convention, short range order extends only to the nearest neighbor shell, typically only 1-2 atomic spacings. Medium range order is then defined as the structural organization extending beyond the short range order, usually b
|
https://en.wikipedia.org/wiki/Romantic%20orientation
|
A person's romantic orientation, also called affectional orientation, is the classification of the sex or gender toward which a person experiences romantic attraction or with whom a person is likely to have a romantic relationship. The term is used alongside the term "sexual orientation", as well as being used alternatively to it, based upon the perspective that sexual attraction is only a single component of a larger concept.
For example, although a pansexual person may feel sexually attracted to people regardless of gender, the person may experience romantic attraction and intimacy with women only.
For asexual people, romantic orientation is often considered a more useful measure of attraction than sexual orientation.
The relationship between sexual attraction and romantic attraction is still under debate. Sexual and romantic attractions are often studied in conjunction. Even though studies of sexual and romantic spectrums are shedding light onto this under-researched subject, much is still not fully understood.
Romantic identities
People may or may not engage in purely emotional romantic relationships. The main identities relating to this are:
Aromantic, meaning someone who experiences little to no romantic attraction.
Grayromantic, or experiencing romantic attraction rarely, only under certain circumstances, or only weakly
Demiromantic: Romantic attraction towards any of the above but only after forming a deep emotional bond with the person(s) (demiromanticism).
: Romantic attraction towards pe
|
https://en.wikipedia.org/wiki/Abstraction
|
Abstraction is a conceptual process wherein general rules and concepts are derived from the usage and classification of specific examples, literal (real or concrete) signifiers, first principles, or other methods.
"An abstraction" is the outcome of this process—a concept that acts as a common noun for all subordinate concepts and connects any related concepts as a group, field, or category.
Conceptual abstractions may be formed by filtering the information content of a concept or an observable phenomenon, selecting only those aspects which are relevant for a particular purpose. For example, abstracting a leather soccer ball to the more general idea of a ball selects only the information on general ball attributes and behavior, excluding but not eliminating the other phenomenal and cognitive characteristics of that particular ball. In a type–token distinction, a type (e.g., a 'ball') is more abstract than its tokens (e.g., 'that leather soccer ball').
Abstraction in its secondary use is a material process, discussed in the themes below.
Origins
Thinking in abstractions is considered by anthropologists, archaeologists, and sociologists to be one of the key traits in modern human behaviour, which is believed to have developed between 50,000 and 100,000 years ago. Its development is likely to have been closely connected with the development of human language, which (whether spoken or written) appears to both involve and facilitate abstract thinking.
History
Abstraction invo
|
https://en.wikipedia.org/wiki/Analcime
|
Analcime or analcite is a white, gray, or colorless tectosilicate mineral. Analcime consists of hydrated sodium aluminium silicate in cubic crystalline form. Its chemical formula is NaAlSi2O6·H2O. Minor amounts of potassium and calcium substitute for sodium. A silver-bearing synthetic variety also exists (Ag-analcite). Analcime is usually classified as a zeolite mineral, but structurally and chemically it is more similar to the feldspathoids. Analcime is not classified as an isometric crystal: although the crystal structure appears to be isometric, it is usually off by only a fraction of an angle. However, there are truly isometric samples of the mineral, which makes its classification even more difficult. Because the differences between the samples are so slight, there is no merit in having multiple species names, and as a result analcime is a common example of a mineral occurring in multiple crystal systems and space groups. It was first described by French geologist Déodat de Dolomieu, who called it zéolithe dure, meaning hard zeolite. It was found in lava in Cyclops, Italy. The mineral is IMA approved and has been grandfathered, meaning the name analcime is still believed to refer to a valid species to this day.
Properties
Analcime crystals always look pseudocubic. Its common crystal forms include trapezohedron, truncated trapezohedron with cubic faces, and more rarely either as a truncated trapezohedron, or the crystals can take the shape of a truncated cube that
|
https://en.wikipedia.org/wiki/Alpha%20helix
|
An alpha helix (or α-helix) is a sequence of amino acids in a protein that are twisted into a coil (a helix).
The alpha helix is the most common structural arrangement in the secondary structure of proteins. It is also the most extreme type of local structure, and it is the local structure that is most easily predicted from a sequence of amino acids.
The alpha helix has a right-handed helix conformation in which every backbone N−H group hydrogen bonds to the backbone C=O group of the amino acid that is four residues earlier in the protein sequence.
Other names
The alpha helix is also commonly called a:
Pauling–Corey–Branson α-helix (from the names of three scientists who described its structure).
3.6₁₃-helix because there are 3.6 amino acids per helical turn, and 13 atoms are involved in the ring formed by each hydrogen bond.
Discovery
In the early 1930s, William Astbury showed that there were drastic changes in the X-ray fiber diffraction of moist wool or hair fibers upon significant stretching. The data suggested that the unstretched fibers had a coiled molecular structure with a characteristic repeat of ≈5.1 ångströms (0.51 nm).
Astbury initially proposed a linked-chain structure for the fibers. He later joined other researchers (notably the American chemist Maurice Huggins) in proposing that:
the unstretched protein molecules formed a helix (which he called the α-form)
the stretching caused the helix to uncoil, forming an extende
|
https://en.wikipedia.org/wiki/Axiology
|
Axiology (from Greek ἀξία, axia: "value, worth"; and -λογία, -logia: "study of") is the philosophical study of value. It includes questions about the nature and classification of values and about what kinds of things have value. It is intimately connected with various other philosophical fields that crucially depend on the notion of value, like ethics, aesthetics or philosophy of religion. It is also closely related to value theory and meta-ethics. The term was first used by Eduard von Hartmann in 1887 and by Paul Lapie in 1902.
The distinction between intrinsic and extrinsic value is central to axiology. One conceptualization holds that something is intrinsically valuable if it is good in itself or good for its own sake. It is usually held that intrinsic value depends on certain features of the valuable entity. For example, an experience may be said to be intrinsically valuable by virtue of being (because it is) pleasurable or beautiful or "true" (e.g., the ascertainment of a fact can be said to be valuable in itself). Extrinsic value, by contrast, is ascribed to things that are valuable only as a means to something else. Substantive theories of value try to determine which entities have intrinsic value. Monist theories hold that there is only one type of intrinsic value. The paradigm example of monist theories is hedonism, the thesis that only pleasure has intrinsic value. Pluralist theories, on the other hand, contend that there are various different types of intrinsic value, for
|
https://en.wikipedia.org/wiki/Agar
|
Agar, or agar-agar, is a jelly-like substance consisting of polysaccharides obtained from the cell walls of some species of red algae, primarily from "ogonori" (Gracilaria) and "tengusa" (Gelidiaceae). As found in nature, agar is a mixture of two components, the linear polysaccharide agarose and a heterogeneous mixture of smaller molecules called agaropectin. It forms the supporting structure in the cell walls of certain species of algae and is released on boiling. These algae are known as agarophytes, belonging to the Rhodophyta (red algae) phylum. The processing of food-grade agar removes the agaropectin, and the commercial product is essentially pure agarose.
Agar has been used as an ingredient in desserts throughout Asia and also as a solid substrate to contain culture media for microbiological work. Agar can be used as a laxative; an appetite suppressant; a vegan substitute for gelatin; a thickener for soups; in fruit preserves, ice cream, and other desserts; as a clarifying agent in brewing; and for sizing paper and fabrics.
Etymology
The word agar comes from agar-agar, the Malay name for red algae (Gigartina, Eucheuma, Gracilaria) from which the jelly is produced. It is also known as Kanten (from the phrase kan-zarashi tokoroten, or "cold-exposed agar"), Japanese isinglass, China grass, Ceylon moss or Jaffna moss. Gracilaria edulis or its synonym G. lichenoides is specifically referred to as agal-agal or Ceylon agar.
History
Macroalgae have been u
|
https://en.wikipedia.org/wiki/Boron%20nitride
|
Boron nitride is a thermally and chemically resistant refractory compound of boron and nitrogen with the chemical formula BN. It exists in various crystalline forms that are isoelectronic to a similarly structured carbon lattice. The hexagonal form corresponding to graphite is the most stable and soft among BN polymorphs, and is therefore used as a lubricant and an additive to cosmetic products. The cubic (zincblende aka sphalerite structure) variety analogous to diamond is called c-BN; it is softer than diamond, but its thermal and chemical stability is superior. The rare wurtzite BN modification is similar to lonsdaleite but slightly softer than the cubic form.
Because of excellent thermal and chemical stability, boron nitride ceramics are used in high-temperature equipment and metal casting. Boron nitride has potential use in nanotechnology.
Structure
Boron nitride exists in multiple forms that differ in the arrangement of the boron and nitrogen atoms, giving rise to varying bulk properties of the material.
Amorphous form (a-BN)
The amorphous form of boron nitride (a-BN) is non-crystalline, lacking any long-distance regularity in the arrangement of its atoms. It is analogous to amorphous carbon.
All other forms of boron nitride are crystalline.
Hexagonal form (h-BN)
The most stable crystalline form is the hexagonal one, also called h-BN, α-BN, g-BN, and graphitic boron nitride. Hexagonal boron nitride (point group = D6h; space group = P63/mmc) has a layered structur
|
https://en.wikipedia.org/wiki/Brain
|
The brain (or encephalon) is an organ that serves as the center of the nervous system in all vertebrate and most invertebrate animals. The brain is the largest cluster of neurons in the body and is typically located in the head, usually near organs for special senses such as vision, hearing and olfaction. It is the most specialized and energy-consuming organ in the body, responsible for complex sensory perception, motor control, endocrine regulation and the development of intelligence.
While invertebrate brains arise from paired segmental ganglia (each of which is only responsible for the respective body segment) of the ventral nerve cord, vertebrate brains develop axially from the midline dorsal nerve cord as a vesicular enlargement at the rostral end of the neural tube, with centralized control over all body segments. All vertebrate brains can be embryonically divided into three parts: the forebrain (prosencephalon, subdivided into telencephalon and diencephalon), midbrain (mesencephalon) and hindbrain (rhombencephalon, subdivided into metencephalon and myelencephalon). The spinal cord, which directly interacts with somatic functions below the head, can be considered a caudal extension of the myelencephalon enclosed inside the vertebral column. Together, the brain and spinal cord constitute the central nervous system in all vertebrates.
In humans, the cerebral cortex contains approximately 14–16 billion neurons, and the estimated number of neurons in the cerebellum is 55–
|
https://en.wikipedia.org/wiki/Bass%20%28sound%29
|
Bass (also called bottom end) describes tones of low (also called "deep") frequency, pitch and range from 16 to 250 Hz (C0 to middle C4) and bass instruments that produce tones in the low-pitched range C2–C4. They belong to different families of instruments and can cover a wide range of musical roles. Since producing low pitches usually requires a long air column or string, and for stringed instruments, a large hollow body, the string and wind bass instruments are usually the largest instruments in their families or instrument classes.
Musical role
When bass notes are played in a musical ensemble such as an orchestra, they are frequently used to provide a counterpoint or counter-melody, in a harmonic context either to outline or juxtapose the progression of the chords, or with percussion to underline the rhythm.
Rhythm section
In popular music, the bass part, which is called the "bassline", typically provides harmonic and rhythmic support to the band. The bass player is a member of the rhythm section in a band, along with the drummer, rhythm guitarist, and, in some cases, a keyboard instrument player (e.g., piano or Hammond organ). The bass player emphasizes the root or fifth of the chord in their basslines (and to a lesser degree, the third of the chord) and accents the strong beats.
Kinds of bass harmony
In classical music, different forms of bass are: basso concertante, or basso recitante; the bass voice of the chorus; the bass which accompanies the softer passages of
|
https://en.wikipedia.org/wiki/Boron
|
Boron is a chemical element with the symbol B and atomic number 5. In its crystalline form it is a brittle, dark, lustrous metalloid; in its amorphous form it is a brown powder. As the lightest element of the boron group it has three valence electrons for forming covalent bonds, resulting in many compounds such as boric acid, the mineral sodium borate, and the ultra-hard crystals of boron carbide and boron nitride.
Boron is synthesized entirely by cosmic ray spallation and supernovae and not by stellar nucleosynthesis, so it is a low-abundance element in the Solar System and in the Earth's crust. It constitutes about 0.001 percent by weight of Earth's crust. It is concentrated on Earth by the water-solubility of its more common naturally occurring compounds, the borate minerals. These are mined industrially as evaporites, such as borax and kernite. The largest known deposits are in Turkey, the largest producer of boron minerals.
Elemental boron is a metalloid that is found in small amounts in meteoroids but chemically uncombined boron is not otherwise found naturally on Earth. Industrially, the very pure element is produced with difficulty because of contamination by carbon or other elements that resist removal. Several allotropes exist: amorphous boron is a brown powder; crystalline boron is silvery to black, extremely hard (about 9.5 on the Mohs scale), and a poor electrical conductor at room temperature. The primary use of the element itself is as boron filaments with ap
|
https://en.wikipedia.org/wiki/Baseball%20statistics
|
Baseball statistics play an important role in evaluating the progress of a player or team.
Since the flow of a baseball game has natural breaks to it, and normally players act individually rather than performing in clusters, the sport lends itself to easy record-keeping and statistics. Statistics have been recorded since the game's earliest beginnings as a distinct sport in the middle of the nineteenth century, and as such are extensively available from leagues such as the National Association of Professional Base Ball Players and the Negro leagues, although the consistency with which these records have been kept, the standards by which they were calculated, and their accuracy have varied.
Since the National League (which along with the American League constitutes contemporary Major League Baseball) was founded in 1876, statistics in the most elite levels of professional baseball have been kept to a reasonably consistent standard which has continually evolved in tandem with advancement in available technology.
Development
The practice of keeping records of player achievements was started in the 19th century by Henry Chadwick. Based on his experience with the sport of cricket, Chadwick devised the predecessors to modern-day statistics including batting average, runs scored, and runs allowed.
Traditionally, statistics such as batting average (the number of hits divided by the number of at bats) and earned run average (the average number of earned runs allowed
|
https://en.wikipedia.org/wiki/List%20of%20Major%20League%20Baseball%20career%20total%20bases%20leaders
|
In baseball statistics, total bases (TB) is the number of bases a player has gained with hits. It is a weighted sum for which the weight value is 1 for a single, 2 for a double, 3 for a triple and 4 for a home run. Only bases attained from hits count toward this total. Reaching base by other means (such as a base on balls) or advancing further after the hit (such as when a subsequent batter gets a hit) does not increase the player's total bases.
The total bases divided by the number of at bats is the player's slugging average.
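A minimal sketch of the weighted sum and the slugging-average ratio described above (the sample numbers are illustrative only, not real career statistics):

```python
# Total bases: weighted sum over hit types; slugging average: TB / AB.
def total_bases(singles: int, doubles: int, triples: int, home_runs: int) -> int:
    """1 per single, 2 per double, 3 per triple, 4 per home run."""
    return singles + 2 * doubles + 3 * triples + 4 * home_runs

def slugging_average(tb: int, at_bats: int) -> float:
    """Slugging average is total bases divided by at bats."""
    return tb / at_bats

tb = total_bases(singles=100, doubles=30, triples=5, home_runs=20)
print(tb)                                            # 255
print(round(slugging_average(tb, at_bats=500), 3))   # 0.51
```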
Hank Aaron is the career leader in total bases with 6,856. Albert Pujols (6,211), Stan Musial (6,134), and Willie Mays (6,080) are the only other players with at least 6,000 career total bases.
As of October 2023, no active players are in the top 100 for career total bases. The active leader is Nelson Cruz, in 113th with 3,847.
Key
List
Stats updated as of October 1, 2023.
Notes
External links
Baseball Reference – Career Leaders & Records for Total Bases
Major League Baseball statistics
|
https://en.wikipedia.org/wiki/Hit%20%28baseball%29
|
In baseball statistics, a hit (denoted by H), also called a base hit, is credited to a batter when the batter safely reaches or passes first base after hitting the ball into fair territory with neither the benefit of an error nor a fielder's choice.
Scoring a hit
To achieve a hit, the batter must reach first base before any fielder can either tag him with the ball, throw to another player protecting the base before the batter reaches it, or tag first base while carrying the ball. The hit is scored the moment the batter reaches first base safely; if he is put out while attempting to stretch his hit to a double or triple or home run on the same play, he still gets credit for a hit (according to the last base he reached safely on the play).
If a batter reaches first base because of offensive interference by a preceding runner (including if a preceding runner is hit by a batted ball), he is also credited with a hit.
Types of hits
A hit for one base is called a single, for two bases a double, and for three bases a triple. A home run is also scored as a hit. Doubles, triples, and home runs are also called extra base hits.
An "infield hit" is a hit where the ball does not leave the infield. Infield hits are uncommon by nature, and most often earned by speedy runners.
Pitching a no-hitter
A no-hitter is a game in which one of the teams prevented the other from getting a hit. Throwing a no-hitter is rare and considered an extraordinary accomplishment for a pitcher or pitching
|
https://en.wikipedia.org/wiki/On-base%20percentage
|
In baseball statistics, on-base percentage (OBP) measures how frequently a batter reaches base. An official Major League Baseball (MLB) statistic since 1984, it is sometimes referred to as on-base average (OBA), as it is rarely presented as a true percentage.
Generally defined as "how frequently a batter reaches base per plate appearance", OBP is specifically calculated as the ratio of a batter's times on base (the sum of hits, bases on balls, and times hit by pitch) to the sum of at bats, bases on balls, hit by pitch, and sacrifice flies. OBP does not credit the batter for reaching base on fielding errors, fielder's choice, uncaught third strikes, fielder's obstruction, or catcher's interference.
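The definition above transcribes directly into code; the inputs in this sketch are hypothetical season totals, not data for any real player:

```python
# OBP = (H + BB + HBP) / (AB + BB + HBP + SF), per the definition above.
def on_base_percentage(h: int, bb: int, hbp: int, ab: int, sf: int) -> float:
    """Times on base divided by plate appearances counted toward OBP."""
    return (h + bb + hbp) / (ab + bb + hbp + sf)

print(round(on_base_percentage(h=150, bb=60, hbp=5, ab=500, sf=5), 3))  # 0.377
```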
OBP is added to slugging average (SLG) to determine on-base plus slugging (OPS).
The OBP of all batters faced by one pitcher or team is referred to as "on-base against".
On-base percentage is calculable for professional teams dating back to the first year of National Association of Professional Base Ball Players competition in 1871, because the component values of its formula have been recorded in box scores ever since.
History
The statistic was invented in the late 1940s by Brooklyn Dodgers statistician Allan Roth with then-Dodgers general manager Branch Rickey. In 1954, Rickey, who was then the general manager of the Pittsburgh Pirates, was featured in a Life Magazine graphic in which the formula for on-base percentage was shown as the first component of an all-encompassin
|
https://en.wikipedia.org/wiki/Banjo
|
The banjo is a stringed instrument with a thin membrane stretched over a frame or cavity to form a resonator. The membrane is typically circular, in modern forms usually made of plastic, originally of animal skin. Early forms of the instrument were fashioned by African Americans and had African antecedents. In the 19th century, interest in the instrument spread across the United States and United Kingdom through traveling minstrel shows, followed by mass-production and mail-order sales, including instruction method books. The inexpensive or home-made banjo remained part of rural folk culture, but 5-string and 4-string banjos also became popular for home parlour music entertainment, college music clubs, and early 20th century jazz bands. By the early 21st century, the banjo was most frequently associated with folk, bluegrass and country music, but was also used in some rock, pop and even hip-hop music. Among rock bands, the Eagles, Led Zeppelin, and the Grateful Dead have used the five-string banjo in some of their songs.
Historically, the banjo occupied a central place in Black American traditional music and rural folk culture before entering the mainstream via the minstrel shows of the 19th century. Along with the fiddle, the banjo is a mainstay of American styles of music, such as bluegrass and old-time music. It is also very frequently used in Dixieland jazz, as well as in Caribbean genres like biguine, calypso and mento.
History
Early origi
|
https://en.wikipedia.org/wiki/Binomial%20distribution
|
In probability theory and statistics, the binomial distribution with parameters n and p is the discrete probability distribution of the number of successes in a sequence of n independent experiments, each asking a yes–no question, and each with its own Boolean-valued outcome: success (with probability p) or failure (with probability q = 1 − p). A single success/failure experiment is also called a Bernoulli trial or Bernoulli experiment, and a sequence of outcomes is called a Bernoulli process; for a single trial, i.e., n = 1, the binomial distribution is a Bernoulli distribution. The binomial distribution is the basis for the popular binomial test of statistical significance.
The binomial distribution is frequently used to model the number of successes in a sample of size n drawn with replacement from a population of size N. If the sampling is carried out without replacement, the draws are not independent and so the resulting distribution is a hypergeometric distribution, not a binomial one. However, for N much larger than n, the binomial distribution remains a good approximation, and is widely used.
Definitions
Probability mass function
In general, if the random variable X follows the binomial distribution with parameters n ∈ ℕ and p ∈ [0,1], we write X ~ B(n, p). The probability of getting exactly k successes in n independent Bernoulli trials (with the same rate p) is given by the probability mass function:
f(k, n, p) = Pr(X = k) = C(n, k) pᵏ (1 − p)ⁿ⁻ᵏ
for k = 0, 1, 2, ..., n, where
C(n, k) = n! / (k! (n − k)!)
is the binomial coefficient, hence the
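A minimal sketch of the probability mass function as reconstructed above, using only the Python standard library:

```python
# Pr(X = k) for X ~ B(n, p): C(n, k) * p^k * (1 - p)^(n - k).
from math import comb

def binomial_pmf(k: int, n: int, p: float) -> float:
    return comb(n, k) * p**k * (1 - p) ** (n - k)

# Probability of exactly 3 heads in 5 fair coin flips: C(5,3)/32 = 10/32.
print(binomial_pmf(3, 5, 0.5))                          # 0.3125
print(sum(binomial_pmf(k, 5, 0.5) for k in range(6)))   # 1.0 (sanity check)
```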
|
https://en.wikipedia.org/wiki/Biostatistics
|
Biostatistics (also known as biometry) is a branch of statistics that applies statistical methods to a wide range of topics in biology. It encompasses the design of biological experiments, the collection and analysis of data from those experiments and the interpretation of the results.
History
Biostatistics and genetics
Biostatistical modeling forms an important part of numerous modern biological theories. Genetics studies have, since their beginning, used statistical concepts to understand observed experimental results. Some geneticists even contributed statistical advances with the development of methods and tools. Gregor Mendel started the genetics studies investigating genetic segregation patterns in families of peas and used statistics to explain the collected data. In the early 1900s, after the rediscovery of Mendel's Mendelian inheritance work, there were gaps in understanding between genetics and evolutionary Darwinism. Francis Galton tried to expand Mendel's discoveries with human data and proposed a different model, with fractions of the heredity coming from each ancestor composing an infinite series. He called this the theory of the "Law of Ancestral Heredity". His ideas were strongly disputed by William Bateson, who followed Mendel's conclusion that genetic inheritance comes exclusively from the parents, half from each of them. This led to a vigorous debate between the biometricians, who supported Galton's ideas, such as Raphael Weldon, Arthur Dukinfield Darbi
|
https://en.wikipedia.org/wiki/Blowfish%20%28disambiguation%29
|
Blowfish are species of fish in the family Tetraodontidae.
Blowfish may also refer to:
Porcupinefish, belonging to the family Diodontidae
Blowfish (cipher), an encryption algorithm
Blowfish (company), an American erotic goods supplier
The Blowfish, a satirical newspaper at Brandeis University
Lexington County Blowfish, a baseball team
Vice President Blowfish, a character in the animated series Adventure Time episode "President Porpoise Is Missing!"
See also
Hootie & the Blowfish, an American rock band
|
https://en.wikipedia.org/wiki/Banach%20space
|
In mathematics, more specifically in functional analysis, a Banach space is a complete normed vector space. Thus, a Banach space is a vector space with a metric that allows the computation of vector length and distance between vectors and is complete in the sense that a Cauchy sequence of vectors always converges to a well-defined limit that is within the space.
Banach spaces are named after the Polish mathematician Stefan Banach, who introduced this concept and studied it systematically in 1920–1922 along with Hans Hahn and Eduard Helly.
Maurice René Fréchet was the first to use the term "Banach space" and Banach in turn then coined the term "Fréchet space".
Banach spaces originally grew out of the study of function spaces by Hilbert, Fréchet, and Riesz earlier in the century. Banach spaces play a central role in functional analysis. In other areas of analysis, the spaces under study are often Banach spaces.
Definition
A Banach space is a complete normed space (X, ‖·‖).
A normed space is a pair (X, ‖·‖) consisting of a vector space X over a scalar field K (where K is commonly ℝ or ℂ) together with a distinguished norm ‖·‖ : X → ℝ. Like all norms, this norm induces a translation invariant distance function, called the canonical or (norm) induced metric, defined for all vectors x, y ∈ X by
d(x, y) := ‖x − y‖ = ‖y − x‖.
This makes X into a metric space (X, d).
A sequence x₁, x₂, x₃, ... is called Cauchy in (X, d), or d-Cauchy, or ‖·‖-Cauchy, if for every real r > 0 there exists some index N such that
d(xₙ, xₘ) = ‖xₙ − xₘ‖ < r
whenever n and m are greater than N.
The normed space (X, ‖·‖) is called a Banach space and the canonical metric
|
https://en.wikipedia.org/wiki/Blood
|
Blood is a body fluid in the circulatory system of humans and other vertebrates that delivers necessary substances such as nutrients and oxygen to the cells, and transports metabolic waste products away from those same cells. Blood in the circulatory system is also known as peripheral blood, and the blood cells it carries, peripheral blood cells.
Blood is composed of blood cells suspended in blood plasma. Plasma, which constitutes 55% of blood fluid, is mostly water (92% by volume), and contains proteins, glucose, mineral ions, hormones, carbon dioxide (plasma being the main medium for excretory product transportation), and blood cells themselves. Albumin is the main protein in plasma, and it functions to regulate the colloidal osmotic pressure of blood. The blood cells are mainly red blood cells (also called RBCs or erythrocytes), white blood cells (also called WBCs or leukocytes), and in mammals platelets (also called thrombocytes). The most abundant cells in vertebrate blood are red blood cells. These contain hemoglobin, an iron-containing protein, which facilitates oxygen transport by reversibly binding to this respiratory gas thereby increasing its solubility in blood. In contrast, carbon dioxide is mostly transported extracellularly as bicarbonate ion transported in plasma.
Vertebrate blood is bright red when its hemoglobin is oxygenated and dark red when it is deoxygenated.
Some animals, such as crustaceans and mollusks, use hemocyanin to carry oxygen, instead of he
|
https://en.wikipedia.org/wiki/BCPL
|
BCPL ("Basic Combined Programming Language") is a procedural, imperative, and structured programming language. Originally intended for writing compilers for other languages, BCPL is no longer in common use. However, its influence is still felt because a stripped down and syntactically changed version of BCPL, called B, was the language on which the C programming language was based. BCPL introduced several features of many modern programming languages, including using curly braces to delimit code blocks. BCPL was first implemented by Martin Richards of the University of Cambridge in 1967.
Design
BCPL was designed so that small and simple compilers could be written for it; reputedly some compilers could be run in 16 kilobytes. Furthermore, the original compiler, itself written in BCPL, was easily portable. BCPL was thus a popular choice for bootstrapping a system. A major reason for the compiler's portability lay in its structure. It was split into two parts: the front end parsed the source and generated O-code, an intermediate language. The back end took the O-code and translated it into the machine code for the target machine. Only about one fifth of the compiler's code needed to be rewritten to support a new machine, a task that usually took between 2 and 5 person-months. This approach became common practice later (e.g. Pascal, Java).
The language is unusual in having only one data type: a word, a fixed number of bits, usually chosen to align with the architecture's machine word and
|
https://en.wikipedia.org/wiki/Borsuk%E2%80%93Ulam%20theorem
|
In mathematics, the Borsuk–Ulam theorem states that every continuous function from an n-sphere into Euclidean n-space maps some pair of antipodal points to the same point. Here, two points on a sphere are called antipodal if they are in exactly opposite directions from the sphere's center.
Formally: if f : Sⁿ → ℝⁿ is continuous then there exists an x ∈ Sⁿ such that: f(−x) = f(x).
The case n = 1 can be illustrated by saying that there always exists a pair of opposite points on the Earth's equator with the same temperature. The same is true for any circle. This assumes the temperature varies continuously in space, which is, however, not always the case.
The case n = 2 is often illustrated by saying that at any moment, there is always a pair of antipodal points on the Earth's surface with equal temperatures and equal barometric pressures, assuming that both parameters vary continuously in space. Since temperature, pressure or other such physical variables do not necessarily vary continuously, the predictions of the theorem are unlikely to be true in some necessary sense (as following from a mathematical necessity).
The Borsuk–Ulam theorem has several equivalent statements in terms of odd functions. Recall that Sⁿ is the n-sphere and Bⁿ is the n-ball:
If g : Sⁿ → ℝⁿ is a continuous odd function, then there exists an x ∈ Sⁿ such that: g(x) = 0.
If g : Bⁿ → ℝⁿ is a continuous function which is odd on Sⁿ⁻¹ (the boundary of Bⁿ), then there exists an x ∈ Bⁿ such that: g(x) = 0.
History
According to , the first historical mention of the statement of the Borsuk–Ulam theor
|
https://en.wikipedia.org/wiki/BQP
|
In computational complexity theory, bounded-error quantum polynomial time (BQP) is the class of decision problems solvable by a quantum computer in polynomial time, with an error probability of at most 1/3 for all instances. It is the quantum analogue to the complexity class BPP.
A decision problem is a member of BQP if there exists a quantum algorithm (an algorithm that runs on a quantum computer) that solves the decision problem with high probability and is guaranteed to run in polynomial time. A run of the algorithm will correctly solve the decision problem with a probability of at least 2/3.
Definition
BQP can be viewed as the languages associated with certain bounded-error uniform families of quantum circuits. A language L is in BQP if and only if there exists a polynomial-time uniform family of quantum circuits {Qₙ : n ∈ ℕ}, such that
For all n ∈ ℕ, Qₙ takes n qubits as input and outputs 1 bit
For all x in L, Pr(Q_{|x|}(x) = 1) ≥ 2/3
For all x not in L, Pr(Q_{|x|}(x) = 0) ≥ 2/3
Alternatively, one can define BQP in terms of quantum Turing machines. A language L is in BQP if and only if there exists a polynomial quantum Turing machine that accepts L with an error probability of at most 1/3 for all instances.
Similarly to other "bounded error" probabilistic classes the choice of 1/3 in the definition is arbitrary. We can run the algorithm a constant number of times and take a majority vote to achieve any desired probability of correctness less than 1, using the Chernoff bound. The complexity class is unchanged by allowing error
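The majority-vote amplification just described can be illustrated with a classical Monte Carlo sketch (the repetition and trial counts are arbitrary choices; this simulates the probabilistic argument, not a quantum computer):

```python
# Majority voting over independent runs of a 2/3-correct procedure drives the
# overall error down; the Chernoff bound makes the exponential decay precise.
import random

def majority_of(repeats: int, p_correct: float = 2 / 3) -> bool:
    """Simulate `repeats` runs of a procedure that is right with prob. p_correct."""
    correct = sum(random.random() < p_correct for _ in range(repeats))
    return correct > repeats / 2

for r in (1, 5, 15, 51):
    trials = 100_000
    acc = sum(majority_of(r) for _ in range(trials)) / trials
    print(f"{r:>2} repetitions: empirical success rate ~ {acc:.4f}")
```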
|
https://en.wikipedia.org/wiki/Brouwer%20fixed-point%20theorem
|
Brouwer's fixed-point theorem is a fixed-point theorem in topology, named after L. E. J. (Bertus) Brouwer. It states that for any continuous function f mapping a nonempty compact convex set to itself, there is a point x₀ such that f(x₀) = x₀. The simplest forms of Brouwer's theorem are for continuous functions from a closed interval in the real numbers to itself or from a closed disk to itself. A more general form than the latter is for continuous functions from a nonempty convex compact subset of Euclidean space to itself.
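The one-dimensional case admits a simple constructive illustration: for continuous f : [0, 1] → [0, 1], the function g(x) = f(x) − x satisfies g(0) ≥ 0 and g(1) ≤ 0, so bisection locates a fixed point. A sketch (the example function is an arbitrary choice):

```python
# Bisection on g(x) = f(x) - x for a continuous self-map of [0, 1].
import math

def fixed_point(f, lo: float = 0.0, hi: float = 1.0, tol: float = 1e-12) -> float:
    # g(lo) >= 0 and g(hi) <= 0 because f maps [0, 1] into [0, 1].
    for _ in range(200):
        mid = (lo + hi) / 2
        if f(mid) - mid > 0:
            lo = mid          # fixed point lies above mid
        else:
            hi = mid          # fixed point lies at or below mid
        if hi - lo < tol:
            break
    return (lo + hi) / 2

f = lambda x: math.cos(x)     # maps [0, 1] into [0, 1]
x0 = fixed_point(f)
print(x0, f(x0))              # both ~ 0.739085, a fixed point of cos
```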
Among hundreds of fixed-point theorems, Brouwer's is particularly well known, due in part to its use across numerous fields of mathematics. In its original field, this result is one of the key theorems characterizing the topology of Euclidean spaces, along with the Jordan curve theorem, the hairy ball theorem, the invariance of dimension and the Borsuk–Ulam theorem. This gives it a place among the fundamental theorems of topology. The theorem is also used for proving deep results about differential equations and is covered in most introductory courses on differential geometry. It appears in unlikely fields such as game theory. In economics, Brouwer's fixed-point theorem and its extension, the Kakutani fixed-point theorem, play a central role in the proof of existence of general equilibrium in market economies as developed in the 1950s by economics Nobel prize winners Kenneth Arrow and Gérard Debreu.
The theorem was first studied in view of work on differential eq
|
https://en.wikipedia.org/wiki/Boltzmann%20distribution
|
In statistical mechanics and mathematics, a Boltzmann distribution (also called Gibbs distribution) is a probability distribution or probability measure that gives the probability that a system will be in a certain state as a function of that state's energy and the temperature of the system. The distribution is expressed in the form:
pᵢ ∝ exp(−εᵢ / (kT))
where pᵢ is the probability of the system being in state i, exp is the exponential function, εᵢ is the energy of that state, and the constant kT of the distribution is the product of the Boltzmann constant k and thermodynamic temperature T. The symbol ∝ denotes proportionality (see below for the proportionality constant).
The term system here has a wide meaning; it can range from a collection of 'sufficient number' of atoms or a single atom to a macroscopic system such as a natural gas storage tank. Therefore the Boltzmann distribution can be used to solve a wide variety of problems. The distribution shows that states with lower energy will always have a higher probability of being occupied.
The ratio of probabilities of two states is known as the Boltzmann factor and characteristically only depends on the states' energy difference:
pᵢ / pⱼ = exp((εⱼ − εᵢ) / (kT))
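A small numerical sketch of the reconstructed formulas (the energies, given here in units of kT, are arbitrary illustrative values):

```python
# Probabilities proportional to exp(-E_i / kT), normalized over all states.
import math

def boltzmann(energies, kT: float = 1.0):
    weights = [math.exp(-e / kT) for e in energies]
    z = sum(weights)                   # partition function (normalization)
    return [w / z for w in weights]

probs = boltzmann([0.0, 1.0, 2.0])
print([round(p, 3) for p in probs])    # lower energy -> higher probability
print(probs[0] / probs[1], math.exp(1.0))  # ratio equals the Boltzmann factor
```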
The Boltzmann distribution is named after Ludwig Boltzmann who first formulated it in 1868 during his studies of the statistical mechanics of gases in thermal equilibrium. Boltzmann's statistical work is borne out in his paper “On the Relationship between the Second Fundamental Theorem of the Mechanical Theory of Heat and Probability
|
https://en.wikipedia.org/wiki/Cell%20%28biology%29
|
The cell is the basic structural and functional unit of all forms of life. Every cell consists of cytoplasm enclosed within a membrane, and contains many macromolecules such as proteins, DNA and RNA, as well as many small molecules of nutrients and metabolites. The term comes from the Latin word meaning 'small room'.
Cells can acquire specified functions and carry out various tasks such as replication, DNA repair, protein synthesis, and motility. Cells are capable of specialization and movement.
Most plant and animal cells are only visible under a light microscope, with dimensions between 1 and 100 micrometres. Electron microscopy gives a much higher resolution showing greatly detailed cell structure. Organisms can be classified as unicellular (consisting of a single cell such as bacteria) or multicellular (including plants and animals). Most unicellular organisms are classed as microorganisms.
The study of cells and how they work has led to many other studies in related areas of biology, including: discovery of DNA, cancer systems biology, aging and developmental biology.
Cell biology is the study of cells, which were discovered by Robert Hooke in 1665, who named them for their resemblance to cells inhabited by Christian monks in a monastery. Cell theory, first developed in 1839 by Matthias Jakob Schleiden and Theodor Schwann, states that all organisms are composed of one or more cells, that cells are the fundamental unit of structure an
|
https://en.wikipedia.org/wiki/Binary%20search%20algorithm
|
In computer science, binary search, also known as half-interval search, logarithmic search, or binary chop, is a search algorithm that finds the position of a target value within a sorted array. Binary search compares the target value to the middle element of the array. If they are not equal, the half in which the target cannot lie is eliminated and the search continues on the remaining half, again taking the middle element to compare to the target value, and repeating this until the target value is found. If the search ends with the remaining half being empty, the target is not in the array.
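A standard iterative implementation of the procedure just described (a sketch in Python; it returns -1 when the remaining half becomes empty):

```python
# Binary search over a sorted list `a`; returns the index of `target` or -1.
def binary_search(a, target):
    lo, hi = 0, len(a) - 1
    while lo <= hi:
        mid = (lo + hi) // 2      # middle element of the remaining half
        if a[mid] == target:
            return mid
        elif a[mid] < target:
            lo = mid + 1          # target can only lie in the upper half
        else:
            hi = mid - 1          # target can only lie in the lower half
    return -1                     # remaining half is empty: not present

print(binary_search([1, 3, 4, 6, 8, 9, 11], 6))   # 3
print(binary_search([1, 3, 4, 6, 8, 9, 11], 5))   # -1
```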
Binary search runs in logarithmic time in the worst case, making O(log n) comparisons, where n is the number of elements in the array. Binary search is faster than linear search except for small arrays. However, the array must be sorted first to be able to apply binary search. There are specialized data structures designed for fast searching, such as hash tables, that can be searched more efficiently than binary search. However, binary search can be used to solve a wider range of problems, such as finding the next-smallest or next-largest element in the array relative to the target even if it is absent from the array.
There are numerous variations of binary search. In particular, fractional cascading speeds up binary searches for the same value in multiple arrays. Fractional cascading efficiently solves a number of search problems in computational geometry and in numerous other fields. Exponent
|
https://en.wikipedia.org/wiki/Base%20pair
|
A base pair (bp) is a fundamental unit of double-stranded nucleic acids consisting of two nucleobases bound to each other by hydrogen bonds. They form the building blocks of the DNA double helix and contribute to the folded structure of both DNA and RNA. Dictated by specific hydrogen bonding patterns, "Watson–Crick" (or "Watson–Crick–Franklin") base pairs (guanine–cytosine and adenine–thymine) allow the DNA helix to maintain a regular helical structure that is subtly dependent on its nucleotide sequence. The complementary nature of this base-paired structure provides a redundant copy of the genetic information encoded within each strand of DNA. The regular structure and data redundancy provided by the DNA double helix make DNA well suited to the storage of genetic information, while base-pairing between DNA and incoming nucleotides provides the mechanism through which DNA polymerase replicates DNA and RNA polymerase transcribes DNA into RNA. Many DNA-binding proteins can recognize specific base-pairing patterns that identify particular regulatory regions of genes.
Intramolecular base pairs can occur within single-stranded nucleic acids. This is particularly important in RNA molecules (e.g., transfer RNA), where Watson–Crick base pairs (guanine–cytosine and adenine–uracil) permit the formation of short double-stranded helices, and a wide variety of non–Watson–Crick interactions (e.g., G–U or A–A) allow RNAs to fold into a vast range of specific three-dimensional structures.
|
https://en.wikipedia.org/wiki/Bilinear%20map
|
In mathematics, a bilinear map is a function combining elements of two vector spaces to yield an element of a third vector space, and is linear in each of its arguments. Matrix multiplication is an example.
Definition
Vector spaces
Let V, W and X be three vector spaces over the same base field F. A bilinear map is a function
B : V × W → X
such that for all w ∈ W, the map
v ↦ B(v, w)
is a linear map from V to X, and for all v ∈ V, the map
w ↦ B(v, w)
is a linear map from W to X. In other words, when we hold the first entry of the bilinear map fixed while letting the second entry vary, the result is a linear operator, and similarly for when we hold the second entry fixed.
Such a map satisfies the following properties.
For any v ∈ V and w ∈ W, B(v, 0) = B(0, w) = 0.
The map is additive in both components: if v₁, v₂ ∈ V and w₁, w₂ ∈ W, then B(v₁ + v₂, w) = B(v₁, w) + B(v₂, w) and B(v, w₁ + w₂) = B(v, w₁) + B(v, w₂).
If V = W and we have B(v, w) = B(w, v) for all v, w ∈ V, then we say that B is symmetric. If X is the base field F, then the map is called a bilinear form, which is well-studied (for example: scalar product, inner product, and quadratic form).
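Since matrix multiplication is cited above as an example, here is a quick numerical spot-check (not a proof) of linearity in the first argument; NumPy and the random test matrices are assumptions made for the illustration:

```python
# Spot-check that B(A, C) = A @ C is linear in its first argument.
import numpy as np

rng = np.random.default_rng(0)
A1, A2 = rng.random((2, 3)), rng.random((2, 3))
C = rng.random((3, 2))
r = 2.5

# Additivity and homogeneity with the second argument held fixed:
assert np.allclose((A1 + A2) @ C, A1 @ C + A2 @ C)
assert np.allclose((r * A1) @ C, r * (A1 @ C))
print("matrix multiplication is linear in its first argument (spot-checked)")
```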
Modules
The definition works without any changes if instead of vector spaces over a field F, we use modules over a commutative ring R. It generalizes to n-ary functions, where the proper term is multilinear.
For non-commutative rings R and S, a left R-module M and a right S-module N, a bilinear map is a map B : M × N → T with T an (R, S)-bimodule, and for which for any n in N, m ↦ B(m, n) is an R-module homomorphism, and for any m in M, n ↦ B(m, n) is an S-module homomorphism. This satisfies
B(r ⋅ m, n) = r ⋅ B(m, n)
B(m, n ⋅ s) = B(m, n) ⋅ s
for all m in M, n in N,
|
https://en.wikipedia.org/wiki/Brownian%20motion
|
Brownian motion is the random motion of particles suspended in a medium (a liquid or a gas).
This motion pattern typically consists of random fluctuations in a particle's position inside a fluid sub-domain, followed by a relocation to another sub-domain. Each relocation is followed by more fluctuations within the new closed volume. This pattern describes a fluid at thermal equilibrium, defined by a given temperature. Within such a fluid, there exists no preferential direction of flow (as in transport phenomena). More specifically, the fluid's overall linear and angular momenta remain null over time. The kinetic energies of the molecular Brownian motions, together with those of molecular rotations and vibrations, sum up to the caloric component of a fluid's internal energy (the equipartition theorem).
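As a rough illustration of such fluctuating paths, here is a toy two-dimensional random walk with independent Gaussian increments, a discrete caricature of Brownian motion (the step count and step size are arbitrary choices):

```python
# Toy 2-D random walk: each step adds an independent Gaussian increment.
import random

def brownian_path(n_steps: int = 1000, sigma: float = 1.0):
    x = y = 0.0
    path = [(x, y)]
    for _ in range(n_steps):
        x += random.gauss(0.0, sigma)   # independent increment in x
        y += random.gauss(0.0, sigma)   # independent increment in y
        path.append((x, y))
    return path

path = brownian_path()
print(path[-1])   # endpoint after 1000 steps; rerunning gives a new path
```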
This motion is named after the botanist Robert Brown, who first described the phenomenon in 1827, while looking through a microscope at pollen of the plant Clarkia pulchella immersed in water. In 1900, almost eighty years later, the French mathematician Louis Bachelier modeled the stochastic process now called Brownian motion in his doctoral thesis, The Theory of Speculation (Théorie de la spéculation), prepared under the supervision of Henri Poincaré. Then, in 1905, theoretical physicist Albert Einstein published a paper where he modeled the motion of the pollen particles as being moved by individual water molecules, making one of his first major scientific contributions.
The
|
https://en.wikipedia.org/wiki/Bra%E2%80%93ket%20notation
|
Bra–ket notation, also called Dirac notation, is a notation for linear algebra and linear operators on complex vector spaces together with their dual space both in the finite-dimensional and infinite-dimensional case. It is specifically designed to ease the types of calculations that frequently come up in quantum mechanics. Its use in quantum mechanics is quite widespread.
Bra-ket notation was created by Paul Dirac in his 1939 publication A New Notation for Quantum Mechanics. The notation was introduced as an easier way to write quantum mechanical expressions. The name comes from the English word "Bracket".
Quantum mechanics
In quantum mechanics, bra–ket notation is used ubiquitously to denote quantum states. The notation uses angle brackets, ⟨ and ⟩, and a vertical bar |, to construct "bras" and "kets".
A ket is of the form |v⟩. Mathematically it denotes a vector, v, in an abstract (complex) vector space V, and physically it represents a state of some quantum system.
A bra is of the form ⟨f|. Mathematically it denotes a linear form f : V → ℂ, i.e. a linear map that maps each vector in V to a number in the complex plane ℂ. Letting the linear functional ⟨f| act on a vector |v⟩ is written as ⟨f|v⟩ ∈ ℂ.
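In a finite-dimensional space, the bra–ket pairing can be modeled concretely with column vectors and conjugate transposes. A small sketch (NumPy and the sample vectors are assumptions for the illustration):

```python
# Ket |v> as a column vector; bra <f| as a conjugate transpose; <f|v> as the
# complex inner product.
import numpy as np

v = np.array([[1 + 1j], [2 - 1j]])   # ket |v>, a column vector in C^2
f = np.array([[0 + 1j], [1 + 0j]])   # another vector, used to build a bra

bra_f = f.conj().T                   # bra <f| = conjugate transpose of f
print(bra_f @ v)                     # <f|v>, a 1x1 complex result
print(np.vdot(f, v))                 # same pairing via NumPy's vdot
```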
Assume that on V there exists an inner product (·,·) with antilinear first argument, which makes V an inner product space. Then with this inner product each vector φ ≡ |φ⟩ can be identified with a corresponding linear form, by placing the vector in the anti-linear first slot of the inner product: (φ,·) ≡ ⟨φ|. The correspondence between
|
https://en.wikipedia.org/wiki/Blind%20Willie%20McTell
|
Blind Willie McTell (born William Samuel McTier; May 5, 1898 – August 19, 1959) was a Piedmont blues and ragtime singer and guitarist. He played with a fluid, syncopated fingerstyle guitar technique, common among many exponents of Piedmont blues. Unlike his contemporaries, he came to use twelve-string guitars exclusively. McTell was also an adept slide guitarist, unusual among ragtime bluesmen. His vocal style, a smooth and often laid-back tenor, differed greatly from many of the harsher voices of Delta bluesmen such as Charley Patton. McTell performed in various musical styles, including blues, ragtime, religious music and hokum.
McTell was born in Thomson, Georgia. He learned to play the guitar in his early teens. He soon became a street performer in several Georgia cities, including Atlanta and Augusta, and first recorded in 1927 for Victor Records. He never produced a major hit record, but he had a prolific recording career with different labels and under different names in the 1920s and 1930s. In 1940, he was recorded by the folklorist John A. Lomax and Ruby Terrill Lomax for the folk song archive of the Library of Congress. He was active in the 1940s and 1950s, playing on the streets of Atlanta, often with his longtime associate Curley Weaver. Twice more he recorded professionally. His last recordings originated during an impromptu session recorded by an Atlanta record store owner in 1956. McTell died three years later, having lived for years with diabetes and alcohol
|
https://en.wikipedia.org/wiki/Block%20cipher
|
In cryptography, a block cipher is a deterministic algorithm that operates on fixed-length groups of bits, called blocks. Block ciphers are the elementary building blocks of many cryptographic protocols. They are ubiquitous in the storage and exchange of data, where such data is secured and authenticated via encryption.
A block cipher uses blocks as an unvarying transformation. Even a secure block cipher is suitable for the encryption of only a single block of data at a time, using a fixed key. A multitude of modes of operation have been designed to allow their repeated use in a secure way to achieve the security goals of confidentiality and authenticity. However, block ciphers may also feature as building blocks in other cryptographic protocols, such as universal hash functions and pseudorandom number generators.
Definition
A block cipher consists of two paired algorithms, one for encryption, E, and the other for decryption, D. Both algorithms accept two inputs: an input block of size n bits and a key of size k bits; and both yield an n-bit output block. The decryption algorithm D is defined to be the inverse function of encryption, i.e., D = E⁻¹. More formally, a block cipher is specified by an encryption function
E_K(P) := E(K, P) : {0, 1}ᵏ × {0, 1}ⁿ → {0, 1}ⁿ,
which takes as input a key K, of bit length k (called the key size), and a bit string P, of length n (called the block size), and returns a string C of n bits. P is called the plaintext, and C is termed the ciphertext. For each K, the function E_K(P) is required to be an invertible mapp
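To make the paired algorithms E and D concrete, the sketch below uses a deliberately insecure toy "cipher" on 8-bit blocks (XOR with an 8-bit key); the only point demonstrated is that each E_K is an invertible mapping with D_K as its inverse:

```python
# Toy illustration only: XOR-with-key is trivially insecure as a cipher.
BLOCK_BITS = 8

def encrypt(key: int, plaintext: int) -> int:
    """E_K: {0,1}^8 -> {0,1}^8, a permutation of the block space for fixed key."""
    return (plaintext ^ key) & 0xFF

def decrypt(key: int, ciphertext: int) -> int:
    """D_K, defined so that D_K(E_K(P)) = P for every block P."""
    return (ciphertext ^ key) & 0xFF

key = 0b10110010
for p in range(2 ** BLOCK_BITS):
    assert decrypt(key, encrypt(key, p)) == p   # invertibility, checked exhaustively
print("E_K is an invertible mapping on 8-bit blocks for this key")
```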
|
https://en.wikipedia.org/wiki/Beta-lactamase
|
Beta-lactamases (β-lactamases) are enzymes (EC 3.5.2.6) produced by bacteria that provide multi-resistance to beta-lactam antibiotics such as penicillins, cephalosporins, cephamycins, monobactams and carbapenems (ertapenem), although carbapenems are relatively resistant to beta-lactamase. Beta-lactamase provides antibiotic resistance by breaking the antibiotics' structure. These antibiotics all have a common element in their molecular structure: a four-atom ring known as a beta-lactam (β-lactam) ring. Through hydrolysis, the enzyme lactamase breaks the β-lactam ring open, deactivating the molecule's antibacterial properties.
Beta-lactam antibiotics are typically used to target a broad spectrum of gram-positive and gram-negative pathogenic bacteria.
Beta-lactamases produced by gram-negative bacteria are usually secreted, especially when antibiotics are present in the environment.
Structure
The alpha-beta fold of a Streptomyces serine β-lactamase (SBL) resembles that of a DD-transpeptidase, from which the enzyme is thought to have evolved. β-lactam antibiotics bind to DD-transpeptidases to inhibit bacterial cell wall biosynthesis. Serine β-lactamases are grouped by sequence similarity into types A, C, and D.
The other type of beta-lactamase is of the metallo type ("type B"). Metallo-beta-lactamases (MBLs) need metal ion(s) (1 or 2 Zn2+ ions) on their active site for their catalytic activities. The structure of the New Delhi metallo-beta-lactamase 1
|
https://en.wikipedia.org/wiki/Broch
|
In archaeology, a broch is an Iron Age drystone hollow-walled structure found in Scotland. Brochs belong to the classification "complex Atlantic roundhouse" devised by Scottish archaeologists in the 1980s.
Brochs are roundhouse buildings found throughout Atlantic Scotland. The word broch is derived from the Lowland Scots 'brough', meaning fort. In the mid-19th century, Scottish antiquaries called brochs 'burgs', after Old Norse borg, with the same meaning. Brochs are often referred to as duns in the west, and they are the most spectacular of a complex class of buildings found in northern Scotland. There are approximately 571 candidate broch sites throughout the country, according to the Royal Commission on the Ancient and Historical Monuments of Scotland.
The origin of brochs is still subject to ongoing research. While most archaeologists believed 80 years ago that brochs were built by immigrants, there is now little doubt that the hollow-walled broch tower was an invention in what is now Scotland. The first brochs may have been built in the first century BC, and there is evidence to suggest that they were used primarily for defensive or offensive purposes.
The distribution of brochs is centred on northern Scotland, with the densest concentrations found in Caithness, Sutherland, and the Northern Isles. A few examples occur in the Borders and on the west coast of Dumfries and Galloway, and near Stirling. The original interpretation of brochs was that they were defensive s
|
https://en.wikipedia.org/wiki/Billy%20Crystal
|
William Edward Crystal (born March 14, 1948) is an American actor, comedian, and filmmaker. Crystal is known as a standup comedian, and for his film and stage roles. Crystal has received numerous accolades, including six Primetime Emmy Awards and a Tony Award as well as nominations for three Grammy Awards and three Golden Globe Awards. He was honored with a star on the Hollywood Walk of Fame in 1991, the Mark Twain Prize for American Humor in 2007, the Critics' Choice Lifetime Achievement Award in 2022 and the Kennedy Center Honors in 2023.
He gained prominence for television roles as Jodie Dallas on the ABC sitcom Soap from 1977 to 1981 and as a cast member and frequent host of Saturday Night Live from 1984 to 1985. Crystal then became known for his leading roles in films such as Running Scared (1986), The Princess Bride (1987), Throw Momma from the Train (1987), Memories of Me (1988), When Harry Met Sally... (1989), City Slickers (1991), Mr. Saturday Night (1992), Hamlet (1996), Deconstructing Harry (1997), Analyze This (1999), and Parental Guidance (2012). He provided the voice of Mike Wazowski in the Pixar animated Monsters, Inc. franchise. He has hosted the Academy Awards nine times, beginning in 1990 and most recently in 2012.
He made his Broadway debut in his one man show 700 Sundays in 2004 for which he won the Tony Award for Best Special Theatrical Event. He returned to the show again in 2014 which was filmed by HBO and received a Primetime Emmy Award for Outstandi
|
https://en.wikipedia.org/wiki/Binomial%20coefficient
|
In mathematics, the binomial coefficients are the positive integers that occur as coefficients in the binomial theorem. Commonly, a binomial coefficient is indexed by a pair of integers n ≥ k ≥ 0 and is written C(n, k). It is the coefficient of the xᵏ term in the polynomial expansion of the binomial power (1 + x)ⁿ; this coefficient can be computed by the multiplicative formula
C(n, k) = (n × (n − 1) × ⋯ × (n − k + 1)) / (k × (k − 1) × ⋯ × 1),
which using factorial notation can be compactly expressed as
C(n, k) = n! / (k! (n − k)!).
For example, the fourth power of 1 + x is
(1 + x)⁴ = 1 + 4x + 6x² + 4x³ + x⁴,
and the binomial coefficient C(4, 2) = 6 is the coefficient of the x² term.
Arranging the numbers C(n, 0), C(n, 1), ..., C(n, n) in successive rows for n = 0, 1, 2, ... gives a triangular array called Pascal's triangle, satisfying the recurrence relation
C(n, k) = C(n − 1, k − 1) + C(n − 1, k).
The binomial coefficients occur in many areas of mathematics, and especially in combinatorics. The symbol C(n, k) is usually read as "n choose k" because there are C(n, k) ways to choose an (unordered) subset of k elements from a fixed set of n elements. For example, there are C(4, 2) = 6 ways to choose 2 elements from {1, 2, 3, 4}, namely {1, 2}, {1, 3}, {1, 4}, {2, 3}, {2, 4} and {3, 4}.
The binomial coefficients can be generalized to C(z, k) for any complex number z and integer k ≥ 0, and many of their properties continue to hold in this more general form.
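The multiplicative formula above can be computed directly in integer arithmetic; a minimal sketch, checked against the standard library's math.comb:

```python
# n(n-1)...(n-k+1) / k!, computed incrementally so every division is exact.
from math import comb

def binomial(n: int, k: int) -> int:
    if k < 0 or k > n:
        return 0
    result = 1
    for i in range(1, k + 1):
        result = result * (n - i + 1) // i   # result is C(n, i) after step i
    return result

print(binomial(4, 2))   # 6, the coefficient of x^2 in (1 + x)^4
assert all(binomial(n, k) == comb(n, k) for n in range(20) for k in range(20))
```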
History and notation
Andreas von Ettingshausen introduced the notation in 1826, although the numbers were known centuries earlier (see Pascal's triangle). In about 1150, the Indian mathematician Bhaskaracharya gave an exposition of binomial coefficients in his book Līlāvatī.
Alternative notations include C(n, k), ₙCₖ, ⁿCₖ, Cₖⁿ, Cₙᵏ, and Cₙ,ₖ, in all of which the C stands for combinations or choices.
|
https://en.wikipedia.org/wiki/Binomial%20theorem
|
In elementary algebra, the binomial theorem (or binomial expansion) describes the algebraic expansion of powers of a binomial. According to the theorem, it is possible to expand the polynomial (x + y)ⁿ into a sum involving terms of the form axᵇyᶜ, where the exponents b and c are nonnegative integers with b + c = n, and the coefficient a of each term is a specific positive integer depending on n and b. For example, for n = 4,
(x + y)⁴ = x⁴ + 4x³y + 6x²y² + 4xy³ + y⁴.
The coefficient a in the term axᵇyᶜ is known as the binomial coefficient C(n, b) or C(n, c) (the two have the same value). These coefficients for varying n and b can be arranged to form Pascal's triangle. These numbers also occur in combinatorics, where C(n, b) gives the number of different combinations of b elements that can be chosen from an n-element set. Therefore C(n, b) is often pronounced as "n choose b".
History
Special cases of the binomial theorem were known since at least the 4th century BC when Greek mathematician Euclid mentioned the special case of the binomial theorem for exponent $2$. Greek mathematician Diophantus cubed various binomials, including $x - 1$. Indian mathematician Aryabhata's method for finding cube roots, from around 510 CE, suggests that he knew the binomial formula for exponent $3$.
Binomial coefficients, as combinatorial quantities expressing the number of ways of selecting objects out of without replacement, were of interest to ancient Indian mathematicians. The earliest known reference to this combinatorial problem is the Chandaḥśāstra by the Indian lyricist Pingala (c. 200 BC), which contain
|
https://en.wikipedia.org/wiki/Bessel%20function
|
Bessel functions, first defined by the mathematician Daniel Bernoulli and then generalized by Friedrich Bessel, are canonical solutions $y(x)$ of Bessel's differential equation
$$x^2 \frac{d^2 y}{dx^2} + x \frac{dy}{dx} + \left(x^2 - \alpha^2\right) y = 0$$
for an arbitrary complex number $\alpha$, which represents the order of the Bessel function. Although $\alpha$ and $-\alpha$ produce the same differential equation, it is conventional to define different Bessel functions for these two values in such a way that the Bessel functions are mostly smooth functions of $\alpha$.
The most important cases are when $\alpha$ is an integer or half-integer. Bessel functions for integer $\alpha$ are also known as cylinder functions or the cylindrical harmonics because they appear in the solution to Laplace's equation in cylindrical coordinates. Spherical Bessel functions with half-integer $\alpha$ are obtained when solving the Helmholtz equation in spherical coordinates.
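As a rough numerical illustration (not part of the article), a Bessel function of the first kind can be approximated by truncating its ascending series $J_\alpha(x) = \sum_{m=0}^{\infty} \frac{(-1)^m}{m! \, \Gamma(m + \alpha + 1)} \left(\frac{x}{2}\right)^{2m + \alpha}$. The sketch below assumes $\alpha$ is not a negative integer (where the gamma function has poles), and the truncation at 30 terms is an arbitrary choice.

    from math import factorial, gamma

    def bessel_j(alpha: float, x: float, terms: int = 30) -> float:
        # Partial sum of the ascending series for J_alpha(x).
        total = 0.0
        for m in range(terms):
            total += ((-1) ** m / (factorial(m) * gamma(m + alpha + 1))
                      * (x / 2) ** (2 * m + alpha))
        return total

    print(bessel_j(0, 1.0))  # ~0.76519769, the standard value of J_0(1)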
Applications of Bessel functions
The Bessel function is a generalization of the sine function. It can be interpreted as the vibration of a string with variable thickness, variable tension (or both conditions simultaneously); vibrations in a medium with variable properties; vibrations of the disc membrane, etc.
Bessel's equation arises when finding separable solutions to Laplace's equation and the Helmholtz equation in cylindrical or spherical coordinates. Bessel functions are therefore especially important for many problems of wave propagation and static potentials. In solving problems in cylindrical coordinate systems, one obtains Bessel functions of intege
|
https://en.wikipedia.org/wiki/Black%20people
|
Black is a racialized classification of people, usually a political and skin color-based category for specific populations with a mid- to dark brown complexion. Not all people considered "black" have dark skin; in certain countries, often in socially based systems of racial classification in the Western world, the term "black" is used to describe persons who are perceived as dark-skinned compared to other populations. It is most commonly used for people of sub-Saharan African ancestry, Indigenous Australians and Melanesians, though it has been applied in many contexts to other groups, and is no indicator of any close ancestral relationship whatsoever. Indigenous African societies do not use the term black as a racial identity outside of influences brought by Western cultures.
Contemporary anthropologists and other scientists, while recognizing the reality of biological variation between different human populations, regard the concept of a unified, distinguishable "Black race" as socially constructed. Different societies apply different criteria regarding who is classified "black", and these social constructs have changed over time. In a number of countries, societal variables affect classification as much as skin color, and the social criteria for "blackness" vary. In the United Kingdom, "black" was historically equivalent with "person of color", a general term for non-European peoples. While the term "person of color" is commonly used and accepted in the United States, the
|
https://en.wikipedia.org/wiki/Biosphere
|
The biosphere (from Greek βίος bíos "life" and σφαῖρα sphaira "sphere"), also known as the ecosphere (from Greek οἶκος oîkos "environment" and σφαῖρα), is the worldwide sum of all ecosystems. It can also be termed the zone of life on Earth. The biosphere (which is technically a spherical shell) is virtually a closed system with regard to matter, with minimal inputs and outputs. Regarding energy, it is an open system, with photosynthesis capturing solar energy at a rate of around 100 terawatts. By the most general biophysiological definition, the biosphere is the global ecological system integrating all living beings and their relationships, including their interaction with the elements of the lithosphere, cryosphere, hydrosphere, and atmosphere. The biosphere is postulated to have evolved, beginning with a process of biopoiesis (life created naturally from matter, such as simple organic compounds) or biogenesis (life created from living matter), at least some 3.5 billion years ago.
In a general sense, biospheres are any closed, self-regulating systems containing ecosystems. This includes artificial biospheres such as Biosphere 2 and BIOS-3, and potentially ones on other planets or moons.
Origin and use of the term
The term "biosphere" was coined in 1875 by geologist Eduard Suess, who defined it as the place on Earth's surface where life dwells.
While the concept has a geological origin, it is an indication of the effect of both Charles Darwin and Matthew F. Maury on the Earth sciences.
|
https://en.wikipedia.org/wiki/Biological%20membrane
|
A biological membrane, biomembrane or cell membrane is a selectively permeable membrane that separates the interior of a cell from the external environment or creates intracellular compartments by serving as a boundary between one part of the cell and another. Biological membranes, in the form of eukaryotic cell membranes, consist of a phospholipid bilayer with embedded, integral and peripheral proteins used in communication and transportation of chemicals and ions. The bulk of lipids in a cell membrane provides a fluid matrix for proteins to rotate and laterally diffuse for physiological functioning. Proteins are adapted to high membrane fluidity environment of the lipid bilayer with the presence of an annular lipid shell, consisting of lipid molecules bound tightly to the surface of integral membrane proteins. The cell membranes are different from the isolating tissues formed by layers of cells, such as mucous membranes, basement membranes, and serous membranes.
Composition
Asymmetry
The lipid bilayer consists of two layers: an outer leaflet and an inner leaflet. The components of bilayers are distributed unequally between the two surfaces to create asymmetry between the outer and inner surfaces. This asymmetric organization is important for cell functions such as cell signaling. The asymmetry of the biological membrane reflects the different functions of the two leaflets of the membrane. As seen in the fluid membrane model of the phospholipid bilayer, the outer leaflet
|
https://en.wikipedia.org/wiki/Blitz%20BASIC
|
Blitz BASIC is the programming language dialect of the first Blitz compilers, devised by New Zealand-based developer Mark Sibly. Being derived from BASIC, Blitz syntax was designed to be easy to pick up for beginners first learning to program. The languages are game-programming oriented but are often found general purpose enough to be used for most types of application. The Blitz language evolved as new products were released, with recent incarnations offering support for more advanced programming techniques such as object-orientation and multithreading. This led to the languages losing their BASIC moniker in later years.
History
The first iteration of the Blitz language was created for the Amiga platform and published by the Australian firm Memory and Storage Technology. After Sibly returned to New Zealand, Blitz BASIC 2 was published several years later (around 1993, according to a contemporary press release) by Acid Software (a local Amiga game publisher). Since then, Blitz compilers have been released on several platforms. Following the demise of the Amiga as a commercially viable platform, the Blitz BASIC 2 source code was released to the Amiga community. Development continues to this day under the name AmiBlitz.
BlitzBasic
Idigicon published BlitzBasic for Microsoft Windows in October 2000. The language included a built-in API for performing basic 2D graphics and audio operations. Following the release of Blitz3D, BlitzBasic is often synonymously referred to as Blitz2D.
Recognition of Blit
|
https://en.wikipedia.org/wiki/Bliss%20bibliographic%20classification
|
The Bliss bibliographic classification (BC) is a library classification system that was created by Henry E. Bliss (1870–1955) and published in four volumes between 1940 and 1953. Although originally devised in the United States, it was more commonly adopted by British libraries. A second edition of the system (BC2) has been in ongoing development in Britain since 1977.
Origins of the system
Henry E. Bliss began working on the Bliss Classification system while working at the City College of New York Library as Assistant Librarian. He was a critic of Melvil Dewey's work with the Dewey Decimal System and believed that the organization of titles needed to be approached with an intellectual frame of mind; being overly pragmatic or simply alphabetical would be inadequate. In fact, Bliss is the only theorist who created an organizational scheme based on societal needs. Bliss wanted a classification system that would provide distinct rules yet still be adaptable to whatever kind of collection a library might have, as different libraries have different needs. His solution was the concept of "alternative location," in which a particular subject could be put in more than one place, as long as the library made a specific choice and used it consistently.
Bliss discusses his theories and basis of organization for the Bliss Classification for the first time in his 1910 article, "A Modern Classification for Libraries, with Simple Notation, Mnemonics, and Alternatives". This publication followed his 19
|
https://en.wikipedia.org/wiki/Bayesian%20probability
|
Bayesian probability (/ˈbeɪziən/ or /ˈbeɪʒən/) is an interpretation of the concept of probability, in which, instead of frequency or propensity of some phenomenon, probability is interpreted as reasonable expectation representing a state of knowledge or as quantification of a personal belief.
The Bayesian interpretation of probability can be seen as an extension of propositional logic that enables reasoning with hypotheses; that is, with propositions whose truth or falsity is unknown. In the Bayesian view, a probability is assigned to a hypothesis, whereas under frequentist inference, a hypothesis is typically tested without being assigned a probability.
Bayesian probability belongs to the category of evidential probabilities; to evaluate the probability of a hypothesis, the Bayesian probabilist specifies a prior probability. This, in turn, is then updated to a posterior probability in the light of new, relevant data (evidence). The Bayesian interpretation provides a standard set of procedures and formulae to perform this calculation.
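A minimal Python sketch of that update step, with entirely hypothetical numbers: a hypothesis held with prior probability 1% is revised after observing evidence that occurs 95% of the time when the hypothesis is true and 5% of the time when it is false.

    def posterior(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
        # Bayes' theorem: P(H|E) = P(E|H) P(H) / P(E),
        # with P(E) expanded by the law of total probability.
        p_evidence = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
        return p_e_given_h * prior / p_evidence

    print(posterior(prior=0.01, p_e_given_h=0.95, p_e_given_not_h=0.05))
    # ~0.161: the 1% prior is updated to roughly a 16% posterior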
The term Bayesian derives from the 18th-century mathematician and theologian Thomas Bayes, who provided the first mathematical treatment of a non-trivial problem of statistical data analysis using what is now known as Bayesian inference. Mathematician Pierre-Simon Laplace pioneered and popularized what is now called Bayesian probability.
Bayesian methodology
Bayesian methods are characterized by concepts and procedures as follows:
The use of random variable
|
https://en.wikipedia.org/wiki/Beta%20sheet
|
The beta sheet (β-sheet, also β-pleated sheet) is a common motif of the regular protein secondary structure. Beta sheets consist of beta strands (β-strands) connected laterally by at least two or three backbone hydrogen bonds, forming a generally twisted, pleated sheet. A β-strand is a stretch of polypeptide chain typically 3 to 10 amino acids long with backbone in an extended conformation. The supramolecular association of β-sheets has been implicated in the formation of the fibrils and protein aggregates observed in amyloidosis, Alzheimer's disease and other proteinopathies.
History
The first β-sheet structure was proposed by William Astbury in the 1930s. He proposed the idea of hydrogen bonding between the peptide bonds of parallel or antiparallel extended β-strands. However, Astbury did not have the necessary data on the bond geometry of the amino acids in order to build accurate models, especially since he did not then know that the peptide bond was planar. A refined version was proposed by Linus Pauling and Robert Corey in 1951. Their model incorporated the planarity of the peptide bond which they previously explained as resulting from keto-enol tautomerization.
Structure and orientation
Geometry
The majority of β-strands are arranged adjacent to other strands and form an extensive hydrogen bond network with their neighbors in which the N−H groups in the backbone of one strand establish hydrogen bonds with the C=O groups in the backbone of the adjacent strands. I
|
https://en.wikipedia.org/wiki/B%C3%A9zout%27s%20identity
|
In mathematics, Bézout's identity (also called Bézout's lemma), named after Étienne Bézout who proved it for polynomials, is the following theorem: if $a$ and $b$ are integers with greatest common divisor $d$, then there exist integers $x$ and $y$ such that $ax + by = d$; moreover, the integers of the form $az + bt$ are exactly the multiples of $d$.
Here the greatest common divisor of $0$ and $0$ is taken to be $0$. The integers $x$ and $y$ are called Bézout coefficients for $(a, b)$; they are not unique. A pair of Bézout coefficients can be computed by the extended Euclidean algorithm, and this pair is, in the case of integers, one of the two pairs such that $|x| \le |b/d|$ and $|y| \le |a/d|$; equality occurs only if one of $a$ and $b$ is a multiple of the other.
As an example, the greatest common divisor of 15 and 69 is 3, and 3 can be written as a combination of 15 and 69 as $3 = 15 \times (-9) + 69 \times 2$, with Bézout coefficients −9 and 2.
Many other theorems in elementary number theory, such as Euclid's lemma or the Chinese remainder theorem, result from Bézout's identity.
A Bézout domain is an integral domain in which Bézout's identity holds. In particular, Bézout's identity holds in principal ideal domains. Every theorem that results from Bézout's identity is thus true in all principal ideal domains.
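A minimal Python sketch of the extended Euclidean algorithm mentioned above, reproducing the article's 15 and 69 example; the function name extended_gcd is our own.

    def extended_gcd(a: int, b: int):
        # Returns (g, x, y) with a*x + b*y == g == gcd(a, b).
        old_r, r = a, b
        old_x, x = 1, 0
        old_y, y = 0, 1
        while r != 0:
            q = old_r // r
            old_r, r = r, old_r - q * r
            old_x, x = x, old_x - q * x
            old_y, y = y, old_y - q * y
        return old_r, old_x, old_y

    g, x, y = extended_gcd(15, 69)
    assert (g, x, y) == (3, -9, 2) and 15 * x + 69 * y == g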
Structure of solutions
If $a$ and $b$ are not both zero and one pair of Bézout coefficients $(x, y)$ has been computed (for example, using the extended Euclidean algorithm), all pairs can be represented in the form
$$\left(x - k\frac{b}{d},\; y + k\frac{a}{d}\right),$$
where $k$ is an arbitrary integer, $d$ is the greatest common divisor of $a$ and $b$, and the fractions simplify to integers.
If $a$ and $b$ are both nonzero, then exactly two of these pairs of Bézout coefficients satisfy $|x| \le \left|\frac{b}{d}\right|$ and $|y| \le \left|\frac{a}{d}\right|$,
and equality may occur only if one of $a$ and $b$ divide
|
https://en.wikipedia.org/wiki/Cytoplasm
|
In cell biology, the cytoplasm describes all material within a eukaryotic cell, enclosed by the cell membrane, except for the cell nucleus. The material inside the nucleus and contained within the nuclear membrane is termed the nucleoplasm. The main components of the cytoplasm are the cytosol (a gel-like substance), the organelles (the cell's internal sub-structures), and various cytoplasmic inclusions. The cytoplasm is about 80% water and is usually colorless.
The submicroscopic ground cell substance, or cytoplasmic matrix, that remains after the exclusion of the cell organelles and particles is groundplasm. It is the hyaloplasm of light microscopy, a highly complex, polyphasic system in which all resolvable cytoplasmic elements are suspended, including the larger organelles such as the ribosomes, mitochondria, plant plastids, lipid droplets, and vacuoles.
Many cellular activities take place within the cytoplasm, such as many metabolic pathways, including glycolysis, photosynthesis, and processes such as cell division. The concentrated inner area is called the endoplasm and the outer layer is called the cell cortex, or ectoplasm.
Movement of calcium ions in and out of the cytoplasm is a signaling activity for metabolic processes.
In plants, movement of the cytoplasm around vacuoles is known as cytoplasmic streaming.
History
The term was introduced by Rudolf von Kölliker in 1863, originally as a synonym for protoplasm, but later it has come to mean the cell substance
|
https://en.wikipedia.org/wiki/Demographics%20of%20Canada
|
Statistics Canada conducts a country-wide census that collects demographic data every five years on the first and sixth year of each decade. The 2021 Canadian census enumerated a total population of 36,991,981, an increase of around 5.2 percent over the 2016 figure. Between 2011 and May 2016, Canada's population grew by 1.7 million people, with immigrants accounting for two-thirds of the increase. Between 1990 and 2008, the population increased by 5.6 million, equivalent to 20.4 percent overall growth. The main driver of population growth is immigration and, to a lesser extent, natural growth.
Canada has one of the highest per-capita immigration rates in the world, driven mainly by economic policy and, to a lesser extent, family reunification. In 2021, a total of 405,330 immigrants were admitted to Canada, mainly from Asia. New immigrants settle mostly in major urban areas such as Toronto, Montreal, and Vancouver. Canada also accepts large numbers of refugees, accounting for over 10 percent of annual global refugee resettlements.
Population
The 2021 Canadian census had a total population count of 36,991,981 individuals, making up approximately 0.5% of the world's total population. A population estimate for 2023 put the total number of people in Canada at 40,097,761.
Demographic statistics according to the World Population Review in 2022.
One birth every minute
One death every 2 minutes
One net migrant every 2 minutes
Net gain of one person every minute
Death rate
8
|
https://en.wikipedia.org/wiki/Computing
|
Computing is any goal-oriented activity requiring, benefiting from, or creating computing machinery. It includes the study and experimentation of algorithmic processes, and development of both hardware and software. Computing has scientific, engineering, mathematical, technological and social aspects. Major computing disciplines include computer engineering, computer science, cybersecurity, data science, information systems, information technology, digital art and software engineering.
The term computing is also synonymous with counting and calculating. In earlier times, it was used in reference to the action performed by mechanical computing machines, and before that, to human computers.
History
The history of computing is longer than the history of computing hardware and includes the history of methods intended for pen and paper (or for chalk and slate), with or without the aid of tables. Computing is intimately tied to the representation of numbers, though mathematical concepts necessary for computing existed before numeral systems. The earliest known tool for use in computation is the abacus, which is thought to have been invented in Babylon between 2700 and 2300 BC. Abaci, of a more modern design, are still used as calculation tools today.
The first recorded proposal for using digital electronics in computing was the 1931 paper "The Use of Thyratrons for High Speed Automatic Counting of Physical Phenomena" by C. E. Wynn-Williams. Claude Shannon's 1938 paper "A Sym
|
https://en.wikipedia.org/wiki/Cipher
|
In cryptography, a cipher (or cypher) is an algorithm for performing encryption or decryption—a series of well-defined steps that can be followed as a procedure. An alternative, less common term is encipherment. To encipher or encode is to convert information into cipher or code. In common parlance, "cipher" is synonymous with "code", as they are both a set of steps that encrypt a message; however, the concepts are distinct in cryptography, especially classical cryptography.
Codes generally substitute different length strings of characters in the output, while ciphers generally substitute the same number of characters as are input. A code maps one meaning with another. Words and phrases can be coded as letters or numbers. Codes typically have direct meaning from input to key. Codes primarily function to save time. Ciphers are algorithmic. The given input must follow the cipher's process to be solved. Ciphers are commonly used to encrypt written information.
Codes operated by substituting according to a large codebook which linked a random string of characters or numbers to a word or phrase. For example, "UQJHSE" could be the code for "Proceed to the following coordinates." When using a cipher the original information is known as plaintext, and the encrypted form as ciphertext. The ciphertext message contains all the information of the plaintext message, but is not in a format readable by a human or computer without the proper mechanism to decrypt it.
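The article names no particular algorithm, but a Caesar shift, one of the classical ciphers, makes the plaintext/ciphertext relationship concrete; the Python sketch below is illustrative only.

    def caesar(text: str, shift: int) -> str:
        # Substitute each letter by the one shift places further along
        # the alphabet; the same procedure with a negated shift decrypts.
        out = []
        for ch in text:
            if ch.isalpha():
                base = ord('A') if ch.isupper() else ord('a')
                out.append(chr((ord(ch) - base + shift) % 26 + base))
            else:
                out.append(ch)
        return "".join(out)

    ciphertext = caesar("Proceed to the following coordinates", 3)
    print(ciphertext)              # Surfhhg wr wkh iroorzlqj frruglqdwhv
    print(caesar(ciphertext, -3))  # recovers the plaintext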
The operation of a ci
|
https://en.wikipedia.org/wiki/Conditional
|
Conditional (if then) may refer to:
Causal conditional, if X then Y, where X is a cause of Y
Conditional probability, the probability of an event A given that another event B has occurred
Conditional proof, in logic: a proof that asserts a conditional, and proves that the antecedent leads to the consequent
Strict conditional, in philosophy, logic, and mathematics
Material conditional, in propositional calculus, or logical calculus in mathematics
Relevance conditional, in relevance logic
Conditional (computer programming), a statement or expression in computer programming languages
A conditional expression in computer programming languages such as ?:
Conditions in a contract
Grammar and linguistics
Conditional mood (or conditional tense), a verb form in many languages
Conditional sentence, a sentence type used to refer to hypothetical situations and their consequences
Indicative conditional, a conditional sentence expressing "if A then B" in a natural language
Counterfactual conditional, a conditional sentence indicating what would be the case if its antecedent were true
Other
"Conditional" (Laura Mvula song)
Conditional jockey, an apprentice jockey in British or Irish National Hunt racing
Conditional short-circuit current
Conditional Value-at-Risk
See also
Condition (disambiguation)
Conditional statement (disambiguation)
|
https://en.wikipedia.org/wiki/Computer%20programming
|
Computer programming or coding is the composition of sequences of instructions, called programs, that computers can follow to perform tasks. It involves designing and implementing algorithms, step-by-step specifications of procedures, by writing code in one or more programming languages. Programmers typically use high-level programming languages that are more easily intelligible to humans than machine code, which is directly executed by the central processing unit. Proficient programming usually requires expertise in several different subjects, including knowledge of the application domain, details of programming languages and generic code libraries, specialized algorithms, and formal logic.
Auxiliary tasks accompanying and related to programming include analyzing requirements, testing, debugging (investigating and fixing problems), implementation of build systems, and management of derived artifacts, such as programs' machine code. While these are sometimes considered programming, often the term software development is used for this larger overall process – with the terms programming, implementation, and coding reserved for the writing and editing of code per se. Sometimes software development is known as software engineering, especially when it employs formal methods or follows an engineering design process.
History
Programmable devices have existed for centuries. As early as the 9th century, a programmable music sequencer was invented by the Persian Banu Musa brothers,
|
https://en.wikipedia.org/wiki/Computer%20science
|
Computer science is the study of computation, information, and automation. Computer science spans theoretical disciplines (such as algorithms, theory of computation, and information theory) to applied disciplines (including the design and implementation of hardware and software). Though more often considered an academic discipline, computer science is closely related to computer programming.
Algorithms and data structures are central to computer science.
The theory of computation concerns abstract models of computation and general classes of problems that can be solved using them. The fields of cryptography and computer security involve studying the means for secure communication and for preventing security vulnerabilities. Computer graphics and computational geometry address the generation of images. Programming language theory considers different ways to describe computational processes, and database theory concerns the management of repositories of data. Human–computer interaction investigates the interfaces through which humans and computers interact, and software engineering focuses on the design and principles behind developing software. Areas such as operating systems, networks and embedded systems investigate the principles and design behind complex systems. Computer architecture describes the construction of computer components and computer-operated equipment. Artificial intelligence and machine learning aim to synthesize goal-orientated processes such as problem-s
|
https://en.wikipedia.org/wiki/Telecommunications%20in%20Chad
|
Telecommunications in Chad include radio, television, fixed and mobile telephones, and the Internet.
Radio and television
Radio stations:
state-owned radio network, Radiodiffusion Nationale Tchadienne (RNT), operates national and regional stations; about 10 private radio stations; some stations rebroadcast programs from international broadcasters (2007);
2 AM, 4 FM, and 5 shortwave stations (2001).
Radios:
1.7 million (1997).
Television stations:
1 state-owned TV station, Tele Tchad (2007);
1 station (2001).
Television sets:
10,000 (1997).
Radio is the most important medium of mass communication. State-run Radiodiffusion Nationale Tchadienne operates national and regional radio stations. Around a dozen private radio stations are on the air, despite high licensing fees, some run by religious or other non-profit groups. The BBC World Service (FM 90.6) and Radio France Internationale (RFI) broadcast in the capital, N'Djamena. The only television station, Tele Tchad, is state-owned.
State control of many broadcasting outlets allows few dissenting views. Journalists are harassed and attacked. On rare occasions journalists are warned in writing by the High Council for Communication to produce more "responsible" journalism or face fines. Some journalists and publishers practice self-censorship. On 10 October 2012, the High Council on Communications issued a formal warning to La Voix du Paysan, claiming that the station's live broadcast on 30 September incited the public t
|
https://en.wikipedia.org/wiki/Concrete
|
Concrete is a composite material composed of aggregate bonded together with a fluid cement that cures over time. Concrete is the second-most-used substance in the world after water, and is the most widely used building material. Its usage worldwide, ton for ton, is twice that of steel, wood, plastics, and aluminium combined.
When aggregate is mixed with dry Portland cement and water, the mixture forms a fluid slurry that is easily poured and molded into shape. The cement reacts with the water through a process called concrete hydration that hardens it over several hours to form a hard matrix that binds the materials together into a durable stone-like material that has many uses. This time allows concrete to not only be cast in forms, but also to have a variety of tooled processes performed. The hydration process is exothermic, which means ambient temperature plays a significant role in how long it takes concrete to set. Often, additives (such as pozzolans or superplasticizers) are included in the mixture to improve the physical properties of the wet mix, delay or accelerate the curing time, or otherwise change the finished material. Most concrete is poured with reinforcing materials (such as steel rebar) embedded to provide tensile strength, yielding reinforced concrete.
In the past, lime-based cement binders, such as lime putty, were often used, but sometimes with other hydraulic (water-resistant) cements, such as a calcium aluminate cement, or with Portland cement to form P
|
https://en.wikipedia.org/wiki/Condom
|
A condom is a sheath-shaped barrier device used during sexual intercourse to reduce the probability of pregnancy or a sexually transmitted infection (STI). There are both male and female condoms.
The male condom is rolled onto an erect penis before intercourse and works by forming a physical barrier which blocks semen from entering the body of a sexual partner. Male condoms are typically made from latex and, less commonly, from polyurethane, polyisoprene, or lamb intestine. Male condoms have the advantages of ease of use, ease of access, and few side effects. Individuals with latex allergy should use condoms made from a material other than latex, such as polyurethane. Female condoms are typically made from polyurethane and may be used multiple times.
With proper use—and use at every act of intercourse—women whose partners use male condoms experience a 2% per-year pregnancy rate. With typical use, the rate of pregnancy is 18% per-year. Their use greatly decreases the risk of gonorrhea, chlamydia, trichomoniasis, hepatitis B, and HIV/AIDS. To a lesser extent, they also protect against genital herpes, human papillomavirus (HPV), and syphilis.
Condoms as a method of preventing STIs have been used since at least 1564. Rubber condoms became available in 1855, followed by latex condoms in the 1920s. It is on the World Health Organization's List of Essential Medicines. As of 2019, globally around 21% of those using birth control use the condom, making it the second-most common met
|
https://en.wikipedia.org/wiki/Cladistics
|
Cladistics (; ) is an approach to biological classification in which organisms are categorized in groups ("clades") based on hypotheses of most recent common ancestry. The evidence for hypothesized relationships is typically shared derived characteristics (synapomorphies) that are not present in more distant groups and ancestors. However, from an empirical perspective, common ancestors are inferences based on a cladistic hypothesis of relationships of taxa whose character states can be observed. Theoretically, a last common ancestor and all its descendants constitute a (minimal) clade. Importantly, all descendants stay in their overarching ancestral clade. For example, if the terms worms or fishes were used within a strict cladistic framework, these terms would include humans. Many of these terms are normally used paraphyletically, outside of cladistics, e.g. as a 'grade', which are fruitless to precisely delineate, especially when including extinct species. Radiation results in the generation of new subclades by bifurcation, but in practice sexual hybridization may blur very closely related groupings.
As a hypothesis, a clade can be rejected only if some groupings were explicitly excluded. It may then be found that the excluded group did actually descend from the last common ancestor of the group, and thus emerged within the group. ("Evolved from" is misleading, because in cladistics all descendants stay in the ancestral group). Upon finding that the group is paraphyletic
|
https://en.wikipedia.org/wiki/Foreign%20relations%20of%20Cuba
|
Cuba's foreign policy has been fluid throughout history depending on world events and other variables, including relations with the United States. Without massive Soviet subsidies and its primary trading partner after the fall of the USSR and the end of the Cold War, Cuba became increasingly isolated in the late 1980s and early 1990s, but it opened up to the rest of the world again in the late 1990s, entering bilateral co-operation with several South American countries, most notably Venezuela and Bolivia, especially after the 1999 election in Venezuela of Hugo Chávez, who became a staunch ally of Castro's Cuba. The United States maintained a policy of isolating Cuba until December 2014, when Barack Obama announced a new policy of diplomatic and economic engagement. The European Union accuses Cuba of "continuing flagrant violation of human rights and fundamental freedoms". Cuba has developed a growing relationship with the People's Republic of China and Russia. Cuba provided civilian assistance workers – principally medical – to more than 20 countries. More than one million exiles have escaped to foreign countries. Cuba's present foreign minister is Bruno Rodríguez Parrilla.
Cuba is currently a lead country on the United Nations Human Rights Council, and is a founding member of the organization known as the Bolivarian Alternative for the Americas, a member of the Community of Latin American and Caribbean S
|
https://en.wikipedia.org/wiki/Cracking
|
Cracking may refer to:
Cracking, the formation of a fracture or partial fracture in a solid material studied as fracture mechanics
Performing a sternotomy
Fluid catalytic cracking, a catalytic process widely used in oil refineries for cracking large hydrocarbon molecules into smaller molecules
Cracking (chemistry), the decomposition of complex organic molecules into smaller ones
Cracking joints, the practice of manipulating one's bone joints to make a sharp sound
Cracking codes, see cryptanalysis
Whip cracking
Safe cracking
Crackin, band featuring Lester Abrams
Packing and cracking, a method of creating voting districts to give a political party an advantage
In computing:
Another name for security hacking; the practice of defeating computer security.
Password cracking, the process of discovering the plaintext of an encrypted computer password.
Software cracking, the defeating of software copy protection.
See also
Crack (disambiguation)
Cracker (disambiguation)
Cracklings (solid material remaining after rendering fat)
Cracker (pejorative)
|
https://en.wikipedia.org/wiki/Compiler
|
In computing, a compiler is a computer program that translates computer code written in one programming language (the source language) into another language (the target language). The name "compiler" is primarily used for programs that translate source code from a high-level programming language to a low-level programming language (e.g. assembly language, object code, or machine code) to create an executable program.
There are many different types of compilers which produce output in different useful forms. A cross-compiler produces code for a different CPU or operating system than the one on which the cross-compiler itself runs. A bootstrap compiler is often a temporary compiler, used for compiling a more permanent or better optimised compiler for a language.
Related software includes: a program that translates from a low-level language to a higher-level one is a decompiler; a program that translates between high-level languages is usually called a source-to-source compiler or transpiler. A language rewriter is usually a program that translates the form of expressions without a change of language. A compiler-compiler is a compiler that produces a compiler (or part of one), often in a generic and reusable way so as to be able to produce many differing compilers.
A compiler is likely to perform some or all of the following operations, often called phases: preprocessing, lexical analysis, parsing, semantic analysis (syntax-directed translation), conversion of input programs
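A toy Python sketch of the first of those phases, lexical analysis, under assumptions of our own: the token names and four-token grammar are illustrative, and unexpected characters are silently skipped.

    import re

    TOKEN_SPEC = [
        ("NUMBER", r"\d+"),
        ("IDENT",  r"[A-Za-z_]\w*"),
        ("OP",     r"[+\-*/=]"),
        ("SKIP",   r"\s+"),
    ]
    TOKEN_RE = re.compile("|".join(f"(?P<{name}>{pat})" for name, pat in TOKEN_SPEC))

    def tokenize(source: str):
        # Group raw characters into (kind, text) tokens for a parser.
        for match in TOKEN_RE.finditer(source):
            if match.lastgroup != "SKIP":
                yield match.lastgroup, match.group()

    print(list(tokenize("x = 3 + y1")))
    # [('IDENT', 'x'), ('OP', '='), ('NUMBER', '3'), ('OP', '+'), ('IDENT', 'y1')]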
|
https://en.wikipedia.org/wiki/Key%20size
|
In cryptography, key size or key length refers to the number of bits in a key used by a cryptographic algorithm (such as a cipher).
Key length defines the upper bound on an algorithm's security (i.e. a logarithmic measure of the fastest known attack against an algorithm), because the security of all algorithms can be violated by brute-force attacks. Ideally, the lower bound on an algorithm's security is by design equal to the key length (that is, the algorithm's design does not detract from the degree of security inherent in the key length).
Most symmetric-key algorithms are designed to have security equal to their key length. However, after design, a new attack might be discovered. For instance, Triple DES was designed to have a 168-bit key, but an attack of complexity 2^112 is now known (i.e. Triple DES now only has 112 bits of security, and of the 168 bits in the key the attack has rendered 56 'ineffective' towards security). Nevertheless, as long as the security (understood as "the amount of effort it would take to gain access") is sufficient for a particular application, then it does not matter if key length and security coincide. This is important for asymmetric-key algorithms, because no such algorithm is known to satisfy this property; elliptic curve cryptography comes the closest with an effective security of roughly half its key length.
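Back-of-the-envelope arithmetic makes the brute-force point concrete; in the Python sketch below, the guess rate of 10^12 keys per second is an arbitrary illustrative assumption, not any real attacker's capability.

    def brute_force_years(security_bits: int, guesses_per_second: float = 1e12) -> float:
        # Expected time to search half of a 2**bits keyspace.
        seconds = 2 ** (security_bits - 1) / guesses_per_second
        return seconds / (365.25 * 24 * 3600)

    for bits in (56, 112, 128):
        print(bits, f"{brute_force_years(bits):.3g} years")
    # 56 bits falls in about ten hours at this rate, while 112 bits
    # (Triple DES's effective security) already takes ~8e13 years.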
Significance
Keys are used to control the operation of a cipher so that only the correct key can convert encrypted text (cipherte
|