https://en.wikipedia.org/wiki/Bioinformatics
|
Bioinformatics is an interdisciplinary field of science that develops methods and software tools for understanding biological data, especially when the data sets are large and complex. Bioinformatics uses biology, chemistry, physics, computer science, computer programming, information engineering, mathematics and statistics to analyze and interpret biological data. The subsequent process of analyzing and interpreting data is referred to as computational biology.
Computational, statistical, and computer programming techniques have been used for computer simulation analyses of biological queries. They include reused specific analysis "pipelines", particularly in the field of genomics, such as the identification of genes and single nucleotide polymorphisms (SNPs). These pipelines are used to better understand the genetic basis of disease, unique adaptations, desirable properties (esp. in agricultural species), or differences between populations. Bioinformatics also includes proteomics, which tries to understand the organizational principles within nucleic acid and protein sequences.
Image and signal processing allow extraction of useful results from large amounts of raw data. In the field of genetics, it aids in sequencing and annotating genomes and their observed mutations. Bioinformatics includes text mining of biological literature and the development of biological and gene ontologies to organize and query biological data. It also plays a role in the analysis of gen
|
https://en.wikipedia.org/wiki/Cell%20%28biology%29
|
The cell is the basic structural and functional unit of all forms of life. Every cell consists of cytoplasm enclosed within a membrane, and contains many macromolecules such as proteins, DNA and RNA, as well as many small molecules of nutrients and metabolites. The term comes from the Latin word meaning 'small room'.
Cells can acquire specialized functions and carry out various tasks such as replication, DNA repair, protein synthesis, and motility; they are capable of specialization and of movement.
Most plant and animal cells are only visible under a light microscope, with dimensions between 1 and 100 micrometres. Electron microscopy gives a much higher resolution showing greatly detailed cell structure. Organisms can be classified as unicellular (consisting of a single cell such as bacteria) or multicellular (including plants and animals). Most unicellular organisms are classed as microorganisms.
The study of cells and how they work has led to many other studies in related areas of biology, including: discovery of DNA, cancer systems biology, aging and developmental biology.
Cell biology is the study of cells, which were discovered by Robert Hooke in 1665, who named them for their resemblance to cells inhabited by Christian monks in a monastery. Cell theory, first developed in 1839 by Matthias Jakob Schleiden and Theodor Schwann, states that all organisms are composed of one or more cells, that cells are the fundamental unit of structure an
|
https://en.wikipedia.org/wiki/Binary%20search%20algorithm
|
In computer science, binary search, also known as half-interval search, logarithmic search, or binary chop, is a search algorithm that finds the position of a target value within a sorted array. Binary search compares the target value to the middle element of the array. If they are not equal, the half in which the target cannot lie is eliminated and the search continues on the remaining half, again taking the middle element to compare to the target value, and repeating this until the target value is found. If the search ends with the remaining half being empty, the target is not in the array.
Binary search runs in logarithmic time in the worst case, making O(log n) comparisons, where n is the number of elements in the array. Binary search is faster than linear search except for small arrays. However, the array must be sorted first to be able to apply binary search. There are specialized data structures designed for fast searching, such as hash tables, that can be searched more efficiently than binary search. However, binary search can be used to solve a wider range of problems, such as finding the next-smallest or next-largest element in the array relative to the target even if it is absent from the array.
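As an illustration (a sketch that is not part of the article), the halving procedure described above can be written in Python as follows; the name binary_search is arbitrary:

def binary_search(arr, target):
    """Return the index of target in the sorted list arr, or -1 if absent."""
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2          # middle element of the remaining half
        if arr[mid] == target:
            return mid
        elif arr[mid] < target:
            lo = mid + 1              # target can only lie in the upper half
        else:
            hi = mid - 1              # target can only lie in the lower half
    return -1                          # remaining half is empty: not present

print(binary_search([1, 3, 5, 7, 9, 11], 7))   # 3
print(binary_search([1, 3, 5, 7, 9, 11], 4))   # -1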
There are numerous variations of binary search. In particular, fractional cascading speeds up binary searches for the same value in multiple arrays. Fractional cascading efficiently solves a number of search problems in computational geometry and in numerous other fields. Exponent
|
https://en.wikipedia.org/wiki/Binary%20search%20tree
|
In computer science, a binary search tree (BST), also called an ordered or sorted binary tree, is a rooted binary tree data structure with the key of each internal node being greater than all the keys in the respective node's left subtree and less than the ones in its right subtree. The time complexity of operations on the binary search tree is linear with respect to the height of the tree.
Binary search trees allow binary search for fast lookup, addition, and removal of data items. Since the nodes in a BST are laid out so that each comparison skips about half of the remaining tree, the lookup performance is proportional to that of binary logarithm. BSTs were devised in the 1960s for the problem of efficient storage of labeled data and are attributed to Conway Berners-Lee and David Wheeler.
The performance of a binary search tree is dependent on the order of insertion of the nodes into the tree since arbitrary insertions may lead to degeneracy; several variations of the binary search tree can be built with guaranteed worst-case performance. The basic operations include: search, traversal, insert and delete. BSTs with guaranteed worst-case complexities perform better than an unsorted array, which would require linear search time.
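For illustration only, a minimal Python sketch of the insert and search operations on a plain, unbalanced BST might look like the following; the Node class and function names are illustrative, and no rebalancing is attempted:

class Node:
    def __init__(self, key):
        self.key, self.left, self.right = key, None, None

def insert(root, key):
    """Insert key and return the (possibly new) root; duplicates are ignored."""
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = insert(root.left, key)
    elif key > root.key:
        root.right = insert(root.right, key)
    return root

def search(root, key):
    """Each comparison discards one subtree, so the cost is proportional to the height."""
    while root is not None and root.key != key:
        root = root.left if key < root.key else root.right
    return root is not None

root = None
for k in [8, 3, 10, 1, 6]:
    root = insert(root, k)
print(search(root, 6), search(root, 7))   # True False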
The complexity analysis of BST shows that, on average, the insert, delete and search operations take O(log n) time for n nodes. In the worst case, they degrade to that of a singly linked list: O(n). To address the boundless increase of the tree height with arbitrary insertion
|
https://en.wikipedia.org/wiki/Binary%20tree
|
In computer science, a binary tree is a tree data structure in which each node has at most two children, referred to as the left child and the right child. That is, it is a k-ary tree with k = 2. A recursive definition using set theory is that a binary tree is a tuple (L, S, R), where L and R are binary trees or the empty set and S is a singleton set containing the root.
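To make the recursive set-theoretic definition concrete, the following small Python sketch (an illustration, not part of the article) models a binary tree as a tuple (L, S, R), with the empty set represented by None:

# A binary tree is either None (the empty set) or a tuple (L, S, R),
# where L and R are binary trees and S is the singleton {root value}.
def leaf(value):
    return (None, {value}, None)

def size(tree):
    """Number of nodes, following the recursive definition."""
    if tree is None:
        return 0
    left, _root, right = tree
    return 1 + size(left) + size(right)

t = (leaf(2), {1}, (None, {3}, leaf(4)))   # 1 has children 2 and 3; 3 has a right child 4
print(size(t))   # 4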
From a graph theory perspective, binary trees as defined here are arborescences. A binary tree may thus be also called a bifurcating arborescence, a term which appears in some very old programming books before the modern computer science terminology prevailed. It is also possible to interpret a binary tree as an undirected, rather than directed graph, in which case a binary tree is an ordered, rooted tree. Some authors use rooted binary tree instead of binary tree to emphasize the fact that the tree is rooted, but as defined above, a binary tree is always rooted.
In mathematics, what is termed binary tree can vary significantly from author to author. Some use the definition commonly used in computer science, but others define it as every non-leaf having exactly two children and don't necessarily label the children as left and right either.
In computing, binary trees can be used in two very different ways:
First, as a means of accessing nodes based on some value or label associated with each node. Binary trees labelled this way are used to implement binary search trees and binary heaps, and are used for efficie
|
https://en.wikipedia.org/wiki/Borel%20measure
|
In mathematics, specifically in measure theory, a Borel measure on a topological space is a measure that is defined on all open sets (and thus on all Borel sets). Some authors require additional restrictions on the measure, as described below.
Formal definition
Let X be a locally compact Hausdorff space, and let B(X) be the smallest σ-algebra that contains the open sets of X; this is known as the σ-algebra of Borel sets. A Borel measure is any measure μ defined on the σ-algebra of Borel sets. A few authors require in addition that μ is locally finite, meaning that μ(C) < ∞ for every compact set C. If a Borel measure μ is both inner regular and outer regular, it is called a regular Borel measure. If μ is both inner regular, outer regular, and locally finite, it is called a Radon measure.
On the real line
The real line ℝ with its usual topology is a locally compact Hausdorff space; hence we can define a Borel measure on it. In this case, B(ℝ) is the smallest σ-algebra that contains the open intervals of ℝ. While there are many Borel measures μ, the choice of Borel measure that assigns μ((a, b]) = b − a for every half-open interval (a, b] is sometimes called "the" Borel measure on ℝ. This measure turns out to be the restriction to the Borel σ-algebra of the Lebesgue measure λ, which is a complete measure and is defined on the Lebesgue σ-algebra. The Lebesgue σ-algebra is actually the completion of the Borel σ-algebra, which means that it is the smallest σ-algebra that contains all the Borel sets and can be equipped with a
|
https://en.wikipedia.org/wiki/Bilinear%20map
|
In mathematics, a bilinear map is a function combining elements of two vector spaces to yield an element of a third vector space, and is linear in each of its arguments. Matrix multiplication is an example.
Definition
Vector spaces
Let V, W and X be three vector spaces over the same base field F. A bilinear map is a function
B : V × W → X
such that for all w in W, the map v ↦ B(v, w)
is a linear map from V to X, and for all v in V, the map w ↦ B(v, w)
is a linear map from W to X. In other words, when we hold the first entry of the bilinear map fixed while letting the second entry vary, the result is a linear operator, and similarly for when we hold the second entry fixed.
Such a map B satisfies the following properties.
For any scalar λ in F, B(λv, w) = λB(v, w) = B(v, λw).
The map B is additive in both components: if v1, v2 ∈ V and w1, w2 ∈ W, then B(v1 + v2, w) = B(v1, w) + B(v2, w) and B(v, w1 + w2) = B(v, w1) + B(v, w2).
If V = W and we have B(v, w) = B(w, v) for all v, w in V, then we say that B is symmetric. If X is the base field F, then the map is called a bilinear form; bilinear forms are well studied (for example: scalar product, inner product, and quadratic form).
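As a numerical illustration (not from the article), the following Python sketch checks that the ordinary dot product on R^3 is linear in each argument separately; NumPy is used only for convenience:

import numpy as np

def B(v, w):
    return np.dot(v, w)           # a bilinear map from R^3 x R^3 to R

v, v2 = np.array([1., 2., 3.]), np.array([0., -1., 4.])
w, w2 = np.array([2., 0., 5.]), np.array([1., 1., 1.])
lam = 2.5

# Linearity in the first argument (second argument held fixed) ...
print(np.isclose(B(lam * v + v2, w), lam * B(v, w) + B(v2, w)))   # True
# ... and in the second argument (first argument held fixed).
print(np.isclose(B(v, lam * w + w2), lam * B(v, w) + B(v, w2)))   # True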
Modules
The definition works without any changes if instead of vector spaces over a field F, we use modules over a commutative ring R. It generalizes to n-ary functions, where the proper term is multilinear.
For non-commutative rings R and S, a left R-module M and a right S-module N, a bilinear map is a map B : M × N → T with T an (R, S)-bimodule, for which, for any n in N, m ↦ B(m, n) is an R-module homomorphism, and for any m in M, n ↦ B(m, n) is an S-module homomorphism. This satisfies
B(r ⋅ m, n) = r ⋅ B(m, n)
B(m, n ⋅ s) = B(m, n) ⋅ s
for all m in M, n in N, r in R and s in S.
|
https://en.wikipedia.org/wiki/BCS%20theory
|
In physics, the Bardeen–Cooper–Schrieffer (BCS) theory (named after John Bardeen, Leon Cooper, and John Robert Schrieffer) is the first microscopic theory of superconductivity since Heike Kamerlingh Onnes's 1911 discovery. The theory describes superconductivity as a microscopic effect caused by a condensation of Cooper pairs. The theory is also used in nuclear physics to describe the pairing interaction between nucleons in an atomic nucleus.
It was proposed by Bardeen, Cooper, and Schrieffer in 1957; they received the Nobel Prize in Physics for this theory in 1972.
History
Rapid progress in the understanding of superconductivity gained momentum in the mid-1950s. It began with the 1948 paper, "On the Problem of the Molecular Theory of Superconductivity", where Fritz London proposed that the phenomenological London equations may be consequences of the coherence of a quantum state. In 1953, Brian Pippard, motivated by penetration experiments, proposed that this would modify the London equations via a new scale parameter called the coherence length. John Bardeen then argued in the 1955 paper, "Theory of the Meissner Effect in Superconductors", that such a modification naturally occurs in a theory with an energy gap. The key ingredient was Leon Cooper's calculation of the bound states of electrons subject to an attractive force in his 1956 paper, "Bound Electron Pairs in a Degenerate Fermi Gas".
In 1957 Bardeen and Cooper assembled these ingredients and constructed such a theo
|
https://en.wikipedia.org/wiki/Bose%E2%80%93Einstein%20condensate
|
In condensed matter physics, a Bose–Einstein condensate (BEC) is a state of matter that is typically formed when a gas of bosons at very low densities is cooled to temperatures very close to absolute zero (−273.15 °C or −459.67 °F). Under such conditions, a large fraction of bosons occupy the lowest quantum state, at which microscopic quantum mechanical phenomena, particularly wavefunction interference, become apparent macroscopically.
This state was first predicted, generally, in 1924–1925 by Albert Einstein, crediting a pioneering paper by Satyendra Nath Bose on the new field now known as quantum statistics. In 1995, the Bose–Einstein condensate was created by Eric Cornell and Carl Wieman of the University of Colorado Boulder using rubidium atoms; later that year, Wolfgang Ketterle of MIT produced a BEC using sodium atoms. In 2001 Cornell, Wieman and Ketterle shared the Nobel Prize in Physics "for the achievement of Bose-Einstein condensation in dilute gases of alkali atoms, and for early fundamental studies of the properties of the condensates."
History
Bose first sent a paper to Einstein on the quantum statistics of light quanta (now called photons), in which he derived Planck's quantum radiation law without any reference to classical physics. Einstein was impressed, translated the paper himself from English to German and submitted it for Bose to the Zeitschrift für Physik, which published it in 1924. (The Einstein manuscript, once believed to be lost, was found in a
|
https://en.wikipedia.org/wiki/Beer%E2%80%93Lambert%20law
|
The Beer-Lambert law is commonly applied to chemical analysis measurements to determine the concentration of chemical species that absorb light. It is often referred to as Beer's law. In physics, the Bouguer–Lambert law is an empirical law which relates the extinction or attenuation of light to the properties of the material through which the light is travelling. It had its first use in astronomical extinction. The fundamental law of extinction (the process is linear in the intensity of radiation and amount of radiatively active matter, provided that the physical state is held constant) is sometimes called the Beer-Bouguer-Lambert law or the Bouguer-Beer-Lambert law or merely the extinction law. The extinction law is also used in understanding attenuation in physical optics, for photons, neutrons, or rarefied gases. In mathematical physics, this law arises as a solution of the BGK equation.
History
Bouguer–Lambert law: This law is based on observations made by Pierre Bouguer before 1729. It is often attributed to Johann Heinrich Lambert, who cited Bouguer's work (Claude Jombert, Paris, 1729) – and even quoted from it – in his Photometria in 1760. Lambert expressed the law, which states that the loss of light intensity when it propagates in a medium is directly proportional to intensity and path length, in the mathematical form used today.
Lambert began by assuming that the intensity of light traveling into an absorbing body would be given by the differential equation: w
|
https://en.wikipedia.org/wiki/BCE%20%28disambiguation%29
|
BCE is an abbreviation meaning Before Common Era, an alternative to the use of BC.
BCE, B.C.E. or bce may also refer to:
Bachelor of Civil Engineering
Banco Central del Ecuador
Basic Chess Endings, a book by Reuben Fine
BCE Inc., formerly Bell Canada Enterprises
BCE Place, Toronto, Canada, later Brookfield Place
Bracknell railway station, Berkshire, UK, station code BCE
Bhagalpur College of Engineering
Entity-control-boundary, an architectural pattern used in software design
See also
|
https://en.wikipedia.org/wiki/Outline%20of%20biology
|
Biology – The natural science that studies life. Areas of focus include structure, function, growth, origin, evolution, distribution, and taxonomy.
History of biology
History of anatomy
History of biochemistry
History of biotechnology
History of ecology
History of genetics
History of evolutionary thought:
The eclipse of Darwinism – Catastrophism – Lamarckism – Orthogenesis – Mutationism – Structuralism – Vitalism
Modern (evolutionary) synthesis
History of molecular evolution
History of speciation
History of medicine
History of model organisms
History of molecular biology
Natural history
History of plant systematics
Overview
Biology
Science
Life
Properties: Adaptation – Energy processing – Growth – Order – Regulation – Reproduction – Response to environment
Biological organization: atom – molecule – cell – tissue – organ – organ system – organism – population – community – ecosystem – biosphere
Approach: Reductionism – emergent property – mechanistic
Biology as a science:
Natural science
Scientific method: observation – research question – hypothesis – testability – prediction – experiment – data – statistics
Scientific theory – scientific law
Research method
List of research methods in biology
Scientific literature
List of biology journals: peer review
Chemical basis
Outline of biochemistry
Atoms and molecules
matter – element – atom – proton – neutron – electron – Bohr model – isotope – chemical bond – ionic bond – ions – covalent bond – hydrogen bond – molecule
Water:
|
https://en.wikipedia.org/wiki/Bra%E2%80%93ket%20notation
|
Bra–ket notation, also called Dirac notation, is a notation for linear algebra and linear operators on complex vector spaces together with their dual space both in the finite-dimensional and infinite-dimensional case. It is specifically designed to ease the types of calculations that frequently come up in quantum mechanics. Its use in quantum mechanics is quite widespread.
Bra-ket notation was created by Paul Dirac in his 1939 publication A New Notation for Quantum Mechanics. The notation was introduced as an easier way to write quantum mechanical expressions. The name comes from the English word "Bracket".
Quantum mechanics
In quantum mechanics, bra–ket notation is used ubiquitously to denote quantum states. The notation uses angle brackets, ⟨ and ⟩, and a vertical bar, |, to construct "bras" and "kets".
A ket is of the form |v⟩. Mathematically it denotes a vector, v, in an abstract (complex) vector space V, and physically it represents a state of some quantum system.
A bra is of the form ⟨f|. Mathematically it denotes a linear form f : V → ℂ, i.e. a linear map that maps each vector in V to a number in the complex plane ℂ. Letting the linear functional ⟨f| act on a vector |v⟩ is written as ⟨f|v⟩.
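For concreteness (an illustrative sketch rather than part of the article), in a finite-dimensional space a ket can be modelled as a complex column vector and the corresponding bra as its conjugate transpose, so that ⟨f|v⟩ is an ordinary complex inner product:

import numpy as np

ket_v = np.array([[1 + 1j], [0], [2j]])    # |v>, a column vector in C^3
ket_f = np.array([[1], [1j], [0]])         # |f>

bra_f = ket_f.conj().T                     # <f| is the conjugate transpose of |f>
braket = (bra_f @ ket_v).item()            # the number <f|v>
print(braket)                              # 1*(1+1j) + (-1j)*0 + 0*(2j) = (1+1j)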
Assume that on V there exists an inner product (·,·) with antilinear first argument, which makes V an inner product space. Then with this inner product each vector φ ≡ |φ⟩ can be identified with a corresponding linear form, by placing the vector in the anti-linear first slot of the inner product: (φ,·) ≡ ⟨φ|. The correspondence between
|
https://en.wikipedia.org/wiki/Baryon
|
In particle physics, a baryon is a type of composite subatomic particle which contains an odd number of valence quarks (at least 3). Baryons belong to the hadron family of particles; hadrons are composed of quarks. Baryons are also classified as fermions because they have half-integer spin.
The name "baryon", introduced by Abraham Pais, comes from the Greek word for "heavy" (βαρύς, barýs), because, at the time of their naming, most known elementary particles had lower masses than the baryons. Each baryon has a corresponding antiparticle (antibaryon) where their corresponding antiquarks replace quarks. For example, a proton is made of two up quarks and one down quark; and its corresponding antiparticle, the antiproton, is made of two up antiquarks and one down antiquark.
Baryons participate in the residual strong force, which is mediated by particles known as mesons. The most familiar baryons are protons and neutrons, both of which contain three quarks, and for this reason they are sometimes called triquarks. These particles make up most of the mass of the visible matter in the universe and compose the nucleus of every atom (electrons, the other major component of the atom, are members of a different family of particles called leptons; leptons do not interact via the strong force). Exotic baryons containing five quarks, called pentaquarks, have also been discovered and studied.
A census of the Universe's baryons indicates that 10% of them could be found inside galaxies, 50
|
https://en.wikipedia.org/wiki/Block%20cipher
|
In cryptography, a block cipher is a deterministic algorithm that operates on fixed-length groups of bits, called blocks. Block ciphers are the elementary building blocks of many cryptographic protocols. They are ubiquitous in the storage and exchange of data, where such data is secured and authenticated via encryption.
A block cipher uses blocks as an unvarying transformation. Even a secure block cipher is suitable for the encryption of only a single block of data at a time, using a fixed key. A multitude of modes of operation have been designed to allow their repeated use in a secure way to achieve the security goals of confidentiality and authenticity. However, block ciphers may also feature as building blocks in other cryptographic protocols, such as universal hash functions and pseudorandom number generators.
Definition
A block cipher consists of two paired algorithms, one for encryption, E, and the other for decryption, D. Both algorithms accept two inputs: an input block of size n bits and a key of size k bits; and both yield an n-bit output block. The decryption algorithm D is defined to be the inverse function of encryption, i.e., D = E⁻¹. More formally, a block cipher is specified by an encryption function
E_K(P) := E(K, P) : {0, 1}^k × {0, 1}^n → {0, 1}^n,
which takes as input a key K, of bit length k (called the key size), and a bit string P, of length n (called the block size), and returns a string C of n bits. P is called the plaintext, and C is termed the ciphertext. For each K, the function E_K(P) is required to be an invertible mapping on {0, 1}^n.
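The following deliberately toy-sized (and completely insecure) Python sketch, which is not from the article, illustrates the interface: encryption under a fixed key is an invertible mapping on 8-bit blocks and decryption is its inverse:

def encrypt(key, block):
    """Toy 8-bit 'block cipher': XOR with the key, then rotate left by 3 bits."""
    x = (block ^ key) & 0xFF
    return ((x << 3) | (x >> 5)) & 0xFF

def decrypt(key, block):
    """Inverse transformation: rotate right by 3 bits, then XOR with the key."""
    x = ((block >> 3) | (block << 5)) & 0xFF
    return (x ^ key) & 0xFF

key, plaintext = 0b10110001, 0b01100111
ciphertext = encrypt(key, plaintext)
print(decrypt(key, ciphertext) == plaintext)   # True: decryption inverts encryption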
|
https://en.wikipedia.org/wiki/Bilinear%20transform
|
The bilinear transform (also known as Tustin's method, after Arnold Tustin) is used in digital signal processing and discrete-time control theory to transform continuous-time system representations to discrete-time and vice versa.
The bilinear transform is a special case of a conformal mapping (namely, a Möbius transformation), often used to convert a transfer function of a linear, time-invariant (LTI) filter in the continuous-time domain (often called an analog filter) to a transfer function of a linear, shift-invariant filter in the discrete-time domain (often called a digital filter although there are analog filters constructed with switched capacitors that are discrete-time filters). It maps positions on the jω axis, Re(s) = 0, in the s-plane to the unit circle, |z| = 1, in the z-plane. Other bilinear transforms can be used to warp the frequency response of any discrete-time linear system (for example to approximate the non-linear frequency resolution of the human auditory system) and are implementable in the discrete domain by replacing a system's unit delays with first order all-pass filters.
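As an illustration under simple assumptions (a first-order analog low-pass filter H(s) = a/(s + a) and the substitution s = (2/T)(1 − z⁻¹)/(1 + z⁻¹); this example is not part of the article), the substitution can be carried out by hand in a few lines of Python:

def bilinear_first_order(a, T):
    """Digital coefficients for H(s) = a/(s + a) via s = (2/T)(1 - z^-1)/(1 + z^-1).

    Returns (b, a_d) for H(z) = (b[0] + b[1] z^-1) / (a_d[0] + a_d[1] z^-1).
    """
    b = [a * T, a * T]
    a_d = [2 + a * T, a * T - 2]
    return b, a_d

b, a_d = bilinear_first_order(a=100.0, T=1e-3)   # 100 rad/s cutoff, 1 kHz sampling
dc_gain = sum(b) / sum(a_d)                      # evaluate H(z) at z = 1
print(b, a_d, dc_gain)                           # DC gain is 1.0, as for the analog filter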
The transform preserves stability and maps every point of the frequency response of the continuous-time filter, to a corresponding point in the frequency response of the discrete-time filter, although to a somewhat different frequency, as shown in the Frequency warping section below. This means that for every feature that one sees in the frequency response of the analog filter, there is a c
|
https://en.wikipedia.org/wiki/Beta%20decay
|
In nuclear physics, beta decay (β-decay) is a type of radioactive decay in which an atomic nucleus emits a beta particle (fast energetic electron or positron), transforming into an isobar of that nuclide. For example, beta decay of a neutron transforms it into a proton by the emission of an electron accompanied by an antineutrino; or, conversely a proton is converted into a neutron by the emission of a positron with a neutrino in so-called positron emission. Neither the beta particle nor its associated (anti-)neutrino exist within the nucleus prior to beta decay, but are created in the decay process. By this process, unstable atoms obtain a more stable ratio of protons to neutrons. The probability of a nuclide decaying due to beta and other forms of decay is determined by its nuclear binding energy. The binding energies of all existing nuclides form what is called the nuclear band or valley of stability. For either electron or positron emission to be energetically possible, the energy release (see below) or Q value must be positive.
Beta decay is a consequence of the weak force, which is characterized by relatively lengthy decay times. Nucleons are composed of up quarks and down quarks, and the weak force allows a quark to change its flavour by emission of a W boson leading to creation of an electron/antineutrino or positron/neutrino pair. For example, a neutron, composed of two down quarks and an up quark, decays to a proton composed of a down quark and two up quarks.
Elec
|
https://en.wikipedia.org/wiki/Banach%20algebra
|
In mathematics, especially functional analysis, a Banach algebra, named after Stefan Banach, is an associative algebra A over the real or complex numbers (or over a non-Archimedean complete normed field) that at the same time is also a Banach space, that is, a normed space that is complete in the metric induced by the norm. The norm is required to satisfy ‖x y‖ ≤ ‖x‖ ‖y‖ for all x, y in A.
This ensures that the multiplication operation is continuous.
A Banach algebra is called unital if it has an identity element for the multiplication whose norm is 1, and commutative if its multiplication is commutative.
Any Banach algebra A (whether it has an identity element or not) can be embedded isometrically into a unital Banach algebra A_e so as to form a closed ideal of A_e. Often one assumes a priori that the algebra under consideration is unital: for one can develop much of the theory by considering A_e and then applying the outcome in the original algebra. However, this is not the case all the time. For example, one cannot define all the trigonometric functions in a Banach algebra without identity.
The theory of real Banach algebras can be very different from the theory of complex Banach algebras. For example, the spectrum of an element of a nontrivial complex Banach algebra can never be empty, whereas in a real Banach algebra it could be empty for some elements.
Banach algebras can also be defined over fields of p-adic numbers. This is part of p-adic analysis.
Examples
The prototypical example of a Banach algebra is C0(X), the
|
https://en.wikipedia.org/wiki/Binomial%20coefficient
|
In mathematics, the binomial coefficients are the positive integers that occur as coefficients in the binomial theorem. Commonly, a binomial coefficient is indexed by a pair of integers n ≥ k ≥ 0 and is written C(n, k). It is the coefficient of the x^k term in the polynomial expansion of the binomial power (1 + x)^n; this coefficient can be computed by the multiplicative formula
C(n, k) = n (n − 1) (n − 2) ⋯ (n − k + 1) / (k (k − 1) ⋯ 1),
which using factorial notation can be compactly expressed as
C(n, k) = n! / (k! (n − k)!).
For example, the fourth power of 1 + x is
(1 + x)^4 = 1 + 4x + 6x² + 4x³ + x⁴,
and the binomial coefficient C(4, 2) = 6 is the coefficient of the x² term.
Arranging the numbers C(n, 0), C(n, 1), ..., C(n, n) in successive rows for n = 0, 1, 2, ... gives a triangular array called Pascal's triangle, satisfying the recurrence relation
C(n, k) = C(n − 1, k − 1) + C(n − 1, k).
The binomial coefficients occur in many areas of mathematics, and especially in combinatorics. The symbol C(n, k) is usually read as "n choose k" because there are C(n, k) ways to choose an (unordered) subset of k elements from a fixed set of n elements. For example, there are C(4, 2) = 6 ways to choose 2 elements from {1, 2, 3, 4}, namely {1, 2}, {1, 3}, {1, 4}, {2, 3}, {2, 4} and {3, 4}.
The binomial coefficients can be generalized to C(z, k) for any complex number z and integer k ≥ 0, and many of their properties continue to hold in this more general form.
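A short Python sketch of the multiplicative formula (illustrative only; math.comb in the standard library returns the same values):

from math import comb

def binomial(n, k):
    """Multiplicative formula: n (n-1) ... (n-k+1) / k!."""
    result = 1
    for i in range(1, k + 1):
        result = result * (n - i + 1) // i   # stays an exact integer at every step
    return result

print(binomial(4, 2), comb(4, 2))   # 6 6  -> the coefficient of x^2 in (1 + x)^4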
History and notation
Andreas von Ettingshausen introduced the notation in 1826, although the numbers were known centuries earlier (see Pascal's triangle). In about 1150, the Indian mathematician Bhaskaracharya gave an exposition of binomial coefficients in his book Līlāvatī.
Alternative notations include C(n, k), C(n; k), and sub- and superscript variants such as nCk and Cn,k, in all of which the C stands for combinations or choices.
|
https://en.wikipedia.org/wiki/B-tree
|
In computer science, a B-tree is a self-balancing tree data structure that maintains sorted data and allows searches, sequential access, insertions, and deletions in logarithmic time. The B-tree generalizes the binary search tree, allowing for nodes with more than two children. Unlike other self-balancing binary search trees, the B-tree is well suited for storage systems that read and write relatively large blocks of data, such as databases and file systems.
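As a rough, illustrative sketch (not from the article), the fragment below shows the shape of an order-m B-tree node under Knuth's definition, where a node holds at most m children and m − 1 sorted keys; the class and method names are hypothetical:

class BTreeNode:
    """Order-m B-tree node: at most m children and m - 1 keys; keys are kept sorted."""
    def __init__(self, m, leaf=True):
        self.m = m
        self.leaf = leaf
        self.keys = []        # at most m - 1 keys
        self.children = []    # at most m children (empty for leaves)

    def is_overfull(self):
        """True when an insertion has pushed the node past m - 1 keys and it must split."""
        return len(self.keys) > self.m - 1

node = BTreeNode(m=4)
node.keys = [10, 20, 30]
print(node.is_overfull())   # False: exactly m - 1 = 3 keys fit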
History
B-trees were invented by Rudolf Bayer and Edward M. McCreight while working at Boeing Research Labs, for the purpose of efficiently managing index pages for large random-access files. The basic assumption was that indices would be so voluminous that only small chunks of the tree could fit in main memory. Bayer and McCreight's paper, Organization and maintenance of large ordered indices, was first circulated in July 1970 and later published in Acta Informatica.
Bayer and McCreight never explained what, if anything, the B stands for: Boeing, balanced, between, broad, bushy, and Bayer have been suggested. McCreight has said that "the more you think about what the B in B-trees means, the better you understand B-trees."
In 2011 Google developed the C++ B-Tree, reporting a 50-80% reduction in memory use for small data types and improved performance for large data sets when compared to a Red-Black tree.
Definition
According to Knuth's definition, a B-tree of order m is a tree which satisfies the following propertie
|
https://en.wikipedia.org/wiki/Boolean%20satisfiability%20problem
|
In logic and computer science, the Boolean satisfiability problem (sometimes called propositional satisfiability problem and abbreviated SATISFIABILITY, SAT or B-SAT) is the problem of determining if there exists an interpretation that satisfies a given Boolean formula. In other words, it asks whether the variables of a given Boolean formula can be consistently replaced by the values TRUE or FALSE in such a way that the formula evaluates to TRUE. If this is the case, the formula is called satisfiable. On the other hand, if no such assignment exists, the function expressed by the formula is FALSE for all possible variable assignments and the formula is unsatisfiable. For example, the formula "a AND NOT b" is satisfiable because one can find the values a = TRUE and b = FALSE, which make (a AND NOT b) = TRUE. In contrast, "a AND NOT a" is unsatisfiable.
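A brute-force check, which is illustrative only and exponential in the number of variables, makes the definition concrete in Python:

from itertools import product

def is_satisfiable(formula, variables):
    """Try every TRUE/FALSE assignment of the variables; formula is a Python callable."""
    return any(formula(**dict(zip(variables, values)))
               for values in product([True, False], repeat=len(variables)))

print(is_satisfiable(lambda a, b: a and not b, ["a", "b"]))   # True  (a=TRUE, b=FALSE)
print(is_satisfiable(lambda a: a and not a, ["a"]))           # False (unsatisfiable)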
SAT is the first problem that was proven to be NP-complete; see Cook–Levin theorem. This means that all problems in the complexity class NP, which includes a wide range of natural decision and optimization problems, are at most as difficult to solve as SAT. There is no known algorithm that efficiently solves each SAT problem, and it is generally believed that no such algorithm exists; yet this belief has not been proved mathematically, and resolving the question of whether SAT has a polynomial-time algorithm is equivalent to the P versus NP problem, which is a famous open problem in the theory of computing.
Nevertheless, as of
|
https://en.wikipedia.org/wiki/Bernoulli%27s%20inequality
|
In mathematics, Bernoulli's inequality (named after Jacob Bernoulli) is an inequality that approximates exponentiations of 1 + x. It is often employed in real analysis. It has several useful variants:
Integer exponent
Case 1: (1 + x)^n ≥ 1 + nx for every integer n ≥ 1 and real number x ≥ −1. The inequality is strict if x ≠ 0 and n ≥ 2.
Case 2: (1 + x)^n ≥ 1 + nx for every integer n ≥ 0 and every real number x ≥ −2.
Case 3: (1 + x)^n ≥ 1 + nx for every even integer n and every real number x.
Real exponent
(1 + x)^r ≥ 1 + rx for every real number r ≥ 1 and x ≥ −1. The inequality is strict if x ≠ 0 and r ≠ 1.
(1 + x)^r ≤ 1 + rx for every real number 0 ≤ r ≤ 1 and x ≥ −1.
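A quick numerical spot check of the integer-exponent form (an illustrative sketch, not part of the article):

def bernoulli_holds(x, n):
    """Check (1 + x)**n >= 1 + n*x for a real x >= -1 and an integer n >= 0."""
    return (1 + x) ** n >= 1 + n * x

print(all(bernoulli_holds(x, n)
          for x in [-1.0, -0.5, 0.0, 0.3, 2.0]
          for n in range(0, 10)))   # True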
History
Jacob Bernoulli first published the inequality in his treatise "Positiones Arithmeticae de Seriebus Infinitis" (Basel, 1689), where he used the inequality often.
According to Joseph E. Hofmann, Über die Exercitatio Geometrica des M. A. Ricci (1963), p. 177, the inequality is actually due to Sluse in his Mesolabum (1668 edition), Chapter IV "De maximis & minimis".
Proof for integer exponent
The first case has a simple inductive proof:
Suppose the statement is true for r = k:
(1 + x)^k ≥ 1 + kx.
Then it follows that
(1 + x)^(k+1) = (1 + x)(1 + x)^k ≥ (1 + x)(1 + kx) = 1 + (k + 1)x + kx² ≥ 1 + (k + 1)x.
Bernoulli's inequality can be proved for case 2, in which the exponent r is a non-negative integer and x ≥ −2, using mathematical induction in the following form:
we prove the inequality for r ∈ {0, 1},
from validity for some r we deduce validity for r + 2.
For r = 0,
(1 + x)^0 ≥ 1 + 0·x is equivalent to 1 ≥ 1, which is true.
Similarly, for r = 1 we have (1 + x)^1 = 1 + x ≥ 1 + x.
Now suppose the statement is true for r = k:
(1 + x)^k ≥ 1 + kx.
Then it follows that
(1 + x)^(k+2) = (1 + x)^k (1 + x)² ≥ (1 + kx)(1 + 2x + x²) = 1 + (k + 2)x + x²(2k + kx + 1) ≥ 1 + (k + 2)x,
since x² ≥ 0 as well as 2k + kx + 1 ≥ 0 for x ≥ −2. By the modified induction we conclude the statement is true for every non-negative integer
|
https://en.wikipedia.org/wiki/International%20Bureau%20of%20Weights%20and%20Measures
|
The International Bureau of Weights and Measures (French: Bureau international des poids et mesures, BIPM) is an intergovernmental organisation, through which its 59 member-states act on measurement standards in four areas: chemistry, ionising radiation, physical metrology, as well as Coordinated Universal Time. It is based in Saint-Cloud, near Paris, France. The organisation has been referred to as IBWM (from its name in English) in older literature.
Structure
The BIPM is overseen by the International Committee for Weights and Measures (CIPM), a committee of eighteen members that meet normally in two sessions per year, which is in turn overseen by the General Conference on Weights and Measures (CGPM) that meets in Paris usually once every four years, consisting of delegates of the governments of the Member States and observers from the Associates of the CGPM. These organs are also commonly referred to by their French initialisms.
History
The BIPM was created on 20 May 1875, following the signing of the Metre Convention, a treaty among 17 Member States (there are now 59 members).
It is based at the Pavillon de Breteuil in Saint-Cloud, France, a site granted to the Bureau by the French Government in 1876. Since 1969 the site has been considered international territory, and the BIPM has all the rights and privileges accorded an intergovernmental organisation. This status was further clarified by the French decree No 70-820 of 9 September 1970.
Function
The BIPM has the mandate to provide the basis for a single, c
|
https://en.wikipedia.org/wiki/Bladder%20%28disambiguation%29
|
The bladder (or urinary bladder) is an organ that collects urine for excretion in animals.
Bladder may also refer to:
Biology
Artificial urinary bladder, in humans
Gallbladder, which stores bile for digestion
Pig bladder, urinary bladder of a domestic pig, with many human uses
Swim bladder, in bony fishes, an internal organ that helps to control buoyancy (homologous to lungs)
Urinary bladder (Chinese medicine)
Technology
Air bladder effect, a special effect used in filmmaking
Fuel bladder, which stores fuels or other industrial liquids
Hydration system, sometimes known as a bladder
Pneumatic bladder, an old technology with many industrial applications
Waterskin, a traditional container for transporting water
Geography
Bladder Lake, a lake in Minnesota
See also
|
https://en.wikipedia.org/wiki/Biomedical%20engineering
|
Biomedical engineering (BME) or medical engineering is the application of engineering principles and design concepts to medicine and biology for healthcare purposes (e.g., diagnostic or therapeutic). BME also integrates the logical sciences to advance health care treatment, including diagnosis, monitoring, and therapy. Also included under the scope of a biomedical engineer is the management of current medical equipment in hospitals while adhering to relevant industry standards. This involves procurement, routine testing, preventive maintenance, and making equipment recommendations, a role also known as a Biomedical Equipment Technician (BMET) or as clinical engineering.
Biomedical engineering has recently emerged as its own field of study, as compared to many other engineering fields. Such an evolution is common as a new field transitions from being an interdisciplinary specialization among already-established fields to being considered a field in itself. Much of the work in biomedical engineering consists of research and development, spanning a broad array of subfields (see below). Prominent biomedical engineering applications include the development of biocompatible prostheses, various diagnostic and therapeutic medical devices ranging from clinical equipment to micro-implants, common imaging equipment such as MRIs and EKG/ECGs, regenerative tissue growth, pharmaceutical drugs and therapeutic biologicals.
Subfields and related fields
Bioinformatics
Bioinformatics is an interdi
|
https://en.wikipedia.org/wiki/Bohr%20model
|
In atomic physics, the Bohr model or Rutherford–Bohr model of the atom, presented by Niels Bohr and Ernest Rutherford in 1913, consists of a small, dense nucleus surrounded by orbiting electrons. It is analogous to the structure of the Solar System, but with attraction provided by electrostatic force rather than gravity, and with the electron energies quantized (assuming only discrete values).
In the history of atomic physics, it followed, and ultimately replaced, several earlier models, including Joseph Larmor's Solar System model (1897), Jean Perrin's model (1901), the cubical model (1902), Hantaro Nagaoka's Saturnian model (1904), the plum pudding model (1904), Arthur Haas's quantum model (1910), the Rutherford model (1911), and John William Nicholson's nuclear quantum model (1912). The improvement over the 1911 Rutherford model mainly concerned the new quantum mechanical interpretation introduced by Haas and Nicholson, but forsaking any attempt to explain radiation according to classical physics.
The model's key success lay in explaining the Rydberg formula for hydrogen's spectral emission lines. While the Rydberg formula had been known experimentally, it did not gain a theoretical basis until the Bohr model was introduced. Not only did the Bohr model explain the reasons for the structure of the Rydberg formula, it also provided a justification for the fundamental physical constants that make up the formula's empirical results.
The Bohr model is a relatively primitive
|
https://en.wikipedia.org/wiki/Naive%20set%20theory
|
Naive set theory is any of several theories of sets used in the discussion of the foundations of mathematics.
Unlike axiomatic set theories, which are defined using formal logic, naive set theory is defined informally, in natural language. It describes the aspects of mathematical sets familiar in discrete mathematics (for example Venn diagrams and symbolic reasoning about their Boolean algebra), and suffices for the everyday use of set theory concepts in contemporary mathematics.
Sets are of great importance in mathematics; in modern formal treatments, most mathematical objects (numbers, relations, functions, etc.) are defined in terms of sets. Naive set theory suffices for many purposes, while also serving as a stepping stone towards more formal treatments.
Method
A naive theory in the sense of "naive set theory" is a non-formalized theory, that is, a theory that uses natural language to describe sets and operations on sets. The words and, or, if ... then, not, for some, for every are treated as in ordinary mathematics. As a matter of convenience, use of naive set theory and its formalism prevails even in higher mathematics – including in more formal settings of set theory itself.
The first development of set theory was a naive set theory. It was created at the end of the 19th century by Georg Cantor as part of his study of infinite sets and developed by Gottlob Frege in his Grundgesetze der Arithmetik.
Naive set theory may refer to several very distinct notions. It may
|
https://en.wikipedia.org/wiki/B%C3%A9zout%27s%20identity
|
In mathematics, Bézout's identity (also called Bézout's lemma), named after Étienne Bézout who proved it for polynomials, is the following theorem: if a and b are integers with greatest common divisor d, then there exist integers x and y such that ax + by = d; moreover, the integers of the form az + bt are exactly the multiples of d.
Here the greatest common divisor of a and b is taken to be d. The integers x and y are called Bézout coefficients for (a, b); they are not unique. A pair of Bézout coefficients can be computed by the extended Euclidean algorithm, and this pair is, in the case of integers, one of the two pairs such that |x| ≤ |b/(2d)| and |y| ≤ |a/(2d)|; equality occurs only if one of a and b is a multiple of the other.
As an example, the greatest common divisor of 15 and 69 is 3, and 3 can be written as a combination of 15 and 69 as 3 = 15 × (−9) + 69 × 2, with Bézout coefficients −9 and 2.
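A short Python sketch of the extended Euclidean algorithm (illustrative, not part of the article) reproduces these coefficients:

def extended_gcd(a, b):
    """Return (g, x, y) with a*x + b*y = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    return g, y, x - (a // b) * y

print(extended_gcd(15, 69))   # (3, -9, 2): 15*(-9) + 69*2 = 3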
Many other theorems in elementary number theory, such as Euclid's lemma or the Chinese remainder theorem, result from Bézout's identity.
A Bézout domain is an integral domain in which Bézout's identity holds. In particular, Bézout's identity holds in principal ideal domains. Every theorem that results from Bézout's identity is thus true in all principal ideal domains.
Structure of solutions
If a and b are not both zero and one pair of Bézout coefficients (x, y) has been computed (for example, using the extended Euclidean algorithm), all pairs can be represented in the form
(x + k b/d, y − k a/d),
where k is an arbitrary integer, d is the greatest common divisor of a and b, and the fractions simplify to integers.
If a and b are both nonzero, then exactly two of these pairs of Bézout coefficients satisfy
|x| ≤ |b/(2d)| and |y| ≤ |a/(2d)|,
and equality may occur only if one of a and b divides the other.
|
https://en.wikipedia.org/wiki/Bernoulli%20number
|
In mathematics, the Bernoulli numbers are a sequence of rational numbers which occur frequently in analysis. The Bernoulli numbers appear in (and can be defined by) the Taylor series expansions of the tangent and hyperbolic tangent functions, in Faulhaber's formula for the sum of m-th powers of the first n positive integers, in the Euler–Maclaurin formula, and in expressions for certain values of the Riemann zeta function.
The values of the first 20 Bernoulli numbers are given in the adjacent table. Two conventions are used in the literature, denoted here by B_n^- and B_n^+; they differ only for n = 1, where B_1^- = −1/2 and B_1^+ = +1/2. For every odd n > 1, B_n = 0. For every even n > 0, B_n is negative if n is divisible by 4 and positive otherwise. The Bernoulli numbers are special values of the Bernoulli polynomials B_n(x), with B_n^- = B_n(0) and B_n^+ = B_n(1).
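As an illustrative sketch (not from the article), the B_n^- values can be generated with exact rational arithmetic from the standard recurrence sum over k from 0 to m of C(m + 1, k) B_k = 0 for m ≥ 1, with B_0 = 1:

from fractions import Fraction
from math import comb

def bernoulli(n):
    """First n+1 Bernoulli numbers B_0..B_n (B^- convention, so B_1 = -1/2)."""
    B = []
    for m in range(n + 1):
        if m == 0:
            B.append(Fraction(1))
        else:
            # sum_{k=0}^{m} C(m+1, k) B_k = 0, and C(m+1, m) = m + 1, so solve for B_m
            s = sum(Fraction(comb(m + 1, k)) * B[k] for k in range(m))
            B.append(-s / (m + 1))
    return B

print(bernoulli(8))   # 1, -1/2, 1/6, 0, -1/30, 0, 1/42, 0, -1/30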
The Bernoulli numbers were discovered around the same time by the Swiss mathematician Jacob Bernoulli, after whom they are named, and independently by Japanese mathematician Seki Takakazu. Seki's discovery was posthumously published in 1712 in his work Katsuyō Sanpō; Bernoulli's, also posthumously, in his Ars Conjectandi of 1713. Ada Lovelace's note G on the Analytical Engine from 1842 describes an algorithm for generating Bernoulli numbers with Babbage's machine. As a result, the Bernoulli numbers have the distinction of being the subject of the first published complex computer program.
Notation
The superscript ± used in this article distinguishes the two sign conventions for Bernoulli numbers. Only
|
https://en.wikipedia.org/wiki/Balance
|
Balance may refer to:
Common meanings
Balance (ability) in biomechanics
Balance (accounting)
Balance or weighing scale
Balance, as in equality (mathematics) or equilibrium
Arts and entertainment
Film
Balance (1983 film), a Bulgarian film
Balance (1989 film), a short animated film
La Balance, a 1982 French film
Television
Balance: Television for Living Well, a Canadian television talk show
"The Balance" (Roswell), an episode of the television series Roswell
"The Balance", an episode of the animated series Justice League
Music
Performers
Balance (band), a 1980s pop-rock group
Albums
Balance (Akrobatik album), 2003
Balance (Kim-Lian album), 2004
Balance (Leo Kottke album), 1978
Balance (Joe Morris album), 2014
Balance (Swollen Members album), 1999
Balance (Ty Tabor album), 2008
Balance (Van Halen album), 1995
Balance (Armin van Buuren album), 2019
The Balance, a 2019 album by Catfish and the Bottlemen
Songs
"Balance", a song by Axium from The Story Thus Far
"Balance", a song by Band-Maid from Unleash
"Balance", a song by Ed Sheeran from - (Deluxe vinyl edition)
"The Balance", a Moody Blues song on the 1970 album A Question of Balance
Other
Balance (game design), the concept and the practice of tuning relationships between a game's component systems
Balance (installation), a 2013 glazed ceramic installation by Tim Ryan
Balance (puzzle), a mathematical puzzle
"Balance", a poem by Patti Smith from the book kodak
Government and law
BALANCE Act
|
https://en.wikipedia.org/wiki/Combinatorics
|
Combinatorics is an area of mathematics primarily concerned with counting, both as a means and an end in obtaining results, and certain properties of finite structures. It is closely related to many other areas of mathematics and has many applications ranging from logic to statistical physics and from evolutionary biology to computer science.
Combinatorics is well known for the breadth of the problems it tackles. Combinatorial problems arise in many areas of pure mathematics, notably in algebra, probability theory, topology, and geometry, as well as in its many application areas. Many combinatorial questions have historically been considered in isolation, giving an ad hoc solution to a problem arising in some mathematical context. In the later twentieth century, however, powerful and general theoretical methods were developed, making combinatorics into an independent branch of mathematics in its own right. One of the oldest and most accessible parts of combinatorics is graph theory, which by itself has numerous natural connections to other areas. Combinatorics is used frequently in computer science to obtain formulas and estimates in the analysis of algorithms.
A mathematician who studies combinatorics is called a combinatorialist.
Definition
The full scope of combinatorics is not universally agreed upon. According to H.J. Ryser, a definition of the subject is difficult because it crosses so many mathematical subdivisions. Insofar as an area can be described by the types of problems it a
|
https://en.wikipedia.org/wiki/Chemistry
|
Chemistry is the scientific study of the properties and behavior of matter. It is a physical science under natural sciences that covers the elements that make up matter to the compounds made of atoms, molecules and ions: their composition, structure, properties, behavior and the changes they undergo during a reaction with other substances. Chemistry also addresses the nature of chemical bonds in chemical compounds.
In the scope of its subject, chemistry occupies an intermediate position between physics and biology. It is sometimes called the central science because it provides a foundation for understanding both basic and applied scientific disciplines at a fundamental level. For example, chemistry explains aspects of plant growth (botany), the formation of igneous rocks (geology), how atmospheric ozone is formed and how environmental pollutants are degraded (ecology), the properties of the soil on the moon (cosmochemistry), how medications work (pharmacology), and how to collect DNA evidence at a crime scene (forensics).
Chemistry is a study that has existed since ancient times. Over this time frame, it has evolved, and now chemistry encompasses various areas of specialisation, or subdisciplines, that continue to increase in number and interrelate to create further interdisciplinary fields of study. The applications of various fields of chemistry are used frequently for economic purposes in the chemical industry.
Etymology
The word chemistry comes from a modification dur
|
https://en.wikipedia.org/wiki/Cytoplasm
|
In cell biology, the cytoplasm describes all material within a eukaryotic cell, enclosed by the cell membrane, except for the cell nucleus. The material inside the nucleus and contained within the nuclear membrane is termed the nucleoplasm. The main components of the cytoplasm are the cytosol (a gel-like substance), the organelles (the cell's internal sub-structures), and various cytoplasmic inclusions. The cytoplasm is about 80% water and is usually colorless.
The submicroscopic ground cell substance, or cytoplasmic matrix, that remains after the exclusion of the cell organelles and particles is groundplasm. It is the hyaloplasm of light microscopy, a highly complex, polyphasic system in which all resolvable cytoplasmic elements are suspended, including the larger organelles such as the ribosomes, mitochondria, plant plastids, lipid droplets, and vacuoles.
Many cellular activities take place within the cytoplasm, such as many metabolic pathways, including glycolysis, photosynthesis, and processes such as cell division. The concentrated inner area is called the endoplasm and the outer layer is called the cell cortex, or ectoplasm.
Movement of calcium ions in and out of the cytoplasm is a signaling activity for metabolic processes.
In plants, movement of the cytoplasm around vacuoles is known as cytoplasmic streaming.
History
The term was introduced by Rudolf von Kölliker in 1863, originally as a synonym for protoplasm, but later it has come to mean the cell substance
|
https://en.wikipedia.org/wiki/Cipher
|
In cryptography, a cipher (or cypher) is an algorithm for performing encryption or decryption—a series of well-defined steps that can be followed as a procedure. An alternative, less common term is encipherment. To encipher or encode is to convert information into cipher or code. In common parlance, "cipher" is synonymous with "code", as they are both a set of steps that encrypt a message; however, the concepts are distinct in cryptography, especially classical cryptography.
Codes generally substitute different length strings of characters in the output, while ciphers generally substitute the same number of characters as are input. A code maps one meaning with another. Words and phrases can be coded as letters or numbers. Codes typically have direct meaning from input to key. Codes primarily function to save time. Ciphers are algorithmic. The given input must follow the cipher's process to be solved. Ciphers are commonly used to encrypt written information.
Codes operated by substituting according to a large codebook which linked a random string of characters or numbers to a word or phrase. For example, "UQJHSE" could be the code for "Proceed to the following coordinates." When using a cipher the original information is known as plaintext, and the encrypted form as ciphertext. The ciphertext message contains all the information of the plaintext message, but is not in a format readable by a human or computer without the proper mechanism to decrypt it.
The operation of a ci
|
https://en.wikipedia.org/wiki/Lists%20of%20universities%20and%20colleges
|
This is a list of lists of universities and colleges.
Subject of study
Aerospace engineering
Agriculture
Art schools
Business
Chiropractic
Engineering
Forestry
Law
Maritime studies
Medicine
Music
Nanotechnology
Osteopathy
Pharmaceuticals
Social Work
Institution type
Community colleges
For-profit universities and colleges
Land-grant universities
Liberal arts universities
National universities
Postgraduate-only institutions
Private universities
Public universities
Research universities
Technical universities
Sea-grant universities
Space-grant universities
State universities and colleges
Unaccredited universities
Location
Lists of universities and colleges by country
List of largest universities
Religious affiliation
Assemblies of God
Baptist colleges and universities in the United States
Catholic universities
Ecclesiastical universities
Benedictine colleges and universities
Jesuit institutions
Opus Dei universities
Pontifical universities
International Council of Universities of Saint Thomas Aquinas
International Federation of Catholic Universities
Christian churches and churches of Christ
Churches of Christ
Church of the Nazarene
Islamic seminaries
Lutheran colleges and universities
International Association of Methodist-related Schools, Colleges, and Universities
Muslim educational institutions
Association of Presbyterian Colleges and Universities
Extremities
Endowment
Largest universities by enrollment
Oldest madrasa
|
https://en.wikipedia.org/wiki/Common%20descent
|
Common descent is a concept in evolutionary biology applicable when one species is the ancestor of two or more species later in time. According to modern evolutionary biology, all living beings could be descendants of a unique ancestor commonly referred to as the last universal common ancestor (LUCA) of all life on Earth.
Common descent is an effect of speciation, in which multiple species derive from a single ancestral population. The more recent the ancestral population two species have in common, the more closely are they related. The most recent common ancestor of all currently living organisms is the last universal ancestor, which lived about 3.9 billion years ago. The two earliest pieces of evidence for life on Earth are graphite found to be biogenic in 3.7 billion-year-old metasedimentary rocks discovered in western Greenland and microbial mat fossils found in 3.48 billion-year-old sandstone discovered in Western Australia. All currently living organisms on Earth share a common genetic heritage, though the suggestion of substantial horizontal gene transfer during early evolution has led to questions about the monophyly (single ancestry) of life. 6,331 groups of genes common to all living animals have been identified; these may have arisen from a single common ancestor that lived 650 million years ago in the Precambrian.
Universal common descent through an evolutionary process was first proposed by the British naturalist Charles Darwin in the concluding sentence of hi
|
https://en.wikipedia.org/wiki/Cone%20%28disambiguation%29
|
A cone is a basic geometrical shape.
Cone may also refer to:
Mathematics
Cone (category theory)
Cone (formal languages)
Cone (graph theory), a graph in which one vertex is adjacent to all others
Cone (linear algebra), a subset of vector space
Mapping cone (homological algebra)
Cone (topology)
Conic bundle, a concept in algebraic geometry
Conical surface, generated by a moving line with one fixed point
Projective cone, the union of all lines that intersect a projective subspace and an arbitrary subset of some other disjoint subspace
Computing
Cone tracing, a derivative of the ray-tracing algorithm that replaces rays, which have no thickness, with cones
Second-order cone programming, a library of routines that implements a predictor corrector variant of the semidefinite programming algorithm
Astronomy
Cone Nebula (also known as NGC 2264), an H II region in the constellation of Monoceros
Ionization cone, cones of material extending out from spiral galaxies
Engineering and physical science
Antenna blind cone, the volume of space that cannot be scanned by an antenna
Carbon nanocones, conical structures which are made predominantly from carbon and which have at least one dimension of the order one micrometer or smaller
Cone algorithm, a method that identifies surface particles quickly and accurately for three-dimensional clusters composed of discrete particles
Cone beam reconstruction, a method of X-ray scanning in microtomography
Cone calorimeter, a modern device used to study the fire beha
|
https://en.wikipedia.org/wiki/Combination
|
In mathematics, a combination is a selection of items from a set that has distinct members, such that the order of selection does not matter (unlike permutations). For example, given three fruits, say an apple, an orange and a pear, there are three combinations of two that can be drawn from this set: an apple and a pear; an apple and an orange; or a pear and an orange. More formally, a k-combination of a set S is a subset of k distinct elements of S. So, two combinations are identical if and only if each combination has the same members. (The arrangement of the members in each set does not matter.) If the set has n elements, the number of k-combinations, denoted by C(n, k) or nCk, is equal to the binomial coefficient
C(n, k) = n (n − 1) ⋯ (n − k + 1) / (k (k − 1) ⋯ 1),
which can be written using factorials as n! / (k! (n − k)!) whenever k ≤ n, and which is zero when k > n. This formula can be derived from the fact that each k-combination of a set S of n members has k! permutations, so P(n, k) = C(n, k) k!, or C(n, k) = P(n, k) / k!. The set of all k-combinations of a set S is often denoted by C(S, k).
A k-combination is a combination of n things taken k at a time without repetition. To refer to combinations in which repetition is allowed, the terms k-combination with repetition, k-multiset, or k-selection, are often used. If, in the above example, it were possible to have two of any one kind of fruit there would be 3 more 2-selections: one with two apples, one with two oranges, and one with two pears.
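Python's standard library enumerates both kinds directly (an illustrative aside, not from the article):

from itertools import combinations, combinations_with_replacement

fruits = ["apple", "orange", "pear"]
print(list(combinations(fruits, 2)))                   # the 3 plain 2-combinations
print(list(combinations_with_replacement(fruits, 2)))  # 6 = 3 + 3 once repetition is allowed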
Although the set of three fruits was small enough to write a complete list of combinations, this becomes impractical
|
https://en.wikipedia.org/wiki/Computer%20science
|
Computer science is the study of computation, information, and automation. Computer science spans theoretical disciplines (such as algorithms, theory of computation, and information theory) to applied disciplines (including the design and implementation of hardware and software). Though more often considered an academic discipline, computer science is closely related to computer programming.
Algorithms and data structures are central to computer science.
The theory of computation concerns abstract models of computation and general classes of problems that can be solved using them. The fields of cryptography and computer security involve studying the means for secure communication and for preventing security vulnerabilities. Computer graphics and computational geometry address the generation of images. Programming language theory considers different ways to describe computational processes, and database theory concerns the management of repositories of data. Human–computer interaction investigates the interfaces through which humans and computers interact, and software engineering focuses on the design and principles behind developing software. Areas such as operating systems, networks and embedded systems investigate the principles and design behind complex systems. Computer architecture describes the construction of computer components and computer-operated equipment. Artificial intelligence and machine learning aim to synthesize goal-orientated processes such as problem-s
|
https://en.wikipedia.org/wiki/Condensed%20matter%20physics
|
Condensed matter physics is the field of physics that deals with the macroscopic and microscopic physical properties of matter, especially the solid and liquid phases which arise from electromagnetic forces between atoms. More generally, the subject deals with condensed phases of matter: systems of many constituents with strong interactions among them. More exotic condensed phases include the superconducting phase exhibited by certain materials at extremely low cryogenic temperature, the ferromagnetic and antiferromagnetic phases of spins on crystal lattices of atoms, and the Bose–Einstein condensate found in ultracold atomic systems. Condensed matter physicists seek to understand the behavior of these phases by experiments to measure various material properties, and by applying the physical laws of quantum mechanics, electromagnetism, statistical mechanics, and other physics theories to develop mathematical models.
The diversity of systems and phenomena available for study makes condensed matter physics the most active field of contemporary physics: one third of all American physicists self-identify as condensed matter physicists, and the Division of Condensed Matter Physics is the largest division at the American Physical Society. The field overlaps with chemistry, materials science, engineering and nanotechnology, and relates closely to atomic physics and biophysics. The theoretical physics of condensed matter shares important concepts and methods with that of particle p
|
https://en.wikipedia.org/wiki/Computational%20linguistics
|
Computational linguistics is an interdisciplinary field concerned with the computational modelling of natural language, as well as the study of appropriate computational approaches to linguistic questions. In general, computational linguistics draws upon linguistics, computer science, artificial intelligence, mathematics, logic, philosophy, cognitive science, cognitive psychology, psycholinguistics, anthropology and neuroscience, among others.
Since the 2020s, computational linguistics has become a near-synonym of either natural language processing or language technology, with deep learning approaches, such as large language models, outperforming the specific approaches previously used in the field.
Origins
The field overlapped with artificial intelligence since the efforts in the United States in the 1950s to use computers to automatically translate texts from foreign languages, particularly Russian scientific journals, into English. Since rule-based approaches were able to make arithmetic (systematic) calculations much faster and more accurately than humans, it was expected that lexicon, morphology, syntax and semantics could be learned using explicit rules as well. After the failure of rule-based approaches, David Hays coined the term in order to distinguish the field from AI and co-founded both the Association for Computational Linguistics (ACL) and the International Committee on Computational Linguistics (ICCL) in the 1970s and 1980s. What started as an effort to transl
|
https://en.wikipedia.org/wiki/Cognitive%20science
|
Cognitive science is the interdisciplinary, scientific study of the mind and its processes with input from linguistics, psychology, neuroscience, philosophy, computer science/artificial intelligence, and anthropology. It examines the nature, the tasks, and the functions of cognition (in a broad sense). Cognitive scientists study intelligence and behavior, with a focus on how nervous systems represent, process, and transform information. Mental faculties of concern to cognitive scientists include language, perception, memory, attention, reasoning, and emotion; to understand these faculties, cognitive scientists borrow from fields such as linguistics, psychology, artificial intelligence, philosophy, neuroscience, and anthropology. The typical analysis of cognitive science spans many levels of organization, from learning and decision to logic and planning; from neural circuitry to modular brain organization. One of the fundamental concepts of cognitive science is that "thinking can best be understood in terms of representational structures in the mind and computational procedures that operate on those structures."
The goal of cognitive science is to understand and formulate the principles of intelligence with the hope that this will lead to a better comprehension of the mind and of learning.
The cognitive sciences began as an intellectual movement in the 1950s often referred to as the cognitive revolution.
History
The cognitive sciences began as an intellectual movement in the
|
https://en.wikipedia.org/wiki/Chemist
|
A chemist (from Greek chēm(ía) alchemy; replacing chymist from Medieval Latin alchemist) is a scientist trained in the study of chemistry. Chemists study the composition of matter and its properties. Chemists carefully describe the properties they study in terms of quantities, with detail on the level of molecules and their component atoms. Chemists carefully measure substance proportions, chemical reaction rates, and other chemical properties. In Commonwealth English, pharmacists are often called chemists.
Chemists use their knowledge to learn the composition and properties of unfamiliar substances, as well as to reproduce and synthesize large quantities of useful naturally occurring substances and create new artificial substances and useful processes. Chemists may specialize in any number of subdisciplines of chemistry. Materials scientists and metallurgists share much of the same education and skills with chemists. The work of chemists is often related to the work of chemical engineers, who are primarily concerned with the proper design, construction and evaluation of the most cost-effective large-scale chemical plants and work closely with industrial chemists on the development of new processes and methods for the commercial-scale manufacture of chemicals and related products.
History of chemistry
The roots of chemistry can be traced to the phenomenon of burning. Fire was a mystical force that transformed one substance into another and thus was of primary interest to m
|
https://en.wikipedia.org/wiki/Cracking
|
Cracking may refer to:
Cracking, the formation of a fracture or partial fracture in a solid material studied as fracture mechanics
Performing a sternotomy
Fluid catalytic cracking, a catalytic process widely used in oil refineries for cracking large hydrocarbon molecules into smaller molecules
Cracking (chemistry), the decomposition of complex organic molecules into smaller ones
Cracking joints, the practice of manipulating one's bone joints to make a sharp sound
Cracking codes, see cryptanalysis
Whip cracking
Safe cracking
Crackin, band featuring Lester Abrams
Packing and cracking, a method of creating voting districts to give a political party an advantage
In computing:
Another name for security hacking; the practice of defeating computer security.
Password cracking, the process of discovering the plaintext of an encrypted computer password.
Software cracking, the defeating of software copy protection.
See also
Crack (disambiguation)
Cracker (disambiguation)
Cracklings (solid material remaining after rendering fat)
Cracker (pejorative)
|
https://en.wikipedia.org/wiki/Continuum%20hypothesis
|
In mathematics, specifically set theory, the continuum hypothesis (abbreviated CH) is a hypothesis about the possible sizes of infinite sets. It states that there is no set whose cardinality is strictly between that of the integers and the real numbers,
or equivalently, that any subset of the real numbers is either finite, countably infinite, or has the same cardinality as the real numbers.
In Zermelo–Fraenkel set theory with the axiom of choice (ZFC), this is equivalent to the following equation in aleph numbers: 2^ℵ₀ = ℵ₁, or even shorter with beth numbers: ℶ₁ = ℵ₁.
The continuum hypothesis was advanced by Georg Cantor in 1878, and establishing its truth or falsehood is the first of Hilbert's 23 problems presented in 1900. The answer to this problem is independent of ZFC, so that either the continuum hypothesis or its negation can be added as an axiom to ZFC set theory, with the resulting theory being consistent if and only if ZFC is consistent. This independence was proved in 1963 by Paul Cohen, complementing earlier work by Kurt Gödel in 1940.
The name of the hypothesis comes from the term the continuum for the real numbers.
History
Cantor believed the continuum hypothesis to be true and for many years tried in vain to prove it. It became the first on David Hilbert's list of important open questions that was presented at the International Congress of Mathematicians in the year 1900 in Paris. Axiomatic set theory was at that point not yet formulated.
Kurt Gödel proved in 1940 that the negation of the continuum hypothesis, i.e., the existence of a set with intermediate cardinality, could not be proved in standard set theory. The second half of the independence of the continuum hypothesis – i.e.
|
https://en.wikipedia.org/wiki/Key%20size
|
In cryptography, key size or key length refers to the number of bits in a key used by a cryptographic algorithm (such as a cipher).
Key length defines the upper-bound on an algorithm's security (i.e. a logarithmic measure of the fastest known attack against an algorithm), because the security of all algorithms can be violated by brute-force attacks. Ideally, the lower-bound on an algorithm's security is by design equal to the key length (that is, the algorithm's design does not detract from the degree of security inherent in the key length).
Most symmetric-key algorithms are designed to have security equal to their key length. However, after design, a new attack might be discovered. For instance, Triple DES was designed to have a 168-bit key, but an attack of complexity 2^112 is now known (i.e. Triple DES now only has 112 bits of security, and of the 168 bits in the key the attack has rendered 56 'ineffective' towards security). Nevertheless, as long as the security (understood as "the amount of effort it would take to gain access") is sufficient for a particular application, then it does not matter if key length and security coincide. This is important for asymmetric-key algorithms, because no such algorithm is known to satisfy this property; elliptic curve cryptography comes the closest with an effective security of roughly half its key length.
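As a rough illustration of these numbers (a sketch only; the 168-bit and 112-bit figures are the ones quoted above, and the halving rule for elliptic curves is the approximation mentioned), brute-force work grows as 2 raised to the number of effective security bits:

```python
def work_factor(security_bits: int) -> int:
    """Expected number of trials for a brute-force attack of the given strength."""
    return 2 ** security_bits

# Triple DES: nominal 168-bit key, but the best known attack costs about 2^112,
# so the attack is roughly 2^56 times cheaper than exhaustive key search.
print(work_factor(168) // work_factor(112))   # 2**56

# Elliptic-curve keys give roughly half their length in effective security,
# e.g. a 256-bit curve offers on the order of 128 bits.
print(work_factor(256 // 2))
```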
Significance
Keys are used to control the operation of a cipher so that only the correct key can convert encrypted text (cipherte
|
https://en.wikipedia.org/wiki/Complex%20analysis
|
Complex analysis, traditionally known as the theory of functions of a complex variable, is the branch of mathematical analysis that investigates functions of complex numbers. It is helpful in many branches of mathematics, including algebraic geometry, number theory, analytic combinatorics, applied mathematics; as well as in physics, including the branches of hydrodynamics, thermodynamics, quantum mechanics, and twistor theory. By extension, use of complex analysis also has applications in engineering fields such as nuclear, aerospace, mechanical and electrical engineering.
As a differentiable function of a complex variable is equal to its Taylor series (that is, it is analytic), complex analysis is particularly concerned with analytic functions of a complex variable (that is, holomorphic functions).
History
Complex analysis is one of the classical branches in mathematics, with roots in the 18th century and just prior. Important mathematicians associated with complex numbers include Euler, Gauss, Riemann, Cauchy, Gösta Mittag-Leffler, Weierstrass, and many more in the 20th century. Complex analysis, in particular the theory of conformal mappings, has many physical applications and is also used throughout analytic number theory. In modern times, it has become very popular through a new boost from complex dynamics and the pictures of fractals produced by iterating holomorphic functions. Another important application of complex analysis is in string theory which examines con
|
https://en.wikipedia.org/wiki/Civil%20engineering
|
Civil engineering is a professional engineering discipline that deals with the design, construction, and maintenance of the physical and naturally built environment, including public works such as roads, bridges, canals, dams, airports, sewage systems, pipelines, structural components of buildings, and railways.
Civil engineering is traditionally broken into a number of sub-disciplines. It is considered the second-oldest engineering discipline after military engineering, and it is defined to distinguish non-military engineering from military engineering. Civil engineering can take place in the public sector from municipal public works departments through to federal government agencies, and in the private sector from locally based firms to global Fortune 500 companies.
History
Civil engineering as a discipline
Civil engineering is the application of physical and scientific principles for solving the problems of society, and its history is intricately linked to advances in the understanding of physics and mathematics throughout history. Because civil engineering is a broad profession, including several specialized sub-disciplines, its history is linked to knowledge of structures, materials science, geography, geology, soils, hydrology, environmental science, mechanics, project management, and other fields.
Throughout ancient and medieval history most architectural design and construction was carried out by artisans, such as stonemasons and carpenters, rising to the role of
|
https://en.wikipedia.org/wiki/Complex%20number
|
In mathematics, a complex number is an element of a number system that extends the real numbers with a specific element denoted i, called the imaginary unit and satisfying the equation i² = −1; every complex number can be expressed in the form a + bi, where a and b are real numbers. Because no real number satisfies the above equation, i was called an imaginary number by René Descartes. For the complex number a + bi, a is called the real part, and b is called the imaginary part. The set of complex numbers is denoted by either of the symbols ℂ or C. Despite the historical nomenclature "imaginary", complex numbers are regarded in the mathematical sciences as just as "real" as the real numbers and are fundamental in many aspects of the scientific description of the natural world.
Complex numbers allow solutions to all polynomial equations, even those that have no solutions in real numbers. More precisely, the fundamental theorem of algebra asserts that every non-constant polynomial equation with real or complex coefficients has a solution which is a complex number. For example, the equation (x + 1)² = −9
has no real solution, since the square of a real number cannot be negative, but has the two nonreal complex solutions −1 + 3i and −1 − 3i.
Addition, subtraction and multiplication of complex numbers can be naturally defined by using the rule i² = −1 combined with the associative, commutative, and distributive laws. Every nonzero complex number has a multiplicative inverse. This makes the complex numbers a field that has the real numbers as a subfield. The comp
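Python's built-in complex type implements exactly these rules (i is written 1j), so the statements above can be checked directly; a small illustrative sketch:

```python
# The imaginary unit satisfies i**2 == -1.
i = 1j
print(i ** 2)                  # (-1+0j)

# The two nonreal solutions of (x + 1)**2 = -9 mentioned above:
for x in (-1 + 3j, -1 - 3j):
    print((x + 1) ** 2)        # (-9+0j) in both cases

# Every nonzero complex number has a multiplicative inverse (up to rounding).
z = 3 + 4j
print(z * (1 / z))             # approximately (1+0j)
```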
|
https://en.wikipedia.org/wiki/Copenhagen%20interpretation
|
The Copenhagen interpretation is a collection of views about the meaning of quantum mechanics, stemming from the work of Niels Bohr, Werner Heisenberg, Max Born, and others. The term "Copenhagen interpretation" was apparently coined by Heisenberg during the 1950s to refer to ideas developed in the 1925–1927 period, glossing over his disagreements with Bohr. Consequently, there is no definitive historical statement of what the interpretation entails. Features common across versions of the Copenhagen interpretation include the idea that quantum mechanics is intrinsically indeterministic, with probabilities calculated using the Born rule, and the principle of complementarity, which states that objects have certain pairs of complementary properties that cannot all be observed or measured simultaneously. Moreover, the act of "observing" or "measuring" an object is irreversible, and no truth can be attributed to an object except according to the results of its measurement (that is, the Copenhagen interpretation rejects counterfactual definiteness). Copenhagen-type interpretations hold that quantum descriptions are objective, in that they are independent of physicists' personal beliefs and other arbitrary mental factors.
Over the years, there have been many objections to aspects of Copenhagen-type interpretations, including the discontinuous and stochastic nature of the "observation" or "measurement" process, the apparent subjectivity of requiring an observer, the difficulty of def
|
https://en.wikipedia.org/wiki/Category%20theory
|
Category theory is a general theory of mathematical structures and their relations that was introduced by Samuel Eilenberg and Saunders Mac Lane in the middle of the 20th century in their foundational work on algebraic topology. Category theory is used in almost all areas of mathematics. In particular, numerous constructions of new mathematical objects from previous ones that appear similarly in several contexts are conveniently expressed and unified in terms of categories. Examples include quotient spaces, direct products, completion, and duality.
Many areas of computer science also rely on category theory, such as functional programming and semantics.
A category is formed by two sorts of objects: the objects of the category, and the morphisms, which relate two objects called the source and the target of the morphism. One often says that a morphism is an arrow that maps its source to its target. Morphisms can be composed if the target of the first morphism equals the source of the second one, and morphism composition has similar properties as function composition (associativity and existence of identity morphisms). Morphisms are often some sort of function, but this is not always the case. For example, a monoid may be viewed as a category with a single object, whose morphisms are the elements of the monoid.
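As a minimal illustration of these axioms (not from the article; the names compose and identity are chosen for this sketch), the following Python snippet treats the monoid of strings under concatenation as a one-object category and checks associativity and the identity law:

```python
# Morphisms of the one-object category are strings; composition is concatenation.
identity = ""                     # identity morphism: the empty string

def compose(g: str, f: str) -> str:
    """Composition g after f; here simply the monoid operation."""
    return f + g

f, g, h = "ab", "cd", "ef"
assert compose(h, compose(g, f)) == compose(compose(h, g), f)   # associativity
assert compose(identity, f) == f == compose(f, identity)        # identity laws
```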
The second fundamental concept of category theory is the concept of a functor, which plays the role of a morphism between two categories and it maps objects of to
|
https://en.wikipedia.org/wiki/Cyanide
|
In chemistry, a cyanide is a chemical compound that contains a C≡N functional group. This group, known as the cyano group, consists of a carbon atom triple-bonded to a nitrogen atom.
In inorganic cyanides, the cyanide group is present as the cyanide anion CN−. This anion is extremely poisonous. Soluble salts such as sodium cyanide (NaCN) and potassium cyanide (KCN) are highly toxic. Hydrocyanic acid, also known as hydrogen cyanide, or HCN, is a highly volatile liquid that is produced on a large scale industrially. It is obtained by acidification of cyanide salts.
Organic cyanides are usually called nitriles. In nitriles, the C≡N group is linked by a single covalent bond to carbon. For example, in acetonitrile (CH3CN), the cyanide group is bonded to methyl (CH3). Although nitriles generally do not release cyanide ions, the cyanohydrins do and are thus toxic.
Bonding
The cyanide ion is isoelectronic with carbon monoxide and with molecular nitrogen N≡N. A triple bond exists between C and N. The negative charge is concentrated on carbon C.
Occurrence
In nature
Cyanides are produced by certain bacteria, fungi, and algae. Cyanide is an antifeedant in a number of plants. Cyanides are found in substantial amounts in certain seeds and fruit stones, e.g., those of bitter almonds, apricots, apples, and peaches. Chemical compounds that can release cyanide are known as cyanogenic compounds. In plants, cyanides are usually bound to sugar molecules in the form of cyanogenic glycosides and defend the p
|
https://en.wikipedia.org/wiki/Catharine%20Garmany
|
Catharine "Katy" D. Garmany (born March 6, 1946) is an astronomer with the National Optical Astronomy Observatory. She holds a B.S. (astrophysics), 1966 from Indiana University; and a M.A. (astrophysics), 1968, and Ph.D. (astronomy), 1971, from the University of Virginia. Catharine's main areas of research are massive stars, evolution and formation; astronomical education.
Garmany served as board member of the Astronomical Society of the Pacific from 1998 to 2001, and then the vice president from 2001 to 2003. She is most recognized in association with her work on star formation. In 1976, Garmany received the Annie J. Cannon Award in Astronomy from the American Astronomical Society. From 1976 to 1984, Garmany was a research associate at the Joint Institute for Laboratory Astrophysics (JILA). Since 1981, Dr. Garmany has been a professor with in the Department of Astrophysical, Planetary, and Atmospheric Sciences at the University of Colorado. Garmany is the former chair of JILA and has experience teaching undergraduate, graduate, elementary, and general public audiences through her work with the University of Virginia, University of Colorado, and the Sommers-Bausch Observatory and Fiske Planetarium, on Colorado's campus. She is also a member of the International Astronomical Union, the American Astronomical Society, the Astronomical Society of the Pacific, and the International Planetarium Society.
Research
Garmany's dissertation built upon three years of research on OB as
|
https://en.wikipedia.org/wiki/Cartan%E2%80%93Dieudonn%C3%A9%20theorem
|
In mathematics, the Cartan–Dieudonné theorem, named after Élie Cartan and Jean Dieudonné, establishes that every orthogonal transformation in an n-dimensional symmetric bilinear space can be described as the composition of at most n reflections.
The notion of a symmetric bilinear space is a generalization of Euclidean space whose structure is defined by a symmetric bilinear form (which need not be positive definite, so is not necessarily an inner product – for instance, a pseudo-Euclidean space is also a symmetric bilinear space). The orthogonal transformations in the space are those automorphisms which preserve the value of the bilinear form between every pair of vectors; in Euclidean space, this corresponds to preserving distances and angles. These orthogonal transformations form a group under composition, called the orthogonal group.
For example, in the two-dimensional Euclidean plane, every orthogonal transformation is either a reflection across a line through the origin or a rotation about the origin (which can be written as the composition of two reflections). Any arbitrary composition of such rotations and reflections can be rewritten as a composition of no more than 2 reflections. Similarly, in three-dimensional Euclidean space, every orthogonal transformation can be described as a single reflection, a rotation (2 reflections), or an improper rotation (3 reflections). In four dimensions, double rotations are added that represent 4 reflections.
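A quick numerical check of the two-dimensional case (a sketch assuming numpy; not part of the theorem's statement): a rotation by angle theta equals the composition of reflections across two lines through the origin whose directions differ by theta/2.

```python
import numpy as np

def rotation(theta: float) -> np.ndarray:
    """Rotation of the plane about the origin by angle theta."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def reflection(alpha: float) -> np.ndarray:
    """Reflection across the line through the origin at angle alpha."""
    c, s = np.cos(2 * alpha), np.sin(2 * alpha)
    return np.array([[c, s], [s, -c]])

theta = 0.7
# Reflect across the x-axis, then across the line at angle theta/2:
# the composition is the rotation by theta, i.e. two reflections suffice.
assert np.allclose(reflection(theta / 2) @ reflection(0.0), rotation(theta))
```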
Formal statement
Let
|
https://en.wikipedia.org/wiki/Mechanically%20interlocked%20molecular%20architectures
|
In chemistry, mechanically interlocked molecular architectures (MIMAs) are molecules that are connected as a consequence of their topology. This connection of molecules is analogous to keys on a keychain loop. The keys are not directly connected to the keychain loop but they cannot be separated without breaking the loop. On the molecular level, the interlocked molecules cannot be separated without the breaking of the covalent bonds that comprise the conjoined molecules; this is referred to as a mechanical bond. Examples of mechanically interlocked molecular architectures include catenanes, rotaxanes, molecular knots, and molecular Borromean rings. Work in this area was recognized with the 2016 Nobel Prize in Chemistry to Bernard L. Feringa, Jean-Pierre Sauvage, and J. Fraser Stoddart.
The synthesis of such entangled architectures has been made efficient by combining supramolecular chemistry with traditional covalent synthesis, however mechanically interlocked molecular architectures have properties that differ from both "supramolecular assemblies" and "covalently bonded molecules". The terminology "mechanical bond" has been coined to describe the connection between the components of mechanically interlocked molecular architectures. Although research into mechanically interlocked molecular architectures is primarily focused on artificial compounds, many examples have been found in biological systems including: cystine knots, cyclotides or lasso-peptides such as microcin J25 w
|
https://en.wikipedia.org/wiki/Carlos%20Chagas%20Filho
|
Carlos Chagas Filho (September 10, 1910 – February 16, 2000) was a Brazilian physician, biologist and scientist active in the field of neuroscience. He was internationally renowned for his investigations on the neural mechanisms underlying the phenomenon of electrogenesis by the electroplaques of electric fishes. He was also an important scientific leader, being one of the founders of the Biophysics Institute of the Federal University of Rio de Janeiro and was also a president for 16 years of the Vatican's Pontifical Academy of Sciences, and president of the Brazilian Academy of Sciences (1965–1967).
Life
He was the second son of Carlos Chagas (1879–1934), an eminent scientist who is credited with the discovery of Chagas disease. His oldest brother was Evandro Chagas (1905–1940), also a physician and scientist specialized, like his father, in tropical medicine.
He studied medicine from 1926 to 1931 at the Faculdade de Medicina da Universidade do Brasil (presently the Federal University of Rio de Janeiro). While a student he worked at Manguinhos, where the Instituto Oswaldo Cruz was founded by the physician Oswaldo Cruz (1872–1917) and where his father worked, as well as at a hospital in Lassance, state of Minas Gerais, where his father had discovered Chagas disease. After graduation, he went to become a director of this hospital during 1932. But what he really liked to do was biomedical research, following the example of his father and colleagues, so in the next years he wo
|
https://en.wikipedia.org/wiki/Granule
|
A granule is a large particle or grain. It can refer to:
Granule (cell biology), any of several submicroscopic structures, some with explicable origins, others noted only as cell type-specific features of unknown function
Azurophilic granule, a structure characteristic of the azurophil eukaryotic cell type
Chromaffin granule, a structure characteristic of the chromophil eukaryotic cell type.
Astrophysics and geology:
Granule (solar physics), a visible structure in the photosphere of the Sun arising from activity in the Sun's convective zone
Martian spherules, spherical granules of material found on the surface of the planet Mars
Granule (geology), a specified particle size of 2–4 millimetres (-1 to -2 on the φ scale)
Granule, in pharmaceutical terms, small particles gathered into a larger, permanent aggregate in which the original particles can still be identified
Granule (Oracle DBMS), a unit of contiguously allocated virtual memory
Granular synthesis of sound
See also
Granularity, extent to which a material or system is composed of distinguishable particles
Granular material, any conglomeration of discrete solid, macroscopic particles (grains)
Granule cell, a neuron with a small cell body
Grain (disambiguation)
Granulation (disambiguation)
Granulometry (disambiguation)
|
https://en.wikipedia.org/wiki/Atlantic%20modal%20haplotype
|
In human genetics, the Atlantic modal haplotype (AMH) or haplotype 15 is a Y chromosome haplotype of Y-STR microsatellite variations, associated with the Haplogroup R1b. It was discovered prior to many of the SNPs now used to identify subclades of R1b and references to it can be found in some of the older literature. It corresponds most closely with subclade R1b1a2a1a(1) [L11].
The AMH is the most frequently occurring haplotype amongst human males in Atlantic Europe. It is characterized by the following marker alleles:
DYS388 12
DYS390 24
DYS391 11
DYS392 13
DYS393 13
DYS394 14 (also known as DYS19)
It reaches the highest frequencies in the Iberian Peninsula, in Great Britain and Ireland. In the Iberian Peninsula it reaches 70% in Portugal as a whole, more than 90% in NW Portugal and nearly 90% in Galicia (NW Spain), while the highest value is found in Spain among the Basques.
One mutation in either direction would be AMH 1.15+. The AMH 1.15 set of haplotypes is also referred to as the Atlantic modal cluster or AMC.
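As a small illustration (marker values taken from the list above; the helper name steps_from_amh is hypothetical), one can score how many one-repeat mutational steps a sampled haplotype lies from the AMH, which is how one-step neighbours in the AMH 1.15 cluster are recognised:

```python
# Atlantic modal haplotype marker alleles, as listed above.
AMH = {"DYS388": 12, "DYS390": 24, "DYS391": 11,
       "DYS392": 13, "DYS393": 13, "DYS394": 14}

def steps_from_amh(haplotype: dict) -> int:
    """Total number of one-repeat mutational steps separating a haplotype from the AMH."""
    return sum(abs(haplotype[marker] - allele) for marker, allele in AMH.items())

sample = dict(AMH, DYS390=23)     # identical except one repeat fewer at DYS390
print(steps_from_amh(sample))     # 1 -> a one-step neighbour of the modal haplotype
```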
See also
Modal haplotype
Haplotype
Haplogroup
Haplogroup R1b
List of Y-STR markers
External links
Some Haplotype Definitions
Some Variations of DYS390
A PNAS article on AMH and variations on the British Isles
Human Y-DNA modal haplotypes
|
https://en.wikipedia.org/wiki/Mineralogical%20Society%20of%20America
|
The Mineralogical Society of America (MSA) is a scientific membership organization. MSA was founded in 1919 for the advancement of mineralogy, crystallography, geochemistry, and petrology, and promotion of their uses in other sciences, industry, and the arts. It encourages fundamental research about natural materials; supports the teaching of mineralogical concepts and procedures to students of mineralogy and related arts and sciences; and attempts to raise the scientific literacy of society with respect to issues involving mineralogy. The Society encourages the general preservation of mineral collections, displays, mineral localities, type minerals and scientific data. MSA represents the United States with regard to the science of mineralogy in any international context. The Society was incorporated in 1937 and approved as a nonprofit organization in 1959.
Publications
American Mineralogist: An International Journal of Earth and Planetary Materials, is the print journal of the Society, and it has been published continuously since 1916. It publishes the results of original scientific research in the fields of mineralogy, crystallography, geochemistry, and petrology with the goal of providing readers with the best in earth science research.
Reviews in Mineralogy is a series of multi-authored, soft-bound books containing cogent and concise reviews of the literature and advances about a subject area. Since 1974, 86 volumes have been published.
The Lattice is a quarterly new
|
https://en.wikipedia.org/wiki/Michael%20Shackleford
|
Michael Shackleford (born May 23, 1965 in Pasadena, California, United States), also known as "The Wizard of Odds" (a title taken from Donald Angelini), is an American mathematician and an actuary. He is best known for his professional analysis of the mathematics of casino games. He is also an adjunct professor of actuarial science and mathematics at the University of Nevada, Las Vegas. He became interested in the mathematics of gambling at a young age after reading John Scarne's Guide to Casino Gambling.
Shackleford discovered his affinity for mathematics when he began to study algebra in school at approximately 11 years old. He described how math had become something new and interesting.
Before launching his websites, he was a government actuary in Baltimore. It took him approximately one year to convince his wife that going into business for himself within the gambling industry was the right thing to do. He then left his job as an actuary to work on his websites.
Today Shackleford is best known for his websites, The Wizard of Odds and The Wizard of Vegas, which contain analyses and strategies for casino games. In 2002, after moving to Las Vegas, he published a paper with rankings of slot machine payout percentages that were previously considered unavailable. The Time Out Las Vegas referred to the survey as groundbreaking. This paper was referenced by Palms Casino Resort to advertise their competitive payouts.
Shackleford sold his sites on September 19, 2014 for $2.35
|
https://en.wikipedia.org/wiki/Pi-system
|
In mathematics, a π-system (or pi-system) on a set Ω is a collection P of certain subsets of Ω, such that
P is non-empty.
If A ∈ P and B ∈ P then A ∩ B ∈ P.
That is, P is a non-empty family of subsets of Ω that is closed under non-empty finite intersections.
The importance of π-systems arises from the fact that if two probability measures agree on a π-system, then they agree on the σ-algebra generated by that π-system. Moreover, if other properties, such as equality of integrals, hold for the π-system, then they hold for the generated σ-algebra as well. This is the case whenever the collection of subsets for which the property holds is a λ-system. π-systems are also useful for checking independence of random variables.
This is desirable because in practice, π-systems are often simpler to work with than σ-algebras. For example, it may be awkward to work with σ-algebras generated by infinitely many sets. So instead we may examine the union of all σ-algebras generated by finitely many sets. This forms a π-system that generates the desired σ-algebra. Another example is the collection of all intervals of the real line, along with the empty set, which is a π-system that generates the very important Borel σ-algebra of subsets of the real line.
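For finite collections the defining closure property is easy to test directly; a minimal Python sketch (the function name is illustrative):

```python
from itertools import combinations

def is_pi_system(sets: list[frozenset]) -> bool:
    """True if the family is non-empty and closed under pairwise intersection."""
    family = set(sets)
    if not family:
        return False
    return all(a & b in family for a, b in combinations(family, 2))

# Nested sets are closed under intersection, so they form a pi-system:
print(is_pi_system([frozenset({1}), frozenset({1, 2}), frozenset({1, 2, 3})]))  # True
# {1, 2} and {2, 3} intersect in {2}, which is missing, so this is not one:
print(is_pi_system([frozenset({1, 2}), frozenset({2, 3})]))                     # False
```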
Definitions
A π-system is a non-empty collection P of sets that is closed under non-empty finite intersections, which is equivalent to P containing the intersection of any two of its elements.
If every set in this π-system is a subset of Ω then it is called a π-system on Ω.
For any non-empty family
|
https://en.wikipedia.org/wiki/Malcolm%20Parry
|
C. Malcolm Parry (born c. 1938) is a Welsh architect, professor emeritus, and TV/radio broadcaster.
Early life
Parry was born in Blaenavon, left school at the age of 15 and trained as a Mining Surveyor. He intended to attend university to study Civil Engineering but was encouraged by the then Head of Architecture at Cardiff University to study architecture. Parry was later employed by the university to teach and carry out research.
Employment at Cardiff University
Parry is a former Head of the Welsh School of Architecture at Cardiff University, which he led from 1997 until his retirement. In his teachings Parry showed a particular interest in the aspect of light in architecture, evident in the Architectural Practice Group of which he was director. The research group studies the processes of architectural design, including construction law, economics, process and the strategic management of estates. The group has links with the RIBA. Research into the design process includes the production of a Design Manual for Wales.
TV appearances
Parry has presented several programmes about architecture on BBC television, for example On the House, as well as BBC Radio Wales programmes Building on the Past and Work Matter.
In 1999 he devised and progressed The House for the Future project in conjunction with the BBC, which resulted in a TV series about the construction of the building, which he presented.
In the first series of Building on the Past, Parry visited the towns of Newport,
|
https://en.wikipedia.org/wiki/Neuroimmunology
|
Neuroimmunology is a field combining neuroscience, the study of the nervous system, and immunology, the study of the immune system. Neuroimmunologists seek to better understand the interactions of these two complex systems during development, homeostasis, and response to injuries. A long-term goal of this rapidly developing research area is to further develop our understanding of the pathology of certain neurological diseases, some of which have no clear etiology. In doing so, neuroimmunology contributes to development of new pharmacological treatments for several neurological conditions. Many types of interactions involve both the nervous and immune systems including the physiological functioning of the two systems in health and disease, malfunction of either and or both systems that leads to disorders, and the physical, chemical, and environmental stressors that affect the two systems on a daily basis.
Background
Neural targets that control thermogenesis, behavior, sleep, and mood can be affected by pro-inflammatory cytokines which are released by activated macrophages and monocytes during infection. Within the central nervous system production of cytokines has been detected as a result of brain injury, during viral and bacterial infections, and in neurodegenerative processes.
From the US National Institute of Health:
"Despite the brain's status as an immune privileged site, an extensive bi-directional communication takes place between the nervous and the immune syst
|
https://en.wikipedia.org/wiki/CAUP
|
CAUP may refer to:
Tongji University College of Architecture and Urban Planning
University of Washington College of Architecture and Urban Planning
Centre for Astrophysics of the University of Porto
|
https://en.wikipedia.org/wiki/Richard%20Vaughan%20%28robotics%29
|
Richard Vaughan (born 28 July 1971) is a robotics and artificial intelligence researcher at Simon Fraser University in Canada. Since 2018, Vaughan has been on leave from SFU and working at Apple. He is the founder and director of the SFU Autonomy Laboratory. In 1998, Vaughan demonstrated the first robot to interact with animals and in 2000 co-founded the Player Project, a robot control and simulation system.
References
1971 births
Living people
Canadian roboticists
|
https://en.wikipedia.org/wiki/Marc%20Seriff
|
Marc S. Seriff (born May 5, 1948 in Austin, Texas) is best known as the CTO and co-founder of America Online, along with Jim Kimsey (CEO), Steve Case, and William von Meister (as Control Video Corporation).
Biography
Seriff received his B.S. in Mathematics and Computer Science from the University of Texas at Austin in 1971 and an M.S. in Electrical Engineering and Computer Science from Massachusetts Institute of Technology (MIT) in 1974.
In 1974 he was one of the first dozen people at Telenet Communications. He later served as an executive of several audio and data communications companies, including GTE Corporation, Venture Technology, Digital Music, Inc., and Control Video Corporation. In 1985, he co-founded Quantum Computer Services (later known as America Online), where he served as a Senior Vice President until 1996.
From August 1997 to May 1998 he was a director of InteliHome, which merged with Global Converging Technologies. From January to June 1998 he was CEO of Eos Management, LLC. He also served as a director of U.S. Online Communications.
In a 2007 interview, Seriff cited two people who influenced him at a young age: J.C.R. Licklider, one of his professors in graduate school at MIT, and Larry Roberts, whom he met and worked for at Telenet.
References
1948 births
Living people
AOL people
Businesspeople from Austin, Texas
University of Texas at Austin alumni
Massachusetts Institute of Technology School of Science alumni
American chief executives
American chief
|
https://en.wikipedia.org/wiki/%C3%89cole%20nationale%20sup%C3%A9rieure%20d%27informatique%20et%20de%20math%C3%A9matiques%20appliqu%C3%A9es%20de%20Grenoble
|
The École nationale supérieure d'informatique et de mathématiques appliquées, or Ensimag, is a prestigious French Grande École located in Grenoble, France. Ensimag is part of the Institut polytechnique de Grenoble (Grenoble INP). The school is one of the top French engineering institutions and specializes in computer science, applied mathematics and telecommunications.
In the fields of computer science and applied mathematics, Ensimag ranks first in France, as measured by the position of its students in the national admission examinations, by the companies hiring its students, and by rankings in specialized media.
Students are usually admitted to Ensimag competitively following two years of undergraduate studies in classes préparatoires aux grandes écoles. Studies at Ensimag are of three years' duration and lead to the French degree of "Diplôme National d'Ingénieur" (equivalent to a master's degree).
Grenoble, in the French Alps, has long been a pioneer in high-tech engineering education in France. The first French school of electrical engineering was created in Grenoble in 1900 (one of the first in the world, after MIT). In 1960 the eminent French mathematician Jean Kuntzmann founded Ensimag. Since that time it has become the highest-ranking French engineering school in computer science and applied mathematics.
About 250 students graduate from the school each year across its different degree programmes, and it counts more than 5,500 alumni worldwide.
Ensimag Graduate specializ
|
https://en.wikipedia.org/wiki/Selection%20coefficient
|
In population genetics, a selection coefficient, usually denoted by the letter s, is a measure of differences in relative fitness. Selection coefficients are central to the quantitative description of evolution, since fitness differences determine the change in genotype frequencies attributable to selection.
The following definition of s is commonly used. Suppose that there are two genotypes A and B in a population with relative fitnesses w_A and w_B respectively. Then, choosing genotype A as our point of reference, we have w_A = 1, and w_B = 1 + s, where s measures the fitness advantage (s > 0) or disadvantage (s < 0) of B.
For example, the lactose-tolerant allele spread from very low frequencies to high frequencies in less than 9,000 years since the advent of farming, with an estimated selection coefficient of 0.09–0.19 for a Scandinavian population. Though this selection coefficient might seem like a very small number, over evolutionary time the favored alleles accumulate in the population and become more and more common, potentially reaching fixation.
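To see how a coefficient of this size plays out over generations, here is a minimal haploid selection sketch (an assumed textbook-style model, not taken from the article), with the favored type having relative fitness 1 + s:

```python
def next_freq(p: float, s: float) -> float:
    """Frequency of the favored type in the next generation under haploid selection."""
    return p * (1 + s) / (p * (1 + s) + (1 - p))

p, s = 0.01, 0.09        # start rare, using the lower end of the estimated coefficient
generations = 0
while p < 0.99:
    p = next_freq(p, s)
    generations += 1
print(generations)        # on the order of a hundred generations to near-fixation
```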
See also
Evolutionary pressure
References
Population genetics
Evolutionary biology
|
https://en.wikipedia.org/wiki/Hong%20Kong%20Mathematical%20High%20Achievers%20Selection%20Contest
|
Hong Kong Mathematical High Achievers Selection Contest (HKMHASC, Traditional Chinese: 香港青少年數學精英選拔賽) is a yearly mathematics competition for students in or below Secondary 3 in Hong Kong. It has been jointly organized by Po Leung Kuk and the Hong Kong Association of Science and Mathematics Education since the academic year 1998–1999. More than 250 secondary schools now participate each year.
Format and Scoring
Each participating school may send at most 5 students to the contest. There is one paper, divided into Part A and Part B, with two hours given. Part A is usually made up of 14–18 easier questions, carrying one mark each; only answers are required. Part B is usually made up of 2–4 problems of varying difficulty, which may carry different numbers of marks, from 4 to 8. In Part B, workings are required and marked. No calculators or other calculation aids (e.g. printed mathematical tables) are allowed.
Awards and Further Training
Awards are given according to the total mark. The top 40 contestants are given the First Honour Award (一等獎), the next 80 the Second Honour Award (二等獎), and the Third Honour Award (三等獎) for the next 120. Moreover, the top 4 can obtain an award, namely the Champion and the 1st, 2nd and 3rd Runner-up.
Group Awards are given to schools, according to the sum of marks of the 3 contestants with highest mark. The first 4 are given the honour of Champion and 1st, 2nd and 3rd Runner-up. The honour of Top 10 (首十名最佳成績)
|
https://en.wikipedia.org/wiki/Gustav%20Tschermak%20von%20Seysenegg
|
Gustav Tschermak von Seysenegg (19 April 1836 – 24 May 1927) was an Austrian mineralogist.
Biography
He was born in Litovel, Moravia, and studied at the University of Vienna, where he obtained a teaching degree. He studied mineralogy at Heidelberg and Tübingen and obtained a PhD. He returned to Vienna as a lecturer in mineralogy and chemistry and, in 1862 was appointed second vice curator of the Imperial Mineralogical Cabinet, becoming director in 1868. He resigned as director in 1877. He was also professor of petrography at the University of Vienna. He was appointed professor in 1873 and a member of the Imperial Academy of Sciences, a member of the American Philosophical Society in 1882, and a member of the Royal Swedish Academy of Sciences in 1905. He died in 1927, aged 91.
Work
He did useful work on many minerals and on meteorites. The mineral tschermakite is named in his honour. In 1871 he established the Mineralogische Mitteilungen (Mineralogical Reports), published after 1878 as the Mineralogische und petrographische Mitteilungen (Mineralogical and Petrographical Reports). His publications include:
Die Porphyrgesteine Oesterreichs (1869)
Die mikroskopische Beschaffenheit der Meteoriten (1883)
Lehrbuch der Mineralogie (1884; 5th ed. 1897) Digital 5th edition by the University and State Library Düsseldorf
Family
He had two sons, Armin von Tschermak-Seysenegg, professor of physiology, and Erich von Tschermak-Seysenegg, a botanist, who was one of the re-discoverers
|
https://en.wikipedia.org/wiki/Hugh%20F.%20Durrant-Whyte
|
Hugh Francis Durrant-Whyte (born 6 February 1961) is a British-Australian engineer and academic. He is known for his pioneering work on probabilistic methods for robotics. The algorithms developed in his group since the early 1990s permit autonomous vehicles to deal with uncertainty and to localize themselves despite noisy sensor readings using simultaneous localization and mapping (SLAM).
Early life and education
Durrant-Whyte was born on 6 February 1961 in London, England. He was educated at Richard Hale School, then a state grammar school in Hertford, Hertfordshire. He studied engineering at the University of London, graduating with a first class Bachelor of Science (BSc) degree in 1983. He then moved to the United States where he studied systems engineering at the University of Pennsylvania: he graduated with a Master of Science in Engineering (MSE) degree in 1985 and a Doctor of Philosophy (PhD) degree in 1986. He was a Thouron Scholar in 1983.
Career and research
From 1986 to 1987, Durrant-Whyte was a BP research fellow in the Department of Engineering Science, University of Oxford, and a Fellow of St Cross College, Oxford. Then, from 1987 to 1995, he was a Fellow of Oriel College, Oxford, and a university lecturer in engineering science.
In 1995, he accepted a chair at the University of Sydney as Professor of Mechatronic Engineering. He was also director of the Australian Centre for Field Robotics (ACFR) from 1999 to 2002. From 2002 until 2010 he held the position
|
https://en.wikipedia.org/wiki/Tensor%20bundle
|
In mathematics, the tensor bundle of a manifold is the direct sum of all tensor products of the tangent bundle and the cotangent bundle of that manifold. To do calculus on the tensor bundle a connection is needed, except for the special case of the exterior derivative of antisymmetric tensors.
Definition
A tensor bundle is a fiber bundle where the fiber is a tensor product of any number of copies of the tangent space and/or cotangent space of the base space, which is a manifold. As such, the fiber is a vector space and the tensor bundle is a special kind of vector bundle.
References
See also
Vector bundles
|
https://en.wikipedia.org/wiki/Richard%20Blahut
|
Richard Blahut (born June 9, 1937), former chair of the Electrical and Computer Engineering Department at the University of Illinois at Urbana–Champaign, is best known for his work in information theory (e.g. the Blahut–Arimoto algorithm used in rate–distortion theory). He received his PhD Electrical Engineering from Cornell University in 1972.
Blahut was elected a member of the National Academy of Engineering in 1990 for pioneering work in coherent emitter signal processing and for contributions to information theory and error control codes.
Academic life
Blahut taught at Cornell from 1973 to 1994. He has taught at Princeton University, the Swiss Federal Institute of Technology, the NATO Advanced Study Institute, and has also been a Consulting Professor at the South China University of Technology. He is also the Henryk Magnuski Professor of Electrical and Computer Engineering and is affiliated with the Coordinated Science Laboratory.
Awards and recognition
IEEE Claude E. Shannon Award, 2005
IEEE Third Millennium Medal
TBP Daniel C. Drucker Eminent Faculty Award 2000
IEEE Alexander Graham Bell Medal 1998, for "contributions to error-control coding, particularly by combining algebraic coding theory and digital transform techniques."
National Academy of Engineering 1990
Japanese Society for the Propagation of Science Fellowship 1982
Fellow of Institute of Electrical and Electronics Engineers, 1981, for the development of passive surveillance systems and for contribut
|
https://en.wikipedia.org/wiki/George%20Saitoti
|
George Musengi Saitoti, E.G.H. (3 August 1945 – 10 June 2012) was a Kenyan politician, businessman and American- and British-trained economist, mathematician and development policy thinker.
As a mathematician, Saitoti served as Head of the Mathematics Department at the University of Nairobi, pioneered the founding of the African Mathematical Union and served as its vice-president from 1976 to 1979.
As an economist, Saitoti served as the Executive Chairman of the World Bank and the International Monetary Fund (IMF) in 1990–91, and as President of the African Caribbean and Pacific (ACP) Group of States in 1999–2000, at the crucial phase of re-negotiating the new development partnership agreement to replace the expired Lomé Convention between the ACP bloc and the European Union (EU). His book The Challenges of Economic and Institutional Reforms in Africa influenced practical policy directions on an array of areas during the turbulent 1980s and 1990s.
Saitoti joined politics as a nominated Member of Parliament and Minister for Finance in 1983, rising to become Kenya's longest-serving Vice-President, a proficient Minister for education, Internal Security and Provincial Administration and Foreign Affairs. Few recognise him as a "reformist", but his recommendations as the Chair of the KANU Review Committee, popularly known as the "Saitoti Committee" in 1990–91, opened KANU to internal changes and set the stage for the repeal of Section 2A and Kenya's return to pluralist democracy
|
https://en.wikipedia.org/wiki/Thomas%20Farrell%20%28United%20States%20Army%20officer%29
|
Major General Thomas Francis Farrell (3 December 1891 – 11 April 1967) was the Deputy Commanding General and Chief of Field Operations of the Manhattan Project, acting as executive officer to Major General Leslie R. Groves Jr.
Farrell graduated from Rensselaer Polytechnic Institute with a degree in civil engineering in 1912. During World War I, he served with the 1st Engineers on the Western Front, and was awarded the Distinguished Service Cross and the French Croix de guerre. After the war, he was an instructor at the Engineer School, and then at the United States Military Academy at West Point. He resigned from the Regular Army in 1926 to become Commissioner of Canals and Waterway for the State of New York from 1926 to 1930, and head of construction and engineering of the New York State Department of Public Works from 1930 until 1941.
During World War II he returned to active duty as Groves' executive officer in the Operations Branch of the Construction Division under the Office of the Quartermaster General. He went to the China-Burma-India theater to help build the Ledo Road. In January 1945, Groves chose Farrell as his second-in-command of the Manhattan Project. Farrell observed the Trinity test at the Alamogordo Bombing and Gunnery Range with J. Robert Oppenheimer. In August 1945, he went to Tinian to supervise the bombing of Hiroshima and Nagasaki. Afterwards he led teams of scientists to inspect the effects of the atomic bombs.
In 1946 he was appointed chairman of t
|
https://en.wikipedia.org/wiki/Phenomics
|
Phenomics is the systematic study of traits that make up a phenotype. It was coined by UC Berkeley and LBNL scientist Steven A. Garan. As such, it is a transdisciplinary area of research that involves biology, data sciences, engineering and other fields. Phenomics is concerned with the measurement of the phenotype, where a phenome is a set of traits (physical and biochemical traits) that can be produced by a given organism over the course of development and in response to genetic mutation and environmental influences. It is also important to remember that an organism's phenotype changes with time. The relationship between phenotype and genotype enables researchers to understand and study pleiotropy. Phenomics concepts are used in functional genomics, pharmaceutical research, metabolic engineering, agricultural research, and increasingly in phylogenetics.
Technical challenges involve improving, both qualitatively and quantitatively, the capacity to measure phenomes.
Applications
Plant sciences
In plant sciences, phenomics research occurs in both field and controlled environments. Field phenomics encompasses the measurement of phenotypes that occur in both cultivated and natural conditions, whereas controlled environment phenomics research involves the use of glass houses, growth chambers, and other systems where growth conditions can be manipulated. The University of Arizona's Field Scanner in Maricopa, Arizona is a platform developed to measure field phenotypes. Controlled
|
https://en.wikipedia.org/wiki/Bruce%20Maccabee
|
Bruce Maccabee (born May 6, 1942) is an American optical physicist formerly employed by the U.S. Navy, and a ufologist.
Biography
Maccabee received a B.S. in physics at Worcester Polytechnic Institute in Worcester, Mass., and then an M.S. and Ph.D. in physics at American University, Washington, DC. In 1972 he began his career at the Naval Ordnance Laboratory, White Oak, Silver Spring, Maryland, which later became the Naval Surface Warfare Center Dahlgren Division. Maccabee retired from government service in 2008. He has worked on optical data processing, generation of underwater sound with lasers and various aspects of the Strategic Defense Initiative (SDI) and Ballistic Missile Defense (BMD) using high power lasers.
Ufology
Maccabee has been interested in UFOs since the late 1960s when he joined the National Investigations Committee on Aerial Phenomena (NICAP) and was active in research and investigation for NICAP until its demise in 1980. He became a member of the Mutual UFO Network (MUFON) in 1975 and was subsequently appointed to the position of state director for Maryland, a position he still holds. In 1979 he was instrumental in establishing the Fund for UFO Research (FUFOR) and was the chairman for about 13 years. He presently serves on the National Board of the Fund.
His UFO research and investigations (which, he often stresses, are completely unrelated to his Navy work) have included the Kenneth Arnold sighting (June 24, 1947), the McMinnville, Oregon (Trent
|
https://en.wikipedia.org/wiki/Thermal%20physics
|
Thermal physics is the combined study of thermodynamics, statistical mechanics, and kinetic theory of gases. This umbrella-subject is typically designed for physics students and functions to provide a general introduction to each of three core heat-related subjects. Other authors, however, define thermal physics loosely as a summation of only thermodynamics and statistical mechanics.
Thermal physics can be seen as the study of systems with a large number of atoms; it unites thermodynamics with statistical mechanics.
Overview
Thermal physics, generally speaking, is the study of the statistical nature of physical systems from an energetic perspective. Starting with the basics of heat and temperature, thermal physics analyzes the first law of thermodynamics and second law of thermodynamics from the statistical perspective, in terms of the number of microstates corresponding to a given macrostate. In addition, the concept of entropy is studied via quantum theory.
A central topic in thermal physics is the canonical probability distribution. The electromagnetic nature of photons and phonons is studied, showing that the oscillations of electromagnetic fields and of crystal lattices have much in common. Waves form a basis for both, provided one incorporates quantum theory.
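As a minimal numerical illustration of the canonical (Boltzmann) distribution mentioned above, the occupation probabilities p_i = exp(-E_i / k_B T) / Z can be computed directly. This is a generic sketch: the three energy levels and the temperature are made-up values, not tied to any particular system.

```python
import math

k_B = 1.380649e-23                         # Boltzmann constant, J/K
T = 300.0                                  # assumed temperature in kelvin
energies_eV = [0.0, 0.025, 0.1]            # made-up energy levels, in eV
energies_J = [E * 1.602176634e-19 for E in energies_eV]

# Boltzmann weights and the partition function Z of the canonical ensemble
weights = [math.exp(-E / (k_B * T)) for E in energies_J]
Z = sum(weights)

# Canonical occupation probabilities p_i = exp(-E_i / k_B T) / Z
probabilities = [w / Z for w in weights]
print(probabilities)                       # higher-energy levels are exponentially suppressed
```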
Other topics studied in thermal physics include: chemical potential, the quantum nature of an ideal gas, i.e. in terms of fermions and bosons, Bose–Einstein condensation, Gibbs free energy, Helmholtz free
|
https://en.wikipedia.org/wiki/Hong%20Kong%20Association%20of%20Science%20and%20Mathematics%20Education
|
The Hong Kong Association of Science and Mathematics Education is a society that promotes and improves the teaching of science and mathematics in Hong Kong. Founded in 1964, its current members include secondary school teachers, university professors and lecturers, and government officials in education.
External links
Official website
Education in Hong Kong
|
https://en.wikipedia.org/wiki/Transheterozygote
|
The term transheterozygote is used in modern genetics periodicals in two different ways. In the first, the transheterozygote has one mutant (-) and one wildtype allele (+) at each of two different genes (A-/A+ and B-/B+ where A and B are different genes). In the second, the transheterozygote carries two different mutated alleles of the same gene (A*/A', see example below). This second definition also applies to the term "heteroallelic combination".
Organisms with one mutant and one wildtype allele at one locus are called simply heterozygous, not transheterozygous.
Transheterozygotes are useful in the study of genetic interactions and complementation testing.
Transheterozygous at two loci
A transheterozygote is a diploid organism that is heterozygous at two different loci (genes). Each of the two loci has one natural (or wild type) allele and one allele that differs from the natural allele because of a mutation. Such an organism can be created by crossing together two organisms that carry one mutation each, in two different genes, and selecting for the presence of both mutations simultaneously in an individual offspring. The offspring will have one mutant allele and one wildtype allele at each of the two genes being studied.
Transheterozygotes are useful in the study of genetic interactions. An example from Drosophila research: the wing vein phenotype of a recessive mutation in the Epidermal growth factor receptor (Egfr), a gene required for communication between cells,
|
https://en.wikipedia.org/wiki/David%20R.%20Cheriton%20School%20of%20Computer%20Science
|
The David R. Cheriton School of Computer Science is a professional school within the Faculty of Mathematics at the University of Waterloo. QS World University Rankings ranked the David R. Cheriton School of Computer Science 24th in the world, 10th in North America and 2nd in Canada in Computer Science in 2014. U.S. News & World Report ranked the David R. Cheriton School of Computer Science 42nd in the world and second in Canada.
History
In 1965, when Mathematics was still a department within the Faculty of Arts, four third-year mathematics students (Richard Shirley, Angus German, James G. Mitchell, and Bob Zarnke) wrote the WATFOR compiler for the FORTRAN programming language, under the direction of lecturer Peter Shantz. "Within a year it would be adopted by computing centres in over eight countries, and the number of student users at UW increased to over 2500." Later on in 1966, two mathematics lecturers (Paul Dirksen and Paul H. Cress) led a team that developed WATFOR 360, for which they received the 1972 Grace Murray Hopper Award from the Association for Computing Machinery.
UW's Faculty of Mathematics was later established in 1967. As a result, the Department of Applied Analysis and Computer Science (AA&CS) was created. By 1969, AA&CS had become the largest department in the faculty. At that point, the first two PhD degrees in computer science were awarded, to Byron L. Ehle, for a thesis on numerical analysis, and to Hugh Williams, for a thesis on computational number
|
https://en.wikipedia.org/wiki/Carlson%27s%20theorem
|
In mathematics, in the area of complex analysis, Carlson's theorem is a uniqueness theorem which was discovered by Fritz David Carlson. Informally, it states that two different analytic functions which do not grow very fast at infinity cannot coincide at the integers. The theorem may be obtained from the Phragmén–Lindelöf theorem, which is itself an extension of the maximum-modulus theorem.
Carlson's theorem is typically invoked to defend the uniqueness of a Newton series expansion. Carlson's theorem has generalized analogues for other expansions.
Statement
Assume that f satisfies the following three conditions. The first two conditions bound the growth of f at infinity, whereas the third one states that f vanishes on the non-negative integers.
f is an entire function of exponential type, meaning that |f(z)| ≤ Ce^(τ|z|) for all complex z and some real values C, τ.
There exists c < π such that |f(iy)| ≤ Ce^(c|y|) for every real y.
f(n) = 0 for every non-negative integer n.
Then f is identically zero.
Sharpness
First condition
The first condition may be relaxed: it is enough to assume that f is analytic in the half-plane Re z > 0, continuous in Re z ≥ 0, and satisfies
|f(z)| ≤ Ce^(τ|z|) for Re z ≥ 0 and some real values C, τ.
Second condition
To see that the second condition is sharp, consider the function f(z) = sin(πz). It vanishes on the integers; however, it grows exponentially on the imaginary axis with a growth rate of c = π, and indeed it is not identically zero.
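A quick numerical check of this sharpness example (an illustrative sketch; the sample points are arbitrary) confirms both properties of sin(πz):

```python
import cmath, math

f = lambda z: cmath.sin(cmath.pi * z)

# Vanishes at the integers (up to floating-point rounding):
print(all(abs(f(n)) < 1e-9 for n in range(10)))           # True

# Grows exponentially along the imaginary axis: |sin(pi*i*y)| = sinh(pi*y) ~ e^(pi*y)/2
for y in (1.0, 2.0, 3.0):
    print(abs(f(1j * y)), math.exp(math.pi * y) / 2)      # the two columns nearly agree
```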
Third condition
A result, due to Rubel, relaxes the condition that f vanish on the integers. Namely, Rubel showed that the conclusion of the theorem remains valid if f vanishes
|
https://en.wikipedia.org/wiki/Ukkonen%27s%20algorithm
|
In computer science, Ukkonen's algorithm is a linear-time, online algorithm for constructing suffix trees, proposed by Esko Ukkonen in 1995. The algorithm begins with an implicit suffix tree containing the first character of the string. It then steps through the string, adding successive characters until the tree is complete. This order of adding characters gives Ukkonen's algorithm its "on-line" property. The original algorithm, presented by Peter Weiner, proceeded backward from the last character to the first, building from the shortest to the longest suffix. A simpler algorithm was found by Edward M. McCreight, going from the longest to the shortest suffix.
Implicit suffix tree
While generating a suffix tree using Ukkonen's algorithm, we see implicit suffix trees in the intermediate steps, depending on the characters in the string S. In an implicit suffix tree, there is no edge labelled with $ (or any other termination character) and no internal node with only one edge going out of it.
High level description of Ukkonen's algorithm
Ukkonen's algorithm constructs an implicit suffix tree Ti for each prefix S[1...i] of S (S being the string of length n). It first builds T1 using the 1st character, then T2 using the first 2 characters, then T3 using the first 3 characters, ..., then Tn using all n characters. You can find the following characteristics in a suffix tree that uses Ukkonen's algorithm:
Implicit suffix tree Ti+1 is built on top of implicit suffix tree Ti.
At any given time, Ukkonen's algorithm builds the suffix tree fo
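The step-by-step description is cut off above; to make the prefix-by-prefix ("online") order concrete anyway, here is a deliberately naive sketch that builds an uncompressed suffix trie for each prefix S[1..i] in turn. It is not Ukkonen's linear-time construction, and it builds tries rather than compressed implicit suffix trees (and rebuilds from scratch rather than extending Ti into Ti+1), but it shows the same order of character addition. The string and variable names are illustrative.

```python
def suffix_trie(prefix: str) -> dict:
    """Build an uncompressed suffix trie (nested dicts) for the given prefix."""
    root: dict = {}
    for start in range(len(prefix)):          # insert every suffix of the prefix
        node = root
        for ch in prefix[start:]:
            node = node.setdefault(ch, {})    # follow or create the edge for ch
        # no '$' terminator is added, mirroring the "implicit" flavour above
    return root

S = "abcab"
tries = []
for i in range(1, len(S) + 1):                # T1, T2, ..., Tn in online order
    tries.append(suffix_trie(S[:i]))          # naive: rebuilt from scratch each time

print(tries[-1])                              # trie containing all suffixes of "abcab"
```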
|
https://en.wikipedia.org/wiki/Richard%20Gans
|
Richard Martin Gans (7 March 1880 – 27 June 1954) was a German physicist of Jewish origin, born in Hamburg, who founded the Physics Institute of the National University of La Plata, Argentina. He was its Director in two different periods.
During the first one, starting in 1911, he continued the work started by Emil Bose raising the research level of the institute to international renown. In 1914 he founded the publication of a scientific journal: Contribución al estudio de las ciencias fisicomatemáticas, with two series: matematicofísica and técnica.
His second period in La Plata was from the late 1940s through the early 1950s, when he played an important role as member of one of the commissions which reviewed Ronald Richter's claims related to the Huemul Project.
After leaving La Plata in 1951 he taught theoretical and advanced physics at the University of Buenos Aires.
Gans theory is named after Richard Gans. This theory gives the solutions to the Maxwell equations for prolate and oblate spheroidal particles. It is an extension of Mie theory and is thus sometimes called Mie–Gans theory. He first published these equations, describing the scattering of elongated gold particles, in 1912. In 1915, the solution for silver particles was published. Gans also rederived Lord Rayleigh's scattering approximation for optically soft spheres, which is now known as the Rayleigh–Gans approximation.
His doctoral students include Daniele Amati and Alberto Sirlin.
Studies
Gans
|
https://en.wikipedia.org/wiki/Vertical%20and%20horizontal%20bundles
|
In mathematics, the vertical bundle and the horizontal bundle are vector bundles associated to a smooth fiber bundle. More precisely, given a smooth fiber bundle π : E → B, the vertical bundle VE and horizontal bundle HE are subbundles of the tangent bundle TE of E whose Whitney sum satisfies VE ⊕ HE ≅ TE. This means that, over each point e of E, the fibers VeE and HeE form complementary subspaces of the tangent space TeE. The vertical bundle consists of all vectors that are tangent to the fibers, while the horizontal bundle requires some choice of complementary subbundle.
To make this precise, define the vertical space VeE at e ∈ E to be ker(dπe). That is, the differential dπe : TeE → TbB (where b = π(e)) is a linear surjection whose kernel has the same dimension as the fibers of π. If we write F = π⁻¹(b), then VeE consists of exactly the vectors in TeE which are also tangent to F. The name is motivated by low-dimensional examples like the trivial line bundle over a circle, which is sometimes depicted as a vertical cylinder projecting to a horizontal circle. A subspace HeE of TeE is called a horizontal space if TeE is the direct sum of VeE and HeE.
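For a concrete case (a standard example, written here in the notation reconstructed above), consider a trivial bundle, where both spaces can be exhibited explicitly:

```latex
% Trivial bundle E = B \times F with projection \pi(b, f) = b; write e = (b, f).
\[
  T_eE \;\cong\; T_bB \oplus T_fF, \qquad
  V_eE \;=\; \ker(d\pi_e) \;\cong\; \{0\} \oplus T_fF, \qquad
  H_eE \;:=\; T_bB \oplus \{0\}.
\]
% The vertical space is canonical, but the displayed H_eE is only one admissible
% choice: the graph of any linear map T_bB \to T_fF is another complement,
% which is why "a" horizontal bundle requires a choice while "the" vertical bundle does not.
```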
The disjoint union of the vertical spaces VeE for each e in E is the subbundle VE of TE; this is the vertical bundle of E. Likewise, provided the horizontal spaces vary smoothly with e, their disjoint union is a horizontal bundle. The use of the words "the" and "a" here is intentional: each vertical subspace is unique, defined explicitly by ker(dπe). Excluding trivial cases, there are an infinite number of horizontal subspaces at each point. Also
|
https://en.wikipedia.org/wiki/Difference%20polynomials
|
In mathematics, in the area of complex analysis, the general difference polynomials are a polynomial sequence, a certain subclass of the Sheffer polynomials, which include the Newton polynomials, Selberg's polynomials, and the Stirling interpolation polynomials as special cases.
Definition
The general difference polynomial sequence is given by
where is the binomial coefficient. For , the generated polynomials are the Newton polynomials
The case of generates Selberg's polynomials, and the case of generates Stirling's interpolation polynomials.
Moving differences
Given an analytic function , define the moving difference of f as
where Δ is the forward difference operator. Then, provided that f obeys certain summability conditions, it may be represented in terms of these polynomials as
The conditions for summability (that is, convergence) for this sequence are a fairly complex topic; in general, one may say that a necessary condition is that the analytic function be of less than exponential type. Summability conditions are discussed in detail in Boas & Buck.
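As a small numerical illustration of a Newton-series expansion of the kind these polynomials generalize (a generic sketch: the test function f(z) = z³ and the evaluation point are arbitrary choices, not taken from the article), one can check that the forward differences at 0 reproduce a polynomial exactly:

```python
import math

def forward_differences(values):
    """Iterated forward differences [f(0), Δf(0), Δ²f(0), ...] from samples f(0), f(1), ..."""
    diffs, row = [], list(values)
    while row:
        diffs.append(row[0])
        row = [row[i + 1] - row[i] for i in range(len(row) - 1)]
    return diffs

def binom_poly(z, k):
    """Generalized binomial coefficient C(z, k) = z(z-1)...(z-k+1)/k!, valid for real z."""
    prod = 1.0
    for j in range(k):
        prod *= (z - j)
    return prod / math.factorial(k)

f = lambda z: z**3
d = forward_differences([f(n) for n in range(6)])    # samples at 0..5 suffice for a cubic

z = 2.5                                              # arbitrary non-integer test point
newton_series = sum(binom_poly(z, k) * d[k] for k in range(len(d)))
print(newton_series, f(z))                           # both print 15.625
```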
Generating function
The generating function for the general difference polynomials is given by
This generating function can be brought into the form of the generalized Appell representation
by setting , , and .
See also
Carlson's theorem
Bernoulli polynomials of the second kind
References
Ralph P. Boas, Jr. and R. Creighton Buck, Polynomial Expansions of Analytic Functions (Second Printing Corrected),
|
https://en.wikipedia.org/wiki/Stirling%20polynomials
|
In mathematics, the Stirling polynomials are a family of polynomials that generalize important sequences of numbers appearing in combinatorics and analysis, which are closely related to the Stirling numbers, the Bernoulli numbers, and the generalized Bernoulli polynomials. There are multiple variants of the Stirling polynomial sequence considered below, most notably the Sheffer-sequence form, defined characteristically through the special form of its exponential generating function, and the Stirling (convolution) polynomials, which also satisfy a characteristic ordinary generating function and are of use in generalizing the Stirling numbers (of both kinds) to arbitrary complex-valued inputs. The "convolution polynomial" variant of this sequence and its properties are considered second, in the last subsection of the article. Still other variants of the Stirling polynomials are studied in the supplementary links to the articles given in the references.
Definition and examples
For nonnegative integers k, the Stirling polynomials, Sk(x), are a Sheffer sequence for defined by the exponential generating function
The Stirling polynomials are a special case of the Nørlund polynomials (or generalized Bernoulli polynomials) each with exponential generating function
given by the relation .
The first 10 Stirling polynomials are given in the following table:
{| class="wikitable"
!k !! Sk(x)
|-
| 0 ||
|-
| 1 ||
|-
| 2 ||
|-
| 3 ||
|-
| 4 ||
|-
|
|
https://en.wikipedia.org/wiki/%C3%89douard%20Le%20Roy
|
Édouard Louis Emmanuel Julien Le Roy (; 18 June 1870 in Paris – 10 November 1954 in Paris) was a French philosopher and mathematician.
Life
Le Roy entered the École Normale Supérieure in 1892, and received the agrégation in mathematics in 1895. He became Doctor in Sciences in 1898, taught in several high schools, and became in 1909 professor of mathematics at the Lycée Saint-Louis in Paris.
From then on, Le Roy took an important interest in philosophy and metaphysics. A friend of Teilhard de Chardin and Henri Bergson's closest disciple, he succeeded Bergson at the Collège de France (1922) and, in 1945, at the Académie française. In 1919, Le Roy was also elected member of the Académie des Sciences morales et politiques.
Le Roy was especially interested in the relations between science and morality. Along with Henri Poincaré and Pierre Duhem, he supported a conventionalist thesis on the foundation of mathematics. Although a fervent Catholic, he extended this conventionalist theory to revealed truths, which, according to him, did not diminish their force. In the domain of religious dogmas, he rejected abstract reasoning and speculative theology in favour of instinctive faith, heart and sentiment. He was one of those close to Bergson who encouraged him to turn to the study of mysticism, explored in his later works. His conventionalism led his works, charged with modernism, to be placed on the Index by the Holy See.
Works
Théorie du potentiel newtonien : leçons profess
|
https://en.wikipedia.org/wiki/Abox
|
In computer science, the terms TBox and ABox are used to describe two different types of statements in knowledge bases. TBox statements are the "terminology component", and describe a domain of interest by defining classes and properties as a domain vocabulary. ABox statements are the "assertion component" — facts associated with the TBox's conceptual model or ontologies.
Together ABox and TBox statements make up a knowledge base or a knowledge graph.
ABox statements must be TBox-compliant: they are assertions that use the vocabulary defined by the TBox.
TBox statements are sometimes associated with object-oriented classes and ABox statements associated with instances of those classes.
Examples of ABox and TBox statements
ABox statements typically deal with concrete entities. They specify what category an entity belongs to, or what relation one entity has to another entity.
Item A is-an-instance-of Category C
Item A has-this-relation-to Item B
Examples:
Niger is-a country.
Chad is-a country.
Niger is-next-to Chad.
Agadez is-a city.
Agadez is-located-in Niger.
TBox statements typically express general rules (or definitions of domain categories and implied relations), such as:
An entity X can be a country or a city
So Dagamanet is-a neighbourhood is not a fact you can specify, though it is a fact in real life.
A is-next-to B if B is-next-to A
So Niger is-next-to Chad implies Chad is-next-to Niger.
X is a place if X is-a city or X is-a country.
So Niger is-a country implies Niger
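A minimal, library-free sketch of the ABox/TBox distinction follows (the entity and category names come from the examples above; the way the rules are encoded is an illustrative assumption, not a standard knowledge-base API):

```python
# TBox: vocabulary and general rules about the domain
CATEGORIES = {"country", "city"}                 # "an entity X can be a country or a city"
SYMMETRIC_RELATIONS = {"is-next-to"}             # "A is-next-to B if B is-next-to A"

# ABox: concrete assertions, which must use the TBox vocabulary
instance_of = {"Niger": "country", "Chad": "country", "Agadez": "city"}
relations = {("Niger", "is-next-to", "Chad"), ("Agadez", "is-located-in", "Niger")}

def tbox_compliant(item, category):
    """An ABox typing assertion is only allowed if the category exists in the TBox."""
    return category in CATEGORIES

def holds(a, rel, b):
    """Check a relation, applying the TBox symmetry rule where it is declared."""
    if (a, rel, b) in relations:
        return True
    return rel in SYMMETRIC_RELATIONS and (b, rel, a) in relations

print(tbox_compliant("Dagamanet", "neighbourhood"))   # False: not in the TBox vocabulary
print(holds("Chad", "is-next-to", "Niger"))           # True: inferred by symmetry
```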
|
https://en.wikipedia.org/wiki/Confluency
|
In cell culture biology, confluence refers to the percentage of the surface of a culture dish that is covered by adherent cells. For example, 50 percent confluence means roughly half of the surface is covered, while 100 percent confluence means the surface is completely covered by the cells, and no more room is left for the cells to grow as a monolayer. The cell number refers to, trivially, the number of cells in a given region.
Impact on research
Many cell lines exhibit differences in growth rate or gene expression depending on the degree of confluence. Cells are typically passaged before becoming fully confluent in order to maintain their proliferation phenotype. Some cell types, such as immortalized cells, are not limited by contact inhibition and may continue to divide and form layers on top of the parent cells. To achieve optimal and consistent results, experiments are usually performed using cells at a particular confluence, depending on the cell type. Extracellular export of cell-free material also depends on cell confluence.
Estimation
Rule of thumb
Comparing the amount of space covered by cells with unoccupied space using the naked eye can provide a rough estimate of confluency.
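Beyond eyeballing, the same covered-area fraction can be approximated from a microscope image by simple intensity thresholding. The sketch below is only illustrative: the synthetic image, the threshold value, and the assumption that cells appear darker than the background are all made up for the example, and real images would need proper segmentation.

```python
import numpy as np

# Synthetic stand-in for a grayscale dish image (0 = dark, 255 = bright).
rng = np.random.default_rng(0)
image = np.full((200, 200), 220, dtype=float)        # bright, empty dish surface
image[40:120, 30:170] = 90                           # darker patch standing in for cells
image += rng.normal(0, 5, image.shape)               # imaging noise

# Pixels darker than the threshold are treated as "covered by cells" (assumption).
threshold = 150
covered = image < threshold
confluence_percent = 100.0 * covered.mean()
print(f"estimated confluence: {confluence_percent:.1f}%")   # ~28% for this synthetic image
```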
Hemocytometer
A hemocytometer can be used to count cells, giving the cell number.
References
Cell culture
|
https://en.wikipedia.org/wiki/Journal%20of%20the%20Atmospheric%20Sciences
|
The Journal of the Atmospheric Sciences (until 1962 titled Journal of Meteorology) is a scientific journal published by the American Meteorological Society. It covers basic research related to the physics, dynamics, and chemistry of the atmosphere of Earth and other planets, with emphasis on the quantitative and deductive aspects of the subject.
See also
List of scientific journals in earth and atmospheric sciences
References
External links
English-language journals
Monthly journals
Academic journals established in 1944
American Meteorological Society academic journals
|
https://en.wikipedia.org/wiki/Journal%20of%20Applied%20Meteorology%20and%20Climatology
|
The Journal of Applied Meteorology and Climatology (JAMC; formerly Journal of Applied Meteorology) is a scientific journal published by the American Meteorological Society.
It publishes applied research related to physical meteorology, cloud physics, hydrology, weather modification, satellite meteorology, boundary layer processes, air pollution meteorology (including dispersion and chemical processes), agricultural and forest meteorology, and applied meteorological numerical models of all types.
See also
List of scientific journals
List of scientific journals in earth and atmospheric sciences
Atmospheric dispersion modeling
Academic journals established in 1962
English-language journals
American Meteorological Society academic journals
Climatology journals
|
https://en.wikipedia.org/wiki/Journal%20of%20Climate
|
The Journal of Climate is a peer-reviewed scientific journal published semi-monthly by the American Meteorological Society. It covers research that advances basic understanding of the dynamics and physics of the climate system on large spatial scales, including variability of the atmosphere, oceans, land surface, and cryosphere; past, present, and projected future changes in the climate system; and climate simulation and prediction.
See also
List of scientific journals in earth and atmospheric sciences
External links
Climatology journals
Academic journals established in 1988
English-language journals
American Meteorological Society academic journals
|
https://en.wikipedia.org/wiki/Kosaraju%27s%20algorithm
|
In computer science, Kosaraju-Sharir's algorithm (also known as Kosaraju's algorithm) is a linear time algorithm to find the strongly connected components of a directed graph. Aho, Hopcroft and Ullman credit it to S. Rao Kosaraju and Micha Sharir. Kosaraju suggested it in 1978 but did not publish it, while Sharir independently discovered it and published it in 1981. It makes use of the fact that the transpose graph (the same graph with the direction of every edge reversed) has exactly the same strongly connected components as the original graph.
The algorithm
The primitive graph operations that the algorithm uses are to enumerate the vertices of the graph, to store data per vertex (if not in the graph data structure itself, then in some table that can use vertices as indices), to enumerate the out-neighbours of a vertex (traverse edges in the forward direction), and to enumerate the in-neighbours of a vertex (traverse edges in the backward direction); however, the last of these can be avoided at the price of constructing a representation of the transpose graph during the forward traversal phase. The only additional data structure needed by the algorithm is an ordered list of graph vertices, that will grow to contain each vertex once.
If strong components are to be represented by appointing a separate root vertex for each component, and assigning to each vertex the root vertex of its component, then Kosaraju's algorithm can be stated as follows.
For each vertex of the graph,
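The step-by-step statement is cut off above; as a stand-in, here is a compact sketch of the usual two-pass formulation (iterative DFS to avoid recursion limits; the example graph at the end is arbitrary), which assigns each vertex the root of its strongly connected component as described:

```python
from collections import defaultdict

def kosaraju(vertices, edges):
    """Return a dict mapping each vertex to the root vertex of its strongly connected component."""
    out_adj, in_adj = defaultdict(list), defaultdict(list)
    for u, v in edges:
        out_adj[u].append(v)        # forward edges
        in_adj[v].append(u)         # transpose-graph edges

    # Pass 1: order vertices by DFS finishing time on the original graph.
    visited, order = set(), []
    for s in vertices:
        if s in visited:
            continue
        visited.add(s)
        stack = [(s, iter(out_adj[s]))]
        while stack:
            node, it = stack[-1]
            for nxt in it:
                if nxt not in visited:
                    visited.add(nxt)
                    stack.append((nxt, iter(out_adj[nxt])))
                    break
            else:
                order.append(node)
                stack.pop()

    # Pass 2: DFS on the transpose graph in reverse finishing order; each search
    # started at an unassigned vertex collects exactly one strongly connected component.
    root = {}
    for s in reversed(order):
        if s in root:
            continue
        root[s] = s
        stack = [s]
        while stack:
            node = stack.pop()
            for nxt in in_adj[node]:
                if nxt not in root:
                    root[nxt] = s
                    stack.append(nxt)
    return root

# Example: two components, {a, b, c} and {d}.
print(kosaraju(["a", "b", "c", "d"], [("a", "b"), ("b", "c"), ("c", "a"), ("c", "d")]))
```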
|
https://en.wikipedia.org/wiki/Raymond%20Pierrehumbert
|
Raymond Thomas Pierrehumbert is the Halley Professor of Physics at the University of Oxford. Previously, he was Louis Block Professor in Geophysical Sciences at the University of Chicago. He was a lead author on the Third Assessment Report of the IPCC (Intergovernmental Panel on Climate Change) and a co-author of the National Research Council report on abrupt climate change.
Education and awards
He earned a degree in Physics (A.B.) from Harvard College and a PhD in Aeronautics and Astronautics from the Massachusetts Institute of Technology.
He was awarded a John Simon Guggenheim Fellowship in 1996, which was used to launch collaborative work on the climate of early Mars with collaborators in Paris. He is a Fellow of the American Geophysical Union (AGU) and has been named Chevalier de l'Ordre des Palmes Académiques by the Republic of France. He was elected to the American Academy of Arts and Sciences in 2015 and sits on the Science and Security Board of the Bulletin of the Atomic Scientists. In 2020, Pierrehumbert was elected a Fellow of the Royal Society.
Research
Pierrehumbert's central research interest is how climate works as a system and developing idealized mathematical models to be used to address questions of climate science such as how the earth kept from freezing over: the faint young sun paradox.
Pierrehumbert contributes to RealClimate and is a strong critic of solar geoengineering research.
Personal life
Pierrehumbert is married to Janet Pierrehumbert, pro
|
https://en.wikipedia.org/wiki/L-reduction
|
In computer science, particularly the study of approximation algorithms, an
L-reduction ("linear reduction") is a transformation of optimization problems which linearly preserves approximability features; it is one type of approximation-preserving reduction. L-reductions in studies of approximability of optimization problems play a similar role to that of polynomial reductions in the studies of computational complexity of decision problems.
The term L reduction is sometimes used to refer to log-space reductions, by analogy with the complexity class L, but this is a different concept.
Definition
Let A and B be optimization problems and cA and cB their respective cost functions. A pair of functions f and g is an L-reduction if all of the following conditions are met:
functions f and g are computable in polynomial time,
if x is an instance of problem A, then f(x) is an instance of problem B,
if y' is a solution to f(x), then g(y') is a solution to x,
there exists a positive constant α such that
OPTB(f(x)) ≤ α OPTA(x), where OPTA(x) and OPTB(f(x)) denote the optimum costs of instance x of A and of instance f(x) of B,
there exists a positive constant β such that for every solution y' to f(x)
|OPTA(x) - cA(g(y'))| ≤ β |OPTB(f(x)) - cB(y')|.
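Under this definition, the relative error transfers between the two problems with a factor αβ; a short derivation (using the two inequalities in the last conditions as reconstructed above, with ε the relative error of y' for B) is:

```latex
% Suppose y' has relative error eps for B on f(x): |OPT_B(f(x)) - c_B(y')| <= eps * OPT_B(f(x)).
\[
  \bigl|\mathrm{OPT}_A(x) - c_A(g(y'))\bigr|
    \;\le\; \beta\,\bigl|\mathrm{OPT}_B(f(x)) - c_B(y')\bigr|
    \;\le\; \beta\varepsilon\,\mathrm{OPT}_B(f(x))
    \;\le\; \alpha\beta\varepsilon\,\mathrm{OPT}_A(x),
\]
% so g(y') is within relative error alpha*beta*eps of the optimum for A on instance x.
```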
Properties
Implication of PTAS reduction
An L-reduction from problem A to problem B implies an AP-reduction when A and B are minimization problems and a PTAS reduction when A and B are maximization problems. In both cases, when B has a PTAS and there is an L-reduction from A to B, then A also has a PTAS. This enables the use of L-reduction as a replacement for showing the existence of a PTAS-red
|
https://en.wikipedia.org/wiki/Daniel%20P.%20Schrag
|
Daniel Paul Schrag (born January 25, 1966) is the Sturgis Hooper Professor of Geology, Professor of Environmental Science and Engineering at Harvard University and Director of the Harvard University Center for the Environment. He also co-directs the Science, Technology and Public Policy Program at the Belfer Center for Science and International Affairs at the Harvard University Harvard Kennedy School. He is also an external professor at the Santa Fe Institute.
He has also worked on a variety of clean energy projects incorporating carbon capture and storage to reduce emissions from power plants, fuel refineries and fertilizer plants. With John Marshall, he co-founded The Potential Energy Coalition, an environmental NGO aimed at deploying more effective communication strategies around climate change. With Eric Love, he co-founded The Carbon Endowment, an environmental NGO aimed at acquiring underground coal reserves and conserving them in perpetuity. He has served on the advisory boards of a variety of clean energy companies including Kobold Metals, a company trying to accelerate the discovery of critical metals for lithium ion batteries.
In 2023, an investigative report in the Harvard Crimson revealed that Schrag has faced allegations of bullying and creation of toxic workplace environments going back several decades, although the report only cited specific comments from the past three years. Twelve former students who worked with Schrag wrote a letter to the Crimson, fo
|
https://en.wikipedia.org/wiki/Actor%20model%20middle%20history
|
In computer science, the Actor model, first published in 1973 , is a mathematical model of concurrent computation. This article reports on the middle history of the Actor model in which major themes were initial implementations, initial applications, and development of the first proof theory and denotational model. It is the follow on article to Actor model early history which reports on the early history of the Actor model which concerned the basic development of the concepts. The article Actor model later history reports on developments after the ones reported in this article.
Proving properties of Actor systems
Carl Hewitt [1974] published the principle of Actor induction which is:
Suppose that an Actor has a property P when it is created.
Further suppose that if it has property P when it processes a message, then it has property P when it processes the next message.
Then the Actor always has property P.
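A toy sketch of this induction principle follows; the counter Actor and its invariant are made up for illustration, and a real Actor system would use asynchronous message passing rather than a plain method call.

```python
class CounterActor:
    """Toy 'Actor' whose invariant P is: the count is always non-negative."""

    def __init__(self):
        self.count = 0                      # base case: P holds when the Actor is created

    def process(self, message):
        # Inductive step: assuming count >= 0 before this message,
        # every branch below leaves count >= 0 afterwards.
        if message == "increment":
            self.count += 1
        elif message == "decrement":
            self.count = max(0, self.count - 1)
        assert self.count >= 0              # P still holds when the next message arrives

actor = CounterActor()
for msg in ["increment", "decrement", "decrement", "increment"]:
    actor.process(msg)
print(actor.count)                          # 1; the invariant held after every message
```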
In his doctoral dissertation, Aki Yonezawa developed further techniques for proving properties of Actor systems including those that make use of migration. Russ Atkinson and Carl Hewitt developed techniques for proving properties of Serializers that are guardians of shared resources. Gerry Barber's doctoral dissertation concerned reasoning about change in knowledgeable office systems.
Garbage collection
Garbage collection (the automatic reclamation of unused storage) was an important theme in the development of the Actor model.
In his doctoral dissertation, Peter Bishop develop
|
https://en.wikipedia.org/wiki/Actor%20model%20later%20history
|
In computer science, the Actor model, first published in 1973 , is a mathematical model of concurrent computation. This article reports on the later history of the Actor model in which major themes were investigation of the basic power of the model, study of issues of compositionality, development of architectures, and application to Open systems. It is the follow on article to Actor model middle history which reports on the initial implementations, initial applications, and development of the first proof theory and denotational model.
Power of the Actor Model
Investigations began into the basic power of the Actor model. Carl Hewitt [1985] argued that, because of the use of Arbiters, the Actor model was more powerful than logic programming (see indeterminacy in concurrent computation).
A family of Prolog-like concurrent message passing systems using unification of shared variables and data structure streams for messages were developed by Keith Clark, Hervé Gallaire, Steve Gregory, Vijay Saraswat, Udi Shapiro, Kazunori Ueda, etc. Some of these authors made claims that these systems were based on mathematical logic. However, like the Actor model, the Prolog-like concurrent systems were based on message passing and consequently were subject to indeterminacy in the ordering of messages in streams that was similar to the indeterminacy in arrival ordering of messages sent to Actors. Consequently Carl Hewitt and Gul Agha [1991] concluded that the Prolog-like concurrent syste
|
https://en.wikipedia.org/wiki/Bayfront%20Health%20St.%20Petersburg
|
Bayfront Health St. Petersburg is a 480-bed tertiary care center equipped to provide comprehensive medical and surgical care. The hospital offers many areas of expertise, including surgery and trauma, neuroscience, cardiology, acute rehabilitation and obstetrics. The hospital has a 38-bed neurosciences unit with a dedicated neurology team consisting of board-certified neuroscience specialists and physicians with subspecialty expertise in stroke, epilepsy and Parkinson's disease, and includes a Level IV Epilepsy Center. Bayfront Health Baby Place offers full obstetrical services.
Bayfront Health St. Petersburg, formerly known as Bayfront Medical Center, is an Orlando Health wholly owned hospital in St. Petersburg, Florida. Bayfront Health-St. Petersburg is Pinellas County's only trauma center and St. Petersburg's longest-standing hospital.
Bayfront is a not-for-profit teaching hospital that provides comprehensive services in trauma and emergency care; orthopaedics; obstetrics and gynecology; cardiac medicine and surgery (specializing in valve surgery); neurosciences (with its own epilepsy center); sports medicine; surgery and rehabilitation. More than 550 physicians are on staff with specialties ranging from open-heart surgery to fertility treatment.
It is also home to Bayflite, an air medical helicopter transport program. Bayflite, the largest hospital-based flight program in the Southeastern United States, was started more than two decades ago and the first flight program
|