https://en.wikipedia.org/wiki/Pisg%20%28software%29
|
pisg, short for Perl IRC Statistics Generator, is a popular open-source Internet Relay Chat (IRC) log-file analysis and statistical visualization program. It is written in Perl by Morten Brix Pedersen. It analyzes log files in various formats from IRC clients and bots and generates HTML pages containing statistics about the channel from which the logs were taken. It is often considered a competitor to mIRCStats, a similar shareware program.
pisg supports many log formats, including mIRC, Trillian, Eggdrop, and irssi, and can be customized to work with other log-file formats. Because it is open source, pisg has an active community developing further log interpreters. pisg runs on essentially any platform with Perl, including Linux, BSD, Microsoft Windows, and Mac OS X.
References
External links
The Perl Programming Language
Internet Relay Chat
Cross-platform free software
Free software programmed in Perl
|
https://en.wikipedia.org/wiki/Product%20of%20groups
|
In mathematics, a product of groups usually refers to a direct product of groups, but may also mean:
semidirect product
Product of group subsets
wreath product
free product
central product
|
https://en.wikipedia.org/wiki/Pseudo%20algebraically%20closed%20field
|
In mathematics, a field is pseudo algebraically closed if it satisfies certain properties which hold for algebraically closed fields. The concept was introduced by James Ax in 1967.
Formulation
A field K is pseudo algebraically closed (usually abbreviated by PAC) if one of the following equivalent conditions holds:
Each absolutely irreducible variety V defined over K has a K-rational point.
For each absolutely irreducible polynomial f ∈ K[T, X] with ∂f/∂X ≠ 0 and for each nonzero g ∈ K[T] there exists (a, b) ∈ K × K such that f(a, b) = 0 and g(a) ≠ 0.
Each absolutely irreducible polynomial f ∈ K[T, X] has infinitely many K-rational points.
If R is a finitely generated integral domain over K with quotient field F which is regular over K, then there exists a homomorphism h : R → K such that h(a) = a for each a ∈ K.
Examples
Algebraically closed fields and separably closed fields are always PAC.
Pseudo-finite fields and hyper-finite fields are PAC.
A non-principal ultraproduct of distinct finite fields is (pseudo-finite and hence) PAC. Ax deduces this from the Riemann hypothesis for curves over finite fields.
Infinite algebraic extensions of finite fields are PAC.
The PAC Nullstellensatz. The absolute Galois group G of a field K is profinite, hence compact, and hence equipped with a normalized Haar measure. Let K be a countable Hilbertian field and let e be a positive integer. Then for almost all e-tuples (σ1, ..., σe) ∈ G^e, the fixed field of the subgroup generated by the automorphisms σ1, ..., σe is PAC. Here the phrase "almost all" means "all but a set of measure zero". (This result is a consequence of Hilbert's irreducibility theorem.)
Let K be the maximal totally real Galois extension of the rational numbers and i the square root of −1. Then K(i) is PAC.
Properties
The Brauer group of a PAC field is trivial, as any Severi–Brauer variety has a rational point.
The absolute Galois group of a PAC field is a projective profinite group; equivalently, it has cohomological dimension at most 1.
A PAC field of characteristic zero is C1.
References
Algebraic geometry
Field (mathematics)
|
https://en.wikipedia.org/wiki/List%20of%20Cyberchase%20episodes
|
Cyberchase is an animated mathematics series that currently airs on PBS Kids. The show revolves around three Earth children (Jackie, Matt, and Inez), who use mathematics and problem-solving skills in a quest to save Cyberspace from a villain known as The Hacker. The three are transported into Cyberspace by Motherboard, the ruler of this virtual realm. Together with Motherboard's helper, Digit (a robotic bird), the three new friends compose the Cybersquad.
Each animated episode is followed by a live-action For Real interstitial before the credits, hosted by young, comedic actors who explore the episode's math topic in the real world. The show is created by the Thirteen Education division of WNET (channel 13), the PBS station for Greater New York.
After the fifth episode of Season 8 in 2010, Cyberchase went on hiatus. However, on April 3, 2013, it was announced on the show's official Facebook page that it would return for a ninth season during the fall.
On February 10, 2015, Gilbert Gottfried, the voice of Digit, announced that five new episodes were expected to be broadcast in the latter half of that year as the show's tenth season. In April 2015, the show's Twitter account retweeted a photo indicating that the season would focus on health, math, and the environment.
In January 2017, it was announced that Cyberchase would be returning for an eleventh season, with ten new episodes set to air later in the year. In May, producer Kristin DiQuollo and director Meeka Stuart answered questions about the show in a 19-minute video.
In October 2018, it was announced that Cyberchase would air for a twelfth season. The season premiered with a movie special on April 19, 2019, with the remaining episodes set to begin airing in the fall; however, all but two of the episodes premiered in 2020.
A thirteenth season, which premiered on February 25, 2022, was confirmed by Robert Tinkler, the voice actor of Delete, on Twitter.
A fourteenth season premiered on April 21, 2023.
Series overview
Episodes
Webisodes (2001, 2003, and 2005)
Preceding the televised animated episodes, in December 2001, three webisodes called "How It All Started" were added to the website.
Webisode 1 had 16 panels.
Webisode 2 had 13 panels.
Webisode 3 had 13 panels.
By April 13, 2003, the trilogy was expanded into "Web Adventures" with a fourth webisode, NUMBERLESS PUDDLES!, which had 20 panels.
On December 14, 2005, a fifth webisode, HACKER JACK, was added, which had 4 panels.
The "Web Adventures" webisodes page was still up in December 2012, but as of March 8, 2014, it was changed into a redirect to the Math Games page introduced in 2011.
Season 1 (2002)
In the pilot episode "Lost My Marbles", Hacker infects Motherboard with a computer virus. At a library on Earth, while looking at a computer, Jackie, Matt, and Inez (who accidentally let Hacker unleash the virus by touching the computer map simultaneously) are sucked through an interdimensional portal into Cyberspace. In their fir
|
https://en.wikipedia.org/wiki/Binomial%20proportion%20confidence%20interval
|
In statistics, a binomial proportion confidence interval is a confidence interval for the probability of success calculated from the outcome of a series of success–failure experiments (Bernoulli trials). In other words, a binomial proportion confidence interval is an interval estimate of a success probability p when only the number of experiments n and the number of successes nS are known.
There are several formulas for a binomial confidence interval, but all of them rely on the assumption of a binomial distribution. In general, a binomial distribution applies when an experiment is repeated a fixed number of times, each trial of the experiment has two possible outcomes (success and failure), the probability of success is the same for each trial, and the trials are statistically independent. Because the binomial distribution is a discrete probability distribution (i.e., not continuous) and difficult to calculate for large numbers of trials, a variety of approximations are used to calculate this confidence interval, all with their own tradeoffs in accuracy and computational intensity.
A simple example of a binomial distribution is the set of various possible outcomes, and their probabilities, for the number of heads observed when a coin is flipped ten times. The observed binomial proportion is the fraction of the flips that turn out to be heads. Given this observed proportion, the confidence interval for the true probability of the coin landing on heads is a range of possible proportions, which may or may not contain the true proportion. A 95% confidence interval for the proportion, for instance, will contain the true proportion 95% of the times that the procedure for constructing the confidence interval is employed.
Normal approximation interval or Wald interval
A commonly used formula for a binomial confidence interval relies on approximating the distribution of error about a binomially distributed observation, p̂, with a normal distribution. This approximation is based on the central limit theorem and is unreliable when the sample size is small or the success probability is close to 0 or 1.
Using the normal approximation, the success probability p is estimated as
p̂ ± z √( p̂(1 − p̂)/n ),
or the equivalent
( nS ± z √( nS(n − nS)/n ) ) / n,
where p̂ = nS/n is the proportion of successes in a Bernoulli trial process, measured with n trials yielding nS successes and nF = n − nS failures, and z is the 1 − α/2 quantile of a standard normal distribution (i.e., the probit) corresponding to the target error rate α. For a 95% confidence level, the error α = 0.05, so 1 − α/2 = 0.975 and z = 1.96.
From this formula one finds two problems. First, as p̂ approaches 1 (or 0), the interval narrows to zero width (implying certainty). Second, for p̂ close enough to 0 or 1 (or equivalently for small n), the interval boundaries exceed the range [0, 1] (overshoot).
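As a concrete illustration, here is a minimal sketch of the Wald interval in Python; the function name and the coin-flip numbers are illustrative, not from the article:

```python
import math

def wald_interval(n_s, n, confidence=0.95):
    """Normal-approximation (Wald) interval for a binomial proportion.

    Returns (low, high). Note the flaws discussed above: zero width
    when p_hat is 0 or 1, and possible overshoot outside [0, 1].
    """
    # z-quantiles for a few common two-sided confidence levels.
    z = {0.90: 1.644854, 0.95: 1.959964, 0.99: 2.575829}[confidence]
    p_hat = n_s / n
    half_width = z * math.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - half_width, p_hat + half_width

# 7 heads in 10 coin flips: interval of roughly (0.416, 0.984).
low, high = wald_interval(7, 10)

# Degenerate case: all successes collapse the interval to zero width.
assert wald_interval(10, 10) == (1.0, 1.0)
```

The degenerate all-successes case makes the first flaw visible directly: the estimated standard error vanishes, so the interval claims certainty.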
An important theoretical derivation of this confidence interval involves the inversion of a hypothesis test. Under this formulation, the confidence interval represents those values of the population parameter that would have large p-values if they wer
|
https://en.wikipedia.org/wiki/Inductive%20data%20type
|
Inductive data type may refer to:
Algebraic data type, a datatype each of whose values is data from other datatypes wrapped in one of the constructors of the datatype
Inductive family, a family of inductive data types indexed by another type or value
Recursive data type, a data type for values that may contain other values of the same type
See also
Inductive type
Induction (disambiguation)
Type theory
Dependently typed programming
|
https://en.wikipedia.org/wiki/Momentum%20map
|
In mathematics, specifically in symplectic geometry, the momentum map (or, by false etymology, moment map) is a tool associated with a Hamiltonian action of a Lie group on a symplectic manifold, used to construct conserved quantities for the action. The momentum map generalizes the classical notions of linear and angular momentum. It is an essential ingredient in various constructions of symplectic manifolds, including symplectic (Marsden–Weinstein) quotients, discussed below, and symplectic cuts and sums.
Formal definition
Let M be a manifold with symplectic form ω. Suppose that a Lie group G acts on M via symplectomorphisms (that is, the action of each g in G preserves ω). Let 𝔤 be the Lie algebra of G, 𝔤* its dual, and
⟨·, ·⟩ : 𝔤* × 𝔤 → ℝ
the pairing between the two. Any ξ in 𝔤 induces a vector field ρ(ξ) on M describing the infinitesimal action of ξ. To be precise, at a point x in M the vector ρ(ξ)_x is
(d/dt)|_{t=0} exp(tξ) · x,
where exp : 𝔤 → G is the exponential map and · denotes the G-action on M. Let ι_{ρ(ξ)}ω denote the contraction of this vector field with ω. Because G acts by symplectomorphisms, it follows that ι_{ρ(ξ)}ω is closed (for all ξ in 𝔤).
Suppose that ι_{ρ(ξ)}ω is not just closed but also exact, so that ι_{ρ(ξ)}ω = dH_ξ for some function H_ξ. If this holds, then one may choose the H_ξ to make the map ξ ↦ H_ξ linear. A momentum map for the G-action on (M, ω) is a map μ : M → 𝔤* such that
dH_ξ = ι_{ρ(ξ)}ω
for all ξ in 𝔤. Here H_ξ is the function from M to ℝ defined by H_ξ(x) = ⟨μ(x), ξ⟩. The momentum map is uniquely defined up to an additive constant of integration (on each connected component).
A G-action on a symplectic manifold (M, ω) is called Hamiltonian if it is symplectic and if there exists a momentum map.
A momentum map is often also required to be G-equivariant, where G acts on 𝔤* via the coadjoint action, and sometimes this requirement is included in the definition of a Hamiltonian group action. If the group is compact or semisimple, then the constant of integration can always be chosen to make the momentum map coadjoint equivariant. However, in general the coadjoint action must be modified to make the map equivariant (this is the case for example for the Euclidean group). The modification is by a 1-cocycle on the group with values in 𝔤*, as first described by Souriau (1970).
Examples of momentum maps
In the case of a Hamiltonian action of the circle G = U(1), the Lie algebra dual 𝔤* is naturally identified with ℝ, and the momentum map is simply the Hamiltonian function that generates the circle action.
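As a sketch of the circle case (sign conventions vary between references), take M = ℝ² with ω = dx ∧ dy and the circle acting by rotations; identifying the Lie algebra with ℝ, for ξ ∈ ℝ:

```latex
\rho(\xi) = \xi\left(-y\,\partial_x + x\,\partial_y\right),
\qquad
\iota_{\rho(\xi)}\omega
  = \xi\left(-y\,dy - x\,dx\right)
  = -\xi\, d\!\left(\frac{x^2 + y^2}{2}\right),
```

so one may take H_ξ = −ξ(x² + y²)/2 and μ(x, y) = −(x² + y²)/2: up to sign and the additive constant noted above, the classical Hamiltonian of rotation.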
Another classical case occurs when M is the cotangent bundle of ℝ³ and G is the Euclidean group generated by rotations and translations. That is, G is a six-dimensional group, the semidirect product of SO(3) and ℝ³. The six components of the momentum map are then the three angular momenta and the three linear momenta.
Let N be a smooth manifold and let T*N be its cotangent bundle, with projection map π : T*N → N. Let θ denote the tautological 1-form on T*N. Suppose G acts on N. The induced action of G on the symplectic manifold (T*N, dθ), given by g · η = (T g⁻¹)* η for g ∈ G, η ∈ T*N, is Hamiltonian with momentum map −ι_{ρ(ξ)}θ for all ξ in 𝔤. Here ι_{ρ(ξ)}θ denotes the contraction of the vector
|
https://en.wikipedia.org/wiki/Marriage%20problem
|
In mathematics, marriage problem may refer to:
Assignment problem, consisting of finding a maximum weight matching in a weighted bipartite graph
Secretary problem, also called the sultan's dowry or best choice problem, in optimal stopping theory
Stable marriage problem, the problem of finding a stable matching between two equally sized sets of elements given an ordering of preferences for each element
|
https://en.wikipedia.org/wiki/Evans%20Hall%20%28UC%20Berkeley%29
|
Evans Hall is the statistics, economics, and mathematics building on the campus of the University of California, Berkeley.
Computer history importance
Evans Hall also served as the gateway for the entire West Coast's ARPAnet access during the early stages of the Internet's existence; at the time, the backbone was a 56 kbit/s line to Chicago.
Because of its proximity to the engineering school and its housing of both the Computer Science and Mathematics departments, Evans Hall was the building in which the original vi text editor was programmed. It was also the birthplace of Berkeley Unix (BSD) and of Rogue, which was further developed there by Glenn Wichman and Michael Toy. Rogue's origins included the curses library, which Rogue was originally written to test. Additionally, both Ingres and Postgres were originally coded in Evans, under Prof. Michael Stonebraker's direction.
The office of Professor Doug Cooper, who wrote the widely used programming textbook "Oh! Pascal!", was in this building.
Architecture
Construction
Evans Hall is situated at the northeast corner of campus, just east of Memorial Glade. It was built in 1971 and is named after Griffith C. Evans, chairman of mathematics from 1934 to 1949 who combined the fields of mathematics and economics. The architect was Gardner Dailey.
In the 1990s, this building saw significant renovation including seismic retrofits and a new paint job. Today, the building sports a blue-green exterior with orange-red accents.
Safety concerns
As part of the University's New Century Plan, the building is recommended for demolition and replacement, due in part to its unsafe earthquake readiness rating. In 2000, it was proposed that two shorter buildings replace Evans Hall.
Although Evans Hall's seismic rating is poor, the rating is common on the UC Berkeley campus, with over fifty buildings sharing it. A rating of poor means that a major earthquake would likely cause "significant structural damage and appreciable life hazards".
During the early 2000s, because of rusting of the frame of the building, "large pieces of concrete began falling off the face of Evans Hall without warning". Repairing the building cost two million dollars.
In February 2022, the University announced that due to cost, Evans Hall will not be seismically renovated and will be demolished.
Aesthetic complaints
Evans Hall was voted one of the ugliest buildings at UC Berkeley by its student body.
Evans Hall is known for its large number of windowless classrooms. The Chronicle of Higher Education has called it "an imposing concrete structure that most people on the campus would like to see demolished". Former chancellor Robert M. Berdahl has described the building as without "stirrings of pride in placement, or massing, or architectural design". Some complain the building disturbs the view of the San Francisco Bay.
Math-related murals have been painted inside the building in protest of its aesthetics.
Evan
|
https://en.wikipedia.org/wiki/Snub%2024-cell
|
In geometry, the snub 24-cell or snub disicositetrachoron is a convex uniform 4-polytope composed of 120 regular tetrahedral and 24 icosahedral cells. Five tetrahedra and three icosahedra meet at each vertex. In total it has 480 triangular faces, 432 edges, and 96 vertices. One can build it from the 600-cell by diminishing a select subset of icosahedral pyramids and leaving only their icosahedral bases, thereby removing 480 tetrahedra and replacing them with 24 icosahedra.
Topologically, under its highest symmetry, [3+,4,3], as an alternation of a truncated 24-cell, it contains 24 pyritohedra (an icosahedron with Th symmetry), 24 regular tetrahedra, and 96 triangular pyramids.
Semiregular polytope
It is one of three semiregular 4-polytopes made of two or more cells which are Platonic solids, discovered by Thorold Gosset in his 1900 paper. He called it a tetricosahedric for being made of tetrahedron and icosahedron cells. (The other two are the rectified 5-cell and rectified 600-cell.)
Alternative names
Snub icositetrachoron
Snub demitesseract
Semi-snub polyoctahedron (John Conway)
Sadi (Jonathan Bowers) for snub disicositetrachoron
Tetricosahedric (Thorold Gosset)
Geometry
Coordinates
The vertices of a snub 24-cell centered at the origin of 4-space, with edges of length 2, are obtained by taking even permutations of
(0, ±1, ±φ, ±φ²)
where φ = (1 + √5)/2 ≈ 1.618 is the golden ratio.
The unit-radius coordinates of the snub 24-cell, with edges of length φ⁻¹ ≈ 0.618, are the even permutations of
(0, ±1/(2φ), ±1/2, ±φ/2)
These 96 vertices can be found by partitioning each of the 96 edges of a 24-cell in the golden ratio in a consistent manner dimensionally analogous to the way the 12 vertices of an icosahedron or "snub octahedron" can be produced by partitioning the 12 edges of an octahedron in the golden ratio. This can be done by first placing vectors along the 24-cell's edges such that each two-dimensional face is bounded by a cycle, then similarly partitioning each edge into the golden ratio along the direction of its vector. This is equivalent to the snub truncation construction of the 24-cell described below.
The 96 vertices of the snub 24-cell, together with the 24 vertices of a 24-cell, form the 120 vertices of the 600-cell.
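The vertex count and circumradius above can be checked numerically. The following sketch generates the even permutations of (0, ±1, ±φ, ±φ²) and verifies that there are 96 distinct vertices, all at distance 2φ from the origin:

```python
import math
from itertools import permutations, product

PHI = (1 + math.sqrt(5)) / 2  # golden ratio

def is_even(perm):
    """True if perm (a permutation of 0..3) has an even inversion count."""
    return sum(perm[i] > perm[j]
               for i in range(4) for j in range(i + 1, 4)) % 2 == 0

base = (0.0, 1.0, PHI, PHI ** 2)
vertices = set()
for signs in product((1, -1), repeat=4):
    signed = tuple(s * c for s, c in zip(signs, base))
    for perm in filter(is_even, permutations(range(4))):
        # Round so that +-0.0 and tiny float noise deduplicate cleanly.
        vertices.add(tuple(round(signed[i], 12) for i in perm))

assert len(vertices) == 96  # 12 even permutations x 8 sign choices
radius = math.dist(next(iter(vertices)), (0.0, 0.0, 0.0, 0.0))
assert math.isclose(radius, 2 * PHI)  # circumradius 2*phi for edge length 2
```

Since all four magnitudes 0, 1, φ, φ² are distinct, the 12 even permutations times 8 effective sign choices (the sign on the zero coordinate is immaterial) yield exactly the 96 vertices stated above.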
Constructions
The snub 24-cell is derived from the 24-cell by a special form of truncation.
Truncations remove vertices by cutting through the edges incident to the vertex; forms of truncation differ by where on the edge the cut is made. The common truncations of the 24-cell include the rectified 24-cell (which cuts each edge at its midpoint, producing a polytope bounded by 24 cubes and 24 cuboctahedra), and the truncated 24-cell (which cuts each edge one-third of its length from the vertex, producing a polytope bounded by 24 cubes and 24 truncated octahedra). In these truncations a cube is produced in place of the removed vertex, because the vertex figure of the 24-cell is a cube and the cuts are equidistant from the ver
|
https://en.wikipedia.org/wiki/Direct%20product%20of%20groups
|
In mathematics, specifically in group theory, the direct product is an operation that takes two groups G and H and constructs a new group, usually denoted G × H. This operation is the group-theoretic analogue of the Cartesian product of sets and is one of several important notions of direct product in mathematics.
In the context of abelian groups, the direct product is sometimes referred to as the direct sum, and is denoted G ⊕ H. Direct sums play an important role in the classification of abelian groups: according to the fundamental theorem of finite abelian groups, every finite abelian group can be expressed as the direct sum of cyclic groups.
Definition
Given groups G (with operation *) and H (with operation Δ), the direct product G × H is defined as follows:
The underlying set is the Cartesian product G × H, that is, the set of ordered pairs (g, h) with g ∈ G and h ∈ H.
The binary operation on G × H is defined componentwise: (g₁, h₁) · (g₂, h₂) = (g₁ * g₂, h₁ Δ h₂).
The resulting algebraic object satisfies the axioms for a group. Specifically:
Associativity The binary operation on G × H is associative.
Identity The direct product has an identity element, namely (1G, 1H), where 1G is the identity element of G and 1H is the identity element of H.
Inverses The inverse of an element (g, h) of G × H is the pair (g⁻¹, h⁻¹), where g⁻¹ is the inverse of g in G, and h⁻¹ is the inverse of h in H.
Examples
Let R be the group of real numbers under addition. Then the direct product R × R is the group of all two-component vectors (x, y) under the operation of vector addition:
(x₁, y₁) + (x₂, y₂) = (x₁ + x₂, y₁ + y₂).
Let R⁺ be the group of positive real numbers under multiplication. Then the direct product R⁺ × R⁺ is the group of all vectors in the first quadrant under the operation of component-wise multiplication
(x₁, y₁) × (x₂, y₂) = (x₁x₂, y₁y₂).
Let G and H be cyclic groups with two elements each:
G = {1, a} and H = {1, b}.
Then the direct product G × H is isomorphic to the Klein four-group:
G × H = {(1, 1), (1, b), (a, 1), (a, b)}.
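This isomorphism can be checked mechanically. The sketch below models each two-element cyclic group as {0, 1} under addition mod 2 (a choice of representation for illustration, not notation from the article) and verifies the two defining properties of the Klein four-group:

```python
from itertools import product

Z2 = [0, 1]  # cyclic group of order 2: {0, 1} under addition mod 2

def direct_product_op(x, y):
    """Componentwise operation on Z2 x Z2 (addition mod 2 in each slot)."""
    return tuple((a + b) % 2 for a, b in zip(x, y))

elements = list(product(Z2, Z2))  # the four pairs (0,0), (0,1), (1,0), (1,1)
identity = (0, 0)

# Klein four-group signature: every element is its own inverse...
assert all(direct_product_op(g, g) == identity for g in elements)
# ...and the operation is commutative.
assert all(direct_product_op(g, h) == direct_product_op(h, g)
           for g in elements for h in elements)
```

Any group of order four in which every element squares to the identity is isomorphic to the Klein four-group, so the two assertions suffice.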
Elementary properties
Algebraic structure
Let G and H be groups, let P = G × H, and consider the following two subsets of P:
G′ = {(g, 1) : g ∈ G} and H′ = {(1, h) : h ∈ H}.
Both of these are in fact subgroups of P, the first being isomorphic to G, and the second being isomorphic to H. If we identify these with G and H, respectively, then we can think of the direct product P as containing the original groups G and H as subgroups.
These subgroups of P have the following three important properties:
(Saying again that we identify G′ and H′ with G and H, respectively.)
The intersection G ∩ H is trivial.
Every element of P can be expressed uniquely as the product of an element of G and an element of H.
Every element of G commutes with every element of H.
Together, these three properties completely determine the algebraic structure of the direct product P. That is, if P is any group having subgroups G and H that satisfy the properties above, then P is necessarily isomorphic to the direct product of G and H. In this situation, P is sometimes referred to as the internal direct product of its subgroups G and H.
In some contexts, the third property above is replaced by the following:
3′. Both G and H are normal in P.
This property is equivalent to property 3, since the elements of two normal subgroups with trivial intersection necessarily commute, a fact which can be deduced by considering the commutator [g, h] of any g in G, h in H.
Examples
P
|
https://en.wikipedia.org/wiki/Louis%20Moresi
|
Louis-Noël Moresi (born 30 October 1965) is a Professor of Computational Mathematics & Geophysics at The Australian National University. He has deeply influenced the geophysics community through his own research as well as by providing software for the community to use.
Early career
The London-born Moresi began his scientific career at Kodak as a research assistant in 1985 where he worked with Dr John Goddard on the synthesis of stabilizers (anti-oxidants) for yellow dyes in photographic emulsions. In the same year, he began undergraduate studies at Clare College, Cambridge at the University of Cambridge. There he completed a Natural Sciences Tripos in 1988, with final year options in Seismology, Physics of the Earth and Environmental Science, taking classes under Dan McKenzie. In his last year he received the Horn Prize for his results in his final examinations.
From 1988 to 1992, he completed his doctoral studies in the Department of Earth Sciences at the University of Oxford. He focused his PhD thesis on the influence of mantle convection on surface observables such as topography and geoid. His particular emphasis was on the role of temperature-dependent viscosity and partial melting for both Earth and Venus.
Employment
From 1992 to 1995 he worked as a fellow in geophysics at Caltech. There he worked with Mike Gurnis on 3D dynamic models of subduction in the northwest Pacific Ocean, as well as on mantle convection in Earth and Venus with Slava Solomatov. After this he worked as a postdoctoral fellow at the Research School of Earth Sciences at ANU until 1997. He then moved to Perth, where he worked for CSIRO in the division of exploration and mining as a senior research scientist. There he studied large-scale continental deformation interacting with mantle convection.
In 2002 he moved to Monash University, where he was a professor as well as the co-director of Monash Cluster Computing, a parallel computing research centre. He was on the Australian Research Council College of Experts from 2012-2014. In 2014, he moved to Melbourne University through the Research at Melbourne Accelerator Program as a professor of geophysics. In 2019 he moved back to The Australian National University Research School of Earth Sciences as a Professor of Geophysics.
Software development
It was during Moresi's time at Caltech that he wrote the popular mantle convection software program called Citcom. This is a 2D and 3D Eulerian finite element code designed to solve problems with extremely large variations in viscosity. Though it was originally a Cartesian serial code, there are now many versions, including a spherical parallel version called CitcomS (see Citcom for the code's history).
During his time at CSIRO, he reworked Citcom using the particle-in-cell approach and created a new program called Ellipsis. Having Lagrangian integration points meant that scientists using the code could track history and material properties throu
|
https://en.wikipedia.org/wiki/Semantic%20neural%20network
|
The semantic neural network (SNN) is based on John von Neumann's neural network [von Neumann, 1966] and Nikolai Amosov's M-Network. Von Neumann's network places limitations on the link topology, whereas SNN accepts cases without these limitations. Von Neumann's network processes only logical values, whereas SNN accepts fuzzy values as well. All neurons in the von Neumann network are synchronized by clock ticks; to allow the later use of self-synchronizing circuit techniques, SNN accepts neurons that are either self-running or synchronized.
In contrast to the von Neumann network, semantic networks place no limitations on the topology of neurons. This makes the relative addressing of neurons, as used by von Neumann, impossible; absolute addressing must be used instead. Every neuron should have a unique identifier providing direct access to another neuron, and neurons interacting via axons and dendrites should know each other's identifiers. Absolute addressing can be modeled by using neuron specificity, as realized in biological neural networks.
The initial description of semantic networks [Dudar Z.V., Shuklin D.E., 2000] contains no description of self-reflectiveness and self-modification abilities. But in [Shuklin D.E. 2004] a conclusion was drawn about the necessity of introspection and self-modification abilities in the system. To support these abilities, a concept of a pointer to a neuron is provided. Pointers represent virtual connections between neurons. In this model, the neuron bodies and the signals transferred through their connections represent a physical body, while the virtual connections between neurons represent an astral body. It is proposed to create models of artificial neural networks on the basis of a virtual machine supporting the possibility of paranormal effects.
SNN is generally used for natural language processing.
Related models
Computational creativity
Semantic hashing
Semantic Pointer Architecture
Sparse distributed memory
References
Neumann, J., 1966. Theory of Self-Reproducing Automata, edited and completed by Arthur W. Burks. University of Illinois Press, Urbana and London.
Dudar Z.V., Shuklin D.E., 2000. Implementation of neurons for semantic neural nets that's understanding texts in natural language. In Radio-electronika i informatika, KhTURE, 2000. No. 4. P. 89–96.
Shuklin D.E., 2004. The further development of semantic neural network models. In Artificial Intelligence, Donetsk, "Nauka i obrazovanie" Institute of Artificial Intelligence, Ukraine, 2004. No. 3. P. 598–606.
Shuklin D.E. The Structure of a Semantic Neural Network Extracting the Meaning from a Text. In Cybernetics and Systems Analysis, Volume 37, Number 2, 4 March 2001, pp. 182–186.
Shuklin D.E. The Structure of a Semantic Neural Network Realizing Morphological and Syntactic Analysis of a Text. In Cybernetics and Systems Analysis, Volume 37, Number 5, September 2001, pp. 770–776.
Shuklin D.E. Realizat
|
https://en.wikipedia.org/wiki/NUTS%20statistical%20regions%20of%20Spain
|
In the NUTS (Nomenclature of Territorial Units for Statistics) codes of Spain (ES), the following are the first-level political and administrative divisions.
Overall
NUTS Codes
Local administrative units
Below the NUTS levels, the two LAU (Local Administrative Units) levels are:
The LAU codes of Spain can be downloaded here:
NUTS codes
Older Codes
In the 2003 version, the two provinces of the Canary Islands were coded as follows:
See also
Subdivisions of Spain
ISO 3166-2 codes of Spain
FIPS region codes of Spain
Sources
Hierarchical list of the Nomenclature of territorial units for statistics - NUTS and the Statistical regions of Europe
Overview map of EU Countries - NUTS level 1
ESPANA - NUTS level 2
ESPANA - NUTS level 3
Correspondence between the NUTS levels and the national administrative units
List of current NUTS codes
Download current NUTS codes (ODS format)
Provinces of Spain, Statoids.com
Spain
Nuts
|
https://en.wikipedia.org/wiki/Proof%20of%20impossibility
|
In mathematics, a proof of impossibility is a proof that demonstrates that a particular problem cannot be solved as described in the claim, or that a particular set of problems cannot be solved in general. Such a case is also known as a negative proof, proof of an impossibility theorem, or negative result. Proofs of impossibility often are the resolutions to decades or centuries of work attempting to find a solution, eventually proving that there is no solution. Proving that something is impossible is usually much harder than the opposite task, as it is often necessary to develop a proof that works in general, rather than to just show a particular example. Impossibility theorems are usually expressible as negative existential propositions or universal propositions in logic.
The irrationality of the square root of 2 is one of the oldest proofs of impossibility. It shows that it is impossible to express the square root of 2 as a ratio of two integers. Another consequential proof of impossibility was Ferdinand von Lindemann's proof in 1882, which showed that the problem of squaring the circle cannot be solved because the number is transcendental (i.e., non-algebraic), and that only a subset of the algebraic numbers can be constructed by compass and straightedge. Two other classical problems—trisecting the general angle and doubling the cube—were also proved impossible in the 19th century, and all of these problems gave rise to research into more complicated mathematical structures.
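The first of these proofs can be summarized in a few lines. Suppose, for contradiction, that √2 were a ratio of integers in lowest terms:

```latex
\sqrt{2} = \frac{a}{b}, \quad \gcd(a, b) = 1
\;\Longrightarrow\; a^2 = 2b^2
\;\Longrightarrow\; 2 \mid a^2
\;\Longrightarrow\; 2 \mid a.
```

Writing a = 2c then gives 4c² = 2b², hence b² = 2c², so 2 divides b as well, contradicting gcd(a, b) = 1. No such ratio exists, so √2 is irrational.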
A problem that arose in the 16th century was creating a general formula using radicals to express the solution of any polynomial equation of fixed degree k, where k ≥ 5. In the 1820s, the Abel–Ruffini theorem (also known as Abel's impossibility theorem) showed this to be impossible, using concepts such as solvable groups from Galois theory—a new sub-field of abstract algebra.
Some of the most important proofs of impossibility found in the 20th century were those related to undecidability, which showed that there are problems that cannot be solved in general by any algorithm, with one of the more prominent ones being the halting problem. Gödel's incompleteness theorems were other examples that uncovered fundamental limitations in the provability of formal systems.
In computational complexity theory, techniques like relativization (the addition of an oracle) allow for "weak" proofs of impossibility, in that proof techniques unaffected by relativization cannot resolve the P versus NP problem. Another technique is the proof of completeness for a complexity class, which provides evidence for the difficulty of problems by showing them to be just as hard to solve as any other problem in the class. In particular, a complete problem is intractable if one of the problems in its class is.
Types of proof
By contradiction
One of the widely used types of impossibility proof is proof by contradiction. In this type of proof, it is shown that if a proposition, such as a
|
https://en.wikipedia.org/wiki/Super%20vector%20space
|
In mathematics, a super vector space is a $\mathbb{Z}_2$-graded vector space, that is, a vector space over a field $\mathbb{K}$ with a given decomposition of subspaces of grade $0$ and grade $1$. The study of super vector spaces and their generalizations is sometimes called super linear algebra. These objects find their principal application in theoretical physics where they are used to describe the various algebraic aspects of supersymmetry.
Definitions
A super vector space is a $\mathbb{Z}_2$-graded vector space with decomposition
$V = V_0 \oplus V_1.$
Vectors that are elements of either $V_0$ or $V_1$ are said to be homogeneous. The parity of a nonzero homogeneous element $x$, denoted by $|x|$, is $0$ or $1$ according to whether it is in $V_0$ or $V_1$.
Vectors of parity $0$ are called even and those of parity $1$ are called odd. In theoretical physics, the even elements are sometimes called Bose elements or bosonic, and the odd elements Fermi elements or fermionic. Definitions for super vector spaces are often given only in terms of homogeneous elements and then extended to nonhomogeneous elements by linearity.
If $V$ is finite-dimensional and the dimensions of $V_0$ and $V_1$ are $p$ and $q$ respectively, then $V$ is said to have dimension $p|q$. The standard super coordinate space, denoted $\mathbb{K}^{p|q}$, is the ordinary coordinate space $\mathbb{K}^{p+q}$ where the even subspace is spanned by the first $p$ coordinate basis vectors and the odd space is spanned by the last $q$.
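A minimal computational sketch of these definitions, with a vector of the standard super coordinate space stored as an (even part, odd part) pair of coordinate lists. The helper names are hypothetical, not from any library:

```python
def parity(v):
    """Parity of a nonzero homogeneous vector v = (even_part, odd_part):
    0 if it lies entirely in the even summand, 1 if entirely in the odd one."""
    even, odd = v
    if any(even) and not any(odd):
        return 0
    if any(odd) and not any(even):
        return 1
    raise ValueError("zero or non-homogeneous vector has no parity")

# The space of dimension 2|3: even part has 2 coordinates, odd part has 3
assert parity(([1.0, 0.0], [0.0, 0.0, 0.0])) == 0   # even (bosonic)
assert parity(([0.0, 0.0], [2.0, 0.0, 0.0])) == 1   # odd (fermionic)
```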
A homogeneous subspace of a super vector space is a linear subspace that is spanned by homogeneous elements. Homogeneous subspaces are super vector spaces in their own right (with the obvious grading).
For any super vector space $V$, one can define the parity reversed space $\Pi V$ to be the super vector space with the even and odd subspaces interchanged. That is, $(\Pi V)_0 = V_1$ and $(\Pi V)_1 = V_0$.
Linear transformations
A homomorphism, a morphism in the category of super vector spaces, from one super vector space to another is a grade-preserving linear transformation. A linear transformation $f : V \to W$ between super vector spaces is grade preserving if $f(V_i) \subseteq W_i$ for $i = 0, 1$.
That is, it maps the even elements of $V$ to even elements of $W$ and odd elements of $V$ to odd elements of $W$. An isomorphism of super vector spaces is a bijective homomorphism. The set of all homomorphisms $V \to W$ is denoted $\mathrm{Hom}(V, W)$.
Every linear transformation, not necessarily grade-preserving, from one super vector space to another can be written uniquely as the sum of a grade-preserving transformation and a grade-reversing one—that is, a transformation $f$ such that $f(V_i) \subseteq W_{1-i}$.
Declaring the grade-preserving transformations to be even and the grade-reversing ones to be odd gives the space of all linear transformations from $V$ to $W$, denoted $\mathbf{Hom}(V, W)$ and called internal $\mathrm{Hom}$, the structure of a super vector space. In particular, $\mathbf{Hom}(V, W)_0 = \mathrm{Hom}(V, W)$.
A grade-reversing transformation from $V$ to $W$ can be regarded as a homomorphism from $V$ to the parity reversed space $\Pi W$, so that $\mathbf{Hom}(V, W)_1 = \mathrm{Hom}(V, \Pi W) = \mathrm{Hom}(\Pi V, W)$.
Operations on super vector spaces
The usual algebraic constructions for ordinary vector spaces have their counterpart in the super vector space setting.
Dual space
The dual space of a super vector space can be regarded as a sup
|
https://en.wikipedia.org/wiki/Cantellated%20tesseract
|
In four-dimensional geometry, a cantellated tesseract is a convex uniform 4-polytope, being a cantellation (a 2nd order truncation) of the regular tesseract.
There are four degrees of cantellation of the tesseract, including permutations with truncation. Two are also derived from the 24-cell family.
Cantellated tesseract
The cantellated tesseract, bicantellated 16-cell, or small rhombated tesseract is a convex uniform 4-polytope or 4-dimensional polytope bounded by 56 cells: 8 small rhombicuboctahedra, 16 octahedra, and 32 triangular prisms.
Construction
In the process of cantellation, a polytope's 2-faces are effectively shrunk. The rhombicuboctahedron can be called a cantellated cube, since if its six faces are shrunk in their respective planes, each vertex will separate into the three vertices of the rhombicuboctahedron's triangles, and each edge will separate into two of the opposite edges of the rhombicuboctahedron's twelve non-axial squares.
When the same process is applied to the tesseract, each of the eight cubes becomes a rhombicuboctahedron in the described way. In addition however, since each cube's edge was previously shared with two other cubes, the separating edges form the three parallel edges of a triangular prism—32 triangular prisms, since there were 32 edges. Further, since each vertex was previously shared with three other cubes, the vertex would split into 12 rather than three new vertices. However, since some of the shrunken faces continue to be shared, certain pairs of these 12 potential vertices are identical to each other, and therefore only 6 new vertices are created from each original vertex (hence the cantellated tesseract's 96 vertices compared to the tesseract's 16). These six new vertices form the vertices of an octahedron—16 octahedra, since the tesseract had 16 vertices.
Cartesian coordinates
The Cartesian coordinates of the vertices of a cantellated tesseract with edge length 2 are given by all permutations of:
$(\pm 1,\ \pm 1,\ \pm(1+\sqrt{2}),\ \pm(1+\sqrt{2}))$
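The vertex count of 96 can be checked by brute-force enumeration. The sketch below assumes the conventional coordinates for edge length 2, all permutations of (±1, ±1, ±(1+√2), ±(1+√2)):

```python
from itertools import permutations, product

r = 1 + 2 ** 0.5                 # 1 + sqrt(2)
base = (1.0, 1.0, r, r)

vertices = set()
for perm in permutations(base):              # coordinate permutations
    for signs in product((1, -1), repeat=4): # all sign combinations
        vertices.add(tuple(s * c for s, c in zip(signs, perm)))

print(len(vertices))  # 96, the cantellated tesseract's vertex count
```

The set deduplicates the repeated permutations of the two equal coordinate pairs: 6 distinct orderings times 16 sign choices gives 96.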
Structure
The 8 small rhombicuboctahedral cells are joined to each other via their axial square faces. Their non-axial square faces, which correspond with the edges of a cube, are connected to the triangular prisms. The triangular faces of the small rhombicuboctahedra and the triangular prisms are connected to the 16 octahedra.
Its structure can be imagined by means of the tesseract itself: the rhombicuboctahedra are analogous to the tesseract's cells, the triangular prisms are analogous to the tesseract's edges, and the octahedra are analogous to the tesseract's vertices.
Images
Projections
The following is the layout of the cantellated tesseract's cells under the parallel projection into 3-dimensional space, small rhombicuboctahedron first:
The projection envelope is a truncated cube.
The nearest and farthest small rhombicuboctahedral cells from the 4D viewpoint project to the volume of the same shape inscribed in the projection envelope.
The axial squares of this central small rhom
|
https://en.wikipedia.org/wiki/Biquadratic%20field
|
In mathematics, a biquadratic field is a number field K of a particular kind, which is a Galois extension of the rational number field Q with Galois group the Klein four-group.
Structure and subfields
Biquadratic fields are all obtained by adjoining two square roots. Therefore in explicit terms they have the form
K = Q(√a, √b)
for rational numbers a and b. There is no loss of generality in taking a and b to be non-zero and square-free integers.
According to Galois theory, there must be three quadratic fields contained in K, since the Galois group has three subgroups of index 2. The third subfield, to add to the evident Q(√a) and Q(√b), is Q(√(ab)).
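A quick numerical illustration of why the third quadratic subfield is generated by the product of the two adjoined square roots, using a = 2, b = 3 as an illustrative choice:

```python
import math

a, b = 2, 3  # illustrative non-zero, square-free integers
# In K = Q(√a, √b) the product √a·√b equals √(ab), so Q(√(ab)) ⊆ K:
assert math.isclose(math.sqrt(a) * math.sqrt(b), math.sqrt(a * b))
# The three quadratic subfields correspond to √2, √3 and √6:
print(math.sqrt(a), math.sqrt(b), math.sqrt(a * b))
```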
L-function
Biquadratic fields are the simplest examples of abelian extensions of Q that are not cyclic extensions. According to general theory the Dedekind zeta-function of such a field is a product of the Riemann zeta-function and three Dirichlet L-functions. Those L-functions are for the Dirichlet characters which are the Jacobi symbols attached to the three quadratic fields. Therefore taking the product of the Dedekind zeta-functions of the quadratic fields, multiplying them together, and dividing by the square of the Riemann zeta-function, is a recipe for the Dedekind zeta-function of the biquadratic field. This illustrates also some general principles on abelian extensions, such as the calculation of the conductor of a field.
Such L-functions have applications in analytic theory (Siegel zeroes), and in some of Kronecker's work.
References
Section 12 of
Algebraic number theory
Galois theory
|
https://en.wikipedia.org/wiki/Case%20analysis
|
Case analysis may refer to
Proof by cases in mathematics
Case study, detailed examination of a subject
The case method used in teaching
|
https://en.wikipedia.org/wiki/IPM%20School%20of%20Cognitive%20Sciences
|
The School of Cognitive Sciences forms part of the Institute for Studies in Theoretical Physics and Mathematics (IPM) in Tehran, Iran. The school was called the School of Intelligent Systems (SIS) until 2003, when it was renamed the School of Cognitive Sciences. The research is predominantly focused on cognitive neuroscience.
The research programs cover diverse areas including cognitive neuroscience, neural modeling, psychophysics, linguistics, neural networks and artificial intelligence. Since its inception the school has been managed by Prof. Caro Lucas (School of ECE, University of Tehran), Prof. Shahin Rouhani (Physics Department, Sharif University of Technology), Prof. Hossein Esteky (Shahid Beheshti University of Medical Sciences) and most recently by Prof. Mojtaba Zarei (Institute for Medical Science and Technology, Shahid Beheshti University).
The school earned enormous recognition with the publication of the article entitled "Microstimulation of inferotemporal cortex influences face categorization" by Seyed Reza Afraz, Roozbeh Kiani and Hossein Esteky in Nature. The article was published on August 10, 2006.
See also
Institute for Studies in Theoretical Physics and Mathematics
External links
IPM School of Cognitive Sciences
Research institutes in Iran
Cognitive science research institutes
|
https://en.wikipedia.org/wiki/Detlev%20Buchholz
|
Detlev Buchholz (born 31 May 1944) is a German theoretical physicist. He investigates quantum field theory, especially in the axiomatic framework of algebraic quantum field theory.
Biography
Buchholz studied physics in Hannover and Hamburg where he acquired his Diplom in 1968. After graduation, he continued his studies in Physics in Hamburg. In 1970–1971 he was at the University of Pennsylvania. After receiving his PhD in 1973 under Rudolf Haag he worked at the University of Hamburg and was in 1974–1975 at CERN. From 1975 to 1978 he worked as a research assistant in Hamburg, where he got his habilitation in 1977. In 1978–1979 he had a Max Kade grant at the University of California, Berkeley. In 1979 he was a professor in Hamburg and changed to the University of Göttingen in 1997. He retired in 2010 as professor emeritus.
Buchholz made contributions to relativistic quantum physics and quantum field theory, especially in the area of algebraic quantum field theory. Using the methods of Tomita–Takesaki theory, he obtained the split property from nuclearity conditions, a strong result about the locality of the theory. His contributions include the concept of infraparticles.
Honors and awards
In 1977 Detlev Buchholz won, together with Gert Strobl, the Physics Prize of the German Physical Society (today known as the Gustav-Hertz-Preis), and in 1979 the Physics Prize of the Göttingen Academy of Sciences. In 1995 Buchholz received the Japanese-German Research Award of the Japan Society for the Promotion of Science and the Alexander von Humboldt Foundation. In 1998 he was an Invited Speaker at the International Congress of Mathematicians in Berlin. He has been editor-in-chief of the scientific journal Reviews in Mathematical Physics. In 2008 Buchholz was awarded the Max Planck Medal for outstanding contributions to quantum field theory.
Selected works
(Article on Buchholz's receipt of the Planck medal.)
See also
Algebraic quantum field theory
Infraparticle
Local quantum physics
Quantum field theory
References
External links
Theoretical physicists
20th-century German physicists
21st-century German physicists
Winners of the Max Planck Medal
Academic staff of the University of Hamburg
Academic staff of the University of Göttingen
People associated with CERN
Scientists from Gdańsk
1944 births
Living people
|
https://en.wikipedia.org/wiki/Supergraph
|
In mathematics and physics, the word supergraph has several meanings:
In graph theory, if A is a subgraph of B, then B is said to be a supergraph of A.
In the context of particle physics, a supergraph is a Feynman diagram that calculates scattering amplitudes in a supersymmetric theory using the advantages of the superspace formalism.
Synonym for epigraph, i.e. the set of points lying on or above a function's graph.
|
https://en.wikipedia.org/wiki/Optical%20cross%20section
|
Optical cross section (OCS) is a value which describes the maximum amount of optical flux reflected back to the source. The standard unit of measurement is m2/sr. OCS is dependent on the geometry and the reflectivity at a particular wavelength of an object. Optical cross section is useful in fields such as LIDAR. In the field of radar this is referred to as radar cross-section. Objects such as license plates on automobiles have a high optical cross section to maximize the laser return to the speed detector gun.
Flat mirror
Optical cross section of a flat mirror with a given reflectivity at a particular wavelength can be expressed by the formula
where is the cross-sectional diameter of the beam. Note that the direction of the light has to be perpendicular to the mirror surface for this formula to be valid; otherwise the return from the mirror would no longer go back to its source.
In order to maximize the return a corner reflector is used. The alignment of a corner reflector with respect to the source is not as critical as the alignment of a flat mirror.
Other optical devices
Optical cross section is not limited to reflective surfaces. Optical devices such as telescopes and cameras will return some of the optical flux back to the source, since they contain optics that reflect some light. The optical cross section of a camera can vary over time due to the camera shutter opening and closing.
References
Cross section
Lidar
|
https://en.wikipedia.org/wiki/Supersonic%20wind%20tunnel
|
A supersonic wind tunnel is a wind tunnel that produces supersonic speeds (1.2 < M < 5).
The Mach number and flow are determined by the nozzle geometry. The Reynolds number is varied by changing the density level (pressure in the settling chamber). Therefore, a high pressure ratio is required (for a supersonic regime at M=4, this ratio is of the order of 10). Apart from that, condensation of moisture or even gas liquefaction can occur if the static temperature becomes cold enough. This means that a supersonic wind tunnel usually needs a drying or a pre-heating facility.
A supersonic wind tunnel has a large power demand, so most are designed for intermittent instead of continuous operation.
The first supersonic wind tunnel (with a cross section of 2 cm) was built at the National Physical Laboratory in England and started working in 1922.
Restrictions for supersonic tunnel operation
Minimum required pressure ratio
Optimistic estimate: the required pressure ratio is set by the total pressure ratio across a normal shock at the test-section Mach number M:
Examples:
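As a numerical sketch of this estimate, the standard normal-shock total-pressure relation (γ = 1.4 assumed) gives the order-of-10 figure quoted for M = 4:

```python
def total_pressure_ratio(M, g=1.4):
    """Total (stagnation) pressure ratio p02/p01 across a normal shock at Mach M."""
    a = ((g + 1) * M * M / ((g - 1) * M * M + 2)) ** (g / (g - 1))
    b = ((g + 1) / (2 * g * M * M - (g - 1))) ** (1 / (g - 1))
    return a * b

ratio = total_pressure_ratio(4.0)
print(ratio, 1 / ratio)  # about 0.14 and about 7, i.e. a required ratio of order 10
```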
Temperature effects: condensation
Temperature in the test section:
with a reservoir temperature of $T_0$ = 330 K, the static temperature falls to roughly $T$ = 70 K at $M$ = 4.
The velocity range is therefore limited by the reservoir temperature.
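The static-temperature drop can be sketched with the isentropic relation (γ = 1.4 assumed); the exact figure depends on the modelling assumptions, so treat the numbers as indicative:

```python
def static_temperature(T0, M, g=1.4):
    """Isentropic static temperature T = T0 / (1 + (g-1)/2 * M^2)."""
    return T0 / (1 + 0.5 * (g - 1) * M * M)

T = static_temperature(330.0, 4.0)
print(T)  # roughly 79 K: cold enough to condense moisture, hence drying/pre-heating
```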
Power requirements
The power required to run a supersonic wind tunnel is enormous, of the order of 50 MW per square meter of test section cross-sectional area. For this reason most wind tunnels operate intermittently using energy stored in high-pressure tanks. These wind tunnels are also called intermittent supersonic blowdown wind tunnels (of which a schematic preview is given below). Another way of achieving the huge power output is with the use of a vacuum storage tank. These tunnels are called indraft supersonic wind tunnels, and are seldom used because they are restricted to low Reynolds numbers. Some large countries have built major supersonic tunnels that run continuously; one is shown in the photo.
Other problems operating a supersonic wind tunnel include:
starting and unstart of the test section (related to maintaining at least a minimum pressure ratio)
adequate supply of dry air
wall interference effects due to shock wave reflection and (sometimes) blockage
high-quality instruments capable of rapid measurements due to short run times in intermittent tunnels
Tunnels such as a Ludwieg tube have short test times (usually less than one second), relatively high Reynolds number, and low power requirements.
Further reading
See also
Low speed wind tunnel
High speed wind tunnel
Hypersonic wind tunnel
Ludwieg tube
Shock tube
External links
Supersonic wind tunnel test demonstration (Mach 2.5) with flat plate and wedge creating an oblique shock(Video)
Fluid dynamics
Aerodynamics
Wind tunnels
|
https://en.wikipedia.org/wiki/Tobit%20model
|
In statistics, a tobit model is any of a class of regression models in which the observed range of the dependent variable is censored in some way. The term was coined by Arthur Goldberger in reference to James Tobin, who developed the model in 1958 to mitigate the problem of zero-inflated data for observations of household expenditure on durable goods. Because Tobin's method can be easily extended to handle truncated and other non-randomly selected samples, some authors adopt a broader definition of the tobit model that includes these cases.
Tobin's idea was to modify the likelihood function so that it reflects the unequal sampling probability for each observation depending on whether the latent dependent variable fell above or below the determined threshold. For a sample that, as in Tobin's original case, was censored from below at zero, the sampling probability for each non-limit observation is simply the height of the appropriate density function. For any limit observation, it is the cumulative distribution, i.e. the integral below zero of the appropriate density function. The tobit likelihood function is thus a mixture of densities and cumulative distribution functions.
The likelihood function
Below are the likelihood and log likelihood functions for a type I tobit. This is a tobit that is censored from below at $y_L$ when the latent variable $y_j^* \leq y_L$. In writing out the likelihood function, we first define an indicator function $I(\cdot)$:
Next, let $\Phi$ be the standard normal cumulative distribution function and $\varphi$ the standard normal probability density function. For a data set with N observations the likelihood function for a type I tobit is
and the log likelihood is given by
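A minimal numerical sketch of this likelihood (type I tobit, censored from below at zero, single regressor; pure Python with hypothetical helper names):

```python
import math

SQRT2 = math.sqrt(2.0)

def norm_logpdf(z):
    return -0.5 * z * z - 0.5 * math.log(2.0 * math.pi)

def norm_logcdf(z):
    return math.log(0.5 * (1.0 + math.erf(z / SQRT2)))

def tobit_loglik(beta, sigma, y, x, limit=0.0):
    """Type I tobit log-likelihood: a CDF term for each limit (censored)
    observation, a density term (with the 1/sigma Jacobian) otherwise."""
    ll = 0.0
    for yi, xi in zip(y, x):
        mu = beta * xi
        if yi <= limit:                      # limit observation
            ll += norm_logcdf((limit - mu) / sigma)
        else:                                # non-limit observation
            ll += norm_logpdf((yi - mu) / sigma) - math.log(sigma)
    return ll
```

The branching mirrors the description above: the likelihood mixes densities for uncensored observations with cumulative probabilities for censored ones.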
Reparametrization
The log-likelihood as stated above is not globally concave, which complicates the maximum likelihood estimation. Olsen suggested the simple reparametrization $\beta = \delta / \gamma$ and $\sigma = \gamma^{-1}$, resulting in a transformed log-likelihood,
which is globally concave in terms of the transformed parameters.
For the truncated (tobit II) model, Orme showed that while the log-likelihood is not globally concave, it is concave at any stationary point under the above transformation.
Consistency
If the relationship parameter is estimated by regressing the observed on , the resulting ordinary least squares regression estimator is inconsistent. It will yield a downwards-biased estimate of the slope coefficient and an upward-biased estimate of the intercept. Takeshi Amemiya (1973) has proven that the maximum likelihood estimator suggested by Tobin for this model is consistent.
Interpretation
The coefficient should not be interpreted as the effect of on , as one would with a linear regression model; this is a common error. Instead, it should be interpreted as the combination of
(1) the change in of those above the limit, weighted by the probability of being above the limit; and
(2) the change in the probability of being above the limit, weighted by the expected value of if above.
Variations o
|
https://en.wikipedia.org/wiki/Typographical%20Number%20Theory
|
Typographical Number Theory (TNT) is a formal axiomatic system describing the natural numbers that appears in Douglas Hofstadter's book Gödel, Escher, Bach. It is an implementation of Peano arithmetic that Hofstadter uses to help explain Gödel's incompleteness theorems.
Like any system implementing the Peano axioms, TNT is capable of referring to itself (it is self-referential).
Numerals
TNT does not use a distinct symbol for each natural number. Instead it makes use of a simple, uniform way of giving a compound symbol to each natural number:
zero: 0
one: S0
two: SS0
three: SSS0
four: SSSS0
five: SSSSS0
The symbol S can be interpreted as "the successor of", or "the number after". Since this is, however, a number theory, such interpretations are useful, but not strict. It cannot be said that because four is the successor of three that four is SSSS0, but rather that since three is the successor of two, which is the successor of one, which is the successor of zero, which has been described as 0, four can be "proved" to be SSSS0. TNT is designed such that everything must be proven before it can be said to be true.
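The numeral scheme is mechanical enough to sketch in a few lines (the helper names are hypothetical):

```python
def tnt_numeral(n):
    """Natural number -> TNT numeral: n applications of S to 0."""
    return "S" * n + "0"

def tnt_value(numeral):
    """TNT numeral -> natural number, by counting the leading S's."""
    assert numeral.endswith("0")
    return len(numeral) - 1

assert tnt_numeral(4) == "SSSS0"
assert tnt_value("SSSSS0") == 5
```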
Variables
In order to refer to unspecified terms, TNT makes use of five variables. These are
a, b, c, d, e.
More variables can be constructed by adding the prime symbol after them; for example,
a′, b′, c′, a″, a‴ are all variables.
In the more rigid version of TNT, known as "austere" TNT, only a, a′, a″, a‴, etc. are used.
Operators
Addition and multiplication of numerals
In Typographical Number Theory, the usual symbols of "+" for addition and "·" for multiplication are used. Thus to write "b plus c" is to write
(b + c)
and "a times d" is written as
(a·d)
The parentheses are required. Any laxness would violate TNT's formation system (although it is trivially proved this formalism is unnecessary for operations which are both commutative and associative). Also only two terms can be operated on at once. Therefore, to write "a plus b plus c" is to write either
((a + b) + c)
or
(a + (b + c))
Equivalency
The "Equals" operator is used to denote equivalence. It is defined by the symbol "=", and takes roughly the same meaning as it usually does in mathematics. For instance,
(SSS0 + SSS0) = SSSSSS0
is a theorem statement in TNT, with the interpretation "3 plus 3 equals 6".
Negation
In Typographical Number Theory, negation, i.e. the turning of a statement to its opposite, is denoted by the "~" or negation operator. For instance,
~((SSS0 + SSS0) = SSSSSSS0)
is a theorem in TNT, interpreted as "3 plus 3 is not equal to 7".
By negation, this means negation in Boolean logic (logical negation), rather than simply being the opposite. For example, if I were to say "I am eating a grapefruit", the opposite is "I am not eating a grapefruit", rather than "I am eating something other than a grapefruit". Similarly "The
|
https://en.wikipedia.org/wiki/Quasitriangular%20Hopf%20algebra
|
In mathematics, a Hopf algebra, H, is quasitriangular if there exists an invertible element, R, of $H \otimes H$ such that
$R \,\Delta(x)\, R^{-1} = (T \circ \Delta)(x)$ for all $x \in H$, where $\Delta$ is the coproduct on H, and the linear map $T : H \otimes H \to H \otimes H$ is given by $T(x \otimes y) = y \otimes x$,
$(\Delta \otimes 1)(R) = R_{13} R_{23}$,
$(1 \otimes \Delta)(R) = R_{13} R_{12}$,
where $R_{12} = \phi_{12}(R)$, $R_{13} = \phi_{13}(R)$, and $R_{23} = \phi_{23}(R)$, where $\phi_{12}, \phi_{13}, \phi_{23} : H \otimes H \to H \otimes H \otimes H$ are algebra morphisms determined by
$\phi_{12}(a \otimes b) = a \otimes b \otimes 1, \quad \phi_{13}(a \otimes b) = a \otimes 1 \otimes b, \quad \phi_{23}(a \otimes b) = 1 \otimes a \otimes b.$
R is called the R-matrix.
As a consequence of the properties of quasitriangularity, the R-matrix, R, is a solution of the Yang–Baxter equation (and so a module V of H can be used to determine quasi-invariants of braids, knots and links). Also as a consequence of the properties of quasitriangularity, $(\epsilon \otimes 1)(R) = (1 \otimes \epsilon)(R) = 1$; moreover
$R^{-1} = (S \otimes 1)(R)$, $R = (1 \otimes S)(R^{-1})$, and $(S \otimes S)(R) = R$. One may further show that the
antipode S must be a linear isomorphism, and thus S² is an automorphism. In fact, S² is given by conjugating by an invertible element: $S^2(x) = u x u^{-1}$, where $u = m(S \otimes 1)(R_{21})$ (cf. Ribbon Hopf algebras).
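Written out in the leg notation used above, the quantum Yang–Baxter equation satisfied by the R-matrix reads:

```latex
R_{12}\, R_{13}\, R_{23} \;=\; R_{23}\, R_{13}\, R_{12}
```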
It is possible to construct a quasitriangular Hopf algebra from a Hopf algebra and its dual, using the Drinfeld quantum double construction.
If the Hopf algebra H is quasitriangular, then the category of modules over H is braided with braiding
.
Twisting
The property of being a quasi-triangular Hopf algebra is preserved by twisting via an invertible element $F = \sum_i f^i \otimes f_i \in H \otimes H$ such that $(\varepsilon \otimes \mathrm{id})F = (\mathrm{id} \otimes \varepsilon)F = 1$ and satisfying the cocycle condition
$(F \otimes 1) \cdot (\Delta \otimes \mathrm{id})(F) = (1 \otimes F) \cdot (\mathrm{id} \otimes \Delta)(F).$
Furthermore, $u = \sum_i f^i S(f_i)$ is invertible and the twisted antipode is given by $S'(a) = u S(a) u^{-1}$, with the twisted comultiplication, R-matrix and co-unit changed according to those defined for the quasi-triangular quasi-Hopf algebra. Such a twist is known as an admissible (or Drinfeld) twist.
See also
Quasi-triangular quasi-Hopf algebra
Ribbon Hopf algebra
Notes
References
Hopf algebras
|
https://en.wikipedia.org/wiki/Conway%20knot
|
In mathematics, in particular in knot theory, the Conway knot (or Conway's knot) is a particular knot with 11 crossings, named after John Horton Conway.
It is related by mutation to the Kinoshita–Terasaka knot, with which it shares the same Jones polynomial. Both knots also have the curious property of having the same Alexander polynomial and Conway polynomial as the unknot.
The issue of the sliceness of the Conway knot was resolved in 2020 by Lisa Piccirillo, 50 years after John Horton Conway first proposed the knot. Her proof made use of Rasmussen's s-invariant, and showed that the knot is not a smoothly slice knot, though it is topologically slice (the Kinoshita–Terasaka knot is both).
References
External links
Conway knot on The Knot Atlas.
Conway knot illustrated by knotilus.
Prime knots and links
John Horton Conway
|
https://en.wikipedia.org/wiki/College%20Football%20News
|
College Football News (CFN) is a magazine and website published by College Football News, Inc., headquartered in Chicago, Illinois. News coverage includes scores, statistics, rankings, and reports on college football games. Analysis includes comparisons between teams, predictions of game outcomes and high-school recruiting information. They also give awards to players in various categories.
The website has fan discussion boards on topics relating to college football. Content from College Football News is used on partner sites, such as that of Fox Sports, and by independent organizations, such as the National Football League.
In the summer of 2006, the College Football News website joined the Scout.com Network. However, it maintains separate editorial selections of All-America teams.
References
Mack Brown Texas Football The University of Texas official football webpage
Orangemen recognized by CollegeFootballNews.com Syracuse University Athletics
Bowl Matchups Fox Sports
NFL Draft Analysis National Football League
Scout.com Scout.com
External links
College Football News
American football websites
Companies based in Chicago
|
https://en.wikipedia.org/wiki/Hidden%20subgroup%20problem
|
The hidden subgroup problem (HSP) is a topic of research in mathematics and theoretical computer science. The framework captures problems such as factoring, discrete logarithm, graph isomorphism, and the shortest vector problem. This makes it especially important in the theory of quantum computing because Shor's algorithm for factoring in quantum computing is an instance of the hidden subgroup problem for finite abelian groups, while the other problems correspond to finite groups that are not abelian.
Problem statement
Given a group $G$, a subgroup $H \leq G$, and a set $X$, we say a function $f : G \to X$ hides the subgroup $H$ if for all $g_1, g_2 \in G$, $f(g_1) = f(g_2)$ if and only if $g_1 H = g_2 H$. Equivalently, $f$ is constant on the cosets of H, while it is different between the different cosets of H.
Hidden subgroup problem: Let $G$ be a group, $X$ a finite set, and $f : G \to X$ a function that hides a subgroup $H \leq G$. The function $f$ is given via an oracle, which uses $O(\log |G| + \log |X|)$ bits. Using information gained from evaluations of $f$ via its oracle, determine a generating set for $H$.
A special case is when $X$ is a group and $f$ is a group homomorphism, in which case $H$ corresponds to the kernel of $f$.
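A toy classical illustration of the homomorphism special case: take G = Z₁₂ and reduction mod 4 as the hiding function, so the hidden subgroup is the kernel {0, 4, 8}:

```python
# Toy example: G = Z_12 (integers mod 12), f(x) = x mod 4.
# f is a homomorphism onto Z_4, so it hides its kernel H = {0, 4, 8}.
N, r = 12, 4
f = lambda x: x % r

H = {x for x in range(N) if f(x) == f(0)}   # the hidden subgroup (kernel)
assert H == {0, 4, 8}

# f hides H: equal values exactly on equal cosets g + H
for g1 in range(N):
    for g2 in range(N):
        same_coset = (g1 - g2) % N in H
        assert (f(g1) == f(g2)) == same_coset
print("hidden subgroup:", sorted(H))
```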
Motivation
The hidden subgroup problem is especially important in the theory of quantum computing for the following reasons.
Shor's algorithm for factoring and for finding discrete logarithms (as well as several of its extensions) relies on the ability of quantum computers to solve the HSP for finite abelian groups.
The existence of efficient quantum algorithms for HSPs for certain non-abelian groups would imply efficient quantum algorithms for two major problems: the graph isomorphism problem and certain shortest vector problems (SVPs) in lattices. More precisely, an efficient quantum algorithm for the HSP for the symmetric group would give a quantum algorithm for the graph isomorphism. An efficient quantum algorithm for the HSP for the dihedral group would give a quantum algorithm for the unique SVP.
Algorithms
There is an efficient quantum algorithm for solving HSP over finite abelian groups in time polynomial in $\log |G|$. For arbitrary groups, it is known that the hidden subgroup problem is solvable using a polynomial number of evaluations of the oracle. However, the circuits that implement this may be exponential in $\log |G|$, making the algorithm not efficient overall; efficient algorithms must be polynomial in the number of oracle evaluations and running time. The existence of such an algorithm for arbitrary groups is open. Quantum polynomial time algorithms exist for certain subclasses of groups, such as semi-direct products of some abelian groups.
Algorithm for abelian groups
The algorithm for abelian groups uses representations, i.e. homomorphisms from to , the general linear group over the complex numbers. A representation is irreducible if it cannot be expressed as the direct product of two or more representations of . For an abelian group, all the irreducible representations are the characters, which are the representations of dimension one; there are no irredu
|
https://en.wikipedia.org/wiki/Variable%20%28mathematics%29
|
In mathematics, a variable (from Latin variabilis, "changeable") is a symbol that represents a mathematical object. A variable may represent a number, a vector, a matrix, a function, the argument of a function, a set, or an element of a set.
Algebraic computations with variables as if they were explicit numbers solve a range of problems in a single computation. For example, the quadratic formula solves any quadratic equation by substituting the numeric values of the coefficients of that equation for the variables that represent them in the quadratic formula. In mathematical logic, a variable is either a symbol representing an unspecified term of the theory (a meta-variable), or a basic object of the theory that is manipulated without referring to its possible intuitive interpretation.
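The substitution the paragraph describes is easy to sketch (a hypothetical helper, using complex square roots so every quadratic equation is covered):

```python
import cmath

def solve_quadratic(a, b, c):
    """Roots of a*x**2 + b*x + c = 0, by substituting the coefficients
    into the quadratic formula x = (-b ± sqrt(b^2 - 4ac)) / (2a)."""
    d = cmath.sqrt(b * b - 4 * a * c)
    return (-b + d) / (2 * a), (-b - d) / (2 * a)

print(solve_quadratic(1, -5, 6))  # roots of x^2 - 5x + 6 = 0: 3 and 2
```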
History
In ancient works such as Euclid's Elements, single letters refer to geometric points and shapes. In the 7th century, Brahmagupta used different colours to represent the unknowns in algebraic equations in the Brāhmasphuṭasiddhānta. One section of this book is called "Equations of Several Colours".
At the end of the 16th century, François Viète introduced the idea of representing known and unknown numbers by letters, nowadays called variables, and the idea of computing with them as if they were numbers—in order to obtain the result by a simple replacement. Viète's convention was to use consonants for known values, and vowels for unknowns.
In 1637, René Descartes "invented the convention of representing unknowns in equations by x, y, and z, and knowns by a, b, and c". Contrary to Viète's convention, Descartes's is still commonly in use. The history of the letter x in math was discussed in an 1887 Scientific American article.
Starting in the 1660s, Isaac Newton and Gottfried Wilhelm Leibniz independently developed the infinitesimal calculus, which essentially consists of studying how an infinitesimal variation of a variable quantity induces a corresponding variation of another quantity which is a function of the first variable. Almost a century later, Leonhard Euler fixed the terminology of infinitesimal calculus, and introduced the notation for a function , its variable and its value . Until the end of the 19th century, the word variable referred almost exclusively to the arguments and the values of functions.
In the second half of the 19th century, it appeared that the foundation of infinitesimal calculus was not formalized enough to deal with apparent paradoxes such as a nowhere differentiable continuous function. To solve this problem, Karl Weierstrass introduced a new formalism consisting of replacing the intuitive notion of limit by a formal definition. The older notion of limit was "when the variable $x$ varies and tends toward $a$, then $f(x)$ tends toward $L$", without any accurate definition of "tends". Weierstrass replaced this sentence by the formula
$(\forall \epsilon > 0)(\exists \eta > 0)(\forall x)\; |x - a| < \eta \Rightarrow |f(x) - L| < \epsilon,$
in which none of the five variables is considered as varying.
This static formulation led to the modern n
|
https://en.wikipedia.org/wiki/Sot
|
Sot or SOT may refer to:
Mathematics, science, and technology
Small-outline transistor
Society of Toxicology, U.S.
Sound on tape, in television broadcasting
Strong operator topology, in mathematics
Places
Sot (village), Vojvodina, Serbia
Sodankylä Airfield, Sodankylä, Lapland, Finland, IATA code
Stoke-on-Trent railway station, England, station code
Other uses
Sotho language, a Bantu language of South Africa, ISO 639 code
Special Occupational Taxpayers, some US Firearm Licensees
Gamasot or sot, a Korean cauldron
Gazeta Sot, a daily newspaper in Albania
See also
Sots (disambiguation)
|
https://en.wikipedia.org/wiki/Quantum%20mirage
|
In physics, a quantum mirage is a peculiar result in quantum chaos. Every system of quantum dynamical billiards will exhibit an effect called scarring, where the quantum probability density shows traces of the paths a classical billiard ball would take. For an elliptical arena, the scarring is particularly pronounced at the foci, as this is the region where many classical trajectories converge. The scars at the foci are colloquially referred to as the "quantum mirage".
The quantum mirage was first experimentally observed by Hari Manoharan, Christopher Lutz and Donald Eigler at the IBM Almaden Research Center in San Jose, California in 2000. The effect is quite remarkable but in general agreement with prior work on the quantum mechanics of dynamical billiards in elliptical arenas.
Quantum corral
The mirage occurs at the foci of a quantum corral, a ring of atoms arranged in an arbitrary shape on a substrate. The quantum corral was demonstrated in 1993 by Lutz, Eigler, and Crommie, who used the tip of a low-temperature scanning tunneling microscope to arrange individual iron atoms into an elliptical ring on a copper surface. The ferromagnetic iron atoms reflected the surface electrons of the copper inside the ring into a wave pattern, as predicted by the theory of quantum mechanics.
Quantum corrals can be viewed as artificial atoms that even show similar chemical bonding properties as real atoms.
The size and shape of the corral determine its quantum states, including the energy and distribution of the electrons. To make conditions suitable for the mirage the team at Almaden chose a configuration of the corral which concentrated the electrons at the foci of the ellipse.
When scientists placed a magnetic cobalt atom at one focus of the corral, a mirage of the atom appeared at the other focus: the same electronic properties were present in the electrons surrounding both foci, even though the cobalt atom was present at only one focus. In scanning tunneling microscopy, an atomically sharp metal tip is advanced toward the atomically flat sample surface until electron tunneling out of the sample and into the advancing tip becomes effective. The sharp tip can also be used to arrange adsorbed atoms into chosen shapes; for example, 48 iron atoms adsorbed on Cu(111) have been arranged into a circle 14.26 nm in diameter. The electrons on the copper surface are trapped inside the circle formed by the iron atoms, and a standing wave pattern with a large peak at the center emerges from the constructive interference of surface electrons scattering off the adsorbed iron atoms.
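The central peak described above can be illustrated with a highly idealized sketch (a particle in a circular hard-walled box, not a simulation of the IBM experiment): the lowest circularly symmetric mode of such a box is a Bessel function J0, whose squared amplitude is largest at the center and vanishes at the ring of atoms.

```python
import numpy as np

def j0(x):
    # Bessel J0 from its integral representation:
    # J0(x) = (1/pi) * integral_0^pi cos(x sin t) dt  (trapezoidal rule)
    t = np.linspace(0.0, np.pi, 2001)
    f = np.cos(np.multiply.outer(x, np.sin(t)))
    dt = t[1] - t[0]
    return (f.sum(axis=-1) - 0.5 * (f[..., 0] + f[..., -1])) * dt / np.pi

# Locate the first zero of J0 by bisection, so the wave vanishes at the wall.
lo, hi = 2.0, 3.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if j0(lo) * j0(mid) <= 0:
        hi = mid
    else:
        lo = mid
k = 0.5 * (lo + hi)                # first zero of J0, about 2.4048

r = np.linspace(0.0, 1.0, 201)     # radius in units of the corral radius
density = j0(k * r) ** 2           # |psi|^2 of the lowest circular mode

assert abs(k - 2.404826) < 1e-4
assert density.argmax() == 0       # large central peak, as observed
assert density[-1] < 1e-6          # node at the ring of atoms
```

The model ignores the finite reflectivity of the atom ring and the cobalt adatom entirely; it only shows why constructive interference concentrates probability density at the center of a circular corral.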
Applications
IBM scientists are hoping to use quantum mirages to construct atomic scale processors in the future.
References
External links
"Quantum Mirage" may enable atom-scale circuits, IBM Research Almaden, 3rd Feb 2000
Theory of Quantum Corrals and Quantum Mirages
Microscopic theory for quantum mirages in quantum corrals
Quantum elec
|
https://en.wikipedia.org/wiki/Roger%20Jones%20%28mathematician%29
|
Roger L. Jones is an American mathematician specializing in harmonic analysis and ergodic theory.
Biography
He obtained a B.S. in mathematics in 1971 from the University at Albany, SUNY, and a Ph.D. in mathematics in 1974 from Rutgers University, with the thesis Inequalities for the Ergodic Maximal Function written under the direction of Richard Floyd Gundy. He retired from a professorship in mathematics at DePaul University in Chicago, where he taught everything from remedial math to graduate-level courses. During his tenure at DePaul, Jones published numerous research papers in mathematics, received an excellence-in-teaching award, chaired the Department of Mathematics, and was awarded National Science Foundation grants related to teaching mathematics. He has also worked with the Chicago Public Schools on improving math instruction.
Jones was honored for his research at the International Conference on Harmonic Analysis and Ergodic Theory held in honor of him and his colleague Marshall Ash.
After retiring from DePaul, Jones moved to northern Wisconsin, where he teaches mathematics at Conserve School.
Appointments
1974-1977: DePaul University, Assistant Professor
1977-1984: DePaul University, Associate Professor
1982-1985: DePaul University, Chairman: Department of Mathematics
1984-2004: DePaul University, Professor
2004–present: DePaul University, Professor Emeritus
Professional memberships
Mathematical Association of America
American Mathematical Society
Publications
References
External links
Roger Jones' DePaul Syllabus
Conference on Harmonic Analysis and Ergodic Theory
Living people
Year of birth missing (living people)
Rutgers University alumni
20th-century American mathematicians
21st-century American mathematicians
DePaul University faculty
University at Albany, SUNY alumni
Mathematical analysts
Mathematics educators
|
https://en.wikipedia.org/wiki/Outcome
|
Outcome may refer to:
Outcome (probability), the result of an experiment in probability theory
Outcome (game theory), the result of players' decisions in game theory
The Outcome, a 2005 Spanish film
An outcome measure (or endpoint) in a clinical trial
The National Outcomes adopted as targets by the Scottish Government
See also
Outcome-based education
Outcomes theory
|
https://en.wikipedia.org/wiki/Capable
|
Capable may refer to:
, a World War II minesweeper
, an ocean surveillance ship
the defining property of a member of a capable group in mathematics
|
https://en.wikipedia.org/wiki/Intercontinental%20Cup%20records%20and%20statistics
|
Statistics for the Intercontinental Cup, which ran from 1960 to 2004.
Finals
By country
By team
By continent
After the events of the 1969 Intercontinental Cup, many European Cup champions refused to play in the Intercontinental Cup. On five occasions, they were replaced by the tournament's runners-up. Two Intercontinental Cups were called off after the runners-up also declined to participate.
Man of the Match
Since 1980
See also
Intercontinental Cup
FIFA Club World Cup, the Intercontinental Cup's succeeding competition
Copa Libertadores
UEFA Champions League
Notes
References
Statistics
International club association football competition records and statistics
|
https://en.wikipedia.org/wiki/Arnold%20tongue
|
In mathematics, particularly in dynamical systems, Arnold tongues (named after Vladimir Arnold) are a pictorial phenomenon that occur when visualizing how the rotation number of a dynamical system, or other related invariant property thereof, changes according to two or more of its parameters. The regions of constant rotation number have been observed, for some dynamical systems, to form geometric shapes that resemble tongues, in which case they are called Arnold tongues.
Arnold tongues are observed in a large variety of natural phenomena that involve oscillating quantities, such as the concentration of enzymes and substrates in biological processes and cardiac electric waves. Sometimes the frequency of one oscillation depends on, or is constrained by (i.e., phase-locked or mode-locked to, in some contexts), another quantity, and it is often of interest to study this relation. For instance, the onset of a tumor triggers a series of oscillations of interacting substances (mainly proteins) in the affected area; simulations show that these interactions cause Arnold tongues to appear, that is, the frequency of some oscillations constrains the others, and this can be used to control tumor growth.
Other examples where Arnold tongues can be found include the inharmonicity of musical instruments, orbital resonance and tidal locking of orbiting moons, mode-locking in fiber optics and phase-locked loops and other electronic oscillators, as well as in cardiac rhythms, heart arrhythmias and cell cycle.
One of the simplest physical models that exhibits mode-locking consists of two rotating disks connected by a weak spring. One disk is allowed to spin freely, and the other is driven by a motor. Mode locking occurs when the freely-spinning disk turns at a frequency that is a rational multiple of that of the driven rotator.
The simplest mathematical model that exhibits mode-locking is the circle map, which attempts to capture the motion of the spinning disks at discrete time intervals.
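As a minimal numerical sketch (assuming the standard parameterization θ_{n+1} = θ_n + Ω − (K/2π) sin 2πθ_n; the function name is illustrative), the rotation number of the circle map can be estimated by iterating its lift on the real line. With the coupling off, the rotation number equals Ω; with coupling on, values of Ω inside a tongue lock onto a rational rotation number.

```python
import math

def rotation_number(omega, K, n_transient=500, n_iter=5000):
    """Estimate the rotation number of the standard circle map
    theta_{n+1} = theta_n + omega - (K / (2*pi)) * sin(2*pi*theta_n),
    iterating the lift on the real line and averaging the drift."""
    theta = 0.0
    for _ in range(n_transient):
        theta += omega - (K / (2 * math.pi)) * math.sin(2 * math.pi * theta)
    start = theta
    for _ in range(n_iter):
        theta += omega - (K / (2 * math.pi)) * math.sin(2 * math.pi * theta)
    return (theta - start) / n_iter

# With no coupling (K = 0) the rotation number is just omega.
assert abs(rotation_number(1/3, 0.0) - 1/3) < 1e-9

# With coupling on, omega = 1/2 sits inside the 1/2 Arnold tongue:
# the orbit locks onto a period-2 cycle with rotation number exactly 1/2.
assert abs(rotation_number(0.5, 1.0) - 0.5) < 1e-6
```

Sweeping (Ω, K) over a grid and coloring each point by its estimated rotation number reproduces the familiar tongue diagram.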
Standard circle map
Arnold tongues appear most frequently when studying the interaction between oscillators, particularly in the case where one oscillator drives another. That is, one oscillator depends on the other but not the other way around, so they do not mutually influence each other as happens in Kuramoto models, for example. This is a particular case of driven oscillators, with a driving force that has a periodic behaviour. As a practical example, heart cells (the external oscillator) produce periodic electric signals to stimulate heart contractions (the driven oscillator); here, it could be useful to determine the relation between the frequency of the oscillators, possibly to design better artificial pacemakers. The family of circle maps serves as a useful mathematical model for this biological phenomenon, as well as many others.
The family of circle maps are functions (or endomorphisms) of the circle to itself. It is mathematically simpler to consider a point in the c
|
https://en.wikipedia.org/wiki/Type%20%28model%20theory%29
|
In model theory and related areas of mathematics, a type is an object that describes how a (real or possible) element or finite collection of elements in a mathematical structure might behave. More precisely, it is a set of first-order formulas in a language L with free variables x1, x2,…, xn that are true of a set of n-tuples of an L-structure . Depending on the context, types can be complete or partial and they may use a fixed set of constants, A, from the structure . The question of which types represent actual elements of leads to the ideas of saturated models and omitting types.
Formal definition
Consider a structure for a language L. Let M be the universe of the structure. For every A ⊆ M, let L(A) be the language obtained from L by adding a constant ca for every a ∈ A. In other words,
A 1-type (of ) over A is a set p(x) of formulas in L(A) with at most one free variable x (therefore 1-type) such that for every finite subset p0(x) ⊆ p(x) there is some b ∈ M, depending on p0(x), with (i.e. all formulas in p0(x) are true in when x is replaced by b).
Similarly an n-type (of ) over A is defined to be a set p(x1,…,xn) = p(x) of formulas in L(A), each having its free variables occurring only among the given n free variables x1,…,xn, such that for every finite subset p0(x) ⊆ p(x) there are some elements b1,…,bn ∈ M with .
A complete type of over A is one that is maximal with respect to inclusion. Equivalently, for every either or . Any non-complete type is called a partial type.
So, the word type in general refers to any n-type, partial or complete, over any chosen set of parameters (possibly the empty set).
An n-type p(x) is said to be realized in if there is an element b ∈ Mn such that . The existence of such a realization is guaranteed for any type by the compactness theorem, although the realization might take place in some elementary extension of , rather than in itself.
If a complete type is realized by b in , then the type is typically denoted and referred to as the complete type of b over A.
A type p(x) is said to be isolated by , for , if for all we have . Since finite subsets of a type are always realized in , there is always an element b ∈ Mn such that φ(b) is true in ; i.e. , thus b realizes the entire isolated type. So isolated types will be realized in every elementary substructure or extension. Because of this, isolated types can never be omitted (see below).
A model that realizes the maximum possible variety of types is called a saturated model, and the ultrapower construction provides one way of producing saturated models.
Examples of types
Consider the language L with one binary relation symbol, which we denote as . Let be the structure for this language, which is the ordinal with its standard well-ordering. Let denote the first-order theory of .
Consider the set of L(ω)-formulas . First, we claim this is a type. Let be a finite subset of . We need to find a that satisfies all the formulas in .
|
https://en.wikipedia.org/wiki/Ternary
|
Ternary (from Latin ternarius) or trinary is an adjective meaning "composed of three items". It can refer to:
Mathematics and logic
Ternary numeral system, a base-3 counting system
Balanced ternary, a positional numeral system, useful for comparison logic
Ternary logic, a logic system with the values true, false, and some other value
Ternary plot or ternary graph, a plot that shows the ratios of three proportions
Ternary relation, a finitary relation in which the number of places in the relation is three
Ternary operation, an operation that takes three parameters
Ternary function, a function that takes three arguments
Computing
Ternary signal, a signal that can assume three significant values
Ternary computer, a computer using a ternary numeral system
Ternary tree, a tree data structure in computer science
Ternary search tree, a ternary (three-way) tree data structure of strings
Ternary search, a computer science technique for finding the minimum or maximum of a function
Ternary heap, a data structure in computer science
Ternary Golay code, a perfect [11, 6, 5] ternary linear code
?:, a ternary conditional operator used for basic conditional expressions in several programming languages
Other uses
Ternary complex, a complex formed by the interaction of three molecules
Ternary compound, a type of chemical compound
Ternary form, a form used for structuring music
Ternary name for any taxon below the rank of species
See also
Tertiary (disambiguation)
Binary (disambiguation)
Quaternary (disambiguation)
|
https://en.wikipedia.org/wiki/Spacetime%20%28disambiguation%29
|
Spacetime is a mathematical model that combines space and time into a single continuum, used in mathematics and physics.
Spacetime, space-time, space time or Space and time may also refer to:
Science and mathematics
Complex spacetime, a theoretical extension of spacetime into complex-valued space and time coordinates
Spacetime diagram, a diagram in the theory of relativity
Space time (chemical engineering), a unit or measure of reaction time
Space and time in Kant's Critique of Pure Reason
Computing
SpaceTime (software), 3D search engine software
Space–time tradeoff, a concept in computing
Space–time code (STC), a technique in data transmission
Music
Jonah Sharp or Spacetime Continuum, music producer
"Space Time", a song on The Shamen album Boss Drum
"Space and Time", a song on The Verve album Urban Hymns
"Spacetime", a song by Tinashe from Nightride
"Time and Space", a song by +/- from Let's Build a Fire
Other uses
SpaceTime, a role-playing game
"Space" and "Time", two mini-episodes of the TV series Doctor Who
"Spacetime" (Agents of S.H.I.E.L.D.), a season 3 episode of the TV series
Space and Time (magazine), a magazine featuring speculative fiction
See also
Time (disambiguation)
Space (disambiguation)
Timespace (disambiguation)
|
https://en.wikipedia.org/wiki/Friedrich%20Wilhelm%20Levi
|
Friedrich Wilhelm Daniel Levi (February 6, 1888 – January 1, 1966) was a German mathematician known for his work in abstract algebra, especially torsion-free abelian groups. He also worked in geometry, topology, set theory, and analysis.
Early life and education
Levi was born to Georg Levi and Emma Blum in Mulhouse in Alsace-Lorraine, then part of the German Empire. He received his Ph.D. in 1911 under Heinrich Martin Weber at the University of Strasbourg.
Career
Levi served his mandatory military service in the German Army in 1906–1907, and was called up again serving in the artillery during World War I, 1914–18. Awarded the Iron Cross, he was discharged as a lieutenant. In 1917, he married Barbara Fitting, with whom he eventually had three children (Paul Levi, Charlotte, and Suzanne). He taught at the University of Leipzig from 1920 to 1935, when the Nazi government dismissed him because of his Jewish ancestry. Friedrich and Barbara moved to Calcutta, India.
In 1935 he accepted an offer as head of the Mathematics Department at the University of Calcutta. He introduced the Levi graph in 1940 at a series of lectures on finite geometry. He contributed to the understanding of combinatorics on words when he articulated the Levi lemma in an article for the Calcutta Mathematical Society. In 1948, Levi became professor of mathematics at Tata Institute of Fundamental Research in Mumbai, India. According to Raghavan Narasimhan, Levi had an important influence on the development of 20th century mathematics in India, especially by introducing modern algebra at the University of Calcutta.
In 1952, he returned to Germany and was a professor at the Free University of Berlin and later University of Freiburg. He died in Freiburg on the first day of 1966. A bibliography of 70 works in mathematics by Levi is included in the 1991 tribute by László Fuchs and Rüdiger Göbel.
Selected publications
Abelsche Gruppen mit abzählbaren Elementen. B. G. Teubner, Leipzig [1919]. (Habilitationsschrift, Universität Leipzig)
Geometrische Konfigurationen. Hirzel, Leipzig 1929.
Reinhold Baer and Friedrich Levi: Ränder topologischer Räume. Hirzel, Leipzig 1930.
On the fundamentals of analysis. Six lectures delivered in February 1938 at the University of Calcutta. University of Calcutta, Calcutta 1939.
F. W. Levi and R. N. Sen: Plane geometry. Calcutta 1939.
Finite geometrical systems. Six public lectures delivered in February 1940 at the University of Calcutta. University of Calcutta, Calcutta 1942.
Algebra. University of Calcutta, Calcutta 1942.
References
External links
20th-century German mathematicians
German Army personnel of World War I
Group theorists
University of Strasbourg alumni
Academic staff of the University of Calcutta
Academic staff of Leipzig University
Academic staff of the Free University of Berlin
Academic staff of the University of Freiburg
People from Alsace-Lorraine
Scientists from Mulhouse
Alsatian Jews
Levites
1888 births
1966 deaths
Presi
|
https://en.wikipedia.org/wiki/Neusis%20construction
|
In geometry, the neusis (plural: neuseis) is a geometric construction method that was used in antiquity by Greek mathematicians.
Geometric construction
The neusis construction consists of fitting a line element of given length (a) in between two given lines (l and m), in such a way that the line element, or its extension, passes through a given point P. That is, one end of the line element has to lie on l, the other end on m, while the line element is "inclined" toward P.
Point P is called the pole of the neusis, line l the directrix, or guiding line, and line m the catch line. Length a is called the diastema (Greek for "distance").
A neusis construction might be performed by means of a marked ruler that is rotatable around the point P (this may be done by putting a pin into the point and then pressing the ruler against the pin). In the figure, one end of the ruler is marked with a yellow eye with crosshairs: this is the origin of the scale division on the ruler. A second marking on the ruler (the blue eye) indicates the distance a from the origin. The yellow eye is moved along line l, until the blue eye coincides with line m. The position of the line element thus found is shown in the figure as a dark blue bar.
Use of the neusis
Neuseis have been important because they sometimes provide a means to solve geometric problems that are not solvable by means of compass and straightedge alone. Examples are the trisection of any angle in three equal parts, and the doubling of the cube. Mathematicians such as Archimedes of Syracuse (287–212 BC) and Pappus of Alexandria (290–350 AD) freely used neuseis; Sir Isaac Newton (1642–1726) followed their line of thought, and also used neusis constructions. Nevertheless, gradually the technique dropped out of use.
Regular polygons
In 2002, A. Baragar showed that every point constructible with marked ruler and compass lies in a tower of fields over , , such that the degree of the extension at each step is no higher than 6. Of all prime-power polygons below the 128-gon, this is enough to show that the regular 23-, 29-, 43-, 47-, 49-, 53-, 59-, 67-, 71-, 79-, 83-, 89-, 103-, 107-, 113-, 121-, and 127-gons cannot be constructed with neusis. (If a regular p-gon is constructible, then is constructible, and in these cases p − 1 has a prime factor higher than 5.) The 3-, 4-, 5-, 6-, 8-, 10-, 12-, 15-, 16-, 17-, 20-, 24-, 30-, 32-, 34-, 40-, 48-, 51-, 60-, 64-, 68-, 80-, 85-, 96-, 102-, 120-, and 128-gons can be constructed with only a straightedge and compass, and the 7-, 9-, 13-, 14-, 18-, 19-, 21-, 26-, 27-, 28-, 35-, 36-, 37-, 38-, 39-, 42-, 52-, 54-, 56-, 57-, 63-, 65-, 70-, 72-, 73-, 74-, 76-, 78-, 81-, 84-, 91-, 95-, 97-, 104-, 105-, 108-, 109-, 111-, 112-, 114-, 117-, 119-, and 126-gons with angle trisection. However, it is not known in general if all quintics (fifth-order polynomials) have neusis-constructible roots, which is relevant for the 11-, 25-, 31-, 41-, 61-, 101-, and 125-gons. Benjamin and Snyder showed in 2014 that the regular 11-
|
https://en.wikipedia.org/wiki/Snub%20polyhedron
|
In geometry, a snub polyhedron is a polyhedron obtained by performing a snub operation: alternating a corresponding omnitruncated or truncated polyhedron, depending on the definition. Some, but not all, authors include antiprisms as snub polyhedra, as they are obtained by this construction from a degenerate "polyhedron" with only two faces (a dihedron).
Chiral snub polyhedra do not always have reflection symmetry and hence sometimes have two enantiomorphous (left- and right-handed) forms which are reflections of each other. Their symmetry groups are all point groups.
For example, the snub cube:
Snub polyhedra have Wythoff symbol and by extension, vertex configuration . Retrosnub polyhedra (a subset of the snub polyhedron, containing the great icosahedron, small retrosnub icosicosidodecahedron, and great retrosnub icosidodecahedron) still have this form of Wythoff symbol, but their vertex configurations are instead
List of snub polyhedra
Uniform
There are 12 uniform snub polyhedra, not including the antiprisms, the icosahedron as a snub tetrahedron, the great icosahedron as a retrosnub tetrahedron and the great disnub dirhombidodecahedron, also known as Skilling's figure.
When the Schwarz triangle of the snub polyhedron is isosceles, the snub polyhedron is not chiral. This is the case for the antiprisms, the icosahedron, the great icosahedron, the small snub icosicosidodecahedron, and the small retrosnub icosicosidodecahedron.
In the pictures of the snub derivation (showing a distorted snub polyhedron, topologically identical to the uniform version, arrived at from geometrically alternating the parent uniform omnitruncated polyhedron) where green is not present, the faces derived from alternation are coloured red and yellow, while the snub triangles are blue. Where green is present (only for the snub icosidodecadodecahedron and great snub dodecicosidodecahedron), the faces derived from alternation are red, yellow, and blue, while the snub triangles are green.
Notes:
The icosahedron, snub cube and snub dodecahedron are the only three convex ones. They are obtained by snubification of the truncated octahedron, truncated cuboctahedron and the truncated icosidodecahedron - the three convex truncated quasiregular polyhedra.
The only snub polyhedron with the chiral octahedral group of symmetries is the snub cube.
Only the icosahedron and the great icosahedron are also regular polyhedra. They are also deltahedra.
Only the icosahedron, great icosahedron, small snub icosicosidodecahedron, small retrosnub icosicosidodecahedron, great dirhombicosidodecahedron, and great disnub dirhombidodecahedron also have reflective symmetries.
There is also the infinite set of antiprisms. They are formed from prisms, which are truncated hosohedra, degenerate regular polyhedra. Those up to hexagonal are listed below. In the pictures showing the snub derivation, the faces derived from alternation (of the prism bases) are coloured red, and the snub triangles are
|
https://en.wikipedia.org/wiki/Photoswitch
|
A photoswitch is a type of molecule that can change its structural geometry and chemical properties upon irradiation with electromagnetic radiation. Although often used interchangeably with the term molecular machine, a switch does not perform work upon a change in its shape, whereas a machine does. However, photochromic compounds are the necessary building blocks for light-driven molecular motors and machines. Upon irradiation with light, photoisomerization about double bonds in the molecule can lead to changes between the cis- and trans- configurations. These photochromic molecules are being considered for a range of applications.
Chemical structures and properties
A photochromic compound can change its configuration or structure upon irradiation with light. Examples of photochromic compounds include azobenzene, spiropyran, merocyanine, diarylethene, spirooxazine, fulgide, hydrazone, norbornadiene, thioindigo, acrylamide-azobenzene-quaternary ammonium, donor-acceptor Stenhouse adducts, and stilbene.
Isomerization
Upon isomerization from the absorption of light, a π-to-π* or n-to-π* electronic transition can occur with the subsequent release of light (fluorescence or phosphorescence) or heat when electrons transit from an excited state to a ground state. A photostationary state can be achieved when the irradiation of light no longer converts one form of an isomer into another; however, a mixture of cis- and trans- isomers will always exist with a higher percentage of one versus the other depending on the photoconditions.
Mechanism
Although the mechanism of photoisomerization is still debated among scientists, increasing evidence supports cis-/trans- isomerization of polyenes favoring the hula twist over the one-bond-flip. The one-bond-flip isomerizes at the reactive double bond, while the hula twist undergoes a conformational isomerization at the adjacent single bond. However, the interconversion of stereoisomers of stilbene proceeds via the one-bond-flip.
Quantum yield
One of the most important properties of a photoswitch is its quantum yield, which measures the effectiveness of absorbed light at inducing photoisomerization. Quantum yield is modeled and calculated using Arrhenius kinetics. Photoswitches can operate in solution or in the solid state; however, switching in the solid state is more difficult to observe because of restricted molecular freedom of motion, solid packing, and fast thermal reversion to the ground state. Through chemical modification, red-shifting the wavelengths of absorption needed to cause isomerization enables switching with low-intensity light, which has applications in photopharmacology.
Catalysis
When a photochromic compound is incorporated into a suitable catalytic molecule, photoswitchable catalysis can result from the reversible changes in geometric conformation upon irradiation with light. As one of the most widely studied photoswitches, azobenzene has been shown to be an effective switch for regulating
|
https://en.wikipedia.org/wiki/Order-4%20dodecahedral%20honeycomb
|
In hyperbolic geometry, the order-4 dodecahedral honeycomb is one of four compact regular space-filling tessellations (or honeycombs) of hyperbolic 3-space. With Schläfli symbol {5,3,4}, it has four dodecahedra around each edge, and 8 dodecahedra around each vertex in an octahedral arrangement. Its vertices are constructed from 3 orthogonal axes. Its dual is the order-5 cubic honeycomb.
Description
The dihedral angle of a regular dodecahedron is about 116.6°, so it is impossible to fit four of them around an edge in Euclidean 3-space. In hyperbolic space, however, a regular dodecahedron can be scaled so that its dihedral angles are reduced to 90 degrees, and then four fit exactly around every edge.
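The Euclidean obstruction is easy to check: the dihedral angle of a regular dodecahedron is arccos(−1/√5), so four copies around an edge would need far more than the available 360°.

```python
import math

# Dihedral angle of the regular dodecahedron: arccos(-1/sqrt(5)) ≈ 116.57°.
dihedral = math.degrees(math.acos(-1 / math.sqrt(5)))

assert abs(dihedral - 116.565) < 1e-2
# Four right dihedral angles close up exactly (4 * 90 = 360), but four
# Euclidean dodecahedral angles overshoot, so no Euclidean honeycomb exists.
assert 4 * dihedral > 360
```

In hyperbolic space the dihedral angle decreases as the dodecahedron grows, so there is a unique size at which it reaches exactly 90°.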
Symmetry
It has a half-symmetry construction, {5,3^{1,1}}, with two types (colors) of dodecahedra in the Wythoff construction.
Images
A view of the order-4 dodecahedral honeycomb under the Beltrami-Klein model
Related polytopes and honeycombs
There are four regular compact honeycombs in 3D hyperbolic space:
There are fifteen uniform honeycombs in the [5,3,4] Coxeter group family, including this regular form.
There are eleven uniform honeycombs in the bifurcating [5,3^{1,1}] Coxeter group family, including this honeycomb in its alternated form.
This construction can be represented by alternation (checkerboard) with two colors of dodecahedral cells.
This honeycomb is also related to the 16-cell, the cubic honeycomb, and the order-4 hexagonal tiling honeycomb, all of which have octahedral vertex figures:
This honeycomb is a part of a sequence of polychora and honeycombs with dodecahedral cells:
Rectified order-4 dodecahedral honeycomb
The rectified order-4 dodecahedral honeycomb, , has alternating octahedron and icosidodecahedron cells, with a square prism vertex figure.
Related honeycombs
There are four rectified compact regular honeycombs:
Truncated order-4 dodecahedral honeycomb
The truncated order-4 dodecahedral honeycomb, , has octahedron and truncated dodecahedron cells, with a square pyramid vertex figure.
It can be seen as analogous to the 2D hyperbolic truncated order-4 pentagonal tiling, t{5,4} with truncated pentagon and square faces:
Related honeycombs
Bitruncated order-4 dodecahedral honeycomb
The bitruncated order-4 dodecahedral honeycomb, or bitruncated order-5 cubic honeycomb, , has truncated octahedron and truncated icosahedron cells, with a digonal disphenoid vertex figure.
Related honeycombs
Cantellated order-4 dodecahedral honeycomb
The cantellated order-4 dodecahedral honeycomb, , has rhombicosidodecahedron, cuboctahedron, and cube cells, with a wedge vertex figure.
Related honeycombs
Cantitruncated order-4 dodecahedral honeycomb
The cantitruncated order-4 dodecahedral honeycomb, , has truncated icosidodecahedron, truncated octahedron, and cube cells, with a mirrored sphenoid vertex figure.
Related honeycombs
Runcinated order-4 dodecahedral honeycomb
The runcinated order-4 dodecahedral honeycomb is the same as the runci
|
https://en.wikipedia.org/wiki/Lemniscate%20elliptic%20functions
|
In mathematics, the lemniscate elliptic functions are elliptic functions related to the arc length of the lemniscate of Bernoulli. They were first studied by Giulio Fagnano in 1718 and later by Leonhard Euler and Carl Friedrich Gauss, among others.
The lemniscate sine and lemniscate cosine functions, usually written with the symbols sl and cl (sometimes sinlem and coslem or sin lemn and cos lemn are used instead), are analogous to the trigonometric functions sine and cosine. While the trigonometric sine relates the arc length to the chord length in a unit-diameter circle, the lemniscate sine relates the arc length to the chord length of a lemniscate.
The lemniscate functions have periods related to a number ϖ called the lemniscate constant, the ratio of a lemniscate's perimeter to its diameter. This number is a quartic analog of the (quadratic) π, the ratio of perimeter to diameter of a circle.
As complex functions, sl and cl have a square period lattice (a multiple of the Gaussian integers) and are a special case of two Jacobi elliptic functions on that lattice.
Similarly, the hyperbolic lemniscate sine and hyperbolic lemniscate cosine have a square period lattice with fundamental periods
The lemniscate functions and the hyperbolic lemniscate functions are related to the Weierstrass elliptic function .
Lemniscate sine and cosine functions
Definitions
The lemniscate functions and can be defined as the solution to the initial value problem:
or equivalently as the inverses of an elliptic integral, the Schwarz–Christoffel map from the complex unit disk to a square with corners
Beyond that square, the functions can be analytically continued to the whole complex plane by a series of reflections.
By comparison, the circular sine and cosine can be defined as the solution to the initial value problem:
or as inverses of a map from the upper half-plane to a half-infinite strip with real part between and positive imaginary part:
Relation to the lemniscate constant
The lemniscate functions have minimal real period , minimal imaginary period and fundamental complex periods and for a constant called the lemniscate constant,
The lemniscate functions satisfy the basic relation analogous to the relation
The lemniscate constant is a close analog of the circle constant , and many identities involving have analogues involving , as identities involving the trigonometric functions have analogues involving the lemniscate functions. For example, Viète's formula for can be written:
An analogous formula for is:
The Machin formula for is and several similar formulas for can be developed using trigonometric angle sum identities, e.g. Euler's formula . Analogous formulas can be developed for , including the following found by Gauss:
The lemniscate and circle constants were found by Gauss to be related to each other by the arithmetic-geometric mean:
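Gauss's relation between the arithmetic-geometric mean M and the two constants, M(1, √2) = π/ϖ, gives a fast way to compute the lemniscate constant numerically. A minimal standard-library sketch (the function name is ours):

```python
from math import pi, sqrt

def agm(a, b, tol=1e-15):
    """Arithmetic-geometric mean M(a, b): iterate the two means until they agree."""
    while abs(a - b) > tol:
        a, b = (a + b) / 2, sqrt(a * b)
    return a

# Gauss's relation M(1, sqrt(2)) = pi / varpi, hence varpi = pi / M(1, sqrt(2)).
lemniscate_constant = pi / agm(1.0, sqrt(2.0))
print(lemniscate_constant)  # ≈ 2.6220575542921198
```

The iteration converges quadratically, so a handful of steps already gives full double precision.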
Argument identities
Zeros, poles and symmetries
The lemniscate functions and are
|
https://en.wikipedia.org/wiki/Martingale%20representation%20theorem
|
In probability theory, the martingale representation theorem states that a random variable that is measurable with respect to the filtration generated by a Brownian motion can be written in terms of an Itô integral with respect to this Brownian motion.
The theorem only asserts the existence of the representation and does not help to find it explicitly; it is possible in many cases to determine the form of the representation using Malliavin calculus.
Similar theorems also exist for martingales on filtrations induced by jump processes, for example, by Markov chains.
Statement
Let be a Brownian motion on a standard filtered probability space and let be the augmented filtration generated by . If X is a square integrable random variable measurable with respect to , then there exists a predictable process C which is adapted with respect to such that
Consequently,
Application in finance
The martingale representation theorem can be used to establish the existence
of a hedging strategy.
Suppose that is a Q-martingale process, whose volatility is always non-zero.
Then, if is any other Q-martingale, there exists an -previsible process , unique up to sets of measure 0, such that with probability one, and N can be written as:
The replicating strategy is defined to be:
hold units of the stock at the time t, and
hold units of the bond.
where is the stock price discounted by the bond price to time and is the expected payoff of the option at time .
At the expiration day T, the value of the portfolio is:
and it is easy to check that the strategy is self-financing: the change in the value of the portfolio only depends on the change of the asset prices .
See also
Backward stochastic differential equation
References
Montin, Benoît. (2002) "Stochastic Processes Applied in Finance"
Elliott, Robert (1976) "Stochastic Integrals for Martingales of a Jump Process with Partially Accessible Jump Times", Zeitschrift für Wahrscheinlichkeitstheorie und verwandte Gebiete, 36, 213–226
Martingale theory
Probability theorems
|
https://en.wikipedia.org/wiki/Narcissistic%20number
|
In number theory, a narcissistic number (also known as a pluperfect digital invariant (PPDI), an Armstrong number (after Michael F. Armstrong) or a plus perfect number) in a given number base is a number that is the sum of its own digits each raised to the power of the number of digits.
Definition
Let be a natural number. We define the narcissistic function for base to be the following:
where is the number of digits in the number in base , and
is the value of each digit of the number. A natural number is a narcissistic number if it is a fixed point for , which occurs if . The natural numbers are trivial narcissistic numbers for all ; all other narcissistic numbers are nontrivial narcissistic numbers.
For example, the number 153 in base is a narcissistic number, because and .
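The defining condition is easy to check by brute force. A minimal Python sketch (the function name is ours):

```python
def is_narcissistic(n, base=10):
    """True if n equals the sum of its base-`base` digits,
    each raised to the power of the number of digits."""
    digits = []
    m = n
    while m > 0:
        digits.append(m % base)
        m //= base
    k = len(digits)
    return n == sum(d ** k for d in digits)

# Nontrivial base-10 narcissistic numbers below 10000:
print([n for n in range(10, 10000) if is_narcissistic(n)])
# [153, 370, 371, 407, 1634, 8208, 9474]
```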
A natural number is a sociable narcissistic number if it is a periodic point for , where for a positive integer (here is the th iterate of ), and forms a cycle of period . A narcissistic number is a sociable narcissistic number with , and an amicable narcissistic number is a sociable narcissistic number with .
All natural numbers are preperiodic points for , regardless of the base. This is because for any given digit count , the minimum possible value of is , the maximum possible value of is , and the narcissistic function value is . Thus, any narcissistic number must satisfy the inequality . Multiplying all sides by , we get , or equivalently, . Since , this means that there will be a maximum value where , because of the exponential nature of and the linearity of . Beyond this value , always. Thus, there are a finite number of narcissistic numbers, and any natural number is guaranteed to reach a periodic point or a fixed point less than , making it a preperiodic point. Setting equal to 10 shows that the largest narcissistic number in base 10 must be less than .
The number of iterations needed for to reach a fixed point is the narcissistic function's persistence of , and undefined if it never reaches a fixed point.
A base has at least one two-digit narcissistic number if and only if is not prime, and the number of two-digit narcissistic numbers in base equals , where is the number of positive divisors of .
Every base that is not a multiple of nine has at least one three-digit narcissistic number. The bases that do not are
2, 72, 90, 108, 153, 270, 423, 450, 531, 558, 630, 648, 738, 1044, 1098, 1125, 1224, 1242, 1287, 1440, 1503, 1566, 1611, 1620, 1800, 1935, ...
There are only 89 narcissistic numbers in base 10, of which the largest is
115,132,219,018,763,992,565,095,597,973,971,522,401
with 39 digits.
Narcissistic numbers and cycles of Fb for specific b
All numbers are represented in base . '#' is the length of each known finite sequence.
Extension to negative integers
Narcissistic numbers can be extended to the negative integers by use of a signed-digit representation to represent each integer.
Programming example
Python
The
|
https://en.wikipedia.org/wiki/Molecular%20configuration
|
The molecular configuration of a molecule is the permanent geometry that results from the spatial arrangement of its bonds. The ability of the same set of atoms to form two or more molecules with different configurations is stereoisomerism. This is distinct from constitutional isomerism which arises from atoms being connected in a different order. Conformers which arise from single bond rotations, if not isolatable as atropisomers, do not count as distinct molecular configurations as the spatial connectivity of bonds is identical.
Enantiomers
Enantiomers are molecules having one or more chiral centres that are mirror images of each other. Chiral centres are designated R or S. If the 3 groups projecting towards you are arranged clockwise from highest priority to lowest priority, that centre is designated R. If counterclockwise, the centre is S. Priority is based on atomic number: atoms with higher atomic number are higher priority. If two molecules with one or more chiral centres differ in all of those centres, they are enantiomers.
Diastereomers
Diastereomers are distinct molecular configurations that are a broader category. They usually differ in physical characteristics as well as chemical properties. If two molecules with more than one chiral centre differ in one or more (but not all) centres, they are diastereomers. All stereoisomers that are not enantiomers are diastereomers. Diastereomerism also exists in alkenes. Alkenes are designated Z or E depending on group priority on adjacent carbon atoms. E/Z notation describes the absolute stereochemistry of the double bond. Cis/trans notation is also used to describe the relative orientations of groups.
Configurations in amino acids
Amino acids are designated either L or D depending on the relative arrangement of groups around the stereogenic carbon centre. L/D designations are not related to S/R absolute configurations. Only L-configured amino acids are found in biological organisms. All amino acids except L-cysteine have an S configuration, and glycine is non-chiral.
In general, all L designated amino acids are enantiomers of their D counterparts except for isoleucine and threonine which contain two carbon stereocenters, making them diastereomers.
Configurations of pharmacological compounds
Used as drugs, compounds with different configurations normally have different physiological activity, including the desired pharmacological effect, the toxicology and the metabolism. Enantiomeric ratios and purity are important factors in clinical assessments. Racemic mixtures are those that contain equimolar amounts of both enantiomers of a compound. The actions of a racemate and of a single enantiomer differ in most cases.
See also
Absolute configuration
References
Molecules
Stereochemistry
|
https://en.wikipedia.org/wiki/Kronecker%27s%20lemma
|
In mathematics, Kronecker's lemma (see, e.g., ) is a result about the relationship between convergence of infinite sums and convergence of sequences. The lemma is often used in the proofs of theorems concerning sums of independent random variables such as the strong Law of large numbers. The lemma is named after the German mathematician Leopold Kronecker.
The lemma
If is an infinite sequence of real numbers such that
exists and is finite, then we have for all and that
Proof
Let denote the partial sums of the x's. Using summation by parts,
Pick any ε > 0. Now choose N so that is ε-close to s for k > N. This can be done as the sequence converges to s. Then the right hand side is:
Now, let n go to infinity. The first term goes to s, which cancels with the third term. The second term goes to zero (as the sum is a fixed value). Since the b sequence is increasing, the last term is bounded by .
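The lemma can be illustrated numerically. Taking bₙ = n (increasing to infinity) and xₙ = (−1)ⁿ⁺¹/√n, the series Σ xₙ converges as an alternating series, so the lemma predicts (1/bₙ) Σₖ₌₁ⁿ bₖxₖ → 0 even though the summands bₖxₖ = (−1)ᵏ⁺¹√k grow without bound. A sketch (the function name is ours):

```python
from math import sqrt

# b_n = n, x_n = (-1)^(n+1)/sqrt(n): sum x_n converges (alternating series),
# so Kronecker's lemma gives (1/b_n) * sum_{k<=n} b_k x_k -> 0.
def weighted_average(n):
    partial = sum(k * (-1) ** (k + 1) / sqrt(k) for k in range(1, n + 1))
    return partial / n

print(abs(weighted_average(100)))
print(abs(weighted_average(100000)))  # much smaller: the averages tend to 0
```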
References
Mathematical series
Lemmas
|
https://en.wikipedia.org/wiki/Gerhard%20Hessenberg
|
Gerhard Hessenberg (; 16 August 1874 – 16 November 1925) was a German mathematician who worked in projective geometry, differential geometry, and set theory.
Career
Hessenberg received his Ph.D. from the University of Berlin in 1899 under the guidance of Hermann Schwarz and Lazarus Fuchs.
His name is usually associated with projective geometry, where he is known for proving that Desargues' theorem is a consequence of Pappus's hexagon theorem, and differential geometry where he is known for introducing the concept of a connection. He was also a set theorist: the Hessenberg sum and product of ordinals are named after him. However, Hessenberg matrices are named for Karl Hessenberg, a near relative.
In 1908 Gerhard Hessenberg was an Invited Speaker of the International Congress of Mathematicians in Rome.
Publications
(also in book form as a separate publication from Verlag Vandenhoeck und Ruprecht, Göttingen 1906).
(unaltered reprint of the Teubner edition of 1912).
Notes
External links
1874 births
1925 deaths
19th-century German mathematicians
20th-century German mathematicians
Geometers
|
https://en.wikipedia.org/wiki/Time%20dependent%20vector%20field
|
In mathematics, a time dependent vector field is a construction in vector calculus which generalizes the concept of vector fields. It can be thought of as a vector field which moves as time passes. For every instant of time, it associates a vector to every point in a Euclidean space or in a manifold.
Definition
A time dependent vector field on a manifold M is a map from an open subset on
such that for every , is an element of .
For every such that the set
is nonempty, is a vector field in the usual sense defined on the open set .
Associated differential equation
Given a time dependent vector field X on a manifold M, we can associate to it the following differential equation:
which is called nonautonomous by definition.
Integral curve
An integral curve of the equation above (also called an integral curve of X) is a map
such that , is an element of the domain of definition of X and
.
Equivalence with time-independent vector fields
A time dependent vector field on can be thought of as a vector field on where does not depend on
Conversely, associated with a time-dependent vector field on is a time-independent one
on In coordinates,
The system of autonomous differential equations for is equivalent to that of non-autonomous ones for and is a bijection between the sets of integral curves of and respectively.
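The correspondence can be sketched numerically: append time as an extra state coordinate with derivative 1 and integrate the resulting autonomous system. A rough Euler scheme on the real line (names and step count are illustrative):

```python
def euler_suspended(X, t0, x0, t1, n=100000):
    """Euler-integrate the autonomous suspension s' = (1, X(t, x))
    of a time dependent vector field X on the real line."""
    t, x = t0, x0
    h = (t1 - t0) / n
    for _ in range(n):
        dt, dx = 1.0, X(t, x)  # the time-independent field on R x M
        t, x = t + h * dt, x + h * dx
    return t, x

# X(t, x) = t has integral curves x(t) = x0 + (t**2 - t0**2) / 2.
t, x = euler_suspended(lambda t, x: t, 0.0, 0.0, 2.0)
print(t, x)  # t ≈ 2.0, x ≈ 2.0
```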
Flow
The flow of a time dependent vector field X, is the unique differentiable map
such that for every ,
is the integral curve of X that satisfies .
Properties
We define as
If and then
, is a diffeomorphism with inverse .
Applications
Let X and Y be smooth time dependent vector fields and the flow of X. The following identity can be proved:
Also, we can define time dependent tensor fields in an analogous way, and prove this similar identity, assuming that is a smooth time dependent tensor field:
This last identity is useful to prove the Darboux theorem.
References
Lee, John M., Introduction to Smooth Manifolds, Springer-Verlag, New York (2003) . Graduate-level textbook on smooth manifolds.
Differential geometry
Vector calculus
|
https://en.wikipedia.org/wiki/Musical%20isomorphism
|
In mathematics—more specifically, in differential geometry—the musical isomorphism (or canonical isomorphism) is an isomorphism between the tangent bundle and the cotangent bundle of a pseudo-Riemannian manifold induced by its metric tensor. There are similar isomorphisms on symplectic manifolds. The term musical refers to the use of the symbols (flat) and (sharp).
In the notation of Ricci calculus, it is also known as raising and lowering indices.
Motivation
In linear algebra, a finite-dimensional vector space is isomorphic to its dual space but not canonically isomorphic to it. On the other hand, a finite-dimensional vector space endowed with a non-degenerate bilinear form is canonically isomorphic to its dual, the isomorphism being given by:
An example is where is a Euclidean space, and is its inner product.
Musical isomorphisms are the global version of this isomorphism and its inverse, for the tangent bundle and cotangent bundle of a (pseudo-)Riemannian manifold . They are isomorphisms of vector bundles which are at any point the above isomorphism applied to the (pseudo-)Euclidean space (the tangent space of at point ) endowed with the inner product . More generally, musical isomorphisms always exist between a vector bundle endowed with a bundle metric and its dual.
Because every paracompact manifold can be endowed with a Riemannian metric, the musical isomorphisms make it possible to show that on those spaces a vector bundle is always isomorphic to its dual (but not canonically unless a (pseudo-)Riemannian metric has been associated with the manifold).
Discussion
Let be a pseudo-Riemannian manifold. Suppose is a moving tangent frame (see also smooth frame) for the tangent bundle with, as dual frame (see also dual basis), the moving coframe (a moving tangent frame for the cotangent bundle ; see also coframe) . Then, locally, we may express the pseudo-Riemannian metric (which is a -covariant tensor field that is symmetric and nondegenerate) as (where we employ the Einstein summation convention).
Given a vector field and denoting , we define its flat by:
This is referred to as lowering an index. Using angle bracket notation for the bilinear form defined by , we obtain the somewhat more transparent relation
for any vector fields and .
In the same way, given a covector field and denoting , we define its sharp by:
where are the components of the inverse metric tensor (given by the entries of the inverse matrix to ). Taking the sharp of a covector field is referred to as raising an index. In angle bracket notation, this reads
for any covector field and any vector field .
Through this construction, we have two mutually inverse isomorphisms
These are isomorphisms of vector bundles and, hence, we have, for each in , mutually inverse vector space isomorphisms between and .
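In coordinates, flat and sharp are simply multiplication by the matrix of the metric and by its inverse, so they are mutually inverse pointwise. A small numeric sketch with a hypothetical diagonal metric (all names and values are illustrative):

```python
# Hypothetical metric with components g = diag(2, 4) in some frame.
g = [[2.0, 0.0], [0.0, 4.0]]
g_inv = [[0.5, 0.0], [0.0, 0.25]]

def flat(v):
    """Lower an index: (v^flat)_i = g_ij v^j."""
    return [sum(g[i][j] * v[j] for j in range(2)) for i in range(2)]

def sharp(w):
    """Raise an index: (w^sharp)^i = g^ij w_j."""
    return [sum(g_inv[i][j] * w[j] for j in range(2)) for i in range(2)]

v = [1.0, 4.0]
print(flat(v))         # [2.0, 16.0]
print(sharp(flat(v)))  # [1.0, 4.0] -- the two maps are mutually inverse
```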
Extension to tensor products
The musical isomorphisms may also be extended to the bundles
Which index is to be raised or lowered must be indicated. For
|
https://en.wikipedia.org/wiki/Resampling%20%28statistics%29
|
In statistics, resampling is the creation of new samples based on one observed sample.
Resampling methods are:
Permutation tests (also re-randomization tests)
Bootstrapping
Cross validation
Permutation tests
Permutation tests rely on resampling the original data assuming the null hypothesis. From the resampled data one can then assess how likely the original data would be to occur under the null hypothesis.
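A sketch of a two-sample permutation test for a difference in means, using only the standard library (the function name and data are illustrative):

```python
import random

def perm_test(a, b, n_perm=10000, seed=0):
    """Approximate two-sided permutation p-value for a difference in means."""
    rng = random.Random(seed)
    observed = sum(a) / len(a) - sum(b) / len(b)
    pooled = a + b
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # re-randomize the group labels
        diff = (sum(pooled[:len(a)]) / len(a)
                - sum(pooled[len(a):]) / len(b))
        if abs(diff) >= abs(observed):
            count += 1
    return count / n_perm  # p-value under the null of exchangeability

p = perm_test([12, 14, 15, 16], [8, 9, 10, 11])
print(p)  # small: the observed split is extreme among relabelings
```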
Bootstrap
Bootstrapping is a statistical method for estimating the sampling distribution of an estimator by sampling with replacement from the original sample, most often with the purpose of deriving robust estimates of standard errors and confidence intervals of a population parameter like a mean, median, proportion, odds ratio, correlation coefficient or regression coefficient. It has been called the plug-in principle, as it is the method of estimation of functionals of a population distribution by evaluating the same functionals at the empirical distribution based on a sample.
For example, when estimating the population mean, this method uses the sample mean; to estimate the population median, it uses the sample median; to estimate the population regression line, it uses the sample regression line.
It may also be used for constructing hypothesis tests. It is often used as a robust alternative to inference based on parametric assumptions when those assumptions are in doubt, or where parametric inference is impossible or requires very complicated formulas for the calculation of standard errors. Bootstrapping techniques are also used in the updating-selection transitions of particle filters, genetic type algorithms and related resample/reconfiguration Monte Carlo methods used in computational physics. In this context, the bootstrap is used to replace sequentially empirical weighted probability measures by empirical measures. The bootstrap makes it possible to replace the samples with low weights by copies of the samples with high weights.
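As a sketch, the bootstrap estimate of the standard error of a sample mean can be computed with only the standard library (the function name and data are illustrative):

```python
import random

def bootstrap_se_mean(sample, n_boot=5000, seed=0):
    """Bootstrap standard error of the mean: resample with replacement,
    recompute the mean each time, and take the spread of those means."""
    rng = random.Random(seed)
    means = []
    for _ in range(n_boot):
        resample = [rng.choice(sample) for _ in sample]  # with replacement
        means.append(sum(resample) / len(resample))
    grand = sum(means) / len(means)
    return (sum((m - grand) ** 2 for m in means) / (len(means) - 1)) ** 0.5

data = [2.1, 2.5, 2.8, 3.0, 3.3, 3.7, 4.0, 4.4]
print(bootstrap_se_mean(data))  # close to the textbook s / sqrt(n)
```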
Cross-validation
Cross-validation is a statistical method for validating a predictive model. Subsets of the data are held out for use as validating sets; a model is fit to the remaining data (a training set) and used to predict for the validation set. Averaging the quality of the predictions across the validation sets yields an overall measure of prediction accuracy. Cross-validation is employed repeatedly in building decision trees.
One form of cross-validation leaves out a single observation at a time; this is similar to the jackknife. Another, K-fold cross-validation, splits the data into K subsets; each is held out in turn as the validation set.
This avoids "self-influence". For comparison, in regression analysis methods such as linear regression, each y value draws the regression line toward itself, making the prediction of that value appear more accurate than it really is. Cross-validation applied to linear regression predicts the y value for each observation without using that observation.
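A minimal K-fold sketch using a model that just predicts the training mean (everything here is illustrative):

```python
def k_fold_cv(data, k=4):
    """Average held-out squared error of the 'predict the training mean' model."""
    folds = [data[i::k] for i in range(k)]  # simple round-robin split
    errors = []
    for i in range(k):
        train = [x for j, f in enumerate(folds) if j != i for x in f]
        mean = sum(train) / len(train)
        errors.extend((x - mean) ** 2 for x in folds[i])  # held-out errors only
    return sum(errors) / len(errors)

data = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
print(k_fold_cv(data))  # out-of-sample MSE, larger than the in-sample variance
```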
This is
|
https://en.wikipedia.org/wiki/Coil
|
Coil or COIL may refer to:
Geometry
Helix, a space curve that winds around a line
Spiral, a curve that winds around a central point
Science and technology
Coil (chemistry), a tube used to cool and condense steam from a distillation
Coil spring, used to store energy, absorb shock, or maintain a force between two surfaces
Inductor or coil, a passive two-terminal electrical component
Electromagnetic coil, formed when a conductor is wound around a core or form to create an inductor or electromagnet
Induction coil, a type of electrical transformer used to produce high-voltage pulses from a low-voltage direct current supply
Ignition coil, used in internal combustion engines to create a pulse of high voltage for a spark plug
Intrauterine device or coil, a contraceptive device
Chemical oxygen iodine laser, a near–infrared chemical laser
Coil, a binary digit or bit in some communication protocols such as Modbus
COIL, the gene that encodes the protein coilin
Coiled tubing, a long metal pipe which is supplied spooled on a large reel
Music
Coil (band), an English experimental band
Coil (album), a 1997 album by Toad the Wet Sprocket
"Coil", a song by Opeth from Watershed
Fictional entities
The Coil, a fictional organization in the G.I. Joe universe
Magnemite or Coil, a Pokémon character
Coil, a crime lord from the web serial Worm
People with the surname
Liam Mac Cóil, Irish novelist
Other uses
Coil (hieroglyph), an Egyptian hieratic hieroglyph
Coiled basketry, using grasses, rushes and pine needles
Coiling (pottery), a method of creating pottery
Coil (video game), a video game by Edmund McMillen and Florian Himsl
Turmoil or burden, as in mortal coil
Coiling, a method for storing rope or cable
See also
Helix (disambiguation)
Loop (disambiguation)
Spiral (disambiguation)
|
https://en.wikipedia.org/wiki/Augmented%20Dickey%E2%80%93Fuller%20test
|
In statistics, an augmented Dickey–Fuller test (ADF) tests the null hypothesis that a unit root is present in a time series sample. The alternative hypothesis is different depending on which version of the test is used, but is usually stationarity or trend-stationarity. It is an augmented version of the Dickey–Fuller test for a larger and more complicated set of time series models.
The augmented Dickey–Fuller (ADF) statistic, used in the test, is a negative number. The more negative it is, the stronger the rejection of the hypothesis that there is a unit root at some level of confidence.
Testing procedure
The testing procedure for the ADF test is the same as for the Dickey–Fuller test but it is applied to the model
where α is a constant, β the coefficient on a time trend and p the lag order of the autoregressive process. Imposing the constraints α = 0 and β = 0 corresponds to modelling a random walk, and using the constraint β = 0 corresponds to modelling a random walk with a drift. Consequently, there are three main versions of the test, analogous to the ones discussed on Dickey–Fuller test (see that page for a discussion on dealing with uncertainty about including the intercept and deterministic time trend terms in the test equation).
By including lags of the order p the ADF formulation allows for higher-order autoregressive processes. This means that the lag length p has to be determined when applying the test. One possible approach is to test down from high orders and examine the t-values on coefficients. An alternative approach is to examine information criteria such as the Akaike information criterion, Bayesian information criterion or the Hannan–Quinn information criterion.
The unit root test is then carried out under the null hypothesis γ = 0 (where γ is the coefficient on the lagged level) against the alternative hypothesis γ < 0. Once a value for the test statistic is computed, it can be compared to the relevant critical value for the Dickey–Fuller test. As this test is asymmetrical, we are only concerned with negative values of our test statistic. If the calculated test statistic is less (more negative) than the critical value, then the null hypothesis of γ = 0 is rejected and no unit root is present.
Intuition
The intuition behind the test is that if the series is characterised by a unit root process then the lagged level of the series will provide no relevant information in predicting the change in the series beyond that contained in the lagged changes. In this case the null hypothesis is not rejected. In contrast, when the process has no unit root, it is stationary and hence exhibits reversion to the mean, so the lagged level will provide relevant information in predicting the change of the series and the null hypothesis of a unit root will be rejected.
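This intuition can be illustrated by simulating a random walk and a stationary AR(1) series and regressing the change on the lagged level. The slope plays the role of γ in the plain Dickey–Fuller regression (no lag augmentation, no critical values; purely illustrative):

```python
import random

def df_slope(y):
    """OLS slope of dy_t on y_{t-1} (with intercept): an estimate of gamma."""
    x = y[:-1]
    dy = [y[t] - y[t - 1] for t in range(1, len(y))]
    mx = sum(x) / len(x)
    mdy = sum(dy) / len(dy)
    num = sum((xi - di_m) * 0 for xi, di_m in [])  # placeholder removed below
    num = sum((xi - mx) * (di - mdy) for xi, di in zip(x, dy))
    den = sum((xi - mx) ** 2 for xi in x)
    return num / den

rng = random.Random(42)
eps = [rng.gauss(0, 1) for _ in range(2000)]

walk, ar1 = [0.0], [0.0]
for e in eps:
    walk.append(walk[-1] + e)      # unit root: y_t = y_{t-1} + e_t
    ar1.append(0.5 * ar1[-1] + e)  # stationary: y_t = 0.5 y_{t-1} + e_t

print(df_slope(walk))  # near 0: lagged level adds no information
print(df_slope(ar1))   # near -0.5: mean reversion
```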
Examples
A model that includes a constant and a time trend is estimated using sample of 50 observations and yields the statistic of −4.57. This is more negative than the tabulated critical value of −3.50, so at the 95 percent level the null hypothes
|
https://en.wikipedia.org/wiki/168%20%28number%29
|
168 (one hundred [and] sixty-eight) is the natural number following 167 and preceding 169.
In mathematics
168 is an even number, a composite number, an abundant number, and an idoneal number.
There are 168 primes less than 1000. 168 is the product of the first two perfect numbers.
168 is the order of the group PSL(2,7), the second smallest nonabelian simple group.
From Hurwitz's automorphisms theorem, 168 is the maximum possible number of automorphisms of a genus 3 Riemann surface, this maximum being achieved by the Klein quartic, whose symmetry group is PSL(2,7). The Fano plane has 168 symmetries.
168 is the largest known n such that 2^n does not contain all decimal digits.
168 is the fourth Dedekind number.
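Several of these facts are small enough to verify directly with a brute-force sketch:

```python
def is_prime(n):
    """Trial-division primality check."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

print(sum(1 for n in range(1000) if is_prime(n)))  # 168 primes below 1000
print(6 * 28)                                      # product of the first two perfect numbers
print('2' in str(2 ** 168))                        # False: 2^168 lacks the digit 2
```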
In astronomy
168P/Hergenrother is a periodic comet in the Solar System
168 Sibylla is a dark Main belt asteroid
In the military
was an Imperial Japanese Navy during World War II
is a United States Navy fleet tugboat
was a United States Navy during World War II
was a United States Navy during World War II
was a United States Navy during World War II
was a United States Navy during World War I
was a United States Navy during World War II
was a United States Navy steamer during World War II
was a La Salle-class transport during World War II
In movies
The 168 Film Project in Los Angeles
In transportation
New York City Subway stations
168th Street (New York City Subway); a subway station complex at 168th Street and Broadway consisting of:
168th Street (IRT Broadway – Seventh Avenue Line); serving the train
168th Street (IND Eighth Avenue Line); serving the trains
168th Street (BMT Jamaica Line); was the former terminal of the BMT Jamaica Line in Queens
British Rail Class 168
VASP Flight 168 from Rio de Janeiro, Brazil to Fortaleza; crashed on June 8, 1982
In other fields
168 is also:
The year AD 168 or 168 BC
168 AH is a year in the Islamic calendar that corresponds to 784 – 785 CE
The number of hours in a week, or 7 × 24 hours
In the game of dominoes, tiles are marked with a number of spots, or pips. The Double 6 set (28 tiles) totals 168 pips
Tracy 168, a New York City graffiti artist
Minuscule 168 is a Greek minuscule manuscript of the New Testament
Some Chinese consider 168 a lucky number, because it is roughly homophonous with the phrase "一路發", which means "fortune all the way", or, as the United States Mint claims, "Prosperity Forever"
168 is the name of the most commonly used laboratory strain of Bacillus subtilis, a bacterium
See also
List of highways numbered 168
United States Supreme Court cases, Volume 168
United Nations Security Council Resolution 168
References
External links
Number Facts and Trivia: 168
The Number 168
The Positive Integer 168
Prime curiosities: 168
Integers
|
https://en.wikipedia.org/wiki/Ordered%20logit
|
In statistics, the ordered logit model (also ordered logistic regression or proportional odds model) is an ordinal regression model—that is, a regression model for ordinal dependent variables—first considered by Peter McCullagh. For example, if one question on a survey is to be answered by a choice among "poor", "fair", "good", "very good" and "excellent", and the purpose of the analysis is to see how well that response can be predicted by the responses to other questions, some of which may be quantitative, then ordered logistic regression may be used. It can be thought of as an extension of the logistic regression model that applies to dichotomous dependent variables, allowing for more than two (ordered) response categories.
The model and the proportional odds assumption
The model only applies to data that meet the proportional odds assumption, the meaning of which can be exemplified as follows. Suppose there are five outcomes: "poor", "fair", "good", "very good", and "excellent". We assume that the probabilities of these outcomes are given by p1(x), p2(x), p3(x), p4(x), p5(x), all of which are functions of some independent variable(s) x. Then, for a fixed value of x, the logarithms of the odds (not the logarithms of the probabilities) of answering in certain ways are:
The proportional odds assumption states that the numbers added to each of these logarithms to get the next are the same regardless of x. In other words, the difference between the logarithm of the odds of having poor or fair health minus the logarithm of having poor health is the same regardless of x; similarly, the logarithm of the odds of having poor, fair, or good health minus the logarithm of having poor or fair health is the same regardless of x; etc.
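Concretely, if the cumulative log-odds are generated as μₖ − βx for fixed cutpoints μₖ and a single coefficient β (a hypothetical parameterization consistent with the latent-variable model below), then moving from one value of x to another shifts every cumulative log-odds by the same amount:

```python
beta = 0.8
cutpoints = [-1.0, 0.0, 1.0, 2.0]  # 4 thresholds for 5 ordered categories

def cumulative_logits(x):
    """Log odds of responding at or below each category threshold."""
    return [mu - beta * x for mu in cutpoints]

shifts = [a - b for a, b in zip(cumulative_logits(1.0), cumulative_logits(0.0))]
print(shifts)  # the same shift (-beta) at every threshold
```

This constant shift across thresholds is exactly what the proportional odds assumption requires of real data.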
Examples of multiple-ordered response categories include bond ratings, opinion surveys with responses ranging from "strongly agree" to "strongly disagree," levels of state spending on government programs (high, medium, or low), the level of insurance coverage chosen (none, partial, or full), and employment status (not employed, employed part-time, or fully employed).
Ordered logit can be derived from a latent-variable model, similar to the one from which binary logistic regression can be derived. Suppose the underlying process to be characterized is
where is an unobserved dependent variable (perhaps the exact level of agreement with the statement proposed by the pollster); is the vector of independent variables; is the error term, assumed to follow a standard logistic distribution; and is the vector of regression coefficients which we wish to estimate. Further suppose that while we cannot observe , we instead can only observe the categories of response
where the parameters are the externally imposed endpoints of the observable categories. Then the ordered logit technique will use the observations on y, which are a form of censored data on y*, to fit the parameter vector .
Estimation
For details on how the
|
https://en.wikipedia.org/wiki/Block%20%28permutation%20group%20theory%29
|
In mathematics and group theory, a block system for the action of a group G on a set X is a partition of X that is G-invariant. In terms of the associated equivalence relation on X, G-invariance means that
x ~ y implies gx ~ gy
for all g ∈ G and all x, y ∈ X. The action of G on X induces a natural action of G on any block system for X.
The set of orbits of the G-set X is an example of a block system. The corresponding equivalence relation is the smallest G-invariant equivalence on X such that the induced action on the block system is trivial.
The partition into singleton sets is a block system and if X is non-empty then the partition into one set X itself is a block system as well (if X is a singleton set then these two partitions are identical). A transitive (and thus non-empty) G-set X is said to be primitive if it has no other block systems. For a non-empty G-set X the transitivity requirement in the previous definition is only necessary in the case when |X|=2 and the group action is trivial.
Characterization of blocks
Each element of some block system is called a block. A block can be characterized as a non-empty subset B of X such that for all g ∈ G, either
gB = B (g fixes B) or
gB ∩ B = ∅ (g moves B entirely).
Proof: Assume that B is a block, and that gB ∩ B ≠ ∅ for some g ∈ G. Then gx ~ x for some x ∈ B. Let y ∈ B; then x ~ y, and from the G-invariance it follows that gx ~ gy. Thus y ~ gy, and so gB ⊆ B. The condition gx ~ x also implies x ~ g−1x, and by the same method it follows that g−1B ⊆ B, and thus B ⊆ gB. In the other direction, if the set B satisfies the given condition, then the system {gB | g ∈ G} together with the complement of the union of these sets is a block system containing B.
In particular, if B is a block then gB is a block for any g ∈ G, and if G acts transitively on X then the set {gB | g ∈ G} is a block system on X.
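This characterization is easy to check by machine for small groups. The following Python sketch (the helper names are mine) generates a permutation group from a set of generators and tests the block condition gB = B or gB ∩ B = ∅, using the dihedral group of the square as the example:

```python
def generate_group(gens, n):
    """Close a set of permutations of {0, ..., n-1} (as tuples) under composition."""
    group = {tuple(range(n))}
    frontier = list(group)
    while frontier:
        new = []
        for g in frontier:
            for h in gens:
                gh = tuple(g[h[i]] for i in range(n))  # apply h, then g
                if gh not in group:
                    group.add(gh)
                    new.append(gh)
        frontier = new
    return group

def is_block(B, group):
    """True iff every g in the group either fixes B or moves it entirely."""
    B = frozenset(B)
    for g in group:
        gB = frozenset(g[x] for x in B)
        if gB != B and gB & B:
            return False
    return True

# Example: the dihedral group D4 acting on the vertices {0,1,2,3} of a square.
rot = (1, 2, 3, 0)   # 90-degree rotation
ref = (0, 3, 2, 1)   # reflection through the 0-2 diagonal
d4 = generate_group([rot, ref], 4)
```

Here the diagonal pair {0, 2} is a block (the rotation moves it to the other diagonal {1, 3}), while {0, 1} is not.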
Stabilizers of blocks
If B is a block, the stabilizer of B is the subgroup
GB = { g ∈ G | gB = B }.
The stabilizer of a block contains the stabilizer Gx of each of its elements. Conversely, if x ∈ X and H is a subgroup of G containing Gx, then the orbit H.x of x under H is a block contained in the orbit G.x and containing x.
For any x ∈ X, any block B containing x, and any subgroup H ⊆ G containing Gx, we have GB.x = B ∩ G.x and GH.x = H.
It follows that the blocks containing x and contained in G.x are in one-to-one correspondence with the subgroups of G containing Gx. In particular, if the G-set X is transitive then the blocks containing x are in one-to-one correspondence with the subgroups of G containing Gx. In this case the G-set X is primitive if and only if either the group action is trivial (then X = {x}) or the stabilizer Gx is a maximal subgroup of G (then the stabilizers of all elements of X are the maximal subgroups of G conjugate to Gx because Ggx = g ⋅ Gx ⋅ g−1).
See also
Congruence relation
Permutation groups
|
https://en.wikipedia.org/wiki/Kirillov%20character%20formula
|
In mathematics, for a Lie group , the Kirillov orbit method gives a heuristic method in representation theory. It connects the Fourier transforms of coadjoint orbits, which lie in the dual space of the Lie algebra of G, to the infinitesimal characters of the irreducible representations. The method got its name after the Russian mathematician Alexandre Kirillov.
At its simplest, it states that a character of a Lie group may be given by the Fourier transform of the Dirac delta function supported on the coadjoint orbits, weighted by the square root of the Jacobian of the exponential map, denoted by j. It does not apply to all Lie groups, but works for a number of classes of connected Lie groups, including nilpotent, some semisimple groups, and compact groups.
The Kirillov orbit method has led to a number of important developments in Lie theory, including the Duflo isomorphism and the wrapping map.
Character formula for compact Lie groups
Let λ be the highest weight of an irreducible representation π, where λ lies in the dual of the Lie algebra of the maximal torus, and let ρ be half the sum of the positive roots.

We denote by O_{λ+ρ} the coadjoint orbit through λ + ρ and by μ_{λ+ρ} the G-invariant measure on O_{λ+ρ} with total mass dim π, known as the Liouville measure. If χ_λ is the character of the representation, Kirillov's character formula for compact Lie groups is given by

j(X)^{1/2} χ_λ(exp X) = ∫_{O_{λ+ρ}} e^{i⟨β, X⟩} dμ_{λ+ρ}(β),  for X in the Lie algebra of G,

where j(X) is the Jacobian of the exponential map.
Example: SU(2)
For the case of SU(2), the highest weights λ are the nonnegative half-integers, and ρ = 1/2. The coadjoint orbits are the two-dimensional spheres of radius λ + 1/2, centered at the origin in 3-dimensional space.

By the theory of Bessel functions, it may be shown that

∫_{O_{λ+1/2}} e^{i⟨β, X⟩} dμ(β) = 2 sin((λ + 1/2)θ) / θ

and

j(X)^{1/2} = sin(θ/2) / (θ/2)

for X of norm θ, thus yielding the characters of SU(2):

χ_λ(exp X) = sin((λ + 1/2)θ) / sin(θ/2).
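The SU(2) case can be checked numerically. The sketch below assumes the standard normalizations (coadjoint orbit of radius λ + 1/2 in R³, Liouville measure of total mass 2λ + 1 = dim π, and X of norm θ); the helper names are mine. After integrating out the azimuthal angle, the orbit integral reduces to a one-dimensional quadrature:

```python
import cmath
import math

def orbit_integral(lam, theta, steps=20000):
    """Fourier transform of the Liouville measure on the SU(2) coadjoint
    orbit attached to highest weight lam, evaluated at X of norm theta.

    Integrating out the azimuthal angle leaves the measure
    r * sin(phi) dphi on the polar angle, with r = lam + 1/2.
    """
    r = lam + 0.5
    h = math.pi / steps
    total = 0j
    for k in range(steps):
        phi = (k + 0.5) * h  # midpoint rule
        total += cmath.exp(1j * r * theta * math.cos(phi)) * math.sin(phi)
    return r * total * h

def weyl_character(lam, theta):
    """Character of the spin-lam representation at exp X, |X| = theta."""
    return math.sin((2 * lam + 1) * theta / 2) / math.sin(theta / 2)

def j_half(theta):
    """Square root of the Jacobian of the exponential map for SU(2)."""
    return math.sin(theta / 2) / (theta / 2)
```

For any λ and θ the quadrature agrees with j(X)^{1/2} χ_λ(exp X) to numerical precision.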
See also
Weyl character formula
Localization formula for equivariant cohomology
References
Kirillov, A. A., Lectures on the Orbit Method, Graduate Studies in Mathematics, 64, AMS, Rhode Island, 2004.
Representation theory of Lie groups
|
https://en.wikipedia.org/wiki/Cocurvature
|
In mathematics in the branch of differential geometry, the cocurvature of a connection on a manifold is the obstruction to the integrability of the vertical bundle.
Definition
If M is a manifold and P is a connection on M, that is a vector-valued 1-form on M which is a projection on TM such that PabPbc = Pac, then the cocurvature R̄_P is a vector-valued 2-form on M defined by

R̄_P(X, Y) = (Id − P)[PX, PY],

where X and Y are vector fields on M and [·, ·] is the Lie bracket of vector fields.
See also
Curvature
Lie bracket
Frölicher-Nijenhuis bracket
Differential geometry
Curvature (mathematics)
|
https://en.wikipedia.org/wiki/Vector-valued%20differential%20form
|
In mathematics, a vector-valued differential form on a manifold M is a differential form on M with values in a vector space V. More generally, it is a differential form with values in some vector bundle E over M. Ordinary differential forms can be viewed as R-valued differential forms.
An important case of vector-valued differential forms are Lie algebra-valued forms. (A connection form is an example of such a form.)
Definition
Let M be a smooth manifold and E → M be a smooth vector bundle over M. We denote the space of smooth sections of a bundle E by Γ(E). An E-valued differential form of degree p is a smooth section of the tensor product bundle of E with Λp(T∗M), the p-th exterior power of the cotangent bundle of M. The space of such forms is denoted by

Ωp(M, E) = Γ(E ⊗ Λp(T∗M)).

Because Γ is a strong monoidal functor, this can also be interpreted as

Ωp(M, E) = Γ(E) ⊗Ω0(M) Γ(Λp(T∗M)) = Γ(E) ⊗Ω0(M) Ωp(M),

where the latter two tensor products are the tensor product of modules over the ring Ω0(M) of smooth R-valued functions on M. By convention, an E-valued 0-form is just a section of the bundle E. That is,

Ω0(M, E) = Γ(E).
Equivalently, an E-valued differential form can be defined as a bundle morphism

TM ⊗ ⋯ ⊗ TM → E  (p factors of TM)

which is totally skew-symmetric.

Let V be a fixed vector space. A V-valued differential form of degree p is a differential form of degree p with values in the trivial bundle M × V. The space of such forms is denoted Ωp(M, V). When V = R one recovers the definition of an ordinary differential form. If V is finite-dimensional, then one can show that the natural homomorphism

Ωp(M) ⊗ V → Ωp(M, V),

where the first tensor product is of vector spaces over R, is an isomorphism.
Operations on vector-valued forms
Pullback
One can define the pullback of vector-valued forms by smooth maps just as for ordinary forms. The pullback of an E-valued form on N by a smooth map φ : M → N is an (φ*E)-valued form on M, where φ*E is the pullback bundle of E by φ.
The formula is given just as in the ordinary case. For any E-valued p-form ω on N the pullback φ*ω is given by

(φ*ω)x(v1, …, vp) = ωφ(x)(dφx(v1), …, dφx(vp)).
Wedge product
Just as for ordinary differential forms, one can define a wedge product of vector-valued forms. The wedge product of an E1-valued p-form with an E2-valued q-form is naturally an (E1⊗E2)-valued (p+q)-form:

∧ : Ωp(M, E1) × Ωq(M, E2) → Ωp+q(M, E1⊗E2).

The definition is just as for ordinary forms with the exception that real multiplication is replaced with the tensor product:

(ω ∧ η)(v1, …, vp+q) = (1/(p! q!)) Σσ∈Sp+q sgn(σ) ω(vσ(1), …, vσ(p)) ⊗ η(vσ(p+1), …, vσ(p+q)).

In particular, the wedge product of an ordinary (R-valued) p-form with an E-valued q-form is naturally an E-valued (p+q)-form (since the tensor product of E with the trivial bundle M × R is naturally isomorphic to E). For ω ∈ Ωp(M) and η ∈ Ωq(M, E) one has the usual commutativity relation:

ω ∧ η = (−1)pq η ∧ ω.
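The antisymmetrized sum defining the wedge product can be written directly in code. In the sketch below (the names are mine), an R-valued p-form and an E-valued q-form are functions of vector arguments, an E-value is a plain list of floats, and scalar-times-vector plays the role of the tensor product:

```python
from itertools import permutations
from math import factorial

def perm_sign(perm):
    """Sign of a permutation given as a tuple of indices."""
    lst, s = list(perm), 1
    for i in range(len(lst)):
        while lst[i] != i:
            j = lst[i]
            lst[i], lst[j] = lst[j], lst[i]
            s = -s
    return s

def wedge(omega, p, eta, q):
    """Wedge of an R-valued p-form with an E-valued q-form:
    1/(p! q!) * sum over permutations sigma of
    sgn(sigma) * omega(v_sigma(1..p)) * eta(v_sigma(p+1..p+q)).
    """
    def form(*vs):
        dim = len(eta(*vs[:q]))  # fibre dimension of E
        acc = [0.0] * dim
        for perm in permutations(range(p + q)):
            s = perm_sign(perm)
            a = omega(*[vs[i] for i in perm[:p]])
            b = eta(*[vs[i] for i in perm[p:]])
            for k in range(dim):
                acc[k] += s * a * b[k]
        c = factorial(p) * factorial(q)
        return [t / c for t in acc]
    return form

# A 1-form and an R^2-valued 1-form on R^3:
omega = lambda v: v[0]
eta = lambda v: [v[1], v[2]]
f = wedge(omega, 1, eta, 1)
```

Here f(u, v) equals omega(u)·eta(v) − omega(v)·eta(u), and swapping u and v flips the sign, matching the commutativity relation in the case p = q = 1.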
In general, the wedge product of two E-valued forms is not another E-valued form, but rather an (E⊗E)-valued form. However, if E is an algebra bundle (i.e. a bundle of algebras rather than just vector spaces) one can compose with multiplication in E to obtain an E-valued form. If E is a bundle of commutative, associative algebras then, with this mo
|
https://en.wikipedia.org/wiki/Eisenstein%20ideal
|
In mathematics, the Eisenstein ideal is an ideal in the endomorphism ring of the Jacobian variety of a modular curve, consisting roughly of elements of the Hecke algebra of Hecke operators that annihilate the Eisenstein series. It was introduced by , in studying the rational points of modular curves. An Eisenstein prime is a prime in the support of the Eisenstein ideal (this has nothing to do with primes in the Eisenstein integers).
Definition
Let N be a rational prime, and define
J0(N) = J
as the Jacobian variety of the modular curve
X0(N) = X.
There are endomorphisms Tl of J for each prime number l not dividing N. These come from the Hecke operator, considered first as an algebraic correspondence on X, and from there as acting on divisor classes, which gives the action on J. There is also a Fricke involution w (and Atkin–Lehner involutions if N is composite). The Eisenstein ideal, in the (unital) subring of End(J) generated as a ring by the Tl, is generated as an ideal by the elements
Tl − l − 1
for all l not dividing N, and by
w + 1.
Geometric definition
Suppose that T* is the ring generated by the Hecke operators acting on all modular forms for Γ0(N) (not just the cusp forms). The ring T of Hecke operators on the cusp forms is a quotient of T*, so Spec(T) can be viewed as a subscheme of Spec(T*). Similarly Spec(T*) contains a line (called the Eisenstein line) isomorphic to Spec(Z) coming from the action of Hecke operators on the Eisenstein series. The Eisenstein ideal is the ideal defining the intersection of the Eisenstein line with Spec(T) in Spec(T*).
Example
The Eisenstein ideal can also be defined for higher weight modular forms. Suppose that T is the full Hecke algebra generated by Hecke operators Tn acting on the 2-dimensional space of modular forms of level 1 and weight 12. This space is spanned by the eigenforms given by the Eisenstein series E12 and the modular discriminant Δ. The map taking a Hecke operator Tn to its eigenvalues (σ11(n), τ(n)) gives a homomorphism from T into the ring Z×Z (where τ is the Ramanujan tau function and σ11(n) is the sum of the 11th powers of the divisors of n). The image is the set of pairs (c, d) with c and d congruent mod 691, because of Ramanujan's congruence σ11(n) ≡ τ(n) mod 691. The Hecke algebra of Hecke operators acting on the cusp form Δ is just isomorphic to Z. If we identify it with Z, then the Eisenstein ideal is (691).
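Ramanujan's congruence, which underlies this description of the Eisenstein ideal, is easy to verify numerically by expanding Δ = q ∏ (1 − q^k)^24. A short Python sketch (the helper names are mine):

```python
def tau_coefficients(N):
    """Ramanujan tau(n) for 1 <= n <= N, from Delta = q * prod (1 - q^k)^24."""
    M = N - 1  # degree needed for the infinite product, before the q shift
    poly = [0] * (M + 1)
    poly[0] = 1
    for k in range(1, M + 1):
        for _ in range(24):
            # multiply in place by (1 - q^k), highest degree first
            for i in range(M, k - 1, -1):
                poly[i] -= poly[i - k]
    return [0] + poly[:N]  # tau[n] = coefficient of q^n after the q shift

def sigma11(n):
    """Sum of the 11th powers of the divisors of n."""
    return sum(d ** 11 for d in range(1, n + 1) if n % d == 0)

tau = tau_coefficients(50)
```

For every n in this range, sigma11(n) − tau[n] is divisible by 691, e.g. σ11(2) − τ(2) = 2049 + 24 = 2073 = 3 · 691.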
References
Modular forms
Abelian varieties
|
https://en.wikipedia.org/wiki/Rate%20function
|
In mathematics — specifically, in large deviations theory — a rate function is a function used to quantify the probabilities of rare events. Such functions are used to formulate large deviation principle. A large deviation principle quantifies the asymptotic probability of rare events for a sequence of probabilities.
A rate function is also called a Cramér function, after the Swedish probabilist Harald Cramér.
Definitions
Rate function An extended real-valued function I : X → [0, +∞] defined on a Hausdorff topological space X is said to be a rate function if it is not identically +∞ and is lower semi-continuous, i.e. all the sub-level sets

{ x ∈ X | I(x) ≤ c },  c ≥ 0,

are closed in X.
If, furthermore, they are compact, then I is said to be a good rate function.
A family of probability measures (μδ)δ > 0 on X is said to satisfy the large deviation principle with rate function I : X → [0, +∞) (and rate 1 ⁄ δ) if, for every closed set F ⊆ X and every open set G ⊆ X,

(U)  lim sup_{δ→0} δ log μδ(F) ≤ −inf_{x∈F} I(x),

(L)  lim inf_{δ→0} δ log μδ(G) ≥ −inf_{x∈G} I(x).
If the upper bound (U) holds only for compact (instead of closed) sets F, then (μδ)δ>0 is said to satisfy the weak large deviations principle (with rate 1 ⁄ δ and weak rate function I).
Remarks
The role of the open and closed sets in the large deviation principle is similar to their role in the weak convergence of probability measures: recall that (μδ)δ > 0 is said to converge weakly to μ if, for every closed set F ⊆ X and every open set G ⊆ X,

lim sup_{δ→0} μδ(F) ≤ μ(F)  and  lim inf_{δ→0} μδ(G) ≥ μ(G).
There is some variation in the nomenclature used in the literature: for example, den Hollander (2000) uses simply "rate function" where this article — following Dembo & Zeitouni (1998) — uses "good rate function", and "weak rate function". Regardless of the nomenclature used for rate functions, examination of whether the upper bound inequality (U) is supposed to hold for closed or compact sets tells one whether the large deviation principle in use is strong or weak.
Properties
Uniqueness
A natural question to ask, given the somewhat abstract setting of the general framework above, is whether the rate function is unique. This turns out to be the case: given a sequence of probability measures (μδ)δ>0 on X satisfying the large deviation principle for two rate functions I and J, it follows that I(x) = J(x) for all x ∈ X.
Exponential tightness
It is possible to convert a weak large deviation principle into a strong one if the measures converge sufficiently quickly. If the upper bound holds for compact sets F and the sequence of measures (μδ)δ>0 is exponentially tight, then the upper bound also holds for closed sets F. In other words, exponential tightness enables one to convert a weak large deviation principle into a strong one.
Continuity
Naïvely, one might try to replace the two inequalities (U) and (L) by the single requirement that, for all Borel sets S ⊆ X,

(E)  lim_{δ→0} δ log μδ(S) = −inf_{x∈S} I(x).
The equality (E) is far too restrictive, since many interesting examples satisfy (U) and (L) but not (E). For example, the measure μδ might be non-atomic for all δ, so the equality (E) could hold for
|
https://en.wikipedia.org/wiki/Large%20deviations%20theory
|
In probability theory, the theory of large deviations concerns the asymptotic behaviour of remote tails of sequences of probability distributions. While some basic ideas of the theory can be traced to Laplace, the formalization started with insurance mathematics, namely ruin theory with Cramér and Lundberg. A unified formalization of large deviation theory was developed in 1966, in a paper by Varadhan. Large deviations theory formalizes the heuristic ideas of concentration of measures and widely generalizes the notion of convergence of probability measures.
Roughly speaking, large deviations theory concerns itself with the exponential decline of the probability measures of certain kinds of extreme or tail events.
Introductory examples
An elementary example
Consider a sequence of independent tosses of a fair coin. The possible outcomes could be heads or tails. Let us denote the possible outcome of the i-th trial by X_i, where we encode head as 1 and tail as 0. Now let M_N denote the mean value after N trials, namely

M_N = (1/N) Σ_{i=1}^{N} X_i.

Then M_N lies between 0 and 1. From the law of large numbers it follows that as N grows, the distribution of M_N converges to 1/2 (the expected value of a single coin toss).

Moreover, by the central limit theorem, it follows that M_N is approximately normally distributed for large N. The central limit theorem can provide more detailed information about the behavior of M_N than the law of large numbers. For example, we can approximately find a tail probability of M_N, the probability that M_N is greater than x, for a fixed value of x. However, the approximation by the central limit theorem may not be accurate if x is far from 1/2 unless N is sufficiently large. Also, it does not provide information about the convergence of the tail probabilities as N → ∞. However, large deviation theory can provide answers for such problems.

Let us make this statement more precise. For a given value 1/2 < x < 1, let us compute the tail probability P(M_N > x). Define

I(x) = x ln x + (1 − x) ln(1 − x) + ln 2.

Note that the function I(x) is a convex, nonnegative function that is zero at x = 1/2 and increases as x approaches 1. It is the negative of the Bernoulli entropy with p = 1/2; that it is appropriate for coin tosses follows from the asymptotic equipartition property applied to a Bernoulli trial. Then by Chernoff's inequality, it can be shown that P(M_N > x) ≤ e^{−N I(x)}. This bound is rather sharp, in the sense that I(x) cannot be replaced with a larger number which would yield a strict inequality for all positive N. (However, the exponential bound can still be reduced by a subexponential factor on the order of 1/√N; this follows from the Stirling approximation applied to the binomial coefficient appearing in the Bernoulli distribution.) Hence, we obtain the following result:

P(M_N > x) ≈ e^{−N I(x)}.

The probability P(M_N > x) decays exponentially as N → ∞ at a rate depending on x. This formula approximates any tail probability of the sample mean of i.i.d. variables and gives its convergence as the number of samples increases.
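The exponential rate can be checked numerically. The following Python sketch (the helper names are mine) sums the binomial tail exactly with integer arithmetic and compares −(1/N) log P with the rate function of the fair-coin example:

```python
import math

def rate(x):
    """Rate function I(x) for the mean of fair-coin tosses, 1/2 < x < 1."""
    return x * math.log(x) + (1 - x) * math.log(1 - x) + math.log(2)

def minus_log_tail(N, x):
    """-(1/N) log P(M_N >= x), with the binomial tail summed exactly."""
    k0 = math.ceil(N * x)
    tail = sum(math.comb(N, k) for k in range(k0, N + 1))
    # log of the exact rational tail / 2^N, taken without float underflow
    return -(math.log(tail) - N * math.log(2)) / N

# The gap above I(x) shrinks like (log N)/N, reflecting the subexponential factor.
est, target = minus_log_tail(2000, 0.6), rate(0.6)
```

By the Chernoff bound, est always lies above I(x), and for N = 2000 the two already agree to a few parts in a thousand.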
Large deviations for sums of independent random variables
In the above example of coin-tossing we explicitly assumed that each tos
|
https://en.wikipedia.org/wiki/David%20E.%20Evans
|
David E. Evans FLSW was born in 1950 at Glanamman, Dyfed, Wales. He is a professor of mathematics at Cardiff University, specialising in operator algebras. He has published a number of books, many in collaboration with Yasuyuki Kawahigashi.
He studied at New College, Oxford, and Jesus College, Oxford.
From 1975 to 1976 Evans worked as a scholar and research assistant in the department of theoretical physics at the Dublin Institute for Advanced Studies. Over the next few years he travelled around the world working as a research fellow at UCLA (1977); Australian National University, Canberra (1982, 1989); Kyoto University (1982-83, 1985); and the University of Ottawa (1983). Between 1987 and 1998 he worked as a professor at Swansea, Wales. Since 1998, he has worked as a professor at Cardiff University.
Awards and honours
Junior Mathematical Prize, 1972
Senior Mathematical Prize, 1975
Johnson Prizes, 1975
Whitehead Prize – London Mathematical Society, 1989.
Elected a Fellow of the Learned Society of Wales, 2011.
Notable published works
Quantum Symmetries on Operator Algebras (David E. Evans and Yasuyuki Kawahigashi, published 21 May 1998) One of the first books to examine post-1981 combinatorial-algebraic developments with respect to operator algebras. Intended for an audience of graduate students and researchers of the field.
Integrable lattice models for conjugate A^(1)_n (David E. Evans and R. E. Behrend, published 2004 in J. Phys. A) Evans's most recently published paper.
References
External links
Homepage
1950 births
20th-century British mathematicians
21st-century British mathematicians
Alumni of New College, Oxford
Alumni of Jesus College, Oxford
Whitehead Prize winners
Academics of Cardiff University
Living people
People from Glanamman
Academics of the Dublin Institute for Advanced Studies
Fellows of the Learned Society of Wales
|
https://en.wikipedia.org/wiki/Factorization%20of%20polynomials
|
In mathematics and computer algebra, factorization of polynomials or polynomial factorization expresses a polynomial with coefficients in a given field or in the integers as the product of irreducible factors with coefficients in the same domain. Polynomial factorization is one of the fundamental components of computer algebra systems.
The first polynomial factorization algorithm was published by Theodor von Schubert in 1793. Leopold Kronecker rediscovered Schubert's algorithm in 1882 and extended it to multivariate polynomials and coefficients in an algebraic extension. But most of the knowledge on this topic is not older than circa 1965 and the first computer algebra systems:
When the long-known finite step algorithms were first put on computers, they turned out to be highly inefficient. The fact that almost any uni- or multivariate polynomial of degree up to 100 and with coefficients of a moderate size (up to 100 bits) can be factored by modern algorithms in a few minutes of computer time indicates how successfully this problem has been attacked during the past fifteen years. (Erich Kaltofen, 1982)
Nowadays, modern algorithms and computers can quickly factor univariate polynomials of degree more than 1000 having coefficients with thousands of digits. For this purpose, even for factoring over the rational numbers and number fields, a fundamental step is a factorization of a polynomial over a finite field.
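As a toy illustration of factoring over a finite field, here is a brute-force Python sketch (exponential-time trial division, nothing like the efficient algorithms discussed below) for univariate polynomials over GF(p), represented as coefficient lists with the constant term first; the helper names are mine:

```python
from itertools import product

def trim(a):
    while len(a) > 1 and a[-1] == 0:
        a.pop()
    return a

def poly_divmod(a, b, p):
    """Quotient and remainder of a by b over GF(p), p prime."""
    a, db = a[:], len(b) - 1
    inv = pow(b[-1], -1, p)
    q = [0] * max(1, len(a) - db)
    for i in range(len(a) - 1, db - 1, -1):
        c = a[i] * inv % p
        if c:
            q[i - db] = c
            for j in range(db + 1):
                a[i - db + j] = (a[i - db + j] - c * b[j]) % p
    return trim(q), trim(a)

def factor_gf(f, p):
    """Monic irreducible factors of f over GF(p), by trial division."""
    f = trim(f[:])
    deg = len(f) - 1
    inv = pow(f[-1], -1, p)
    f = [c * inv % p for c in f]      # make monic
    if deg <= 1:
        return [f] if deg == 1 else []
    for d in range(1, deg // 2 + 1):
        for coeffs in product(range(p), repeat=d):
            g = list(coeffs) + [1]    # monic candidate of degree d
            q, r = poly_divmod(f, g, p)
            if r == [0]:
                return factor_gf(g, p) + factor_gf(q, p)
    return [f]                        # no factor of small degree: irreducible
```

For example, factor_gf([1, 0, 0, 0, 1], 2) recovers x^4 + 1 = (x + 1)^4 over GF(2), even though x^4 + 1 is irreducible over the rationals.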
Formulation of the question
Polynomial rings over the integers or over a field are unique factorization domains. This means that every element of these rings is a product of a constant and a product of irreducible polynomials (those that are not the product of two non-constant polynomials). Moreover, this decomposition is unique up to multiplication of the factors by invertible constants.
Factorization depends on the base field. For example, the fundamental theorem of algebra, which states that every polynomial with complex coefficients has complex roots, implies that a polynomial with integer coefficients can be factored (with root-finding algorithms) into linear factors over the complex field C. Similarly, over the field of reals, the irreducible factors have degree at most two, while there are polynomials of any degree that are irreducible over the field of rationals Q.
The question of polynomial factorization makes sense only for coefficients in a computable field whose every element may be represented in a computer and for which there are algorithms for the arithmetic operations. However, this is not a sufficient condition: Fröhlich and Shepherdson give examples of such fields for which no factorization algorithm can exist.
The fields of coefficients for which factorization algorithms are known include prime fields (that is, the field of the rational numbers and the fields of the integers modulo a prime number) and their finitely generated field extensions. Integer coefficients are also tractable. Kronecker's classical method is
|
https://en.wikipedia.org/wiki/Mario%20Livio
|
Mario Livio (born June 19, 1945) is an Israeli-American astrophysicist and an author of works that popularize science and mathematics. For 24 years (1991–2015) he was an astrophysicist at the Space Telescope Science Institute, which operates the Hubble Space Telescope. He has published more than 400 scientific articles on topics including cosmology, supernova explosions, black holes, extrasolar planets, and the emergence of life in the universe. His book on the irrational number phi, The Golden Ratio: The Story of Phi, the World's Most Astonishing Number (2002), won the Peano Prize and the International Pythagoras Prize for popular books on mathematics.
Scientific career
Livio earned a Bachelor of Science degree in physics and mathematics at the Hebrew University of Jerusalem, a Master of Science degree in theoretical particle physics at the Weizmann Institute, and a Ph.D. in theoretical astrophysics at Tel Aviv University. He was a professor of physics at the Technion – Israel Institute of Technology from 1981 to 1991, before moving to the Space Telescope Science Institute.
Livio has focused much of his research on supernova explosions and their use in determining the rate of expansion of the universe. He has also studied so-called dark energy, black holes, and the formation of planetary systems around young stars. He has contributed to hundreds of papers in peer-reviewed journals on astrophysics. Among his prominent contributions, he has authored and co-authored important papers on topics related to accretion onto compact objects (white dwarfs, neutron stars, and black holes). In 1980, he published one of the very first multi-dimensional numerical simulations of the collapse of a massive star and a supernova explosion. He was one of the pioneers in the study of common envelope evolution of binary stars, and he applied the results to the shaping of planetary nebulae as well as to the progenitors of Type Ia supernovae. Together with D. Eichler, T. Piran, and D. Schramm he published a seminal paper in which the authors predicted that merging neutron stars produce gamma-ray bursts, gravitational waves, and certain heavy elements. All of these predictions have later been confirmed.
In 2009, the American Association for the Advancement of Science (AAAS) Council elected him as a Fellow of the AAAS. Livio was cited for his "distinguished contributions to astrophysics through research on stars and galaxies and through communicating and interpreting science and mathematics to the public." He is also cited in the American Men and Women of Science.
Since 2010, Livio has mainly concentrated on the problem of the emergence of life in the universe. In this context, he co-authored (primarily with Rebecca G. Martin) a series of works related to life on Earth and life's potential emergence on extrasolar planets. In addition, in 2015 he reviewed the scientific achievements of the Hubble Space Telescope in its first 25 years in operation.
Livio has been nomi
|
https://en.wikipedia.org/wiki/Islam%20in%20Iceland
|
Islam in Iceland is a minority religion. The Pew Research Center estimated that the number of Muslims in Iceland was below its 10,000 minimum threshold, and official statistics put the figure at under 1,300, or 0.33% of the total population of 385,230. In 2011, Icelandic Muslims attracted the interest of Al Jazeera; the channel planned a documentary dealing with Muslims in Iceland and New Zealand. Al Jazeera was interested in how Ramadan is honored in the higher latitudes, where the night can be of unusual length when compared to the majority-Muslim lands.
History
The earliest mention of Iceland in Muslim sources originates in the works of Muhammad al-Idrisi (1099–1165/66) in his famous Tabula Rogeriana, which mentions Iceland's location in the North Sea.
The long-distance trading and raiding networks of the Vikings will have meant that various Icelanders, like the Norwegians Rögnvald Kali Kolsson or Harald Hardrada, came into direct contact with the Muslim world during the Middle Ages; indirect connections are best attested by finds of Arabic coins in Iceland, as also widely in the Viking world.
Following Iceland's conversion to Christianity around 1000, some Icelanders encountered the Islamic world through pilgrimage, for example to Jerusalem, of the kind described by Abbot Nikulás Bergsson in his Leiðarvísir og borgarskipan.
From around the late thirteenth century, a fantastical version of the Islamic world is prominent in medieval Icelandic romance, partly inspired by Continental narratives influenced by the Crusades. Although this image generally characterises the Islamic world as 'heathen', and repeats the misconceptions of Islam widespread in the medieval West, it also varies substantially from text to text, sometimes, for example, associating the Islamic world with great wealth, wisdom, or chivalry. Romance continued to serve as a medium for Icelanders to contemplate Islam in the post-medieval period, for example in Jón Oddsson Hjaltalín's eighteenth-century romance Fimmbræðra saga, which combined traditional storytelling with Continental Enlightenment scholarship.
Perhaps the earliest known example of Muslims coming to Iceland occurred in 1627, when the Dutch Muslim Jan Janszoon and his Barbary pirates raided portions of Iceland, including the southwest coast, Vestmannaeyjar, and the eastern fjords. This event is known in Icelandic history as the Tyrkjaránið (the "Turkish Abductions"). An estimated 400-800 Icelanders were sold into slavery.
Islam started to gain presence in Icelandic culture around the 1970s, partly through immigration from the Islamic world (for example Salmann Tamimi) and partly through Icelanders' exposure to Islamic culture while travelling (for example Ibrahim Sverrir Agnarsson). Some of the immigrants simply came of their own accord; others came as refugees, including groups from Kosovo. The Quran was first translated into Icelandic in 1993, with a corrected edition in 2003.
Demographics
Salmann Tamimi es
|
https://en.wikipedia.org/wiki/Stat%20padding
|
In sports, stat padding is an action that improves a player's statistics despite being of little benefit to his or her team or its chance of winning.
Notable players accused of stat padding
Basketball
Russell Westbrook holds the record for most career triple-doubles in the NBA; some commentators theorise that many of them involved heavy stat padding.
Giannis Antetokounmpo was accused of stat padding after he intentionally missed a last-second shot to get the rebound he needed to secure a triple-double. The rebound was later rescinded by the NBA.
See also
Running up the score
References
Sports records and statistics
Sports terminology
|
https://en.wikipedia.org/wiki/Mouse%20%28set%20theory%29
|
In set theory, a mouse is a small model of (a fragment of) Zermelo–Fraenkel set theory with desirable properties. The exact definition depends on the context. In most cases, there is a technical definition of "premouse" and an added condition of iterability (referring to the existence of wellfounded iterated ultrapowers): a mouse is then an iterable premouse. The notion of mouse generalizes the concept of a level of Gödel's constructible hierarchy while being able to incorporate large cardinals.
Mice are important ingredients of the construction of core models. The concept was isolated by Ronald Jensen in the 1970s and has been used since then in core model constructions of many authors.
References
Inner model theory
|
https://en.wikipedia.org/wiki/Core%20model
|
In set theory, the core model is a definable inner model of the universe of all sets. Even though set theorists refer to "the core model", it is not a uniquely identified mathematical object. Rather, it is a class of inner models that under the right set-theoretic assumptions have very special properties, most notably covering properties. Intuitively, the core model is "the largest canonical inner model there is" (Ernest Schimmerling and John R. Steel) and is typically associated with a large cardinal notion. If Φ is a large cardinal notion, then the phrase "core model below Φ" refers to the definable inner model that exhibits the special properties under the assumption that there does not exist a cardinal satisfying Φ. The core model program seeks to analyze large cardinal axioms by determining the core models below them.
History
The first core model was Kurt Gödel's constructible universe L. Ronald Jensen proved the covering lemma for L in the 1970s under the assumption of the non-existence of zero sharp, establishing that L is the "core model below zero sharp". The work of Solovay isolated another core model L[U], for U an ultrafilter on a measurable cardinal (and its associated "sharp", zero dagger). Together with Tony Dodd, Jensen constructed the Dodd–Jensen core model ("the core model below a measurable cardinal") and proved the covering lemma for it and a generalized covering lemma for L[U].
Mitchell used coherent sequences of measures to develop core models containing multiple or higher-order measurables. Still later, the Steel core model used extenders and iteration trees to construct a core model below a Woodin cardinal.
Construction of core models
Core models are constructed by transfinite recursion from small fragments of the core model called mice. An important ingredient of the construction is the comparison lemma that allows giving a wellordering of the relevant mice.
At the level of strong cardinals and above, one constructs an intermediate countably certified core model Kc, and then, if possible, extracts K from Kc.
Properties of core models
Kc (and hence K) is a fine-structural countably iterable extender model below long extenders. (It is not currently known how to deal with long extenders, which establish that a cardinal is superstrong.) Here countable iterability means ω1+1 iterability for all countable elementary substructures of initial segments, and it suffices to develop basic theory, including certain condensation properties. The theory of such models is canonical and well understood. They satisfy GCH, the diamond principle for all stationary subsets of regular cardinals, the square principle (except at subcompact cardinals), and other principles holding in L.
Kc is maximal in several senses. Kc computes the successors of measurable and many singular cardinals correctly. Also, it is expected that under an appropriate weakening of countable certifiability, Kc would correctly compute the successors of all weakly
|
https://en.wikipedia.org/wiki/Wild%20card%20%28foresight%29
|
In a view of the future, a wild card is a low-probability, large-effect event. This concept may be introduced into anticipatory decision-making activity in order to increase the ability of organizations and governments to adapt to surprises arising in turbulent (business) environments. Such sudden and unique incidents might constitute turning points in the evolution of a certain trend or system. Wild cards may or may not be announced by weak signals, which are incomplete and fragmented data from which foresight information might be inferred.
Description
Arguably the best known work in wild cards comes from John Petersen, the author of Out of The Blue – How to Anticipate Big Future Surprises. Petersen's book articulates a series of events that due to their likelihood to surprise and potential for effect might be considered 'Wildcards'. He defines wild cards as "Low Probability, High Impact events that, were they to occur, would severely impact the human condition".
Building on Petersen's work, futurist Marcus Barber developed an additional wild card tool called a "Reference Impact Grid" (RIG) in 2004, which helps strategists and risk managers define vulnerabilities within a given system and then consider what type of event might destabilize that system. Challenging Petersen's hypothesis, his additional thoughts on "cascading discontinuity sets" broke away from the idea that wild cards are always a singular one-off event, introducing the idea that a series of interrelated events might also achieve a similar outcome to the big one-off event. A cascading discontinuity set can achieve a similar outcome to a one-off wild card via a series of smaller, unplanned events that eventually come together to overwhelm the system's ability to cope. As with the big wild card, the end result is the same – the system no longer has the resources available to continue functioning and is overwhelmed.
The concept of wild cards comes close to the black swan theory described by Nassim Nicholas Taleb in his 2007 book The Black Swan. Black swans, however, can be seen as events that are somehow written in destiny (or the stars) and will occur anyhow.
The title refers to the black swans that had existed for millions of years in Australia but became known to non-Aboriginal Australians only when they sailed there. Taleb therefore stresses the surprising side and unpredictability of the black swan as well as its certainty (or unavoidability).
Another concept that comes close to the concept of wild cards and black swans is the tipping point of Malcolm Gladwell's The Tipping Point, which actually is a special form of a wild card that realizes itself by accumulation within a system that reveals itself in a drastic change of the system.
Some authors plea for a better understanding of the nature of events that people share under the concepts as wild cards, black swans, breakthroughs and so on. Victor van Rij uses the concept of wild card and sees these as ev
|
https://en.wikipedia.org/wiki/Shape%20theory
|
Shape theory refers to three different theories:
Shape theory in topology
Shape analysis (disambiguation) in mathematics and computer science
Shape theory of olfaction
|
https://en.wikipedia.org/wiki/Herbert%20Wilf
|
Herbert Saul Wilf (June 13, 1931 – January 7, 2012) was an American mathematician, specializing in combinatorics and graph theory. He was the Thomas A. Scott Professor of Mathematics in Combinatorial Analysis and Computing at the University of Pennsylvania. He wrote numerous books and research papers. Together with Neil Calkin he founded The Electronic Journal of Combinatorics in 1994 and was its editor-in-chief until 2001.
Biography
Wilf was the author of numerous papers and books, and was adviser and mentor to many students and colleagues. His collaborators include Doron Zeilberger and Donald Knuth. One of Wilf's former students is Richard Garfield, the creator of the collectible card game Magic: The Gathering. He also served as a thesis advisor for E. Roy Weintraub in the late 1960s.
Wilf died of a progressive neuromuscular disease in 2012.
Awards
In 1998, Wilf and Zeilberger received the Leroy P. Steele Prize for Seminal Contribution to Research for their joint paper, "Rational functions certify combinatorial identities" (Journal of the American Mathematical Society, 3 (1990) 147–158). The prize citation reads: "New mathematical ideas can have an impact on experts in a field, on people outside the field, and on how the field develops after the idea has been introduced. The remarkably simple idea of the work of Wilf and Zeilberger has already changed a part of mathematics for the experts, for the high-level users outside the area, and the area itself." Their work has been translated into computer packages that have simplified hypergeometric summation.
In 2002, Wilf was awarded the Euler Medal by the Institute of Combinatorics and its Applications.
Selected publications
1971: (editor with Frank Harary) Mathematical Aspects of Electrical Networks Analysis, SIAM-AMS Proceedings, Volume 3, American Mathematical Society
1998: (with N. J. Calkin) "The Number of Independent Sets in a Grid Graph", SIAM Journal on Discrete Mathematics
Books
A=B (with Doron Zeilberger and Marko Petkovšek)
Algorithms and Complexity
generatingfunctionology.
Mathematics for the Physical Sciences
Combinatorial Algorithms, with Albert Nijenhuis
Lecture notes
East Side, West Side
Lectures on Integer Partitions
Lecture Notes on Numerical Analysis (with Dennis Deturck)
See also
Line graph
References
External links
Herbert Wilf's homepage
Wilf's obituary at the University of Pennsylvania
The Electronic Journal of Combinatorics
1931 births
20th-century American mathematicians
21st-century American mathematicians
Combinatorialists
University of Pennsylvania faculty
Mathematicians at the University of Pennsylvania
2012 deaths
The American Mathematical Monthly editors
American textbook writers
Massachusetts Institute of Technology alumni
|
https://en.wikipedia.org/wiki/Arax%C3%A1
|
Araxá () is a municipality in Western Minas Gerais state, Brazil. Its estimated population by IBGE (Brazilian Institute of Geography and Statistics) in 2020 is 107,337 inhabitants and the area of the municipality is , with making up the urban perimeter.
Geography
The elevation of the city center is . The highest point in the municipality is Serra da Bocaina at , and the lowest point is the Capivara river at . In 2004 the annual average temperature was . The annual rainfall was .
Demographics
Population in 1970: 35,676
Population in 1980: 53,404
Population in 1991: 65,911
Population in 2000: 78,997 (77,743 lived in the urban area)
Population in 2010: 93,071
Population in 2020: 107,337
Origin of the name
The town was named after the Amerindian tribe of the Araxás who lived there at the time it was first discovered. The name means "the place from where the sun is seen first".
Araxó, an extinct Jê language, was once spoken in the municipality.
Micro-region
Araxá is the center of a statistical micro-region including 10 municipalities: Araxá, Campos Altos, Ibiá, Nova Ponte, Pedrinópolis, Perdizes, Pratinha, Sacramento, Santa Juliana, and Tapira. The population of this micro-region was 233,114 (2020) in an area of .
Neighboring municipalities
The neighboring municipalities are Perdizes (N and NW), Ibiá (E), Tapira (S), and Sacramento (SW).
Communications
Araxá is served by a good system of federal and state highways that link the municipality to the main economic centers of the country. The highways with access to Araxá are:
BR 452 – Araxá/Uberlândia/Tupaciguara
BR 262 – Belo Horizonte/Vitória/Corumbá
MG 428 – As far as the border of Minas Gerais – São Paulo
Transport
The city is served by the Araxá Airport, which has daily flights to Belo Horizonte, Uberaba, Uberlândia and São Paulo.
The Centro Atlântica Railway goes through the city, but it is limited to cargo transport.
Distances
Belo Horizonte: the state capital is away
Brasília:
Rio de Janeiro:
Uberaba:
São Paulo:
Economic activities
The economy is based on tourism, services, mining, industry, and some agriculture.
Araxá is famous in Brazil for its spa with medicinal mud and mineral waters. One of Brazil's most emblematic hotels, the Grande Hotel, is the center of attraction. Opened in 1944 by governor Benedito Valadares and President Vargas, the hotel brought an era of splendor to Araxá and the inland region of the state. It was the stage for huge social, political and cultural events. Overall, the city's hotel sector has 24 establishments offering 2,708 beds (2004).
One of Brazil's most famous soap operas, Dona Beija, loosely based on the life of a legendary historic character of the city, was filmed here.
In addition to tourism, the city has a niobium mine. That metal is used in special steels and alloys for jet engine components, rocket sub-assemblies, and heat-resisting and combustion equipment. Reserves are about 460 million tons, sufficient to satisfy current
|
https://en.wikipedia.org/wiki/Ternary%20relation
|
In mathematics, a ternary relation or triadic relation is a finitary relation in which the number of places in the relation is three. Ternary relations may also be referred to as 3-adic, 3-ary, 3-dimensional, or 3-place.
Just as a binary relation is formally defined as a set of pairs, i.e. a subset of the Cartesian product of some sets A and B, so a ternary relation is a set of triples, forming a subset of the Cartesian product of three sets A, B and C.
An example of a ternary relation in elementary geometry can be given on triples of points, where a triple is in the relation if the three points are collinear. Another geometric example can be obtained by considering triples consisting of two points and a line, where a triple is in the ternary relation if the two points determine (are incident with) the line.
Examples
Binary functions
A function f : A × B → C in two variables, mapping two values from sets A and B, respectively, to a value in C, associates to every pair (a, b) in A × B an element f(a, b) in C. Therefore, its graph consists of pairs of the form ((a, b), f(a, b)). Such pairs, in which the first element is itself a pair, are often identified with triples. This makes the graph of f a ternary relation between A, B and C, consisting of all triples (a, b, c) satisfying a ∈ A, b ∈ B, and f(a, b) = c.
Cyclic orders
Given any set A whose elements are arranged on a circle, one can define a ternary relation R on A, i.e. a subset of A3 = A × A × A, by stipulating that R(a, b, c) holds if and only if the elements a, b and c are pairwise different and when going from a to c in a clockwise direction one passes through b. For example, if A = {1, 2, ..., 12} represents the hours on a clock face, then R(8, 12, 4) holds (going clockwise from 8 to 4 passes through 12) and R(12, 8, 4) does not hold.
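The clockwise relation on a clock face can be sketched in a few lines of code; the function name and the modular-arithmetic encoding below are illustrative choices, not part of the formal definition:

```python
def cyclic(a, b, c, n=12):
    """True when a, b, c are pairwise distinct and going clockwise
    from a to c on an n-hour clock face passes through b."""
    if len({a, b, c}) < 3:
        return False
    # Measured as clockwise distance from a, the relation holds exactly
    # when b comes strictly before c on the way around the circle.
    return (b - a) % n < (c - a) % n

print(cyclic(8, 12, 4))   # True: going clockwise from 8 to 4 passes 12
print(cyclic(12, 8, 4))   # False
```

Note that the relation is invariant under cyclic rotation of the triple, as expected of a cyclic order.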
Betweenness relations
Ternary equivalence relation
Congruence relation
The ordinary congruence of arithmetic, a ≡ b (mod m),
which holds for three integers a, b, and m if and only if m divides a − b, formally may be considered as a ternary relation. However, usually this is instead considered as a family of binary relations between the a and the b, indexed by the modulus m. For each fixed m, this binary relation has some natural properties, like being an equivalence relation, while the combined ternary relation in general is not studied as one relation.
Typing relation
A typing relation Γ ⊢ e : σ indicates that e is a term of type σ in context Γ, and is thus a ternary relation between contexts, terms and types.
Schröder rules
Given homogeneous relations A, B, and C on a set, a ternary relation can be defined using composition of relations AB and inclusion AB ⊆ C. Within the calculus of relations each relation A has a converse relation AT and a complement relation Ā. Using these involutions, Augustus De Morgan and Ernst Schröder showed that AB ⊆ C is equivalent to ATC̄ ⊆ B̄ and also equivalent to C̄BT ⊆ Ā. The mutual equivalences of these forms, constructed from the ternary relation AB ⊆ C, are called the Schröder rules.
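The Schröder equivalences can be checked by brute force on small finite relations; the encoding of relations as sets of pairs below is just one convenient representation:

```python
from itertools import product
import random

def compose(A, B, S):
    """Relation composition AB over the base set S."""
    return {(x, z) for x in S for z in S
            if any((x, y) in A and (y, z) in B for y in S)}

def converse(A):
    return {(y, x) for (x, y) in A}

def complement(A, S):
    return {p for p in product(S, repeat=2) if p not in A}

S = {0, 1, 2}
pairs = list(product(S, repeat=2))
random.seed(0)
for _ in range(500):
    A, B, C = ({p for p in pairs if random.random() < 0.5} for _ in range(3))
    lhs = compose(A, B, S) <= C                                         # AB ⊆ C
    mid = compose(converse(A), complement(C, S), S) <= complement(B, S)
    rhs = compose(complement(C, S), converse(B), S) <= complement(A, S)
    assert lhs == mid == rhs
print("Schröder equivalences hold on 500 random relations")
```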
References
Further reading
Mathematical relations
|
https://en.wikipedia.org/wiki/Log%20probability
|
In probability theory and computer science, a log probability is simply a logarithm of a probability. The use of log probabilities means representing probabilities on a logarithmic scale (−∞, 0], instead of the standard unit interval [0, 1].
Since the probabilities of independent events multiply, and logarithms convert multiplication to addition, log probabilities of independent events add. Log probabilities are thus practical for computations, and have an intuitive interpretation in terms of information theory: the negative of the average log probability is the information entropy of an event. Similarly, likelihoods are often transformed to the log scale, and the corresponding log-likelihood can be interpreted as the degree to which an event supports a statistical model. The log probability is widely used in implementations of computations with probability, and is studied as a concept in its own right in some applications of information theory, such as natural language processing.
Motivation
Representing probabilities in this way has several practical advantages:
Speed. Since multiplication is more expensive than addition, taking the product of a large number of probabilities is often faster if they are represented in log form. (The conversion to log form is expensive, but is only incurred once.) Multiplication arises from calculating the probability that multiple independent events occur: the probability that all independent events of interest occur is the product of all these events' probabilities.
Accuracy. The use of log probabilities improves numerical stability, when the probabilities are very small, because of the way in which computers approximate real numbers.
Simplicity. Many probability distributions have an exponential form. Taking the log of these distributions eliminates the exponential function, unwrapping the exponent. For example, the log probability of the normal distribution's probability density function is −(x − m)²/(2σ²) − log(σ√(2π)) instead of (1/(σ√(2π))) exp(−(x − m)²/(2σ²)). Log probabilities make some mathematical manipulations easier to perform.
Optimization. Most common probability distributions—notably the exponential family—are only logarithmically concave, and concavity of the objective function plays a key role in the maximization of a function such as probability. Optimizers therefore work better with log probabilities.
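The accuracy point above can be seen directly: the product of many small probabilities underflows double-precision floating point, while the sum of their logs stays representable. A minimal illustration (the numbers are arbitrary):

```python
import math

probs = [0.001] * 200   # 200 independent events, each with p = 0.001

direct = 1.0
for p in probs:
    direct *= p         # 10**-600 underflows double precision to 0.0

log_total = sum(math.log(p) for p in probs)   # about -1381.55, finite

print(direct)       # 0.0
print(log_total)
```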
Representation issues
The logarithm function is not defined for zero, so log probabilities can only represent non-zero probabilities. Since the logarithm of a number in the interval (0, 1) is negative, often the negative log probabilities are used. In that case the log probabilities in the following formulas would be inverted.
Any base can be selected for the logarithm.
Basic manipulations
In this section we name the probabilities in logarithmic space x′ and y′ for short: x′ = log(x), y′ = log(y).
The product of probabilities corresponds to addition in logarithmic space.
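Both manipulations can be sketched in a few lines; the "log-sum-exp" form used for the sum below is the standard numerically stable trick, using only Python's standard library:

```python
import math

x, y = 0.25, 0.5
xp, yp = math.log(x), math.log(y)   # log probabilities

# Product: multiplication becomes addition in log space.
assert math.isclose(xp + yp, math.log(x * y))

# Sum: requires one exponentiation; exponentiating the *difference*
# of the logs keeps the computation stable ("log-sum-exp").
big, small = max(xp, yp), min(xp, yp)
log_sum = big + math.log1p(math.exp(small - big))
assert math.isclose(log_sum, math.log(x + y))
print("log-space product and sum agree with direct computation")
```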
The sum of probabilities is a bit more involved to compute in logarithmic space, requiring the computation of one ex
|
https://en.wikipedia.org/wiki/Apotome%20%28mathematics%29
|
In the historical study of mathematics, an apotome is a line segment formed from a longer line segment by breaking it into two parts, one of which is commensurable only in power to the whole; the other part is the apotome. In this definition, two line segments are said to be "commensurable only in power" when the ratio of their lengths is an irrational number but the ratio of their squared lengths is rational.
Translated into modern algebraic language, an apotome can be interpreted as a quadratic irrational number formed by subtracting one square root of a rational number from another.
This concept of the apotome appears in Euclid's Elements beginning in book X, where Euclid defines two special kinds of apotomes. In an apotome of the first kind, the whole is rational, while in an apotome of the second kind, the part subtracted from it is rational; both kinds of apotomes also satisfy an additional condition. Euclid Proposition XIII.6 states that, if a rational line segment is split into two pieces in the golden ratio, then both pieces may be represented as apotomes.
References
Mathematical terminology
Euclidean geometry
|
https://en.wikipedia.org/wiki/Ronald%20Jensen
|
Ronald Björn Jensen (born April 1, 1936) is an American mathematician who lives in Germany, primarily known for his work in mathematical logic and set theory.
Career
Jensen completed a BA in economics at American University in 1959, and a Ph.D. in mathematics at the University of Bonn in 1964. His supervisor was Gisbert Hasenjaeger. Jensen taught at Rockefeller University, 1969–71, and the University of California, Berkeley, 1971–73. The balance of his academic career was spent in Europe at the
University of Bonn, the University of Oslo, the University of Freiburg, the University of Oxford, and the Humboldt-Universität zu Berlin, from which he retired in 2001. He now resides in Berlin.
Jensen was honored by the Association for Symbolic Logic as the first Gödel Lecturer in 1990. In 2015, the European Set Theory Society awarded him and John R. Steel the Hausdorff Medal for their paper "K without the measurable".
Results
Jensen's better-known results include the:
Axiomatic set theory NFU, a variant of New Foundations (NF) where extensionality is weakened to allow several sets with no elements, and the proof of NFU's consistency relative to Peano arithmetic;
Fine structure theory of the constructible universe L. This work led to his being awarded in 2003 the Leroy P. Steele Prize for Seminal Contribution to Research of the American Mathematical Society for his 1972 paper titled "The fine structure of the constructible hierarchy";
Definitions and proofs of various infinitary combinatorial principles in L, including diamond , square, and morass;
Jensen's covering theorem for L;
General theory of core models and the construction of the Dodd–Jensen core model;
Consistency of CH plus Suslin's hypothesis.
Technique of coding the universe by a real.
Selected publications
Articles
Ronald Jensen, 1969, « On the Consistency of a Slight(?) Modification of Quine's NF », Synthese 19: 250–263. With discussion by Quine.
The fine structure of the constructible hierarchy, Annals of Mathematical Logic, vol 4, Issue 3, August 1972, pp. 229–308
with Anthony J. Dodd: The core model, Annals of Mathematical Logic, vol 20, 1981, pp. 43–75.
with Anthony J. Dodd: The covering lemma for K, Annals of Mathematical Logic, vol 22, 1982, pp. 1–30.
Inner models and large cardinals. Bulletin of Symbolic Logic vol 1, Issue 4 (1995): 393-407.
with John R. Steel: K without the measurable, The Journal of Symbolic Logic, vol 78, Issue 3, 2013, pp. 708–734.
Books
Modelle der Mengenlehre. Widerspruchsfreiheit und Unabhängigkeit der Kontinuumshypothese und des Auswahlaxioms. (Lecture Notes in Mathematics; vol. 37). Springer, Berlin 1967.
as editor with Alexander Prestel: Set theory and model theory: proceedings of an informal symposium held at Bonn, June 1–3, 1979. Berlin; New York: Springer-Verlag, 1981.
with Aaron Beller and Philip Welch: Coding the Universe. Cambridge University Press, Cambridge 1982, .
References
External links
Jensen's page at the Humboldt-Uni
|
https://en.wikipedia.org/wiki/Butterfly%20%28options%29
|
In finance, a butterfly (or simply fly) is a limited risk, non-directional options strategy that is designed to have a high probability of earning a limited profit when the future volatility of the underlying asset is expected to be lower (when long the butterfly) or higher (when short the butterfly) than that asset's current implied volatility.
Long butterfly
A long butterfly position will make profit if the future volatility is lower than the implied volatility.
A long butterfly options strategy consists of the following options:
Long 1 call with a strike price of (X − a)
Short 2 calls with a strike price of X
Long 1 call with a strike price of (X + a)
where X = the spot price (i.e. current market price of underlying) and a > 0.
Using put–call parity a long butterfly can also be created as follows:
Long 1 put with a strike price of (X + a)
Short 2 puts with a strike price of X
Long 1 put with a strike price of (X − a)
where X = the spot price and a > 0.
All the options have the same expiration date.
At expiration the value (but not the profit) of the butterfly will be:
zero if the price of the underlying is below (X − a) or above (X + a)
positive if the price of the underlying is between (X − a) and (X + a)
The maximum value occurs at X (see diagram).
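The expiration value described above can be computed directly from the call payoffs; this sketch assumes the standard payoff max(S − K, 0) for a call with strike K:

```python
def long_butterfly_value(S, X, a):
    """Value at expiration of a long call butterfly:
    long 1 call at strike X - a, short 2 calls at X, long 1 call at X + a."""
    call = lambda K: max(S - K, 0)     # payoff of a single call at expiry
    return float(call(X - a) - 2 * call(X) + call(X + a))

# With X = 100 and a = 10:
print(long_butterfly_value(85, 100, 10))    # 0.0   (below X - a)
print(long_butterfly_value(100, 100, 10))   # 10.0  (maximum, at X)
print(long_butterfly_value(115, 100, 10))   # 0.0   (above X + a)
```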
Short butterfly
A short butterfly position will make profit if the future volatility is higher than the implied volatility.
A short butterfly options strategy consists of the same options as a long butterfly. However now the middle strike option position is a long position and the upper and lower strike option positions are short.
Margin requirements
In the United States, margin requirements for all options positions, including a butterfly, are governed by what is known as Regulation T. However, brokers are permitted to apply more stringent margin requirements than the regulations.
Use in calculating implied distributions
The price of a butterfly centered around some strike price can be used to estimate the implied probability of the underlying being at that strike price at expiry. This means the set of market prices for butterflies centered around different strike prices can be used to infer the market's belief about the probability distribution for the underlying price at expiry. This implied distribution may be different from the lognormal distribution assumed in the popular Black-Scholes model, and studying it can reveal ways in which real-world assets differ from the idealized assets described by Black-Scholes.
Butterfly variations
The double option position in the middle is called the body, while the two other positions are called the wings.
In case the distance between middle strike price and strikes above and below is unequal, such position is referred to as "broken wings" butterfly (or "broken fly" for short).
An iron butterfly recreates the payoff diagram of a butterfly, but with a combination of two calls and two puts.
The option strategy where the middle
|
https://en.wikipedia.org/wiki/1%2B1
|
1+1 is a mathematical expression that evaluates to:
2 (number) (in ordinary arithmetic)
1 (number) (in Boolean algebra with a notation where '+' denotes a logical disjunction)
0 (number) (in Boolean algebra with a notation where '+' denotes 'exclusive or' operation, or in a quotient ring of numbers modulo 2)
The terms 1+1, One Plus One, or One and One may refer to:
1+1
1 + 1 + 1 + 1 + ⋯, a mathematical divergent series
1+1 (TV channel), a Ukrainian TV channel
1+1 (Grin album), 1972
1+1 (Herbie Hancock and Wayne Shorter album), 1997
"1+1" (song), by Beyoncé Knowles
"1+1", a 2021 song by Sia from Music
"1+1", a 2021 song by Pentagon from Love or Take
One Plus One
OnePlus One, an Android smartphone
One Plus One, original title of Jean-Luc Godard's 1968 film Sympathy for the Devil
One Plus One, 2002 graphic novel published by Oni Press
One Plus One (TV programme), a weekly interview show aired by ABC in Australia
Unomásuno (English: One Plus One), a Mexican newspaper
One and One
One and One (musical), an American 1970s off-Broadway musical comedy (also known as One & One)
One and One (song), written by Billy Steinberg, Rick Nowels and Marie-Claire D'Ubaldo and notably covered by Robert Miles feat. Maria Nayler
"One and One (Ain't I Good Enough)", a 1987 song by Wa Wa Nee
One-and-one, a type of free throw in basketball
Fish and chips, in Dublin slang
Other
1&1 Internet, a web hosting company
1 and 1 (SHINee album), a 2016 reissue of their album 1 of 1
See also
One and One Is One (disambiguation)
One Plus One Is One, a 2004 album by Badly Drawn Boy
One on One (disambiguation)
One by One (disambiguation)
|
https://en.wikipedia.org/wiki/NHIS
|
NHIS may refer to:
National Health Insurance Scheme (Ghana)
National Health Insurance Scheme (Nigeria)
National Health Interview Survey, annual survey by the National Center for Health Statistics in the United States
National Homelessness Information System, a system to collect and analyze data on the use of homeless shelters in Canada
New Hampshire International Speedway, former name of the New Hampshire Motor Speedway in the United States
See also
National Health Insurance Scheme (disambiguation)
|
https://en.wikipedia.org/wiki/Trigenus
|
In low-dimensional topology, the trigenus of a closed 3-manifold is an invariant consisting of an ordered triple (g1, g2, g3). It is obtained by minimizing the genera of three orientable handlebodies — with no intersection between their interiors — which decompose the manifold, whereas the Heegaard genus needs only two handlebodies.
That is, a decomposition M = V1 ∪ V2 ∪ V3 with int Vi ∩ int Vj = ∅
for i ≠ j, and gi being the genus of Vi.
For orientable spaces, trig(M) = (0, 0, h),
where h is M's Heegaard genus.
For non-orientable spaces the trigenus has the form (0, g2, g3) or (1, g2, g3), depending on the image of the first Stiefel–Whitney characteristic class under a Bockstein homomorphism.
It has been proved that the number g2 has a relation with the concept of a Stiefel–Whitney surface, that is, an orientable surface which is embedded in M, has minimal genus and represents the first Stiefel–Whitney class under the duality map.
Theorem
A surface S is a Stiefel–Whitney surface in M if and only if S and M − int(N(S)) are orientable.
References
J.C. Gómez Larrañaga, W. Heil, V.M. Núñez. Stiefel–Whitney surfaces and decompositions of 3-manifolds into handlebodies, Topology Appl. 60 (1994), 267–280.
J.C. Gómez Larrañaga, W. Heil, V.M. Núñez. Stiefel–Whitney surfaces and the trigenus of non-orientable 3-manifolds, Manuscripta Math. 100 (1999), 405–422.
"On the trigenus of surface bundles over ", 2005, Soc. Mat. Mex.
Geometric topology
3-manifolds
|
https://en.wikipedia.org/wiki/Francisco%20Javier%20Gonz%C3%A1lez-Acu%C3%B1a
|
Francisco Javier González-Acuña (nickname "Fico") is a mathematician in the UNAM's institute of mathematics and CIMAT, specializing in low-dimensional topology.
Education
He did his graduate studies at Princeton University, obtaining his Ph.D. in 1970. His thesis, written under the supervision of Ralph Fox, was titled On homology spheres.
Research
An early result of González-Acuña is that a group G is the homomorphic image of some knot group if and only if G is finitely generated and has weight at most one. This result (a "remarkable theorem", as Lee Neuwirth called it in his review) was published in 1975 in the Annals of Mathematics. In 1978, together with José María Montesinos, he answered a question posed by Fox, proving the existence of 2-knots whose groups have infinitely many ends.
With Hamish Short, González-Acuña proposed and worked on the cabling conjecture: the only knots in the 3-sphere which admit a reducible Dehn surgery, i.e. a surgery which results in a reducible 3-manifold, are the cable knots.
See also
CIMAT
UNAM
Selected publications
González-Acuña, F., Homomorphs of knot groups, Annals of Mathematics (2) 102 (1975), no. 2, 373–377.
González-Acuña, F., Montesinos, José M., Ends of knot groups, Annals of Mathematics (2) 108 (1978), no. 1, 91–96.
González-Acuña, F., Short, Hamish, Knot surgery and primeness. Math. Proc. Cambridge Philos. Soc. 99 (1986), no. 1, 89–102.
J.C. Gómez-Larrañaga, F.J. González-Acuña, J. Hoste. Minimal Atlases on 3-manifolds, Math. Proc. Camb. Phil. Soc. 109 (1991), 105–115.
References
External links
portrait
Unsolvability of word problems with knot groups, at arXiv-2010 and L'Enseignement Mathematique.
https://www.cimat.mx/es/node/590
https://escueladenudos.matem.unam.mx/
https://www.smm.org.mx/noticia/121/escuela-fico-gonzalez-acuna-de-nudos-y-3-variedades
Living people
Year of birth missing (living people)
20th-century Mexican mathematicians
21st-century Mexican mathematicians
Princeton University alumni
Topologists
Academic staff of the National Autonomous University of Mexico
|
https://en.wikipedia.org/wiki/Johannes%20van%20der%20Corput
|
Johannes Gaultherus van der Corput (4 September 1890 – 16 September 1975) was a Dutch mathematician, working in the field of analytic number theory.
He was appointed professor at the University of Fribourg (Switzerland) in 1922, at the University of Groningen in 1923,
and at the University of Amsterdam in 1946.
He was one of the founders of the Mathematisch Centrum in Amsterdam, of which he also was the first director. From 1953 on he worked in the United States at the University of California, Berkeley, and the University of Wisconsin–Madison.
He introduced the van der Corput lemma, an estimate for oscillatory integrals used in harmonic analysis, and the van der Corput theorem on equidistribution modulo 1.
He became a member of the Royal Netherlands Academy of Arts and Sciences in 1929, and a foreign member in 1953. He was a Plenary Speaker of the ICM in 1936 in Oslo.
See also
van der Corput inequality
van der Corput lemma (harmonic analysis)
van der Corput's method
van der Corput sequence
van der Corput's theorem
References
1890 births
1975 deaths
Scientists from Rotterdam
20th-century Dutch mathematicians
Members of the Royal Netherlands Academy of Arts and Sciences
Number theorists
Academic staff of the University of Fribourg
Academic staff of the University of Groningen
Academic staff of the University of Amsterdam
University of California, Berkeley College of Letters and Science faculty
University of Wisconsin–Madison faculty
|
https://en.wikipedia.org/wiki/Mazur%E2%80%93Ulam%20theorem
|
In mathematics, the Mazur–Ulam theorem states that if V and W are normed spaces over R and the mapping
f : V → W
is a surjective isometry, then f is affine. It was proved by Stanisław Mazur and Stanisław Ulam in response to a question raised by Stefan Banach.
For strictly convex spaces the result is true, and easy, even for isometries which are not necessarily surjective. In this case, for any u and v in V, and for any t in [0, 1], write r = ‖u − v‖ = ‖f(u) − f(v)‖, and denote the closed ball of radius s around w by B̄(w, s). Then tu + (1 − t)v is the unique element of B̄(v, tr) ∩ B̄(u, (1 − t)r), so, since f is injective, f(tu + (1 − t)v) is the unique element of B̄(f(v), tr) ∩ B̄(f(u), (1 − t)r)
and therefore is equal to tf(u) + (1 − t)f(v). Therefore f is an affine map. This argument fails in the general case, because in a normed space which is not strictly convex two tangent balls may meet in some flat convex region of their boundary, not just a single point.
See also
Aleksandrov–Rassias problem
References
Normed spaces
Theorems in functional analysis
|
https://en.wikipedia.org/wiki/192%20%28number%29
|
192 (one hundred [and] ninety-two) is the natural number following 191 and preceding 193.
In mathematics
192 has the prime factorization 2⁶ · 3. Because it has so many small prime factors, it is the smallest number with 14 divisors, namely 1, 2, 3, 4, 6, 8, 12, 16, 24, 32, 48, 64, 96, and 192 itself. Because its only prime factors are 2 and 3, it is a 3-smooth number.
192 is the sum of ten consecutive primes (5 + 7 + 11 + 13 + 17 + 19 + 23 + 29 + 31 + 37).
192 is a Leyland number of the second kind.
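The divisor and prime-sum facts above are easy to verify by brute force:

```python
def divisors(n):
    """All positive divisors of n, by trial division."""
    return [d for d in range(1, n + 1) if n % d == 0]

assert 192 == 2**6 * 3                                       # factorization
assert len(divisors(192)) == 14                              # 14 divisors
assert all(len(divisors(m)) != 14 for m in range(1, 192))    # smallest such
assert sum([5, 7, 11, 13, 17, 19, 23, 29, 31, 37]) == 192    # 10-prime sum
print("all properties of 192 check out")
```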
See also
192 (disambiguation)
References
Integers
|
https://en.wikipedia.org/wiki/Roy%20Morgan
|
Roy Morgan, formerly known as Roy Morgan Research, is an independent Australian social and political market research and public opinion statistics company headquartered in Melbourne, Victoria. It operates nationally as Roy Morgan and internationally as Roy Morgan International. The Morgan Poll, a political poll that tracks voting intentions, is its most well-known product in Australia.
Foundation
The company was founded by Roy Morgan (1908–1985) in 1941; its Executive Chairman today is his son, Gary Morgan, and its CEO is Michele Levine.
Commercial performance
The company has annual turnover of more than A$40 million, and along with the head office in Melbourne, also has offices in Sydney, Perth and Brisbane as well as offices of Roy Morgan International in Auckland, London, New York City, Princeton and Jakarta.
The results are published on their website and by media sources (newspapers, magazines, television, radio, the Internet and online subscription services such as Crikey and Henry Thornton magazine).
Products and services
Morgan Poll
The Morgan Poll is a political polling service that tracks the voting intentions of Australian voters, which caters for detailed demographic and geographic analyses of the results and is widely reported.
The Worm
The company is a major provider of advertising and media planning data and undertakes large government, social and corporate research programs.
Roy Morgan developed the Worm, which first appeared on live TV on the Network Ten political talk program Face to Face.
This leading Audience Response Measurement technology was colloquially described as The Worm because of the live graphs that snake their way over the television screen, displaying the audience's reactions to visual stimuli (such as an election debate) in real time. After being commissioned to provide The Worm to the Nine Network for a decade, Roy Morgan discovered that Nine had secretly registered 'The Worm' as a trademark. Primarily as a result of the ensuing dispute, Roy Morgan changed the branding from The Worm to The Reactor in 2004 and continued to develop the product, which is now primarily conducted online and via The Reactor mobile app.
Roy Morgan conducts the fieldwork for The Melbourne Institute's Household, Income and Labour Dynamics in Australia Survey (HILDA).
See also
Essential Media Communications
Newspoll
YouGov
References
External links
Market research companies of Australia
Public opinion research companies
Companies based in Melbourne
Australian companies established in 1941
Consulting firms established in 1941
Opinion polling in Australia
|
https://en.wikipedia.org/wiki/Brazilian%20Institute%20of%20Public%20Opinion%20and%20Statistics
|
The Brazilian Institute of Public Opinion and Statistics (IBOPE based on the Portuguese language name, Instituto Brasileiro de Opinião Pública e Estatística) does market research to provide information regarding Brazilian and Latin American markets. IBOPE provides data on media, public opinion, voting intention, consumption, behavior, marketing, branding and other issues as required by clients.
Established in 1942, it is listed in the Honomichl Top 25 Global Research Organizations rating. The name IBOPE is listed in the Brazilian dictionary as a synonym of audience ratings research.
History
IBOPE was created in 1942 by the radio broadcaster Auricélio Penteado, owner of Radio Kosmos in São Paulo. In that year, he decided to apply research methodologies he had learned while studying in the United States under George Gallup, the founder of the American Institute of Public Opinion, in order to quantify the size of the audience of his broadcast in Brazil.
When he measured the radio audience in São Paulo, Auricélio found that Radio Kosmos was not among the most listened-to stations, and he thereafter decided to dedicate himself exclusively to research. In 1950, Penteado left the presidency of the company, handing control to a group of directors.
In 1977, Paulo de Tarso Montenegro became the president of the company. One year later, he invited his children, Carlos Augusto Montenegro and Luís Paulo Montenegro, to join the company. IBOPE carried out the first voting intention polls, predicting with great precision the outcomes of electoral contests in the late 1970s.
In the 1990s, IBOPE partnered with entrepreneurs in Mexico, Colombia, Venezuela, Ecuador, Peru, Chile and Argentina. From these partnerships, the company began to supply consolidated data for Latin American cable TV. Currently, in addition to Brazil, the company has offices in 14 countries.
In 2014, the IBOPE Media division (measuring media audience and advertising investment) was sold to the Kantar Group, which changed its name to Kantar Ibope Media.
In January 2021, when the rights to use the IBOPE name expired, the institute's founders and directors founded the Inteligência em Pesquisa e Consultoria Estratégica (Ipec) group.
Measurement
IBOPE was the first company worldwide to offer the service of TV audience measurement in real time from São Paulo in 1988.
In every city where the audience is measured, IBOPE chooses a group of homes at random to represent the population. With the authorization of the family members, each television is equipped with a device called a peoplemeter, which identifies and records which channels are being watched.
The device sends, over the cellular network, a record of every channel change made by the viewer to a central collection centre, which processes and analyzes the ratings and distributes them to customers.
The system of television audience measurement in real time is used in Brazil, Chile and Argentina.
Top-rated programs
This table lists all the TV shows with the highest
|
https://en.wikipedia.org/wiki/Duarte%20Leite
|
Duarte Leite Pereira da Silva, GCC (11 August 1864, Porto – 29 September 1950, Porto), was a Portuguese historian, mathematician, journalist, diplomat and politician. He graduated in Mathematics at the University of Coimbra in 1885. He taught at the Polytechnic Academy of Porto from 1886 to 1911. Meanwhile, he was also the director of the daily newspaper "A Pátria". As a historian, he published many studies, later compiled in "História dos Descobrimentos" (History of the Discoveries), in 2 volumes.
Political career
After the overthrow of the Portuguese monarchy in 1910, he was Minister of Finance during the Augusto de Vasconcelos government (1911–1912), and succeeded him, as Prime Minister and Minister of Internal Affairs, from 16 June 1912 to 9 January 1913.
From 1914 to 1931 he served as Portuguese ambassador to Brazil. He was a candidate for the Presidency of the Republic in the elections held in the Congress of the Republic in 1925. Faithful all his life to his left-wing republican principles, he became a member of the 1945–48 Movement of Democratic Unity, which during its brief lifespan functioned as the first form of legalized opposition to Salazar's far-right Estado Novo (New State) regime.
External links
1864 births
1950 deaths
People from Porto
Portuguese Republican Party politicians
Prime Ministers of Portugal
Finance ministers of Portugal
Government ministers of Portugal
Ambassadors of Portugal to Brazil
Portuguese anti-fascists
20th-century Portuguese historians
Portuguese journalists
Male journalists
University of Coimbra alumni
|
https://en.wikipedia.org/wiki/Lee%20Hwa%20Chung%20theorem
|
The Lee Hwa Chung theorem is a theorem in symplectic topology.
The statement is as follows. Let M be a symplectic manifold with symplectic form ω. Let α be a differential k-form on M which is invariant for all Hamiltonian vector fields. Then:
If k is odd, α = 0.
If k is even, α = c · ω^(k/2), where c ∈ R is a constant.
References
Lee, John M., Introduction to Smooth Manifolds, Springer-Verlag, New York (2003) . Graduate-level textbook on smooth manifolds.
Hwa-Chung, Lee, "The Universal Integral Invariants of Hamiltonian Systems and Application to the Theory of Canonical Transformations", Proceedings of the Royal Society of Edinburgh. Section A. Mathematical and Physical Sciences, 62(03), 237–246. doi:10.1017/s0080454100006646
Symplectic topology
Theorems in differential geometry
|
https://en.wikipedia.org/wiki/Inner%20model%20theory
|
In set theory, inner model theory is the study of certain models of ZFC or some fragment or strengthening thereof. Ordinarily these models are transitive subsets or subclasses of the von Neumann universe V, or sometimes of a generic extension of V. Inner model theory studies the relationships of these models to determinacy, large cardinals, and descriptive set theory. Despite the name, it is considered more a branch of set theory than of model theory.
Examples
The class of all sets is an inner model containing all other inner models.
The first non-trivial example of an inner model was the constructible universe L developed by Kurt Gödel. Every model M of ZF has an inner model LM satisfying the axiom of constructibility, and this will be the smallest inner model of M containing all the ordinals of M. Regardless of the properties of the original model, LM will satisfy the generalized continuum hypothesis and combinatorial axioms such as the diamond principle ◊.
HOD, the class of sets that are hereditarily ordinal definable, forms an inner model, which satisfies ZFC.
The sets that are hereditarily definable over a countable sequence of ordinals form an inner model, used in Solovay's theorem.
L(R), the smallest inner model containing all real numbers and all ordinals.
L[U], the class constructed relative to a normal, non-principal, κ-complete ultrafilter U over an ordinal κ (see zero dagger).
Consistency results
One important use of inner models is the proof of consistency results. If it can be shown that every model of an axiom A has an inner model satisfying axiom B, then if A is consistent, B must also be consistent. This analysis is most useful when A is an axiom independent of ZFC, for example a large cardinal axiom; it is one of the tools used to rank axioms by consistency strength.
References
See also
Core model
Inner model
|
https://en.wikipedia.org/wiki/Tetrahedral%20molecular%20geometry
|
In a tetrahedral molecular geometry, a central atom is located at the center with four substituents that are located at the corners of a tetrahedron. The bond angles are arccos(−1/3) = 109.4712206...° ≈ 109.5° when all four substituents are the same, as in methane (CH4) as well as its heavier analogues. Methane and other perfectly symmetrical tetrahedral molecules belong to point group Td, but most tetrahedral molecules have lower symmetry. Tetrahedral molecules can be chiral.
Tetrahedral bond angle
The bond angle for a symmetric tetrahedral molecule such as CH4 may be calculated using the dot product of two vectors. As shown in the diagram, the molecule can be inscribed in a cube with the tetravalent atom (e.g. carbon) at the cube centre which is the origin of coordinates, O. The four monovalent atoms (e.g. hydrogens) are at four corners of the cube (A, B, C, D) chosen so that no two atoms are at adjacent corners linked by only one cube edge. If the edge length of the cube is chosen as 2 units, then the two bonds OA and OB correspond to the vectors a = (1, –1, 1) and b = (1, 1, –1), and the bond angle θ is the angle between these two vectors. This angle may be calculated from the dot product of the two vectors, defined as a • b = ||a|| ||b|| cos θ where ||a|| denotes the length of vector a. As shown in the diagram, the dot product here is –1 and the length of each vector is √3, so that cos θ = –1/3 and the tetrahedral bond angle θ = arccos(–1/3) ≃ 109.47°.
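The calculation just described can be reproduced numerically. The short Python sketch below (illustrative, not from the article) computes the dot product of the two cube-corner bond vectors and recovers the tetrahedral angle.

```python
import math

# Bond vectors from the cube centre O to corners A and B,
# for a cube of edge length 2 (as in the construction above).
a = (1, -1, 1)
b = (1, 1, -1)

dot = sum(x * y for x, y in zip(a, b))        # a . b = -1
len_a = math.sqrt(sum(x * x for x in a))      # ||a|| = sqrt(3)
len_b = math.sqrt(sum(x * x for x in b))      # ||b|| = sqrt(3)

cos_theta = dot / (len_a * len_b)             # -1/3
theta_deg = math.degrees(math.acos(cos_theta))
print(round(theta_deg, 4))                    # 109.4712
```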
Examples
Main group chemistry
Aside from virtually all saturated organic compounds, most compounds of Si, Ge, and Sn are tetrahedral. Often tetrahedral molecules feature multiple bonding to the outer ligands, as in xenon tetroxide (XeO4), the perchlorate ion (ClO4−), the sulfate ion (SO42−), and the phosphate ion (PO43−). Thiazyl trifluoride (NSF3) is tetrahedral, featuring a sulfur-to-nitrogen triple bond.
Other molecules have a tetrahedral arrangement of electron pairs around a central atom; for example ammonia (NH3), with the nitrogen atom surrounded by three hydrogens and one lone pair. However, the usual classification considers only the bonded atoms and not the lone pair, so that ammonia is actually considered as pyramidal. The H–N–H angles are 107°, contracted from 109.5°. This difference is attributed to the influence of the lone pair, which exerts a greater repulsive influence than a bonded atom.
Transition metal chemistry
Again the geometry is widespread, particularly so for complexes where the metal has d0 or d10 configuration. Illustrative examples include tetrakis(triphenylphosphine)palladium(0) (Pd(PPh3)4), nickel carbonyl (Ni(CO)4), and titanium tetrachloride (TiCl4). Many complexes with incompletely filled d-shells are also tetrahedral, e.g. the tetrahalides of iron(II), cobalt(II), and nickel(II).
Water structure
In the gas phase, a single water molecule has an oxygen atom surrounded by two hydrogens and two lone pairs, and the geometry is simply described as bent without considering the nonbonding lone pairs.
However, i
|
https://en.wikipedia.org/wiki/William%20Bedwell
|
William Bedwell (1561 – 5 May 1632 near London) was an English priest and scholar, specializing in Arabic and other "oriental" languages as well as in mathematics.
Bedwell was educated at Trinity College, Cambridge. He served the Church of England as Rector of St Ethelburga's Bishopsgate and Vicar of All Hallows, Tottenham (known at the time as 'Tottenham High Cross') from 1607. He was the author of the first local history of the area, A Briefe Description of the Towne of Tottenham.
He published in quarto an edition of the Epistles of John in Arabic, with a Latin version, printed by the Raphelengius family at Antwerp in 1612. He also left many Arabic manuscripts to the University of Cambridge and a typeface for printing them. According to McClure, it was Bedwell, and not Thomas Van Erpen, who was the first to revive the study of Arabic literature in Europe. His uncompleted preparations for an Arabic lexicon were eclipsed by the publication of a similar work by Jacobus Golius in 1653. He asserted that knowledge of Arabic was necessary to a deeper understanding of ancient Hebrew. Bedwell's manuscripts were loaned, following his death, to the University of Cambridge, where they were consulted by Edmund Castell during the creation of the monumental Lexicon Heptaglotton (1669). Another manuscript, for a dictionary of Persian, was in the possession of William Laud, Archbishop of Canterbury, and now resides at the Bodleian Library. Besides his Arabic Epistles of John, his best known published work was A Discovery of the Impostures of Mahomet and of the Koran, (1615). He was among the 'First Westminster Company' charged by James I of England with the translation of the first twelve books of the King James Version of the Bible.
Bedwell also invented a ruler for geometrical purposes, similar to the Gunter's scale. He died at his vicarage at the age of 72.
References
McClure, Alexander. (1858) The Translators Revived: A Biographical Memoir of the Authors of the English Version of the Holy Bible. Mobile, Alabama: R. E. Publications (republished by the Maranatha Bible Society, 1984 ASIN B0006YJPI8 )
Nicolson, Adam. (2003) God's Secretaries: The Making of the King James Bible. New York: HarperCollins
External links
1561 births
1632 deaths
Alumni of Trinity College, Cambridge
English lexicographers
Translators of the King James Version
Christian Hebraists
Writers from London
17th-century English Anglican priests
16th-century English mathematicians
17th-century English mathematicians
16th-century English historians
17th-century English historians
|
https://en.wikipedia.org/wiki/Relative%20contact%20homology
|
In mathematics, in the area of symplectic topology, relative contact homology is an invariant of spaces together with a chosen subspace. Namely, it is associated to a contact manifold and one of its Legendrian submanifolds. It is a part of a more general invariant known as symplectic field theory, and is defined using pseudoholomorphic curves.
Legendrian knots
The simplest case yields invariants of Legendrian knots inside contact three-manifolds. The relative contact homology has been shown to be a strictly more powerful invariant than the "classical invariants", namely Thurston-Bennequin number and rotation number (within a class of smooth knots).
Yuri Chekanov developed a purely combinatorial version of relative contact homology for Legendrian knots, i.e. a combinatorially defined invariant that reproduces the results of relative contact homology.
Tamas Kalman developed a combinatorial invariant for loops of Legendrian knots, with which he detected differences between the fundamental groups of the space of smooth knots and of the space of Legendrian knots.
Higher-dimensional legendrian submanifolds
In the work of Lenhard Ng, relative SFT is used to obtain invariants of smooth knots: a knot or link inside a topological three-manifold gives rise to a Legendrian torus inside a contact five-manifold, consisting of the unit conormal bundle to the knot inside the unit cotangent bundle of the ambient three-manifold. The relative SFT of this pair is a differential graded algebra; Ng derives a powerful knot invariant from a combinatorial version of the zero-th degree part of the homology. It has the form of a finitely presented tensor algebra over a certain ring of multivariable Laurent polynomials with integer coefficients. This invariant assigns distinct invariants to (at least) knots of at most ten crossings, and dominates the Alexander polynomial and the A-polynomial (and thus distinguishes the unknot).
See also
Relative homology
References
Lenhard Ng, Conormal bundles, contact homology, and knot invariants.
Tobias Ekholm, John Etnyre, Michael G. Sullivan, Legendrian Submanifolds in $R^{2n+1}$ and Contact Homology.
Yuri Chekanov, "Differential Algebra of Legendrian Links". Inventiones Mathematicae 150 (2002), pp. 441-483.
Contact homology and one parameter families of Legendrian knots by Tamas Kalman
Symplectic topology
Morse theory
Homology theory
Contact geometry
|
https://en.wikipedia.org/wiki/Conway%20polynomial
|
In mathematics, Conway polynomial can refer to:
the Alexander–Conway polynomial in knot theory
the Conway polynomial (finite fields)
the polynomial of degree 71 that has Conway's constant as its single positive real root
|
https://en.wikipedia.org/wiki/Permutation%20%28disambiguation%29
|
In mathematics, permutation relates to the act of arranging all the members of a set into some sequence or order.
Permutation may also refer to:
An alteration or transformation of a previous object or concept; see iteration
Permutation, as a mathematical concept
Permutation test in statistics
Permutation (Cryptography), a series of linked mathematical operations used in block cipher algorithms such as AES.
Permutation box, a cryptography method of bit shuffling used to permute or transpose bits across S-boxes.
Permutation (music), as a concept related to musical set theory
Permutation (Amon Tobin album), 1998
Permutation (Bill Laswell album), 1999
"Permutation" (song), an instrumental song by the Red Hot Chili Peppers
|
https://en.wikipedia.org/wiki/Inverse%20hyperbolic%20functions
|
In mathematics, the inverse hyperbolic functions are inverses of the hyperbolic functions, analogous to the inverse circular functions. There are six in common use: inverse hyperbolic sine, inverse hyperbolic cosine, inverse hyperbolic tangent, inverse hyperbolic cosecant, inverse hyperbolic secant, and inverse hyperbolic cotangent. They are commonly denoted by the symbols for the hyperbolic functions, prefixed with arc- or ar-.
For a given value of a hyperbolic function, the inverse hyperbolic function provides the corresponding hyperbolic angle measure, for example arsinh(sinh a) = a and sinh(arsinh x) = x. Hyperbolic angle measure is the length of an arc of a unit hyperbola as measured in the Lorentzian plane (not the length of a hyperbolic arc in the Euclidean plane), and twice the area of the corresponding hyperbolic sector. This is analogous to the way circular angle measure is the arc length of an arc of the unit circle in the Euclidean plane or twice the area of the corresponding circular sector. Alternately, hyperbolic angle is the area of a sector of the hyperbola xy = 1. Some authors call the inverse hyperbolic functions hyperbolic area functions.
Hyperbolic functions occur in the calculation of angles and distances in hyperbolic geometry. They also occur in the solutions of many linear differential equations (such as the equation defining a catenary), cubic equations, and Laplace's equation in Cartesian coordinates. Laplace's equation is important in many areas of physics, including electromagnetic theory, heat transfer, fluid dynamics, and special relativity.
Notation
The earliest and most widely adopted symbols use the prefix arc- (that is: arcsinh, arccosh, arctanh, arccsch, arcsech, arccoth), by analogy with the inverse circular functions (arcsin, etc.). For a unit hyperbola ("Lorentzian circle") in the Lorentzian plane (pseudo-Euclidean plane of signature (1, 1)) or in the hyperbolic number plane, the hyperbolic angle measure (argument to the hyperbolic functions) is indeed the arc length of a hyperbolic arc.
Also common is the notation sinh^(−1), cosh^(−1), etc., although care must be taken to avoid misinterpretations of the superscript −1 as an exponent. The standard convention is that sinh^(−1)(x) means the inverse function while sinh(x)^(−1) means the reciprocal 1/sinh(x). Especially inconsistent is the conventional use of positive integer superscripts to indicate an exponent rather than function composition, e.g. sinh^2(x) conventionally means (sinh x)^2 and not sinh(sinh x).
Because the argument of hyperbolic functions is not the arclength of a hyperbolic arc in the Euclidean plane, some authors have condemned the prefix arc-, arguing that the prefix ar- (for area) or arg- (for argument) should be preferred. Following this recommendation, the ISO 80000-2 standard abbreviations use the prefix ar- (that is: arsinh, arcosh, artanh, arcsch, arsech, arcoth).
In computer programming languages, inverse circular and hyperbolic functions are often named with the shorter prefix a- (asinh, etc.).
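As an illustration of that naming, the Python sketch below (illustrative, not from the article; it assumes the standard-library math module, whose a-prefixed functions implement these inverses) cross-checks the well-known logarithmic identities for arsinh, arcosh, and artanh against the built-ins.

```python
import math

# Logarithmic forms of the inverse hyperbolic functions,
# checked numerically against the stdlib's a- prefixed versions.
def arsinh(x):
    return math.log(x + math.sqrt(x * x + 1))

def arcosh(x):          # defined for x >= 1
    return math.log(x + math.sqrt(x * x - 1))

def artanh(x):          # defined for |x| < 1
    return 0.5 * math.log((1 + x) / (1 - x))

for x in (0.5, 1.0, 2.0):
    assert math.isclose(arsinh(x), math.asinh(x))
for x in (1.0, 2.0, 10.0):
    assert math.isclose(arcosh(x), math.acosh(x))
for x in (-0.9, 0.0, 0.5):
    assert math.isclose(artanh(x), math.atanh(x))

print("logarithmic identities confirmed")
```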
This article will consistently adopt the prefix ar- for convenience.
Definitions in terms of logarithms
Since the hyperbolic functions
|