https://en.wikipedia.org/wiki/Matrix%20calculus
In mathematics, matrix calculus is a specialized notation for doing multivariable calculus, especially over spaces of matrices. It collects the various partial derivatives of a single function with respect to many variables, and/or of a multivariate function with respect to a single variable, into vectors and matrices that can be treated as single entities. This greatly simplifies operations such as finding the maximum or minimum of a multivariate function and solving systems of differential equations. The notation used here is commonly used in statistics and engineering, while the tensor index notation is preferred in physics. Two competing notational conventions split the field of matrix calculus into two separate groups. The two groups can be distinguished by whether they write the derivative of a scalar with respect to a vector as a column vector or a row vector. Both of these conventions are possible even when the common assumption is made that vectors should be treated as column vectors when combined with matrices (rather than row vectors). A single convention can be somewhat standard throughout a single field that commonly uses matrix calculus (e.g. econometrics, statistics, estimation theory and machine learning). However, even within a given field different authors can be found using competing conventions. Authors of both groups often write as though their specific conventions were standard. Serious mistakes can result when combining results from different authors without carefully verifying that compatible notations have been used. Definitions of these two conventions and comparisons between them are collected in the layout conventions section. Scope Matrix calculus refers to a number of different notations that use matrices and vectors to collect the derivative of each component of the dependent variable with respect to each component of the independent variable. 
In general, the independent variable can be a scalar, a vector, or a matrix while the dependent variable can be any of these as well. Each different situation will lead to a different set of rules, or a separate calculus, using the broader sense of the term. Matrix notation serves as a convenient way to collect the many derivatives in an organized way. As a first example, consider the gradient from vector calculus. For a scalar function of three independent variables, f(x1, x2, x3), the gradient is given by the vector equation

∇f = (∂f/∂x1) x̂1 + (∂f/∂x2) x̂2 + (∂f/∂x3) x̂3,

where x̂i represents a unit vector in the xi direction for i = 1, 2, 3. This type of generalized derivative can be seen as the derivative of a scalar, f, with respect to a vector, x, and its result can be easily collected in vector form. More complicated examples include the derivative of a scalar function with respect to a matrix, known as the gradient matrix, which collects the derivative with respect to each matrix element in the corresponding position in the resulting matrix. In that case the scalar must be a function of each of the independent variables in the matrix.
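As a sketch of how such derivatives are collected in practice, a gradient of this kind can be computed symbolically (the function f below is an arbitrary illustrative choice, not one from the text):

```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
f = x1**2 + 3*x1*x2 + x3          # an arbitrary scalar function of three variables

# The gradient collects all partial derivatives of the scalar f
# with respect to the components of the vector (x1, x2, x3).
grad = [sp.diff(f, v) for v in (x1, x2, x3)]
print(grad)  # [2*x1 + 3*x2, 3*x1, 1]
```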
https://en.wikipedia.org/wiki/Egorov%27s%20theorem
In measure theory, an area of mathematics, Egorov's theorem establishes a condition for the uniform convergence of a pointwise convergent sequence of measurable functions. It is also named the Severini–Egoroff theorem or Severini–Egorov theorem, after Carlo Severini, an Italian mathematician, and Dmitri Egorov, a Russian physicist and geometer, who published independent proofs in 1910 and 1911, respectively. Egorov's theorem can be used along with compactly supported continuous functions to prove Lusin's theorem for integrable functions.

Historical note
The first proof of the theorem was given by Carlo Severini in 1910: he used the result as a tool in his research on series of orthogonal functions. His work apparently remained unnoticed outside Italy, probably because it was written in Italian, appeared in a scientific journal with limited circulation, and was regarded only as a means to obtain other theorems. A year later Dmitri Egorov published his independently proved result, and the theorem became widely known under his name; however, it is not uncommon to find it referred to as the Severini–Egoroff theorem. The theorem was later proved independently in the nowadays common abstract measure space setting; an earlier generalization is due to Nikolai Luzin, who succeeded in slightly relaxing the requirement that the domain of convergence of the pointwise converging functions have finite measure. Further generalizations were given much later by Pavel Korovkin and by Gabriel Mokobodzki.

Formal statement and proof
Statement
Let (fn) be a sequence of M-valued measurable functions, where M is a separable metric space, on some measure space (X, Σ, μ), and suppose there is a measurable subset A ⊆ X, with finite μ-measure, such that (fn) converges μ-almost everywhere on A to a limit function f.
The following result holds: for every ε > 0, there exists a measurable subset B of A such that μ(B) < ε, and (fn) converges to f uniformly on A \ B. Here, μ(B) denotes the μ-measure of B. In words, the theorem says that pointwise convergence almost everywhere on A implies the apparently much stronger uniform convergence everywhere except on some subset B of arbitrarily small measure. This type of convergence is also called almost uniform convergence.

Discussion of assumptions and a counterexample
The hypothesis μ(A) < ∞ is necessary. To see this, it is simple to construct a counterexample when μ is the Lebesgue measure: consider the sequence of real-valued indicator functions fn = 1[n, n+1] defined on the real line. This sequence converges pointwise to the zero function everywhere but does not converge uniformly on ℝ \ B for any set B of finite measure; a counterexample in the general n-dimensional real vector space can be constructed analogously. The separability of the metric space is needed to make sure that for M-valued, measurable functions f and g, the distance d(f(x
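Written symbolically (a reconstruction consistent with the prose above, not the article's original typesetting), the conclusion of the theorem reads:

```latex
\forall \varepsilon > 0 \;\; \exists\, B \subseteq A \text{ measurable}: \quad
\mu(B) < \varepsilon
\quad\text{and}\quad
\sup_{x \in A \setminus B} d\bigl(f_n(x), f(x)\bigr) \xrightarrow{\;n \to \infty\;} 0 .
```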
https://en.wikipedia.org/wiki/Winsorizing
Winsorizing or winsorization is the transformation of statistics by limiting extreme values in the statistical data to reduce the effect of possibly spurious outliers. It is named after the engineer-turned-biostatistician Charles P. Winsor (1895–1951). The effect is the same as clipping in signal processing. The distribution of many statistics can be heavily influenced by outliers. A typical strategy is to set all outliers to a specified percentile of the data; for example, a 90% winsorization would see all data below the 5th percentile set to the 5th percentile, and data above the 95th percentile set to the 95th percentile. Winsorized estimators are usually more robust to outliers than their more standard forms, although there are alternatives, such as trimming, that will achieve a similar effect. Example Consider the data set consisting of: {92, 19, 101, 58, 1053, 91, 26, 78, 10, 13, −40, 101, 86, 85, 15, 89, 89, 28, −5, 41} (N = 20, mean = 101.5) The data below the 5th percentile lies between −40 and −5, while the data above the 95th percentile lies between 101 and 1053 (pertinent values shown in bold); accordingly, a 90% winsorization would result in the following: {92, 19, 101, 58, 101, 91, 26, 78, 10, 13, −5, 101, 86, 85, 15, 89, 89, 28, −5, 41} (N = 20, mean = 55.65) After winsorization the mean has dropped to nearly half its previous value, and is consequently more in line with the data it represents. 
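The 90% winsorization of this data set can be reproduced by hand, for example with NumPy (a sketch; here the 5th/95th percentile cut points for N = 20 are taken to be the second-smallest and second-largest observations, matching the values used in the text):

```python
import numpy as np

data = np.array([92, 19, 101, 58, 1053, 91, 26, 78, 10, 13,
                 -40, 101, 86, 85, 15, 89, 89, 28, -5, 41])

s = np.sort(data)
low, high = s[1], s[-2]            # cut points: -5 and 101
wins = np.clip(data, low, high)    # replace extremes instead of dropping them
print(wins.mean())  # 55.65
```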
Python can winsorize data using the SciPy library:

from scipy.stats.mstats import winsorize
winsorize([92, 19, 101, 58, 1053, 91, 26, 78, 10, 13, -40, 101, 86, 85, 15, 89, 89, 28, -5, 41],
          limits=[0.05, 0.05])

R can winsorize data using the DescTools package:

library(DescTools)
a <- c(92, 19, 101, 58, 1053, 91, 26, 78, 10, 13, -40, 101, 86, 85, 15, 89, 89, 28, -5, 41)
DescTools::Winsorize(a, probs = c(0.05, 0.95))

Distinction from trimming
Note that winsorizing is not equivalent to simply excluding data, which is a simpler procedure called trimming or truncation: winsorizing is instead a method of censoring data. In a trimmed estimator, the extreme values are discarded; in a winsorized estimator, the extreme values are instead replaced by certain percentiles (the trimmed minimum and maximum). Thus a winsorized mean is not the same as a truncated mean. For instance, the 10% trimmed mean is the average of the 5th to 95th percentile of the data, while the 90% winsorized mean sets the bottom 5% to the 5th percentile, the top 5% to the 95th percentile, and then averages the data. In the previous example the trimmed mean would be obtained from the smaller set:

{92, 19, 101, 58, 91, 26, 78, 10, 13, 101, 86, 85, 15, 89, 89, 28, −5, 41} (N = 18, mean = 56.5)

In this case, the winsorized mean can equivalently be expressed as a weighted average of the truncated mean and the 5th and 95th percentiles (for the 90% winsorized mean, 0.05 times the 5th percentile, 0.9 times the 10% trimmed mean, and 0.05 times the 95th percentile) though in general winsorized
https://en.wikipedia.org/wiki/Transvection
Transvection may refer to:
Transvection (flying)
Transvection (genetics)

Mathematics
The creation of a transvectant in invariant theory
A shear mapping in linear algebra
Raising and lowering indices
https://en.wikipedia.org/wiki/Tav
Tav or TAV may refer to: Math, science and technology Tav (number), in set theory the collection of all cardinal numbers The Advanced Visualizer, a 3D graphics software package Tomato aspermy virus, a plant virus Tropical Atlantic Variability, in meteorology Ti-6Al-4V, a titanium alloy containing aluminum and vanadium Treno Alta Velocità, a special-purpose entity for the construction of a high-speed rail network in Italy Codes TAV, IATA airport code for Tau Airport, American Samoa TAV, airline code for Compañía de Servicios Aéreos Tavisa tav, ISO 639-3 code for the Tatuyo language of Colombia Nickname Tav Falco (born 1945), American musical performer, performance artist, actor, filmmaker and photographer Octavio M. Salati (1914–2001), American engineer, academic and educator Other uses Tav (letter), the last letter of many Semitic abjads TAV Airports Holding (Tepe Akfen Ventures), a Turkish airport operator Treno Alta Velocità, the working name for the planning and construction of the Italian high-speed rail network The Artists Village, an experimental arts group in Singapore Toon-A-Vision channel See also TAV-8B, the two-seat version of the McDonnell Douglas AV-8B Harrier II aircraft TAV RJ-SP, the Brazilian Rio–São Paulo high-speed rail system
https://en.wikipedia.org/wiki/Gr%C3%B6nwall%27s%20inequality
In mathematics, Grönwall's inequality (also called Grönwall's lemma or the Grönwall–Bellman inequality) allows one to bound a function that is known to satisfy a certain differential or integral inequality by the solution of the corresponding differential or integral equation. There are two forms of the lemma, a differential form and an integral form. For the latter there are several variants. Grönwall's inequality is an important tool for obtaining various estimates in the theory of ordinary and stochastic differential equations. In particular, it provides a comparison theorem that can be used to prove uniqueness of a solution to the initial value problem; see the Picard–Lindelöf theorem. It is named for Thomas Hakon Grönwall (1877–1932). Grönwall is the Swedish spelling of his name, but he spelled his name as Gronwall in his scientific publications after emigrating to the United States. The inequality was first proven by Grönwall in 1919 (the integral form below with α and β being constants). Richard Bellman proved a slightly more general integral form in 1943. A nonlinear generalization of the Grönwall–Bellman inequality is known as the Bihari–LaSalle inequality. Other variants and generalizations can be found in Pachpatte, B.G. (1998).

Differential form
Let I denote an interval of the real line of the form [a, ∞) or [a, b] or [a, b) with a < b. Let β and u be real-valued continuous functions defined on I. If u is differentiable in the interior I° of I (the interval I without the end points a and possibly b) and satisfies the differential inequality

u′(t) ≤ β(t) u(t),  t ∈ I°,

then u is bounded by the solution of the corresponding differential equation v′(t) = β(t) v(t):

u(t) ≤ u(a) exp(∫_a^t β(s) ds)

for all t ∈ I. Remark: There are no assumptions on the signs of the functions β and u.

Proof
Define the function

v(t) = exp(∫_a^t β(s) ds),  t ∈ I.

Note that v satisfies v′(t) = β(t) v(t) with v(a) = 1 and v(t) > 0 for all t ∈ I. By the quotient rule,

d/dt (u(t)/v(t)) = (u′(t) v(t) − v′(t) u(t)) / v(t)² ≤ (β(t) u(t) v(t) − β(t) v(t) u(t)) / v(t)² = 0.

Thus the derivative of the function u(t)/v(t) is non-positive, and the function is bounded above by its value at the initial point a of the interval I: u(t)/v(t) ≤ u(a)/v(a) = u(a), which is Grönwall's inequality.
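As a quick numerical illustration of the differential form (an assumed toy example, not taken from the article): with β ≡ 1 and u(t) = e^(t/2), we have u′ = u/2 ≤ β·u, so Grönwall's inequality predicts u(t) ≤ u(0)·e^t:

```python
import numpy as np

t = np.linspace(0.0, 5.0, 501)
u = np.exp(0.5 * t)              # satisfies u' = u/2 <= 1 * u
bound = u[0] * np.exp(t)         # u(a) * exp(integral of beta), beta = 1, a = 0
print(bool(np.all(u <= bound)))  # True
```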
Integral form for continuous functions
Let I denote an interval of the real line of the form [a, ∞) or [a, b] or [a, b) with a < b. Let α, β and u be real-valued functions defined on I. Assume that β and u are continuous and that the negative part of α is integrable on every closed and bounded subinterval of I.

(a) If β is non-negative and if u satisfies the integral inequality

u(t) ≤ α(t) + ∫_a^t β(s) u(s) ds,  t ∈ I,

then

u(t) ≤ α(t) + ∫_a^t α(s) β(s) exp(∫_s^t β(r) dr) ds,  t ∈ I.

(b) If, in addition, the function α is non-decreasing, then

u(t) ≤ α(t) exp(∫_a^t β(s) ds),  t ∈ I.

Remarks: There are no assumptions on the signs of the functions α and u. Compared to the differential form, differentiability of u is not needed for the integral form. For a version of Grönwall's inequality which doesn't need continuity of β and u, see the version in the next section.

Proof
(a) Define

v(s) = exp(−∫_a^s β(r) dr) ∫_a^s β(r) u(r) dr,  s ∈ I.

Using the product rule, the chain rule, the derivative of the exponential function and the fundamental theorem of calculus, we obtain for the derivative

v′(s) = (u(s) − ∫_a^s β(r) u(r) dr) β(s) exp(−∫_a^s β(r) dr) ≤ α(s) β(s) exp(−∫_a^s β(r) dr),

where we used the assumed integral inequality for the upper estimate. Since β and the exponential are non-negative, this gives an upper estimate for the de
https://en.wikipedia.org/wiki/F2
F2, F.II or similar may refer to: Science and mathematics F2, the chemical formula for fluorine or GF(2), in mathematics, the Galois field of two elements F2, a category in the Fujita scale of tornado intensity F2 hybrid, a type of crossing in classical genetics F2 layer, a stratum of the Earth's ionosphere F-2 mycotoxin or zearalenone, a chemical produced by fungi NIST-F2, an atomic clock Medicine F2 or Foundation Year 2, part of the UK Foundation Programme for postgraduate medical practitioners F-2 (drug), a psychedelic drug F2-isoprostane, a compound formed in vivo from fatty acids F2 gene, present on human chromosome 11, encodes for thrombin Films F2: Fun and Frustration, a 2019 Indian Telugu language film starring Venkatesh and Varun Tej Technology , a function key on a computer keyboard F2 or flat twin engine, a type of two-cylinder internal combustion engine Anik F2, a Canadian geostationary communications satellite Nikon F2, a professional SLR camera Transportation Aviation Fly Air, IATA designator F2, a Turkish private airline Fieseler F2 Tiger, a German single-seat aerobatic biplane Flanders F.2, a 1911 British experimental single-seat monoplane aircraft Fokker F.II, a 1919 German early airliner Metropolitan-Vickers F.2, a 1941 British early turbojet engine Rail EMD F2, a class of GM diesel freight locomotive built in 1946 H&BR Class F2, a class of 0-6-2T steam locomotives of the Hull and Barnsley Railway Road Alta F2, a 1952 British racing car DKW F2, a 1930s German small car Marussia F2, a Russian large luxury car Boat F2, the Sydney Ferries' services to Taronga Zoo Military Air Beechcraft F-2 Expeditor, an American twin engine reconnaissance aircraft Blackburn F.2 Lincock, a 1928 British single-seat lightweight fighter Bristol F.2 Fighter, a 1916 British two-seat biplane fighter and reconnaissance aircraft Dassault Mirage F2, a 1960s French prototype two-seat attack fighter Fairey F.2, a 1917 British fighter prototype Felixstowe F.2, a 1917 British 
First World War flying boat Hunter F 2, a 1953 Hawker Hunter fighter aircraft variant Lübeck-Travemünde F.2, early German reconnaissance floatplane McDonnell F-2 Banshee, a carrier-based jet fighter aircraft of the 1950s Mitsubishi F-2, a Japanese fighter aircraft, based on the F-16 Fighting Falcon RAF Tornado F2, a long-range, twin-engine swing-wing interceptor F 2 Hägernäs, a former Swedish Air Force wing Sea , an F class First World War submarine of the Royal Navy , a 1910s F-class submarine of the United States Navy 20 mm modèle F2 gun, a French naval gun Land F2 81mm Mortar, an Australian weapon FR F2 sniper rifle, the standard sniper rifle of the French military KMW F2, a German modular wheeled armoured vehicle Other uses F2 (classification), a wheelchair sport classification F-2 visa, a U.S. visa type for dependents of students F2Freestylers, British freestyle football duo also known as The F2 F2 Logistics, Filipino lo
https://en.wikipedia.org/wiki/H4
H4, H04, or H-4 may refer to: Science and mathematics ATC code H04 Pancreatic hormones, a subgroup of the Anatomical Therapeutic Chemical Classification System Histamine H4 receptor, a human gene Histone H4, a protein involved in the structure of chromatin in eukaryotic cells Hydrogen-4 (H-4), an isotope of hydrogen H4, a symmetry group in the fifth dimension; see H4 polytope H-4, a huge eruption of the Hekla volcano around 2310 BC Technology H-4 SOW, a precision-guided glide bomb used by the Pakistan Air Force H4 (chronometer), an 18th-century marine chronometer designed by John Harrison Zoom H4 Handy Recorder, a handheld digital audio recorder , level 4 heading markup for HTML Web pages, see HTML element#heading Transport Automobiles and roads H4, a halogen headlamp bulb H4, development name of the Hummer HX concept car H-4, shorthand for a 4-cylinder horizontally-opposed or "flat four" engine H4 Dansteed Way, a road in Milton Keynes, England Aviation GEN H-4, a Japanese helicopter design Hughes H-4 Hercules (Spruce Goose), the largest flying boat ever built Héli Sécurité Helicopter Airlines (IATA code), based in France; see List of airline codes (H) Inter Islands Airlines (IATA code), a former airline based in Cape Verde Ships and submarines HMS H4, a 1915 British Royal Navy H-class submarine HMS Tenedos (H04), a 1918 British Royal Navy Admiralty S-class destroyer USS H-4 (SS-147), a 1918 United States Navy submarine Rail GNR Class H4, a class of British steam locomotives Other uses Halo 4, a video game created by 343 Industries for the Xbox 360 H4 (film), a 2013 film H-4 visa, issued by the U.S. Citizenship and Immigration Services British NVC community H4, a heath community in the British National Vegetation Classification system H4 (classification), a cycling classification used in para-cycling See also 4H (disambiguation) HHhH, a novel by the French author Laurent Binet
https://en.wikipedia.org/wiki/%28B%2C%20N%29%20pair
In mathematics, a (B, N) pair is a structure on groups of Lie type that allows one to give uniform proofs of many results, instead of giving a large number of case-by-case proofs. Roughly speaking, it shows that all such groups are similar to the general linear group over a field. They were introduced by the mathematician Jacques Tits, and are also sometimes known as Tits systems. Definition A (B, N) pair is a pair of subgroups B and N of a group G such that the following axioms hold: G is generated by B and N. The intersection, T, of B and N is a normal subgroup of N. The group W = N/T is generated by a set S of elements of order 2 such that If s is an element of S and w is an element of W then sBw is contained in the union of BswB and BwB. No element of S normalizes B. The set S is uniquely determined by B and N and the pair (W,S) is a Coxeter system. Terminology BN pairs are closely related to reductive groups and the terminology in both subjects overlaps. The size of S is called the rank. We call B the (standard) Borel subgroup, T the (standard) Cartan subgroup, and W the Weyl group. A subgroup of G is called parabolic if it contains a conjugate of B, standard parabolic if, in fact, it contains B itself, and a Borel (or minimal parabolic) if it is a conjugate of B. Examples Abstract examples of BN pairs arise from certain group actions. Suppose that G is any doubly transitive permutation group on a set E with more than 2 elements. We let B be the subgroup of G fixing a point x, and we let N be the subgroup fixing or exchanging 2 points x and y. The subgroup T is then the set of elements fixing both x and y, and W has order 2 and its nontrivial element is represented by anything exchanging x and y. Conversely, if G has a (B, N) pair of rank 1, then the action of G on the cosets of B is doubly transitive. So BN pairs of rank 1 are more or less the same as doubly transitive actions on sets with more than 2 elements. 
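As a toy sanity check of the first axiom, one can verify by brute force that B and N generate G in a very small case, G = GL2(F2), taking B to be the invertible upper triangular matrices and N the monomial matrices (a sketch; the choice of G here is purely illustrative):

```python
import numpy as np

# Generators over F2: B = invertible upper triangular, N = monomial matrices.
B = [np.array([[1, 0], [0, 1]]), np.array([[1, 1], [0, 1]])]
N = [np.array([[1, 0], [0, 1]]), np.array([[0, 1], [1, 0]])]
gens = B + N

# Close the generating set under multiplication mod 2.
seen = {tuple(g.flatten()) for g in gens}
frontier = list(gens)
while frontier:
    m = frontier.pop()
    for g in gens:
        p = (m @ g) % 2
        key = tuple(p.flatten())
        if key not in seen:
            seen.add(key)
            frontier.append(p)

print(len(seen))  # 6 = |GL2(F2)|: B and N together generate the whole group
```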
More concrete examples of BN pairs can be found in reductive groups. Suppose that G is the general linear group GLn(K) over a field K. We take B to be the upper triangular matrices, T to be the diagonal matrices, and N to be the monomial matrices, i.e. matrices with exactly one non-zero element in each row and column. There are n − 1 generators, represented by the matrices obtained by swapping two adjacent rows of a diagonal matrix. The Weyl group is the symmetric group on n letters. More generally, if G is a reductive group over a field K then the group G=G(K) has a BN pair in which B=P(K), where P is a minimal parabolic subgroup of G, and N=N(K), where N is the normalizer of a split maximal torus contained in P. In particular, any finite group of Lie type has the structure of a BN-pair. Over the field of two elements, the Cartan subgroup is trivial in this example. A semisimple simply-connected algebraic group over a local field has a BN-pair where B is an Iwahori subgroup. Properties Bruhat decomposition The
https://en.wikipedia.org/wiki/List%20of%20municipalities%20of%20Sweden
This is a list of municipalities of Sweden after the division at the turn of the year of 2011–12. There are 290 municipalities. All statistics are from 1 January 2013, except for population (30 September 2013) and density (1 January 2013 and 30 September 2013). Code refers to the municipality code, Total area includes seawater area and Inland water area excludes the four largest lakes in Sweden (Vänern, Vättern, Mälaren and Hjälmaren). List References Folkmängd i riket, län och kommuner 30 september 2013 och befolkningsförändringar 1 juli - 30 september 2013 . Statistics Sweden. Land- och vattenareal per den 1 januari efter region och arealtyp. År 2012 - 2013 . Statistics Sweden. See also List of urban areas in Sweden Demographics of Sweden
https://en.wikipedia.org/wiki/Minitab
Minitab is a statistics package developed at the Pennsylvania State University by researchers Barbara F. Ryan, Thomas A. Ryan, Jr., and Brian L. Joiner in 1972. It began as a light version of OMNITAB, a statistical analysis program by the National Institute of Standards and Technology (NIST).

History
Minitab was developed at the Pennsylvania State University by researchers Barbara F. Ryan, Thomas A. Ryan, Jr., and Brian L. Joiner in 1972. The project received funding from the Triola Statistics Company. It began as a light version of OMNITAB, a statistical analysis program by NIST, which was conceived by Joseph Hilsenrath in the years 1962–1964 for the IBM 7090. The documentation for the latest version of OMNITAB, OMNITAB 80, was last published in 1986, and there has been no significant development since then. Minitab is distributed by Minitab, LLC, a privately owned company headquartered in State College, Pennsylvania. In 2020, during the COVID-19 pandemic, Minitab LLC requested and received between $5 million and $10 million under the Paycheck Protection Program to avoid having to lay off 250 employees. As of 2021, Minitab LLC had subsidiaries in the UK, France, Germany, Hong Kong, and Australia.

Interoperability
Minitab, LLC also produces other software that can be used in conjunction with Minitab: Minitab Connect helps businesses centralize and organize their data, Quality Trainer is an eLearning package that teaches statistical concepts, Minitab Workspace provides project planning and visualization tools, and Minitab Engage is a tool for idea and innovation management, as well as for managing Six Sigma and Lean manufacturing deployments. In October 2020, Minitab launched the first cloud-based version of its statistical software. As of June 2021, the Minitab desktop app is only available for Windows, with a former version for macOS (Minitab 19.x) no longer supported.
See also
List of statistical packages
Comparison of statistical packages
https://en.wikipedia.org/wiki/Uniformizable%20space
In mathematics, a topological space X is uniformizable if there exists a uniform structure on X that induces the topology of X. Equivalently, X is uniformizable if and only if it is homeomorphic to a uniform space (equipped with the topology induced by the uniform structure). Any (pseudo)metrizable space is uniformizable since the (pseudo)metric uniformity induces the (pseudo)metric topology. The converse fails: There are uniformizable spaces that are not (pseudo)metrizable. However, it is true that the topology of a uniformizable space can always be induced by a family of pseudometrics; indeed, this is because any uniformity on a set X can be defined by a family of pseudometrics. Showing that a space is uniformizable is much simpler than showing it is metrizable. In fact, uniformizability is equivalent to a common separation axiom: A topological space is uniformizable if and only if it is completely regular.

Induced uniformity
One way to construct a uniform structure on a topological space X is to take the initial uniformity on X induced by C(X), the family of real-valued continuous functions on X. This is the coarsest uniformity on X for which all such functions are uniformly continuous. A subbase for this uniformity is given by the set of all entourages D(f, ε) = {(x, y) ∈ X × X : |f(x) − f(y)| < ε}, where f ∈ C(X) and ε > 0. The uniform topology generated by the above uniformity is the initial topology induced by the family C(X). In general, this topology will be coarser than the given topology on X. The two topologies will coincide if and only if X is completely regular.

Fine uniformity
Given a uniformizable space X there is a finest uniformity on X compatible with the topology of X, called the fine uniformity or universal uniformity. A uniform space is said to be fine if it has the fine uniformity generated by its uniform topology. The fine uniformity is characterized by the universal property: any continuous function f from a fine space X to a uniform space Y is uniformly continuous.
This implies that the functor F : CReg → Uni that assigns to any completely regular space X the fine uniformity on X is left adjoint to the forgetful functor sending a uniform space to its underlying completely regular space. Explicitly, the fine uniformity on a completely regular space X is generated by all open neighborhoods D of the diagonal in X × X (with the product topology) such that there exists a sequence D1, D2, … of open neighborhoods of the diagonal with D = D1 and D(n+1) ∘ D(n+1) ⊆ Dn for every n. The uniformity on a completely regular space X induced by C(X) (see the previous section) is not always the fine uniformity.
https://en.wikipedia.org/wiki/Multigraph
In mathematics, and more specifically in graph theory, a multigraph is a graph which is permitted to have multiple edges (also called parallel edges), that is, edges that have the same end nodes. Thus two vertices may be connected by more than one edge. There are two distinct notions of multiple edges:

Edges without own identity: The identity of an edge is defined solely by the two nodes it connects. In this case, the term "multiple edges" means that the same edge can occur several times between these two nodes.
Edges with own identity: Edges are primitive entities just like nodes. When multiple edges connect two nodes, these are different edges.

A multigraph is different from a hypergraph, which is a graph in which an edge can connect any number of nodes, not just two. For some authors, the terms pseudograph and multigraph are synonymous. For others, a pseudograph is a multigraph that is permitted to have loops.

Undirected multigraph (edges without own identity)
A multigraph G is an ordered pair G := (V, E) with
V a set of vertices or nodes,
E a multiset of unordered pairs of vertices, called edges or lines.

Undirected multigraph (edges with own identity)
A multigraph G is an ordered triple G := (V, E, r) with
V a set of vertices or nodes,
E a set of edges or lines,
r : E → {{x, y} : x, y ∈ V}, assigning to each edge an unordered pair of endpoint nodes.

Some authors allow multigraphs to have loops, that is, an edge that connects a vertex to itself, while others call these pseudographs, reserving the term multigraph for the case with no loops.

Directed multigraph (edges without own identity)
A multidigraph is a directed graph which is permitted to have multiple arcs, i.e., arcs with the same source and target nodes. A multidigraph G is an ordered pair G := (V, A) with
V a set of vertices or nodes,
A a multiset of ordered pairs of vertices called directed edges, arcs or arrows.

A mixed multigraph G := (V, E, A) may be defined in the same way as a mixed graph.
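A minimal way to represent a directed multigraph whose edges have no identity of their own is a multiset of ordered vertex pairs (a sketch; the vertex names are illustrative):

```python
from collections import Counter

V = {"a", "b", "c"}
A = Counter()                 # multiset of arcs: (source, target) -> multiplicity
A[("a", "b")] += 2            # two parallel arcs from a to b
A[("b", "a")] += 1
A[("a", "c")] += 1

def out_degree(v):
    # Sum the multiplicities of all arcs leaving v.
    return sum(m for (s, t), m in A.items() if s == v)

print(out_degree("a"))  # 3
```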
Directed multigraph (edges with own identity)
A multidigraph or quiver G is an ordered 4-tuple G := (V, A, s, t) with
V a set of vertices or nodes,
A a set of edges or lines,
s : A → V, assigning to each edge its source node,
t : A → V, assigning to each edge its target node.

This notion might be used to model the possible flight connections offered by an airline. In this case the multigraph would be a directed graph with pairs of directed parallel edges connecting cities to show that it is possible to fly both to and from these locations. In category theory a small category can be defined as a multidigraph (with edges having their own identity) equipped with an associative composition law and a distinguished self-loop at each vertex serving as the left and right identity for composition. For this reason, in category theory the term graph is standardly taken to mean "multidigraph", and the underlying multidigraph of a category is called its underlying digraph.

Labeling
Multigraphs and
https://en.wikipedia.org/wiki/Elementary%20mathematics
Elementary mathematics, also known as primary or secondary school mathematics, is the study of mathematics topics that are commonly taught at the primary or secondary school levels around the world. It includes a wide range of mathematical concepts and skills, including number sense, algebra, geometry, measurement, and data analysis. These concepts and skills form the foundation for more advanced mathematical study and are essential for success in many fields and everyday life. The study of elementary mathematics is a crucial part of a student's education and lays the foundation for future academic and career success.

Strands of elementary mathematics

Number sense and numeration
Number sense is an understanding of numbers and operations. In the 'Number Sense and Numeration' strand students develop an understanding of numbers by being taught various ways of representing numbers, as well as the relationships among numbers. Properties of the natural numbers, such as divisibility and the distribution of prime numbers, are studied in basic number theory, another part of elementary mathematics.

Elementary focus:
Abacus
LCM and GCD
Fractions and decimals
Place value and face value
Addition and subtraction
Multiplication and division
Counting
Counting money
Algebra
Representing and ordering numbers
Estimating
Approximating
Problem solving

To have a strong foundation in mathematics and to be able to succeed in the other strands, students need to have a fundamental understanding of number sense and numeration.

Spatial sense
'Measurement skills and concepts' or 'Spatial Sense' are directly related to the world in which students live. Many of the concepts that students are taught in this strand are also used in other subjects such as science, social studies, and physical education. In the measurement strand students learn about the measurable attributes of objects, in addition to the basic metric system.
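Two of the topics listed above, GCD and LCM, are linked by the identity gcd(a, b) · lcm(a, b) = |a · b| for nonzero integers, which gives a one-line computation (a sketch using Python's standard library; the numbers are illustrative):

```python
from math import gcd

def lcm(a, b):
    # gcd(a, b) * lcm(a, b) == abs(a * b) for nonzero integers a, b
    return abs(a * b) // gcd(a, b)

print(gcd(12, 18), lcm(12, 18))  # 6 36
```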
Elementary focus:
Standard and non-standard units of measurement
Telling time using the 12-hour clock and 24-hour clock
Comparing objects using measurable attributes
Measuring height, length, and width
Centimetres and metres
Mass and capacity
Temperature change
Days, months, weeks, years
Distances using kilometres
Measuring kilograms and litres
Determining area and perimeter
Determining grams and millilitres
Determining measurements using shapes such as a triangular prism
The measurement strand consists of multiple forms of measurement, as Marian Small states: "Measurement is the process of assigning a qualitative or quantitative description of size to an object based on a particular attribute." Equations and formulas A formula is an entity constructed using the symbols and formation rules of a given logical language. For example, determining the volume of a sphere requires a significant amount of integral calculus or its geometrical analogue, the method of exhaustion; but, having done this once in terms of some parameter (the radius for example
https://en.wikipedia.org/wiki/K-nearest%20neighbors%20algorithm
In statistics, the k-nearest neighbors algorithm (k-NN) is a non-parametric supervised learning method first developed by Evelyn Fix and Joseph Hodges in 1951, and later expanded by Thomas Cover. It is used for classification and regression. In both cases, the input consists of the k closest training examples in a data set. The output depends on whether k-NN is used for classification or regression: In k-NN classification, the output is a class membership. An object is classified by a plurality vote of its neighbors, with the object being assigned to the class most common among its k nearest neighbors (k is a positive integer, typically small). If k = 1, then the object is simply assigned to the class of that single nearest neighbor. In k-NN regression, the output is the property value for the object. This value is the average of the values of the k nearest neighbors. If k = 1, then the output is simply assigned to the value of that single nearest neighbor. k-NN is a type of instance-based learning, or lazy learning, where the function is only approximated locally and all computation is deferred until function evaluation. Since this algorithm relies on distance for classification, if the features represent different physical units or come in vastly different scales, then normalizing the training data can improve its accuracy dramatically. Both for classification and regression, a useful technique can be to assign weights to the contributions of the neighbors, so that the nearer neighbors contribute more to the average than the more distant ones. For example, a common weighting scheme consists in giving each neighbor a weight of 1/d, where d is the distance to the neighbor. The neighbors are taken from a set of objects for which the class (for k-NN classification) or the object property value (for k-NN regression) is known. This can be thought of as the training set for the algorithm, though no explicit training step is required. 
A peculiarity of the k-NN algorithm is that it is sensitive to the local structure of the data. Statistical setting Suppose we have pairs (X1, Y1), (X2, Y2), ..., (Xn, Yn) taking values in ℝ^d × {1, 2}, where Y is the class label of X, so that X | Y = r ~ Pr for r = 1, 2 (and probability distributions Pr). Given some norm ||·|| on ℝ^d and a point x ∈ ℝ^d, let (X(1), Y(1)), ..., (X(n), Y(n)) be a reordering of the training data such that ||X(1) − x|| ≤ ... ≤ ||X(n) − x||. Algorithm The training examples are vectors in a multidimensional feature space, each with a class label. The training phase of the algorithm consists only of storing the feature vectors and class labels of the training samples. In the classification phase, k is a user-defined constant, and an unlabeled vector (a query or test point) is classified by assigning the label which is most frequent among the k training samples nearest to that query point. A commonly used distance metric for continuous variables is Euclidean distance. For discrete variables, such as for text classification, another metric can be used, such as the overlap metric (or Hamming distance). In the context of gene expression microarray data, for example,
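The classification phase described above fits in a few lines; the function name and toy data below are illustrative rather than from the source, and Euclidean distance is assumed:

```python
from collections import Counter
import math

def knn_classify(train, query, k=3):
    """train: list of (vector, label) pairs; query: a vector.
    Returns the plurality-vote label among the k nearest training
    points under Euclidean distance."""
    # Sort all training points by distance to the query point.
    dists = sorted((math.dist(x, query), y) for x, y in train)
    # Majority vote among the k nearest labels.
    votes = Counter(y for _, y in dists[:k])
    return votes.most_common(1)[0][0]
```

A distance-weighted variant would replace the vote count with a sum of weights 1/d over the k neighbors, as mentioned above.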
https://en.wikipedia.org/wiki/L%C3%A9vy%27s%20continuity%20theorem
In probability theory, Lévy's continuity theorem, or Lévy's convergence theorem, named after the French mathematician Paul Lévy, connects convergence in distribution of the sequence of random variables with pointwise convergence of their characteristic functions. This theorem is the basis for one approach to prove the central limit theorem and is one of the major theorems concerning characteristic functions. Statement Suppose we have a sequence of random variables {Xn} with characteristic functions φn. If the sequence of characteristic functions converges pointwise to some function φ, then the following statements become equivalent: Xn converges in distribution to some random variable X; the sequence {Xn} is tight; φ is the characteristic function of some random variable X; φ is a continuous function of t; φ is continuous at t = 0. Proof Rigorous proofs of this theorem are available. References Probability theorems Theorems in statistics Paul Lévy (mathematician)
https://en.wikipedia.org/wiki/Risk%20analysis%20%28engineering%29
Risk analysis is the science of risks and their probability and evaluation. Probabilistic risk assessment is one analysis strategy usually employed in science and engineering. In a probabilistic risk assessment risks are identified and then assessed in terms of likelihood of occurrence of a consequence and the magnitude of a potential consequence. Risk analysis and the risk workshop Risk analysis should be performed as part of the risk management process for each project. Its data would be based on risk discussion workshops held to identify potential issues and risks ahead of time, before they pose cost and/or schedule impacts (see the article on cost contingency for a discussion of the estimation of cost impacts). The risk workshops should be attended by a large group, ideally between six and ten individuals from the various departmental functions (e.g. project manager, construction manager, site superintendent, and representatives from operations, procurement, [project] controls, etc.) so as to cover every risk element from different perspectives. The outcome of the risk analysis would be the creation or review of the risk register to identify and quantify risk elements to the project and their potential impact. Given that risk management is a continuous and iterative process, the risk workshop members would regroup at regular intervals and project milestones to review the risk register mitigation plans, make changes to it as appropriate and, following those changes, re-run the risk model. By constantly monitoring risks, these can be successfully mitigated, resulting in cost and schedule savings with a positive impact on the project. 
Risk analysis and information security The risk evaluation of the information technology environment has been the subject of some methodologies; information security is a science based on the evaluation and management of security risks regarding the information used by organizations to pursue their business objectives. Standardization bodies like ISO, NIST, The Open Group, and Information Security Forum have published different standards in this field. See also Actuarial science Benefit risk Cost risk Event chain methodology ENISA Information security Information Security Forum ISO IT risk NIST Optimism bias Project management Reference class forecasting External links European Institute of risk management Harvard Center for Risk Analysis Center for Risk Management of Engineering Systems, University of Virginia
https://en.wikipedia.org/wiki/Arithmetica
Arithmetica () is an Ancient Greek text on mathematics written by the mathematician Diophantus () in the 3rd century AD. It is a collection of 130 algebraic problems giving numerical solutions of determinate equations (those with a unique solution) and indeterminate equations. Summary Equations in the book are presently called Diophantine equations. The method for solving these equations is known as Diophantine analysis. Most of the Arithmetica problems lead to quadratic equations. In Book 3, Diophantus solves problems of finding values which make two linear expressions simultaneously into squares or cubes. In book 4, he finds rational powers between given numbers. He also noticed that numbers of the form cannot be the sum of two squares. Diophantus also appears to know that every number can be written as the sum of four squares. If he did know this result (in the sense of having proved it as opposed to merely conjectured it), his doing so would be truly remarkable: even Fermat, who stated the result, failed to provide a proof of it and it was not settled until Joseph Louis Lagrange proved it using results due to Leonhard Euler. Arithmetica was originally written in thirteen books, but the Greek manuscripts that survived to the present contain no more than six books. In 1968, Fuat Sezgin found four previously unknown books of Arithmetica at the shrine of Imam Rezā in the holy Islamic city of Mashhad in northeastern Iran. The four books are thought to have been translated from Greek to Arabic by Qusta ibn Luqa (820–912). Norbert Schappacher has written: [The four missing books] resurfaced around 1971 in the Astan Quds Library in Meshed (Iran) in a copy from 1198 AD. It was not catalogued under the name of Diophantus (but under that of Qusta ibn Luqa) because the librarian was apparently not able to read the main line of the cover page where Diophantus’s name appears in geometric Kufi calligraphy. 
Arithmetica became known to mathematicians in the Islamic world in the tenth century when Abu'l-Wefa translated it into Arabic. Syncopated algebra Diophantus was a Hellenistic mathematician who lived circa 250 AD, but the uncertainty of this date is so great that it may be off by more than a century. He is known for having written Arithmetica, a treatise that was originally thirteen books but of which only the first six have survived. Arithmetica is the earliest extant work that solves arithmetic problems by algebra. Diophantus, however, did not invent the method of algebra, which existed before him. Algebra was practiced and diffused orally by practitioners, with Diophantus picking up techniques to solve problems in arithmetic. In modern algebra, a polynomial is a linear combination of powers of a variable x, built from exponentiation, scalar multiplication, addition, and subtraction. The algebra of Diophantus, similar to medieval Arabic algebra, is an aggregation of objects of different types with no operations present. For example, the Diophantus polynomial "
https://en.wikipedia.org/wiki/Radius
In classical geometry, a radius (plural: radii or radiuses) of a circle or sphere is any of the line segments from its center to its perimeter, and in more modern usage, it is also their length. The name comes from the Latin radius, meaning ray but also the spoke of a chariot wheel. The typical abbreviation and mathematical variable name for radius is R or r. By extension, the diameter D is defined as twice the radius: D = 2r. If an object does not have a center, the term may refer to its circumradius, the radius of its circumscribed circle or circumscribed sphere. In either case, the radius may be more than half the diameter, which is usually defined as the maximum distance between any two points of the figure. The inradius of a geometric figure is usually the radius of the largest circle or sphere contained in it. The inner radius of a ring, tube or other hollow object is the radius of its cavity. For regular polygons, the radius is the same as its circumradius. The inradius of a regular polygon is also called apothem. In graph theory, the radius of a graph is the minimum over all vertices u of the maximum distance from u to any other vertex of the graph. The radius of the circle with perimeter (circumference) C is r = C/(2π). Formula For many geometric figures, the radius has a well-defined relationship with other measures of the figure. Circles The radius of a circle with area A is r = √(A/π). The radius of the circle that passes through the three non-collinear points P1, P2, and P3 is given by r = |P1 − P3|/(2 sin θ), where θ is the angle ∠P1P2P3. This formula uses the law of sines. If the three points are given by their coordinates (x1, y1), (x2, y2), and (x3, y3), the radius can be expressed as Regular polygons The radius of a regular polygon with n sides of length s is given by r = s/(2 sin(π/n)). Values of 1/(2 sin(π/n)) for small values of n are given in the table. If s = 1 then these values are also the radii of the corresponding regular polygons. 
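As a quick check of these relations, here is a small sketch (function names are my own) computing the radius from the circumference, from the area, and the circumradius of a regular n-gon:

```python
import math

def radius_from_circumference(C):
    # r = C / (2*pi)
    return C / (2 * math.pi)

def radius_from_area(A):
    # r = sqrt(A / pi)
    return math.sqrt(A / math.pi)

def circumradius_regular_polygon(n, s):
    # r = s / (2*sin(pi/n)) for a regular n-gon with side length s
    return s / (2 * math.sin(math.pi / n))
```

For a regular hexagon with unit sides this gives a circumradius of exactly 1, since sin(π/6) = 1/2.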
Hypercubes The radius of a d-dimensional hypercube with side s is r = (s/2)√d. Use in coordinate systems Polar coordinates The polar coordinate system is a two-dimensional coordinate system in which each point on a plane is determined by a distance from a fixed point and an angle from a fixed direction. The fixed point (analogous to the origin of a Cartesian system) is called the pole, and the ray from the pole in the fixed direction is the polar axis. The distance from the pole is called the radial coordinate or radius, and the angle is the angular coordinate, polar angle, or azimuth. Cylindrical coordinates In the cylindrical coordinate system, there is a chosen reference axis and a chosen reference plane perpendicular to that axis. The origin of the system is the point where all three coordinates can be given as zero. This is the intersection between the reference plane and the axis. The axis is variously called the cylindrical or longitudinal axis, to differentiate it from the polar axis, which is the ray that lies in the reference plane, starting at the origin and pointing in the reference direction
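The polar coordinates described above convert to and from Cartesian coordinates by the standard formulas x = r cos θ, y = r sin θ; a minimal sketch (function names illustrative):

```python
import math

def polar_to_cartesian(r, theta):
    # Radial coordinate r and angular coordinate theta (in radians).
    return r * math.cos(theta), r * math.sin(theta)

def cartesian_to_polar(x, y):
    # math.hypot returns the radius; atan2 the polar angle in (-pi, pi].
    return math.hypot(x, y), math.atan2(y, x)
```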
https://en.wikipedia.org/wiki/Harold%20S.%20Shapiro
Harold Seymour Shapiro (2 April 1928 – 5 March 2021) was a professor of mathematics at the Royal Institute of Technology in Stockholm, Sweden, best known for inventing the so-called Shapiro polynomials (also known as Golay–Shapiro polynomials or Rudin–Shapiro polynomials) and for work on quadrature domains. His main research areas were approximation theory, complex analysis, functional analysis, and partial differential equations. He was also interested in the pedagogy of problem-solving. Born and raised in Brooklyn, New York, to a Jewish family, Shapiro earned a B.Sc. from the City College of New York in 1949 and earned his M.S. degree from the Massachusetts Institute of Technology in 1951. He received his Ph.D. in 1952 from MIT; his thesis was written under the supervision of Norman Levinson. He was the father of cosmologist Max Tegmark, a graduate of the Royal Institute of Technology and now a professor at MIT. Shapiro died on 5 March 2021, aged 92. See also Rudin–Shapiro sequence List of Jewish mathematicians#S References External links Shapiro's homepage Rudin–Shapiro Curve by Eric Rowland, The Wolfram Demonstrations Project. 1928 births 2021 deaths 20th-century American mathematicians 21st-century American mathematicians American Jews Academic staff of the KTH Royal Institute of Technology Massachusetts Institute of Technology alumni Mathematical analysts Functional analysts Approximation theorists American emigrants to Sweden
https://en.wikipedia.org/wiki/Antiparallel
The term antiparallel may refer to: Antiparallel (biochemistry), the orientation of adjacent molecules Antiparallel (mathematics), a congruent but opposite relative orientation of two lines in relation to another line or angle Antiparallel vectors, a pair of vectors pointed in opposite directions Antiparallel (electronics), the polarity of devices run in parallel See also Antiparallelogram
https://en.wikipedia.org/wiki/Stirling%20transform
In combinatorial mathematics, the Stirling transform of a sequence { an : n = 1, 2, 3, ... } of numbers is the sequence { bn : n = 1, 2, 3, ... } given by bn = Σk=1..n S(n,k) ak, where S(n,k) (with a capital S) is the Stirling number of the second kind, which is the number of partitions of a set of size n into k parts. The inverse transform is an = Σk=1..n s(n,k) bk, where s(n,k) (with a lower-case s) is a Stirling number of the first kind. Bernstein and Sloane (cited below) state "If an is the number of objects in some class with points labeled 1, 2, ..., n (with all labels distinct, i.e. ordinary labeled structures), then bn is the number of objects with points labeled 1, 2, ..., n (with repetitions allowed)." If f(x) = Σn≥1 an xⁿ/n! is a formal power series, and g(x) = Σn≥1 bn xⁿ/n! with an and bn as above, then g(x) = f(eˣ − 1). Likewise, the inverse transform leads to the generating function identity f(x) = g(log(1 + x)). See also Binomial transform Generating function transformation List of factorial and binomial topics References Khristo N. Boyadzhiev, Notes on the Binomial Transform, Theory and Table, with Appendix on the Stirling Transform (2018), World Scientific. Factorial and binomial topics Transforms
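A direct sketch of the transform, computing S(n, k) by the usual recurrence S(n, k) = k·S(n−1, k) + S(n−1, k−1); function names are my own. With an = 1 for all n, the transform yields the Bell numbers, a handy sanity check:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def stirling2(n, k):
    # Stirling number of the second kind S(n, k): number of
    # partitions of an n-set into k nonempty parts.
    if n == k:
        return 1
    if k == 0 or k > n:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

def stirling_transform(a):
    # a = [a_1, a_2, ...]; returns [b_1, b_2, ...] where
    # b_n = sum over k of S(n, k) * a_k.
    return [
        sum(stirling2(n, k) * a[k - 1] for k in range(1, n + 1))
        for n in range(1, len(a) + 1)
    ]
```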
https://en.wikipedia.org/wiki/Fibrant%20object
In mathematics, specifically in homotopy theory in the context of a model category M, a fibrant object A of M is an object that has a fibration to the terminal object of the category. Properties The fibrant objects of a closed model category are characterized by having a right lifting property with respect to any trivial cofibration in the category. This property makes fibrant objects the "correct" objects on which to define homotopy groups. In the context of the theory of simplicial sets, the fibrant objects are known as Kan complexes after Daniel Kan. They are the Kan fibrations over a point. Dually is the notion of cofibrant object, defined to be an object such that the unique morphism from the initial object to is a cofibration. References P.G. Goerss and J.F. Jardine, Simplicial Homotopy Theory, Progress in Math., Vol. 174, Birkhauser, Boston-Basel-Berlin, 1999. . Homotopy theory Objects (category theory)
https://en.wikipedia.org/wiki/Quadratic%20classifier
In statistics, a quadratic classifier is a statistical classifier that uses a quadratic decision surface to separate measurements of two or more classes of objects or events. It is a more general version of the linear classifier. The classification problem Statistical classification considers a set of vectors of observations x of an object or event, each of which has a known type y. This set is referred to as the training set. The problem is then to determine, for a given new observation vector, what the best class should be. For a quadratic classifier, the correct solution is assumed to be quadratic in the measurements, so y will be decided based on In the special case where each observation consists of two measurements, this means that the surfaces separating the classes will be conic sections (i.e. either a line, a circle or ellipse, a parabola or a hyperbola). In this sense, we can state that a quadratic model is a generalization of the linear model, and its use is justified by the desire to extend the classifier's ability to represent more complex separating surfaces. Quadratic discriminant analysis Quadratic discriminant analysis (QDA) is closely related to linear discriminant analysis (LDA), where it is assumed that the measurements from each class are normally distributed. Unlike LDA however, in QDA there is no assumption that the covariance of each of the classes is identical. When the normality assumption is true, the best possible test for the hypothesis that a given measurement is from a given class is the likelihood ratio test. Suppose there are only two groups, with means and covariance matrices corresponding to and respectively. Then the likelihood ratio is given by for some threshold . After some rearrangement, it can be shown that the resulting separating surface between the classes is a quadratic. The sample estimates of the mean vector and variance-covariance matrices will substitute the population quantities in this formula. 
Other While QDA is the most commonly-used method for obtaining a classifier, other methods are also possible. One such method is to create a longer measurement vector from the old one by adding all pairwise products of individual measurements. For instance, the vector would become Finding a quadratic classifier for the original measurements would then become the same as finding a linear classifier based on the expanded measurement vector. This observation has been used in extending neural network models; the "circular" case, which corresponds to introducing only the sum of pure quadratic terms with no mixed products (), has been proven to be the optimal compromise between extending the classifier's representation power and controlling the risk of overfitting (Vapnik-Chervonenkis dimension). For linear classifiers based only on dot products, these expanded measurements do not have to be actually computed, since the dot product in the higher-dimensional space is simply related to that in the
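The expansion described above — appending all pairwise products of measurements so that a linear classifier on the expanded vector acts as a quadratic classifier on the original — can be sketched as follows (the function name is illustrative):

```python
from itertools import combinations_with_replacement

def quadratic_features(x):
    """Append all pairwise products x_i * x_j (i <= j) to the original
    measurement vector. A linear decision function on this expanded
    vector is a quadratic decision function on the original one."""
    return list(x) + [
        x[i] * x[j]
        for i, j in combinations_with_replacement(range(len(x)), 2)
    ]
```

The "circular" case mentioned above would keep only the pure squares x_i², dropping the mixed products.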
https://en.wikipedia.org/wiki/Dickson%20invariant
In mathematics, the Dickson invariant, named after Leonard Eugene Dickson, may mean: The Dickson invariant of an element of the orthogonal group in characteristic 2 A modular invariant of a group studied by Dickson
https://en.wikipedia.org/wiki/Mills%27%20constant
In number theory, Mills' constant is defined as the smallest positive real number A such that the floor function of the double exponential function, ⌊A^(3ⁿ)⌋, is a prime number for all positive natural numbers n. This constant is named after William Harold Mills who proved in 1947 the existence of A based on results of Guido Hoheisel and Albert Ingham on the prime gaps. Its value is unproven, but if the Riemann hypothesis is true, it is approximately 1.3063778838630806904686144926... . Mills primes The primes generated by Mills' constant are known as Mills primes; if the Riemann hypothesis is true, the sequence begins 2, 11, 1361, 2521008887, ... . If ai denotes the i th prime in this sequence, then ai can be calculated as the smallest prime number larger than the cube of the previous one, ai−1³. In order to ensure that rounding A^(3ⁿ), for n = 1, 2, 3, …, produces this sequence of primes, it must be the case that ai+1 < (ai + 1)³. The Hoheisel–Ingham results guarantee that there exists a prime between any two sufficiently large cube numbers, which is sufficient to prove this inequality if we start from a sufficiently large first prime a1. The Riemann hypothesis implies that there exists a prime between any two consecutive cubes, allowing the sufficiently large condition to be removed, and allowing the sequence of Mills primes to begin at a1 = 2. For all a > , there is at least one prime between and . This upper bound is much too large to be practical, as it is infeasible to check every number below that figure. However, the value of Mills' constant can be verified by calculating the first prime in the sequence that is greater than that figure. As of April 2017, the 11th number in the sequence is the largest one that has been proved prime. It is and has 20562 digits. , the largest known Mills probable prime (under the Riemann hypothesis) is , which is 555,154 digits long. 
Numerical calculation By calculating the sequence of Mills primes, one can approximate Mills' constant as Caldwell and Cheng used this method to compute 6850 base 10 digits of Mills' constant under the assumption that the Riemann hypothesis is true. There is no closed-form formula known for Mills' constant, and it is not even known whether this number is rational. Fractional representations Below are fractions which approximate Mills' constant, listed in order of increasing accuracy (with continued-fraction convergents in bold) : 1/1, 3/2, 4/3, 9/7, 13/10, 17/13, 47/36, 64/49, 81/62, 145/111, 226/173, 307/235, 840/643, 1147/878, 3134/2399, 4281/3277, 5428/4155, 6575/5033, 12003/9188, 221482/169539, 233485/178727, 245488/187915, 257491/197103, 269494/206291, 281497/215479, 293500/224667, 305503/233855, 317506/243043, 329509/252231, 341512/261419, 353515/270607, 365518/279795, 377521/288983, 389524/298171, 401527/307359, 413530/316547, 425533/325735, 4692866/3592273, 5118399/3918008, 5543932/4243743, 5969465/4569478, 6394998/4895213, 6820531/5220948, 7246064/5546683,7671597/5872418, 8097130/6198153, 8522663/6523888, 8948196/6849623, 9373729/7175358, 27695654/2120
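Conversely, a truncated value of the constant regenerates the first few Mills primes by direct computation of ⌊A^(3ⁿ)⌋; with ordinary double-precision arithmetic only the first three or so terms are reliable, so the sketch below is illustrative only:

```python
# Truncated approximation of Mills' constant (assuming the
# Riemann hypothesis, per the text above).
A = 1.3063778838630806

def mills_primes(count):
    # p_n = floor(A ** (3 ** n)) for n = 1, 2, ...
    # int() truncates toward zero, which equals floor for positives.
    return [int(A ** (3 ** n)) for n in range(1, count + 1)]
```

Recovering further terms requires arbitrary-precision arithmetic with many more digits of A, which is how Caldwell and Cheng's computation proceeds in the other direction.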
https://en.wikipedia.org/wiki/Canada/USA%20Mathcamp
Canada/USA Mathcamp is a five-week academic summer program for middle and high school students in mathematics. Mathcamp was founded in 1993 by Dr. George Thomas, who believed that students interested in mathematics frequently lacked the resources and camaraderie to pursue their interest. Mira Bernstein became the director when Thomas left in 2002 to found MathPath, a program for younger students. Mathcamp is held each year at a college campus in the United States or Canada. Past locations have included the University of Toronto, the University of Washington, Colorado College, Reed College, University of Puget Sound, Colby College, the University of British Columbia, Mount Holyoke College, and the Colorado School of Mines. Mathcamp enrolls about 120 students yearly, 45–55 returning and 65–75 new. The application process for new students includes an entrance exam (the "Qualifying Quiz"), personal essay, and two letters of recommendation, but no grade reports. The process is intended to ensure that the students who are most passionate about math come to camp. Admission is selective: in 2016, the acceptance rate was 15%. Mathcamp courses cover various branches of recreational and college-level mathematics. Classes at Mathcamp come in four difficulty levels. The easier classes often include basic proof techniques, number theory, graph theory, and combinatorial game theory, while the more difficult classes cover advanced topics in abstract algebra, topology, theoretical computer science, category theory, and mathematical analysis. There are generally four class periods each day and five classes offered during each period intended for varying student interests and backgrounds. Graduate student mentors teach most of the classes, while undergraduate junior counselors, all of them Mathcamp alumni, do most of the behind-the-scenes work. Mathcamp has had a number of renowned guest speakers, including John Conway, Avi Wigderson, and Serge Lang. 
Culture In 2004, some campers created Foodtongue, a constructed language in which every word is a word that means a food in the English language. One of the cardinal rules of the language is an agreed ban of direct translation. Foodtongue remains popular among campers, and was compiled into an online wiki, updated and referenced by speakers of the language. Notable alumni Scott Aaronson, American theoretical computer scientist Jennifer Balakrishnan, mathematician Sam Bankman-Fried, founder and former CEO of FTX Greg Brockman, co-founder and CTO of OpenAI Tamara Broderick, computer scientist Ivan Corwin, mathematician Sherry Gong, mathematician Shotaro Makisumi, mathematician and speedcuber Palmer Mebane, former World Puzzle Championship winner Mike Shulman, mathematician Sam Trabucco, former co-CEO of Alameda Research and a New York Times crossword puzzle constructor Gary Wang (executive), co-founder of FTX See also Ross Mathematics Program MathPath Mathematical Olympiad Program AwesomeMath Pr
https://en.wikipedia.org/wiki/Carl%20Christoffer%20Georg%20Andr%C3%A6
Carl Christopher Georg Andræ (14 October 1812 – 2 February 1893) was a Danish politician and mathematician. From 1842 until 1854, he was professor of mathematics and mechanics at the national military college. He was elected to the Royal Danish Academy of Sciences and Letters in 1853. Andræ was by royal appointment a member of the 1848 Danish Constituent Assembly. In 1854, he became Finance Minister in the Cabinet of Bang before also becoming Council President of Denmark 1856–1857 as leader of the Cabinet of Andræ. After being replaced as Council President by Carl Christian Hall in 1857, Andræ continued as Finance Minister in the Cabinet of Hall I until 1858. Being an individualist, he, after the defeat of the National Liberals, never formally joined any political group but remained for the rest of his life a sceptical de facto conservative spectator of the 'Constitutional Struggle'. Early life and education Andræ was born in Hjertebjerg Rectory on the island of Møn. His parents were Johann Georg Andræ (1775–1814), a captain in the Third Jutland Infantry Regiment, and Nicoline Christine Holm (1789–1862). He enrolled at Landkadetakademiet in 1825. In 1829, he was appointed to Second Lieutenant in the Road Corps. He followed a course in mathematics under Hans Christian Ørsted at the College of Applied Sciences before enrolling at the new Militære Højskole in 1830. He graduated with honours in December 1834 and was then made a First Lieutenant in the Engineering Corps. He completed two study trips to Paris in 1835–38, and he made significant contributions to the field of geodesy. Single transferable vote Andræ developed a ranked voting system of what is now called the single transferable vote (STV), which was used in Danish elections from 1855. This was two years before Thomas Hare published his first description of an STV system, without reference to Andræ. 
Though thoroughly convinced of the soundness of his method of electing representatives and ready to defend it in the cabinet or the parliament, he made no effort to bring it to the attention of scientific men and statesmen in other countries, much less to defend his claim as an inventor. Personal life In 1842, Andræ married Hansine Pouline Schack, an early feminist, who commented on his political views in her diaries, published from 1914 to 1920 as Geheimeraadinde Andræs politiske Dagbøger. He died on 2 February 1893. He is buried in Assistens Cemetery in Copenhagen. Notes External links Author profile in the database zbMATH References 1812 births 1893 deaths People from Møn Burials at Assistens Cemetery (Copenhagen) Danish Finance Ministers Danish mathematicians Prime Ministers of Denmark Single transferable vote Speakers of the Folketing 19th-century Danish politicians Members of the Rigsrådet (1855-1866)
https://en.wikipedia.org/wiki/Vuong%27s%20closeness%20test
In statistics, the Vuong closeness test is a likelihood-ratio-based test for model selection using the Kullback–Leibler information criterion. This statistic makes probabilistic statements about two models. They can be nested, strictly non-nested or partially non-nested (also called overlapping). The statistic tests the null hypothesis that the two models are equally close to the true data generating process, against the alternative that one model is closer. It cannot make any decision whether the "closer" model is the true model. Technical description With strictly non-nested models and iid exogenous variables, model 1 (2) is preferred with significance level α, if the z statistic exceeds the positive (falls below the negative) (1 − α)-quantile of the standard normal distribution. Here K1 and K2 are the numbers of parameters in models 1 and 2 respectively. The numerator is the difference between the maximum likelihoods of the two models, corrected for the number of coefficients analogous to the BIC; the term in the denominator of the expression for Z, ω, is defined by setting ω² equal to either the mean of the squares of the pointwise log-likelihood ratios, or to the sample variance of these values. For nested or partially non-nested (overlapping) models the statistic has to be compared to critical values from a weighted sum of chi squared distributions. This can be approximated by a gamma distribution (in shape-rate form): with and is a vector of eigenvalues of a matrix of conditional expectations. The computation is quite difficult, so that in the overlapping and nested case many authors only derive statements from a subjective evaluation of the Z statistic (is it subjectively "big enough" to accept my hypothesis?). 
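For the strictly non-nested case, the statistic described above can be sketched as follows. This is a hedged reconstruction from the description, not a vetted implementation: the function name is my own, ω̂ is taken as the sample standard deviation of the pointwise log-likelihood ratios, and the BIC-style correction follows the "analogous to the BIC" remark in the text:

```python
import math

def vuong_z(ll1, ll2, k1, k2):
    """ll1, ll2: pointwise log-likelihoods of models 1 and 2 on the
    same n observations; k1, k2: numbers of parameters."""
    n = len(ll1)
    m = [a - b for a, b in zip(ll1, ll2)]          # pointwise log-LR
    lr = sum(m) - (k1 - k2) / 2 * math.log(n)      # BIC-style correction
    mean = sum(m) / n
    omega = math.sqrt(sum((v - mean) ** 2 for v in m) / n)
    return lr / (omega * math.sqrt(n))             # compare to N(0, 1)
```

A large positive z favors model 1, a large negative z favors model 2, and intermediate values leave the null of equal closeness standing.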
Improper use for zero-inflated models Vuong's test for non-nested models has been used in model selection to compare a zero-inflated count model to its non-zero-inflated counterpart (e.g., zero-inflated Poisson model versus ordinary Poisson model). Wilson (2015) argues that such use of Vuong's test is invalid as a non-zero-inflated model is neither strictly non-nested nor partially non-nested in its zero-inflated counterpart. The core of the misunderstanding appears to be the terminology, which offers itself to being incorrectly understood to imply that all pairs of non-nested models are either strictly non-nested or partially non-nested (aka overlapping). Crucially, the definitions of strictly non-nested and partially non-nested in Vuong (1989) do not unite to mean "all pairs of models that are not nested". In other words, there are non-nested models that are neither strictly non-nested nor partially non-nested. The zero-inflated Poisson model and its non-zero-inflated counterpart are an example of such a pair of non-nested models. Consequently, Vuong's test is not a valid test for discriminating between them. Example of strictly and partially non-nested models Vuong (1989) gives two exampl
https://en.wikipedia.org/wiki/Mathematics%20of%20general%20relativity
When studying and formulating Albert Einstein's theory of general relativity, various mathematical structures and techniques are utilized. The main tools used in this geometrical theory of gravitation are tensor fields defined on a Lorentzian manifold representing spacetime. This article is a general description of the mathematics of general relativity. Note: General relativity articles using tensors will use the abstract index notation.

Tensors

The principle of general covariance was one of the central principles in the development of general relativity. It states that the laws of physics should take the same mathematical form in all reference frames. The term 'general covariance' was used in the early formulation of general relativity, but the principle is now often referred to as 'diffeomorphism covariance'. Diffeomorphism covariance is not the defining feature of general relativity,[1] and controversies remain regarding its present status in general relativity. However, the invariance property of physical laws implied in the principle, coupled with the fact that the theory is essentially geometrical in character (making use of non-Euclidean geometries), suggested that general relativity be formulated using the language of tensors. This will be discussed further below.

Spacetime as a manifold

Most modern approaches to mathematical general relativity begin with the concept of a manifold. More precisely, the basic physical construct representing a curved spacetime is modelled by a four-dimensional, smooth, connected, Lorentzian manifold. Other physical descriptors are represented by various tensors, discussed below. The rationale for choosing a manifold as the fundamental mathematical structure is to reflect desirable physical properties. For example, in the theory of manifolds, each point is contained in a (by no means unique) coordinate chart, and this chart can be thought of as representing the 'local spacetime' around the observer (represented by the point).
The principle of local Lorentz covariance, which states that the laws of special relativity hold locally about each point of spacetime, lends further support to the choice of a manifold structure for representing spacetime, as locally around a point on a general manifold the region 'looks like', or very closely approximates, Minkowski space (flat spacetime). The idea of coordinate charts as 'local observers who can perform measurements in their vicinity' also makes good physical sense, as this is how one actually collects physical data: locally. For cosmological problems, a coordinate chart may be quite large.

Local versus global structure

An important distinction in physics is the difference between local and global structures. Measurements in physics are performed in a relatively small region of spacetime; this is one reason for studying the local structure of spacetime in general relativity, whereas determining the global spacetime structure is important, especially in cosmology.
https://en.wikipedia.org/wiki/Bounded%20set%20%28topological%20vector%20space%29
In functional analysis and related areas of mathematics, a set in a topological vector space is called bounded or von Neumann bounded, if every neighborhood of the zero vector can be inflated to include the set. A set that is not bounded is called unbounded. Bounded sets are a natural way to define locally convex polar topologies on the vector spaces in a dual pair, as the polar set of a bounded set is an absolutely convex and absorbing set. The concept was first introduced by John von Neumann and Andrey Kolmogorov in 1935.

Definition

Suppose is a topological vector space (TVS) over a field . A subset of is called bounded, or just bounded in , if any of the following equivalent conditions are satisfied:

1. For every neighborhood of the origin there exists a real such that for all scalars satisfying . This was the definition introduced by John von Neumann in 1935.
2. is absorbed by every neighborhood of the origin.
3. For every neighborhood of the origin there exists a scalar such that .
4. For every neighborhood of the origin there exists a real such that for all scalars satisfying .
5. For every neighborhood of the origin there exists a real such that for all real .
6. Any one of statements (1) through (5) above, but with the word "neighborhood" replaced by any of the following: "balanced neighborhood," "open balanced neighborhood," "closed balanced neighborhood," "open neighborhood," "closed neighborhood". E.g. statement (2) may become: is bounded if and only if is absorbed by every balanced neighborhood of the origin. If is locally convex then the adjective "convex" may also be added to any of these 5 replacements.
7. For every sequence of scalars that converges to and every sequence in , the sequence converges to in . This was the definition of "bounded" that Andrey Kolmogorov used in 1934, which is the same as the definition introduced by Stanisław Mazur and Władysław Orlicz in 1933 for metrizable TVS.
Kolmogorov used this definition to prove that a TVS is seminormable if and only if it has a bounded convex neighborhood of the origin. For every sequence in , the sequence converges to in . Every countable subset of is bounded (according to any defining condition other than this one). If is a neighborhood basis for at the origin then this list may be extended to include: Any one of statements (1) through (5) above but with the neighborhoods limited to those belonging to , e.g. statement (3) may become: For every there exists a scalar such that . If is a locally convex space whose topology is defined by a family of continuous seminorms, then this list may be extended to include: is bounded for all . There exists a sequence of non-zero scalars such that for every sequence in , the sequence is bounded in (according to any defining condition other than this one). For all , is bounded (according to any defining condition other than this one) in the seminormed space . If is a normed space with norm (or more generally, if it is a seminorm
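As a concrete finite-dimensional illustration (ℝ² with the Euclidean norm; the set S below is made up), von Neumann boundedness reduces to the familiar notion: the supremum of the norms over the set is finite, and the unit ball scaled by that supremum absorbs the set:

```python
import math

def von_neumann_bound(points, radius=1.0):
    """In a normed space, S is von Neumann bounded iff sup_{x in S} ||x|| is finite.
    Returns that supremum and the scale s such that S is contained in s * B(0, radius)."""
    sup_norm = max(math.hypot(x, y) for (x, y) in points)  # Euclidean norm in R^2
    scale = sup_norm / radius   # any s >= this makes s * B(0, radius) contain S
    return sup_norm, scale

S = [(1.0, 2.0), (-3.0, 0.5), (0.0, -2.5)]
sup_norm, scale = von_neumann_bound(S)
# Every point of S lies in the ball of radius `scale` around the origin.
absorbed = all(math.hypot(x, y) <= scale + 1e-12 for (x, y) in S)
```

In infinite-dimensional TVSs without a norm the definition via absorbing neighborhoods is the right generalization, but in a normed space it collapses to this norm-boundedness check.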
https://en.wikipedia.org/wiki/Polar%20topology
In functional analysis and related areas of mathematics a polar topology, topology of -convergence or topology of uniform convergence on the sets of is a method to define locally convex topologies on the vector spaces of a pairing.

Preliminaries

A pairing is a triple consisting of two vector spaces over a field (either the real numbers or complex numbers) and a bilinear map . A dual pair or dual system is a pairing satisfying the following two separation axioms: separates/distinguishes points of : for all non-zero there exists such that ; and separates/distinguishes points of : for all non-zero there exists such that .

Polars

The polar or absolute polar of a subset is the set . Dually, the polar or absolute polar of a subset is denoted by and defined by . In this case, the absolute polar of a subset is also called the prepolar of and may be denoted by . The polar is a convex balanced set containing the origin. If then the bipolar of , denoted by , is defined by . Similarly, if then the bipolar of is defined to be .

Weak topologies

Suppose that is a pairing of vector spaces over . Notation: For all , let denote the linear functional on defined by , and let . Similarly, for all , let be defined by , and let . The weak topology on induced by (and ) is the weakest TVS topology on , denoted by or simply , making all maps continuous, as ranges over . Similarly, there is the dual definition of the weak topology on induced by (and ), which is denoted by or simply : it is the weakest TVS topology on making all maps continuous, as ranges over .

Weak boundedness and absorbing polars

It is because of the following theorem that it is almost always assumed that the family consists of -bounded subsets of .

Dual definitions and results

Every pairing can be associated with a corresponding pairing where by definition . There is a repeating theme in duality theory, which is that any definition for a pairing has a corresponding dual definition for the pairing . Convention and Definition: Given any definition for a pairing one
obtains a dual definition by applying it to the pairing . If the definition depends on the order of and (e.g. the definition of "the weak topology defined on by ") then by switching the order of and it is meant that this definition should be applied to (e.g. this gives us the definition of "the weak topology defined on by "). For instance, after defining " distinguishes points of " (resp., " is a total subset of ") as above, the dual definition of " distinguishes points of " (resp., " is a total subset of ") is immediately obtained. For instance, once is defined, it should automatically be assumed that has been defined without mentioning the analogous definition. The same applies to many theorems.

Convention: Adhering to common practice, unless clarity is needed, whenever a definition (or result) is given for a pairing , mention of the corresponding dual definition (or result) will be omitted.
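As a small numerical illustration (a hypothetical finite set A in ℝ², paired with itself by the dot product), membership in the absolute polar can be checked directly from its definition as the set of y with sup over x in A of |⟨x, y⟩| at most 1:

```python
def in_absolute_polar(y, A):
    """y lies in the absolute polar of A iff sup_{x in A} |<x, y>| <= 1."""
    return max(abs(x1 * y[0] + x2 * y[1]) for (x1, x2) in A) <= 1.0

# For A = {(2, 0), (0, 3)}, the polar is the rectangle |y1| <= 1/2, |y2| <= 1/3.
A = [(2.0, 0.0), (0.0, 3.0)]
inside = in_absolute_polar((0.5, 1 / 3), A)   # boundary point of the polar
outside = in_absolute_polar((0.6, 0.0), A)    # |<(2,0), (0.6,0)>| = 1.2 > 1
origin = in_absolute_polar((0.0, 0.0), A)     # the polar always contains 0
```

The example also exhibits the properties stated above: the polar is convex, balanced, and contains the origin.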
https://en.wikipedia.org/wiki/Dual%20topology
In functional analysis and related areas of mathematics a dual topology is a locally convex topology on a vector space that is induced by the continuous dual of the vector space, by means of the bilinear form (also called pairing) associated with the dual pair. The different dual topologies for a given dual pair are characterized by the Mackey–Arens theorem. Every locally convex topology together with its continuous dual trivially forms a dual pair, and the locally convex topology is then a dual topology. Several topological properties depend only on the dual pair and not on the chosen dual topology, and thus it is often possible to replace a complicated dual topology with a simpler one.

Definition

Given a dual pair , a dual topology on is a locally convex topology so that . Here denotes the continuous dual of and means that there is a linear isomorphism . (If a locally convex topology on is not a dual topology, then either is not surjective or it is ill-defined since the linear functional is not continuous on for some .)

Properties

Theorem (by Mackey): Given a dual pair, the bounded sets under any dual topology are identical. Under any dual topology the same sets are barrelled.

Characterization of dual topologies

The Mackey–Arens theorem, named after George Mackey and Richard Arens, characterizes all possible dual topologies on a locally convex space. The theorem shows that the coarsest dual topology is the weak topology, the topology of uniform convergence on all finite subsets of , and the finest dual topology is the Mackey topology, the topology of uniform convergence on all absolutely convex weakly compact subsets of .

Mackey–Arens theorem

Given a dual pair with a locally convex space and its continuous dual, is a dual topology on if and only if it is a topology of uniform convergence on a family of absolutely convex and weakly compact subsets of .

See also

Polar topology

References
https://en.wikipedia.org/wiki/Leonard%20Eugene%20Dickson
Leonard Eugene Dickson (January 22, 1874 – January 17, 1954) was an American mathematician. He was one of the first American researchers in abstract algebra, in particular the theory of finite fields and classical groups, and is also remembered for a three-volume history of number theory, History of the Theory of Numbers. The L. E. Dickson instructorships at the University of Chicago Department of Mathematics are named after him. Life Dickson considered himself a Texan by virtue of having grown up in Cleburne, where his father was a banker, merchant, and real estate investor. He attended the University of Texas at Austin, where George Bruce Halsted encouraged his study of mathematics. Dickson earned a B.S. in 1893 and an M.S. in 1894, under Halsted's supervision. Dickson first specialised in Halsted's own specialty, geometry. Both the University of Chicago and Harvard University welcomed Dickson as a Ph.D. student, and Dickson initially accepted Harvard's offer, but chose to attend Chicago instead. In 1896, when he was only 22 years of age, he was awarded Chicago's first doctorate in mathematics, for a dissertation titled The Analytic Representation of Substitutions on a Power of a Prime Number of Letters with a Discussion of the Linear Group, supervised by E. H. Moore. Dickson then went to Leipzig and Paris to study under Sophus Lie and Camille Jordan, respectively. On returning to the US, he became an instructor at the University of California. In 1899 and at the extraordinarily young age of 25, Dickson was appointed associate professor at the University of Texas. Chicago countered by offering him a position in 1900, and he spent the balance of his career there. At Chicago, he supervised 53 Ph.D. theses; his most accomplished student was probably A. A. Albert. He was a visiting professor at the University of California in 1914, 1918, and 1922. In 1939, he returned to Texas to retire. 
Dickson married Susan McLeod Davis in 1902; they had two children, Campbell and Eleanor. Dickson was elected to the National Academy of Sciences in 1913, and was also a member of the American Philosophical Society, the American Academy of Arts and Sciences, the London Mathematical Society, the French Academy of Sciences and the Union of Czech Mathematicians and Physicists. Dickson was the first recipient of a prize created in 1924 by The American Association for the Advancement of Science, for his work on the arithmetics of algebras. Harvard (1936) and Princeton (1941) awarded him honorary doctorates. Dickson presided over the American Mathematical Society in 1917–1918. His December 1918 presidential address, titled "Mathematics in War Perspective", criticized American mathematics for falling short of that of Britain, France, and Germany: "Let it not again become possible that thousands of young men shall be so seriously handicapped in their Army and Navy work by lack of adequate preparation in mathematics." In 1928, he was also the first recipient of the Cole Prize in algebra.
https://en.wikipedia.org/wiki/Mackey%20topology
In functional analysis and related areas of mathematics, the Mackey topology, named after George Mackey, is the finest topology for a topological vector space which still preserves the continuous dual. In other words, the Mackey topology does not make continuous any linear functional that was discontinuous in the original topology. A topological vector space (TVS) is called a Mackey space if its topology is the same as the Mackey topology. The Mackey topology is the opposite of the weak topology, which is the coarsest topology on a topological vector space that preserves the continuity of all linear functionals in the continuous dual. The Mackey–Arens theorem states that all possible dual topologies are finer than the weak topology and coarser than the Mackey topology.

Definition

Definition for a pairing: Given a pairing , the Mackey topology on induced by , denoted by , is the polar topology defined on by using the set of all -compact disks in . When is endowed with the Mackey topology then it will be denoted by or simply or if no ambiguity can arise. A linear map is said to be Mackey continuous (with respect to pairings and ) if is continuous.

Definition for a topological vector space: The definition of the Mackey topology for a topological vector space (TVS) is a specialization of the above definition of the Mackey topology of a pairing. If is a TVS with continuous dual space , then the evaluation map on is called the canonical pairing. The Mackey topology on a TVS , denoted by , is the Mackey topology on induced by the canonical pairing. That is, the Mackey topology is the polar topology on obtained by using the set of all weak*-compact disks in . When is endowed with the Mackey topology then it will be denoted by or simply if no ambiguity can arise. A linear map between TVSs is Mackey continuous if is continuous.
Examples

Every metrizable locally convex space with continuous dual carries the Mackey topology; that is, every metrizable locally convex space is a Mackey space. Every Hausdorff barreled locally convex space is Mackey. Every Fréchet space carries the Mackey topology, and the topology coincides with the strong topology.

Applications

The Mackey topology has an application in economies with infinitely many commodities.

See also

Citations

Bibliography
https://en.wikipedia.org/wiki/Little%20Caesars
Little Caesar Enterprises Inc. (doing business as Little Caesars) is an American multinational pizza chain. Based on 2020 statistics, Little Caesars is the third largest pizza chain by total sales in the United States, behind Pizza Hut and Domino's Pizza. It operates and franchises pizza restaurants in the United States and internationally in Asia, Europe, the Middle East, Canada, Latin America and the Caribbean. The company was founded in 1959 and is based in Detroit, Michigan, headquartered in a newly-built annex of the Fox Theatre building in Downtown Detroit. Little Caesar Enterprises, Inc. is owned by Ilitch Holdings, which also owns the Detroit Tigers, who play across the street at Comerica Park, and the Detroit Red Wings, nearby at Little Caesars Arena. History Little Caesars Pizza was founded on May 8, 1959, by the married couple Mike Ilitch and Marian Ilitch. The first location was in a strip mall in Garden City, Michigan, a suburb of Detroit, and named "Little Caesar's Pizza Treat". The original store closed in October 2018, relocating down the street to a new building in nearby Westland. The first Little Caesar's franchise location opened in 1962 in Warren, Michigan and was still called Little Caesar's Pizza Treat. The same year the Little Caesar's logo became a 3D figure and was used in outdoor signage. The company is well known for its advertising catchphrase "Pizza! Pizza!", which was introduced in 1979. The phrase refers to two pizzas being offered for the comparable price of a single pizza from competitors. Initially, the pizzas were served in a single long package (a piece of corrugated cardboard in 2-by-1 proportions, with two pizzas placed side by side, then slid into a form-fitting paper sleeve that was folded and stapled closed). In 1988, they introduced a square deep-dish pizza called “Pan! Pan!”. Customers could purchase the “Pan! Pan!” pizzas as part of the 2-for-1 deal or mix and match with one pan pizza and one original round pizza. 
Little Caesars has since discarded the unwieldy packaging in favor of typical pizza boxes. In addition to pizza with "exotic" toppings, they served hot dogs, chicken, shrimp, and fish. In the '90s, the chain opened its pizza playground restaurant named "Caesarland", which featured interactive play equipment, sports, video games, and more. In 1997, the chain introduced shaker boards to advertise its "Hot-N-Ready Pizza", a large pepperoni pizza sold for $5. The concept was successful enough to become a permanent fixture of the chain, and Little Caesars' business model has shifted to focus more on carryout. In 1998, Little Caesars filled what was then the largest pizza order: 13,386 pizzas for the VF Corporation of Greensboro, North Carolina. Little Caesars was among the first to use a new kind of speed-cooking conveyor oven, the "Rotary Air Impingement Oven". On December 10, 2014, Little Caesars announced plans for a new eight-story, 205,000-square-foot Global Resource Center.
https://en.wikipedia.org/wiki/Irish%20Newfoundlanders
In modern Newfoundland (), many Newfoundlanders are of Irish descent. According to the Statistics Canada 2016 census, 20.7% of Newfoundlanders claim Irish ancestry (other major groups in the province include 37.5% English, 6.8% Scottish, and 5.2% French). However, this figure greatly under-represents the true number of Newfoundlanders of Irish ancestry, as 53.9% claimed "Canadian" as their ethnic origin in the same census. The majority of these respondents were of Irish, English, and Scottish origins, but no longer self-identify with their ethnic ancestral origins due to having lived in Canada for many generations. Even so, the family names, the features and colouring, the predominance of Catholics in some areas (particularly on the southeast portion of the Avalon Peninsula), the prevalence of Irish music, and even the accents of the people in these areas, are so reminiscent of rural Ireland that Irish author Tim Pat Coogan has described Newfoundland as "the most Irish place in the world outside of Ireland." History The large Irish Catholic element in Newfoundland in the 19th century played a major role in Newfoundland history, and developed a strong local culture of their own. They were in repeated political conflict—sometimes violent—with the Protestant Scots-Irish "Orange" element. These migrations were seasonal or temporary. Most Irish migrants were young men working on contract for English merchants and planters. It was a substantial migration, peaking in the 1770s and 1780s when more than 100 ships and 5,000 men cleared Irish ports for the fishery. The exodus from Ulster to the United States excepted, it was the most substantial movement of Irish across the Atlantic in the 18th century. Some went on to other North American destinations, some stayed, and many engaged in what has been called "to-ing and fro-ing", an annual seasonal migration between Ireland and Newfoundland due to fisheries and trade. 
As a result, the Newfoundland Irish remained in constant contact with news, politics, and cultural movements back in Ireland. Virtually from its inception, a small number of young Irish women joined the migration. They tended to stay and marry overwintering Irish male migrants. Seasonal and temporary migrations slowly evolved into emigration and the formation of permanent Irish family settlement in Newfoundland. This pattern intensified with the collapse of the old migratory cod fishery after 1790. An increase in Irish immigration, particularly of women, between 1800 and 1835, and the related natural population growth, helped transform the social, demographic, and cultural character of Newfoundland. In 1836, the government in St. John's commissioned a census that exceeded in its detail anything recorded to that time. More than 400 settlements were listed. The Irish, and their offspring, composed half of the total population. Close to three-quarters of them lived in St. John's and its near hinterland, from Renews to Carbonear, an area still
https://en.wikipedia.org/wiki/PMI
PMI may stand for:

Computer science
- Pointwise mutual information, in statistics
- Privilege Management Infrastructure, in cryptography
- Product and manufacturing information, in CAD systems

Companies
- Philip Morris International, tobacco company
- Picture Music International, former division of EMI
- Precious Moments, Inc., American giftware catalog company
- Precision Monolithics, a semiconductor manufacturer

Economics
- Passenger-mile
- Post-merger integration
- Private mortgage insurance or lenders mortgage insurance
- Purchasing Managers' Index, of business sentiment

Locations
- Palma de Mallorca Airport (IATA airport code PMI)

Mathematics
- Pointwise mutual information, measure in statistical probability theory
- Principle of Mathematical Induction, a method of proof involving the natural numbers

Organizations
- Plumbing Manufacturers International
- Project Management Institute
- Palang Merah Indonesia, the Indonesian Red Cross Society

Schools
- Philippine Maritime Institute
- PMI College - Bohol, Tagbilaran City
- Pima Medical Institute, US

Medicine
- The pulse at the point of maximum impulse (PMI), the apex beat of the heart
- Post-mortem interval, the time since a death

Technique
- Positive material identification of a metallic alloy
- Preventive maintenance inspection, USAF

Other uses
- US Presidential Management Internship, now Presidential Management Fellows Program

See also
https://en.wikipedia.org/wiki/Reference%20class%20problem
In statistics, the reference class problem is the problem of deciding what class to use when calculating the probability applicable to a particular case. For example, to estimate the probability of an aircraft crashing, we could refer to the frequency of crashes among various different sets of aircraft: all aircraft, this make of aircraft, aircraft flown by this company in the last ten years, etc. In this example, the aircraft for which we wish to calculate the probability of a crash is a member of many different classes, in which the frequency of crashes differs. It is not obvious which class we should refer to for this aircraft. In general, any case is a member of very many classes among which the frequency of the attribute of interest differs. The reference class problem discusses which class is the most appropriate to use.

More formally, many arguments in statistics take the form of a statistical syllogism: a proportion p of F are G; I is an F; therefore, the chance that I is a G is p. Here F is called the "reference class", G is the "attribute class", and I is the individual object. How is one to choose an appropriate class F? In Bayesian statistics, the problem arises as that of deciding on a prior probability for the outcome in question (or when considering multiple outcomes, a prior probability distribution).

History

John Venn stated in 1876 that "every single thing or event has an indefinite number of properties or attributes observable in it, and might therefore be considered as belonging to an indefinite number of different classes of things", leading to problems with how to assign probabilities to a single case. He used as an example the probability that John Smith, a consumptive Englishman aged fifty, will live to sixty-one. The name "problem of the reference class" was given by Hans Reichenbach, who wrote, "If we are asked to find the probability holding for an individual future event, we must first incorporate the event into a suitable reference class.
An individual thing or event may be incorporated in many reference classes, from which different probabilities will result." There has also been discussion of the reference class problem in philosophy and in the life sciences, e.g., clinical trial prediction.

Legal applications

Applying Bayesian probability in practice involves assessing a prior probability which is then combined with a likelihood function and updated through the use of Bayes' theorem. Suppose we wish to assess the probability of guilt of a defendant in a court case in which DNA (or other probabilistic) evidence is available. We first need to assess the prior probability of guilt of the defendant. We could say that the crime occurred in a city of 1,000,000 people, of whom 15% meet the requirements of being the same sex, age group and approximate description as the perpetrator. That suggests a prior probability of guilt of 1 in 150,000. We could cast the net wider and say that there is, say, a 25% chance that the perpetrator is
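The sensitivity of the conclusion to the chosen reference class can be sketched numerically (the DNA likelihood ratio of 1,000,000 is an assumed illustrative figure, not taken from any real case), using Bayes' theorem in odds form:

```python
def posterior_guilt(prior, likelihood_ratio):
    """Posterior odds = prior odds * likelihood ratio (Bayes' theorem in odds form)."""
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# Reference class 1: 150,000 people in the city match the description.
prior_narrow = 1 / 150_000
# Reference class 2 (a wider net): 1,000,000 potential perpetrators.
prior_wide = 1 / 1_000_000
lr_dna = 1_000_000  # assumed likelihood ratio for the DNA evidence

p1 = posterior_guilt(prior_narrow, lr_dna)
p2 = posterior_guilt(prior_wide, lr_dna)
```

The same evidence yields noticeably different posteriors (roughly 0.87 versus 0.50 here) purely because of the choice of reference class used for the prior.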
https://en.wikipedia.org/wiki/Maximum%20a%20posteriori%20estimation
In Bayesian statistics, a maximum a posteriori probability (MAP) estimate is an estimate of an unknown quantity that equals the mode of the posterior distribution. The MAP can be used to obtain a point estimate of an unobserved quantity on the basis of empirical data. It is closely related to the method of maximum likelihood (ML) estimation, but employs an augmented optimization objective which incorporates a prior distribution (that quantifies the additional information available through prior knowledge of a related event) over the quantity one wants to estimate. MAP estimation can therefore be seen as a regularization of maximum likelihood estimation.

Description

Assume that we want to estimate an unobserved population parameter $\theta$ on the basis of observations $x$. Let $f$ be the sampling distribution of $x$, so that $f(x \mid \theta)$ is the probability of $x$ when the underlying population parameter is $\theta$. Then the function $\theta \mapsto f(x \mid \theta)$ is known as the likelihood function and the estimate

$$\hat{\theta}_{\mathrm{ML}}(x) = \arg\max_{\theta} f(x \mid \theta)$$

is the maximum likelihood estimate of $\theta$. Now assume that a prior distribution $g$ over $\theta$ exists. This allows us to treat $\theta$ as a random variable as in Bayesian statistics. We can calculate the posterior distribution of $\theta$ using Bayes' theorem:

$$f(\theta \mid x) = \frac{f(x \mid \theta)\, g(\theta)}{\int_{\Theta} f(x \mid \vartheta)\, g(\vartheta)\, d\vartheta},$$

where $g$ is the density function of $\theta$ and $\Theta$ is the domain of $g$. The method of maximum a posteriori estimation then estimates $\theta$ as the mode of the posterior distribution of this random variable:

$$\hat{\theta}_{\mathrm{MAP}}(x) = \arg\max_{\theta} f(x \mid \theta)\, g(\theta).$$

The denominator of the posterior distribution (the so-called marginal likelihood) is always positive and does not depend on $\theta$, and therefore plays no role in the optimization. Observe that the MAP estimate of $\theta$ coincides with the ML estimate when the prior $g$ is uniform (i.e., a constant function). When the loss function is of the form

$$L_c(\theta, a) = \begin{cases} 0, & \text{if } |a - \theta| < c, \\ 1, & \text{otherwise}, \end{cases}$$

then as $c$ goes to 0, the Bayes estimator approaches the MAP estimator, provided that the distribution of $\theta$ is quasi-concave. But generally a MAP estimator is not a Bayes estimator unless $\theta$ is discrete.
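A minimal worked example, under the assumption of Bernoulli observations with a conjugate Beta(a, b) prior (so the posterior mode is available in closed form), shows how the MAP estimate reduces to the ML estimate under a uniform prior and is otherwise pulled toward the prior:

```python
def ml_estimate(k, n):
    """Maximum likelihood estimate of a Bernoulli parameter: k successes in n trials."""
    return k / n

def map_estimate(k, n, a, b):
    """MAP estimate with a Beta(a, b) prior: the mode of the Beta(k+a, n-k+b)
    posterior, valid when both posterior parameters exceed 1."""
    return (k + a - 1) / (n + a + b - 2)

k, n = 7, 10
ml = ml_estimate(k, n)
map_uniform = map_estimate(k, n, 1, 1)   # uniform Beta(1,1) prior: MAP equals ML
map_informed = map_estimate(k, n, 3, 3)  # Beta(3,3) prior pulls the estimate toward 1/2
```

This illustrates the regularization view: the prior acts as pseudo-observations that shrink the point estimate away from the raw frequency.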
Computation

MAP estimates can be computed in several ways:
- Analytically, when the mode(s) of the posterior distribution can be given in closed form. This is the case when conjugate priors are used.
- Via numerical optimization such as the conjugate gradient method or Newton's method. This usually requires first or second derivatives, which have to be evaluated analytically or numerically.
- Via a modification of an expectation-maximization algorithm. This does not require derivatives of the posterior density.
- Via a Monte Carlo method using simulated annealing.

Limitations

While only mild conditions are required for MAP estimation to be a limiting case of Bayes estimation (under the 0–1 loss function), it is not very representative of Bayesian methods in general. This is because MAP estimates are point estimates, whereas Bayesian methods are characterized by the use of distributions to summarize data and draw inferences: thus, Bayesian methods tend to report the posterior mean or median instead, together with credible intervals.
https://en.wikipedia.org/wiki/Copula%20%28probability%20theory%29
In probability theory and statistics, a copula is a multivariate cumulative distribution function for which the marginal probability distribution of each variable is uniform on the interval [0, 1]. Copulas are used to describe/model the dependence (inter-correlation) between random variables. Their name, introduced by applied mathematician Abe Sklar in 1959, comes from the Latin for "link" or "tie", similar but unrelated to grammatical copulas in linguistics. Copulas have been used widely in quantitative finance to model and minimize tail risk, and in portfolio-optimization applications. Sklar's theorem states that any multivariate joint distribution can be written in terms of univariate marginal distribution functions and a copula which describes the dependence structure between the variables. Copulas are popular in high-dimensional statistical applications as they allow one to easily model and estimate the distribution of random vectors by estimating marginals and copulas separately. There are many parametric copula families available, which usually have parameters that control the strength of dependence. Some popular parametric copula models are outlined below. Two-dimensional copulas are known in some other areas of mathematics under the names permutons and doubly stochastic measures.

Mathematical definition

Consider a random vector . Suppose its marginals are continuous, i.e. the marginal CDFs are continuous functions. By applying the probability integral transform to each component, the random vector has marginals that are uniformly distributed on the interval [0, 1]. The copula of is defined as the joint cumulative distribution function of . The copula C contains all information on the dependence structure between the components of , whereas the marginal cumulative distribution functions contain all information on the marginal distributions of .
The reverse of these steps can be used to generate pseudo-random samples from general classes of multivariate probability distributions. That is, given a procedure to generate a sample from the copula function, the required sample can be constructed by applying the (generalized) inverse marginal CDFs componentwise. The generalized inverses are unproblematic almost surely, since the marginals were assumed to be continuous.

Definition

In probabilistic terms, is a d-dimensional copula if C is a joint cumulative distribution function of a d-dimensional random vector on the unit cube with uniform marginals. In analytic terms, is a d-dimensional copula if: the copula is zero if any one of its arguments is zero; the copula is equal to u if one argument is u and all others are 1; and C is d-non-decreasing, i.e., for each hyperrectangle B the C-volume of B is non-negative. For instance, in the bivariate case, is a bivariate copula if , and for all and .

Sklar's theorem

Sklar's theorem, named after Abe Sklar, provides the theoretical foundation for the application of copulas.
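A minimal sampling sketch (assuming a bivariate Gaussian copula with an illustrative correlation of 0.8, built from scratch rather than with a statistics library) shows the two steps described above: generate dependent variables, then apply the probability integral transform so each marginal becomes uniform on [0, 1]:

```python
import math
import random

def std_normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def gaussian_copula_sample(rho, rng):
    """One draw (u1, u2) from a bivariate Gaussian copula with correlation rho:
    correlate two standard normals, then transform each through its own CDF."""
    z1 = rng.gauss(0.0, 1.0)
    z2 = rng.gauss(0.0, 1.0)
    x2 = rho * z1 + math.sqrt(1.0 - rho * rho) * z2
    return std_normal_cdf(z1), std_normal_cdf(x2)

rng = random.Random(0)
samples = [gaussian_copula_sample(0.8, rng) for _ in range(20_000)]

# Each marginal is uniform on [0, 1]; all dependence lives in the copula.
mean_u1 = sum(u for u, _ in samples) / len(samples)
cov = sum((u - 0.5) * (v - 0.5) for u, v in samples) / len(samples)
```

To obtain samples with non-uniform marginals, one would then apply inverse marginal CDFs to (u1, u2), exactly the reversal of the probability integral transform described above.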
https://en.wikipedia.org/wiki/Borel%E2%80%93Carath%C3%A9odory%20theorem
In mathematics, the Borel–Carathéodory theorem in complex analysis shows that an analytic function may be bounded by its real part. It is an application of the maximum modulus principle. It is named for Émile Borel and Constantin Carathéodory. Statement of the theorem Let a function be analytic on a closed disc of radius R centered at the origin. Suppose that r < R. Then, we have the following inequality: Here, the norm on the left-hand side denotes the maximum value of f in the closed disc: (where the last equality is due to the maximum modulus principle). Proof Define A by If f is constant c, the inequality follows from , so we may assume f is nonconstant. First let f(0) = 0. Since Re f is harmonic, Re f(0) is equal to the average of its values around any circle centered at 0. That is, Since f is regular and nonconstant, we have that Re f is also nonconstant. Since Re f(0) = 0, we must have Re for some z on the circle , so we may take . Now f maps into the half-plane P to the left of the x=A line. Roughly, our goal is to map this half-plane to a disk, apply Schwarz's lemma there, and obtain the stated inequality. sends P to the standard left half-plane. sends the left half-plane to the circle of radius R centered at the origin. The composite, which maps 0 to 0, is the desired map: From Schwarz's lemma applied to the composite of this map and f, we have Take |z| ≤ r. The above becomes so , as claimed. In the general case, we may apply the above to f(z) − f(0): which, when rearranged, gives the claim. Alternative result and proof We start with the following result: Applications Borel–Carathéodory is often used to bound the logarithm of derivatives, such as in the proof of the Hadamard factorization theorem. The following example is a strengthening of Liouville's theorem. References Sources Lang, Serge (1999). Complex Analysis (4th ed.). New York: Springer-Verlag, Inc. Titchmarsh, E. C. (1938). The theory of functions. Oxford University Press.
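The stripped inequality is usually stated as sup_{|z|<=r} |f(z)| <= (2r/(R−r)) sup_{|z|<=R} Re f(z) + ((R+r)/(R−r)) |f(0)|. The sketch below spot-checks this standard form numerically for f = exp; the form of the inequality is reconstructed from memory rather than from the article, so verify it against a reference before relying on it.

```python
import cmath
import math

def sup_abs_on_circle(f, radius, n=2000):
    # Sample |f| on the circle |z| = radius; by the maximum modulus
    # principle this equals the sup over the closed disc.
    return max(abs(f(radius * cmath.exp(2j * math.pi * k / n))) for k in range(n))

def sup_re_on_circle(f, radius, n=2000):
    # Re f is harmonic, so its maximum over the disc is attained on the boundary.
    return max(f(radius * cmath.exp(2j * math.pi * k / n)).real for k in range(n))

f = cmath.exp            # entire, hence analytic on every closed disc
R, r = 2.0, 1.0

lhs = sup_abs_on_circle(f, r)
A = sup_re_on_circle(f, R)
rhs = 2 * r / (R - r) * A + (R + r) / (R - r) * abs(f(0))
print(lhs <= rhs)  # True
```

For f = exp the left side is e ≈ 2.718 while the right side is 2e² + 3 ≈ 17.78, so the bound holds with plenty of room.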
https://en.wikipedia.org/wiki/Pin%20group
In mathematics, the pin group is a certain subgroup of the Clifford algebra associated to a quadratic space. It maps 2-to-1 to the orthogonal group, just as the spin group maps 2-to-1 to the special orthogonal group. In general the map from the Pin group to the orthogonal group is not surjective or a universal covering space, but if the quadratic form is definite (and dimension is greater than 2), it is both. The non-trivial element of the kernel is denoted which should not be confused with the orthogonal transform of reflection through the origin, generally denoted General definition Let be a vector space with a non-degenerate quadratic form . The pin group is the subset of the Clifford algebra consisting of elements of the form , where the are vectors such that . The spin group is defined similarly, but with restricted to be even; it is a subgroup of the pin group. In this article, is always a real vector space. When has basis vectors satisfying and the pin group is denoted Pin(p, q). Geometrically, for vectors with , is the reflection of a vector across the hyperplane orthogonal to . More generally, an element of the pin group acts on vectors by transforming to , which is the composition of k reflections. Since every orthogonal transformation can be expressed as a composition of reflections (the Cartan–Dieudonné theorem), it follows that this representation of the pin group is a homomorphism from the pin group onto the orthogonal group. This is often called the twisted adjoint representation. The elements ±1 of the pin group are the elements which map to the identity , and every element of O(p, q) corresponds to exactly two elements of Pin(p, q). Definite form The pin group of a definite form maps onto the orthogonal group, and each component is simply connected (in dimension 3 and higher): it double covers the orthogonal group. The pin groups for a positive definite quadratic form Q and for its negative −Q are not isomorphic, but the orthogonal groups are. 
In terms of the standard forms, O(n, 0) = O(0, n), but Pin(n, 0) and Pin(0, n) are in general not isomorphic. Using the "+" sign convention for Clifford algebras (where ), one writes and these both map onto O(n) = O(n, 0) = O(0, n). By contrast, we have the natural isomorphism Spin(n, 0) ≅ Spin(0, n) and they are both the (unique) non-trivial double cover of the special orthogonal group SO(n), which is the (unique) universal cover for n ≥ 3. Indefinite form There are as many as eight different double covers of O(p, q), for p, q ≠ 0, which correspond to the extensions of the center (which is either C2 × C2 or C4) by C2. Only two of them are pin groups—those that admit the Clifford algebra as a representation. They are called Pin(p, q) and Pin(q, p) respectively. As topological group Every connected topological group has a unique universal cover as a topological space, which has a unique group structure as a central extension by the fundamental group. For a dis
https://en.wikipedia.org/wiki/Graph%20morphism
Graph morphism may refer to: Graph homomorphism, in graph theory, a homomorphism between graphs Graph morphism, in algebraic geometry, a type of morphism of schemes
https://en.wikipedia.org/wiki/Montel%20space
In functional analysis and related areas of mathematics, a Montel space, named after Paul Montel, is any topological vector space (TVS) in which an analog of Montel's theorem holds. Specifically, a Montel space is a barrelled topological vector space in which every closed and bounded subset is compact. Definition A topological vector space (TVS) has the Heine–Borel property if every closed and bounded subset is compact. A Montel space is a barrelled topological vector space with the Heine–Borel property. Equivalently, it is an infrabarrelled semi-Montel space, where a Hausdorff locally convex topological vector space is called a semi-Montel space if every bounded subset is relatively compact. A subset of a TVS is compact if and only if it is complete and totally bounded. A Fréchet–Montel space is a Fréchet space that is also a Montel space. Characterizations A separable Fréchet space is a Montel space if and only if each weak-* convergent sequence in its continuous dual is strongly convergent. A Fréchet space is a Montel space if and only if every bounded continuous function sends closed bounded absolutely convex subsets of to relatively compact subsets of Moreover, if denotes the vector space of all bounded continuous functions on a Fréchet space then is Montel if and only if every sequence in that converges to zero in the compact-open topology also converges uniformly to zero on all closed bounded absolutely convex subsets of Sufficient conditions Semi-Montel spaces A closed vector subspace of a semi-Montel space is again a semi-Montel space. The locally convex direct sum of any family of semi-Montel spaces is again a semi-Montel space. The inverse limit of an inverse system consisting of semi-Montel spaces is again a semi-Montel space. The Cartesian product of any family of semi-Montel spaces (resp. Montel spaces) is again a semi-Montel space (resp. a Montel space). Montel spaces The strong dual of a Montel space is Montel. A barrelled quasi-complete nuclear space is a Montel space.
Every product and locally convex direct sum of a family of Montel spaces is a Montel space. The strict inductive limit of a sequence of Montel spaces is a Montel space. In contrast, closed subspaces and separated quotients of Montel spaces are in general not even reflexive. Every Fréchet Schwartz space is a Montel space. Properties Montel spaces are paracompact and normal. Semi-Montel spaces are quasi-complete and semi-reflexive while Montel spaces are reflexive. No infinite-dimensional Banach space is a Montel space. This is because a Banach space cannot satisfy the Heine–Borel property: the closed unit ball is closed and bounded, but not compact. Fréchet Montel spaces are separable and have a bornological strong dual. A metrizable Montel space is separable. Fréchet–Montel spaces are distinguished spaces. Examples In classical complex analysis, Montel's theorem asserts that the space of holomorphic functions on an open connected subset of the complex numbers has this property. Ma
https://en.wikipedia.org/wiki/FOIL%20method
In elementary algebra, FOIL is a mnemonic for the standard method of multiplying two binomials—hence the method may be referred to as the FOIL method. The word FOIL is an acronym for the four terms of the product: First ("first" terms of each binomial are multiplied together) Outer ("outside" terms are multiplied—that is, the first term of the first binomial and the second term of the second) Inner ("inside" terms are multiplied—second term of the first binomial and first term of the second) Last ("last" terms of each binomial are multiplied) The general form is Note that is both a "first" term and an "outer" term; is both a "last" and "inner" term, and so forth. The order of the four terms in the sum is not important and need not match the order of the letters in the word FOIL. History The FOIL method is a special case of a more general method for multiplying algebraic expressions using the distributive law. The word FOIL was originally intended solely as a mnemonic for high-school students learning algebra. The term appears in William Betz's 1929 text Algebra for Today, where he states: ... first terms, outer terms, inner terms, last terms. (The rule stated above may also be remembered by the word FOIL, suggested by the first letters of the words first, outer, inner, last.) William Betz was active in the movement to reform mathematics in the United States at that time, had written many texts on elementary mathematics topics and had "devoted his life to the improvement of mathematics education". Many students and educators in the US now use the word "FOIL" as a verb meaning "to expand the product of two binomials". Examples The method is most commonly used to multiply linear binomials. For example, If either binomial involves subtraction, the corresponding terms must be negated. 
For example, The distributive law The FOIL method is equivalent to a two-step process involving the distributive law: In the first step, the second binomial is distributed over the addition in the first binomial. In the second step, the distributive law is used to simplify each of the two terms. Note that this process involves a total of three applications of the distributive property. In contrast to the FOIL method, the method using distributivity can be applied easily to products with more terms such as trinomials and higher. Reverse FOIL The FOIL rule converts a product of two binomials into a sum of four (or fewer, if like terms are then combined) monomials. The reverse process is called factoring or factorization. In particular, if the proof above is read in reverse it illustrates the technique called factoring by grouping. Table as an alternative to FOIL A visual memory tool can replace the FOIL mnemonic for a pair of polynomials with any number of terms. Make a table with the terms of the first polynomial on the left edge and the terms of the second on the top edge, then fill in the table with products of multiplication. The table equivalent to the FOIL rule
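The table method described above amounts to multiplying coefficient lists: every entry of one polynomial is paired with every entry of the other, exactly as in the table. A short illustrative sketch (for two binomials the four products are precisely first, outer, inner, last):

```python
def poly_mul(p, q):
    """Multiply polynomials given as coefficient lists (index = power of x).

    For two binomials the double loop performs exactly the four FOIL
    products; for longer polynomials it is the general distributive rule."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b   # one cell of the multiplication table
    return out

# (x + 3)(x - 5) = x^2 - 2x - 15
print(poly_mul([3, 1], [-5, 1]))  # [-15, -2, 1]
```

The same function handles trinomials and higher without change, which is the advantage of the table/distributive view over the FOIL mnemonic.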
https://en.wikipedia.org/wiki/Pulation%20square
In category theory, a branch of mathematics, a pulation square (also called a Doolittle diagram) is a diagram that is simultaneously a pullback square and a pushout square. It is a self-dual concept. References Adámek, Jiří, Herrlich, Horst, & Strecker, George E. (1990). Abstract and Concrete Categories (4.2MB PDF). Originally publ. John Wiley & Sons. (now free on-line edition) Herrlich, Horst, & Strecker, George E., Category Theory, Heldermann Verlag (2007).
https://en.wikipedia.org/wiki/Oskar%20Becker
Oskar Becker (5 September 1889 – 13 November 1964) was a German philosopher, logician, mathematician, and historian of mathematics. Early life Becker was born in Leipzig, where he studied mathematics. His dissertation under Otto Hölder and Karl Rohn (1914) was On the Decomposition of Polygons in non-intersecting triangles on the Basis of the Axioms of Connection and Order. He served in World War I and returned to study philosophy with Edmund Husserl, writing his Habilitationsschrift on Investigations of the Phenomenological Foundations of Geometry and their Physical Applications (1923). Becker was Husserl's assistant, informally, and then official editor of the Yearbook for Phenomenological Research. Work in phenomenology and mathematical philosophy Becker published his major work, Mathematical Existence, in the Yearbook in 1927, the same year Martin Heidegger's Being and Time appeared there. Becker attended Heidegger's seminars during this period. Becker utilized not only Husserlian phenomenology but, much more controversially, Heideggerian hermeneutics, discussing arithmetical counting as "being toward death". His work was criticized both by neo-Kantians and by more mainstream, rationalist logicians, to whom Becker feistily replied. This work has not had great influence on later debates in the foundations of mathematics, despite its many interesting analyses of the topic of its title. Becker debated with David Hilbert and Paul Bernays over the role of the potential infinite in Hilbert's formalist metamathematics. Becker argued that Hilbert could not stick with finitism, but had to assume the potential infinite. Clearly enough, Hilbert and Bernays do implicitly accept the potential infinite, but they claim that each induction in their proofs is finite. Becker was correct that complete induction was needed for assertions of consistency in the form of universally quantified sentences, as opposed to claiming that a predicate holds for each individual natural number.
Paraontologie In discussing Heidegger, Becker introduced the German neologism Paraontologie. Although fundamentally different this usage has provided some minor influences in the term "paraontology" in English made more recently by Nahum Chandler, Fred Moten, and others, in discussing blackness. Intuitionistic and modal logic Becker made a start toward the formalization of L. E. J. Brouwer's intuitionistic logic. He developed a semantics of intuitionistic logic based on Husserl's phenomenology, and this semantics was used by Arend Heyting in his own formalization. Becker struggled, somewhat unsuccessfully, with the formulation of the rejection of excluded middle appropriate for intuitionistic logic. Becker failed in the end to correctly distinguish classical and intuitionistic negation, but he made a start. In an appendix to his book on mathematical existence, Becker set the problem of finding a formal calculus for intuitionistic logic. In a series of works in the early 1950s he surv
https://en.wikipedia.org/wiki/Invariant%20polynomial
In mathematics, an invariant polynomial is a polynomial that is invariant under a group acting on a vector space . Therefore, is a -invariant polynomial if for all and . Cases of particular importance are for Γ a finite group (in the theory of Molien series, in particular), a compact group, a Lie group or algebraic group. For a basis-independent definition of 'polynomial' nothing is lost by referring to the symmetric powers of the given linear representation of Γ.
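A concrete instance of the definition, chosen for illustration (it is not taken from the article): the polynomial x² + y² is invariant under the rotation group SO(2) acting on R², since rotations preserve Euclidean length. A numerical spot-check:

```python
import math

def p(x, y):
    # Candidate invariant polynomial p(x, y) = x^2 + y^2.
    return x * x + y * y

def rotate(theta, x, y):
    # The action of the rotation by angle theta in SO(2) on (x, y).
    return (math.cos(theta) * x - math.sin(theta) * y,
            math.sin(theta) * x + math.cos(theta) * y)

# Check p(g.v) == p(v) for a sample of group elements g and vectors v.
for theta in (0.3, 1.0, 2.5):
    for (x, y) in ((1.0, 2.0), (-0.5, 0.7)):
        assert abs(p(*rotate(theta, x, y)) - p(x, y)) < 1e-12
print("x^2 + y^2 is rotation-invariant (numerically)")
```

By contrast, a polynomial such as x alone fails this check for most angles, so it is not SO(2)-invariant.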
https://en.wikipedia.org/wiki/T-structure
In the branch of mathematics called homological algebra, a t-structure is a way to axiomatize the properties of an abelian subcategory of a derived category. A t-structure on consists of two subcategories of a triangulated category or stable infinity category which abstract the idea of complexes whose cohomology vanishes in positive, respectively negative, degrees. There can be many distinct t-structures on the same category, and the interplay between these structures has implications for algebra and geometry. The notion of a t-structure arose in the work of Beilinson, Bernstein, Deligne, and Gabber on perverse sheaves. Definition Fix a triangulated category with translation functor . A t-structure on is a pair of full subcategories, each of which is stable under isomorphism, which satisfy the following three axioms. If X is an object of and Y is an object of , then If X is an object of , then X[1] is also an object of . Similarly, if Y is an object of , then Y[-1] is also an object of . If A is an object of , then there exists a distinguished triangle such that X is an object of and Y is an object of . It can be shown that the subcategories and are closed under extensions in . In particular, they are stable under finite direct sums. Suppose that is a t-structure on . In this case, for any integer n, we define to be the full subcategory of whose objects have the form , where is an object of . Similarly, is the full subcategory of objects , where is an object of . More briefly, we define With this notation, the axioms above may be rewritten as: If X is an object of and Y is an object of , then and . If A is an object of , then there exists a distinguished triangle such that X is an object of and Y is an object of . 
The heart or core of the t-structure is the full subcategory consisting of objects contained in both and , that is, The heart of a t-structure is an abelian category (whereas a triangulated category is additive but almost never abelian), and it is stable under extensions. A triangulated category with a choice of t-structure is sometimes called a t-category. Variations It is clear that, to define a t-structure, it suffices to fix integers m and n and specify and . Some authors define a t-structure to be the pair . The two subcategories and determine each other. An object X is in if and only if for all objects Y in , and vice versa. That is, are left and right orthogonal complements of each other. Consequently, it is enough to specify only one of and . Moreover, because these subcategories are full by definition, it is enough to specify their objects. The above notation is adapted to the study of cohomology. When the goal is to study homology, slightly different notation is used. A homological t-structure on is a pair such that, if we define then is a (cohomological) t-structure on . That is, the definition is the same except that upper indices are converted to lower indic
https://en.wikipedia.org/wiki/UEFA%20coefficient
In European football, the UEFA coefficients are statistics based on weighted arithmetic means, used for ranking and seeding teams in club and international competitions. Introduced in 1979 for men's football tournaments, and later applied to women's football and futsal, the coefficients are calculated by UEFA, who administer football within Europe, as well as Armenia, Cyprus, Israel and the Asian parts of some transcontinental countries. The confederation publishes three types of rankings: one analysing a single season, one analysing a five-year span and another analysing a ten-year span. For men's competitions (discussed in this article), three sets of coefficients are calculated: National team coefficient: used during 1997–2017 to rank national teams, for seeding in the UEFA Euro qualifying and finals tournaments. UEFA decided after 2017, instead to seed national teams based on the: Overall ranking of the biennial UEFA Nations League for the seeded draw of groups in the UEFA Euro qualification stage. Overall ranking of the UEFA Euro qualification stage for the seeded draw of groups in the UEFA Euro final tournament. Association coefficient: used to rank the collective performance of the clubs of each member association, for assigning the number of places, and at what stage clubs enter the UEFA Champions League, UEFA Europa League and the UEFA Europa Conference League Club coefficient: used to rank individual clubs, for seeding in the UEFA Champions League, UEFA Europa League, UEFA Cup Winners' Cup (until 1999) and UEFA Europa Conference League (since 2021) Men's national team coefficient The UEFA national team coefficient was first introduced in November 1997, and for the first time used for seeding the UEFA Euro 2000 qualification groups and UEFA Euro 2000 final tournament.
The ranking system derived from the results of each European national football team, and was only calculated by UEFA every second year in November; defined as being the point of time when all UEFA nations had completed the qualification stage of the upcoming World Cup or European Championship tournament. The purpose of calculating the coefficients was to compile an official UEFA rank, to be used as seeding criteria for the European nations, when drawing up qualification groups and the final tournament groups of the European Championship. Similar to how the FIFA World Rankings previously had been created and used as a seeding tool, when drawing up qualification groups and final tournament groups for the FIFA World Cup. The FIFA World Ranking has always been used for the seeded draw of UEFA qualification groups for the FIFA World Cup, since the 1998 qualification draw took place in December 1995; except for the qualifiers for 2002 and 2006, where UEFA instead opted to use the UEFA national team coefficient also as the ranking system for the seeded draw of FIFA World Cup qualification groups. Old ranking and calculation method (1997–2007) Until the end of the Euro 200
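The "weighted arithmetic mean" character of the association coefficient can be sketched in a deliberately simplified form: one seasonal value is the total match points earned by an association's clubs divided by the number of its participating clubs, and the five-year coefficient sums the last five seasonal values. This is an assumption-laden illustration only; the actual UEFA rules add bonus points for reaching particular rounds and other adjustments, so the numbers here are not real coefficients.

```python
def season_coefficient(club_points, n_clubs):
    """One season's association value: total points of the association's clubs
    divided by the number of participating clubs.
    (Simplified: real UEFA rules also award bonus points for round progression.)"""
    return sum(club_points) / n_clubs

def five_year_coefficient(seasons):
    """Five-year association coefficient: sum of the last five seasonal values.
    Each entry of `seasons` is (list_of_club_points, number_of_clubs)."""
    return sum(season_coefficient(pts, n) for pts, n in seasons[-5:])

# Hypothetical association with four clubs per season over five seasons.
seasons = [([12, 9, 7, 4], 4), ([15, 10, 6, 5], 4), ([11, 8, 8, 3], 4),
           ([14, 12, 5, 4], 4), ([13, 9, 7, 6], 4)]
print(round(five_year_coefficient(seasons), 3))  # 42.0
```

The per-club division is what makes the measure a mean rather than a raw total, so associations with many entrants are not automatically advantaged.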
https://en.wikipedia.org/wiki/E8%20lattice
In mathematics, the E8 lattice is a special lattice in R^8. It can be characterized as the unique positive-definite, even, unimodular lattice of rank 8. The name derives from the fact that it is the root lattice of the E8 root system. The norm of the E8 lattice (divided by 2) is a positive definite even unimodular quadratic form in 8 variables, and conversely such a quadratic form can be used to construct a positive-definite, even, unimodular lattice of rank 8. The existence of such a form was first shown by H. J. S. Smith in 1867, and the first explicit construction of this quadratic form was given by Korkin and Zolotarev in 1873. The E8 lattice is also called the Gosset lattice after Thorold Gosset who was one of the first to study the geometry of the lattice itself around 1900. Lattice points The E8 lattice is a discrete subgroup of R^8 of full rank (i.e. it spans all of R^8). It can be given explicitly by the set of points Γ ⊂ R^8 such that all the coordinates are integers or all the coordinates are half-integers (a mixture of integers and half-integers is not allowed), and the sum of the eight coordinates is an even integer. In symbols, It is not hard to check that the sum of two lattice points is another lattice point, so that Γ is indeed a subgroup. An alternative description of the E8 lattice which is sometimes convenient is the set of all points in Γ′ ⊂ R^8 such that all the coordinates are integers and the sum of the coordinates is even, or all the coordinates are half-integers and the sum of the coordinates is odd. In symbols, The lattices Γ and Γ′ are isomorphic and one may pass from one to the other by changing the signs of any odd number of half-integer coordinates. The lattice Γ is sometimes called the even coordinate system for E8 while the lattice Γ′ is called the odd coordinate system. Unless we specify otherwise we shall work in the even coordinate system.
Properties The E8 lattice Γ can be characterized as the unique lattice in R^8 with the following properties: It is integral, meaning that all scalar products of lattice elements are integers. It is unimodular, meaning that it is integral, and can be generated by the columns of an 8×8 matrix with determinant ±1 (i.e. the volume of the fundamental parallelotope of the lattice is 1). Equivalently, Γ is self-dual, meaning it is equal to its dual lattice. It is even, meaning that the norm of any lattice vector is even. Even unimodular lattices can occur only in dimensions divisible by 8. In dimension 16 there are two such lattices: Γ ⊕ Γ and Γ16 (constructed in an analogous fashion to Γ). In dimension 24 there are 24 such lattices, called Niemeier lattices. The most important of these is the Leech lattice. One possible basis for Γ is given by the columns of the (upper triangular) matrix Γ is then the integral span of these vectors. All other possible bases are obtained from this one by right multiplication by elements of GL(8,Z). The shortest nonzero vectors in Γ have length equal to √2. There
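The coordinate description above can be checked by brute force: enumerating the candidate short vectors in the even coordinate system recovers the 240 minimal vectors of squared length 2 (the roots of E8). A small sketch, using exact rational arithmetic so the half-integer test stays honest:

```python
from fractions import Fraction
from itertools import product

def norm2(v):
    # Squared Euclidean length of v.
    return sum(x * x for x in v)

# Even coordinate system: all-integer or all-half-integer coordinates,
# with the coordinate sum an even integer.

# Integer-coordinate roots: squared length 2 forces entries in {-1, 0, 1}.
int_roots = [v for v in product((-1, 0, 1), repeat=8)
             if sum(v) % 2 == 0 and norm2(v) == 2]

# Half-integer roots: every entry is +1/2 or -1/2 (the sum of eight halves
# is automatically an integer; we keep only the even ones).
half = (Fraction(1, 2), Fraction(-1, 2))
half_roots = [v for v in product(half, repeat=8)
              if sum(v) % 2 == 0 and norm2(v) == 2]

print(len(int_roots), len(half_roots), len(int_roots) + len(half_roots))
# 112 128 240: the 240 shortest nonzero vectors, all of squared length 2
```

The counts decompose as C(8,2)·4 = 112 integer roots (two entries ±1) plus 2^7 = 128 half-integer roots, matching the known root count of E8.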
https://en.wikipedia.org/wiki/Beta%20normal%20form
In the lambda calculus, a term is in beta normal form if no beta reduction is possible. A term is in beta-eta normal form if neither a beta reduction nor an eta reduction is possible. A term is in head normal form if there is no beta-redex in head position. The normal form of a term, if one exists, is unique (as a corollary of the Church–Rosser theorem). However, a term may have more than one head normal form. Beta reduction In the lambda calculus, a beta redex is a term of the form: . A redex is in head position in a term , if has the following shape (note that application has higher priority than abstraction, and that the formula below is meant to be a lambda-abstraction, not an application): , where and . A beta reduction is an application of the following rewrite rule to a beta redex contained in a term: where is the result of substituting the term for the variable in the term . A head beta reduction is a beta reduction applied in head position, that is, of the following form: , where and . Any other reduction is an internal beta reduction. A normal form is a term that does not contain any beta redex, i.e. that cannot be further reduced. A head normal form is a term that does not contain a beta redex in head position, i.e. that cannot be further reduced by a head reduction. When considering the simple lambda calculus (viz. without the addition of constant or function symbols, meant to be reduced by additional delta rule), head normal forms are the terms of the following shape: , where is a variable, and . A head normal form is not always a normal form, because the applied arguments need not be normal. However, the converse is true: any normal form is also a head normal form. In fact, the normal forms are exactly the head normal forms in which the subterms are themselves normal forms. This gives an inductive syntactic description of normal forms. 
There is also the notion of weak head normal form: a term in weak head normal form is either a term in head normal form or a lambda abstraction. This means a redex may appear inside a lambda body. Reduction strategies In general, a given term can contain several redexes, hence several different beta reductions could be applied. We may specify a strategy to choose which redex to reduce. Normal-order reduction is the strategy in which one continually applies the rule for beta reduction in head position until no more such reductions are possible. At that point, the resulting term is in head normal form. One then continues applying head reduction in the subterms , from left to right. Stated otherwise, normal‐order reduction is the strategy that always reduces the left‐most outer‐most redex first. By contrast, in applicative order reduction, one applies the internal reductions first, and then only applies the head reduction when no more internal reductions are possible. Normal-order reduction is complete, in the sense that if a term has a head normal form, then normal‐order r
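Normal-order (leftmost-outermost) reduction as described above can be prototyped directly. The sketch below is illustrative only: terms are nested tuples ('var', x), ('lam', x, body), ('app', f, a), and substitution is the naive capture-avoiding version with on-demand renaming, not an efficient implementation.

```python
import itertools

_counter = itertools.count()

def fresh():
    # Generate a variable name assumed not to occur in user terms.
    return f"_v{next(_counter)}"

def free_vars(t):
    if t[0] == 'var':
        return {t[1]}
    if t[0] == 'app':
        return free_vars(t[1]) | free_vars(t[2])
    return free_vars(t[2]) - {t[1]}       # ('lam', x, body)

def subst(t, x, s):
    """Capture-avoiding substitution t[x := s]."""
    if t[0] == 'var':
        return s if t[1] == x else t
    if t[0] == 'app':
        return ('app', subst(t[1], x, s), subst(t[2], x, s))
    _, y, body = t
    if y == x:
        return t                          # x is shadowed; nothing to substitute
    if y in free_vars(s):                 # rename the binder to avoid capture
        z = fresh()
        body, y = subst(body, y, ('var', z)), z
    return ('lam', y, subst(body, x, s))

def step(t):
    """One normal-order beta step, or None if t is already a beta normal form."""
    if t[0] == 'app':
        if t[1][0] == 'lam':              # beta redex: (\y. body) arg
            return subst(t[1][2], t[1][1], t[2])
        r = step(t[1])
        if r is not None:
            return ('app', r, t[2])
        r = step(t[2])
        return None if r is None else ('app', t[1], r)
    if t[0] == 'lam':
        r = step(t[2])
        return None if r is None else ('lam', t[1], r)
    return None                           # a bare variable is normal

def normalize(t, limit=10000):
    for _ in range(limit):
        r = step(t)
        if r is None:
            return t
        t = r
    raise RuntimeError("no beta normal form reached within the step limit")

# K combinator: (\x. \y. x) applied to a and b reduces to a.
K = ('lam', 'x', ('lam', 'y', ('var', 'x')))
term = ('app', ('app', K, ('var', 'a')), ('var', 'b'))
print(normalize(term))  # ('var', 'a')
```

Because the strategy always contracts the leftmost-outermost redex first, it reaches a normal form whenever one exists, illustrating the completeness property stated above.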
https://en.wikipedia.org/wiki/Variogram
In spatial statistics the theoretical variogram, denoted , is a function describing the degree of spatial dependence of a spatial random field or stochastic process . The semivariogram is half the variogram. In the case of a concrete example from the field of gold mining, a variogram will give a measure of how much two samples taken from the mining area will vary in gold percentage depending on the distance between those samples. Samples taken far apart will vary more than samples taken close to each other. Definition The semivariogram was first defined by Matheron (1963) as half the average squared difference between the values at points ( and ) separated at distance . Formally where is a point in the geometric field , and is the value at that point. The triple integral is over 3 dimensions. is the separation distance (e.g., in meters or km) of interest. For example, the value could represent the iron content in soil, at some location (with geographic coordinates of latitude, longitude, and elevation) over some region with element of volume . To obtain the semivariogram for a given , all pairs of points at that exact distance would be sampled. In practice it is impossible to sample everywhere, so the empirical variogram is used instead. 
The variogram is twice the semivariogram and can be defined, equivalently, as the variance of the difference between field values at two locations ( and , note change of notation from to and to ) across realizations of the field (Cressie 1993): If the spatial random field has constant mean , this is equivalent to the expectation for the squared increment of the values between locations and (Wackernagel 2003) (where and are points in space and possibly time): In the case of a stationary process, the variogram and semivariogram can be represented as a function of the difference between locations only, by the following relation (Cressie 1993): If the process is furthermore isotropic, then the variogram and semivariogram can be represented by a function of the distance only (Cressie 1993): The indexes or are typically not written. The terms are used for all three forms of the function. Moreover, the term "variogram" is sometimes used to denote the semivariogram, and the symbol is sometimes used for the variogram, which brings some confusion. Properties According to (Cressie 1993, Chiles and Delfiner 1999, Wackernagel 2003) the theoretical variogram has the following properties: The semivariogram is nonnegative , since it is the expectation of a square. The semivariogram at distance 0 is always 0, since . A function is a semivariogram if and only if it is a conditionally negative definite function, i.e. for all weights subject to and locations it holds: which corresponds to the fact that the variance of is given by the negative of this double sum and must be nonnegative. If the covariance function of a stationary process exists it is related to variogram by If a statio
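In practice the empirical semivariogram mentioned above replaces the expectation with an average over sampled point pairs, binned by separation distance. A minimal sketch (the function name, binning scheme and toy data are invented for the example):

```python
import math
from collections import defaultdict

def empirical_semivariogram(points, values, bin_width=1.0):
    """Empirical semivariogram: for each distance bin, half the average
    squared difference of values over all point pairs falling in that bin."""
    acc = defaultdict(lambda: [0.0, 0])   # bin -> [sum of squared diffs, count]
    n = len(points)
    for i in range(n):
        for j in range(i + 1, n):
            h = math.dist(points[i], points[j])
            b = int(h // bin_width)
            acc[b][0] += (values[i] - values[j]) ** 2
            acc[b][1] += 1
    return {b: 0.5 * s / c for b, (s, c) in sorted(acc.items())}

# Toy 1-D transect: nearby samples are similar, distant ones differ more.
pts = [(float(i), 0.0) for i in range(10)]
vals = [0.0, 0.2, 0.1, 0.4, 0.5, 0.9, 1.0, 1.3, 1.2, 1.6]
gamma = empirical_semivariogram(pts, vals, bin_width=2.0)
print({b: round(g, 3) for b, g in gamma.items()})
```

For this data the semivariogram increases with lag, the signature of spatial dependence described in the gold-mining example above.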
https://en.wikipedia.org/wiki/Markov%20chain%20geostatistics
Markov chain geostatistics uses Markov chain spatial models, simulation algorithms and associated spatial correlation measures (e.g., transiogram) based on the Markov chain random field theory, which extends a single Markov chain into a multi-dimensional random field for geostatistical modeling. A Markov chain random field is still a single spatial Markov chain. The spatial Markov chain moves or jumps in a space and decides its state at any unobserved location through interactions with its nearest known neighbors in different directions. The data interaction process can be well explained as a local sequential Bayesian updating process within a neighborhood. Because single-step transition probability matrices are difficult to estimate from sparse sample data and are impractical in representing the complex spatial heterogeneity of states, the transiogram, which is defined as a transition probability function over the distance lag, is proposed as the accompanying spatial measure of Markov chain random fields. References Li, W. 2007. Markov chain random fields for estimation of categorical variables. Math. Geol., 39(3): 321–335. Li, W. et al. 2015. Bayesian Markov chain random field cosimulation for improving land cover classification accuracy. Math. Geosci., 47(2): 123–148. Li, W., and C. Zhang. 2019. Markov chain random fields in the perspective of spatial Bayesian networks and optimal neighborhoods for simulation of categorical fields. Computational Geosciences, 23(5): 1087-1106. http://gisweb.grove.ad.uconn.edu/weidong/Markov_chain_spatial_statistics.htm
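A one-dimensional illustration of the transiogram defined above: estimate p_ij(h) from a categorical sequence as the fraction of locations in state i whose neighbour at lag h is in state j. The function name and the toy data are invented for the example; real transiogram estimation works on irregularly spaced multi-dimensional samples.

```python
def empirical_transiogram(seq, i, j, max_lag):
    """Empirical transiogram p_ij(h) for h = 1..max_lag: among positions in
    state i, the fraction whose h-step neighbour is in state j."""
    out = {}
    for h in range(1, max_lag + 1):
        pairs = [(seq[k], seq[k + h]) for k in range(len(seq) - h)]
        from_i = [b for a, b in pairs if a == i]
        out[h] = (sum(1 for b in from_i if b == j) / len(from_i)
                  if from_i else float('nan'))
    return out

# Toy categorical transect (e.g., facies codes along a drill line).
seq = "AAABBAAABBBAAABBAAAABBBAA"
t_AB = empirical_transiogram(seq, 'A', 'B', 4)
print({h: round(p, 2) for h, p in t_AB.items()})
```

Plotted against the lag h, such curves play the role that the variogram plays for continuous variables, which is why the transiogram is described above as the accompanying spatial measure.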
https://en.wikipedia.org/wiki/Continuous%20linear%20operator
In functional analysis and related areas of mathematics, a continuous linear operator or continuous linear mapping is a continuous linear transformation between topological vector spaces. An operator between two normed spaces is a bounded linear operator if and only if it is a continuous linear operator. Continuous linear operators Characterizations of continuity Suppose that is a linear operator between two topological vector spaces (TVSs). The following are equivalent: is continuous. is continuous at some point is continuous at the origin in If is locally convex then this list may be extended to include: for every continuous seminorm on there exists a continuous seminorm on such that If and are both Hausdorff locally convex spaces then this list may be extended to include: is weakly continuous and its transpose maps equicontinuous subsets of to equicontinuous subsets of If is a sequential space (such as a pseudometrizable space) then this list may be extended to include: is sequentially continuous at some (or equivalently, at every) point of its domain. If is pseudometrizable or metrizable (such as a normed or Banach space) then we may add to this list: is a bounded linear operator (that is, it maps bounded subsets of to bounded subsets of ). If is seminormable space (such as a normed space) then this list may be extended to include: maps some neighborhood of 0 to a bounded subset of If and are both normed or seminormed spaces (with both seminorms denoted by ) then this list may be extended to include: for every there exists some such that If and are Hausdorff locally convex spaces with finite-dimensional then this list may be extended to include: the graph of is closed in Continuity and boundedness Throughout, is a linear map between topological vector spaces (TVSs). Bounded subset The notion of a "bounded set" for a topological vector space is that of being a von Neumann bounded set. 
If the space happens to also be a normed space (or a seminormed space) then a subset is von Neumann bounded if and only if it is , meaning that A subset of a normed (or seminormed) space is called if it is norm-bounded (or equivalently, von Neumann bounded). For example, the scalar field ( or ) with the absolute value is a normed space, so a subset is bounded if and only if is finite, which happens if and only if is contained in some open (or closed) ball centered at the origin (zero). Any translation, scalar multiple, and subset of a bounded set is again bounded. Function bounded on a set If is a set then is said to be if is a bounded subset of which if is a normed (or seminormed) space happens if and only if A linear map is bounded on a set if and only if it is bounded on for every (because and any translation of a bounded set is again bounded) if and only if it is bounded on for every non-zero scalar (because and any scalar multiple of a bounded set is again bounded). Conseq
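For the normed-space case, the bounded-operator criterion in the list of characterizations can be stated explicitly for a linear map F : X → Y between normed (or seminormed) spaces:

```latex
F \text{ is continuous}
\;\iff\; \sup_{\|x\|_X \le 1} \|F(x)\|_Y < \infty
\;\iff\; \exists\, C \ge 0 \text{ such that } \|F(x)\|_Y \le C \,\|x\|_X \text{ for all } x \in X.
```

Equivalently, in ε–δ form: for every ε > 0 there exists δ > 0 such that ‖x‖ ≤ δ implies ‖F(x)‖ ≤ ε.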
https://en.wikipedia.org/wiki/Paul%20Vojta
Paul Alan Vojta (born September 30, 1957) is an American mathematician, known for his work in number theory on Diophantine geometry and Diophantine approximation. Contributions In formulating Vojta's conjecture, he pointed out the possible existence of parallels between the Nevanlinna theory of complex analysis, and diophantine analysis in the circle of ideas around the Mordell conjecture and abc conjecture. This suggested the importance of the integer solutions (affine space) aspect of diophantine equations. Vojta wrote the .dvi-previewer xdvi. Education and career He was an undergraduate student at the University of Minnesota, where he became a Putnam Fellow in 1977, and a doctoral student at Harvard University (1983). He currently is a professor in the Department of Mathematics at the University of California, Berkeley. Awards and honors In 2012 he became a fellow of the American Mathematical Society. Selected publications Diophantine Approximations and Value Distribution Theory, Lecture Notes in Mathematics 1239, Springer Verlag, 1987, References External links Vojta's home page 1957 births Living people Arithmetic geometers Putnam Fellows Institute for Advanced Study visiting scholars University of Minnesota alumni Harvard University alumni University of California, Berkeley faculty 20th-century American mathematicians Fellows of the American Mathematical Society International Mathematical Olympiad participants 21st-century American mathematicians
https://en.wikipedia.org/wiki/William%20Magee%20%28archbishop%20of%20Dublin%29
William Magee (18 March 1766 – 18 August 1831) was an Irish academic and Church of Ireland clergyman. He taught at Trinity College Dublin, serving as Erasmus Smith's Professor of Mathematics (1800–1811), was Bishop of Raphoe (1819–1822) and then Archbishop of Dublin until his death. Biography He was born at Enniskillen, County Fermanagh, Ireland, the third son of farmer John Magee and Jane Glasgow. He was educated at Trinity College Dublin (BA 1786, MA 1789, BD 1797, DD 1801), where he had been a Scholar (1784), and was elected fellow in 1788. He was appointed Erasmus Smith Professor of Mathematics (and Senior Fellow) in 1800, and in 1813 was elected a Fellow of the Royal Society as a "gentleman of high distinction for mathematical & philosophical knowledge & Author of several works of importance". Though not a research mathematician, he was a popular teacher at TCD and was well-liked by students. He had been ordained into the Church of Ireland in 1790, and two of his sermons (preached in the college chapel in 1798 and 1799) formed the basis of his "Discourses on the Scriptural Doctrines of Atonement and Sacrifice" (1801), a polemic against Unitarian theology, which was answered by Lant Carpenter. In 1812 he had resigned from TCD to undertake the charge of the livings of Cappagh, County Tyrone, and Killyleagh, County Down. In 1813 he became Dean of Cork. He was well known as a preacher and promoter of the Irish Second Reformation, and in 1819 he was consecrated Bishop of Raphoe. In 1822 the Archbishop of Dublin was translated to Armagh, and Magee succeeded him at Dublin. Though in most respects a tolerant man, he steadily opposed the movement for Catholic Emancipation. He gained notoriety for prohibiting the Catholic inhabitants of Glendalough from celebrating Mass "as they had theretofore done in their ancient and venerated cathedral of St. Kevin". He died on 18 August 1831 at Stillorgan, near Dublin. He had 16 children, of whom 3 sons and 9 daughters survived him.
He was the grandfather of Archbishop William Connor Magee of York. References Works of the Most Reverend William Magee, D.D., 1842. External links Library Ireland: William Magee, Archbishop of Dublin The Works of the Most Reverend William Magee, Volume 1 1766 births 1831 deaths People from Enniskillen Academics of Trinity College Dublin Alumni of Trinity College Dublin Anglican archbishops of Dublin Anglican bishops of Raphoe Fellows of the Royal Society Fellows of Trinity College Dublin Members of the Privy Council of Ireland Scholars of Trinity College Dublin Deans of Cork Irish Anglican archbishops Christian clergy from County Fermanagh Scholars and academics from County Fermanagh 19th-century Irish mathematicians 20th-century Irish mathematicians
https://en.wikipedia.org/wiki/Symmetric%20bilinear%20form
In mathematics, a symmetric bilinear form on a vector space is a bilinear map from two copies of the vector space to the field of scalars such that the order of the two vectors does not affect the value of the map. In other words, it is a bilinear function B that maps every pair (u, v) of elements of the vector space V to the underlying field such that B(u, v) = B(v, u) for every u and v in V. They are also referred to more briefly as just symmetric forms when "bilinear" is understood. Symmetric bilinear forms on finite-dimensional vector spaces precisely correspond to symmetric matrices given a basis for V. Among bilinear forms, the symmetric ones are important because they are the ones for which the vector space admits a particularly simple kind of basis known as an orthogonal basis (at least when the characteristic of the field is not 2). Given a symmetric bilinear form B, the function q(v) = B(v, v) is the associated quadratic form on the vector space. Moreover, if the characteristic of the field is not 2, B is the unique symmetric bilinear form associated with q. Formal definition Let V be a vector space of dimension n over a field K. A map B : V × V → K is a symmetric bilinear form on the space if: B(u, v) = B(v, u) for all u, v in V; B(u + w, v) = B(u, v) + B(w, v) for all u, v, w in V; B(λu, v) = λB(u, v) for all u, v in V and all λ in K. The last two axioms only establish linearity in the first argument, but the first axiom (symmetry) then immediately implies linearity in the second argument as well. Examples Let V = Rⁿ, the n dimensional real vector space. Then the standard dot product is a symmetric bilinear form, B(x, y) = x ⋅ y. The matrix corresponding to this bilinear form (see below) on a standard basis is the identity matrix. Let V be any vector space (including possibly infinite-dimensional), and assume T is a linear function from V to the field. Then the function defined by B(v, w) = T(v)T(w) is a symmetric bilinear form. Let V be the vector space of continuous single-variable real functions. For f, g in V one can define B(f, g) = ∫_0^1 f(t)g(t) dt. By the properties of definite integrals, this defines a symmetric bilinear form on V.
This is an example of a symmetric bilinear form which is not associated to any symmetric matrix (since the vector space is infinite-dimensional). Matrix representation Let C = {e_1, …, e_n} be a basis for V. Define the n×n matrix A by A_ij = B(e_i, e_j). The matrix A is a symmetric matrix exactly due to symmetry of the bilinear form. If we let the n×1 matrix x represent the vector v with respect to this basis, and similarly let the n×1 matrix y represent the vector w, then B(v, w) is given by: xᵀAy. Suppose C′ is another basis for V, with [e′_1 ⋯ e′_n] = [e_1 ⋯ e_n]S, with S an invertible n×n matrix. Now the new matrix representation for the symmetric bilinear form is given by A′ = SᵀAS. Orthogonality and singularity Two vectors v and w are defined to be orthogonal with respect to the bilinear form B if B(v, w) = 0, which, for a symmetric bilinear form, is equivalent to B(w, v) = 0. The radical of a bilinear form B is the set of vectors orthogonal with every vector in V. That this is a subspace of V follows from the linearity of B in each of its arguments. When working with a matrix representation A with respect to a certain basis, v, represented
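The identities above are easy to check numerically. The following Python snippet, with an arbitrary example form and change-of-basis matrix, verifies both the symmetry B(v, w) = B(w, v) and the change-of-basis rule A′ = SᵀAS:

```python
import numpy as np

# A symmetric matrix A represents the bilinear form B(v, w) = x^T A y
# in a chosen basis (x, y are the coordinate vectors of v, w).
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

x = np.array([1.0, 2.0])
y = np.array([-1.0, 4.0])

B_xy = x @ A @ y
B_yx = y @ A @ x
# Symmetry of A makes the form symmetric: B(v, w) = B(w, v).
assert np.isclose(B_xy, B_yx)

# Change of basis: the new basis vectors are the columns of an
# invertible S; the form's matrix in the new basis is S^T A S.
S = np.array([[1.0, 1.0],
              [0.0, 1.0]])
A_new = S.T @ A @ S

# Coordinates transform by x = S x', so the value of the form agrees
# whichever basis it is computed in.
x_new = np.linalg.solve(S, x)
y_new = np.linalg.solve(S, y)
assert np.isclose(x_new @ A_new @ y_new, B_xy)
```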
https://en.wikipedia.org/wiki/Gaussian%20random%20field
In statistics, a Gaussian random field (GRF) is a random field involving Gaussian probability density functions of the variables. A one-dimensional GRF is also called a Gaussian process. An important special case of a GRF is the Gaussian free field. With regard to applications of GRFs, the initial conditions of physical cosmology generated by quantum mechanical fluctuations during cosmic inflation are thought to be a GRF with a nearly scale invariant spectrum. Construction One way of constructing a GRF is by assuming that the field is the sum of a large number of plane, cylindrical or spherical waves with uniformly distributed random phase. Where applicable, the central limit theorem dictates that at any point, the sum of these individual plane-wave contributions will exhibit a Gaussian distribution. This type of GRF is completely described by its power spectral density, and hence, through the Wiener–Khinchin theorem, by its two-point autocorrelation function, which is related to the power spectral density through a Fourier transformation. Suppose f(x) is the value of a GRF at a point x in some D-dimensional space. If we make a vector of the values of f at N points, x1, ..., xN, in the D-dimensional space, then the vector (f(x1), ..., f(xN)) will always be distributed as a multivariate Gaussian. References External links For details on the generation of Gaussian random fields using Matlab, see circulant embedding method for Gaussian random field. Spatial processes
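The defining property, that any finite collection of field values is jointly Gaussian, gives a direct (if O(N³)) way to sample a GRF: evaluate a covariance function on a set of points and draw from the resulting multivariate normal. A Python sketch, with an assumed squared-exponential covariance (the function name and covariance choice are mine):

```python
import numpy as np

def sample_grf(points, corr_len, n_samples=1, seed=0):
    """Draw samples of a zero-mean stationary GRF with covariance
    C(r) = exp(-r^2 / (2 * corr_len^2)) at the given points."""
    pts = np.atleast_2d(points)
    # Pairwise distances between all evaluation points.
    r = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    C = np.exp(-r**2 / (2 * corr_len**2))
    rng = np.random.default_rng(seed)
    # Any finite set of GRF values is multivariate Gaussian.
    return rng.multivariate_normal(np.zeros(len(pts)), C, size=n_samples)
```

For large grids one would instead use spectral or circulant-embedding methods, as in the external link above, rather than factoring a dense covariance matrix.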
https://en.wikipedia.org/wiki/Pearson%20distribution
The Pearson distribution is a family of continuous probability distributions. It was first published by Karl Pearson in 1895 and subsequently extended by him in 1901 and 1916 in a series of articles on biostatistics. History The Pearson system was originally devised in an effort to model visibly skewed observations. It was well known at the time how to adjust a theoretical model to fit the first two cumulants or moments of observed data: Any probability distribution can be extended straightforwardly to form a location-scale family. Except in pathological cases, a location-scale family can be made to fit the observed mean (first cumulant) and variance (second cumulant) arbitrarily well. However, it was not known how to construct probability distributions in which the skewness (standardized third cumulant) and kurtosis (standardized fourth cumulant) could be adjusted equally freely. This need became apparent when trying to fit known theoretical models to observed data that exhibited skewness. Pearson's examples include survival data, which are usually asymmetric. In his original paper, Pearson (1895, p. 360) identified four types of distributions (numbered I through IV) in addition to the normal distribution (which was originally known as type V). The classification depended on whether the distributions were supported on a bounded interval, on a half-line, or on the whole real line; and whether they were potentially skewed or necessarily symmetric. A second paper (Pearson 1901) fixed two omissions: it redefined the type V distribution (originally just the normal distribution, but now the inverse-gamma distribution) and introduced the type VI distribution. Together the first two papers cover the five main types of the Pearson system (I, III, IV, V, and VI). In a third paper, Pearson (1916) introduced further special cases and subtypes (VII through XII). Rhind (1909, pp. 
430–432) devised a simple way of visualizing the parameter space of the Pearson system, which was subsequently adopted by Pearson (1916, plate 1 and pp. 430ff., 448ff.). The Pearson types are characterized by two quantities, commonly referred to as β1 and β2. The first is the square of the skewness: where γ1 is the skewness, or third standardized moment. The second is the traditional kurtosis, or fourth standardized moment: β2 = γ2 + 3. (Modern treatments define kurtosis γ2 in terms of cumulants instead of moments, so that for a normal distribution we have γ2 = 0 and β2 = 3. Here we follow the historical precedent and use β2.) The diagram on the right shows which Pearson type a given concrete distribution (identified by a point (β1, β2)) belongs to. Many of the skewed and/or non-mesokurtic distributions familiar to us today were still unknown in the early 1890s. What is now known as the beta distribution had been used by Thomas Bayes as a posterior distribution of the parameter of a Bernoulli distribution in his 1763 work on inverse probability. The Beta distribution gained pro
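The two classification quantities β1 and β2 can be computed from data as ratios of central moments. A Python sketch (the function name is mine):

```python
import numpy as np

def pearson_betas(x):
    """Pearson's (beta1, beta2): squared skewness and moment kurtosis."""
    x = np.asarray(x, dtype=float)
    m = x.mean()
    m2 = ((x - m) ** 2).mean()   # second central moment (variance)
    m3 = ((x - m) ** 3).mean()
    m4 = ((x - m) ** 4).mean()
    gamma1 = m3 / m2 ** 1.5      # skewness
    beta1 = gamma1 ** 2
    beta2 = m4 / m2 ** 2         # beta2 = gamma2 + 3
    return beta1, beta2
```

For a large sample from a normal distribution, beta1 is close to 0 and beta2 close to 3, placing the point (β1, β2) at the normal (type "0") position of the Pearson diagram.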
https://en.wikipedia.org/wiki/Bornological%20space
In mathematics, particularly in functional analysis, a bornological space is a type of space which, in some sense, possesses the minimum amount of structure needed to address questions of boundedness of sets and linear maps, in the same way that a topological space possesses the minimum amount of structure needed to address questions of continuity. Bornological spaces are distinguished by the property that a linear map from a bornological space into any locally convex spaces is continuous if and only if it is a bounded linear operator. Bornological spaces were first studied by George Mackey. The name was coined by Bourbaki after , the French word for "bounded". Bornologies and bounded maps A on a set is a collection of subsets of that satisfy all the following conditions: covers that is, ; is stable under inclusions; that is, if and then ; is stable under finite unions; that is, if then ; Elements of the collection are called or simply if is understood. The pair is called a or a . A or of a bornology is a subset of such that each element of is a subset of some element of Given a collection of subsets of the smallest bornology containing is called the If and are bornological sets then their on is the bornology having as a base the collection of all sets of the form where and A subset of is bounded in the product bornology if and only if its image under the canonical projections onto and are both bounded. Bounded maps If and are bornological sets then a function is said to be a or a (with respect to these bornologies) if it maps -bounded subsets of to -bounded subsets of that is, if If in addition is a bijection and is also bounded then is called a . Vector bornologies Let be a vector space over a field where has a bornology A bornology on is called a if it is stable under vector addition, scalar multiplication, and the formation of balanced hulls (i.e. if the sum of two bounded sets is bounded, etc.). 
If is a topological vector space (TVS) and is a bornology on then the following are equivalent: is a vector bornology; Finite sums and balanced hulls of -bounded sets are -bounded; The scalar multiplication map defined by and the addition map defined by are both bounded when their domains carry their product bornologies (i.e. they map bounded subsets to bounded subsets). A vector bornology is called a if it is stable under the formation of convex hulls (i.e. the convex hull of a bounded set is bounded) then And a vector bornology is called if the only bounded vector subspace of is the 0-dimensional trivial space Usually, is either the real or complex numbers, in which case a vector bornology on will be called a if has a base consisting of convex sets. Bornivorous subsets A subset of is called and a if it absorbs every bounded set. In a vector bornology, is bornivorous if it absorbs every bounded balanced set and in a convex vector bornology is
https://en.wikipedia.org/wiki/Mackey%20space
In mathematics, particularly in functional analysis, a Mackey space is a locally convex topological vector space X such that the topology of X coincides with the Mackey topology τ(X,X′), the finest topology which still preserves the continuous dual. They are named after George Mackey. Examples Examples of locally convex spaces that are Mackey spaces include: All barrelled spaces and more generally all infrabarreled spaces Hence in particular all bornological spaces and reflexive spaces All metrizable spaces. In particular, all Fréchet spaces, including all Banach spaces and specifically Hilbert spaces, are Mackey spaces. The product, locally convex direct sum, and the inductive limit of a family of Mackey spaces is a Mackey space. Properties A locally convex space with continuous dual is a Mackey space if and only if each convex and -relatively compact subset of is equicontinuous. The completion of a Mackey space is again a Mackey space. A separated quotient of a Mackey space is again a Mackey space. A Mackey space need not be separable, complete, quasi-barrelled, nor -quasi-barrelled. See also References Topological vector spaces
https://en.wikipedia.org/wiki/Studia%20Mathematica
Studia Mathematica is a triannual peer-reviewed scientific journal of mathematics published by the Polish Academy of Sciences. Papers are written in English, French, German, or Russian, primarily covering functional analysis, abstract methods of mathematical analysis, and probability theory. The editor-in-chief is Adam Skalski. History The journal was established in 1929 by Stefan Banach and Hugo Steinhaus and its first editors were Banach, Steinhaus and Herman Auerbach. Due to the Second World War publication stopped after volume 9 (1940) and was not resumed until volume 10 in 1948. Abstracting and indexing The journal is abstracted and indexed in: Current Contents/Physical, Chemical & Earth Sciences MathSciNet Science Citation Index Scopus Zentralblatt MATH According to the Journal Citation Reports, the journal has a 2018 impact factor of 0.617. References External links Mathematics journals Academic journals established in 1929 Polish mathematics Multilingual journals Polish Academy of Sciences academic journals
https://en.wikipedia.org/wiki/Topology%20dissemination%20based%20on%20reverse-path%20forwarding
Topology broadcast based on reverse-path forwarding (TBRPF) is a link-state routing protocol for wireless mesh networks. The obvious design for a wireless link-state protocol (such as the optimized link-state routing protocol) transmits large amounts of routing data, and this limits the utility of a link-state protocol when the network is made of moving nodes. The number and size of the routing transmissions make the network unusable for any but the smallest networks. The conventional solution is to use a distance-vector routing protocol such as AODV, which usually transmits no data about routing. However, distance-vector routing requires more time to establish a connection, and the routes are less optimized than a link-state router. TBRPF transmits only the differences between the previous network state and the current network state. Therefore, routing messages are smaller, and can therefore be sent more frequently. This means that nodes' routing tables are more up-to-date. TBRPF is controlled under a US patent filed in December 2000 and assigned to SRI International (Patent ID 6845091, issued January 18, 2005). Further reading B. Bellur, and R.G. Ogier. 1999. "A Reliable, Efficient Topology Broadcast Protocol for Dynamic Networks," Proc. IEEE INFOCOMM ’99, pp. 178–186. R.G. Ogier, M.G. Lewis, F.L. Templin, and B. Bellur. 2002. "Topology Broadcast based on Reverse Path Forwarding (TBRPF)," RFC 3684. External links : Topology Dissemination Based on Reverse-Path Forwarding (TBRPF) Packethop Inc. website Wireless networking Ad hoc routing protocols SRI International
https://en.wikipedia.org/wiki/Canadian%20Society%20for%20History%20and%20Philosophy%20of%20Mathematics
The Canadian Society for History and Philosophy of Mathematics (CSHPM) is dedicated to the study of the history and philosophy of mathematics in Canada. It was proposed by Kenneth O. May, in conjunction with the journal Historia Mathematica, and was founded in 1974. See also Canadian Mathematical Society List of Mathematical Societies References Mathematical societies History of mathematics History organizations based in Canada Philosophical societies in Canada
https://en.wikipedia.org/wiki/Membership%20function%20%28mathematics%29
In mathematics, the membership function of a fuzzy set is a generalization of the indicator function for classical sets. In fuzzy logic, it represents the degree of truth as an extension of valuation. Degrees of truth are often confused with probabilities, although they are conceptually distinct, because fuzzy truth represents membership in vaguely defined sets, not likelihood of some event or condition. Membership functions were introduced by Lotfi A. Zadeh in the first paper on fuzzy sets (1965). Zadeh, in his theory of fuzzy sets, proposed using a membership function (with a range covering the interval (0,1)) operating on the domain of all possible values. Definition For any set X, a membership function on X is any function from X to the real unit interval [0, 1]. Membership functions represent fuzzy subsets of X. The membership function which represents a fuzzy set A is usually denoted by μ_A. For an element x of X, the value μ_A(x) is called the membership degree of x in the fuzzy set A. The membership degree quantifies the grade of membership of the element x to the fuzzy set A. The value 0 means that x is not a member of the fuzzy set; the value 1 means that x is fully a member of the fuzzy set. The values between 0 and 1 characterize fuzzy members, which belong to the fuzzy set only partially. Sometimes, a more general definition is used, where membership functions take values in an arbitrary fixed algebra or structure L; usually it is required that L be at least a poset or lattice. The usual membership functions with values in [0, 1] are then called [0, 1]-valued membership functions. Capacity See the article on Capacity of a set for a closely related definition in mathematics. One application of membership functions is as capacities in decision theory. In decision theory, a capacity is defined as a function ν from S, the set of subsets of some set, into [0, 1], such that ν is set-wise monotone and is normalized (i.e. ν(∅) = 0 and ν(Ω) = 1).
This is a generalization of the notion of a probability measure, where the probability axiom of countable additivity is weakened. A capacity is used as a subjective measure of the likelihood of an event, and the "expected value" of an outcome given a certain capacity can be found by taking the Choquet integral over the capacity. See also Defuzzification Fuzzy measure theory Fuzzy set operations Rough set References Bibliography Zadeh L.A., 1965, "Fuzzy sets". Information and Control 8: 338–353. Goguen J.A, 1967, "L-fuzzy sets". Journal of Mathematical Analysis and Applications 18: 145–174 External links Fuzzy Image Processing Fuzzy logic
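A concrete example of a [0, 1]-valued membership function is the common triangular form, rising linearly to full membership at a peak and falling back to zero. The following Python sketch is illustrative and not tied to any cited source:

```python
def triangular(a, b, c):
    """Triangular membership function: 0 outside [a, c], rising linearly
    to 1 at the peak b, then falling linearly back to 0."""
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        if x <= b:
            return (x - a) / (b - a)
        return (c - x) / (c - b)
    return mu
```

For example, with mu = triangular(0, 5, 10), the element 5 is fully a member (degree 1), 2.5 is partially a member (degree 0.5), and anything outside (0, 10) is not a member at all (degree 0).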
https://en.wikipedia.org/wiki/Stata
Stata (, , alternatively , occasionally stylized as STATA) is a general-purpose statistical software package developed by StataCorp for data manipulation, visualization, statistics, and automated reporting. It is used by researchers in many fields, including biomedicine, economics, epidemiology, and sociology. Stata was initially developed by Computing Resource Center in California and the first version was released in 1985. In 1993, the company moved to College Station, TX and was renamed Stata Corporation, now known as StataCorp. A major release in 2003 included a new graphics system and dialog boxes for all commands. Since then, a new version has been released once every two years. The current version is Stata 18, released in April 2023. Technical overview and terminology User interface From its creation, Stata has always employed an integrated command-line interface. Starting with version 8.0, Stata has included a graphical user interface based on Qt framework which uses menus and dialog boxes to give access to many built-in commands. The dataset can be viewed or edited in spreadsheet format. From version 11 on, other commands can be executed while the data browser or editor is opened. Data structure and storage Until the release of version 16, Stata could only open a single dataset at any one time. Stata allows for flexibility with assigning data types to data. Its compress command automatically reassigns data to data types that take up less memory without loss of information. Stata utilizes integer storage types which occupy only one or two bytes rather than four, and single-precision (4 bytes) rather than double-precision (8 bytes) is the default for floating-point numbers. Stata's data format is always tabular in format. Stata refers to the columns of tabular data as variables. Data format compatibility Stata can import data in a variety of formats. 
This includes ASCII data formats (such as CSV or databank formats) and spreadsheet formats (including various Excel formats). Stata's proprietary file formats have changed over time, although not every Stata release includes a new dataset format. Every version of Stata can read all older dataset formats, and can write both the current and most recent previous dataset format, using the saveold command. Thus, the current Stata release can always open datasets that were created with older versions, but older versions cannot read newer format datasets. Stata can read and write SAS XPORT format datasets natively, using the fdause and fdasave commands. Some other econometric applications, including gretl, can directly import Stata file formats. History Origins The development of Stata began in 1984, initially by William (Bill) Gould and later by Sean Becketti. The software was originally intended to compete with statistical programs for personal computers such as SYSTAT and MicroTSP. Stata was written, then as now, in the C programming language, initially for PCs running the DOS oper
https://en.wikipedia.org/wiki/Mathematical%20structure
In mathematics, a structure is a set endowed with some additional features on the set (e.g. an operation, relation, metric, or topology). Often, the additional features are attached or related to the set, so as to provide it with some additional meaning or significance. A partial list of possible structures includes measures, algebraic structures (groups, fields, etc.), topologies, metric structures (geometries), orders, events, equivalence relations, differential structures, and categories. Sometimes, a set is endowed with more than one feature simultaneously, which allows mathematicians to study the interaction between the different structures more richly. For example, an ordering imposes a rigid form, shape, or topology on the set, and if a set has both a topology feature and a group feature, such that these two features are related in a certain way, then the structure becomes a topological group. Mappings between sets which preserve structures (i.e., structures in the domain are mapped to equivalent structures in the codomain) are of special interest in many fields of mathematics. Examples are homomorphisms, which preserve algebraic structures; homeomorphisms, which preserve topological structures; and diffeomorphisms, which preserve differential structures. History In 1939, the French group with the pseudonym Nicolas Bourbaki saw structures as the root of mathematics. They first mentioned them in their "Fascicule" of Theory of Sets and expanded it into Chapter IV of the 1957 edition. They identified three mother structures: algebraic, topological, and order. Example: the real numbers The set of real numbers has several standard structures: An order: each number is either less than or greater than any other number. Algebraic structure: there are operations of multiplication and addition that make it into a field. A measure: intervals of the real line have a specific length, which can be extended to the Lebesgue measure on many of its subsets.
A metric: there is a notion of distance between points. A geometry: it is equipped with a metric and is flat. A topology: there is a notion of open sets. There are interfaces among these: Its order and, independently, its metric structure induce its topology. Its order and algebraic structure make it into an ordered field. Its algebraic structure and topology make it into a Lie group, a type of topological group. See also Abstract structure Isomorphism Equivalent definitions of mathematical structures Intuitionistic type theory Space (mathematics) References Further reading External links (provides a model theoretic definition.) Mathematical structures in computer science (journal) Type theory Set theory
https://en.wikipedia.org/wiki/221%20%28number%29
221 (two hundred [and] twenty-one) is the natural number following 220 and preceding 222. In mathematics Its factorization as 13 × 17 makes 221 the product of two consecutive prime numbers, the sixth smallest such product. 221 is a centered square number. In other fields In Texas hold 'em, the probability of being dealt pocket aces (the strongest possible outcome in the initial deal of two cards per player) is 1/221. Sherlock Holmes's home address: 221B Baker Street. References Integers
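The arithmetic claims above are easy to verify mechanically; a quick Python check (using the centered square number formula C(n) = 2n(n+1) + 1):

```python
from math import comb

# 221 is the product of the consecutive primes 13 and 17.
assert 13 * 17 == 221

# Pocket aces: C(4,2) favorable hands out of C(52,2) = 6/1326 = 1/221.
assert comb(52, 2) // comb(4, 2) == 221

# 221 is a centered square number: C(n) = 2*n*(n+1) + 1 with n = 10.
centered_squares = [2 * n * (n + 1) + 1 for n in range(12)]
assert 221 in centered_squares
```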
https://en.wikipedia.org/wiki/Josephus%20problem
In computer science and mathematics, the Josephus problem (or Josephus permutation) is a theoretical problem related to a certain counting-out game. Such games are used to pick out a person from a group, e.g. eeny, meeny, miny, moe. In the particular counting-out game that gives rise to the Josephus problem, a number of people are standing in a circle waiting to be executed. Counting begins at a specified point in the circle and proceeds around the circle in a specified direction. After a specified number of people are skipped, the next person is executed. The procedure is repeated with the remaining people, starting with the next person, going in the same direction and skipping the same number of people, until only one person remains, and is freed. The problem—given the number of people, starting point, direction, and number to be skipped—is to choose the position in the initial circle to avoid execution. History The problem is named after Flavius Josephus, a Jewish historian living in the 1st century. According to Josephus' firsthand account of the siege of Yodfat, he and his 40 soldiers were trapped in a cave by Roman soldiers. They chose suicide over capture, and settled on a serial method of committing suicide by drawing lots. Josephus states that by luck or possibly by the hand of God, he and another man remained until the end and surrendered to the Romans rather than killing themselves. This is the story given in Book 3, Chapter 8, part 7 of Josephus' The Jewish War (writing of himself in the third person): The details of the mechanism used in this feat are rather vague. According to James Dowdy and Michael Mays, in 1612 Claude Gaspard Bachet de Méziriac suggested the specific mechanism of arranging the men in a circle and counting by threes to determine the order of elimination. This story has been often repeated and the specific details vary considerably from source to source. 
For instance, Israel Nathan Herstein and Irving Kaplansky (1974) have Josephus and 39 comrades stand in a circle with every seventh man eliminated. A history of the problem can be found in S. L. Zabell's Letter to the editor of the Fibonacci Quarterly. As to intentionality, Josephus asked: “shall we put it down to divine providence or just to luck?” But the surviving Slavonic manuscript of Josephus tells a different story: that he “counted the numbers cunningly and so managed to deceive all the others”. Josephus had an accomplice; the problem was then to find the places of the two last remaining survivors (whose conspiracy would ensure their survival). It is alleged that he placed himself and the other man in the 31st and 16th place respectively (for k = 3 below). Variants and generalizations A medieval version of the Josephus problem involves 15 Turks and 15 Christians aboard a ship in a storm which will sink unless half the passengers are thrown overboard. All 30 stand in a circle and every ninth person is to be tossed into the sea. The Christians need t
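The survivor's position in the general counting-out game can be computed with the standard O(n) recurrence J(1) = 0, J(m) = (J(m−1) + k) mod m. A Python sketch (the function name `josephus` is ours), reproducing the traditional 41-person, count-by-threes telling:

```python
def josephus(n: int, k: int) -> int:
    """1-based safe position when n people stand in a circle
    and every k-th person is eliminated."""
    pos = 0  # survivor's 0-based position in a circle of size 1
    for size in range(2, n + 1):
        pos = (pos + k) % size  # re-index after each elimination
    return pos + 1

print(josephus(41, 3))  # 31, the place Josephus is said to have taken
```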
https://en.wikipedia.org/wiki/Skeleton%20%28category%20theory%29
In mathematics, a skeleton of a category is a subcategory that, roughly speaking, does not contain any extraneous isomorphisms. In a certain sense, the skeleton of a category is the "smallest" equivalent category, which captures all "categorical properties" of the original. In fact, two categories are equivalent if and only if they have isomorphic skeletons. A category is called skeletal if isomorphic objects are necessarily identical. Definition A skeleton of a category C is an equivalent category D in which no two distinct objects are isomorphic. It is generally considered to be a subcategory. In detail, a skeleton of C is a category D such that: D is a subcategory of C: every object of D is an object of C; for every pair of objects d1 and d2 of D, the morphisms in D are morphisms in C, i.e. Hom_D(d1, d2) ⊆ Hom_C(d1, d2), and the identities and compositions in D are the restrictions of those in C. The inclusion of D in C is full, meaning that for every pair of objects d1 and d2 of D we strengthen the above subset relation to an equality: Hom_D(d1, d2) = Hom_C(d1, d2). The inclusion of D in C is essentially surjective: every C-object is isomorphic to some D-object. D is skeletal: no two distinct D-objects are isomorphic. Existence and uniqueness It is a basic fact that every small category has a skeleton; more generally, every accessible category has a skeleton. (This is equivalent to the axiom of choice.) Also, although a category may have many distinct skeletons, any two skeletons are isomorphic as categories, so up to isomorphism of categories, the skeleton of a category is unique. The importance of skeletons comes from the fact that they are (up to isomorphism of categories) canonical representatives of the equivalence classes of categories under the equivalence relation of equivalence of categories. This follows from the fact that any skeleton of a category C is equivalent to C, and that two categories are equivalent if and only if they have isomorphic skeletons.
Examples The category Set of all sets has the subcategory of all cardinal numbers as a skeleton. The category K-Vect of all vector spaces over a fixed field has the subcategory consisting of all powers , where α is any cardinal number, as a skeleton; for any finite m and n, the maps are exactly the n × m matrices with entries in K. FinSet, the category of all finite sets has FinOrd, the category of all finite ordinal numbers, as a skeleton. The category of all well-ordered sets has the subcategory of all ordinal numbers as a skeleton. A preorder, i.e. a small category such that for every pair of objects , the set either has one element or is empty, has a partially ordered set as a skeleton. See also Glossary of category theory Thin category References Adámek, Jiří, Herrlich, Horst, & Strecker, George E. (1990). Abstract and Concrete Categories. Originally published by John Wiley & Sons. . (now free on-line edition) Robert Goldblatt (1984). Topoi, the Categorial Analysis of Logic (Studies in logic and the foundati
https://en.wikipedia.org/wiki/Normally%20distributed%20and%20uncorrelated%20does%20not%20imply%20independent
In probability theory, although simple examples illustrate that linear uncorrelatedness of two random variables does not in general imply their independence, it is sometimes mistakenly thought that it does imply that when the two random variables are normally distributed. This article demonstrates that the assumption of normal distributions does not have that consequence, although the multivariate normal distribution, including the bivariate normal distribution, does. To say that the pair of random variables (X, Y) has a bivariate normal distribution means that every linear combination aX + bY for constant (i.e. not random) coefficients a and b (not both equal to zero) has a univariate normal distribution. In that case, if X and Y are uncorrelated then they are independent. However, it is possible for two random variables X and Y to be so distributed jointly that each one alone is marginally normally distributed, and they are uncorrelated, but they are not independent; examples are given below. Examples A symmetric example Suppose X has a normal distribution with expected value 0 and variance 1. Let W have the Rademacher distribution, so that W = 1 or W = −1, each with probability 1/2, and assume W is independent of X. Let Y = WX. Then X and Y are uncorrelated; both have the same normal distribution; and X and Y are not independent. To see that X and Y are uncorrelated, one may consider the covariance cov(X, Y): by definition, it is cov(X, Y) = E(XY) − E(X)E(Y). Then by definition of the random variables X, W, and Y, and the independence of W from X, one has cov(X, Y) = E(WX²) − E(WX)E(X) = E(W)E(X²) − 0 = 0. To see that Y has the same normal distribution as X, consider Pr(Y ≤ x) = (1/2)Pr(X ≤ x) + (1/2)Pr(−X ≤ x) = Φ(x) (since X and −X both have the same normal distribution), where Φ is the cumulative distribution function of the standard normal distribution. To see that X and Y are not independent, observe that |Y| = |X|, or that Pr(|Y| < 1 and |X| > 1) = 0 ≠ Pr(|Y| < 1)·Pr(|X| > 1). Finally, the distribution of the simple linear combination X + Y concentrates positive probability at 0: Pr(X + Y = 0) = 1/2. Therefore, the random variable X + Y is not normally distributed, and so also X and Y are not jointly normally distributed (by the definition above).
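The symmetric example is easy to check by simulation. In this hedged Python sketch (variable names are ours), the sample covariance of X and Y = WX is near zero, |Y| = |X| always (so the pair is dependent), and X + Y is exactly 0 about half the time:

```python
import random

random.seed(0)
n = 200_000
xs = [random.gauss(0.0, 1.0) for _ in range(n)]      # X ~ N(0, 1)
ws = [random.choice((-1.0, 1.0)) for _ in range(n)]  # W ~ Rademacher
ys = [w * x for w, x in zip(ws, xs)]                 # Y = W·X

# Sample covariance (E(X) = E(Y) = 0), and the fraction of exact zeros of X + Y.
cov = sum(x * y for x, y in zip(xs, ys)) / n
frac_zero = sum(1 for s in map(sum, zip(xs, ys)) if s == 0.0) / n

assert all(abs(y) == abs(x) for x, y in zip(xs, ys))  # dependence: |Y| = |X|
print(abs(cov) < 0.05, abs(frac_zero - 0.5) < 0.01)
```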
An asymmetric example Suppose X has a normal distribution with expected value 0 and variance 1. Let Y = X if |X| ≤ c and Y = −X if |X| > c, where c is a positive number to be specified below. If c is very small, then the correlation corr(X, Y) is near −1; if c is very large, then corr(X, Y) is near 1. Since the correlation is a continuous function of c, the intermediate value theorem implies there is some particular value of c that makes the correlation 0. That value is approximately c = 1.54. In that case, X and Y are uncorrelated, but they are clearly not independent, since X completely determines Y. To see that Y is normally distributed—indeed, that its distribution is the same as that of X—one may compute its cumulative distribution function: Pr(Y ≤ x) = Pr(|X| ≤ c and X ≤ x) + Pr(|X| > c and −X ≤ x) = Pr(|X| ≤ c and X ≤ x) + Pr(|X| > c and X ≤ x) = Pr(X ≤ x), where the next-to-last equality follows from the symmetry of the distribution of X and the symmetry of the condition that |X| > c. In this example, the difference X − Y is nowhere near being normally distributed, since it has a substantial probability (about 0.88) of being equal to 0.
https://en.wikipedia.org/wiki/Maximum%20entropy%20probability%20distribution
In statistics and information theory, a maximum entropy probability distribution has entropy that is at least as great as that of all other members of a specified class of probability distributions. According to the principle of maximum entropy, if nothing is known about a distribution except that it belongs to a certain class (usually defined in terms of specified properties or measures), then the distribution with the largest entropy should be chosen as the least-informative default. The motivation is twofold: first, maximizing entropy minimizes the amount of prior information built into the distribution; second, many physical systems tend to move towards maximal entropy configurations over time. Definition of entropy and differential entropy If X is a discrete random variable with distribution given by Pr(X = x_k) = p_k for k = 1, 2, ..., then the entropy of X is defined as H(X) = −Σ_k p_k log p_k. If X is a continuous random variable with probability density p(x), then the differential entropy of X is defined as H(X) = −∫ p(x) log p(x) dx. The quantity p(x) log p(x) is understood to be zero whenever p(x) = 0. This is a special case of more general forms described in the articles Entropy (information theory), Principle of maximum entropy, and differential entropy. In connection with maximum entropy distributions, this is the only one needed, because maximizing H(X) will also maximize the more general forms. The base of the logarithm is not important as long as the same one is used consistently: change of base merely results in a rescaling of the entropy. Information theorists may prefer to use base 2 in order to express the entropy in bits; mathematicians and physicists will often prefer the natural logarithm, resulting in a unit of nats for the entropy. The choice of the measure is however crucial in determining the entropy and the resulting maximum entropy distribution, even though the usual recourse to the Lebesgue measure is often defended as "natural".
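The discrete entropy formula is one line of Python; the sketch below (the helper name `entropy` is ours) also illustrates the headline fact that, among distributions on a fixed finite set, the uniform one has maximal entropy:

```python
from math import log

def entropy(p):
    """Shannon entropy -sum(p_k log p_k) in nats; 0·log 0 is taken as 0."""
    return -sum(q * log(q) for q in p if q > 0)

uniform = [0.25] * 4
skewed = [0.7, 0.1, 0.1, 0.1]
print(entropy(uniform))  # log 4 ≈ 1.3863, the maximum on 4 outcomes
print(entropy(skewed))   # strictly smaller, ≈ 0.9404
```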
Distributions with measured constants Many statistical distributions of applicable interest are those for which the moments or other measurable quantities are constrained to be constants. The following theorem by Ludwig Boltzmann gives the form of the probability density under these constraints. Continuous case Suppose S is a closed subset of the real numbers and we choose to specify n measurable functions f_1, ..., f_n and n numbers a_1, ..., a_n. We consider the class C of all real-valued random variables which are supported on S (i.e. whose density function is zero outside of S) and which satisfy the n moment conditions: E(f_j(X)) = a_j for j = 1, ..., n. If there is a member in C whose density function is positive everywhere in S, and if there exists a maximal entropy distribution for C, then its probability density p(x) has the following form: p(x) = exp(Σ_{j=0}^n λ_j f_j(x)) for all x in S, where we assume that f_0(x) = 1. The constant λ_0 and the n Lagrange multipliers λ_1, ..., λ_n solve the constrained optimization problem with a_0 = 1 (this condition ensures that p integrates to unity): Using the Karush–Kuhn–Tucker conditions, it can be shown that the optimization problem has a unique solution because the objective function in the op
https://en.wikipedia.org/wiki/ALGOL%2068C
ALGOL 68C is an imperative computer programming language, a dialect of ALGOL 68, that was developed by Stephen R. Bourne and Michael Guy to program the Cambridge Algebra System (CAMAL). The initial compiler was written in the Princeton Syntax Compiler (PSYCO, by Edgar T. Irons) that was implemented by J. H. Mathewman at Cambridge. ALGOL 68C was later used for the CHAOS OS for the capability-based security CAP computer at University of Cambridge in 1971. Other early contributors were Andrew D. Birrell and Ian Walker. Subsequent work was done on the compiler after Bourne left Cambridge University in 1975. Garbage collection was added, and the code base is still running on an emulated OS/MVT using Hercules. The ALGOL 68C compiler generated output in ZCODE, a register-based intermediate language, which could then be either interpreted or compiled to a native executable. This ability to interpret or compile ZCODE encouraged the porting of ALGOL 68C to many different computing platforms. Aside from the CAP computer, the compiler was ported to systems including Conversational Monitor System (CMS), TOPS-10, and Zilog Z80. Popular culture A very early predecessor of this compiler was used by Guy and Bourne to write the first Game of Life programs on the PDP-7 with a DEC 340 display. Various Liverpool Software Gazette issues detail the Z80 implementation. The compiler required about 120 KB of memory to run; hence the Z80's 64 KB memory is actually too small to run the compiler. So ALGOL 68C programs for the Z80 had to be cross-compiled from the larger CAP computer, or an IBM System/370 mainframe computer. Algol 68C and Unix Stephen Bourne subsequently reused ALGOL 68's if ~ then ~ else ~ fi, case ~ in ~ out ~ esac and for ~ while ~ do ~ od clauses in the common Unix Bourne shell, but with in's syntax changed, out removed, and od replaced with done (to avoid conflict with the od utility). 
After Cambridge, Bourne spent nine years at Bell Labs with the Version 7 Unix (Seventh Edition Unix) team. As well as developing the Bourne shell, he ported ALGOL 68C to Unix on the DEC PDP-11-45 and included a special option in his Unix debugger Advanced Debugger (adb) to obtain a stack backtrace for programs written in ALGOL 68C. Here is an extract from the Unix 7th edition manual pages: NAME adb - debugger SYNOPSIS adb [-w] [ objfil [ corfil ] ] [...] COMMANDS [...] $modifier Miscellaneous commands. The available modifiers are: [...] a ALGOL 68 stack backtrace. If address is given then it is taken to be the address of the current frame (instead of r4). If count is given then only the first count frames are printed. ALGOL 68C extensions to ALGOL 68 Below is a sampling of some notable extensions: Automatic op:= for any operator, e.g. *:= and +:= UPTO, DOWNTO and UNTIL in loop-cla
https://en.wikipedia.org/wiki/Arkansas%20School%20for%20Mathematics%2C%20Sciences%2C%20and%20the%20Arts
The Arkansas School for Mathematics, Sciences, and the Arts (ASMSA) is a public residential high school located in Hot Springs, Arkansas that serves sophomores, juniors, and seniors. It is a part of the University of Arkansas administrative system and a member of the NCSSSMST. The school was originally known as The Arkansas School for Mathematics and Sciences (abbreviated ASMS). The school is accredited by AdvancED. School description Academically, the school is modeled after the North Carolina School of Science and Mathematics. Studies focus on mathematics, computer science, science, and humanities. All courses are taught at the Honors level or above. ASMSA offers approximately 50 courses for university credit through a partnership with the University of Arkansas at Fort Smith and other advanced high school courses for elective credit. ASMSA graduates finish their experience having earned an average of 50 college credit hours. ASMSA has an arts program, which was added in 2004 by the state legislature. Though not yet at the depth of the school's STEM-based programs, investment has been made in recent years to enhance the studio and digital arts experiences. Since 2015, the school has added three full-time faculty members in studio art and music to achieve this goal. The school was created in 1991 with backing from then-Governor Bill Clinton. The charter class enrolled as juniors in 1993 and graduated in 1995. Prospective students apply during the spring of their sophomore or freshman year and submit application forms, grade transcripts, SAT or ACT results, and three letters of recommendation. Students can enter via normal admissions as a junior or enter through early admissions as a sophomore. Additionally, some students can repeat their junior year of high school at ASMSA if they choose to apply their current junior year, called Super Juniors. Most of the campus of the school itself is located in the former St. 
Joseph's Catholic Hospital in the William Jefferson Clinton Presidential Park in the historic district of Hot Springs, and it is surrounded on three sides by the Hot Springs National Park. All faculty have at least a master's degree in their field, and 48% have a Ph.D. or other terminal degree in their field. Notable professors at the school have included Don Baker, who was a Foreign Service Officer for the United States Department of State; Mrs. Melanie Nichols, who has served on several AP committees and has been active in the math education community, was a mathematics teacher at the school before becoming Dean of Academic Affairs in 2006; Brian Monson, who has previously taught at the University of Tulsa and the Oklahoma School of Science and Mathematics, is the Associate Dean for STEM and teaches AP Physics C, Optics, and Folk Music and Acoustics, and plays the harmonica and the mandolin; and Charlie Cole Chaffin, who was a chemistry teacher at the school, was a member of the Arkansas State Senate. Several former and current
https://en.wikipedia.org/wiki/Frobenius%20algebra
In mathematics, especially in the fields of representation theory and module theory, a Frobenius algebra is a finite-dimensional unital associative algebra with a special kind of bilinear form which gives the algebras particularly nice duality theories. Frobenius algebras began to be studied in the 1930s by Richard Brauer and Cecil Nesbitt and were named after Georg Frobenius. Tadashi Nakayama discovered the beginnings of a rich duality theory. Jean Dieudonné used this to characterize Frobenius algebras. Frobenius algebras were generalized to quasi-Frobenius rings, those Noetherian rings whose right regular representation is injective. In recent times, interest has been renewed in Frobenius algebras due to connections to topological quantum field theory. Definition A finite-dimensional, unital, associative algebra A defined over a field k is said to be a Frobenius algebra if A is equipped with a nondegenerate bilinear form σ : A × A → k that satisfies the following equation: σ(a·b, c) = σ(a, b·c). This bilinear form is called the Frobenius form of the algebra. Equivalently, one may equip A with a linear functional λ : A → k such that the kernel of λ contains no nonzero left ideal of A. A Frobenius algebra is called symmetric if σ is symmetric, or equivalently λ satisfies λ(a·b) = λ(b·a). There is also a different, mostly unrelated notion of the symmetric algebra of a vector space. Nakayama automorphism For a Frobenius algebra A with σ as above, the automorphism ν of A such that σ(a, b) = σ(ν(b), a) is the Nakayama automorphism associated to A and σ. Examples Any matrix algebra defined over a field k is a Frobenius algebra with Frobenius form σ(a,b) = tr(a·b) where tr denotes the trace. Any finite-dimensional unital associative algebra A has a natural homomorphism to its own endomorphism ring End(A). A bilinear form can be defined on A in the sense of the previous example. If this bilinear form is nondegenerate, then it equips A with the structure of a Frobenius algebra.
Every group ring k[G] of a finite group G over a field k is a symmetric Frobenius algebra, with Frobenius form σ(a,b) given by the coefficient of the identity element in a·b. For a field k, the four-dimensional k-algebra k[x,y]/ (x2, y2) is a Frobenius algebra. This follows from the characterization of commutative local Frobenius rings below, since this ring is a local ring with its maximal ideal generated by x and y, and unique minimal ideal generated by xy. For a field k, the three-dimensional k-algebra A=k[x,y]/ (x, y)2 is not a Frobenius algebra. The A homomorphism from xA into A induced by x ↦ y cannot be extended to an A homomorphism from A into A, showing that the ring is not self-injective, thus not Frobenius. Any finite-dimensional Hopf algebra, by a 1969 theorem of Larson-Sweedler on Hopf modules and integrals. Properties The direct product and tensor product of Frobenius algebras are Frobenius algebras. A finite-dimensional commutative local algebra over a field is Frobenius if and only if the right regular module is injective, i
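The matrix-algebra example above can be verified concretely for 2 × 2 matrices: the trace form σ(a, b) = tr(a·b) satisfies σ(ab, c) = σ(a, bc), and its Gram matrix on the basis of elementary matrices E_ij is a permutation matrix, so the form is nondegenerate. A Python sketch (all helper names are ours):

```python
def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def trace(a):
    return a[0][0] + a[1][1]

def sigma(a, b):  # Frobenius form on 2x2 matrices
    return trace(matmul(a, b))

# The four elementary matrices E_00, E_01, E_10, E_11.
basis = [[[1 if (r, c) == (i, j) else 0 for c in range(2)] for r in range(2)]
         for i in range(2) for j in range(2)]

# Associativity of the form on all basis triples: sigma(ab, c) == sigma(a, bc).
assert all(sigma(matmul(a, b), c) == sigma(a, matmul(b, c))
           for a in basis for b in basis for c in basis)

# sigma(E_ij, E_kl) = 1 iff (k, l) == (j, i): the Gram matrix is a
# permutation matrix, hence invertible, so the form is nondegenerate.
gram = [[sigma(a, b) for b in basis] for a in basis]
assert all(sorted(row) == [0, 0, 0, 1] for row in gram)
```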
https://en.wikipedia.org/wiki/Albanese%20variety
In mathematics, the Albanese variety Alb(V), named for Giacomo Albanese, is a generalization of the Jacobian variety of a curve. Precise statement The Albanese variety is the abelian variety Alb(V) generated by a variety V taking a given point of V to the identity of Alb(V). In other words, there is a morphism from the variety V to its Albanese variety Alb(V), such that any morphism from V to an abelian variety (taking the given point to the identity) factors uniquely through Alb(V). For complex manifolds, the Albanese variety is defined in a similar way, as a morphism from V to a torus such that any morphism to a torus factors uniquely through this map. (It is an analytic variety in this case; it need not be algebraic.) Properties For compact Kähler manifolds the dimension of the Albanese variety is the Hodge number h^{1,0}, the dimension of the space of differentials of the first kind on V, which for surfaces is called the irregularity of a surface. In terms of differential forms, any holomorphic 1-form on V is a pullback of a translation-invariant 1-form on the Albanese variety, coming from the holomorphic cotangent space of Alb(V) at its identity element. Just as for the curve case, by choice of a base point on V (from which to 'integrate'), an Albanese morphism V → Alb(V) is defined, along which the 1-forms pull back. This morphism is unique up to a translation on the Albanese variety. For varieties over fields of positive characteristic, the dimension of the Albanese variety may be less than the Hodge numbers h^{1,0} and h^{0,1} (which need not be equal). To see the former note that the Albanese variety is dual to the Picard variety, whose tangent space at the identity is given by H^1(V, O_V). That is a result of Jun-ichi Igusa in the bibliography. Roitman's theorem If the ground field k is algebraically closed, the Albanese map can be shown to factor over a group homomorphism (also called the Albanese map) from the Chow group of 0-dimensional cycles on V to the group of rational points of Alb(V), which is an abelian group since Alb(V) is an abelian variety.
Roitman's theorem asserts that, for l prime to char(k), the Albanese map induces an isomorphism on the l-torsion subgroups. The constraint that the order of the torsion be prime to the characteristic of the base field was removed by Milne shortly thereafter: the torsion subgroup of the Chow group of 0-cycles and the torsion subgroup of k-valued points of the Albanese variety of X coincide. After the introduction of motivic cohomology, Roitman's theorem has been obtained and reformulated in the motivic framework, with the Chow group replaced by Suslin–Voevodsky algebraic singular homology. For example, a similar result holds for non-singular quasi-projective varieties. Further versions of Roitman's theorem are available for normal schemes. Actually, the most general formulations of Roitman's theorem (i.e. homological, cohomological, and Borel–Moore) involve the motivic Albanese complex and have been proven by Luca Barbieri-Viale and Bruno Kahn (see the references III.13). Connection
https://en.wikipedia.org/wiki/La%20Pocati%C3%A8re
La Pocatière () is a town in the Kamouraska Regional County Municipality in the Bas-Saint-Laurent region of Quebec, Canada. Demographics In the 2021 Census of Population conducted by Statistics Canada, La Pocatière had a population of living in of its total private dwellings, a change of from its 2016 population of . With a land area of , it had a population density of in 2021. Economy Alstom has a plant which manufactures subway and railway cars: R62A (New York City Subway car) Montreal Metro MR-73 and MPM-10 Boston Red Line (MBTA) 1800-85 series cars MultiLevel Coach cars VIA Rail LRC (train) set shell for Toronto Transit Commission subway cars (T1 and TR) and streetcars (Flexity Outlook) The plant was built in 1961 to build Moto-Ski snowmobiles and the plant was converted to railcars in 1971 (Bombardier continued to market Moto-Ski until 1985). Culture and attractions La Pocatière is home to the Musée François-Pilote, a museum of Quebec ethnology. The museum features exhibits on the history of agricultural education, a number of historical period rooms, stuffed bird and animal displays, and presentations on other aspects of local history. Near the city are small isolated hills known as monadnocks. The Montagne du College-de-Sainte-Anne-de-la-Pocatière is 119 metres high. City council City council consists of a mayor and six councillors: As of 2017 the council consisted of: Mayor: Sylvain Hudon Councillors: 1: Lise Bellefeuille 2: Claude Brochu 3: Steve Leclerc 4: Pierre Darveau 5: Lise Garneau, 6: Louise Lacoursière Education There are three post-secondary institutions in La Pocatière: Collège de Sainte-Anne-de-la-Pocatière - c. 1827 Cégep de La Pocatière c. 
1967 Institut de technologie agroalimentaire - agro-food technology institute with focus on equines (horses) at the La Pocatière campus The town has three public schools under the Commission scolaire de Kamouraska—Rivière-du-Loup: École Sacré-Coeur - elementary school École Polyvalente La Pocatière - secondary school Éducation des adultes - adult learning Twin towns La Pocatière is twinned with: Coutances in France See also List of municipalities in Quebec References External links City web site La Pocatière en images... Cities and towns in Quebec Canada geography articles needing translation from French Wikipedia
https://en.wikipedia.org/wiki/Coherent%20duality
In mathematics, coherent duality is any of a number of generalisations of Serre duality, applying to coherent sheaves, in algebraic geometry and complex manifold theory, as well as some aspects of commutative algebra that are part of the 'local' theory. The historical roots of the theory lie in the idea of the adjoint linear system of a linear system of divisors in classical algebraic geometry. This was re-expressed, with the advent of sheaf theory, in a way that made an analogy with Poincaré duality more apparent. Then according to a general principle, Grothendieck's relative point of view, the theory of Jean-Pierre Serre was extended to a proper morphism; Serre duality was recovered as the case of the morphism of a non-singular projective variety (or complete variety) to a point. The resulting theory is now sometimes called Serre–Grothendieck–Verdier duality, and is a basic tool in algebraic geometry. A treatment of this theory, Residues and Duality (1966) by Robin Hartshorne, became a reference. One concrete spin-off was the Grothendieck residue. To go beyond proper morphisms, as for the versions of Poincaré duality that are not for closed manifolds, requires some version of the compact support concept. This was addressed in SGA2 in terms of local cohomology, and Grothendieck local duality; and subsequently. The Greenlees–May duality, first formulated in 1976 by Ralf Strebel and in 1978 by Eben Matlis, is part of the continuing consideration of this area. Adjoint functor point of view While Serre duality uses a line bundle or invertible sheaf as a dualizing sheaf, the general theory (it turns out) cannot be quite so simple. (More precisely, it can, but at the cost of imposing the Gorenstein ring condition.) In a characteristic turn, Grothendieck reformulated general coherent duality as the existence of a right adjoint functor f^!, called the twisted or exceptional inverse image functor, to a higher direct image with compact support functor Rf_!.
Higher direct images are a sheafified form of sheaf cohomology, in this case with proper (compact) support; they are bundled up into a single functor by means of the derived category formulation of homological algebra (introduced with this case in mind). If f is proper, then Rf_! = Rf_∗ is a right adjoint to the inverse image functor f^∗. The existence theorem for the twisted inverse image is the name given to the proof of the existence for what would be the counit for the comonad of the sought-for adjunction, namely a natural transformation Rf_! f^! → id, which is denoted by Tr_f (Hartshorne) or ∫_f (Verdier). It is the aspect of the theory closest to the classical meaning, as the notation suggests, that duality is defined by integration. To be more precise, f^! exists as an exact functor from a derived category of quasi-coherent sheaves on Y to the analogous category on X, whenever f : X → Y is a proper or quasi-projective morphism of noetherian schemes, of finite Krull dimension. From this the rest of the theory can be derived: dualizing complexes p
https://en.wikipedia.org/wiki/Symplectic
The term "symplectic" is a calque of "complex" introduced by Hermann Weyl in 1939. In mathematics it may refer to: Symplectic Clifford algebra, see Weyl algebra Symplectic geometry Symplectic group Symplectic integrator Symplectic manifold Symplectic matrix Symplectic representation Symplectic vector space It can also refer to: Symplectic bone, a bone found in fish skulls Symplectite, in reference to a mineral intergrowth texture See also Metaplectic group Symplectomorphism
https://en.wikipedia.org/wiki/Differential%20algebra
In mathematics, differential algebra is, broadly speaking, the area of mathematics consisting in the study of differential equations and differential operators as algebraic objects, with a view to deriving properties of differential equations and operators without computing the solutions, similarly to the way polynomial algebras are used for the study of algebraic varieties, which are solution sets of systems of polynomial equations. Weyl algebras and Lie algebras may be considered as belonging to differential algebra. More specifically, differential algebra refers to the theory introduced by Joseph Ritt in 1950, in which differential rings, differential fields, and differential algebras are rings, fields, and algebras equipped with finitely many derivations. A natural example of a differential field is the field of rational functions in one variable over the complex numbers, where the derivation is differentiation with respect to that variable. More generally, every differential equation may be viewed as an element of a differential algebra over the differential field generated by the (known) functions appearing in the equation. History Joseph Ritt developed differential algebra because he viewed attempts to reduce systems of differential equations to various canonical forms as an unsatisfactory approach. However, the success of algebraic elimination methods and algebraic manifold theory motivated Ritt to consider a similar approach for differential equations. His efforts led to an initial paper, Manifolds Of Functions Defined By Systems Of Algebraic Differential Equations, and two books, Differential Equations From The Algebraic Standpoint and Differential Algebra. Ellis Kolchin, Ritt's student, advanced this field and published Differential Algebra And Algebraic Groups.
Differential rings Definition A derivation on a ring R is a function ∂ : R → R such that ∂(a + b) = ∂(a) + ∂(b) and ∂(ab) = ∂(a)·b + a·∂(b) (Leibniz product rule), for every a and b in R. A derivation is linear over the integers since these identities imply ∂(0) = 0 and ∂(−a) = −∂(a). A differential ring is a commutative ring R equipped with one or more derivations that commute pairwise; that is, ∂1(∂2(a)) = ∂2(∂1(a)) for every pair of derivations and every a in R. When there is only one derivation one talks often of an ordinary differential ring; otherwise, one talks of a partial differential ring. A differential field is a differential ring that is also a field. A differential algebra over a differential field K is a differential ring that contains K as a subring such that the restriction to K of the derivations of the algebra equal the derivations of K. (A more general definition is given below, which covers the case where K is not a field, and is essentially equivalent when K is a field.) A Witt algebra is a differential ring that contains the field of the rational numbers. Equivalently, this is a differential algebra over Q, since Q can be considered as a differential field on which every derivation is the zero function. The constants of a differential ring are the elements a such that ∂(a) = 0 for every derivation
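The two derivation axioms can be checked concretely for the formal derivative d/dx on the polynomial ring, sketched here in Python with polynomials as {exponent: coefficient} dicts (the representation and helper names are ours):

```python
from collections import defaultdict

def add(p, q):  # polynomial addition
    r = defaultdict(int, p)
    for k, v in q.items():
        r[k] += v
    return {k: v for k, v in r.items() if v}

def mul(p, q):  # polynomial multiplication
    r = defaultdict(int)
    for i, a in p.items():
        for j, b in q.items():
            r[i + j] += a * b
    return {k: v for k, v in r.items() if v}

def d(p):
    """The standard derivation d/dx on the polynomial ring."""
    return {k - 1: k * v for k, v in p.items() if k > 0}

p = {2: 1, 0: 3}  # x^2 + 3
q = {1: 2, 0: 1}  # 2x + 1

assert d(add(p, q)) == add(d(p), d(q))                  # additivity
assert d(mul(p, q)) == add(mul(d(p), q), mul(p, d(q)))  # Leibniz rule
```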
https://en.wikipedia.org/wiki/Base%20change
In mathematics, base change may mean: Base change map in algebraic geometry Fiber product of schemes in algebraic geometry Change of base (disambiguation) in linear algebra or numeral systems Base change lifting of automorphic forms
https://en.wikipedia.org/wiki/Thabit%20number
In number theory, a Thabit number, Thâbit ibn Qurra number, or 321 number is an integer of the form 3·2^n − 1 for a non-negative integer n. The first few Thabit numbers are: 2, 5, 11, 23, 47, 95, 191, 383, 767, 1535, 3071, 6143, 12287, 24575, 49151, 98303, 196607, 393215, 786431, 1572863, ... The 9th century mathematician, physician, astronomer and translator Thābit ibn Qurra is credited as the first to study these numbers and their relation to amicable numbers. Properties The binary representation of the Thabit number 3·2^n − 1 is n + 2 digits long, consisting of "10" followed by n 1s. The first few Thabit numbers that are prime (Thabit primes or 321 primes) are: 2, 5, 11, 23, 47, 191, 383, 6143, 786431, 51539607551, 824633720831, ... There are 67 known prime Thabit numbers. Their n values are: 0, 1, 2, 3, 4, 6, 7, 11, 18, 34, 38, 43, 55, 64, 76, 94, 103, 143, 206, 216, 306, 324, 391, 458, 470, 827, 1274, 3276, 4204, 5134, 7559, 12676, 14898, 18123, 18819, 25690, 26459, 41628, 51387, 71783, 80330, 85687, 88171, 97063, 123630, 155930, 164987, 234760, 414840, 584995, 702038, 727699, 992700, 1201046, 1232255, 2312734, 3136255, 4235414, 6090515, 11484018, 11731850, 11895718, 16819291, 17748034, 18196595, 18924988, 20928756, ... The primes for 234760 ≤ n ≤ 3136255 were found by the distributed computing project 321 search. In 2008, PrimeGrid took over the search for Thabit primes. It is still searching and has already found all currently known Thabit primes with n ≥ 4235414. It is also searching for primes of the form 3·2^n + 1; such primes are called Thabit primes of the second kind or 321 primes of the second kind. The first few Thabit numbers of the second kind are: 4, 7, 13, 25, 49, 97, 193, 385, 769, 1537, 3073, 6145, 12289, 24577, 49153, 98305, 196609, 393217, 786433, 1572865, ... The first few Thabit primes of the second kind are: 7, 13, 97, 193, 769, 12289, 786433, 3221225473, 206158430209, 6597069766657, 221360928884514619393, ...
Their n values are: 1, 2, 5, 6, 8, 12, 18, 30, 36, 41, 66, 189, 201, 209, 276, 353, 408, 438, 534, 2208, 2816, 3168, 3189, 3912, 20909, 34350, 42294, 42665, 44685, 48150, 54792, 55182, 59973, 80190, 157169, 213321, 303093, 362765, 382449, 709968, 801978, 916773, 1832496, 2145353, 2291610, 2478785, 5082306, 7033641, 10829346, 16408818, ... Connection with amicable numbers When both n and n−1 yield Thabit primes (of the first kind), and 9·2^(2n−1) − 1 is also prime, a pair of amicable numbers can be calculated as follows: 2^n · (3·2^(n−1) − 1) · (3·2^n − 1) and 2^n · (9·2^(2n−1) − 1). For example, n = 2 gives the Thabit prime 11, and n−1 = 1 gives the Thabit prime 5, and the third term is 71. Then, 2^2 = 4, multiplied by 5 and 11, results in 220, whose proper divisors add up to 284, and 4 times 71 is 284, whose proper divisors add up to 220. The only known n satisfying these conditions are 2, 4 and 7, corresponding to the Thabit primes 11, 47 and 383 given by n, the Thabit primes 5, 23 and 191 given by n−1, and the third terms 71, 1151 and 73727. (The corresponding amicable pairs are (220, 284), (17296, 18416) and (9363584, 9437056).)
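Thabit ibn Qurra's rule connecting these primes to amicable numbers is easy to verify numerically. The sketch below (function names are my own) uses a naive proper-divisor sum, which is adequate for the three known cases:

```python
def is_prime(n):
    """Trial-division primality test (fine for these small inputs)."""
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

def sum_proper_divisors(n):
    """Sum of the divisors of n that are strictly less than n."""
    total, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            total += d + (n // d if d * d != n else 0)
        d += 1
    return total

def thabit_amicable_pair(n):
    """Thabit's rule: if p = 3*2^(n-1)-1, q = 3*2^n-1 and r = 9*2^(2n-1)-1
    are all prime, then (2^n * p * q, 2^n * r) is an amicable pair."""
    p, q, r = 3 * 2**(n - 1) - 1, 3 * 2**n - 1, 9 * 2**(2 * n - 1) - 1
    if all(is_prime(k) for k in (p, q, r)):
        return 2**n * p * q, 2**n * r
    return None

for n in (2, 4, 7):
    a, b = thabit_amicable_pair(n)
    # Amicable: each number equals the sum of the other's proper divisors.
    assert sum_proper_divisors(a) == b and sum_proper_divisors(b) == a
    print(n, (a, b))
```

For n = 2 this prints the classical pair (220, 284).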
https://en.wikipedia.org/wiki/Transition%20function
In mathematics, a transition function may refer to: a transition map between two charts of an atlas of a manifold or other topological space; the function that defines the transitions of a state transition system in computing, which may refer more specifically to a Turing machine, finite-state machine, or cellular automaton; or a stochastic kernel, which in statistics and probability theory is the conditional probability distribution function controlling the transitions of a stochastic process.
https://en.wikipedia.org/wiki/Blowing%20up
In mathematics, blowing up or blowup is a type of geometric transformation which replaces a subspace of a given space with all the directions pointing out of that subspace. For example, the blowup of a point in a plane replaces the point with the projectivized tangent space at that point. The metaphor is that of zooming in on a photograph to enlarge part of the picture, rather than referring to an explosion. Blowups are the most fundamental transformation in birational geometry, because every birational morphism between projective varieties is a blowup. The weak factorization theorem says that every birational map can be factored as a composition of particularly simple blowups. The Cremona group, the group of birational automorphisms of the plane, is generated by blowups. Besides their importance in describing birational transformations, blowups are also an important way of constructing new spaces. For instance, most procedures for resolution of singularities proceed by blowing up singularities until they become smooth. A consequence of this is that blowups can be used to resolve the singularities of birational maps. Classically, blowups were defined extrinsically, by first defining the blowup on spaces such as projective space using an explicit construction in coordinates and then defining blowups on other spaces in terms of an embedding. This is reflected in some of the terminology, such as the classical term monoidal transformation. Contemporary algebraic geometry treats blowing up as an intrinsic operation on an algebraic variety. From this perspective, a blowup is the universal (in the sense of category theory) way to turn a subvariety into a Cartier divisor. A blowup can also be called monoidal transformation, locally quadratic transformation, dilatation, σ-process, or Hopf map. The blowup of a point in a plane The simplest case of a blowup is the blowup of a point in a plane. Most of the general features of blowing up can be seen in this example. 
The blowup has a synthetic description as an incidence correspondence. Recall that the Grassmannian G(1,2) parametrizes the set of all lines in the projective plane. The blowup of the projective plane P2 at the point P, which we will denote X, is X = {(Q, ℓ) : P, Q ∈ ℓ} ⊆ P2 × G(1,2). Here Q denotes a point of P2 and ℓ is an element of the Grassmannian, i.e. a line in P2. X is a projective variety because it is a closed subvariety of a product of projective varieties. It comes with a natural morphism π to P2 that takes the pair (Q, ℓ) to Q. This morphism is an isomorphism on the open subset of all points (Q, ℓ) with Q ≠ P because the line ℓ is determined by those two points. When Q = P, however, the line ℓ can be any line through P. These lines correspond to the space of directions through P, which is isomorphic to P1. This P1 is called the exceptional divisor, and by definition it is the projectivized normal space at P. Because P is a point, the normal space is the same as the tangent space, so the exceptional divisor is isomorphic to the projectivized tangent space at P.
https://en.wikipedia.org/wiki/Hamiltonian%20vector%20field
In mathematics and physics, a Hamiltonian vector field on a symplectic manifold is a vector field defined for any energy function or Hamiltonian. Named after the physicist and mathematician Sir William Rowan Hamilton, a Hamiltonian vector field is a geometric manifestation of Hamilton's equations in classical mechanics. The integral curves of a Hamiltonian vector field represent solutions to the equations of motion in the Hamiltonian form. The diffeomorphisms of a symplectic manifold arising from the flow of a Hamiltonian vector field are known as canonical transformations in physics and (Hamiltonian) symplectomorphisms in mathematics. Hamiltonian vector fields can be defined more generally on an arbitrary Poisson manifold. The Lie bracket of two Hamiltonian vector fields corresponding to functions f and g on the manifold is itself a Hamiltonian vector field, with the Hamiltonian given by the Poisson bracket of f and g. Definition Suppose that (M, ω) is a symplectic manifold. Since the symplectic form ω is nondegenerate, it sets up a fiberwise-linear isomorphism ω : TM → T*M between the tangent bundle TM and the cotangent bundle T*M, with the inverse Ω = ω⁻¹ : T*M → TM. Therefore, one-forms on a symplectic manifold may be identified with vector fields, and every differentiable function H : M → R determines a unique vector field X_H, called the Hamiltonian vector field with the Hamiltonian H, by defining, for every vector field Y on M, ω(X_H, Y) = dH(Y). Note: Some authors define the Hamiltonian vector field with the opposite sign. One has to be mindful of varying conventions in physical and mathematical literature. Examples Suppose that M is a 2n-dimensional symplectic manifold. Then locally, one may choose canonical coordinates (q^1, ..., q^n, p_1, ..., p_n) on M, in which the symplectic form is expressed as ω = Σ_i dq^i ∧ dp_i, where d denotes the exterior derivative and ∧ denotes the exterior product. Then the Hamiltonian vector field with Hamiltonian H takes the form X_H = (∂H/∂p_i, −∂H/∂q^i) = Ω dH, where Ω is the 2n × 2n square matrix Ω = [[0, I_n], [−I_n, 0]] and I_n is the n × n identity matrix. The matrix Ω is frequently denoted with J. 
Suppose that M = R^2n is the 2n-dimensional symplectic vector space with (global) canonical coordinates (q^1, ..., q^n, p_1, ..., p_n). With the conventions above: if f = q^i then X_f = −∂/∂p_i; if f = p_i then X_f = ∂/∂q^i; if f = (1/2) Σ_i p_i^2 then X_f = Σ_i p_i ∂/∂q^i; and if f = (1/2) Σ_{i,j} a_{ij} q^i q^j with a_{ij} symmetric, then X_f = −Σ_{i,j} a_{ij} q^j ∂/∂p_i. Properties The assignment f ↦ X_f is linear, so that the sum of two Hamiltonian functions transforms into the sum of the corresponding Hamiltonian vector fields. Suppose that (q^1, ..., q^n, p_1, ..., p_n) are canonical coordinates on M (see above). Then a curve γ(t) = (q(t), p(t)) is an integral curve of the Hamiltonian vector field X_H if and only if it is a solution of Hamilton's equations: dq^i/dt = ∂H/∂p_i, dp_i/dt = −∂H/∂q^i. The Hamiltonian H is constant along the integral curves, because dH(X_H) = ω(X_H, X_H) = 0. That is, H(γ(t)) is actually independent of t. This property corresponds to the conservation of energy in Hamiltonian mechanics. More generally, if two functions f and g have a zero Poisson bracket (cf. below), then f is constant along the integral curves of X_g, and similarly, g is constant along the integral curves of X_f. This fact is the abstract mathematical principle behind Noether's theorem. The symplectic form ω is preserved by the Hamiltonian flow. Equivalently, the Lie derivative
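As a concrete numerical illustration of these properties (my own example, not from the article): for the harmonic-oscillator Hamiltonian H(q, p) = (q² + p²)/2 on R², the integral curves of the Hamiltonian vector field solve dq/dt = ∂H/∂p = p and dp/dt = −∂H/∂q = −q, and H is conserved along them. A symplectic-Euler integrator makes the conservation visible:

```python
def hamiltonian(q, p):
    """Harmonic oscillator H(q, p) = (q^2 + p^2) / 2."""
    return 0.5 * (q * q + p * p)

def flow(q, p, dt, steps):
    """Integrate Hamilton's equations dq/dt = dH/dp, dp/dt = -dH/dq
    with the symplectic Euler method (update p first, then q)."""
    traj = [(q, p)]
    for _ in range(steps):
        p = p - dt * q          # dH/dq = q, evaluated at the old q
        q = q + dt * p          # dH/dp = p, evaluated at the new p
        traj.append((q, p))
    return traj

traj = flow(1.0, 0.0, 1e-3, 10_000)
h0 = hamiltonian(*traj[0])
drift = max(abs(hamiltonian(q, p) - h0) for q, p in traj)
print(drift)   # tiny: H is (nearly) constant along the integral curves
assert drift < 1e-3
```

The residual drift is an artifact of discretization; for the exact flow H is exactly conserved, as the identity dH(X_H) = ω(X_H, X_H) = 0 shows.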
https://en.wikipedia.org/wiki/Symplectic%20vector%20field
In physics and mathematics, a symplectic vector field is one whose flow preserves a symplectic form. That is, if (M, ω) is a symplectic manifold with smooth manifold M and symplectic form ω, then a vector field X in the Lie algebra of vector fields on M is symplectic if its flow preserves the symplectic structure. In other words, the Lie derivative of ω along the vector field must vanish: L_X ω = 0. An alternative definition is that a vector field X is symplectic if its interior product with the symplectic form, ι_X ω, is closed. (The interior product gives a map from vector fields to 1-forms, which is an isomorphism due to the nondegeneracy of a symplectic 2-form.) The equivalence of the definitions follows from the closedness of the symplectic form and Cartan's magic formula for the Lie derivative in terms of the exterior derivative. If the interior product of a vector field with the symplectic form is an exact form (and in particular, a closed form), then it is called a Hamiltonian vector field. If the first de Rham cohomology group H^1(M) of the manifold is trivial, all closed forms are exact, so all symplectic vector fields are Hamiltonian. That is, the obstruction to a symplectic vector field being Hamiltonian lives in H^1(M). In particular, symplectic vector fields on simply connected manifolds are Hamiltonian. The Lie bracket of two symplectic vector fields is Hamiltonian, and thus the collection of symplectic vector fields and the collection of Hamiltonian vector fields both form Lie algebras. References Symplectic geometry
https://en.wikipedia.org/wiki/Three-dimensional%20face%20recognition
Three-dimensional face recognition (3D face recognition) is a modality of facial recognition methods in which the three-dimensional geometry of the human face is used. It has been shown that 3D face recognition methods can achieve significantly higher accuracy than their 2D counterparts, rivaling fingerprint recognition. 3D face recognition has the potential to achieve better accuracy than its 2D counterpart by measuring geometry of rigid features on the face. This avoids such pitfalls of 2D face recognition algorithms as change in lighting, different facial expressions, make-up and head orientation. Another approach is to use the 3D model to improve accuracy of traditional image based recognition by transforming the head into a known view. Additionally, most 3D scanners acquire both a 3D mesh and the corresponding texture. This allows combining the output of pure 3D matchers with the more traditional 2D face recognition algorithms, thus yielding better performance (as shown in FRVT 2006). The main technological limitation of 3D face recognition methods is the acquisition of 3D image, which usually requires a range camera. Alternatively, multiple images from different angles from a common camera (e.g. webcam) may be used to create the 3D model with significant post-processing. (See 3D data acquisition and object reconstruction.) This is also a reason why 3D face recognition methods have emerged significantly later (in the late 1980s) than 2D methods. Recently commercial solutions have implemented depth perception by projecting a grid onto the face and integrating video capture of it into a high resolution 3D model. This allows for good recognition accuracy with low cost off-the-shelf components. 3D face recognition is still an active research field, though several vendors offer commercial solutions. See also 3D object recognition Facial recognition system References A. 
Rashad, A Hamdy, M A Saleh and M Eladawy, "3D face recognition using 2DPCA", (IJCSNS) International Journal of Computer Science and Network Security,Vol.(12),2009. http://paper.ijcsns.org/07_book/200912/20091222.pdf External links CVPR 2008 Workshop on 3D Face Processing Face Recognition Grand Challenge Face Recognition Homepage 3D Face Recognition Project and Research Papers Technion 3D face recognition project Mitsubishi Electric Research Laboratories 3D face recognition project L-1 Identity commercial 3D face recognition system Fast 3D scan technology for 3D face recognition at the Geometric Modelling and Pattern Recognition Group, UK 3D Face Recognition Using a Deformable Model at the Computational Biomedicine Lab, Houston, TX 3D Face Recognition Using Photometric Stereo, UK Facial recognition 3D imaging
https://en.wikipedia.org/wiki/Final%20topology
In general topology and related areas of mathematics, the final topology (or coinduced, strong, colimit, or inductive topology) on a set X, with respect to a family of functions from topological spaces into X, is the finest topology on X that makes all those functions continuous. The quotient topology on a quotient space is a final topology, with respect to a single surjective function, namely the quotient map. The disjoint union topology is the final topology with respect to the inclusion maps. The final topology is also the topology that every direct limit in the category of topological spaces is endowed with, and it is in the context of direct limits that the final topology often appears. A topology is coherent with some collection of subspaces if and only if it is the final topology induced by the natural inclusions. The dual notion is the initial topology, which for a given family of functions from a set into topological spaces is the coarsest topology on that set that makes those functions continuous. Definition Given a set X and an I-indexed family of topological spaces X_i with associated functions f_i : X_i → X, the final topology on X induced by the family (f_i) is the finest topology on X such that f_i is continuous for each i ∈ I. Explicitly, the final topology may be described as follows: a subset U of X is open in the final topology if and only if f_i^{-1}(U) is open in X_i for each i ∈ I. The closed subsets have an analogous characterization: a subset C of X is closed in the final topology if and only if f_i^{-1}(C) is closed in X_i for each i ∈ I. The family of functions that induces the final topology on X is usually a set of functions. But the same construction can be performed if the family is a proper class of functions, and the result is still well-defined in Zermelo–Fraenkel set theory. In that case there is always a subfamily that is a set, such that the final topologies on X induced by the subfamily and by the full family coincide. 
As an example, a commonly used variant of the notion of compactly generated space is defined as the final topology with respect to a proper class of functions. Examples The important special case where the family of maps consists of a single surjective map can be completely characterized using the notion of quotient map. A surjective function f : X → Y between topological spaces is a quotient map if and only if the topology on Y coincides with the final topology induced by the family {f}. In particular: the quotient topology is the final topology on the quotient space induced by the quotient map. The final topology on a set X induced by a family of X-valued maps can be viewed as a far-reaching generalization of the quotient topology, where multiple maps may be used instead of just one and where these maps are not required to be surjections. Given topological spaces X_i, the disjoint union topology on the disjoint union is the final topology on the disjoint union induced by the natural injections. Given a family of topologies on a fixed set, the final topology on that set with respec
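On finite spaces the definition can be computed directly: a set is open in the final topology exactly when every preimage is open, and the resulting collection is automatically a topology because preimages commute with unions and intersections. The sketch below (names and the toy example are my own) recovers the Sierpiński space as the quotient topology of a three-point space:

```python
from itertools import combinations

def powerset(xs):
    """All subsets of xs, as frozensets."""
    xs = list(xs)
    return [frozenset(c) for r in range(len(xs) + 1)
            for c in combinations(xs, r)]

def final_topology(X, spaces_and_maps):
    """Opens of the final topology on X: U is open iff f^{-1}(U) is open
    in (S, tau) for every triple (S, tau, f) in the family."""
    return {U for U in powerset(X)
            if all(frozenset(s for s in S if f[s] in U) in tau
                   for S, tau, f in spaces_and_maps)}

# Quotient map f : S -> {'a', 'b'} identifying the points 0 and 1 of
# S = {0, 1, 2}, where S carries the topology {{}, {0}, {0, 1}, S}.
S = frozenset({0, 1, 2})
tau = {frozenset(), frozenset({0}), frozenset({0, 1}), S}
f = {0: 'a', 1: 'a', 2: 'b'}

quotient = final_topology({'a', 'b'}, [(S, tau, f)])
print(sorted(sorted(U) for U in quotient))
# [[], ['a'], ['a', 'b']] -- the Sierpinski space: {'b'} is not open,
# because its preimage {2} is not open in S.
```

Adding more maps to the family only shrinks the collection of opens, reflecting that the final topology is the finest topology making every map continuous.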
https://en.wikipedia.org/wiki/139%20%28number%29
139 (one hundred [and] thirty-nine) is the natural number following 138 and preceding 140. In mathematics 139 is the 34th prime number. It is a twin prime with 137. Because 141 is a semiprime, 139 is a Chen prime. 139 is the smallest prime before a prime gap of length 10. This number is the sum of five consecutive prime numbers (19 + 23 + 29 + 31 + 37). It is the smallest factor of 64079 which is the smallest Lucas number with prime index which is not prime. It is also the smallest factor of the first nine terms of the Euclid–Mullin sequence, making it the tenth term. 139 is a happy number and a strictly non-palindromic number. In the military RUM-139 VL-ASROC is a United States Navy ASROC anti-submarine missile was a United States Navy Admirable-class minesweeper during World War II was a United States Navy Haskell-class attack transport during World War II was a United States Navy destroyer during World War II was a United States Navy transport ship during World War I and World War II was a tanker loaned to the Soviet Union during World War II, then returned to the United States in 1944 was a United States Navy cargo ship during World War II was a United States Navy Des Moines-class heavy cruiser following World War II was a United States Navy Wickes-class destroyer during World War II In transportation British Rail Class 139 is the TOPS classification assigned to the lightweight railcars by West Midlands Trains on the Stourbridge Town Branch Line Fiat M139 platform is the next-generation premium rear wheel drive automobile platform from Fiat London Buses route 139 is a Transport for London contracted bus route in London In other fields 139 is also: The year AD 139 or 139 BC 139 AH is a year in the Islamic calendar that corresponds to 756 – 757 CE. 139 Juewa is a large and dark main belt asteroid discovered in 1874 The atomic number of untriennium, an unsynthesized chemical element Gull Lake No. 
139 is a rural municipality in Saskatchewan, Canada Sonnet 139 by William Shakespeare 139 Rb Gahhie is a village in Chak Jhumra Tehsil in the Punjab province of Pakistan Miss 139 was a 1921 film starring Diana Allen and Marc McDermott Motorola C139 model cellphone See also List of highways numbered 139 M139 (disambiguation) United Nations Security Council Resolution 139 United States Supreme Court cases, Volume 139 References External links Psalm 139 Integers
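The arithmetic claims in the mathematics section above are easy to confirm by brute force; this quick self-contained check is my own sketch, not part of the article:

```python
def is_prime(n):
    """Trial-division primality test."""
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

primes = [p for p in range(2, 200) if is_prime(p)]

assert primes[33] == 139                 # 139 is the 34th prime
assert is_prime(137)                     # (137, 139) is a twin prime pair
assert 139 + 2 == 3 * 47                 # 141 is a semiprime, so 139 is a Chen prime
assert 19 + 23 + 29 + 31 + 37 == 139     # sum of five consecutive primes
assert primes[34] - primes[33] == 10     # prime gap of length 10 follows 139
print("all arithmetic properties of 139 verified")
```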
https://en.wikipedia.org/wiki/146%20%28number%29
146 (one hundred [and] forty-six) is the natural number following 145 and preceding 147. In mathematics 146 is an octahedral number, the number of spheres that can be packed into a regular octahedron with six spheres along each edge. For an octahedron with seven spheres along each edge, the number of spheres on the surface of the octahedron is again 146. It is also possible to arrange 146 disks in the plane into an irregular octagon with six disks on each side, making 146 an octo number. There is no integer with exactly 146 coprimes less than it, so 146 is a nontotient. It is also never the difference between an integer and the total of coprimes below it, so it is a noncototient. And it is not the sum of proper divisors of any number, making it an untouchable number. There are 146 connected partially ordered sets with four labeled elements. See also 146 (disambiguation) References Integers
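Both octahedral facts follow from the formula for the n-th octahedral number, O(n) = n(2n² + 1)/3: the surface of the edge-7 octahedron is everything minus the edge-5 interior. The nontotient claim can be confirmed by a bounded search, using the standard bound φ(m) ≥ √(m/2), so any m with φ(m) = 146 would satisfy m ≤ 2·146². This check is my own sketch:

```python
def octahedral(n):
    """n-th octahedral number: n * (2n^2 + 1) / 3."""
    return n * (2 * n * n + 1) // 3

assert octahedral(6) == 146                  # six spheres along each edge
assert octahedral(7) - octahedral(5) == 146  # surface of the edge-7 octahedron

def phi(m):
    """Euler's totient, via trial-division factorization."""
    result, n, p = m, m, 2
    while p * p <= n:
        if n % p == 0:
            while n % p == 0:
                n //= p
            result -= result // p
        p += 1
    if n > 1:
        result -= result // n
    return result

# Nontotient: no m has exactly 146 integers below it coprime to it.
# phi(m) >= sqrt(m/2) bounds the search at m <= 2 * 146^2.
assert all(phi(m) != 146 for m in range(1, 2 * 146**2 + 1))
print("146 checks pass")
```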
https://en.wikipedia.org/wiki/Natural%20density
In number theory, natural density, also referred to as asymptotic density or arithmetic density, is one method to measure how "large" a subset of the set of natural numbers is. It relies chiefly on the probability of encountering members of the desired subset when combing through the interval [1, n] as n grows large. Intuitively, it is thought that there are more positive integers than perfect squares, since every perfect square is already positive, and many other positive integers exist besides. However, the set of positive integers is not in fact larger than the set of perfect squares: both sets are infinite and countable and can therefore be put in one-to-one correspondence. Nevertheless, if one goes through the natural numbers, the squares become increasingly scarce. The notion of natural density makes this intuition precise for many, but not all, subsets of the naturals (see Schnirelmann density, which is similar to natural density but defined for all subsets of the naturals). If an integer is randomly selected from the interval [1, n], then the probability that it belongs to A is the ratio of the number of elements of A in [1, n] to the total number of elements in [1, n]. If this probability tends to some limit as n tends to infinity, then this limit is referred to as the asymptotic density of A. This notion can be understood as a kind of probability of choosing a number from the set A. Indeed, the asymptotic density (as well as some other types of densities) is studied in probabilistic number theory. Definition A subset A of positive integers has natural density α if the proportion of elements of A among all natural numbers from 1 to n converges to α as n tends to infinity. More explicitly, if one defines for any natural number n the counting function a(n) as the number of elements of A less than or equal to n, then the natural density of A being exactly α means that a(n)/n → α as n → ∞. It follows from the definition that if a set A has natural density α then 0 ≤ α ≤ 1. 
Upper and lower asymptotic density Let A be a subset of the set of natural numbers N = {1, 2, ...}. For any n ∈ N, define A(n) to be the intersection A ∩ {1, 2, ..., n}, and let a(n) = |A(n)| be the number of elements of A less than or equal to n. Define the upper asymptotic density of A (also called the "upper density") by d̄(A) = limsup_{n→∞} a(n)/n, where lim sup is the limit superior. Similarly, define the lower asymptotic density of A (also called the "lower density") by d̲(A) = liminf_{n→∞} a(n)/n, where lim inf is the limit inferior. One may say A has asymptotic density d(A) if d̲(A) = d̄(A), in which case d(A) is equal to this common value. This definition can be restated in the following way: d(A) = lim_{n→∞} a(n)/n, if this limit exists. These definitions may equivalently be expressed in the following way. Given a subset A of N, write it as an increasing sequence indexed by the natural numbers: a_1 < a_2 < a_3 < .... Then d̄(A) = limsup_{n→∞} n/a_n, d̲(A) = liminf_{n→∞} n/a_n, and d(A) = lim_{n→∞} n/a_n if the limit exists. A somewhat weaker notion of density is the upper Banach density d*(A) of a set A. This is defined as d*(A) = limsup_{N−M→∞} |A ∩ {M, M+1, ..., N}| / (N − M + 1). Properties and examples For any finite set F of positive integers, d(F) = 0. If d(A) exists for some set A and Ac denotes its complement set with respect to N, then d(Ac) = 1 − d(A).
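The counting-function definition translates directly into code. The empirical ratios a(n)/n below (my own illustration) show the even numbers approaching density 1/2 and the perfect squares approaching density 0:

```python
import math

def counting_density(in_A, n):
    """a(n)/n, where a(n) = #{1 <= k <= n : k in A}."""
    return sum(1 for k in range(1, n + 1) if in_A(k)) / n

is_even = lambda k: k % 2 == 0
is_square = lambda k: math.isqrt(k) ** 2 == k

for n in (10**2, 10**4, 10**6):
    print(n, counting_density(is_even, n), counting_density(is_square, n))

# d(evens) = 1/2, while d(squares) = 0 since a(n)/n = floor(sqrt(n))/n -> 0.
assert abs(counting_density(is_even, 10**6) - 0.5) < 1e-6
assert counting_density(is_square, 10**6) < 2e-3
```

Sets such as the integers whose decimal representation starts with the digit 1 have distinct upper and lower densities, so no natural density; estimating those requires tracking lim sup and lim inf of the same ratio rather than a single limit.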
https://en.wikipedia.org/wiki/Grothendieck%20duality
In mathematics, Grothendieck duality may refer to: coherent duality of coherent sheaves, or Grothendieck local duality of modules over a local ring.
https://en.wikipedia.org/wiki/Duke%20Mathematical%20Journal
Duke Mathematical Journal is a peer-reviewed mathematics journal published by Duke University Press. It was established in 1935. The founding editors-in-chief were David Widder, Arthur Coble, and Joseph Miller Thomas. The first issue included a paper by Solomon Lefschetz. Leonard Carlitz served on the editorial board for 35 years, from 1938 to 1973. The current managing editor is Richard Hain (Duke University). Impact According to the journal homepage, the journal has a 2018 impact factor of 2.194, ranking it in the top ten mathematics journals in the world. References External links Mathematics journals Mathematical Journal Academic journals established in 1935 Multilingual journals English-language journals French-language journals Duke University Press academic journals
https://en.wikipedia.org/wiki/Fundamental%20solution
In mathematics, a fundamental solution for a linear partial differential operator L is a formulation in the language of distribution theory of the older idea of a Green's function (although unlike Green's functions, fundamental solutions do not address boundary conditions). In terms of the Dirac delta "function" δ(x), a fundamental solution F is a solution of the inhomogeneous equation L F = δ(x). Here F is a priori only assumed to be a distribution. This concept has long been utilized for the Laplacian in two and three dimensions. It was investigated for all dimensions for the Laplacian by Marcel Riesz. The existence of a fundamental solution for any operator with constant coefficients — the most important case, directly linked to the possibility of using convolution to solve an arbitrary right hand side — was shown by Bernard Malgrange and Leon Ehrenpreis. In the context of functional analysis, fundamental solutions are usually developed via the Fredholm alternative and explored in Fredholm theory. Example Consider the following differential equation L f = sin(x) with L = d²/dx². The fundamental solutions can be obtained by solving L F = δ(x); explicitly, d²F/dx² = δ(x). Since for the unit step function H(x) (also known as the Heaviside function) we have dH/dx = δ(x), there is a solution dF/dx = H(x) + C. Here C is an arbitrary constant introduced by the integration. For convenience, set C = −1/2. After integrating dF/dx and choosing the new integration constant as zero, one has F(x) = x H(x) − x/2 = ½|x|. Motivation Once the fundamental solution is found, it is straightforward to find a solution of the original equation, through convolution of the fundamental solution and the desired right hand side. Fundamental solutions also play an important role in the numerical solution of partial differential equations by the boundary element method. 
Application to the example Consider the operator L and the differential equation mentioned in the example, L f(x) = sin(x). We can find the solution f of the original equation by convolution (denoted by an asterisk) of the right-hand side with the fundamental solution F(x) = ½|x|: f(x) = (F ∗ sin)(x) = ∫ ½|x − y| sin(y) dy. This shows that some care must be taken when working with functions which do not have enough regularity (e.g. compact support, L1 integrability), since we know that the desired solution is f(x) = −sin(x), while the above integral diverges for all x. The two expressions for f are, however, equal as distributions. An example that more clearly works is L f(x) = I(x), where I is the characteristic (indicator) function of the unit interval [0, 1]. In that case, it can be verified that the convolution I ∗ F, with F(x) = ½|x|, is a solution, i.e., has second derivative equal to I. Proof that the convolution is a solution Denote the convolution of functions F and g as F ∗ g. Say we are trying to find the solution of L f = g(x). We want to prove that F ∗ g is a solution of the previous equation, i.e. we want to prove that L(F ∗ g) = g. When applying the differential operator, L, to the convolution, it is known that L(F ∗ g) = (L F) ∗ g, provided L has constant coefficients. If F is the fundamental solution, the right side of the equation reduces to δ ∗ g. But since the delta function is an identity element for convolution, this is simply g(x).
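The well-behaved case can be checked numerically. Assuming (as in the standard treatment of this example) that F(x) = |x|/2 is the fundamental solution of d²/dx² and g is the indicator of [0, 1], the convolution F ∗ g should have second derivative equal to g away from the kinks at 0 and 1. A midpoint-rule sketch of my own:

```python
F = lambda x: abs(x) / 2                        # fundamental solution of d^2/dx^2
g = lambda x: 1.0 if 0.0 <= x <= 1.0 else 0.0   # indicator of the unit interval

def conv(x, cells=4000):
    """(F * g)(x) = integral of F(x - y) over y in [0, 1], midpoint rule."""
    h = 1.0 / cells
    return h * sum(F(x - (i + 0.5) * h) for i in range(cells))

def second_derivative(f, x, eps=1e-2):
    """Central second-difference approximation of f''(x)."""
    return (f(x + eps) - 2.0 * f(x) + f(x - eps)) / eps**2

# (F * g)'' reproduces g away from the kink points 0 and 1.
for x, expected in [(-1.0, 0.0), (0.5, 1.0), (2.0, 0.0)]:
    assert abs(second_derivative(conv, x) - expected) < 1e-2
print("d^2/dx^2 (F * g) = g at the sampled points")
```

The step eps is kept coarse relative to the quadrature cells so that the integration error, amplified by the 1/eps² in the second difference, stays negligible.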
https://en.wikipedia.org/wiki/W%C5%82odzimierz%20Sto%C5%BCek
Włodzimierz Stożek (23 July 1883 – 3 or 4 July 1941) was a Polish mathematician of the Lwów School of Mathematics and head of the Mathematics Faculty at the Lwów University of Technology. During the Second World War he was arrested by the Nazis and murdered, together with his two sons, the 29-year-old engineer Eustachy and the 24-year-old Emanuel, a graduate of the Institute of Technology, on 3 or 4 July 1941 in Lviv, during the Massacre of Lwów professors. In December 1944, Stefan Banach wrote the following tribute to Stożek: Professor Włodzimierz Stożek was an outstanding mathematician, the author of numerous papers on the theory of integral equations, potential theory, as well as on many other branches of mathematics. His work is widely known in Poland and also abroad. He had a very charming personality and was a distinguished scholar, beloved by his young students as someone with a very caring heart. He was always ready to assist anyone who asked for his help. He took little notice of nationality differences. Those of us who were close to him and knew him well now esteem even more highly his enlightened personality and great cultural contributions. He will be remembered as a great intellectual who loved all of humanity and served it faithfully. References Emilia Jakimowicz and Adam Miranowicz (2011) Stefan Banach: Remarkable Life, Brilliant Mathematics, 3rd edition, page 25, Gdańsk University Press, . External links 1883 births 1941 deaths Lwów School of Mathematics Victims of the Massacre of Lwów professors Lviv Polytechnic alumni People from the Kingdom of Galicia and Lodomeria People from Zhovkva
https://en.wikipedia.org/wiki/Stanis%C5%82aw%20Ruziewicz
Stanisław Ruziewicz (29 August 1889 – 12 July 1941) was a Polish mathematician and one of the founders of the Lwów School of Mathematics. He was a former student of Wacław Sierpiński, earning his doctorate in 1913 from the University of Lwów; his thesis concerned continuous functions that are not differentiable. He became a professor at the same university (then named Jan Kazimierz University) and rector of the Academy of Foreign Trade in Lwów. During the Second World War, Ruziewicz's home city of Lwów was annexed by the Ukrainian Soviet Socialist Republic, but then taken over by the General Government of German-occupied Poland in July 1941; Ruziewicz was arrested and murdered by the Gestapo on 12 July 1941 in Lwów, during the Massacre of Lwów professors. The Ruziewicz problem, asking whether the Lebesgue measure on the sphere may be characterized by certain of its properties, is named after him. References 1889 births 1941 deaths Lwów School of Mathematics Victims of the Massacre of Lwów professors People from the Kingdom of Galicia and Lodomeria People from Kolomyia Polish people executed by Nazi Germany
https://en.wikipedia.org/wiki/Charles%20Parsons%20%28philosopher%29
Charles Dacre Parsons (born April 13, 1933) is an American philosopher best known for his work in the philosophy of mathematics and the study of the philosophy of Immanuel Kant. He is professor emeritus at Harvard University. Life and career Parsons is a son of the famous Harvard sociologist Talcott Parsons. He earned his Ph.D. in philosophy at Harvard University in 1961, under the direction of Burton Dreben and Willard Van Orman Quine. He taught for many years at Columbia University before moving to Harvard University in 1989. He retired in 2005 as the Edgar Pierce professor of philosophy, a position formerly held by Quine. He is an elected Fellow of the American Academy of Arts and Sciences and the Norwegian Academy of Science and Letters. Among his former doctoral students are Michael Levin, James Higginbotham, Peter Ludlow, Gila Sher, Øystein Linnebo, Richard Tieszen, and Mark van Atten. In 2017, Parsons held the Gödel Lecture titled Gödel and the universe of sets. Philosophical work In addition to his work in logic and the philosophy of mathematics, Parsons was an editor, with Solomon Feferman and others, of the posthumous works of Kurt Gödel. He has also written on historical figures, especially Immanuel Kant, Gottlob Frege, Kurt Gödel, and Willard Van Orman Quine. Works Books 1983. Mathematics in Philosophy: Selected Essays. Ithaca, N.Y.: Cornell Univ. Press. 2008. Mathematical Thought and its Objects. Cambridge Univ. Press. 2012. From Kant to Husserl: Selected Essays. Cambridge, Massachusetts, and London: Harvard Univ. Press. 2014a. Philosophy of Mathematics in the Twentieth Century: Selected Essays. Cambridge, Massachusetts, and London: Harvard Univ. Press. Selected articles 1987. "Developing Arithmetic in Set Theory without infinity: Some Historical Remarks". History and Philosophy of Logic, vol. 8, pp. 201–213. 1990a. "The Uniqueness of the Natural Numbers". Iyyun, vol. 39, pp. 13–44. ISSN 0021-3306. 1990b. 
"The Structuralist View of Mathematical Objects". Synthese, vol. 84 (3), pp. 303–346. 2014b. "Analyticity for Realists". In Interpreting Gödel: Critical Essays, ed. J. Kennedy. Cambridge University Press, pp. 131–150. References 1933 births American logicians Harvard Graduate School of Arts and Sciences alumni Columbia University faculty Harvard University faculty Harvard University Department of Philosophy faculty Living people Mathematical logicians Philosophers of mathematics Members of the Norwegian Academy of Science and Letters
https://en.wikipedia.org/wiki/Blocking%20%28statistics%29
In the statistical theory of the design of experiments, blocking is the arranging of experimental units that are similar to one another in groups (blocks). Blocking can be used to tackle the problem of pseudoreplication. Use Blocking reduces unexplained variability. Its principle lies in the fact that variability which cannot be overcome (e.g. needing two batches of raw material to produce 1 container of a chemical) is confounded or aliased with a higher-order (ideally the highest-order) interaction, to eliminate its influence on the end product. High-order interactions are usually of the least importance (think of the fact that the temperature of a reactor or the batch of raw materials is more important than the combination of the two – this is especially true when more (3, 4, ...) factors are present); thus it is preferable to confound this variability with the higher interaction. Examples Male and female: An experiment is designed to test a new drug on patients. There are two levels of the treatment, drug, and placebo, administered to male and female patients in a double blind trial. The sex of the patient is a blocking factor accounting for treatment variability between males and females. This reduces sources of variability and thus leads to greater precision. Elevation: An experiment is designed to test the effects of a new pesticide on a specific patch of grass. The grass area contains a major elevation change and thus consists of two distinct regions – 'high elevation' and 'low elevation'. A treatment group (the new pesticide) and a placebo group are applied to both the high elevation and low elevation areas of grass. In this instance the researcher is blocking the elevation factor which may account for variability in the pesticide's application. Intervention: Suppose a process is invented that intends to make the soles of shoes last longer, and a plan is formed to conduct a field trial. 
Given a group of n volunteers, one possible design would be to give n/2 of them shoes with the new soles and n/2 of them shoes with the ordinary soles, randomizing the assignment of the two kinds of soles. This type of experiment is a completely randomized design. Both groups are then asked to use their shoes for a period of time, and then measure the degree of wear of the soles. This is a workable experimental design, but purely from the point of view of statistical accuracy (ignoring any other factors), a better design would be to give each person one regular sole and one new sole, randomly assigning the two types to the left and right shoe of each volunteer. Such a design is called a "randomized complete block design." This design will be more sensitive than the first, because each person is acting as his/her own control and thus the control group is more closely matched to the treatment group.
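A small simulation (entirely illustrative; the parameter values are assumptions of my own) shows why the paired design is more sensitive: person-to-person variability cancels within each block, so the sampling variance of the estimated sole effect shrinks dramatically.

```python
import random

random.seed(0)

def simulate(n=50, reps=2000, sole_effect=1.0, person_sd=2.0, noise_sd=0.5):
    """Variance of the estimated sole effect under a completely randomized
    design (CRD) versus a randomized complete block design (RCBD)."""
    crd, rcbd = [], []
    for _ in range(reps):
        people = [random.gauss(0.0, person_sd) for _ in range(n)]
        # CRD: first half gets the new sole, second half the ordinary one.
        new = [p + sole_effect + random.gauss(0.0, noise_sd) for p in people[:n // 2]]
        old = [p + random.gauss(0.0, noise_sd) for p in people[n // 2:]]
        crd.append(sum(new) / len(new) - sum(old) / len(old))
        # RCBD: each person wears one sole of each kind; person effect cancels.
        diffs = [(p + sole_effect + random.gauss(0.0, noise_sd))
                 - (p + random.gauss(0.0, noise_sd)) for p in people]
        rcbd.append(sum(diffs) / n)
    var = lambda xs: sum((x - sum(xs) / len(xs)) ** 2 for x in xs) / (len(xs) - 1)
    return var(crd), var(rcbd)

v_crd, v_rcbd = simulate()
print(v_crd, v_rcbd)   # blocking yields a far smaller sampling variance
assert v_rcbd < v_crd
```

Both designs estimate the same effect without bias; blocking only removes the between-person component from the estimator's variance, which is exactly the gain described above.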