The law of averages is the commonly held belief that a particular outcome or event will, over certain periods of time, occur at a frequency that is similar to its probability.[1][2] Depending on context or application it can be considered a valid common-sense observation or a misunderstanding of probability. This notion can lead to the gambler's fallacy when one becomes convinced that a particular outcome must come soon simply because it has not occurred recently (e.g. believing that because three consecutive coin flips yielded heads, the next coin flip must be virtually guaranteed to be tails). As invoked in everyday life, the "law" usually reflects wishful thinking or a poor understanding of statistics rather than any mathematical principle. While there is a real theorem that a random variable will reflect its underlying probability over a very large sample, the law of averages typically assumes that an unnatural short-term "balance" must occur.[3] Typical applications also generally assume no bias in the underlying probability distribution, which is frequently at odds with the empirical evidence.[4]

The gambler's fallacy is a particular misapplication of the law of averages in which the gambler believes that a particular outcome is more likely because it has not happened recently, or (conversely) that because a particular outcome has recently occurred, it will be less likely in the immediate future.[5]

As an example, consider a roulette wheel that has landed on red in three consecutive spins. An onlooker might apply the law of averages to conclude that on its next spin it is guaranteed (or at least is much more likely) to land on black. Of course, the wheel has no memory and its probabilities do not change according to past results. So even if the wheel has landed on red in ten or a hundred consecutive spins, the probability that the next spin will be black is still no more than 48.6% (assuming a fair European wheel with only one green zero; it would be exactly 50% if there were no green zero and the wheel were fair, and 47.4% for a fair American wheel with one green "0" and one green "00"). Similarly, there is no statistical basis for the belief that lottery numbers which haven't appeared recently are due to appear soon. (There is some value in choosing lottery numbers that are, in general, less popular than others, not because they are any more or less likely to come up, but because the largest prizes are usually shared among all of the people who chose the winning numbers. The unpopular numbers are just as likely to come up as the popular numbers are, and in the event of a big win, one would likely have to share it with fewer other people. See parimutuel betting.)

Another application of the law of averages is the belief that a sample's behaviour must line up with the expected value based on population statistics. For example, suppose a fair coin is flipped 100 times. Using the law of averages, one might predict that there will be 50 heads and 50 tails. While this is the single most likely outcome, there is only an 8% chance of it occurring, according to the binomial probability P(X = 50 | n = 100, p = 0.5). Predictions based on the law of averages are even less useful if the sample does not reflect the population.

A further use of the term concerns repetition of trials: one tries to increase the probability of a rare event occurring at least once by carrying out more trials. For example, a job seeker might argue, "If I send my résumé to enough places, the law of averages says that someone will eventually hire me."
Assuming a non-zero probability, it is true that conducting more trials increases the overall likelihood of the desired outcome. However, there is no particular number of trials that guarantees that outcome; rather, the probability that it will already have occurred approaches, but never quite reaches, 100%.

The Steve Goodman song "A Dying Cub Fan's Last Request" mentions the law of averages in reference to the Chicago Cubs' lack of championship success. At the time Goodman recorded the song in 1981, the Cubs had not won a National League championship since 1945, and had not won a World Series since 1908. This futility continued until the Cubs finally won both in 2016.
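The two numerical claims above (the roughly 8% chance of exactly 50 heads, and the way repeated trials approach but never reach certainty) can be checked with a few lines of Python; the per-trial probability of 1% and the trial counts below are arbitrary illustrative choices, not figures from the article.

```python
# A minimal numerical check of the two claims above, assuming a fair coin
# (p = 0.5) and independent trials.
from math import comb

# Probability of exactly 50 heads in 100 flips of a fair coin:
p_exact_50 = comb(100, 50) * 0.5**100
print(f"P(X = 50 | n = 100, p = 0.5) = {p_exact_50:.4f}")   # ~0.0796, i.e. about 8%

# Probability that a rare event (here p = 0.01 per trial, an arbitrary choice)
# has occurred at least once after n independent trials: 1 - (1 - p)^n.
# It approaches 1 but never reaches it for any finite n.
for n in (10, 100, 1000):
    print(n, 1 - (1 - 0.01) ** n)
```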
https://en.wikipedia.org/wiki/Law_of_averages
In mathematics, a proof without words (or visual proof) is an illustration of an identity or mathematical statement which can be demonstrated as self-evident by a diagram without any accompanying explanatory text. Such proofs can be considered more elegant than formal or mathematically rigorous proofs due to their self-evident nature.[1] When the diagram demonstrates a particular case of a general statement, to be a proof, it must be generalisable.[2]

A proof without words is not the same as a mathematical proof, because it omits the details of the logical argument it illustrates. However, it can provide valuable intuitions to the viewer that can help them formulate or better understand a true proof.

The statement that the sum of all positive odd numbers up to 2n − 1 is a perfect square, more specifically the perfect square n², can be demonstrated by a proof without words.[3] In one corner of a grid, a single block represents 1, the first square. That can be wrapped on two sides by a strip of three blocks (the next odd number) to make a 2 × 2 block: 4, the second square. Adding a further five blocks makes a 3 × 3 block: 9, the third square. This process can be continued indefinitely.

The Pythagorean theorem, that a² + b² = c², can be proven without words.[4] One method of doing so is to visualise a larger square of sides a + b, with four right-angled triangles of sides a, b and c in its corners, such that the space in the middle is a diagonal square with an area of c². The four triangles can be rearranged within the larger square to split its unused space into two squares of a² and b².[5]

Jensen's inequality can also be proven graphically. A dashed curve along the X axis is the hypothetical distribution of X, while a dashed curve along the Y axis is the corresponding distribution of Y values. The convex mapping Y(X) increasingly "stretches" the distribution for increasing values of X.[6]

Mathematics Magazine and The College Mathematics Journal run a regular feature titled "Proof without words" containing, as the title suggests, proofs without words.[3] The Art of Problem Solving and USAMTS websites run Java applets illustrating proofs without words.[7][8]

For a proof to be accepted by the mathematical community, it must logically show how the statement it aims to prove follows totally and inevitably from a set of assumptions.[9] A proof without words might imply such an argument, but it does not make one directly, so it cannot take the place of a formal proof where one is required.[10][11] Rather, mathematicians use proofs without words as illustrations and teaching aids for ideas that have already been proven formally.[12][13]
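A quick numerical check (not a substitute for the visual proof) of the odd-number identity illustrated above; the range of n tested is an arbitrary choice.

```python
# Numerical check of the identity pictured by the block diagram:
# 1 + 3 + 5 + ... + (2n - 1) = n^2.
for n in range(1, 11):
    assert sum(2 * k - 1 for k in range(1, n + 1)) == n ** 2
print("the sum of the first n odd numbers equals n^2 for n = 1..10")
```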
https://en.wikipedia.org/wiki/Proof_without_words#Jensen.27s_inequality
In mathematics, more precisely in the theory of functions of several complex variables, a pseudoconvex set is a special type of open set in the n-dimensional complex space Cⁿ. Pseudoconvex sets are important, as they allow for classification of domains of holomorphy.

Let G ⊂ Cⁿ be a domain, that is, an open connected subset. One says that G is pseudoconvex (or Hartogs pseudoconvex) if there exists a continuous plurisubharmonic function φ on G such that the set {z ∈ G : φ(z) < x} is a relatively compact subset of G for all real numbers x. In other words, a domain is pseudoconvex if G has a continuous plurisubharmonic exhaustion function. Every (geometrically) convex set is pseudoconvex. However, there are pseudoconvex domains which are not geometrically convex.

When G has a C² (twice continuously differentiable) boundary, this notion is the same as Levi pseudoconvexity, which is easier to work with. More specifically, with a C² boundary, it can be shown that G has a defining function, i.e., that there exists ρ : Cⁿ → R which is C² so that G = {ρ < 0} and ∂G = {ρ = 0}. Now, G is pseudoconvex iff for every p ∈ ∂G and every w in the complex tangent space at p (that is, with Σ_j ∂ρ/∂z_j(p) w_j = 0), the Levi form Σ_{i,j} ∂²ρ/(∂z_i ∂z̄_j)(p) w_i w̄_j is non-negative. The definition above is analogous to definitions of convexity in real analysis.

If G does not have a C² boundary, the following approximation result can be useful.

Proposition 1. If G is pseudoconvex, then there exist bounded, strongly Levi pseudoconvex domains G_k ⊂ G with C^∞ (smooth) boundary which are relatively compact in G, such that G = ⋃_k G_k. This is because once we have a φ as in the definition we can actually find a C^∞ exhaustion function.

In one complex dimension, every open domain is pseudoconvex. The concept of pseudoconvexity is thus more useful in dimensions higher than 1.

This article incorporates material from Pseudoconvex on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
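For reference, the two conditions discussed above can be written compactly; this is the usual textbook formulation, restated here rather than quoted from the source.

```latex
% Hartogs pseudoconvexity: existence of a continuous plurisubharmonic
% exhaustion function \varphi on G,
\{\, z \in G : \varphi(z) < x \,\} \Subset G \qquad \text{for every } x \in \mathbb{R}.

% Levi pseudoconvexity for a C^2 boundary with defining function \rho:
\sum_{i,j=1}^{n} \frac{\partial^{2}\rho}{\partial z_{i}\,\partial\bar{z}_{j}}(p)\, w_{i}\bar{w}_{j} \;\ge\; 0
\qquad \text{whenever } p \in \partial G \ \text{and}\ \sum_{j=1}^{n} \frac{\partial\rho}{\partial z_{j}}(p)\, w_{j} = 0 .
```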
https://en.wikipedia.org/wiki/Pseudoconvexity
A Jones diagram is a type of Cartesian graph developed by Loyd A. Jones in the 1940s, where each axis represents a different variable. In a Jones diagram opposite directions of an axis represent different quantities, unlike in a Cartesian graph where they represent positive or negative signs of the same quantity. The Jones diagram therefore represents four variables. Each quadrant shares the vertical axis with its horizontal neighbor, and the horizontal axis with the vertical neighbor. For example, the top left quadrant shares its vertical axis with the top right quadrant, and the horizontal axis with the bottom left quadrant. The overall system response is in quadrant I; the variables that contribute to it are in quadrants II through IV.

A common application of Jones diagrams is in photography, specifically in displaying sensitivity to light with what are also called "tone reproduction diagrams". These diagrams are used in the design of photographic systems (film, paper, etc.) to determine the relationship between the light a viewer would see at the time a photo was taken and the light that a viewer would see looking at the finished photograph. The Jones diagram concept can be used for variables that depend successively on each other. Jones's original diagram used eleven quadrants to show all the elements of his photographic system.
https://en.wikipedia.org/wiki/Jones_diagram
Applicative computing systems, or ACS, are systems of object calculi founded on combinatory logic and lambda calculus.[1] The only essential notion under consideration in these systems is the representation of an object. In combinatory logic the only metaoperator is application, in the sense of applying one object to another. In lambda calculus two metaoperators are used: application, the same as in combinatory logic, and functional abstraction, which binds the only variable in one object. The objects generated in these systems are functional entities.

ACS give a sound basis for the applicative approach to programming. Applicative computing systems' lack of storage and history sensitivity is the basic reason they have not provided a foundation for computer design. Moreover, most applicative systems employ the substitution operation of the lambda calculus as their basic operation. This operation is one of virtually unlimited power, but its complete and efficient realization presents great difficulties to the machine designer.[2]
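As a loose illustration of the two metaoperators just described, here is a small Python sketch in which lambda plays the role of functional abstraction and an ordinary call plays the role of application; the particular functions are arbitrary.

```python
# The two metaoperators of the applicative systems described above, sketched
# informally with Python callables (an analogy, not the calculi themselves).

# Functional abstraction: binding the only variable in an object.
successor = lambda x: x + 1                 # roughly  \x. x + 1

# Application: applying one object to another.
print(successor(41))                        # 42

# In a purely applicative style, functions themselves are the objects:
compose = lambda f: lambda g: lambda x: f(g(x))
print(compose(successor)(successor)(40))    # 42
```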
https://en.wikipedia.org/wiki/Applicative_computing_systems
The B, C, K, W system is a variant of combinatory logic that takes as primitive the combinators B, C, K, and W. This system was discovered by Haskell Curry in his doctoral thesis Grundlagen der kombinatorischen Logik, whose results are set out in Curry (1930).

It has expressive power equivalent to that of the S, K, I system. Both systems are fully interchangeable. When compiling to combinators, an implementation may equally choose one system or the other, or both, as it helps shorten the encodings of functions. For example, the encodings of C exclusively in terms of S, K, I, as well as of S in terms of B, C, W, K, are long and complicated, as can be seen below, while their corresponding computational machine implementations are equally trivial. It can be worth it to add additional interpretation rules, allowing for much shorter code which can lead to more efficient execution.

The combinators are defined as follows:

B x y z = x (y z)
C x y z = x z y
K x y = x
W x y = x y y

Intuitively, B is a composition operator, C swaps the two arguments that follow its first argument, K discards its second argument (forming a constant function), and W duplicates its second argument.

In recent decades, the SKI combinator calculus, with only two primitive combinators, K and S, has become the canonical approach to combinatory logic. B, C, and W can be expressed in terms of S and K as follows:

B = S (K S) K
C = S (S (K (S (K S) K)) S) (K K)
W = S S (S K)

Another way is, having defined B as above, to further define C = S(BBS)(KK) and W = CSI. In fact, S(Kx)yz = Bxyz and Sx(Ky)z = Cxyz, as is easily verified. Going the other direction, SKI can be defined in terms of B, C, K, W as:

I = W K
K = K
S = B (B W) (B B C)

Also of note, the Y combinator has a short expression in this system, as Y = BU(CBU) = BU(BWB) = B(W(WK))(BWB), where U = WI = SII is the self-application combinator. Using just two combinators, B and W, an infinite number of fixpoint combinators can be constructed, one example being B(WW)(BW(BBB)), discovered by R. Statman in 1986.[2]

The combinators B, C, K and W correspond to four well-known axioms of sentential logic:

AB: (B → C) → ((A → B) → (A → C))
AC: (A → (B → C)) → (B → (A → C))
AK: A → (B → A)
AW: (A → (A → B)) → (A → B)

Function application corresponds to the rule modus ponens:

MP: from A and A → B, infer B.

The axioms AB, AC, AK and AW, and the rule MP are complete for the implicational fragment of intuitionistic logic. In order for combinatory logic to have as a model:
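The definitions and encodings above can be checked mechanically. The following Python sketch is my own (the test functions f and g are arbitrary); it models the combinators as curried functions and verifies the identities stated in this section.

```python
# Executable check of the combinator identities discussed above.
S = lambda x: lambda y: lambda z: x(z)(y(z))
K = lambda x: lambda y: x
I = lambda x: x

B = lambda x: lambda y: lambda z: x(y(z))   # composition
C = lambda x: lambda y: lambda z: x(z)(y)   # argument swap
W = lambda x: lambda y: x(y)(y)             # duplication

f = lambda a: lambda b: (a, b)              # arbitrary curried test function
g = lambda a: a + 1

# B, C, W expressed in terms of S and K:
B_sk = S(K(S))(K)
C_sk = S(S(K(B_sk))(S))(K(K))
W_sk = S(S)(S(K))

assert B(g)(g)(1) == B_sk(g)(g)(1) == 3
assert C(f)(1)(2) == C_sk(f)(1)(2) == (2, 1)
assert W(f)(7) == W_sk(f)(7) == (7, 7)

# Going the other way: C = S(BBS)(KK) and W = CSI, as stated above.
C2 = S(B(B)(S))(K(K))
W2 = C(S)(I)
assert C2(f)(1)(2) == (2, 1) and W2(f)(7) == (7, 7)
```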
https://en.wikipedia.org/wiki/B,_C,_K,_W_system
The categorical abstract machine (CAM) is a model of computation for programs[1] that preserves the abilities of applicative, functional, or compositional style. It is based on the techniques of applicative computing.

The notion of the categorical abstract machine arose in the mid-1980s. It took its place in computer science as a kind of theory of computation for programmers, represented by Cartesian closed category and embedded into the combinatory logic. CAM is a transparent and sound mathematical representation for the languages of functional programming. The machine code can be optimized using the equational form of a theory of computation. Using CAM, the various mechanisms of computation such as recursion or lazy evaluation can be emulated, as well as parameter passing, such as call by name, call by value, and so on. In theory, CAM preserves all the advantages of the object approach towards programming or computing.

The main current implementation is OCaml, which added class inheritance and dynamic method dispatch to Caml, the Categorical Abstract Machine Language. Both are variants of the MetaLanguage ML, and all three languages implement type inference.

One of the implementation approaches to functional languages is given by the machinery based on supercombinators, or an SK-machine, by D. Turner. The notion of CAM gives an alternative approach. The structure of CAM consists of syntactic, semantic, and computational constituents. Syntax is based on de Bruijn's notation, which overcomes the difficulties of using bound variables. The evaluations are similar to those of P. Landin's SECD machine. With this coverage, CAM gives a sound ground for syntax, semantics, and theory of computation. This understanding arises from the influence of the functional style of programming.
https://en.wikipedia.org/wiki/Categorical_abstract_machine
In computer science, lambda calculi are said to have explicit substitutions if they pay special attention to the formalization of the process of substitution. This is in contrast to the standard lambda calculus where substitutions are performed by beta reductions in an implicit manner which is not expressed within the calculus; the "freshness" conditions in such implicit calculi are a notorious source of errors.[1] The concept has appeared in a large number of published papers in quite different fields, such as in abstract machines, predicate logic, and symbolic computation.

A simple example of a lambda calculus with explicit substitution is "λx", which adds one new form of term to the lambda calculus, namely the form M⟨x := N⟩, which reads "M where x will be substituted by N". (The meaning of the new term is the same as the common idiom let x := N in M from many programming languages.) λx can be written with the following rewriting rules:

(λx.M) N → M⟨x := N⟩
x⟨x := N⟩ → N
y⟨x := N⟩ → y                          (x ≠ y)
(M₁ M₂)⟨x := N⟩ → (M₁⟨x := N⟩) (M₂⟨x := N⟩)
(λy.M)⟨x := N⟩ → λy.(M⟨x := N⟩)        (x ≠ y and y not free in N)

While making substitution explicit, this formulation still retains the complexity of the lambda calculus "variable convention", requiring arbitrary renaming of variables during reduction to ensure that the "(x ≠ y and y not free in N)" condition on the last rule is always satisfied before applying the rule. Therefore many calculi of explicit substitution avoid variable names altogether by using a so-called "name-free" De Bruijn index notation.

Explicit substitutions were sketched in the preface of Curry's book on combinatory logic[2] and grew out of an 'implementation trick' used, for example, by AUTOMATH, and became a respectable syntactic theory in lambda calculus and rewriting theory. Though the idea actually originated with de Bruijn,[3] the idea of a specific calculus where substitutions are part of the object language, and not of the informal meta-theory, is traditionally credited to Abadi, Cardelli, Curien, and Lévy. Their seminal paper[4] on the λσ calculus explains that implementations of lambda calculus need to be very careful when dealing with substitutions. Without sophisticated mechanisms for structure-sharing, substitutions can cause a size explosion, and therefore, in practice, substitutions are delayed and explicitly recorded. This makes the correspondence between the theory and the implementation highly non-trivial and correctness of implementations can be hard to establish. One solution is to make the substitutions part of the calculus, that is, to have a calculus of explicit substitutions.

Once substitution has been made explicit, however, the basic properties of substitution change from being semantic to syntactic properties. One of the most important examples is the "substitution lemma", which with the notation of λx becomes

(M⟨x := N⟩)⟨y := P⟩ = (M⟨y := P⟩)⟨x := N⟨y := P⟩⟩    (where x ≠ y and x is not free in P)

A surprising counterexample, due to Melliès,[5] shows that the way this rule is encoded in the original calculus of explicit substitutions is not strongly normalizing. Following this, a multitude of calculi were described trying to offer the best compromise between syntactic properties of explicit substitution calculi.[6][7][8]
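To make the λx rules concrete, here is a small executable Python sketch (my own encoding; the tuple-based term representation and the example term are arbitrary choices) that applies the rules with a leftmost-outermost strategy.

```python
# A sketch of the lambda-x rules above, with terms as nested tuples:
#   ('var', x) , ('lam', x, body) , ('app', t1, t2) , ('sub', body, x, value)  # body<x := value>

def fv(t):
    """Free variables of a term (including inside explicit substitutions)."""
    tag = t[0]
    if tag == 'var':
        return {t[1]}
    if tag == 'lam':
        return fv(t[2]) - {t[1]}
    if tag == 'app':
        return fv(t[1]) | fv(t[2])
    _, m, x, n = t                                     # 'sub'
    return (fv(m) - {x}) | fv(n)

def step(t):
    """One leftmost-outermost rewriting step, or None if no rule applies."""
    tag = t[0]
    if tag == 'app' and t[1][0] == 'lam':              # (\x.M) N  ->  M<x := N>
        _, (_, x, m), n = t
        return ('sub', m, x, n)
    if tag == 'sub':
        _, m, x, n = t
        if m[0] == 'var':
            return n if m[1] == x else m               # x<x:=N> -> N ;  y<x:=N> -> y
        if m[0] == 'app':                              # (M1 M2)<x:=N> -> (M1<x:=N>) (M2<x:=N>)
            return ('app', ('sub', m[1], x, n), ('sub', m[2], x, n))
        if m[0] == 'lam' and m[1] != x and m[1] not in fv(n):
            return ('lam', m[1], ('sub', m[2], x, n))  # push under the binder (no capture)
    if tag in ('app', 'sub'):                          # otherwise, reduce a subterm
        for i in (1, 2) if tag == 'app' else (1, 3):
            s = step(t[i])
            if s is not None:
                return t[:i] + (s,) + t[i + 1:]
    if tag == 'lam':
        s = step(t[2])
        if s is not None:
            return ('lam', t[1], s)
    return None

def normalize(t):
    while (s := step(t)) is not None:
        t = s
    return t

# ( \x. x x ) a   reduces to   a a, with the substitution recorded explicitly on the way.
term = ('app', ('lam', 'x', ('app', ('var', 'x'), ('var', 'x'))), ('var', 'a'))
print(normalize(term))    # ('app', ('var', 'a'), ('var', 'a'))
```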
https://en.wikipedia.org/wiki/Explicit_substitution
A graph reduction machine is a special-purpose computer built to perform combinator calculations by graph reduction. Examples include the SKIM ("S-K-I machine") computer, built at the University of Cambridge Computer Laboratory,[1] the multiprocessor GRIP ("Graph Reduction In Parallel") computer, built at University College London,[2][3] and the Reduceron, which was implemented on an FPGA with the single purpose of executing Haskell.[4][5]
https://en.wikipedia.org/wiki/Graph_reduction_machine
The SKI combinator calculus is a combinatory logic system and a computational system. It can be thought of as a computer programming language, though it is not convenient for writing software. Instead, it is important in the mathematical theory of algorithms because it is an extremely simple Turing complete language. It can be likened to a reduced version of the untyped lambda calculus. It was introduced by Moses Schönfinkel[1] and Haskell Curry.[2]

All operations in lambda calculus can be encoded via abstraction elimination into the SKI calculus as binary trees whose leaves are one of the three symbols S, K, and I (called combinators). Although the most formal representation of the objects in this system requires binary trees, for simpler typesetting they are often represented as parenthesized expressions, as a shorthand for the tree they represent. Any subtrees may be parenthesized, but often only the right-side subtrees are parenthesized, with left associativity implied for any unparenthesized applications. For example, ISK means ((IS)K). Using this notation, a tree whose left subtree is the tree KS and whose right subtree is the tree SK can be written as KS(SK). If more explicitness is desired, the implied parentheses can be included as well: ((KS)(SK)).

Informally, and using programming language jargon, a tree (xy) can be thought of as a function x applied to an argument y. When evaluated (i.e., when the function is "applied" to the argument), the tree "returns a value", i.e., transforms into another tree. The "function", "argument" and the "value" are either combinators or binary trees. If they are binary trees, they may be thought of as functions too, if needed.

The evaluation operation is defined as follows (x, y, and z represent expressions made from the functions S, K, and I, and set values).

I returns its argument:
I x = x

K, when applied to any argument x, yields a one-argument constant function Kx, which, when applied to any argument y, returns x:
K x y = x

S is a substitution operator. It takes three arguments and then returns the first argument applied to the third, which is then applied to the result of the second argument applied to the third. More clearly:
S x y z = x z (y z)

Example computation: SKSK evaluates to KK(SK) by the S-rule. Then if we evaluate KK(SK), we get K by the K-rule. As no further rule can be applied, the computation halts here.

For all trees x and all trees y, SKxy will always evaluate to y in two steps, Ky(xy) = y, so the ultimate result of evaluating SKxy will always equal the result of evaluating y. We say that SKx and I are "functionally equivalent" for any x because they always yield the same result when applied to any y.

From these definitions it can be shown that SKI calculus is not the minimum system that can fully perform the computations of lambda calculus, as all occurrences of I in any expression can be replaced by (SKK) or (SKS) or (SKx) for any x, and the resulting expression will yield the same result. So the "I" is merely syntactic sugar. Since I is optional, the system is also referred to as SK calculus or SK combinator calculus.

It is possible to define a complete system using only one (improper) combinator. An example is Chris Barker's iota combinator, which can be expressed in terms of S and K as follows:

ι x = x S K

It is possible to reconstruct S, K, and I from the iota combinator. Applying ι to itself gives ιι = ιSK = SSKK = SK(KK), which is functionally equivalent to I. K can be constructed by applying ι twice to I (which is equivalent to application of ι to itself): ι(ι(ιι)) = ι(ιιSK) = ι(ISK) = ι(SK) = SKSK = K.
Applying ι one more time gives ι(ι(ι(ιι))) = ιK = KSK = S.

The simplest possible term forming a basis is X = λf.f (λxyz.x z (y z)) (λxyz.x), which satisfies X X = K, and X (X X) = S.

The terms and derivations in this system can also be more formally defined.

Terms: The set T of terms is defined recursively by the following rules.
1. S, K, and I are terms.
2. If τ₁ and τ₂ are terms, then (τ₁τ₂) is a term.
3. Nothing is a term if not required to be so by the first two rules.

Derivations: A derivation is a finite sequence of terms defined recursively by the following rules (where α and ι are words over the alphabet {S, K, I, (, )} while β, γ and δ are terms). Assuming a sequence is a valid derivation to begin with, it can be extended using these rules. All derivations of length 1 are valid derivations.

An expression in the lambda calculus can be converted into an SKI combinator calculus expression in accordance with the rules of abstraction elimination.

SII is an expression that takes an argument and applies that argument to itself:

SIIx = Ix(Ix) = xx

This is also known as the U combinator, Ux = xx. One interesting property of it is that its self-application is irreducible. Or, using the equation as its definition directly, we immediately get UU = UU. Another thing is that it allows one to write a function that applies one thing to the self application of another thing:

S(Kx)(SII)y = Kxy(SIIy) = x(yy)

or it can be seen as defining yet another combinator directly, Hxy = x(yy). This function can be used to achieve recursion. If β is the function that applies α to the self application of something else, then the self-application of this β is the fixed point of that α:

ββ = α(ββ)

Or, directly again from the derived definition, Hα(Hα) = α(Hα(Hα)). If α expresses a "computational step" computed by αρν for some ρ and ν, that assumes ρν' expresses "the rest of the computation" (for some ν' that α will "compute" from ν), then its fixed point ββ expresses the whole recursive computation, since using the same function ββ for the "rest of computation" call (with ββν = α(ββ)ν) is the very definition of recursion: ρν' = ββν' = α(ββ)ν' = ... . α will have to employ some kind of conditional to stop at some "base case" and not make the recursive call then, to avoid divergence.

This can be formalized, taking β = Hα, as Y = SHH (so that Yα = Hα(Hα) = α(Hα(Hα))), which gives us one possible encoding of the Y combinator. A shorter variation replaces its two leading subterms with just SSI, since Hα(Hα) = SHHα = SSIHα. This becomes much shorter with the use of the B, C, W combinators, as the equivalent Y = BU(CBU) = BU(BWB), where U = SII is the self-application combinator. And with a pseudo-Haskell syntax it becomes the exceptionally short Y = U . (. U).

Following this approach, other fixpoint combinator definitions are possible. In a strict programming language the Y combinator will expand until stack overflow, or never halt in case of tail call optimization.[6] The Z combinator will work in strict languages (also called eager languages, where applicative evaluation order is applied).

S(K(SI))K reverses the two terms following it:

S(K(SI))Kxy = K(SI)x(Kx)y = SI(Kx)y = Iy(Kxy) = yx

It is thus equivalent to CI. And in general, S(K(Sf))K is equivalent to Cf, for any f.

SKI combinator calculus can also implement Boolean logic in the form of an if-then-else structure. An if-then-else structure consists of a Boolean expression that is either true (T) or false (F) and two arguments, such that:

Txy = x
and
Fxy = y

The key is in defining the two Boolean expressions. The first works just like one of our basic combinators:

T = K

The second is also fairly simple:

F = SK

Once true and false are defined, all Boolean logic can be implemented in terms of if-then-else structures.
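Before turning to the Boolean encodings, several of the reductions described above can be replayed directly by modelling the combinators as curried Python functions; the test values below are arbitrary.

```python
# The reduction rules above, under a behavioural reading.
I = lambda x: x
K = lambda x: lambda y: x
S = lambda x: lambda y: lambda z: x(z)(y(z))

inc = lambda n: n + 1

# Example computation from the text: SKSK evaluates to K.
assert S(K)(S)(K) is K

# SKxy always evaluates to y, so SKx is functionally equivalent to I:
assert S(K)(inc)(5) == 5 == I(5)

# I is merely syntactic sugar, e.g. I = SKK:
assert S(K)(K)(7) == 7

# Chris Barker's iota combinator, iota x = x S K, regenerates I, K and S:
iota = lambda x: x(S)(K)
assert iota(iota)(42) == 42                       # iota iota  behaves like I
assert iota(iota(iota(iota)))(1)(2) == 1          # iota(iota(iota iota)) = K
assert iota(iota(iota(iota(iota)))) is S          # one more application gives S
```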
Boolean NOT (which returns the opposite of a given Boolean) works the same as the if-then-else structure, with F and T as the second and third values, so it can be implemented as a postfix operation:

NOT = (F)(T) = (SK)(K)

If this is put in an if-then-else structure, it can be shown that this has the expected result:

(T)(F)(T) = F
(F)(F)(T) = T

Boolean OR (which returns T if either of the two Boolean values surrounding it is T) works the same as an if-then-else structure with T as the second value, so it can be implemented as an infix operation:

OR = T = K

If this is put in an if-then-else structure, it can be shown that this has the expected result:

(T)(T)(T) = T
(T)(T)(F) = T
(F)(T)(T) = T
(F)(T)(F) = F

Boolean AND (which returns T if both of the two Boolean values surrounding it are T) works the same as an if-then-else structure with F as the third value, so it can be implemented as a postfix operation:

AND = (F) = (SK)

If this is put in an if-then-else structure, it can be shown that this has the expected result:

(T)(T)(F) = T
(T)(F)(F) = F
(F)(T)(F) = F
(F)(F)(F) = F

Because this defines T, F, NOT (as a postfix operator), OR (as an infix operator), and AND (as a postfix operator) in terms of SKI notation, this proves that the SKI system can fully express Boolean logic. As the SKI calculus is complete, it is also possible to express NOT, OR and AND as prefix operators (one possible encoding is shown in the sketch below).

The combinators K and S correspond to two well-known axioms of sentential logic:

AK: A → (B → A)
AS: (A → (B → C)) → ((A → B) → (A → C))

Function application corresponds to the rule modus ponens:

MP: from A and A → B, infer B.

The axioms AK and AS, and the rule MP are complete for the implicational fragment of intuitionistic logic. In order for combinatory logic to have as a model:

This connection between the types of combinators and the corresponding logical axioms is an instance of the Curry–Howard isomorphism.

There may be multiple ways to do a reduction. All are equivalent, as long as the order of operations is not broken.
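The Boolean encodings above can likewise be checked executably. In the Python sketch below, T, F and the postfix/infix operator readings follow the text; the prefix forms at the end are one standard choice of encoding (other equivalent terms exist) and are an assumption of this sketch rather than a quotation from the article.

```python
# Boolean logic in SKI, checked executably.
I = lambda x: x
K = lambda x: lambda y: x
S = lambda x: lambda y: lambda z: x(z)(y(z))

T = K          # T x y = x
F = S(K)       # F x y = y

def as_bool(b):
    """Decode an SKI boolean by letting it choose between the combinators K and S."""
    return b(K)(S) is K

# NOT as a postfix operation:  b (F) (T)
assert not as_bool(T(F)(T)) and as_bool(F(F)(T))
# OR as an infix operation:    a (T) b
assert as_bool(T(T)(F)) and as_bool(F(T)(T)) and not as_bool(F(T)(F))
# AND as a postfix operation:  a b (F)
assert as_bool(T(T)(F)) and not as_bool(T(F)(F)) and not as_bool(F(T)(F))

# Prefix operators (one possible encoding, verified here):
NOT = S(S(I)(K(F)))(K(T))      # NOT b   reduces to  b F T
OR  = S(I)(K(T))               # OR a b  reduces to  a T b
AND = S(S)(K(K(F)))            # AND a b reduces to  a b F
assert not as_bool(NOT(T)) and as_bool(NOT(F))
assert as_bool(OR(F)(T)) and not as_bool(OR(F)(F))
assert as_bool(AND(T)(T)) and not as_bool(AND(T)(F)) and not as_bool(AND(F)(T))
```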
https://en.wikipedia.org/wiki/SKI_combinator_calculus
A supercombinator is a mathematical expression which is fully bound and self-contained. It may be either a constant or a combinator where all the subexpressions are supercombinators.

Supercombinators are used in the implementation of functional languages.

In mathematical terms, a lambda expression S is a supercombinator of arity n if it has no free variables and is of the form λx₁.λx₂…λxₙ.E (with n ≥ 0, so that lambdas are not required) such that E itself is not a lambda abstraction and any lambda abstraction in E is again a supercombinator.
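As a small illustration of the definition (using Python lambdas as stand-ins for lambda abstractions; the names are arbitrary):

```python
# A supercombinator:  \x. \y. x (x y)
# It has no free variables, and the body after the leading lambdas, x (x y),
# contains no further lambda abstractions, so the condition holds vacuously.
twice = lambda x: lambda y: x(x(y))

# Not a supercombinator:  \x. x (\y. x + y)
# The inner abstraction \y. x + y has the free variable x, so it is not itself
# a supercombinator; lambda lifting would repair this by passing x explicitly.
apply_to_adder = lambda x: x(lambda y: x + y)
```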
https://en.wikipedia.org/wiki/Supercombinator
To Mock a Mockingbird and Other Logic Puzzles: Including an Amazing Adventure in Combinatory Logic (1985, ISBN 0-19-280142-2) is a book by the mathematician and logician Raymond Smullyan. It contains many nontrivial recreational puzzles of the sort for which Smullyan is well known. It is also a gentle and humorous introduction to combinatory logic and the associated metamathematics, built on an elaborate ornithological metaphor.

Combinatory logic, functionally equivalent to the lambda calculus, is a branch of symbolic logic having the expressive power of set theory, and with deep connections to questions of computability and provability. Smullyan's exposition takes the form of an imaginary account of two men going into a forest and discussing the unusual "birds" (combinators) they find there (bird watching was a hobby of one of the founders of combinatory logic, Haskell Curry, and another founder Moses Schönfinkel's name means beautiful spark, or possibly beautiful little-finch). Each species of bird in Smullyan's forest stands for a particular kind of combinator appearing in the conventional treatment of combinatory logic. Each bird has a distinctive call, which it emits when it hears the call of another bird. Hence an initial call by certain "birds" gives rise to a cascading sequence of calls by a succession of birds.

Deep inside the forest dwells the Mockingbird, which imitates other birds hearing themselves. The resulting cascade of calls and responses analogizes to abstract models of computing. With this analogy in hand, one can explore advanced topics in the mathematical theory of computability, such as Church–Turing computability and Gödel's theorem.

While the book starts off with simple riddles, it eventually shifts to a tale of Inspector Craig of Scotland Yard, who appears in Smullyan's other books, traveling from forest to forest and learning from different professors about all the different kinds of birds. He starts off in a certain enchanted forest, then goes to an unnamed forest, then to Curry's Forest (named after Haskell Curry), then to Russell's Forest, then to The Forest Without a Name, then to Gödel's Forest and finally to The Master Forest, where he also answers The Grand Question.
https://en.wikipedia.org/wiki/To_Mock_a_Mockingbird
In mathematics, the composition operator C_φ with symbol φ is a linear operator defined by the rule C_φ(f) = f ∘ φ, where f ∘ φ denotes function composition. It is also encountered in the composition of permutations in permutation groups.

The study of composition operators is covered by AMS category 47B33.

In physics, and especially the area of dynamical systems, the composition operator is usually referred to as the Koopman operator[1][2] (and its wild surge in popularity[3] is sometimes jokingly called "Koopmania"[4]), named after Bernard Koopman. It is the left-adjoint of the transfer operator of Frobenius–Perron.

Using the language of category theory, the composition operator is a pull-back on the space of measurable functions; it is adjoint to the transfer operator in the same way that the pull-back is adjoint to the push-forward; the composition operator is the inverse image functor. Since the domain considered here is that of Borel functions, the above describes the Koopman operator as it appears in Borel functional calculus.

The domain of a composition operator can be taken more narrowly, as some Banach space, often consisting of holomorphic functions: for example, some Hardy space or Bergman space. In this case, the composition operator lies in the realm of some functional calculus, such as the holomorphic functional calculus. Interesting questions posed in the study of composition operators often relate to how the spectral properties of the operator depend on the function space. Other questions include whether C_φ is compact or trace-class; answers typically depend on how the function φ behaves on the boundary of some domain.

When the transfer operator is a left-shift operator, the Koopman operator, as its adjoint, can be taken to be the right-shift operator. An appropriate basis, explicitly manifesting the shift, can often be found in the orthogonal polynomials. When these are orthogonal on the real number line, the shift is given by the Jacobi operator.[5] When the polynomials are orthogonal on some region of the complex plane (viz, in Bergman space), the Jacobi operator is replaced by a Hessenberg operator.[6]

In mathematics, composition operators commonly occur in the study of shift operators, for example, in the Beurling–Lax theorem and the Wold decomposition. Shift operators can be studied as one-dimensional spin lattices. Composition operators appear in the theory of Aleksandrov–Clark measures. The eigenvalue equation of the composition operator is Schröder's equation, and the principal eigenfunction f(x) is often called Schröder's function or Koenigs function.

The composition operator has been used in data-driven techniques for dynamical systems in the context of dynamic mode decomposition algorithms, which approximate the modes and eigenvalues of the composition operator.
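A minimal sketch of the composition operator acting on plain Python functions; the symbol φ and the observables below are arbitrary choices, and linearity is checked numerically rather than proved.

```python
# C_phi(f) = f o phi, acting on ordinary Python functions.
def composition_operator(phi):
    """Return C_phi, which sends an observable f to f o phi."""
    def C_phi(f):
        return lambda x: f(phi(x))
    return C_phi

phi = lambda x: x + 1          # the "symbol" (e.g. one step of a dynamical system)
C = composition_operator(phi)

f = lambda x: x ** 2           # an observable
g = C(f)                       # g = f o phi, i.e. g(x) = (x + 1) ** 2
print(g(3))                    # 16

# Linearity of C_phi:  C_phi(a*f + b*h) = a*C_phi(f) + b*C_phi(h)
h = lambda x: 3 * x
lhs = C(lambda x: 2 * f(x) + 5 * h(x))
rhs = lambda x: 2 * C(f)(x) + 5 * C(h)(x)
assert all(lhs(x) == rhs(x) for x in range(10))
```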
https://en.wikipedia.org/wiki/Composition_operator
In mathematics, a polynomial decomposition expresses a polynomial f as the functional composition g ∘ h of polynomials g and h, where g and h have degree greater than 1; it is an algebraic functional decomposition. Algorithms are known for decomposing univariate polynomials in polynomial time.

Polynomials which are decomposable in this way are composite polynomials; those which are not are indecomposable polynomials or sometimes prime polynomials[1] (not to be confused with irreducible polynomials, which cannot be factored into products of polynomials). The degree of a composite polynomial is always a composite number, the product of the degrees of the composed polynomials.

The rest of this article discusses only univariate polynomials; algorithms also exist for multivariate polynomials of arbitrary degree.[2]

In the simplest case, one of the polynomials is a monomial; the ring operator symbol ∘ is used to denote function composition.

A polynomial may have distinct decompositions into indecomposable polynomials, f = g₁ ∘ g₂ ∘ ⋯ ∘ g_m = h₁ ∘ h₂ ∘ ⋯ ∘ h_n, where g_i ≠ h_i for some i. The restriction in the definition to polynomials of degree greater than one excludes the infinitely many decompositions possible with linear polynomials. Joseph Ritt proved that m = n, and that the degrees of the components are the same, but possibly in different order; this is Ritt's polynomial decomposition theorem.[1][3] For example, x² ∘ x³ = x³ ∘ x².

A polynomial decomposition may enable more efficient evaluation of a polynomial. For example, a suitably decomposable polynomial can be calculated with 3 multiplications and 3 additions using its decomposition, while Horner's method would require 7 multiplications and 8 additions for the same polynomial.

A polynomial decomposition enables calculation of symbolic roots using radicals, even for some irreducible polynomials. This technique is used in many computer algebra systems.[4] For example, the roots of an irreducible polynomial that admits such a decomposition can be obtained by solving the composed factors in turn.[5] Even in the case of quartic polynomials, where there is an explicit formula for the roots, solving via a decomposition often gives a simpler form; straightforward application of the quartic formula gives equivalent results, but in a form that is difficult to simplify and difficult to understand.

The first algorithm for polynomial decomposition was published in 1985,[6] though it had been discovered in 1976,[7] and implemented in the Macsyma/Maxima computer algebra system.[8] That algorithm takes exponential time in the worst case, but works independently of the characteristic of the underlying field. A 1989 algorithm runs in polynomial time but with restrictions on the characteristic.[9] A 2014 algorithm calculates a decomposition in polynomial time and without restrictions on the characteristic.[10]
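A hedged, self-contained illustration of the evaluation point above, using a small polynomial of my own choosing rather than the example from the original article: f(x) = x⁴ + 2x³ + 2x² + x decomposes as g ∘ h with g(y) = y² + y and h(x) = x² + x, and the decomposed evaluation uses 2 multiplications against Horner's 4.

```python
# Evaluating a polynomial through a decomposition f = g o h (illustrative example).

def h(x):
    return x * x + x            # 1 multiplication, 1 addition

def g(y):
    return y * y + y            # 1 multiplication, 1 addition

def f_via_decomposition(x):
    return g(h(x))              # 2 multiplications, 2 additions in total

def f_via_horner(x):
    # Horner evaluation of x^4 + 2x^3 + 2x^2 + x + 0: 4 multiplications, 4 additions.
    result = 1
    for c in (2, 2, 1, 0):
        result = result * x + c
    return result

assert all(f_via_decomposition(x) == f_via_horner(x) for x in range(-5, 6))
```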
https://en.wikipedia.org/wiki/Polynomial_decomposition
In mathematics, a Carleman matrix is a matrix used to convert function composition into matrix multiplication. It is often used in iteration theory to find the continuous iteration of functions which cannot be iterated by pattern recognition alone. Other uses of Carleman matrices occur in the theory of probability generating functions, and Markov chains.

The Carleman matrix of an infinitely differentiable function f(x) is defined as

M[f]_{jk} = (1/k!) [dᵏ/dxᵏ (f(x))ʲ]_{x=0},

so as to satisfy the (Taylor series) equation

(f(x))ʲ = Σ_k M[f]_{jk} xᵏ.

For instance, the computation of f(x) by

f(x) = Σ_k M[f]_{1,k} xᵏ

simply amounts to the dot-product of row 1 of M[f] with a column vector [1, x, x², x³, ...]ᵀ. The entries of M[f] in the next row give the 2nd power of f(x):

(f(x))² = Σ_k M[f]_{2,k} xᵏ,

and also, in order to have the zeroth power of f(x) in M[f], we adopt the row 0 containing zeros everywhere except the first position, such that

(f(x))⁰ = 1 = Σ_k M[f]_{0,k} xᵏ.

Thus, the dot product of M[f] with the column vector [1, x, x², ...]ᵀ yields the column vector [1, f(x), f(x)², ...]ᵀ, i.e.,

M[f] [1, x, x², ...]ᵀ = [1, f(x), f(x)², ...]ᵀ.

A generalization of the Carleman matrix of a function can be defined around any point x₀, as M[f]_{x₀} = M[g] where g(x) = f(x + x₀) − x₀. This allows the matrix power to be related to iterated composition of f.

More generally, given a family of basis functions ψ_n and coefficient functionals c_n with h = Σ_n c_n(h)·ψ_n, one may define a generalized matrix G[f]_{mn} = c_n(ψ_m ∘ f). If we set ψ_n(x) = xⁿ we have the Carleman matrix. Because h(x) = Σ_n c_n(h)·ψ_n(x) = Σ_n c_n(h)·xⁿ, we know that the n-th coefficient c_n(h) must be the n-th coefficient of the Taylor series of h. Therefore c_n(h) = (1/n!) h⁽ⁿ⁾(0), and so

G[f]_{mn} = c_n(ψ_m ∘ f) = c_n((f(x))ᵐ) = (1/n!) [dⁿ/dxⁿ (f(x))ᵐ]_{x=0},

which is the Carleman matrix given above. (It is important to note that this is not an orthonormal basis.)

If {e_n(x)}_n is an orthonormal basis for a Hilbert space with a defined inner product ⟨f, g⟩, we can set ψ_n = e_n and c_n(h) will be ⟨h, e_n⟩. Then G[f]_{mn} = c_n(e_m ∘ f) = ⟨e_m ∘ f, e_n⟩.

If e_n(x) = e^{inx} we have the analogue for Fourier series. Let ĉ_n and Ĝ represent the Carleman coefficient and matrix in the Fourier basis. Because the basis is orthogonal, the coefficients are obtained as inner products against the basis functions; therefore Ĝ[f]_{mn} = ĉ_n(e_m ∘ f) = ⟨e_m ∘ f, e_n⟩, which in this basis is a Fourier integral of e_m ∘ f.

Carleman matrices satisfy the fundamental relationship

M[f ∘ g] = M[f] M[g],

which makes the Carleman matrix M a (direct) representation of f(x). Here the term f ∘ g denotes the composition of functions f(g(x)).
Other properties include explicit forms for the Carleman matrices of a constant, of the identity function, of a constant addition, of the successor function (equivalent to the binomial coefficients), of the logarithm (related to the signed and to the unsigned Stirling numbers of the first kind scaled by factorials), of the exponential function (related to the Stirling numbers of the second kind scaled by factorials), of exponential functions, of a constant multiple, of a linear function, and of a general power series f(x) = Σ_{k=1}^∞ f_k xᵏ or f(x) = Σ_{k=0}^∞ f_k xᵏ.

The Bell matrix or the Jabotinsky matrix of a function f(x) is defined as[1][2][3]

B[f]_{jk} = M[f]_{kj} = (1/j!) [dʲ/dxʲ (f(x))ᵏ]_{x=0},

so as to satisfy the equation

(f(x))ᵏ = Σ_j B[f]_{jk} xʲ.

These matrices were developed in 1947 by Eri Jabotinsky to represent convolutions of polynomials.[4] It is the transpose of the Carleman matrix and satisfies B[f ∘ g] = B[g] B[f], which makes the Bell matrix B an anti-representation of f(x).
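The composition law M[f ∘ g] = M[f] M[g] can be verified numerically on truncated matrices. The following pure-Python sketch is my own; the two polynomials (chosen with zero constant term so that the truncation stays exact) and the truncation order are arbitrary.

```python
# Numerical check of M[f o g] = M[f] M[g] for truncated Carleman matrices.
N = 6  # truncation order: indices 0..N

def mul(p, q):
    """Product of coefficient lists, truncated to degree N."""
    r = [0.0] * (N + 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            if i + j <= N:
                r[i + j] += a * b
    return r

def power(p, m):
    r = [1.0] + [0.0] * N          # the constant polynomial 1
    for _ in range(m):
        r = mul(r, p)
    return r

def compose(p, q):
    """p(q(x)) as a truncated coefficient list."""
    r = [0.0] * (N + 1)
    for m, a in enumerate(p):
        for k, b in enumerate(power(q, m)):
            r[k] += a * b
    return r

def carleman(p):
    """M[p][j][k] = coefficient of x^k in p(x)^j, for j, k = 0..N."""
    return [power(p, j) for j in range(N + 1)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(N + 1)) for j in range(N + 1)]
            for i in range(N + 1)]

f = [0.0, 2.0, 1.0]        # f(x) = 2x + x^2
g = [0.0, 1.0, 0.0, 3.0]   # g(x) = x + 3x^3

lhs = carleman(compose(f, g))           # M[f o g]
rhs = matmul(carleman(f), carleman(g))  # M[f] M[g]
assert all(abs(lhs[i][j] - rhs[i][j]) < 1e-9 for i in range(N + 1) for j in range(N + 1))

# Row 1 of M[f] against the column (1, x, x^2, ...) reproduces f(x):
x = 0.3
assert abs(sum(c * x**k for k, c in enumerate(carleman(f)[1])) - (2 * x + x * x)) < 1e-12
```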
https://en.wikipedia.org/wiki/Carleman_matrix
In mathematics and computer science, currying is the technique of translating a function that takes multiple arguments into a sequence of families of functions, each taking a single argument.

In the prototypical example, one begins with a function f : (X × Y) → Z that takes two arguments, one from X and one from Y, and produces objects in Z. The curried form of this function treats the first argument as a parameter, so as to create a family of functions f_x : Y → Z. The family is arranged so that for each object x in X, there is exactly one function f_x.

In this example, curry itself becomes a function that takes f as an argument, and returns a function that maps each x to f_x. The proper notation for expressing this is verbose. The function f belongs to the set of functions (X × Y) → Z. Meanwhile, f_x belongs to the set of functions Y → Z. Thus, something that maps x to f_x will be of the type X → [Y → Z]. With this notation, curry is a function that takes objects from the first set, and returns objects in the second set, and so one writes curry : [(X × Y) → Z] → (X → [Y → Z]). This is a somewhat informal example; more precise definitions of what is meant by "object" and "function" are given below. These definitions vary from context to context, and take different forms, depending on the theory that one is working in.

Currying is related to, but not the same as, partial application.[1][2] The example above can be used to illustrate partial application; it is quite similar. Partial application is the function apply that takes the pair f and x together as arguments, and returns f_x. Using the same notation as above, partial application has the signature apply : ([(X × Y) → Z] × X) → [Y → Z]. Written this way, application can be seen to be adjoint to currying.

The currying of a function with more than two arguments can be defined by induction.

Currying is useful in both practical and theoretical settings. In functional programming languages, and many others, it provides a way of automatically managing how arguments are passed to functions and exceptions. In theoretical computer science, it provides a way to study functions with multiple arguments in simpler theoretical models which provide only one argument. The most general setting for the strict notion of currying and uncurrying is in the closed monoidal categories, which underpins a vast generalization of the Curry–Howard correspondence of proofs and programs to a correspondence with many other structures, including quantum mechanics, cobordisms and string theory.[3]

The concept of currying was introduced by Gottlob Frege,[4][5] developed by Moses Schönfinkel,[6][5][7][8][9][10][11] and further developed by Haskell Curry.[8][10][12][13]

Uncurrying is the dual transformation to currying, and can be seen as a form of defunctionalization.
It takes a function f whose return value is another function g, and yields a new function f′ that takes as parameters the arguments for both f and g, and returns, as a result, the application of f and subsequently, g, to those arguments. The process can be iterated.

Currying provides a way for working with functions that take multiple arguments, and using them in frameworks where functions might take only one argument. For example, some analytical techniques can only be applied to functions with a single argument. Practical functions frequently take more arguments than this. Frege showed that it was sufficient to provide solutions for the single argument case, as it was possible to transform a function with multiple arguments into a chain of single-argument functions instead. This transformation is the process now known as currying.[14] All "ordinary" functions that might typically be encountered in mathematical analysis or in computer programming can be curried. However, there are categories in which currying is not possible; the most general categories which allow currying are the closed monoidal categories.

Some programming languages almost always use curried functions to achieve multiple arguments; notable examples are ML and Haskell, where in both cases all functions have exactly one argument. This property is inherited from lambda calculus, where multi-argument functions are usually represented in curried form.

Currying is related to, but not the same as partial application.[1][2] In practice, the programming technique of closures can be used to perform partial application and a kind of currying, by hiding arguments in an environment that travels with the curried function.

The "Curry" in "Currying" is a reference to logician Haskell Curry, who used the concept extensively, but Moses Schönfinkel had the idea six years before Curry.[10] The alternative name "Schönfinkelisation" has been proposed.[15] In the mathematical context, the principle can be traced back to work in 1893 by Frege.[4][5]

The originator of the word "currying" is not clear. David Turner says the word was coined by Christopher Strachey in his 1967 lecture notes Fundamental Concepts in Programming Languages,[16] but that source introduces the concept as "a device originated by Schönfinkel", and the term "currying" is not used, while Curry is mentioned later in the context of higher-order functions.[7] John C. Reynolds defined "currying" in a 1972 paper, but did not claim to have coined the term.[8]

Currying is most easily understood by starting with an informal definition, which can then be molded to fit many different domains. First, there is some notation to be established. The notation X → Y denotes all functions from X to Y. If f is such a function, we write f : X → Y. Let X × Y denote the ordered pairs of the elements of X and Y respectively, that is, the Cartesian product of X and Y. Here, X and Y may be sets, or they may be types, or they may be other kinds of objects, as explored below.

Given a function

f : (X × Y) → Z,

currying constructs a new function

g : X → (Y → Z).

That is, g takes an argument of type X and returns a function of type Y → Z.
It is defined by

g(x)(y) = f(x, y)

for x of type X and y of type Y. We then also write curry(f) = g.

Uncurrying is the reverse transformation, and is most easily understood in terms of its right adjoint, the function apply.

In set theory, the notation Y^X is used to denote the set of functions from the set X to the set Y. Currying is the natural bijection between the set A^(B×C) of functions from B × C to A, and the set (A^C)^B of functions from B to the set of functions from C to A. In symbols:

A^(B×C) ≅ (A^C)^B.

Indeed, it is this natural bijection that justifies the exponential notation for the set of functions. As is the case in all instances of currying, the formula above describes an adjoint pair of functors: for every fixed set C, the functor B ↦ B × C is left adjoint to the functor A ↦ A^C.

In the category of sets, the object Y^X is called the exponential object.

In the theory of function spaces, such as in functional analysis or homotopy theory, one is commonly interested in continuous functions between topological spaces. One writes Hom(X, Y) (the Hom functor) for the set of all functions from X to Y, and uses the notation Y^X to denote the subset of continuous functions. Here, curry is the bijection from Hom(X × Y, Z) to Hom(X, Hom(Y, Z)), while uncurrying is the inverse map. If the set Y^X of continuous functions from X to Y is given the compact-open topology, and if the space Y is locally compact Hausdorff, then

curry : Z^(X×Y) → (Z^Y)^X

is a homeomorphism. This is also the case when X, Y and Y^X are compactly generated,[17]: chapter 5 [18] although there are more cases.[19][20]

One useful corollary is that a function is continuous if and only if its curried form is continuous. Another important result is that the application map, usually called "evaluation" in this context, is continuous (note that eval is a strictly different concept in computer science). That is,

eval : Y^X × X → Y,  (f, x) ↦ f(x)

is continuous when Y^X is compact-open and Y locally compact Hausdorff.[21] These two results are central for establishing the continuity of homotopy, i.e. when X is the unit interval I, so that Z^(I×Y) ≅ (Z^Y)^I can be thought of as either a homotopy of two functions from Y to Z, or, equivalently, a single (continuous) path in Z^Y.

In algebraic topology, currying serves as an example of Eckmann–Hilton duality, and, as such, plays an important role in a variety of different settings. For example, loop space is adjoint to reduced suspension; this is commonly written as

[ΣA, B] ≅ [A, ΩB],

where [A, B] is the set of homotopy classes of maps A → B, ΣA is the suspension of A, and ΩA is the loop space of A.
In essence, the suspension ΣX can be seen as the cartesian product of X with the unit interval, modulo an equivalence relation to turn the interval into a loop. The curried form then maps the space X to the space of functions from loops into Z, that is, from X into ΩZ.[21] Then curry is the adjoint functor that maps suspensions to loop spaces, and uncurrying is the dual.[21]

The duality between the mapping cone and the mapping fiber (cofibration and fibration)[17]: chapters 6,7 can be understood as a form of currying, which in turn leads to the duality of the long exact and coexact Puppe sequences.

In homological algebra, the relationship between currying and uncurrying is known as tensor-hom adjunction. Here, an interesting twist arises: the Hom functor and the tensor product functor might not lift to an exact sequence; this leads to the definition of the Ext functor and the Tor functor.

In order theory, that is, the theory of lattices of partially ordered sets, curry is a continuous function when the lattice is given the Scott topology.[22] Scott-continuous functions were first investigated in the attempt to provide a semantics for lambda calculus (as ordinary set theory is inadequate to do this). More generally, Scott-continuous functions are now studied in domain theory, which encompasses the study of denotational semantics of computer algorithms. Note that the Scott topology is quite different from many common topologies one might encounter in the category of topological spaces; the Scott topology is typically finer, and is not sober.

The notion of continuity makes its appearance in homotopy type theory, where, roughly speaking, two computer programs can be considered to be homotopic, i.e. compute the same results, if they can be "continuously" refactored from one to the other.

In theoretical computer science, currying provides a way to study functions with multiple arguments in very simple theoretical models, such as the lambda calculus, in which functions only take a single argument. Consider a function f(x, y) taking two arguments, and having the type (X × Y) → Z, which should be understood to mean that x must have the type X, y must have the type Y, and the function itself returns the type Z. The curried form of f is defined as

curry(f) = λx.λy.f(x, y),

where λ is the abstractor of lambda calculus. Since curry takes, as input, functions with the type (X × Y) → Z, one concludes that the type of curry itself is

curry : ((X × Y) → Z) → (X → (Y → Z)).

The → operator is often considered right-associative, so the curried function type X → (Y → Z) is often written as X → Y → Z. Conversely, function application is considered to be left-associative, so that f(x, y) is equivalent to

((f x) y) = f x y.

That is, the parentheses are not required to disambiguate the order of the application.

Curried functions may be used in any programming language that supports closures; however, uncurried functions are generally preferred for efficiency reasons, since the overhead of partial application and closure creation can then be avoided for most function calls.

In type theory, the general idea of a type system in computer science is formalized into a specific algebra of types.
For example, when writing f : X → Y, the intent is that X and Y are types, while the arrow → is a type constructor, specifically, the function type or arrow type. Similarly, the Cartesian product X × Y of types is constructed by the product type constructor ×.

The type-theoretical approach is expressed in programming languages such as ML and the languages derived from and inspired by it: Caml, Haskell, and F#. The type-theoretical approach provides a natural complement to the language of category theory, as discussed below. This is because categories, and specifically, monoidal categories, have an internal language, with simply typed lambda calculus being the most prominent example of such a language. It is important in this context, because it can be built from a single type constructor, the arrow type. Currying then endows the language with a natural product type. The correspondence between objects in categories and types then allows programming languages to be re-interpreted as logics (via the Curry–Howard correspondence), and as other types of mathematical systems, as explored further, below.

Under the Curry–Howard correspondence, the existence of currying and uncurrying is equivalent to the logical theorem ((A ∧ B) → C) ⇔ (A → (B → C)) (also known as exportation), as tuples (product type) correspond to conjunction in logic, and function type corresponds to implication.

The exponential object Q^P in the category of Heyting algebras is normally written as material implication P → Q. Distributive Heyting algebras are Boolean algebras, and the exponential object has the explicit form ¬P ∨ Q, thus making it clear that the exponential object really is material implication.[23]

The above notions of currying and uncurrying find their most general, abstract statement in category theory. Currying is a universal property of an exponential object, and gives rise to an adjunction in cartesian closed categories. That is, there is a natural isomorphism between the morphisms from a binary product f : (X × Y) → Z and the morphisms to an exponential object g : X → Z^Y.

This generalizes to a broader result in closed monoidal categories: currying is the statement that the tensor product and the internal Hom are adjoint functors; that is, for every object B there is a natural isomorphism

Hom(A ⊗ B, C) ≅ Hom(A, B ⇒ C).

Here, Hom denotes the (external) Hom-functor of all morphisms in the category, while B ⇒ C denotes the internal hom functor in the closed monoidal category. For the category of sets, the two are the same. When the product is the cartesian product, then the internal hom B ⇒ C becomes the exponential object C^B.

Currying can break down in one of two ways. One is if a category is not closed, and thus lacks an internal hom functor (possibly because there is more than one choice for such a functor). Another way is if it is not monoidal, and thus lacks a product (that is, lacks a way of writing down pairs of objects). Categories that do have both products and internal homs are exactly the closed monoidal categories.
The setting of cartesian closed categories is sufficient for the discussion ofclassical logic; the more general setting of closed monoidal categories is suitable forquantum computation.[24] The difference between these two is that theproductfor cartesian categories (such as thecategory of sets,complete partial ordersorHeyting algebras) is just theCartesian product; it is interpreted as anordered pairof items (or a list).Simply typed lambda calculusis theinternal languageof cartesian closed categories; and it is for this reason that pairs and lists are the primarytypesin thetype theoryofLISP,Schemeand manyfunctional programming languages. By contrast, the product formonoidal categories(such asHilbert spaceand thevector spacesoffunctional analysis) is thetensor product. The internal language of such categories islinear logic, a form ofquantum logic; the correspondingtype systemis thelinear type system. Such categories are suitable for describingentangled quantum states, and, more generally, allow a vast generalization of theCurry–Howard correspondencetoquantum mechanics, tocobordismsinalgebraic topology, and tostring theory.[3]Thelinear type system, andlinear logicare useful for describingsynchronization primitives, such as mutual exclusion locks, and the operation of vending machines. Currying and partial function application are often conflated.[1][2]One of the significant differences between the two is that a call to a partially applied function returns the result right away, not another function down the currying chain; this distinction can be illustrated clearly for functions whosearityis greater than two.[25] Given a function of typef:(X×Y×Z)→N{\displaystyle f\colon (X\times Y\times Z)\to N}, currying producescurry(f):X→(Y→(Z→N)){\displaystyle {\text{curry}}(f)\colon X\to (Y\to (Z\to N))}. That is, while an evaluation of the first function might be represented asf(1,2,3){\displaystyle f(1,2,3)}, evaluation of the curried function would be represented asfcurried(1)(2)(3){\displaystyle f_{\text{curried}}(1)(2)(3)}, applying each argument in turn to a single-argument function returned by the previous invocation. Note that after callingfcurried(1){\displaystyle f_{\text{curried}}(1)}, we are left with a function that takes a single argument and returns another function, not a function that takes two arguments. In contrast,partial function applicationrefers to the process of fixing a number of arguments to a function, producing another function of smaller arity. Given the definition off{\displaystyle f}above, we might fix (or 'bind') the first argument, producing a function of typepartial(f):(Y×Z)→N{\displaystyle {\text{partial}}(f)\colon (Y\times Z)\to N}. Evaluation of this function might be represented asfpartial(2,3){\displaystyle f_{\text{partial}}(2,3)}. Note that the result of partial function application in this case is a function that takes two arguments. Intuitively, partial function application says "if you fix the firstargumentof the function, you get a function of the remaining arguments". For example, if functiondivstands for the division operationx/y, thendivwith the parameterxfixed at 1 (i.e.,div1) is another function: the same as the functioninvthat returns the multiplicative inverse of its argument, defined byinv(y) = 1/y. The practical motivation for partial application is that very often the functions obtained by supplying some but not all of the arguments to a function are useful; for example, many languages have a function or operator similar toplus_one. 
Partial application makes it easy to define these functions, for example by creating a function that represents the addition operator with 1 bound as its first argument. Partial application can be seen as evaluating a curried function at a fixed point: given f : (X × Y × Z) → N and a ∈ X, then curry(partial(f)_a)(y)(z) = curry(f)(a)(y)(z), or simply partial(f)_a = curry_1(f)(a), where curry_1 curries f's first parameter. Thus, partial application is reduced to a curried function evaluated at a fixed point. Conversely, a curried function evaluated at a fixed point is, trivially, a partial application. Note also that, given any function f(x, y), a function g(y, x) may be defined such that g(y, x) = f(x, y); therefore a partial application in any argument position may be reduced to a single curry operation on a suitable reordering of the inputs. As such, curry is more suitably defined as an operation which, in many theoretical cases, is applied recursively, but which is theoretically indistinguishable (when considered as an operation) from a partial application. A partial application can therefore be defined as the result of a single application of the curry operator on some ordering of the inputs of some function.
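A small Python sketch of the distinction, using closures and the standard functools.partial; the helper names (f, curry3, plus_one) are invented for the example:

    from functools import partial

    def f(x, y, z):              # plays the role of f : (X × Y × Z) -> N
        return x + 10 * y + 100 * z

    def curry3(g):
        """curry(f) : X -> (Y -> (Z -> N)), built from nested closures."""
        return lambda x: lambda y: lambda z: g(x, y, z)

    f_curried = curry3(f)
    step = f_curried(1)          # a function expecting one more argument, not two
    print(f_curried(1)(2)(3))    # 321 -- arguments applied one at a time

    f_partial = partial(f, 1)    # partial(f) : (Y × Z) -> N, first argument fixed
    print(f_partial(2, 3))       # 321 -- remaining arguments applied together

    plus_one = partial(lambda a, b: a + b, 1)   # the "plus_one" idea: addition with 1 bound
    print(plus_one(41))          # 42

Calling f_curried(1) yields a function that still expects its remaining arguments one at a time, whereas partial(f, 1) yields a function that takes the remaining two arguments together.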
https://en.wikipedia.org/wiki/Currying
Inobject-oriented programming,inheritanceis the mechanism of basing anobjectorclassupon another object (prototype-based inheritance) or class (class-based inheritance), retaining similarimplementation. Also defined as deriving new classes (sub classes) from existing ones such as super class orbase classand then forming them into a hierarchy of classes. In most class-based object-oriented languages likeC++, an object created through inheritance, a "child object", acquires all the properties and behaviors of the "parent object", with the exception of:constructors, destructors,overloaded operatorsandfriend functionsof the base class. Inheritance allows programmers to create classes that are built upon existing classes,[1]to specify a new implementation while maintaining the same behaviors (realizing an interface), to reuse code and to independently extend original software via public classes andinterfaces. The relationships of objects or classes through inheritance give rise to adirected acyclic graph. An inherited class is called asubclassof its parent class or super class. The terminheritanceis loosely used for both class-based and prototype-based programming, but in narrow use the term is reserved for class-based programming (one classinherits fromanother), with the corresponding technique in prototype-based programming being instead calleddelegation(one objectdelegates toanother). Class-modifying inheritance patterns can be pre-defined according to simple network interface parameters such that inter-language compatibility is preserved.[2][3] Inheritance should not be confused withsubtyping.[4][5]In some languages inheritance and subtyping agree,[a]whereas in others they differ; in general, subtyping establishes anis-arelationship, whereas inheritance only reuses implementation and establishes a syntactic relationship, not necessarily a semantic relationship (inheritance does not ensure behavioral subtyping). To distinguish these concepts, subtyping is sometimes referred to asinterface inheritance(without acknowledging that the specialization of type variables also induces a subtyping relation), whereas inheritance as defined here is known asimplementation inheritanceorcode inheritance.[6]Still, inheritance is a commonly used mechanism for establishing subtype relationships.[7] Inheritance is contrasted withobject composition, where one objectcontainsanother object (or objects of one class contain objects of another class); seecomposition over inheritance. In contrast to subtyping’sis-arelationship, composition implements ahas-arelationship. Mathematically speaking, inheritance in any system of classes induces astrict partial orderon the set of classes in that system. In 1966,Tony Hoarepresented some remarks on records, and in particular, the idea of record subclasses, record types with common properties but discriminated by a variant tag and having fields private to the variant.[8]Influenced by this, in 1967Ole-Johan DahlandKristen Nygaardpresented a design that allowed specifying objects that belonged to different classes but had common properties. The common properties were collected in a superclass, and each superclass could itself potentially have a superclass. The values of a subclass were thus compound objects, consisting of some number of prefix parts belonging to various superclasses, plus a main part belonging to the subclass. These parts were all concatenated together.[9]The attributes of a compound object would be accessible by dot notation. 
This idea was first adopted in theSimula67 programming language.[10]The idea then spread toSmalltalk,C++,Java,Python, and many other languages. There are various types of inheritance, based on paradigm and specific language.[11] "Multiple inheritance... was widely supposed to be very difficult to implement efficiently. For example, in a summary of C++ in his book onObjective C,Brad Coxactually claimed that adding multiple inheritance to C++ was impossible. Thus, multiple inheritance seemed more of a challenge. Since I had considered multiple inheritance as early as 1982 and found a simple and efficient implementation technique in 1984, I couldn't resist the challenge. I suspect this to be the only case in which fashion affected the sequence of events."[12] Subclasses,derived classes,heir classes, orchild classesaremodularderivative classes that inherit one or morelanguageentities from one or more other classes (calledsuperclass,base classes, orparent classes). The semantics of class inheritance vary from language to language, but commonly the subclass automatically inherits theinstance variablesandmember functionsof its superclasses. The general form of defining a derived class is:[13] Some languages also support the inheritance of other constructs. For example, inEiffel,contractsthat define the specification of a class are also inherited by heirs. The superclass establishes a common interface and foundational functionality, which specialized subclasses can inherit, modify, and supplement. The software inherited by a subclass is consideredreusedin the subclass. A reference to an instance of a class may actually be referring to one of its subclasses. The actual class of the object being referenced is impossible to predict atcompile-time. A uniform interface is used to invoke the member functions of objects of a number of different classes. Subclasses may replace superclass functions with entirely new functions that must share the samemethod signature. In some languages a class may be declared asnon-subclassableby adding certainclass modifiersto the class declaration. Examples include thefinalkeyword inJavaandC++11onwards or thesealedkeyword in C#. Such modifiers are added to the class declaration before theclasskeyword and the class identifier declaration. Such non-subclassable classes restrictreusability, particularly when developers only have access to precompiledbinariesand notsource code. A non-subclassable class has no subclasses, so it can be easily deduced atcompile timethat references or pointers to objects of that class are actually referencing instances of that class and not instances of subclasses (they do not exist) or instances of superclasses (upcastinga reference type violates the type system). Because the exact type of the object being referenced is known before execution,early binding(also calledstatic dispatch) can be used instead oflate binding(also calleddynamic dispatch), which requires one or morevirtual method tablelookups depending on whethermultiple inheritanceor onlysingle inheritanceare supported in the programming language that is being used. Just as classes may be non-subclassable, method declarations may contain method modifiers that prevent the method from being overridden (i.e. replaced with a new function with the same name and type signature in a subclass). Aprivatemethod is un-overridable simply because it is not accessible by classes other than the class it is a member function of (this is not true for C++, though). 
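Python has no final or sealed modifier that is enforced at run time (typing.final is only a hint to type checkers); one possible, purely illustrative way to reject overrides of a particular method at class-creation time is sketched below. The method name serialize is hypothetical.

    class Base:
        def serialize(self):
            return "canonical representation"

        def __init_subclass__(cls, **kwargs):
            super().__init_subclass__(**kwargs)
            # Reject any subclass that tries to override serialize().
            if "serialize" in cls.__dict__:
                raise TypeError("serialize() may not be overridden")

    class Ok(Base):
        pass                      # fine: serialize() is inherited unchanged

    # class Bad(Base):
    #     def serialize(self):    # would raise TypeError at class-definition time
    #         return "something else"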
Afinalmethod in Java, asealedmethod in C# or afrozenfeature in Eiffel cannot be overridden. If a superclass method is avirtual method, then invocations of the superclass method will bedynamically dispatched. Some languages require that method be specifically declared as virtual (e.g. C++), and in others, all methods are virtual (e.g. Java). An invocation of a non-virtual method will always be statically dispatched (i.e. the address of the function call is determined at compile-time). Static dispatch is faster than dynamic dispatch and allows optimizations such asinline expansion. The following table shows which variables and functions get inherited dependent on the visibility given when deriving the class, using the terminology established by C++.[14] Inheritance is used to co-relate two or more classes to each other. Manyobject-oriented programming languagespermit a class or object to replace the implementation of an aspect—typically a behavior—that it has inherited. This process is calledoverriding. Overriding introduces a complication: which version of the behavior does an instance of the inherited class use—the one that is part of its own class, or the one from the parent (base) class? The answer varies between programming languages, and some languages provide the ability to indicate that a particular behavior is not to be overridden and should behave as defined by the base class. For instance, in C#, the base method or property can only be overridden in a subclass if it is marked with the virtual, abstract, or override modifier, while in programming languages such as Java, different methods can be called to override other methods.[15]An alternative to overriding ishidingthe inherited code. Implementation inheritance is the mechanism whereby a subclassre-usescode in a base class. By default the subclass retains all of the operations of the base class, but the subclass mayoverridesome or all operations, replacing the base-class implementation with its own. In the following Python example, subclassesSquareSumComputerandCubeSumComputeroverride thetransform()method of the base classSumComputer. The base class comprises operations to compute the sum of thesquaresbetween two integers. The subclass re-uses all of the functionality of the base class with the exception of the operation that transforms a number into its square, replacing it with an operation that transforms a number into itssquareandcuberespectively. The subclasses therefore compute the sum of the squares/cubes between two integers. Below is an example of Python. In most quarters, class inheritance for the sole purpose of code reuse has fallen out of favor.[citation needed]The primary concern is that implementation inheritance does not provide any assurance ofpolymorphicsubstitutability—an instance of the reusing class cannot necessarily be substituted for an instance of the inherited class. An alternative technique, explicitdelegation, requires more programming effort, but avoids the substitutability issue.[citation needed]In C++ private inheritance can be used as a form ofimplementation inheritancewithout substitutability. Whereas public inheritance represents an "is-a" relationship and delegation represents a "has-a" relationship, private (and protected) inheritance can be thought of as an "is implemented in terms of" relationship.[16] Another frequent use of inheritance is to guarantee that classes maintain a certain common interface; that is, they implement the same methods. 
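A sketch consistent with that description (the class and method names follow the text; the exact bodies are assumed):

    class SumComputer:
        """Sums transform(i) for the integers i in the half-open range [a, b)."""
        def __init__(self, a, b):
            self.a = a
            self.b = b

        def transform(self, x):
            return x * x              # the base class computes a sum of squares

        def inputs(self):
            return range(self.a, self.b)

        def compute(self):
            return sum(self.transform(n) for n in self.inputs())

    class SquareSumComputer(SumComputer):
        def transform(self, x):
            return x * x              # overrides transform(): square

    class CubeSumComputer(SumComputer):
        def transform(self, x):
            return x * x * x          # overrides transform(): cube

    print(SquareSumComputer(1, 4).compute())   # 1 + 4 + 9 = 14
    print(CubeSumComputer(1, 4).compute())     # 1 + 8 + 27 = 36

The subclasses reuse __init__, inputs() and compute() unchanged and replace only transform(), which is the overriding described above.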
The parent class can be a combination of implemented operations and operations that are to be implemented in the child classes. Often, there is no interface change between the supertype and subtype- the child implements the behavior described instead of its parent class.[17] Inheritance is similar to but distinct fromsubtyping.[4]Subtyping enables a giventypeto be substituted for another type or abstraction and is said to establish anis-arelationship between the subtype and some existing abstraction, either implicitly or explicitly, depending on language support. The relationship can be expressed explicitly via inheritance in languages that support inheritance as a subtyping mechanism. For example, the following C++ code establishes an explicit inheritance relationship between classesBandA, whereBis both a subclass and a subtype ofAand can be used as anAwherever aBis specified (via a reference, a pointer or the object itself). In programming languages that do not support inheritance as asubtypingmechanism, the relationship between a base class and a derived class is only a relationship between implementations (a mechanism for code reuse), as compared to a relationship betweentypes. Inheritance, even in programming languages that support inheritance as a subtyping mechanism, does not necessarily entailbehavioral subtyping. It is entirely possible to derive a class whose object will behave incorrectly when used in a context where the parent class is expected; see theLiskov substitution principle.[18](Compareconnotation/denotation.) In some OOP languages, the notions of code reuse and subtyping coincide because the only way to declare a subtype is to define a new class that inherits the implementation of another. Using inheritance extensively in designing a program imposes certain constraints. For example, consider a classPersonthat contains a person's name, date of birth, address and phone number. We can define a subclass ofPersoncalledStudentthat contains the person's grade point average and classes taken, and another subclass ofPersoncalledEmployeethat contains the person's job-title, employer, and salary. In defining this inheritance hierarchy we have already defined certain restrictions, not all of which are desirable: Thecomposite reuse principleis an alternative to inheritance. This technique supports polymorphism and code reuse by separating behaviors from the primary class hierarchy and including specific behavior classes as required in any business domain class. This approach avoids the static nature of a class hierarchy by allowing behavior modifications at run time and allows one class to implement behaviors buffet-style, instead of being restricted to the behaviors of its ancestor classes. Implementation inheritance has been controversial among programmers and theoreticians of object-oriented programming since at least the 1990s. Among the critics are the authors ofDesign Patterns, who advocate instead for interface inheritance, and favorcomposition over inheritance. For example, the decorator pattern (as mentionedabove) has been proposed to overcome the static nature of inheritance between classes. 
As a more fundamental solution to the same problem, role-oriented programming introduces a distinct relationship, played-by, combining properties of inheritance and composition into a new concept.[citation needed] According to Allen Holub, the main problem with implementation inheritance is that it introduces unnecessary coupling in the form of the "fragile base class problem":[6] modifications to the base class implementation can cause inadvertent behavioral changes in subclasses. Using interfaces avoids this problem because no implementation is shared, only the API.[19] Another way of stating this is that "inheritance breaks encapsulation".[20] The problem surfaces clearly in open object-oriented systems such as frameworks, where client code is expected to inherit from system-supplied classes, and the derived classes are then substituted for the system's classes in its algorithms.[6] Reportedly, Java inventor James Gosling has spoken against implementation inheritance, stating that he would not include it if he were to redesign Java.[19] Language designs that decouple inheritance from subtyping (interface inheritance) appeared as early as 1990;[21] a modern example of this is the Go programming language. Complex inheritance, or inheritance used within an insufficiently mature design, may lead to the yo-yo problem. When inheritance was used as a primary approach to structure programs in the late 1990s, developers tended to break code into more layers of inheritance as the system functionality grew. If a development team combined multiple layers of inheritance with the single responsibility principle, the result was many very thin layers of code, many consisting of only one or two lines of actual code.[citation needed] Too many layers make debugging a significant challenge, as it becomes hard to determine which layer needs to be debugged. Another issue with inheritance is that subclasses must be defined in code, which means that program users cannot add new subclasses at runtime. Other design patterns (such as Entity–component–system) allow program users to define variations of an entity at runtime.
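A very small Python sketch of that composition-based, runtime-assembled alternative (all class names here are hypothetical):

    class Entity:
        """A bare entity whose data and behavior come from components attached at run time."""
        def __init__(self):
            self.components = {}

        def add(self, name, component):
            self.components[name] = component
            return self

        def has(self, name):
            return name in self.components

    class Position:
        def __init__(self, x=0, y=0):
            self.x, self.y = x, y

    class Health:
        def __init__(self, points=100):
            self.points = points

    # Variations are assembled at run time instead of being fixed as subclasses.
    player = Entity().add("position", Position()).add("health", Health(80))
    scenery = Entity().add("position", Position(3, 4))
    print(player.has("health"), scenery.has("health"))   # True False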
https://en.wikipedia.org/wiki/Implementation_inheritance
Inobject-oriented programming,behavioral subtypingis the principle that subclasses should satisfy the expectations of clients accessing subclass objects through references of superclass type, not just as regards syntactic safety (such as the absence of "method-not-found" errors) but also as regards behavioral correctness. Specifically, properties that clients can prove using the specification of an object's presumed type should hold even though the object is actually a member of a subtype of that type.[1] For example, consider a typeStackand a typeQueue, which both have aputmethod to add an element and agetmethod to remove one. Suppose the documentation associated with these types specifies that type Stack's methods shall behave as expected for stacks (i.e. they shall exhibitLIFObehavior), and that type Queue's methods shall behave as expected for queues (i.e. they shall exhibitFIFObehavior). Suppose, now, that type Stack was declared as a subclass of type Queue. Most programming language compilers ignore documentation and perform only the checks that are necessary to preservetype safety. Since, for each method of type Queue, type Stack provides a method with a matching name and signature, this check would succeed. However, clients accessing a Stack object through a reference of type Queue would, based on Queue's documentation, expect FIFO behavior but observe LIFO behavior, invalidating these clients' correctness proofs and potentially leading to incorrect behavior of the program as a whole. This example violates behavioral subtyping because type Stack is not a behavioral subtype of type Queue: it is not the case that the behavior described by the documentation of type Stack (i.e. LIFO behavior) complies with the documentation of type Queue (which requires FIFO behavior). In contrast, a program where both Stack and Queue are subclasses of a type Bag, whose specification forgetis merely that it removessomeelement, does satisfy behavioral subtyping and allows clients to safely reason about correctness based on the presumed types of the objects they interact with. Indeed, any object that satisfies the Stack or Queue specification also satisfies the Bag specification. It is important to stress that whether a type S is a behavioral subtype of a type T depends only on thespecification(i.e. thedocumentation) of type T; theimplementationof type T, if it has any, is completely irrelevant to this question. Indeed, type T need not even have an implementation; it might be a purely abstract class. As another case in point, type Stack above is a behavioral subtype of type Bag even if type Bag'simplementationexhibits FIFO behavior: what matters is that type Bag'sspecificationdoes not specify which element is removed by methodget. This also means that behavioral subtyping can be discussed only with respect to a particular (behavioral) specification for each type involved and that if the types involved have no well-defined behavioral specification, behavioral subtyping cannot be discussed meaningfully. A type S is a behavioral subtype of a type T if each behavior allowed by the specification of S is also allowed by the specification of T. This requires, in particular, that for each method M of T, the specification of M in S isstrongerthan the one in T. A method specification given by apreconditionPsand apostconditionQsis stronger than one given by a precondition Ptand a postcondition Qt(formally: (Ps, Qs) ⇒ (Pt, Qt)) if Psisweakerthan Pt(i.e. Ptimplies Ps) and Qsis stronger than Qt(i.e. Qsimplies Qt). 
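Written symbolically, the sufficient condition just stated is

    if  (P_t ⇒ P_s)  and  (Q_s ⇒ Q_t),   then   (P_s, Q_s) ⇒ (P_t, Q_t)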
That is, strengthening a method specification can be done by strengthening the postcondition and byweakeningthe precondition. Indeed, a method specification is stronger if it imposes more specific constraints on the outputs for inputs that were already supported, or if it requires more inputs to be supported. For example, consider the (very weak) specification for a method that computes the absolute value of an argumentx, that specifies a precondition 0 ≤ x and a postcondition 0 ≤ result. This specification says the method need not support negative values forx, and it need only ensure the result is nonnegative as well. Two possible ways to strengthen this specification are by strengthening the postcondition to state result = |x|, i.e. the result is equal to the absolute value of x, or by weakening the precondition to "true", i.e. all values forxshould be supported. Of course, we can also combine both, into a specification that states that the result should equal the absolute value ofx, for any value ofx. Note, however, that it is possible to strengthen a specification ((Ps, Qs) ⇒ (Pt, Qt)) without strengthening the postcondition (Qs⇏ Qt).[2][3]Consider a specification for the absolute value method that specifies a precondition 0 ≤ x and a postcondition result = x. The specification that specifies a precondition "true" and a postcondition result = |x| strengthens this specification, even though the postcondition result = |x| does not strengthen (or weaken) the postcondition result = x. The necessary condition for a specification with precondition Psand postcondition Qsto be stronger than a specification with precondition Ptand postcondition Qtis that Psis weaker than Ptand "Qsor not Ps" is stronger than "Qtor not Pt". Indeed, "result = |x| or false" does strengthen "result = x or x < 0". In an influential keynote address[4]on data abstraction and class hierarchies at the OOPSLA 1987 programming language research conference,Barbara Liskovsaid the following: "What is wanted here is something like the following substitution property: If for each objecto1{\displaystyle o_{1}}of type S there is an objecto2{\displaystyle o_{2}}of type T such that for all programs P defined in terms of T, the behavior of P is unchanged wheno1{\displaystyle o_{1}}is substituted foro2{\displaystyle o_{2}}, then S is a subtype of T." This characterization has since been widely known as theLiskov substitution principle(LSP). Unfortunately, though, it has several issues. Firstly, in its original formulation, it is too strong: we rarely want the behavior of a subclass to be identical to that of its superclass; substituting a subclass object for a superclass object is often done with the intent to change the program's behavior, albeit, if behavioral subtyping is respected, in a way that maintains the program's desirable properties. Secondly, it makes no mention ofspecifications, so it invites an incorrect reading where theimplementationof type S is compared to theimplementationof type T. This is problematic for several reasons, one being that it does not support the common case where T is abstract and has no implementation. Thirdly, and most subtly, in the context of object-oriented imperative programming it is difficult to define precisely what it means to universally or existentially quantify over objects of a given type, or to substitute one object for another.[3]In the example above, we are not substituting a Stack object for a Bag object, we are simply using a Stack object as a Bag object. 
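To make the Stack/Queue/Bag discussion concrete, here is a deliberately small Python sketch; the method bodies are assumed, and only the specifications matter for the argument:

    class Bag:
        """Specification: get() removes and returns *some* previously put element."""
        def __init__(self):
            self._items = []
        def put(self, x):
            self._items.append(x)
        def get(self):
            return self._items.pop()          # any element satisfies the Bag spec

    class Stack(Bag):
        def get(self):
            return self._items.pop()          # LIFO: still satisfies the Bag spec

    class Queue(Bag):
        def get(self):
            return self._items.pop(0)         # FIFO: also satisfies the Bag spec

    def client(q):
        # A client reasoning from the *Queue* specification expects FIFO order.
        q.put(1); q.put(2)
        assert q.get() == 1                   # holds for Queue, fails for Stack

    client(Queue())       # fine
    # client(Stack())     # would fail: Stack is not a behavioral subtype of Queue

Both Stack and Queue satisfy the Bag specification of get(), but a client written against the Queue specification can fail when handed a Stack, which is exactly the violation of behavioral subtyping described above.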
In an interview in 2016, Liskov herself explains that what she presented in her keynote address was an "informal rule", that Jeannette Wing later proposed that they "try to figure out precisely what this means", which led to their joint publication[1]on behavioral subtyping, and indeed that "technically, it's called behavioral subtyping".[5]During the interview, she does not use substitution terminology to discuss the concepts.
https://en.wikipedia.org/wiki/Inheritance_semantics
Infunctional programming, aniterateeis acomposableabstractionfor incrementally processing sequentially presented chunks of input data in apurely functionalfashion. With iteratees, it is possible to lazily transform how a resource will emit data, for example, by converting each chunk of the input to uppercase as they are retrieved or by limiting the data to only the five first chunks without loading the whole input data into memory. Iteratees are also responsible for opening and closing resources, providing predictable resource management. On each step, an iteratee is presented with one of three possible types of values: the next chunk of data, a value to indicate no data is available, or a value to indicate the iteration process has finished. It may return one of three possible types of values, to indicate to the caller what should be done next: one that means "stop" (and contains the final return value), one that means "continue" (and specifies how to continue), and one that means "signal an error". The latter types of values in effect represent the possible "states" of an iteratee. An iteratee would typically start in the "continue" state. Iteratees are used inHaskellandScala(in thePlay Framework[1]and inScalaz), and are also available forF#.[2]Various slightly different implementations of iteratees exist. For example, in the Play framework, they involveFuturesso that asynchronous processing can be performed. Because iteratees are called by other code which feeds them with data, they are an example ofinversion of control. However, unlike many other examples of inversion of control such asSAXXML parsing, the iteratee retains a limited amount of control over the process. It cannot reverse back and look at previous data (unless it stores that data internally), but it can stop the process cleanly without throwing anexception(using exceptions as a means ofcontrol flow, rather than to signal an exceptional event, is often frowned upon by programmers[3]). The following abstractions are not strictly speaking necessary to work with iteratees, but they do make it more convenient. AnEnumeratoris a convenient abstraction for feeding data into an iteratee from an arbitrary data source. Typically the enumerator will take care of any necessary resource cleanup associated with the data source. Because the enumerator knows exactly when the iteratee has finished reading data, it will do the resource cleanup (such as closing a file) at exactly the right time – neither too early nor too late. However, it can do this without needing to know about, or being co-located to, the implementation of the iteratee – so enumerators and iteratees form an example ofseparation of concerns. AnEnumerateeis a convenient abstraction for transforming the output of either an enumerator or iteratee, and feeding that output to an iteratee. For example, a "map" enumeratee wouldmapa function over each input chunk.[4] Iteratees were created due to problems with existingpurely functionalsolutions to the problem of makinginput/outputcomposable yet correct. Lazy I/O in Haskell allowed pure functions to operate on data on disk as if it were in memory, without explicitly doing I/O at all after opening the file - a kind ofmemory-mapped filefeature - but because it was impossible in general (due to theHalting problem) for the runtime to know whether the file or other resource was still needed, excessive numbers of files could be left open unnecessarily, resulting infile descriptorexhaustion at theoperating systemlevel. 
TraditionalC-style I/O, on the other hand, was too low-level and required the developer to be concerned with low-level details such as the current position in the file, which hindered composability. Iteratees and enumerators combine the high-level functional programming benefits of lazy I/O, with the ability to control resources and low-level details where necessary afforded by C-style I/O.[5] Iteratees are used in the Play framework to push data out to long-runningCometandWebSocketconnections toweb browsers. Iteratees may also be used to perform incrementalparsing(that is, parsing that does not read all the data into memory at once), for example ofJSON.[6] Iteratees are a very general abstraction and can be used for arbitrary kinds ofsequentialinformation processing (or mixed sequential/random-access processing) - and need not involve any I/O at all. This makes it easy to repurpose an iteratee to work on an in-memory dataset instead of data flowing in from the network. In a sense, a distant predecessor of the notion of an enumerator pushing data into a chain of one or more iteratees, was thepipelineconcept in operating systems. However, unlike a typical pipeline, iteratees are not separate processes (and hence do not have the overhead ofIPC) - or even separate threads, although they can perform work in a similar manner to a chain of worker threads sending messages to each other. This means that iteratees are more lightweight than processes or threads - unlike the situations with separate processes or threads, no extra stacks are needed. Iteratees and enumerators were invented byOleg Kiselyovfor use in Haskell.[5]Later, they were introduced into Scalaz (in version 5.0; enumeratees were absent and were introduced in Scalaz 7) and into Play Framework 2.0. Iteratees have been formally modelled asfree monads, allowing equational laws to be validated, and employed to optimise programs using iteratees.[5]
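The iteratee idea does not depend on Haskell or Scala. As a rough sketch, the following self-contained Python code models an iteratee as either Done or Cont (the error state is omitted), with an enumerator feeding chunks from an in-memory list, uppercasing each chunk and stopping after the first three, mirroring the kind of incremental processing described above. All names are invented for the example.

    from dataclasses import dataclass
    from typing import Any, Callable

    @dataclass
    class Done:
        result: Any                       # "stop", carrying the final value

    @dataclass
    class Cont:
        step: Callable                    # "continue": consumes the next chunk (or None at end of input)

    def take_upper(n, acc=()):
        """Iteratee that uppercases the first n chunks and then stops."""
        def step(chunk):
            if chunk is None or len(acc) + 1 >= n:
                collected = acc + ((chunk.upper(),) if chunk is not None else ())
                return Done(list(collected))
            return take_upper(n, acc + (chunk.upper(),))
        return Cont(step) if n > 0 else Done(list(acc))

    def enumerate_list(chunks, it):
        """Enumerator: feeds chunks from an in-memory source into an iteratee."""
        for chunk in chunks:
            if isinstance(it, Done):
                break
            it = it.step(chunk)
        if isinstance(it, Cont):          # signal end of input
            it = it.step(None)
        return it

    print(enumerate_list(["ab", "cd", "ef", "gh"], take_upper(3)).result)
    # ['AB', 'CD', 'EF']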
https://en.wikipedia.org/wiki/Iteratee
InUnix-likecomputeroperating systems, apipelineis a mechanism forinter-process communicationusing message passing. A pipeline is a set ofprocesseschained together by theirstandard streams, so that the output text of each process (stdout) is passed directly as input (stdin) to the next one. The second process is started as the first process is still executing, and they are executedconcurrently. It is named by analogy to a physicalpipeline. A key feature of these pipelines is their "hiding of internals".[10]This in turn allows for more clarity and simplicity in the system. Thepipesin the pipeline areanonymous pipes(as opposed tonamed pipes), where data written by one process is buffered by the operating system until it is read by the next process, and this uni-directional channel disappears when the processes are completed. The standardshellsyntax foranonymous pipesis to list multiple commands, separated by vertical bars ("pipes" in common Unix verbiage). The pipeline concept was invented byDouglas McIlroy[1]and first described in theman pagesofVersion 3 Unix.[2][3]McIlroy noticed that much of the timecommand shellspassed the output file from one program as input to another. The concept of pipelines was championed byDouglas McIlroyatUnix's ancestral home ofBell Labs, during the development of Unix, shaping itstoolbox philosophy.[4][5] His ideas were implemented in 1973 when ("in one feverish night", wrote McIlroy)Ken Thompsonadded thepipe()system call and pipes to theshelland several utilities in Version 3 Unix. "The next day", McIlroy continued, "saw an unforgettable orgy of one-liners as everybody joined in the excitement of plumbing." McIlroy also credits Thompson with the|notation, which greatly simplified the description of pipe syntax inVersion 4.[6][2] Although developed independently, Unix pipes are related to, and were preceded by, the 'communication files' developed by Ken Lochner[7]in the 1960s for theDartmouth Time-Sharing System.[8] This feature ofUnixwas borrowed by other operating systems, such asMS-DOSand theCMS Pipelinespackage onVM/CMSandMVS, and eventually came to be designated thepipes and filters design patternofsoftware engineering. InTony Hoare'scommunicating sequential processes(CSP), McIlroy's pipes are further developed.[9] In most Unix-like systems, all processes of a pipeline are started at the same time, with their streams appropriately connected, and managed by theschedulertogether with all other processes running on the machine. An important aspect of this, setting Unix pipes apart from other pipe implementations, is the concept ofbuffering: for example a sending program may produce 5000bytespersecond, and a receiving program may only be able to accept 100 bytes per second, but no data is lost. Instead, the output of the sending program is held in the buffer.
When the receiving program is ready to read data, the next program in the pipeline reads from the buffer. If the buffer is filled, the sending program is stopped (blocked) until at least some data is removed from the buffer by the receiver. In Linux, the size of the buffer is 65,536 bytes (64KiB). An open source third-party filter calledbfris available to provide larger buffers if required. Tools likenetcatandsocatcan connect pipes to TCP/IPsockets. All widely used Unix shells have a special syntax construct for the creation of pipelines. In all usage one writes the commands in sequence, separated by theASCIIvertical barcharacter|(which, for this reason, is often called "pipe character"). The shell starts the processes and arranges for the necessary connections between their standard streams (including some amount ofbufferstorage). The pipeline usesanonymous pipes. For anonymous pipes, data written by one process is buffered by the operating system until it is read by the next process, and this uni-directional channel disappears when the processes are completed; this differs fromnamed pipes, where messages are passed to or from a pipe that is named by making it a file, and remains after the processes are completed. The standardshellsyntax foranonymous pipesis to list multiple commands, separated byvertical bars("pipes" in common Unix verbiage): For example, to list files in the current directory (ls), retain only the lines oflsoutput containing the string"key"(grep), and view the result in a scrolling page (less), a user types the following into the command line of a terminal: The commandls -lis executed as a process, the output (stdout) of which is piped to the input (stdin) of the process forgrep key; and likewise for the process forless. Eachprocesstakes input from the previous process and produces output for the next process viastandard streams. Each|tells the shell to connect the standard output of the command on the left to the standard input of the command on the right by aninter-process communicationmechanism called an(anonymous) pipe, implemented in the operating system. Pipes are unidirectional; data flows through the pipeline from left to right. Below is an example of a pipeline that implements a kind ofspell checkerfor thewebresource indicated by aURL. An explanation of what it does follows. By default, thestandard error streams("stderr") of the processes in a pipeline are not passed on through the pipe; instead, they are merged and directed to theconsole. However, many shells have additional syntax for changing this behavior. In thecshshell, for instance, using|&instead of|signifies that the standard error stream should also be merged with the standard output and fed to the next process. TheBashshell can also merge standard error with|&since version 4.0[11]or using2>&1, as well as redirect it to a different file. In the most commonly used simple pipelines the shell connects a series of sub-processes via pipes, and executes external commands within each sub-process. Thus the shell itself is doing no direct processing of the data flowing through the pipeline. However, it's possible for the shell to perform processing directly, using a so-calledmillorpipemill(since awhilecommand is used to "mill" over the results from the initial command). 
This construct generally looks something like: Such pipemill may not perform as intended if the body of the loop includes commands, such ascatandssh, that read fromstdin:[12]on the loop's first iteration, such a program (let's call itthe drain) will read the remaining output fromcommand, and the loop will then terminate (with results depending on the specifics of the drain). There are a couple of possible ways to avoid this behavior. First, some drains support an option to disable reading fromstdin(e.g.ssh -n). Alternatively, if the drain does notneedto read any input fromstdinto do something useful, it can be given< /dev/nullas input. As all components of a pipe are run in parallel, a shell typically forks a subprocess (a subshell) to handle its contents, making it impossible to propagate variable changes to the outside shell environment. To remedy this issue, the "pipemill" can instead be fed from ahere documentcontaining acommand substitution, which waits for the pipeline to finish running before milling through the contents. Alternatively, anamed pipeor aprocess substitutioncan be used for parallel execution.GNU bashalso has alastpipeoption to disable forking for the last pipe component.[13] Pipelines can be created under program control. The Unixpipe()system callasks the operating system to construct a newanonymous pipeobject. This results in two new, opened file descriptors in the process: the read-only end of the pipe, and the write-only end. The pipe ends appear to be normal, anonymousfile descriptors, except that they have no ability to seek. To avoiddeadlockand exploit parallelism, the Unix process with one or more new pipes will then, generally, callfork()to create new processes. Each process will then close the end(s) of the pipe that it will not be using before producing or consuming any data. Alternatively, a process might create newthreadsand use the pipe to communicate between them. Named pipesmay also be created usingmkfifo()ormknod()and then presented as the input or output file to programs as they are invoked. They allow multi-path pipes to be created, and are especially effective when combined with standard error redirection, or withtee. The robot in the icon forApple'sAutomator, which also uses a pipeline concept to chain repetitive commands together, holds a pipe in homage to the original Unix concept.
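As a hedged illustration of building such a pipeline under program control, the following Python sketch uses the standard subprocess module, which wraps the pipe() and fork() machinery described above; the particular commands echo the earlier ls/grep example, with less replaced by wc -l so the sketch runs non-interactively:

    import subprocess

    # Equivalent in spirit to the shell pipeline:  ls -l | grep key | wc -l
    p1 = subprocess.Popen(["ls", "-l"], stdout=subprocess.PIPE)
    p2 = subprocess.Popen(["grep", "key"], stdin=p1.stdout, stdout=subprocess.PIPE)
    p1.stdout.close()                 # let p1 receive SIGPIPE if p2 exits early
    p3 = subprocess.Popen(["wc", "-l"], stdin=p2.stdout, stdout=subprocess.PIPE)
    p2.stdout.close()
    output, _ = p3.communicate()      # all three processes run concurrently
    print(output.decode().strip())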
https://en.wikipedia.org/wiki/Pipeline_(Unix)
Virtual inheritanceis aC++technique that ensures only one copy of abase class's member variables areinheritedby grandchild derived classes. Without virtual inheritance, if two classesBandCinherit from a classA, and a classDinherits from bothBandC, thenDwill contain two copies ofA's member variables: one viaB, and one viaC. These will be accessible independently, usingscope resolution. Instead, if classesBandCinherit virtually from classA, then objects of classDwill contain only one set of the member variables from classA. This feature is most useful formultiple inheritance, as it makes the virtual base a commonsubobjectfor the deriving class and all classes that are derived from it. This can be used to avoid thediamond problemby clarifying ambiguity over which ancestor class to use, as from the perspective of the deriving class (Din the example above) the virtual base (A) acts as though it were the direct base class ofD, not a class derived indirectly through a base (BorC).[1][2] It is used when inheritance represents restriction of a set rather than composition of parts. In C++, a base class intended to be common throughout the hierarchy is denoted as virtual with thevirtualkeyword. Consider the following class hierarchy. As declared above, a call tobat.Eatis ambiguous because there are twoAnimal(indirect) base classes inBat, so anyBatobject has two differentAnimalbase class subobjects. So, an attempt to directly bind a reference to theAnimalsubobject of aBatobject would fail, since the binding is inherently ambiguous: To disambiguate, one would have to explicitly convertbatto either base class subobject: In order to callEat, the same disambiguation, or explicit qualification is needed:static_cast<Mammal&>(bat).Eat()orstatic_cast<WingedAnimal&>(bat).Eat()or alternativelybat.Mammal::Eat()andbat.WingedAnimal::Eat(). Explicit qualification not only uses an easier, uniform syntax for both pointers and objects but also allows for static dispatch, so it would arguably be the preferable method. In this case, the double inheritance ofAnimalis probably unwanted, as we want to model that the relation (Batis anAnimal) exists only once; that aBatis aMammaland is aWingedAnimal, does not imply that it is anAnimaltwice: anAnimalbase class corresponds to a contract thatBatimplements (the "is a" relationship above really means "implements the requirements of"), and aBatonly implements theAnimalcontract once. The real world meaning of "is aonly once" is thatBatshould have only one way of implementingEat, not two different ways, depending on whether theMammalview of theBatis eating, or theWingedAnimalview of theBat. (In the first code example we see thatEatis not overridden in eitherMammalorWingedAnimal, so the twoAnimalsubobjects will actually behave the same, but this is just a degenerate case, and that does not make a difference from the C++ point of view.) This situation is sometimes referred to asdiamond inheritance(seeDiamond problem) because the inheritance diagram is in the shape of a diamond. Virtual inheritance can help to solve this problem. We can re-declare our classes as follows: TheAnimalportion ofBat::WingedAnimalis now thesameAnimalinstance as the one used byBat::Mammal, which is to say that aBathas only one, shared,Animalinstance in its representation and so a call toBat::Eatis unambiguous. Additionally, a direct cast fromBattoAnimalis also unambiguous, now that there exists only oneAnimalinstance whichBatcould be converted to. 
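For comparison only (this is an analogy, not the C++ mechanism under discussion): Python's multiple inheritance always shares a single copy of a common base class, resolved through its method resolution order, which gives the same "one Animal subobject" behavior that virtual inheritance provides in C++. The class names follow the article's example; the method body is assumed.

    class Animal:
        def __init__(self):
            self.eaten = 0
        def eat(self):
            self.eaten += 1

    class Mammal(Animal): ...
    class WingedAnimal(Animal): ...

    class Bat(Mammal, WingedAnimal): ...

    bat = Bat()
    bat.eat()                 # unambiguous: there is only one Animal "subobject"
    print(Bat.__mro__)        # (Bat, Mammal, WingedAnimal, Animal, object)
    print(bat.eaten)          # 1 -- a single shared Animal state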
The ability to share a single instance of the Animal parent between Mammal and WingedAnimal is enabled by recording the memory offset between the Mammal or WingedAnimal members and those of the base Animal within the derived class. However, this offset can in the general case only be known at runtime, thus Bat must become (vpointer, Mammal, vpointer, WingedAnimal, Bat, Animal). There are two vtable pointers, one per inheritance hierarchy that virtually inherits Animal: in this example, one for Mammal and one for WingedAnimal. The object size has therefore increased by two pointers, but now there is only one Animal and no ambiguity. All objects of type Bat will use the same vpointers, but each Bat object will contain its own unique Animal object. If another class inherits from Mammal, such as Squirrel, then the vpointer in the Mammal part of Squirrel will generally be different from the vpointer in the Mammal part of Bat, though they may happen to be the same if the Squirrel class is the same size as Bat. The next example illustrates a case where the base class A has a constructor variable msg and an additional class E is derived from the grandchild class D. Here, A must be constructed in both D and E. Further, inspection of the variable msg illustrates how class A becomes a direct base class of its deriving class, as opposed to a base class of any intermediate deriving classes between A and the final deriving class. The code below may be explored interactively here. Suppose a pure virtual method is defined in the base class. If a deriving class inherits the base class virtually, then the pure virtual method does not need to be defined in that deriving class. However, if the deriving class does not inherit the base class virtually, then all virtual methods must be defined. The code below may be explored interactively here.
https://en.wikipedia.org/wiki/Virtual_inheritance
ABayesian network(also known as aBayes network,Bayes net,belief network, ordecision network) is aprobabilistic graphical modelthat represents a set of variables and theirconditional dependenciesvia adirected acyclic graph(DAG).[1]While it is one of several forms ofcausal notation, causal networks are special cases of Bayesian networks. Bayesian networks are ideal for taking an event that occurred and predicting the likelihood that any one of several possible known causes was the contributing factor. For example, a Bayesian network could represent the probabilistic relationships between diseases and symptoms. Given symptoms, the network can be used to compute the probabilities of the presence of various diseases. Efficient algorithms can performinferenceandlearningin Bayesian networks. Bayesian networks that model sequences of variables (e.g.speech signalsorprotein sequences) are calleddynamic Bayesian networks. Generalizations of Bayesian networks that can represent and solve decision problems under uncertainty are calledinfluence diagrams. Formally, Bayesian networks aredirected acyclic graphs(DAGs) whose nodes represent variables in theBayesiansense: they may be observable quantities,latent variables, unknown parameters or hypotheses. Each edge represents a direct conditional dependency. Any pair of nodes that are not connected (i.e. no path connects one node to the other) represent variables that areconditionally independentof each other. Each node is associated with aprobability functionthat takes, as input, a particular set of values for the node'sparentvariables, and gives (as output) the probability (or probability distribution, if applicable) of the variable represented by the node. For example, ifm{\displaystyle m}parent nodes representm{\displaystyle m}Boolean variables, then the probability function could be represented by a table of2m{\displaystyle 2^{m}}entries, one entry for each of the2m{\displaystyle 2^{m}}possible parent combinations. Similar ideas may be applied to undirected, and possibly cyclic, graphs such asMarkov networks. Suppose we want to model the dependencies between three variables: the sprinkler (or more appropriately, its state - whether it is on or not), the presence or absence of rain and whether the grass is wet or not. Observe that two events can cause the grass to become wet: an active sprinkler or rain. Rain has a direct effect on the use of the sprinkler (namely that when it rains, the sprinkler usually is not active). This situation can be modeled with a Bayesian network (shown to the right). Each variable has two possible values, T (for true) and F (for false). Thejoint probability functionis, by thechain rule of probability, whereG= "Grass wet (true/false)",S= "Sprinkler turned on (true/false)", andR= "Raining (true/false)". The model can answer questions about the presence of a cause given the presence of an effect (so-called inverse probability) like "What is the probability that it is raining, given the grass is wet?" by using theconditional probabilityformula and summing over allnuisance variables: Using the expansion for the joint probability functionPr(G,S,R){\displaystyle \Pr(G,S,R)}and the conditional probabilities from theconditional probability tables (CPTs)stated in the diagram, one can evaluate each term in the sums in the numerator and denominator. 
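In the usual presentation of this example, the joint factorization and the posterior query just described take the form

    Pr(G, S, R) = Pr(G ∣ S, R) · Pr(S ∣ R) · Pr(R)

    Pr(R = T ∣ G = T) = Pr(G = T, R = T) / Pr(G = T)
                      = [ Σ_{S ∈ {T,F}} Pr(G = T, S, R = T) ] / [ Σ_{S,R ∈ {T,F}} Pr(G = T, S, R) ]

with each term in the numerator and denominator expanded using the factorization above.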
For example, Then the numerical results (subscripted by the associated variable values) are To answer an interventional question, such as "What is the probability that it would rain, given that we wet the grass?" the answer is governed by the post-intervention joint distribution function obtained by removing the factorPr(G∣S,R){\displaystyle \Pr(G\mid S,R)}from the pre-intervention distribution. The do operator forces the value of G to be true. The probability of rain is unaffected by the action: To predict the impact of turning the sprinkler on: with the termPr(S=T∣R){\displaystyle \Pr(S=T\mid R)}removed, showing that the action affects the grass but not the rain. These predictions may not be feasible given unobserved variables, as in most policy evaluation problems. The effect of the actiondo(x){\displaystyle {\text{do}}(x)}can still be predicted, however, whenever the back-door criterion is satisfied.[2][3]It states that, if a setZof nodes can be observed thatd-separates[4](or blocks) all back-door paths fromXtoYthen A back-door path is one that ends with an arrow intoX. Sets that satisfy the back-door criterion are called "sufficient" or "admissible." For example, the setZ=Ris admissible for predicting the effect ofS=TonG, becauseRd-separates the (only) back-door pathS←R→G. However, ifSis not observed, no other setd-separates this path and the effect of turning the sprinkler on (S=T) on the grass (G) cannot be predicted from passive observations. In that caseP(G| do(S=T)) is not "identified". This reflects the fact that, lacking interventional data, the observed dependence betweenSandGis due to a causal connection or is spurious (apparent dependence arising from a common cause,R). (seeSimpson's paradox) To determine whether a causal relation is identified from an arbitrary Bayesian network with unobserved variables, one can use the three rules of "do-calculus"[2][5]and test whether alldoterms can be removed from the expression of that relation, thus confirming that the desired quantity is estimable from frequency data.[6] Using a Bayesian network can save considerable amounts of memory over exhaustive probability tables, if the dependencies in the joint distribution are sparse. For example, a naive way of storing the conditional probabilities of 10 two-valued variables as a table requires storage space for210=1024{\displaystyle 2^{10}=1024}values. If no variable's local distribution depends on more than three parent variables, the Bayesian network representation stores at most10⋅23=80{\displaystyle 10\cdot 2^{3}=80}values. One advantage of Bayesian networks is that it is intuitively easier for a human to understand (a sparse set of) direct dependencies and local distributions than complete joint distributions. Bayesian networks perform three main inference tasks: Because a Bayesian network is a complete model for its variables and their relationships, it can be used to answer probabilistic queries about them. For example, the network can be used to update knowledge of the state of a subset of variables when other variables (theevidencevariables) are observed. This process of computing theposteriordistribution of variables given evidence is called probabilistic inference. The posterior gives a universalsufficient statisticfor detection applications, when choosing values for the variable subset that minimize some expected loss function, for instance the probability of decision error. 
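As a hedged illustration of this kind of inference, the following Python sketch answers the query Pr(R = T ∣ G = T) for the sprinkler network by brute-force enumeration. The conditional probability tables below are assumed, illustrative values for the sketch only; the diagram that defines the actual numbers is not reproduced in this text.

    from itertools import product

    P_R = {True: 0.2, False: 0.8}                                 # Pr(R)
    P_S_given_R = {True: {True: 0.01, False: 0.99},               # Pr(S | R)
                   False: {True: 0.4, False: 0.6}}
    P_G_given_SR = {(True, True): 0.99, (True, False): 0.9,       # Pr(G=T | S, R)
                    (False, True): 0.8, (False, False): 0.0}

    def joint(g, s, r):
        """Pr(G=g, S=s, R=r) = Pr(G | S, R) Pr(S | R) Pr(R)."""
        pg = P_G_given_SR[(s, r)] if g else 1 - P_G_given_SR[(s, r)]
        return pg * P_S_given_R[r][s] * P_R[r]

    # Posterior query by enumeration: Pr(R=T | G=T)
    num = sum(joint(True, s, True) for s in (True, False))
    den = sum(joint(True, s, r) for s, r in product((True, False), repeat=2))
    print(num / den)        # roughly 0.3577 with these illustrative numbers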
A Bayesian network can thus be considered a mechanism for automatically applyingBayes' theoremto complex problems. The most common exact inference methods are:variable elimination, which eliminates (by integration or summation) the non-observed non-query variables one by one by distributing the sum over the product;clique tree propagation, which caches the computation so that many variables can be queried at one time and new evidence can be propagated quickly; and recursive conditioning and AND/OR search, which allow for aspace–time tradeoffand match the efficiency of variable elimination when enough space is used. All of these methods have complexity that is exponential in the network'streewidth. The most commonapproximate inferencealgorithms areimportance sampling, stochasticMCMCsimulation, mini-bucket elimination,loopy belief propagation,generalized belief propagationandvariational methods. In order to fully specify the Bayesian network and thus fully represent thejoint probability distribution, it is necessary to specify for each nodeXthe probability distribution forXconditional uponX'sparents. The distribution ofXconditional upon its parents may have any form. It is common to work with discrete orGaussian distributionssince that simplifies calculations. Sometimes only constraints on distribution are known; one can then use theprinciple of maximum entropyto determine a single distribution, the one with the greatestentropygiven the constraints. (Analogously, in the specific context of adynamic Bayesian network, the conditional distribution for the hidden state's temporal evolution is commonly specified to maximize theentropy rateof the implied stochastic process.) Often these conditional distributions include parameters that are unknown and must be estimated from data, e.g., via themaximum likelihoodapproach. Direct maximization of the likelihood (or of theposterior probability) is often complex given unobserved variables. A classical approach to this problem is theexpectation-maximization algorithm, which alternates computing expected values of the unobserved variables conditional on observed data, with maximizing the complete likelihood (or posterior) assuming that previously computed expected values are correct. Under mild regularity conditions, this process converges on maximum likelihood (or maximum posterior) values for parameters. A more fully Bayesian approach to parameters is to treat them as additional unobserved variables and to compute a full posterior distribution over all nodes conditional upon observed data, then to integrate out the parameters. This approach can be expensive and lead to large dimension models, making classical parameter-setting approaches more tractable. In the simplest case, a Bayesian network is specified by an expert and is then used to perform inference. In other applications, the task of defining the network is too complex for humans. In this case, the network structure and the parameters of the local distributions must be learned from data. Automatically learning the graph structure of a Bayesian network (BN) is a challenge pursued withinmachine learning. The basic idea goes back to a recovery algorithm developed by Rebane andPearl[7]and rests on the distinction between the three possible patterns allowed in a 3-node DAG: The first 2 represent the same dependencies (X{\displaystyle X}andZ{\displaystyle Z}are independent givenY{\displaystyle Y}) and are, therefore, indistinguishable. 
The collider, however, can be uniquely identified, sinceX{\displaystyle X}andZ{\displaystyle Z}are marginally independent and all other pairs are dependent. Thus, while theskeletons(the graphs stripped of arrows) of these three triplets are identical, the directionality of the arrows is partially identifiable. The same distinction applies whenX{\displaystyle X}andZ{\displaystyle Z}have common parents, except that one must first condition on those parents. Algorithms have been developed to systematically determine the skeleton of the underlying graph and, then, orient all arrows whose directionality is dictated by the conditional independences observed.[2][8][9][10] An alternative method of structural learning uses optimization-based search. It requires ascoring functionand a search strategy. A common scoring function isposterior probabilityof the structure given the training data, like theBICor the BDeu. The time requirement of anexhaustive searchreturning a structure that maximizes the score issuperexponentialin the number of variables. A local search strategy makes incremental changes aimed at improving the score of the structure. A global search algorithm likeMarkov chain Monte Carlocan avoid getting trapped inlocal minima. Friedman et al.[11][12]discuss usingmutual informationbetween variables and finding a structure that maximizes this. They do this by restricting the parent candidate set toknodes and exhaustively searching therein. A particularly fast method for exact BN learning is to cast the problem as an optimization problem, and solve it usinginteger programming. Acyclicity constraints are added to the integer program (IP) during solving in the form ofcutting planes.[13]Such method can handle problems with up to 100 variables. In order to deal with problems with thousands of variables, a different approach is necessary. One is to first sample one ordering, and then find the optimal BN structure with respect to that ordering. This implies working on the search space of the possible orderings, which is convenient as it is smaller than the space of network structures. Multiple orderings are then sampled and evaluated. This method has been proven to be the best available in literature when the number of variables is huge.[14] Another method consists of focusing on the sub-class of decomposable models, for which theMLEhave a closed form. It is then possible to discover a consistent structure for hundreds of variables.[15] Learning Bayesian networks with bounded treewidth is necessary to allow exact, tractable inference, since the worst-case inference complexity is exponential in the treewidth k (under the exponential time hypothesis). Yet, as a global property of the graph, it considerably increases the difficulty of the learning process. In this context it is possible to useK-treefor effective learning.[16] Given datax{\displaystyle x\,\!}and parameterθ{\displaystyle \theta }, a simpleBayesian analysisstarts with aprior probability(prior)p(θ){\displaystyle p(\theta )}andlikelihoodp(x∣θ){\displaystyle p(x\mid \theta )}to compute aposterior probabilityp(θ∣x)∝p(x∣θ)p(θ){\displaystyle p(\theta \mid x)\propto p(x\mid \theta )p(\theta )}. Often the prior onθ{\displaystyle \theta }depends in turn on other parametersφ{\displaystyle \varphi }that are not mentioned in the likelihood. 
So, the priorp(θ){\displaystyle p(\theta )}must be replaced by a likelihoodp(θ∣φ){\displaystyle p(\theta \mid \varphi )}, and a priorp(φ){\displaystyle p(\varphi )}on the newly introduced parametersφ{\displaystyle \varphi }is required, resulting in a posterior probability This is the simplest example of ahierarchical Bayes model. The process may be repeated; for example, the parametersφ{\displaystyle \varphi }may depend in turn on additional parametersψ{\displaystyle \psi \,\!}, which require their own prior. Eventually the process must terminate, with priors that do not depend on unmentioned parameters. Given the measured quantitiesx1,…,xn{\displaystyle x_{1},\dots ,x_{n}\,\!}each withnormally distributederrors of knownstandard deviationσ{\displaystyle \sigma \,\!}, Suppose we are interested in estimating theθi{\displaystyle \theta _{i}}. An approach would be to estimate theθi{\displaystyle \theta _{i}}using amaximum likelihoodapproach; since the observations are independent, the likelihood factorizes and the maximum likelihood estimate is simply However, if the quantities are related, so that for example the individualθi{\displaystyle \theta _{i}}have themselves been drawn from an underlying distribution, then this relationship destroys the independence and suggests a more complex model, e.g., withimproper priorsφ∼flat{\displaystyle \varphi \sim {\text{flat}}},τ∼flat∈(0,∞){\displaystyle \tau \sim {\text{flat}}\in (0,\infty )}. Whenn≥3{\displaystyle n\geq 3}, this is anidentified model(i.e. there exists a unique solution for the model's parameters), and the posterior distributions of the individualθi{\displaystyle \theta _{i}}will tend to move, orshrinkaway from the maximum likelihood estimates towards their common mean. Thisshrinkageis a typical behavior in hierarchical Bayes models. Some care is needed when choosing priors in a hierarchical model, particularly on scale variables at higher levels of the hierarchy such as the variableτ{\displaystyle \tau \,\!}in the example. The usual priors such as theJeffreys prioroften do not work, because the posterior distribution will not be normalizable and estimates made by minimizing theexpected losswill beinadmissible. Several equivalent definitions of a Bayesian network have been offered. For the following, letG= (V,E) be adirected acyclic graph(DAG) and letX= (Xv),v∈Vbe a set ofrandom variablesindexed byV. Xis a Bayesian network with respect toGif its jointprobability density function(with respect to aproduct measure) can be written as a product of the individual density functions, conditional on their parent variables:[17] where pa(v) is the set of parents ofv(i.e. those vertices pointing directly tovvia a single edge). For any set of random variables, the probability of any member of ajoint distributioncan be calculated from conditional probabilities using thechain rule(given atopological orderingofX) as follows:[17] Using the definition above, this can be written as: The difference between the two expressions is theconditional independenceof the variables from any of their non-descendants, given the values of their parent variables. Xis a Bayesian network with respect toGif it satisfies thelocal Markov property: each variable isconditionally independentof its non-descendants given its parent variables:[18] where de(v) is the set of descendants andV\ de(v) is the set of non-descendants ofv. 
This can be expressed in terms similar to the first definition, as The set of parents is a subset of the set of non-descendants because the graph isacyclic. In general, learning a Bayesian network from data is known to beNP-hard.[19]This is due in part to thecombinatorial explosionofenumerating DAGsas the number of variables increases. Nevertheless, insights about an underlying Bayesian network can be learned from data in polynomial time by focusing on its marginal independence structure:[20]while the conditional independence statements of a distribution modeled by a Bayesian network are encoded by a DAG (according to the factorization and Markov properties above), its marginal independence statements—the conditional independence statements in which the conditioning set is empty—are encoded by asimple undirected graphwith special properties such as equalintersectionandindependence numbers. Developing a Bayesian network often begins with creating a DAGGsuch thatXsatisfies the local Markov property with respect toG. Sometimes this is acausalDAG. The conditional probability distributions of each variable given its parents inGare assessed. In many cases, in particular in the case where the variables are discrete, if the joint distribution ofXis the product of these conditional distributions, thenXis a Bayesian network with respect toG.[21] TheMarkov blanketof a node is the set of nodes consisting of its parents, its children, and any other parents of its children. The Markov blanket renders the node independent of the rest of the network; the joint distribution of the variables in the Markov blanket of a node is sufficient knowledge for calculating the distribution of the node.Xis a Bayesian network with respect toGif every node is conditionally independent of all other nodes in the network, given itsMarkov blanket.[18] This definition can be made more general by defining the "d"-separation of two nodes, where d stands for directional.[2]We first define the "d"-separation of a trail and then we will define the "d"-separation of two nodes in terms of that. LetPbe a trail from nodeutov. A trail is a loop-free, undirected (i.e. all edge directions are ignored) path between two nodes. ThenPis said to bed-separated by a set of nodesZif any of the following conditions holds: The nodesuandvared-separated byZif all trails between them ared-separated. Ifuandvare not d-separated, they are d-connected. Xis a Bayesian network with respect toGif, for any two nodesu,v: whereZis a set whichd-separatesuandv. (TheMarkov blanketis the minimal set of nodes whichd-separates nodevfrom all other nodes.) Although Bayesian networks are often used to representcausalrelationships, this need not be the case: a directed edge fromutovdoes not require thatXvbe causally dependent onXu. This is demonstrated by the fact that Bayesian networks on the graphs: are equivalent: that is they impose exactly the same conditional independence requirements. A causal network is a Bayesian network with the requirement that the relationships be causal. The additional semantics of causal networks specify that if a nodeXis actively caused to be in a given statex(an action written as do(X=x)), then the probability density function changes to that of the network obtained by cutting the links from the parents ofXtoX, and settingXto the caused valuex.[2]Using these semantics, the impact of external interventions from data obtained prior to intervention can be predicted. 
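The d-separation test itself can be carried out mechanically on small graphs using the standard ancestral-moral-graph criterion: restrict the DAG to the ancestors of u, v and Z, moralize it (connect each pair of parents and drop edge directions), delete Z, and check whether u and v are still connected. The sketch below is a minimal implementation under the assumption that the DAG is given as a dictionary mapping each node to its list of parents; the example graph A → C ← B, C → D is chosen purely for illustration.

```python
from itertools import combinations

def ancestors(dag, nodes):
    """All nodes with a directed path into `nodes`, including `nodes` themselves."""
    result, stack = set(nodes), list(nodes)
    while stack:
        n = stack.pop()
        for p in dag.get(n, ()):              # dag: node -> list of parents
            if p not in result:
                result.add(p)
                stack.append(p)
    return result

def d_separated(dag, u, v, Z):
    """True if the set Z d-separates u from v (ancestral moral graph criterion)."""
    keep = ancestors(dag, {u, v} | set(Z))
    # Moralize: link each kept node to its kept parents, and "marry" the parents.
    adj = {n: set() for n in keep}
    for n in keep:
        parents = [p for p in dag.get(n, ()) if p in keep]
        for p in parents:
            adj[n].add(p); adj[p].add(n)
        for a, b in combinations(parents, 2):
            adj[a].add(b); adj[b].add(a)
    # Delete Z, then test undirected connectivity between u and v.
    blocked, stack, seen = set(Z), [u], {u}
    while stack:
        n = stack.pop()
        if n == v:
            return False                       # still connected => not d-separated
        for m in adj[n] - blocked - seen:
            seen.add(m)
            stack.append(m)
    return True

# Example DAG with a collider:  A -> C <- B,  C -> D   (node -> parents).
dag = {"A": [], "B": [], "C": ["A", "B"], "D": ["C"]}
print(d_separated(dag, "A", "B", []))      # True : the empty set blocks the collider path
print(d_separated(dag, "A", "B", ["C"]))   # False: conditioning on the collider opens it
print(d_separated(dag, "A", "B", ["D"]))   # False: so does conditioning on its descendant
```

The three printed cases reproduce the collider behaviour discussed earlier: A and B are d-separated by the empty set, but conditioning on the collider C, or on its descendant D, d-connects them.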
In 1990, while working at Stanford University on large bioinformatic applications, Cooper proved that exact inference in Bayesian networks isNP-hard.[22]This result prompted research on approximation algorithms with the aim of developing a tractable approximation to probabilistic inference. In 1993, Paul Dagum andMichael Lubyproved two surprising results on the complexity of approximation of probabilistic inference in Bayesian networks.[23]First, they proved that no tractabledeterministic algorithmcan approximate probabilistic inference to within anabsolute errorɛ< 1/2. Second, they proved that no tractablerandomized algorithmcan approximate probabilistic inference to within an absolute errorɛ< 1/2 with confidence probability greater than 1/2. At about the same time,Rothproved that exact inference in Bayesian networks is in fact#P-complete(and thus as hard as counting the number of satisfying assignments of aconjunctive normal formformula (CNF)) and that approximate inference within a factor 2n1−ɛfor everyɛ> 0, even for Bayesian networks with restricted architecture, is NP-hard.[24][25] In practical terms, these complexity results suggested that while Bayesian networks were rich representations for AI and machine learning applications, their use in large real-world applications would need to be tempered by either topological structural constraints, such as naïve Bayes networks, or by restrictions on the conditional probabilities. The bounded variance algorithm[26]developed by Dagum and Luby was the first provable fast approximation algorithm to efficiently approximate probabilistic inference in Bayesian networks with guarantees on the error approximation. This powerful algorithm required the minor restriction on the conditional probabilities of the Bayesian network to be bounded away from zero and one by1/p(n){\displaystyle 1/p(n)}wherep(n){\displaystyle p(n)}was any polynomial of the number of nodes in the network,n{\displaystyle n}. Notable software for Bayesian networks include: The term Bayesian network was coined byJudea Pearlin 1985 to emphasize:[28] In the late 1980s Pearl'sProbabilistic Reasoning in Intelligent Systems[30]andNeapolitan'sProbabilistic Reasoning in Expert Systems[31]summarized their properties and established them as a field of study.
https://en.wikipedia.org/wiki/Bayesian_networks
In mathematics, the L-functions of number theory are expected to have several characteristic properties, one of which is that they satisfy certain functional equations. There is an elaborate theory of what these equations should be, much of which is still conjectural. A prototypical example, the Riemann zeta function, has a functional equation relating its value at the complex number s with its value at 1 − s. In every case this relates to some value ζ(s) that is only defined by analytic continuation from the infinite series definition. That is, writing – as is conventional – σ for the real part of s, the functional equation relates the cases σ > 1 and σ < 0, and also changes a case with 0 < σ < 1 in the critical strip to another such case, reflected in the line σ = ½. Therefore, use of the functional equation is basic, in order to study the zeta-function in the whole complex plane. The functional equation in question for the Riemann zeta function takes the simple form Z(s) = Z(1 − s), where Z(s) is ζ(s) multiplied by a gamma-factor, involving the gamma function. This is now read as an 'extra' factor in the Euler product for the zeta-function, corresponding to the infinite prime. Just the same shape of functional equation holds for the Dedekind zeta function of a number field K, with an appropriate gamma-factor that depends only on the embeddings of K (in algebraic terms, on the tensor product of K with the real field). There is a similar equation for the Dirichlet L-functions, but this time relating them in pairs:[1] Λ(s, χ) = ε Λ(1 − s, χ*), with χ a primitive Dirichlet character, χ* its complex conjugate, Λ the L-function multiplied by a gamma-factor, and ε a complex number of absolute value 1, whose shape involves the Gauss sum G(χ) formed from χ. This equation has the same function on both sides if and only if χ is a real character, taking values in {0, 1, −1}. Then ε must be 1 or −1, and the case of the value −1 would imply a zero of Λ(s) at s = ½. According to the theory (of Gauss, in effect) of Gauss sums, the value is always 1, so no such simple zero can exist (the function is even about the point). A unified theory of such functional equations was given by Erich Hecke, and the theory was taken up again in Tate's thesis by John Tate. Hecke found generalised characters of number fields, now called Hecke characters, for which his proof (based on theta functions) also worked. These characters and their associated L-functions are now understood to be strictly related to complex multiplication, as the Dirichlet characters are to cyclotomic fields. There are also functional equations for the local zeta-functions, arising at a fundamental level for the (analogue of) Poincaré duality in étale cohomology. The Euler products of the Hasse–Weil zeta-function for an algebraic variety V over a number field K, formed by reducing modulo prime ideals to get local zeta-functions, are conjectured to have a global functional equation; but this is currently considered out of reach except in special cases. The definition can be read directly out of étale cohomology theory, again; but in general some assumption coming from automorphic representation theory seems required to get the functional equation. The Taniyama–Shimura conjecture was a particular case of this general theory. By relating the gamma-factor aspect to Hodge theory, and through detailed studies of the expected ε factor, the theory as empirical has been brought to quite a refined state, even if proofs are missing.
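For concreteness, the "simple form" of the Riemann zeta functional equation referred to above can be written out with its gamma-factor made explicit; the two standard statements below are equivalent.

```latex
% Completed zeta function and its symmetric functional equation
\[
  Z(s) \;=\; \pi^{-s/2}\,\Gamma\!\left(\tfrac{s}{2}\right)\zeta(s),
  \qquad
  Z(s) \;=\; Z(1-s),
\]
% equivalently, in asymmetric form,
\[
  \zeta(s) \;=\; 2^{s}\,\pi^{\,s-1}\,\sin\!\left(\tfrac{\pi s}{2}\right)\Gamma(1-s)\,\zeta(1-s).
\]
```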
https://en.wikipedia.org/wiki/Functional_equation_(L-function)
Afunctional differential equationis adifferential equationwith deviating argument. That is, a functional differential equation is an equation that contains a function and some of its derivatives evaluated at different argument values.[1] Functional differential equations find use in mathematical models that assume a specified behavior or phenomenon depends on the present as well as the past state of a system.[2]In other words, past events explicitly influence future results. For this reason, functional differential equations are more applicable thanordinary differential equations (ODE), in which future behavior only implicitly depends on the past. Unlike ordinary differential equations, which contain a function of one variable and its derivatives evaluated with the same input, functional differential equations contain a function and its derivatives evaluated with different input values. The simplest type of functional differential equation called theretarded functional differential equationorretarded differential difference equation, is of the form[3] The simplest, fundamental functional differential equation is the linear first-order delay differential equation[4][unreliable source?]which is given by whereα1,α2,τ{\displaystyle \alpha _{1},\alpha _{2},\tau }are constants,f(t){\displaystyle f(t)}is some continuous function, andx{\displaystyle x}is a scalar. Below is a table with a comparison of several ordinary and functional differential equations. "Functional differential equation" is the general name for a number of more specific types of differential equations that are used in numerous applications. There are delay differential equations, integro-differential equations, and so on. Differential difference equations are functional differential equations in which the argument values are discrete.[1]The general form for functional differential equations of finitely many discrete deviating arguments is wherex(t)∈Rm,n1,n2,…,ni≥0,{\displaystyle x(t)\in \mathbb {R} ^{m},\,n_{1},n_{2},\ldots ,n_{i}\geq 0,}andτ1(t),τ2(t),…,τi(t)≥0{\displaystyle \tau _{1}(t),\tau _{2}(t),\ldots ,\tau _{i}(t)\geq 0} Differential difference equations are also referred to asretarded,neutral,advanced, andmixedfunctional differential equations. This classification depends on whether the rate of change of the current state of the system depends on past values, future values, or both.[5] Functional differential equations of retarded type occur whenmax{n1,n2,…,nk}<n{\displaystyle \max\{n_{1},n_{2},\ldots ,n_{k}\ \}<n}for the equation given above. In other words, this class of functional differential equations depends on the past and present values of the function with delays. A simple example of a retarded functional differential equation is whereas a more general form for discrete deviating arguments can be written as Functional differential equations of neutral type, or neutral differential equations occur when Neutral differential equations depend on past and present values of the function, similarly to retarded differential equations, except it also depends on derivatives with delays. In other words, retarded differential equations do not involve the given function's derivative with delays while neutral differential equations do. Integro-differential equations of Volterra type are functional differential equations with continuous argument values.[1]Integro-differential equations involve both the integrals and derivatives of some function with respect to its argument. 
The continuous integro-differential equation for retarded functional differential equations,x′(t)=f(t,x(t−τ1(t)),x(t−τ2(t)),…,x(t−τk(t))){\displaystyle x'(t)=f{\bigl (}t,x(t-\tau _{1}(t)),x(t-\tau _{2}(t)),\ldots ,x(t-\tau _{k}(t)){\bigr )}}, can be written as Functional differential equations have been used in models that determine future behavior of a certain phenomenon determined by the present and the past. Future behavior of phenomena, described by the solutions of ODEs, assumes that behavior is independent of the past.[2]However, there can be many situations that depend on past behavior. FDEs are applicable for models in multiple fields, such as medicine, mechanics, biology, and economics. FDEs have been used in research for heat-transfer, signal processing, evolution of a species, traffic flow and study of epidemics.[1][4] Alogistic equationforpopulation growthis given bydxdt=ρx(t)(1−x(t)k),{\displaystyle {\mathrm {d} x \over \mathrm {d} t}=\rho \,x(t)\left(1-{\frac {x(t)}{k}}\right),}whereρis the reproduction rate andkis thecarrying capacity.x(t){\displaystyle x(t)}represents the population size at timet, andρ(1−x(t)k){\textstyle \rho \left(1-{\frac {x(t)}{k}}\right)}is the density-dependent reproduction rate.[7] If we were to now apply this to an earlier timet−τ{\displaystyle t-\tau }, we getdxdt=ρx(t)(1−x(t−τ)k){\displaystyle {\mathrm {d} x \over \mathrm {d} t}=\rho \,x(t)\left(1-{\frac {x(t-\tau )}{k}}\right)} Upon exposure to applications of ordinary differential equations, many come across the mixing model of some chemical solution. Suppose there is a container holding liters of salt water. Salt water is flowing in, and out of the container at the same rater{\displaystyle r}of liters per second. In other words, the rate of water flowing in is equal to the rate of the salt water solution flowing out. LetV{\displaystyle V}be the amount in liters of salt water in the container andx(t){\displaystyle x(t)}be the uniform concentration in grams per liter of salt water at timet{\displaystyle t}. Then, we have the differential equation[8]x′(t)=−rVx(t),rV>0{\displaystyle x'(t)=-{\frac {r}{V}}x(t),{\frac {r}{V}}>0} The problem with this equation is that it makes the assumption that every drop of water that enters the contain is instantaneously mixed into the solution. This can be eliminated by using a FDE instead of an ODE. Letx(t){\displaystyle x(t)}be the average concentration at timet{\displaystyle t}, rather than uniform. Then, let's assume the solution leaving the container at timet{\displaystyle t}is equal tox(t−τ),τ>0{\displaystyle x(t-\tau ),\tau >0}, the average concentration at some earlier time. Then, the equation is a delay-differential equation of the form[8]x′(t)=−rVx(t−τ){\displaystyle x'(t)=-{\frac {r}{V}}x(t-\tau )} The Lotka–Volterra predator-prey model was originally developed to observe the population of sharks and fish in the Adriatic Sea; however, this model has been used in many other fields for different uses, such as describing chemical reactions. Modelling predatory-prey population has always been widely researched, and as a result, there have been many different forms of the original equation. 
One example, as shown by Xu, Wu (2013),[9]of the Lotka–Volterra model with time-delay is given below:p′(t)=p(t)[r1(t)−a11(t)p(t−τ11(t))−a12(t)P1(t−τ12(t))−a13(t)P2(t−τ13(t))]{\displaystyle p'(t)=p(t){\Biggl [}r_{1}(t)-a_{11}(t)p{\biggl (}t-\tau _{11}(t){\biggr )}-a_{12}(t)P_{1}{\biggl (}t-\tau _{12}(t){\biggr )}-a_{13}(t)P_{2}{\biggl (}t-\tau _{13}(t){\biggr )}{\Biggr ]}}P1′(t)=P1(t)[−r2(t)+a21(t)p(t−τ21(t))−a22(t)P1(t−τ22(t))−a23(t)P2(t−τ23(t))]{\displaystyle P_{1}'(t)=P_{1}(t){\Biggl [}-r_{2}(t)+a_{21}(t)p{\biggl (}t-\tau _{21}(t){\biggr )}-a_{22}(t)P_{1}{\biggl (}t-\tau _{22}(t){\biggr )}-a_{23}(t)P_{2}{\biggl (}t-\tau _{23}(t){\biggr )}{\Biggr ]}}P2′(t)=P2(t)[−r2(t)+a31(t)p(t−τ31(t))−a32(t)P1(t−τ32(t))−a33(t)P2(t−τ33(t))]{\displaystyle P_{2}'(t)=P_{2}(t){\Biggl [}-r_{2}(t)+a_{31}(t)p{\biggl (}t-\tau _{31}(t){\biggr )}-a_{32}(t)P_{1}{\biggl (}t-\tau _{32}(t){\biggr )}-a_{33}(t)P_{2}{\biggl (}t-\tau _{33}(t){\biggr )}{\Biggr ]}}wherep(t){\displaystyle p(t)}denotes the prey population density at time t,P1(t){\displaystyle P_{1}(t)}andP2(t){\displaystyle P_{2}(t)}denote the density of the predator population at timet,ri,aij∈C(R,[0,∞)){\displaystyle t,r_{i},a_{ij}\in C(\mathbb {R} ,[0,\infty ))}andτij∈C(R,R){\displaystyle \tau _{ij}\in C(\mathbb {R} ,\mathbb {R} )} Examples of other models that have used FDEs, namely RFDEs, are given below:
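Models of this kind are usually studied numerically. The sketch below integrates the delayed mixing-tank equation derived earlier, x′(t) = −(r/V) x(t − τ), with a fixed-step Euler scheme and a stored history buffer, alongside the instantaneous-mixing ODE for comparison; all constants are assumed illustrative values.

```python
import numpy as np

# Euler integration of the delayed mixing-tank model  x'(t) = -(r/V) x(t - tau)
# against the instantaneous-mixing ODE  x'(t) = -(r/V) x(t).
# All constants below are assumed illustrative values, not taken from the text.
r_over_V, tau, x0 = 0.2, 1.0, 10.0     # outflow rate / volume, delay, initial concentration
dt, T = 0.01, 20.0
steps, lag = int(T / dt), int(tau / dt)

x_dde = np.full(steps + 1, x0)         # constant history x(t) = x0 for t <= 0
x_ode = np.full(steps + 1, x0)

for k in range(steps):
    delayed = x_dde[k - lag] if k >= lag else x0   # look up x(t - tau) in the stored history
    x_dde[k + 1] = x_dde[k] - dt * r_over_V * delayed
    x_ode[k + 1] = x_ode[k] - dt * r_over_V * x_ode[k]

for t in (5, 10, 20):
    k = int(t / dt)
    print(f"t = {t:2d}   delayed model: {x_dde[k]:7.4f}   instantaneous model: {x_ode[k]:7.4f}")
```

With these constants the delayed solution decays slightly faster at first, since it keeps responding to the older, higher concentration x(t − τ) rather than to the current value.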
https://en.wikipedia.org/wiki/Functional_differential_equation
Algorithmic transparency is the principle that the factors that influence the decisions made by algorithms should be visible, or transparent, to the people who use, regulate, and are affected by systems that employ those algorithms. Although the phrase was coined in 2016 by Nicholas Diakopoulos and Michael Koliska about the role of algorithms in deciding the content of digital journalism services,[1] the underlying principle dates back to the 1970s and the rise of automated systems for scoring consumer credit. The phrases "algorithmic transparency" and "algorithmic accountability"[2] are sometimes used interchangeably – especially since they were coined by the same people – but they have subtly different meanings. Specifically, "algorithmic transparency" states that the inputs to the algorithm and the algorithm's use itself must be known, but they need not be fair. "Algorithmic accountability" implies that the organizations that use algorithms must be accountable for the decisions made by those algorithms, even though the decisions are being made by a machine, and not by a human being.[3] Current research around algorithmic transparency is concerned both with the societal effects of accessing remote services running algorithms,[4] and with the mathematical and computer science approaches that can be used to achieve algorithmic transparency.[5] In the United States, the Federal Trade Commission's Bureau of Consumer Protection studies how algorithms are used by consumers by conducting its own research on algorithmic transparency and by funding external research.[6] In the European Union, the data protection laws that came into effect in May 2018 include a "right to explanation" of decisions made by algorithms, though it is unclear what this means.[7] Furthermore, the European Union founded the European Center for Algorithmic Transparency (ECAT).[8]
https://en.wikipedia.org/wiki/Algorithmic_transparency
In theregulationofalgorithms, particularlyartificial intelligenceand its subfield ofmachine learning, aright to explanation(orright toanexplanation) is arightto be given anexplanationfor an output of the algorithm. Such rights primarily refer toindividual rightsto be given an explanation for decisions that significantly affect an individual, particularly legally or financially. For example, a person who applies for a loan and is denied may ask for an explanation, which could be "Credit bureauX reports that you declared bankruptcy last year; this is the main factor in considering you too likely to default, and thus we will not give you the loan you applied for." Some suchlegal rightsalready exist, while the scope of a general "right to explanation" is a matter of ongoing debate. There have been arguments made that a "social right to explanation" is a crucial foundation for an information society, particularly as the institutions of that society will need to use digital technologies, artificial intelligence, machine learning.[1]In other words, that the relatedautomated decision makingsystems that useexplainabilitywould be moretrustworthyand transparent. Without this right, which could be constituted both legally and throughprofessional standards, the public will be left without much recourse to challenge the decisions of automated systems. Under theEqual Credit Opportunity Act(Regulation B of theCode of Federal Regulations), Title 12, Chapter X, Part 1002,§1002.9, creditors are required to notify applicants who are denied credit with specific reasons for the detail. As detailed in §1002.9(b)(2):[2] (2) Statement of specific reasons. The statement of reasons for adverse action required by paragraph (a)(2)(i) of this section must be specific and indicate the principal reason(s) for the adverse action. Statements that the adverse action was based on the creditor's internal standards or policies or that the applicant, joint applicant, or similar party failed to achieve a qualifying score on the creditor's credit scoring system are insufficient. Theofficial interpretationof this section details what types of statements are acceptable. Creditors comply with this regulation by providing a list of reasons (generally at most 4, per interpretation of regulations), consisting of a numericreason code(as identifier) and an associated explanation, identifying the main factors affecting a credit score.[3]An example might be:[4] The European UnionGeneral Data Protection Regulation(enacted 2016, taking effect 2018) extends the automated decision-making rights in the 1995Data Protection Directiveto provide a legally disputed form of a right to an explanation, stated as such inRecital 71: "[the data subject should have] the right ... to obtain an explanation of the decision reached". In full: The data subject should have the right not to be subject to a decision, which may include a measure, evaluating personal aspects relating to him or her which is based solely on automated processing and which produces legal effects concerning him or her or similarly significantly affects him or her, such as automatic refusal of an online credit application or e-recruiting practices without any human intervention. ... 
In any case, such processing should be subject to suitable safeguards, which should include specific information to the data subject and the right to obtain human intervention, to express his or her point of view, to obtain an explanation of the decision reached after such assessment and to challenge the decision. However, the extent to which the regulations themselves provide a "right to explanation" is heavily debated.[5][6][7]There are two main strands of criticism. There are significant legal issues with the right as found in Article 22 — as recitals are not binding, and the right to an explanation is not mentioned in the binding articles of the text, having been removed during the legislative process.[6]In addition, there are significant restrictions on the types ofautomated decisionsthat are covered — which must be both "solely" based on automated processing, and have legal or similarly significant effects — which significantly limits the range of automated systems and decisions to which the right would apply.[6]In particular, the right is unlikely to apply in many of the cases of algorithmic controversy that have been picked up in the media.[8] A second potential source of such a right has been pointed to in Article 15, the "right of access by the data subject". This restates a similar provision from the 1995 Data Protection Directive, allowing the data subject access to "meaningful information about the logic involved" in the same significant, solely automated decision-making, found in Article 22. Yet this too suffers from alleged challenges that relate to the timing of when this right can be drawn upon, as well as practical challenges that mean it may not be binding in many cases of public concern.[6] Other EU legislative instruments contain explanation rights. The European Union'sArtificial Intelligence Actprovides in Article 86 a "[r]ight to explanation of individual decision-making" of certain high risk systems which produce significant, adverse effects to an individual's health, safety or fundamental rights.[9]The right provides for "clear and meaningful explanations of the role of the AI system in the decision-making procedure and the main elements of the decision taken", although only applies to the extent other law does not provide such a right. TheDigital Services Actin Article 27, and the Platform to Business Regulation in Article 5,[10]both contain rights to have the main parameters of certainrecommender systemsto be made clear, although these provisions have been criticised as not matching the way that such systems work.[11]ThePlatform Work Directive, which provides for regulation of automation ingig economywork as an extension ofdata protectionlaw, further contains explanation provisions in Article 11,[12]using the specific language of "explanation" in a binding article rather than a recital as is the case in the GDPR. Scholars note that remains uncertainty as to whether these provisions imply sufficiently tailored explanation in practice which will need to be resolved by courts.[13] InFrancethe 2016Loi pour une République numérique(Digital Republic Act orloi numérique) amends the country's administrative code to introduce a new provision for the explanation of decisions made by public sector bodies about individuals.[14]It notes that where there is "a decision taken on the basis of an algorithmic treatment", the rules that define that treatment and its “principal characteristics” must be communicated to the citizen upon request, where there is not an exclusion (e.g. 
for national security or defence). These should include the following: Scholars have noted that this right, while limited to administrative decisions, goes beyond the GDPR right to explicitly apply to decision support rather than decisions "solely" based on automated processing, as well as provides a framework for explaining specific decisions.[14]Indeed, the GDPR automated decision-making rights in the European Union, one of the places a "right to an explanation" has been sought within, find their origins in French law in the late 1970s.[15] Some argue that a "right to explanation" is at best unnecessary, at worst harmful, and threatens to stifle innovation. Specific criticisms include: favoring human decisions over machine decisions, being redundant with existing laws, and focusing on process over outcome.[16] Authors of study “Slave to the Algorithm? Why a 'Right to an Explanation' Is Probably Not the Remedy You Are Looking For” Lilian Edwards and Michael Veale argue that a right to explanation is not the solution to harms caused to stakeholders by algorithmic decisions. They also state that the right of explanation in the GDPR is narrowly-defined, and is not compatible with how modern machine learning technologies are being developed. With these limitations, defining transparency within the context ofalgorithmic accountabilityremains a problem. For example, providing the source code of algorithms may not be sufficient and may create other problems in terms of privacy disclosures and the gaming of technical systems. To mitigate this issue, Edwards and Veale argue that an auditing system could be more effective, to allow auditors to look at the inputs and outputs of a decision process from an external shell, in other words, “explaining black boxes without opening them.”[8] Similarly, Oxford scholars Bryce Goodman and Seth Flaxman assert that the GDPR creates a ‘right to explanation’, but does not elaborate much beyond that point, stating the limitations in the current GDPR. In regards to this debate, scholars Andrew D Selbst and Julia Powles state that the debate should redirect to discussing whether one uses the phrase ‘right to explanation’ or not, more attention must be paid to the GDPR's express requirements and how they relate to its background goals, and more thought must be given to determining what the legislative text actually means.[17] More fundamentally, many algorithms used in machine learning are not easily explainable. For example, the output of adeep neural networkdepends on many layers of computations, connected in a complex way, and no one input or computation may be a dominant factor. The field ofExplainable AIseeks to provide better explanations from existing algorithms, and algorithms that are more easily explainable, but it is a young and active field.[18][19] Others argue that the difficulties with explainability are due to its overly narrow focus on technical solutions rather than connecting the issue to the wider questions raised by a "social right to explanation."[1] Edwards and Veale see the right to explanation as providing some grounds for explanations about specific decisions. They discuss two types of algorithmic explanations, model centric explanations and subject-centric explanations (SCEs), which are broadly aligned with explanations about systems or decisions.[8] SCEs are seen as the best way to provide for some remedy, although with some severe constraints if the data is just too complex. 
Their proposal is to break down the full model and focus on particular issues through pedagogical explanations of a particular query, "which could be real or could be fictitious or exploratory". These explanations will necessarily involve trade-offs with accuracy in order to reduce complexity. With growing interest in the explanation of technical decision-making systems in the field of human–computer interaction design, researchers and designers have put effort into opening the black box in terms of mathematically interpretable models, which can be removed from cognitive science and the actual needs of people. An alternative approach is to allow users to explore the system's behaviour freely through interactive explanations. One of Edwards and Veale's proposals is to partially remove transparency as a necessary key step towards accountability and redress. They argue that people trying to tackle data protection issues desire an action, not an explanation. The actual value of an explanation is not to relieve or redress the emotional or economic damage suffered, but to understand why something happened and to help ensure the mistake does not happen again.[8] On a broader scale, in the study Explainable machine learning in deployment, the authors recommend building an explainability framework that clearly establishes the desiderata by identifying stakeholders, engaging with those stakeholders, and understanding the purpose of the explanation. Alongside this, concerns about explainability, such as issues of causality, privacy, and performance improvement, must be taken into account in the design of the system.[20]
https://en.wikipedia.org/wiki/Right_to_explanation
Accumulated local effects (ALE) is a machine learning interpretability method.[1] ALE uses the conditional feature distribution as an input and generates augmented data, creating more realistic data than a marginal distribution would.[2] It ignores far out-of-distribution (outlier) values.[1] Unlike partial dependence plots and marginal plots, ALE is not defeated in the presence of correlated predictors.[3] Rather than averaging model predictions themselves, ALE averages the differences in model predictions over the augmented data and accumulates these local differences.[2] For example, given a model that predicts house prices from the distance to the city centre and the size of the building area, ALE compares the differences in predictions between houses whose sizes fall within the same narrow window. The result separates the impact of size from the effect of otherwise correlated features.[1] Defining the evaluation windows is subjective, and very high correlations between features can still defeat the technique.[1][3] ALE requires more observations, distributed more uniformly, than PDP so that the conditional distribution can be reliably determined. The technique may produce inadequate results if the data is highly sparse, which is more common with high-dimensional data (curse of dimensionality).[2]
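In code, the method reduces to computing prediction differences across narrow feature windows and accumulating them. The following is a simplified first-order ALE sketch: it assumes a fitted model exposing a predict(X) method, a NumPy feature matrix X, quantile-based bin edges and simple unweighted centering — all choices made for brevity here, not prescribed by the method itself.

```python
import numpy as np

def ale_1d(predict, X, feature, n_bins=20):
    """Simplified first-order accumulated local effects for one column of X."""
    x = X[:, feature]
    # Bin edges from empirical quantiles, so each bin holds comparable numbers of points.
    edges = np.unique(np.quantile(x, np.linspace(0, 1, n_bins + 1)))
    idx = np.clip(np.digitize(x, edges[1:-1]), 0, len(edges) - 2)  # bin index per row

    local_effects = np.zeros(len(edges) - 1)
    for b in range(len(edges) - 1):
        members = np.where(idx == b)[0]
        if members.size == 0:
            continue
        lo, hi = X[members].copy(), X[members].copy()
        lo[:, feature] = edges[b]          # move each point in the bin to the lower edge
        hi[:, feature] = edges[b + 1]      # ... and to the upper edge
        # Average *difference* in predictions inside the bin (the "local effect").
        local_effects[b] = np.mean(predict(hi) - predict(lo))

    ale = np.cumsum(local_effects)         # accumulate across bins
    ale -= ale.mean()                      # simple (unweighted) centering for brevity
    centers = (edges[:-1] + edges[1:]) / 2
    return centers, ale

# Example usage with an assumed fitted model `model` and data matrix `X`:
#   centers, ale = ale_1d(model.predict, X, feature=0)
```

Plotting ale against centers gives the usual ALE curve for that feature.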
https://en.wikipedia.org/wiki/Accumulated_local_effects
Instatistics, amixture modelis aprobabilistic modelfor representing the presence ofsubpopulationswithin an overall population, without requiring that an observed data set should identify the sub-population to which an individual observation belongs. Formally a mixture model corresponds to themixture distributionthat represents theprobability distributionof observations in the overall population. However, while problems associated with "mixture distributions" relate to deriving the properties of the overall population from those of the sub-populations, "mixture models" are used to makestatistical inferencesabout the properties of the sub-populations given only observations on the pooled population, without sub-population identity information. Mixture models are used for clustering, under the namemodel-based clustering, and also fordensity estimation. Mixture models should not be confused with models forcompositional data, i.e., data whose components are constrained to sum to a constant value (1, 100%, etc.). However, compositional models can be thought of as mixture models, where members of the population are sampled at random. Conversely, mixture models can be thought of as compositional models, where thetotal sizereading population has been normalized to 1. A typical finite-dimensional mixture model is ahierarchical modelconsisting of the following components: In addition, in aBayesian setting, the mixture weights and parameters will themselves be random variables, andprior distributionswill be placed over the variables. In such a case, the weights are typically viewed as aK-dimensional random vector drawn from aDirichlet distribution(theconjugate priorof the categorical distribution), and the parameters will be distributed according to their respective conjugate priors. Mathematically, a basic parametric mixture model can be described as follows: In a Bayesian setting, all parameters are associated with random variables, as follows: This characterization usesFandHto describe arbitrary distributions over observations and parameters, respectively. TypicallyHwill be theconjugate priorofF. The two most common choices ofFareGaussianaka "normal" (for real-valued observations) andcategorical(for discrete observations). Other common possibilities for the distribution of the mixture components are: A typical non-BayesianGaussianmixture model looks like this: A Bayesian version of aGaussianmixture model is as follows: A Bayesian Gaussian mixture model is commonly extended to fit a vector of unknown parameters (denoted in bold), or multivariate normal distributions. In a multivariate distribution (i.e. one modelling a vectorx{\displaystyle {\boldsymbol {x}}}withNrandom variables) one may model a vector of parameters (such as several observations of a signal or patches within an image) using a Gaussian mixture model prior distribution on the vector of estimates given byp(θ)=∑i=1KϕiN(μi,Σi){\displaystyle p({\boldsymbol {\theta }})=\sum _{i=1}^{K}\phi _{i}{\mathcal {N}}({\boldsymbol {\mu }}_{i},{\boldsymbol {\Sigma }}_{i})}where theithvector component is characterized by normal distributions with weightsϕi{\displaystyle \phi _{i}}, meansμi{\displaystyle {\boldsymbol {\mu }}_{i}}and covariance matricesΣi{\displaystyle {\boldsymbol {\Sigma }}_{i}}. 
To incorporate this prior into a Bayesian estimation, the prior is multiplied with the known distributionp(x|θ){\displaystyle p({\boldsymbol {x|\theta }})}of the datax{\displaystyle {\boldsymbol {x}}}conditioned on the parametersθ{\displaystyle {\boldsymbol {\theta }}}to be estimated. With this formulation, theposterior distributionp(θ|x){\displaystyle p({\boldsymbol {\theta |x}})}isalsoa Gaussian mixture model of the formp(θ|x)=∑i=1Kϕ~iN(μ~i,Σ~i){\displaystyle p({\boldsymbol {\theta |x}})=\sum _{i=1}^{K}{\tilde {\phi }}_{i}{\mathcal {N}}({\boldsymbol {{\tilde {\mu }}_{i}}},{\boldsymbol {\tilde {\Sigma }}}_{i})}with new parametersϕ~i,μ~i{\displaystyle {\tilde {\phi }}_{i},{\boldsymbol {\tilde {\mu }}}_{i}}andΣ~i{\displaystyle {\boldsymbol {\tilde {\Sigma }}}_{i}}that are updated using theEM algorithm.[2]Although EM-based parameter updates are well-established, providing the initial estimates for these parameters is currently an area of active research. Note that this formulation yields a closed-form solution to the complete posterior distribution. Estimations of the random variableθ{\displaystyle {\boldsymbol {\theta }}}may be obtained via one of several estimators, such as the mean or maximum of the posterior distribution. Such distributions are useful for assuming patch-wise shapes of images and clusters, for example. In the case of image representation, each Gaussian may be tilted, expanded, and warped according to the covariance matricesΣi{\displaystyle {\boldsymbol {\Sigma }}_{i}}. One Gaussian distribution of the set is fit to each patch (usually of size 8×8 pixels) in the image. Notably, any distribution of points around a cluster (seek-means) may be accurately given enough Gaussian components, but scarcely overK=20 components are needed to accurately model a given image distribution or cluster of data. A typical non-Bayesian mixture model withcategoricalobservations looks like this: The random variables: A typical Bayesian mixture model withcategoricalobservations looks like this: The random variables: Financial returns often behave differently in normal situations and during crisis times. A mixture model[3]for return data seems reasonable. Sometimes the model used is ajump-diffusion model, or as a mixture of two normal distributions. SeeFinancial economics § Challenges and criticismandFinancial risk management § Bankingfor further context. Assume that we observe the prices ofNdifferent houses. Different types of houses in different neighborhoods will have vastly different prices, but the price of a particular type of house in a particular neighborhood (e.g., three-bedroom house in moderately upscale neighborhood) will tend to cluster fairly closely around the mean. One possible model of such prices would be to assume that the prices are accurately described by a mixture model withKdifferent components, each distributed as anormal distributionwith unknown mean and variance, with each component specifying a particular combination of house type/neighborhood. Fitting this model to observed prices, e.g., using theexpectation-maximization algorithm, would tend to cluster the prices according to house type/neighborhood and reveal the spread of prices in each type/neighborhood. (Note that for values such as prices or incomes that are guaranteed to be positive and which tend to growexponentially, alog-normal distributionmight actually be a better model than a normal distribution.) 
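A minimal sketch of such a fit, using the Gaussian mixture implementation in scikit-learn (the synthetic two-cluster "price" data and the choice of K = 2 components are assumptions of this example):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)

# Synthetic "prices" from two neighbourhood/type clusters (assumed illustrative values).
prices = np.concatenate([
    rng.normal(250_000, 30_000, 400),     # cluster 1
    rng.normal(600_000, 80_000, 200),     # cluster 2
]).reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(prices)   # EM under the hood
labels = gmm.predict(prices)              # hard cluster assignment per observation
resp = gmm.predict_proba(prices)          # soft "membership" probabilities

print("weights :", np.round(gmm.weights_, 2))
print("means   :", np.round(gmm.means_.ravel(), 0))
print("std devs:", np.round(np.sqrt(gmm.covariances_).ravel(), 0))
```

Here predict_proba returns the soft membership of each observation in each component, which is exactly the quantity the EM fitting procedure iterates on internally.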
Assume that a document is composed ofNdifferent words from a total vocabulary of sizeV, where each word corresponds to one ofKpossible topics. The distribution of such words could be modelled as a mixture ofKdifferentV-dimensionalcategorical distributions. A model of this sort is commonly termed atopic model. Note thatexpectation maximizationapplied to such a model will typically fail to produce realistic results, due (among other things) to theexcessive number of parameters. Some sorts of additional assumptions are typically necessary to get good results. Typically two sorts of additional components are added to the model: The following example is based on an example inChristopher M. Bishop,Pattern Recognition and Machine Learning.[4] Imagine that we are given anN×Nblack-and-white image that is known to be a scan of a hand-written digit between 0 and 9, but we don't know which digit is written. We can create a mixture model withK=10{\displaystyle K=10}different components, where each component is a vector of sizeN2{\displaystyle N^{2}}ofBernoulli distributions(one per pixel). Such a model can be trained with theexpectation-maximization algorithmon an unlabeled set of hand-written digits, and will effectively cluster the images according to the digit being written. The same model could then be used to recognize the digit of another image simply by holding the parameters constant, computing the probability of the new image for each possible digit (a trivial calculation), and returning the digit that generated the highest probability. Mixture models apply in the problem of directing multiple projectiles at a target (as in air, land, or sea defense applications), where the physical and/or statistical characteristics of the projectiles differ within the multiple projectiles. An example might be shots from multiple munitions types or shots from multiple locations directed at one target. The combination of projectile types may be characterized as a Gaussian mixture model.[5]Further, a well-known measure of accuracy for a group of projectiles is thecircular error probable(CEP), which is the numberRsuch that, on average, half of the group of projectiles falls within the circle of radiusRabout the target point. The mixture model can be used to determine (or estimate) the valueR. The mixture model properly captures the different types of projectiles. The financial example above is one direct application of the mixture model, a situation in which we assume an underlying mechanism so that each observation belongs to one of some number of different sources or categories. This underlying mechanism may or may not, however, be observable. In this form of mixture, each of the sources is described by a component probability density function, and its mixture weight is the probability that an observation comes from this component. In an indirect application of the mixture model we do not assume such a mechanism. The mixture model is simply used for its mathematical flexibilities. For example, a mixture of twonormal distributionswith different means may result in a density with twomodes, which is not modeled by standard parametric distributions. Another example is given by the possibility of mixture distributions to model fatter tails than the basic Gaussian ones, so as to be a candidate for modeling more extreme events. The mixture model-based clustering is also predominantly used in identifying the state of the machine inpredictive maintenance. 
Density plots are used to analyze the density of high dimensional features. If multi-model densities are observed, then it is assumed that a finite set of densities are formed by a finite set of normal mixtures. A multivariate Gaussian mixture model is used to cluster the feature data into k number of groups where k represents each state of the machine. The machine state can be a normal state, power off state, or faulty state.[6]Each formed cluster can be diagnosed using techniques such as spectral analysis. In the recent years, this has also been widely used in other areas such as early fault detection.[7] In image processing and computer vision, traditionalimage segmentationmodels often assign to onepixelonly one exclusive pattern. In fuzzy or soft segmentation, any pattern can have certain "ownership" over any single pixel. If the patterns are Gaussian, fuzzy segmentation naturally results in Gaussian mixtures. Combined with other analytic or geometric tools (e.g., phase transitions over diffusive boundaries), such spatially regularized mixture models could lead to more realistic and computationally efficient segmentation methods.[8] Probabilistic mixture models such asGaussian mixture models(GMM) are used to resolvepoint set registrationproblems in image processing and computer vision fields. For pair-wisepoint set registration, one point set is regarded as the centroids of mixture models, and the other point set is regarded as data points (observations). State-of-the-art methods are e.g.coherent point drift(CPD)[9]andStudent's t-distributionmixture models (TMM).[10]The result of recent research demonstrate the superiority of hybrid mixture models[11](e.g. combining Student's t-distribution and Watson distribution/Bingham distributionto model spatial positions and axes orientations separately) compare to CPD and TMM, in terms of inherent robustness, accuracy and discriminative capacity. Identifiability refers to the existence of a unique characterization for any one of the models in the class (family) being considered. Estimation procedures may not be well-defined and asymptotic theory may not hold if a model is not identifiable. LetJbe the class of all binomial distributions withn= 2. Then a mixture of two members ofJwould have p0=π(1−θ1)2+(1−π)(1−θ2)2p1=2πθ1(1−θ1)+2(1−π)θ2(1−θ2){\displaystyle {\begin{aligned}p_{0}&=\pi {\left(1-\theta _{1}\right)}^{2}+\left(1-\pi \right){\left(1-\theta _{2}\right)}^{2}\\[1ex]p_{1}&=2\pi \theta _{1}\left(1-\theta _{1}\right)+2\left(1-\pi \right)\theta _{2}\left(1-\theta _{2}\right)\end{aligned}}} andp2= 1 −p0−p1. Clearly, givenp0andp1, it is not possible to determine the above mixture model uniquely, as there are three parameters(π,θ1,θ2)to be determined. Consider a mixture of parametric distributions of the same class. Let J={f(⋅;θ):θ∈Ω}{\displaystyle J=\{f(\cdot ;\theta ):\theta \in \Omega \}} be the class of all component distributions. Then theconvex hullKofJdefines the class of all finite mixture of distributions inJ: K={p(⋅):p(⋅)=∑i=1naifi(⋅;θi),ai>0,∑i=1nai=1,fi(⋅;θi)∈J∀i,n}{\displaystyle K=\left\{p(\cdot ):p(\cdot )=\sum _{i=1}^{n}a_{i}f_{i}(\cdot ;\theta _{i}),a_{i}>0,\sum _{i=1}^{n}a_{i}=1,f_{i}(\cdot ;\theta _{i})\in J\ \forall i,n\right\}} Kis said to be identifiable if all its members are unique, that is, given two memberspandp′inK, being mixtures ofkdistributions andk′distributions respectively inJ, we havep=p′if and only if, first of all,k=k′and secondly we can reorder the summations such thatai=ai′andfi=fi′for alli. 
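The binomial example above can be checked numerically: the two parameter triples below (chosen for this sketch) are different, yet they induce exactly the same distribution (p0, p1, p2), so the parameters cannot be recovered from that distribution alone.

```python
def binomial2_mixture(pi, t1, t2):
    """(p0, p1, p2) for a two-component mixture of Binomial(n=2, theta) distributions."""
    p0 = pi * (1 - t1) ** 2 + (1 - pi) * (1 - t2) ** 2
    p1 = 2 * pi * t1 * (1 - t1) + 2 * (1 - pi) * t2 * (1 - t2)
    return p0, p1, 1 - p0 - p1

# Two different parameter triples (pi, theta1, theta2) with matching mixing moments:
a = binomial2_mixture(0.5, 0.2, 0.6)
b = binomial2_mixture(0.2, 0.0, 0.5)
print(a)   # -> approximately (0.4, 0.4, 0.2)
print(b)   # -> the same (p0, p1, p2): the mixture is not identifiable
```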
Parametric mixture models are often used when we know the distributionYand we can sample fromX, but we would like to determine theaiandθivalues. Such situations can arise in studies in which we sample from a population that is composed of several distinct subpopulations. It is common to think of probability mixture modeling as a missing data problem. One way to understand this is to assume that the data points under consideration have "membership" in one of the distributions we are using to model the data. When we start, this membership is unknown, or missing. The job of estimation is to devise appropriate parameters for the model functions we choose, with the connection to the data points being represented as their membership in the individual model distributions. A variety of approaches to the problem of mixture decomposition have been proposed, many of which focus on maximum likelihood methods such asexpectation maximization(EM) or maximuma posterioriestimation (MAP). Generally these methods consider separately the questions of system identification and parameter estimation; methods to determine the number and functional form of components within a mixture are distinguished from methods to estimate the corresponding parameter values. Some notable departures are the graphical methods as outlined in Tarter and Lock[12]and more recentlyminimum message length(MML) techniques such as Figueiredo and Jain[13]and to some extent the moment matching pattern analysis routines suggested by McWilliam and Loh (2009).[14] Expectation maximization(EM) is seemingly the most popular technique used to determine the parameters of a mixture with ana priorigiven number of components. This is a particular way of implementingmaximum likelihoodestimation for this problem. EM is of particular appeal for finite normal mixtures where closed-form expressions are possible such as in the following iterative algorithm by Dempsteret al.(1977)[15] with the posterior probabilities Thus on the basis of the current estimate for the parameters, theconditional probabilityfor a given observationx(t)being generated from statesis determined for eacht= 1, …,N;Nbeing the sample size. The parameters are then updated such that the new component weights correspond to the average conditional probability and each component mean and covariance is the component specific weighted average of the mean and covariance of the entire sample. Dempster[15]also showed that each successive EM iteration will not decrease the likelihood, a property not shared by other gradient based maximization techniques. Moreover, EM naturally embeds within it constraints on the probability vector, and for sufficiently large sample sizes positive definiteness of the covariance iterates. This is a key advantage since explicitly constrained methods incur extra computational costs to check and maintain appropriate values. Theoretically EM is a first-order algorithm and as such converges slowly to a fixed-point solution. Redner and Walker (1984)[full citation needed]make this point arguing in favour of superlinear and second order Newton and quasi-Newton methods and reporting slow convergence in EM on the basis of their empirical tests. They do concede that convergence in likelihood was rapid even if convergence in the parameter values themselves was not. 
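Concretely, for a two-component, one-dimensional Gaussian mixture the iteration just described looks as follows: the E-step computes the posterior responsibility of each component for each point, and the M-step sets the weights to the average responsibilities and the means and variances to responsibility-weighted averages. All numerical values below are assumed purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic 1-D data from two Gaussians (assumed illustrative ground truth).
x = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(3.0, 1.5, 700)])

# Initial guesses for weights, means and variances of the two components.
w = np.array([0.5, 0.5])
mu = np.array([-1.0, 1.0])
var = np.array([1.0, 1.0])

def normal_pdf(x, mu, var):
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

for _ in range(200):
    # E-step: posterior probability ("responsibility") that each point came from each component.
    dens = np.stack([w[k] * normal_pdf(x, mu[k], var[k]) for k in range(2)], axis=1)
    resp = dens / dens.sum(axis=1, keepdims=True)

    # M-step: weights are average responsibilities; means and variances are
    # responsibility-weighted averages, as described in the text.
    Nk = resp.sum(axis=0)
    w = Nk / len(x)
    mu = (resp * x[:, None]).sum(axis=0) / Nk
    var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / Nk

print("weights :", np.round(w, 3))
print("means   :", np.round(mu, 3))
print("std devs:", np.round(np.sqrt(var), 3))
```

Each pass through the loop is one EM iteration; as noted above, the likelihood never decreases from one iteration to the next.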
The relative merits of EM and other algorithms vis-à-vis convergence have been discussed in other literature.[16] Other common objections to the use of EM are that it has a propensity to spuriously identify local maxima, as well as displaying sensitivity to initial values.[17][18]One may address these problems by evaluating EM at several initial points in the parameter space but this is computationally costly and other approaches, such as the annealing EM method of Udea and Nakano (1998) (in which the initial components are essentially forced to overlap, providing a less heterogeneous basis for initial guesses), may be preferable. Figueiredo and Jain[13]note that convergence to 'meaningless' parameter values obtained at the boundary (where regularity conditions breakdown, e.g., Ghosh and Sen (1985)) is frequently observed when the number of model components exceeds the optimal/true one. On this basis they suggest a unified approach to estimation and identification in which the initialnis chosen to greatly exceed the expected optimal value. Their optimization routine is constructed via a minimum message length (MML) criterion that effectively eliminates a candidate component if there is insufficient information to support it. In this way it is possible to systematize reductions innand consider estimation and identification jointly. With initial guesses for the parameters of our mixture model, "partial membership" of each data point in each constituent distribution is computed by calculatingexpectation valuesfor the membership variables of each data point. That is, for each data pointxjand distributionYi, the membership valueyi,jis: With expectation values in hand for group membership,plug-in estimatesare recomputed for the distribution parameters. The mixing coefficientsaiare themeansof the membership values over theNdata points. The component model parametersθiare also calculated by expectation maximization using data pointsxjthat have been weighted using the membership values. For example, ifθis a meanμ With new estimates foraiand theθi's, the expectation step is repeated to recompute new membership values. The entire procedure is repeated until model parameters converge. As an alternative to the EM algorithm, the mixture model parameters can be deduced usingposterior samplingas indicated byBayes' theorem. This is still regarded as an incomplete data problem in which membership of data points is the missing data. A two-step iterative procedure known asGibbs samplingcan be used. The previous example of a mixture of twoGaussian distributionscan demonstrate how the method works. As before, initial guesses of the parameters for the mixture model are made. Instead of computing partial memberships for each elemental distribution, a membership value for each data point is drawn from aBernoulli distribution(that is, it will be assigned to either the first or the second Gaussian). The Bernoulli parameterθis determined for each data point on the basis of one of the constituent distributions.[vague]Draws from the distribution generate membership associations for each data point. Plug-in estimators can then be used as in the M step of EM to generate a new set of mixture model parameters, and the binomial draw step repeated. Themethod of moment matchingis one of the oldest techniques for determining the mixture parameters dating back to Karl Pearson's seminal work of 1894. 
The method of moment matching is one of the oldest techniques for determining the mixture parameters, dating back to Karl Pearson's seminal work of 1894. In this approach the parameters of the mixture are determined such that the composite distribution has moments matching some given value. In many instances extraction of solutions to the moment equations may present non-trivial algebraic or computational problems. Moreover, numerical analysis by Day[19] has indicated that such methods may be inefficient compared to EM. Nonetheless, there has been renewed interest in this method, e.g., Craigmile and Titterington (1998) and Wang.[20] McWilliam and Loh (2009) consider the characterisation of a hyper-cuboid normal mixture copula in large dimensional systems for which EM would be computationally prohibitive. Here a pattern analysis routine is used to generate multivariate tail-dependencies consistent with a set of univariate and (in some sense) bivariate moments. The performance of this method is then evaluated using equity log-return data, with Kolmogorov–Smirnov test statistics suggesting a good descriptive fit. Some problems in mixture model estimation can be solved using spectral methods. These are particularly useful when the data points xi are points in high-dimensional real space and the hidden distributions are known to be log-concave (such as the Gaussian distribution or the exponential distribution). Spectral methods of learning mixture models are based on the use of the singular value decomposition of a matrix which contains the data points. The idea is to consider the top k singular vectors, where k is the number of distributions to be learned. The projection of each data point to a linear subspace spanned by those vectors groups points originating from the same distribution very close together, while points from different distributions stay far apart. One distinctive feature of the spectral method is that it allows us to prove that if the distributions satisfy a certain separation condition (e.g., not too close), then the estimated mixture will be very close to the true one with high probability. Tarter and Lock[12] describe a graphical approach to mixture identification in which a kernel function is applied to an empirical frequency plot so as to reduce intra-component variance. In this way one may more readily identify components having differing means. While this λ-method does not require prior knowledge of the number or functional form of the components, its success does rely on the choice of the kernel parameters, which to some extent implicitly embeds assumptions about the component structure. Some spectral methods can even provably learn mixtures of heavy-tailed distributions, including those with infinite variance. In this setting, EM-based methods would not work, since the expectation step would diverge due to the presence of outliers. To simulate a sample of size N from a mixture of distributions Fi, i = 1 to n, with probabilities pi (with ∑i pi = 1): for each of the N draws, first select a component index i at random according to the probabilities pi, and then generate one observation from the selected distribution Fi (see the short sketch below). In a Bayesian setting, additional levels can be added to the graphical model defining the mixture model. For example, in the common latent Dirichlet allocation topic model, the observations are sets of words drawn from D different documents and the K mixture components represent topics that are shared across documents. Each document has a different set of mixture weights, which specify the topics prevalent in that document. All sets of mixture weights share common hyperparameters. A very common extension is to connect the latent variables defining the mixture component identities into a Markov chain, instead of assuming that they are independent identically distributed random variables.
The resulting model is termed ahidden Markov modeland is one of the most common sequential hierarchical models. Numerous extensions of hidden Markov models have been developed; see the resulting article for more information. Mixture distributions and the problem of mixture decomposition, that is the identification of its constituent components and the parameters thereof, has been cited in the literature as far back as 1846 (Quetelet in McLachlan,[17]2000) although common reference is made to the work ofKarl Pearson(1894)[21]as the first author to explicitly address the decomposition problem in characterising non-normal attributes of forehead to body length ratios in female shore crab populations. The motivation for this work was provided by the zoologistWalter Frank Raphael Weldonwho had speculated in 1893 (in Tarter and Lock[12]) that asymmetry in the histogram of these ratios could signal evolutionary divergence. Pearson's approach was to fit a univariate mixture of two normals to the data by choosing the five parameters of the mixture such that the empirical moments matched that of the model. While his work was successful in identifying two potentially distinct sub-populations and in demonstrating the flexibility of mixtures as a moment matching tool, the formulation required the solution of a 9th degree (nonic) polynomial which at the time posed a significant computational challenge. Subsequent works focused on addressing these problems, but it was not until the advent of the modern computer and the popularisation ofMaximum Likelihood(MLE) parameterisation techniques that research really took off.[22]Since that time there has been a vast body of research on the subject spanning areas such asfisheries research,agriculture,botany,economics,medicine,genetics,psychology,palaeontology,electrophoresis,finance,geologyandzoology.[23]
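The simulation recipe mentioned earlier (pick a component index i with probability pi, then draw from Fi) can be sketched as follows; the helper name and the example parameters are arbitrary illustrations.

```python
import numpy as np

def sample_mixture(n, weights, samplers, seed=0):
    """Draw n points from a mixture: pick a component index i with probability
    p_i, then draw the observation from the chosen distribution F_i."""
    rng = np.random.default_rng(seed)
    labels = rng.choice(len(weights), size=n, p=weights)
    draws = np.array([samplers[i](rng) for i in labels])
    return draws, labels

# Example with made-up parameters: a 60/40 mixture of two normal distributions.
draws, labels = sample_mixture(
    10_000,
    weights=[0.6, 0.4],
    samplers=[lambda r: r.normal(0.0, 1.0), lambda r: r.normal(4.0, 0.5)],
)
print(draws.mean(), np.bincount(labels) / len(labels))
```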
https://en.wikipedia.org/wiki/Mixture_model
In probability theory, de Finetti's theorem states that exchangeable observations are conditionally independent relative to some latent variable. An epistemic probability distribution could then be assigned to this variable. It is named in honor of Bruno de Finetti, and one of its uses is in providing a pragmatic approach to de Finetti's well-known dictum "Probability does not exist".[1] For the special case of an exchangeable sequence of Bernoulli random variables it states that such a sequence is a "mixture" of sequences of independent and identically distributed (i.i.d.) Bernoulli random variables. A sequence of random variables is called exchangeable if the joint distribution of the sequence is unchanged by any permutation of a finite set of indices. In general, while the variables of the exchangeable sequence are not themselves independent, only exchangeable, there is an underlying family of i.i.d. random variables. That is, there are underlying, generally unobservable, quantities that are i.i.d. – exchangeable sequences are mixtures of i.i.d. sequences. A Bayesian statistician often seeks the conditional probability distribution of a random quantity given the data. The concept of exchangeability was introduced by de Finetti. De Finetti's theorem explains a mathematical relationship between independence and exchangeability.[2] An infinite sequence of random variables is said to be exchangeable if for any natural number n, any finite sequence of indices i1, ..., in and any permutation π: {i1, ..., in} → {i1, ..., in}, the random vectors (X_{i1}, ..., X_{in}) and (X_{π(i1)}, ..., X_{π(in)}) have the same joint probability distribution. If an identically distributed sequence is independent, then the sequence is exchangeable; however, the converse is false—there exist exchangeable random variables that are not statistically independent, for example the Pólya urn model. A random variable X has a Bernoulli distribution if Pr(X = 1) = p and Pr(X = 0) = 1 − p for some p ∈ (0, 1). De Finetti's theorem states that the probability distribution of any infinite exchangeable sequence of Bernoulli random variables is a "mixture" of the probability distributions of independent and identically distributed sequences of Bernoulli random variables. "Mixture", in this sense, means a weighted average, but this need not mean a finite or countably infinite (i.e., discrete) weighted average: it can be an integral over a measure rather than a sum. More precisely, suppose X1, X2, X3, ... is an infinite exchangeable sequence of Bernoulli-distributed random variables. Then there is some probability measure m on the interval [0, 1] and some random variable Y with distribution m such that, conditional on Y = y, the variables X1, X2, X3, ... are i.i.d. Bernoulli(y); equivalently, for every n and every x1, ..., xn ∈ {0, 1}, P(X1 = x1, ..., Xn = xn) = ∫_0^1 p^k (1 − p)^{n−k} m(dp), where k = x1 + ⋯ + xn. Suppose X1, X2, X3, … is an infinite exchangeable sequence of Bernoulli random variables. Then X1, X2, X3, … are conditionally independent and identically distributed given the exchangeable sigma-algebra (i.e., the sigma-algebra consisting of events that are measurable with respect to X1, X2, … and invariant under finite permutations of the indices). According to David Spiegelhalter[1] the theorem provides a pragmatic approach to de Finetti's statement that "Probability does not exist". If our view of the probability of a sequence of events is subjective but remains unaffected by the order in which we make our observations, then the sequence can be regarded as exchangeable.
De Finetti's theorem then implies that believing the sequence to be exchangeable is mathematically equivalent to acting as if the events are independent and have an objective underlying probability of occurring, with our uncertainty about what that probability is being expressed by a subjective probability distribution function. According to Spiegelhalter: "This is remarkable: it shows that, starting from a specific, but purely subjective, expression of convictions, we should act as if events were driven by objective chances." As a concrete example, we construct a sequence of random variables by "mixing" two i.i.d. sequences as follows. We assume p = 2/3 with probability 1/2 and p = 9/10 with probability 1/2. Given the event p = 2/3, the conditional distribution of the sequence is that the Xi are independent and identically distributed and X1 = 1 with probability 2/3 and X1 = 0 with probability 1 − 2/3. Given the event p = 9/10, the conditional distribution of the sequence is that the Xi are independent and identically distributed and X1 = 1 with probability 9/10 and X1 = 0 with probability 1 − 9/10. This can be interpreted as follows: make two biased coins, one showing "heads" with 2/3 probability and one showing "heads" with 9/10 probability. Flip a fair coin once to decide which biased coin to use for all flips that are recorded. Here "heads" at flip i means Xi = 1. The independence asserted here is conditional independence, i.e. the Bernoulli random variables in the sequence are conditionally independent given the event that p = 2/3, and are conditionally independent given the event that p = 9/10. But they are not unconditionally independent; they are positively correlated. In view of the strong law of large numbers, we can say that the limiting relative frequency of 1s in the sequence, lim (X1 + ⋯ + Xn)/n as n → ∞, exists almost surely and equals 2/3 with probability 1/2 and 9/10 with probability 1/2. Rather than concentrating probability 1/2 at each of two points between 0 and 1, the "mixing distribution" can be any probability distribution supported on the interval from 0 to 1; which one it is depends on the joint distribution of the infinite sequence of Bernoulli random variables. The definition of exchangeability, and the statement of the theorem, also make sense for finite length sequences, but the theorem is not generally true in that case. It is true if the sequence can be extended to an exchangeable sequence that is infinitely long. The simplest example of an exchangeable sequence of Bernoulli random variables that cannot be so extended is the one in which X1 = 1 − X2 and X1 is either 0 or 1, each with probability 1/2. This sequence is exchangeable, but cannot be extended to an exchangeable sequence of length 3, let alone an infinitely long one. De Finetti's theorem can be expressed as a categorical limit in the category of Markov kernels.[3][4][5] Let (X, A) be a standard Borel space, and consider the space of sequences on X, the countable product X^N (equipped with the product sigma-algebra). Given a finite permutation σ, denote again by σ the permutation action on X^N, as well as the Markov kernel X^N → X^N induced by it. In terms of category theory, we have a diagram with a single object, X^N, and a countable number of arrows, one for each permutation. Recall now that a probability measure p is equivalently a Markov kernel from the one-point measurable space.
Aprobability measurep{\displaystyle p}onXN{\displaystyle X^{\mathbb {N} }}isexchangeableif and only if, as Markov kernels,σ∘p=p{\displaystyle \sigma \circ p=p}for every permutationσ{\displaystyle \sigma }. More generally, given any standard Borel spaceY{\displaystyle Y}, one can call a Markov kernelk:Y→X{\displaystyle k:Y\to X}exchangeableifσ∘k=k{\displaystyle \sigma \circ k=k}for everyσ{\displaystyle \sigma }, i.e. if the following diagram commutes, giving acone. De Finetti's theorem can be now stated as the fact that the spacePX{\displaystyle PX}ofprobability measuresoverX{\displaystyle X}(Giry monad) forms auniversal(orlimit) cone.[4]More in detail, consider the Markov kerneliidN:PX→XN{\displaystyle \mathrm {iid} _{\mathbb {N} }:PX\to X^{\mathbb {N} }}constructed as follows, using theKolmogorov extension theorem: for all measurable subsetsA1,…,An{\displaystyle A_{1},\dots ,A_{n}}ofX{\displaystyle X}. Note that we can interpret this kernel as taking a probability measurep∈PX{\displaystyle p\in PX}as input and returning aniid sequenceonXN{\displaystyle X^{\mathbb {N} }}distributed according top{\displaystyle p}. Since iid sequences are exchangeable,iidN:PX→XN{\displaystyle \mathrm {iid} _{\mathbb {N} }:PX\to X^{\mathbb {N} }}is an exchangeable kernel in the sense defined above. The kerneliidN:PX→XN{\displaystyle \mathrm {iid} _{\mathbb {N} }:PX\to X^{\mathbb {N} }}doesn't just form a cone, but alimitcone: given any exchangeable kernelk:Y→X{\displaystyle k:Y\to X}, there exists a unique kernelk~:Y→PX{\displaystyle {\tilde {k}}:Y\to PX}such thatk=iidN∘k~{\displaystyle k=\mathrm {iid} _{\mathbb {N} }\circ {\tilde {k}}}, i.e. making the following diagram commute: In particular, for any exchangeable probability measurep{\displaystyle p}onXN{\displaystyle X^{\mathbb {N} }}, there exists a unique probability measurep~{\displaystyle {\tilde {p}}}onPX{\displaystyle PX}(i.e. a probability measure over probability measures) such thatp=iidN∘p~{\displaystyle p=\mathrm {iid} _{\mathbb {N} }\circ {\tilde {p}}}, i.e. such that for all measurable subsetsA1,…,An{\displaystyle A_{1},\dots ,A_{n}}ofX{\displaystyle X}, In other words,p{\displaystyle p}is amixtureofiid measuresonX{\displaystyle X}(the ones formed byq{\displaystyle q}in the integral above). Versions of de Finetti's theorem forfiniteexchangeable sequences,[6][7]and forMarkov exchangeablesequences[8]have been proved by Diaconis and Freedman and by Kerns and Szekely. Two notions of partial exchangeability of arrays, known asseparateandjoint exchangeabilitylead to extensions of de Finetti's theorem for arrays by Aldous and Hoover.[9] The computable de Finetti theorem shows that if an exchangeable sequence of real random variables is given by a computer program, then a program which samples from the mixing measure can be automatically recovered.[10] In the setting offree probability, there is a noncommutative extension of de Finetti's theorem which characterizes noncommutative sequences invariant under quantum permutations.[11] Extensions of de Finetti's theorem to quantum states have been found to be useful inquantum information,[12][13][14]in topics likequantum key distribution[15]andentanglementdetection.[16]A multivariate extension of de Finetti’s theorem can be used to deriveBose–Einstein statisticsfrom the statistics of classical (i.e. independent) particles.[17]
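Returning to the two-biased-coins example above, a short simulation sketch (illustrative code, not from any reference implementation) shows both the exchangeable mixture construction and the resulting positive correlation between flips.

```python
import numpy as np

rng = np.random.default_rng(0)

def exchangeable_flips(n_flips, rng):
    """One realisation of the example: pick p = 2/3 or p = 9/10 with a fair coin,
    then flip a p-biased coin n_flips times (i.i.d. given p)."""
    p = 2 / 3 if rng.random() < 0.5 else 9 / 10
    return (rng.random(n_flips) < p).astype(int)

# Across many realisations the flips are exchangeable but positively correlated:
# observing X1 = 1 makes p = 9/10 more likely, which raises the chance that X2 = 1.
samples = np.array([exchangeable_flips(2, rng) for _ in range(100_000)])
x1, x2 = samples[:, 0], samples[:, 1]
print("P(X2=1)        approx.", x2.mean())            # about (2/3 + 9/10)/2 = 0.783
print("P(X2=1 | X1=1) approx.", x2[x1 == 1].mean())   # noticeably larger, about 0.80
```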
https://en.wikipedia.org/wiki/De_Finetti_theorem
Inmathematics, ameasurable spaceorBorel space[1]is a basic object inmeasure theory. It consists of asetand aσ-algebra, which defines thesubsetsthat will be measured. It captures and generalises intuitive notions such as length, area, and volume with a setX{\displaystyle X}of 'points' in the space, butregionsof the space are the elements of theσ-algebra, since the intuitive measures are not usually defined for points. The algebra also captures the relationships that might be expected of regions: that a region can be defined as an intersection of other regions, a union of other regions, or the space with the exception of another region. Consider a setX{\displaystyle X}and aσ-algebraF{\displaystyle {\mathcal {F}}}onX.{\displaystyle X.}Then the tuple(X,F){\displaystyle (X,{\mathcal {F}})}is called a measurable space.[2]The elements ofF{\displaystyle {\mathcal {F}}}are calledmeasurable setswithin the measurable space. Note that in contrast to ameasure space, nomeasureis needed for a measurable space. Look at the set:X={1,2,3}.{\displaystyle X=\{1,2,3\}.}One possibleσ{\displaystyle \sigma }-algebra would be:F1={X,∅}.{\displaystyle {\mathcal {F}}_{1}=\{X,\varnothing \}.}Then(X,F1){\displaystyle \left(X,{\mathcal {F}}_{1}\right)}is a measurable space. Another possibleσ{\displaystyle \sigma }-algebra would be thepower setonX{\displaystyle X}:F2=P(X).{\displaystyle {\mathcal {F}}_{2}={\mathcal {P}}(X).}With this, a second measurable space on the setX{\displaystyle X}is given by(X,F2).{\displaystyle \left(X,{\mathcal {F}}_{2}\right).} IfX{\displaystyle X}is finite or countably infinite, theσ{\displaystyle \sigma }-algebra is most often thepower setonX,{\displaystyle X,}soF=P(X).{\displaystyle {\mathcal {F}}={\mathcal {P}}(X).}This leads to the measurable space(X,P(X)).{\displaystyle (X,{\mathcal {P}}(X)).} IfX{\displaystyle X}is atopological space, theσ{\displaystyle \sigma }-algebra is most commonly theBorelσ{\displaystyle \sigma }-algebraB,{\displaystyle {\mathcal {B}},}soF=B(X).{\displaystyle {\mathcal {F}}={\mathcal {B}}(X).}This leads to the measurable space(X,B(X)){\displaystyle (X,{\mathcal {B}}(X))}that is common for all topological spaces such as the real numbersR.{\displaystyle \mathbb {R} .} The term Borel space is used for different types of measurable spaces. It can refer to Additionally, asemiringis aπ-systemwhere every complementB∖A{\displaystyle B\setminus A}is equal to a finitedisjoint unionof sets inF.{\displaystyle {\mathcal {F}}.}Asemialgebrais a semiring where every complementΩ∖A{\displaystyle \Omega \setminus A}is equal to a finitedisjoint unionof sets inF.{\displaystyle {\mathcal {F}}.}A,B,A1,A2,…{\displaystyle A,B,A_{1},A_{2},\ldots }are arbitrary elements ofF{\displaystyle {\mathcal {F}}}and it is assumed thatF≠∅.{\displaystyle {\mathcal {F}}\neq \varnothing .}
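For a finite set such as X = {1, 2, 3}, the defining closure properties of a σ-algebra can be checked directly; the following sketch is purely illustrative (for finite families, closure under pairwise unions already gives closure under countable unions).

```python
from itertools import chain, combinations

def is_sigma_algebra(X, F):
    """Check the (finite-set) sigma-algebra axioms for a family F of subsets of X:
    contains X, closed under complements, closed under unions."""
    F = {frozenset(A) for A in F}
    X = frozenset(X)
    if X not in F:
        return False
    if any(X - A not in F for A in F):
        return False
    return all(A | B in F for A in F for B in F)

X = {1, 2, 3}
F1 = [set(), X]                                    # the trivial sigma-algebra
F2 = [set(s) for s in chain.from_iterable(combinations(X, r) for r in range(4))]  # power set
print(is_sigma_algebra(X, F1), is_sigma_algebra(X, F2))   # True True
print(is_sigma_algebra(X, [set(), {1}, X]))               # False: the complement {2, 3} is missing
```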
https://en.wikipedia.org/wiki/Measurable_space
Inprobability theory, aMarkov kernel(also known as astochastic kernelorprobability kernel) is a map that in the general theory ofMarkov processesplays the role that thetransition matrixdoes in the theory of Markov processes with afinitestate space.[1] Let(X,A){\displaystyle (X,{\mathcal {A}})}and(Y,B){\displaystyle (Y,{\mathcal {B}})}bemeasurable spaces. AMarkov kernelwith source(X,A){\displaystyle (X,{\mathcal {A}})}and target(Y,B){\displaystyle (Y,{\mathcal {B}})}, sometimes written asκ:(X,A)→(Y,B){\displaystyle \kappa :(X,{\mathcal {A}})\to (Y,{\mathcal {B}})}, is a functionκ:B×X→[0,1]{\displaystyle \kappa :{\mathcal {B}}\times X\to [0,1]}with the following properties: In other words it associates to each pointx∈X{\displaystyle x\in X}aprobability measureκ(dy|x):B↦κ(B,x){\displaystyle \kappa (dy|x):B\mapsto \kappa (B,x)}on(Y,B){\displaystyle (Y,{\mathcal {B}})}such that, for every measurable setB∈B{\displaystyle B\in {\mathcal {B}}}, the mapx↦κ(B,x){\displaystyle x\mapsto \kappa (B,x)}is measurable with respect to theσ{\displaystyle \sigma }-algebraA{\displaystyle {\mathcal {A}}}.[2] TakeX=Y=Z{\displaystyle X=Y=\mathbb {Z} }, andA=B=P(Z){\displaystyle {\mathcal {A}}={\mathcal {B}}={\mathcal {P}}(\mathbb {Z} )}(thepower setofZ{\displaystyle \mathbb {Z} }). Then a Markov kernel is fully determined by the probability it assigns to singletons{m},m∈Y=Z{\displaystyle \{m\},\,m\in Y=\mathbb {Z} }for eachn∈X=Z{\displaystyle n\in X=\mathbb {Z} }: Now the random walkκ{\displaystyle \kappa }that goes to the right with probabilityp{\displaystyle p}and to the left with probability1−p{\displaystyle 1-p}is defined by whereδ{\displaystyle \delta }is theKronecker delta. The transition probabilitiesP(m|n)=κ({m}|n){\displaystyle P(m|n)=\kappa (\{m\}|n)}for the random walk are equivalent to the Markov kernel. More generally takeX{\displaystyle X}andY{\displaystyle Y}both countable andA=P(X),B=P(Y){\displaystyle {\mathcal {A}}={\mathcal {P}}(X),\ {\mathcal {B}}={\mathcal {P}}(Y)}. Again a Markov kernel is defined by the probability it assigns to singleton sets for eachi∈X{\displaystyle i\in X} We define a Markov process by defining a transition probabilityP(j|i)=Kji{\displaystyle P(j|i)=K_{ji}}where the numbersKji{\displaystyle K_{ji}}define a (countable)stochastic matrix(Kji){\displaystyle (K_{ji})}i.e. We then define Again the transition probability, the stochastic matrix and the Markov kernel are equivalent reformulations. Letν{\displaystyle \nu }be ameasureon(Y,B){\displaystyle (Y,{\mathcal {B}})}, andk:Y×X→[0,∞]{\displaystyle k:Y\times X\to [0,\infty ]}ameasurable functionwith respect to theproductσ{\displaystyle \sigma }-algebraA⊗B{\displaystyle {\mathcal {A}}\otimes {\mathcal {B}}}such that thenκ(dy|x)=k(y,x)ν(dy){\displaystyle \kappa (dy|x)=k(y,x)\nu (dy)}i.e. the mapping defines a Markov kernel.[3]This example generalises the countable Markov process example whereν{\displaystyle \nu }was thecounting measure. Moreover it encompasses other important examples such as the convolution kernels, in particular the Markov kernels defined by the heat equation. The latter example includes theGaussian kernelonX=Y=R{\displaystyle X=Y=\mathbb {R} }withν(dx)=dx{\displaystyle \nu (dx)=dx}standard Lebesgue measure and Take(X,A){\displaystyle (X,{\mathcal {A}})}and(Y,B){\displaystyle (Y,{\mathcal {B}})}arbitrary measurable spaces, and letf:X→Y{\displaystyle f:X\to Y}be a measurable function. Now defineκ(dy|x)=δf(x)(dy){\displaystyle \kappa (dy|x)=\delta _{f(x)}(dy)}i.e. 
Note that the indicator function1f−1(B){\displaystyle \mathbf {1} _{f^{-1}(B)}}isA{\displaystyle {\mathcal {A}}}-measurable for allB∈B{\displaystyle B\in {\mathcal {B}}}ifff{\displaystyle f}is measurable. This example allows us to think of a Markov kernel as a generalised function with a (in general) random rather than certain value. That is, it is amultivalued functionwhere the values are not equally weighted. As a less obvious example, takeX=N,A=P(N){\displaystyle X=\mathbb {N} ,{\mathcal {A}}={\mathcal {P}}(\mathbb {N} )}, and(Y,B){\displaystyle (Y,{\mathcal {B}})}the real numbersR{\displaystyle \mathbb {R} }with the standard sigma algebra ofBorel sets. Then wherex{\displaystyle x}is the number of element at the staten{\displaystyle n},ξi{\displaystyle \xi _{i}}arei.i.d.random variables(usually with mean 0) and where1B{\displaystyle \mathbf {1} _{B}}is the indicator function. For the simple case ofcoin flipsthis models the different levels of aGalton board. Given measurable spaces(X,A){\displaystyle (X,{\mathcal {A}})},(Y,B){\displaystyle (Y,{\mathcal {B}})}we consider a Markov kernelκ:B×X→[0,1]{\displaystyle \kappa :{\mathcal {B}}\times X\to [0,1]}as a morphismκ:X→Y{\displaystyle \kappa :X\to Y}. Intuitively, rather than assigning to eachx∈X{\displaystyle x\in X}a sharply defined pointy∈Y{\displaystyle y\in Y}the kernel assigns a "fuzzy" point inY{\displaystyle Y}which is only known with some level of uncertainty, much like actual physical measurements. If we have a third measurable space(Z,C){\displaystyle (Z,{\mathcal {C}})}, and probability kernelsκ:X→Y{\displaystyle \kappa :X\to Y}andλ:Y→Z{\displaystyle \lambda :Y\to Z}, we can define a compositionλ∘κ:X→Z{\displaystyle \lambda \circ \kappa :X\to Z}by theChapman-Kolmogorov equation The composition is associative by the Monotone Convergence Theorem and the identity function considered as a Markov kernel (i.e. the delta measureκ1(dx′|x)=δx(dx′){\displaystyle \kappa _{1}(dx'|x)=\delta _{x}(dx')}) is the unit for this composition. This composition defines the structure of acategoryon the measurable spaces with Markov kernels as morphisms, first defined by Lawvere,[4]thecategory of Markov kernels. A composition of a probability space(X,A,PX){\displaystyle (X,{\mathcal {A}},P_{X})}and a probability kernelκ:(X,A)→(Y,B){\displaystyle \kappa :(X,{\mathcal {A}})\to (Y,{\mathcal {B}})}defines a probability space(Y,B,PY=κ∘PX){\displaystyle (Y,{\mathcal {B}},P_{Y}=\kappa \circ P_{X})}, where the probability measure is given by Let(X,A,P){\displaystyle (X,{\mathcal {A}},P)}be a probability space andκ{\displaystyle \kappa }a Markov kernel from(X,A){\displaystyle (X,{\mathcal {A}})}to some(Y,B){\displaystyle (Y,{\mathcal {B}})}. Then there exists a unique measureQ{\displaystyle Q}on(X×Y,A⊗B){\displaystyle (X\times Y,{\mathcal {A}}\otimes {\mathcal {B}})}, such that: Let(S,Y){\displaystyle (S,Y)}be aBorel space,X{\displaystyle X}a(S,Y){\displaystyle (S,Y)}-valued random variable on the measure space(Ω,F,P){\displaystyle (\Omega ,{\mathcal {F}},P)}andG⊆F{\displaystyle {\mathcal {G}}\subseteq {\mathcal {F}}}a sub-σ{\displaystyle \sigma }-algebra. Then there exists a Markov kernelκ{\displaystyle \kappa }from(Ω,G){\displaystyle (\Omega ,{\mathcal {G}})}to(S,Y){\displaystyle (S,Y)}, such thatκ(⋅,B){\displaystyle \kappa (\cdot ,B)}is a version of theconditional expectationE[1{X∈B}∣G]{\displaystyle \mathbb {E} [\mathbf {1} _{\{X\in B\}}\mid {\mathcal {G}}]}for everyB∈Y{\displaystyle B\in Y}, i.e. 
It is called regular conditional distribution ofX{\displaystyle X}givenG{\displaystyle {\mathcal {G}}}and is not uniquely defined. Transition kernelsgeneralize Markov kernels in the sense that for allx∈X{\displaystyle x\in X}, the map can be any type of (non negative) measure, not necessarily a probability measure.
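On a finite state space a Markov kernel is simply a row-stochastic matrix, and the Chapman-Kolmogorov composition becomes a matrix product. The following sketch is a finite truncation of the random-walk example above (reflecting ends are added purely so that the state space is finite), together with the deterministic (delta) kernels induced by functions.

```python
import numpy as np

# Row x of a kernel matrix holds the probability measure kappa(. | x).

def compose(lam, kappa):
    """Chapman-Kolmogorov: (lambda o kappa)(C | x) = sum_y lambda(C | y) kappa(y | x)."""
    return kappa @ lam

def delta_kernel(f, n_states_out):
    """Deterministic kernel induced by a function f: all mass is placed on f(x)."""
    k = np.zeros((len(f), n_states_out))
    for x, y in enumerate(f):
        k[x, y] = 1.0
    return k

# Random walk on {0, ..., 4}, step right with probability p, reflecting at the ends.
p, n = 0.7, 5
walk = np.zeros((n, n))
for x in range(n):
    walk[x, min(x + 1, n - 1)] += p
    walk[x, max(x - 1, 0)] += 1 - p

two_steps = compose(walk, walk)                     # kernel of two steps of the walk
identity = delta_kernel(list(range(n)), n)          # delta kernel of the identity function
assert np.allclose(compose(walk, identity), walk)   # the identity kernel is a unit
print(two_steps[2])                                 # distribution after two steps from state 2
```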
https://en.wikipedia.org/wiki/Markov_kernel
Incategory theory, a branch ofmathematics, amonadis a triple(T,η,μ){\displaystyle (T,\eta ,\mu )}consisting of afunctorTfrom a category to itself and twonatural transformationsη,μ{\displaystyle \eta ,\mu }that satisfy the conditions like associativity. For example, ifF,G{\displaystyle F,G}are functorsadjointto each other, thenT=G∘F{\displaystyle T=G\circ F}together withη,μ{\displaystyle \eta ,\mu }determined by the adjoint relation is a monad. In concise terms, a monad is amonoidin thecategoryofendofunctorsof some fixed category (an endofunctor is afunctormapping a category to itself). According toJohn Baez, a monad can be considered at least in two ways:[1] Monads are used in the theory of pairs ofadjoint functors, and they generalizeclosure operatorsonpartially ordered setsto arbitrary categories. Monads are also useful in thetheory of datatypes, thedenotational semanticsofimperative programming languages, and infunctional programming languages, allowing languages without mutable state to do things such as simulatefor-loops; seeMonad (functional programming). A monad is also called, especially in old literature, atriple,triad,standard constructionandfundamental construction.[2] We had some time to talk, and during the course of it I realized I’d become less scared of certain topics involving monads. Monads seem to bother a lot of people. There’s even a YouTube video called The Monads Hurt My Head! ... Shortly thereafter, the woman speaking exclaims: What the heck?! How do you even explain what a monad is? A monad is a certain type ofendofunctor. For example, ifF{\displaystyle F}andG{\displaystyle G}are a pair ofadjoint functors, withF{\displaystyle F}left adjoint toG{\displaystyle G}, then the compositionG∘F{\displaystyle G\circ F}is a monad. IfF{\displaystyle F}andG{\displaystyle G}are inverse to each other, the corresponding monad is theidentity functor. In general, adjunctions are notequivalences—they relate categories of different natures. The monad theory matters as part of the effort to capture what it is that adjunctions 'preserve'. The other half of the theory, of what can be learned likewise from consideration ofF∘G{\displaystyle F\circ G}, is discussed under the dual theory ofcomonads. Throughout this article,C{\displaystyle C}denotes acategory. AmonadonC{\displaystyle C}consists of an endofunctorT:C→C{\displaystyle T\colon C\to C}together with twonatural transformations:η:1C→T{\displaystyle \eta \colon 1_{C}\to T}(where1C{\displaystyle 1_{C}}denotes the identity functor onC{\displaystyle C}) andμ:T2→T{\displaystyle \mu \colon T^{2}\to T}(whereT2{\displaystyle T^{2}}is the functorT∘T{\displaystyle T\circ T}fromC{\displaystyle C}toC{\displaystyle C}). These are required to fulfill the following conditions (sometimes calledcoherence conditions): We can rewrite these conditions using the followingcommutative diagrams: See the article onnatural transformationsfor the explanation of the notationsTμ{\displaystyle T\mu }andμT{\displaystyle \mu T}, or see below the commutative diagrams not using these notions: The first axiom is akin to theassociativityinmonoidsif we think ofμ{\displaystyle \mu }as the monoid's binary operation, and the second axiom is akin to the existence of anidentity element(which we think of as given byη{\displaystyle \eta }). 
Indeed, a monad onC{\displaystyle C}can alternatively be defined as amonoidin the categoryEndC{\displaystyle \mathbf {End} _{C}}whose objects are the endofunctors ofC{\displaystyle C}and whose morphisms are the natural transformations between them, with themonoidal structureinduced by the composition of endofunctors. Thepower set monadis a monadP{\displaystyle {\mathcal {P}}}on the categorySet{\displaystyle \mathbf {Set} }: For a setA{\displaystyle A}letT(A){\displaystyle T(A)}be thepower setofA{\displaystyle A}and for a functionf:A→B{\displaystyle f\colon A\to B}letT(f){\displaystyle T(f)}be the function between the power sets induced by takingdirect imagesunderf{\displaystyle f}. For every setA{\displaystyle A}, we have a mapηA:A→T(A){\displaystyle \eta _{A}\colon A\to T(A)}, which assigns to everya∈A{\displaystyle a\in A}thesingleton{a}{\displaystyle \{a\}}. The function takes a set of sets to itsunion. These data describe a monad. The axioms of a monad are formally similar to themonoidaxioms. In fact, monads are special cases of monoids, namely they are precisely the monoids amongendofunctorsEnd⁡(C){\displaystyle \operatorname {End} (C)}, which is equipped with the multiplication given by composition of endofunctors. Composition of monads is not, in general, a monad. For example, the double power set functorP∘P{\displaystyle {\mathcal {P}}\circ {\mathcal {P}}}does not admit any monad structure.[3] Thecategorical dualdefinition is a formal definition of acomonad(orcotriple); this can be said quickly in the terms that a comonad for a categoryC{\displaystyle C}is a monad for theopposite categoryCop{\displaystyle C^{\mathrm {op} }}. It is therefore a functorU{\displaystyle U}fromC{\displaystyle C}to itself, with a set of axioms forcounitandcomultiplicationthat come from reversing the arrows everywhere in the definition just given. Monads are to monoids as comonads are tocomonoids. Every set is a comonoid in a unique way, so comonoids are less familiar inabstract algebrathan monoids; however, comonoids in the category of vector spaces with its usual tensor product are important and widely studied under the name ofcoalgebras. The notion of monad was invented byRoger Godementin 1958 under the name "standard construction". Monad has been called "dual standard construction", "triple", "monoid" and "triad".[4]The term "monad" is used at latest 1967, byJean Bénabou.[5][6] Theidentity functoron a categoryC{\displaystyle C}is a monad. Its multiplication and unit are theidentity functionon the objects ofC{\displaystyle C}. Anyadjunction gives rise to a monad onC. This very widespread construction works as follows: the endofunctor is the composite This endofunctor is quickly seen to be a monad, where the unit map stems from the unit mapidC→G∘F{\displaystyle \operatorname {id} _{C}\to G\circ F}of the adjunction, and the multiplication map is constructed using the counit map of the adjunction: In fact,any monad can be found as an explicit adjunction of functorsusing theEilenberg–Moore categoryCT{\displaystyle C^{T}}(the category ofT{\displaystyle T}-algebras).[7] Thedouble dualization monad, for a fixedfieldkarises from the adjunction where both functors are given by sending avector spaceVto itsdual vector spaceV∗:=Hom⁡(V,k){\displaystyle V^{*}:=\operatorname {Hom} (V,k)}. The associated monad sends a vector spaceVto itsdouble dualV∗∗{\displaystyle V^{**}}. This monad is discussed, in much greater generality, byKock (1970). 
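A small, illustrative sketch of the power set monad described above, restricted to finite sets so that the unit laws can be checked exhaustively (the helper names are ad hoc):

```python
from itertools import chain, combinations

# Power set monad on finite sets: T(A) = power set of A, unit(a) = {a},
# and mu sends a set of sets to its union.

def powerset(A):
    A = list(A)
    return [frozenset(s) for s in chain.from_iterable(combinations(A, r) for r in range(len(A) + 1))]

def T(f):
    """Functor action: map f over a subset by taking direct images."""
    return lambda S: frozenset(f(a) for a in S)

def unit(a):
    return frozenset({a})

def mu(S_of_S):
    return frozenset().union(*S_of_S) if S_of_S else frozenset()

# Unit laws, mu . T(unit) = id and mu . unit = id, checked on every subset of {1, 2, 3}:
for S in powerset({1, 2, 3}):
    assert mu(T(unit)(S)) == S
    assert mu(unit(S)) == S
print("unit laws hold on the power set of {1, 2, 3}")
```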
For categories arising frompartially ordered sets(P,≤){\displaystyle (P,\leq )}(with a single morphism fromx{\displaystyle x}toy{\displaystyle y}if and only ifx≤y{\displaystyle x\leq y}), then the formalism becomes much simpler: adjoint pairs areGalois connectionsand monads areclosure operators. For example, letG{\displaystyle G}be theforgetful functorfromthe categoryGrpofgroupsto thecategorySetof sets, and letF{\displaystyle F}be thefree groupfunctor from the category of sets to the category of groups. ThenF{\displaystyle F}is left adjoint ofG{\displaystyle G}. In this case, the associated monadT=G∘F{\displaystyle T=G\circ F}takes a setX{\displaystyle X}and returns the underlying set of the free groupFree(X){\displaystyle \mathrm {Free} (X)}. The unit map of this monad is given by the maps including any setX{\displaystyle X}into the setFree(X){\displaystyle \mathrm {Free} (X)}in the natural way, as strings of length 1. Further, the multiplication of this monad is the map made out of a naturalconcatenationor 'flattening' of 'strings of strings'. This amounts to twonatural transformations. The preceding example about free groups can be generalized to any type of algebra in the sense of avariety of algebrasinuniversal algebra. Thus, every such type of algebra gives rise to a monad on the category of sets. Importantly, the algebra type can be recovered from the monad (as the category of Eilenberg–Moore algebras), so monads can also be seen as generalizing varieties of universal algebras. Another monad arising from an adjunction is whenT{\displaystyle T}is the endofunctor on the category of vector spaces which maps a vector spaceV{\displaystyle V}to itstensor algebraT(V){\displaystyle T(V)}, and which maps linear maps to their tensor product. We then have a natural transformation corresponding to the embedding ofV{\displaystyle V}into itstensor algebra, and a natural transformation corresponding to the map fromT(T(V)){\displaystyle T(T(V))}toT(V){\displaystyle T(V)}obtained by simply expanding all tensor products. Under mild conditions, functors not admitting a left adjoint also give rise to a monad, the so-calledcodensity monad. For example, the inclusion does not admit a left adjoint. Its codensity monad is the monad on sets sending any setXto the set ofultrafiltersonX. This and similar examples are discussed inLeinster (2013). The following monads over the category of sets are used indenotational semanticsofimperative programming languages, and analogous constructions are used in functional programming. The endofunctor of themaybeorpartialitymonad adds a disjoint point:[8] The unit is given by the inclusion of a setX{\displaystyle X}intoX∗{\displaystyle X_{*}}: The multiplication maps elements ofX{\displaystyle X}to themselves, and the two disjoint points in(X∗)∗{\displaystyle (X_{*})_{*}}to the one inX∗{\displaystyle X_{*}}. In both functional programming and denotational semantics, the maybe monad modelspartial computations, that is, computations that may fail. Given a setS{\displaystyle S}, the endofunctor of thestate monadmaps each setX{\displaystyle X}to the set of functionsS→S×X{\displaystyle S\to S\times X}. The component of the unit atX{\displaystyle X}maps each elementx∈X{\displaystyle x\in X}to the function The multiplication maps the functionf:S→S×(S→S×X),s↦(s′,f′){\displaystyle f:S\to S\times (S\to S\times X),s\mapsto (s',f')}to the function In functional programming and denotational semantics, the state monad modelsstateful computations. 
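The state monad admits a direct, if informal, rendering in Python; the following sketch (the names and the counter example are illustrative) shows the unit, the multiplication, and how they sequence stateful computations.

```python
# State monad sketch: T(X) = functions S -> (S, X).  unit(x) leaves the state
# unchanged and returns x; mu runs an "outer" stateful computation and then the
# "inner" computation it produces.

def unit(x):
    return lambda s: (s, x)

def mu(ff):
    """ff : S -> (S, S -> (S, X));  mu(ff) : S -> (S, X)."""
    def run(s):
        s1, f = ff(s)
        return f(s1)
    return run

def fmap(g):
    """Functor action on a stateful computation."""
    def lift(f):
        def run(s):
            s1, x = f(s)
            return s1, g(x)
        return run
    return lift

# Example: a counter.  "tick" returns the current counter value and increments it.
tick = lambda s: (s + 1, s)
bind = lambda f, g: mu(fmap(g)(f))      # sequence two stateful computations
two_ticks = bind(tick, lambda first: bind(tick, lambda second: unit((first, second))))
print(two_ticks(0))                     # (2, (0, 1)): final state 2, values read 0 then 1
```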
Given a setE{\displaystyle E}, the endofunctor of thereaderorenvironment monadmaps each setX{\displaystyle X}to the set of functionsE→X{\displaystyle E\to X}. Thus, the endofunctor of this monad is exactly thehom functorHom(E,−){\displaystyle \mathrm {Hom} (E,-)}. The component of the unit atX{\displaystyle X}maps each elementx∈X{\displaystyle x\in X}to theconstant functione↦x{\displaystyle e\mapsto x}. In functional programming and denotational semantics, the environment monad models computations with access to some read-only data. Thelistornondeterminism monadmaps a setXto the set of finitesequences(i.e.,lists) with elements fromX. The unit maps an elementxinXto the singleton list [x]. The multiplication concatenates a list of lists into a single list. In functional programming, the list monad is used to modelnondeterministic computations. The covariant powerset monad is also known as theset monad, and is also used to model nondeterministic computation. Given a monad(T,η,μ){\displaystyle (T,\eta ,\mu )}on a categoryC{\displaystyle C}, it is natural to considerT{\displaystyle T}-algebras, i.e., objects ofC{\displaystyle C}acted upon byT{\displaystyle T}in a way which is compatible with the unit and multiplication of the monad. More formally, aT{\displaystyle T}-algebra(x,h){\displaystyle (x,h)}is an objectx{\displaystyle x}ofC{\displaystyle C}together with an arrowh:Tx→x{\displaystyle h\colon Tx\to x}ofC{\displaystyle C}called thestructure mapof the algebra such that the diagrams commute. A morphismf:(x,h)→(x′,h′){\displaystyle f\colon (x,h)\to (x',h')}ofT{\displaystyle T}-algebras is an arrowf:x→x′{\displaystyle f\colon x\to x'}ofC{\displaystyle C}such that the diagram commutes.T{\displaystyle T}-algebras form a category called theEilenberg–Moore categoryand denoted byCT{\displaystyle C^{T}}. For example, for the free group monad discussed above, aT{\displaystyle T}-algebra is a setX{\displaystyle X}together with a map from the free group generated byX{\displaystyle X}towardsX{\displaystyle X}subject to associativity and unitality conditions. Such a structure is equivalent to saying thatX{\displaystyle X}is a group itself. Another example is thedistribution monadD{\displaystyle {\mathcal {D}}}on the category of sets. It is defined by sending a setX{\displaystyle X}to the set of functionsf:X→[0,1]{\displaystyle f:X\to [0,1]}with finite support and such that their sum is equal to1{\displaystyle 1}. In set-builder notation, this is the setD(X)={f:X→[0,1]:#supp(f)<+∞∑x∈Xf(x)=1}{\displaystyle {\mathcal {D}}(X)=\left\{f:X\to [0,1]:{\begin{matrix}\#{\text{supp}}(f)<+\infty \\\sum _{x\in X}f(x)=1\end{matrix}}\right\}}By inspection of the definitions, it can be shown that algebras over the distribution monad are equivalent toconvex sets, i.e., sets equipped with operationsx+ry{\displaystyle x+_{r}y}forr∈[0,1]{\displaystyle r\in [0,1]}subject to axioms resembling the behavior of convex linear combinationsrx+(1−r)y{\displaystyle rx+(1-r)y}in Euclidean space.[9] Another useful example of a monad is the symmetric algebra functor on the category ofR{\displaystyle R}-modules for a commutative ringR{\displaystyle R}.Sym∙(−):Mod(R)→Mod(R){\displaystyle {\text{Sym}}^{\bullet }(-):{\text{Mod}}(R)\to {\text{Mod}}(R)}sending anR{\displaystyle R}-moduleM{\displaystyle M}to the direct sum ofsymmetric tensorpowersSym∙(M)=⨁k=0∞Symk(M){\displaystyle {\text{Sym}}^{\bullet }(M)=\bigoplus _{k=0}^{\infty }{\text{Sym}}^{k}(M)}whereSym0(M)=R{\displaystyle {\text{Sym}}^{0}(M)=R}. 
For example,Sym∙(R⊕n)≅R[x1,…,xn]{\displaystyle {\text{Sym}}^{\bullet }(R^{\oplus n})\cong R[x_{1},\ldots ,x_{n}]}where theR{\displaystyle R}-algebra on the right is considered as a module. Then, an algebra over this monad are commutativeR{\displaystyle R}-algebras. There are also algebras over the monads for the alternating tensorsAlt∙(−){\displaystyle {\text{Alt}}^{\bullet }(-)}and total tensor functorsT∙(−){\displaystyle T^{\bullet }(-)}giving anti-symmetricR{\displaystyle R}-algebras, and freeR{\displaystyle R}-algebras, soAlt∙(R⊕n)=R(x1,…,xn)T∙(R⊕n)=R⟨x1,…,xn⟩{\displaystyle {\begin{aligned}{\text{Alt}}^{\bullet }(R^{\oplus n})&=R(x_{1},\ldots ,x_{n})\\{\text{T}}^{\bullet }(R^{\oplus n})&=R\langle x_{1},\ldots ,x_{n}\rangle \end{aligned}}}where the first ring is the free anti-symmetric algebra overR{\displaystyle R}inn{\displaystyle n}-generators and the second ring is the free algebra overR{\displaystyle R}inn{\displaystyle n}-generators. There is an analogous construction forcommutativeS{\displaystyle \mathbb {S} }-algebras[10]pg 113which gives commutativeA{\displaystyle A}-algebras for a commutativeS{\displaystyle \mathbb {S} }-algebraA{\displaystyle A}. IfMA{\displaystyle {\mathcal {M}}_{A}}is the category ofA{\displaystyle A}-modules, then the functorP:MA→MA{\displaystyle \mathbb {P} :{\mathcal {M}}_{A}\to {\mathcal {M}}_{A}}is the monad given byP(M)=⋁j≥0Mj/Σj{\displaystyle \mathbb {P} (M)=\bigvee _{j\geq 0}M^{j}/\Sigma _{j}}whereMj=M∧A⋯∧AM{\displaystyle M^{j}=M\wedge _{A}\cdots \wedge _{A}M}j{\displaystyle j}-times. Then there is an associated categoryCA{\displaystyle {\mathcal {C}}_{A}}of commutativeA{\displaystyle A}-algebras from the category of algebras over this monad. As was mentioned above, any adjunction gives rise to a monad. Conversely, every monad arises from some adjunction, namely the free–forgetful adjunction whose left adjoint sends an objectXto the freeT-algebraT(X). However, there are usually several distinct adjunctions giving rise to a monad: letAdj(C,T){\displaystyle \mathbf {Adj} (C,T)}be the category whose objects are the adjunctions(F,G,e,ε){\displaystyle (F,G,e,\varepsilon )}such that(GF,e,GεF)=(T,η,μ){\displaystyle (GF,e,G\varepsilon F)=(T,\eta ,\mu )}and whose arrows are the morphisms of adjunctions that are the identity onC{\displaystyle C}. Then the above free–forgetful adjunction involving the Eilenberg–Moore categoryCT{\displaystyle C^{T}}is a terminal object inAdj(C,T){\displaystyle \mathbf {Adj} (C,T)}. An initial object is theKleisli category, which is by definition the full subcategory ofCT{\displaystyle C^{T}}consisting only of freeT-algebras, i.e.,T-algebras of the formT(x){\displaystyle T(x)}for some objectxofC. Given any adjunction(F:C→D,G:D→C,η,ε){\displaystyle (F:C\to D,G:D\to C,\eta ,\varepsilon )}with associated monadT, the functorGcan be factored as i.e.,G(Y) can be naturally endowed with aT-algebra structure for anyYinD. The adjunction is called amonadic adjunctionif the first functorG~{\displaystyle {\tilde {G}}}yields anequivalence of categoriesbetweenDand the Eilenberg–Moore categoryCT{\displaystyle C^{T}}.[11]By extension, a functorG:D→C{\displaystyle G\colon D\to C}is said to bemonadicif it has a left adjointFforming a monadic adjunction. For example, the free–forgetful adjunction between groups and sets is monadic, since algebras over the associated monad are groups, as was mentioned above. In general, knowing that an adjunction is monadic allows one to reconstruct objects inDout of objects inCand theT-action. 
Beck's monadicity theoremgives a necessary and sufficient condition for an adjunction to be monadic. A simplified version of this theorem states thatGis monadic if it isconservative(orGreflects isomorphisms, i.e., a morphism inDis an isomorphism if and only if its image underGis an isomorphism inC) andChas andGpreservescoequalizers. For example, the forgetful functor from the category ofcompactHausdorff spacesto sets is monadic. However the forgetful functor from all topological spaces to sets is not conservative since there are continuous bijective maps (between non-compact or non-Hausdorff spaces) that fail to behomeomorphisms. Thus, this forgetful functor is not monadic.[12]The dual version of Beck's theorem, characterizing comonadic adjunctions, is relevant in different fields such astopos theoryand topics inalgebraic geometryrelated todescent. A first example of a comonadic adjunction is the adjunction for a ring homomorphismA→B{\displaystyle A\to B}between commutative rings. This adjunction is comonadic, by Beck's theorem, if and only ifBisfaithfully flatas anA-module. It thus allows to descendB-modules, equipped with a descent datum (i.e., an action of the comonad given by the adjunction) toA-modules. The resulting theory offaithfully flat descentis widely applied in algebraic geometry. Monads are used infunctional programmingto express types of sequential computation (sometimes with side-effects). Seemonads in functional programming, and the more mathematically oriented Wikibook moduleb:Haskell/Category theory. Monads are used in thedenotational semanticsof impure functional andimperative programming languages.[13][14] In categorical logic, an analogy has been drawn between the monad-comonad theory, andmodal logicviaclosure operators,interior algebras, and their relation tomodelsofS4andintuitionistic logics. It is possible to define monads in a2-categoryC{\displaystyle C}. Monads described above are monads forC=Cat{\displaystyle C=\mathbf {Cat} }.
https://en.wikipedia.org/wiki/Monad_(category_theory)
Inmathematics, thecategory of Markov kernels, often denotedStoch, is thecategorywhoseobjectsaremeasurable spacesand whosemorphismsareMarkov kernels.[1][2][3][4]It is analogous to thecategory of sets and functions, but where thearrowscan be interpreted as beingstochastic. Several variants of this category are used in the literature. For example, one can use subprobability kernels[5]instead of probability kernels, or more generals-finitekernels.[6]Also, one can take as morphismsequivalence classesof Markov kernels underalmost sure equality;[7]see below. Recall that aMarkov kernelbetweenmeasurable spaces(X,F){\displaystyle (X,{\mathcal {F}})}and(Y,G){\displaystyle (Y,{\mathcal {G}})}is an assignmentk:X×G→R{\displaystyle k:X\times {\mathcal {G}}\to \mathbb {R} }which ismeasurableas a function onX{\displaystyle X}and which is aprobability measureonG{\displaystyle {\mathcal {G}}}.[4]We denote its values byk(B|x){\displaystyle k(B|x)}forx∈X{\displaystyle x\in X}andB∈G{\displaystyle B\in {\mathcal {G}}}, which suggests an interpretation asconditional probability. The categoryStochhas:[4] This composition formula is sometimes called theChapman-Kolmogorov equation.[4] This composition is unital, andassociativeby themonotone convergence theorem, so that one indeed has acategory. Theterminal objectofStochis theone-point space1{\displaystyle 1}.[4]Morphisms in the form1→X{\displaystyle 1\to X}can be equivalently seen asprobability measuresonX{\displaystyle X}, since they correspond to functions1→PX{\displaystyle 1\to PX}, i.e. elements ofPX{\displaystyle PX}. Given kernelsp:1→X{\displaystyle p:1\to X}andk:X→Y{\displaystyle k:X\to Y}, the composite kernelk∘p:1→Y{\displaystyle k\circ p:1\to Y}gives the probability measure onY{\displaystyle Y}with values for every measurable subsetB{\displaystyle B}ofY{\displaystyle Y}.[7] Givenprobability spaces(X,F,p){\displaystyle (X,{\mathcal {F}},p)}and(Y,G,q){\displaystyle (Y,{\mathcal {G}},q)}, ameasure-preserving Markov kernel(X,F,p)→(Y,G,q){\displaystyle (X,{\mathcal {F}},p)\to (Y,{\mathcal {G}},q)}is a Markov kernelk:(X,F)→(Y,G){\displaystyle k:(X,{\mathcal {F}})\to (Y,{\mathcal {G}})}such that for every measurable subsetB∈G{\displaystyle B\in {\mathcal {G}}},[7] Probability spacesand measure-preserving Markov kernels form acategory, which can be seen as theslice category(HomStoch(1,−),Stoch){\displaystyle (\mathrm {Hom} _{\mathrm {Stoch} }(1,-),\mathrm {Stoch} )}. Every measurable functionf:(X,F)→(Y,G){\displaystyle f:(X,{\mathcal {F}})\to (Y,{\mathcal {G}})}defines canonically a Markov kernelδf:(X,F)→(Y,G){\displaystyle \delta _{f}:(X,{\mathcal {F}})\to (Y,{\mathcal {G}})}as follows, for everyx∈X{\displaystyle x\in X}and everyB∈G{\displaystyle B\in {\mathcal {G}}}. This construction preserves identities and compositions, and is therefore afunctorfromMeastoStoch. By functoriality, every isomorphism of measurable spaces (in the categoryMeas) induces an isomorphism inStoch. However, inStochthere are more isomorphisms, and in particular, measurable spaces can be isomorphic inStocheven when the underlying sets are not in bijection. Since the functorL:Meas→Stoch{\displaystyle L:\mathrm {Meas} \to \mathrm {Stoch} }isleft adjoint, it preservescolimits.[8]Because of this, all colimits in thecategory of measurable spacesare also colimits inStoch. For example, In general, the functorL{\displaystyle L}does not preservelimits. This in particular implies that theproduct of measurable spacesis not a product inStochin general. 
Since theGiry monadismonoidal, however, the product of measurable spaces still makesStochamonoidal category.[4] A limit of particular significance forprobability theoryisde Finetti's theorem, which can be interpreted as the fact that the space of probability measures (Giry monad) is the limit inStochof thediagramformed byfinite permutationsof sequences. Sometimes it is useful to consider Markov kernels only up toalmost sure equality, for example when talking aboutdisintegrationsor aboutregular conditional probability. Givenprobability spaces(X,F,p){\displaystyle (X,{\mathcal {F}},p)}and(Y,G,q){\displaystyle (Y,{\mathcal {G}},q)}, we say that two measure-preserving kernelsk,h:(X,F,p)→(Y,G,q){\displaystyle k,h:(X,{\mathcal {F}},p)\to (Y,{\mathcal {G}},q)}arealmost surely equalif and only if for every measurable subsetB∈G{\displaystyle B\in {\mathcal {G}}}, forp{\displaystyle p}-almost allx∈X{\displaystyle x\in X}.[7]This defines anequivalence relationon the set of measure-preserving Markov kernelsk,h:(X,F,p)→(Y,G,q){\displaystyle k,h:(X,{\mathcal {F}},p)\to (Y,{\mathcal {G}},q)}. Probability spaces and equivalence classes of Markov kernels under the relation defined above form acategory. When restricted tostandard Borelprobability spaces, the category is often denoted byKrn.[7]
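In the finite case, the composite of a probability measure (a kernel from the one-point space) with a Markov kernel, and the measure-preservation condition above, reduce to matrix-vector algebra; a toy sketch with made-up numbers:

```python
import numpy as np

# A probability measure on X is a kernel 1 -> X, i.e. a stochastic row vector p,
# and composing it with a kernel k : X -> Y gives q(B) = sum_x k(B | x) p(x) on Y.

p = np.array([0.25, 0.75])                 # measure on X = {0, 1}
k = np.array([[0.9, 0.1, 0.0],             # kernel X -> Y; rows are k(. | x)
              [0.2, 0.3, 0.5]])

q = p @ k                                  # composite kernel 1 -> Y
print(q)                                   # [0.375 0.25  0.375]

# k is measure-preserving as a morphism (X, p) -> (Y, q) exactly when p @ k equals q.
print(np.allclose(p @ k, q))               # True (by construction here)
```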
https://en.wikipedia.org/wiki/Category_of_Markov_kernels
In mathematics, the term categorical probability denotes a collection of category-theoretic approaches to probability theory and related fields such as statistics, information theory and ergodic theory. The earliest ideas in the field were developed independently by Lawvere and by Chentsov, who defined versions of what we today call the category of Markov kernels; their work appeared in 1962 and 1965 respectively.[1][2] Some of the most widely used structures in the theory are Markov kernels, the category of Markov kernels, and the Giry monad.
https://en.wikipedia.org/wiki/Categorical_probability
Pruningis adata compressiontechnique inmachine learningandsearch algorithmsthat reduces the size ofdecision treesby removing sections of the tree that are non-critical and redundant to classify instances. Pruning reduces the complexity of the finalclassifier, and hence improves predictive accuracy by the reduction ofoverfitting. One of the questions that arises in a decision tree algorithm is the optimal size of the final tree. A tree that is too large risksoverfittingthe training data and poorly generalizing to new samples. A small tree might not capture important structural information about the sample space. However, it is hard to tell when a tree algorithm should stop because it is impossible to tell if the addition of a single extra node will dramatically decrease error. This problem is known as thehorizon effect. A common strategy is to grow the tree until each node contains a small number of instances then use pruning to remove nodes that do not provide additional information.[1] Pruning should reduce the size of a learning tree without reducing predictive accuracy as measured by across-validationset. There are many techniques for tree pruning that differ in the measurement that is used to optimize performance. Pruning processes can be divided into two types (pre- and post-pruning). Pre-pruningprocedures prevent a complete induction of the training set by replacing a stop () criterion in the induction algorithm (e.g. max. Tree depth or information gain (Attr)> minGain). Pre-pruning methods are considered to be more efficient because they do not induce an entire set, but rather trees remain small from the start. Prepruning methods share a common problem, the horizon effect. This is to be understood as the undesired premature termination of the induction by the stop () criterion. Post-pruning(or just pruning) is the most common way of simplifying trees. Here, nodes and subtrees are replaced with leaves to reduce complexity. Pruning can not only significantly reduce the size but also improve the classification accuracy of unseen objects. It may be the case that the accuracy of the assignment on the train set deteriorates, but the accuracy of the classification properties of the tree increases overall. The procedures are differentiated on the basis of their approach in the tree (top-down or bottom-up). These procedures start at the last node in the tree (the lowest point). Following recursively upwards, they determine the relevance of each individual node. If the relevance for the classification is not given, the node is dropped or replaced by a leaf. The advantage is that no relevant sub-trees can be lost with this method. These methods include Reduced Error Pruning (REP), Minimum Cost Complexity Pruning (MCCP), or Minimum Error Pruning (MEP). In contrast to the bottom-up method, this method starts at the root of the tree. Following the structure below, a relevance check is carried out which decides whether a node is relevant for the classification of all n items or not. By pruning the tree at an inner node, it can happen that an entire sub-tree (regardless of its relevance) is dropped. One of these representatives is pessimistic error pruning (PEP), which brings quite good results with unseen items. One of the simplest forms of pruning is reduced error pruning. Starting at the leaves, each node is replaced with its most popular class. If the prediction accuracy is not affected then the change is kept. 
While somewhat naive, reduced error pruning has the advantage of simplicity and speed. Cost complexity pruning generates a series of trees T0, …, Tm where T0 is the initial tree and Tm is the root alone. At step i, the tree is created by removing a subtree from tree Ti−1 and replacing it with a leaf node whose value is chosen as in the tree-building algorithm. The subtree to remove is chosen as follows: let err(T, S) denote the error rate of tree T over data set S, and let prune(T, t) denote the tree obtained by replacing the subtree rooted at t in T with a leaf. At each step the algorithm removes the subtree t that minimizes (err(prune(T, t), S) − err(T, S)) / (|leaves(T)| − |leaves(prune(T, t))|), i.e., the subtree whose removal adds the least error per leaf removed. Once the series of trees has been created, the best tree is chosen by generalized accuracy as measured by a training set or cross-validation. Pruning can also be applied in a compression scheme of a learning algorithm, removing redundant details without compromising the model's performance. In neural networks, pruning removes entire neurons or layers of neurons.
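A minimal sketch of reduced error pruning on a toy tree representation (the Node class and its fields are illustrative assumptions, not any particular library's API):

```python
class Node:
    def __init__(self, feature=None, threshold=None, left=None, right=None, majority=None):
        self.feature, self.threshold = feature, threshold
        self.left, self.right = left, right
        self.majority = majority          # most popular class among training rows at this node

    def is_leaf(self):
        return self.left is None and self.right is None

    def predict(self, x):
        if self.is_leaf():
            return self.majority
        branch = self.left if x[self.feature] <= self.threshold else self.right
        return branch.predict(x)

def accuracy(tree, X, y):
    return sum(tree.predict(x) == t for x, t in zip(X, y)) / len(y)

def reduced_error_prune(node, root, X_val, y_val):
    """Bottom-up: try replacing each internal node by a leaf predicting its majority
    class; keep the replacement if validation accuracy does not decrease."""
    if node.is_leaf():
        return
    reduced_error_prune(node.left, root, X_val, y_val)
    reduced_error_prune(node.right, root, X_val, y_val)
    before = accuracy(root, X_val, y_val)
    left, right = node.left, node.right
    node.left = node.right = None                  # tentatively collapse to a leaf
    if accuracy(root, X_val, y_val) < before:      # revert if the validation set suffers
        node.left, node.right = left, right

# Toy usage: a tree whose splits are redundant on the validation data gets collapsed.
root = Node(feature=0, threshold=0.5,
            left=Node(majority="A"),
            right=Node(feature=0, threshold=0.8,
                       left=Node(majority="A"), right=Node(majority="A"), majority="A"),
            majority="A")
X_val, y_val = [[0.2], [0.6], [0.9]], ["A", "A", "A"]
reduced_error_prune(root, root, X_val, y_val)
print(root.is_leaf())   # True: the redundant splits were pruned away
```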
https://en.wikipedia.org/wiki/Pruning_(algorithm)
Branch and bound(BB,B&B, orBnB) is a method for solving optimization problems by breaking them down into smaller sub-problems and using a bounding function to eliminate sub-problems that cannot contain the optimal solution. It is analgorithmdesign paradigmfordiscreteandcombinatorial optimizationproblems, as well asmathematical optimization. A branch-and-bound algorithm consists of a systematic enumeration of candidate solutions by means ofstate space search: the set of candidate solutions is thought of as forming arooted treewith the full set at the root. The algorithm exploresbranchesof this tree, which represent subsets of the solution set. Before enumerating the candidate solutions of a branch, the branch is checked against upper and lower estimatedboundson the optimal solution, and is discarded if it cannot produce a better solution than the best one found so far by the algorithm. The algorithm depends on efficient estimation of the lower and upper bounds of regions/branches of the search space. If no bounds are available, the algorithm degenerates to an exhaustive search. The method was first proposed byAilsa LandandAlison Doigwhilst carrying out research at theLondon School of Economicssponsored byBritish Petroleumin 1960 fordiscrete programming,[1][2]and has become the most commonly used tool for solvingNP-hardoptimization problems.[3]The name "branch and bound" first occurred in the work of Littleet al.on thetraveling salesman problem.[4][5] The goal of a branch-and-bound algorithm is to find a valuexthat maximizes or minimizes the value of areal-valued functionf(x), called anobjective function, among some setSof admissible, orcandidate solutions. The setSis called the search space, orfeasible region. The rest of this section assumes that minimization off(x)is desired; this assumption comeswithout loss of generality, since one can find the maximum value off(x)by finding the minimum ofg(x) = −f(x). A B&B algorithm operates according to two principles: Turning these principles into a concrete algorithm for a specific optimization problem requires some kind ofdata structurethat represents sets of candidate solutions. Such a representation is called aninstanceof the problem. Denote the set of candidate solutions of an instanceIbySI. The instance representation has to come with three operations: Using these operations, a B&B algorithm performs a top-down recursive search through thetreeof instances formed by the branch operation. Upon visiting an instanceI, it checks whetherbound(I)is equal or greater than the current upper bound; if so,Imay be safely discarded from the search and the recursion stops. This pruning step is usually implemented by maintaining aglobal variablethat records the minimum upper bound seen among all instances examined so far. The following is the skeleton of a generic branch and bound algorithm for minimizing an arbitrary objective functionf.[3]To obtain an actual algorithm from this, one requires a bounding functionbound, that computes lower bounds offon nodes of thesearch tree, as well as a problem-specific branching rule. As such, the generic algorithm presented here is ahigher-order function. Several differentqueuedata structurescan be used. ThisFIFO queue-based implementation yields abreadth-first search. Astack(LIFO queue) will yield adepth-firstalgorithm. 
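A hedged Python rendering of the generic queue-based skeleton just described, written as a higher-order function; the callback names (heuristic_solve, populate_candidates, and so on) follow the surrounding text and are otherwise placeholders.

```python
from collections import deque

def branch_and_bound(root, objective_function, lower_bound_function,
                     heuristic_solve, populate_candidates, is_single_candidate):
    """Generic queue-based skeleton for minimising objective_function.  The
    problem-specific callbacks (heuristic, bounding, branching, leaf test) are
    passed in, so this is a higher-order function; the FIFO queue gives the
    breadth-first variant, a LIFO stack would give depth-first."""
    best = heuristic_solve(root)                 # initial incumbent solution
    best_value = objective_function(best)
    queue = deque([root])
    while queue:
        node = queue.popleft()
        if lower_bound_function(node) >= best_value:
            continue                             # prune: cannot beat the incumbent
        if is_single_candidate(node):
            value = objective_function(node)
            if value < best_value:
                best, best_value = node, value   # new incumbent
        else:
            queue.extend(populate_candidates(node))  # branch into sub-problems
    return best, best_value

# Toy usage: choose a subset of weights whose sum is as close as possible to a target.
# A node is (sum_of_chosen_items, tuple_of_remaining_items).
weights, target = (9, 8, 4, 3), 11
value = lambda node: abs(target - node[0])
bound = lambda node: max(node[0] - target, target - node[0] - sum(node[1]), 0)
branch = lambda node: [(node[0] + node[1][0], node[1][1:]), (node[0], node[1][1:])]
leaf = lambda node: not node[1]

def greedy(node):
    s, rest = node
    for w in rest:
        if s + w <= target:
            s += w
    return (s, ())

print(branch_and_bound((0, weights), value, bound, greedy, branch, leaf))
# ((11, ()), 0): the subset {8, 3} hits the target exactly
```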
Abest-firstbranch and bound algorithm can be obtained by using apriority queuethat sorts nodes on their lower bound.[3] Examples of best-first search algorithms with this premise areDijkstra's algorithmand its descendantA* search. The depth-first variant is recommended when no good heuristic is available for producing an initial solution, because it quickly produces full solutions, and therefore upper bounds.[7] AC++-like pseudocode implementation of the above is: In the above pseudocode, the functionsheuristic_solveandpopulate_candidatescalled as subroutines must be provided as applicable to the problem. The functionsf(objective_function) andbound(lower_bound_function) are treated asfunction objectsas written, and could correspond tolambda expressions,function pointersand other types ofcallable objectsin the C++ programming language. Whenx{\displaystyle \mathbf {x} }is a vector ofRn{\displaystyle \mathbb {R} ^{n}}, branch and bound algorithms can be combined withinterval analysis[8]andcontractortechniques in order to provide guaranteed enclosures of the global minimum.[9][10] This approach is used for a number ofNP-hardproblems: Branch-and-bound may also be a base of variousheuristics. For example, one may wish to stop branching when the gap between the upper and lower bounds becomes smaller than a certain threshold. This is used when the solution is "good enough for practical purposes" and can greatly reduce the computations required. This type of solution is particularly applicable when the cost function used isnoisyor is the result ofstatistical estimatesand so is not known precisely but rather only known to lie within a range of values with a specificprobability.[citation needed] Nauet al.present a generalization of branch and bound that also subsumes theA*,B*andalpha-betasearch algorithms.[16] Branch and bound can be used to solve this problem MaximizeZ=5x1+6x2{\displaystyle Z=5x_{1}+6x_{2}}with these constraints x1+x2≤50{\displaystyle x_{1}+x_{2}\leq 50} 4x1+7x2≤280{\displaystyle 4x_{1}+7x_{2}\leq 280} x1,x2≥0{\displaystyle x_{1},x_{2}\geq 0} x1{\displaystyle x_{1}}andx2{\displaystyle x_{2}}are integers. The first step is to relax the integer constraint. We have two extreme points for the first equation that form a line:[x1x2]=[500]{\displaystyle {\begin{bmatrix}x_{1}\\x_{2}\end{bmatrix}}={\begin{bmatrix}50\\0\end{bmatrix}}}and[050]{\displaystyle {\begin{bmatrix}0\\50\end{bmatrix}}}. We can form the second line with the vector points[040]{\displaystyle {\begin{bmatrix}0\\40\end{bmatrix}}}and[700]{\displaystyle {\begin{bmatrix}70\\0\end{bmatrix}}}. The third point is[00]{\displaystyle {\begin{bmatrix}0\\0\end{bmatrix}}}. This is aconvex hull regionso the solution lies on one of the vertices of the region. We can find the intersection using row reduction, which is[70/380/3]{\displaystyle {\begin{bmatrix}70/3\\80/3\end{bmatrix}}}, or[23.33326.667]{\displaystyle {\begin{bmatrix}23.333\\26.667\end{bmatrix}}}with a value of 276.667. We test the other endpoints by sweeping the line over the region and find this is the maximum over the reals. We choose the variable with the maximum fractional part, in this casex2{\displaystyle x_{2}}becomes the parameter for the branch and bound method. We branch tox2≤26{\displaystyle x_{2}\leq 26}and obtain 276 @⟨24,26⟩{\displaystyle \langle 24,26\rangle }. We have reached an integer solution so we move to the other branchx2≥27{\displaystyle x_{2}\geq 27}. We obtain 275.75 @⟨22.75,27⟩{\displaystyle \langle 22.75,27\rangle }. 
We have a fractional value again, so we branch x1{\displaystyle x_{1}} to x1≤22{\displaystyle x_{1}\leq 22} and we find 274.571 @⟨22,27.4286⟩{\displaystyle \langle 22,27.4286\rangle }. Since 274.571 is already below the incumbent integer solution of 276, this branch can be pruned by its bound. We try the other branch x1≥23{\displaystyle x_{1}\geq 23} and find that there are no feasible solutions. Therefore, the maximum is 276 with x1⟼24{\displaystyle x_{1}\longmapsto 24} and x2⟼26{\displaystyle x_{2}\longmapsto 26}.
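The arithmetic of this worked example can be checked with a few lines of Python; the brute-force enumeration below is purely illustrative and not part of the branch-and-bound method itself.

# Confirm that (x1, x2) = (24, 26) with Z = 276 is the best integer point
# satisfying x1 + x2 <= 50 and 4*x1 + 7*x2 <= 280.
best = max(
    (5 * x1 + 6 * x2, x1, x2)
    for x1 in range(51)
    for x2 in range(51)
    if x1 + x2 <= 50 and 4 * x1 + 7 * x2 <= 280
)
print(best)                  # (276, 24, 26)

# The LP relaxation optimum quoted above is the intersection of the two
# constraint lines x1 + x2 = 50 and 4*x1 + 7*x2 = 280.
x1 = (7 * 50 - 280) / 3      # 70/3, about 23.333
x2 = 50 - x1                 # 80/3, about 26.667
print(5 * x1 + 6 * x2)       # about 276.667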
https://en.wikipedia.org/wiki/Branch_and_bound
Combinatorial optimizationis a subfield ofmathematical optimizationthat consists of finding an optimal object from afinite setof objects,[1]where the set offeasible solutionsisdiscreteor can be reduced to a discrete set. Typical combinatorial optimization problems are thetravelling salesman problem("TSP"), theminimum spanning tree problem("MST"), and theknapsack problem. In many such problems, such as the ones previously mentioned,exhaustive searchis not tractable, and so specialized algorithms that quickly rule out large parts of the search space orapproximation algorithmsmust be resorted to instead. Combinatorial optimization is related tooperations research,algorithm theory, andcomputational complexity theory. It has important applications in several fields, includingartificial intelligence,machine learning,auction theory,software engineering,VLSI,applied mathematicsandtheoretical computer science. Basic applications of combinatorial optimization include, but are not limited to: There is a large amount of literature onpolynomial-time algorithmsfor certain special classes of discrete optimization. A considerable amount of it is unified by the theory oflinear programming. Some examples of combinatorial optimization problems that are covered by this framework areshortest pathsandshortest-path trees,flows and circulations,spanning trees,matching, andmatroidproblems. ForNP-completediscrete optimization problems, current research literature includes the following topics: Combinatorial optimization problems can be viewed as searching for the best element of some set of discrete items; therefore, in principle, any sort ofsearch algorithmormetaheuristiccan be used to solve them. Widely applicable approaches includebranch-and-bound(an exact algorithm which can be stopped at any point in time to serve as heuristic),branch-and-cut(uses linear optimisation to generate bounds),dynamic programming(a recursive solution construction with limited search window) andtabu search(a greedy-type swapping algorithm). However, generic search algorithms are not guaranteed to find an optimal solution first, nor are they guaranteed to run quickly (in polynomial time). Since some discrete optimization problems areNP-complete, such as the traveling salesman (decision) problem,[6]this is expected unlessP=NP. For each combinatorial optimization problem, there is a correspondingdecision problemthat asks whether there is a feasible solution for some particular measurem0{\displaystyle m_{0}}. For example, if there is agraphG{\displaystyle G}which contains verticesu{\displaystyle u}andv{\displaystyle v}, an optimization problem might be "find a path fromu{\displaystyle u}tov{\displaystyle v}that uses the fewest edges". This problem might have an answer of, say, 4. A corresponding decision problem would be "is there a path fromu{\displaystyle u}tov{\displaystyle v}that uses 10 or fewer edges?" This problem can be answered with a simple 'yes' or 'no'. The field ofapproximation algorithmsdeals with algorithms to find near-optimal solutions to hard problems. The usual decision version is then an inadequate definition of the problem since it only specifies acceptable solutions. 
Even though we could introduce suitable decision problems, the problem is then more naturally characterized as an optimization problem.[7]

An NP-optimization problem (NPO) is a combinatorial optimization problem with the following additional conditions.[8] Note that the polynomials referred to below are functions of the size of the respective functions' inputs, not of the size of some implicit set of input instances.

This implies that the corresponding decision problem is in NP. In computer science, interesting optimization problems usually have the above properties and are therefore NPO problems. A problem is additionally called a P-optimization (PO) problem if there exists an algorithm which finds optimal solutions in polynomial time. Often, when dealing with the class NPO, one is interested in optimization problems for which the decision versions are NP-complete. Note that hardness relations are always with respect to some reduction. Because of the connection between approximation algorithms and computational optimization problems, reductions which preserve approximation in some respect are preferred for this subject over the usual Turing and Karp reductions. An example of such a reduction is the L-reduction. For this reason, optimization problems with NP-complete decision versions are not necessarily called NPO-complete.[9]

NPO is divided into the following subclasses according to their approximability:[8]

An NPO problem is called polynomially bounded (PB) if, for every instance x{\displaystyle x} and for every solution y∈f(x){\displaystyle y\in f(x)}, the measure m(x,y){\displaystyle m(x,y)} is bounded by a polynomial function of the size of x{\displaystyle x}. The class NPOPB is the class of NPO problems that are polynomially bounded.
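The decision version discussed above ("is there a path from u to v that uses k or fewer edges?") is easy to sketch; in the following Python fragment the graph is assumed to be a plain adjacency dictionary, an assumption of the example rather than anything prescribed by the text.

from collections import deque

def path_within_k_edges(graph, u, v, k):
    # Breadth-first search visits vertices in order of increasing edge count,
    # so the first time v is popped its depth is the fewest-edges answer of
    # the optimization version.
    seen = {u}
    frontier = deque([(u, 0)])
    while frontier:
        vertex, depth = frontier.popleft()
        if vertex == v:
            return True          # reached v within k edges
        if depth == k:
            continue             # do not expand beyond k edges
        for neighbour in graph.get(vertex, ()):
            if neighbour not in seen:
                seen.add(neighbour)
                frontier.append((neighbour, depth + 1))
    return False

If the optimization answer is 4, as in the example above, then path_within_k_edges(graph, u, v, 10) returns True and path_within_k_edges(graph, u, v, 3) returns False.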
https://en.wikipedia.org/wiki/Combinatorial_optimization
Principal variation search(sometimes equated with the practically identicalNegaScout) is anegamaxalgorithm that can be faster thanalpha–beta pruning. Like alpha–beta pruning, NegaScout is a directional search algorithm for computing theminimaxvalue of a node in atree. It dominates alpha–beta pruning in the sense that it will never examine a node that can be pruned by alpha–beta; however, it relies on accurate node ordering to capitalize on this advantage. NegaScout works best when there is a good move ordering. In practice, the move ordering is often determined by previous shallower searches. It produces more cutoffs than alpha–beta by assuming that the first explored node is the best. In other words, it supposes the first node is in theprincipal variation. Then, it can check whether that is true by searching the remaining nodes with a null window (also known as a scout window; when alpha and beta are equal), which is faster than searching with the regular alpha–beta window. If the proof fails, then the first node was not in the principal variation, and the search continues as normal alpha–beta. Hence, NegaScout works best when the move ordering is good. With a random move ordering, NegaScout will take more time than regular alpha–beta; although it will not explore any nodes alpha–beta did not, it will have to re-search many nodes. Alexander Reinefeldinvented NegaScout several decades after the invention of alpha–beta pruning. He gives a proof of correctness of NegaScout in his book.[1] Another search algorithm calledSSS*can theoretically result in fewer nodes searched. However, its original formulation has practical issues (in particular, it relies heavily on an OPEN list for storage) and nowadays most chess engines still use a form of NegaScout in their search. Most chess engines use a transposition table in which the relevant part of the search tree is stored. This part of the tree has the same size as SSS*'s OPEN list would have.[2]A reformulation called MT-SSS* allowed it to be implemented as a series of null window calls to Alpha–Beta (or NegaScout) that use a transposition table, and direct comparisons using game playing programs could be made. It did not outperform NegaScout in practice. Yet another search algorithm, which does tend to do better than NegaScout in practice, is the best-first algorithm calledMTD(f), although neither algorithm dominates the other. There are trees in which NegaScout searches fewer nodes than SSS* or MTD(f) and vice versa. NegaScout takes after SCOUT, invented byJudea Pearlin 1980, which was the first algorithm to outperform alpha–beta and to be proven asymptotically optimal.[3][4]Null windows, with β=α+1 in a negamax setting, were invented independently by J.P. Fishburn and used in an algorithm similar to SCOUT in an appendix to his Ph.D. thesis,[5]in a parallel alpha–beta algorithm,[6]and on the last subtree of a search tree root node.[7] Most of the moves are not acceptable for both players, so we do not need to fully search every node to get the exact score. The exact score is only needed for nodes in theprincipal variation(an optimal sequence of moves for both players), where it will propagate up to the root. In iterative deepening search, the previous iteration has already established a candidate for such a sequence, which is also commonly called the principal variation. For any non-leaf in this principal variation, its children are reordered such that the next node from this principal variation is the first child. 
All other children are assumed to result in a worse or equal score for the current player (this assumption follows from the assumption that the current PV candidate is an actual PV). To test this, we search the first move with a full window to establish an upper bound on the score of the other children, for which we conduct a zero window search to test if a move can be better. Since a zero window search is much cheaper due to the higher frequency of beta cut-offs, this can save a lot of effort. If we find that a move can raise alpha, our assumption has been disproven for this move and we do a re-search with the full window to get the exact score.[8][9]
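As a rough negamax-style sketch of the scheme just described (full window on the first child, null window on the rest, re-search on fail high), and not the pseudocode of any particular engine, the following Python fragment assumes a hypothetical node interface with is_terminal(), evaluate() and ordered_children().

def pvs(node, depth, alpha, beta, color):
    # evaluate() is assumed to return the static score from the first
    # player's point of view; color is +1 or -1.
    if depth == 0 or node.is_terminal():
        return color * node.evaluate()
    first = True
    for child in node.ordered_children():      # PV candidate ordered first
        if first:
            score = -pvs(child, depth - 1, -beta, -alpha, -color)
            first = False
        else:
            # Null-window "scout" search: a cheap test of whether this move
            # can raise alpha.
            score = -pvs(child, depth - 1, -alpha - 1, -alpha, -color)
            if alpha < score < beta:
                # The test failed high: re-search with the full window to
                # obtain the exact score.
                score = -pvs(child, depth - 1, -beta, -alpha, -color)
        alpha = max(alpha, score)
        if alpha >= beta:
            break                              # beta cut-off
    return alpha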
https://en.wikipedia.org/wiki/Principal_variation_search
The history of chess began nearly 1500 years ago. The introduction of chess engines around 1960 and their continual improvement over time have made chess engines an integral part of chess analysis and have influenced what and how chess is played today by humans. It has also led to the problem of cheating.

The earliest form of a supposed chess engine appeared in the 18th century with a machine named the Mechanical Turk. Created by Hungarian inventor Wolfgang von Kempelen, the Mechanical Turk, a life-sized human model, debuted in 1770 as the so-called world's first autonomous chess robot. The Mechanical Turk seemingly could play chess and beat opponents, even going as far as solving the iconic knight's tour chess puzzle. It remained in operation from 1770 to 1854, eventually being destroyed in a fire. The hoax was uncovered years after the machine's demise, with a human having been the true source of the Mechanical Turk's intelligence the entire time.[1]

In 1912 Leonardo Torres Quevedo built the first real instance of a chess computer, an automaton named El Ajedrecista.[2] Unlike the Mechanical Turk, El Ajedrecista was actually the first autonomous machine capable of playing chess. El Ajedrecista could play an endgame with white, in which white has a king and rook while black has only a king. The machine was capable of checkmating the black king (played by a human) every time, and was able to identify illegal moves.[3] El Ajedrecista marked the first actual chess engine.

After World War II the invention of the computer spurred the development of chess engines. Two pioneers of the computer, Alan Turing and Claude Shannon, took an interest in computer chess. In 1950, Claude Shannon published a paper detailing a program that could potentially play chess against a human.[4] One year later, Alan Turing created the first computer chess-playing algorithm, yet the hardware at the time lacked the power to run it. Turing tested his algorithm by hand, and although the algorithm itself was weak, Turing and Shannon had laid a foundation.[citation needed]

In 1951 a close colleague of Turing, Dietrich Prinz, created and implemented a basic chess algorithm that was capable of solving mate in two. The algorithm ran on the Ferranti Mark 1, the first commercially available computer, and although it lacked the power to play a full game it served as a proof of concept for chess computing.[5]

In 1957 an IBM engineer named Alex Bernstein created the world's first fully automated chess engine. The engine was built for the IBM 704 mainframe and took around eight minutes per move.[6] The computer was capable of playing an entire game.

Drastic software and hardware improvements in the 1960s and 1970s led to stronger engines. The Minimax algorithm and its alpha-beta pruning optimization remain key to chess programming and optimization. The algorithm, initially proven in 1928 by John von Neumann, focuses on maximizing one player's score while minimizing the other's. Improvements and extensions to this algorithm were developed for chess programming with the goal of increasing the search depth, and with it the playing strength. These included move selection techniques, heuristic approaches, iterative deepening, and opening/endgame tablebases. During this time certain chess grandmasters devoted themselves to the improvement of chess programming.
Most notably previous World Chess ChampionMikhail Botvinnik, who wrote several papers on the subject, specifically related to move selection techniques.[7] Hardware, previously the greatest limiter people like Turing and Dietrich had to face, advanced at an astonishing rate. In 1965Gordon Mooreobserved that transistor count in computers had been doubling every two years, increasing hardware speed at an exponential rate. This is commonly referred to asMoore's law. Chess specific hardware also became prominent for chess engines in this time. In 1978 a chess engine namedBellewon theNorth American Computer Chess Championshiprun by theAssociation for Computing Machinery. The engine's special hardware allowed it to analyze around thirty million positions in three minutes. Belle also held both opening and ending databases, aiding the hardware speed. Two years later Belle became the first chess engine to receive aMaster rating. The chess engines of 1960s and 1970s failed to compete successfully with top chess players. In 1968, International MasterDavid Levyoffered $3000 to any chess engine that could best him in the next ten years. In 1977 Levy faced the chess engineKaissa, winning the match without losing a single game.[8] In 1980Edward Fredkin, computer science professor atCarnegie Mellon University, offered prizes for chess engines to break barriers in the chess world. These included $10,000 for the first engine to reach Grandmaster level, and $100,000 for the first engine to beat a chess world champion.[9] Deep Bluebegan under the nameChipTest. ChipTest was developed and built byFeng-hsiung Hsu,Thomas AnantharamanandMurray Campbellat Carnegie Mellon. They entered the engine into the 1986 North American Computer Chess Championship and fell short, but won the competition in a 4–0 sweep in the next year.[10] The team developed a new machine starting in 1988, namedDeep Thought. Deep Thought had significant advantages over its previous version, and would stand apart from its competition. It became the first engine to beat a grandmaster when it playedBent Larsenin a regular tournament game the same year it came out.[10]The following year Deep Thought won theWorld Computer Chess Championshipwith an unbeaten 5-0 score. Chess engines had not yet surpassed humans, and Deep Thought fell to world champion Garry Kasparov in two matches the same year. For the following years Deep Thought remained the chess engine champion, eventually becoming Deep Thought 2 and winning theNorth American Computer Chess Championshipfor the fifth time.[10]IBMstarted sponsoring the team in 1994. In 1995, a new chess engine prototype named Deep Blue was released from the team at IBM. The engine was completed in 1996, and in the same year faced chess championGarry Kasparovfor the first time. Kasparov won thesix-game matchby the score 4–2.,[10]but this was the first time a chess engine won a game against the current world chess champion in a regular match. Deep Blue was upgraded and worked on by both engineers and top chess grandmasters, and a year later, Kasparov and Deep Blue played another match. This time Deep Blue would beat Kasparov and become the first chess engine to beat the current world chess champion in a match.[10]Despite controversial claims on Kasparov's behalf that IBM had cheated, the result was considered a momentous achievement in chess computing by many. Kasparov's defeat marked the end of a time when the best humans could beat the engines. 
Money continued to flow into chess computing and the industry flourished, though not without controversy. In 2011, the four-time reigning champion engine Rybka was disqualified from the World Computer Chess Championship for code plagiarism.[11] New competitions sprang up, with the Top Chess Engine Championship being founded in 2010 with a stronger emphasis on automated play, longer games, and stronger hardware.

Until the late 2010s progress in chess computing was gradual but consistent, and the engines grew stronger than ever. That changed at the end of 2017, when a team at the Google company DeepMind released a new type of engine, AlphaZero. While previous engines had relied on searching through trees and evaluating positions with handcrafted algorithms, AlphaZero used a neural network for its analysis, learning chess on its own by playing games against itself, an approach that had not been used before.[12] In a 100-game match against the strongest engine of the time, Stockfish, AlphaZero won 28 games and drew the remaining 72.[12] AlphaZero was widely considered a breakthrough for chess computing and for artificial intelligence in general.

Since 2017, the presence of neural networks in the world's top chess engines has grown. All of today's top engines, including Leela Chess Zero, Stockfish, and Komodo, include neural networks in their evaluation function. Yet the deep reinforcement learning used for AlphaZero remains uncommon in top engines.
https://en.wikipedia.org/wiki/History_of_chess_engines
English draughts (British English) or checkers (American English), also called straight checkers or simply draughts,[note 1] is a form of the strategy board game draughts (or checkers). It is played on an 8×8 checkerboard with 12 pieces per side. The pieces move and capture diagonally forward, until they reach the opposite end of the board, when they are crowned and can thereafter move and capture both backward and forward. As in all forms of draughts, English draughts is played by two opponents, alternating turns on opposite sides of the board. The pieces are traditionally black, red, or white. Enemy pieces are captured by jumping over them.

The 8×8 variant of draughts was weakly solved in 2007 by a team of Canadian computer scientists led by Jonathan Schaeffer. From the standard starting position, both players can guarantee a draw with perfect play.

Pieces are traditionally made of wood, though many are now made of plastic, and other materials may be used. Pieces are typically flat and cylindrical. They are invariably split into one darker and one lighter colour. Traditionally and in tournaments, these colours are red and white, but black and red are common in the United States, as are dark- and light-stained wooden pieces. The darker-coloured side is commonly referred to as "Black"; the lighter-coloured side, "White".

There are two classes of pieces: men and kings. Men are single pieces. Kings consist of two men of the same colour, stacked one on top of the other. The bottom piece is referred to as crowned. Some sets have pieces with a crown molded, engraved or painted on one side, allowing the player to simply turn the piece over or to place the crown-side up on the crowned man, further differentiating kings from men. Pieces are often manufactured with indentations to aid stacking.

Each player starts with 12 men on the dark squares of the three rows closest to that player's side. The row closest to each player is called the kings row or crownhead. The player with the darker-coloured pieces moves first. Then turns alternate.

There are two different ways to move in English draughts: a simple move, in which a piece slides diagonally to an adjacent unoccupied dark square (men move only forward, kings in any diagonal direction), and a jump, in which a piece leaps over a diagonally adjacent enemy piece to the vacant square immediately beyond it, capturing that piece. Jumping is always mandatory: if a player has the option to jump, they must take it, even if doing so results in a disadvantage for the jumping player. For example, a mandated single jump might set up the player such that the opponent has a multi-jump in reply.

Multiple jumps are possible if, after one jump, another piece is immediately eligible to be jumped by the moved piece, even if that jump is in a different diagonal direction. If more than one multi-jump is available, the player can choose which piece to jump with, and which sequence of jumps to make. The sequence chosen is not required to be the one that maximizes the number of jumps in the turn; however, a player must make all available jumps in the sequence chosen.

If a man moves into the kings row on the opponent's side of the board, it is crowned as a king and gains the ability to move both forward and backward. If a man moves or jumps into the kings row, the current move terminates; the piece is crowned as a king but cannot jump back out as in a multi-jump until the next move.

A player wins by capturing all of the opponent's pieces or by leaving the opponent with no legal move. The game is a draw if neither side can force a win, or by agreement (one side offering a draw, the other accepting).
The December 1977 issue of the English Draughts Association Journal published a letter from Alan Beckerson of London, who had discovered a number of complete games of twenty moves in length. These were the shortest games ever discovered and gained Beckerson a place in the Guinness Book of Records. He offered a £100 prize to anybody who could discover a complete game in fewer than twenty moves. In February 2003, Martin Bryant (author of the Colossus draughts program) published a paper on his website[1] presenting an exhaustive analysis showing that there exist 247 games of twenty moves in length (and confirming that this is the shortest possible game), leading (by transposition) to 32 distinct final positions.

There is a standardised notation for recording games. All 32 reachable board squares are numbered in sequence. The numbering starts in Black's double-corner (where Black has two adjacent squares). Black's squares on the first rank are numbered 1 to 4; the next rank 5 to 8, and so on. Moves are recorded as "from-to", so a move from 9 to 14 would be recorded 9-14. Captures are notated with an "x" connecting the start and end squares. The game result is often abbreviated as BW/RW (Black/Red wins) or WW (White wins). In Unicode, draughts pieces are encoded in the Miscellaneous Symbols block.

The men's World Championship in English draughts dates to the 1840s, predating the men's Draughts World Championship, the championship for men in International draughts, by several decades. Noted world champions include Andrew Anderson, James Wyllie, Robert Martins, Robert D. Yates, James Ferrie, Alfred Jordan, Newell W. Banks, Robert Stewart, Asa Long, Walter Hellman, Marion Tinsley, Derek Oldbury, Ron King, Michele Borghetti, Alex Moiseyev, Lubabalo Kondlo,[5] Sergio Scarpetta, Patricia Breen, and Amangul Durdyyeva.[6] Championships are held in GAYP (Go As You Please) and 3-Move versions. From 1840 to 1994, the men's winners were from Scotland, England, and the United States. From 1994 to 2023, the men's winners were from the United States, Barbados, South Africa and Italy. The women's championship started in 1993. As of 2022, the women's winners have been from Ireland, Turkmenistan, and Ukraine. The European Cup has been held since 2013; the World Cup, since 2015.

The first English draughts computer program was written by Christopher Strachey, M.A. at the National Physical Laboratory (NPL), London.[7] Strachey finished the programme, written in his spare time, in February 1951. It ran for the first time on the NPL's Pilot ACE computer on 30 July 1951. He soon modified the programme to run on the Manchester Mark 1 computer.

The second computer program was written in 1956 by Arthur Samuel, a researcher from IBM. Besides being one of the most complicated game-playing programs written at the time, it is also well known for being one of the first adaptive programs. It learned by playing games against modified versions of itself, with the victorious versions surviving. Samuel's program was far from mastering the game, although one win against a blind checkers master gave the general public the impression that it was very good.

In November 1983, the Science Museum Oklahoma (then called the Omniplex) unveiled a new exhibit: Lefty the Checker Playing Robot. Programmed by Scott M Savage, Lefty used an Armdroid robotic arm by Colne Robotics and was powered by a 6502 processor with a combination of BASIC and Assembly code to interactively play a round of checkers with visitors.
Originally, the program was deliberately simple so that the average visitor could potentially win, but over time was improved. The improvements proved to be frustrating for the visitors, so the original code was reimplemented.[8] In the 1990s, the strongest program wasChinook, written in 1989 by a team from theUniversity of Albertaled byJonathan Schaeffer.Marion Tinsley, world champion from 1955–1962 and from 1975–1991, won a match against the machine in 1992. In 1994, Tinsley had to resign in the middle of an even match for health reasons; he died shortly thereafter. In 1995, Chinook defended its man-machine title againstDon Laffertyin a thirty-two game match. The final score was 1–0 with 31 draws for Chinook over Don Lafferty.[9]In 1996 Chinook won in the U.S. National Tournament by the widest margin ever, and was retired from play after that event. The man-machine title has not been contested since. In July 2007, in an article published inScience Magazine, Chinook's developers announced that the program had been improved to the point where it could not lose a game.[10]If no mistakes were made by either player, the game would always end in a draw. After eighteen years, they have computationally proven aweak solutionto the game of checkers.[11]Using between two hundreddesktop computersat the peak of the project and around fifty later on, the team made 1014calculations to search from the initial position to a database of positions with at most ten pieces.[12]However, the solution is only for the initial position rather than for all 156 accepted random 3-move openings of tournament play. The number of possible positions in English draughts is 500,995,484,682,338,672,639[13]and it has agame-tree complexityof approximately 1040.[14]By comparison, chess is estimated to have between1043and 1050legal positions. When draughts isgeneralizedso that it can be played on anm×nboard, the problem of determining if the first player has a win in a given position isEXPTIME-complete. The July 2007 announcement byChinook's team stating that the game had beensolvedmust be understood in the sense that, withperfect playon both sides, the game will always finish with a draw. However, not all positions that could result from imperfect play have been analysed.[15] Some top draughts programs areChinook, andKingsRow.
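The standardised notation described earlier maps naturally onto code; the following Python sketch converts the 1-32 square numbers to zero-based board coordinates and formats moves and captures, with the board orientation chosen arbitrarily for the example.

def square_to_coords(square):
    # Squares 1-4 sit on row 0, 5-8 on row 1, and so on; the playable dark
    # squares shift by one column on alternating rows.  Which corner holds
    # square 1 is a convention, so treat this mapping as illustrative.
    row = (square - 1) // 4
    offset = (square - 1) % 4
    col = 2 * offset + (1 if row % 2 == 0 else 0)
    return row, col

def format_move(from_square, to_square, capture=False):
    # "from-to" for ordinary moves, an "x" for captures, as described above.
    return f"{from_square}{'x' if capture else '-'}{to_square}"

print(square_to_coords(1))        # (0, 1) under this orientation
print(format_move(9, 14))         # 9-14
print(format_move(22, 15, True))  # 22x15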
https://en.wikipedia.org/wiki/Computer_checkers
Computer Gois the field ofartificial intelligence(AI) dedicated to creating acomputer programthat plays the traditionalboard gameGo. The field is sharply divided into two eras. Before 2015, the programs of the era were weak. The best efforts of the 1980s and 1990s produced only AIs that could be defeated by beginners, and AIs of the early 2000s were intermediate level at best. Professionals could defeat these programs even given handicaps of 10+ stones in favor of the AI. Many of the algorithms such asalpha-beta minimaxthat performed well as AIs forcheckersandchessfell apart on Go's 19x19 board, as there were too many branching possibilities to consider. Creation of a human professional quality program with the techniques and hardware of the time was out of reach. Some AI researchers speculated that the problem was unsolvable without creation ofhuman-like AI. The application ofMonte Carlo tree searchto Go algorithms provided a notable improvement in the late2000s decade, with programs finally able to achieve alow-dan level: that of an advanced amateur. High-dan amateurs and professionals could still exploit these programs' weaknesses and win consistently, but computer performance had advanced past the intermediate (single-digitkyu) level. The tantalizing unmet goal of defeating the best human players without a handicap, long thought unreachable, brought a burst of renewed interest. The key insight proved to be an application ofmachine learninganddeep learning.DeepMind, aGoogleacquisition dedicated to AI research, producedAlphaGoin 2015 and announced it to the world in 2016.AlphaGo defeated Lee Sedol, a 9 dan professional, in a no-handicap match in 2016, thendefeated Ke Jie in 2017, who at the time continuously held the world No. 1 ranking for two years. Just ascheckers had fallen to machines in 1995andchess in 1997, computer programs finally conquered humanity's greatest Go champions in 2016–2017. DeepMind did not release AlphaGo for public use, but various programs have been built since based on the journal articles DeepMind released describing AlphaGo and its variants. Professional Go players see the game as requiring intuition, creative and strategic thinking.[1][2]It has long been considered a difficult challenge in the field ofartificial intelligence(AI) and is considerably more difficult to solve thanchess.[3]Many in the field considered Go to require more elements that mimic human thought than chess.[4]MathematicianI. J. Goodwrote in 1965:[5] Go on a computer? – In order to programme a computer to play a reasonable game of Go, rather than merely a legal game – it is necessary to formalise the principles of good strategy, or to design a learning programme. The principles are more qualitative and mysterious than in chess, and depend more on judgment. So I think it will be even more difficult to programme a computer to play a reasonable game of Go than of chess. Prior to 2015, the best Go programs only managed to reachamateur danlevel.[6][7]On the small 9×9 board, the computer fared better, and some programs managed to win a fraction of their 9×9 games against professional players. Prior to AlphaGo, some researchers had claimed that computers would never defeat top humans at Go.[8] The first Go program was written byAlbert Lindsey Zobristin 1968 as part of his thesis onpattern recognition.[9]It introduced aninfluence functionto estimate territory andZobrist hashingto detectko. 
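Zobrist hashing, introduced here and used again below for transposition tables, is compact enough to sketch in a few lines of Python; the board size and seeding are arbitrary choices for the example.

import random

SIZE = 19
random.seed(0)
# One random 64-bit key per (row, column, colour); colour 0 = black, 1 = white.
ZOBRIST = [[[random.getrandbits(64) for _ in range(2)]
            for _ in range(SIZE)] for _ in range(SIZE)]

def toggle_stone(h, row, col, colour):
    # Placing and removing a stone are the same operation: XOR the key for
    # that intersection and colour into the hash, so a move needs only a
    # couple of XORs instead of rehashing the whole board.
    return h ^ ZOBRIST[row][col][colour]

h = 0                               # hash of the empty board
h = toggle_stone(h, 3, 3, 0)        # a black stone appears at (3, 3)
h = toggle_stone(h, 3, 3, 0)        # it is captured again
assert h == 0                       # the hash returns to the empty-board value

Because the hash of a position does not depend on the move order that produced it, repeated positions (such as ko recaptures) can be recognised by comparing hashes or by looking them up in a transposition table.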
In April 1981, Jonathan K Millen published an article inBytediscussing Wally, a Go program with a 15x15 board that fit within theKIM-1microcomputer's 1K RAM.[10]Bruce F. Websterpublished an article in the magazine in November 1984 discussing a Go program he had written for theApple Macintosh, including theMacFORTHsource.[11]Programs for Go were weak; a 1983 article estimated that they were at best equivalent to 20kyu, the rating of a naive novice player, and often restricted themselves to smaller boards.[12]AIs who played on theInternet Go Server (IGS)on 19x19 size boards had around 20–15kyustrength in 2003, after substantial improvements in hardware.[13] In 1998, very strong players were able to beat computer programs while giving handicaps of 25–30 stones, an enormous handicap that few human players would ever take. There was a case in the 1994 World Computer Go Championship where the winning program, Go Intellect, lost all three games against the youth players while receiving a 15-stone handicap.[14]In general, players who understood and exploited a program's weaknesses could win even through large handicaps.[15] In 2006 (with an article published in 2007),Rémi Coulomproduced a new algorithm he calledMonte Carlo tree search.[16]In it, a game tree is created as usual of potential futures that branch with every move. However, computers "score" a terminal leaf of the tree by repeated random playouts (similar toMonte Carlostrategies for other problems). The advantage is that such random playouts can be done very quickly. The intuitive objection - that random playouts do not correspond to the actual worth of a position - turned out not to be as fatal to the procedure as expected; the "tree search" side of the algorithm corrected well enough for finding reasonable future game trees to explore. Programs based on this method such as MoGo and Fuego saw better performance than classic AIs from earlier. The best programs could do especially well on the small 9x9 board, which had fewer possibilities to explore. In 2009, the first such programs appeared which could reach and hold lowdan-level rankson theKGS Go Serveron the 19x19 board. In 2010, at the 2010 European Go Congress in Finland, MogoTW played 19x19 Go againstCatalin Taranu(5p). MogoTW received a seven-stone handicap and won.[17] In 2011,Zenreached 5 dan on the server KGS, playing games of 15 seconds per move. The account which reached that rank uses a cluster version of Zen running on a 26-core machine.[18] In 2012, Zen beatTakemiya Masaki(9p) by 11 points at five stones handicap, followed by a 20-point win at four stones handicap.[19] In 2013,Crazy StonebeatYoshio Ishida(9p) in a 19×19 game at four stones handicap.[20] The 2014 Codecentric Go Challenge, a best-of-five match in an even 19x19 game, was played between Crazy Stone and Franz-Jozef Dickhut (6d). No stronger player had ever before agreed to play a serious competition against a go program on even terms. Franz-Jozef Dickhut won, though Crazy Stone won the first match by 1.5 points.[21] AlphaGo, developed byGoogle DeepMind, was a significant advance in computer strength compared to previous Go programs. 
It used techniques that combineddeep learningandMonte Carlo tree search.[22]In October 2015, it defeatedFan Hui, the European Go champion, five times out of five in tournament conditions.[23]In March 2016, AlphaGo beatLee Sedolin the first three of five matches.[24]This was the first time that a9-danmaster had played a professional game against a computer without handicap.[25]Lee won the fourth match, describing his win as "invaluable".[26]AlphaGo won the final match two days later.[27][28]With this victory, AlphaGo became the first program to beat a 9 dan human professional in a game without handicaps on a full-sized board. In May 2017, AlphaGo beatKe Jie, who at the time was ranked top in the world,[29][30]in athree-game matchduring theFuture of Go Summit.[31] In October 2017, DeepMind revealed a new version of AlphaGo, trained only through self play, that had surpassed all previous versions, beating the Ke Jie version in 89 out of 100 games.[32] After the basic principles of AlphaGo were published in the journalNature, other teams have been able to produce high-level programs. Work on Go AI since has largely consisted of emulating the techniques used to build AlphaGo, which proved so much stronger than everything else. By 2017, bothZenandTencent's projectFine Artwere capable of defeating very high-level professionals some of the time. The open sourceLeela Zeroengine was created as well. For a long time, it was a widely held opinion that computer Go posed a problem fundamentally different fromcomputer chess. Many considered a strong Go-playing program something that could be achieved only in the far future, as a result of fundamental advances in general artificial intelligence technology. Those who thought the problem feasible believed that domain knowledge would be required to be effective against human experts. Therefore, a large part of the computer Go development effort was during these times focused on ways of representing human-like expert knowledge and combining this with local search to answer questions of a tactical nature. The result of this were programs that handled many specific situations well but which had very pronounced weaknesses in their overall handling of the game. Also, these classical programs gained almost nothing from increases in available computing power. Progress in the field was generally slow. The large board (19×19, 361 intersections) is often noted as one of the primary reasons why a strong program is hard to create. The large board size prevents analpha-beta searcherfrom achieving deep look-ahead without significant search extensions orpruningheuristics. In 2002, a computer program called MIGOS (MIni GO Solver) completely solved the game of Go for the 5×5 board. Black wins, taking the whole board.[33] Continuing the comparison to chess, Go moves are not as limited by the rules of the game. For the first move in chess, the player has twenty choices. Go players begin with a choice of 55 distinct legal moves, accounting for symmetry. This number rises quickly as symmetry is broken, and soon almost all of the 361 points of the board must be evaluated. One of the most basic tasks in a game is to assess a board position: which side is favored, and by how much? In chess, many future positions in a tree are direct wins for one side, and boards have a reasonable heuristic for evaluation in simple material counting, as well as certain positional factors such as pawn structure. 
A future where one side has lost their queen for no benefit clearly favors the other side. These types of positional evaluation rules cannot efficiently be applied to Go. The value of a Go position depends on a complex analysis to determine whether or not the group is alive, which stones can be connected to one another, and heuristics around the extent to which a strong position has influence, or the extent to which a weak position can be attacked. A stone placed might not have immediate influence, but after many moves could become highly important in retrospect as other areas of the board take shape. Poor evaluation of board states will cause the AI to work toward positions it incorrectly believes favor it, but actually do not. One of the main concerns for a Go player is which groups of stones can be kept alive and which can be captured. This general class of problems is known aslife and death. Knowledge-based AI systems sometimes attempted to understand the life and death status of groups on the board. The most direct approach is to perform atree searchon the moves which potentially affect the stones in question, and then to record the status of the stones at the end of the main line of play. However, within time and memory constraints, it is not generally possible to determine with complete accuracy which moves could affect the 'life' of a group of stones. This implies that someheuristicmust be applied to select which moves to consider. The net effect is that for any given program, there is a trade-off between playing speed and life and death reading abilities. An issue that all Go programs must tackle is how to represent the current state of the game. The most direct way of representing a board is as a one- or two-dimensional array, where elements in the array represent points on the board, and can take on a value corresponding to a white stone, a black stone, or an empty intersection. Additional data is needed to store how many stones have been captured, whose turn it is, and which intersections are illegal due to theKo rule. In general, machine learning programs stop there at this simplest form and let the organic AIs come to their own understanding of the meaning of the board, likely simply using Monte Carlo playouts to "score" a board as good or bad for a player. "Classic" AI programs that attempted to directly model a human's strategy might go further, however, such as layering on data such as stones believed to be dead, stones that are unconditionally alive, stones in asekistate of mutual life, and so forth in their representation of the state of the game. Historically,symbolic artificial intelligencetechniques have been used to approach the problem of Go AI.Neural networksbegan to be tried as an alternative approach in the 2000s decade, as they required immense computing power that was expensive-to-impossible to reach in earlier decades. These approaches attempt to mitigate the problems of the game of Go having a highbranching factorand numerous other difficulties. The only choice a program needs to make is where to place its next stone. However, this decision is made difficult by the wide range of impacts a single stone can have across the entire board, and the complex interactions various stones' groups can have with each other. Various architectures have arisen for handling this problem. Popular techniques and design philosophies include: Onetraditional AItechnique for creating game playing software is to use aminimaxtree search. 
This involves playing out all hypothetical moves on the board up to a certain point, then using anevaluation functionto estimate the value of that position for the current player. The move which leads to the best hypothetical board is selected, and the process is repeated each turn. While tree searches have been very effective incomputer chess, they have seen less success in Computer Go programs. This is partly because it has traditionally been difficult to create an effective evaluation function for a Go board, and partly because the large number of possible moves each side can make each leads to a highbranching factor. This makes this technique very computationally expensive. Because of this, many programs which use search trees extensively can only play on the smaller 9×9 board, rather than full 19×19 ones. There are several techniques, which can greatly improve the performance of search trees in terms of both speed and memory. Pruning techniques such asalpha–beta pruning,Principal Variation Search, andMTD(f)can reduce the effective branching factor without loss of strength. In tactical areas such as life and death, Go is particularly amenable to caching techniques such astransposition tables. These can reduce the amount of repeated effort, especially when combined with aniterative deepeningapproach. In order to quickly store a full-sized Go board in a transposition table, ahashingtechnique for mathematically summarizing is generally necessary.Zobrist hashingis very popular in Go programs because it has low collision rates, and can be iteratively updated at each move with just twoXORs, rather than being calculated from scratch. Even using these performance-enhancing techniques, full tree searches on a full-sized board are still prohibitively slow. Searches can be sped up by using large amounts of domain specific pruning techniques, such as not considering moves where your opponent is already strong, and selective extensions like always considering moves next to groups of stones which areabout to be captured. However, both of these options introduce a significant risk of not considering a vital move which would have changed the course of the game. Results of computer competitions show that pattern matching techniques for choosing a handful of appropriate moves combined with fast localized tactical searches (explained above) were once sufficient to produce a competitive program. For example,GNU Gowas competitive until 2008. Human novices often learn from the game records of old games played by master players. AI work in the 1990s often involved attempting to "teach" the AI human-style heuristics of Go knowledge. In 1996, Tim Klinger and David Mechner acknowledged the beginner-level strength of the best AIs and argued that "it is our belief that with better tools for representing and maintaining Go knowledge, it will be possible to develop stronger Go programs."[34]They proposed two ways: recognizing common configurations of stones and their positions and concentrating on local battles. In 2001, one paper concluded that "Go programs are still lacking in both quality and quantity of knowledge," and that fixing this would improve Go AI performance.[35] In theory, the use of expert knowledge would improve Go software. Hundreds of guidelines and rules of thumb for strong play have been formulated by both high-level amateurs and professionals. 
The programmer's task is to take theseheuristics, formalize them into computer code, and utilizepattern matchingandpattern recognitionalgorithms to recognize when these rules apply. It is also important to be able to "score" these heuristics so that when they offer conflicting advice, the system has ways to determine which heuristic is more important and applicable to the situation. Most of the relatively successful results come from programmers' individual skills at Go and their personal conjectures about Go, but not from formal mathematical assertions; they are trying to make the computer mimic the way they play Go. Competitive programs around 2001 could contain 50–100 modules that dealt with different aspects and strategies of the game, such as joseki.[35] Some examples of programs which have relied heavily on expert knowledge are Handtalk (later known as Goemate), The Many Faces of Go, Go Intellect, and Go++, each of which has at some point been considered the world's best Go program. However, these methods ultimately had diminishing returns, and never really advanced past an intermediate level at best on a full-sized board. One particular problem was overall game strategy. Even if an expert system recognizes a pattern and knows how to play a local skirmish, it may miss a looming deeper strategic problem in the future. The result is a program whose strength is less than the sum of its parts; while moves may be good on an individual tactical basis, the program can be tricked and maneuvered into ceding too much in exchange, and find itself in an overall losing position. As the 2001 survey put it, "just one bad move can ruin a good game. Program performance over a full game can be much lower than master level."[35] One major alternative to using hand-coded knowledge and searches is the use ofMonte Carlo methods. This is done by generating a list of potential moves, and for each move playing out thousands of games at random on the resulting board. The move which leads to the best set of random games for the current player is chosen as the best move. No potentially fallible knowledge-based system is required. However, because the moves used for evaluation are generated at random it is possible that a move which would be excellent except for one specific opponent response would be mistakenly evaluated as a good move. The result of this are programs which are strong in an overall strategic sense, but are imperfect tactically.[citation needed]This problem can be mitigated by adding some domain knowledge in the move generation and a greater level of search depth on top of the random evolution. Some programs which use Monte-Carlo techniques are Fuego,[36]The Many Faces of Go v12,[37]Leela,[38]MoGo,[39]Crazy Stone, MyGoFriend,[40]and Zen. In 2006, a new search technique,upper confidence bounds applied to trees(UCT),[41]was developed and applied to many 9x9 Monte-Carlo Go programs with excellent results. UCT uses the results of theplay outscollected so far to guide the search along the more successful lines of play, while still allowing alternative lines to be explored. The UCT technique along with many other optimizations for playing on the larger 19x19 board has led MoGo to become one of the strongest research programs. Successful early applications of UCT methods to 19x19 Go include MoGo, Crazy Stone, and Mango.[42]MoGo won the 2007Computer Olympiadand won one (out of three) blitz game against Guo Juan, 5th Dan Pro, in the much less complex 9x9 Go. 
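The two ingredients just described, random playouts for scoring and the UCT rule for steering the search toward promising moves, can be sketched as follows; the fields and methods on child and state (wins, visits, legal_moves, play, winner) are hypothetical and stand in for whatever game representation a program uses.

import math
import random

def ucb1_select(children, total_visits, c=1.4):
    # UCT child selection: exploit the observed win rate but keep exploring
    # rarely visited moves; unvisited children are tried first.
    def score(child):
        if child.visits == 0:
            return float("inf")
        return (child.wins / child.visits
                + c * math.sqrt(math.log(total_visits) / child.visits))
    return max(children, key=score)

def random_playout(state):
    # Plain Monte Carlo evaluation: play random legal moves until the
    # (hypothetical) state reports the game is over, then return the winner.
    while state.legal_moves():
        state = state.play(random.choice(state.legal_moves()))
    return state.winner()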
The Many Faces of Go[43] won the 2008 Computer Olympiad after adding UCT search to its traditional knowledge-based engine.

Monte-Carlo based Go engines have a reputation for being much more willing than human players to play tenuki, moves elsewhere on the board, rather than continue a local fight. This was often perceived as a weakness early in these programs' existence.[44] That said, this tendency has persisted in AlphaGo's playstyle with dominant results, so this may be more of a "quirk" than a "weakness."[45]

The skill level of knowledge-based systems is closely linked to the knowledge of their programmers and associated domain experts. This limitation has made it difficult to program truly strong AIs. A different path is to use machine learning techniques. In these, the only things the programmers need to program are the rules and simple scoring algorithms for analyzing the worth of a position. The software then, in theory, automatically generates its own sense of patterns, heuristics, and strategies. This is generally done by allowing a neural network or genetic algorithm either to review a large database of professional games, or to play many games against itself or against other people or programs. These algorithms are then able to utilize this data as a means of improving their performance.

Machine learning techniques can also be used in a less ambitious context to tune specific parameters of programs that rely mainly on other techniques. For example, Crazy Stone learns move generation patterns from several hundred sample games, using a generalization of the Elo rating system.[46]

The most famous example of this approach is AlphaGo, which proved far more effective than previous AIs. In its first version, it had one layer that analyzed millions of existing positions to determine likely moves to prioritize as worthy of further analysis, and another layer that tried to optimize its own winning chances using the suggested likely moves from the first layer. AlphaGo used Monte Carlo tree search to score the resulting positions. A later version, AlphaGo Zero, eschewed learning from existing Go games and instead learnt only from playing itself repeatedly. Other earlier programs using neural nets include NeuroGo and WinHonte.

Computer Go research results are being applied to other similar fields such as cognitive science, pattern recognition and machine learning.[47] Combinatorial Game Theory, a branch of applied mathematics, is a topic relevant to computer Go.[35] John H. Conway suggested applying surreal numbers to analysis of the endgame in Go. This idea has been further developed by Elwyn R. Berlekamp and David Wolfe in their book Mathematical Go.[48] Go endgames have been proven to be PSPACE-hard if the absolute best move must be calculated on an arbitrary mostly filled board.
Certain complicated situations such as Triple Ko, Quadruple Ko, Molasses Ko, and Moonshine Life make this problem difficult.[49](In practice, strong Monte Carlo algorithms can still handle normal Go endgame situations well enough, and the most complicated classes of life-and-death endgame problems are unlikely to come up in a high-level game.)[50] Various difficult combinatorial problems (anyNP-hardproblem) can be converted to Go-like problems on a sufficiently large board; however, the same is true for other abstract board games, includingchessandminesweeper, when suitably generalized to a board of arbitrary size.NP-completeproblems do not tend in their general case to be easier for unaided humans than for suitably programmed computers: unaided humans are much worse than computers at solving, for example, instances of thesubset sum problem.[51][52] Several annual competitions take place between Go computer programs, including Go events at theComputer Olympiad. Regular, less formal, competitions between programs used to occur on the KGS Go Server[60](monthly) and the Computer Go Server[61](continuous). Many programs are available that allow computer Go engines to play against each other; they almost always communicate via the Go Text Protocol (GTP). The first computer Go competition was sponsored byAcornsoft,[62]and the first regular ones byUSENIX. They ran from 1984 to 1988. These competitions introduced Nemesis, the first competitive Go program fromBruce Wilcox, and G2.5 by David Fotland, which would later evolve into Cosmos and The Many Faces of Go. One of the early drivers of computer Go research was the Ing Prize, a relatively large money award sponsored by Taiwanese bankerIng Chang-ki, offered annually between 1985 and 2000 at the World Computer Go Congress (or Ing Cup). The winner of this tournament was allowed to challenge young players at a handicap in a short match. If the computer won the match, the prize was awarded and a new prize announced: a larger prize for beating the players at a lesser handicap. The series of Ing prizes was set to expire either 1) in the year 2000 or 2) when a program could beat a 1-dan professional at no handicap for 40,000,000NT dollars. The last winner was Handtalk in 1997, claiming 250,000 NT dollars for winning an 11-stone handicap match against three 11–13 year old amateur 2–6 dans. At the time the prize expired in 2000, the unclaimed prize was 400,000 NT dollars for winning a nine-stone handicap match.[63] Many other large regional Go tournaments ("congresses") had an attached computer Go event. The European Go Congress has sponsored a computer tournament since 1987, and the USENIX event evolved into the US/North American Computer Go Championship, held annually from 1988 to 2000 at the US Go Congress. Japan started sponsoring computer Go competitions in 1995. The FOST Cup was held annually from 1995 to 1999 in Tokyo. That tournament was supplanted by the Gifu Challenge, which was held annually from 2003 to 2006 in Ogaki, Gifu. TheComputer Go UEC Cuphas been held annually since 2007. When two computers play a game of Go against each other, the ideal is to treat the game in a manner identical to two humans playing while avoiding any intervention from actual humans. However, this can be difficult during end game scoring. The main problem is that Go playing software, which usually communicates using the standardizedGo Text Protocol(GTP), will not always agree with respect to the alive or dead status of stones. 
While there is no general way for two different programs to "talk it out" and resolve the conflict, this problem is avoided for the most part by usingChinese,Tromp-Taylor, orAmerican Go Association(AGA) rules in which continued play (without penalty) is required until there is no more disagreement on the status of any stones on the board. In practice, such as on the KGS Go Server, the server can mediate a dispute by sending a special GTP command to the two client programs indicating they should continue placing stones until there is no question about the status of any particular group (all dead stones have been captured). The CGOS Go Server usually sees programs resign before a game has even reached the scoring phase, but nevertheless supports a modified version of Tromp-Taylor rules requiring a full play out. These rule sets mean that a program which was in a winning position at the end of the game under Japanese rules (when both players have passed) could theoretically lose because of poor play in the resolution phase, but this is very unlikely and considered a normal part of the game under all of the area rule sets. The main drawback to the above system is that somerule sets(such as the traditional Japanese rules) penalize the players for making these extra moves, precluding the use of additional playout for two computers. Nevertheless, most modern Go Programs support Japanese rules against humans. Historically, another method for resolving this problem was to have an expert human judge the final board. However, this introduces subjectivity into the results and the risk that the expert would miss something the program saw.
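Most of the engines and servers discussed above exchange moves over the Go Text Protocol on standard input and output, so a match server or GUI can drive an engine as a subprocess. The following Python sketch is purely illustrative and is not the code of any particular server; the engine command is an assumption (GNU Go is used here only because it offers a GTP mode), while the commands shown (boardsize, clear_board, play, genmove, quit) are standard GTP.

```python
import subprocess

# Assumed engine command; any GTP-speaking engine would do.
ENGINE_CMD = ["gnugo", "--mode", "gtp"]

def start_engine(cmd):
    return subprocess.Popen(cmd, stdin=subprocess.PIPE, stdout=subprocess.PIPE,
                            universal_newlines=True)

def send(engine, command):
    """Send one GTP command and collect the response, which ends with a blank line."""
    engine.stdin.write(command + "\n")
    engine.stdin.flush()
    lines = []
    while True:
        line = engine.stdout.readline()
        if line.strip() == "":
            break
        lines.append(line.strip())
    return " ".join(lines)   # e.g. "= D4" on success, "? unknown command" on error

if __name__ == "__main__":
    engine = start_engine(ENGINE_CMD)
    send(engine, "boardsize 9")
    send(engine, "clear_board")
    send(engine, "play black E5")          # inform the engine of a move
    print(send(engine, "genmove white"))   # ask the engine for its reply
    send(engine, "quit")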
https://en.wikipedia.org/wiki/Computer_Go
Computer Othellorefers to computer architecture encompassing computer hardware and computer software capable of playing the game ofOthello. It was notably included inMicrosoft Windowsfrom1.0toXP, where it is simply known as Reversi.[citation needed]
There are many Othello programs such asNTest, Saio,Edax, Cassio, Pointy Stone, Herakles, WZebra, andLogistellothat can be downloaded from theInternetfor free. These programs, when run on any up-to-datecomputer, can play games in which the best human players are easily defeated. This is because although the consequences of moves are predictable for both computers and humans, computers are better at exploring them.[1]
Computer Othello programs search for any possible legal moves using agame tree. In theory, they examine all positions / nodes, where each move by one player is called a"ply". This search continues until a certain maximum search depth or the program determines that a final "leaf" position has been reached. A naive implementation of this approach, known asMinimaxorNegamax, can only search to a small depth in a practical amount of time, so various methods have been devised to greatly increase the speed of the search for good moves. These are based onAlpha-beta pruning,Negascout,MTD(f), and NegaC*.[2]The alphabeta algorithm is a method for speeding up the Minimax searching routine by pruning off cases that will not be used anyway. This method takes advantage of the fact that every other level in the tree will maximize and every other level will minimize.[3]
Several heuristics are also used to reduce the size of the searched tree: good move ordering,transposition tableand selective search.[4]
To speed up the search on machines with multiple processors or cores, a"parallel search"may be implemented. Several experiments have been made with the game Othello, like ABDADA[5]or APHID.[6]On recent programs, the YBWC[7]seems to be the preferred approach.
Multi-ProbCut is a heuristic used inalpha–beta pruningof the search tree.[8]The ProbCut heuristic estimates evaluation scores at deeper levels of the search tree using alinear regressionbetween deeper and shallower scores. Multi-ProbCut extends this approach to multiple levels of the search tree. The linear regression itself is learned through previous tree searches, making the heuristic a kind of dynamic search control.[9]It is particularly useful in games such asOthellowhere there is a strong correlation between evaluation scores at deeper and shallower levels.[10][11]
There are three different paradigms for creating evaluation functions. Different squares have different values - corners are good and the squares next to corners are bad. Disregarding symmetries, there are 10 different positions on a board, and each of these is given a value for each of the three possibilities: black disk, white disk and empty. A more sophisticated approach is to have different values for each position during the different stages of the game; e.g. corners are more important in the opening and early midgame than in the endgame.[12]
Most human players strive to maximize mobility (number of moves available) and minimize frontier disks (disks adjacent to empty squares). Player mobility and opponent mobility are calculated, and player potential mobility and opponent potential mobility are calculated as well.[13]These measures can be found very quickly, and they significantly increase playing strength.
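The search and evaluation ideas described above can be combined in a few dozen lines. The following Python sketch implements a depth-limited Negamax search with alpha-beta pruning over a toy Othello board, using only corner ownership and a mobility difference as the evaluation; it is an illustration of the technique, not the code of any of the programs named above.

```python
import math

EMPTY, BLACK, WHITE = 0, 1, -1
DIRS = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]

def initial_board():
    b = [[EMPTY] * 8 for _ in range(8)]
    b[3][3], b[4][4] = WHITE, WHITE
    b[3][4], b[4][3] = BLACK, BLACK
    return b

def flips(board, player, r, c):
    """Discs that would be flipped if `player` moved at (r, c); empty list if illegal."""
    if board[r][c] != EMPTY:
        return []
    captured = []
    for dr, dc in DIRS:
        line, rr, cc = [], r + dr, c + dc
        while 0 <= rr < 8 and 0 <= cc < 8 and board[rr][cc] == -player:
            line.append((rr, cc)); rr += dr; cc += dc
        if line and 0 <= rr < 8 and 0 <= cc < 8 and board[rr][cc] == player:
            captured += line
    return captured

def legal_moves(board, player):
    return [(r, c) for r in range(8) for c in range(8) if flips(board, player, r, c)]

def apply_move(board, player, move):
    new = [row[:] for row in board]
    r, c = move
    new[r][c] = player
    for rr, cc in flips(board, player, r, c):
        new[rr][cc] = player
    return new

# Toy evaluation: corner ownership plus a mobility difference, from the side to move.
CORNERS = [(0, 0), (0, 7), (7, 0), (7, 7)]
def evaluate(board, player):
    corners = sum(board[r][c] for r, c in CORNERS) * 25
    mobility = len(legal_moves(board, BLACK)) - len(legal_moves(board, WHITE))
    return player * (corners + 2 * mobility)

def negamax(board, player, depth, alpha=-math.inf, beta=math.inf):
    moves = legal_moves(board, player)
    if depth == 0 or not moves:
        if not moves and legal_moves(board, -player):          # must pass
            return -negamax(board, -player, depth, -beta, -alpha)
        return evaluate(board, player)
    best = -math.inf
    for move in moves:
        score = -negamax(apply_move(board, player, move), -player, depth - 1, -beta, -alpha)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:      # cut-off: the opponent already has a better option elsewhere
            break
    return best

if __name__ == "__main__":
    board = initial_board()
    scores = {m: -negamax(apply_move(board, BLACK, m), WHITE, 4)
              for m in legal_moves(board, BLACK)}
    print(max(scores, key=scores.get))   # black's best opening move at depth 4
```

A real engine adds move ordering, transposition tables and the pattern-based evaluation discussed below, but the pruning logic is essentially the same.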
Most programs have knowledge of edge and corner configurations and try to minimize the number of disks during the early midgame, another strategy used by human players.[12]
Mobility maximization and frontier minimization can be broken down into local configurations which can be added together; the usual implementation is to evaluate each row, column, diagonal and corner configuration separately and add together the values. Many different patterns have to be evaluated.[12]The process of determining values for all configurations is done by taking a large database of games played between strong players and calculating statistics for each configuration in each game stage from all the games.[12]
The most common choice to predict the final disc difference uses a weighted disk difference measure where the winning side gets a bonus corresponding to the number of disks.[12]
Opening books aid computer programs by giving common openings that are considered good ways to counter poor openings. All strong programs use opening books and update their books automatically after each game. To go through all positions from all games in the game database and determine the best move not played in any database game,transposition tablesare used to record positions that have been previously searched. This means those positions do not need to be searched again.[12]This is time-consuming as a deep search must be performed for each position, but once this is done, updating the book is easy. After each game played, all new positions are searched for the best deviation.
Faster hardware and additional processors can improve Othello-playing program abilities, such as deeper ply searching.
During gameplay, players alternate moves. The human player uses black counters while the computer uses white. The human player starts the game.[1]Othello isstrongly solvedon 4×4 and 6×6 boards, with the second player (white) winning inperfect play.[14][15]
Othello 4x4 has a very small game tree and has been solved in less than one second by many simple Othello programs that use the Minimax method, which generates all possible positions (nearly 10 million). The result is that white wins with a +8 margin (3-11).[14]
Othello 6x6 has been solved in less than 100 hours by many simple Othello programs that use the Minimax method, which generates all possible positions (nearly 3.6 trillion). The result is that white wins with a +4 margin (16-20).[16]
The Othello 8x8 game tree size is estimated at 10^54 nodes, and the number of legal positions is estimated at less than 10^28. As of October 2023, a preprint claims that the game has been solved, with the optimal result being a draw.[17][18]The computation results are also shared, making it one of the largest publicly available books.[19]
Some top programs have expanded their books for many years now. As a result, many lines are in practice draws or wins for either side. Regarding the three main openings of diagonal, perpendicular and parallel, it appears that both diagonal and perpendicular openings lead to drawing lines, while the parallel opening is a win for black. The drawing tree also seems bigger after the diagonal opening than after the perpendicular opening.[20][failed verification]The parallel opening has strong advantages for the black player, enabling black to always win with perfect play.[21][failed verification]
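The pattern-based evaluation described above can be sketched as a table lookup: each row, column and diagonal is reduced to a configuration key, looked up in a table of learned values, and the values are summed. In the Python sketch below the table entries are hypothetical placeholders; in a real program they would come from the game-database statistics mentioned above.

```python
from collections import defaultdict

def lines_of(board):
    """Yield every row, column and diagonal (length >= 3) of an 8x8 board."""
    n = len(board)
    for r in range(n):
        yield tuple(board[r])                                   # rows
    for c in range(n):
        yield tuple(board[r][c] for r in range(n))              # columns
    for d in range(-n + 3, n - 2):                              # diagonals, both directions
        yield tuple(board[r][r - d] for r in range(n) if 0 <= r - d < n)
        yield tuple(board[r][d + (n - 1) - r] for r in range(n) if 0 <= d + (n - 1) - r < n)

# Hypothetical learned values: configuration tuple -> score (0 for unseen patterns).
# Discs are encoded as 1 (own), -1 (opponent), 0 (empty).
pattern_values = defaultdict(float)
pattern_values[(1, 1, 1, 1, 1, 1, 1, 1)] = 50.0   # e.g. a full edge of one's own discs

def pattern_evaluation(board):
    return sum(pattern_values[line] for line in lines_of(board))

if __name__ == "__main__":
    board = [[0] * 8 for _ in range(8)]
    board[0] = [1] * 8                    # own discs along the top edge
    print(pattern_evaluation(board))      # 50.0, from the single known pattern
```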
https://en.wikipedia.org/wiki/Computer_Othello
Computer shogiis a field ofartificial intelligenceconcerned with the creation ofcomputer programswhich can playshogi. The research and development of shogi software has been carried out mainly by freelance programmers, university research groups and private companies. By 2017, the strongest programs were outperforming the strongest human players.
Shogi has the distinctive feature of reusing captured pieces. Therefore, shogi has a higherbranching factorthan other chess variants. The computer has more positions to examine because each piece in hand can be dropped on many squares. This gives shogi the highest number of legal positions and the highest number of possible games of all the popular chess variants. The higher numbers for shogi mean it is harder to reach the highest levels of play. The number of legal positions and the number of possible games are two measures of shogi'sgame complexity. The complexity of Go can be found atGo and mathematics. More information on the complexity of Chess can be found atShannon number.
The primary components of a computer shogi program are theopening book, thesearch algorithmand theendgame. The "opening book" helps put the program in a good position and saves time. Shogi professionals, however, do not always follow an opening sequence as in chess, but make different moves to create a good formation of pieces. The "search algorithm" looks ahead more deeply in a sequence of moves and allows the program to better evaluate a move. The search is harder in shogi than in chess because of the larger number of possible moves. A program will stop searching when it reaches a stable position. The problem is that many positions are unstable because of the drop move. Finally, the "endgame" starts when the king is attacked and ends when the game is won. In chess, there are fewer pieces in the endgame, which allows perfect play from endgame databases; however, pieces can be dropped in shogi, so there are no endgame databases. Atsumeshogisolver is used to quickly find mating moves.
In the 1980s, due to the immaturity of the technology in such fields asprogramming,CPUsandmemory, computer shogi programs took a long time to think, and often made moves for which there was no apparent justification. These programs had the level of an amateur of kyu rank. In the first decade of the 21st century, computer shogi took large steps forward in software and hardware technology. In 2007, top shogi playerYoshiharu Habuestimated the strength of the 2006 world computer shogi champion Bonanza. He contributed a piece about the match between Bonanza and thenRyūōChampionAkira Watanabeto the evening edition of the newspaperNihon Keizai Shimbunon 26 March 2007. Habu rated Bonanza's game at the level of 2 danshogi apprentice(shōreikai).[citation needed]
In particular, computers are most suited to brute-force calculation, and far outperform humans at the task of finding ways of checkmating from a given position, which involves many fewer possibilities. In games with time limits of 10 seconds from the first move, computers are becoming a tough challenge for even professional shogi players.[citation needed]The past steady progress of computer shogi is a guide for the future. In 1996 Habu predicted a computer would beat him in 2015.[3]Akira Watanabe gave an interview to the newspaperAsahi Shimbunin 2012. He estimated the computer played at the 4 dan professional level.
Watanabe also said the computer sometimes found moves for him.[4]On 23 October 2005, at the 3rd International Shogi Forum, theJapan Shogi AssociationpermittedToshiyuki Moriuchi, 2005Meijin, to play computer shogi program YSS. Toshiyuki Moriuchi won the game playing 30 seconds per move with a Bishophandicap.[5]In 2012, a retired professional publicly lost a match against a computer for the first time,[6]and in 2013 active shogi professionals did as well.
The Japan Shogi Association (JSA) gave reigning Ryuo Champion Watanabe permission to compete against the reigning World Computer Shogi Champion Bonanza on 21 March 2007. Daiwa Securities sponsored the match. Hoki Kunihito wrote Bonanza. The computer was an Intel Xeon 2.66 GHz 8-core machine with 8 gigabytes of memory and a 160-gigabyte hard drive. The game was played with 2 hours each and 1 minute byo-yomi per move after that. Those conditions favored Watanabe because longer time limits mean there are fewer mistakes from time pressure. Longer playing time also means human players can make long-term plans beyond the computer'scalculating horizon.
The 2 players were not at the same playing level. Watanabe was the 2006 Ryuo Champion and he gave Bonanza a rating equivalent to first or third dan.[7]Bonanza was a little stronger than before due to program improvements and a faster computer. Watanabe prepared for a weaker Bonanza, as he had studied old Bonanza game records.
Bonanza moved first and playedFourth File RookBear-in-the-holeas Watanabe expected. Watanabe thought some of Bonanza's moves were inferior. However, Watanabe deeply analyzed these moves, thinking that maybe the computer saw something that he did not see. Watanabe commented after the game that he could have lost if Bonanza had played defensive moves before entering the endgame. But the computer chose to attack immediately instead of taking its time (and using its impressive endgame strategies), which cost it the match. Bonanza resigned after move 112.[8]
After Bonanza's loss Watanabe commented on computers in his blog, "I thought they still had quite a way to go, but now we have to recognize that they've reached the point where they are getting to be a match for professionals."[citation needed]Watanabe further clarified his position on computers playing shogi in theYomiuri Shimbunon 27 June 2008 when he said "I think I'll be able to defeat shogi software for the next 10 years".[citation needed]Another indication Bonanza was far below the level of professional Watanabe came 2 months after the match at the May 2007 World Computer Shogi Championship. Bonanza lost to the 2007 World Computer Shogi Champion YSS. Then YSS lost to amateur Yukio Kato in a 15-minute game.
The winners of CSA tournaments played exhibition games with strong players. These exhibition games started in 2003.[9]In each succeeding year, the human competition was stronger to match the stronger programs. Yukio Kato was the Asahi Amateur Meijin champion. Toru Shimizugami was the Amateur Meijin champion. Eiki Ito, the creator of Bonkras, said in 2011 that, at present, top shogi programs like Bonkras were at the level of lower- to middle-class professional players.[10]
The computer program Akara defeated the women's Osho championIchiyo Shimizu. Akara contained 4 computer engines: Gekisashi, GPS Shogi, Bonanza, and YSS. Akara ran on a network of 169 computers. The 4 engines voted on the best moves. Akara selected the move with the most votes. If there was a tie vote, Akara selected Gekisashi's move.
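The ensemble rule described above amounts to plurality voting among the engines with a fixed tie-breaker. A minimal Python sketch follows; the engine names match the account above, while the move strings and everything else are hypothetical.

```python
from collections import Counter

TIE_BREAKER = "Gekisashi"   # per the account above, ties fall back to Gekisashi's choice

def ensemble_move(proposals):
    """proposals: dict mapping engine name -> proposed move (as a string)."""
    counts = Counter(proposals.values())
    best_count = max(counts.values())
    leaders = [move for move, count in counts.items() if count == best_count]
    if len(leaders) == 1:
        return leaders[0]            # clear plurality winner
    return proposals[TIE_BREAKER]    # tie: use the designated engine's move

# Hypothetical proposals for a single position.
print(ensemble_move({"Gekisashi": "7g7f", "GPS Shogi": "2g2f",
                     "Bonanza": "7g7f", "YSS": "2g2f"}))   # 2-2 tie -> "7g7f"
```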
Researchers at theUniversity of Tokyoand theUniversity of Electro-Communicationsdeveloped Akara. Shimizu moved first and resigned in 86 moves after 6 hours and 3 minutes. Shimizu said she was trying to play her best as if she was facing a human player. She played at the University of Tokyo on 11 October 2010. The allotted thinking time per player was 3 hours and 60 secondsbyoyomi. 750 fans attended the event. This was the third time since 2005 that the Japan Shogi Association had granted permission to a professional to play a computer, and the first computer victory against a female professional. Akara aggressively pursued Shimizu from the start of the game. Akara played with a ranging rook strategy and offered an exchange of bishops. Shimizu made a questionable move partway through the game, and Akara went on to win.[11]Ryuo champion Akira Watanabe criticized Shimizu's game. On 19 November 2010, the Daily Yomiuri quoted Watanabe. Watanabe said, "Ms. Shimizu had plenty of chances to win".[12]
On 24 July 2011, there was a two-game amateur versus computer match. Two computer shogi programs beat a team of two amateurs. One amateur, Mr. Kosaku, was a Shoreikai three-dan player. The other amateur, Mr. Shinoda, was the 1999 Amateur Ryuo. The allotted time for the amateurs was main time 1 hour and then 3 minutes per move. The allotted time for the computer was main time 25 minutes and then 10 seconds per move.[13][14][15]
On 21 December 2011, computer program Bonkras crushed retired 68-year-oldKunio Yonenaga, the 1993 Meijin. They played 85 moves in 1 hour, 3 minutes 39 seconds on Shogi Club 24. Each player started with a 15-minute time limit, and an additional 60 seconds was added to each player's time limit per move. Yonenaga was gote (white) and played 2. K-62. This move was intended to confuse the computer by playing a move not in Bonkras'sjoseki(opening book).[citation needed]On 14 January 2012, Bonkras again defeated Yonenaga. This match was the first Denou-sen match. The game had 113 moves. Time allowed was 3 hours and then 1 minute per move. Bonkras moved first and used a ranging rook opening. Yonenaga made the same second move, K-6b, as in the previous game he lost. Bonkras ran on a Fujitsu Primergy BX400 with 6 blade servers to search 18,000,000 moves per second. Yonenaga used 2 hours 33 minutes. Bonkras used 1 hour 55 minutes.[6]Bonkras evaluated its game with Yonenaga in January 2012.[16]
Denou-sen was a shogi competition where humans faced off against machines. The second Denou-sen match was a five-game match sponsored byNiconicoin which five professional shogi players played against five computers. The winners of the previous World Computer Shogi Championship played against professional shogi players. The primary time control was 4 hours, and the secondary time control was 1 move in 60 seconds. Niconico broadcast the games live with commentary.[17][failed verification–see discussion]
Hiroyuki Miurasaid before his game he would play with "all his heart and soul". Miura decided to use trusted opening theory instead of ananti-computer strategy. The computer played book moves and they castled symmetrically to defend their kings. The computer attacked quickly and Miura counterattacked with a drop move. More than 8 hours later Miura resigned. After the game, Miura said that "he should not have prepared for the game the way he did.
He should have prepared for the game with a genuine sense of urgency, if only he knew, the computer was so strong."[20]Miura expressed disappointment and said he had yet to figure out where he went wrong.[21]The evaluation of the game by GPS is on the GPS Shogi web site.[22] On 31 December 2013, Funae and Tsutsukana played a second game. Tsutsukana was the same version that beat Funae on 6 April 2013. The computer was one Intel processor with 6 cores. Funae won.[23] In 2013, the Japan Shogi Association announced that five professional shogi players would play five computers from 15 March to 12 April 2014.[24]On 7 October 2013, the Japan Shogi Association picked the five players.[25] The professional shogi players played the winners of a preliminary computer tournament. The preliminary computer tournament was held 2–4 November 2013.[26] Each player started with 5 hours at 10 am. After the 5 hours, the player must complete each move in 1 minute. There was a 1 hour lunch break at 12:00 and a half hour dinner break at 5 pm.[34]Niconico is broadcasting the games live with commentary.[35]Japanese auto parts makerDensodeveloped a robotic arm to move the pieces for the computer.[36] ŌshōandKiōchampion Akira Watanabe wrote in his blog that "a human cannot think of some of Ponanza's moves such as 60.L*16 and 88.S*79. I am not sure they were the best moves or not right now, but I feel like I'm watching something incredible."[37]Kisei,ŌiandŌzachampion Yoshiharu Habu toldThe Asahi Shimbun, "I felt the machines were extraordinarily strong when I saw their games this time."[38] On Saturday 19 July 2014, Tatsuya Sugai once again got the chance to play against Shueso in what was billed as the "Shogi Denou-sen Revenge Match". Sugai had already been beaten by Shueso four months earlier in game one of Denou-sen 3, so this was seen as his chance to gain revenge for that loss. The game was sponsored by both the Japan Shogi Association and the telecommunications and media companyDwangoand was held at the Tokyo Shogi Kaikan (the Japan Shogi Association's head office). Although the playing site was closed to the public, the game was streamed live viaNiconico Livewith commentary being provided by various shogi professionals and women's professionals. Shuesho's moves were made by Denso's robotic arm. The initial time control for each player was eight hours which was then followed by a 1-minute byoyomi. In addition, four 1-hour breaks were scheduled throughout the playing session to allow both sides time to eat and rest. The game lasted through the night and into the next day and finally finished almost 20 hours after it started when Sugai resigned after Shueso's 144 move.[39][40] Shogidokoro (将棋所) is a Windows graphical user interface (GUI) that calls a program to play shogi and displays the moves on a board.[41]Shogidokoro was created in 2007. Shogidokoro uses the Universal Shogi Interface (USI). The USI is an open communication protocol that shogi programs use to communicate with a user interface. USI was designed by Norwegian computer chess programmer Tord Romstad in 2007. Tord Romstad based USI onUniversal Chess Interface(UCI). UCI was designed by computer chess programmerStefan Meyer-Kahlenin 2000. Shogidokoro can automatically run a tournament between two programs. This helps programmers to write shogi programs faster because they can skip writing the user interface part. It is also useful for testing changes to a program. Shogidokoro can be used to play shogi by adding a shogi engine to Shogidokoro. 
Some engines that will run under Shogidokoro are the following: The interface can also usetsumeshogisolver-only engines like SeoTsume (脊尾詰).[59]The software's menus have both Japanese and English language options available. XBoard/WinBoard is another GUI that supports shogi and other chess variants including western chess and xiangqi. Shogi support was added toWinBoardin 2007 by H.G. Muller. WinBoard uses its own protocol (Chess Engine Communication Protocol) to communicate with engines, but can connect to USI engines through the UCI2WB adapter. Engines that can natively support WinBoard protocol are Shokidoki, TJshogi, GNU Shogi and Bonanza.[60]Unlike Shogidokoro, WinBoard is free/libre and open source, and also available for the X Window System as XBoard (for Linux and Mac systems). A number of Shogi variants, such asChu ShogiandDai Shogi, are playable against AI using a forked version of Winboard. Included engines are: Shokidoki, which can play the smaller variants with drops (i.e.Minishogi); and HaChu, a large Shogi variant engine designed for playing Chu Shogi and has improved in strength over time.[61] 将棋ぶらうざQ (Shogi Browser Q) is a free cross-platform (Java) GUI, that can run USI engines and compete on Floodgate.[62]Since v3.7 both Japanese and English languages are available. BCMShogi[63]is an English language graphical user interface for the USI protocol and the WinBoard shogi protocol. It is no longer developed and currently is unavailable from the author's website. Floodgate is a computer shogi server for computers to compete and receive ratings.[64]Programs running under Shogidokoro can connect to Floodgate. The GPS team created Floodgate. Floodgate started operating continuously in 2008. The most active players have played 4,000 games. From 2008 to 2010, 167 players played 28,000 games on Floodgate. Humans are welcome to play on Floodgate. The time limit is 15 minutes per player, sudden death. From 2011 to 2018, the Floodgate's number one program increased by 1184 points, an average of 169 points per year. The annual computer vs computer world shogi championship is organized by the Computer Shogi Association (CSA) of Japan.[65]The computers play automated games through a server. Each program has 25 minutes to complete a game. The first championship was in 1990 with six programs. In 2001, it grew to 55 programs. The championship is broadcast on the Internet. At the 19th annual CSA tournament, four programs (GPS Shogi, Otsuki Shogi, Monju and KCC Shogi) that had never won a CSA tournament defeated three of the previous year's strongest programs (Bonanza, Gekisashi and YSS).[66]The top three winners of the 2010 CSA tournament are Gekisashi, Shueso and GPS Shogi.[67] In 2011, Bonkras won the CSA tournament with five wins out of seven games. Bonkras ran on a computer with three processors containing 16 cores and six gigabytes of memory. Bonanza won second place on a computer with 17 processors containing 132 cores and 300 gigabytes of memory. Shueso won third place. The 2010 CSA winner, Gekisashi, won fourth place. Ponanza won fifth place. GPS Shogi won sixth place on a computer with 263 processors containing 832 cores and 1486 gigabytes of memory.[68][69]In 2012, GPS Shogi searched 280,000,000 moves per second and the average search depth was 22.2 moves ahead. 
Hiroshi Yamashita, the author of YSS, maintains a list of all shogi programs that played in World Computer Shogi Championship by year and winning rank.[70] Some commercial game software which play shogi areHabu Meijin no Omoshiro ShōgiforSuper Famicom,Clubhouse GamesforNintendo DSandShotest ShogiforXbox. On 18 September 2005 a Japan Shogi Association professional 5 dan played shogi against a computer. The game was played at the 29th Hokkoku Osho-Cup Shogi Tournament in Komatsu, Japan. The Matsue National College of Technology developed the computer program Tacos. Tacos played first and chose the static rook line in the opening. Professional Hashimoto followed the opening line while changing his bishop with the bishop of Tacos. Tacos had a good development with some advantages in the opening and middle game even until move 80. Many amateur players expected Tacos to win. However, professional Hashimoto defended and Tacos played strange moves. Tacos lost.[71] On 14 October 2005, the Japan Shogi Association banned professional shogi players from competing against a computer.[72]The Japan Shogi Association said the rule is to preserve the dignity of its professionals, and to make the most of computer shogi as a potential business opportunity. The ban prevents the rating of computers relative to professional players. From 2008 to 2012, the Japan Shogi Association (with Kunio Yonenaga as president) did not permit any games between a professional and a computer.
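As a rough illustration of the USI protocol mentioned above in connection with Shogidokoro, a GUI drives an engine over standard input and output, much as a chess GUI drives a UCI engine. The Python sketch below is illustrative only; the engine path is a placeholder assumption, while the commands shown (usi, isready, usinewgame, position, go, bestmove) are part of the USI specification.

```python
import subprocess

ENGINE_CMD = ["./some_usi_engine"]      # placeholder path to any USI-speaking engine

def talk(engine, command, until=None):
    """Send one USI command; if `until` is given, read replies up to that token."""
    engine.stdin.write(command + "\n")
    engine.stdin.flush()
    replies = []
    while until:
        line = engine.stdout.readline().strip()
        replies.append(line)
        if line.startswith(until):
            break
    return replies

if __name__ == "__main__":
    engine = subprocess.Popen(ENGINE_CMD, stdin=subprocess.PIPE,
                              stdout=subprocess.PIPE, universal_newlines=True)
    talk(engine, "usi", until="usiok")                  # handshake / identification
    talk(engine, "isready", until="readyok")            # wait until the engine is ready
    talk(engine, "usinewgame")                          # start a new game
    talk(engine, "position startpos moves 7g7f")        # set up the position
    print(talk(engine, "go btime 0 wtime 0 byoyomi 1000",
               until="bestmove"))                       # think for one second
    talk(engine, "quit")
```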
https://en.wikipedia.org/wiki/Computer_shogi
Thefog of waris theuncertaintyinsituational awarenessexperienced by participants inmilitary operations.[1]The term seeks to capture the uncertainty regarding one's own capability, adversary capability, and adversaryintentduring an engagement, operation, or campaign. Military forces try to reduce the fog of war throughmilitary intelligenceand friendly force tracking systems. The term has become commonly used to define uncertainty mechanics inwargames. The word "fog" (German:Nebel), but not the exact phrase, in reference to 'uncertainty in war' was introduced by thePrussianmilitary analystCarl von Clausewitzin his posthumously published book,Vom Kriege(1832), the English translation of which was published asOn War(1873): War is the realm of uncertainty; three quarters of the factors on which action in war is based are wrapped in a fog of greater or lesser uncertainty. A sensitive and discriminating judgment is called for; a skilled intelligence to scent out the truth. It has been pointed out that von Clausewitz does not use the exact phrase "fog of war", and also uses multiple similar metaphors, such as "twilight" and "moonlight", to describe a 'lack of clarity'.[3] The first known use of the exact phrase in text dates to 1836 in a poem entitled "The Battle of Bunker Hill" byMcDonald Clarke.[4]The poem describes an assault by British forces upon an American redoubt during the 1775Battle of Bunker Hill: Will they dare a third attack?Is a question seen in every eye;Old Put across the neck and back,Rides slowly, their vengeance to defy—Wildy, in that deadly hour,The Ramparts shove their bolted shower,While mid the waving fog of war,Thunders the Yankee’s loud hurrah"[5] The first known attempt to explicitly define the "fog of war" in a military text was made in 1896 in a book titledThe Fog of Warby Sir Lonsdale Augustus Hale, where it is described as "the state of ignorance in which commanders frequently find themselves as regards the real strength and position, not only of their foes, but also of their friends."[6] The fog of war is a reality in all military conflict. Precision and certainty are unattainable goals, but modern military doctrine suggests a trade-off of precision and certainty for speed and agility. Militaries employcommand and control(C2) systems and doctrine to partially alleviate the fog of war. The term also applies to the experience of individual soldiers in battle: often cited is the pure confusion of direction, location, and perspective on a battlefield. Officers and soldiers become separated, orders become confused and subject to revision with poor communication. Sounds and vision are limited from the perspective of the individual and may not be easily resolved, resulting in a continuing uncertainty, a perceptual "fog". The fog of war has been decreasing asintelligence, surveillance and reconnaissancetechnology is improving. In 2016,Chief of Staff of the United States ArmyGen.Mark A. Milleystated that "On the futurebattlefield, if you stay in one place longer than two or three hours, you will be dead... With enemy drones and sensors constantly on the hunt for targets, there won't even be time for four hours' unbroken sleep."[7] Abstract and militaryboard gamessometimes try to capture the effect of the fog of war by hiding the identity of playing pieces, by keeping them face down or turned away from the opposing player (as inStratego) or covered (as inSquad Leader[8]). 
Other games, such as theDark chessandKriegspielchess-variants, playing pieces could be hidden from the players by using a duplicate, hidden game board.[9] Another version of fog of war emulation is used byblock wargamingwhere, much likeStratego, the blocks face each player, hiding their value. However, this also allows for incremental damage, where the block is rotated up to four times to indicate battle damage before the unit is eliminated from the playing field.[citation needed] Solitaire games also by their nature attempt to recreate fog of war using random dice rolls or card draws to determine events.[10]Complex double-blindminiature wargames, includingmilitary simulations, may make use of two identical maps or model landscapes, one or more referees providing limited intelligence to the opposing sides, participants in the roles of sub-unit leaders, and the use of radio sets or intercoms.[citation needed] A computer's ability to effectively hide information from a player is seen as a distinct advantage over board games when simulating war.[11]Fog of war instrategy video gamesrefers to enemy units, and often terrain, being hidden from the player; this is lifted once the area is explored, but the information is often fully or partially re-hidden whenever the player does not have a unit in that area.[12] The earliest use of fog of war was in the 1977 gameEmpirebyWalter Bright.[13]Another early use of fog of war was the 1978 gameTankticsdesigned byChris Crawford, which was criticized for its unreliable and "confusing" fog of war system.[14]Crawford, in 1982, suggested "limit[ing] the amount of information available to the human player" to compensate for the computer's lack of intelligence.[15]In a 1988Computer Gaming WorldarticleDave Arnesoncalled fog of war "one of the biggest 'plus' factors in computer simulations", while Crawford concluded, usingTankticsas an example, that video game fog of war systems became less "fun" the more realistic they were, leading the medium to instead use simplified systems.[16] Two largeBlizzardfranchises,WarcraftandStarCraft, use a fog of war which only reveals terrain features and enemy units through a player'sreconnaissance. Without a unit actively observing, previously revealed areas of the map are subject to a shroud through which only terrain is visible, but changes in enemy units or bases are not.[17]This is also common in both turn-based and real-time strategy games, such asLeague of Legends, theClose Combatseries,Total Warseries,Age of Empiresseries,Red Alertseries,Advance Warsseries,Fire Emblemseries,Sid Meier'sCivilizationseries,Supreme Commanderseriesand theXCOM series.[citation needed] Fog of war gives players an incentive to uncover a game's world. A compulsion to reveal obscured parts of a map has been described to give a sense of exploring the unknown.[18]Crawford said that "reasonable" uses of fog of war, such as needing to send out scouts, "not only seem natural, but ... add to the realism and excitement of the game"[15]Merchant Princedisplays over unexplored territory whatComputer Gaming Worlddescribed as a "renaissance-style map of dubious accuracy".[19]In some strategy games that make use of fog of war,enemy AImay have knowledge of the positions of all other units and buildings on the map regardless, to compensate for lack of true intelligence, which players may consider as cheating if discovered.[20]A designer may use fog of war to keep a game that has become impossible to win enjoyable, by hiding this fact from the player.[17]
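The video-game mechanic described above (terrain remembered once explored, enemy units visible only while actively observed) can be reduced to keeping two boolean grids per player. The following Python sketch is a generic illustration, not tied to any particular game.

```python
WIDTH, HEIGHT, SIGHT = 16, 16, 3

explored = [[False] * WIDTH for _ in range(HEIGHT)]   # terrain seen at least once
visible  = [[False] * WIDTH for _ in range(HEIGHT)]   # tiles currently observed

def update_fog(friendly_units):
    """Recompute visibility each tick from the player's own units."""
    for row in visible:
        for x in range(WIDTH):
            row[x] = False
    for ux, uy in friendly_units:
        for y in range(max(0, uy - SIGHT), min(HEIGHT, uy + SIGHT + 1)):
            for x in range(max(0, ux - SIGHT), min(WIDTH, ux + SIGHT + 1)):
                if (x - ux) ** 2 + (y - uy) ** 2 <= SIGHT ** 2:
                    visible[y][x] = True
                    explored[y][x] = True      # terrain stays remembered (the "shroud")

def what_player_sees(terrain, enemy_units):
    """Terrain shows once explored; enemies only while currently visible."""
    shown_terrain = [[terrain[y][x] if explored[y][x] else None
                      for x in range(WIDTH)] for y in range(HEIGHT)]
    shown_enemies = [(x, y) for x, y in enemy_units if visible[y][x]]
    return shown_terrain, shown_enemies

if __name__ == "__main__":
    terrain = [["grass"] * WIDTH for _ in range(HEIGHT)]
    update_fog(friendly_units=[(2, 2)])
    _, seen = what_player_sees(terrain, enemy_units=[(3, 3), (10, 10)])
    print(seen)   # only the nearby enemy at (3, 3) is revealed
```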
https://en.wikipedia.org/wiki/Fog_of_war
Anti-computer tacticsare methods used by humans to try to beat computer opponents at various games, most typicallyboard gamessuch aschessandArimaa. They are most associated with competitions against computer AIs that are playing to their utmost to win, rather than AIs merely programmed to be an interesting challenge that can be given intentional weaknesses and quirks by the programmer (as in manyvideo game AIs).
Such tactics are most associated with the era when AIs searched agame treewith anevaluation functionlooking for promising moves, often withAlpha–beta pruningor otherminimaxalgorithms used to narrow the search. Against such algorithms, a common tactic is to play conservatively, aiming for a long-term advantage. The theory is that this advantage will manifest slowly enough that the computer is unable to notice it in its search, and so it won't play around the threat correctly. This may result in, for example, a subtle advantage that eventually turns into a winning chess endgame with apassed pawn. (Conversely, attempting to lure an AI into a short-term "trap", inviting a move that seems reasonable to humans but is actually disastrous, will essentially never work against a computer in games of perfect information.)
The field is most associated with the 1990s and early 2000s, when computers were very strong at games such as chess, yet beatable. Even then, the efficacy of such tactics was questionable, with several tactics such as making unusual or suboptimal moves to quickly get the computer out of itsopening bookproving ineffective in human-computer tournaments. The rise ofmachine learninghas also dented the applicability of anti-computer tactics, as machine learning algorithms tend to play the long game as well as, if not better than, human players.
One aspect of designing a classic AI for games of perfect information is thehorizon effect. Computer AIs examine a game tree of possible moves and counter-moves, but unless a forced win is in the tree, they need to stop exploring new possibilities eventually. When they do, anevaluation functionis called on the board state, which often uses rough heuristics to determine which side the board favors. In chess, this might be things like material advantage (extra pieces), control of the center, king safety, and pawn structure.
Exploiting the horizon effect can be done by human players by using a strategy whose fruits are apparent only beyond thepliesexamined by the AI. For example, if the AI is examining 10 plies ahead, and a strategy will "pay off" in 12-20 plies (6-10 turns), the AI won't play around the looming threat that it can't "see", similar to a person being unable to see "over the horizon" where a ship might be hidden by the natural curvature of the earth. Similarly, to keep the horizon short, human players may want to keep as complicated a board state as possible. Simplifying the board by trading pieces lets the AI look "farther" into the future, as there are fewer options to consider, and trading is thus avoided when trying to exploit the horizon effect.
A tactic that works best on AIs that are very "deterministic" and known to play in one specific way in response to a threat is to force a situation where the human knows exactly how the AI will respond. If the human picks a situation that they believe the AI handles poorly, this can lead to reliably luring the AI into such situations.
Even if the AI can handle that particular play style well, if the human is confident that the AI will always pick it, it simplifies preparation for the human player - they can just learn this one situation very closely, knowing that the AI will always accept an invitation to play into that kind of board.
Game AIs based onMonte-Carlo tree searchhave opposite strengths and weaknesses to alpha-beta AIs. While they tend to be better at long-term strategy, they have problems dealing with traps.[1]Once Monte-Carlo AIs fall into a trap, they can continue to play badly for a considerable period afterwards and may not recover.[2]
While patiently accumulating an advantage may be a beneficial tactic against alpha-beta AIs, which play tactically, MCTS-based AIs likeAlphaGomay themselves play in this patient strategic manner.[2]Thus deliberately tactical play, which is a bad approach against alpha-beta, becomes a viable anti-computer tactic against MCTS.
Game AIs based on neural networks can be susceptible to adversarial perturbations, where playing a meaningless move alters the AI's evaluation of the position and makes it lose. Lan et al. developed an algorithm to find modifications of board states that would lead KataGo to play inferior moves.[3]However, like adversarial examples in image recognition, these attacks are hard to devise without computer assistance.
In the 1997Deep Blue versus Garry Kasparovmatch,Kasparovplayed an anti-computer tactic move at the start of the game to getDeep Blueout of itsopening book.[4]Kasparov chose the unusualMieses Openingand thought that the computer would play theopeningpoorly if it had to play it by itself (that is, rely on its own skills rather than use its opening book).[5]Kasparov played similar anti-computer openings in the other games of the match, but the tactic backfired.[6]About the two matches, Kasparov wrote after the second game, where he chose theRuy López, “We decided that using the same passive anti-computer strategy with black would be too dangerous. With white I could control the pace of the game much better and wait for my chances. With black it would be safer to play a known opening even if it was in Deep Blue's book especially if it was a closed opening where it would have difficulty finding a plan. The downside with this strategy as in all the games was that it wasn't my style either. While I was playing anti-computer chess I was also playing anti-Kasparov chess.”
TheBrains in Bahrainwas an eight-game chess match between human chessgrandmasterand thenWorld ChampionVladimir Kramnikand the computer programDeep Fritz 7, held in October 2002. The match ended in a 4–4 tie, with two wins for each participant and fourdraws, worth half a point each.[7]
Arimaais a chess derivative specifically designed to be difficult for alpha-beta pruning AIs, inspired byKasparov's loss to Deep Bluein 1997. It allows 4 actions per "move" for a player, greatly increasing the size of the search space, and can reasonably end with a mostly full board and few captured pieces, avoidingendgame tablebase-style "solved" positions due to scarcity of units. While human Arimaa players held out longer than chess players, they too fell to superior computer AIs in 2015.[8]
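The Monte-Carlo tree search mentioned above typically selects which branch to sample next with the UCB1 rule, balancing observed win rates against an exploration bonus for rarely tried moves. The Python sketch below shows only the generic selection and backpropagation steps of UCT, the most common MCTS variant; it is illustrative only and is not the code of AlphaGo or any specific engine.

```python
import math
import random

class Node:
    def __init__(self, move=None, parent=None):
        self.move, self.parent = move, parent
        self.children = []
        self.wins = 0.0      # total reward accumulated through this node
        self.visits = 0      # number of playouts that passed through this node

def ucb1(child, parent_visits, c=1.4):
    """Average win rate plus an exploration bonus for rarely tried moves."""
    if child.visits == 0:
        return float("inf")                  # always try unvisited moves first
    return (child.wins / child.visits) + c * math.sqrt(math.log(parent_visits) / child.visits)

def select_child(node):
    return max(node.children, key=lambda child: ucb1(child, node.visits))

def backpropagate(node, reward):
    """After a playout, push the result back up the tree, flipping sides each ply."""
    while node is not None:
        node.visits += 1
        node.wins += reward
        node, reward = node.parent, 1.0 - reward

if __name__ == "__main__":
    root = Node()
    root.children = [Node(move=m, parent=root) for m in ("a", "b", "c")]
    for _ in range(100):                     # stand-in for random playouts
        child = select_child(root)
        backpropagate(child, reward=random.random())
    print(max(root.children, key=lambda child: child.visits).move)
```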
https://en.wikipedia.org/wiki/Anti-computer_tactics
Thetrolley problemis a series ofthought experimentsinethics,psychology, andartificial intelligenceinvolving stylizedethical dilemmasof whether to sacrifice one person to save a larger number. The series usually begins with ascenarioin which arunawaytrolleyortrainis on course to collide with and kill a number of people (traditionally five) down thetrack, but a driver or bystander can intervene and divert the vehicle to kill just one person on a different track. Then other variations of the runaway vehicle, and analogous life-and-death dilemmas (medical, judicial, etc.) are posed, each containing the option to either do nothing, in which case several people will be killed, or intervene and sacrifice one initially "safe" person to save the others. Opinions on the ethics of each scenario turn out to be sensitive to details of the story that may seem immaterial to the abstract dilemma. The question of formulating a general principle that can account for the differing judgments arising in different variants of the story was raised in 1967 as part of an analysis ofdebates on abortionand thedoctrine of double effectby the English philosopherPhilippa Foot.[1]Later dubbed "the trolley problem" byJudith Jarvis Thomsonin a 1976 article that catalyzed a large literature, the subject refers to the meta-problem of why different judgments are arrived at in particular instances. Philosophers Judith Thomson,[2][3]Frances Kamm,[4]andPeter Ungerhave also analysed thedilemmaextensively.[5]Thomson's 1976 article initiated the literature on the trolley problem as a subject in its own right. Characteristic of this literature are colorful and increasingly absurd alternative scenarios in which the sacrificed person is instead pushed onto the tracks as a way to stop the trolley, has his organs harvested to save transplant patients, or is killed in more indirect ways that complicate the chain ofcausationand responsibility. Earlier forms of individual trolley scenarios antedated Foot's publication.Frank Chapman Sharpincluded a version in a moral questionnaire given to undergraduates at theUniversity of Wisconsinin 1905. In this variation, the railway'sswitchmancontrolled the switch, and the lone individual to be sacrificed (or not) was the switchman's child.[6][7]German philosopher of lawKarl Engischdiscussed a similar dilemma in hishabilitation thesisin 1930, as did German legal scholar Hans Welzel in a work from 1951.[8][9]In his commentary on theTalmud, published in 1953,Avrohom Yeshaya Karelitzconsidered the question of whether it is ethical to deflect a projectile from a larger crowd toward a smaller one.[10]Similarly, inThe Strike, a television play broadcast in the United States on June 7, 1954, a commander in theKorean Warmust choose between ordering an air strike on an encroaching enemy force at the cost of his own 20-man patrol unit, or calling off the strike and risking the lives of the main army made up of 500 men.[11] Beginning in 2001, the trolley problem and its variants have been used in empirical research onmoral psychology. 
It has been a topic of popular books.[12]Trolley-style scenarios also arise in discussing the ethics ofautonomous vehicledesign, which may require programming to choose whom or what to strike when acollisionappears to be unavoidable.[13]More recently, the trolley problem has also become aninternet meme.[14]
Foot's version of the thought experiment, now known as "Trolley Driver", ran as follows:
Suppose that a judge or magistrate is faced with rioters demanding that a culprit be found for a certain crime and threatening otherwise to take their own bloody revenge on a particular section of the community. The real culprit being unknown, the judge sees himself as able to prevent the bloodshed only by framing some innocent person and having him executed. Beside this example is placed another in which a pilot whose airplane is about to crash is deciding whether to steer from a more to a less inhabited area. To make the parallel as close as possible, it may rather be supposed that he is the driver of a runaway tram, which he can only steer from one narrow track on to another; five men are working on one track and one man on the other; anyone on the track he enters is bound to be killed. In the case of the riots, the mob have five hostages, so that in both examples, the exchange is supposed to be one man's life for the lives of five.[1]
Autilitarianview asserts that it is obligatory to steer to the track with one man on it. According to classical utilitarianism, such a decision would be not only permissible, but, morally speaking, the better option (the other option being no action at all).[15]This fact makes diverting the trolley obligatory. An alternative viewpoint is that since moral wrongs are already in place in the situation, moving to another track constitutes a participation in the moral wrong, making one partially responsible for the death when otherwise no one would be responsible. An opponent of action may also point to theincommensurabilityof human lives. Under some interpretations ofmoral obligation, simply being present in this situation and being able to influence its outcome constitutes an obligation to participate. If this is the case, then doing nothing would be considered an immoral act.
In 2001,Joshua Greeneand colleagues published the results of the first significant empirical investigation of people's responses to trolley problems.[16]Usingfunctional magnetic resonance imaging, they demonstrated that "personal" dilemmas (like pushing a man off a footbridge) preferentially engage brain regions associated with emotion, whereas "impersonal" dilemmas (like diverting the trolley by flipping a switch) preferentially engage regions associated with controlled reasoning. On these grounds, they advocate for thedual-process account of moral decision-making. Since then, numerous other studies have employed trolley problems to study moral judgment, investigating topics like the role and influence of stress,[17]emotional state,[18]impression management,[19]levels of anonymity,[20]different types of brain damage,[21]physiological arousal,[22]different neurotransmitters,[23]and genetic factors[24]on responses to trolley dilemmas.
Trolley problems have been used as a measure of utilitarianism, but their usefulness for such purposes has been widelycriticized.[25][26][27] In 2017, a group led byMichael Stevensperformed the first realistic trolley-problem experiment, where subjects were placed alone in what they thought was a train-switching station, and shown footage that they thought was real (but was actually prerecorded) of a train going down a track, with five workers on the main track, and one on the secondary track; the participants had the option to pull the lever to divert the train toward the secondary track. Five of the seven participants did not pull the lever.[28][29] The trolley problem has been the subject of many surveys in which about 90% of respondents have chosen to kill the one and save the five.[30]If the situation is modified where the one sacrificed for the five was a relative or romantic partner, respondents are much less likely to be willing to sacrifice the one life.[31] A 2009 survey by David Bourget andDavid Chalmersshows that 68% of professional philosophers would switch (sacrifice the one individual to save five lives) in the case of the trolley problem, 8% would not switch, and the remaining 24% had another view or could not answer.[32] In a 2014 paper published in theSocial and Personality Psychology Compass,[25]researchers criticized the use of the trolley problem, arguing, among other things, that the scenario it presents is too extreme and unconnected to real-life moral situations to be useful or educational.[33] In her 2017 paper, Nassim JafariNaimi[34]lays out the reductive nature of the trolley problem in framing ethical problems that serves to uphold an impoverished version of utilitarianism. She argues that the popular argument that the trolley problem can serve as a template for algorithmic morality is based on fundamentally flawed premises that serve the most powerful with potentially dire consequences on the future of cities.[35] In 2017, in his bookOn Human Nature,Roger Scrutoncriticises the usage of ethical dilemmas such as the trolley problem and their usage by philosophers such asDerek ParfitandPeter Singeras ways of illustrating their ethical views. Scruton writes, "These 'dilemmas' have the useful character of eliminating from the situation just about every morally relevant relationship and reducing the problem to one of arithmetic alone." Scruton believes that just because one would choose to change the track so that the train hits the one person instead of the five does not mean that they are necessarily aconsequentialist. As a way of showing the flaws in consequentialist responses to ethical problems, Scruton points out paradoxical elements of belief in utilitarianism and similar beliefs. 
He believes thatNozick'sexperience machinethought experiment definitively disproveshedonism.[36]In his 2017 articleThe Trolley Problem and the Dropping of Atomic Bombs,Masahiro Morioka considers the dropping of atomic bombs as an example of the trolley problem and points out that there are five "problems of the trolley problem", namely, 1) rarity, 2) inevitability, 3) safety zone, 4) possibility of becoming a victim, and 5) the lack of perspective of the dead victims who were deprived of freedom of choice.[37] In a 2018 article published inPsychological Review, researchers pointed out that, as measures of utilitarian decisions, sacrificial dilemmas such as the trolley problem measure only one facet of proto-utilitarian tendencies, namely permissive attitudes toward instrumental harm, while ignoring impartial concern for the greater good. As such, the authors argued that the trolley problem provides only a partial measure of utilitarianism.[26] The basic Switch form of the trolley problem also supports comparison to other, related dilemmas: As before, a trolley is hurtling down a track towards five people. You are on a bridge under which it will pass, and you can stop it by putting something very heavy in front of it. As it happens, there is a fat man next to you – your only way to stop the trolley is to push him over the bridge and onto the track, killing him to save five. Should you proceed? Resistance to this course of action seems strong; when asked, a majority of people will approve of pulling the switch to save a net of four lives, but will disapprove of pushing the fat man to save a net of four lives.[38]This has led to attempts to find a relevant moral distinction between the two cases. One possible distinction could be that in the first case, one does not intend harm towards anyone – harming the one is just aside effectof switching the trolley away from the five. However, in the second case, harming the one is an integral part of the plan to save the five. This solution is essentially an application of thedoctrine of double effect, which says that one may take action that has bad side effects, but deliberately intending harm (even for good causes) is wrong. So, the action is permissible even if the harm to the innocent person is foreseen, so long as it is not intended. This is an argument whichShelly Kaganconsiders (and ultimately rejects) in his first bookThe Limits of Morality.[39] This variation is similar toThe Fat Man, with the additional assertion that the fat man who may be pushed is a villain who is responsible for the whole situation: the fat man was the one who tied five people to the track, and sent a trolley in their direction with the intention of killing them. In this variation, a majority of people are willing to push the fat man.[40] Unlike in the previous scenario, pushing the fat villain to stop the trolley may be seen as a form ofretributive justiceorself-defense. Variants of the original Trolley Driver dilemma arise in the design of software to controlautonomous cars.[13]Situations are anticipated where a potentially fatal collision appears to be unavoidable, but in which choices made by the car'ssoftware, such as into whom or what to crash, can affect the particulars of the deadly outcome. 
For example, should the software value the safety of the car's occupants more, or less, than that of potential victims outside the car.[41][42][43][44] A platform calledMoral Machine[45]was created byMIT Media Labto allow the public to express their opinions on what decisions autonomous vehicles should make in scenarios that use the trolley problem paradigm. Analysis of the data collected through Moral Machine showed broad differences in relative preferences among different countries.[46]Other approaches make use of virtual reality to assess human behavior in experimental settings.[47][48][49][50]However, some argue that the investigation of trolley-type cases is not necessary to address the ethical problem of driverless cars, because the trolley cases have a serious practical limitation. It would need to be a top-down plan in order to fit the current approaches of addressing emergencies inartificial intelligence.[51] Also, a question remains of whether the law should dictate the ethical standards that all autonomous vehicles must use, or whether individual autonomous car owners or drivers should determine their car's ethical values, such as favoring safety of the owner or the owner's family over the safety of others.[13]Although most people would not be willing to use an automated car that might sacrifice themselves in a life-or-death dilemma, some[who?]believe the somewhat counterintuitive claim that using mandatory ethics values would nevertheless be in their best interest. According to Gogoll and Müller, "the reason is, simply put, that [personalized ethics settings] would most likely result in aprisoner’s dilemma."[52] In 2016, the German government appointed a commission to study the ethical implications of autonomous driving.[53][54]The commission adopted 20 rules to be implemented in the laws that will govern the ethical choices that autonomous vehicles will make.[54]: 10–13Relevant to the trolley dilemma is this rule: 8. Genuine dilemmatic decisions, such as a decision between one human life and another, depend on the actual specific situation, incorporating “unpredictable” behaviour by parties affected. They can thus not be clearly standardized, nor can they be programmed such that they are ethically unquestionable. Technological systems must be designed to avoid accidents. However, they cannot be standardized to a complex or intuitive assessment of the impacts of an accident in such a way that they can replace or anticipate the decision of a responsible driver with the moral capacity to make correct judgements. It is true that a human driver would be acting unlawfully if he killed a person in an emergency to save the lives of one or more other persons, but he would not necessarily be acting culpably. Such legal judgements, made in retrospect and taking special circumstances into account, cannot readily be transformed into abstract/general ex ante appraisals and thus also not into corresponding programming activities. …[54]: 11
https://en.wikipedia.org/wiki/Trolley_problem
Multiwinner[1]orcommittee[2][3]votingrefers toelectoral systemsthat elect several candidates at once. Such methods can be used to electparliamentsorcommittees.
There are many scenarios in which multiwinner voting is useful. They can be broadly classified into three classes, based on the main objective in electing the committee:[4]
A major challenge in the study of multiwinner voting is finding reasonable adaptations of concepts from single-winner voting. These can be classified based on the voting type—ranked votingas used ininstant-runoff votingandsingle transferable votingvs.approval voting.
With multiwinner voting, there are many ways to decide which candidates should be elected. In some, each voter ranks the candidates; in others they cast X votes. Furthermore, depending on the system, each voter may cast single or multiple votes.
Some election systems elect multiple members by competition held among individual candidates. Each voter votes directly for one or more individual candidates. These systems includePlurality block votingandsingle non-transferable voting, adaptations offirst-past-the-post votingto a multiwinner contest. Under SNTV, each voter casts only one vote, which means no one party can take all the seats; however, because a plurality system is used to allocate seats, parties are not guaranteed to take their proportional share of seats. A ranked-vote version of SNTV,Single transferable voting, elects a mixed, balanced group of members in a single contest in almost all cases.
In other systems, candidates are grouped in committees (slates or party lists) and voters cast votes for the committees (or slates). Sometimes only one slate or party takes all the seats, and sometimes members of various slates are elected.
Single transferable votingelects a mixed, balanced group of members in a single contest. It does this partly by allowing votes cast on unelectable candidates to be transferred to more-popular candidates. The quota used in STV ensures minority representation - no one group can take all the seats unless the district magnitude is small, or one party takes a great proportion of the votes cast.
Approval votingis a common method for single-winner elections and sometimes for multiwinner elections. In single-winner elections, voters mark their approved candidates, and the candidate with the most votes wins.
As early as 1895,Thielesuggested a family of weight-based rules calledThiele's voting rules.[2][5]Each rule in the family is defined by a sequence ofkweakly positive weights,w1,...,wk(wherekis the committee size). Each voter assigns, to each committee containingpcandidates approved by the voter, a score equal tow1+...+wp. The committee with the highest total score is elected. Some common voting rules in Thiele's family are:
There are rules based on other principles, such asminimax approval voting[6]and its generalisations,[7]as well asPhragmen's voting rules[8]and themethod of equal shares.[9][10]
The complexity of determining the winners varies: MNTV winners can be found in polynomial time, while Chamberlin-Courant[11]and PAV are both NP-hard.
Positional scoring rulesare common in rank-based single-winner voting.
Each voter ranks the candidates from best to worst, a pre-specified function assigns a score to each candidate based on his rank, and the candidate with the highest total score is elected. In multiwinner voting held using these systems, we need to assign scores tocommitteesrather than to individual candidates. There are various ways to do this, for example:[1] In single-winner voting, aCondorcet winneris a candidate who wins in every head-to-head election against each of the other candidates. ACondorcet methodis a method that selects a Condorcet winner whenever it exists. There are several ways to adapt Condorcet's criterion to multiwinner voting: Excellence means that the elected committee should contain the "best" candidates. Excellence-based voting rules are often calledscreening rules.[18]They are often used as a first step in a selection of a single best candidate, that is, a method for creating ashortlist. A basic property that should be satisfied by such a rule iscommittee monotonicity(also calledhouse monotonicity, a variant ofresource monotonicity): if somekcandidates are elected by a rule, and then the committee size increases tok+1 and the rule is re-applied, then the firstkcandidates should still be elected. Some families of committee-monotone rules are: The property of committee monotonicity is incompatible with the property ofstability(a particular adaptation of Condorcet's criterion): there exists a single voting profile that admits a unique Condorcet set of size 2, and a unique Condorcet set of size 3, and they are disjoint (the set of size 2 is not contained in the set of size 3).[18] On the other hand, there exists a family of positional scoring rules - theseparable positional scoring rules- that are committee-monotone. These rules are also computable in polynomial time (if their underlying single-winner scoring functions are).[1]For example,k-Borda is separable whilemultiple non-transferable voteis not. Diversity means that the elected committee should contain the most-preferred candidates of as many voters as possible. Formally, the following axioms are reasonable for diversity-centred applications: Proportionality means that eachcohesivegroup of voters (that is: a group of voters with similar preferences) should be represented by a number of winners proportional to its size (the number of votes it receives). Formally, if the committee is of sizek, there arenvoters, and someL*n/kvoters rank the sameLcandidates at the top (or give approval to the sameLcandidates), then theseLcandidates should be elected. This principle is easy to implement when the voters vote for parties (inparty-listsystems), but it can also be adapted to approval voting or ranked voting; seejustified representationandproportionality for solid coalitions. Proportionality may be measured just on the one usable preference that determines the vote's placement. In fact, in STV, only one preference is considered for each vote (unless fractional transfers are used as under aGregory method). Voting blocs stay intact if back-up preferences are marked along party lines, but that is not always the case - in STV voters have the liberty to mark their preferences as they desire. Under STV the elected committee is composed of diverse representatives. Each substantial (quota-sized) group, as determined by the placement of the vote according to the top usable marked preference, elects its preferred candidate.
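The committee scoring used by Thiele's rules can be made concrete with a small brute-force sketch (an illustration only, not code from the article's sources; all names in it are assumptions). Harmonic weights 1, 1/2, 1/3, … give proportional approval voting (PAV), all-ones weights give plain multiwinner approval voting, and the weights 1, 0, 0, … give the Chamberlin–Courant rule mentioned above. Exhaustive enumeration of committees is only workable for tiny instances, consistent with the NP-hardness results noted above.

```python
from itertools import combinations
from fractions import Fraction

def thiele_score(committee, approvals, weights):
    """Total Thiele score: each voter contributes w_1 + ... + w_p, where p is
    the number of committee members that voter approves."""
    members = set(committee)
    total = Fraction(0)
    for approved in approvals:
        p = len(members & approved)
        total += sum(weights[:p], Fraction(0))
    return total

def best_committee(candidates, approvals, k, weights):
    """Brute-force the highest-scoring size-k committee (tiny instances only)."""
    return max(combinations(candidates, k),
               key=lambda c: thiele_score(c, approvals, weights))

if __name__ == "__main__":
    candidates = ["a", "b", "c", "d"]
    # 10 voters approve {a, b}, 7 approve {c}, 6 approve {d}.
    approvals = [{"a", "b"}] * 10 + [{"c"}] * 7 + [{"d"}] * 6
    k = 2
    pav = [Fraction(1, j + 1) for j in range(k)]   # 1, 1/2  -> proportional approval voting
    av = [Fraction(1)] * k                         # 1, 1    -> plain multiwinner approval voting
    print(best_committee(candidates, approvals, k, av))    # ('a', 'b')
    print(best_committee(candidates, approvals, k, pav))   # ('a', 'c')
```

On this profile the two weight sequences disagree: the all-ones weights seat both candidates of the 10-voter bloc, while the harmonic PAV weights give the second seat to the 7-voter group, an instance of the proportionality concerns discussed above.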
https://en.wikipedia.org/wiki/Multiwinner_voting
Inmechanism design, aregret-free truth-telling mechanism(RFTT, orregret-free mechanismfor short) is a mechanism in which each player who reveals his true private information does not feelregretafter seeing the mechanism outcome. A regret-free mechanism incentivizes agents who want to avoid regret to report their preferences truthfully. Regret-freeness is a relaxation oftruthfulness: everytruthful mechanismis regret-free, but there are regret-free mechanisms that are not truthful. As a result, regret-free mechanisms exist even in settings in which strong impossibility results prevent the existence of truthful mechanisms. There is a finite setXof potential outcomes. There is a setNof agents. Each agent i has apreferencePioverX. Amechanismorruleis a functionfthat gets as input the agents' preferences P1,...,Pn, and returns as output an outcome fromX. The agents' preferences are their private information; therefore, each agent can either report his true preference, or report some false preference. It is assumed that, once an agent observes the outcome of the mechanism, he feelsregretif his report is adominated strategy"in hindsight". That is: given all possible preferences of other agents, which are compatible with the observed outcome, there is an alternative report that would have given him the same or a better outcome. Aregret-free truth-telling mechanism[1]is a mechanism in which an agent who reports his truthful preferences never feels regret. Fernandez[2]studies RFTT in two-sided matching. He shows that: Chen and Moller[3]studyschool choice mechanisms. They focus on theefficiency-adjusted deferred-acceptancerule (EADA or EDA).[4]It is known that EDA is notstrategyprooffor the students; Chen and Moller show that EDA is RFTT. They also show that no efficient matching rule that weakly Pareto-dominates a stable matching rule is RFTT. Arribillaga, Bonifacio and Fernandez[1]study RFTTvoting rules. They show that: Tamuz, Vardi and Ziani[5]study regret infair cake-cutting. They study arepeated gamevariant ofcut-and-choose. In standard cut-and-choose, a risk-averse cutter would cut to two pieces equal in his eyes. But in their setting, there is a different cutter each day, playing cut-and-choose with the same chooser. Each cutter knows all past choices of the chooser, and can potentially exploit this information in order to make a cut that will guarantee to him more than half of the cake. Their goal is to designnon-exploitable protocols- protocols in which the cutter can never know what piece the chooser is going to choose, and therefore always cuts the cake into two pieces equal in his eyes. The idea is to restrict the positions in which the cutter can cut; such protocols are calledforced-cut protocols. A simple non-exploitable forced-cut protocol is: in each day, take all pieces generated in the previous day (by forced and non-forced cuts), and force the cutter to cut each of these pieces into two. This protocol uses 2ncuts, where n is the number of days. There are protocols that use fewer cuts, depending on the number of dimensions of the cake: Cresto and Tajer[6]also study regret infair cake-cuttingamong two agents, where the regret comes from a change in preferences: after one player sees the choice of the other player, his preferences may change. They suggest a variant ofcut and choosethat avoids this kind of regret.
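The "dominated in hindsight" test above can be made concrete with a brute-force checker over a toy two-outcome majority rule. This is an illustrative sketch, not code from the cited papers: the mechanism, the three-agent setting, and the requirement of a strict improvement against at least one compatible profile are assumptions of the sketch. Because majority voting between two outcomes is truthful, the check confirms that it is also regret-free truth-telling, in line with the relaxation described above.

```python
from itertools import permutations, product

OUTCOMES = ("a", "b")

def prefers(pref, x, y):
    """True if outcome x is ranked strictly above outcome y in the ranking `pref`."""
    return pref.index(x) < pref.index(y)

def majority(profile):
    """Toy mechanism: each report is a strict ranking of OUTCOMES; the outcome
    top-ranked by the most agents wins, with ties broken in favour of 'a'."""
    tops = [report[0] for report in profile]
    return "a" if tops.count("a") >= tops.count("b") else "b"

def truthful_agent_regrets(mechanism, n_agents, i, true_pref):
    """After reporting truthfully and observing the outcome, is there an
    alternative report that does at least as well against every profile of the
    other agents compatible with that outcome, and strictly better against at
    least one such profile?"""
    all_prefs = list(permutations(OUTCOMES))
    others = list(product(all_prefs, repeat=n_agents - 1))

    def outcome_with(rep, rest):
        return mechanism(rest[:i] + (rep,) + rest[i:])

    for observed in {outcome_with(true_pref, rest) for rest in others}:
        compatible = [rest for rest in others if outcome_with(true_pref, rest) == observed]
        for alt in all_prefs:
            if alt == true_pref:
                continue
            results = [outcome_with(alt, rest) for rest in compatible]
            weakly_better = all(r == observed or prefers(true_pref, r, observed) for r in results)
            strictly_once = any(prefers(true_pref, r, observed) for r in results)
            if weakly_better and strictly_once:
                return True
    return False

def is_regret_free_truth_telling(mechanism, n_agents=3):
    """No truthful agent ever regrets, whatever their true preference."""
    return not any(truthful_agent_regrets(mechanism, n_agents, i, pref)
                   for i in range(n_agents) for pref in permutations(OUTCOMES))

if __name__ == "__main__":
    print(is_regret_free_truth_telling(majority))   # True: majority is truthful, hence RFTT
```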
https://en.wikipedia.org/wiki/Regret-free_mechanism
In decision theory and machine learning, competitive regret refers to a performance measure that evaluates an algorithm's regret relative to an oracle or benchmark strategy. Unlike traditional regret, which compares against the best fixed decision in hindsight, competitive regret compares against decision-makers with different capabilities—either with greater computational resources or access to additional information. The formal definition of competitive regret typically involves a ratio or difference between the regret of an algorithm and the regret of a reference oracle. An algorithm is considered to have "good" competitive regret if this ratio remains bounded even as the problem size increases. This framework has applications in various domains including online optimization, reinforcement learning, portfolio selection, and multi-armed bandit problems. Competitive regret analysis provides researchers with a more nuanced evaluation metric than standard regret, helping them develop algorithms that can achieve near-optimal performance even under practical constraints and uncertainty.

Consider estimating a discrete probability distribution p on a discrete set 𝒳 from data X. The regret of an estimator[1] q is defined in terms of the expected Kullback–Leibler divergence D(p||q) between the true p and the estimate q, taken in the worst case over the set 𝒫 of all possible probability distributions on 𝒳.

One benchmark is an oracle restricted to partial information about the true distribution p: it knows the location of p in the parameter space only up to a partition.[1] Given a partition ℙ of the parameter space, suppose the oracle knows the subset P containing the true p. The oracle's regret is then measured within that subset, and the competitive regret of q relative to this oracle is the excess of q's regret over the oracle's.

Another benchmark oracle knows p exactly, but may only choose its estimator among natural estimators. A natural estimator assigns equal probability to the symbols which appear the same number of times in the sample.[1] The regret of this oracle and the corresponding competitive regret are defined analogously. Acharya et al. (2013)[2] propose an estimator q and bound its competitive regret. Here Δk denotes the k-dimensional unit simplex surface, and the partition ℙσ denotes the permutation class on Δk, in which p and p′ are placed in the same subset if and only if p′ is a permutation of p.
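The loss underlying these definitions, the Kullback–Leibler divergence between the true distribution and an estimate built from a sample, can be illustrated numerically. The sketch below is an illustration only: it is not the estimator of Acharya et al. and does not implement the oracle constructions above. It Monte-Carlo-estimates the expected KL loss of two simple add-constant estimators; a competitive-regret analysis would then compare such a quantity against the corresponding loss achievable by a restricted oracle.

```python
import random
from math import log

def kl_divergence(p, q):
    """D(p || q) = sum over x of p(x) * log(p(x) / q(x))."""
    return sum(px * log(px / qx) for px, qx in zip(p, q) if px > 0)

def add_beta(counts, beta):
    """Add-constant estimator: q(x) proportional to count(x) + beta."""
    total = sum(counts) + beta * len(counts)
    return [(c + beta) / total for c in counts]

def expected_kl(p, n, beta, trials=5000, seed=0):
    """Monte-Carlo estimate of the expected KL loss E[D(p || q(X^n))]
    for an i.i.d. sample of size n drawn from p."""
    rng = random.Random(seed)
    k = len(p)
    acc = 0.0
    for _ in range(trials):
        counts = [0] * k
        for x in rng.choices(range(k), weights=p, k=n):
            counts[x] += 1
        acc += kl_divergence(p, add_beta(counts, beta))
    return acc / trials

if __name__ == "__main__":
    p = [0.5, 0.25, 0.125, 0.0625, 0.0625]   # a true distribution on k = 5 symbols
    for beta in (1.0, 0.5):                  # Laplace and add-1/2 smoothing
        print(beta, round(expected_kl(p, n=20, beta=beta), 4))
```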
https://en.wikipedia.org/wiki/Competitive_regret
Info-gap decision theoryseeks to optimizerobustnessto failure under severeuncertainty,[1][2]in particular applyingsensitivity analysisof thestability radiustype[3]to perturbations in the value of a given estimate of the parameter of interest. It has some connections withWald's maximin model; some authors distinguish them, others consider them instances of the same principle. It was developed byYakov Ben-Haim,[4]and has found manyapplicationsand described as a theory for decision-making under "severeuncertainty". It has beencriticizedas unsuited for this purpose, andalternativesproposed, including such classical approaches asrobust optimization. Info-gap theory has generated a lot of literature. Info-gap theory has been studied or applied in a range of applications including engineering,[5][6][7][8][9][10][11][12][13][14][15][16][17][18]biological conservation,[19][20][21][22][23][24][25][26][27][28][29][30]theoretical biology,[31]homeland security,[32]economics,[33][34][35]project management[36][37][38]and statistics.[39]Foundational issues related to info-gap theory have also been studied.[40][41][42][43][44][45] A typical engineering application is the vibration analysis of a cracked beam, where the location, size, shape and orientation of the crack is unknown and greatly influence the vibration dynamics.[9]Very little is usually known about these spatial and geometrical uncertainties. The info-gap analysis allows one to model these uncertainties, and to determine the degree of robustness - to these uncertainties - of properties such as vibration amplitude, natural frequencies, and natural modes of vibration. Another example is the structural design of a building subject to uncertain loads such as from wind or earthquakes.[8][10]The response of the structure depends strongly on the spatial and temporal distribution of the loads. However, storms and earthquakes are highly idiosyncratic events, and the interaction between the event and the structure involves very site-specific mechanical properties which are rarely known. The info-gap analysis enables the design of the structure to enhance structural immunity against uncertain deviations from design-base or estimated worst-case loads.[citation needed]Another engineering application involves the design of a neural net for detecting faults in a mechanical system, based on real-time measurements. A major difficulty is that faults are highly idiosyncratic, so that training data for the neural net will tend to differ substantially from data obtained from real-time faults after the net has been trained. The info-gap robustness strategy enables one to design the neural net to be robust to the disparity between training data and future real events.[11][13] The conservation biologist faces info-gaps in using biological models. They use info-gap robustness curves to select among management options for spruce-budworm populations in Eastern Canada.Burgman[46]uses the fact that the robustness curves of different alternatives can intersect. Project management is another area where info-gap uncertainty is common. The project manager often has very limited information about the duration and cost of some of the tasks in the project, and info-gap robustness can assist in project planning and integration.[37]Financial economics is another area where the future is unpredictable, which may be either pernicious or propitious. 
Info-gap robustness and opportuneness analyses can assist in portfolio design,credit rationing, and other applications.[33] A general criticism of non-probabilistic decision rules, discussed in detail atdecision theory: alternatives to probability theory, is that optimal decision rules (formally,admissible decision rules) canalwaysbe derived by probabilistic methods, with a suitableutility functionandprior distribution(this is the statement of the complete class theorems), and thus that non-probabilistic methods such as info-gap are unnecessary and do not yield new or better decision rules. A more general criticism of decision making under uncertainty is the impact of outsized, unexpected events, ones that are not captured by the model. This is discussed particularly inblack swan theory, and info-gap, used in isolation, is vulnerable to this, as are a fortiori all decision theories that use a fixed universe of possibilities, notably probabilistic ones. Sniedovich[47]raises two points to info-gap decision theory, one substantive, one scholarly: Sniedovich has challenged the validity of info-gap theory for making decisions under severe uncertainty. Sniedovich notes that the info-gap robustness function is "local" to the region aroundu~{\displaystyle \displaystyle {\tilde {u}}}, whereu~{\displaystyle \displaystyle {\tilde {u}}}is likely to be substantially in error. Symbolically, maxα{\displaystyle \alpha }assuming min (worst-case) outcome, or maximin. In other words, while it is not a maximin analysis of outcome over the universe of uncertainty, it is a maximin analysis over a properly construed decision space. Ben-Haim argues that info-gap's robustness model is not min-max/maximin analysis because it is not worst-case analysis ofoutcomes;it is asatisficingmodel, not an optimization model – a (straightforward) maximin analysis would consider worst-case outcomes over the entire space which, since uncertainty is often potentially unbounded, would yield an unbounded bad worst case. Sniedovich[3]has shown that info-gap's robustness model is a simplestability radiusmodel, namely a local stability model of the generic form whereB(ρ,p~){\displaystyle B(\rho ,{\tilde {p}})}denotes aballof radiusρ{\displaystyle \rho }centered atp~{\displaystyle {\tilde {p}}}andP(s){\displaystyle P(s)}denotes the set of values ofp{\displaystyle p}that satisfy pre-determined stability conditions. In other words, info-gap's robustness model is a stability radius model characterized by a stability requirement of the formrc≤R(q,p){\displaystyle r_{c}\leq R(q,p)}. Since stability radius models are designed for the analysis of small perturbations in a given nominal value of a parameter, Sniedovich[3]argues that info-gap's robustness model is unsuitable for the treatment of severe uncertainty characterized by a poor estimate and a vast uncertainty space. It is correct that the info-gap robustness function is local, and has restricted quantitative value in some cases. However, a major purpose of decision analysis is to provide focus for subjective judgments. That is, regardless of the formal analysis, a framework for discussion is provided. Without entering into any particular framework, or characteristics of frameworks in general, discussion follows about proposals for such frameworks. Simon[48]introduced the idea ofbounded rationality. Limitations on knowledge, understanding, and computational capability constrain the ability of decision makers to identify optimal choices. 
Simon advocatedsatisficingrather than optimizing: seeking adequate (rather than optimal) outcomes given available resources. Schwartz,[49]Conlisk[50]and others discuss extensive evidence for the phenomenon of bounded rationality among human decision makers, as well as for the advantages of satisficing when knowledge and understanding are deficient. The info-gap robustness function provides a means of implementing a satisficing strategy under bounded rationality. For instance, in discussing bounded rationality and satisficing in conservation and environmental management, Burgman notes that "Info-gap theory ... can function sensibly when there are 'severe' knowledge gaps." The info-gap robustness and opportuneness functions provide "a formal framework to explore the kinds of speculations that occur intuitively when examining decision options."[51]Burgman then proceeds to develop an info-gap robust-satisficing strategy for protecting the endangered orange-bellied parrot. Similarly, Vinot, Cogan and Cipolla[52]discuss engineering design and note that "the downside of a model-based analysis lies in the knowledge that the model behavior is only an approximation to the real system behavior. Hence the question of the honest designer: how sensitive is my measure of design success to uncertainties in my system representation? ... It is evident that if model-based analysis is to be used with any level of confidence then ... [one must] attempt to satisfy an acceptable sub-optimal level of performance while remaining maximally robust to the system uncertainties."[52]They proceed to develop an info-gap robust-satisficing design procedure for an aerospace application. Of course, decision in the face of uncertainty is nothing new, and attempts to deal with it have a long history. A number of authors have noted and discussed similarities and differences between info-gap robustness andminimaxor worst-case methods[7][16][35][37][53].[54]Sniedovich[47]has demonstrated formally that the info-gap robustness function can be represented as a maximin optimization, and is thus related to Wald's minimax theory. Sniedovich[47]has claimed that info-gap's robustness analysis is conducted in the neighborhood of an estimate that is likely to be substantially wrong, concluding that the resulting robustness function is equally likely to be substantially wrong. On the other hand, the estimate is the best one has, so it is useful to know if it can err greatly and still yield an acceptable outcome. This critical question clearly raises the issue of whether robustness (as defined by info-gap theory) is qualified to judge whether confidence is warranted,[5][55][56]and how it compares to methods used to inform decisions under uncertainty using considerationsnotlimited to the neighborhood of a bad initial guess. Answers to these questions vary with the particular problem at hand. Some general comments follow. Sensitivity analysis– how sensitive conclusions are to input assumptions – can be performed independently of a model of uncertainty: most simply, one may take two different assumed values for an input and compares the conclusions. From this perspective, info-gap can be seen as a technique of sensitivity analysis, though by no means the only. The robust optimization literature[57][58][59][60][61][62]provides methods and techniques that take aglobalapproach to robustness analysis. 
These methods directly address decision undersevereuncertainty, and have been used for this purpose for more than thirty years now.Wald'sMaximinmodel is the main instrument used by these methods. The principal difference between theMaximinmodel employed by info-gap and the variousMaximinmodels employed by robust optimization methods is in the manner in which the total region of uncertainty is incorporated in the robustness model. Info-gap takes a local approach that concentrates on the immediate neighborhood of the estimate. In sharp contrast, robust optimization methods set out to incorporate in the analysis the entire region of uncertainty, or at least an adequate representation thereof. In fact, some of these methods do not even use an estimate. Classical decision theory,[63][64]offers two approaches to decision-making under severe uncertainty, namelymaximinand Laplaces'principle of insufficient reason(assume all outcomes equally likely); these may be considered alternative solutions to the problem info-gap addresses. Further, as discussed atdecision theory: alternatives to probability theory,probabilists, particularly Bayesians probabilists, argue that optimal decision rules (formally,admissible decision rules) canalwaysbe derived by probabilistic methods (this is the statement of thecomplete class theorems), and thus that non-probabilistic methods such as info-gap are unnecessary and do not yield new or better decision rules. As attested by the rich literature onrobust optimization, maximin provides a wide range of methods for decision making in the face of severe uncertainty. Indeed, as discussed incriticism of info-gap decision theory, info-gap's robustness model can be interpreted as an instance of the general maximin model. As for Laplaces'principle of insufficient reason, in this context it is convenient to view it as an instance ofBayesian analysis. The essence of theBayesian analysisis applying probabilities for different possible realizations of the uncertain parameters. In the case ofKnightian (non-probabilistic) uncertainty, these probabilities represent the decision maker's "degree of belief" in a specific realization. In our example, suppose there are only five possible realizations of the uncertain revenue to allocation function. The decision maker believes that the estimated function is the most likely, and that the likelihood decreases as the difference from the estimate increases. Figure 11 exemplifies such a probability distribution. Now, for any allocation, one can construct a probability distribution of the revenue, based on his prior beliefs. The decision maker can then choose the allocation with the highest expected revenue, with the lowest probability for an unacceptable revenue, etc. The most problematic step of this analysis is the choice of the realizations probabilities. When there is an extensive and relevant past experience, an expert may use this experience to construct a probability distribution. But even with extensive past experience, when some parameters change, the expert may only be able to estimate thatA{\displaystyle A}is more likely thanB{\displaystyle B}, but will not be able to reliably quantify this difference. Furthermore, when conditions change drastically, or when there is no past experience at all, it may prove to be difficult even estimating whetherA{\displaystyle A}is more likely thanB{\displaystyle B}. 
Nevertheless, methodologically speaking, this difficulty is not as problematic as basing the analysis of a problem subject to severe uncertainty on a single point estimate and its immediate neighborhood, as done by info-gap. And what is more, contrary to info-gap, this approach is global, rather than local. Still, it must be stressed that Bayesian analysis does not expressly concern itself with the question of robustness. Bayesian analysis raises the issue oflearning from experienceand adjusting probabilities accordingly. In other words, decision is not a one-stop process, but profits from a sequence of decisions and observations.
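A minimal numeric sketch of the info-gap robustness function discussed above (an illustration under assumed numbers and an assumed uncertainty model, not an example taken from the cited literature): an estimated reward vector, an uncertainty set that expands with the horizon of uncertainty α, and a performance requirement r_c. The robustness of a decision is the largest α at which the worst case still satisfies the requirement. Note how the ranking of the two options reverses as the requirement is tightened, the curve-intersection behaviour mentioned above in the conservation example.

```python
import numpy as np

def worst_case_reward(q, u_est, scales, alpha):
    """Worst case of the reward q . u over the uncertainty set
    U(alpha) = { u : |u_i - u_est_i| <= alpha * scales_i }."""
    lo = u_est - alpha * scales
    hi = u_est + alpha * scales
    return float(np.sum(np.where(q >= 0, q * lo, q * hi)))

def robustness(q, u_est, scales, r_crit, alpha_max=10.0, tol=1e-6):
    """Info-gap robustness: the largest horizon of uncertainty alpha at which
    the worst case over U(alpha) still meets the requirement reward >= r_crit."""
    if worst_case_reward(q, u_est, scales, 0.0) < r_crit:
        return 0.0                 # the requirement already fails at the estimate
    lo, hi = 0.0, alpha_max
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if worst_case_reward(q, u_est, scales, mid) >= r_crit:
            lo = mid
        else:
            hi = mid
    return lo

if __name__ == "__main__":
    u_est = np.array([1.0, 1.5])       # estimated reward per unit of each option
    scales = np.array([0.1, 1.0])      # how fast uncertainty grows for each option
    options = {"cautious": np.array([1.0, 0.0]),
               "aggressive": np.array([0.0, 1.0])}
    for r_crit in (0.9, 1.2):          # two performance requirements
        print(r_crit, {name: round(robustness(q, u_est, scales, r_crit), 3)
                       for name, q in options.items()})
```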
https://en.wikipedia.org/wiki/Info-gap_decision_theory
Swap regret is a concept from game theory. It is a generalization of regret in a repeated, n-decision game. A player's swap regret is the largest total gain the player could have secured, in hindsight, by applying a fixed switching rule to the history of play: for each decision i, every round in which i was chosen is replaced by some single alternative decision j, chosen optimally (and separately) for each i. Intuitively, it is how much a player could improve by switching each occurrence of decision i to the best decision j possible in hindsight. The swap regret is always nonnegative. Swap regret is useful for computing correlated equilibria.
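A minimal sketch of the hindsight computation just described (an illustration; the function name and the payoff-table input format are assumptions, not taken from the article): for each decision i, it finds the single best replacement j for all rounds in which i was played and sums the improvements. Since keeping j = i is always allowed, the result is never negative, matching the statement above.

```python
def swap_regret(actions, payoffs):
    """Swap regret of a play history.

    actions[t] is the decision chosen in round t; payoffs[t][j] is the payoff
    that decision j would have earned in round t."""
    n_decisions = len(payoffs[0])
    regret = 0.0
    for i in range(n_decisions):
        rounds = [t for t, a in enumerate(actions) if a == i]
        if not rounds:
            continue
        earned = sum(payoffs[t][i] for t in rounds)
        # Best single decision j to substitute for every occurrence of i (j = i allowed).
        best = max(sum(payoffs[t][j] for t in rounds) for j in range(n_decisions))
        regret += best - earned
    return regret

if __name__ == "__main__":
    payoffs = [[1, 0, 0],
               [0, 2, 0],
               [1, 0, 3],
               [0, 1, 0]]
    actions = [0, 0, 2, 1]
    print(swap_regret(actions, payoffs))   # 1.0: switching every play of 0 to 1 gains 1
```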
https://en.wikipedia.org/wiki/Swap_regret
Leela Chess Zero(abbreviated asLCZero,lc0) is afree, open-sourcechess engineandvolunteer computingproject based onGoogle'sAlphaZeroengine. It was spearheaded byGary Linscott, a developer for theStockfish chess engine, and adapted from theLeela ZeroGoengine.[1] Like Leela Zero and AlphaGo Zero, early iterations of Leela Chess Zero started with no intrinsic chess-specific knowledge other than the basic rules of the game.[1]It learned how to play chess throughreinforcement learningfrom repeatedself-play, using a distributed computing network coordinated at the Leela Chess Zero website. However, as of November 2024 most models used by the engine are trained throughsupervised learningon data generated by previous reinforcement learning runs.[2] As of June 2024[update], Leela Chess Zero has played over 2.5 billion games against itself, playing around 1 million games every day,[3]and is capable of play at a level that is comparable withStockfish, the leading conventional chess program.[4][5] The Leela Chess Zero project was first announced on TalkChess.com on January 9, 2018, as an open-source, self-learning chess engine attempting to recreate the success ofAlphaZero.[1][6][7]Within the first few months of training, Leela Chess Zero had already reached theGrandmasterlevel, surpassing the strength of early releases ofRybka, Stockfish, andKomodo, despite evaluating orders of magnitude fewer positions due to the size of thedeep neural networkit uses as itsevaluation function. In December 2018, theAlphaZeroteam published a paper inSciencemagazine revealing previously undisclosed details of the architecture and training parameters used for AlphaZero.[8]These changes were soon incorporated into Leela Chess Zero and increased both its strength and training efficiency.[9] Work on Leela Chess Zero has informed the AobaZero project forshogi.[10] The engine has been rewritten and carefully iterated upon since its inception, and since 2019[11]has run on multiplebackends, allowing it to run on both CPU and GPU.[12] The engine can be configured to use differentweights,[13]including even differentarchitectures. This same mechanism of substitutable weights can also be used for alternative chess rules, such as for theFischer Random Chessvariant, which was done in 2019.[14] Like AlphaZero, Leela Chess Zero employs neural networks which output both a policy vector, a distribution over subsequent moves used to guide search, and a position evaluation. These neural networks are designed to run onGPU, unlike traditional engines. 
It originally usedresidual neural networks, but in 2022 switched to using atransformer-based architecture designed byDaniel MonroeandPhilip Chalmers.[2]These models represent a chessboard as a sequence of 64 tokens and apply a trunk consisting of a stack of Post-LN encoder layers, outputting a sequence of 64 encoded tokens which is used to generate a position evaluation and a distribution over subsequent moves.[2][15]They use a custom domain-specific position encoding called smolgen to improve the self-attention layer.[15] As of November 2024, the models used by the engine are significantly larger and more efficient than the residual network used by AlphaZero, reportedly achieving grandmaster-level strength at one position evaluation per move.[2][16]These models are able to detect and exploit positional features like trapped pieces and fortresses to outmaneuver traditional engines, giving Leela a unique playstyle.[2]There is also evidence that they are able to perform look-ahead.[17] Like AlphaZero, Leela Chess Zero learns throughreinforcement learning, continually training on data generated throughself-play.[1][8]However, unlike AlphaZero, Leela Chess Zero decentralizes its data generation through distributed computing, with volunteers generating self-play data on local hardware which is fed to the reinforcement algorithm.[3]In order to contribute training games, volunteers must download the latest non-release candidate (non-rc) version of the engine and the client. The client connects to the Leela Chess Zero server and iteratively receives the latest neural network version and produces self-play games which are sent back to the server and use to train the network.[18] In order to run the Leela Chess Zero engine, two components are needed: the engine binary used to perform search, and a network used to evaluate positions.[18]The client, which is used to contribute training data to the project, is not needed for this purpose. Older networks can also be downloaded and used by placing those networks in the folder with the Lc0 binary. In season 15 of theTop Chess Engine Championship, the engine AllieStein competed alongside Leela. AllieStein is a combination of two different spinoffs from Leela: Allie, which uses the same neural network as Leela, but has a unique search algorithm for exploring different lines of play, and Stein, a network which was trained usingsupervised learningon existing game data from games between other engines. While neither of these projects were admitted to TCEC separately due to their similarity to Leela, the combination of Allie's search algorithm with the Stein network, called AllieStein, was deemed unique enough to warrant its inclusion in the competition.[a][19][20] In early 2021, the LcZero blog announcedCeres, a transliteration of the engine toC Sharpwhich introduced several algorithmic improvements. The engine has performed competitively in tournaments, achieving third place in the TCEC Swiss 7 and fourth place in the TCEC Cup 4. In 2024, theCeresTrainframework was announced to support training deep neural networks for chess inPyTorch. In April 2018, Leela Chess Zero became the first engine using a deep neural network to enter theTop Chess Engine Championship(TCEC), during Season 12 in the lowest division, Division 4.[21]Out of 28 games, it won one, drew two, and lost the remainder; its sole victory came from a position in which its opponent, Scorpio 2.82, crashed in three moves.[22]However, it improved quickly. 
In July 2018, Leela placed seventh out of eight competitors at the 2018World Computer Chess Championship.[23]In August 2018, it won division 4 of TCEC season 13 with a record of 14 wins, 12 draws, and 2 losses.[22][24]In Division 3, Leela scored 16/28 points, finishing third behind Ethereal, which scored 22.5/28 points, and Arasan on tiebreak.[25][22] By September 2018, Leela had become competitive with the strongest engines in the world. In the 2018Chess.com Computer Chess Championship(CCCC),[26]Leela placed fifth out of 24 entrants. The top eight engines advanced to round 2, where Leela placed fourth.[27][28]Leela then won the 30-game match againstKomodoto secure third place in the tournament.[29][30]Leela participated in the "TCEC Cup", an event in which engines from different TCEC divisions can play matches against one another. Leela defeated higher-division engines Laser, Ethereal and Fire before finally being eliminated by Stockfish in the semi-finals.[22] In October and November 2018, Leela participated in theChess.com Computer Chess ChampionshipBlitz Battle, finishing third behind Stockfish and Komodo.[31][32] In December 2018, Leela participated inSeason 14 of the Top Chess Engine Championship. Leela dominated divisions 3, 2, and 1, easily finishing first in all of them. In the premier division, Stockfish dominated whileHoudini, Komodo and Leela competed for second place. It came down to a final-round game where Leela needed to hold Stockfish to a draw with black to finish second ahead of Komodo. Leela managed this and therefore met Stockfish in the superfinal. In a back and forth match, first Stockfish and then Leela took three game leads before Stockfish won by the narrow margin of 50.5–49.5.[22] In February 2019, Leela scored its first major tournament win when it defeated Houdini in the final of the second TCEC cup. Leela did not lose a game the entire tournament.[22][33]In April 2019, Leela won the Chess.com Computer Chess Championship 7: Blitz Bonanza, becoming the first neural-network project to take the title.[34] In theseason 15 of the Top Chess Engine Championship(May 2019), Leela defended its TCEC Cup title, this time defeating Stockfish with a score of 5.5–4.5 (+2 =7 −1) in the final after Stockfish blundered a seven-mantablebasedraw.[35]Leela also won the Superfinal for the first time, scoring 53.5–46.5 (+14 −7 =79) versus Stockfish, including winning as both white and black in the same predetermined opening in games 61 and 62.[36][37] Season 16 of TCECsaw Leela finish in third place in premier division, missing qualification for the Superfinal to Stockfish and the new deep neural network engine AllieStein. Leela was the only engine not to suffer any losses in the Premier division, and defeated Stockfish in one of the six games they played. However, Leela only managed to score nine wins, while AllieStein and Stockfish both scored 14 wins. This inability to defeat weaker engines led to Leela finishing third, half a point behind AllieStein and a point behind Stockfish.[38]In the fourth TCEC Cup, Leela was seeded first as the defending champion, which placed it on the opposite half of the brackets as AllieStein and Stockfish. Leela was able to qualify for the finals, where it faced Stockfish. 
After seven draws, Stockfish won the eighth game to win the match.[39] InSeason 17 of TCEC, held in January–April 2020, Leela regained the championship by defeating Stockfish 52.5–47.5, scoring a remarkable six wins in the final ten games, including winning as both white and black in the same predetermined opening in games 95 and 96.[40]It qualified for the superfinal again inSeason 18, but this time was defeated by Stockfish 53.5–46.5.[41]In the TCEC Cup 6 final, Leela lost to AllieStein, finishing second.[42] Season 19 of TCECsaw Leela qualify for the Superfinal again. This time it played against a new Stockfish version with support forNNUE, a shallow neural network–basedevaluation functionused primarily for the leaf nodes of the search tree. Stockfish NNUE defeated Leela convincingly with a final score of 54.5–45.5 (+18 −9 =73).[43][44]Since then, Leela has repeatedly qualified for the Superfinal, only to lose every time to Stockfish: +14 −8 =78 inSeason 20, +19 −7 =74 in Season 21, +27 −10 =63 in Season 23, +20 −16 =64 in Season 24, +27 -23 =50 in Season 25, +31 -17 =52 in Season 26, and +35 -18 =47 in Season 27. Since the introduction of NNUE to Stockfish, Leela has scored victories at the TCEC Swiss 6 and 7 and the TCEC Cup 11, and is usually a close second behind Stockfish in major tournaments.
https://en.wikipedia.org/wiki/Leela_Chess_Zero
In competitive two-player games, thekiller heuristicis a move-ordering method based on the observation that a strong move or small set of such moves in a particular position may be equally strong in similar positions at the same move (ply) in the game tree. Retaining such moves obviates the effort of rediscovering them in sibling nodes. This technique improves the efficiency ofalpha–beta pruning, which in turn improves the efficiency of theminimax algorithm.[1]Alpha–beta pruningworks best when the best moves are considered first. This is because the best moves are the ones most likely to produce acutoff, a condition where the game-playing program knows that the position it is considering could not possibly have resulted from best play by both sides and so need not be considered further. I.e. the game-playing program will always make its best available move for each position. It only needs to consider the other player's possible responses to that best move, and can skip evaluation of responses to (worse) moves it will not make. The killer heuristic attempts to produce a cutoff by assuming that a move that produced a cutoff in another branch of thegame treeat the same depth is likely to produce a cutoff in the present position, that is to say that a move that was a very good move from a different (but possibly similar) position might also be a good move in the present position. By trying thekiller movebefore other moves, a game-playing program can often produce an early cutoff, saving itself the effort of considering or even generating all legal moves from a position. In practical implementation, game-playing programs frequently keep track of two killer moves for each depth of the game tree (greater than depth of 1) and see if either of these moves, if legal, produces a cutoff before the program generates and considers the rest of the possible moves. If a non-killer move produces a cutoff, it replaces one of the two killer moves at its depth. This idea can be generalized into a set ofrefutation tables. A generalization of the killer heuristic is thehistory heuristic.[2]The history heuristic can be implemented as a table that is indexed by some characteristic of the move, for example "from" and "to" squares or piece moving and the "to" square. When there is a cutoff, the appropriate entry in the table is incremented, such as by addingdord²wheredis the current search depth.
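The bookkeeping described above, two killer moves per depth that are tried first and replaced whenever another move produces a cutoff, can be sketched as follows. This is an illustrative skeleton rather than code from any particular engine; the `game` interface (`moves`, `apply`, `is_terminal`, `evaluate`) and the `killers` dictionary keyed by depth are assumptions of the sketch.

```python
import math

def store_killer(killers, depth, move):
    """Keep at most two killer moves per depth, most recent first."""
    slots = killers.setdefault(depth, [])
    if move not in slots:
        slots.insert(0, move)
        del slots[2:]

def alpha_beta(state, depth, alpha, beta, maximizing, game, killers):
    """Alpha-beta search that tries the stored killer moves for this depth first."""
    if depth == 0 or game.is_terminal(state):
        return game.evaluate(state)

    moves = list(game.moves(state))
    ordered = [m for m in killers.get(depth, []) if m in moves]   # killers first, if legal here
    ordered += [m for m in moves if m not in ordered]

    best = -math.inf if maximizing else math.inf
    for move in ordered:
        value = alpha_beta(game.apply(state, move), depth - 1,
                           alpha, beta, not maximizing, game, killers)
        if maximizing:
            best = max(best, value)
            alpha = max(alpha, value)
        else:
            best = min(best, value)
            beta = min(beta, value)
        if alpha >= beta:             # cutoff: remember the move that caused it
            store_killer(killers, depth, move)
            break
    return best
```

A stored killer move that is illegal in the current position is simply skipped, and a cutoff caused by any other move pushes that move into the two-slot table for its depth, as described above.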
https://en.wikipedia.org/wiki/Killer_heuristic
In mathematics – and in particular the study of games on the unit square – Parthasarathy's theorem is a generalization of Von Neumann's minimax theorem. It states that a particular class of games has a mixed value, provided that at least one of the players has a strategy that is restricted to absolutely continuous distributions with respect to the Lebesgue measure (in other words, one of the players is forbidden to use a pure strategy). The theorem is attributed to Thiruvenkatachari Parthasarathy.

Let X and Y stand for the unit interval [0,1]; let MX denote the set of probability distributions on X (with MY defined similarly); and let AX denote the set of absolutely continuous distributions on X (with AY defined similarly). Suppose that k(x,y) is bounded on the unit square X×Y = {(x,y) : 0 ≤ x, y ≤ 1} and that k(x,y) is continuous except possibly on a finite number of curves of the form y = ϕk(x) (with k = 1, 2, …, n) where the ϕk(x) are continuous functions. For μ ∈ MX, λ ∈ MY, define k(μ,λ) = ∫_Y ∫_X k(x,y) dμ(x) dλ(y). Then sup_{μ∈MX} inf_{λ∈AY} k(μ,λ) = inf_{λ∈AY} sup_{μ∈MX} k(μ,λ). This is equivalent to the statement that the game induced by k(⋅,⋅) has a value. Note that one player (WLOG Y) is forbidden from using a pure strategy.

Parthasarathy goes on to exhibit a game which has no value. There is no contradiction because in this case neither player is restricted to absolutely continuous distributions (and the demonstration that the game has no value requires both players to use pure strategies).
https://en.wikipedia.org/wiki/Parthasarathy%27s_theorem
Thedualof a givenlinear program(LP) is another LP that is derived from the original (theprimal) LP in the following schematic way: Theweak duality theoremstates that the objective value of the dual LP at any feasible solution is always a bound on the objective of the primal LP at any feasible solution (upper or lower bound, depending on whether it is a maximization or minimization problem). In fact, this bounding property holds for the optimal values of the dual and primal LPs. Thestrong duality theoremstates that, moreover, if the primal has an optimal solution then the dual has an optimal solution too,and the two optima are equal.[1] These theorems belong to a larger class ofduality theorems in optimization. The strong duality theorem is one of the cases in which theduality gap(the gap between the optimum of the primal and the optimum of the dual) is 0. Suppose we have the linear program: MaximizecTxsubject toAx≤b,x≥ 0. We would like to construct an upper bound on the solution. So we create a linear combination of the constraints, with positive coefficients, such that the coefficients ofxin the constraints are at leastcT. This linear combination gives us an upper bound on the objective. The variablesyof the dual LP are the coefficients of this linear combination. The dual LP tries to find such coefficients thatminimizethe resulting upper bound. This gives the following LP:[1]: 81–83 MinimizebTysubject toATy≥c,y≥ 0 This LP is called thedual ofthe original LP. The duality theorem has an economic interpretation.[2][3]If we interpret the primal LP as a classical "resource allocation" problem, its dual LP can be interpreted as a "resource valuation" problem. Consider a factory that is planning its production of goods. Letx{\displaystyle x}be its production schedule (makexi{\displaystyle x_{i}}amount of goodi{\displaystyle i}), letc≥0{\displaystyle c\geq 0}be the list of market prices (a unit of goodi{\displaystyle i}can sell forci{\displaystyle c_{i}}). The constraints it has arex≥0{\displaystyle x\geq 0}(it cannot produce negative goods) and raw-material constraints. Letb{\displaystyle b}be the raw material it has available, and letA≥0{\displaystyle A\geq 0}be the matrix of material costs (producing one unit of goodi{\displaystyle i}requiresAji{\displaystyle A_{ji}}units of raw materialj{\displaystyle j}). Then, the constrained revenue maximization is the primal LP: MaximizecTxsubject toAx≤b,x≥ 0. Now consider another factory that has no raw material, and wishes to purchase the entire stock of raw material from the previous factory. It offers a price vector ofy{\displaystyle y}(a unit of raw materiali{\displaystyle i}foryi{\displaystyle y_{i}}). For the offer to be accepted, it should be the case thatATy≥c{\displaystyle A^{T}y\geq c}, since otherwise the factory could earn more cash by producing a certain product than selling off the raw material used to produce the good. It also should bey≥0{\displaystyle y\geq 0}, since the factory would not sell any raw material with negative price. Then, the second factory's optimization problem is the dual LP: MinimizebTysubject toATy≥c,y≥ 0 The duality theorem states that the duality gap between the two LP problems is at least zero. Economically, it means that if the first factory is given an offer to buy its entire stock of raw material, at a per-item price ofy, such thatATy≥c,y≥ 0, then it should take the offer. It will make at least as much revenue as it could producing finished goods. 
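For the symmetric form used above (primal: maximize cTx subject to Ax ≤ b, x ≥ 0), constructing the dual is purely mechanical: transpose the constraint matrix, swap the roles of b and c, and turn maximization into minimization. A minimal sketch (the function name, data layout, and the small numbers are assumptions of this illustration):

```python
def dual_of(c, A, b):
    """Dual of:  maximize c^T x  subject to  A x <= b,  x >= 0.

    Returns (b, A_transpose, c), the data of:
    minimize b^T y  subject to  A^T y >= c,  y >= 0,
    with one dual variable y_i per primal constraint."""
    A_t = [[A[i][j] for i in range(len(A))] for j in range(len(A[0]))]
    return b, A_t, c

if __name__ == "__main__":
    # Arbitrary small instance: two goods, one raw-material constraint.
    c, A, b = [3, 4], [[5, 6]], [7]
    obj, A_t, rhs = dual_of(c, A, b)
    print("minimize", obj, ". y   subject to", A_t, ". y >=", rhs, " and y >= 0")
```

The general algorithm described below extends this recipe to constraints and variables of arbitrary sign.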
The strong duality theorem further states that the duality gap is zero. With strong duality, the dual solutiony∗{\displaystyle y^{*}}is, economically speaking, the "equilibrium price" (seeshadow price) for the raw material that a factory with production matrixA{\displaystyle A}and raw material stockb{\displaystyle b}would accept for raw material, given the market price for finished goodsc{\displaystyle c}. (Note thaty∗{\displaystyle y^{*}}may not be unique, so the equilibrium price may not be fully determined byA{\displaystyle A},b{\displaystyle b}, andc{\displaystyle c}.) To see why, consider if the raw material pricesy≥0{\displaystyle y\geq 0}are such that(ATy)i<ci{\displaystyle (A^{T}y)_{i}<c_{i}}for somei{\displaystyle i}, then the factory would purchase more raw material to produce more of goodi{\displaystyle i}, since the prices are "too low". Conversely, if the raw material prices satisfyATy≥c,y≥0{\displaystyle A^{T}y\geq c,y\geq 0}, but does not minimizebTy{\displaystyle b^{T}y}, then the factory would make more money by selling its raw material than producing goods, since the prices are "too high". At equilibrium pricey∗{\displaystyle y^{*}}, the factory cannot increase its profit by purchasing or selling off raw material. The duality theorem has a physical interpretation too.[1]: 86–87 In general, given a primal LP, the following algorithm can be used to construct its dual LP.[1]: 85The primal LP is defined by: The dual LP is constructed as follows. From this algorithm, it is easy to see that the dual of the dual is the primal. If all constraints have the same sign, it is possible to present the above recipe in a shorter way using matrices and vectors. The following table shows the relation between various kinds of primals and duals. Below, suppose the primal LP is "maximizecTxsubject to [constraints]" and the dual LP is "minimizebTysubject to [constraints]". Theweak duality theoremsays that, for each feasible solutionxof the primal and each feasible solutionyof the dual:cTx≤bTy. In other words, the objective value in each feasible solution of the dual is an upper-bound on the objective value of the primal, and objective value in each feasible solution of the primal is a lower-bound on the objective value of the dual. Here is a proof for the primal LP "MaximizecTxsubject toAx≤b,x≥ 0": Weak duality implies: maxxcTx≤ minybTy In particular, if the primal is unbounded (from above) then the dual has no feasible solution, and if the dual is unbounded (from below) then the primal has no feasible solution. Thestrong duality theoremsays that if one of the two problems has an optimal solution, so does the other one and that the bounds given by the weak duality theorem are tight, i.e.: maxxcTx= minybTy The strong duality theorem is harder to prove; the proofs usually use the weak duality theorem as a sub-routine. One proof uses thesimplex algorithmand relies on the proof that, with the suitable pivot rule, it provides a correct solution. The proof establishes that, once the simplex algorithm finishes with a solution to the primal LP, it is possible to read from the final tableau, a solution to the dual LP. So, by running the simplex algorithm, we obtain solutions to both the primal and the dual simultaneously.[1]: 87–89 Another proof uses theFarkas lemma.[1]: 94 1. The weak duality theorem implies that finding asinglefeasible solution is as hard as finding anoptimalfeasible solution. Suppose we have an oracle that, given an LP, finds an arbitrary feasible solution (if one exists). 
Given the LP "MaximizecTxsubject toAx≤b,x≥ 0", we can construct another LP by combining this LP with its dual. The combined LP has bothxandyas variables: Maximize1 subject toAx≤b,ATy≥c,cTx≥bTy,x≥ 0,y≥ 0 If the combined LP has a feasible solution (x,y), then by weak duality,cTx=bTy. Soxmust be a maximal solution of the primal LP andymust be a minimal solution of the dual LP. If the combined LP has no feasible solution, then the primal LP has no feasible solution either. 2. The strong duality theorem provides a "good characterization" of the optimal value of an LP in that it allows us to easily prove that some valuetis the optimum of some LP. The proof proceeds in two steps:[4]: 260–261 Consider the primal LP, with two variables and one constraint: Applying the recipe above gives the following dual LP, with one variable and two constraints: It is easy to see that the maximum of the primal LP is attained whenx1is minimized to its lower bound (0) andx2is maximized to its upper bound under the constraint (7/6). The maximum is 4 ⋅ 7/6 = 14/3. Similarly, the minimum of the dual LP is attained wheny1is minimized to its lower bound under the constraints: the first constraint gives a lower bound of 3/5 while the second constraint gives a stricter lower bound of 4/6, so the actual lower bound is 4/6 and the minimum is 7 ⋅ 4/6 = 14/3. In accordance with the strong duality theorem, the maximum of the primal equals the minimum of the dual. We use this example to illustrate the proof of the weak duality theorem. Suppose that, in the primal LP, we want to get an upper bound on the objective3x1+4x2{\displaystyle 3x_{1}+4x_{2}}. We can use the constraint multiplied by some coefficient, sayy1{\displaystyle y_{1}}. For anyy1{\displaystyle y_{1}}we get:y1⋅(5x1+6x2)=7y1{\displaystyle y_{1}\cdot (5x_{1}+6x_{2})=7y_{1}}. Now, ify1⋅5x1≥3x1{\displaystyle y_{1}\cdot 5x_{1}\geq 3x_{1}}andy1⋅6x2≥4x2{\displaystyle y_{1}\cdot 6x_{2}\geq 4x_{2}}, theny1⋅(5x1+6x2)≥3x1+4x2{\displaystyle y_{1}\cdot (5x_{1}+6x_{2})\geq 3x_{1}+4x_{2}}, so7y1≥3x1+4x2{\displaystyle 7y_{1}\geq 3x_{1}+4x_{2}}. Hence, the objective of the dual LP is an upper bound on the objective of the primal LP. Consider a farmer who may grow wheat and barley with the set provision of someLland,Ffertilizer andPpesticide. To grow one unit of wheat, one unit of land,F1{\displaystyle F_{1}}units of fertilizer andP1{\displaystyle P_{1}}units of pesticide must be used. Similarly, to grow one unit of barley, one unit of land,F2{\displaystyle F_{2}}units of fertilizer andP2{\displaystyle P_{2}}units of pesticide must be used. The primal problem would be the farmer deciding how much wheat (x1{\displaystyle x_{1}}) and barley (x2{\displaystyle x_{2}}) to grow if their sell prices areS1{\displaystyle S_{1}}andS2{\displaystyle S_{2}}per unit. In matrix form this becomes: For the dual problem assume thatyunit prices for each of these means of production (inputs) are set by a planning board. The planning board's job is to minimize the total cost of procuring the set amounts of inputs while providing the farmer with a floor on the unit price of each of his crops (outputs),S1for wheat andS2for barley. This corresponds to the following LP: In matrix form this becomes: The primal problem deals with physical quantities. With all inputs available in limited quantities, and assuming the unit prices of all outputs is known, what quantities of outputs to produce so as to maximize total revenue? The dual problem deals with economic values. 
With floor guarantees on all output unit prices, and assuming the available quantity of all inputs is known, what input unit pricing scheme to set so as to minimize total expenditure? To each variable in the primal space corresponds an inequality to satisfy in the dual space, both indexed by output type. To each inequality to satisfy in the primal space corresponds a variable in the dual space, both indexed by input type. The coefficients that bound the inequalities in the primal space are used to compute the objective in the dual space, input quantities in this example. The coefficients used to compute the objective in the primal space bound the inequalities in the dual space, output unit prices in this example. Both the primal and the dual problems make use of the same matrix. In the primal space, this matrix expresses the consumption of physical quantities of inputs necessary to produce set quantities of outputs. In the dual space, it expresses the creation of the economic values associated with the outputs from set input unit prices. Since each inequality can be replaced by an equality and a slack variable, this means each primal variable corresponds to a dual slack variable, and each dual variable corresponds to a primal slack variable. This relation allows us to speak about complementary slackness. A LP can also be unbounded or infeasible. Duality theory tells us that: However, it is possible for both the dual and the primal to be infeasible. Here is an example: There is a close connection between linear programming problems, eigenequations, and von Neumann's general equilibrium model. The solution to a linear programming problem can be regarded as a generalized eigenvector. The eigenequations of a square matrix are as follows: pTA=ρpTAz=ρz{\displaystyle {\begin{matrix}\mathbf {p} ^{T}\mathbf {A} =\rho \mathbf {p} ^{T}\\\mathbf {A} \mathbf {z} =\rho {\mathbf {z} }\\\end{matrix}}} wherepT{\displaystyle \mathbf {p} ^{T}}andz{\displaystyle \mathbf {z} }are the left and right eigenvectors of the square matrixA{\displaystyle \mathbf {A} }, respectively, andρ{\displaystyle \rho }is the eigenvalue. The above eigenequations for the square matrix can be extended to von Neumann's general equilibrium model:[5][6] pTA≥ρpTBAz≤ρBz{\displaystyle {\begin{matrix}\mathbf {p} ^{T}\mathbf {A} \geq \rho \mathbf {p} ^{T}\mathbf {B} \\\mathbf {A} \mathbf {z} \leq \rho \mathbf {B} {\mathbf {z} }\\\end{matrix}}} where the economic meanings ofp{\displaystyle \mathbf {p} }andz{\displaystyle \mathbf {z} }are the equilibrium prices of various goods and the equilibrium activity levels of various economic agents, respectively. The von Neumann's equilibrium model can be further extended to the following structural equilibrium model withA{\displaystyle \mathbf {A} }andB{\displaystyle \mathbf {B} }as matrix-valued functions:[7] pTA(p,u,z)≥ρpTB(p,u,z)A(p,u,z)z≤ρB(p,u,z)z{\displaystyle {\begin{matrix}\mathbf {p} ^{T}\mathbf {A} (\mathbf {p} ,\mathbf {u} ,\mathbf {z} )\geq \rho \mathbf {p} ^{T}\mathbf {B} (\mathbf {p} ,\mathbf {u} ,\mathbf {z} )\\\mathbf {A} (\mathbf {p} ,\mathbf {u} ,\mathbf {z} )\mathbf {z} \leq \rho \mathbf {B} (\mathbf {p} ,\mathbf {u} ,\mathbf {z} ){\mathbf {z} }\\\end{matrix}}} where the economic meaning ofu{\displaystyle \mathbf {u} }is the utility levels of various consumers. 
A special case of the above model is pTA(u)≥pTBA(u)z≤Bz{\displaystyle {\begin{matrix}\mathbf {p} ^{T}\mathbf {A} (u)\geq \mathbf {p} ^{T}\mathbf {B} \\\mathbf {A} (u)\mathbf {z} \leq \mathbf {B} {\mathbf {z} }\end{matrix}}} This form of the structural equilibrium model and linear programming problems can often be converted to each other, that is, the solutions to these two types of problems are often consistent. If we defineA(u)=[0uA0]{\displaystyle \mathbf {A} (u)={\begin{bmatrix}\mathbf {0} &u\\\mathbf {A} &\mathbf {0} \\\end{bmatrix}}},B=[cT00b]{\displaystyle \mathbf {B} ={\begin{bmatrix}\mathbf {c} ^{T}&0\\\mathbf {0} &\mathbf {b} \\\end{bmatrix}}},p=[1y]{\displaystyle \mathbf {p} ={\begin{bmatrix}1\\\mathbf {y} \\\end{bmatrix}}},z=[x1]{\displaystyle \mathbf {z} ={\begin{bmatrix}\mathbf {x} \\1\\\end{bmatrix}}}, then the structural equilibrium model can be written as [yTAu]≥[cTyTb]{\displaystyle {\begin{bmatrix}\mathbf {y} ^{T}\mathbf {A} &u\\\end{bmatrix}}\geq {\begin{bmatrix}\mathbf {c} ^{T}&\mathbf {y} ^{T}\mathbf {b} \\\end{bmatrix}}} [uAx]≤[cTxb]{\displaystyle {\begin{bmatrix}u\\\mathbf {A} \mathbf {x} \\\end{bmatrix}}\leq {\begin{bmatrix}\mathbf {c} ^{T}\mathbf {x} \\\mathbf {b} \\\end{bmatrix}}} Let us illustrate the structural equilibrium model with the previously discussed tiny example. In this example, we haveA=[56]{\displaystyle \mathbf {A} ={\begin{bmatrix}5&6\end{bmatrix}}},A(u)=[00u560]{\displaystyle \mathbf {A} (u)={\begin{bmatrix}0&0&u\\5&6&0\\\end{bmatrix}}}andB=[340007]{\displaystyle \mathbf {B} ={\begin{bmatrix}3&4&0\\0&0&7\\\end{bmatrix}}}. To solve the structural equilibrium model, we obtain[8] p∗=(1,2/3)T,z∗=(0,7/6,1)T,u∗=14/3{\displaystyle \mathbf {p} ^{*}=(1,2/3)^{T},\quad \mathbf {z} ^{*}=(0,7/6,1)^{T},\quad u^{*}=14/3} These are consistent with the solutions to the linear programming problems. We substitute the above calculation results into the structural equilibrium model, obtainingpTA(u)=(10/3,4,14/3)≥(3,4,14/3)=pTBA(u)z=(14/3,7)T≤(14/3,7)T=Bz{\displaystyle {\begin{matrix}\mathbf {p} ^{T}\mathbf {A} (u)=(10/3,4,14/3)\geq (3,4,14/3)=\mathbf {p} ^{T}\mathbf {B} \\\mathbf {A} (u)\mathbf {z} =(14/3,7)^{T}\leq (14/3,7)^{T}=\mathbf {B} {\mathbf {z} }\end{matrix}}} The max-flow min-cut theorem is a special case of the strong duality theorem: flow-maximization is the primal LP, and cut-minimization is the dual LP. SeeMax-flow min-cut theorem#Linear program formulation. Other graph-related theorems can be proved using the strong duality theorem, in particular,Konig's theorem.[9] TheMinimax theoremfor zero-sum games can be proved using the strong-duality theorem.[1]: sub.8.1 Sometimes, one may find it more intuitive to obtain the dual program without looking at the program matrix. Consider the following linear program: We havem+nconditions and all variables are non-negative. We shall definem+ndual variables:yjandsi. We get: Since this is a minimization problem, we would like to obtain a dual program that is a lower bound of the primal. In other words, we would like the sum of all right hand side of the constraints to be the maximal under the condition that for each primal variable the sum of itscoefficientsdo not exceed its coefficient in the linear function. For example,x1appears inn+ 1 constraints. If we sum its constraints' coefficients we geta1,1y1+a1,2y2+ ... +a1,;;n;;yn+f1s1. This sum must be at mostc1. As a result, we get: Note that we assume in our calculations steps that the program is in standard form. 
However, any linear program may be transformed to standard form and it is therefore not a limiting factor.
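As a numerical illustration of the primal-dual relationship, the following sketch (an added example, not part of the source article; it assumes Python with SciPy's linprog available) solves the tiny primal "maximize 3x1 + 4x2 subject to 5x1 + 6x2 ≤ 7, x ≥ 0" inferred from the matrices given above, together with its dual "minimize 7y subject to 5y ≥ 3, 6y ≥ 4, y ≥ 0", and checks that both optima equal 14/3 with x = (0, 7/6) and y = 2/3, matching z* and p* from the structural equilibrium model.
# Hypothetical check (not from the article): solve the tiny primal/dual pair
# with SciPy and confirm strong duality numerically.
from scipy.optimize import linprog

# Primal: maximize 3*x1 + 4*x2 subject to 5*x1 + 6*x2 <= 7, x >= 0.
# linprog minimizes, so the objective is negated.
primal = linprog(c=[-3, -4], A_ub=[[5, 6]], b_ub=[7], bounds=[(0, None)] * 2)

# Dual: minimize 7*y subject to 5*y >= 3 and 6*y >= 4, y >= 0.
# The ">=" rows are rewritten as "<=" by negating both sides.
dual = linprog(c=[7], A_ub=[[-5], [-6]], b_ub=[-3, -4], bounds=[(0, None)])

print("primal optimum:", -primal.fun, "at x =", primal.x)  # 14/3 at (0, 7/6)
print("dual optimum:  ", dual.fun, "at y =", dual.x)       # 14/3 at (2/3,)
The two optimal values coincide, as strong duality requires, and the dual variable 2/3 reproduces the second component of p* above.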
https://en.wikipedia.org/wiki/Dual_linear_program
Incomputational complexity theory,Yao's principle(also calledYao's minimax principleorYao's lemma) relates the performance ofrandomized algorithmsto deterministic (non-random) algorithms. It states that, for certain classes of algorithms, and certain measures of the performance of the algorithms, the following two quantities are equal: the optimal performance that can be obtained by a deterministic algorithm on a random input (its average-case complexity), for a probability distribution on inputs chosen to be as hard as possible, and the optimal performance that can be obtained by a randomized algorithm on a deterministic input (its expected complexity), for an algorithm chosen to have the best performance on its worst-case inputs. Yao's principle is often used to prove limitations on the performance of randomized algorithms, by finding a probability distribution on inputs that is difficult for deterministic algorithms, and inferring that randomized algorithms have the same limitation on their worst case performance.[1] This principle is named afterAndrew Yao, who first proposed it in a 1977 paper.[2]It is closely related to theminimax theoremin the theory ofzero-sum games, and to theduality theory of linear programs. Consider an arbitrary real-valued cost measurec(A,x){\displaystyle c(A,x)}of an algorithmA{\displaystyle A}on an inputx{\displaystyle x}, such as its running time, for which we want to study theexpected valueover randomized algorithms and random inputs. Consider, also, afinite setA{\displaystyle {\mathcal {A}}}of deterministic algorithms (made finite, for instance, by limiting the algorithms to a specific input size), and a finite setX{\displaystyle {\mathcal {X}}}of inputs to these algorithms. LetR{\displaystyle {\mathcal {R}}}denote the class of randomized algorithms obtained from probability distributions over the deterministic behaviors inA{\displaystyle {\mathcal {A}}}, and letD{\displaystyle {\mathcal {D}}}denote the class of probability distributions on inputs inX{\displaystyle {\mathcal {X}}}. Then, Yao's principle states that:[1] maxD∈DminA∈AEx∼D[c(A,x)]=minR∈Rmaxx∈XE[c(R,x)].{\displaystyle \max _{D\in {\mathcal {D}}}\min _{A\in {\mathcal {A}}}\mathbb {E} _{x\sim D}[c(A,x)]=\min _{R\in {\mathcal {R}}}\max _{x\in {\mathcal {X}}}\mathbb {E} [c(R,x)].} Here,E{\displaystyle \mathbb {E} }is notation for the expected value, andx∼D{\displaystyle x\sim D}means thatx{\displaystyle x}is a random variable distributed according toD{\displaystyle D}. Finiteness ofA{\displaystyle {\mathcal {A}}}andX{\displaystyle {\mathcal {X}}}allowsD{\displaystyle {\mathcal {D}}}andR{\displaystyle {\mathcal {R}}}to be interpreted assimplicesofprobability vectors,[3]whosecompactnessimplies that the minima and maxima in these formulas exist.[4]
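Because both sides of this equality are the value of a finite zero-sum game, they can be computed directly by linear programming. The following sketch (an added illustration, not part of the source article; it assumes Python with NumPy and SciPy, and an arbitrary made-up 3×3 cost matrix) solves the two optimization problems independently and prints the common value.
# Hypothetical illustration (not from the article): compute both sides of
# Yao's principle for a small, made-up cost matrix by linear programming.
import numpy as np
from scipy.optimize import linprog

# cost[i, j] = cost of deterministic algorithm i on input j (arbitrary toy values)
cost = np.array([[3.0, 1.0, 4.0],
                 [1.0, 5.0, 2.0],
                 [6.0, 2.0, 3.0]])
m, n = cost.shape

# Right-hand side: best randomized algorithm against its worst-case input.
# Variables: p_1..p_m (distribution over algorithms) and v.
# Minimize v subject to sum_i p_i*cost[i, j] <= v for every input j, sum_i p_i = 1.
rand_alg = linprog(c=np.r_[np.zeros(m), 1.0],
                   A_ub=np.c_[cost.T, -np.ones(n)], b_ub=np.zeros(n),
                   A_eq=[np.r_[np.ones(m), 0.0]], b_eq=[1.0],
                   bounds=[(0, None)] * m + [(None, None)])

# Left-hand side: hardest input distribution against the best deterministic algorithm.
# Variables: q_1..q_n (distribution over inputs) and w.
# Maximize w subject to sum_j q_j*cost[i, j] >= w for every algorithm i, sum_j q_j = 1.
hard_dist = linprog(c=np.r_[np.zeros(n), -1.0],
                    A_ub=np.c_[-cost, np.ones(m)], b_ub=np.zeros(m),
                    A_eq=[np.r_[np.ones(n), 0.0]], b_eq=[1.0],
                    bounds=[(0, None)] * n + [(None, None)])

print("min over randomized algorithms of worst-case expected cost:", rand_alg.x[-1])
print("max over input distributions of best deterministic cost:   ", hard_dist.x[-1])
# The two printed values agree, which is Yao's principle (equivalently, LP duality).
The optimal distributions read off from rand_alg.x[:m] and hard_dist.x[:n] are, for this toy cost matrix, an optimal randomized algorithm and a hardest input distribution, respectively.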
This version of Yao's principle can be proven through the chain of inequalitiesminA∈AEx∼D[c(A,x)]≤Ex∼D[c(R,x)]≤maxx∈XE[c(R,x)],{\displaystyle \min _{A\in {\mathcal {A}}}\mathbb {E} _{x\sim D}[c(A,x)]\leq \mathbb {E} _{x\sim D}[c(R,x)]\leq \max _{x\in {\mathcal {X}}}\mathbb {E} [c(R,x)],}each of which can be shown using onlylinearity of expectationand the principle thatmin≤E≤max{\displaystyle \min \leq \mathbb {E} \leq \max }for all distributions. By avoiding maximization and minimization overD{\displaystyle {\mathcal {D}}}andR{\displaystyle {\mathcal {R}}}, this version of Yao's principle can apply in some cases whereX{\displaystyle {\mathcal {X}}}orA{\displaystyle {\mathcal {A}}}are not finite.[5]Although this direction of inequality is the direction needed for proving lower bounds on randomized algorithms, the equality version of Yao's principle, when it is available, can also be useful in these proofs. The equality of the principle implies that there is no loss of generality in using the principle to prove lower bounds: whatever the actual best randomized algorithm might be, there is some input distribution through which a matching lower bound on its complexity can be proven.[6] When the costc{\displaystyle c}denotes the running time of an algorithm, Yao's principle states that the best possible running time of a deterministic algorithm, on a hard input distribution, gives a lower bound for theexpected timeof anyLas Vegas algorithmon its worst-case input. Here, a Las Vegas algorithm is a randomized algorithm whose runtime may vary, but for which the result is always correct.[7][8]For example, this form of Yao's principle has been used to prove the optimality of certainMonte Carlo tree searchalgorithms for the exact evaluation ofgame trees.[8] The time complexity ofcomparison-based sortingandselection algorithmsis often studied using the number of comparisons between pairs of data elements as a proxy for the total time. When these problems are considered over a fixed set of elements, their inputs can be expressed aspermutationsand a deterministic algorithm can be expressed as adecision tree. In this way both the inputs and the algorithms form finite sets as Yao's principle requires. Asymmetrizationargument identifies the hardest input distributions: they are therandom permutations, the distributions onn{\displaystyle n}distinct elements for which allpermutationsare equally likely. This is because, if any other distribution were hardest, averaging it with all permutations of the same hard distribution would be equally hard, and would produce the distribution for a random permutation. 
Yao's principle extends lower bounds for the average case number of comparisons made by deterministic algorithms, for random permutations, to the worst case analysis of randomized comparison algorithms.[2] An example given by Yao is the analysis of algorithms for finding thek{\displaystyle k}th largest of a given set ofn{\displaystyle n}values, the selection problem.[2]Subsequent to Yao's work, Walter Cunto andIan Munroshowed that, for random permutations, any deterministic algorithm must perform at leastn+min(k,n−k)−O(1){\displaystyle n+\min(k,n-k)-O(1)}expected comparisons.[9]By Yao's principle, the same number of comparisons must be made by randomized algorithms on their worst-case input.[10]TheFloyd–Rivest algorithmcomes withinO(nlog⁡n){\displaystyle O({\sqrt {n\log n}})}comparisons of this bound.[11] Another of the original applications by Yao of his principle was to theevasiveness of graph properties, the number of tests of the adjacency of pairs of vertices needed to determine whether a graph has a given property, when the only access to the graph is through such tests.[2]Richard M. Karpconjectured that every randomized algorithm for every nontrivial monotone graph property (a property that remains true for every subgraph of a graph with the property) requires a quadratic number of tests, but only weaker bounds have been proven.[12] As Yao stated, for graph properties that are true of the empty graph but false for some other graph onn{\displaystyle n}vertices with only a bounded numbers{\displaystyle s}of edges, a randomized algorithm must probe a quadratic number of pairs of vertices. For instance, for the property of being aplanar graph,s=9{\displaystyle s=9}because the 9-edgeutility graphis non-planar. More precisely, Yao states that for these properties, at least(12−p)1s(n2){\displaystyle \left({\tfrac {1}{2}}-p\right){\tfrac {1}{s}}{\tbinom {n}{2}}}tests are needed for a randomized algorithm to have probability at mostp{\displaystyle p}of making a mistake. Yao also used this method to show that quadratically many queries are needed for the properties of containing a giventreeorcliqueas a subgraph, of containing aperfect matching, and of containing aHamiltonian cycle, for small enough constant error probabilities.[2] Inblack-box optimization, the problem is to determine the minimum or maximum value of a function, from a given class of functions, accessible only through calls to the function on arguments from some finite domain. In this case, the cost to be optimized is the number of calls. Yao's principle has been described as "the only method available for proving lower bounds for all randomized search heuristics for selected classes of problems".[13]Lower bounds for several classes of randomized search heuristics have been proven in this way. Incommunication complexity, an algorithm describes acommunication protocolbetween two or more parties, and its cost may be the number of bits or messages transmitted between the parties. In this case, Yao's principle describes an equality between theaverage-case complexityof deterministic communication protocols, on an input distribution that is the worst case for the problem, and the expected communication complexity of randomized protocols on their worst-case inputs.[6][14] An example described byAvi Wigderson(based on a paper by Manu Viola) is the communication complexity for two parties, each holdingn{\displaystyle n}-bit input values, to determine which value is larger.
For deterministic communication protocols, nothing better thann{\displaystyle n}bits of communication is possible, easily achieved by one party sending their whole input to the other. However, parties with a shared source of randomness and a fixed error probability can exchange 1-bithash functionsofprefixesof the input to perform a noisybinary searchfor the first position where their inputs differ, achievingO(log⁡n){\displaystyle O(\log n)}bits of communication. This is within a constant factor of optimal, as can be shown via Yao's principle with an input distribution that chooses the position of the first difference uniformly at random, and then chooses random strings for the shared prefix up to that position and the rest of the inputs after that position.[6][15] Yao's principle has also been applied to thecompetitive ratioofonline algorithms. An online algorithm must respond to a sequence of requests, without knowledge of future requests, incurring some cost or profit per request depending on its choices. The competitive ratio is the ratio of its cost or profit to the value that could be achieved by anoffline algorithmwith access to knowledge of all future requests, for a worst-case request sequence that causes this ratio to be as far from one as possible. Here, one must be careful to formulate the ratio with the algorithm's performance in the numerator and the optimal performance of an offline algorithm in the denominator, so that the cost measure can be formulated as an expected value rather than as thereciprocalof an expected value.[5] An example given byBorodin & El-Yaniv (2005)concernspage replacement algorithms, which respond to requests forpagesof computer memory by using acacheofk{\displaystyle k}pages, for a given parameterk{\displaystyle k}. If a request matches a cached page, it costs nothing; otherwise one of the cached pages must be replaced by the requested page, at a cost of onepage fault. A difficult distribution of request sequences for this model can be generated by choosing each request uniformly at random from a pool ofk+1{\displaystyle k+1}pages. Any deterministic online algorithm hasnk+1{\displaystyle {\tfrac {n}{k+1}}}expected page faults, overn{\displaystyle n}requests. Instead, an offline algorithm can divide the request sequence into phases within which onlyk{\displaystyle k}pages are used, incurring only one fault at the start of a phase to replace the one page that is unused within the phase. As an instance of thecoupon collector's problem, the expected number of requests per phase is(k+1)Hk{\displaystyle (k+1)H_{k}}, whereHk=1+12+⋯+1k{\displaystyle H_{k}=1+{\tfrac {1}{2}}+\cdots +{\tfrac {1}{k}}}is thek{\displaystyle k}thharmonic number. Byrenewal theory, the offline algorithm incursn(k+1)Hk+o(n){\displaystyle {\tfrac {n}{(k+1)H_{k}}}+o(n)}page faults with high probability, so the competitive ratio of any deterministic algorithm against this input distribution is at leastHk{\displaystyle H_{k}}(these two fault rates are illustrated by the simulation sketch at the end of this article).
By Yao's principle,Hk{\displaystyle H_{k}}also lower bounds the competitive ratio of any randomized page replacement algorithm against a request sequence chosen by anoblivious adversaryto be a worst case for the algorithm but without knowledge of the algorithm's random choices.[16] For online problems in a general class related to theski rental problem, Seiden has proposed a cookbook method for deriving optimally hard input distributions, based on certain parameters of the problem.[17] Yao's principle may be interpreted ingame theoreticterms, via a two-playerzero-sum gamein which one player,Alice, selects a deterministic algorithm, the other player, Bob, selects an input, and the payoff is the cost of the selected algorithm on the selected input. Any randomized algorithmR{\displaystyle R}may be interpreted as a randomized choice among deterministic algorithms, and thus as amixed strategyfor Alice. Similarly, a non-random algorithm may be thought of as apure strategyfor Alice. In any two-player zero-sum game, if one player chooses a mixed strategy, then the other player has an optimal pure strategy against it. By theminimax theoremofJohn von Neumann, there exists a game valuec{\displaystyle c}, and mixed strategies for each player, such that the players can guarantee expected valuec{\displaystyle c}or better by playing those strategies, and such that the optimal pure strategy against either mixed strategy produces expected value exactlyc{\displaystyle c}. Thus, the minimax mixed strategy for Alice, set against the best opposing pure strategy for Bob, produces the same expected game valuec{\displaystyle c}as the minimax mixed strategy for Bob, set against the best opposing pure strategy for Alice. This equality of expected game values, for the game described above, is Yao's principle in its form as an equality.[5]Yao's 1977 paper, originally formulating Yao's principle, proved it in this way.[2] The optimal mixed strategy for Alice (a randomized algorithm) and the optimal mixed strategy for Bob (a hard input distribution) may each be computed using a linear program that has one player's probabilities as its variables, with a constraint on the game value for each choice of the other player. The two linear programs obtained in this way for each player aredual linear programs, whose equality is an instance of linear programming duality.[3]However, although linear programs may be solved inpolynomial time, the numbers of variables and constraints in these linear programs (numbers of possible algorithms and inputs) are typically too large to list explicitly. Therefore, formulating and solving these programs to find these optimal strategies is often impractical.[13][14] ForMonte Carlo algorithms, algorithms that use a fixed amount of computational resources but that may produce an erroneous result, a form of Yao's principle applies to the probability of an error, the error rate of an algorithm. Choosing the hardest possible input distribution, and the algorithm that achieves the lowest error rate against that distribution, gives the same error rate as choosing an optimal algorithm and its worst case input distribution. However, the hard input distributions found in this way are not robust to changes in the parameters used when applying this principle. If an input distribution requires high complexity to achieve a certain error rate, it may nevertheless have unexpectedly low complexity for a different error rate. 
Ben-David and Blais show that, forBoolean functionsunder many natural measures of computational complexity, there exists an input distribution that is simultaneously hard for all error rates.[18] Variants of Yao's principle have also been considered forquantum computing. In place of randomized algorithms, one may consider quantum algorithms that have a good probability of computing the correct value for every input (probability at least23{\displaystyle {\tfrac {2}{3}}}); this condition together withpolynomial timedefines the complexity classBQP. It does not make sense to ask for deterministic quantum algorithms, but instead one may consider algorithms that, for a given input distribution, have probability 1 of computing a correct answer, either in aweaksense that the inputs for which this is true have probability≥23{\displaystyle \geq {\tfrac {2}{3}}}, or in astrongsense in which, in addition, the algorithm must have probability 0 or 1 of generating any particular answer on the remaining inputs. For any Boolean function, the minimum complexity of a quantum algorithm that is correct with probability≥23{\displaystyle \geq {\tfrac {2}{3}}}against its worst-case input is less than or equal to the minimum complexity that can be attained, for a hard input distribution, by the best weak or strong quantum algorithm against that distribution. The weak form of this inequality is within a constant factor of being an equality, but the strong form is not.[19]
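Returning to the page replacement example above, the following simulation sketch (an added illustration, not from the source article; plain Python standard library only) draws uniformly random requests over k+1 pages, runs a concrete deterministic online algorithm (LRU) and the offline furthest-in-future rule, and compares the observed fault counts against the rates n/(k+1) and, roughly, n/((k+1)Hk) quoted earlier.
# Hypothetical simulation (not from the article): page faults of a deterministic
# online policy (LRU) versus the offline furthest-in-future policy, on uniformly
# random requests drawn from a pool of k+1 pages with a cache of size k.
import random
from collections import OrderedDict

def lru_faults(requests, k):
    cache = OrderedDict()              # cached pages, least recently used first
    faults = 0
    for page in requests:
        if page in cache:
            cache.move_to_end(page)    # mark as most recently used
        else:
            faults += 1
            if len(cache) == k:
                cache.popitem(last=False)   # evict the least recently used page
            cache[page] = None
    return faults

def offline_faults(requests, k):
    # Furthest-in-future eviction (the optimal offline policy).
    next_use = [0] * len(requests)
    last_seen = {}
    for i in range(len(requests) - 1, -1, -1):
        next_use[i] = last_seen.get(requests[i], float("inf"))
        last_seen[requests[i]] = i
    cache = {}                         # page -> index of its next request
    faults = 0
    for i, page in enumerate(requests):
        if page not in cache:
            faults += 1
            if len(cache) == k:
                cache.pop(max(cache, key=cache.get))   # evict furthest next use
        cache[page] = next_use[i]
    return faults

k, n = 10, 200_000
requests = [random.randrange(k + 1) for _ in range(n)]
h_k = sum(1.0 / i for i in range(1, k + 1))
print("LRU faults:    ", lru_faults(requests, k), "~ n/(k+1) =", n / (k + 1))
print("offline faults:", offline_faults(requests, k), "~ n/((k+1)*H_k) =", n / ((k + 1) * h_k))
On this distribution the first count should be close to n/(k+1), as for any deterministic online policy, while the offline count should be close to n/((k+1)Hk), so their ratio approaches Hk, the lower bound on the competitive ratio stated above.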
https://en.wikipedia.org/wiki/Yao%27s_principle
Insocial psychology,group polarizationrefers to the tendency for a group to make decisions that are more extreme than the initial inclination of its members. These more extreme decisions are towards greater risk if individuals' initial tendencies are to be risky and towards greater caution if individuals' initial tendencies are to be cautious.[1]The phenomenon also holds that a group'sattitudetoward a situation may change in the sense that the individuals' initial attitudes have strengthened and intensified after group discussion, a phenomenon known asattitude polarization.[2] Group polarization is an important phenomenon in social psychology and is observable in many social contexts. For example, a group of women who hold moderately feminist views tend to demonstrate heightened pro-feminist beliefs following group discussion.[3]Similarly, studies have shown that after deliberating together, mock jury members often decided on punitive damage awards that were either larger or smaller than the amount any individual juror had favored prior to deliberation.[4]The studies indicated that when the jurors favored a relatively low award, discussion would lead to an even more lenient result, while if the jury was inclined to impose a stiff penalty, discussion would make it even harsher.[5]Moreover, in recent years, the Internet and onlinesocial mediahave also presented opportunities to observe group polarization and compile new research. Psychologists have found that social media outlets such asFacebookandTwitterdemonstrate that group polarization can occur even when a group is not physically together. As long as the group of individuals begins with the same fundamental opinion on the topic and a consistent dialogue is kept going, group polarization can occur.[6] Research has suggested that well-established groups suffer less from polarization, as do groups discussing problems that are well known to them. However, in situations where groups are somewhat newly formed and tasks are new, group polarization can demonstrate a more profound influence on decision-making.[7] Attitude polarization, also known asbelief polarizationand thepolarization effect, is a phenomenon in which a disagreement becomes more extreme as the different parties consider evidence on the issue. It is one of the effects ofconfirmation bias: the tendency of people to search for and interpret evidence selectively, to reinforce their current beliefs or attitudes.[8]When people encounter ambiguous evidence, this bias can potentially result in each of them interpreting it as in support of their existing attitudes, widening rather than narrowing the disagreement between them.[9] The effect is observed with issues that activate emotions, such as political "hot-button" issues.[10]For most issues, new evidence does not produce a polarization effect.[11]For those issues where polarization is found, mere thinking about the issue, without contemplating new evidence, produces the effect.[11]Social comparison processes have also been invoked as an explanation for the effect, which is increased by settings in which people repeat and validate each other's statements.[12]This apparent tendency is of interest not only topsychologists, but also tosociologists[13]andphilosophers.[14] Since the late 1960s, psychologists have carried out a number of studies on various aspects of attitude polarization. 
In 1979,Charles Lord,Lee RossandMark Lepper[9]performed a study in which they selected two groups of people, one group strongly in favor ofcapital punishment, the other strongly opposed. The researchers initially measured the strength with which people held their position. Later, both the pro- and anti-capital punishment people were put into small groups and shown one of two cards, each containing a statement about the results of a research project written on it. For example: Kroner and Phillips (1977) compared murder rates for the year before and the year after adoption of capital punishment in 14 states. In 11 of the 14 states, murder rates were lower after adoption of the death penalty. This research supports the deterrent effect of the death penalty.[15] or: Palmer and Crandall (1977) compared murder rates in 10 pairs of neighboring states with different capital punishment laws. In 8 of the 10 pairs, murder rates were higher in the state with capital punishment. This research opposes the deterrent effect of the death penalty.[15] The researchers again asked people about the strength of their beliefs about thedeterrenceeffect of the death penalty, and, this time, also asked them about the effect that the research had had on their attitudes. In the next stage of the research, the participants were given more information about the study described on the card they received, including details of the research, critiques of the research, and the researchers' responses to those critiques. The participants' degree of commitment to their original positions was re-measured, and the participants were asked about the quality of the research and the effect the research had on their beliefs. Finally, the trial was rerun on all participants using a card that supported the opposite position to the one they had initially seen. The researchers found that people tended to believe that research that supported their original views had been better conducted and was more convincing than research that didn't.[16]Whichever position they held initially, people tended to hold that position more strongly after reading research that supported it. Lordet al.point out that it is reasonable for people to be less critical of research that supports their current position, but it seems less rational for people to significantly increase the strength of their attitudes when they read supporting evidence.[17]When people had read both the research that supported their views and the research that did not, they tended to hold their original attitudesmorestrongly than before they received that information.[18]These results should be understood in the context of several problems in the implementation of the study, including the fact that the researchers changed the scaling of the outcome variable, making attitude change impossible to measure, and that they measured polarization using a subjective assessment of attitude change rather than a direct measure of how much change had occurred.[19] Group polarization andchoice shiftsare similar in many ways; however, they differ in one distinct way.Group polarizationrefers to attitude change on the individual level due to the influence of the group, andchoice shiftrefers to the outcome of that attitude change; namely, the difference between group members' average pre-discussion attitudes and the outcome of the group decision.[7] Risky and cautious shifts are both a part of a more generalized idea known as group-induced attitude polarization.
Though group polarization deals mainly with risk-involving decisions and/or opinions, discussion-induced shifts have been shown to occur on several non-risk-involving levels. This suggests that a general phenomenon of choice shifts exists apart from only risk-related decisions. Stoner (1968) found that a decision is impacted by the values behind the circumstances of the decision.[20]The study found that situations that normally favor the more risky alternative increased risky shifts. Likewise, situations that normally favor the cautious alternative increased cautious shifts. These findings also show the importance of previous group shifts. Choice shifts are mainly explained by largely differing human values and how highly these values are held by an individual. According to Moscovici et al. (1972), interaction within a group and differences of opinion are necessary for group polarization to take place.[21]While an extremist in the group may sway opinion, the shift can only occur with sufficient and proper interaction within the group. In other words, the extremist will have no impact without interaction. Also, Moscovici et al. found individual preferences to be irrelevant; it is differences of opinion which will cause the shift.[21]This finding demonstrates how one opinion in the group will not sway the group; it is the combination of all the individual opinions that will make an impact. The study of group polarization can be traced back to an unpublished 1961 Master's thesis by MIT student James Stoner, who observed the so-called "risky shift".[22]The concept of risky shift maintains that a group's decisions are riskier than the average of the individual decisions of members before the group met. In early studies, the risky-shift phenomenon was measured using a scale known as the Choice-Dilemmas Questionnaire. This measure required participants to consider a hypothetical scenario in which an individual is faced with a dilemma and must make a choice to resolve the issue at hand. Participants were then asked to estimate the probability that a certain choice would be of benefit or risk to the individual being discussed. Consider the following example: "Mr. A, an electrical engineer, who is married and has one child, has been working for a large electronics corporation since graduating from college five years ago. He is assured of a lifetime job with a modest, though adequate, salary and liberal pension benefits upon retirement. On the other hand, it is very unlikely that his salary will increase much before he retires. While attending a convention, Mr. A is offered a job with a small, newly founded company which has a highly uncertain future. The new job would pay more to start and would offer the possibility of a share in the ownership if the company survived the competition of the larger firms." Participants were then asked to imagine that they were advising Mr. A. They would then be provided with a series of probabilities that indicate whether the new company that offered him a position is financially stable. It would read as follows: "Please check the lowest probability that you would consider acceptable to make it worthwhile for Mr. A to take the new job." ____The chances are 1 in 10 that the company will prove financially sound. ____The chances are 3 in 10 that the company will prove financially sound. ____The chances are 5 in 10 that the company will prove financially sound. ____The chances are 7 in 10 that the company will prove financially sound.
____The chances are 9 in 10 that the company will prove financially sound. ____Place a check here if you think Mr. A should not take the new job no matter what the probabilities. Individuals completed the questionnaire and made their decisions independently of others. Later, they would be asked to join a group to reassess their choices. As indicated by shifts in the mean value, initial studies using this method revealed that group decisions tended to be relatively riskier than those that were made by individuals. This tendency also occurred when individual judgments were collected after the group discussion and even when the individual post-discussion measures were delayed two to six weeks.[23] The discovery of the risky shift was considered surprising and counter-intuitive, especially since earlier work in the 1920s and 1930s by Allport and other researchers suggested that individuals made more extreme decisions than did groups, leading to the expectation that groups would make decisions that would conform to the average risk level of their members.[20]The seemingly counter-intuitive findings of Stoner led to a spurt of research around the risky shift, which was originally thought to be a special-case exception to standard decision-making practice. Many people had concluded that people in a group setting would make decisions based on what they assumed to be the overall risk level of a group; because Stoner's work did not necessarily address this specific theme, and because it does seem to contrast with Stoner's initial definition of risky shift, additional controversy arose, leading researchers to further examine the topic. By the late 1960s, however, it had become clear that the risky shift was just one type of many attitudes that became more extreme in groups, leading Moscovici and Zavalloni to term the overall phenomenon "group polarization."[24] Subsequently, a decade-long period of examination of the applicability of group polarization to a number of fields in both lab and field settings began. There is a substantial amount of empirical evidence demonstrating the phenomenon of group polarization. Group polarization has been widely considered a fundamental group decision-making process and was well established, but remained non-obvious and puzzling because its mechanisms were not fully understood. Almost as soon as the phenomenon of group polarization was discovered, a number of theories were offered to help explain and account for it. These explanations were gradually narrowed down and grouped together until two primary mechanisms remained,social comparisonandinformational influence. Thesocial comparison theory, or normative influence theory, has been widely used to explain group polarization. According to the social comparison interpretation, group polarization occurs as a result of individuals' desire to gain acceptance and be perceived in a favorable way by their group. The theory holds that people first compare their own ideas with those held by the rest of the group; they observe and evaluate what the group values and prefers. In order to gain acceptance, people then take a position that is similar to everyone else's but slightly more extreme. In doing so, individuals support the group's beliefs while still presenting themselves as admirable group "leaders".
The presence of a member with an extreme viewpoint or attitude does not further polarize the group.[25]Studies regarding the theory have demonstrated that normative influence is more likely with judgmental issues, a group goal of harmony, person-oriented group members, and public responses.[4] Informational influence, or persuasive arguments theory, has also been used to explain group polarization, and is most recognized by psychologists today. The persuasive arguments interpretation holds that individuals become more convinced of their views when they hear novel arguments in support of their position. The theory posits that each group member enters the discussion aware of a set of items of information or arguments favoring both sides of the issue, but leans toward the side that boasts the greater amount of information. In other words, individuals make their choices by weighing remembered pro and con arguments. Some of these items or arguments are shared among the members, while some are unshared, having previously been considered by only one member. Assuming most or all group members lean in the same direction, during discussion, items of unshared information supporting that direction are expressed, which provides members previously unaware of them more reason to lean in that direction. Group discussion shifts the weight of evidence as each group member expresses their arguments, shedding light onto a number of different positions and ideas.[26]Research has indicated that informational influence is more likely with intellective issues, a group goal of making a correct decision, task-oriented group members, and private responses.[4]Furthermore, research suggests that it is not simply the sharing of information that predicts group polarization. Rather, the amount of information and persuasiveness of the arguments mediate the level of polarization experienced.[27] In the 1970s, significant arguments occurred over whether persuasive argumentation alone accounted for group polarization.Daniel Isenberg's 1986 meta-analysis of the data gathered by both the persuasive argument and social comparison camps succeeded, in large part, in answering the questions about predominant mechanisms. Isenberg concluded that there was substantial evidence that both effects were operating simultaneously, and that persuasive arguments theory operated when social comparison did not, and vice versa.[4] While these two theories are the most widely accepted as explanations for group polarization, alternative theories have been proposed. The most popular of these theories isself-categorization theory. Self-categorization theory stems fromsocial identity theory, which holds that conformity stems from psychological processes; that is, being a member of a group is defined as the subjective perception of the self as a member of a specific category.[28]Accordingly, proponents of the self-categorization model hold that group polarization occurs because individuals identify with a particular group and conform to a prototypical group position that is more extreme than the group mean. In contrast to social comparison theory and persuasive argumentation theory, the self-categorization model maintains that inter-group categorization processes are the cause of group polarization.[29] Support for theself-categorization theory, which explains group polarization as conformity to a polarized norm, was found by Hogg, Turner, and Davidson in 1990.
In their experiment, participants gave pre-test, post-test, and group consensus recommendations on three choice dilemma item-types (risky, neutral, or cautious). The researchers hypothesized that aningroupconfronted by a risky outgroup will polarize toward caution, an ingroup confronted by a cautious outgroup will polarize toward risk, and an ingroup in the middle of the social frame of reference, confronted by both risky and cautious outgroups, will not polarize but will converge on its pre-test mean.[29]The results of the study supported their hypothesis in that participants converged on a norm polarized toward risk on risky items and toward caution on cautious items.[29]Another similar study found that in-group prototypes become more polarized as the group becomes more extreme in the social context.[30]This further lends support to the self-categorization explanation of group polarization. The rising popularity and increased number of online social media platforms, such asFacebook,TwitterandInstagram, have enabled people to seek out and share ideas with others who have similar interests and common values, making group polarization effects increasingly evident, particularly ingeneration Yandgeneration Zindividuals.[31]Like the social media platforms, video streaming platforms such as YouTube can form such groups implicitly, as recommendation algorithms steer users toward increasingly extreme content.[32]Owing to this technology, it is possible for individuals to curate their sources of information and the opinions to which they are exposed, thereby reinforcing and strengthening their own views while effectively avoiding information and perspectives with which they disagree.[33] One study analyzed over 30,000 tweets on Twitter regarding the shooting ofGeorge Tiller, a late-term abortion doctor; the tweets analyzed were conversations among supporters and opponents of abortion rights after the shooting. The study found that like-minded individuals strengthened group identity whereas replies between different-minded individuals reinforced a split in affiliation.[6] In a study conducted by Sia et al. (2002), group polarization was found to occur with online (computer-mediated) discussions. In particular, this study found that group discussions, conducted when discussants are in a distributed (cannot see one another) or anonymous (cannot identify one another) environment, can lead to even higher levels of group polarization compared to traditional meetings. This is attributed to the greater numbers of novel arguments generated (due to persuasive arguments theory) and higher incidence of one-upmanship behaviors (due to social comparison).[34] However, some research suggests that important differences arise in measuring group polarization in laboratory versus field experiments. A study conducted by Taylor & MacDonald (2002) featured a realistic setting of a computer-mediated discussion, but group polarization did not occur at the level expected.[35]The study's results also showed that groupthink occurs less in computer-mediated discussions than when people are face to face. Moreover, computer-mediated discussions often fail to result in a group consensus, or lead to less satisfaction with the consensus that was reached, compared to groups operating in a natural environment. Furthermore, the experiment took place over a two-week period, leading the researchers to suggest that group polarization may occur only in the short term.
Overall, the results suggest that not only may group polarization not be as prevalent as previous studies suggest, but group theories, in general, may not be simply transferable when seen in a computer-related discussion.[35] Group polarization has been widely discussed in terms of political behavior (seepolitical polarization). Researchers have identified an increase in affective polarization among the United States electorate, and report that hostility and discrimination towards the opposing political party has increased dramatically over time.[36] Group polarization is similarly influential in legal contexts. A study that assessed whether Federal district court judges behaved differently when they sat alone, or in small groups, demonstrated that those judges who sat alone took extreme action 35% of the time, whereas judges who sat in a group of three took extreme action 65% of the time. These results are noteworthy because they indicate that even trained, professional decision-makers are subject to the influences of group polarization.[37] Group polarization has been reported to occur during wartime and other times of conflict and helps to account partially for violent behavior and conflict.[38]Researchers have suggested, for instance, that ethnic conflict exacerbates group polarization by enhancing identification with the ingroup and hostility towards the outgroup.[39]While polarization can occur in any type of conflict, it has its most damaging effects in large-scale inter-group, public policy, and international conflicts. On a smaller scale, group polarization can also be seen in the everyday lives of students inhigher education. A study by Myers in 2005 reported that initial differences among American college students become more accentuated over time. For example, students who do not belong to fraternities and sororities tend to be more liberal politically, and this difference increases over the course of their college careers. Researchers theorize that this is at least partially explained by group polarization, as group members tend to reinforce one another's proclivities and opinions.[40]
https://en.wikipedia.org/wiki/Attitude_polarization
Thegame of chicken, also known as thehawk-dove gameorsnowdrift game,[1]is a model ofconflictfor two players ingame theory. The principle of the game is that while the ideal outcome is for one player to yield (to avoid the worst outcome if neither yields), individuals try to avoid it out of pride, not wanting to look like "chickens". Each player taunts the other to increase the risk of shame in yielding. However, when one player yields, the conflict is avoided, and the game essentially ends. The name "chicken" has its origins in a game in which two drivers drive toward each other on a collision course: one must swerve, or both may die in the crash, but if one driver swerves and the other does not, the one who swerved will be called a "chicken", meaning a coward; this terminology is most prevalent inpolitical scienceandeconomics. The name "hawk–dove" refers to a situation in which there is a competition for a shared resource and the contestants can choose either conciliation or conflict; this terminology is most commonly used inbiologyandevolutionary game theory. From a game-theoretic point of view, "chicken" and "hawk–dove" are identical.[1]The game has also been used to describe themutual assured destructionofnuclear warfare, especially the sort ofbrinkmanshipinvolved in theCuban Missile Crisis.[2] The game of chicken models two drivers, both headed for a single-lane bridge from opposite directions. The first to swerve away yields the bridge to the other. If neither player swerves, the result is a costly deadlock in the middle of the bridge or a potentially fatal head-on collision. It is presumed that the best thing for each driver is to stay straight while the other swerves (since the other is the "chicken" while a crash is avoided). Additionally, a crash is presumed to be the worst outcome for both players. This yields a situation where each player, in attempting to secure their best outcome, risks the worst. The phrasegame of chickenis also used as a metaphor for a situation where two parties engage in a showdown where they have nothing to gain and only pride stops them from backing down.Bertrand Russellfamously compared the game of Chicken tonuclearbrinkmanship: Since the nuclear stalemate became apparent, the governments of East and West have adopted the policy thatMr. Dullescalls 'brinkmanship'. This is a policy adapted from a sport that, I am told, is practiced by some youthful degenerates. This sport is called 'Chicken!'. It is played by choosing a long, straight road with a white line down the middle and starting two very fast cars toward each other from opposite ends. Each car is expected to keep the wheels on one side of the white line. As they approach each other, mutual destruction becomes more and more imminent. If one of them swerves from the white line before the other, the other, as they pass, shouts 'Chicken!', and the one who has swerved becomes an object of contempt. As played by irresponsible boys, this game is considered decadent and immoral, though only the lives of the players are risked. But when the game is played by eminent statesmen, who risk not only their own lives but those of many hundreds of millions of human beings, it is thought on both sides that the statesmen on one side are displaying a high degree of wisdom and courage, and only the statesmen on the other side are reprehensible. This, of course, is absurd. Both are to blame for playing such an incredibly dangerous game. 
The game may be played without misfortune a few times, but sooner or later, it will come to be felt that loss of face is more dreadful than nuclear annihilation. The moment will come when neither side can face the derisive cry of 'Chicken!' from the other side. When that moment comes, the statesmen of both sides will plunge the world into destruction.[2] Brinkmanship involves the introduction of an element of uncontrollable risk: even if all players act rationally in the face of risk, uncontrollable events can still trigger the catastrophic outcome.[3]In the "chickie run" scene from the filmRebel Without a Cause, this happens when Buzz cannot escape from the car and dies in the crash. The opposite scenario occurs inFootloosewhere Ren McCormack is stuck in his tractor and hence wins the game as they cannot play "chicken". A similar event happens in two different games in the filmThe Heavenly Kid, when first Bobby, and then later Lenny become stuck in their cars and drive off a cliff. The basic game-theoretic formulation of Chicken has no element of variable, potentially catastrophic, risk, and is also the contraction of a dynamic situation into a one-shot interaction. The hawk–dove version of the game imagines two players (animals) contesting an indivisible resource who can choose between two strategies, one more escalated than the other.[4]They can use threat displays (play Dove), or physically attack each other (play Hawk). If both players choose the Hawk strategy, then they fight until one is injured and the other wins. If only one player chooses Hawk, then this player defeats the Dove player. If both players play Dove, there is a tie, and each player receives a payoff lower than the profit of a hawk defeating a dove. A formal version of the game of Chicken has been the subject of serious research ingame theory.[5]Two versions of thepayoff matrixfor this game are presented here (Figures 1 and 2). In Figure 1, the outcomes are represented in words, where each player would prefer to win over tying, prefer to tie over losing, and prefer to lose over crashing. Figure 2 presents arbitrarily set numerical payoffs which theoretically conform to this situation. Here, the benefit of winning is 1, the cost of losing is -1, and the cost of crashing is -1000. Both Chicken and Hawk–Dove areanti-coordination games, in which it is mutually beneficial for the players to play different strategies. In this way, it can be thought of as the opposite of acoordination game, where playing the same strategyPareto dominatesplaying different strategies. The underlying concept is that players use a shared resource. In coordination games, sharing the resource creates a benefit for all: the resource isnon-rivalrous, and the shared usage creates positiveexternalities. In anti-coordination games the resource is rivalrous butnon-excludableand sharing comes at a cost (or negative externality). Because the loss of swerving is so trivial compared to the crash that occurs if nobody swerves, the reasonable strategy would seem to be to swerve before a crash is likely. Yet, knowing this, if one believes one's opponent to be reasonable, one may well decide not to swerve at all, in the belief that the opponent will be reasonable and decide to swerve, leaving the first player the winner. This unstable situation can be formalized by saying there is more than oneNash equilibrium, which is a pair of strategies for which neither player gains by changing their own strategy while the other stays the same. 
(In this case, thepure strategyequilibria are the two situations wherein one player swerves while the other does not.) Inthe biological literature, this game is known as Hawk–Dove. The earliest presentation of a form of the Hawk–Dove game was byJohn Maynard SmithandGeorge Pricein their paper, "The logic of animal conflict".[6]The traditional[4][7]payoff matrixfor the Hawk–Dove game is given in Figure 3, where V is the value of the contested resource, and C is the cost of an escalated fight. It is (almost always) assumed that the value of the resource is less than the cost of a fight, i.e., C > V > 0. If C ≤ V, the resulting game is not a game of Chicken but is instead aPrisoner's Dilemma. The exact value of the Dove vs. Dove payoff varies between model formulations. Sometimes the players are assumed to split the payoff equally (V/2 each); other times the payoff is assumed to be zero (since this is the expected payoff to awar of attritiongame, which is the presumed model for a contest decided by display duration). While the Hawk–Dove game is typically taught and discussed with the payoffs in terms of V and C, the solutions hold true for any matrix with the payoffs in Figure 4, where W > T > L > X.[7] Biologists have explored modified versions of the classic Hawk–Dove game to investigate a number of biologically relevant factors. These include adding variation inresource holding potential, and differences in the value of winning to the different players,[8]allowing the players to threaten each other before choosing moves in the game,[9]and extending the interaction to two plays of the game.[10] One tactic in the game is for one party to signal their intentions convincingly before the game begins. For example, if one party were to ostentatiously disable their steering wheel just before the match, the other party would be compelled to swerve.[11]This shows that, in some circumstances, reducing one's own options can be a good strategy. One real-world example is a protester who handcuffs themselves to an object, so that no threat can be made which would compel them to move (since they cannot move). Another example, taken from fiction, is found inStanley Kubrick'sDr. Strangelove. In that film, theRussianssought to deter American attack by building a "doomsday machine", a device that would trigger world annihilation if Russia was hit by nuclear weapons or if any attempt were made to disarm it. However, the Russians had planned to signal the deployment of the machine a few days after having set it up, which, because of an unfortunate course of events, turned out to be too late. Players may also make non-binding threats to not swerve. This has been modeled explicitly in the Hawk–Dove game. Such threats work, but must bewastefully costlyif the threat is one of two possible signals ("I will not swerve" or "I will swerve"), or they will be costless if there are three or more signals (in which case the signals will function as a game of "rock, paper, scissors").[9] This strategy was also successfully used in aviral video, in which a contestant ofGolden Balls(in a game also known as split or steal) insisted that he would steal, causing the other to split. In the video, both contestants chose split. The insistence forced the other contestant's hand and led to the best joint outcome, since if both had chosen steal, both contestants would have been left with nothing. All anti-coordination games have threeNash equilibria.
Two of these arepurecontingent strategy profiles, in which each player plays one of the pair of strategies, and the other player chooses the opposite strategy. The third one is amixedequilibrium, in which each playerprobabilisticallychooses between the two pure strategies. Either the pure or the mixed Nash equilibria will beevolutionarily stable strategiesdepending upon whetheruncorrelated asymmetriesexist. Thebest responsemapping for all 2x2 anti-coordination games is shown in Figure 5. The variablesxandyin Figure 5 are the probabilities of playing the escalated strategy ("Hawk" or "Don't swerve") for players X and Y respectively. The line in the graph on the left shows the optimum probability of playing the escalated strategy for player Y as a function ofx. The line in the second graph shows the optimum probability of playing the escalated strategy for player X as a function ofy(the axes have not been rotated, so thedependent variableis plotted on theabscissa, and theindependent variableis plotted on theordinate). The Nash equilibria are where the players' correspondences agree, i.e., cross. These are shown with points in the right-hand graph. The best response mappings agree (i.e., cross) at three points. The first two Nash equilibria are in the top left and bottom right corners, where one player chooses one strategy and the other player chooses the opposite strategy. The third Nash equilibrium is a mixed strategy which lies along the diagonal from the bottom left to top right corners. If the players do not know which one of them is which, then the mixed Nash is anevolutionarily stable strategy(ESS), as play is confined to the bottom left to top right diagonal line. Otherwise an uncorrelated asymmetry is said to exist, and the corner Nash equilibria are ESSes. The ESS for the Hawk–Dove game is a mixed strategy. Formal game theory is indifferent to whether this mixture is due to all players in a population choosing randomly between the two pure strategies (a range of possible instinctive reactions for a single situation) or whether the population is a polymorphic mixture of players dedicated to choosing a particular pure strategy (a single reaction differing from individual to individual). Biologically, these two options are strikingly different ideas. The Hawk–Dove game has been used as a basis for evolutionary simulations to explore which of these two modes of mixing ought to predominate in reality.[12] In both "Chicken" and "Hawk–Dove", the onlysymmetricNash equilibriumis themixed strategyNash equilibrium, where both individuals randomly choose between playing Hawk/Straight or Dove/Swerve. This mixed strategy equilibrium is often sub-optimal: both players would do better if they could coordinate their actions in some way. This observation has been made independently in two different contexts, with almost identical results.[13] Consider the version of "Chicken" pictured in Figure 6. Like all forms of the game, there are threeNash equilibria. The twopure strategyNash equilibria are (D,C) and (C,D). There is also amixed strategyequilibrium where each player Dares with probability 1/3. It results in expected payoffs of 14/3 = 4.667 for each player. Now consider a third party (or some natural event) that draws one of three cards labeled: (C,C), (D,C), and (C,D). This exogenous draw is assumed to be uniformly random over the three outcomes. After drawing the card, the third party informs the players of the strategy assigned to them on the card (butnotthe strategy assigned to their opponent).
Suppose a player is assignedD; assuming the other player plays their assigned strategy, that player would not want to deviate, since obeying yields 7 (the highest payoff possible). Suppose instead a player is assignedC. Then the other player has been assignedCwith probability 1/2 andDwith probability 1/2 (due to the nature of the exogenous draw). Theexpected utilityof Daring is 0(1/2) + 7(1/2) = 3.5 and the expected utility of chickening out is 2(1/2) + 6(1/2) = 4. So, the player would prefer to chicken out. Since neither player has an incentive to deviate from the drawn assignments, this probability distribution over the strategies is known as acorrelated equilibriumof the game. Notably, the expected payoff for this equilibrium is 7(1/3) + 2(1/3) + 6(1/3) = 5, which is higher than the expected payoff of the mixed strategy Nash equilibrium (these calculations are checked numerically in the sketch at the end of this article). Although there are three Nash equilibria in the Hawk–Dove game, the one which emerges as theevolutionarily stable strategy(ESS) depends upon the existence of anyuncorrelated asymmetryin the game (in the sense ofanti-coordination games). In order for row players to choose one strategy and column players the other, the players must be able to distinguish which role (column or row player) they have. If no such uncorrelated asymmetry exists then both players must choose the same strategy, and the ESS will be the mixing Nash equilibrium. If there is an uncorrelated asymmetry, then the mixing Nash is not an ESS, but the two pure, role-contingent, Nash equilibria are. The standard biological interpretation of this uncorrelated asymmetry is that one player is the territory owner, while the other is an intruder on the territory. In most cases, the territory owner plays Hawk while the intruder plays Dove. In this sense, the evolution of strategies in Hawk–Dove can be seen as the evolution of a sort of prototypical version of ownership. Game-theoretically, however, there is nothing special about this solution. The opposite solution, where the owner plays Dove and the intruder plays Hawk, is equally stable. In fact, this solution is present in a certain species of spider; when an invader appears the occupying spider leaves. In order to explain the prevalence of property rights over "anti-property rights" one must discover a way to break this additional symmetry.[13] Replicator dynamicsis a simple model of strategy change commonly used inevolutionary game theory. In this model, a strategy which does better than the average increases in frequency at the expense of strategies that do worse than the average. There are two versions of the replicator dynamics. In one version, there is a single population which plays against itself. In the other, there are two populations, and each population only plays against the other population (and not against itself). In the one population model, the only stable state is the mixed strategy Nash equilibrium. Every initial population proportion (except allHawkand allDove) converges to the mixed strategy Nash equilibrium where part of the population playsHawkand part of the population playsDove. (This occurs because the only ESS is the mixed strategy equilibrium.) In the two population model, this mixed point becomes unstable. In fact, the only stable states in the two population model correspond to the pure strategy equilibria, where one population is composed of allHawksand the other of allDoves. In this model, one population becomes the aggressive population while the other becomes passive.
This model is illustrated by the vector field pictured in Figure 7a. The one-dimensional vector field of the single-population model (Figure 7b) corresponds to the bottom-left to top-right diagonal of the two-population model. The single-population model presents a situation where no uncorrelated asymmetries exist, and so the best players can do is randomize their strategies. The two-population model provides such an asymmetry, and the members of each population will then use it to correlate their strategies. In the two-population model, one population gains at the expense of the other. Hawk–Dove and Chicken thus illustrate an interesting case where the qualitative results for the two different versions of the replicator dynamics differ wildly.[14] "Chicken" and "Brinkmanship" are often used synonymously in the context of conflict, but in the strict game-theoretic sense, "brinkmanship" refers to a strategic move designed to avert the possibility of the opponent switching to aggressive behavior. The move involves a credible threat of the risk of irrational behavior in the face of aggression. If player 1 unilaterally moves to A, a rational player 2 cannot retaliate, since (A, C) is preferable to (A, A). Only if player 1 has grounds to believe that there is sufficient risk that player 2 responds irrationally (usually by giving up control over the response, so that there is sufficient risk that player 2 responds with A) will player 1 retract and agree on the compromise. Like "Chicken", the "War of attrition" game models escalation of conflict, but the two differ in the form in which the conflict can escalate. Chicken models a situation in which the catastrophic outcome differs in kind from the agreeable outcome, e.g., if the conflict is over life and death. War of attrition models a situation in which the outcomes differ only in degree, such as a boxing match in which the contestants have to decide whether the ultimate prize of victory is worth the ongoing cost of deteriorating health and stamina. The Hawk–Dove game is the most commonly used game-theoretical model of aggressive interactions in biology.[15] The war of attrition is another very influential model of aggression in biology. The two models investigate slightly different questions. The Hawk–Dove game is a model of escalation and addresses the question of when an individual ought to escalate to dangerously costly physical combat. The war of attrition seeks to answer the question of how contests may be resolved when there is no possibility of physical combat. The war of attrition is an auction in which both players pay the lower bid (an all-pay second-price auction). The bids are assumed to be the duration for which each player is willing to persist in making a costly threat display. Both players accrue costs while displaying at each other; the contest ends when the individual making the lower bid quits, and both players will then have paid the lower bid. Chicken is a symmetrical 2x2 game with conflicting interests: the preferred outcome is to play Straight while the opponent plays Swerve. Similarly, the prisoner's dilemma is a symmetrical 2x2 game with conflicting interests: the preferred outcome is to Defect while the opponent plays Cooperate. PD is about the impossibility of cooperation while Chicken is about the inevitability of conflict.
Iterated play can solve PD but not Chicken.[16] Both games have a desirable cooperative outcome in which both players choose the less escalated strategy, Swerve-Swerve in the Chicken game and Cooperate-Cooperate in the prisoner's dilemma, such that players receive the Coordination payoff C (see tables below). The temptation away from this sensible outcome is toward a Straight move in Chicken and a Defect move in the prisoner's dilemma (generating the Temptation payoff, should the other player use the less escalated move). The essential difference between these two games is that in the prisoner's dilemma, the Cooperate strategy is dominated, whereas in Chicken the equivalent move is not dominated, since the outcome payoffs when the opponent plays the more escalated move (Straight in place of Defect) are reversed. The term "schedule chicken"[17] is used in project management and software development circles. The condition occurs when two or more areas of a product team claim they can deliver features at an unrealistically early date because each assumes the other teams are stretching their predictions even more than they are. This pretense continually moves forward past one project checkpoint to the next until feature integration begins or just before the functionality is actually due. The practice of "schedule chicken"[18] often results in contagious schedule slips due to the inter-team dependencies and is difficult to identify and resolve, as it is in the best interest of each team not to be the first bearer of bad news. The psychological drivers underlying the "schedule chicken" behavior in many ways mimic the hawk–dove or snowdrift model of conflict.[19]
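The equilibrium values quoted above for the Figure 6 version of the game can be checked numerically: the mixed Nash equilibrium at a Dare probability of 1/3 with expected payoff 14/3, the correlated-equilibrium payoff of 5, and the convergence of the one-population replicator dynamics to the mixed equilibrium. The short Python sketch below is illustrative only and is not part of the source article; the payoff matrix is inferred from the expected-utility calculations given in the text, and the step size and iteration count in the replicator loop are arbitrary choices.

# Illustrative sketch (not from the source article): checking the equilibrium
# values quoted in the text for the Figure 6 game. The payoff matrix is
# inferred from the expected-utility calculations above: Dare vs Dare = 0,
# Dare vs Chicken = 7, Chicken vs Dare = 2, Chicken vs Chicken = 6
# (row player's payoff; the game is symmetric).

DARE, CHICKEN = 0, 1
PAYOFF = [
    [0.0, 7.0],   # row plays Dare    against (Dare, Chicken)
    [2.0, 6.0],   # row plays Chicken against (Dare, Chicken)
]

def expected_payoff(strategy, p_opponent_dares):
    """Expected payoff of a pure strategy against an opponent who Dares with probability p."""
    p = p_opponent_dares
    return PAYOFF[strategy][DARE] * p + PAYOFF[strategy][CHICKEN] * (1 - p)

# 1. Mixed Nash equilibrium: the opponent's Dare probability makes the player
#    indifferent between the two pure strategies: 7(1 - p) = 2p + 6(1 - p), so p = 1/3.
p_star = 1.0 / 3.0
assert abs(expected_payoff(DARE, p_star) - expected_payoff(CHICKEN, p_star)) < 1e-9
print("mixed-equilibrium payoff:", expected_payoff(DARE, p_star))      # 14/3, about 4.667

# 2. Correlated equilibrium: a card is drawn uniformly from (C,C), (D,C), (C,D).
cards = [(CHICKEN, CHICKEN), (DARE, CHICKEN), (CHICKEN, DARE)]
print("correlated-equilibrium payoff:",
      sum(PAYOFF[r][c] for r, c in cards) / len(cards))                # 5.0

# 3. One-population replicator dynamics: the share x of Dare/Hawk players grows
#    when Dare earns more than the population average and shrinks otherwise.
x = 0.9   # any starting share strictly between 0 and 1
for _ in range(20000):
    f_dare = expected_payoff(DARE, x)
    f_chicken = expected_payoff(CHICKEN, x)
    average = x * f_dare + (1 - x) * f_chicken
    x += 0.01 * x * (f_dare - average)   # Euler step of the replicator equation
print("replicator-dynamics limit:", round(x, 4))                       # about 0.3333

The replicator loop is a plain Euler discretization of the replicator equation; any sufficiently small step size converges to the same one-third Hawk/Dare mixture described in the text.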
https://en.wikipedia.org/wiki/Chicken_(game)
The Christmas truce (German: Weihnachtsfrieden; French: Trêve de Noël; Dutch: Kerstbestand) was a series of widespread unofficial ceasefires along the Western Front of the First World War around Christmas 1914. The truce occurred five months after hostilities had begun. Lulls occurred in the fighting as armies ran out of men and munitions and commanders reconsidered their strategies following the stalemate of the Race to the Sea and the indecisive result of the First Battle of Ypres. In the week leading up to 25 December, French, German, and British soldiers crossed trenches to exchange seasonal greetings and talk. In some areas, men from both sides ventured into no man's land on Christmas Eve and Christmas Day to mingle and exchange food and souvenirs. There were joint burial ceremonies and prisoner swaps, while several meetings ended in carolling. Hostilities continued in some sectors, while in others the sides settled on little more than arrangements to recover bodies. The following year, a few units arranged ceasefires, but the truces were not nearly as widespread as in 1914; this was, in part, due to strongly worded orders from commanders prohibiting truces. Subsequently, soldiers themselves became less amenable to truce by 1916; the war had become increasingly bitter after the human losses suffered during the battles of 1915. The truces were not unique to the Christmas period and reflected a mood of "live and let live", where infantry close together would stop fighting and fraternise, engaging in conversation. In some sectors, there were occasional ceasefires to allow soldiers to go between the lines and recover wounded or dead comrades; in others, there was a tacit agreement not to shoot while men rested, exercised or worked in view of the enemy. The Christmas truces were particularly significant due to the number of men involved and the level of their participation—even in quiet sectors, dozens of men openly congregating in daylight was remarkable—and are often seen as a symbolic moment of peace and humanity amidst one of the most violent conflicts in human history. During the first eight weeks of World War I, French and British troops stopped the German attack through Belgium into France outside Paris at the First Battle of the Marne in early September 1914. The Germans fell back to the Aisne valley, where they dug in. In the First Battle of the Aisne, the Franco–British attacks were repulsed and both sides began digging trenches to economise on manpower and use the surplus to outflank, to the north, their opponents. In the Race to the Sea, the two sides made reciprocal outflanking manoeuvres and after several weeks, during which the British forces were withdrawn from the Aisne and sent north to Flanders, both sides ran out of room. By November, armies had built continuous lines of trenches running from the North Sea to the Swiss frontier.[1] Before Christmas 1914, there were several peace initiatives. The Open Christmas Letter was a public message for peace addressed "To the Women of Germany and Austria", signed by a group of 101 British women's suffragettes at the end of 1914.[2][3] Pope Benedict XV, on 7 December 1914, had begged for an official truce between the warring governments.[4] He asked "that the guns may fall silent at least upon the night the angels sang", which was refused by both sides.[5][6] Fraternisation—peaceful and sometimes friendly interactions between opposing forces—was a regular feature in quiet sectors of the Western Front.
In some areas, both sides would refrain from aggressive behaviour, while in other cases it extended to regular conversation or even visits from one trench to another.[7]On theEastern Front,Fritz Kreislerreported incidents of spontaneous truces and fraternisation between the Austro-Hungarians and Russians in the first few weeks of the war.[8] Truces between British and German units can be dated to early November 1914, around the time that the war of manoeuvre ended. Rations were brought up to the front line after dusk and soldiers on both sides noted a period of peace while they collected their food.[9]By 1 December, a British soldier could record a friendly visit from a German sergeant one morning "to see how we were getting on".[10]Relations between French and German units were generally more tense but the same phenomenon began to emerge. In early December, a German surgeon recorded a regular half-hourly truce each evening to recover dead soldiers for burial, during which French and German soldiers exchanged newspapers.[11]This behaviour was often challenged by officers; lieutenantCharles de Gaullewrote on 7 December of the "lamentable" desire of French infantrymen to leave the enemy in peace, while the commander of10th Army,Victor d'Urbal, wrote of the "unfortunate consequences" when men "become familiar with their neighbours opposite".[11]Other truces could be forced on both sides by bad weather, especially when trench lines flooded and these often lasted after the weather had cleared.[11][12] The proximity of trench lines made it easy for soldiers to shout greetings to each other. This may have been the most common method of arranging informal truces in 1914.[13]Men would frequently exchange news or greetings, helped by a common language; many German soldiers had lived in England, particularly London, and were familiar with the language and the society. Several British soldiers recorded instances of Germans asking about news from the football leagues, while other conversations could be as banal as discussions of the weather or as plaintive as messages for a sweetheart.[14]One unusual phenomenon that grew in intensity was music; in peaceful sectors, it was not uncommon for units to sing in the evenings, sometimes deliberately with an eye towards entertaining or gently taunting their opposite numbers. This shaded gently into more festive activity; in early December, SirEdward Hulseof theScots Guardswrote that he was planning to organise a concert party for Christmas Day, which would "give the enemy every conceivable form of song in harmony" in response to frequent choruses ofDeutschland Über Alles.[15] Roughly 100,000 British and German troops were involved in the informal cessations of hostility along the Western Front.[16]The Germans placed candles on their trenches and on Christmas trees, then continued the celebration by singing Christmas carols. The British responded by singing carols of their own. The two sides continued by shouting Christmas greetings to each other. Soon thereafter, there were excursions across No Man's Land, where small gifts were exchanged, such as food, tobacco, alcohol, and souvenirs such as buttons and hats. Theartilleryin the region fell silent. The truce also allowed a breathing spell during which recently killed soldiers could be brought back behind their lines by burial parties. Joint services were held. 
In many sectors the truce lasted through Christmas night, continuing until New Year's Day in others.[6] On Christmas Day, Brigadier-GeneralWalter Congreve, commander of the18th Infantry Brigade, stationed nearNeuve Chapelle, wrote a letter recalling that the Germans declared a truce for the day. One of his men bravely lifted his head above the parapet and others from both sides walked onto no man's land. Officers and men shook hands and exchanged cigarettes and cigars; one of his captains "smoked a cigar with the best shot in the German army", the latter no more than 18 years old. Congreve admitted he was reluctant to witness the truce for fear of German snipers.[17] Bruce Bairnsfather, who fought throughout the war, wrote: I wouldn't have missed that unique and weird Christmas Day for anything.... I spotted a German officer, some sort of lieutenant I should think, and being a bit of a collector, I intimated to him that I had taken a fancy to some of his buttons.... I brought out my wire clippers and, with a few deft snips, removed a couple of his buttons and put them in my pocket. I then gave him two of mine in exchange.... The last I saw was one of my machine gunners, who was a bit of an amateur hairdresser in civil life, cutting the unnaturally long hair of a docile Boche, who was patiently kneeling on the ground whilst the automatic clippers crept up the back of his neck.[18][19] Henry Williamson, a nineteen-year-old private in theLondon Rifle Brigade, wrote to his mother on Boxing Day: Dear Mother, I am writing from the trenches. It is 11 o'clock in the morning. Beside me is a coke fire, opposite me a 'dug-out' (wet) with straw in it. The ground is sloppy in the actual trench, but frozen elsewhere. In my mouth is a pipe presented by thePrincess Mary. In the pipe is tobacco. Of course, you say. But wait. In the pipe is German tobacco. Haha, you say, from a prisoner or found in a captured trench. Oh dear, no! From a German soldier. Yes a live German soldier from his own trench. Yesterday the British & Germans met & shook hands in the Ground between the trenches, & exchanged souvenirs, & shook hands. Yes, all day Xmas day, & as I write. Marvellous, isn't it?[20] Captain Sir Edward Hulse reported how the first interpreter he met from the German lines was fromSuffolkand had left his girlfriend and a 3.5 hp motorcycle. Hulse described a sing-song which "ended up with 'Auld lang syne' which we all, English, Scots, Irish, Prussians, Württenbergers, etc, joined in. It was absolutely astounding, and if I had seen it on a cinematograph film I should have sworn that it was faked!"[21] Captain Robert Miles,King's Shropshire Light Infantry, who was attached to theRoyal Irish Rifles, recalled in an edited letter that was published in theDaily Mailand theWellington Journal & Shrewsbury Newsin January 1915, following his death in action on 30 December 1914: Friday (Christmas Day). We are having the most extraordinary Christmas Day imaginable. A sort of unarranged and quite unauthorized but perfectly understood and scrupulously observed truce exists between us and our friends in front. The funny thing is it only seems to exist in this part of the battle line – on our right and left we can all hear them firing away as cheerfully as ever. The thing started last night – a bitter cold night, with white frost – soon after dusk when the Germans started shouting 'Merry Christmas, Englishmen' to us. 
Of course our fellows shouted back and presently large numbers of both sides had left their trenches, unarmed, and met in the debatable, shot-riddled, no man's land between the lines. Here the agreement – all on their own – came to be made that we should not fire at each other until after midnight tonight. The men were all fraternizing in the middle (we naturally did not allow them too close to our line) and swapped cigarettes and lies in the utmost good fellowship. Not a shot was fired all night. Of the Germans he wrote: "They are distinctly bored with the war.... In fact, one of them wanted to know what on earth we were doing here fighting them." The truce in that sector continued into Boxing Day; he commented about the Germans, "The beggars simply disregard all our warnings to get down from off their parapet, so things are at a deadlock. We can't shoot them in cold blood.... I cannot see how we can get them to return to business."[22] On Christmas Eve and Christmas Day (24 and 25 December) 1914,Alfred Anderson's unit of the 1st/5th Battalion of theBlack Watchwas billeted in a farmhouse away from the front line. In a later interview (2003), Anderson, the last known surviving Scottish veteran of the war, vividly recalled Christmas Day and said: I remember the silence, the eerie sound of silence. Only the guards were on duty. We all went outside the farm buildings and just stood listening. And, of course, thinking of people back home. All I'd heard for two months in the trenches was the hissing, cracking and whining of bullets in flight, machinegun fire and distant German voices. But there was a dead silence that morning, right across the land as far as you could see. We shouted 'Merry Christmas', even though nobody felt merry. The silence ended early in the afternoon and the killing started again. It was a short peace in a terrible war.[23] A German Lieutenant, Johannes Niemann, wrote "grabbed my binoculars and looking cautiously over the parapet saw the incredible sight of our soldiers exchanging cigarettes, schnapps and chocolate with the enemy".[24] General SirHorace Smith-Dorrien, commander of theII Corps, issued orders forbidding friendly communication with the opposing German troops.[16]Adolf Hitler, a corporal in the 16th Bavarian Reserve Infantry, was also an opponent of the truce.[16] In the Comines sector of the front there was an early fraternisation between German and French soldiers in December 1914, during a short truce, and there are at least two other testimonials from French soldiers of similar behaviour in sectors where German and French companies opposed each other.[25]Gervais Morillon wrote to his parents, "The Boches waved a white flag and shouted 'Kamarades, Kamarades, rendez-vous'. When we didn't move they came towards us unarmed, led by an officer. Although we are not clean they are disgustingly filthy. I am telling you this but don't speak of it to anyone. We must not mention it even to other soldiers". Gustave Berthier wrote "On Christmas Day the Boches made a sign showing they wished to speak to us. They said they didn't want to shoot. ... 
They were tired of making war, they were married like me, they didn't have any differences with the French but with the English".[26][27] On theYser Front, where German and Belgian troops faced each other in December 1914, a truce was arranged at the request of Belgian soldiers who wished to send letters back to their families over theGerman-occupied parts of Belgium.[28] Many accounts of the truce involve one or morefootballmatches played in no man's land. This was mentioned in some of the earliest reports, with a letter written by a doctor attached to theRifle Brigade, published inThe Timeson 1 January 1915, reporting "a football match... played between them and us in front of the trench".[31]Similar stories have been told over the years, often naming units or the score. Some accounts of the game bring in elements of fiction byRobert Graves, a British poet and writer (and an officer on the front at the time)[32]who reconstructed the encounter in a story published in 1962; in Graves's version, the score was 3–2 to the Germans.[31] The truth of the accounts has been disputed by some historians. In 1984, Malcolm Brown and Shirley Seaton concluded that there were probably attempts to play organised matches that failed owing to the state of the ground, but that the contemporary reports were either hearsay or refer to kick-abouts with made-up footballs such as a bully-beef tin.[33]Chris Baker, former chairman of theWestern Front Associationand author ofThe Truce: The Day the War Stopped, was also sceptical but says that although there is little evidence the most likely place that an organised match could have taken place was near the village ofMessines: "There are two references to a game being played on the British side, but nothing from the Germans. If somebody one day found a letter from a German soldier who was in that area, then we would have something credible".[34][35]Lieutenant Kurt Zehmisch of the 134th Saxon Infantry Regiment said that the English "brought a soccer ball from their trenches, and pretty soon a lively game ensued. How marvellously wonderful, yet how strange it was".[36]In 2011Mike Dashconcluded that "there is plenty of evidence that football was played that Christmas Day—mostly by men of the same nationality but in at least three or four places between troops from the opposing armies".[31] Many units were reported in contemporary accounts to have taken part in games: Dash listed the 133rd Royal Saxon Regiment pitched against "Scottish troops"; the Argyll and Sutherland Highlanders against unidentified Germans (with the Scots reported to have won 4–1); the Royal Field Artillery against "Prussians and Hanovers" nearYpresand theLancashire FusiliersnearLe Touquet, with the detail of abully beefration tin as the ball.[31]One recent writer has identified 29 reports of football though does not give substantive details.[37]ColonelJ. E. B. Seelyrecorded in his diary for Christmas Day that he had been "Invited to football match between Saxons and English on New Year's Day", but this does not appear to have taken place.[38] On the Eastern Front, the first move originated from Austro-Hungarian commanders, at some uncertain level of the military hierarchy. 
The Russians responded positively and soldiers eventually met in no man's land.[39] The truces were not reported for a week, eventually being publicised to the masses when an unofficial press embargo was broken byThe New York Times, published in the neutral United States, on 31 December.[40][41][42]The British papers quickly followed, printing numerous first-hand accounts from soldiers in the field, taken from letters home to their families and editorials on "one of the greatest surprises of a surprising war". By 8 January 1915, pictures had made their way to the press, and theMirrorandSketchprinted front-page photographs of British and German troops mingling and singing between the lines. The tone of the reporting was strongly positive, with theTimesendorsing the "lack of malice" felt by both sides and theMirrorregretting that the "absurdity and the tragedy" would begin again.[43]Author Denis Winter argues that then "the censor had intervened" to prevent information about the spontaneous ceasefire from reaching the public and that the real dimension of the truce "only really came out when Captain Chudleigh in theTelegraphwrote after the war."[44] Coverage in Germany was less extensive than that of the British press,[45]while in France,press censorshipensured that the only word that spread of the truce came from soldiers at the front or first-hand accounts told by wounded men in hospitals.[46]The press was eventually forced to respond to the growing rumours by reprinting a government notice that fraternising with the enemy constituted treason. In early January, an official statement on the truce was published, claiming it was restricted to the British sector of the front and amounted to little more than an exchange of songs, which quickly degenerated into shooting.[47] The press of neutral Italy published a few articles on the events of the truce, usually reporting the articles of the foreign press.[48]On 30 December 1914Corriere della Seraprinted a report about fraternisation between the opposing trenches.[49]TheFlorentinenewspaperLa Nazionepublished a first-hand account about a football match played in no man's land.[50]In Italy, the lack of interest in the truce was probably due to the occurrence of other events, such as theItalian occupation of Vlorë, the debut of theGaribaldi Legionon the front of the Argonne and theearthquake in Avezzano. After 1914, sporadic attempts were made at seasonal truces; on the Western Front, for example, a German unit attempted to leave their trenches under a flag of truce onEaster Sunday1915 but were warned off by the British opposite them. At Easter 1915 on the Eastern Front there were truces between Orthodox troops of opposing sides; the Bulgarian writerYordan Yovkov, serving as an officer near the Greek border at theMesta river, witnessed one. It inspired his short story "Holy Night", translated into English in 2013 by Krastu Banaev.[51]In November, aSaxonunit briefly fraternised with aLiverpoolbattalion.[citation needed] On 24 May 1915,Australian and New Zealand Army Corps(ANZAC) and troops of theOttoman EmpireatGallipoliagreed to a9-hour truceto retrieve and bury their dead, during which opposing troops "exchang(ed) smiles and cigarettes".[52] In December 1915, there were orders by the Allied commanders to forestall any repeat of the previous Christmas truce. 
Units were encouraged to mount raids and harass the opposing line, whilst communicating with the enemy was discouraged byartillery barragesalong the front line throughout the day; a small number of brief truces occurred despite the prohibition.[53][54]On the German side, a general order from 29 December 1914 already forbade fraternisation with the enemy, warning German troops that "every approach to the enemy...will be punished as treason".[55] Richard Schirrmannfrom Altena (North Rhine Westphalia, Germany), the founder of the "Jugendherberge" and a soldier in a German regiment holding a position on the Bernhardstein, one of theVosges Mountains, wrote an account of events in December 1915: "When the Christmas bells sounded in the villages of the Vosges behind the lines... something fantastically unmilitary occurred. German and French troops spontaneously made peace and ceased hostilities; they visited each other through disused trench tunnels, and exchanged wine, cognac and cigarettes forPumpernickel(Westphalian black bread), biscuits and ham. This suited them so well that they remained good friends even after Christmas was over". He was separated from the French troops by a narrow No Man's Land and described the landscape "Strewn with shattered trees, the ground ploughed up by shellfire, a wilderness of earth, tree-roots and tattered uniforms". Military discipline was soon restored but Schirrmann pondered over the incident and whether "thoughtful young people of all countries could be provided with suitable meeting places where they could get to know each other". He founded theGerman Youth Hostel Associationin 1919.[56] An account byLlewelyn Wyn Griffith, recorded that after a night of exchanging carols, dawn on Christmas Day saw a "rush of men from both sides... [and] a feverish exchange of souvenirs" before the men were quickly called back by their officers, with offers to hold a ceasefire for the day and to play a football match. It came to nothing, as the brigade commander threatened repercussions for lack of discipline and insisted on a resumption of firing in the afternoon.[57]Another member of Griffith's battalion,Bertie Felstead, later recalled that one man had produced a football, resulting in "a free-for-all; there could have been 50 on each side", before they were ordered back.[58][59]Another unnamed participant reported in a letter home: "The Germans seem to be very nice chaps, and said they were awfully sick of the war."[60]In the evening, according to Robert Keating "The Germans were sending up star lights and singing – they stopped, so we cheered them & we began singingLand of Hope and Glory–Men of Harlechet cetera – we stopped and they cheered us. So we went on till the early hours of the morning".[61] In an adjacent sector, a short truce to bury the dead between the lines led to repercussions; a company commander, SirIain Colquhounof the Scots Guards, wascourt-martialledfor defying standing orders to the contrary. While he was found guilty and reprimanded, the punishment was annulled by GeneralDouglas Haig, and Colquhoun remained in his position; the official leniency may perhaps have been because his wife's uncle wasH. H. 
Asquith, the Prime Minister.[62][63] In December 1916 and 1917, German overtures to the British for truces were recorded without any success.[64]In some French sectors, singing and an exchange of thrown gifts was occasionally recorded, though these may simply have reflected a seasonal extension of the live-and-let-live approach common in the trenches.[65] Although the popular tendency has been to see the December 1914 Christmas Truces as unique and of romantic rather than political significance, they have also been interpreted as part of the widespread spirit of non-cooperation with the war.[66]In his book on trench warfare, Tony Ashworth described the 'live and let live system'. Complicated local truces and agreements not to fire at each other were negotiated by men along the front throughout the war. These often began with agreement not to attack each other at tea, meal or washing times. In some places tacit agreements became so common that sections of the front would see few casualties for extended periods of time. This system, Ashworth argues, 'gave soldiers some control over the conditions of their existence'.[67]The December 1914 Christmas Truces can therefore be seen as not unique but as the most dramatic example of the spirit of non-cooperation with the war that included refusals to fight, unofficial truces,mutinies, strikes and peace protests. A Christmas Truce memorial was unveiled inFrelinghien, France, on 11 November 2008. At the spot where their regimental ancestors came out from their trenches to play football on Christmas Day 1914, men from the 1st Battalion, theRoyal Welch Fusiliersplayed a football match with the German Battalion 371. The Germans won 2–1.[82]On 12 December 2014, a memorial was unveiled at theNational Memorial Arboretumin Staffordshire, England byPrince William, Duke of Cambridgeand theEngland national football teammanagerRoy Hodgson.[83]TheFootball Remembersmemorial was designed by a ten-year-old schoolboy, Spencer Turner, after a UK-wide competition.[83] The Midway Village inRockford, Illinois, United States, has hosted re-enactments of the Christmas Truce.[84] The last known living participant of the Christmas Truce wasAlfred Anderson. He served as part of the 1/5th Angus and Dundee Battalion of theBlack Watch (Royal Highland Regiment). He died on 21 November 2005, aged 109, nearly 91 years after the famous truce.[85]
https://en.wikipedia.org/wiki/Christmas_truce
Deterrence theory refers to the scholarship and practice of how threats of using force by one party can convince another party to refrain from initiating some other course of action.[1] The topic gained increased prominence as a military strategy during the Cold War with regard to the use of nuclear weapons, and it is related to but distinct from the concept of mutual assured destruction, according to which a full-scale nuclear attack on a power with second-strike capability would devastate both parties. The central problem of deterrence revolves around how to credibly threaten military action or nuclear punishment against the adversary despite its costs to the deterrer.[2] Deterrence in an international relations context is the application of deterrence theory to avoid conflict. Deterrence is widely defined as any use of threats (implicit or explicit) or limited force intended to dissuade an actor from taking an action (i.e. maintain the status quo).[3][4] Deterrence is unlike compellence, which is the attempt to get an actor (such as a state) to take an action (i.e. alter the status quo).[5][6][4] Both are forms of coercion. Compellence has been characterized as harder to implement successfully than deterrence.[6][7] Deterrence also tends to be distinguished from defense, or the use of full force in wartime.[3] Deterrence is most likely to be successful when a prospective attacker believes that the probability of success is low and the costs of attack are high.[8] Central problems of deterrence include the credible communication of threats[9][4] and assurance.[10] Deterrence does not necessarily require military superiority.[11][12] "General deterrence" is considered successful when an actor who might otherwise take an action refrains from doing so due to the consequences that the deterrer is perceived as likely to impose.[13] "Immediate deterrence" is considered successful when an actor seriously contemplating immediate military force or action refrains from doing so.[13] Scholars distinguish between "extended deterrence" (the protection of allies) and "direct deterrence" (protection of oneself).[12][14] Rational deterrence theory holds that an attacker will be deterred if they believe that:[15] (probability of the deterrer carrying out the deterrent threat × costs if the threat is carried out) > (probability of the attacker accomplishing the action × benefits of the action). This model is frequently simplified in game-theoretic terms as: Costs × P(Costs) > Benefits × P(Benefits). Some historians have argued that during World War II, deterrence prevented the Western Allies and the Axis from engaging in extensive chemical warfare, as had occurred in World War I.
Nonetheless, Nazi Germany used chemical weapons during the Siege of Sevastopol, Siege of Odessa, and Battle of the Kerch Peninsula, while Imperial Japan frequently used chemical weapons against Chinese troops.[citation needed] Conversely, during the Nuremberg trials, Hermann Göring stated that initiating an exchange of chemical weapons during Operation Overlord would have immobilized the Wehrmacht, which relied widely on horse-drawn transport and for which a suitable gas mask for horses had not been designed.[16] While the concept of deterrence precedes the Cold War, it was during the Cold War that the concept evolved into a clearly articulated objective in strategic planning and diplomacy, with considerable analysis by scholars.[17] Most of the innovative work on deterrence theory occurred from the late 1940s to the mid-1960s.[18] Historically, scholarship on deterrence has tended to focus on nuclear deterrence.[19] Since the end of the Cold War, deterrence scholarship has been extended to areas that are not specifically about nuclear weapons.[4] NATO was founded in 1949 with deterring aggression as one of its goals.[20] A distinction is sometimes made between nuclear deterrence and "conventional deterrence."[21][22][23][24] The two most prominent deterrent strategies are "denial" (denying the attacker the benefits of attack) and "punishment" (inflicting costs on the attacker).[11] The lesson of Munich, where appeasement failed, also contributes to deterrence theory. In the words of scholars Frederik Logevall and Kenneth Osgood, "Munich and appeasement have become among the dirtiest words in American politics, synonymous with naivete and weakness, and signifying a craven willingness to barter away the nation's vital interests for empty promises." They claimed that the success of US foreign policy often depends upon a president withstanding "the inevitable charges of appeasement that accompany any decision to negotiate with hostile powers."[25] By November 1945, General Curtis LeMay, who led American air raids on Japan during World War II, was thinking about how the next war would be fought. He said in a speech that month to the Ohio Society of New York that since "No air attack, once it is launched, can be completely stopped", his country needed an air force that could immediately retaliate: "If we are prepared it may never come. It is not immediately conceivable that any nation will dare to attack us if we are prepared".[26] In pursuit of nuclear deterrence, the superpowers, the USSR and the US, engaged in a nuclear arms race. Warheads evolved from fission weapons to thermonuclear weapons and were extensively miniaturized for both strategic and tactical use. Nuclear weapons delivery was equally important, as reflected in the perceived bomber gap and missile gap. Deterrence was a primary factor in the ultimate proliferation of nuclear weapons to ten nations in total. Generally, this took the form of deterring a threat perceived from a nearby, recently nuclear-armed neighbor. In the case of Israel and South Africa, deterrence was against the threat of conventional attack.[citation needed] Additionally, chemical weapons were a component of deterrence for both sides, and large stockpiles were maintained until their destruction began following the 1993 Chemical Weapons Convention.
Offensive biological weapons programs were pursued by both countries in the first two decades of the Cold War, but the United States program was ended by President Richard Nixon in 1969.[citation needed] The concept of deterrence can be defined as the use of threats or limited force by one party to convince another party to refrain from initiating some course of action.[27][3] In Arms and Influence (1966), Schelling offers a broader definition of deterrence, defining it as "to prevent from action by fear of consequences."[6] Glenn Snyder also offers a broad definition of deterrence, arguing that deterrence involves both the threat of sanction and the promise of reward.[28] A threat serves as a deterrent to the extent that it convinces its target not to carry out the intended action because of the costs and losses that target would incur. In international security, a policy of deterrence generally refers to threats of military retaliation directed by the leaders of one state to the leaders of another in an attempt to prevent the other state from resorting to the use of military force in pursuit of its foreign policy goals. As outlined by Huth,[27] a policy of deterrence can fit into two broad categories: preventing an armed attack against a state's own territory (known as direct deterrence) or preventing an armed attack against another state (known as extended deterrence). Situations of direct deterrence often occur if there is a territorial dispute between neighboring states in which major powers like the United States do not directly intervene. On the other hand, situations of extended deterrence often occur when a great power becomes involved. The latter case has generated most interest in the academic literature. Building on the two broad categories, Huth goes on to outline that deterrence policies may be implemented in response to a pressing short-term threat (known as immediate deterrence) or as a strategy to prevent a military conflict or short-term threat from arising (known as general deterrence). A successful deterrence policy must be considered not only in military terms but also in political terms: international relations, foreign policy, and diplomacy. In military terms, deterrence success refers to preventing state leaders from issuing military threats and actions that escalate peacetime diplomatic and military co-operation into a crisis or militarized confrontation that threatens armed conflict and possibly war. The prevention of crises or wars, however, is not the only aim of deterrence. In addition, defending states must be able to resist the political and military demands of a potential attacking nation. If armed conflict is avoided at the price of diplomatic concessions to the maximum demands of the potential attacking nation under the threat of war, it cannot be claimed that deterrence has succeeded. Furthermore, as Jentleson et al.[29] argue, two key sets of factors are important for successful deterrence: a defending-state strategy that balances credible coercion and deft diplomacy, consistent with the three criteria of proportionality, reciprocity, and coercive credibility, while minimizing international and domestic constraints; and the extent of an attacking state's vulnerability, as shaped by its domestic political and economic conditions.
In broad terms, a state wishing to implement a strategy of deterrence is most likely to succeed if the costs of noncompliance that it can impose on and the benefits of compliance it can offer to another state are greater than the benefits of noncompliance and the costs of compliance. Deterrence theory holds that nuclear weapons are intended to deter other states from attacking with their nuclear weapons, through the promise of retaliation and possiblymutually assured destruction. Nuclear deterrence can also be applied to an attack by conventional forces. For example, the doctrine ofmassive retaliationthreatened to launch US nuclear weapons in response to Soviet attacks. A successful nuclear deterrent requires a country to preserve its ability to retaliate by responding before its own weapons are destroyed or ensuring asecond-strikecapability. A nuclear deterrent is sometimes composed of anuclear triad, as in the case of the nuclear weapons owned by theUnited States,Russia,ChinaandIndia. Other countries, such as theUnited KingdomandFrance, have only sea-based and air-based nuclear weapons. Jentlesonet al.provides further detail in relation to those factors.[29]Proportionality refers to the relationship between the defending state's scope and nature of the objectives being pursued and the instruments available for use to pursue them. The more the defending state demands of another state, the higher that state's costs of compliance and the greater need for the defending state's strategy to increase the costs of noncompliance and the benefits of compliance. That is a challenge, as deterrence is by definition a strategy of limited means. George (1991) goes on to explain that deterrence sometimes goes beyond threats to the actual use of military force, but if force is actually used, it must be limited and fall short of full-scale use to succeed.[30] The main source of disproportionality is an objective that goes beyond policy change toregime change, which has been seen in Libya, Iraq, and North Korea. There, defending states have sought to change the leadership of a state and to policy changes relating primarily to their nuclear weapons programs. Secondly, Jentlesonet al.[29]outlines that reciprocity involves an explicit understanding of linkage between the defending state's carrots and the attacking state's concessions. The balance lies in not offering too little, too late or for too much in return and not offering too much, too soon, or for too little return. Finally, coercive credibility requires that in addition to calculations about costs and benefits of co-operation, the defending state convincingly conveys to the attacking state that failure to co-operate has consequences. Threats, uses of force, and other coercive instruments such aseconomic sanctionsmust be sufficiently credible to raise the attacking state's perceived costs of noncompliance. A defending state having a superior military capability or economic strength in itself is not enough to ensure credibility. Indeed, all three elements of a balanced deterrence strategy are more likely to be achieved if other major international actors like theUNorNATOare supportive, and opposition within the defending state's domestic politics is limited. 
The other important considerations outlined by Jentlesonet al.[29]that must be taken into consideration is the domestic political and economic conditions in the attacking state affecting its vulnerability to deterrence policies and the attacking state's ability to compensate unfavourable power balances. The first factor is whether internal political support and regime security are better served by defiance, or there are domestic political gains to be made from improving relations with the defending state. The second factor is an economic calculation of the costs that military force, sanctions, and other coercive instruments can impose and the benefits that trade and other economic incentives may carry. That is partly a function of the strength and flexibility of the attacking state's domestic economy and its capacity to absorb or counter the costs being imposed. The third factor is the role of elites and other key domestic political figures within the attacking state. To the extent that such actors' interests are threatened with the defending state's demands, they act to prevent or block the defending state's demands. One approach to theorizing about deterrence has entailed the use of rational choice and game-theoretic models of decision making (seegame theory). Rational deterrence theory entails:[31] Deterrence theorists have consistently argued that deterrence success is more likely if a defending state's deterrent threat is credible to an attacking state. Huth[27]outlines that a threat is considered credible if the defending state possesses both the military capabilities to inflict substantial costs on an attacking state in an armed conflict, and the attacking state believes that the defending state is resolved to use its available military forces. Huth[27]goes on to explain the four key factors for consideration under rational deterrence theory: the military balance, signaling and bargaining power, reputations for resolve, interests at stake. The American economistThomas Schellingbrought his background in game theory to the subject of studying international deterrence. Schelling's (1966) classic work on deterrence presents the concept that military strategy can no longer be defined as the science of military victory. Instead, it is argued that military strategy was now equally, if not more, the art of coercion, intimidation and deterrence.[33]Schelling says the capacity to harm another state is now used as a motivating factor for other states to avoid it and influence another state's behavior. To be coercive or deter another state, violence must be anticipated and avoidable by accommodation. It can therefore be summarized that the use of the power to hurt as bargaining power is the foundation of deterrence theory and is most successful when it is held in reserve.[33] In an article celebrating Schelling's Nobel Memorial Prize for Economics,[34]Michael Kinsley,Washington Postop‑edcolumnist and one of Schelling's former students, anecdotally summarizes Schelling's reorientation of game theory thus: "[Y]ou're standing at the edge of a cliff, chained by the ankle to someone else. You'll be released, and one of you will get a large prize, as soon as the other gives in. How do you persuade the other guy to give in, when the only method at your disposal—threatening to push him off the cliff—would doom you both? Answer: You start dancing, closer and closer to the edge. That way, you don't have to convince him that you would do something totally irrational: plunge him and yourself off the cliff. 
You just have to convince him that you are prepared to take a higher risk than he is of accidentally falling off the cliff. If you can do that, you win." Deterrence is often directed against state leaders who have specific territorial goals that they seek to attain either by seizing disputed territory in a limited military attack or by occupying disputed territory after the decisive defeat of the adversary's armed forces. In either case, the strategic orientation of potential attacking states generally is for the short term and is driven by concerns about military cost and effectiveness. For successful deterrence, defending states need the military capacity to respond quickly and strongly to a range of contingencies. Deterrence often fails if either a defending state or an attacking state underestimates or overestimates the other's ability to undertake a particular course of action. The central problem for a state that seeks to communicate a credible deterrent threat by diplomatic or military actions is that all defending states have an incentive to act as if they are determined to resist an attack in the hope that the attacking state will back away from military conflict with a seemingly resolved adversary. If all defending states have such incentives, potential attacking states may discount statements made by defending states along with any movement of military forces as merely bluffs. In that regard, rational deterrence theorists have argued that costly signals are required to communicate the credibility of a defending state's resolve. Those are actions and statements that clearly increase the risk of a military conflict and also increase the costs of backing down from a deterrent threat. States that bluff are unwilling to cross a certain threshold of threat and military action for fear of committing themselves to an armed conflict. There are three different arguments that have been developed in relation to the role of reputations in influencing deterrence outcomes. The first argument focuses on a defending state's past behavior in international disputes and crises, which creates strong beliefs in a potential attacking state about the defending state's expected behaviour in future conflicts. The credibilities of a defending state's policies are arguably linked over time, and reputations for resolve have a powerful causal impact on an attacking state's decision whether to challenge either general or immediate deterrence. The second approach argues that reputations have a limited impact on deterrence outcomes because the credibility of deterrence is heavily determined by the specific configuration of military capabilities, interests at stake, and political constraints faced by a defending state in a given situation of attempted deterrence. The argument of that school of thought is that potential attacking states are not likely to draw strong inferences about a defending states resolve from prior conflicts because potential attacking states do not believe that a defending state's past behaviour is a reliable predictor of future behavior. The third approach is a middle ground between the first two approaches and argues that potential attacking states are likely to draw reputational inferences about resolve from the past behaviour of defending states only under certain conditions. 
The insight is the expectation that decisionmakers use only certain types of information when drawing inferences about reputations, and an attacking state updates and revises its beliefs when a defending state's unanticipated behavior cannot be explained by case-specific variables. An example shows that the problem extends to the perception of the third parties as well as main adversaries and underlies the way in which attempts at deterrence can fail and even backfire if the assumptions about the others' perceptions are incorrect.[35] Although costly signaling and bargaining power are more well established arguments in rational deterrence theory, the interests of defending states are not as well known. Attacking states may look beyond the short-term bargaining tactics of a defending state and seek to determine what interests are at stake for the defending state that would justify the risks of a military conflict. The argument is that defending states that have greater interests at stake in a dispute are more resolved to use force and more willing to endure military losses to secure those interests. Even less well-established arguments are the specific interests that are more salient to state leaders such as military interests and economic interests. Furthermore, Huth[27]argues that both supporters and critics of rational deterrence theory agree that an unfavorable assessment of the domestic and international status quo by state leaders can undermine or severely test the success of deterrence. In a rational choice approach, if the expected utility of not using force is reduced by a declining status quo position, deterrence failure is more likely since the alternative option of using force becomes relatively more attractive. International relations scholars Dan Reiter and Paul Poast have argued that so-called "tripwires" do not deter aggression.[36]Tripwires entail that small forces are deployed abroad with the assumption that an attack on them will trigger a greater deployment of forces.[36]Dan Altman has argued that tripwires do work to deter aggression, citing the Western deployment of forces to Berlin in 1948–1949 to deter Soviet aggression as a successful example.[37] A 2022 study by Brian Blankenship and Erik Lin-Greenberg found that high-resolve, low-capability signals (such as tripwires) were not viewed as more reassuring to allies than low-resolve, high-capability alternatives (such as forces stationed offshore). Their study cast doubt on the reassuring value of tripwires.[38] In 1966, Schelling[33]is prescriptive in outlining the impact of the development of nuclear weapons in the analysis of military power and deterrence. In his analysis, before the widespread use of assured second strike capability, or immediate reprisal, in the form ofSSBNsubmarines, Schelling argues thatnuclear weaponsgive nations the potential to destroy their enemies but also the rest of humanity without drawing immediate reprisal because of the lack of a conceivable defense system and the speed with which nuclear weapons can be deployed. A nation's credible threat of such severe damage empowers their deterrence policies and fuels political coercion and military deadlock, which can produce proxy warfare. According toKenneth Waltz, there are three requirements for successful nuclear deterrence:[39] Thestability–instability paradoxis a key concept in rational deterrence theory. 
It states that when two countries each have nuclear weapons, the probability of a direct war between them greatly decreases, but the probability of minor or indirect conflicts between them increases.[40][41][42]This occurs because rational actors want to avoid nuclear wars, and thus they neither start major conflicts nor allow minor conflicts to escalate into major conflicts—thus making it safe to engage in minor conflicts. For instance, during theCold WartheUnited Statesand theSoviet Unionnever engaged each other in warfare, but foughtproxy warsinKorea,Vietnam,Angola, theMiddle East,NicaraguaandAfghanistanand spent substantial amounts of money and manpower on gaining relative influence over thethird world.[43] Bernard Brodiewrote in 1959 that a credible nuclear deterrent must be always ready.[44][a]An extended nuclear deterrence guarantee is also called anuclear umbrella.[45] Scholars have debated whether having a superior nuclear arsenal provides a deterrent against other nuclear-armed states with smaller arsenals. Matthew Kroenig has argued that states with nuclear superiority are more likely to win nuclear crises,[46][47]whereas Todd Sechser, Matthew Fuhrmann and David C. Logan have challenged this assertion.[48][49][50]A 2023 study found that a state with nuclear weapons is less likely to be targeted by non-nuclear states, but that a state with nuclear weapons is not less likely to target other nuclear states in low-level conflict.[51]A 2022 study by Kyungwon Suh suggests that nuclear superiority may not reduce the likelihood that nuclear opponents will initiate nuclear crises.[52] Proponents of nuclear deterrence theory argue that newly nuclear-armed states may pose a short- or medium-term risk, but that "nuclear learning" occurs over time as states learn to live with new nuclear-armed states.[53][54]Mark S. Bell and Nicholas L. Miller have however argued that there is a weak theoretical and empirical basis for notions of "nuclear learning."[55] The US policy of deterrence during theCold Warunderwent significant variations. The early stages of the Cold War were generally characterized by thecontainmentof communism, an aggressive stance on behalf of the US especially ondeveloping nationsunder itssphere of influence. The period was characterized by numerousproxy warsthroughout most of the globe, particularly Africa, Asia, Central America, and South America. One notable conflict was theKorean War.George F. Kennan, who is taken to be the founder of this policy in hisLong Telegram, asserted that he never advocated military intervention, merely economic support, and that his ideas were misinterpreted as espoused by the general public. With theUS drawdownfrom Vietnam, the normalization of US relations with China, and theSino-Soviet Split, the policy of containment was abandoned and a new policy ofdétentewas established, with peaceful co-existence was sought between the United States and the Soviet Union. Although all of those factors contributed to this shift, the most important factor was probably the rough parity achieved in stockpiling nuclear weapons with the clear capability ofmutual assured destruction(MAD). Therefore, the period of détente was characterized by a general reduction in the tension between the Soviet Union and the United States and a thawing of the Cold War, which lasted from the late 1960s until the start of the 1980s. 
The doctrine of mutual nuclear deterrence then characterized relations between the United States and the Soviet Union and relations with Russia until the onset of theNew Cold Warin the early 2010s. Since then, relations have been less clear. A third shift occurred with US PresidentRonald Reagan's arms build-up during the 1980s. Reagan attempted to justify the policy by citing concerns about growing Soviet influence in Latin America and the post-1979revolutionarygovernment ofIran. Similar to the old policy of containment, the US funded several proxy wars, including support forSaddam HusseinofIraqduring theIran–Iraq War,[56]support for themujahideeninAfghanistan, who were fighting against the Soviet occupation, and several anticommunist movements in Latin America, such as those seeking to overthrow theSandinistagovernment inNicaragua. The funding of theContrasin Nicaragua led to theIran-Contra Affair, while overt support led to a ruling from theInternational Court of Justiceagainst the United States inNicaragua v. United States. The final expression of the full impact of deterrence during the Cold War can be seen in the agreement between Reagan andMikhail Gorbachevin 1985. They "agreed that a nuclear war cannot be won and must never be fought. Recognizing that any conflict between the USSR and the U.S. could have catastrophic consequences, they emphasized the importance of preventing any war between them, whether nuclear or conventional. They will not seek to achieve military superiority." With the breakup of the Soviet Union and the spread of nuclear technology to nations beyond the United States and Russia, the concept of deterrence took on a broader multinational dimension. The US policy on deterrence after the Cold War was outlined in 1995 in the document called "Essentials of Post–Cold War Deterrence".[57]It explains that, while relations with Russia continue to follow the traditional characteristics of MAD, US policy of deterrence towards nations with minor nuclear capabilities should use threats of immense retaliation (or evenpre-emptive action) to ensure that they do not threaten the United States, its interests, or allies. The document explains that such threats must also be used to ensure that nations without nuclear technology refrain from developing nuclear weapons and that a universal ban precludes any nation from maintainingchemicalorbiological weapons. The current tensions with Iran and North Korea over their nuclear programs are caused partly by the continuation of the policy of deterrence. By the beginning of the2022 Russian invasion of Ukraine, many western hawks expressed the view that deterrence worked in that war but only in one way – in favor of Russia. Former US national security advisorJohn Boltonsaid: Deterrence is working in the Ukraine crisis, just not for the right side. The United States and its allies failed to deter Russia from invading. The purpose of deterrence strategy is to prevent the conflict entirely, and there Washington failed badly. On the other hand, Russian deterrence is enjoying spectacular success. Russia has convinced the West that even a whisper of NATO military action in Ukraine would bring disastrous consequences. 
Putin threatens, blusters, uses the word “nuclear,” and the West wilts.[58] WhenElon Muskprevented Ukraine from carrying outdrone attackson the RussianBlack Sea fleetby declining to enable the neededStarlink communications in Crimea,[59]Anne Applebaumargued Musk had been deterred by Russia after the country's ambassador warned him anattack on Crimeawould be met with a nuclear response.[60]Later Ukrainian attacks on the same fleet, carried out using a different communications system, had a deterrent effect of their own, this time on the Russian Navy.[60] Timo S. Koster, who served at NATO as Director of Defence Policy & Capabilities, similarly argued: A massacre is taking place in Europe and the strongest military alliance in the world is staying out of it. We are deterred and Russia is not.[61]Philip Breedlove, a retired four-star U.S. Air Force general and a formerSACEUR, said that Western fears about nuclear weapons and World War III have left the West "fully deterred" and Putin "completely undeterred." The West has "ceded the initiative to the enemy."[62]No attempt was made by NATO to deter Moscow with the threat of military force, another expert observed. On the contrary, it was Russia’s deterrence that proved to be successful.[63] Since the early 2000s, there has been an increased focus on cyber deterrence. Cyber deterrence has been given two meanings.[64] Scholars have debated how cyber capabilities alter traditional understandings of deterrence, given that it may be harder to attribute responsibility for cyber attacks, the barriers to entry may be lower, the risks and costs may be lower for actors who conduct cyber attacks, it may be harder to signal and interpret intentions, offense may have an advantage over defense, and weak actors and non-state actors can develop considerable cyber capabilities.[64][65][66][67]Scholars have also debated the feasibility of launching highly damaging cyber attacks and engaging in destructive cyber warfare, with most scholars expressing skepticism that cyber capabilities have enhanced the ability of states to launch highly destructive attacks.[68][69][70]The most prominent cyber attack to date is theStuxnetattack on Iran's nuclear program.[68][69]By 2019, the only publicly acknowledged case of a cyber attack causing a power outage was the2015 Ukraine power grid hack.[71] There are various ways to engage in cyber deterrence.[64][65][66] There is a risk of unintended escalation in cyberspace due to difficulties in discerning the intent of attackers,[75][76]and complexities in state-hacker relationships.[77]According to political scientists Joseph Brown andTanisha Fazal, states frequently neither confirm nor deny responsibility for cyber operations so that they can avoid the escalatory risks (that come with public credit) while also signaling that they have cyber capabilities and resolve (which can be achieved if intelligence agencies and governments believe they were responsible).[74] According to Lennart Maschmeyer, cyber weapons have limited coercive effectiveness due to a trilemma "whereby speed, intensity, and control are negatively correlated. These constraints pose a trilemma for actors because a gain in one variable tends to produce losses across the other two variables."[78] Intrawar deterrence is deterrence within a war context. It means that war has broken out but actors still seek to deter certain forms of behavior. In the words of Caitlin Talmadge, "intra-war deterrence failures... 
can be thought of as causing wars to get worse in some way."[79]Examples of intrawar deterrence include deterring adversaries from resorting to nuclear, chemical and biological weapons attacks or attacking civilian populations indiscriminately.[80]Broadly, it involves any prevention of escalation.[81] Matthew Fuhrmann refers to the ability of some states to rapidly develop or gain nuclear weapons as "latent nuclear deterrence". These states do not necessarily aim to go all the way in building nuclear weapons, but they may develop the civilian nuclear technology that would rapidly enable them to build a nuclear weapon. They can use thisnuclear latencystatus for coercive purposes, as they can deter adversaries who do not wish to see the state develop nuclear weapons or potentially use those nuclear weapons.[82] Deterrence theory has been criticized by numerous scholars for various reasons, the most basic being skepticism that decision makers are rational. A prominent strain of criticism argues that rational deterrence theory is contradicted by frequent deterrence failures, which may be attributed to misperceptions.[83]It is argued that misestimations of perceived costs and benefits by analysts contribute to deterrence failures,[84]as exemplified in the case of theRussian invasion of Ukraine.Frozen conflictscan be seen as rewardingaggression.[85] Scholars have also argued that leaders do not behave in ways that are consistent with the predictions of nuclear deterrence theory.[86][87][88]Scholars have also argued that rational deterrence theory does not grapple sufficiently with emotions and psychological biases that make accidents, loss of self-control, and loss of control over others likely.[89][90]Frank C. Zagare has argued that deterrence theory is logically inconsistent and empirically inaccurate. In place of classical deterrence, rational choice scholars have argued forperfect deterrence, which assumes that states may vary in their internal characteristics and especially in the credibility of their threats of retaliation.[91] Advocates fornuclear disarmament, such asGlobal Zero, have criticized nuclear deterrence theory.Sam Nunn,William Perry,Henry Kissinger, andGeorge Shultzhave all called upon governments to embrace the vision of a world free of nuclear weapons, and created the Nuclear Security Project to advance that agenda.[92]In 2010, the four were featured in a documentary film entitledNuclear Tipping Point, in which they proposed steps to achieve nuclear disarmament.[93][94]Kissinger has argued, "The classical notion of deterrence was that there was some consequences before which aggressors and evildoers would recoil. 
In a world of suicide bombers, that calculation doesn't operate in any comparable way."[95]Shultz said, "If you think of the people who are doing suicide attacks, and people like that get a nuclear weapon, they are almost by definition not deterrable."[96] Paul Nitzeargued in 1994 that nuclear weapons were obsolete in the "new world disorder" after the dissolution of the Soviet Union, and he advocated reliance on precision guided munitions to secure a permanent military advantage over future adversaries.[97] As opposed to the extrememutually assured destructionform of deterrence, the concept ofminimum deterrencein which a state possesses no more nuclear weapons than is necessary to deter an adversary from attacking is presently the most common form of deterrence practiced bynuclear weapon states, such as China, India, Pakistan, Britain, and France.[98]Pursuingminimal deterrenceduring arms negotiations between the United States and Russia allows each state to makenuclear stockpilereductions without the state becoming vulnerable, but it has been noted that there comes a point that further reductions may be undesirable, once minimal deterrence is reached, as further reductions beyond that point increase a state's vulnerability and provide an incentive for an adversary to expand its nuclear arsenal secretly.[99] France has developed and maintained its ownnuclear deterrentunder the belief that the United States will refuse to risk its own cities by assisting Western Europe in a nuclear war.[100] In the postcold warera, philosophical objections to the reliance upon deterrence theories in general have also been raised on purelyethicalgrounds. Scholars such asRobert L. Holmeshave noted that the implementation of such theories is inconsistent with a fundamentaldeontologicalpresumption which prohibits the killing of innocent life. Consequently, such theories areprima facieimmoral in nature. In addition, he observes that deterrence theories serve to perpetuate a state of mutual assured destruction between nations over time. Holmes further argues that it is therefore both irrational and immoral to utilize a methodology for perpetuating international peace which relies exclusively upon the continuous development of new iterations of the very weapons which it is designed to prohibit.[101][102][103][104]
https://en.wikipedia.org/wiki/Deterrence_theory
"An eye for an eye" (Biblical Hebrew:עַיִן תַּחַת עַיִן,ʿayīn taḥaṯ ʿayīn)[a]is a commandment found in theBook of Exodus21:23–27 expressing the principle of reciprocal justice measure for measure. The earliest known use of the principle appears in theCode of Hammurabi, which predates the writing of the Hebrew Bible but not necessarily oral traditions.[1] Thelaw of exact retaliation(Latin:lex talionis),[2]orreciprocal justice, bears the same principle that a person who has injured another person is to be penalized to a similar degree by the injured party. In softer interpretations, it means the victim receives the [estimated] value of the injury in compensation.[3]The intent behind the principle was torestrictcompensation to the value of the loss.[2] The termlex talionisdoes not always refer to literal eye-for-an-eye codes of justice (seemirror punishment), but rather applies to the broader class of legal systems that formulate penalties for specific crimes, which are thought to be fitting in their severity. Some propose that this was at least in part intended to prevent excessive punishment at the hands of either an avenging private party, or the state.[4]The most common expression of lex talionis is "an eye for an eye", but other interpretations have been given as well.[5]Legal codes following the principle oflex talionishave one thing in common - prescribed 'fitting' counter punishment for afelony. The simplest example is the "eye for an eye" principle. In that case, the rule was that punishment must be exactly equal to the crime. In the legalCode of Hammurabi, the principle of exact reciprocity is very clearly used. For example, if a person caused the death of another person, the killer wouldbe put to death.[6] Various ideas regarding the origins of this law exist, but a common one is that it developed as early civilizations grew and a less well-established system for retribution of wrongs,feudsandvendettas, threatened the social fabric. Despite having been replaced with newer modes of legal theory,lex talionissystems served a critical purpose in the development of social systems—the establishment of a body whose purpose was to enact the retaliation and ensure that this was the only punishment. This body was the state in one of its earliest forms. The principle can be found in earlier Mesopotamian law codes such as theCodes of Ur-Nammuof Ur andLipit-Ištarof Isín.[7] The principle is found inBabylonian Law.[8][9]If it is surmised that in societies not bound by the rule of law, if a person was hurt, then the injured person (or their relative) would takevengefulretribution on the person who caused the injury. The retribution might be worse than the crime, perhaps even death. Babylonian law put a limit on such actions, restricting the retribution to be no worse than the crime, as long as victim and offender occupied the same status in society. As withblasphemyorlèse-majesté(crimes against a god or a monarch), crimes against one's social betters were punished more severely. Anaximander, teacher ofPythagoras: "The grand periodicities of nature are conceived of enacting cycles of retaliatory retribution."Socratesrejected this law.[10] The law of the Hebrews rejected[clarification needed]this law; the Hebrew Bible allows forkofer(a monetary payment) to take the place of a bodily punishment for any crime except murder.[11][non-primary source needed]It is not specified whether the victim, accused, or judge had the authority to choosekoferin place of bodily punishment. 
The idiomatic biblical phraseHebrew:עין תחת עין,romanized:ayin tachat ayinin Exodus and Leviticus literally means 'one eye under/(in place of) one eye' while a slightly different phrase (עַיִן בְּעַיִן שֵׁן בְּשֵׁן, literally 'eye for an eye; tooth for a tooth' ...) is used in another passage (Deuteronomy) in the context of possible reciprocal court sentences for failed false witnesses.[12][13][14]The intent of "only one eye for one eye" was torestrictcompensation to the value of the loss.[2] The English translation of a passage inLeviticusstates,"And a man who injures his countryman – as he has done, so it shall be done to him [namely,] fracture under/for fracture, eye under/for eye, tooth under/for tooth. Just as another person has received injury from him, so it will be given to him."(Lev. 24:19–21).[12]For an example ofתחתbeing used in its regular sense of 'under', see Lev. 22:27 "A bull, sheep or goat, when it is born shall remain under its mother, and from the eighth day..." An English translation of Exodus 21:22-24 states:"If men strive, and hurt a woman with child, so that her fruit depart from her, and yet no mischief follow: he shall be surely punished, according as the woman's husband will lay upon him; and he shall pay as the judges determine. And if any mischief follow, then thou shalt give life for life, eye for eye, tooth for tooth, hand for hand, foot for foot." Isaac Kalimi said that the lex talionis was "humanized" by the Rabbis who interpreted "an eye for an eye" to mean reasonable pecuniary compensation. As in the case of the Babylonianlex talionis, ethical Judaism and humane Jewish jurisprudence replace thepeshat(literal meaning) of the written Torah.[15]Pasachoff and Littman point to the reinterpretation of the lex talionis as an example of the ability of Pharisaic Judaism to "adapt to changing social and intellectual ideas."[16] TheTalmud[17]interprets the verses referring to "an eye for an eye" and similar expressions as mandating monetary compensation intortcases and argues against the interpretations by Sadducees that the Bible verses refer to physical retaliation in kind, using the argument that such an interpretation would be inapplicable to blind or eyeless offenders. Since the Torah requires that penalties be universally applicable, the phrase cannot be interpreted in this manner. The Oral Law explains, based upon the biblical verses, that the Bible mandates a sophisticated five-part monetary form of compensation, consisting of payment for "Damages, Pain, Medical Expenses, Incapacitation, and Mental Anguish" — which underlies many modern legal codes. Some rabbinic literature explains, moreover, that the expression, "An eye for an eye, etc." suggests that the perpetrator deserves to lose his own eye, but that biblical law treats him leniently. – Paraphrased from theUnion of Orthodox Congregations.[18] However, the Torah also discusses a form of direct reciprocal justice, where the phraseayin tachat ayinmakes another appearance.[19]Here, the Torah discusses false witnesses who conspire to testify against another person. 
The Torah requires the court to "do to him as he had conspired to do to his brother".[20]Assuming the fulfillment of certain technical criteria (such as the sentencing of the accused whose punishment was not yet executed), wherever it is possible to punish the conspirators with exactly the same punishment through which they had planned to harm their fellow, the court carries out this direct reciprocal justice (including when the punishment constitutes the death penalty). Otherwise, the offenders receive lashes.[21][22] Since there is no form of punishment in the Torah that calls for the maiming of an offender (punitive amputation), there is no case where a conspiratorial false witness could possibly be punished by the court injuring his eye, tooth, hand, or foot. There is one case where the Torah states "…and you shall cut off her hand…"[23]The sages of the Talmud understood the literal meaning of this verse as referring to a case where the woman is attacking a man in a potentially lethal manner. This verse teaches that, although one must intervene to save the victim, one may not kill a lethal attacker if it is possible to neutralize that attacker through non-lethal injury.[24][25][26]Regardless, there is no verse that even appears to mandate injury to the eye, tooth, or foot. Numbers 35:9–30 discusses the only form of remotely reciprocal justice not carried out directly by the court, where, under very limited circumstances, someone found guilty of negligent manslaughter may be killed by a relative of the deceased who takes on the role of "redeemer of blood". In such cases, the court requires the guilty party to flee to a designated city of refuge. While the guilty party is there, the "redeemer of blood" may not kill him. If, however, the guilty party illegally forgoes his exile, the "redeemer of blood", as an accessory of the court, may kill the guilty party. According to traditional Jewish Law, application of these laws requires the presence and maintenance of the biblically designated cities of refuge, as well as a conviction in an eligible court of 23 judges as delineated by the Torah and Talmud. The latter condition is also applicable for any capital punishment. These circumstances have not existed for approximately 2,000 years. The Talmud discusses the concept of justice as measure-for-measure retribution (middah k'neged middah) in the context of divinely implemented justice. Regarding reciprocal justice by court, however, the Torah states that punishments serve to remove dangerous elements from society ("…and you shall eliminate the evil from your midst"[20]) and to deter potential criminals from violating the law ("And the rest shall hear and be daunted, and they shall no longer commit anything like this evil deed in your midst"[27]). Additionally, reciprocal justice in tort cases serves to compensate the victim (see above). The ideal of vengeance for the sake of assuaging the distress of the victim plays no role in the Torah's conception of court justice, as victims are cautioned against even hating or bearing a grudge against those who have harmed them. 
The Torah makes no distinction between whether or not the potential object of hatred or a grudge has been brought to justice, and all people are taught to love their fellow Israelites.[28] In Exodus 21, as in theCode of Hammurabi, the concept of reciprocal justice seemingly applies to social equals; the statement of reciprocal justice "life for life, eye for eye, tooth for tooth, hand for hand, foot for foot, burn for burn, wound for wound, stripe for stripe"[29]is followed by an example of a different law: if a slave-owner blinds the eye or knocks out the tooth of a slave, the slave is freed but the owner pays no other consequence. On the other hand, the slave would probably be put to death for the injury of the eye of the slave-owner.[30] However, reciprocal justice applies across social boundaries: the "eye for eye" principle is directly followed by the proclamation "You are to have one law for the alien and the citizen."[31]This shows a much more meaningful principle for social justice, in that the marginalized in society were given the same rights under the social structure. In this context, reciprocal justice in an ideally functioning setting serves, according toHarvard Divinity Schoollecturer Michael Coogan, "to prevent people from taking the law into their own hands and exacting disproportionate vengeance for offenses committed against them."[30] Classical texts advocating the retributive view includeCicero'sDe Legibus, written in the 1st century BC.[citation needed] Roman lawmoved toward monetary compensation as a substitute for vengeance. In cases of assault, fixed penalties were set for various injuries, althoughtaliowas still permitted if one person broke another's limb.[32] TheQuran(Q5:45) mentions the "eye for an eye" concept as being ordained for theChildren of Israel.[33]The principle of Lex talionis in Islam is Qiṣāṣ (Arabic: قصاص) as mentioned inQur'an, 2:178: "O you who have believed, prescribed for you is legal retribution (Qisas) for those murdered – the free for the free, the slave for the slave, and the female for the female. But whoever overlooks from his brother anything, then there should be a suitable follow-up and payment to him with good conduct. This is an alleviation from your Lord and a mercy. But whoever transgresses after that will have a painful punishment." Muslim countries that use IslamicSharialaw, such asIranor Saudi Arabia, apply the "eye for an eye" rule literally.[34][35] Quran 5:45 reads: "In the Torah We prescribed for them a life for a life, an eye for an eye, a nose for a nose, an ear for an ear, a tooth for a tooth, an equal wound for a wound: if anyone forgoes this out of charity, it will serve as atonement for his bad deeds. Those who do not judge according to what God has revealed are doing grave wrong." In 2017, an Iranian woman wounded in anacid attackwas given the opportunity to have her attacker blinded with acid per Sharia law.[36] The phrase "an eye for an eye makes the (whole) world blind" and similar phrases have been used by, among others,George Perry Graham(1914) in a debate on capital punishment,[38]Louis Fischer(1951) describing the philosophy ofMahatma Gandhi,[39]andMartin Luther King Jr.(1958) in the context of racial violence.[40]
https://en.wikipedia.org/wiki/Eye_for_an_eye
TheGolden Ruleis theprincipleof treating others as one would want to be treated by them. It is sometimes called an ethics of reciprocity, meaning that one should treat others as one would like them to treat oneself (not necessarily as they actually treat oneself). Various expressions of this rule can be found in the tenets of most religions and creeds through the ages.[1] Themaximmay appear as apositive or negativeinjunction governing conduct. Theterm"Golden Rule", or "Golden law", began to be used widely in the early 17th century in Britain byAnglicantheologians and preachers;[2]the earliest known usage is that of Anglicans Charles Gibbon and Thomas Jackson in 1604.[3] Possibly the earliest affirmation of the maxim of reciprocity, reflecting the ancient Egyptian goddessMa'at, appears in the story of "The Eloquent Peasant", which dates to theMiddle Kingdom(c.2040–1650 BCE): "Now this is the command: Do to the doer to make him do."[4][5]This proverb embodies thedo ut desprinciple.[6]ALate Period(c.664–323 BCE) papyrus contains an early negative affirmation of the Golden Rule: "That which you hate to be done to you, do not do to another."[7] InMahābhārata, the ancient epic of India, there is a discourse in which sage Brihaspati tells the king Yudhishthira the following aboutdharma, a philosophical understanding of values and actions that lend good order to life: One should never do something to others that one would regard as an injury to one's own self. In brief, this is dharma. Anything else is succumbing to desire. The Mahābhārata is usually dated to the period between 400 BCE and 400 CE.[8][9] In Chapter 32 of theBook of Virtueof theTirukkuṛaḷ(c.1st century BCE to 5th century CE),Valluvarsays: Do not do to others what you know has hurt yourself. Why does one hurt others knowing what it is to be hurt? Furthermore, in verse 312, Valluvar says that it is the determination or code of the spotless (virtuous) not to do evil, even in return, to those who have cherished enmity and done them evil. According to him, the proper punishment for those who have done evil is to put them to shame by showing them kindness in return, and to forget both the evil and the good done on both sides (verse 314).[11] The Golden Rule in its prohibitive (negative) form was a common principle inancient Greekphilosophy. ThePahlavi TextsofZoroastrianism(c.300 BCE– 1000 CE) were an early source for the Golden Rule: "That nature alone is good which refrains from doing to another whatsoever is not good for itself." Dadisten-I-dinik, 94,5, and "Whatever is disagreeable to yourself do not do unto others." Shayast-na-Shayast 13:29[18] Seneca the Younger(c.4 BCE– 65 CE), a practitioner ofStoicism(c.300 BCE– 200 CE), expressed a hierarchical variation of the Golden Rule in hisLetter 47, an essay regarding the treatment of slaves: "Treat your inferior as you would wish your superior to treat you."[19] According toSimon Blackburn, the Golden Rule "can be found in some form in almost every ethical tradition".[20]A multi-faith poster showing the Golden Rule in sacred writings from 13 faith traditions (designed by Paul McKenna of Scarboro Missions, 2000) has been on permanent display at theHeadquarters of the United Nationssince 4 January 2002.[21]Creating the poster "took five years of research that included consultations with experts in each of the 13 faith groups."[21](See alsothe section on Global Ethic.) 
A rule of reciprocalaltruismwas stated positively in a well-known Torah verse (Hebrew:ואהבת לרעך כמוך‎): You shall not take vengeance or bear a grudge against your kinsfolk. Love your neighbor as yourself: I am the LORD. According toJohn J. CollinsofYale Divinity School, most modern scholars, withRichard Elliott Friedmanas a prominent exception, view the command as applicable to fellow Israelites.[23] Rashicommented what constitutes revenge and grudge, using the example of two men. One man would not lend the other his ax, then the next day, the same man asks the other for his ax. If the second man should say,"'I will not lend it to you, just as you did not lend to me,' it constitutes revenge; if 'Here it is for you; I am not like you, who did not lend me,' it constitutes a grudge. Rashi concludes his commentary by quotingRabbi Akivaon love of neighbor: 'This is a fundamental [all-inclusive] principle of the Torah.'"[24] Hillel the Elder(c.110 BCE– 10 CE)[25]used this verse as a most important message of theTorahfor his teachings. Once, he was challenged by a gentile who asked to be converted under the condition that the Torah be explained to him while he stood on one foot. Hillel accepted him as a candidate forconversion to Judaismbut, drawing on Leviticus 19:18, briefed the man: What is hateful to you, do not do to your fellow: this is the whole Torah; the rest is the explanation; go and learn. Hillel recognized brotherly love as the fundamental principle of Jewish ethics.Rabbi Akivaagreed, whileSimeon ben Azzaisuggested that the principle of love must have its foundation in Genesis chapter 1, which teaches that all men are the offspring of Adam, who was made in the image of God.[27][28]According toJewish rabbinic literature, the first manAdamrepresents theunity of mankind. This is echoed in the modern preamble of theUniversal Declaration of Human Rights.[29][30]It is also taught thatAdamis last in order according to the evolutionary character of God's creation:[28] Why was only a single specimen of man created first? To teach us that he who destroys a single soul destroys a whole world and that he who saves a single soul saves a whole world; furthermore, so no race or class may claim a nobler ancestry, saying, "Our father was born first"; and, finally, to give testimony to the greatness of the Lord, who caused the wonderful diversity of mankind to emanate from one type. And why was Adam created last of all beings? To teach him humility; for if he be overbearing, let him remember that the little fly preceded him in the order of creation.[28] The Jewish Publication Society's edition ofLeviticusstates: Thou shalt not hate thy brother, in thy heart; thou shalt surely rebuke thy neighbour, and not bear sin because of him. Thou shalt not take vengeance, nor bear any grudge against the children of thy people, but thou shalt love thy neighbour as thyself: I am the LORD.[31] This Torah verse represents one of several versions of theGolden Rule, which itself appears in various forms, positive and negative. It is the earliest written version of that concept in a positive form.[32] At the turn of the era, the Jewish rabbis were discussing the scope of the meaning of Leviticus 19:18 and 19:34 extensively: Thestranger who resides with youshall be to you as one of your citizens; you shall love him as yourself, for you were strangers in the land of Egypt: I the LORDam your God. 
Commentators interpret that this applies to foreigners (e.g.Samaritans), proselytes ('strangers who reside with you')[34]and Jews.[35] On the verse, "Love your fellow as yourself", the classic commentatorRashiquotes fromTorat Kohanim, an early Midrashic text regarding the famous dictum of Rabbi Akiva: "Love your fellow as yourself – Rabbi Akiva says this is a great principle of the Torah."[36] In 1935, RabbiEliezer Berkovitsexplained in his work "What is the Talmud?" that Leviticus 19:34 disallowedxenophobiaby Jews.[37] Israel's postal servicequoted from the previous Leviticus verse when it commemorated theUniversal Declaration of Human Rightson a 1958postage stamp.[38] The Golden Rule was proclaimed byJesus of Nazareth[39]during hisSermon on the Mountand described by him as the second great commandment. The common English phrasing is "Do unto others as you would have them do unto you". Various applications of the Golden Rule are stated positively numerous times in theOld Testament: "You shall not take vengeance or bear a grudge against any of your people, but you shall love your neighbor as yourself: I am the LORD."[40]Or, in Leviticus 19:34: "The alien who resides with you shall be to you as the native-born among you; you shall love the alien as yourself, for you were aliens in the land of Egypt: I am the LORD your God."[40]These two examples are given in theSeptuagintas follows: "And thy hand shall not avenge thee; and thou shalt not be angry with the children of thy people; and thou shalt love thy neighbour as thyself; I am the Lord." and "The stranger that comes to you shall be among you as the native, and thou shalt love him as thyself; for ye were strangers in the land of Egypt: I am the Lord your God."[41] Two passages in theNew TestamentquoteJesus of Nazarethespousing the positive form of the Golden rule:[42] "In everything do to others as you would have them do to you, for this is the Law and the Prophets." Do to others as you would have them do to you. A similar passage, a parallel to theGreat Commandment, is to be found later in theGospel of Luke.[43] An expert in the law stood up to test him [Jesus]. "Teacher," he said, "what must I do to inherit eternal life?" He said to him, "What is written in the law? What do you read there?" He answered, "You shall love the Lord your God with all your heart and with all your soul and with all your strength and with all your mind and your neighbor as yourself." And he said to him, "You have given the right answer; do this, and you will live." The passage in the book of Luke then continues with Jesus answering the question, "Who is my neighbor?", by telling the parable of theGood Samaritan, which John Wesley interprets as meaning that "your neighbor" is anyone in need.[44] Jesus' teaching goes beyond the negative formulation of not doing what one would not like done to themselves, to the positive formulation of actively doing good to another that, if the situations were reversed, one would desire that the other would do for them. This formulation, as indicated in the parable of the Good Samaritan, emphasizes the needs for positive action that brings benefit to another, not simply restraining oneself from negative activities that hurt another.[45] In one passage of theNew Testament,Paul the Apostlerefers to the golden rule, restating Jesus' second commandment:[46] For the whole law is summed up in a single commandment, "You shall love your neighbor as yourself." St. 
Paul also comments on the golden rule in theEpistle to the Romans:[47] Owe no one anything, except to love one another, for the one who loves another has fulfilled the law. The commandments, "You shall not commit adultery; you shall not murder; you shall not steal; you shall not covet," and any other commandment, are summed up in this word, "You shall love your neighbor as yourself." TheOld TestamentDeuterocanonicalbooks ofTobitandSirach, accepted as part of the Scriptural canon by theCatholic Church,Eastern Orthodoxy, and thenon-Chalcedonianchurches, express a negative form of the golden rule:[48][49] And what you hate, do not do to anyone. May no evil go with you on any of your way. Judge your neighbor’s feelings by your own, and in every matter be thoughtful. As prolific commentators on the Bible, multipleChurch Fathers, including theApostolic Fathers, wrote on the Golden Rule found in both Old and New Testaments.[50][full citation needed]The early Christian treatise theDidacheincluded the Golden Rule in saying "in everything, do not do to another what you would not want done to you."[51] Clement of Alexandria, commenting on the Golden Rule in Luke 6:31, calls the concept "all embracing" for how one acts in life.[52]Clement further pointed to the phrasing in the book of Tobit as part of the ethics between husbands and wives.Tertullianstated that the rule taught "love, respect, consolation, protection, and benefits".[53] While many Church Fathers framed the Golden Rule as part of Jewish and Christian Ethics,Theophilus of Antiochstated that it had universal application for all of humanity.[54]Origenconnected the Golden Rule with the law written on the hearts of Gentiles mentioned by Paul in his letter to the Romans, and held that it had universal application to Christian and non-Christian alike.[55] Basil of Caesareacommented that the negative form of the Golden Rule was for avoiding evil while the positive form was for doing good.[56] The Arabian peninsula was said not to practice the golden rule prior to the advent of Islam. According toTh. Emil Homerin: "Pre-Islamic Arabs regarded the survival of the tribe, as most essential and to be ensured by the ancient rite of blood vengeance."[57]Homerin goes on to say: Similar examples of the golden rule are found in the hadiths. Thehadithrecount what the prophet is claimed to have said and done, and generally Muslims regard the hadith as second to only the Qur'an as a guide to correct belief and action.[58] From thehadith: A Bedouin came to the prophet, grabbed the stirrup of his camel and said: O the messenger of God! Teach me something to go to heaven with it. Prophet said: "As you would have people do to you, do to them; and what you dislike to be done to you, don't do to them. Now let the stirrup go! [This maxim is enough for you; go and act in accordance with it!]" None of you [truly] believes until he wishes for his brother what he wishes for himself. Seek for mankind that of which you are desirous for yourself, that you may be a believer. That which you want for yourself, seek for mankind.[61] The most righteous person is the one who consents for other people what he consents for himself, and who dislikes for them what he dislikes for himself.[61] Ali ibn Abi Talib(4thCaliphinSunniIslam, and firstImaminShiaIslam) says: O my child, make yourself the measure (for dealings) between you and others. Thus, you should desire for others what you desire for yourself and hate for others what you hate for yourself. Do not oppress as you do not like to be oppressed. 
Do good to others as you would like good to be done to you. Regard bad for yourself whatever you regard bad for others. Accept that (treatment) from others which you would like others to accept from you ... Do not say to others what you do not like to be said to you. Muslim scholarAl-Qurtubiregarded the Golden Rule of loving one's neighbor and treating them as one wishes to be treated as having universal application to believers and unbelievers alike.[63]Relying upon a Hadith, exegeteIbn Kathirlisted those "who judge people the way they judge themselves" as people who will be among the first to beResurrected.[64] Hussein bin Ali bin Awn al-Hashemi(102nd CaliphinSunni Islam) invoked the Golden Rule in the context of theArmenian genocide; in 1917, he stated:[65] Winter is ahead of us. Refugees from the Armenian Jacobite Community will probably need warmth. Help them how you would help your brothers. Pray for these people who have been expelled from their homes and left homeless and devoid of livestock and all their property. InMandaeanscriptures, theGinza RabbaandMandaean Book of Johncontain a prohibitive form of the Golden Rule that is virtually identical to the one used by Hillel. ia mhaimnia u-šalmania kul ḏ-īlauaikun snia b-habraikun la-tibdun O you believers and perfect ones! All that is hateful to you – do not do it to your neighbours. O you perfect and faithful ones! Everything that is hateful and detestable to you – do not do it to your neighbours. Everything that seems good to you – do it if you are capable of doing it, and support each other. My sons! Everything that is hateful to you, do not do it to thy comrade, for in the world to which you are going, there is a judgment and a great summing up. Thewritingsof theBaháʼí Faithencourage everyone to treat others as they would treat themselves and even prefer others over oneself: O SON OF MAN! Deny not My servant should he ask anything from thee, for his face is My face; be then abashed before Me. Blessed is he who preferreth his brother before himself. And if thine eyes be turned towards justice, choose thou for thy neighbour that which thou choosest for thyself. Ascribe not to any soul that which thou wouldst not have ascribed to thee, and say not that which thou doest not. One should never do that to another which one regards as injurious to one's own self. This, in brief, is the rule of dharma. Other behavior is due to selfish desires. By makingdharmayour main focus, treat others as you treat yourself.[77] Also, श्रूयतां धर्मसर्वस्वं श्रुत्वा चाप्यवधार्यताम्।आत्मनः प्रतिकूलानि परेषां न समाचरेत्।। If the entireDharmacan be said in a few words, then it is—that which is unfavorable to us, do not do that to others. Buddha(Siddhartha Gautama,c.623–543 BCE)[78][79]made the negative formulation of the golden rule one of the cornerstones of his ethics in the 6th century BCE. It occurs in many places and in many forms throughout theTripitaka. Comparing oneself to others in such terms as "Just as I am so are they, just as they are so am I," he should neither kill nor cause others to kill. One who, while himself seeking happiness, oppresses with violence other beings who also desire happiness, will not attain happiness hereafter. Hurt not others in ways that you yourself would find hurtful. Putting oneself in the place of another, one should not kill nor cause another to kill.[80] The Golden Rule is paramount in the Jainist philosophy and can be seen in the doctrines ofahimsaandkarma. 
As part of the prohibition of causing any living beings to suffer, Jainism forbids inflicting upon others what is harmful to oneself. The following line from theAcaranga Sutrasums up the philosophy of Jainism: Nothing which breathes, which exists, which lives, or which has essence or potential of life, should be destroyed or ruled over, or subjugated, or harmed, or denied of its essence or potential. In support of this Truth, I ask you a question – "Is sorrow or pain desirable to you?" If you say "yes it is", it would be a lie. If you say, "No, It is not" you will be expressing the truth. Just as sorrow or pain is not desirable to you, so it is to all which breathe, exist, live or have any essence of life. To you and all, it is undesirable, and painful, and repugnant.[81] A man should wander about treating all creatures as he himself would be treated. In happiness and suffering, in joy and grief, we should regard all creatures as we regard our own self. Precious like jewels are the minds of all. To hurt them is not at all good. If thou desirest thy Beloved, then hurt thou not anyone's heart. 己所不欲,勿施於人。 What you do not wish for yourself, do not do to others. 子貢問曰:「有一言而可以終身行之者乎?」子曰:「其恕乎!己所不欲,勿施於人。」 Zi Gong [a disciple of Confucius] asked: "Is there any one word that could guide a person throughout life?"The Master replied: "How about 'shu' [reciprocity]: never impose on others what you would not choose for yourself?" The same idea is also presented in V.12 and VI.30 of theAnalects(c.500 BCE), which can be found in the onlineChinese Text Project. The phraseology differs from the Christian version of the Golden Rule. It does not presume to do anything unto others, but merely to avoid doing what would be harmful. It does not preclude doing good deeds and taking moral positions. In relation to the Golden Rule, Confucian philosopherMenciussaid "If one acts with a vigorous effort at the law of reciprocity, when he seeks for the realization of perfect virtue, nothing can be closer than his approximation to it."[83] The sage has no interest of his own, but takes the interests of the people as his own. He is kind to the kind; he is also kind to the unkind: for Virtue is kind. He is faithful to the faithful; he is also faithful to the unfaithful: for Virtue is faithful. Regard your neighbor's gain as your own gain, and your neighbor's loss as your own loss. If people regarded other people's states in the same way that they regard their own, who then would incite their own state to attack that of another? For one would do for others as one would do for oneself. If people regarded other people's cities in the same way that they regard their own, who then would incite their own city to attack that of another? For one would do for others as one would do for oneself. If people regarded other people's families in the same way that they regard their own, who then would incite their own family to attack that of another? For one would do for others as one would do for oneself. And so if states and cities do not attack one another and families do not wreak havoc upon and steal from one another, would this be a harm to the world or a benefit? Of course one must say it is a benefit to the world. Mozi regarded the Golden Rule as a corollary to the cardinal virtue of impartiality, and encouragedegalitarianismand selflessness in relationships. Do not do unto others whatever is injurious to yourself. 
Hear ye these words and heed them well, the words of Dea, thyMother Goddess, "I command thee thus, O children of the Earth, that that which ye deem harmful unto thyself, the very same shall ye be forbidden from doing unto another, for violence and hatred give rise to the same. My command is thus, that ye shall return all violence and hatred with peacefulness and love, for my Law is love unto all things. Only through love shall ye have peace; yea and verily, only peace and love will cure the world, and subdue all evil." Try not to do things to others that you would not like them to do to you.Try to treat others as you would want them to treat you. One who is going to take a pointed stick to pinch a baby bird should first try it on himself to feel how it hurts. Egbe bere, ugo bere. Let the eagle perch, let the hawk perch. Nke si ibe ya ebene gosi ya ebe o ga-ebe. Whoever says the other shall not perch, may they show the other where to perch. The "Declaration Toward a Global Ethic"[86]from theParliament of the World's Religions(1993) proclaimed the Golden Rule ("We must treat others as we wish others to treat us") as the common principle for many religions.[87]The Initial Declaration was signed by 143 leaders from all of the world's major faiths, including Baháʼí Faith, Brahmanism, Brahma Kumaris, Buddhism, Christianity, Hinduism, Indigenous, Interfaith, Islam, Jainism, Judaism, Native American, Neo-Pagan, Sikhism, Taoism, Theosophist, Unitarian Universalist and Zoroastrian.[87][88] In the view ofGreg M. Epstein, aHumanistchaplainatHarvard University,"'do unto others' ... is a concept that essentially no religion misses entirely.But not a single one of these versions of the golden rule requires a God."[89]Various sources identify the Golden Rule as a humanist principle:[90] Trying to live according to the Golden Rule means trying to empathise with other people, including those who may be very different from us. Empathy is at the root of kindness, compassion, understanding and respect – qualities that we all appreciate being shown, whoever we are, whatever we think and wherever we come from. And although it isn't possible to know what it really feels like to be a different person or live in different circumstances and have different life experiences, it isn't difficult for most of us to imagine what would cause us suffering and to try to avoid causing suffering to others. For this reason many people find the Golden Rule's corollary – "do not treat people in a way you would not wish to be treated yourself" – more pragmatic.[90] Do not do to others what you would not want them to do to you. ... [is] the single greatest, simplest, and most important moral axiom humanity has ever invented, one which reappears in the writings of almost every culture and religion throughout history, the one we know as the Golden Rule. Moral directives do not need to be complex or obscure to be worthwhile, and in fact, it is precisely this rule's simplicity which makes it great. It is easy to come up with, easy to understand, and easy to apply, and these three things are the hallmarks of a strong and healthy moral system. The idea behind it is readily graspable: before performing an action which might harm another person, try to imagine yourself in their position, and consider whether you would want to be the recipient of that action. If you would not want to be in such a position, the other person probably would not either, and so you should not do it. 
It is the basic and fundamental human trait of empathy, the ability to vicariously experience how another is feeling, that makes this possible, and it is the principle of empathy by which we should live our lives. When we say that man chooses for himself, we do mean that every one of us must choose himself; but by that we also mean that in choosing for himself he chooses for all men. For in effect, of all the actions a man may take in order to create himself as he wills to be, there is not one which is not creative, at the same time, of an image of man such as he believes he ought to be. To choose between this or that is at the same time to affirm the value of that which is chosen; for we are unable ever to choose the worse. What we choose is always the better; and nothing can be better for us unless it is better for all. John Stuart Millin his book,Utilitarianism(originally published in 1861), wrote, "In the golden rule of Jesus of Nazareth, we read the complete spirit of the ethics of utility. 'To do as you would be done by,' and 'to love your neighbour as yourself,' constitute the ideal perfection of utilitarian morality."[94] According toMarc H. Bornsteinand William E. Paden, the Golden Rule is arguably the most essential basis for the modern concept ofhuman rights, in which each individual has a right to just treatment, and a reciprocal responsibility to ensure justice for others.[95] However,Leo Damroschargued that the notion that the Golden Rule pertains to "rights" per se is a contemporary interpretation and has nothing to do with its origin. The development of human "rights" is a modern political ideal that began as a philosophical concept promulgated through the philosophy ofJean Jacques Rousseauin 18th century France, among others. His writings influencedThomas Jefferson, who then incorporated Rousseau's reference to "inalienable rights" into theUnited States Declaration of Independencein 1776. Damrosch argued that to confuse the Golden Rule with human rights is to apply contemporary thinking to ancient concepts.[96] ThePlatinum Rulehas been stated as "Do to others as they would have you do to them." Taken in the spirit of the Golden Rule, this suggests one should be familiar with, or at least consider, the desires of the person one is interacting with.[97]However, the flaw of the rule is that it requires one to stereotype or make broad assumptions about a stranger's interests and personality before interacting with them. These kinds of assumptions are often erroneous, and therefore a prudent person would avoid the interaction, knowing their assumptions are likely incorrect. This rule is prohibitive to communication and prefers no interaction over any interaction with strangers. On occasion, stereotypes may be applied and in rare cases are largely correct. In those situations this rule can be applied successfully. On the other hand, the Platinum Rule is broadly successful when interacting with familiar people and directs that all interaction be conducted in the manner in which the person would like to be treated. This demonstrates respect and the desire to favorably regard the person one is interacting with. Unfortunately, this can lead to adependent relationship, developing a psychological tendency to expect similar treatment in all relationships and to avoid forming new relationships where this treatment would not exist, simply from not knowing the individual's preferences. 
Despite the unusual cases stifling interaction or individuals developing a demand for this behavior from others, the Platinum Rule requires due consideration, self-control, and receiver analysis. Taken altogether, the Platinum Rule represents a gesture of kindness, and is an established norm in various industries, such as marketing, medical care, motivational speaking, and many others.[98]As a consequence, some argue the Golden Rule is outdated, self-absorbed, and grossly fails to consider the needs of others.[99][100] Some published research argues that some 'sense' of fair play and the Golden Rule may be stated and rooted in terms ofneuroscientificandneuroethicalprinciples.[101] The Golden Rule can also be explained from the perspectives of psychology, philosophy, sociology, human evolution, and economics. Psychologically, it involves a personempathizingwith others. Philosophically, it involves a person perceiving their neighbor also as "I" or "self".[102]Sociologically, "love your neighbor as yourself" is applicable between individuals, between groups, and also between individuals and groups. In evolution, "reciprocal altruism" is seen as a distinctive advance in the capacity of human groups to survive and reproduce, as their exceptional brains demanded exceptionally long childhoods and ongoing provision and protection even beyond that of the immediate family.[103]Ineconomics, Richard Swift, referring to ideas fromDavid Graeber, suggests that "without some kind of reciprocity society would no longer be able to exist."[104] Study of other primates provides evidence that the Golden Rule exists in other non-human species.[105] Philosophers such asImmanuel Kant[106]andFriedrich Nietzsche[107]have objected to the rule on a variety of grounds. One is the epistemic question of determining how others want to be treated. The obvious way is to ask them, but they might give duplicitous answers if they find this strategically useful, and they might also fail to understand the details of the choice situation as one understands it. People might also be biased to perceiving harms and benefits to themselves more than to others, which could lead to escalating conflict if they are suspicious of others. HenceLinus Paulingsuggested that a bias towards others is to be introduced into the golden rule: "Do unto others 20 percent better than you would have them do unto you" - to correct for subjective bias.[108] George Bernard Shawwrote, "Do not do unto others as you would that they should do unto you. Their tastes may not be the same."[109]This suggests that if one's values are not shared with others, the way one wants to be treated will not be the way others want to be treated. Hence, the Golden Rule of "do unto others" is "dangerous in the wrong hands",[110]according to philosopherIain King, because "some fanatics have no aversion to death: the Golden Rule might inspire them to kill others in suicide missions."[111] Walter Terence Stace, inThe Concept of Morals(1937) argued that Shaw's remark ...seems to overlook the fact that "doing as you would be done by" includes taking into account your neighbour's tastes as you would that he should take yours into account. 
Thus the "golden rule" might still express the essence of a universal moralityeven if no two men in the world had any needs or tastes in common.[112] Immanuel Kantfamously criticized the golden rule for not being sensitive to differences of situation, noting that a prisoner duly convicted of a crime could appeal to the golden rule while asking the judge to release him, pointing out that the judge would not want anyone else to send him to prison, so he should not do so to others.[106]On the other hand, in a critique of the consistency of Kant's writings, several authors have noted the"similarity"[113]between the Golden Rule and Kant'sCategorical Imperative, introduced inGroundwork of the Metaphysic of Morals(See discussion at this link). This was perhaps a well-known objection, as Leibniz actually responded to it long before Kant made it, suggesting that the judge should put himself in the place, not merely of the criminal, but of all affected persons and then judging each option (to inflict punishment, or release the criminal, etc.) by whether there was a “greater good in which this lesser evil was included.”[114] Marcus George Singerobserved that there are two importantly different ways of looking at the golden rule: as requiring either that one performs specific actions that they want others to do to them or that they guide their behavior in the same general ways that they want others to.[115]Counter-examples to the Golden Rule typically are more forceful against the first than the second. In his book on the Golden Rule, Jeffrey Wattles makes the similar observation that such objections typically arise while applying the Golden Rule in certain general ways (namely, ignoring differences in taste or situation, failing to compensate for subjective bias, etc.) But if people apply the golden rule to their own method of using it, asking in effect if they would want other people to apply the Golden Rule in such ways, the answer would typically be no, since others' ignoring of such factors will lead to behavior which people object to. It follows that people should not do so themselves—according to the Golden Rule. In this way, the Golden Rule may be self-correcting.[116]An article by Jouni Reinikainen develops this suggestion in greater detail.[117] It is possible, then, that the golden rule can itself guide people in identifying which differences of situation are morally relevant. People would often want other people to ignore any prejudice against theirraceor nationality when deciding how to act towards them, but would also want others to not ignore their differing preferences in food, desire for aggressiveness, and so on. This principle of "doing unto others, wherever possible, astheywould be done by..." has sometimes been termed the Platinum Rule.[118] Charles Kingsley'sThe Water Babies(1863) includes a character named Mrs Do-As-You-Would-Be-Done-By (and another, Mrs Be-Done-By-As-You-Did).[119]
https://en.wikipedia.org/wiki/Golden_Rule
Mutual assured destruction(MAD) is adoctrineofmilitary strategyandnational security policywhich posits that a full-scale use ofnuclear weaponsby an attacker on a nuclear-armed defender withsecond-strike capabilitieswould result in thecomplete annihilationof both the attacker and the defender.[1]It is based on the theory ofrational deterrence, which holds that the threat of using strong weapons against the enemy prevents the enemy's use of those same weapons. The strategy is a form ofNash equilibriumin which, once armed, neither side has any incentive to initiate a conflict or to disarm. The result may be anuclear peace, in which the presence ofnuclear weaponsdecreases the risk of crisis escalation, since parties will seek to avoid situations that could lead to the use of nuclear weapons. Proponents of nuclear peace theory therefore believe that controllednuclear proliferationmay be beneficial for global stability. Critics argue that nuclear proliferation increases the chance ofnuclear warthrough either deliberate or inadvertent use of nuclear weapons, as well as the likelihood ofnuclear materialfalling into the hands ofviolent non-state actors. The term "mutual assured destruction", commonly abbreviated "MAD", was coined by Donald Brennan, a strategist working inHerman Kahn'sHudson Institutein 1962.[2]Brennan conceived the acronym cynically, spelling out the English word "mad" to argue that holding weapons capable of destroying society was irrational.[3] Under MAD, each side has enough nuclear weaponry to destroy the other side. Either side, if attacked for any reason by the other, would retaliate with equal or greater force. The expected result is an immediate, irreversible escalation of hostilities resulting in both combatants' mutual, total, and assured destruction. The doctrine requires that neither side construct shelters on a massive scale.[4]If one side constructed a similar system of shelters, it would violate the MAD doctrine and destabilize the situation, because it would have less to fear from asecond strike.[5][6]The same principle is invoked againstmissile defense. The doctrine further assumes that neither side will dare to launch afirst strikebecause the other side wouldlaunch on warning(also calledfail-deadly) or with surviving forces (asecond strike), resulting in unacceptable losses for both parties. The payoff of the MAD doctrine was and still is expected to be a tense but stable global peace. However, many have argued that mutually assured destruction is unable to deter conventional war that could later escalate. Emerging domains ofcyber-espionage, proxy-state conflict, andhigh-speed missilesthreaten to circumvent MAD as a deterrent strategy.[7] The primary application of this doctrine started during theCold War(1940s to 1991), in which MAD was seen as helping to prevent any direct full-scale conflicts between the United States and theSoviet Unionwhile they engaged in smallerproxy warsaround the world. MAD was also responsible for thearms race, as both nations struggled to keep nuclear parity, or at least retainsecond-strike capability. Although the Cold War ended in the early 1990s, the MAD doctrine continues to be applied. Proponents of MAD as part of the US and USSR strategic doctrine believed thatnuclear warcould best be prevented if neither side could expect to survive a full-scale nuclear exchange as a functioning state. 
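The Nash-equilibrium framing mentioned above can be illustrated with a toy two-player game. The sketch below is purely illustrative and is not taken from the article: the strategy labels, the payoff numbers, and the assumption that any first strike triggers assured retaliation are all inventions of this example. It covers only the "no incentive to initiate a conflict" half of the claim, showing that with assured retaliation neither side can improve its outcome by unilaterally switching from restraint to a first strike.

```python
from itertools import product

# Toy deterrence game between two nuclear-armed states, A and B.
# Strategies: "hold" (refrain from striking) or "strike" (launch first).
# Payoffs are illustrative placeholders: 0 for a tense but stable peace,
# large negative values for any outcome involving nuclear use, since
# assured retaliation means a striker is destroyed along with its target.
STRATEGIES = ("hold", "strike")

def payoff(a: str, b: str) -> tuple[int, int]:
    if a == "hold" and b == "hold":
        return (0, 0)
    # Any strike triggers assured retaliation; the striker is made slightly
    # worse off than the side that merely retaliates (arbitrary -100 vs -90).
    pa = -100 if a == "strike" else -90
    pb = -100 if b == "strike" else -90
    return (pa, pb)

def is_nash(a: str, b: str) -> bool:
    """True if neither player can strictly gain by deviating alone."""
    pa, pb = payoff(a, b)
    if any(payoff(alt, b)[0] > pa for alt in STRATEGIES):
        return False
    if any(payoff(a, alt)[1] > pb for alt in STRATEGIES):
        return False
    return True

for a, b in product(STRATEGIES, repeat=2):
    print(f"A={a:6s} B={b:6s} payoffs={payoff(a, b)} Nash={is_nash(a, b)}")
# With these numbers only (hold, hold) is a Nash equilibrium: expecting
# assured retaliation, each side prefers restraint no matter what the
# other does.
```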
Since the credibility of the threat is critical to such assurance, each side had to invest substantial capital in their nuclear arsenals even if they were not intended for use. In addition, neither side could be expected or allowed to adequately defend itself against the other's nuclear missiles.[8] This led both to the hardening and diversification of nuclear delivery systems (such as nuclear missile silos, ballistic missile submarines, and nuclear bombers kept at fail-safe points) and to the Anti-Ballistic Missile Treaty. This MAD scenario is often referred to as rational nuclear deterrence. When the possibility of nuclear warfare between the United States and Soviet Union started to become a reality, theorists began to think that mutual assured destruction would be sufficient to deter the other side from launching a nuclear weapon. Kenneth Waltz, an American political scientist, argued that nuclear forces were indeed useful, but most useful in deterring other nuclear powers from using theirs, on the basis of mutual assured destruction. The view of mutual assured destruction as a reliable deterrent went further still, holding that nuclear weapons intended to win a war were impractical and altogether too dangerous and risky.[9] Even after the Cold War ended in 1991, deterrence through mutual assured destruction is still said to be the safest course for avoiding nuclear warfare.[10] A study published in the Journal of Conflict Resolution in 2009 quantitatively evaluated the nuclear peace hypothesis and found support for the existence of the stability-instability paradox. The study determined that nuclear weapons promote strategic stability and prevent large-scale wars but simultaneously allow for more low-intensity conflicts. If one of two rival states has nuclear weapons and its opponent does not (a nuclear monopoly), there is a greater chance of war. In contrast, if both states possess nuclear weapons, the odds of war drop precipitously.[11] The concept of MAD had been discussed in the literature for nearly a century before the invention of nuclear weapons. One of the earliest references comes from the English author Wilkie Collins, writing at the time of the Franco-Prussian War in 1870: "I begin to believe in only one civilizing influence—the discovery one of these days of a destructive agent so terrible that War shall mean annihilation and men's fears will force them to keep the peace."[12] The concept was also described in 1863 by Jules Verne in his novel Paris in the Twentieth Century, though it was not published until 1994. The book is set in 1960 and describes "the engines of war", which have become so efficient that war is inconceivable and all countries are at a perpetual stalemate.[13][non-primary source needed] MAD has been invoked by more than one weapons inventor. For example, Richard Jordan Gatling patented his namesake Gatling gun in 1862 with the partial intention of illustrating the futility of war.[14] Likewise, after his 1867 invention of dynamite, Alfred Nobel stated that "the day when two army corps can annihilate each other in one second, all civilized nations, it is to be hoped, will recoil from war and discharge their troops."[15] In 1937, Nikola Tesla published The Art of Projecting Concentrated Non-dispersive Energy through the Natural Media,[16] a treatise concerning charged particle beam weapons.[17] Tesla described his device as a "superweapon that would put an end to all war." 
The March 1940Frisch–Peierls memorandum, the earliest technical exposition of a practical nuclear weapon, anticipated deterrence as the principal means of combating an enemy with nuclear weapons.[18] In August 1945, the United States became the first nuclear power after thenuclear attacks on Hiroshima and Nagasaki. Four years later, on August 29, 1949, the Soviet Uniondetonated its own nuclear device. At the time, both sides lacked the means to effectively use nuclear devices against each other. However, with the development of aircraft like the AmericanConvair B-36and the SovietTupolev Tu-95, both sides were gaining a greater ability to deliver nuclear weapons into the interior of the opposing country. The official policy of the United States became one of "Instant Retaliation", as coined by Secretary of StateJohn Foster Dulles, which called for massive atomic attack against the Soviet Union if they were to invade Europe, regardless of whether it was a conventional or a nuclear attack.[19] By the time of the 1962Cuban Missile Crisis, both the United States and the Soviet Union had developed the capability of launching a nuclear-tipped missile from a submerged submarine, which completed the "third leg" of thenuclear triadweapons strategy necessary to fully implement the MAD doctrine. Having a three-branched nuclear capability eliminated the possibility that an enemy could destroy all of a nation's nuclear forces in afirst-strikeattack; this, in turn, ensured the credible threat of a devastatingretaliatory strikeagainst the aggressor, increasing a nation'snuclear deterrence.[20][21][22] Campbell Craig andSergey Radchenkoargue thatNikita Khrushchev(Soviet leader 1953 to 1964) decided that policies that facilitated nuclear war were too dangerous to the Soviet Union. His approach did not greatly change his foreign policy or military doctrine but is apparent in his determination to choose options that minimized the risk of war.[23] Beginning in 1955, the United StatesStrategic Air Command(SAC) kept one-third of its bombers on alert, with crews ready to take off within fifteen minutes and fly to designated targets inside theSoviet Unionand destroy them with nuclear bombs in the event of a Soviet first-strike attack on the United States. In 1961, President John F. Kennedy increased funding for this program[24]and raised the commitment to 50 percent of SAC aircraft.[citation needed] During periods of increased tension in the early 1960s, SAC kept part of its B-52 fleet airborne at all times, to allow an extremely fast retaliatory strike against the Soviet Union in the event of a surprise attack on the United States. This program continued until 1969. Between 1954 and 1992, bomber wings had approximately one-third to one-half of their assigned aircraft on quick reaction ground alert and were able to take off within a few minutes.[25]SAC also maintained the National Emergency Airborne Command Post (NEACP, pronounced "kneecap"), also known as "Looking Glass", which consisted of several EC-135s, one of which was airborne at all times from 1961 through 1990.[26]During theCuban Missile Crisisthe bombers were dispersed to several different airfields, and sixty-five B-52s were airborne at all times.[27] During the height of the tensions between the US and the USSR in the 1960s, two popular films were made dealing with what could go terribly wrong with the policy of keeping nuclear-bomb-carrying airplanes at the ready:Dr. 
Strangelove(1964)[28]andFail Safe(1964).[29] The strategy of MAD was fully declared in the early 1960s, primarily byUnited States Secretary of DefenseRobert McNamara. In McNamara's formulation, there was the very real danger that a nation with nuclear weapons could attempt to eliminate another nation's retaliatory forces with a surprise, devastating first strike and theoretically "win" a nuclear war relatively unharmed. The true second-strike capability could be achieved only when a nation had aguaranteedability to fully retaliate after a first-strike attack.[4] The United States had achieved an early form of second-strike capability by fielding continual patrols of strategic nuclear bombers, with a large number of planes always in the air, on their way to or from fail-safe points close to the borders of the Soviet Union. This meant the United States could still retaliate, even after a devastating first-strike attack. The tactic was expensive and problematic because of the high cost of keeping enough planes in the air at all times and the possibility they would be shot down by Sovietanti-aircraft missilesbefore reaching their targets. In addition, as the idea of amissile gapexisting between the US and the Soviet Union developed, there was increasing priority being given toICBMsover bombers. It was only with the advent ofnuclear-poweredballistic missile submarines, starting with theGeorge Washingtonclassin 1959, that a genuinesurvivablenuclear force became possible and a retaliatory second strike capability guaranteed. The deployment of fleets of ballistic missile submarines established a guaranteed second-strike capability because of their stealth and by the number fielded by each Cold War adversary—it was highly unlikely that all of them could be targeted and preemptively destroyed (in contrast to, for example, a missile silo with a fixed location that could be targeted during a first strike). Given their long-range, highsurvivabilityand ability to carry many medium- and long-range nuclear missiles, submarines were credible and effective means for full-scale retaliation even after a massive first strike.[30] This deterrence strategy and the program have continued into the 21st century, with nuclear submarines carryingTrident IIballistic missiles as one leg of the USstrategic nuclear deterrentand as the sole deterrent of the United Kingdom. The other elements of the US deterrent are intercontinental ballistic missiles (ICBMs) on alert in the continental United States, and nuclear-capable bombers. Ballistic missile submarines are also operated by the navies of China, France, India, and Russia. TheUS Department of Defenseanticipates a continued need for asea-based strategic nuclear force.[citation needed]The first of the currentOhio-class SSBNsare expected to be retired by 2029,[citation needed]meaning that a replacement platform must already be seaworthy by that time. A replacement may cost over $4 billion per unit compared to the USSOhio's $2 billion.[31]The USN's follow-on class of SSBN will be theColumbiaclass, which began construction in 2021 and enter service in 2031.[32] In the 1960s both the Soviet Union (A-35 anti-ballistic missile system) and the United States (LIM-49 Nike Zeus) developed anti-ballistic missile systems. Had such systems been able to effectively defend against a retaliatorysecond strike, MAD would have been undermined. 
However, multiple scientific studies showed technological and logistical problems in these systems, including the inability to distinguish between real and decoy weapons.[33] The multiple independently targetable re-entry vehicle (MIRV) was another weapons system designed specifically to aid the MAD nuclear deterrence doctrine. With a MIRV payload, one ICBM could hold many separate warheads. MIRVs were first created by the United States in order to counterbalance the Soviet A-35 anti-ballistic missile systems around Moscow. Since each defensive missile could be counted on to destroy only one offensive missile, giving each offensive missile, for example, three warheads (as with early MIRV systems) meant that three times as many defensive missiles were needed for each offensive missile. This made defending against missile attacks more costly and difficult. One of the largest US MIRVed missiles, the LGM-118A Peacekeeper, could hold up to 10 warheads, each with a yield of around 300 kilotons of TNT (1.3 PJ): all together, an explosive payload equivalent to 230 Hiroshima-type bombs. The multiple warheads made defense untenable with the available technology, leaving the threat of retaliatory attack as the only viable defensive option. In the event of a Soviet conventional attack on Western Europe, NATO planned to use tactical nuclear weapons. The Soviet Union countered this threat by issuing a statement that any use of nuclear weapons (tactical or otherwise) against Soviet forces would be grounds for a full-scale Soviet retaliatory strike (massive retaliation). Thus it was generally assumed that any combat in Europe would end with apocalyptic conclusions. MIRVed land-based ICBMs are generally considered suitable for a first strike (inherently counterforce) or for a counterforce second strike. Unlike a decapitation strike or a countervalue strike, a counterforce strike might result in a potentially more constrained retaliation. Though the Minuteman III of the mid-1960s was MIRVed with three warheads, heavily MIRVed vehicles threatened to upset the balance; these included the SS-18 Satan, deployed in 1976, which was considered to threaten Minuteman III silos and led some neoconservatives to conclude that a Soviet first strike was being prepared for.[citation needed] This led to the development of the Pershing II, the Trident I and Trident II, as well as the MX missile and the B-1 Lancer. MIRVed land-based ICBMs are considered destabilizing because they tend to put a premium on striking first. When a missile is MIRVed, it is able to carry many warheads (up to eight in existing US missiles, limited by New START, though Trident II is capable of carrying up to 12[34]) and deliver them to separate targets. If it is assumed that each side has 100 missiles, with five warheads each, and further that each side has a 95 percent chance of neutralizing the opponent's missiles in their silos by firing two warheads at each silo, then the attacking side can reduce the enemy ICBM force from 100 missiles to about five by firing 40 missiles with 200 warheads, keeping the remaining 60 missiles in reserve. As such, this type of weapon was intended to be banned under the START II agreement; however, START II never entered into force, as the parties never exchanged instruments of ratification. 
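The 100-missile example above is simple expected-value arithmetic, and the short sketch below merely re-derives the figures quoted in the text. The variable names and the assumption that each two-warhead attack on a silo succeeds independently with probability 0.95 are mine, not the article's.

```python
# Worked version of the MIRV first-strike arithmetic given above.
# Assumed inputs (taken from the text): 100 silo-based missiles per side,
# 5 warheads per MIRVed missile, 2 warheads fired at each silo, and a 95%
# chance that a silo attacked with two warheads is destroyed.
attacker_missiles = 100
defender_silos = 100
warheads_per_missile = 5
warheads_per_silo = 2
p_kill_per_silo = 0.95

warheads_needed = defender_silos * warheads_per_silo               # 200 warheads
missiles_fired = warheads_needed // warheads_per_missile           # 40 missiles
missiles_in_reserve = attacker_missiles - missiles_fired           # 60 missiles held back
expected_survivors = defender_silos * (1 - p_kill_per_silo)        # about 5 silos survive

print(f"warheads needed:               {warheads_needed}")
print(f"attacking missiles fired:      {missiles_fired}")
print(f"attacking missiles in reserve: {missiles_in_reserve}")
print(f"expected surviving enemy ICBMs: {expected_survivors:.1f}")
```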
The original US MAD doctrine was modified on July 25, 1980, with US PresidentJimmy Carter's adoption ofcountervailing strategywithPresidential Directive 59. According to its architect, Secretary of DefenseHarold Brown, "countervailing strategy" stressed that the planned response to a Soviet attack was no longer to bomb Soviet population centers and cities primarily, but first to kill the Soviet leadership, then attack military targets, in the hope of a Soviet surrender before total destruction of the Soviet Union (and the United States). This modified version of MAD was seen as a winnable nuclear war, while still maintaining the possibility of assured destruction for at least one party. This policy was further developed by theReagan administrationwith the announcement of theStrategic Defense Initiative(SDI, nicknamed "Star Wars"), the goal of which was to develop space-based technology to destroy Soviet missiles before they reached the United States. SDI was criticized by both the Soviets and many of America's allies (including Prime Minister of the United KingdomMargaret Thatcher) because, were it ever operational and effective, it would have undermined the "assured destruction" required for MAD. If the United States had a guarantee against Soviet nuclear attacks, its critics argued, it would have first-strike capability, which would have been a politically and militarily destabilizing position. Critics further argued that it could trigger a new arms race, this time to develop countermeasures for SDI. Despite its promise of nuclear safety, SDI was described by many of its critics (including Soviet nuclear physicist and later peace activistAndrei Sakharov) as being even more dangerous than MAD because of these political implications. Supporters also argued that SDI could trigger a new arms race, forcing the USSR to spend an increasing proportion of GDP on defense—something which has been claimed to have been an indirect cause of the eventual collapse of the Soviet Union.Gorbachevhimself in 1983 announced that “the continuation of the S.D.I. program will sweep the world into a new stage of the arms race and would destabilize the strategic situation.”[35] Proponents ofballistic missile defense(BMD) argue that MAD is exceptionally dangerous in that it essentially offers a single course of action in the event of a nuclear attack: full retaliatory response. The fact thatnuclear proliferationhas led to an increase in the number of nations in the "nuclear club", including nations of questionable stability (e.g.North Korea), and that a nuclear nation might be hijacked by adespotor other person or persons who might use nuclear weapons without a sane regard for the consequences, presents a strong case for proponents of BMD who seek a policy which both protect against attack, but also does not require an escalation into what might becomeglobal nuclear war. Russia continues to have a strong public distaste for Western BMD initiatives, presumably because proprietary operative BMD systems could exceed their technical and financial resources and therefore degrade their larger military standing and sense of security in a post-MAD environment. Russian refusal to accept invitations[citation needed]to participate in NATO BMD may be indicative of the lack of an alternative to MAD in current Russian war-fighting strategy due to the dilapidation of conventional forces after the breakup of theSoviet Union. Proud Prophetwas a series of war games played out by various American military officials. 
The simulation revealed that MAD made the use of nuclear weapons virtually impossible without total nuclear annihilation, regardless of how nuclear weapons were implemented in war plans. These results essentially ruled out the possibility of a limited nuclear strike, as every time this was attempted, it resulted in a complete expenditure of nuclear weapons by both the United States and the USSR. Proud Prophet marked a shift in American strategy; following Proud Prophet, American rhetoric of strategies that involved the use of nuclear weapons dissipated, and American war plans were changed to emphasize the use of conventional forces.[36] In 1983, a group of researchers including Carl Sagan released the TTAPS study (named for the respective initials of the authors), which predicted that the large-scale use of nuclear weapons would cause a "nuclear winter". The study predicted that debris burned in nuclear bombings would be lifted into the atmosphere and diminish sunlight worldwide, driving land temperatures down to "−15° to −25°C".[37] These findings led to the theory that MAD would still occur with many fewer weapons than were possessed by either the United States or the USSR at the height of the Cold War. As such, nuclear winter was used as an argument for significant reduction of nuclear weapons, since MAD would occur anyway.[38] After the fall of the Soviet Union, the Russian Federation emerged as a sovereign entity encompassing most of the territory of the former USSR. Relations between the United States and Russia were, at least for a time, less tense than they had been with the Soviet Union.[citation needed] While MAD has become less applicable for the US and Russia, it has been argued to be a factor behind Israel's acquisition of nuclear weapons. Similarly, diplomats have warned that Japan may be pressured to nuclearize by the presence of North Korean nuclear weapons. The ability to launch a nuclear attack against an enemy city is a relevant deterrent strategy for these powers.[39] The administration of US President George W. Bush withdrew from the Anti-Ballistic Missile Treaty in June 2002, claiming that the limited national missile defense system which it proposed to build was designed only to prevent nuclear blackmail by a state with limited nuclear capability and was not planned to alter the nuclear posture between Russia and the United States. While relations have improved and an intentional nuclear exchange is more unlikely, the decay in Russian nuclear capability in the post–Cold War era may have had an effect on the continued viability of the MAD doctrine. A 2006 article by Keir Lieber and Daryl Press stated that the United States could carry out a nuclear first strike on Russia and would "have a good chance of destroying every Russian bomber base, submarine, and ICBM." This was attributed to reductions in Russian nuclear stockpiles and the increasing inefficiency and age of that which remains. 
Lieber and Press argued that the MAD era is coming to an end and that the United States is on the cusp of global nuclear primacy.[40] However, in a follow-up article in the same publication, others criticized the analysis, includingPeter Flory, the US Assistant Secretary of Defense for International Security Policy, who began by writing "The essay by Keir Lieber and Daryl Press contains so many errors, on a topic of such gravity, that a Department of Defense response is required to correct the record."[41]Regarding reductions in Russian stockpiles, another response stated that "a similarly one-sided examination of [reductions in] U.S. forces would have painted a similarly dire portrait". A situation in which the United States might actually be expected to carry out a "successful" attack is perceived as a disadvantage for both countries. The strategic balance between the United States and Russia is becoming less stable, and the objective, the technical possibility of a first strike by the United States is increasing. At a time of crisis, this instability could lead to an accidental nuclear war. For example, if Russia feared a US nuclear attack, Moscow might make rash moves (such as putting its forces on alert) that would provoke a US preemptive strike.[41] An outline of current US nuclear strategy toward both Russia and other nations was published as the document "Essentials of Post–Cold War Deterrence" in 1995. In November 2020, the US successfully destroyed a dummy ICBM outside the atmosphere with another missile.Bloomberg Opinionwrites that this defense ability "ends the era of nuclear stability".[42] MAD does not entirely apply to all nuclear-armed rivals.India and Pakistanare an example of this; because of the superiority of conventional Indian armed forces to their Pakistani counterparts, Pakistan may be forced to use their nuclear weapons on invading Indian forces out of desperation regardless of an Indian retaliatory strike. As such, any large-scale attack on Pakistan by India could precipitate the use of nuclear weapons by Pakistan, thus rendering MAD inapplicable. However, MAD is applicable in that it may deter Pakistan from making a “suicidal” nuclear attack rather than a defensive nuclear strike.[3] Since the emergence ofNorth Korea as a nuclear state, military action has not been an option in handling the instability surrounding North Korea because of their option of nuclear retaliation in response to any conventional attack on them, thus rendering non-nuclear neighboring states such as South Korea and Japan incapable of resolving the destabilizing effect of North Korea via military force.[43]MAD may not apply to the situation in North Korea because the theory relies on rational consideration of the use and consequences of nuclear weapons, which may not be the case for potential North Korean deployment.[44] Whether MAD was the officially accepted doctrine of the United States military during the Cold War is largely a matter of interpretation. 
TheUnited States Air Force, for example, has retrospectively contended that it never advocated MAD as a sole strategy, and that this form of deterrence was seen as one of numerous options in US nuclear policy.[45]Former officers have emphasized that they never felt as limited by the logic of MAD (and were prepared to use nuclear weapons in smaller-scale situations than "assured destruction" allowed), and did not deliberately target civilian cities (though they acknowledge that the result of a "purely military" attack would certainly devastate the cities as well). However, according to a declassified 1959Strategic Air Commandstudy, US nuclear weapons plans specifically targeted the populations of Beijing, Moscow, Leningrad, East Berlin, and Warsaw for systematic destruction.[46]MAD was implied in several US policies and used in the political rhetoric of leaders in both the United States and the USSR during many periods of the Cold War: To continue to deter in an era of strategic nuclear equivalence, it is necessary to have nuclear (as well as conventional) forces such that in considering aggression against our interests any adversary would recognize that no plausible outcome would represent a victory or any plausible definition of victory. To this end and so as to preserve the possibility of bargaining effectively to terminate the war on acceptable terms that are as favorable as practical, if deterrence fails initially, we must be capable of fighting successfully so that the adversary would not achieve his war aims and would suffer costs that are unacceptable, or in any event greater than his gains, from having initiated an attack. The doctrine of MAD was officially at odds with that of theUSSR, which had, contrary to MAD, insisted survival was possible.[47][48][49]The Soviets believed they could win not only a strategic nuclear war, which they planned to absorb with their extensivecivil defenseplanning,[47][50][51]but also the conventional war that they predicted would follow after their strategic nuclear arsenal had been depleted.[52]Official Soviet policy, though, may have had internal critics towards the end of the Cold War, including some in the USSR's own leadership:[49] Nuclear use would be catastrophic. Other evidence of this comes from the Soviet minister of defense,Dmitriy Ustinov, who wrote that "A clear appreciation by the Soviet leadership of what a war under contemporary conditions would mean for mankind determines the active position of the USSR."[53]The Soviet doctrine, although being seen as primarily offensive by Western analysts, fully rejected the possibility of a "limited" nuclear war by 1975.[54] Deterrence theory has been criticized by numerous scholars for various reasons. 
A prominent strain of criticism argues that rational deterrence theory is contradicted by frequent deterrence failures, which may be attributed to misperceptions.[55] Critics have also argued that leaders do not behave in ways that are consistent with the predictions of nuclear deterrence theory.[56][57][58] For example, it has been argued that it is inconsistent with the logic of rational deterrence theory that states continue to build nuclear arsenals once they have reached the second-strike threshold.[56][57] As a further example of such inconsistency, Mao Zedong urged the socialist camp not to fear nuclear war with the United States since, even if "half of mankind died, the other half would remain while imperialism would be razed to the ground and the whole world would become socialist."[59] Additionally, many scholars have advanced philosophical objections against the principles of deterrence theory on purely ethical grounds. Included in this group is Robert L. Holmes, who observes that mankind's reliance upon a system of preventing war which is based exclusively upon the threat of waging war is inherently irrational and must be considered immoral according to fundamental deontological principles. In addition, he questions whether it can be conclusively demonstrated that such a system has in fact served to prevent warfare in the past, and suggests that it may actually increase the probability of war in the future because of its reliance upon the continuous development of new generations of technologically advanced nuclear weapons.[60][61][62] A further criticism is that deterrence has an inherent instability. As Kenneth Boulding said: "If deterrence were really stable... it would cease to deter." If decision-makers were perfectly rational, they would never order the large-scale use of nuclear weapons, and the credibility of the nuclear threat would be low. Current deterrence policy, however, counters this criticism of assumed perfect rationality. In Essentials of Post-Cold War Deterrence, the authors explicitly advocate ambiguity regarding "what is permitted" for other nations, and endorse "irrationality", or more precisely the perception of it, as an important tool in deterrence and foreign policy. The document claims that the capacity of the United States, in exercising deterrence, would be hurt by portraying US leaders as fully rational and cool-headed: The fact that some elements may appear to be potentially 'out of control' can be beneficial to creating and reinforcing fears and doubts in the minds of an adversary's decision makers. This essential sense of fear is the working force of deterrence. That the U.S. may become irrational and vindictive if its vital interests are attacked should be part of the national persona we project to all adversaries. However, Robert Gallucci, the president of the John D. and Catherine T. MacArthur Foundation, argues that although traditional deterrence is not an effective approach toward terrorist groups bent on causing a nuclear catastrophe, "the United States should instead consider a policy of expanded deterrence, which focuses not solely on the would-be nuclear terrorists but on those states that may deliberately transfer or inadvertently leak nuclear weapons and materials to them. 
By threatening retaliation against those states, the United States may be able to deter that which it cannot physically prevent."[67] Graham Allisonmakes a similar case and argues that the key to expanded deterrence is coming up with ways of tracing nuclear material to the country that forged the fissile material: "After a nuclear bomb detonates,nuclear forensiccops would collect debris samples and send them to a laboratory for radiological analysis. By identifying unique attributes of the fissile material, including its impurities and contaminants, one could trace the path back to its origin."[68]The process is analogous to identifying a criminal by fingerprints: "The goal would be twofold: first, to deter leaders of nuclear states from selling weapons to terrorists by holding them accountable for any use of their own weapons; second, to give leaders every incentive to tightly secure their nuclear weapons and materials."[68]
https://en.wikipedia.org/wiki/Mutual_assured_destruction
Nice Guys Finish First (BBC Horizon television series) is a 1986 documentary by Richard Dawkins which discusses selfishness and cooperation, arguing that evolution often favours co-operative behaviour, and focusing especially on the tit for tat strategy of the prisoner's dilemma game. The film is approximately 50 minutes long and was produced by Jeremy Taylor.[1] The twelfth chapter of Dawkins's book The Selfish Gene (added in the second edition, 1989) is also named Nice Guys Finish First and explores similar material. In the opening scene, Richard Dawkins responds to what he views as a misrepresentation of his first book, The Selfish Gene: in particular, its use by the right wing as a justification for social Darwinism and laissez-faire economics (free-market capitalism). Dawkins has examined this issue throughout his career and focused much of his documentary The Genius of Charles Darwin on it. The concept of reciprocal altruism is a central theme of the documentary. Additionally, Dawkins examines the tragedy of the commons and the dilemma that it presents, using as an example the large area of common land Port Meadow in Oxford, England, which has been battered by overgrazing. Fourteen academics and game theory experts submitted their own computer programs to compete in a tournament to see which would win in the iterated prisoner's dilemma. The winner was tit for tat, a program based on "equal retaliation", and Dawkins illustrates the four conditions of tit for tat. In a second trial, this time with over sixty entries, tit for tat won again.
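The tournament logic described above is easy to sketch in code. The example below is not one of the programs from the film or the tournaments it describes; it is a minimal illustration under assumptions of my own choosing: the standard Axelrod-style payoffs (3 each for mutual cooperation, 1 each for mutual defection, 5 for a lone defector, 0 for a lone cooperator), 200 rounds per match, and a small invented field of four strategies.

```python
# Minimal iterated prisoner's dilemma round-robin, illustrating how a
# retaliatory-but-forgiving strategy such as tit for tat accumulates points.
# Payoff convention (an assumption, standard in Axelrod-style tournaments):
#   both cooperate -> 3 each; both defect -> 1 each;
#   lone defector -> 5; lone cooperator -> 0.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(own_history, opponent_history):
    # Cooperate first, then copy the opponent's previous move.
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(own_history, opponent_history):
    return "D"

def always_cooperate(own_history, opponent_history):
    return "C"

def grudger(own_history, opponent_history):
    # Cooperate until the opponent defects once, then defect forever.
    return "D" if "D" in opponent_history else "C"

def play_match(strategy_a, strategy_b, rounds=200):
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_a, history_b)
        move_b = strategy_b(history_b, history_a)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a += pa
        score_b += pb
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

strategies = {"tit_for_tat": tit_for_tat,
              "always_defect": always_defect,
              "always_cooperate": always_cooperate,
              "grudger": grudger}

# Round-robin tournament: every strategy plays every strategy, including itself.
totals = {name: 0 for name in strategies}
for name_a, strat_a in strategies.items():
    for name_b, strat_b in strategies.items():
        score_a, _ = play_match(strat_a, strat_b)
        totals[name_a] += score_a

for name, total in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{name:16s} {total}")
```

With this particular toy field, tit_for_tat and grudger tie for the highest total, always_cooperate comes next, and always_defect finishes last overall even though it never scores below its opponent in any single match; which strategy tops a real tournament depends on the mix of entries, and in Axelrod's much larger fields tit for tat won, as the documentary describes.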
https://en.wikipedia.org/wiki/Nice_Guys_Finish_First
Richard Dawkins(born 26 March 1941)[3]is a Britishevolutionary biologist,zoologist,science communicatorand author.[4]He is anemeritus fellowofNew College, Oxford, and wasProfessor for Public Understanding of Scienceat theUniversity of Oxfordfrom 1995 to 2008, and is on the advisory board of theUniversity of Austin.[5][6]His bookThe Selfish Gene(1976) popularised thegene-centred view of evolutionand coined the wordmeme. Dawkins has won several academic and writing awards.[7] A vocalatheist, Dawkins is known for his criticism ofcreationismandintelligent design.[8]He wroteThe Blind Watchmaker(1986), in which he argues against thewatchmaker analogy, an argument for the existence of acreator deitybased upon thecomplexity of living organisms. Instead, he describes evolutionary processes as analogous to ablindwatchmaker, in thatreproduction,mutationandselectionare unguided by any sentient designer. In his bookThe God Delusion(2006) he argues that a supernatural creator almost certainly does not exist and calls religious faith adelusion. He founded theRichard Dawkins Foundation for Reason and Sciencein 2006.[9][10]Dawkins has published two volumes ofmemoirs,An Appetite for Wonder(2013) andBrief Candle in the Dark(2015). Dawkins was bornClinton Richard Dawkinson 26 March 1941 inNairobi, the capital ofKenya during British colonial rule.[11]He later droppedClintonfrom his name bydeed pollbecause of confusion in the United States over using his middle name as his first name.[12][3]He is the son of Jean Mary Vyvyan (néeLadner; 1916–2019)[13][14]and Clinton John Dawkins (1915–2010), an agricultural civil servant in the BritishColonial ServiceinNyasaland(present-dayMalawi), of alanded gentryfamily fromOxfordshire.[11][15][16]His father was called up into theKing's African Riflesduring theSecond World War[17][18]and returned to England in 1949, when Dawkins was eight. His father had inherited acountry estate,Over Norton Parkin Oxfordshire, which he farmed commercially.[16]Dawkins lives inOxford.[19]He has a younger sister, Sarah.[20] His parents were interested innatural sciences, and they answered Dawkins's questions in scientific terms.[21]Dawkins describes his childhood as "a normalAnglicanupbringing".[22]He embracedChristianityuntil halfway through his teenage years, at which point he concluded that thetheory of evolutionalone was a better explanation for life's complexity, and ceased believing in theChristian God.[20]He states: "The main residual reason why I was religious was from being so impressed with the complexity of life and feeling that it had to have a designer, and I think it was when I realised thatDarwinismwas a far superior explanation that pulled the rug out from under the argument of design. 
And that left me with nothing".[20]This understanding of atheism, combined with hisWesterncultural background, influences Dawkins as he describes himself in several interviews as a "cultural Christian" and a "culturalAnglican" in 2007, 2013[23][24][25]and 2024.[26][27]Dawkins explained, however, that this statement about his culture "means absolutely nothing as far as religious belief is concerned."[12] On his arrival in England from Nyasaland in 1949, at the age of eight, Dawkins joinedChafyn Grove School, inWiltshire,[28]where he says he was molested by a teacher.[29]From 1954 to 1959, he attendedOundle SchoolinNorthamptonshire, an Englishpublic schoolwith aChurch of Englandethos,[20]where he was in Laundimer House.[30]While at Oundle, Dawkins readBertrand Russell'sWhy I Am Not a Christianfor the first time.[31]He studiedzoologyatBalliol College, Oxford(the same college his father attended), graduating in 1962; while there, he was tutored byNikolaas Tinbergen, aNobel Prize-winningethologist. He graduated with asecond-class degree.[32] Dawkins continued as a research student under Tinbergen's supervision, receiving hisDoctor of Philosophy[33]degree by 1966, and remained a research assistant for another year.[34][35]Tinbergen was a pioneer in the study of animal behaviour, particularly in the areas ofinstinct, learning, and choice;[36]Dawkins's research in this period concerned models of animal decision-making.[37] From 1967 to 1969 Dawkins was an assistant professor of zoology at theUniversity of California, Berkeley. During this period, the students and faculty at UC Berkeley were largely opposed to the ongoingVietnam War, and Dawkins became involved in theanti-wardemonstrations and activities.[38]He returned to the University of Oxford in 1970 as a lecturer. In 1990, he became areaderin zoology. In 1995, he was appointedSimonyi Professor for the Public Understanding of Scienceat Oxford, a position that had been endowed byCharles Simonyiwith the express intention that the holder "be expected to make important contributions to the public understanding of some scientific field",[39]and that its first holder should be Richard Dawkins.[40]He held that professorship from 1995 until 2008.[41] Since 1970 he has been afellowofNew College, Oxford, and he is now anemeritusfellow.[42][43]He has delivered many lectures, including theHenry SidgwickMemorial Lecture (1989), the firstErasmus DarwinMemorial Lecture (1990), theMichael FaradayLecture (1991), theT. H. HuxleyMemorial Lecture (1992), theIrvineMemorial Lecture (1997), the Sheldon Doyle Lecture (1999), the Tinbergen Lecture (2004), and theTanner Lectures(2003).[34]In 1991 he gave theRoyal Institution Christmas Lectures for ChildrenonGrowing Up in the Universe. He also has edited several journals and has acted as an editorial advisor to theEncarta Encyclopediaand theEncyclopedia of Evolution. He is listed as a senior editor and a columnist of theCouncil for Secular Humanism'sFree Inquirymagazine and has been a member of the editorial board ofSkepticmagazine since its foundation.[44] Dawkins has sat on judging panels for awards such as theRoyal Society'sFaraday Awardand theBritish Academy Television Awards,[34]and has been president of the Biological Sciences section of theBritish Association for the Advancement of Science. 
In 2004,Balliol College, Oxford, instituted the Dawkins Prize, awarded for "outstanding research into the ecology and behaviour of animals whose welfare and survival may be endangered by human activities".[45]In September 2008 he retired from his professorship, announcing plans to "write a book aimed at youngsters in which he will warn them against believing in 'anti-scientific' fairytales".[46]In 2011 Dawkins joined the professoriate of theNew College of the Humanities, aprivate universityin London established by the philosopherA. C. Grayling, which opened in September 2012.[47] Dawkins announced his final speaking tour would take place in the autumn of 2024.[48] Dawkins is best known for his popularisation of thegeneas the principalunit of selectioninevolution; this view is most clearly set out in two of his books:[49][50] Dawkins has consistently been sceptical about non-adaptive processes in evolution (such asspandrels, described byStephen Jay GouldandRichard Lewontin)[53]and about selection at levels "above" that of the gene.[54]He is particularly sceptical about the practical possibility or importance ofgroup selectionas a basis for understandingaltruism.[55] Altruism appears at first to be an evolutionary paradox, since helping others costs precious resources and decreases one's own chances for survival, orfitness. Previously, many had interpreted altruism as an aspect of group selection, suggesting that individuals are doing what is best for the survival of the population or species as a whole. The British evolutionary biologistW. D. Hamiltonused gene-frequency analysis in hisinclusive fitnesstheory to show how hereditary altruistic traits can evolve if there is sufficient genetic similarity between actors and recipients of such altruism, including close relatives.[56][a]Hamilton's inclusive fitness has since been successfully applied to a wide range of organisms, includinghumans. Similarly, the evolutionary biologistRobert Trivers, thinking in terms of the gene-centred model, developed the theory ofreciprocal altruism, whereby one organism provides a benefit to another in the expectation of future reciprocation.[57]Dawkins popularised these ideas inThe Selfish Gene, and developed them in his own work.[58] In June 2012 Dawkins was highly critical of his fellow-biologistE. O. Wilson's 2012 bookThe Social Conquest of Earthas misunderstanding Hamilton's theory of kin selection.[59][60]Dawkins has also been strongly critical of theGaia hypothesisof the independent scientistJames Lovelock.[61][62][63] Critics of Dawkins's biological approach suggest that taking thegeneas the unit ofselection(a single event in which an individual either succeeds or fails to reproduce) is misleading. The gene could be better described, they say, as a unit ofevolution(the long-term changes inallelefrequencies in a population).[64]InThe Selfish Gene, Dawkins explains that he is using the biologistGeorge C. Williams's definition of the gene as "that which segregates and recombines with appreciable frequency".[65]Another common objection is that a gene cannot survive alone, but must cooperate with other genes to build an individual, and therefore a gene cannot be an independent "unit".[66]InThe Extended Phenotype, Dawkins suggests that from an individual gene's viewpoint, all other genes are part of the environment to which it is adapted. 
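The condition Hamilton derived for the spread of an altruism-promoting trait is conventionally summarized as Hamilton's rule, which the passage above alludes to but does not state; the notation here is the standard one rather than anything quoted from Dawkins:

rB > C

where r is the coefficient of genetic relatedness between the altruist and the beneficiary, B is the reproductive benefit to the beneficiary, and C is the reproductive cost to the altruist. For full siblings, for example, r = 1/2, so an altruistic act is favoured only if it delivers more than twice as much benefit as it costs.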
Advocates for higher levels of selection (such asRichard Lewontin,David Sloan WilsonandElliott Sober) suggest that there are many phenomena (including altruism) that gene-based selection cannot satisfactorily explain. The philosopherMary Midgley, with whom Dawkins clashed in print concerningThe Selfish Gene,[67][68]has criticised gene selection, memetics, and sociobiology as being excessivelyreductionist;[69]she has suggested that the popularity of Dawkins's work is due to factors in theZeitgeistsuch as the increased individualism of the Thatcher/Reagan decades.[70]Besides, other, more recent views and analysis on his popular science works also exist.[71] In a set of controversies over the mechanisms and interpretation of evolution (what has been called 'The Darwin Wars'),[72][73]one faction is often named after Dawkins, while the other faction is named after the American palaeontologistStephen Jay Gould, reflecting the pre-eminence of each as a populariser of the pertinent ideas.[74][75]In particular, Dawkins and Gould have been prominent commentators in the controversy oversociobiologyandevolutionary psychology, with Dawkins generally approving and Gould generally being critical.[76]A typical example of Dawkins's position is his scathing review ofNot in Our GenesbySteven Rose,Leon J. Kaminand Richard C. Lewontin.[77]Two other thinkers who are often considered to be allied with Dawkins on the subject areSteven PinkerandDaniel Dennett; Dennett has promoted a gene-centred view of evolution and defendedreductionismin biology.[78]Despite their academic disagreements, Dawkins and Gould did not have a hostile personal relationship, and Dawkins dedicated a large portion of his 2003 bookA Devil's Chaplainposthumously to Gould, who had died the previous year. When asked ifDarwinisminfluences his everyday apprehension of life, Dawkins says, "In one way it does. My eyes are constantly wide open to the extraordinary fact of existence. Not just human existence but the existence of life and how this breathtakingly powerful process, which is natural selection, has managed to take the very simple facts of physics and chemistry and build them up to redwood trees and humans. That's never far from my thoughts, that sense of amazement. On the other hand, I certainly don't allow Darwinism to influence my feelings about human social life", implying that he feels that individual human beings can opt out of the survival machine of Darwinism since they are freed by theconsciousnessof self.[19] In his bookThe Selfish Gene, Dawkinscoinedthe wordmeme(the behavioural equivalent of a gene) as a way to encourage readers to think about how Darwinian principles might be extended beyond the realm of genes.[79]It was intended as an extension of his "replicators" argument, but it took on a life of its own in the hands of other authors, such asDaniel DennettandSusan Blackmore. These popularisations then led to the emergence ofmemetics, a field from which Dawkins has distanced himself.[80] Dawkins'smemerefers to any cultural entity that an observer might consider a replicator of a certain idea or set of ideas. He hypothesised that people could view many cultural entities as capable of such replication, generally through communication and contact with humans, who have evolved as efficient (although not perfect) copiers of information and behaviour. 
Because memes are not always copied perfectly, they might become refined, combined, or otherwise modified with other ideas; this results in new memes, which may themselves prove more or less efficient replicators than their predecessors, thus providing a framework for a hypothesis ofcultural evolutionbased on memes, a notion that is analogous to the theory of biological evolution based on genes.[81] Although Dawkins invented the termmeme, he has not said that the idea was entirely novel,[82]and there have been other expressions for similar ideas in the past. For instance, John Laurent has suggested that the term may have derived from the work of the little-known German biologistRichard Semon.[83]Semon regarded "mneme" as the collective set of neural memory traces (conscious or subconscious) that were inherited, although such view would be considered asLamarckianby modern biologists.[84]Laurent also found the use of the termmnemeinMaurice Maeterlinck'sThe Life of the White Ant(1926), and Maeterlinck himself stated that he obtained the phrase from Semon's work.[83]In his own work, Maeterlinck tried to explain memory in termites and ants by stating that neural memory traces were added "upon the individual mneme".[84]Nonetheless, the authorJames Gleickdescribes Dawkins's concept of the meme as "his most famous memorable invention, far more influential than hisselfish genesor his later proselytising against religiosity".[85] In 2006 Dawkins founded theRichard Dawkins Foundation for Reason and Science(RDFRS), anon-profit organisation. RDFRS financed research on thepsychology of belief and religion, financed scientific education programs and materials, and publicised and supportedcharitable organisationsthat aresecularin nature.[86]In January 2016 it was announced that the foundation was merging with theCenter for Inquiry, with Dawkins becoming a member of the new organisation’s board of directors.[87] Dawkins was confirmed into theChurch of Englandat the age of 13, but began to grow sceptical of the beliefs. He said that his understanding of science and evolutionary processes led him to question how adults in positions of leadership in a civilised world could still be so uneducated in biology,[88]and is puzzled by how belief in God could remain among individuals who are sophisticated in science. Dawkins says that some physicists use 'God' as a metaphor for the general awe-inspiring mysteries of the universe, which he says causes confusion and misunderstanding among people who incorrectly think they are talking about a mystical being who forgives sins, transubstantiates wine, or makes people live after they die.[89] Dawkins disagrees withStephen Jay Gould's principle ofnonoverlapping magisteria (NOMA)[90]and suggests that theexistence of Godshould be treated as a scientific hypothesis like any other.[91]Dawkins became a prominentcritic of religionand has stated hisopposition to religionas twofold: religion is both a source of conflict and a justification for belief without evidence.[92]He considers faith—belief that is not based on evidence—as "one of the world's great evils".[93] On hisspectrum of theistic probability, which ranges from 1 (100 per cent certainty that a God or gods exist) to 7 (100 per cent certainty that a God or gods do not exist), Dawkins has said he is a 6.9, which represents a "de facto atheist" who thinks "I cannot know for certain but I think God is very improbable, and I live my life on the assumption that he is not there". 
When asked about his slight uncertainty, Dawkins quips, "I am agnostic to the extent that I am agnostic about fairies at the bottom of the garden".[94][95]In May 2014, at theHay Festivalin Wales, Dawkins explained that while he does not believe in the supernatural elements of the Christian faith, he still has nostalgia for the ceremonial side of religion.[96]In addition to beliefs in deities, Dawkins has criticised religious beliefs as irrational, such as thatJesus turned water into wine, that an embryo starts as a blob, thatmagic underwearwill protect you, thatJesus was resurrected, thatsemencomes from the spine, thatJesus walked on water, that the sun sets in a marsh, that theGarden of Edenexisted inAdam-ondi-Ahmanin theU.S. stateofMissouri, thatJesus' mother was a virgin, thatMuhammad split the Moon, and thatLazarus was raised from the dead.[104] Dawkins has risen to prominence in public debates concerning science and religion since the publication of his most popular book,The God Delusion, in 2006, which became an international bestseller.[105]As of 2015, more than three million copies have been sold, and the book has been translated into more than 30 languages.[106]Its success has been seen by many as indicative of a change in the contemporary culturalzeitgeistand has also been identified with the rise ofNew Atheism.[107]In the book, Dawkins contends that a supernatural creator almost certainly does not exist and that religious faith is adelusion—"a fixed false belief".[108]In his February 2002TEDtalk entitled "Militant atheism", Dawkins urged all atheists to openly state their position and to fight the incursion of the church into politics and science.[109]On 30 September 2007 Dawkins,Christopher Hitchens,Sam HarrisandDaniel Dennettmet at Hitchens's residence inWashington, D.C.for a private, unmoderated discussion that lasted two hours. It was recorded and entitled "The Four Horsemen".[110] Dawkins sees education andconsciousness-raisingas the primary tools in opposing what he considers to be religious dogma and indoctrination.[38][111][112]These tools include the fight against certain stereotypes, and he has adopted the termbrightas a way of associating positive public connotations with those who possess anaturalisticworldview.[112]He has given support to the idea of a free-thinking school,[113]which would not "indoctrinate children" but would instead teach children to ask for evidence and be skeptical, critical, and open-minded. Such a school, says Dawkins, should "teach comparative religion, and teach it properly without any bias towards particular religions, and including historically important but dead religions, such as those of ancient Greece and the Norse gods, if only because these, like the Abrahamic scriptures, are important for understanding English literature and European history".[114][115]Inspired by the consciousness-raising successes offeministsin arousing widespread embarrassment at the routine use of "he" instead of "she", Dawkins similarly suggests that phrases such as "Catholic child" and "Muslim child" should be considered as socially absurd as, for instance, "Marxist child", as he believes that children should not be classified based on the ideological or religious beliefs of their parents.[112] While some critics, such as his friend the writerChristopher Hitchens, the psychologistSteven Pinkerand theNobel Prize laureatesSirHarold Kroto,James D. 
WatsonandSteven Weinberghave defended Dawkins's stance on religion and praised his work,[116]others, including theNobel Prize-winning theoretical physicistPeter Higgs, the astrophysicistMartin Rees, the philosopher of scienceMichael Ruse, the literary criticTerry Eagleton, the philosopherRoger Scruton, the academic and social criticCamille Paglia, the atheist philosopher Daniel Came and the theologianAlister McGrath,[123]have criticised Dawkins on various grounds, including the assertion that his work simply serves as an atheist counterpart to religious fundamentalism rather than a productive critique of it, and that he has fundamentally misapprehended the foundations of thetheologicalpositions he claims to refute. Rees and Higgs, in particular, have both rejected Dawkins's confrontational stance toward religion as narrow and "embarrassing", with Higgs equating Dawkins with the religious fundamentalists he criticises.[124][125][126][127]The atheist philosopherJohn Grayhas denounced Dawkins as an "anti-religious missionary", whose assertions are "in no sense novel or original", suggesting that "transfixed in wonderment at the workings of his own mind, Dawkins misses much that is of importance in human beings". Gray has also criticised Dawkins's perceived allegiance to Darwin, stating that if "science, for Darwin, was a method of inquiry that enabled him to edge tentatively and humbly toward the truth, for Dawkins, science is an unquestioned view of the world".[128]A 2016 study found that many British scientists held an unfavourable view of Dawkins and his attitude towards religion.[129]In response to his critics, Dawkins maintains that theologians are no better than scientists in addressing deepcosmologicalquestions and that he is not a fundamentalist, as he is willing to change his mind in the face of new evidence.[130][131][132] Dawkins has faced backlash over some of his public comments about Islam. In 2013 Dawkinstweetedthat "All the world's Muslims have fewer Nobel Prizes thanTrinity College, Cambridge. They did great things in theMiddle Ages, though."[133]In 2016 Dawkins's invitation to speak at theNortheast Conference on Science and Skepticismwas withdrawn over his sharing of what was characterised as a "highly offensive video" satirically showing cartoon feminist and Islamist characters singing about the things they hold in common. In issuing the tweet Dawkins stated that it "Obviously doesn't apply to [sic] vast majority of feminists, among whom I count myself. But the minority are pernicious."[134] Dawkins also does not believe in anafterlife.[12] Dawkins is a prominent critic ofcreationism, a religious belief thathumanity,life, and theuniversewere created by adeity[135]without recourse to evolution.[136]He has described theyoung Earth creationistview that the Earth is only a few thousand years old as "a preposterous, mind-shrinking falsehood".[137]His 1986 book,The Blind Watchmaker, contains a sustained critique of theargument from design, an important creationist argument. In the book, Dawkins argues against thewatchmaker analogymade famous by the eighteenth-century EnglishtheologianWilliam Paleyvia his bookNatural Theology, in which Paley argues that just as a watch is too complicated and too functional to have sprung into existence merely by accident, so too must all living things—with their far greater complexity—be purposefully designed. 
Dawkins shares the view generally held by scientists that natural selection is sufficient to explain the apparent functionality and non-random complexity of the biological world, and can be said to play the role of watchmaker in nature, albeit as an automatic, unguided by any designer, nonintelligent,blindwatchmaker.[138] In 1986 Dawkins and the biologistJohn Maynard Smithparticipated in anOxford Uniondebate againstA. E. Wilder-Smith(aYoung Earth creationist) andEdgar Andrews(president of theBiblical Creation Society).[b]In general, however, Dawkins has followed the advice of his late colleagueStephen Jay Gouldand refused to participate in formal debates with creationists because "what they seek is the oxygen of respectability", and doing so would "give them this oxygen by the mere act ofengagingwith them at all". He suggests that creationists "don't mind being beaten in an argument. What matters is that we give them recognition by bothering to argue with them in public."[139]In a December 2004 interview with the American journalistBill Moyers, Dawkins said that "among the things that science does know, evolution is about as certain as anything we know". When Moyers questioned him on theuse of the wordtheory, Dawkins stated that "evolution has been observed. It's just that it hasn't been observed while it's happening." He added that "it is rather like a detective coming on a murder after the scene... the detective hasn't actually seen the murder take place, of course. But what you do see is a massive clue... Huge quantities of circumstantial evidence. It might as well be spelled out in words of English."[140] Dawkins has opposed the inclusion ofintelligent designin science education, describing it as "not a scientific argument at all, but a religious one".[141]He has been referred to in the media as "Darwin'sRottweiler",[142][143]a reference to the English biologistT. H. Huxley, who was known as "Darwin'sBulldog" for his advocacy ofCharles Darwin's evolutionary ideas. He has been a strong critic of the British organisationTruth in Science, which promotes the teaching of creationism in state schools, and whose work Dawkins has described as an "educational scandal". He plans to subsidise schools through theRichard Dawkins Foundation for Reason and Sciencewith the delivery of books, DVDs, and pamphlets that counteract their work.[144] Dawkins is an outspokenatheist[145]and a supporter of various atheist, secular,[146][147]andhumanist organisations,[148][149][150][151]includingHumanists UKand theBrights movement.[109]Dawkins suggests that atheists should be proud, not apologetic, stressing that atheism is evidence of a healthy, independent mind.[152]He hopes that the more atheists identify themselves, the more the public will become aware of just how many people are nonbelievers, thereby reducing the negative opinion of atheism among the religious majority.[153]Inspired by thegay rights movement, he endorsed theOut Campaignto encourage atheists worldwide to declare their stance publicly.[154]He supported a UK atheist advertising initiative, theAtheist Bus Campaignin 2008 and 2009, which aimed to raise funds to place atheist advertisements on buses in the London area.[155] Dawkins has expressed concern about the growth of the human population and about the matter ofoverpopulation.[156]InThe Selfish Gene, he briefly mentions population growth, giving the example ofLatin America, whose population, at the time the book was written, was doubling every 40 years. 
He is critical ofRoman Catholicattitudes tofamily planningandpopulation control, stating that leaders who forbidcontraceptionand "express a preference for 'natural' methods of population limitation" will get just such a method in the form ofstarvation.[157] As a supporter of theGreat Ape Project—a movement to extend certain moral and legalrightsto allgreat apes—Dawkins contributed the article 'Gaps in the Mind' to theGreat Ape Projectbook edited byPaola CavalieriandPeter Singer. In this essay, he criticises contemporary society's moral attitudes as being based on a "discontinuous,speciesistimperative".[158] Dawkins also regularly comments in newspapers andblogson contemporary political questions and is a frequent contributor to the online science and culture digest3 Quarks Daily.[159]His opinions include opposition to the2003 invasion of Iraq,[160]theBritish nuclear deterrent, the actions of then-US PresidentGeorge W. Bush,[161]and the ethics ofdesigner babies.[162]Several such articles were included inA Devil's Chaplain, an anthology of writings about science, religion, and politics. He is also a supporter ofRepublic's campaign to replace theBritish monarchywith a type of democraticrepublic.[163]Dawkins has described himself as aLabour Partyvoter in the 1970s[164]and voter for theLiberal Democratssince the party's creation. In 2009 he spoke at the party's conference in opposition toblasphemy laws,alternative medicine, andfaith schools. At the2010 general electionDawkins officially endorsed the Liberal Democrats, in support of their campaign for electoral reform and for their "refusal to pander to 'faith'".[165]In the run up to the2017 general election, Dawkins once again endorsed the Liberal Democrats and urged voters to join the party. In April 2021 Dawkins said on Twitter that "In 2015,Rachel Dolezal, a white chapter president of NAACP, was vilified for identifying as Black. Some men choose to identify as women, and some women choose to identify as men. You will be vilified if you deny that they literally are what they identify as. Discuss." After receiving criticism for this tweet, Dawkins responded by saying that "I do not intend to disparage trans people. I see that my academic "Discuss" question has been misconstrued as such and I deplore this. It was also not my intent to ally in any way with Republican bigots in US now exploiting this issue."[166]In a recent interview Dawkins stated regarding trans people that he does not "deny their existence nor does he in anyway oppress them". 
He objects to the statement that a "trans woman is a woman because that is a distortion of language and a distortion of science".[167]TheAmerican Humanist Associationretracted Dawkins' 1996 Humanist of the Year Award in response to these comments.[168]Robby SoaveofReasonmagazinecriticised the retraction, saying that "The drive to punish dissenters from various orthodoxies is itself illiberal."[169] Dawkins has voiced his support for theCampaign for the Establishment of a United Nations Parliamentary Assembly, an organisation that campaigns for democratic reform in the United Nations, and the creation of a more accountable international political system.[170] Dawkins identifies as a feminist.[171]He has said that feminism is "enormously important".[172]Dawkins has been accused by writers such asAmanda Marcotte, Caitlin Dickson and Adam Lee ofmisogyny, criticising those who speak about sexual harassment and abuse while ignoring sexism within theNew Atheist movement.[173][174][175] In 1998, in a book review published inNature, Dawkins expressed his appreciation for two books connected with theSokal affair:Higher Superstition: The Academic Left and Its Quarrels with SciencebyPaul R. GrossandNorman LevittandIntellectual ImposturesbyAlan SokalandJean Bricmont. These books are famous for their criticism ofpostmodernismin U.S. universities (namely in the departments of literary studies, anthropology, and other cultural studies).[176] Echoing many critics, Dawkins holds that postmodernism usesobscurantistlanguage to hide its lack of meaningful content. As an example he quotes the psychoanalystFélix Guattari: "We can clearly see that there is no bi-univocal correspondence between linear signifying links or archi-writing, depending on the author, and this multireferential, multi-dimensional machinic catalysis." This is explained, Dawkins maintains, by certain intellectuals' academic ambitions. Figures like Guattari orJacques Lacan, according to Dawkins, have nothing to say but want to reap the benefits of reputation and fame that derive from a successful academic career: "Suppose you are an intellectual impostor with nothing to say, but with strong ambitions to succeed in academic life, collect a coterie of reverent disciples and have students around the world anoint your pages with respectful yellow highlighter. What kind of literary style would you cultivate? Not a lucid one, surely, for clarity would expose your lack of content."[176] In 2024 Dawkins co-authored an op-ed inThe Boston Globewith Sokal criticising the use of the terminology "sex assigned at birth" instead of "sex" by theAmerican Medical Association, theAmerican Psychological Association, theAmerican Academy of Pediatricsand theCenters for Disease Control and Prevention. Dawkins and Sokal argued thatsexis an "objective biological reality" that "is determined at conception and is thenobservedat birth," rather thanassignedby a medical professional. Calling this "social constructionismgone amok," Dawkins and Sokal argued further that "distort[ing] the scientific facts in the service of a social cause" risks undermining trust in medical institutions.[177] In his role as professor for public understanding of science, Dawkins has been a critic ofpseudoscienceandalternative medicine. His 1998 bookUnweaving the RainbowconsidersJohn Keats's accusation that by explaining therainbow,Isaac Newtondiminished its beauty; Dawkins argues for the opposite conclusion. 
He suggests that deep space, the billions of years of life's evolution, and the microscopic workings of biology and heredity contain more beauty and wonder than do "myths" and "pseudoscience".[178]ForJohn Diamond's posthumously publishedSnake Oil, a book devoted to debunkingalternative medicine, Dawkins wrote a foreword in which he asserts that alternative medicine is harmful, if only because it distracts patients from more successful conventional treatments and gives people false hopes.[179]Dawkins states that "There is no alternative medicine. There is only medicine that works and medicine that doesn't work."[180]In his 2007Channel 4filmThe Enemies of Reason, Dawkins concluded that Britain is gripped by "an epidemic of superstitious thinking".[181] Continuing a long-standing partnership with Channel 4, Dawkins participated in a five-part television series,Genius of Britain, along with his fellow-scientistsStephen Hawking,James Dyson,Paul NurseandJim Al-Khalili. The series was first broadcast in June 2010, and focuses on major British scientific achievements throughout history.[182]In 2014 he joined the global awareness movementAsteroid Dayas a "100x Signatory".[183] He holdshonorary doctoratesin science from theUniversity of Huddersfield,University of Westminster,Durham University,[184]theUniversity of Hull, theUniversity of Antwerp, theUniversity of Oslo, theUniversity of Aberdeen,[185]Open University, theVrije Universiteit Brussel,[34]and theUniversity of Valencia.[186]He also holds honorary doctorates of letters from theUniversity of St Andrewsand theAustralian National University(HonLittD, 1996), and was elected Fellow of theRoyal Society of Literaturein 1997 and aFellow of the Royal Society (FRS) in 2001.[1][34]He is one of the patrons of theOxford University Scientific Society. In 1987 Dawkins received aRoyal Society of Literatureaward and aLos Angeles TimesLiterary Prize for his bookThe Blind Watchmaker. In the same year, he received a Sci. 
Tech Prize for Best Television Documentary Science Programme of the Year for his work on the BBC'sHorizonepisodeThe Blind Watchmaker.[34] In 1996 theAmerican Humanist Associationgave him their Humanist of the Year Award, but the award was withdrawn in 2021, with the statement that he "demean[ed] marginalized groups", includingtransgenderpeople, using "the guise of scientific discourse".[187][166] Other awards include theZoological Society of London'sSilver Medal(1989), the Finlay Innovation Award (1990), theMichael Faraday Award(1990), the Nakayama Prize (1994), the fifthInternational Cosmos Prize(1997), theKistler Prize(2001), theMedal of the Presidency of the Italian Republic(2001), the 2001 and 2012 Emperor Has No Clothes Award from theFreedom From Religion Foundation, the Bicentennial Kelvin Medal ofThe Royal Philosophical Society of Glasgow(2002),[34]the Golden Plate Award of theAmerican Academy of Achievement(2006),[188]and theNierenberg Prizefor Science in the Public Interest (2009).[189]He was awarded theDeschner Award, named after the Germananti-clericalauthorKarlheinz Deschner.[190]TheCommittee for Skeptical Inquiry(CSICOP) has awarded Dawkins their highest awardIn Praise of Reason(1992).[191] Dawkins toppedProspectmagazine's 2004 list of the top 100 public British intellectuals, as decided by the readers, receiving twice as many votes as the runner-up.[192][193]He was shortlisted as a candidate in their 2008 follow-up poll.[194]In a poll held byProspectin 2013, Dawkins was voted the world's top thinker based on 65 names chosen by a largely US and UK-based expert panel.[195] In 2005 theHamburg-basedAlfred Toepfer Foundationawarded him its annualShakespeare Prizein recognition of his "concise and accessible presentation of scientific knowledge". He won theLewis Thomas Prize for Writing about Sciencefor 2006, as well as theGalaxy British Book Awards's Author of the Year Award for 2007.[196]In the same year, he was listed byTimemagazine as one of the 100 most influential people in the world in 2007,[197]and was ranked 20th inThe Daily Telegraph's2007 list of 100 greatest living geniuses.[198] Since 2003 theAtheist Alliance Internationalhas awarded a prize during its annual conference, honouring an outstanding atheist whose work has done the most to raise public awareness of atheism during that year; it is known as theRichard Dawkins Award, in honour of Dawkins's own efforts.[199] In February 2010 Dawkins was named to theFreedom From Religion Foundation's Honorary Board of distinguished achievers.[200]In December 2024 Dawkins resigned from the board, along withSteven PinkerandJerry Coyne, after the Foundation took down the article "Biology is not Bigotry" by Coyne which supported a biological, rather than a psychological, view of sex.[201] In 2012 a Sri Lankan team ofichthyologistsheaded byRohan Pethiyagodanamed a newgenusof freshwater fishDawkinsiain Dawkins's honour. (Members of this genus were formerly members of the genusPuntius).[202] Dawkins has been married four times and has a daughter. On 19 August 1967 Dawkins married the ethologistMarian Stampin the Protestant church inAnnestown,County Waterford, Ireland;[203]they divorced in 1984. On 1 June 1984 he married Eve Barham (1951–1999) in Oxford. They had one daughter prior to their divorce.[204]In 1992 he married the actressLalla Ward[204]inKensington and Chelsea. Dawkins met her through their mutual friendDouglas Adams,[205]who had worked with her on theBBCtelevision seriesDoctor Who. 
Dawkins and Ward separated in 2016 and they later described the separation as "entirely amicable".[206] Dawkins is currently married to an illustrator named Jana Lenzová.[207] On 6 February 2016 Dawkins suffered a minor haemorrhagic stroke while at home.[208][209] He reported later that year that he had almost completely recovered.[210][211] Dawkins has made many television appearances on news shows providing his political opinions and especially his views as an atheist. He has been interviewed on the radio, often as part of his book tours. He has debated many religious figures. He has made many university speaking appearances, again often in coordination with his book tours. As of 2016, he has more than 60 credits in the Internet Movie Database where he appeared as himself.

a. ^ W. D. Hamilton influenced Dawkins, and the influence can be seen throughout Dawkins's book The Selfish Gene.[38] They became friends at Oxford and, following Hamilton's death in 2000, Dawkins wrote his obituary and organised a secular memorial service.[223]
b. ^ The debate ended with the motion "That the doctrine of creation is more valid than the theory of evolution" being defeated by 198 votes to 115.[224][225]
https://en.wikipedia.org/wiki/Richard_Dawkins
Peace war game is an iterated game originally played in academic groups and by computer simulation for years to study possible strategies of cooperation and aggression.[1] As peacemakers became richer over time, it became clear that making war had greater costs than initially anticipated. The only strategy that acquired wealth more rapidly was a "Genghis Khan", a constant aggressor making war continually to gain resources. This led to the development of the "provokable nice guy" strategy, a peacemaker until attacked.[2] Multiple players continue to gain wealth by cooperating with each other while bleeding the constant aggressor.[2] Historical analogues exist: besides the rapid dissolution of Genghis Khan's empire by the tribes it had originally conquered, the Hanseatic League for trade and mutual defense originated after the Viking Age partly out of concern about seaborne raiders. The peace war game is a variation of the iterated prisoner's dilemma in which the decisions (Cooperate, Defect) are replaced by (Peace, War). Strategies remain the same, with reciprocal altruism, "Tit for Tat", or "provokable nice guy" as the best deterministic one. This strategy is simply to make peace on the first iteration of the game; after that, the player does what the opponent did on the previous move. A slightly better strategy is "Tit for Tat with forgiveness": when the opponent makes war, on the next move the player sometimes makes peace anyway, with a small probability. This allows an escape from wasteful cycles of retribution, a motivation similar to the Rule of Ko in the game of Go. "Tit for Tat with forgiveness" is best when miscommunication is introduced, i.e. when one's move is incorrectly reported to the opponent. In a typical payoff matrix for two players (A, B) of one iteration of this game, a player's resources have a value of 2, half of which must be spent to wage war. In this case there exists a Nash equilibrium, a mutually best response for a single iteration, here (War, War), which is by definition heedless of consequences in later iterations. The optimality of "provokable nice guy" depends on the number of iterations; how many are necessary is likely tied to the payoff matrix and the probabilities of choosing.[3] A subgame perfect version of this strategy is "Contrite Tit-for-Tat", which is to make peace unless one is in "good standing" and one's opponent is not: a player stays in good standing by making peace with opponents in good standing, by making peace while itself in bad standing, or by making war while in good standing against an opponent who is not.[4] He who is skilled in war subdues the enemy without fighting.
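A small simulation makes these dynamics concrete. The one-round payoffs below are an assumption consistent with the description above (each player's resources are worth 2, waging war costs half of that, and an attacker is assumed to seize a peaceful opponent's resources); the strategy names follow the text, and the round count is arbitrary:

```python
import random

# Assumed one-round payoffs, consistent with "resources worth 2, half spent to wage war":
# (Peace, Peace) -> (2, 2); (War, Peace) -> (3, 0); (Peace, War) -> (0, 3); (War, War) -> (1, 1).
PAYOFF = {
    ("P", "P"): (2, 2),
    ("W", "P"): (3, 0),
    ("P", "W"): (0, 3),
    ("W", "W"): (1, 1),
}

def genghis_khan(my_hist, opp_hist):
    """Constant aggressor: always make war."""
    return "W"

def provokable_nice_guy(my_hist, opp_hist):
    """Tit for tat: make peace first, then copy the opponent's last move."""
    return "P" if not opp_hist else opp_hist[-1]

def forgiving_nice_guy(my_hist, opp_hist, p_forgive=0.1):
    """Tit for tat with forgiveness: occasionally make peace after an attack."""
    move = provokable_nice_guy(my_hist, opp_hist)
    if move == "W" and random.random() < p_forgive:
        return "P"
    return move

def play(strat_a, strat_b, rounds=200):
    """Iterate the game, returning each player's accumulated wealth."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strat_a(hist_a, hist_b), strat_b(hist_b, hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

if __name__ == "__main__":
    print("nice vs nice      :", play(provokable_nice_guy, provokable_nice_guy))
    print("nice vs aggressor :", play(provokable_nice_guy, genghis_khan))
    print("forgiving vs nice :", play(forgiving_nice_guy, provokable_nice_guy))
```

With these numbers, two cooperators earn 2 per round from each other, while the constant aggressor is held to roughly 1 per round once its opponent starts retaliating.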
https://en.wikipedia.org/wiki/Peace_war_game
Quid pro quo(Latin: "something for something"[2]) is aLatin phraseused inEnglishto mean an exchange of goods or services, in which one transfer is contingent upon the other; "a favor for a favor". Phrases with similar meanings include: "give and take", "tit for tat", "you scratch my back, and I'll scratch yours", "this for that,"[3]and "one hand washes the other". Other languages usedo ut desto express a reciprocal exchange, which aligns with the Latin meaning,[4]whereas the widespread use ofquid pro quoin English for this concept arose from a "misunderstanding".[5] The Latin phrasequid pro quooriginally implied that something had been substituted, meaning "something for something" as inI gave you sugar for salt. Early usage by English speakers followed the original Latin meaning, with occurrences in the 1530s where the term referred to substituting one medicine for another, whether unintentionally or fraudulently.[6][7]By the end of the same century,quid pro quoevolved into a more current use to describe equivalent exchanges.[8] In 1654, the expressionquid pro quowas used to generally refer to something done for personal gain or with the expectation of reciprocity in the textThe Reign of King Charles: An History Disposed into Annalls, with a somewhat positive connotation. It refers to the covenant with Christ as something "that prove not anudum pactum, a naked contract, withoutquid pro quo." Believers in Christ have to do their part in return, namely "foresake the devil and all his works".[9] Quid pro quowould go on to be used, by English speakers in legal and diplomatic contexts, as an exchange of equally valued goods or services and continues to be today.[10] The Latin phrase corresponding to the English usage ofquid pro quoisdo ut des(Latin for "I give, so that you may give").[11]Other languages continue to usedo ut desfor this purpose, whilequid pro quo(or its equivalentqui pro quo, as widely used in Italian, French, Spanish and Portuguese) still keeps its original meaning of something being unwittingly mistaken, or erroneously told or understood, instead of something else. Incommon law,quid pro quoindicates that an item or a service has been traded in return for something of value, usually when the propriety or equity of the transaction is in question. Acontractmust involveconsideration: that is, the exchange of something of value for something else of value. For example, when buying an item of clothing or a gallon of milk, a pre-determined amount of money is exchanged for the product the customer is purchasing; therefore, they have received something but have given up something of equal value in return. In the United Kingdom, the one-sidedness of a contract is covered by theUnfair Contract Terms Act 1977and various revisions and amendments to it; a clause can be held void or the entire contract void if it is deemed unfair (that is to say, one-sided and not aquid pro quo); however, this is a civil law and not a common law matter. Political donors must be resident in the UK. There are fixed limits to how much they may donate (£5000 in any single donation), and it must be recorded in the House of CommonsRegister of Members' Interestsor at theHouse of Commons Library; thequid pro quois strictly not allowed, that a donor can by his donation have some personal gain. This is overseen by theParliamentary Commissioner for Standards. 
There are also prohibitions on donations being given in the six weeks before the election for which it is being campaigned.[citation needed]It is also illegal for donors to supportparty political broadcasts, which are tightly regulated, free to air, and scheduled and allotted to the various parties according to a formula agreed by Parliament and enacted with theCommunications Act 2003. In the United States, if an exchange appears excessively one sided, courts in some jurisdictions may question whether aquid pro quodid actually exist and the contract may be heldvoid. In cases of "quid pro quo" business contracts, the term takes on a negative connotation because major corporations may cross ethical boundaries in order to enter into these very valuable, mutually beneficial, agreements with other major big businesses. In these deals, large sums of money are often at play and can consequently lead to promises of exclusive partnerships indefinitely or promises of distortion of economic reports.[12][13] In the U.S.,lobbyistsare legally entitled to support candidates that hold positions with which the donors agree, or which will benefit the donors. Such conduct becomesbriberyonly when there is an identifiable exchange between the contribution and official acts, previous or subsequent, and the termquid pro quodenotes such an exchange.[14] In terms of criminal law,quid pro quotends to get used as a euphemism for crimes such asextortionandbribery.[15] InUnited States labor law, workplace sexual harassment can take two forms; either "quid pro quo" harassment orhostile work environmentharassment.[16]"Quid pro quo" harassment takes place when a supervisor requires sex, sexual favors, or sexual contact from an employee/job candidate as a condition of their employment. Only supervisors who have the authority to make tangible employment actions (i.e. hire, fire, promote, etc.), can commit "quid pro quo" harassment.[17]The supervising harasser must have "immediate (or successively higher) authority over the employee."[18]The power dynamic between a supervisor and subordinate/job candidate is such that a supervisor could use their position of authority to extract sexual relations based on the subordinate/job candidate's need for employment. Co-workers and non-decision making supervisors cannot engage in "quid pro quo" harassment with other employees, but an employer could potentially be liable for the behavior of these employees under a hostile work environment claim. The harassing employee's status as a supervisor is significant because if the individual is found to be a supervisor then the employing company can be heldvicariously liablefor the actions of that supervisor.[19]UnderAgency law, the employer is held responsible for the actions of the supervisor because they were in a position of power within the company at the time of the harassment. 
To establish aprima faciecase of "quid pro quo" harassment, the plaintiff must prove that they were subjected to "unwelcome sexual conduct", that submission to such conduct was explicitly or implicitly a term of their employment, and submission to or rejection of this conduct was used as a basis for an employment decision,[20]as follows: Once the plaintiff has established these three factors, the employer can not assert an affirmative defense (such as the employer had a sexual harassment policy in place to prevent and properly respond to issues of sexual harassment), but can only dispute whether the unwelcome conduct did not in fact take place, the employee was not a supervisor, and that there was no tangible employment action involved. Although these terms are popular among lawyers and scholars, neither "hostile work environment" nor "quid pro quo" are found inTitle VII of the Civil Rights Act of 1964, which prohibits employers from discriminating on the basis of race, sex, color, national origin, and religion. The Supreme Court noted inBurlington Industries, Inc. v. Ellerththat these terms are useful in differentiating between cases where threats of harassment are "carried out and those where they are not or absent altogether," but otherwise these terms serve a limited purpose.[25]Therefore, sexual harassment can take place by a supervisor, and an employer can be potentially liable, even if that supervisor's behavior does not fall within the criteria of a "Quid pro quo" harassment claim. Quid pro quowas frequently mentioned during the firstimpeachment inquiryinto U.S. presidentDonald Trump, in reference to the charge that his request for an investigation ofHunter Bidenwas a precondition for the delivery of congressionally authorized military aid during a call with Ukrainian presidentVolodymyr Zelenskyy.[26] For languages that come from Latin, such as Italian, Portuguese, Spanish and French,quid pro quois used to define a misunderstanding or blunder made by the substituting of one thing for another. The Oxford English Dictionary describes this alternative definition in English as "now rare". TheVocabolario Treccani(an authoritative dictionary published by the EncyclopediaTreccani), under the entry "qui pro quo", states that the latter expression probably derives from the Latin used in late medieval pharmaceutical compilations.[27]This can be clearly seen from the work appearing precisely under this title, "Tractatus quid pro quo," (Treatise on what substitutes for what) in the medical collection headed up byMesue cum expositione Mondini super Canones universales...(Venice: per Joannem & Gregorium de gregorijs fratres, 1497), folios 334r-335r. Some examples of what could be used in place of what in this list are:Pro uva passa dactili('in place of raisins, [use] dates');Pro mirto sumac('in place of myrtle, [use] sumac');Pro fenugreco semen lini('in place of fenugreek, [use] flaxseed'), etc. This list was an essential resource in the medieval apothecary, especially for occasions when certain essential medicinal substances were not available. SatiristAmbrose Biercedefined political influence as "a visionaryquogiven in exchange for a substantialquid",[28]making a pun onquidas a form of currency.[29] Quidis slang forpounds, the British currency, originating on this expression as in:if you want the quo you'll need to give them some quid, which explains the plural withouts, as inI gave them five hundred quid.
https://en.wikipedia.org/wiki/Quid_pro_quo
In game theory, a trigger strategy is any of a class of strategies employed in a repeated non-cooperative game. A player using a trigger strategy initially cooperates but punishes the opponent if a certain level of defection (i.e., the trigger) is observed. The level of punishment and the sensitivity of the trigger vary with different trigger strategies.
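As an illustrative sketch (not a canonical implementation), the class can be parameterised by how many observed defections trip the trigger and how long the punishment lasts; the well-known grim trigger is the special case of a single-defection trigger with permanent punishment:

```python
class TriggerStrategy:
    """Cooperate until the opponent's defections reach a threshold, then punish
    by defecting for a fixed number of rounds (punish_rounds=None means punish
    forever, i.e. the 'grim trigger')."""

    def __init__(self, trigger=1, punish_rounds=None):
        self.trigger = trigger              # defections needed to trip the trigger
        self.punish_rounds = punish_rounds  # None = permanent punishment
        self.defections_seen = 0
        self.punish_left = 0
        self.grim = False

    def move(self):
        if self.grim or self.punish_left > 0:
            return "D"
        return "C"

    def observe(self, opponent_move):
        if self.punish_left > 0:
            self.punish_left -= 1
        if opponent_move == "D":
            self.defections_seen += 1
            if self.defections_seen >= self.trigger:
                if self.punish_rounds is None:
                    self.grim = True
                else:
                    self.punish_left = self.punish_rounds
                self.defections_seen = 0

# Grim trigger: one defection is punished forever.
grim = TriggerStrategy(trigger=1, punish_rounds=None)
print(grim.move())      # 'C'  (cooperate initially)
grim.observe("D")       # opponent defects, so the trigger trips
print(grim.move())      # 'D'  (punish from now on)

# A milder member of the class: two defections earn a three-round punishment.
mild = TriggerStrategy(trigger=2, punish_rounds=3)
```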
https://en.wikipedia.org/wiki/Trigger_strategy
Zero-sum gameis amathematical representationingame theoryandeconomic theoryof a situation that involves two competing entities, where the result is an advantage for one side and an equivalent loss for the other.[1]In other words, player one's gain is equivalent to player two's loss, with the result that the net improvement in benefit of the game is zero.[2] If the total gains of the participants are added up, and the total losses are subtracted, they will sum to zero. Thus,cutting a cake, where taking a more significant piece reduces the amount of cake available for others as much as it increases the amount available for that taker, is a zero-sum game ifall participants value each unit of cake equally. Other examples of zero-sum games in daily life include games likepoker,chess,sportandbridgewhere one person gains and another person loses, which results in a zero-net benefit for every player.[3]In the markets and financial instruments, futures contracts and options are zero-sum games as well.[4] In contrast,non-zero-sumdescribes a situation in which the interacting parties' aggregate gains and losses can be less than or more than zero. A zero-sum game is also called astrictly competitivegame, while non-zero-sum games can be either competitive or non-competitive. Zero-sum games are most often solved with theminimax theoremwhich is closely related tolinear programming duality,[5]or withNash equilibrium.Prisoner's Dilemmais a classic non-zero-sum game.[6] The zero-sum property (if one gains, another loses) means that any result of a zero-sum situation isPareto optimal. Generally, any game where all strategies are Pareto optimal is called a conflict game.[7][8] Zero-sum games are a specific example of constant sum games where the sum of each outcome is always zero.[9]Such games are distributive, not integrative; the pie cannot be enlarged by good negotiation. In situation where one decision maker's gain (or loss) does not necessarily result in the other decision makers' loss (or gain), they are referred to as non-zero-sum.[10]Thus, a country with an excess of bananas trading with another country for their excess of apples, where both benefit from the transaction, is in a non-zero-sum situation. Other non-zero-sum games are games in which the sum of gains and losses by the players is sometimes more or less than what they began with. The idea of Pareto optimal payoff in a zero-sum game gives rise to a generalized relative selfish rationality standard, the punishing-the-opponent standard, where both players always seek to minimize the opponent's payoff at a favourable cost to themselves rather than prefer more over less. The punishing-the-opponent standard can be used in both zero-sum games (e.g. warfare game, chess) and non-zero-sum games (e.g. pooling selection games).[11]The player in the game has a simple enough desire to maximise the profit for them, and the opponent wishes to minimise it.[12] For two-player finite zero-sum games, if the players are allowed to play amixed strategy, the game always has at least one equilibrium solution. The differentgame theoreticsolution conceptsofNash equilibrium,minimax, andmaximinall give the same solution. Notice that this is not true forpure strategy. A game'spayoff matrixis a convenient representation. Consider these situations as an example, the two-player zero-sum game pictured at right or above. 
The order of play proceeds as follows: The first player (red) chooses in secret one of the two actions 1 or 2; the second player (blue), unaware of the first player's choice, chooses in secret one of the three actions A, B or C. Then, the choices are revealed and each player's points total is affected according to the payoff for those choices. Example: Red chooses action 2 and Blue chooses action B. When the payoff is allocated, Red gains 20 points and Blue loses 20 points. In this example game, both players know the payoff matrix and attempt to maximize the number of their points. Red could reason as follows: "With action 2, I could lose up to 20 points and can win only 20, and with action 1 I can lose only 10 but can win up to 30, so action 1 looks a lot better." With similar reasoning, Blue would choose action C. If both players take these actions, Red will win 20 points. If Blue anticipates Red's reasoning and choice of action 1, Blue may choose action B, so as to win 10 points. If Red, in turn, anticipates this trick and goes for action 2, this wins Red 20 points. Émile BorelandJohn von Neumannhad the fundamental insight thatprobabilityprovides a way out of this conundrum. Instead of deciding on a definite action to take, the two players assign probabilities to their respective actions, and then use a random device which, according to these probabilities, chooses an action for them. Each player computes the probabilities so as to minimize the maximumexpectedpoint-loss independent of the opponent's strategy. This leads to alinear programmingproblem with the optimal strategies for each player. Thisminimaxmethod can compute probably optimal strategies for all two-player zero-sum games. For the example given above, it turns out that Red should choose action 1 with probability⁠4/7⁠and action 2 with probability⁠3/7⁠, and Blue should assign the probabilities 0,⁠4/7⁠, and⁠3/7⁠to the three actions A, B, and C. Red will then win⁠20/7⁠points on average per game. TheNash equilibriumfor a two-player, zero-sum game can be found by solving alinear programmingproblem. Suppose a zero-sum game has a payoff matrixMwhere elementMi,jis the payoff obtained when the minimizing player chooses pure strategyiand the maximizing player chooses pure strategyj(i.e. the player trying to minimize the payoff chooses the row and the player trying to maximize the payoff chooses the column). Assume every element ofMis positive. The game will have at least one Nash equilibrium. The Nash equilibrium can be found (Raghavan 1994, p. 740) by solving the following linear program to find a vectoru: ∑iui{\displaystyle \sum _{i}u_{i}}Subject to the constraints: The first constraint says each element of theuvector must be nonnegative, and the second constraint says each element of theM uvector must be at least 1. For the resultinguvector, the inverse of the sum of its elements is the value of the game. Multiplyinguby that value gives a probability vector, giving the probability that the maximizing player will choose each possible pure strategy. If the game matrix does not have all positive elements, add a constant to every element that is large enough to make them all positive. That will increase the value of the game by that constant, and will not affect the equilibrium mixed strategies for the equilibrium. The equilibrium mixed strategy for the minimizing player can be found by solving the dual of the given linear program. 
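The linear program above can be checked numerically. The payoff matrix itself is only referenced as a figure, so the matrix below is reconstructed from the narrative: Red's action 1 pays 30 against A, minus 10 against B and 20 against C, while action 2 pays 20 against B and minus 20 against C; the entry for action 2 against A is not pinned down by the narrative and is set to 10 here, which does not change the equilibrium. The use of scipy.optimize.linprog is an implementation choice, so treat this as a sketch rather than part of the article:

```python
import numpy as np
from scipy.optimize import linprog

# Payoff to Red (the maximizer), reconstructed from the narrative above.
# Rows: Blue's actions A, B, C (minimizer).  Columns: Red's actions 1, 2.
M = np.array([[ 30,  10],
              [-10,  20],
              [ 20, -20]], dtype=float)

shift = 1 - M.min()          # add a constant so every entry is positive
Mp = M + shift

# Red (maximizer): minimise sum(u) subject to Mp @ u >= 1, u >= 0.
res_u = linprog(c=np.ones(2), A_ub=-Mp, b_ub=-np.ones(3), method="highs")
u = res_u.x
value = 1.0 / u.sum() - shift       # inverse of the sum gives the (shifted) game value
red_mix = u / u.sum()               # u rescaled to a probability vector

# Blue (minimizer): the dual linear program, maximise sum(v) subject to Mp.T @ v <= 1.
res_v = linprog(c=-np.ones(3), A_ub=Mp.T, b_ub=np.ones(2), method="highs")
v = res_v.x
blue_mix = v / v.sum()

print("game value ≈", value)                    # expect 20/7 ≈ 2.857
print("Red plays (1, 2) with ≈", red_mix)       # expect (4/7, 3/7)
print("Blue plays (A, B, C) with ≈", blue_mix)  # expect (0, 4/7, 3/7)
```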
Alternatively, the equilibrium mixed strategy for the minimizing player can be found by using the above procedure to solve a modified payoff matrix which is the transpose and negation of M (with a constant added so that all entries are positive), then solving the resulting game. If all the solutions to the linear program are found, they will constitute all the Nash equilibria for the game. Conversely, any linear program can be converted into a two-player, zero-sum game by using a change of variables that puts it in the form of the above equations; thus such games are equivalent to linear programs in general.[13] If avoiding a zero-sum game is an action available to the players with some probability, then avoiding is always an equilibrium strategy for at least one player in a zero-sum game. For any two-player zero-sum game where a zero-zero draw is impossible or non-credible once play has started, such as poker, there is no Nash equilibrium strategy other than avoiding the play. Even if there is a credible zero-zero draw after a zero-sum game has started, it is no better than the avoiding strategy. In this sense, it is interesting that reward-as-you-go in optimal choice computation prevails over all two-player zero-sum games with respect to starting the game or not.[14] The most common and simplest example from the subfield of social psychology is the concept of "social traps". In some cases pursuing individual personal interest can enhance the collective well-being of the group, but in other situations all parties pursuing personal interest results in mutually destructive behaviour. Copeland's review notes that an n-player non-zero-sum game can be converted into an (n+1)-player zero-sum game, where the (n+1)th player, denoted the fictitious player, receives the negative of the sum of the gains of the other n players (the global gain or loss).[15] There are manifold relationships between the players in a zero-sum three-person game. In a zero-sum two-person game, anything one player wins is necessarily lost by the other and vice versa, so there is always an absolute antagonism of interests, and the situation is similar in the three-person game.[16] A particular move of a player in a zero-sum three-person game may be clearly beneficial to him while being harmful to both other players, or beneficial to one and harmful to the other opponent.[16] In particular, parallelism of interests between two players makes cooperation desirable; it may happen that a player has a choice among various policies: he can enter into a parallelism of interests with another player by adjusting his conduct, or do the opposite, and he can choose with which of the other two players he prefers to build such a parallelism, and to what extent.[16] As a typical example of a zero-sum three-person game, suppose Player 1 chooses to defend while Players 2 and 3 choose to attack: Players 2 and 3 each gain one point while Player 1 loses two points, since the points are taken from Player 1, and it is evident that Players 2 and 3 have a parallelism of interests. Studies show that the entry of low-cost airlines into the Hong Kong market brought in $671 million in revenue and resulted in an outflow of $294 million. The replacement effect should therefore be considered when introducing a new model, since it leads to economic leakage and injection; introducing new models thus requires caution. For example, if the number of new airlines departing from and arriving at the airport is the same, the economic contribution to the host city may be a zero-sum game.
Because for Hong Kong, the consumption of overseas tourists in Hong Kong is income, while the consumption of Hong Kong residents in opposite cities is outflow. In addition, the introduction of new airlines can also have a negative impact on existing airlines. Consequently, when a new aviation model is introduced, feasibility tests need to be carried out in all aspects, taking into account the economic inflow and outflow and displacement effects caused by the model. Derivativestrading may be considered a zero-sum game, as each dollar gained by one party in a transaction must be lost by the other, hence yielding a net transfer of wealth of zero.[18] Anoptionscontract - whereby a buyer purchases a derivative contract which provides them with the right to buy an underlying asset from a seller at a specified strike price before a specified expiration date – is an example of a zero-sum game. Afutures contract– whereby a buyer purchases a derivative contract to buy an underlying asset from the seller for a specified price on a specified date – is also an example of a zero-sum game.[19]This is because the fundamental principle of these contracts is that they are agreements between two parties, and any gain made by one party must be matched by a loss sustained by the other. If the price of the underlying asset increases before the expiration date the buyer may exercise/ close the options/ futures contract. The buyers gain and corresponding sellers loss will be the difference between the strike price and value of the underlying asset at that time. Hence, the net transfer of wealth is zero. Swaps, which involve the exchange of cash flows from two different financial instruments, are also considered a zero-sum game.[20]Consider a standardinterest rate swapwhereby Firm A pays a fixed rate and receives a floating rate; correspondingly Firm B pays a floating rate and receives a fixed rate. If rates increase, then Firm A will gain, and Firm B will lose by the rate differential (floating rate – fixed rate). If rates decrease, then Firm A will lose, and Firm B will gain by the rate differential (fixed rate – floating rate). Whilst derivatives trading may be considered a zero-sum game, it is important to remember that this is not an absolute truth. Thefinancial marketsare complex and multifaceted, with a range of participants engaging in a variety of activities. While some trades may result in a simple transfer of wealth from one party to another, the market as a whole is not purely competitive, and many transactions serve important economic functions. Thestock marketis an excellent example of a positive-sum game, often erroneously labelled as a zero-sum game. This is a zero-sum fallacy: the perception that one trader in the stock market may only increase the value of their holdings if another trader decreases their holdings.[21] The primary goal of the stock market is to match buyers and sellers, but the prevailing price is the one which equilibrates supply and demand. Stock prices generally move according to changes in future expectations, such as acquisition announcements, upside earnings surprises, or improved guidance.[22] For instance, if Company C announces a deal to acquire Company D, and investors believe that the acquisition will result in synergies and hence increased profitability for Company C, there will be an increased demand for Company C stock. In this scenario, all existing holders of Company C stock will enjoy gains without incurring any corresponding measurable losses to other players. 
Furthermore, in the long run, the stock market is a positive-sum game. As economic growth occurs, demand increases, output increases, companies grow, and company valuations increase, leading to value creation and wealth addition in the market. It has been theorized byRobert Wrightin his bookNonzero: The Logic of Human Destiny, that society becomes increasingly non-zero-sum as it becomes more complex, specialized, and interdependent. In 1944,John von NeumannandOskar Morgensternproved that any non-zero-sum game fornplayers is equivalent to a zero-sum game withn+ 1 players; the (n+ 1)th player representing the global profit or loss.[23] Zero-sum games and particularly their solutions are commonly misunderstood by critics ofgame theory, usually with respect to the independence andrationalityof the players, as well as to the interpretation of utility functions[further explanation needed]. Furthermore, the word "game" does not imply the model is valid only for recreationalgames.[5] Politics is sometimes called zero sum[24][25][26]because in common usage the idea of a stalemate is perceived to be "zero sum"; politics andmacroeconomicsare not zero-sum games, however, because they do not constituteconserved systems.[citation needed]. Applying zero-sum game logic to scenarios that are not zero-sum in nature may lead to incorrect conclusions. Zero-sum games are based on the notion that one person's win will result in the other person's loss, so naturally there is competition between the two. There are scenarios, however, where that is not the case. For instance, in some cases both sides cooperating and working together could result in both sides benefitting more than they otherwise would have. By applying zero-sum logic, we in turn create an unnecessary, and potentially harmful, sense of scarcity and hostility.[27]Therefore, it is critical to make sure that zero-sum applications fit the given context. In psychology,zero-sum thinkingrefers to the perception that a given situation is like a zero-sum game, where one person's gain is equal to another person's loss.
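The (n + 1)-player construction mentioned above (Copeland's fictitious player, and the 1944 von Neumann and Morgenstern result) can be illustrated in a few lines: append a player whose payoff is the negative of everyone else's total gain. The two-player payoffs below are made-up, Prisoner's-Dilemma-style numbers used only for illustration:

```python
import numpy as np

# A two-player non-zero-sum game.  p1[i, j] and p2[i, j] are the payoffs to
# player 1 and player 2 when they choose actions i and j respectively.
p1 = np.array([[3, 0],
               [5, 1]])
p2 = np.array([[3, 5],
               [0, 1]])

# Add a fictitious third player who receives the negative of the total gain.
p3 = -(p1 + p2)

# Every outcome of the enlarged three-player game now sums to zero.
assert np.all(p1 + p2 + p3 == 0)
print("fictitious player's payoffs:\n", p3)
```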
https://en.wikipedia.org/wiki/Zero-sum_game
Instatistics,kernel Fisher discriminant analysis (KFD),[1]also known asgeneralized discriminant analysis[2]andkernel discriminant analysis,[3]is a kernelized version oflinear discriminant analysis(LDA). It is named afterRonald Fisher. Intuitively, the idea of LDA is to find a projection where class separation is maximized. Given two sets of labeled data,C1{\displaystyle \mathbf {C} _{1}}andC2{\displaystyle \mathbf {C} _{2}}, we can calculate the mean value of each class,m1{\displaystyle \mathbf {m} _{1}}andm2{\displaystyle \mathbf {m} _{2}}, as whereli{\displaystyle l_{i}}is the number of examples of classCi{\displaystyle \mathbf {C} _{i}}. The goal of linear discriminant analysis is to give a large separation of the class means while also keeping the in-class variance small.[4]This is formulated as maximizing, with respect tow{\displaystyle \mathbf {w} }, the following ratio: whereSB{\displaystyle \mathbf {S} _{B}}is the between-class covariance matrix andSW{\displaystyle \mathbf {S} _{W}}is the total within-class covariance matrix: The maximum of the above ratio is attained at as can be shown by theLagrange multipliermethod (sketch of proof): MaximizingJ(w)=wTSBwwTSWw{\displaystyle J(\mathbf {w} )={\frac {\mathbf {w} ^{\text{T}}\mathbf {S} _{B}\mathbf {w} }{\mathbf {w} ^{\text{T}}\mathbf {S} _{W}\mathbf {w} }}}is equivalent to maximizing subject to This, in turn, is equivalent to maximizingI(w,λ)=wTSBw−λ(wTSWw−1){\displaystyle I(\mathbf {w} ,\lambda )=\mathbf {w} ^{\text{T}}\mathbf {S} _{B}\mathbf {w} -\lambda (\mathbf {w} ^{\text{T}}\mathbf {S} _{W}\mathbf {w} -1)}, whereλ{\displaystyle \lambda }is the Lagrange multiplier. At the maximum, the derivatives ofI(w,λ){\displaystyle I(\mathbf {w} ,\lambda )}with respect tow{\displaystyle \mathbf {w} }andλ{\displaystyle \lambda }must be zero. TakingdIdw=0{\displaystyle {\frac {dI}{d\mathbf {w} }}=\mathbf {0} }yields which is trivially satisfied byw=cSW−1(m2−m1){\displaystyle \mathbf {w} =c\mathbf {S} _{W}^{-1}(\mathbf {m} _{2}-\mathbf {m} _{1})}andλ=(m2−m1)TSW−1(m2−m1).{\displaystyle \lambda =(\mathbf {m} _{2}-\mathbf {m} _{1})^{\text{T}}\mathbf {S} _{W}^{-1}(\mathbf {m} _{2}-\mathbf {m} _{1}).} To extend LDA to non-linear mappings, the data, given as theℓ{\displaystyle \ell }pointsxi,{\displaystyle \mathbf {x} _{i},}can be mapped to a new feature space,F,{\displaystyle F,}via some functionϕ.{\displaystyle \phi .}In this new feature space, the function that needs to be maximized is[1] where and Further, note thatw∈F{\displaystyle \mathbf {w} \in F}. Explicitly computing the mappingsϕ(xi){\displaystyle \phi (\mathbf {x} _{i})}and then performing LDA can be computationally expensive, and in many cases intractable. For example,F{\displaystyle F}may be infinite dimensional. Thus, rather than explicitly mapping the data toF{\displaystyle F}, the data can be implicitly embedded by rewriting the algorithm in terms ofdot productsand using kernel functions in which the dot product in the new feature space is replaced by a kernel function,k(x,y)=ϕ(x)⋅ϕ(y){\displaystyle k(\mathbf {x} ,\mathbf {y} )=\phi (\mathbf {x} )\cdot \phi (\mathbf {y} )}. 
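Before the kernelised reformulation that follows, here is a minimal numerical sketch of the linear solution given above, w = c S_W^{-1}(m_2 − m_1), on made-up two-class data; the data and the unit-norm scaling are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
# Two made-up Gaussian classes in 2-D.
C1 = rng.normal(loc=[0.0, 0.0], scale=1.0, size=(100, 2))
C2 = rng.normal(loc=[2.0, 1.0], scale=1.0, size=(100, 2))

m1, m2 = C1.mean(axis=0), C2.mean(axis=0)

# Total within-class scatter: sum over classes of sum_x (x - m_i)(x - m_i)^T.
S_W = (C1 - m1).T @ (C1 - m1) + (C2 - m2).T @ (C2 - m2)

# Fisher's discriminant direction: w proportional to S_W^{-1} (m2 - m1).
w = np.linalg.solve(S_W, m2 - m1)
w /= np.linalg.norm(w)

# Fisher ratio J(w) = (w^T S_B w) / (w^T S_W w), with S_B = (m2 - m1)(m2 - m1)^T.
S_B = np.outer(m2 - m1, m2 - m1)
J = (w @ S_B @ w) / (w @ S_W @ w)
print("direction:", w, " Fisher ratio:", J)
```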
LDA can be reformulated in terms of dot products by first noting thatw{\displaystyle \mathbf {w} }will have an expansion of the form[5] Then note that where The numerator ofJ(w){\displaystyle J(\mathbf {w} )}can then be written as: Similarly, the denominator can be written as with thenth,mth{\displaystyle n^{\text{th}},m^{\text{th}}}component ofKj{\displaystyle \mathbf {K} _{j}}defined ask(xn,xmj),I{\displaystyle k(\mathbf {x} _{n},\mathbf {x} _{m}^{j}),\mathbf {I} }is the identity matrix, and1lj{\displaystyle \mathbf {1} _{l_{j}}}the matrix with all entries equal to1/lj{\displaystyle 1/l_{j}}. This identity can be derived by starting out with the expression forwTSWϕw{\displaystyle \mathbf {w} ^{\text{T}}\mathbf {S} _{W}^{\phi }\mathbf {w} }and using the expansion ofw{\displaystyle \mathbf {w} }and the definitions ofSWϕ{\displaystyle \mathbf {S} _{W}^{\phi }}andmiϕ{\displaystyle \mathbf {m} _{i}^{\phi }} With these equations for the numerator and denominator ofJ(w){\displaystyle J(\mathbf {w} )}, the equation forJ{\displaystyle J}can be rewritten as Then, differentiating and setting equal to zero gives Since only the direction ofw{\displaystyle \mathbf {w} }, and hence the direction ofα,{\displaystyle \mathbf {\alpha } ,}matters, the above can be solved forα{\displaystyle \mathbf {\alpha } }as Note that in practice,N{\displaystyle \mathbf {N} }is usually singular and so a multiple of the identity is added to it[1] Given the solution forα{\displaystyle \mathbf {\alpha } }, the projection of a new data point is given by[1] The extension to cases where there are more than two classes is relatively straightforward.[2][6][7]Letc{\displaystyle c}be the number of classes. Then multi-class KFD involves projecting the data into a(c−1){\displaystyle (c-1)}-dimensional space using(c−1){\displaystyle (c-1)}discriminant functions This can be written in matrix notation where thewi{\displaystyle \mathbf {w} _{i}}are the columns ofW{\displaystyle \mathbf {W} }.[6]Further, the between-class covariance matrix is now wheremϕ{\displaystyle \mathbf {m} ^{\phi }}is the mean of all the data in the new feature space. The within-class covariance matrix is The solution is now obtained by maximizing The kernel trick can again be used and the goal of multi-class KFD becomes[7] whereA=[α1,…,αc−1]{\displaystyle A=[\mathbf {\alpha } _{1},\ldots ,\mathbf {\alpha } _{c-1}]}and TheMi{\displaystyle \mathbf {M} _{i}}are defined as in the above section andM∗{\displaystyle \mathbf {M} _{*}}is defined as A∗{\displaystyle \mathbf {A} ^{*}}can then be computed by finding the(c−1){\displaystyle (c-1)}leading eigenvectors ofN−1M{\displaystyle \mathbf {N} ^{-1}\mathbf {M} }.[7]Furthermore, the projection of a new input,xt{\displaystyle \mathbf {x} _{t}}, is given by[7] where theith{\displaystyle i^{th}}component ofKt{\displaystyle \mathbf {K} _{t}}is given byk(xi,xt){\displaystyle k(\mathbf {x} _{i},\mathbf {x} _{t})}. In both two-class and multi-class KFD, the class label of a new input can be assigned as[7] wherey¯j{\displaystyle {\bar {\mathbf {y} }}_{j}}is the projected mean for classj{\displaystyle j}andD(⋅,⋅){\displaystyle D(\cdot ,\cdot )}is a distance function. Kernel discriminant analysis has been used in a variety of applications. These include:
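A sketch of the two-class kernel solution referred to above, alpha proportional to (N + eps*I)^{-1}(M_2 − M_1), using an RBF kernel. The kernel width, the regularisation constant eps, and the toy data are arbitrary choices, so this is an illustration of the construction rather than a reference implementation:

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """k(x, y) = exp(-gamma * ||x - y||^2), computed for all pairs of rows."""
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

rng = np.random.default_rng(1)
X1 = rng.normal([0, 0], 1.0, size=(60, 2))                 # class 1 (made-up)
X2 = np.vstack([rng.normal([3, 3], 1.0, size=(30, 2)),
                rng.normal([-3, 3], 1.0, size=(30, 2))])   # class 2 (bimodal)
X = np.vstack([X1, X2])
l, l1, l2 = len(X), len(X1), len(X2)

K1 = rbf_kernel(X, X1)        # l x l1 matrix with entries k(x_n, x_m^1)
K2 = rbf_kernel(X, X2)        # l x l2

M1 = K1.mean(axis=1)          # (M_j)_n = (1/l_j) * sum_m k(x_n, x_m^j)
M2 = K2.mean(axis=1)

# N = sum_j K_j (I - 1_{l_j}) K_j^T, where 1_{l_j} has all entries equal to 1/l_j.
N = (K1 @ (np.eye(l1) - np.full((l1, l1), 1.0 / l1)) @ K1.T +
     K2 @ (np.eye(l2) - np.full((l2, l2), 1.0 / l2)) @ K2.T)
N += 1e-3 * np.eye(l)         # add a multiple of the identity (regularisation)

alpha = np.linalg.solve(N, M2 - M1)

# Projection of new points: y(x) = sum_i alpha_i k(x_i, x).
X_new = rng.normal([0, 1], 1.0, size=(5, 2))
y_new = rbf_kernel(X_new, X) @ alpha
print(y_new)
```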
https://en.wikipedia.org/wiki/Kernel_Fisher_discriminant_analysis
Multiple Discriminant Analysis (MDA) is a multivariate dimensionality reduction technique. It has been used to predict signals as diverse as neural memory traces and corporate failure.[1] MDA is not directly used to perform classification; it merely supports classification by yielding a compressed signal amenable to classification. The method described in Duda et al. (2001) §3.8.3 projects the multivariate signal down to an (M − 1)-dimensional space, where M is the number of categories. MDA is useful because most classifiers are strongly affected by the curse of dimensionality. In other words, when signals are represented in very-high-dimensional spaces, the classifier's performance is catastrophically impaired by the overfitting problem. This problem is reduced by compressing the signal down to a lower-dimensional space, as MDA does. MDA has been used to reveal neural codes.[2][3]
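In practice the projection described here is the multi-class Fisher/LDA projection, so a sketch can lean on scikit-learn's LinearDiscriminantAnalysis transform (an implementation choice, not something the article prescribes); the 50-dimensional signals below are made up:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
# Made-up 50-dimensional signals from M = 3 categories.
X = np.vstack([rng.normal(mu, 1.0, size=(40, 50)) for mu in (0.0, 0.5, 1.0)])
y = np.repeat([0, 1, 2], 40)

# Project down to at most M - 1 = 2 dimensions before classification.
mda = LinearDiscriminantAnalysis(n_components=2)
Z = mda.fit_transform(X, y)
print(Z.shape)   # (120, 2): a compressed signal amenable to classification
```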
https://en.wikipedia.org/wiki/Multiple_discriminant_analysis
Multidimensional scaling(MDS) is a means of visualizing the level ofsimilarityof individual cases of a data set. MDS is used to translate distances between each pair ofn{\textstyle n}objects in a set into a configuration ofn{\textstyle n}points mapped into an abstractCartesian space.[1] More technically, MDS refers to a set of relatedordinationtechniques used ininformation visualization, in particular to display the information contained in adistance matrix. It is a form ofnon-linear dimensionality reduction. Given a distance matrix with the distances between each pair of objects in a set, and a chosen number of dimensions,N, an MDSalgorithmplaces each object intoN-dimensionalspace (a lower-dimensional representation) such that the between-object distances are preserved as well as possible. ForN= 1, 2, and 3, the resulting points can be visualized on ascatter plot.[2] Core theoretical contributions to MDS were made byJames O. RamsayofMcGill University, who is also regarded as the founder offunctional data analysis.[3] MDS algorithms fall into ataxonomy, depending on the meaning of the input matrix: It is also known asPrincipal Coordinates Analysis(PCoA), Torgerson Scaling or Torgerson–Gower scaling. It takes an input matrix giving dissimilarities between pairs of items and outputs a coordinate matrix whose configuration minimizes aloss functioncalledstrain,[2]which is given byStrainD(x1,x2,...,xn)=(∑i,j(bij−xiTxj)2∑i,jbij2)1/2,{\displaystyle {\text{Strain}}_{D}(x_{1},x_{2},...,x_{n})={\Biggl (}{\frac {\sum _{i,j}{\bigl (}b_{ij}-x_{i}^{T}x_{j}{\bigr )}^{2}}{\sum _{i,j}b_{ij}^{2}}}{\Biggr )}^{1/2},}wherexi{\displaystyle x_{i}}denote vectors inN-dimensional space,xiTxj{\displaystyle x_{i}^{T}x_{j}}denotes the scalar product betweenxi{\displaystyle x_{i}}andxj{\displaystyle x_{j}}, andbij{\displaystyle b_{ij}}are the elements of the matrixB{\displaystyle B}defined on step 2 of the following algorithm, which are computed from the distances. It is a superset of classical MDS that generalizes the optimization procedure to a variety of loss functions and input matrices of known distances with weights and so on. A useful loss function in this context is calledstress, which is often minimized using a procedure calledstress majorization. Metric MDS minimizes the cost function called “stress” which is a residual sum of squares: StressD(x1,x2,...,xn)=∑i≠j=1,...,n(dij−‖xi−xj‖)2.{\displaystyle {\text{Stress}}_{D}(x_{1},x_{2},...,x_{n})={\sqrt {\sum _{i\neq j=1,...,n}{\bigl (}d_{ij}-\|x_{i}-x_{j}\|{\bigr )}^{2}}}.} Metric scaling uses a power transformation with a user-controlled exponentp{\textstyle p}:dijp{\textstyle d_{ij}^{p}}and−dij2p{\textstyle -d_{ij}^{2p}}for distance. In classical scalingp=1.{\textstyle p=1.}Non-metric scaling is defined by the use of isotonic regression to nonparametrically estimate a transformation of the dissimilarities. In contrast to metric MDS, non-metric MDS finds both anon-parametricmonotonicrelationship between the dissimilarities in the item-item matrix and the Euclidean distances between items, and the location of each item in the low-dimensional space. Letdij{\displaystyle d_{ij}}be the dissimilarity between pointsi,j{\displaystyle i,j}. Letd^ij=‖xi−xj‖{\displaystyle {\hat {d}}_{ij}=\|x_{i}-x_{j}\|}be the Euclidean distance between embedded pointsxi,xj{\displaystyle x_{i},x_{j}}. 
Now, for each choice of the embedded pointsxi{\displaystyle x_{i}}and is a monotonically increasing functionf{\displaystyle f}, define the "stress" function: S(x1,...,xn;f)=∑i<j(f(dij)−d^ij)2∑i<jd^ij2.{\displaystyle S(x_{1},...,x_{n};f)={\sqrt {\frac {\sum _{i<j}{\bigl (}f(d_{ij})-{\hat {d}}_{ij}{\bigr )}^{2}}{\sum _{i<j}{\hat {d}}_{ij}^{2}}}}.} The factor of∑i<jd^ij2{\displaystyle \sum _{i<j}{\hat {d}}_{ij}^{2}}in the denominator is necessary to prevent a "collapse". Suppose we define insteadS=∑i<j(f(dij)−d^ij)2{\displaystyle S={\sqrt {\sum _{i<j}{\bigl (}f(d_{ij})-{\hat {d}}_{ij})^{2}}}}, then it can be trivially minimized by settingf=0{\displaystyle f=0}, then collapse every point to the same point. A few variants of this cost function exist. MDS programs automatically minimize stress in order to obtain the MDS solution. The core of a non-metric MDS algorithm is a twofold optimization process. First the optimal monotonic transformation of the proximities has to be found. Secondly, the points of a configuration have to be optimally arranged, so that their distances match the scaled proximities as closely as possible. NMDS needs to optimize two objectives simultaneously. This is usually done iteratively: Louis Guttman's smallest space analysis (SSA) is an example of a non-metric MDS procedure. An extension of metric multidimensional scaling, in which the target space is an arbitrary smooth non-Euclidean space. In cases where the dissimilarities are distances on a surface and the target space is another surface, GMDS allows finding the minimum-distortion embedding of one surface into another.[5] An extension of MDS, known as Super MDS, incorporates both distance and angle information for improved source localization. Unlike traditional MDS, which uses only distance measurements, Super MDS processes both distance and angle-of-arrival (AOA) data algebraically (without iteration) to achieve better accuracy.[6] The method proceeds in the following steps: This concise approach reduces the need for multiple anchors and enhances localization precision by leveraging angle constraints. The data to be analyzed is a collection ofM{\displaystyle M}objects (colors, faces, stocks, . . .) on which adistance functionis defined, These distances are the entries of thedissimilarity matrix The goal of MDS is, givenD{\displaystyle D}, to findM{\displaystyle M}vectorsx1,…,xM∈RN{\displaystyle x_{1},\ldots ,x_{M}\in \mathbb {R} ^{N}}such that where‖⋅‖{\displaystyle \|\cdot \|}is avector norm. In classical MDS, this norm is theEuclidean distance, but, in a broader sense, it may be ametricor arbitrary distance function.[7]For example, when dealing with mixed-type data that contain numerical as well as categorical descriptors,Gower's distanceis a common alternative.[citation needed] In other words, MDS attempts to find a mapping from theM{\displaystyle M}objects intoRN{\displaystyle \mathbb {R} ^{N}}such that distances are preserved. If the dimensionN{\displaystyle N}is chosen to be 2 or 3, we may plot the vectorsxi{\displaystyle x_{i}}to obtain a visualization of the similarities between theM{\displaystyle M}objects. Note that the vectorsxi{\displaystyle x_{i}}are not unique: With the Euclidean distance, they may be arbitrarily translated, rotated, and reflected, since these transformations do not change the pairwise distances‖xi−xj‖{\displaystyle \|x_{i}-x_{j}\|}. 
(Note: The symbol R{\displaystyle \mathbb {R} } indicates the set of real numbers, and the notation RN{\displaystyle \mathbb {R} ^{N}} refers to the Cartesian product of N{\displaystyle N} copies of R{\displaystyle \mathbb {R} }, which is an N{\displaystyle N}-dimensional vector space over the field of the real numbers.) There are various approaches to determining the vectors xi{\displaystyle x_{i}}. Usually, MDS is formulated as an optimization problem, where (x1,…,xM){\displaystyle (x_{1},\ldots ,x_{M})} is found as a minimizer of some cost function, for example the stress function defined above. A solution may then be found by numerical optimization techniques. For some particularly chosen cost functions, minimizers can be stated analytically in terms of matrix eigendecompositions.[2] There are several steps in conducting MDS research, from formulating the problem and obtaining the dissimilarity data, through choosing the number of dimensions and running the algorithm, to interpreting and validating the resulting configuration.
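For the classical (Torgerson) case mentioned above, the strain-minimizing configuration can be written down via an eigendecomposition of the double-centred matrix B. The following is a minimal sketch under the assumption that D is a symmetric matrix of Euclidean-like dissimilarities; it is not a substitute for a full MDS implementation.

```python
import numpy as np

def classical_mds(D, N=2):
    """Classical MDS / Principal Coordinates Analysis (minimal sketch).

    D : (M, M) symmetric dissimilarity matrix.
    N : target dimension.
    Returns an (M, N) coordinate matrix X with B ≈ X X^T, where
    B = -1/2 * J D^2 J is the double-centred matrix of scalar products."""
    M = D.shape[0]
    J = np.eye(M) - np.ones((M, M)) / M        # centring matrix
    B = -0.5 * J @ (D ** 2) @ J                # double centring
    eigval, eigvec = np.linalg.eigh(B)         # eigenvalues in ascending order
    idx = np.argsort(eigval)[::-1][:N]         # keep the N largest
    L = np.sqrt(np.clip(eigval[idx], 0, None))
    return eigvec[:, idx] * L                  # coordinates, unique up to rotation/reflection

# Example: recover a 2-D configuration from its own distance matrix.
rng = np.random.default_rng(0)
X_true = rng.normal(size=(5, 2))
D = np.sqrt(((X_true[:, None, :] - X_true[None, :, :]) ** 2).sum(axis=-1))
X_rec = classical_mds(D, N=2)
# X_rec matches X_true up to translation, rotation, and reflection.
```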
https://en.wikipedia.org/wiki/Multidimensional_scaling
Preference regression is a statistical technique used by marketers to determine consumers' preferred core benefits. It usually supplements product positioning techniques like multidimensional scaling or factor analysis and is used to create ideal vectors on perceptual maps. Starting with raw data from surveys, researchers apply positioning techniques to determine important dimensions and plot the position of competing products on these dimensions. Next they regress the survey data against the dimensions. The independent variables are the data collected in the survey. The dependent variable is the preference datum. As with all regression methods, weights are fitted so as to best predict the preference data. The resultant regression line is referred to as an ideal vector because the slope of the vector is the ratio of the preferences for the two dimensions. If all the data are used in the regression, the program will derive a single equation and hence a single ideal vector. This tends to be a blunt instrument, so researchers refine the process with cluster analysis. This creates clusters that reflect market segments. Separate preference regressions are then done on the data within each segment. This provides an ideal vector for each segment. The self-stated importance method is an alternative in which direct survey data, rather than statistical imputations, are used to determine the weightings. A third method is conjoint analysis, in which an additive method is used.
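Purely as an illustrative sketch (the dimension scores and preference ratings below are invented), an ideal vector for one segment can be obtained by ordinary least squares, regressing the preference ratings on the two perceptual-map dimensions; the ratio of the fitted weights is the slope of the ideal vector described above.

```python
import numpy as np

# Hypothetical perceptual-map coordinates of five products on two dimensions
# (e.g. "economy" and "performance"), plus one segment's mean preference ratings.
dims = np.array([[0.2, 0.9],
                 [0.5, 0.7],
                 [0.8, 0.4],
                 [0.3, 0.3],
                 [0.9, 0.8]])
preference = np.array([6.5, 6.0, 4.5, 3.5, 7.5])

# Ordinary least squares with an intercept: preference ≈ b0 + b1*dim1 + b2*dim2.
A = np.column_stack([np.ones(len(dims)), dims])
coef, *_ = np.linalg.lstsq(A, preference, rcond=None)
b0, b1, b2 = coef

# The ideal vector points in the direction (b1, b2); the ratio b2/b1 is the
# relative preference weight of the second dimension versus the first.
ideal_vector = np.array([b1, b2]) / np.hypot(b1, b2)
print(ideal_vector)
```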
https://en.wikipedia.org/wiki/Preference_regression
Inphysicsandmathematics, theHelmholtz decomposition theoremor thefundamental theorem of vector calculus[1][2][3][4][5][6][7]states that certain differentiablevector fieldscan be resolved into the sum of anirrotational(curl-free) vector field and asolenoidal(divergence-free) vector field. Inphysics, often only the decomposition of sufficientlysmooth, rapidly decayingvector fieldsin three dimensions is discussed. It is named afterHermann von Helmholtz. For a vector fieldF∈C1(V,Rn){\displaystyle \mathbf {F} \in C^{1}(V,\mathbb {R} ^{n})}defined on a domainV⊆Rn{\displaystyle V\subseteq \mathbb {R} ^{n}}, a Helmholtz decomposition is a pair of vector fieldsG∈C1(V,Rn){\displaystyle \mathbf {G} \in C^{1}(V,\mathbb {R} ^{n})}andR∈C1(V,Rn){\displaystyle \mathbf {R} \in C^{1}(V,\mathbb {R} ^{n})}such that:F(r)=G(r)+R(r),G(r)=−∇Φ(r),∇⋅R(r)=0.{\displaystyle {\begin{aligned}\mathbf {F} (\mathbf {r} )&=\mathbf {G} (\mathbf {r} )+\mathbf {R} (\mathbf {r} ),\\\mathbf {G} (\mathbf {r} )&=-\nabla \Phi (\mathbf {r} ),\\\nabla \cdot \mathbf {R} (\mathbf {r} )&=0.\end{aligned}}}Here,Φ∈C2(V,R){\displaystyle \Phi \in C^{2}(V,\mathbb {R} )}is ascalar potential,∇Φ{\displaystyle \nabla \Phi }is itsgradient, and∇⋅R{\displaystyle \nabla \cdot \mathbf {R} }is thedivergenceof the vector fieldR{\displaystyle \mathbf {R} }. The irrotational vector fieldG{\displaystyle \mathbf {G} }is called agradient fieldandR{\displaystyle \mathbf {R} }is called asolenoidalfieldorrotation field. This decomposition does not exist for all vector fields and is notunique.[8] The Helmholtz decomposition in three dimensions was first described in 1849[9]byGeorge Gabriel Stokesfor a theory ofdiffraction.Hermann von Helmholtzpublished his paper on somehydrodynamicbasic equations in 1858,[10][11]which was part of his research on theHelmholtz's theoremsdescribing the motion of fluid in the vicinity of vortex lines.[11]Their derivation required the vector fields to decay sufficiently fast at infinity. Later, this condition could be relaxed, and the Helmholtz decomposition could be extended to higher dimensions.[8][12][13]ForRiemannian manifolds, the Helmholtz-Hodge decomposition usingdifferential geometryandtensor calculuswas derived.[8][11][14][15] The decomposition has become an important tool for many problems intheoretical physics,[11][14]but has also found applications inanimation,computer visionas well asrobotics.[15] Many physics textbooks restrict the Helmholtz decomposition to the three-dimensional space and limit its application to vector fields that decay sufficiently fast at infinity or tobump functionsthat are defined on abounded domain. Then, avector potentialA{\displaystyle A}can be defined, such that the rotation field is given byR=∇×A{\displaystyle \mathbf {R} =\nabla \times \mathbf {A} }, using thecurlof a vector field.[16] LetF{\displaystyle \mathbf {F} }be a vector field on a bounded domainV⊆R3{\displaystyle V\subseteq \mathbb {R} ^{3}}, which is twice continuously differentiable insideV{\displaystyle V}, and letS{\displaystyle S}be the surface that encloses the domainV{\displaystyle V}with outward surface normaln^′{\displaystyle \mathbf {\hat {n}} '}. 
ThenF{\displaystyle \mathbf {F} }can be decomposed into a curl-free component and a divergence-free component as follows:[17] F=−∇Φ+∇×A,{\displaystyle \mathbf {F} =-\nabla \Phi +\nabla \times \mathbf {A} ,}whereΦ(r)=14π∫V∇′⋅F(r′)|r−r′|dV′−14π∮Sn^′⋅F(r′)|r−r′|dS′A(r)=14π∫V∇′×F(r′)|r−r′|dV′−14π∮Sn^′×F(r′)|r−r′|dS′{\displaystyle {\begin{aligned}\Phi (\mathbf {r} )&={\frac {1}{4\pi }}\int _{V}{\frac {\nabla '\cdot \mathbf {F} (\mathbf {r} ')}{|\mathbf {r} -\mathbf {r} '|}}\,\mathrm {d} V'-{\frac {1}{4\pi }}\oint _{S}\mathbf {\hat {n}} '\cdot {\frac {\mathbf {F} (\mathbf {r} ')}{|\mathbf {r} -\mathbf {r} '|}}\,\mathrm {d} S'\\[8pt]\mathbf {A} (\mathbf {r} )&={\frac {1}{4\pi }}\int _{V}{\frac {\nabla '\times \mathbf {F} (\mathbf {r} ')}{|\mathbf {r} -\mathbf {r} '|}}\,\mathrm {d} V'-{\frac {1}{4\pi }}\oint _{S}\mathbf {\hat {n}} '\times {\frac {\mathbf {F} (\mathbf {r} ')}{|\mathbf {r} -\mathbf {r} '|}}\,\mathrm {d} S'\end{aligned}}} and∇′{\displaystyle \nabla '}is thenabla operatorwith respect tor′{\displaystyle \mathbf {r'} }, notr{\displaystyle \mathbf {r} }. IfV=R3{\displaystyle V=\mathbb {R} ^{3}}and is therefore unbounded, andF{\displaystyle \mathbf {F} }vanishes faster than1/r{\displaystyle 1/r}asr→∞{\displaystyle r\to \infty }, then one has[18] Φ(r)=14π∫R3∇′⋅F(r′)|r−r′|dV′A(r)=14π∫R3∇′×F(r′)|r−r′|dV′{\displaystyle {\begin{aligned}\Phi (\mathbf {r} )&={\frac {1}{4\pi }}\int _{\mathbb {R} ^{3}}{\frac {\nabla '\cdot \mathbf {F} (\mathbf {r} ')}{|\mathbf {r} -\mathbf {r} '|}}\,\mathrm {d} V'\\[8pt]\mathbf {A} (\mathbf {r} )&={\frac {1}{4\pi }}\int _{\mathbb {R} ^{3}}{\frac {\nabla '\times \mathbf {F} (\mathbf {r} ')}{|\mathbf {r} -\mathbf {r} '|}}\,\mathrm {d} V'\end{aligned}}}This holds in particular ifF{\displaystyle \mathbf {F} }is twice continuously differentiable inR3{\displaystyle \mathbb {R} ^{3}}and of bounded support. Suppose we have a vector functionF(r){\displaystyle \mathbf {F} (\mathbf {r} )}of which we know the curl,∇×F{\displaystyle \nabla \times \mathbf {F} }, and the divergence,∇⋅F{\displaystyle \nabla \cdot \mathbf {F} }, in the domain and the fields on the boundary. Writing the function using thedelta functionin the formδ3(r−r′)=−14π∇21|r−r′|,{\displaystyle \delta ^{3}(\mathbf {r} -\mathbf {r} ')=-{\frac {1}{4\pi }}\nabla ^{2}{\frac {1}{|\mathbf {r} -\mathbf {r} '|}}\,,}where∇2{\displaystyle \nabla ^{2}}is theLaplacianoperator, we have F(r)=∫VF(r′)δ3(r−r′)dV′=∫VF(r′)(−14π∇21|r−r′|)dV′{\displaystyle {\begin{aligned}\mathbf {F} (\mathbf {r} )&=\int _{V}\mathbf {F} \left(\mathbf {r} '\right)\delta ^{3}(\mathbf {r} -\mathbf {r} ')\mathrm {d} V'\\&=\int _{V}\mathbf {F} (\mathbf {r} ')\left(-{\frac {1}{4\pi }}\nabla ^{2}{\frac {1}{\left|\mathbf {r} -\mathbf {r} '\right|}}\right)\mathrm {d} V'\end{aligned}}} Now, changing the meaning of∇2{\displaystyle \nabla ^{2}}to thevector Laplacianoperator (we have the right to do so because this laplacian is with respect tor{\displaystyle \mathbf {r} }therefore it sees the vector fieldF(r′){\displaystyle \mathbf {F} (\mathbf {r'} )}as a constant), we can moveF(r′){\displaystyle \mathbf {F} (\mathbf {r'} )}to the right of the∇2{\displaystyle \nabla ^{2}}operator. 
F(r)=∫V−14π∇2F(r′)|r−r′|dV′=−14π∇2∫VF(r′)|r−r′|dV′=−14π[∇(∇⋅∫VF(r′)|r−r′|dV′)−∇×(∇×∫VF(r′)|r−r′|dV′)]=−14π[∇(∫VF(r′)⋅∇1|r−r′|dV′)+∇×(∫VF(r′)×∇1|r−r′|dV′)]=−14π[−∇(∫VF(r′)⋅∇′1|r−r′|dV′)−∇×(∫VF(r′)×∇′1|r−r′|dV′)]{\displaystyle {\begin{aligned}\mathbf {F} (\mathbf {r} )&=\int _{V}-{\frac {1}{4\pi }}\nabla ^{2}{\frac {\mathbf {F} (\mathbf {r} ')}{\left|\mathbf {r} -\mathbf {r} '\right|}}\mathrm {d} V'\\&=-{\frac {1}{4\pi }}\nabla ^{2}\int _{V}{\frac {\mathbf {F} (\mathbf {r} ')}{\left|\mathbf {r} -\mathbf {r} '\right|}}\mathrm {d} V'\\&=-{\frac {1}{4\pi }}\left[\nabla \left(\nabla \cdot \int _{V}{\frac {\mathbf {F} (\mathbf {r} ')}{\left|\mathbf {r} -\mathbf {r} '\right|}}\mathrm {d} V'\right)-\nabla \times \left(\nabla \times \int _{V}{\frac {\mathbf {F} (\mathbf {r} ')}{\left|\mathbf {r} -\mathbf {r} '\right|}}\mathrm {d} V'\right)\right]\\&=-{\frac {1}{4\pi }}\left[\nabla \left(\int _{V}\mathbf {F} (\mathbf {r} ')\cdot \nabla {\frac {1}{\left|\mathbf {r} -\mathbf {r} '\right|}}\mathrm {d} V'\right)+\nabla \times \left(\int _{V}\mathbf {F} (\mathbf {r} ')\times \nabla {\frac {1}{\left|\mathbf {r} -\mathbf {r} '\right|}}\mathrm {d} V'\right)\right]\\&=-{\frac {1}{4\pi }}\left[-\nabla \left(\int _{V}\mathbf {F} (\mathbf {r} ')\cdot \nabla '{\frac {1}{\left|\mathbf {r} -\mathbf {r} '\right|}}\mathrm {d} V'\right)-\nabla \times \left(\int _{V}\mathbf {F} (\mathbf {r} ')\times \nabla '{\frac {1}{\left|\mathbf {r} -\mathbf {r} '\right|}}\mathrm {d} V'\right)\right]\end{aligned}}} where we have used the vector Laplacian identity:∇2a=∇(∇⋅a)−∇×(∇×a),{\displaystyle \nabla ^{2}\mathbf {a} =\nabla (\nabla \cdot \mathbf {a} )-\nabla \times (\nabla \times \mathbf {a} )\ ,} differentiation/integration with respect tor′{\displaystyle \mathbf {r} '}by∇′/dV′,{\displaystyle \nabla '/\mathrm {d} V',}and in the last line, linearity of function arguments:∇1|r−r′|=−∇′1|r−r′|.{\displaystyle \nabla {\frac {1}{\left|\mathbf {r} -\mathbf {r} '\right|}}=-\nabla '{\frac {1}{\left|\mathbf {r} -\mathbf {r} '\right|}}\ .} Then using the vectorial identities a⋅∇ψ=−ψ(∇⋅a)+∇⋅(ψa)a×∇ψ=ψ(∇×a)−∇×(ψa){\displaystyle {\begin{aligned}\mathbf {a} \cdot \nabla \psi &=-\psi (\nabla \cdot \mathbf {a} )+\nabla \cdot (\psi \mathbf {a} )\\\mathbf {a} \times \nabla \psi &=\psi (\nabla \times \mathbf {a} )-\nabla \times (\psi \mathbf {a} )\end{aligned}}} we getF(r)=−14π[−∇(−∫V∇′⋅F(r′)|r−r′|dV′+∫V∇′⋅F(r′)|r−r′|dV′)−∇×(∫V∇′×F(r′)|r−r′|dV′−∫V∇′×F(r′)|r−r′|dV′)].{\displaystyle {\begin{aligned}\mathbf {F} (\mathbf {r} )=-{\frac {1}{4\pi }}{\bigg [}&-\nabla \left(-\int _{V}{\frac {\nabla '\cdot \mathbf {F} \left(\mathbf {r} '\right)}{\left|\mathbf {r} -\mathbf {r} '\right|}}\mathrm {d} V'+\int _{V}\nabla '\cdot {\frac {\mathbf {F} \left(\mathbf {r} '\right)}{\left|\mathbf {r} -\mathbf {r} '\right|}}\mathrm {d} V'\right)\\&-\nabla \times \left(\int _{V}{\frac {\nabla '\times \mathbf {F} \left(\mathbf {r} '\right)}{\left|\mathbf {r} -\mathbf {r} '\right|}}\mathrm {d} V'-\int _{V}\nabla '\times {\frac {\mathbf {F} \left(\mathbf {r} '\right)}{\left|\mathbf {r} -\mathbf {r} '\right|}}\mathrm {d} V'\right){\bigg ]}.\end{aligned}}} Thanks to thedivergence theoremthe equation can be rewritten as F(r)=−14π[−∇(−∫V∇′⋅F(r′)|r−r′|dV′+∮Sn^′⋅F(r′)|r−r′|dS′)−∇×(∫V∇′×F(r′)|r−r′|dV′−∮Sn^′×F(r′)|r−r′|dS′)]=−∇[14π∫V∇′⋅F(r′)|r−r′|dV′−14π∮Sn^′⋅F(r′)|r−r′|dS′]+∇×[14π∫V∇′×F(r′)|r−r′|dV′−14π∮Sn^′×F(r′)|r−r′|dS′]{\displaystyle {\begin{aligned}\mathbf {F} (\mathbf {r} )&=-{\frac {1}{4\pi }}{\bigg [}-\nabla \left(-\int _{V}{\frac {\nabla '\cdot \mathbf {F} \left(\mathbf {r} 
'\right)}{\left|\mathbf {r} -\mathbf {r} '\right|}}\mathrm {d} V'+\oint _{S}\mathbf {\hat {n}} '\cdot {\frac {\mathbf {F} \left(\mathbf {r} '\right)}{\left|\mathbf {r} -\mathbf {r} '\right|}}\mathrm {d} S'\right)\\&\qquad \qquad -\nabla \times \left(\int _{V}{\frac {\nabla '\times \mathbf {F} \left(\mathbf {r} '\right)}{\left|\mathbf {r} -\mathbf {r} '\right|}}\mathrm {d} V'-\oint _{S}\mathbf {\hat {n}} '\times {\frac {\mathbf {F} \left(\mathbf {r} '\right)}{\left|\mathbf {r} -\mathbf {r} '\right|}}\mathrm {d} S'\right){\bigg ]}\\&=-\nabla \left[{\frac {1}{4\pi }}\int _{V}{\frac {\nabla '\cdot \mathbf {F} \left(\mathbf {r} '\right)}{\left|\mathbf {r} -\mathbf {r} '\right|}}\mathrm {d} V'-{\frac {1}{4\pi }}\oint _{S}\mathbf {\hat {n}} '\cdot {\frac {\mathbf {F} \left(\mathbf {r} '\right)}{\left|\mathbf {r} -\mathbf {r} '\right|}}\mathrm {d} S'\right]\\&\quad +\nabla \times \left[{\frac {1}{4\pi }}\int _{V}{\frac {\nabla '\times \mathbf {F} \left(\mathbf {r} '\right)}{\left|\mathbf {r} -\mathbf {r} '\right|}}\mathrm {d} V'-{\frac {1}{4\pi }}\oint _{S}\mathbf {\hat {n}} '\times {\frac {\mathbf {F} \left(\mathbf {r} '\right)}{\left|\mathbf {r} -\mathbf {r} '\right|}}\mathrm {d} S'\right]\end{aligned}}} with outward surface normaln^′{\displaystyle \mathbf {\hat {n}} '}. Defining Φ(r)≡14π∫V∇′⋅F(r′)|r−r′|dV′−14π∮Sn^′⋅F(r′)|r−r′|dS′{\displaystyle \Phi (\mathbf {r} )\equiv {\frac {1}{4\pi }}\int _{V}{\frac {\nabla '\cdot \mathbf {F} \left(\mathbf {r} '\right)}{\left|\mathbf {r} -\mathbf {r} '\right|}}\mathrm {d} V'-{\frac {1}{4\pi }}\oint _{S}\mathbf {\hat {n}} '\cdot {\frac {\mathbf {F} \left(\mathbf {r} '\right)}{\left|\mathbf {r} -\mathbf {r} '\right|}}\mathrm {d} S'}A(r)≡14π∫V∇′×F(r′)|r−r′|dV′−14π∮Sn^′×F(r′)|r−r′|dS′{\displaystyle \mathbf {A} (\mathbf {r} )\equiv {\frac {1}{4\pi }}\int _{V}{\frac {\nabla '\times \mathbf {F} \left(\mathbf {r} '\right)}{\left|\mathbf {r} -\mathbf {r} '\right|}}\mathrm {d} V'-{\frac {1}{4\pi }}\oint _{S}\mathbf {\hat {n}} '\times {\frac {\mathbf {F} \left(\mathbf {r} '\right)}{\left|\mathbf {r} -\mathbf {r} '\right|}}\mathrm {d} S'} we finally obtainF=−∇Φ+∇×A.{\displaystyle \mathbf {F} =-\nabla \Phi +\nabla \times \mathbf {A} .} If(Φ1,A1){\displaystyle (\Phi _{1},{\mathbf {A} _{1}})}is a Helmholtz decomposition ofF{\displaystyle \mathbf {F} }, then(Φ2,A2){\displaystyle (\Phi _{2},{\mathbf {A} _{2}})}is another decomposition if, and only if, Proof: Setλ=Φ2−Φ1{\displaystyle \lambda =\Phi _{2}-\Phi _{1}}andB=A2−A1{\displaystyle {\mathbf {B} =A_{2}-A_{1}}}. According to the definition of the Helmholtz decomposition, the condition is equivalent to Taking the divergence of each member of this equation yields∇2λ=0{\displaystyle \nabla ^{2}\lambda =0}, henceλ{\displaystyle \lambda }is harmonic. Conversely, given any harmonic functionλ{\displaystyle \lambda },∇λ{\displaystyle \nabla \lambda }is solenoidal since Thus, according to the above section, there exists a vector fieldAλ{\displaystyle {\mathbf {A} }_{\lambda }}such that∇λ=∇×Aλ{\displaystyle \nabla \lambda =\nabla \times {\mathbf {A} }_{\lambda }}. IfA′λ{\displaystyle {\mathbf {A} '}_{\lambda }}is another such vector field, thenC=Aλ−A′λ{\displaystyle \mathbf {C} ={\mathbf {A} }_{\lambda }-{\mathbf {A} '}_{\lambda }}fulfills∇×C=0{\displaystyle \nabla \times {\mathbf {C} }=0}, henceC=∇φ{\displaystyle C=\nabla \varphi }for some scalar fieldφ{\displaystyle \varphi }. The term "Helmholtz theorem" can also refer to the following. 
LetCbe asolenoidal vector fieldandda scalar field onR3which are sufficiently smooth and which vanish faster than1/r2at infinity. Then there exists a vector fieldFsuch that ∇⋅F=dand∇×F=C;{\displaystyle \nabla \cdot \mathbf {F} =d\quad {\text{ and }}\quad \nabla \times \mathbf {F} =\mathbf {C} ;} if additionally the vector fieldFvanishes asr→ ∞, thenFis unique.[18] In other words, a vector field can be constructed with both a specified divergence and a specified curl, and if it also vanishes at infinity, it is uniquely specified by its divergence and curl. This theorem is of great importance inelectrostatics, sinceMaxwell's equationsfor the electric and magnetic fields in the static case are of exactly this type.[18]The proof is by a construction generalizing the one given above: we set F=∇(G(d))−∇×(G(C)),{\displaystyle \mathbf {F} =\nabla ({\mathcal {G}}(d))-\nabla \times ({\mathcal {G}}(\mathbf {C} )),} whereG{\displaystyle {\mathcal {G}}}represents theNewtonian potentialoperator. (When acting on a vector field, such as∇ ×F, it is defined to act on each component.) The Helmholtz decomposition can be generalized by reducing the regularity assumptions (the need for the existence of strong derivatives). SupposeΩis a bounded, simply-connected,Lipschitz domain. Everysquare-integrablevector fieldu∈ (L2(Ω))3has anorthogonaldecomposition:[19][20][21] u=∇φ+∇×A{\displaystyle \mathbf {u} =\nabla \varphi +\nabla \times \mathbf {A} } whereφis in theSobolev spaceH1(Ω)of square-integrable functions onΩwhose partial derivatives defined in thedistributionsense are square integrable, andA∈H(curl, Ω), the Sobolev space of vector fields consisting of square integrable vector fields with square integrable curl. For a slightly smoother vector fieldu∈H(curl, Ω), a similar decomposition holds: u=∇φ+v{\displaystyle \mathbf {u} =\nabla \varphi +\mathbf {v} } whereφ∈H1(Ω),v∈ (H1(Ω))d. Note that in the theorem stated here, we have imposed the condition that ifF{\displaystyle \mathbf {F} }is not defined on a bounded domain, thenF{\displaystyle \mathbf {F} }shall decay faster than1/r{\displaystyle 1/r}. Thus, theFourier transformofF{\displaystyle \mathbf {F} }, denoted asG{\displaystyle \mathbf {G} }, is guaranteed to exist. We apply the conventionF(r)=∭G(k)eik⋅rdVk{\displaystyle \mathbf {F} (\mathbf {r} )=\iiint \mathbf {G} (\mathbf {k} )e^{i\mathbf {k} \cdot \mathbf {r} }dV_{k}} The Fourier transform of a scalar field is a scalar field, and the Fourier transform of a vector field is a vector field of same dimension. 
Now consider the following scalar and vector fields:GΦ(k)=ik⋅G(k)‖k‖2GA(k)=ik×G(k)‖k‖2Φ(r)=∭GΦ(k)eik⋅rdVkA(r)=∭GA(k)eik⋅rdVk{\displaystyle {\begin{aligned}G_{\Phi }(\mathbf {k} )&=i{\frac {\mathbf {k} \cdot \mathbf {G} (\mathbf {k} )}{\|\mathbf {k} \|^{2}}}\\\mathbf {G} _{\mathbf {A} }(\mathbf {k} )&=i{\frac {\mathbf {k} \times \mathbf {G} (\mathbf {k} )}{\|\mathbf {k} \|^{2}}}\\[8pt]\Phi (\mathbf {r} )&=\iiint G_{\Phi }(\mathbf {k} )e^{i\mathbf {k} \cdot \mathbf {r} }dV_{k}\\\mathbf {A} (\mathbf {r} )&=\iiint \mathbf {G} _{\mathbf {A} }(\mathbf {k} )e^{i\mathbf {k} \cdot \mathbf {r} }dV_{k}\end{aligned}}} Hence G(k)=−ikGΦ(k)+ik×GA(k)F(r)=−∭ikGΦ(k)eik⋅rdVk+∭ik×GA(k)eik⋅rdVk=−∇Φ(r)+∇×A(r){\displaystyle {\begin{aligned}\mathbf {G} (\mathbf {k} )&=-i\mathbf {k} G_{\Phi }(\mathbf {k} )+i\mathbf {k} \times \mathbf {G} _{\mathbf {A} }(\mathbf {k} )\\[6pt]\mathbf {F} (\mathbf {r} )&=-\iiint i\mathbf {k} G_{\Phi }(\mathbf {k} )e^{i\mathbf {k} \cdot \mathbf {r} }dV_{k}+\iiint i\mathbf {k} \times \mathbf {G} _{\mathbf {A} }(\mathbf {k} )e^{i\mathbf {k} \cdot \mathbf {r} }dV_{k}\\&=-\nabla \Phi (\mathbf {r} )+\nabla \times \mathbf {A} (\mathbf {r} )\end{aligned}}} A terminology often used in physics refers to the curl-free component of a vector field as thelongitudinal componentand the divergence-free component as thetransverse component.[22]This terminology comes from the following construction: Compute the three-dimensionalFourier transformF^{\displaystyle {\hat {\mathbf {F} }}}of the vector fieldF{\displaystyle \mathbf {F} }. Then decompose this field, at each pointk, into two components, one of which points longitudinally, i.e. parallel tok, the other of which points in the transverse direction, i.e. perpendicular tok. So far, we have F^(k)=F^t(k)+F^l(k){\displaystyle {\hat {\mathbf {F} }}(\mathbf {k} )={\hat {\mathbf {F} }}_{t}(\mathbf {k} )+{\hat {\mathbf {F} }}_{l}(\mathbf {k} )}k⋅F^t(k)=0.{\displaystyle \mathbf {k} \cdot {\hat {\mathbf {F} }}_{t}(\mathbf {k} )=0.}k×F^l(k)=0.{\displaystyle \mathbf {k} \times {\hat {\mathbf {F} }}_{l}(\mathbf {k} )=\mathbf {0} .} Now we apply an inverse Fourier transform to each of these components. Using properties of Fourier transforms, we derive: F(r)=Ft(r)+Fl(r){\displaystyle \mathbf {F} (\mathbf {r} )=\mathbf {F} _{t}(\mathbf {r} )+\mathbf {F} _{l}(\mathbf {r} )}∇⋅Ft(r)=0{\displaystyle \nabla \cdot \mathbf {F} _{t}(\mathbf {r} )=0}∇×Fl(r)=0{\displaystyle \nabla \times \mathbf {F} _{l}(\mathbf {r} )=\mathbf {0} } Since∇×(∇Φ)=0{\displaystyle \nabla \times (\nabla \Phi )=0}and∇⋅(∇×A)=0{\displaystyle \nabla \cdot (\nabla \times \mathbf {A} )=0}, we can get Ft=∇×A=14π∇×∫V∇′×F|r−r′|dV′{\displaystyle \mathbf {F} _{t}=\nabla \times \mathbf {A} ={\frac {1}{4\pi }}\nabla \times \int _{V}{\frac {\nabla '\times \mathbf {F} }{\left|\mathbf {r} -\mathbf {r} '\right|}}\mathrm {d} V'}Fl=−∇Φ=−14π∇∫V∇′⋅F|r−r′|dV′{\displaystyle \mathbf {F} _{l}=-\nabla \Phi =-{\frac {1}{4\pi }}\nabla \int _{V}{\frac {\nabla '\cdot \mathbf {F} }{\left|\mathbf {r} -\mathbf {r} '\right|}}\mathrm {d} V'} so this is indeed the Helmholtz decomposition.[23] The generalization tod{\displaystyle d}dimensions cannot be done with a vector potential, since the rotation operator and thecross productare defined (as vectors) only in three dimensions. 
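The longitudinal/transverse construction above translates directly into a numerical recipe for smooth periodic fields: project the Fourier coefficients of F onto k and onto its orthogonal complement, then transform back. The sketch below is a minimal illustration for a field sampled on a periodic grid (not a general-purpose solver; the grid size and the test field are arbitrary choices).

```python
import numpy as np

def helmholtz_fft(F, L=2 * np.pi):
    """Split a periodic vector field F of shape (3, n, n, n) into its
    longitudinal (curl-free) and transverse (divergence-free) parts by
    projecting the Fourier coefficients onto k and onto its complement."""
    n = F.shape[1]
    k1 = np.fft.fftfreq(n, d=L / n) * 2 * np.pi
    kx, ky, kz = np.meshgrid(k1, k1, k1, indexing="ij")
    k = np.stack([kx, ky, kz])                      # (3, n, n, n)
    k2 = (k ** 2).sum(axis=0)
    k2[0, 0, 0] = 1.0                               # avoid division by zero at k = 0
    Fh = np.fft.fftn(F, axes=(1, 2, 3))
    kdotF = (k * Fh).sum(axis=0)                    # k · F̂
    Fh_long = k * (kdotF / k2)                      # longitudinal projection
    Fh_long[:, 0, 0, 0] = 0.0                       # the mean mode stays in the remainder
    Fh_trans = Fh - Fh_long
    F_long = np.real(np.fft.ifftn(Fh_long, axes=(1, 2, 3)))
    F_trans = np.real(np.fft.ifftn(Fh_trans, axes=(1, 2, 3)))
    return F_long, F_trans

# Quick check on a synthetic periodic field.
n = 32
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
F = np.stack([np.cos(X),            # gradient of sin(x): purely longitudinal
              np.sin(Z),            # divergence-free contribution
              np.zeros_like(X)])
F_long, F_trans = helmholtz_fft(F)
# F_long ≈ (cos x, 0, 0) and F_trans ≈ (0, sin z, 0) up to discretization error.
```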
LetF{\displaystyle \mathbf {F} }be a vector field on a bounded domainV⊆Rd{\displaystyle V\subseteq \mathbb {R} ^{d}}which decays faster than|r|−δ{\displaystyle |\mathbf {r} |^{-\delta }}for|r|→∞{\displaystyle |\mathbf {r} |\to \infty }andδ>2{\displaystyle \delta >2}. The scalar potential is defined similar to the three dimensional case as:Φ(r)=−∫Rddiv⁡(F(r′))K(r,r′)dV′=−∫Rd∑i∂Fi∂ri(r′)K(r,r′)dV′,{\displaystyle \Phi (\mathbf {r} )=-\int _{\mathbb {R} ^{d}}\operatorname {div} (\mathbf {F} (\mathbf {r} '))K(\mathbf {r} ,\mathbf {r} ')\mathrm {d} V'=-\int _{\mathbb {R} ^{d}}\sum _{i}{\frac {\partial F_{i}}{\partial r_{i}}}(\mathbf {r} ')K(\mathbf {r} ,\mathbf {r} ')\mathrm {d} V',}where as the integration kernelK(r,r′){\displaystyle K(\mathbf {r} ,\mathbf {r} ')}is again thefundamental solutionofLaplace's equation, but in d-dimensional space:K(r,r′)={12πlog⁡|r−r′|d=2,1d(2−d)Vd|r−r′|2−dotherwise,{\displaystyle K(\mathbf {r} ,\mathbf {r} ')={\begin{cases}{\frac {1}{2\pi }}\log {|\mathbf {r} -\mathbf {r} '|}&d=2,\\{\frac {1}{d(2-d)V_{d}}}|\mathbf {r} -\mathbf {r} '|^{2-d}&{\text{otherwise}},\end{cases}}}withVd=πd2/Γ(d2+1){\displaystyle V_{d}=\pi ^{\frac {d}{2}}/\Gamma {\big (}{\tfrac {d}{2}}+1{\big )}}the volume of the d-dimensionalunit ballsandΓ(r){\displaystyle \Gamma (\mathbf {r} )}thegamma function. Ford=3{\displaystyle d=3},Vd{\displaystyle V_{d}}is just equal to4π3{\displaystyle {\frac {4\pi }{3}}}, yielding the same prefactor as above. The rotational potential is anantisymmetric matrixwith the elements:Aij(r)=∫Rd(∂Fi∂xj(r′)−∂Fj∂xi(r′))K(r,r′)dV′.{\displaystyle A_{ij}(\mathbf {r} )=\int _{\mathbb {R} ^{d}}\left({\frac {\partial F_{i}}{\partial x_{j}}}(\mathbf {r} ')-{\frac {\partial F_{j}}{\partial x_{i}}}(\mathbf {r} ')\right)K(\mathbf {r} ,\mathbf {r} ')\mathrm {d} V'.}Above the diagonal are(d2){\displaystyle \textstyle {\binom {d}{2}}}entries which occur again mirrored at the diagonal, but with a negative sign. In the three-dimensional case, the matrix elements just correspond to the components of the vector potentialA=[A1,A2,A3]=[A23,A31,A12]{\displaystyle \mathbf {A} =[A_{1},A_{2},A_{3}]=[A_{23},A_{31},A_{12}]}. However, such a matrix potential can be written as a vector only in the three-dimensional case, because(d2)=d{\displaystyle \textstyle {\binom {d}{2}}=d}is valid only ford=3{\displaystyle d=3}. As in the three-dimensional case, the gradient field is defined asG(r)=−∇Φ(r).{\displaystyle \mathbf {G} (\mathbf {r} )=-\nabla \Phi (\mathbf {r} ).}The rotational field, on the other hand, is defined in the general case as the row divergence of the matrix:R(r)=[∑k∂rkAik(r);1≤i≤d].{\displaystyle \mathbf {R} (\mathbf {r} )=\left[\sum \nolimits _{k}\partial _{r_{k}}A_{ik}(\mathbf {r} );{1\leq i\leq d}\right].}In three-dimensional space, this is equivalent to the rotation of the vector potential.[8][24] In ad{\displaystyle d}-dimensional vector space withd≠3{\displaystyle d\neq 3},−14π|r−r′|{\textstyle -{\frac {1}{4\pi \left|\mathbf {r} -\mathbf {r} '\right|}}}can be replaced by the appropriateGreen's function for the Laplacian, defined by∇2G(r,r′)=∂∂rμ∂∂rμG(r,r′)=δd(r−r′){\displaystyle \nabla ^{2}G(\mathbf {r} ,\mathbf {r} ')={\frac {\partial }{\partial r_{\mu }}}{\frac {\partial }{\partial r_{\mu }}}G(\mathbf {r} ,\mathbf {r} ')=\delta ^{d}(\mathbf {r} -\mathbf {r} ')}whereEinstein summation conventionis used for the indexμ{\displaystyle \mu }. For example,G(r,r′)=12πln⁡|r−r′|{\textstyle G(\mathbf {r} ,\mathbf {r} ')={\frac {1}{2\pi }}\ln \left|\mathbf {r} -\mathbf {r} '\right|}in 2D. 
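As a quick check of what "fundamental solution" means here, the following sketch verifies symbolically that the two-dimensional kernel G(r, r′) = (1/2π) ln|r − r′| quoted above is harmonic away from the source point (the delta-function behaviour at r = r′ is, of course, not captured by this pointwise computation).

```python
import sympy as sp

x, y, xp, yp = sp.symbols('x y x_p y_p', real=True)

# 2-D kernel quoted above: G = (1/2*pi) * ln|r - r'|.
G2 = sp.log(sp.sqrt((x - xp)**2 + (y - yp)**2)) / (2 * sp.pi)

# Away from the source point, the Laplacian of G vanishes.
lap = sp.diff(G2, x, 2) + sp.diff(G2, y, 2)
print(sp.simplify(lap))   # prints 0, valid for (x, y) != (x_p, y_p)
```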
Following the same steps as above, we can writeFμ(r)=∫VFμ(r′)∂∂rμ∂∂rμG(r,r′)ddr′=δμνδρσ∫VFν(r′)∂∂rρ∂∂rσG(r,r′)ddr′{\displaystyle F_{\mu }(\mathbf {r} )=\int _{V}F_{\mu }(\mathbf {r} '){\frac {\partial }{\partial r_{\mu }}}{\frac {\partial }{\partial r_{\mu }}}G(\mathbf {r} ,\mathbf {r} ')\,\mathrm {d} ^{d}\mathbf {r} '=\delta _{\mu \nu }\delta _{\rho \sigma }\int _{V}F_{\nu }(\mathbf {r} '){\frac {\partial }{\partial r_{\rho }}}{\frac {\partial }{\partial r_{\sigma }}}G(\mathbf {r} ,\mathbf {r} ')\,\mathrm {d} ^{d}\mathbf {r} '}whereδμν{\displaystyle \delta _{\mu \nu }}is theKronecker delta(and the summation convention is again used). In place of the definition of the vector Laplacian used above, we now make use of an identity for theLevi-Civita symbolε{\displaystyle \varepsilon },εαμρεανσ=(d−2)!(δμνδρσ−δμσδνρ){\displaystyle \varepsilon _{\alpha \mu \rho }\varepsilon _{\alpha \nu \sigma }=(d-2)!(\delta _{\mu \nu }\delta _{\rho \sigma }-\delta _{\mu \sigma }\delta _{\nu \rho })}which is valid ind≥2{\displaystyle d\geq 2}dimensions, whereα{\displaystyle \alpha }is a(d−2){\displaystyle (d-2)}-componentmulti-index. This givesFμ(r)=δμσδνρ∫VFν(r′)∂∂rρ∂∂rσG(r,r′)ddr′+1(d−2)!εαμρεανσ∫VFν(r′)∂∂rρ∂∂rσG(r,r′)ddr′{\displaystyle F_{\mu }(\mathbf {r} )=\delta _{\mu \sigma }\delta _{\nu \rho }\int _{V}F_{\nu }(\mathbf {r} '){\frac {\partial }{\partial r_{\rho }}}{\frac {\partial }{\partial r_{\sigma }}}G(\mathbf {r} ,\mathbf {r} ')\,\mathrm {d} ^{d}\mathbf {r} '+{\frac {1}{(d-2)!}}\varepsilon _{\alpha \mu \rho }\varepsilon _{\alpha \nu \sigma }\int _{V}F_{\nu }(\mathbf {r} '){\frac {\partial }{\partial r_{\rho }}}{\frac {\partial }{\partial r_{\sigma }}}G(\mathbf {r} ,\mathbf {r} ')\,\mathrm {d} ^{d}\mathbf {r} '} We can therefore writeFμ(r)=−∂∂rμΦ(r)+εμρα∂∂rρAα(r){\displaystyle F_{\mu }(\mathbf {r} )=-{\frac {\partial }{\partial r_{\mu }}}\Phi (\mathbf {r} )+\varepsilon _{\mu \rho \alpha }{\frac {\partial }{\partial r_{\rho }}}A_{\alpha }(\mathbf {r} )}whereΦ(r)=−∫VFν(r′)∂∂rνG(r,r′)ddr′Aα=1(d−2)!εανσ∫VFν(r′)∂∂rσG(r,r′)ddr′{\displaystyle {\begin{aligned}\Phi (\mathbf {r} )&=-\int _{V}F_{\nu }(\mathbf {r} '){\frac {\partial }{\partial r_{\nu }}}G(\mathbf {r} ,\mathbf {r} ')\,\mathrm {d} ^{d}\mathbf {r} '\\A_{\alpha }&={\frac {1}{(d-2)!}}\varepsilon _{\alpha \nu \sigma }\int _{V}F_{\nu }(\mathbf {r} '){\frac {\partial }{\partial r_{\sigma }}}G(\mathbf {r} ,\mathbf {r} ')\,\mathrm {d} ^{d}\mathbf {r} '\end{aligned}}}Note that the vector potential is replaced by a rank-(d−2){\displaystyle (d-2)}tensor ind{\displaystyle d}dimensions. 
BecauseG(r,r′){\displaystyle G(\mathbf {r} ,\mathbf {r} ')}is a function of onlyr−r′{\displaystyle \mathbf {r} -\mathbf {r} '}, one can replace∂∂rμ→−∂∂rμ′{\displaystyle {\frac {\partial }{\partial r_{\mu }}}\rightarrow -{\frac {\partial }{\partial r'_{\mu }}}}, givingΦ(r)=∫VFν(r′)∂∂rν′G(r,r′)ddr′Aα=−1(d−2)!εανσ∫VFν(r′)∂∂rσ′G(r,r′)ddr′{\displaystyle {\begin{aligned}\Phi (\mathbf {r} )&=\int _{V}F_{\nu }(\mathbf {r} '){\frac {\partial }{\partial r'_{\nu }}}G(\mathbf {r} ,\mathbf {r} ')\,\mathrm {d} ^{d}\mathbf {r} '\\A_{\alpha }&=-{\frac {1}{(d-2)!}}\varepsilon _{\alpha \nu \sigma }\int _{V}F_{\nu }(\mathbf {r} '){\frac {\partial }{\partial r_{\sigma }'}}G(\mathbf {r} ,\mathbf {r} ')\,\mathrm {d} ^{d}\mathbf {r} '\end{aligned}}}Integration by partscan then be used to giveΦ(r)=−∫VG(r,r′)∂∂rν′Fν(r′)ddr′+∮SG(r,r′)Fν(r′)n^ν′dd−1r′Aα=1(d−2)!εανσ∫VG(r,r′)∂∂rσ′Fν(r′)ddr′−1(d−2)!εανσ∮SG(r,r′)Fν(r′)n^σ′dd−1r′{\displaystyle {\begin{aligned}\Phi (\mathbf {r} )&=-\int _{V}G(\mathbf {r} ,\mathbf {r} '){\frac {\partial }{\partial r'_{\nu }}}F_{\nu }(\mathbf {r} ')\,\mathrm {d} ^{d}\mathbf {r} '+\oint _{S}G(\mathbf {r} ,\mathbf {r} ')F_{\nu }(\mathbf {r} '){\hat {n}}'_{\nu }\,\mathrm {d} ^{d-1}\mathbf {r} '\\A_{\alpha }&={\frac {1}{(d-2)!}}\varepsilon _{\alpha \nu \sigma }\int _{V}G(\mathbf {r} ,\mathbf {r} '){\frac {\partial }{\partial r_{\sigma }'}}F_{\nu }(\mathbf {r} ')\,\mathrm {d} ^{d}\mathbf {r} '-{\frac {1}{(d-2)!}}\varepsilon _{\alpha \nu \sigma }\oint _{S}G(\mathbf {r} ,\mathbf {r} ')F_{\nu }(\mathbf {r} '){\hat {n}}'_{\sigma }\,\mathrm {d} ^{d-1}\mathbf {r} '\end{aligned}}}whereS=∂V{\displaystyle S=\partial V}is the boundary ofV{\displaystyle V}. These expressions are analogous to those given above forthree-dimensional space. For a further generalization to manifolds, see the discussion ofHodge decompositionbelow. TheHodge decompositionis closely related to the Helmholtz decomposition,[25]generalizing from vector fields onR3todifferential formson aRiemannian manifoldM. Most formulations of the Hodge decomposition requireMto becompact.[26]Since this is not true ofR3, the Hodge decomposition theorem is not strictly a generalization of the Helmholtz theorem. However, the compactness restriction in the usual formulation of the Hodge decomposition can be replaced by suitable decay assumptions at infinity on the differential forms involved, giving a proper generalization of the Helmholtz theorem. Most textbooks only deal with vector fields decaying faster than|r|−δ{\displaystyle |\mathbf {r} |^{-\delta }}withδ>1{\displaystyle \delta >1}at infinity.[16][13][27]However,Otto Blumenthalshowed in 1905 that an adapted integration kernel can be used to integrate fields decaying faster than|r|−δ{\displaystyle |\mathbf {r} |^{-\delta }}withδ>0{\displaystyle \delta >0}, which is substantially less strict. 
To achieve this, the kernelK(r,r′){\displaystyle K(\mathbf {r} ,\mathbf {r} ')}in the convolution integrals has to be replaced byK′(r,r′)=K(r,r′)−K(0,r′){\displaystyle K'(\mathbf {r} ,\mathbf {r} ')=K(\mathbf {r} ,\mathbf {r} ')-K(0,\mathbf {r} ')}.[28]With even more complex integration kernels, solutions can be found even for divergent functions that need not grow faster than polynomial.[12][13][24][29] For allanalyticvector fields that need not go to zero even at infinity, methods based onpartial integrationand theCauchy formula for repeated integration[30]can be used to compute closed-form solutions of the rotation and scalar potentials, as in the case ofmultivariate polynomial,sine,cosine, andexponential functions.[8] In general, the Helmholtz decomposition is not uniquely defined. Aharmonic functionH(r){\displaystyle H(\mathbf {r} )}is a function that satisfiesΔH(r)=0{\displaystyle \Delta H(\mathbf {r} )=0}. By addingH(r){\displaystyle H(\mathbf {r} )}to the scalar potentialΦ(r){\displaystyle \Phi (\mathbf {r} )}, a different Helmholtz decomposition can be obtained: G′(r)=∇(Φ(r)+H(r))=G(r)+∇H(r),R′(r)=R(r)−∇H(r).{\displaystyle {\begin{aligned}\mathbf {G} '(\mathbf {r} )&=\nabla (\Phi (\mathbf {r} )+H(\mathbf {r} ))=\mathbf {G} (\mathbf {r} )+\nabla H(\mathbf {r} ),\\\mathbf {R} '(\mathbf {r} )&=\mathbf {R} (\mathbf {r} )-\nabla H(\mathbf {r} ).\end{aligned}}} For vector fieldsF{\displaystyle \mathbf {F} }, decaying at infinity, it is a plausible choice that scalar and rotation potentials also decay at infinity. BecauseH(r)=0{\displaystyle H(\mathbf {r} )=0}is the only harmonic function with this property, which follows fromLiouville's theorem, this guarantees the uniqueness of the gradient and rotation fields.[31] This uniqueness does not apply to the potentials: In the three-dimensional case, the scalar and vector potential jointly have four components, whereas the vector field has only three. The vector field is invariant to gauge transformations and the choice of appropriate potentials known asgauge fixingis the subject ofgauge theory. Important examples from physics are theLorenz gauge conditionand theCoulomb gauge. An alternative is to use thepoloidal–toroidal decomposition. The Helmholtz theorem is of particular interest inelectrodynamics, since it can be used to writeMaxwell's equationsin the potential image and solve them more easily. The Helmholtz decomposition can be used to prove that, givenelectric current densityandcharge density, theelectric fieldand themagnetic flux densitycan be determined. They are unique if the densities vanish at infinity and one assumes the same for the potentials.[16] Influid dynamics, the Helmholtz projection plays an important role, especially for the solvability theory of theNavier-Stokes equations. If the Helmholtz projection is applied to the linearized incompressible Navier-Stokes equations, theStokes equationis obtained. This depends only on the velocity of the particles in the flow, but no longer on the static pressure, allowing the equation to be reduced to one unknown. However, both equations, the Stokes and linearized equations, are equivalent. The operatorPΔ{\displaystyle P\Delta }is called theStokes operator.[32] In the theory ofdynamical systems, Helmholtz decomposition can be used to determine "quasipotentials" as well as to computeLyapunov functionsin some cases.[33][34][35] For some dynamical systems such as theLorenz system(Edward N. 
Lorenz, 1963[36]), a simplified model for atmospheric convection, a closed-form expression of the Helmholtz decomposition can be obtained: r˙=F(r)=[a(r2−r1),r1(b−r3)−r2,r1r2−cr3].{\displaystyle {\dot {\mathbf {r} }}=\mathbf {F} (\mathbf {r} )={\big [}a(r_{2}-r_{1}),r_{1}(b-r_{3})-r_{2},r_{1}r_{2}-cr_{3}{\big ]}.} The Helmholtz decomposition of F(r){\displaystyle \mathbf {F} (\mathbf {r} )}, with the scalar potential Φ(r)=a2r12+12r22+c2r32{\displaystyle \Phi (\mathbf {r} )={\tfrac {a}{2}}r_{1}^{2}+{\tfrac {1}{2}}r_{2}^{2}+{\tfrac {c}{2}}r_{3}^{2}}, is given as: G(r)=[−ar1,−r2,−cr3],{\displaystyle \mathbf {G} (\mathbf {r} )={\big [}-ar_{1},-r_{2},-cr_{3}{\big ]},}R(r)=[+ar2,br1−r1r3,r1r2].{\displaystyle \mathbf {R} (\mathbf {r} )={\big [}+ar_{2},br_{1}-r_{1}r_{3},r_{1}r_{2}{\big ]}.} (A symbolic check of this decomposition is sketched below.) The quadratic scalar potential drives the motion toward the coordinate origin and is responsible for the stable fixed point in some parameter range. For other parameters, the rotation field ensures that a strange attractor is created, causing the model to exhibit a butterfly effect.[8][37] In magnetic resonance elastography, a variant of MR imaging where mechanical waves are used to probe the viscoelasticity of organs, the Helmholtz decomposition is sometimes used to separate the measured displacement field into its shear component (divergence-free) and its compression component (curl-free).[38] In this way, the complex shear modulus can be calculated without contributions from compression waves. The Helmholtz decomposition is also used in the field of computer engineering. This includes robotics and image reconstruction, as well as computer animation, where the decomposition is used for realistic visualization of fluids or vector fields.[15][39]
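The symbolic check referred to above is short: the sketch below confirms that the quoted potentials reproduce the Lorenz right-hand side, that G = −∇Φ, and that R is divergence-free (sympy is used only for the derivatives).

```python
import sympy as sp

r1, r2, r3, a, b, c = sp.symbols('r1 r2 r3 a b c', real=True)

# Lorenz right-hand side and the scalar potential quoted in the text.
F = sp.Matrix([a * (r2 - r1), r1 * (b - r3) - r2, r1 * r2 - c * r3])
Phi = a / 2 * r1**2 + sp.Rational(1, 2) * r2**2 + c / 2 * r3**2

G = -sp.Matrix([sp.diff(Phi, v) for v in (r1, r2, r3)])   # gradient field -grad(Phi)
R = F - G                                                 # remaining rotation field

div_R = sum(sp.diff(R[i], v) for i, v in enumerate((r1, r2, r3)))

print(sp.simplify(G - sp.Matrix([-a * r1, -r2, -c * r3])))              # zero vector
print(sp.simplify(R - sp.Matrix([a * r2, b * r1 - r1 * r3, r1 * r2])))  # zero vector
print(sp.simplify(div_R))                                               # 0, so R is solenoidal
```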
https://en.wikipedia.org/wiki/Helmholtz_decomposition
In mathematics, Hiptmair–Xu (HX) preconditioners[1] are preconditioners for solving H(curl){\displaystyle H(\operatorname {curl} )} and H(div){\displaystyle H(\operatorname {div} )} problems based on the auxiliary space preconditioning framework.[2] An important ingredient in the derivation of HX preconditioners in two and three dimensions is the so-called regular decomposition, which decomposes a Sobolev space function into a component of higher regularity and a scalar or vector potential. The key to the success of HX preconditioners is the discrete version of this decomposition, which is also known as the HX decomposition. The discrete decomposition splits a discrete Sobolev space function into a discrete component of higher regularity, a discrete scalar or vector potential, and a high-frequency component. HX preconditioners have been used for accelerating a wide variety of solution techniques, thanks to their highly scalable parallel implementations, and are known as the AMS[3] and ADS[4] preconditioners. The HX preconditioner was identified by the U.S. Department of Energy as one of the top ten breakthroughs in computational science[5] in recent years. Researchers from Sandia, Los Alamos, and Lawrence Livermore National Labs use this algorithm for modeling fusion with magnetohydrodynamic equations.[6] Moreover, this approach is also expected to be instrumental in developing optimal iterative methods in structural mechanics, electrodynamics, and the modeling of complex flows. Consider the following H(curl){\displaystyle H(\operatorname {curl} )} problem: Find u∈Hh(curl){\displaystyle u\in H_{h}(\operatorname {curl} )} such that (curl⁡u,curl⁡v)+τ(u,v)=(f,v),∀v∈Hh(curl),{\displaystyle (\operatorname {curl} ~u,\operatorname {curl} ~v)+\tau (u,v)=(f,v),\quad \forall v\in H_{h}(\operatorname {curl} ),} with τ>0{\displaystyle \tau >0}. The corresponding matrix form is Acurlu=f.{\displaystyle A_{\operatorname {curl} }u=f.} The HX preconditioner for the H(curl){\displaystyle H(\operatorname {curl} )} problem is defined as Bcurl=Scurl+ΠhcurlAvgrad−1(Πhcurl)T+gradAgrad−1(grad)T,{\displaystyle B_{\operatorname {curl} }=S_{\operatorname {curl} }+\Pi _{h}^{\operatorname {curl} }\,A_{vgrad}^{-1}\,(\Pi _{h}^{\operatorname {curl} })^{T}+\operatorname {grad} \,A_{\operatorname {grad} }^{-1}\,(\operatorname {grad} )^{T},} where Scurl{\displaystyle S_{\operatorname {curl} }} is a smoother (e.g., a Jacobi or Gauss–Seidel smoother), Πhcurl{\displaystyle \Pi _{h}^{\operatorname {curl} }} is the canonical interpolation operator for the Hh(curl){\displaystyle H_{h}(\operatorname {curl} )} space, Avgrad{\displaystyle A_{vgrad}} is the matrix representation of the discrete vector Laplacian defined on [Hh(grad)]n{\displaystyle [H_{h}(\operatorname {grad} )]^{n}}, grad{\displaystyle grad} is the discrete gradient operator, and Agrad{\displaystyle A_{\operatorname {grad} }} is the matrix representation of the discrete scalar Laplacian defined on Hh(grad){\displaystyle H_{h}(\operatorname {grad} )}. Based on the auxiliary space preconditioning framework, one can show that κ(BcurlAcurl)≤C,{\displaystyle \kappa (B_{\operatorname {curl} }A_{\operatorname {curl} })\leq C,} where κ(A){\displaystyle \kappa (A)} denotes the condition number of the matrix A{\displaystyle A}. In practice, inverting Avgrad{\displaystyle A_{vgrad}} and Agrad{\displaystyle A_{grad}} might be expensive, especially for large-scale problems. Therefore, we can replace their inverses by spectrally equivalent approximations, Bvgrad{\displaystyle B_{vgrad}} and Bgrad{\displaystyle B_{\operatorname {grad} }}, respectively.
And the HX preconditioner forH(curl){\displaystyle H(\operatorname {curl} )}becomesBcurl=Scurl+ΠhcurlBvgrad(Πhcurl)T+grad⁡Bgrad(grad)T.{\displaystyle B_{\operatorname {curl} }=S_{\operatorname {curl} }+\Pi _{h}^{\operatorname {curl} }\,B_{vgrad}\,(\Pi _{h}^{\operatorname {curl} })^{T}+\operatorname {grad} B_{\operatorname {grad} }(\operatorname {grad} )^{T}.} Consider the followingH(div){\displaystyle H(\operatorname {div} )}problem: Findu∈Hh(div){\displaystyle u\in H_{h}(\operatorname {div} )} (divu,divv)+τ(u,v)=(f,v),∀v∈Hh(div),{\displaystyle (\operatorname {div} \,u,\operatorname {div} \,v)+\tau (u,v)=(f,v),\quad \forall v\in H_{h}(\operatorname {div} ),}withτ>0{\displaystyle \tau >0}. The corresponding matrix form is Adivu=f.{\displaystyle A_{\operatorname {div} }\,u=f.} The HX preconditioner forH(div){\displaystyle H(\operatorname {div} )}problem is defined as Bdiv=Sdiv+ΠhdivAvgrad−1(Πhdiv)T+curlAcurl−1(curl)T,{\displaystyle B_{\operatorname {div} }=S_{\operatorname {div} }+\Pi _{h}^{\operatorname {div} }\,A_{vgrad}^{-1}\,(\Pi _{h}^{\operatorname {div} })^{T}+\operatorname {curl} \,A_{\operatorname {curl} }^{-1}\,(\operatorname {curl} )^{T},} whereSdiv{\displaystyle S_{\operatorname {div} }}is a smoother (e.g., Jacobi smoother, Gauss–Seidel smoother),Πhdiv{\displaystyle \Pi _{h}^{\operatorname {div} }}is the canonical interpolation operator forH(div){\displaystyle H(\operatorname {div} )}space,Avgrad{\displaystyle A_{vgrad}}is the matrix representation of discrete vector Laplacian defined on[Hh(grad)]n{\displaystyle [H_{h}(\operatorname {grad} )]^{n}}, andcurl{\displaystyle \operatorname {curl} }is the discrete curl operator. Based on the auxiliary space preconditioning framework, one can show that κ(BdivAdiv)≤C.{\displaystyle \kappa (B_{\operatorname {div} }A_{\operatorname {div} })\leq C.} ForAcurl−1{\displaystyle A_{\operatorname {curl} }^{-1}}in the definition ofBdiv{\displaystyle B_{\operatorname {div} }}, we can replace it by the HX preconditioner forH(curl){\displaystyle H(\operatorname {curl} )}problem, e.g.,Bcurl{\displaystyle B_{\operatorname {curl} }}, since they are spectrally equivalent. Moreover, invertingAvgrad{\displaystyle A_{vgrad}}might be expensive and we can replace it by a spectrally equivalent approximationsBvgrad{\displaystyle B_{vgrad}}. These leads to the following practical HX preconditioner forH(div){\displaystyle H(\operatorname {div} )}problem, Bdiv=Sdiv+ΠhdivBvgrad(Πhdiv)T+curl⁡Bcurl(curl)T=Sdiv+ΠhdivBvgrad(Πhdiv)T+curl⁡Scurl(curl)T+curl⁡ΠhcurlBvgrad(Πhcurl)T(curl)T.{\displaystyle B_{\operatorname {div} }=S_{\operatorname {div} }+\Pi _{h}^{\operatorname {div} }B_{vgrad}(\Pi _{h}^{\operatorname {div} })^{T}+\operatorname {curl} B_{\operatorname {curl} }(\operatorname {curl} )^{T}=S_{\operatorname {div} }+\Pi _{h}^{\operatorname {div} }B_{vgrad}(\Pi _{h}^{\operatorname {div} })^{T}+\operatorname {curl} S_{\operatorname {curl} }(\operatorname {curl} )^{T}+\operatorname {curl} \Pi _{h}^{\operatorname {curl} }B_{vgrad}(\Pi _{h}^{\operatorname {curl} })^{T}(\operatorname {curl} )^{T}.} The derivation of HX preconditioners is based on the discrete regular decompositions forHh(curl){\displaystyle H_{h}(\operatorname {curl} )}andHh(div){\displaystyle H_{h}(\operatorname {div} )}, for the completeness, let us briefly recall them. Theorem:[Discrete regular decomposition forHh(curl){\displaystyle H_{h}(\operatorname {curl} )}] LetΩ{\displaystyle \Omega }be a simply connected bounded domain. 
For any functionvh∈Hh(curl⁡Ω){\displaystyle v_{h}\in H_{h}(\operatorname {curl} \Omega )}, there exists a vectorv~h∈Hh(curl⁡Ω){\displaystyle {\tilde {v}}_{h}\in H_{h}(\operatorname {curl} \Omega )},ψh∈[Hh(grad⁡Ω)]3{\displaystyle \psi _{h}\in [H_{h}(\operatorname {grad} \Omega )]^{3}},ph∈Hh(grad⁡Ω){\displaystyle p_{h}\in H_{h}(\operatorname {grad} \Omega )}, such thatvh=v~h+Πhcurlψh+grad⁡ph{\displaystyle v_{h}={\tilde {v}}_{h}+\Pi _{h}^{\operatorname {curl} }\psi _{h}+\operatorname {grad} p_{h}}and‖h−1v~h‖+‖ψh‖1+‖ph‖1≲‖vh‖H(curl){\displaystyle \Vert h^{-1}{\tilde {v}}_{h}\Vert +\Vert \psi _{h}\Vert _{1}+\Vert p_{h}\Vert _{1}\lesssim \Vert v_{h}\Vert _{H(\operatorname {curl} )}} Theorem:[Discrete regular decomposition forHh(div){\displaystyle H_{h}(\operatorname {div} )}] LetΩ{\displaystyle \Omega }be a simply connected bounded domain. For any functionvh∈Hh(div⁡Ω){\displaystyle v_{h}\in H_{h}(\operatorname {div} \Omega )}, there exists a vectorv~h∈Hh(div⁡Ω){\displaystyle {\widetilde {v}}_{h}\in H_{h}(\operatorname {div} \Omega )},ψh∈[Hh(grad⁡Ω)]3,{\displaystyle \psi _{h}\in [H_{h}(\operatorname {grad} \Omega )]^{3},}wh∈Hh(curl⁡Ω),{\displaystyle w_{h}\in H_{h}(\operatorname {curl} \Omega ),}such thatvh=v~h+Πhdivψh+curlwh,{\displaystyle v_{h}={\widetilde {v}}_{h}+\Pi _{h}^{\operatorname {div} }\psi _{h}+\operatorname {curl} \,w_{h},}and‖h−1v~h‖+‖ψh‖1+‖wh‖1≲‖vh‖H(div){\displaystyle \Vert h^{-1}{\widetilde {v}}_{h}\Vert +\Vert \psi _{h}\Vert _{1}+\Vert w_{h}\Vert _{1}\lesssim \Vert v_{h}\Vert _{H(\operatorname {div} )}} Based on the above discrete regular decompositions, together with the auxiliary space preconditioning framework, we can derive the HX preconditioners forH(curl){\displaystyle H(\operatorname {curl} )}andH(div){\displaystyle H(\operatorname {div} )}problems as shown before.
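The additive structure of the preconditioners above is easy to see in code. The sketch below is only schematic: it assumes that a smoother, the canonical interpolation matrix Π, the discrete gradient G, and (approximate) solvers for the auxiliary Laplacians are already provided by a discretization, and it merely sums the three contributions B_curl r = S r + Π B_vgrad(Πᵀ r) + grad B_grad(gradᵀ r). All operators below are random placeholders, not finite-element matrices.

```python
import numpy as np

def hx_curl_apply(r, smooth, Pi, B_vgrad, G, B_grad):
    """Schematic additive application of an HX-type preconditioner for H(curl):

        B_curl r = S r + Pi * B_vgrad(Pi^T r) + G * B_grad(G^T r)

    smooth, B_vgrad, B_grad : callables applying the smoother and the
                              (approximate) vector/scalar Laplacian solves;
    Pi, G                   : interpolation and discrete-gradient matrices.
    All of these are placeholders to be supplied by an actual discretization."""
    return smooth(r) + Pi @ B_vgrad(Pi.T @ r) + G @ B_grad(G.T @ r)

# Tiny artificial example (random stand-ins, for structure only).
rng = np.random.default_rng(1)
n_edge, n_node = 8, 5
Pi = rng.normal(size=(n_edge, 3 * n_node))     # edge dofs <- vector nodal dofs
G = rng.normal(size=(n_edge, n_node))          # discrete gradient stand-in
smooth = lambda r: 0.5 * r                     # Jacobi-like smoother stand-in
B_vgrad = lambda x: x                          # placeholder auxiliary "solves"
B_grad = lambda x: x
r = rng.normal(size=n_edge)
z = hx_curl_apply(r, smooth, Pi, B_vgrad, G, B_grad)
```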
https://en.wikipedia.org/wiki/Hiptmair%E2%80%93Xu_preconditioner
This is a list of somevector calculusformulae for working with commoncurvilinearcoordinate systems. Note that the operationarctan⁡(AB){\displaystyle \arctan \left({\frac {A}{B}}\right)}must be interpreted as the two-argument inverse tangent,atan2. (∇2Aρ−Aρρ2−2ρ2∂Aφ∂φ)ρ^+(∇2Aφ−Aφρ2+2ρ2∂Aρ∂φ)φ^+∇2Azz^{\displaystyle {\begin{aligned}{\mathopen {}}\left(\nabla ^{2}A_{\rho }-{\frac {A_{\rho }}{\rho ^{2}}}-{\frac {2}{\rho ^{2}}}{\frac {\partial A_{\varphi }}{\partial \varphi }}\right){\mathclose {}}&{\hat {\boldsymbol {\rho }}}\\+{\mathopen {}}\left(\nabla ^{2}A_{\varphi }-{\frac {A_{\varphi }}{\rho ^{2}}}+{\frac {2}{\rho ^{2}}}{\frac {\partial A_{\rho }}{\partial \varphi }}\right){\mathclose {}}&{\hat {\boldsymbol {\varphi }}}\\{}+\nabla ^{2}A_{z}&{\hat {\mathbf {z} }}\end{aligned}}} (∇2Ar−2Arr2−2r2sin⁡θ∂(Aθsin⁡θ)∂θ−2r2sin⁡θ∂Aφ∂φ)r^+(∇2Aθ−Aθr2sin2⁡θ+2r2∂Ar∂θ−2cos⁡θr2sin2⁡θ∂Aφ∂φ)θ^+(∇2Aφ−Aφr2sin2⁡θ+2r2sin⁡θ∂Ar∂φ+2cos⁡θr2sin2⁡θ∂Aθ∂φ)φ^{\displaystyle {\begin{aligned}\left(\nabla ^{2}A_{r}-{\frac {2A_{r}}{r^{2}}}-{\frac {2}{r^{2}\sin \theta }}{\frac {\partial \left(A_{\theta }\sin \theta \right)}{\partial \theta }}-{\frac {2}{r^{2}\sin \theta }}{\frac {\partial A_{\varphi }}{\partial \varphi }}\right)&{\hat {\mathbf {r} }}\\+\left(\nabla ^{2}A_{\theta }-{\frac {A_{\theta }}{r^{2}\sin ^{2}\theta }}+{\frac {2}{r^{2}}}{\frac {\partial A_{r}}{\partial \theta }}-{\frac {2\cos \theta }{r^{2}\sin ^{2}\theta }}{\frac {\partial A_{\varphi }}{\partial \varphi }}\right)&{\hat {\boldsymbol {\theta }}}\\+\left(\nabla ^{2}A_{\varphi }-{\frac {A_{\varphi }}{r^{2}\sin ^{2}\theta }}+{\frac {2}{r^{2}\sin \theta }}{\frac {\partial A_{r}}{\partial \varphi }}+{\frac {2\cos \theta }{r^{2}\sin ^{2}\theta }}{\frac {\partial A_{\theta }}{\partial \varphi }}\right)&{\hat {\boldsymbol {\varphi }}}\end{aligned}}} (Ar∂Br∂r+Aθr∂Br∂θ+Aφrsin⁡θ∂Br∂φ−AθBθ+AφBφr)r^+(Ar∂Bθ∂r+Aθr∂Bθ∂θ+Aφrsin⁡θ∂Bθ∂φ+AθBrr−AφBφcot⁡θr)θ^+(Ar∂Bφ∂r+Aθr∂Bφ∂θ+Aφrsin⁡θ∂Bφ∂φ+AφBrr+AφBθcot⁡θr)φ^{\displaystyle {\begin{aligned}\left(A_{r}{\frac {\partial B_{r}}{\partial r}}+{\frac {A_{\theta }}{r}}{\frac {\partial B_{r}}{\partial \theta }}+{\frac {A_{\varphi }}{r\sin \theta }}{\frac {\partial B_{r}}{\partial \varphi }}-{\frac {A_{\theta }B_{\theta }+A_{\varphi }B_{\varphi }}{r}}\right)&{\hat {\mathbf {r} }}\\+\left(A_{r}{\frac {\partial B_{\theta }}{\partial r}}+{\frac {A_{\theta }}{r}}{\frac {\partial B_{\theta }}{\partial \theta }}+{\frac {A_{\varphi }}{r\sin \theta }}{\frac {\partial B_{\theta }}{\partial \varphi }}+{\frac {A_{\theta }B_{r}}{r}}-{\frac {A_{\varphi }B_{\varphi }\cot \theta }{r}}\right)&{\hat {\boldsymbol {\theta }}}\\+\left(A_{r}{\frac {\partial B_{\varphi }}{\partial r}}+{\frac {A_{\theta }}{r}}{\frac {\partial B_{\varphi }}{\partial \theta }}+{\frac {A_{\varphi }}{r\sin \theta }}{\frac {\partial B_{\varphi }}{\partial \varphi }}+{\frac {A_{\varphi }B_{r}}{r}}+{\frac {A_{\varphi }B_{\theta }\cot \theta }{r}}\right)&{\hat {\boldsymbol {\varphi }}}\end{aligned}}} (∂Txx∂x+∂Tyx∂y+∂Tzx∂z)x^+(∂Txy∂x+∂Tyy∂y+∂Tzy∂z)y^+(∂Txz∂x+∂Tyz∂y+∂Tzz∂z)z^{\displaystyle {\begin{aligned}\left({\frac {\partial T_{xx}}{\partial x}}+{\frac {\partial T_{yx}}{\partial y}}+{\frac {\partial T_{zx}}{\partial z}}\right)&{\hat {\mathbf {x} }}\\+\left({\frac {\partial T_{xy}}{\partial x}}+{\frac {\partial T_{yy}}{\partial y}}+{\frac {\partial T_{zy}}{\partial z}}\right)&{\hat {\mathbf {y} }}\\+\left({\frac {\partial T_{xz}}{\partial x}}+{\frac {\partial T_{yz}}{\partial y}}+{\frac {\partial T_{zz}}{\partial z}}\right)&{\hat {\mathbf {z} }}\end{aligned}}} 
[∂Tρρ∂ρ+1ρ∂Tφρ∂φ+∂Tzρ∂z+1ρ(Tρρ−Tφφ)]ρ^+[∂Tρφ∂ρ+1ρ∂Tφφ∂φ+∂Tzφ∂z+1ρ(Tρφ+Tφρ)]φ^+[∂Tρz∂ρ+1ρ∂Tφz∂φ+∂Tzz∂z+Tρzρ]z^{\displaystyle {\begin{aligned}\left[{\frac {\partial T_{\rho \rho }}{\partial \rho }}+{\frac {1}{\rho }}{\frac {\partial T_{\varphi \rho }}{\partial \varphi }}+{\frac {\partial T_{z\rho }}{\partial z}}+{\frac {1}{\rho }}(T_{\rho \rho }-T_{\varphi \varphi })\right]&{\hat {\boldsymbol {\rho }}}\\+\left[{\frac {\partial T_{\rho \varphi }}{\partial \rho }}+{\frac {1}{\rho }}{\frac {\partial T_{\varphi \varphi }}{\partial \varphi }}+{\frac {\partial T_{z\varphi }}{\partial z}}+{\frac {1}{\rho }}(T_{\rho \varphi }+T_{\varphi \rho })\right]&{\hat {\boldsymbol {\varphi }}}\\+\left[{\frac {\partial T_{\rho z}}{\partial \rho }}+{\frac {1}{\rho }}{\frac {\partial T_{\varphi z}}{\partial \varphi }}+{\frac {\partial T_{zz}}{\partial z}}+{\frac {T_{\rho z}}{\rho }}\right]&{\hat {\mathbf {z} }}\end{aligned}}} [∂Trr∂r+2Trrr+1r∂Tθr∂θ+cot⁡θrTθr+1rsin⁡θ∂Tφr∂φ−1r(Tθθ+Tφφ)]r^+[∂Trθ∂r+2Trθr+1r∂Tθθ∂θ+cot⁡θrTθθ+1rsin⁡θ∂Tφθ∂φ+Tθrr−cot⁡θrTφφ]θ^+[∂Trφ∂r+2Trφr+1r∂Tθφ∂θ+1rsin⁡θ∂Tφφ∂φ+Tφrr+cot⁡θr(Tθφ+Tφθ)]φ^{\displaystyle {\begin{aligned}\left[{\frac {\partial T_{rr}}{\partial r}}+2{\frac {T_{rr}}{r}}+{\frac {1}{r}}{\frac {\partial T_{\theta r}}{\partial \theta }}+{\frac {\cot \theta }{r}}T_{\theta r}+{\frac {1}{r\sin \theta }}{\frac {\partial T_{\varphi r}}{\partial \varphi }}-{\frac {1}{r}}(T_{\theta \theta }+T_{\varphi \varphi })\right]&{\hat {\mathbf {r} }}\\+\left[{\frac {\partial T_{r\theta }}{\partial r}}+2{\frac {T_{r\theta }}{r}}+{\frac {1}{r}}{\frac {\partial T_{\theta \theta }}{\partial \theta }}+{\frac {\cot \theta }{r}}T_{\theta \theta }+{\frac {1}{r\sin \theta }}{\frac {\partial T_{\varphi \theta }}{\partial \varphi }}+{\frac {T_{\theta r}}{r}}-{\frac {\cot \theta }{r}}T_{\varphi \varphi }\right]&{\hat {\boldsymbol {\theta }}}\\+\left[{\frac {\partial T_{r\varphi }}{\partial r}}+2{\frac {T_{r\varphi }}{r}}+{\frac {1}{r}}{\frac {\partial T_{\theta \varphi }}{\partial \theta }}+{\frac {1}{r\sin \theta }}{\frac {\partial T_{\varphi \varphi }}{\partial \varphi }}+{\frac {T_{\varphi r}}{r}}+{\frac {\cot \theta }{r}}(T_{\theta \varphi }+T_{\varphi \theta })\right]&{\hat {\boldsymbol {\varphi }}}\end{aligned}}} div⁡A=limV→0∬∂VA⋅dS∭VdV=Ax(x+dx)dydz−Ax(x)dydz+Ay(y+dy)dxdz−Ay(y)dxdz+Az(z+dz)dxdy−Az(z)dxdydxdydz=∂Ax∂x+∂Ay∂y+∂Az∂z{\displaystyle {\begin{aligned}\operatorname {div} \mathbf {A} =\lim _{V\to 0}{\frac {\iint _{\partial V}\mathbf {A} \cdot d\mathbf {S} }{\iiint _{V}dV}}&={\frac {A_{x}(x+dx)\,dy\,dz-A_{x}(x)\,dy\,dz+A_{y}(y+dy)\,dx\,dz-A_{y}(y)\,dx\,dz+A_{z}(z+dz)\,dx\,dy-A_{z}(z)\,dx\,dy}{dx\,dy\,dz}}\\&={\frac {\partial A_{x}}{\partial x}}+{\frac {\partial A_{y}}{\partial y}}+{\frac {\partial A_{z}}{\partial z}}\end{aligned}}} (curl⁡A)x=limS⊥x^→0∫∂SA⋅dℓ∬SdS=Az(y+dy)dz−Az(y)dz+Ay(z)dy−Ay(z+dz)dydydz=∂Az∂y−∂Ay∂z{\displaystyle {\begin{aligned}(\operatorname {curl} \mathbf {A} )_{x}=\lim _{S^{\perp \mathbf {\hat {x}} }\to 0}{\frac {\int _{\partial S}\mathbf {A} \cdot d\mathbf {\ell } }{\iint _{S}dS}}&={\frac {A_{z}(y+dy)\,dz-A_{z}(y)\,dz+A_{y}(z)\,dy-A_{y}(z+dz)\,dy}{dy\,dz}}\\&={\frac {\partial A_{z}}{\partial y}}-{\frac {\partial A_{y}}{\partial z}}\end{aligned}}} The expressions for(curl⁡A)y{\displaystyle (\operatorname {curl} \mathbf {A} )_{y}}and(curl⁡A)z{\displaystyle (\operatorname {curl} \mathbf {A} )_{z}}are found in the same way. 
div⁡A=limV→0∬∂VA⋅dS∭VdV=Aρ(ρ+dρ)(ρ+dρ)dϕdz−Aρ(ρ)ρdϕdz+Aϕ(ϕ+dϕ)dρdz−Aϕ(ϕ)dρdz+Az(z+dz)dρ(ρ+dρ/2)dϕ−Az(z)dρ(ρ+dρ/2)dϕρdϕdρdz=1ρ∂(ρAρ)∂ρ+1ρ∂Aϕ∂ϕ+∂Az∂z{\displaystyle {\begin{aligned}\operatorname {div} \mathbf {A} &=\lim _{V\to 0}{\frac {\iint _{\partial V}\mathbf {A} \cdot d\mathbf {S} }{\iiint _{V}dV}}\\&={\frac {A_{\rho }(\rho +d\rho )(\rho +d\rho )\,d\phi \,dz-A_{\rho }(\rho )\rho \,d\phi \,dz+A_{\phi }(\phi +d\phi )\,d\rho \,dz-A_{\phi }(\phi )\,d\rho \,dz+A_{z}(z+dz)\,d\rho \,(\rho +d\rho /2)\,d\phi -A_{z}(z)\,d\rho (\rho +d\rho /2)\,d\phi }{\rho \,d\phi \,d\rho \,dz}}\\&={\frac {1}{\rho }}{\frac {\partial (\rho A_{\rho })}{\partial \rho }}+{\frac {1}{\rho }}{\frac {\partial A_{\phi }}{\partial \phi }}+{\frac {\partial A_{z}}{\partial z}}\end{aligned}}} (curl⁡A)ρ=limS⊥ρ^→0∫∂SA⋅dℓ∬SdS=Aϕ(z)(ρ+dρ)dϕ−Aϕ(z+dz)(ρ+dρ)dϕ+Az(ϕ+dϕ)dz−Az(ϕ)dz(ρ+dρ)dϕdz=−∂Aϕ∂z+1ρ∂Az∂ϕ{\displaystyle {\begin{aligned}(\operatorname {curl} \mathbf {A} )_{\rho }&=\lim _{S^{\perp {\hat {\boldsymbol {\rho }}}}\to 0}{\frac {\int _{\partial S}\mathbf {A} \cdot d{\boldsymbol {\ell }}}{\iint _{S}dS}}\\[1ex]&={\frac {A_{\phi }(z)\left(\rho +d\rho \right)\,d\phi -A_{\phi }(z+dz)\left(\rho +d\rho \right)\,d\phi +A_{z}(\phi +d\phi )\,dz-A_{z}(\phi )\,dz}{\left(\rho +d\rho \right)\,d\phi \,dz}}\\[1ex]&=-{\frac {\partial A_{\phi }}{\partial z}}+{\frac {1}{\rho }}{\frac {\partial A_{z}}{\partial \phi }}\end{aligned}}} (curl⁡A)ϕ=limS⊥ϕ^→0∫∂SA⋅dℓ∬SdS=Az(ρ)dz−Az(ρ+dρ)dz+Aρ(z+dz)dρ−Aρ(z)dρdρdz=−∂Az∂ρ+∂Aρ∂z{\displaystyle {\begin{aligned}(\operatorname {curl} \mathbf {A} )_{\phi }&=\lim _{S^{\perp {\boldsymbol {\hat {\phi }}}}\to 0}{\frac {\int _{\partial S}\mathbf {A} \cdot d{\boldsymbol {\ell }}}{\iint _{S}dS}}\\&={\frac {A_{z}(\rho )\,dz-A_{z}(\rho +d\rho )\,dz+A_{\rho }(z+dz)\,d\rho -A_{\rho }(z)\,d\rho }{d\rho \,dz}}\\&=-{\frac {\partial A_{z}}{\partial \rho }}+{\frac {\partial A_{\rho }}{\partial z}}\end{aligned}}} (curl⁡A)z=limS⊥z^→0∫∂SA⋅dℓ∬SdS=Aρ(ϕ)dρ−Aρ(ϕ+dϕ)dρ+Aϕ(ρ+dρ)(ρ+dρ)dϕ−Aϕ(ρ)ρdϕρdρdϕ=−1ρ∂Aρ∂ϕ+1ρ∂(ρAϕ)∂ρ{\displaystyle {\begin{aligned}(\operatorname {curl} \mathbf {A} )_{z}&=\lim _{S^{\perp {\hat {\boldsymbol {z}}}}\to 0}{\frac {\int _{\partial S}\mathbf {A} \cdot d\mathbf {\ell } }{\iint _{S}dS}}\\[1ex]&={\frac {A_{\rho }(\phi )\,d\rho -A_{\rho }(\phi +d\phi )\,d\rho +A_{\phi }(\rho +d\rho )(\rho +d\rho )\,d\phi -A_{\phi }(\rho )\rho \,d\phi }{\rho \,d\rho \,d\phi }}\\[1ex]&=-{\frac {1}{\rho }}{\frac {\partial A_{\rho }}{\partial \phi }}+{\frac {1}{\rho }}{\frac {\partial (\rho A_{\phi })}{\partial \rho }}\end{aligned}}} curl⁡A=(curl⁡A)ρρ^+(curl⁡A)ϕϕ^+(curl⁡A)zz^=(1ρ∂Az∂ϕ−∂Aϕ∂z)ρ^+(∂Aρ∂z−∂Az∂ρ)ϕ^+1ρ(∂(ρAϕ)∂ρ−∂Aρ∂ϕ)z^{\displaystyle {\begin{aligned}\operatorname {curl} \mathbf {A} &=(\operatorname {curl} \mathbf {A} )_{\rho }{\hat {\boldsymbol {\rho }}}+(\operatorname {curl} \mathbf {A} )_{\phi }{\hat {\boldsymbol {\phi }}}+(\operatorname {curl} \mathbf {A} )_{z}{\hat {\boldsymbol {z}}}\\[1ex]&=\left({\frac {1}{\rho }}{\frac {\partial A_{z}}{\partial \phi }}-{\frac {\partial A_{\phi }}{\partial z}}\right){\hat {\boldsymbol {\rho }}}+\left({\frac {\partial A_{\rho }}{\partial z}}-{\frac {\partial A_{z}}{\partial \rho }}\right){\hat {\boldsymbol {\phi }}}+{\frac {1}{\rho }}\left({\frac {\partial (\rho A_{\phi })}{\partial \rho }}-{\frac {\partial A_{\rho }}{\partial \phi }}\right){\hat {\boldsymbol {z}}}\end{aligned}}} div⁡A=limV→0∬∂VA⋅dS∭VdV=Ar(r+dr)(r+dr)dθ(r+dr)sin⁡θdϕ−Ar(r)rdθrsin⁡θdϕ+Aθ(θ+dθ)sin⁡(θ+dθ)rdrdϕ−Aθ(θ)sin⁡(θ)rdrdϕ+Aϕ(ϕ+dϕ)rdrdθ−Aϕ(ϕ)rdrdθdrrdθrsin⁡θdϕ=1r2∂(r2Ar)∂r+1rsin⁡θ∂(Aθsin⁡θ)∂θ+1rsin⁡θ∂Aϕ∂ϕ{\displaystyle 
{\begin{aligned}\operatorname {div} \mathbf {A} &=\lim _{V\to 0}{\frac {\iint _{\partial V}\mathbf {A} \cdot d\mathbf {S} }{\iiint _{V}dV}}\\&={\frac {A_{r}(r+dr)(r+dr)\,d\theta \,(r+dr)\sin \theta \,d\phi -A_{r}(r)r\,d\theta \,r\sin \theta \,d\phi +A_{\theta }(\theta +d\theta )\sin(\theta +d\theta )r\,dr\,d\phi -A_{\theta }(\theta )\sin(\theta )r\,dr\,d\phi +A_{\phi }(\phi +d\phi )r\,dr\,d\theta -A_{\phi }(\phi )r\,dr\,d\theta }{dr\,r\,d\theta \,r\sin \theta \,d\phi }}\\&={\frac {1}{r^{2}}}{\frac {\partial (r^{2}A_{r})}{\partial r}}+{\frac {1}{r\sin \theta }}{\frac {\partial (A_{\theta }\sin \theta )}{\partial \theta }}+{\frac {1}{r\sin \theta }}{\frac {\partial A_{\phi }}{\partial \phi }}\end{aligned}}} (curl⁡A)r=limS⊥r^→0∫∂SA⋅dℓ∬SdS=Aθ(ϕ)rdθ+Aϕ(θ+dθ)rsin⁡(θ+dθ)dϕ−Aθ(ϕ+dϕ)rdθ−Aϕ(θ)rsin⁡(θ)dϕrdθrsin⁡θdϕ=1rsin⁡θ∂(Aϕsin⁡θ)∂θ−1rsin⁡θ∂Aθ∂ϕ{\displaystyle {\begin{aligned}(\operatorname {curl} \mathbf {A} )_{r}=\lim _{S^{\perp {\boldsymbol {\hat {r}}}}\to 0}{\frac {\int _{\partial S}\mathbf {A} \cdot d\mathbf {\ell } }{\iint _{S}dS}}&={\frac {A_{\theta }(\phi )r\,d\theta +A_{\phi }(\theta +d\theta )r\sin(\theta +d\theta )\,d\phi -A_{\theta }(\phi +d\phi )r\,d\theta -A_{\phi }(\theta )r\sin(\theta )\,d\phi }{r\,d\theta \,r\sin \theta \,d\phi }}\\&={\frac {1}{r\sin \theta }}{\frac {\partial (A_{\phi }\sin \theta )}{\partial \theta }}-{\frac {1}{r\sin \theta }}{\frac {\partial A_{\theta }}{\partial \phi }}\end{aligned}}} (curl⁡A)θ=limS⊥θ^→0∫∂SA⋅dℓ∬SdS=Aϕ(r)rsin⁡θdϕ+Ar(ϕ+dϕ)dr−Aϕ(r+dr)(r+dr)sin⁡θdϕ−Ar(ϕ)drdrrsin⁡θdϕ=1rsin⁡θ∂Ar∂ϕ−1r∂(rAϕ)∂r{\displaystyle {\begin{aligned}(\operatorname {curl} \mathbf {A} )_{\theta }=\lim _{S^{\perp {\boldsymbol {\hat {\theta }}}}\to 0}{\frac {\int _{\partial S}\mathbf {A} \cdot d\mathbf {\ell } }{\iint _{S}dS}}&={\frac {A_{\phi }(r)r\sin \theta \,d\phi +A_{r}(\phi +d\phi )\,dr-A_{\phi }(r+dr)(r+dr)\sin \theta \,d\phi -A_{r}(\phi )\,dr}{dr\,r\sin \theta \,d\phi }}\\&={\frac {1}{r\sin \theta }}{\frac {\partial A_{r}}{\partial \phi }}-{\frac {1}{r}}{\frac {\partial (rA_{\phi })}{\partial r}}\end{aligned}}} (curl⁡A)ϕ=limS⊥ϕ^→0∫∂SA⋅dℓ∬SdS=Ar(θ)dr+Aθ(r+dr)(r+dr)dθ−Ar(θ+dθ)dr−Aθ(r)rdθrdrdθ=1r∂(rAθ)∂r−1r∂Ar∂θ{\displaystyle {\begin{aligned}(\operatorname {curl} \mathbf {A} )_{\phi }=\lim _{S^{\perp {\boldsymbol {\hat {\phi }}}}\to 0}{\frac {\int _{\partial S}\mathbf {A} \cdot d\mathbf {\ell } }{\iint _{S}dS}}&={\frac {A_{r}(\theta )\,dr+A_{\theta }(r+dr)(r+dr)\,d\theta -A_{r}(\theta +d\theta )\,dr-A_{\theta }(r)r\,d\theta }{r\,dr\,d\theta }}\\&={\frac {1}{r}}{\frac {\partial (rA_{\theta })}{\partial r}}-{\frac {1}{r}}{\frac {\partial A_{r}}{\partial \theta }}\end{aligned}}} curl⁡A=(curl⁡A)rr^+(curl⁡A)θθ^+(curl⁡A)ϕϕ^=1rsin⁡θ(∂(Aϕsin⁡θ)∂θ−∂Aθ∂ϕ)r^+1r(1sin⁡θ∂Ar∂ϕ−∂(rAϕ)∂r)θ^+1r(∂(rAθ)∂r−∂Ar∂θ)ϕ^{\displaystyle {\begin{aligned}\operatorname {curl} \mathbf {A} &=(\operatorname {curl} \mathbf {A} )_{r}\,{\hat {\boldsymbol {r}}}+(\operatorname {curl} \mathbf {A} )_{\theta }\,{\hat {\boldsymbol {\theta }}}+(\operatorname {curl} \mathbf {A} )_{\phi }\,{\hat {\boldsymbol {\phi }}}\\[1ex]&={\frac {1}{r\sin \theta }}\left({\frac {\partial (A_{\phi }\sin \theta )}{\partial \theta }}-{\frac {\partial A_{\theta }}{\partial \phi }}\right){\hat {\boldsymbol {r}}}+{\frac {1}{r}}\left({\frac {1}{\sin \theta }}{\frac {\partial A_{r}}{\partial \phi }}-{\frac {\partial (rA_{\phi })}{\partial r}}\right){\hat {\boldsymbol {\theta }}}+{\frac {1}{r}}\left({\frac {\partial (rA_{\theta })}{\partial r}}-{\frac {\partial A_{r}}{\partial \theta }}\right){\hat {\boldsymbol {\phi }}}\end{aligned}}} The unit 
vector of a coordinate parameteruis defined in such a way that a small positive change inucauses the position vectorr{\displaystyle \mathbf {r} }to change inu{\displaystyle \mathbf {u} }direction. Therefore,∂r∂u=∂s∂uu{\displaystyle {\frac {\partial {\mathbf {r} }}{\partial u}}={\frac {\partial {s}}{\partial u}}\mathbf {u} }wheresis the arc length parameter. For two sets of coordinate systemsui{\displaystyle u_{i}}andvj{\displaystyle v_{j}}, according tochain rule,dr=∑i∂r∂uidui=∑i∂s∂uiu^idui=∑j∂s∂vjv^jdvj=∑j∂s∂vjv^j∑i∂vj∂uidui=∑i∑j∂s∂vj∂vj∂uiv^jdui.{\displaystyle d\mathbf {r} =\sum _{i}{\frac {\partial \mathbf {r} }{\partial u_{i}}}\,du_{i}=\sum _{i}{\frac {\partial s}{\partial u_{i}}}{\hat {\mathbf {u} }}_{i}du_{i}=\sum _{j}{\frac {\partial s}{\partial v_{j}}}{\hat {\mathbf {v} }}_{j}\,dv_{j}=\sum _{j}{\frac {\partial s}{\partial v_{j}}}{\hat {\mathbf {v} }}_{j}\sum _{i}{\frac {\partial v_{j}}{\partial u_{i}}}\,du_{i}=\sum _{i}\sum _{j}{\frac {\partial s}{\partial v_{j}}}{\frac {\partial v_{j}}{\partial u_{i}}}{\hat {\mathbf {v} }}_{j}\,du_{i}.} Now, we isolate thei{\displaystyle i}thcomponent. Fori≠k{\displaystyle i{\neq }k}, letduk=0{\displaystyle \mathrm {d} u_{k}=0}. Then divide on both sides bydui{\displaystyle \mathrm {d} u_{i}}to get:∂s∂uiu^i=∑j∂s∂vj∂vj∂uiv^j.{\displaystyle {\frac {\partial s}{\partial u_{i}}}{\hat {\mathbf {u} }}_{i}=\sum _{j}{\frac {\partial s}{\partial v_{j}}}{\frac {\partial v_{j}}{\partial u_{i}}}{\hat {\mathbf {v} }}_{j}.}
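As a quick consistency check on the cylindrical divergence formula derived above, the following SymPy sketch compares it with a brute-force Cartesian computation for one arbitrarily chosen field (the components Aρ = ρ²cos φ, Aφ = ρz, Az = z sin φ are illustrative, not taken from the text); the ∂/∂x and ∂/∂y operators are written out through the standard chain-rule relations for cylindrical coordinates.

```python
import sympy as sp

rho, phi, z = sp.symbols('rho phi z', positive=True)

# Arbitrary test field given by its cylindrical components (illustrative choice).
A_rho = rho**2 * sp.cos(phi)
A_phi = rho * z
A_z   = z * sp.sin(phi)

# Divergence via the cylindrical formula derived above.
div_cyl = (sp.diff(rho * A_rho, rho) + sp.diff(A_phi, phi)) / rho + sp.diff(A_z, z)

# Brute-force check: Cartesian components of the same field ...
A_x = A_rho * sp.cos(phi) - A_phi * sp.sin(phi)
A_y = A_rho * sp.sin(phi) + A_phi * sp.cos(phi)

# ... differentiated with the chain rule for x and y in terms of rho and phi.
def d_dx(f):
    return sp.cos(phi) * sp.diff(f, rho) - sp.sin(phi) / rho * sp.diff(f, phi)

def d_dy(f):
    return sp.sin(phi) * sp.diff(f, rho) + sp.cos(phi) / rho * sp.diff(f, phi)

div_cart = d_dx(A_x) + d_dy(A_y) + sp.diff(A_z, z)

assert sp.simplify(div_cyl - div_cart) == 0   # both equal 3*rho*cos(phi) + sin(phi)
```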
https://en.wikipedia.org/wiki/Del_in_cylindrical_and_spherical_coordinates
Incontinuum mechanics,vorticityis apseudovector(or axial vector)fieldthat describes the localspinningmotion of a continuum near some point (the tendency of something to rotate[1]), as would be seen by an observer located at that point and traveling along with theflow. It is an important quantity inthe dynamical theoryoffluidsand provides a convenient framework for understanding a variety of complex flow phenomena, such as the formation and motion ofvortex rings.[2][3] Mathematically, the vorticityω{\displaystyle {\boldsymbol {\omega }}}is thecurlof theflow velocityv{\displaystyle \mathbf {v} }:[4][3] where∇{\displaystyle \nabla }is thenabla operator. Conceptually,ω{\displaystyle {\boldsymbol {\omega }}}could be determined by marking parts of a continuum in a smallneighborhoodof the point in question, and watching theirrelativedisplacementsas they move along the flow. The vorticityω{\displaystyle {\boldsymbol {\omega }}}would be twice the meanangular velocityvector of those particles relative to theircenter of mass, oriented according to theright-hand rule. By its own definition, the vorticity vector is asolenoidalfield since∇⋅ω=0.{\displaystyle \nabla \cdot {\boldsymbol {\omega }}=0.} In atwo-dimensional flow,ω{\displaystyle {\boldsymbol {\omega }}}is always perpendicular to the plane of the flow, and can therefore be considered ascalar field. The dynamics of vorticity are fundamentally linked to drag through the Josephson-Anderson relation.[5][6] Mathematically, the vorticity of a three-dimensional flow is a pseudovector field, usually denoted byω{\displaystyle {\boldsymbol {\omega }}}, defined as thecurlof the velocity fieldv{\displaystyle \mathbf {v} }describing the continuum motion. InCartesian coordinates: We may also express this in index notation asωi=εijk∂vk∂xj{\displaystyle \omega _{i}=\varepsilon _{ijk}{\frac {\partial v_{k}}{\partial x_{j}}}}.[7]In words, the vorticity tells how the velocity vector changes when one moves by an infinitesimal distance in a direction perpendicular to it. In a two-dimensional flow where the velocity is independent of thez{\displaystyle z}-coordinate and has noz{\displaystyle z}-component, the vorticity vector is always parallel to thez{\displaystyle z}-axis, and therefore can be expressed as a scalar field multiplied by a constant unit vectorz^{\displaystyle {\hat {z}}}: The vorticity is also related to the flow'scirculation(line integral of the velocity) along a closed path by the (classical)Stokes' theorem. Namely, for anyinfinitesimalsurface elementCwithnormal directionn{\displaystyle \mathbf {n} }and areadA{\displaystyle dA}, the circulationdΓ{\displaystyle d\Gamma }along theperimeterofC{\displaystyle C}is thedot productω⋅(ndA){\displaystyle {\boldsymbol {\omega }}\cdot (\mathbf {n} \,dA)}whereω{\displaystyle {\boldsymbol {\omega }}}is the vorticity at the center ofC{\displaystyle C}.[8] Since vorticity is an axial vector, it can be associated with a second-order antisymmetric tensorΩ{\displaystyle {\boldsymbol {\Omega }}}(the so-called vorticity or rotation tensor), which is said to be the dual ofω{\displaystyle {\boldsymbol {\omega }}}. The relation between the two quantities, in index notation, are given by whereεijk{\displaystyle \varepsilon _{ijk}}is the three-dimensionalLevi-Civita tensor. The vorticity tensor is simply the antisymmetric part of the tensor∇v{\displaystyle \nabla \mathbf {v} }, i.e., In a mass of continuum that is rotating like a rigid body, the vorticity is twice theangular velocityvector of that rotation. 
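The statements above and in the following paragraph (rigid rotation has vorticity twice the angular velocity; a sheared parallel flow has nonzero vorticity despite straight pathlines; the free, irrotational vortex has zero vorticity off its axis) can be checked symbolically. The sketch below is a minimal SymPy check; the particular velocity profiles are illustrative choices, not taken from the text.

```python
import sympy as sp

x, y, Omega, K = sp.symbols('x y Omega K', real=True)

def vorticity(v):
    """z-component of curl v for a planar field v = (v_x(x, y), v_y(x, y), 0)."""
    return sp.simplify(sp.diff(v[1], x) - sp.diff(v[0], y))

U = sp.Function('U')          # arbitrary shear profile U(y)

# 1. Parallel shear flow: straight pathlines, but nonzero vorticity -U'(y).
shear = (U(y), 0)
print(vorticity(shear))       # -Derivative(U(y), y)

# 2. Rigid-body rotation: vorticity is twice the angular velocity.
rigid = (-Omega * y, Omega * x)
assert vorticity(rigid) == 2 * Omega

# 3. Irrotational (free) vortex: curved pathlines, zero vorticity off the axis.
r2 = x**2 + y**2
free = (-K * y / r2, K * x / r2)
assert vorticity(free) == 0
```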
This is the case, for example, in the central core of aRankine vortex.[9] The vorticity may be nonzero even when all particles are flowing along straight and parallelpathlines, if there isshear(that is, if the flow speed varies acrossstreamlines). For example, in thelaminar flowwithin a pipe with constantcross section, all particles travel parallel to the axis of the pipe; but faster near that axis, and practically stationary next to the walls. The vorticity will be zero on the axis, and maximum near the walls, where the shear is largest. Conversely, a flow may have zero vorticity even though its particles travel along curved trajectories. An example is the idealirrotational vortex, where most particles rotate about some straight axis, with speed inversely proportional to their distances to that axis. A small parcel of continuum that does not straddle the axis will be rotated in one sense but sheared in the opposite sense, in such a way that their mean angular velocityabout their center of massis zero. Another way to visualize vorticity is to imagine that, instantaneously, a tiny part of the continuum becomes solid and the rest of the flow disappears. If that tiny new solid particle is rotating, rather than just moving with the flow, then there is vorticity in the flow. In the figure below, the left subfigure demonstrates no vorticity, and the right subfigure demonstrates existence of vorticity. The evolution of the vorticity field in time is described by thevorticity equation, which can be derived from theNavier–Stokes equations.[10] In many real flows where the viscosity can be neglected (more precisely, in flows with highReynolds number), the vorticity field can be modeled by a collection of discrete vortices, the vorticity being negligible everywhere except in small regions of space surrounding the axes of the vortices. This is true in the case of two-dimensionalpotential flow(i.e. two-dimensional zero viscosity flow), in which case the flowfield can be modeled as acomplex-valuedfield on thecomplex plane. Vorticity is useful for understanding how ideal potential flow solutions can be perturbed to model real flows. In general, the presence of viscosity causes adiffusionof vorticity away from the vortex cores into the general flow field; this flow is accounted for by a diffusion term in the vorticity transport equation.[11] Avortex lineorvorticity lineis a line which is everywhere tangent to the local vorticity vector. Vortex lines are defined by the relation[12] whereω=(ωx,ωy,ωz){\displaystyle {\boldsymbol {\omega }}=(\omega _{x},\omega _{y},\omega _{z})}is the vorticity vector inCartesian coordinates. Avortex tubeis the surface in the continuum formed by all vortex lines passing through a given (reducible) closed curve in the continuum. The 'strength' of a vortex tube (also calledvortex flux)[13]is the integral of the vorticity across a cross-section of the tube, and is the same everywhere along the tube (because vorticity has zero divergence). It is a consequence ofHelmholtz's theorems(or equivalently, ofKelvin's circulation theorem) that in an inviscid fluid the 'strength' of the vortex tube is also constant with time. 
Viscous effects introduce frictional losses and time dependence.[14] In a three-dimensional flow, vorticity (as measured by thevolume integralof the square of its magnitude) can be intensified when a vortex line is extended — a phenomenon known asvortex stretching.[15]This phenomenon occurs in the formation of a bathtub vortex in outflowing water, and the build-up of a tornado by rising air currents. A rotating-vane vorticity meter was invented by Russian hydraulic engineer A. Ya. Milovich (1874–1958). In 1913 he proposed a cork with four blades attached as a device qualitatively showing the magnitude of the vertical projection of the vorticity and demonstrated a motion-picture photography of the float's motion on the water surface in a model of a river bend.[16] Rotating-vane vorticity meters are commonly shown in educational films on continuum mechanics (famous examples include the NCFMF's "Vorticity"[17]and "Fundamental Principles of Flow" by Iowa Institute of Hydraulic Research[18]). Inaerodynamics, theliftdistribution over afinite wingmay be approximated by assuming that each spanwise segment of the wing has a semi-infinite trailing vortex behind it. It is then possible to solve for the strength of the vortices using the criterion that there be no flow induced through the surface of the wing. This procedure is called the vortex panel method ofcomputational fluid dynamics. The strengths of the vortices are then summed to find the total approximatecirculationabout the wing. According to theKutta–Joukowski theorem, lift per unit of span is the product of circulation, airspeed, and air density. Therelative vorticityis the vorticity relative to the Earth induced by the air velocity field. This air velocity field is often modeled as a two-dimensional flow parallel to the ground, so that the relative vorticity vector is generally scalar rotation quantity perpendicular to the ground. Vorticity is positive when – looking down onto the Earth's surface – the wind turns counterclockwise. In the northern hemisphere, positive vorticity is calledcyclonic rotation, and negative vorticity isanticyclonic rotation; the nomenclature is reversed in the Southern Hemisphere. Theabsolute vorticityis computed from the air velocity relative to an inertial frame, and therefore includes a term due to the Earth's rotation, theCoriolis parameter. Thepotential vorticityis absolute vorticity divided by the vertical spacing between levels of constant(potential) temperature(orentropy). The absolute vorticity of an air mass will change if the air mass is stretched (or compressed) in the vertical direction, but the potential vorticity isconservedin anadiabaticflow. Asadiabaticflow predominates in the atmosphere, the potential vorticity is useful as an approximatetracerof air masses in the atmosphere over the timescale of a few days, particularly when viewed on levels of constant entropy. Thebarotropic vorticity equationis the simplest way for forecasting the movement ofRossby waves(that is, thetroughsandridgesof 500hPageopotential height) over a limited amount of time (a few days). In the 1950s, the first successful programs fornumerical weather forecastingutilized that equation. In modern numerical weather forecasting models andgeneral circulation models(GCMs), vorticity may be one of the predicted variables, in which case the corresponding time-dependent equation is aprognostic equation. 
Related to the concept of vorticity is thehelicityH(t){\displaystyle H(t)}, defined asH(t)=∫Vv⋅ωdV{\displaystyle H(t)=\int _{V}\mathbf {v} \cdot {\boldsymbol {\omega }}\,dV}where the integral is over a given volumeV{\displaystyle V}. In atmospheric science, helicity of the air motion is important in forecastingsupercellsand the potential fortornadicactivity.[19]
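As a concrete, purely illustrative example of helicity (the field below is not mentioned in the text), the sketch evaluates H for the classical Arnold–Beltrami–Childress field on a 2π-periodic box; this is a Beltrami field with ∇ × v = v, so its helicity density is simply |v|², and the code checks both facts with SymPy.

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
A, B, C = sp.symbols('A B C', real=True)

# ABC (Arnold-Beltrami-Childress) velocity field.
v = sp.Matrix([A*sp.sin(z) + C*sp.cos(y),
               B*sp.sin(x) + A*sp.cos(z),
               C*sp.sin(y) + B*sp.cos(x)])

# Vorticity = curl v; for this Beltrami field it equals v itself.
omega = sp.Matrix([sp.diff(v[2], y) - sp.diff(v[1], z),
                   sp.diff(v[0], z) - sp.diff(v[2], x),
                   sp.diff(v[1], x) - sp.diff(v[0], y)])
assert sp.simplify(omega - v) == sp.zeros(3, 1)

# Helicity over one periodic box [0, 2*pi]^3.
H = sp.integrate(v.dot(omega),
                 (x, 0, 2*sp.pi), (y, 0, 2*sp.pi), (z, 0, 2*sp.pi))
assert sp.simplify(H - (2*sp.pi)**3 * (A**2 + B**2 + C**2)) == 0
```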
https://en.wikipedia.org/wiki/Vorticity
Invector calculus, thedivergence theorem, also known asGauss's theoremorOstrogradsky's theorem,[1]is atheoremrelating thefluxof avector fieldthrough a closedsurfaceto thedivergenceof the field in the volume enclosed. More precisely, the divergence theorem states that thesurface integralof a vector field over a closed surface, which is called the "flux" through the surface, is equal to thevolume integralof the divergence over the region enclosed by the surface. Intuitively, it states that "the sum of all sources of the field in a region (with sinks regarded as negative sources) gives the net flux out of the region". The divergence theorem is an important result for the mathematics ofphysicsandengineering, particularly inelectrostaticsandfluid dynamics. In these fields, it is usually applied in three dimensions. However, itgeneralizesto any number of dimensions. In one dimension, it is equivalent to thefundamental theorem of calculus. In two dimensions, it is equivalent toGreen's theorem. Vector fieldsare often illustrated using the example of thevelocityfield of afluid, such as a gas or liquid. A moving liquid has a velocity—a speed and a direction—at each point, which can be represented by avector, so that the velocity of the liquid at any moment forms a vector field. Consider an imaginary closed surfaceSinside a body of liquid, enclosing a volume of liquid. Thefluxof liquid out of the volume at any time is equal to the volume rate of fluid crossing this surface, i.e., thesurface integralof the velocity over the surface. Since liquids are incompressible, the amount of liquid inside a closed volume is constant; if there are nosources or sinksinside the volume then the flux of liquid out ofSis zero. If the liquid is moving, it may flow into the volume at some points on the surfaceSand out of the volume at other points, but the amounts flowing in and out at any moment are equal, so thenetflux of liquid out of the volume is zero. However if asourceof liquid is inside the closed surface, such as a pipe through which liquid is introduced, the additional liquid will exert pressure on the surrounding liquid, causing an outward flow in all directions. This will cause a net outward flow through the surfaceS. The flux outward throughSequals the volume rate of flow of fluid intoSfrom the pipe. Similarly if there is asinkor drain insideS, such as a pipe which drains the liquid off, the external pressure of the liquid will cause a velocity throughout the liquid directed inward toward the location of the drain. The volume rate of flow of liquid inward through the surfaceSequals the rate of liquid removed by the sink. If there are multiple sources and sinks of liquid insideS, the flux through the surface can be calculated by adding up the volume rate of liquid added by the sources and subtracting the rate of liquid drained off by the sinks. The volume rate of flow of liquid through a source or sink (with the flow through a sink given a negative sign) is equal to thedivergenceof the velocity field at the pipe mouth, so adding up (integrating) the divergence of the liquid throughout the volume enclosed bySequals the volume rate of flux throughS. 
This is the divergence theorem.[2] The divergence theorem is employed in anyconservation lawwhich states that the total volume of all sinks and sources, that is the volume integral of the divergence, is equal to the net flow across the volume's boundary.[3] SupposeVis asubsetofRn{\displaystyle \mathbb {R} ^{n}}(in the case ofn= 3,Vrepresents a volume inthree-dimensional space) which iscompactand has apiecewisesmooth boundaryS(also indicated with∂V=S{\displaystyle \partial V=S}). IfFis a continuously differentiable vector field defined on aneighborhoodofV, then:[4][5] The left side is avolume integralover the volumeV, and the right side is thesurface integralover the boundary of the volumeV. The closed, measurable set∂V{\displaystyle \partial V}is oriented by outward-pointingnormals, andn^{\displaystyle \mathbf {\hat {n}} }is the outward pointing unit normal at almost each point on the boundary∂V{\displaystyle \partial V}. (dS{\displaystyle \mathrm {d} \mathbf {S} }may be used as a shorthand forndS{\displaystyle \mathbf {n} \mathrm {d} S}.) In terms of the intuitive description above, the left-hand side of the equation represents the total of the sources in the volumeV, and the right-hand side represents the total flow across the boundaryS. The divergence theorem follows from the fact that if a volumeVis partitioned into separate parts, thefluxout of the original volume is equal to the algebraic sum of the flux out of each component volume.[6][7]This is true despite the fact that the new subvolumes have surfaces that were not part of the original volume's surface, because these surfaces are just partitions between two of the subvolumes and the flux through them just passes from one volume to the other and so cancels out when the flux out of the subvolumes is summed. See the diagram. A closed, bounded volumeVis divided into two volumesV1andV2by a surfaceS3(green). The fluxΦ(Vi)out of each component regionViis equal to the sum of the flux through its two faces, so the sum of the flux out of the two parts is whereΦ1andΦ2are the flux out of surfacesS1andS2,Φ31is the flux throughS3out of volume 1, andΦ32is the flux throughS3out of volume 2. The point is that surfaceS3is part of the surface of both volumes. The "outward" direction of thenormal vectorn^{\displaystyle \mathbf {\hat {n}} }is opposite for each volume, so the flux out of one throughS3is equal to the negative of the flux out of the other so these two fluxes cancel in the sum. Therefore: Since the union of surfacesS1andS2isS This principle applies to a volume divided into any number of parts, as shown in the diagram.[7]Since the integral over each internal partition(green surfaces)appears with opposite signs in the flux of the two adjacent volumes they cancel out, and the only contribution to the flux is the integral over the external surfaces(grey). Since the external surfaces of all the component volumes equal the original surface. The fluxΦout of each volume is the surface integral of the vector fieldF(x)over the surface The goal is to divide the original volume into infinitely many infinitesimal volumes. As the volume is divided into smaller and smaller parts, the surface integral on the right, the flux out of each subvolume, approaches zero because the surface areaS(Vi)approaches zero. 
However, from the definition ofdivergence, the ratio of flux to volume,Φ(Vi)|Vi|=1|Vi|∬S(Vi)F⋅n^dS{\displaystyle {\frac {\Phi (V_{\text{i}})}{|V_{\text{i}}|}}={\frac {1}{|V_{\text{i}}|}}\iint _{S(V_{\text{i}})}\mathbf {F} \cdot \mathbf {\hat {n}} \;\mathrm {d} S}, the part in parentheses below, does not in general vanish but approaches thedivergencedivFas the volume approaches zero.[7] As long as the vector fieldF(x)has continuous derivatives, the sum above holds even in thelimitwhen the volume is divided into infinitely small increments As|Vi|{\displaystyle |V_{\text{i}}|}approaches zero volume, it becomes the infinitesimaldV, the part in parentheses becomes the divergence, and the sum becomes avolume integraloverV ∬S(V)F⋅n^dS=∭Vdiv⁡FdV{\displaystyle \;\iint _{S(V)}\mathbf {F} \cdot \mathbf {\hat {n}} \;\mathrm {d} S=\iiint _{V}\operatorname {div} \mathbf {F} \;\mathrm {d} V\;} Since this derivation is coordinate free, it shows that the divergence does not depend on the coordinates used. We are going to prove the following:[citation needed] Theorem—LetΩ⊂Rn{\displaystyle \Omega \subset \mathbb {R} ^{n}}be open and bounded withC1{\displaystyle C^{1}}boundary. Ifu{\displaystyle u}isC1{\displaystyle C^{1}}on an open neighborhoodO{\displaystyle O}ofΩ¯{\displaystyle {\overline {\Omega }}}, that is,u∈C1(O){\displaystyle u\in C^{1}(O)}, then for eachi∈{1,…,n}{\displaystyle i\in \{1,\dots ,n\}},∫ΩuxidV=∫∂ΩuνidS,{\displaystyle \int _{\Omega }u_{x_{i}}\,dV=\int _{\partial \Omega }u\nu _{i}\,dS,}whereν:∂Ω→Rn{\displaystyle \nu :\partial \Omega \to \mathbb {R} ^{n}}is the outward pointing unit normal vector to∂Ω{\displaystyle \partial \Omega }. Equivalently,∫Ω∇udV=∫∂ΩuνdS.{\displaystyle \int _{\Omega }\nabla u\,dV=\int _{\partial \Omega }u\nu \,dS.} Proof of Theorem.[8] x′=(x1,…,xn−1),{\displaystyle x'=(x_{1},\dots ,x_{n-1}),}it holds thatU={x∈Rn:|x′|<rand|xn−g(x′)|<h}{\displaystyle U=\{x\in \mathbb {R} ^{n}:|x'|<r{\text{ and }}|x_{n}-g(x')|<h\}}and forx∈U{\displaystyle x\in U},xn=g(x′)⟹x∈∂Ω,−h<xn−g(x′)<0⟹x∈Ω,0<xn−g(x′)<h⟹x∉Ω.{\displaystyle {\begin{aligned}x_{n}=g(x')&\implies x\in \partial \Omega ,\\-h<x_{n}-g(x')<0&\implies x\in \Omega ,\\0<x_{n}-g(x')<h&\implies x\notin \Omega .\\\end{aligned}}} We are going to prove the following:[citation needed] Theorem—LetΩ¯{\displaystyle {\overline {\Omega }}}be aC2{\displaystyle C^{2}}compact manifold with boundary withC1{\displaystyle C^{1}}metric tensorg{\displaystyle g}. LetΩ{\displaystyle \Omega }denote the manifold interior ofΩ¯{\displaystyle {\overline {\Omega }}}and let∂Ω{\displaystyle \partial \Omega }denote the manifold boundary ofΩ¯{\displaystyle {\overline {\Omega }}}. Let(⋅,⋅){\displaystyle (\cdot ,\cdot )}denoteL2(Ω¯){\displaystyle L^{2}({\overline {\Omega }})}inner products of functions and⟨⋅,⋅⟩{\displaystyle \langle \cdot ,\cdot \rangle }denote inner products of vectors. Supposeu∈C1(Ω¯,R){\displaystyle u\in C^{1}({\overline {\Omega }},\mathbb {R} )}andX{\displaystyle X}is aC1{\displaystyle C^{1}}vector field onΩ¯{\displaystyle {\overline {\Omega }}}. Then(grad⁡u,X)=−(u,div⁡X)+∫∂Ωu⟨X,N⟩dS,{\displaystyle (\operatorname {grad} u,X)=-(u,\operatorname {div} X)+\int _{\partial \Omega }u\langle X,N\rangle \,dS,}whereN{\displaystyle N}is the outward-pointing unit normal vector to∂Ω{\displaystyle \partial \Omega }. Proof of Theorem.[9]We use the Einstein summation convention. By using a partition of unity, we may assume thatu{\displaystyle u}andX{\displaystyle X}have compact support in a coordinate patchO⊂Ω¯{\displaystyle O\subset {\overline {\Omega }}}. 
First consider the case where the patch is disjoint from∂Ω{\displaystyle \partial \Omega }. ThenO{\displaystyle O}is identified with an open subset ofRn{\displaystyle \mathbb {R} ^{n}}and integration by parts produces no boundary terms:(grad⁡u,X)=∫O⟨grad⁡u,X⟩gdx=∫O∂juXjgdx=−∫Ou∂j(gXj)dx=−∫Ou1g∂j(gXj)gdx=(u,−1g∂j(gXj))=(u,−div⁡X).{\displaystyle {\begin{aligned}(\operatorname {grad} u,X)&=\int _{O}\langle \operatorname {grad} u,X\rangle {\sqrt {g}}\,dx\\&=\int _{O}\partial _{j}uX^{j}{\sqrt {g}}\,dx\\&=-\int _{O}u\partial _{j}({\sqrt {g}}X^{j})\,dx\\&=-\int _{O}u{\frac {1}{\sqrt {g}}}\partial _{j}({\sqrt {g}}X^{j}){\sqrt {g}}\,dx\\&=(u,-{\frac {1}{\sqrt {g}}}\partial _{j}({\sqrt {g}}X^{j}))\\&=(u,-\operatorname {div} X).\end{aligned}}}In the last equality we used the Voss-Weyl coordinate formula for the divergence, although the preceding identity could be used to define−div{\displaystyle -\operatorname {div} }as the formal adjoint ofgrad{\displaystyle \operatorname {grad} }. Now supposeO{\displaystyle O}intersects∂Ω{\displaystyle \partial \Omega }. ThenO{\displaystyle O}is identified with an open set inR+n={x∈Rn:xn≥0}{\displaystyle \mathbb {R} _{+}^{n}=\{x\in \mathbb {R} ^{n}:x_{n}\geq 0\}}. We zero extendu{\displaystyle u}andX{\displaystyle X}toR+n{\displaystyle \mathbb {R} _{+}^{n}}and perform integration by parts to obtain(grad⁡u,X)=∫O⟨grad⁡u,X⟩gdx=∫R+n∂juXjgdx=(u,−div⁡X)−∫Rn−1u(x′,0)Xn(x′,0)g(x′,0)dx′,{\displaystyle {\begin{aligned}(\operatorname {grad} u,X)&=\int _{O}\langle \operatorname {grad} u,X\rangle {\sqrt {g}}\,dx\\&=\int _{\mathbb {R} _{+}^{n}}\partial _{j}uX^{j}{\sqrt {g}}\,dx\\&=(u,-\operatorname {div} X)-\int _{\mathbb {R} ^{n-1}}u(x',0)X^{n}(x',0){\sqrt {g(x',0)}}\,dx',\end{aligned}}}wheredx′=dx1…dxn−1{\displaystyle dx'=dx_{1}\dots dx_{n-1}}. By a variant of thestraightening theorem for vector fields, we may chooseO{\displaystyle O}so that∂∂xn{\displaystyle {\frac {\partial }{\partial x_{n}}}}is the inward unit normal−N{\displaystyle -N}at∂Ω{\displaystyle \partial \Omega }. In this caseg(x′,0)dx′=g∂Ω(x′)dx′=dS{\displaystyle {\sqrt {g(x',0)}}\,dx'={\sqrt {g_{\partial \Omega }(x')}}\,dx'=dS}is the volume element on∂Ω{\displaystyle \partial \Omega }and the above formula reads(grad⁡u,X)=(u,−div⁡X)+∫∂Ωu⟨X,N⟩dS.{\displaystyle (\operatorname {grad} u,X)=(u,-\operatorname {div} X)+\int _{\partial \Omega }u\langle X,N\rangle \,dS.}This completes the proof. By replacingFin the divergence theorem with specific forms, other useful identities can be derived (cf.vector identities).[10] Suppose we wish to evaluate whereSis theunit spheredefined by andFis thevector field The direct computation of this integral is quite difficult, but we can simplify the derivation of the result using the divergence theorem, because the divergence theorem says that the integral is equal to: whereWis theunit ball: Since the functionyis positive in one hemisphere ofWand negative in the other, in an equal and opposite way, its total integral overWis zero. The same is true forz: Therefore, because the unit ballWhasvolume⁠4π/3⁠. As a result of the divergence theorem, a host of physical laws can be written in both a differential form (where one quantity is the divergence of another) and an integral form (where the flux of one quantity through a closed surface is equal to another quantity). Three examples areGauss's law(inelectrostatics),Gauss's law for magnetism, andGauss's law for gravity. 
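A minimal SymPy check of the unit-sphere example above, using F = (2x, y², z²) as an assumed stand-in for the field of that example; under this assumption both sides of the divergence theorem evaluate to 8π/3, i.e. twice the 4π/3 volume of the unit ball.

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
r, th, ph = sp.symbols('r theta phi', nonnegative=True)

# Stand-in vector field (an assumption, used only for this check).
F = sp.Matrix([2*x, y**2, z**2])
divF = sp.diff(F[0], x) + sp.diff(F[1], y) + sp.diff(F[2], z)

# Left side: volume integral of div F over the unit ball, in spherical coordinates.
sph = {x: r*sp.sin(th)*sp.cos(ph), y: r*sp.sin(th)*sp.sin(ph), z: r*sp.cos(th)}
lhs = sp.integrate(divF.subs(sph) * r**2 * sp.sin(th),
                   (r, 0, 1), (th, 0, sp.pi), (ph, 0, 2*sp.pi))

# Right side: flux through the unit sphere, outward normal n = (x, y, z).
on_sphere = {x: sp.sin(th)*sp.cos(ph), y: sp.sin(th)*sp.sin(ph), z: sp.cos(th)}
rhs = sp.integrate(F.dot(sp.Matrix([x, y, z])).subs(on_sphere) * sp.sin(th),
                   (th, 0, sp.pi), (ph, 0, 2*sp.pi))

print(lhs, rhs)                     # 8*pi/3  8*pi/3
assert sp.simplify(lhs - rhs) == 0
```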
Continuity equationsoffer more examples of laws with both differential and integral forms, related to each other by the divergence theorem. Influid dynamics,electromagnetism,quantum mechanics,relativity theory, and a number of other fields, there arecontinuity equationsthat describe the conservation of mass, momentum, energy, probability, or other quantities. Generically, these equations state that the divergence of the flow of the conserved quantity is equal to the distribution ofsourcesorsinksof that quantity. The divergence theorem states that any such continuity equation can be written in a differential form (in terms of a divergence) and an integral form (in terms of a flux).[12] Anyinverse-square lawcan instead be written in aGauss's law-type form (with a differential and integral form, as described above). Two examples areGauss's law(in electrostatics), which follows from the inverse-squareCoulomb's law, andGauss's law for gravity, which follows from the inverse-squareNewton's law of universal gravitation. The derivation of the Gauss's law-type equation from the inverse-square formulation or vice versa is exactly the same in both cases; see either of those articles for details.[12] Joseph-Louis Lagrangeintroduced the notion of surface integrals in 1760 and again in more general terms in 1811, in the second edition of hisMécanique Analytique.Lagrange employed surface integrals in his work on fluid mechanics.[13]He discovered the divergence theorem in 1762.[14] Carl Friedrich Gausswas also using surface integrals while working on the gravitational attraction of an elliptical spheroid in 1813, when he proved special cases of the divergence theorem.[15][13]He proved additional special cases in 1833 and 1839.[16]But it wasMikhail Ostrogradsky, who gave the first proof of the general theorem, in 1826, as part of his investigation of heat flow.[17]Special cases were proven byGeorge Greenin 1828 inAn Essay on the Application of Mathematical Analysis to the Theories of Electricity and Magnetism,[18][16]Siméon Denis Poissonin 1824 in a paper on elasticity, andFrédéric Sarrusin 1828 in his work on floating bodies.[19][16] To verify the planar variant of the divergence theorem for a regionR{\displaystyle R}: and the vector field: The boundary ofR{\displaystyle R}is the unit circle,C{\displaystyle C}, that can be represented parametrically by: such that0≤s≤2π{\displaystyle 0\leq s\leq 2\pi }wheres{\displaystyle s}units is the length arc from the points=0{\displaystyle s=0}to the pointP{\displaystyle P}onC{\displaystyle C}. Then a vector equation ofC{\displaystyle C}is At a pointP{\displaystyle P}onC{\displaystyle C}: Therefore, BecauseM=Re(F)=2y{\displaystyle M={\mathfrak {Re}}(\mathbf {F} )=2y}, we can evaluate∂M∂x=0{\displaystyle {\frac {\partial M}{\partial x}}=0},and becauseN=Im(F)=5x{\displaystyle N={\mathfrak {Im}}(\mathbf {F} )=5x},∂N∂y=0{\displaystyle {\frac {\partial N}{\partial y}}=0}. Thus Let's say we wanted to evaluate the flux of the followingvector fielddefined byF=2x2i+2y2j+2z2k{\displaystyle \mathbf {F} =2x^{2}{\textbf {i}}+2y^{2}{\textbf {j}}+2z^{2}{\textbf {k}}}bounded by the following inequalities: By the divergence theorem, We now need to determine the divergence ofF{\displaystyle {\textbf {F}}}. 
IfF{\displaystyle \mathbf {F} }is a three-dimensional vector field, then the divergence ofF{\displaystyle {\textbf {F}}}is given by∇⋅F=(∂∂xi+∂∂yj+∂∂zk)⋅F{\textstyle \nabla \cdot {\textbf {F}}=\left({\frac {\partial }{\partial x}}{\textbf {i}}+{\frac {\partial }{\partial y}}{\textbf {j}}+{\frac {\partial }{\partial z}}{\textbf {k}}\right)\cdot {\textbf {F}}}. Thus, we can set up the following flux integralI={\displaystyle I=}S{\displaystyle {\scriptstyle S}}F⋅ndS,{\displaystyle \mathbf {F} \cdot \mathbf {n} \,\mathrm {d} S,}as follows: Now that we have set up the integral, we can evaluate it. One can use thegeneralised Stokes' theoremto equate then-dimensional volume integral of the divergence of a vector fieldFover a regionUto the(n− 1)-dimensional surface integral ofFover the boundary ofU: This equation is also known as the divergence theorem. Whenn= 2, this is equivalent toGreen's theorem. Whenn= 1, it reduces to thefundamental theorem of calculus, part 2. Writing the theorem inEinstein notation: suggestively, replacing the vector fieldFwith a rank-ntensor fieldT, this can be generalized to:[20] where on each side,tensor contractionoccurs for at least one index. This form of the theorem is still in 3d, each index takes values 1, 2, and 3. It can be generalized further still to higher (or lower) dimensions (for example to 4dspacetimeingeneral relativity[21]).
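For the n = 2 case just mentioned, where the theorem is equivalent to Green's theorem, the planar example above (M = 2y, N = 5x on the unit disk) can be confirmed in a few lines of SymPy: both the outward flux through the unit circle and the area integral of the divergence vanish.

```python
import sympy as sp

x, y, s = sp.symbols('x y s', real=True)

# Planar field from the example above: F = (M, N) = (2y, 5x).
M, N = 2*y, 5*x
div_F = sp.diff(M, x) + sp.diff(N, y)              # identically 0

# Flux through the unit circle, parametrised by arc length s; outward n = (cos s, sin s).
flux = sp.integrate((M*sp.cos(s) + N*sp.sin(s)).subs({x: sp.cos(s), y: sp.sin(s)}),
                    (s, 0, 2*sp.pi))

# Area integral of the divergence over the unit disk (trivially zero here).
area = sp.integrate(div_F, (y, -sp.sqrt(1 - x**2), sp.sqrt(1 - x**2)), (x, -1, 1))

assert flux == 0 and area == 0
```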
https://en.wikipedia.org/wiki/Divergence_theorem
Inspecial relativity, afour-vector(or4-vector, sometimesLorentz vector)[1]is an object with four components, which transform in a specific way underLorentz transformations. Specifically, a four-vector is an element of a four-dimensionalvector spaceconsidered as arepresentation spaceof thestandard representationof theLorentz group, the (⁠1/2⁠,⁠1/2⁠) representation. It differs from aEuclidean vectorin how its magnitude is determined. The transformations that preserve this magnitude are the Lorentz transformations, which includespatial rotationsandboosts(a change by a constant velocity to anotherinertial reference frame).[2]: ch1 Four-vectors describe, for instance, positionxμin spacetime modeled asMinkowski space, a particle'sfour-momentumpμ, the amplitude of theelectromagnetic four-potentialAμ(x)at a pointxin spacetime, and the elements of the subspace spanned by thegamma matricesinside theDirac algebra. The Lorentz group may be represented by 4×4 matricesΛ. The action of a Lorentz transformation on a generalcontravariantfour-vectorX(like the examples above), regarded as a column vector withCartesian coordinateswith respect to aninertial framein the entries, is given by X′=ΛX,{\displaystyle X'=\Lambda X,} (matrix multiplication) where the components of the primed object refer to the new frame. Related to the examples above that are given as contravariant vectors, there are also the correspondingcovariant vectorsxμ,pμandAμ(x). These transform according to the rule X′=(Λ−1)TX,{\displaystyle X'=\left(\Lambda ^{-1}\right)^{\textrm {T}}X,} whereTdenotes thematrix transpose. This rule is different from the above rule. It corresponds to thedual representationof the standard representation. However, for the Lorentz group the dual of any representation isequivalentto the original representation. Thus the objects with covariant indices are four-vectors as well. For an example of a well-behaved four-component object in special relativity that isnota four-vector, seebispinor. It is similarly defined, the difference being that the transformation rule under Lorentz transformations is given by a representation other than the standard representation. In this case, the rule readsX′= Π(Λ)X, whereΠ(Λ)is a 4×4 matrix other thanΛ. Similar remarks apply to objects with fewer or more components that are well-behaved under Lorentz transformations. These includescalars,spinors,tensorsand spinor-tensors. The article considers four-vectors in the context of special relativity. Although the concept of four-vectors also extends togeneral relativity, some of the results stated in this article require modification in general relativity. The notations in this article are: lowercase bold forthree-dimensionalvectors, hats for three-dimensionalunit vectors, capital bold forfour dimensionalvectors (except for the four-gradient), andtensor index notation. Afour-vectorAis a vector with a "timelike" component and three "spacelike" components, and can be written in various equivalent notations:[3] A=(A0,A1,A2,A3)=A0E0+A1E1+A2E2+A3E3=A0E0+AiEi=AαEα{\displaystyle {\begin{aligned}\mathbf {A} &=\left(A^{0},\,A^{1},\,A^{2},\,A^{3}\right)\\&=A^{0}\mathbf {E} _{0}+A^{1}\mathbf {E} _{1}+A^{2}\mathbf {E} _{2}+A^{3}\mathbf {E} _{3}\\&=A^{0}\mathbf {E} _{0}+A^{i}\mathbf {E} _{i}\\&=A^{\alpha }\mathbf {E} _{\alpha }\end{aligned}}} whereAαis the magnitude component andEαis thebasis vectorcomponent; note that both are necessary to make a vector, and that whenAαis seen alone, it refers strictly to thecomponentsof the vector. 
The upper indices indicatecontravariantcomponents. Here the standard convention is that Latin indices take values for spatial components, so thati= 1, 2, 3, and Greek indices take values for spaceand timecomponents, soα= 0, 1, 2, 3, used with thesummation convention. The split between the time component and the spatial components is a useful one to make when determining contractions of one four vector with other tensor quantities, such as for calculating Lorentz invariants in inner products (examples are given below), orraising and lowering indices. In special relativity, the spacelike basisE1,E2,E3and componentsA1,A2,A3are oftenCartesianbasis and components: A=(At,Ax,Ay,Az)=AtEt+AxEx+AyEy+AzEz{\displaystyle {\begin{aligned}\mathbf {A} &=\left(A_{t},\,A_{x},\,A_{y},\,A_{z}\right)\\&=A_{t}\mathbf {E} _{t}+A_{x}\mathbf {E} _{x}+A_{y}\mathbf {E} _{y}+A_{z}\mathbf {E} _{z}\\\end{aligned}}} although, of course, any other basis and components may be used, such asspherical polar coordinates A=(At,Ar,Aθ,Aϕ)=AtEt+ArEr+AθEθ+AϕEϕ{\displaystyle {\begin{aligned}\mathbf {A} &=\left(A_{t},\,A_{r},\,A_{\theta },\,A_{\phi }\right)\\&=A_{t}\mathbf {E} _{t}+A_{r}\mathbf {E} _{r}+A_{\theta }\mathbf {E} _{\theta }+A_{\phi }\mathbf {E} _{\phi }\\\end{aligned}}} orcylindrical polar coordinates, A=(At,Ar,Aθ,Az)=AtEt+ArEr+AθEθ+AzEz{\displaystyle {\begin{aligned}\mathbf {A} &=(A_{t},\,A_{r},\,A_{\theta },\,A_{z})\\&=A_{t}\mathbf {E} _{t}+A_{r}\mathbf {E} _{r}+A_{\theta }\mathbf {E} _{\theta }+A_{z}\mathbf {E} _{z}\\\end{aligned}}} or any otherorthogonal coordinates, or even generalcurvilinear coordinates. Note the coordinate labels are always subscripted as labels and are not indices taking numerical values. In general relativity, local curvilinear coordinates in a local basis must be used. Geometrically, a four-vector can still be interpreted as an arrow, but in spacetime - not just space. In relativity, the arrows are drawn as part ofMinkowski diagram(also calledspacetime diagram). In this article, four-vectors will be referred to simply as vectors. It is also customary to represent the bases bycolumn vectors: E0=(1000),E1=(0100),E2=(0010),E3=(0001){\displaystyle \mathbf {E} _{0}={\begin{pmatrix}1\\0\\0\\0\end{pmatrix}}\,,\quad \mathbf {E} _{1}={\begin{pmatrix}0\\1\\0\\0\end{pmatrix}}\,,\quad \mathbf {E} _{2}={\begin{pmatrix}0\\0\\1\\0\end{pmatrix}}\,,\quad \mathbf {E} _{3}={\begin{pmatrix}0\\0\\0\\1\end{pmatrix}}} so that: A=(A0A1A2A3){\displaystyle \mathbf {A} ={\begin{pmatrix}A^{0}\\A^{1}\\A^{2}\\A^{3}\end{pmatrix}}} The relation between thecovariantand contravariant coordinates is through theMinkowskimetric tensor(referred to as the metric),ηwhichraises and lowers indicesas follows: Aμ=ημνAν,{\displaystyle A_{\mu }=\eta _{\mu \nu }A^{\nu }\,,} and in various equivalent notations the covariant components are: A=(A0,A1,A2,A3)=A0E0+A1E1+A2E2+A3E3=A0E0+AiEi=AαEα{\displaystyle {\begin{aligned}\mathbf {A} &=(A_{0},\,A_{1},\,A_{2},\,A_{3})\\&=A_{0}\mathbf {E} ^{0}+A_{1}\mathbf {E} ^{1}+A_{2}\mathbf {E} ^{2}+A_{3}\mathbf {E} ^{3}\\&=A_{0}\mathbf {E} ^{0}+A_{i}\mathbf {E} ^{i}\\&=A_{\alpha }\mathbf {E} ^{\alpha }\\\end{aligned}}} where the lowered index indicates it to becovariant. Often the metric is diagonal, as is the case fororthogonal coordinates(seeline element), but not in generalcurvilinear coordinates. 
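In the Minkowski case, raising or lowering an index is just a matrix product with the metric. A short NumPy sketch, assuming the diagonal (+ − − −) form of η discussed later in the article, makes the resulting sign pattern explicit; the sample components are arbitrary.

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])   # Minkowski metric, (+ - - -) convention

A_up = np.array([5.0, 1.0, 2.0, 3.0])    # contravariant components A^mu (arbitrary)
A_down = eta @ A_up                       # covariant components A_mu = eta_{mu nu} A^nu
print(A_down)                             # [ 5. -1. -2. -3.]  (spatial signs flip)
```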
The bases can be represented byrow vectors: E0=(1000),E1=(0100),E2=(0010),E3=(0001),{\displaystyle {\begin{aligned}\mathbf {E} ^{0}&={\begin{pmatrix}1&0&0&0\end{pmatrix}}\,,&\mathbf {E} ^{1}&={\begin{pmatrix}0&1&0&0\end{pmatrix}}\,,\\[1ex]\mathbf {E} ^{2}&={\begin{pmatrix}0&0&1&0\end{pmatrix}}\,,&\mathbf {E} ^{3}&={\begin{pmatrix}0&0&0&1\end{pmatrix}},\end{aligned}}}so that:A=(A0A1A2A3){\displaystyle \mathbf {A} ={\begin{pmatrix}A_{0}&A_{1}&A_{2}&A_{3}\end{pmatrix}}} The motivation for the above conventions are that the inner product is a scalar, see below for details. Given two inertial or rotatedframes of reference, a four-vector is defined as a quantity which transforms according to theLorentz transformationmatrixΛ:A′=ΛA{\displaystyle \mathbf {A} '={\boldsymbol {\Lambda }}\mathbf {A} } In index notation, the contravariant and covariant components transform according to, respectively:A′μ=ΛμνAν,A′μ=ΛμνAν{\displaystyle {A'}^{\mu }=\Lambda ^{\mu }{}_{\nu }A^{\nu }\,,\quad {A'}_{\mu }=\Lambda _{\mu }{}^{\nu }A_{\nu }}in which the matrixΛhas componentsΛμνin rowμand columnν, and the matrix(Λ−1)Thas componentsΛμνin rowμand columnν. For background on the nature of this transformation definition, seetensor. All four-vectors transform in the same way, and this can be generalized to four-dimensional relativistic tensors; seespecial relativity. For two frames rotated by a fixed angleθabout an axis defined by theunit vector: n^=(n^1,n^2,n^3),{\displaystyle {\hat {\mathbf {n} }}=\left({\hat {n}}_{1},{\hat {n}}_{2},{\hat {n}}_{3}\right)\,,} without any boosts, the matrixΛhas components given by:[4] Λ00=1Λ0i=Λi0=0Λij=(δij−n^in^j)cos⁡θ−εijkn^ksin⁡θ+n^in^j{\displaystyle {\begin{aligned}\Lambda _{00}&=1\\\Lambda _{0i}=\Lambda _{i0}&=0\\\Lambda _{ij}&=\left(\delta _{ij}-{\hat {n}}_{i}{\hat {n}}_{j}\right)\cos \theta -\varepsilon _{ijk}{\hat {n}}_{k}\sin \theta +{\hat {n}}_{i}{\hat {n}}_{j}\end{aligned}}} whereδijis theKronecker delta, andεijkis thethree-dimensionalLevi-Civita symbol. The spacelike components of four-vectors are rotated, while the timelike components remain unchanged. For the case of rotations about thez-axis only, the spacelike part of the Lorentz matrix reduces to therotation matrixabout thez-axis: (A′0A′1A′2A′3)=(10000cos⁡θ−sin⁡θ00sin⁡θcos⁡θ00001)(A0A1A2A3).{\displaystyle {\begin{pmatrix}{A'}^{0}\\{A'}^{1}\\{A'}^{2}\\{A'}^{3}\end{pmatrix}}={\begin{pmatrix}1&0&0&0\\0&\cos \theta &-\sin \theta &0\\0&\sin \theta &\cos \theta &0\\0&0&0&1\\\end{pmatrix}}{\begin{pmatrix}A^{0}\\A^{1}\\A^{2}\\A^{3}\end{pmatrix}}\ .} For two frames moving at constant relative three-velocityv(not four-velocity,see below), it is convenient to denote and define the relative velocity in units ofcby: β=(β1,β2,β3)=1c(v1,v2,v3)=1cv.{\displaystyle {\boldsymbol {\beta }}=(\beta _{1},\,\beta _{2},\,\beta _{3})={\frac {1}{c}}(v_{1},\,v_{2},\,v_{3})={\frac {1}{c}}\mathbf {v} \,.} Then without rotations, the matrixΛhas components given by:[5]Λ00=γ,Λ0i=Λi0=−γβi,Λij=Λji=(γ−1)βiβjβ2+δij=(γ−1)vivjv2+δij,{\displaystyle {\begin{aligned}\Lambda _{00}&=\gamma ,\\\Lambda _{0i}=\Lambda _{i0}&=-\gamma \beta _{i},\\\Lambda _{ij}=\Lambda _{ji}&=(\gamma -1){\frac {\beta _{i}\beta _{j}}{\beta ^{2}}}+\delta _{ij}=(\gamma -1){\frac {v_{i}v_{j}}{v^{2}}}+\delta _{ij},\\\end{aligned}}}where theLorentz factoris defined by:γ=11−β⋅β,{\displaystyle \gamma ={\frac {1}{\sqrt {1-{\boldsymbol {\beta }}\cdot {\boldsymbol {\beta }}}}}\,,}andδijis theKronecker delta. 
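A minimal NumPy sketch, assuming the (+ − − −) metric, that builds Λ from the boost components just given for an arbitrary velocity β and confirms that it preserves the Minkowski metric, ΛᵀηΛ = η, which is what makes it a Lorentz transformation.

```python
import numpy as np

def boost(beta):
    """Pure boost matrix for 3-velocity beta = v/c, from the component formula above."""
    beta = np.asarray(beta, dtype=float)
    b2 = beta @ beta                      # beta squared, must be < 1
    gamma = 1.0 / np.sqrt(1.0 - b2)
    L = np.empty((4, 4))
    L[0, 0] = gamma
    L[0, 1:] = L[1:, 0] = -gamma * beta
    L[1:, 1:] = (gamma - 1.0) * np.outer(beta, beta) / b2 + np.eye(3)
    return L

eta = np.diag([1.0, -1.0, -1.0, -1.0])    # Minkowski metric, (+ - - -) signature

L = boost([0.3, -0.4, 0.5])               # an arbitrary subluminal velocity
assert np.allclose(L.T @ eta @ L, eta)    # Lorentz transformations preserve eta
```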
Contrary to the case for pure rotations, the spacelike and timelike components are mixed together under boosts. For the case of a boost in thex-direction only, the matrix reduces to;[6][7] (A′0A′1A′2A′3)=(cosh⁡ϕ−sinh⁡ϕ00−sinh⁡ϕcosh⁡ϕ0000100001)(A0A1A2A3){\displaystyle {\begin{pmatrix}A'^{0}\\A'^{1}\\A'^{2}\\A'^{3}\end{pmatrix}}={\begin{pmatrix}\cosh \phi &-\sinh \phi &0&0\\-\sinh \phi &\cosh \phi &0&0\\0&0&1&0\\0&0&0&1\\\end{pmatrix}}{\begin{pmatrix}A^{0}\\A^{1}\\A^{2}\\A^{3}\end{pmatrix}}} Where therapidityϕexpression has been used, written in terms of thehyperbolic functions:γ=cosh⁡ϕ{\displaystyle \gamma =\cosh \phi } This Lorentz matrix illustrates the boost to be ahyperbolic rotationin four dimensional spacetime, analogous to the circular rotation above in three-dimensional space. Four-vectors have the samelinearity propertiesasEuclidean vectorsinthree dimensions. They can be added in the usual entrywise way:A+B=(A0,A1,A2,A3)+(B0,B1,B2,B3)=(A0+B0,A1+B1,A2+B2,A3+B3){\displaystyle {\begin{aligned}\mathbf {A} +\mathbf {B} &=\left(A^{0},A^{1},A^{2},A^{3}\right)+\left(B^{0},B^{1},B^{2},B^{3}\right)\\&=\left(A^{0}+B^{0},A^{1}+B^{1},A^{2}+B^{2},A^{3}+B^{3}\right)\end{aligned}}}and similarlyscalar multiplicationby ascalarλis defined entrywise by:λA=λ(A0,A1,A2,A3)=(λA0,λA1,λA2,λA3){\displaystyle \lambda \mathbf {A} =\lambda \left(A^{0},A^{1},A^{2},A^{3}\right)=\left(\lambda A^{0},\lambda A^{1},\lambda A^{2},\lambda A^{3}\right)} Then subtraction is the inverse operation of addition, defined entrywise by:A+(−1)B=(A0,A1,A2,A3)+(−1)(B0,B1,B2,B3)=(A0−B0,A1−B1,A2−B2,A3−B3){\displaystyle {\begin{aligned}\mathbf {A} +(-1)\mathbf {B} &=\left(A^{0},A^{1},A^{2},A^{3}\right)+(-1)\left(B^{0},B^{1},B^{2},B^{3}\right)\\&=\left(A^{0}-B^{0},A^{1}-B^{1},A^{2}-B^{2},A^{3}-B^{3}\right)\end{aligned}}} Applying theMinkowski tensorημνto two four-vectorsAandB, writing the result indot productnotation, we have, usingEinstein notation:A⋅B=AμBνEμ⋅Eν=AμημνBν{\displaystyle \mathbf {A} \cdot \mathbf {B} =A^{\mu }B^{\nu }\mathbf {E} _{\mu }\cdot \mathbf {E} _{\nu }=A^{\mu }\eta _{\mu \nu }B^{\nu }} in special relativity. The dot product of the basis vectors is the Minkowski metric, as opposed to the Kronecker delta as in Euclidean space. It is convenient to rewrite the definition inmatrixform:A⋅B=(A0A1A2A3)(η00η01η02η03η10η11η12η13η20η21η22η23η30η31η32η33)(B0B1B2B3){\displaystyle \mathbf {A\cdot B} ={\begin{pmatrix}A^{0}&A^{1}&A^{2}&A^{3}\end{pmatrix}}{\begin{pmatrix}\eta _{00}&\eta _{01}&\eta _{02}&\eta _{03}\\\eta _{10}&\eta _{11}&\eta _{12}&\eta _{13}\\\eta _{20}&\eta _{21}&\eta _{22}&\eta _{23}\\\eta _{30}&\eta _{31}&\eta _{32}&\eta _{33}\end{pmatrix}}{\begin{pmatrix}B^{0}\\B^{1}\\B^{2}\\B^{3}\end{pmatrix}}}in which caseημνabove is the entry in rowμand columnνof the Minkowski metric as a square matrix. The Minkowski metric is not aEuclidean metric, because it is indefinite (seemetric signature). A number of other expressions can be used because the metric tensor can raise and lower the components ofAorB. 
For contra/co-variant components ofAand co/contra-variant components ofB, we have:A⋅B=AμημνBν=AνBν=AμBμ{\displaystyle \mathbf {A} \cdot \mathbf {B} =A^{\mu }\eta _{\mu \nu }B^{\nu }=A_{\nu }B^{\nu }=A^{\mu }B_{\mu }}so in the matrix notation:A⋅B=(A0A1A2A3)(B0B1B2B3)=(B0B1B2B3)(A0A1A2A3){\displaystyle {\begin{aligned}\mathbf {A} \cdot \mathbf {B} &={\begin{pmatrix}A_{0}&A_{1}&A_{2}&A_{3}\end{pmatrix}}{\begin{pmatrix}B^{0}\\B^{1}\\B^{2}\\B^{3}\end{pmatrix}}\\[1ex]&={\begin{pmatrix}B_{0}&B_{1}&B_{2}&B_{3}\end{pmatrix}}{\begin{pmatrix}A^{0}\\A^{1}\\A^{2}\\A^{3}\end{pmatrix}}\end{aligned}}}while forAandBeach in covariant components:A⋅B=AμημνBν{\displaystyle \mathbf {A} \cdot \mathbf {B} =A_{\mu }\eta ^{\mu \nu }B_{\nu }}with a similar matrix expression to the above. Applying the Minkowski tensor to a four-vectorAwith itself we get:A⋅A=AμημνAν{\displaystyle \mathbf {A\cdot A} =A^{\mu }\eta _{\mu \nu }A^{\nu }}which, depending on the case, may be considered the square, or its negative, of the length of the vector. Following are two common choices for the metric tensor in thestandard basis(essentially Cartesian coordinates). If orthogonal coordinates are used, there would be scale factors along the diagonal part of the spacelike part of the metric, while for general curvilinear coordinates the entire spacelike part of the metric would have components dependent on the curvilinear basis used. The (+−−−)metric signatureis sometimes called the "mostly minus" convention, or the "west coast" convention. In the (+−−−)metric signature, evaluating thesummation over indicesgives:A⋅B=A0B0−A1B1−A2B2−A3B3{\displaystyle \mathbf {A} \cdot \mathbf {B} =A^{0}B^{0}-A^{1}B^{1}-A^{2}B^{2}-A^{3}B^{3}}while in matrix form:A⋅B=(A0A1A2A3)(10000−10000−10000−1)(B0B1B2B3){\displaystyle \mathbf {A\cdot B} ={\begin{pmatrix}A^{0}&A^{1}&A^{2}&A^{3}\end{pmatrix}}{\begin{pmatrix}1&0&0&0\\0&-1&0&0\\0&0&-1&0\\0&0&0&-1\end{pmatrix}}{\begin{pmatrix}B^{0}\\B^{1}\\B^{2}\\B^{3}\end{pmatrix}}} It is a recurring theme in special relativity to take the expressionA⋅B=A0B0−A1B1−A2B2−A3B3=C{\displaystyle \mathbf {A} \cdot \mathbf {B} =A^{0}B^{0}-A^{1}B^{1}-A^{2}B^{2}-A^{3}B^{3}=C}in onereference frame, whereCis the value of the inner product in this frame, and:A′⋅B′=A′0B′0−A′1B′1−A′2B′2−A′3B′3=C′{\displaystyle \mathbf {A} '\cdot \mathbf {B} '={A'}^{0}{B'}^{0}-{A'}^{1}{B'}^{1}-{A'}^{2}{B'}^{2}-{A'}^{3}{B'}^{3}=C'}in another frame, in whichC′ is the value of the inner product in this frame. Then since the inner product is an invariant, these must be equal:A⋅B=A′⋅B′{\displaystyle \mathbf {A} \cdot \mathbf {B} =\mathbf {A} '\cdot \mathbf {B} '}that is:C=A0B0−A1B1−A2B2−A3B3=A′0B′0−A′1B′1−A′2B′2−A′3B′3{\displaystyle {\begin{aligned}C&=A^{0}B^{0}-A^{1}B^{1}-A^{2}B^{2}-A^{3}B^{3}\\[2pt]&={A'}^{0}{B'}^{0}-{A'}^{1}{B'}^{1}-{A'}^{2}{B'}^{2}-{A'}^{3}{B'}^{3}\end{aligned}}} Considering that physical quantities in relativity are four-vectors, this equation has the appearance of a "conservation law", but there is no "conservation" involved. The primary significance of the Minkowski inner product is that for any two four-vectors, its value isinvariantfor all observers; a change of coordinates does not result in a change in value of the inner product. The components of the four-vectors change from one frame to another;AandA′ are connected by aLorentz transformation, and similarly forBandB′, although the inner products are the same in all frames. 
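A small numerical illustration of this frame independence, assuming the (+ − − −) signature and the rapidity form of the x-boost given above: boosting two arbitrary four-vectors leaves their Minkowski inner product unchanged.

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])        # (+ - - -) metric

phi = 0.8                                      # arbitrary rapidity
ch, sh = np.cosh(phi), np.sinh(phi)
L = np.array([[ ch, -sh, 0.0, 0.0],            # boost along x, as in the text
              [-sh,  ch, 0.0, 0.0],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])

A = np.array([2.0, 0.3, -1.2, 0.7])            # arbitrary four-vectors
B = np.array([1.5, -0.4, 0.8, 2.1])

inner = lambda u, v: u @ eta @ v               # Minkowski inner product
assert np.isclose(inner(A, B), inner(L @ A, L @ B))
```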
Nevertheless, this type of expression is exploited in relativistic calculations on a par with conservation laws, since the magnitudes of components can be determined without explicitly performing any Lorentz transformations. A particular example is with energy and momentum in theenergy-momentum relationderived from thefour-momentumvector (see also below). In this signature we have:A⋅A=(A0)2−(A1)2−(A2)2−(A3)2{\displaystyle \mathbf {A\cdot A} =\left(A^{0}\right)^{2}-\left(A^{1}\right)^{2}-\left(A^{2}\right)^{2}-\left(A^{3}\right)^{2}} With the signature (+−−−), four-vectors may be classified as eitherspacelikeifA⋅A<0{\displaystyle \mathbf {A\cdot A} <0},timelikeifA⋅A>0{\displaystyle \mathbf {A\cdot A} >0}, andnull vectorsifA⋅A=0{\displaystyle \mathbf {A\cdot A} =0}. The (-+++)metric signatureis sometimes called the "east coast" convention. Some authors defineηwith the opposite sign, in which case we have the (−+++) metric signature. Evaluating the summation with this signature: A⋅B=−A0B0+A1B1+A2B2+A3B3{\displaystyle \mathbf {A\cdot B} =-A^{0}B^{0}+A^{1}B^{1}+A^{2}B^{2}+A^{3}B^{3}} while the matrix form is: A⋅B=(A0A1A2A3)(−1000010000100001)(B0B1B2B3){\displaystyle \mathbf {A\cdot B} =\left({\begin{matrix}A^{0}&A^{1}&A^{2}&A^{3}\end{matrix}}\right)\left({\begin{matrix}-1&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&0&1\end{matrix}}\right)\left({\begin{matrix}B^{0}\\B^{1}\\B^{2}\\B^{3}\end{matrix}}\right)} Note that in this case, in one frame: A⋅B=−A0B0+A1B1+A2B2+A3B3=−C{\displaystyle \mathbf {A} \cdot \mathbf {B} =-A^{0}B^{0}+A^{1}B^{1}+A^{2}B^{2}+A^{3}B^{3}=-C} while in another: A′⋅B′=−A′0B′0+A′1B′1+A′2B′2+A′3B′3=−C′{\displaystyle \mathbf {A} '\cdot \mathbf {B} '=-{A'}^{0}{B'}^{0}+{A'}^{1}{B'}^{1}+{A'}^{2}{B'}^{2}+{A'}^{3}{B'}^{3}=-C'} so that: −C=−A0B0+A1B1+A2B2+A3B3=−A′0B′0+A′1B′1+A′2B′2+A′3B′3{\displaystyle {\begin{aligned}-C&=-A^{0}B^{0}+A^{1}B^{1}+A^{2}B^{2}+A^{3}B^{3}\\[2pt]&=-{A'}^{0}{B'}^{0}+{A'}^{1}{B'}^{1}+{A'}^{2}{B'}^{2}+{A'}^{3}{B'}^{3}\end{aligned}}} which is equivalent to the above expression forCin terms ofAandB. Either convention will work. With the Minkowski metric defined in the two ways above, the only difference between covariant and contravariant four-vector components are signs, therefore the signs depend on which sign convention is used. We have: A⋅A=−(A0)2+(A1)2+(A2)2+(A3)2{\displaystyle \mathbf {A\cdot A} =-\left(A^{0}\right)^{2}+\left(A^{1}\right)^{2}+\left(A^{2}\right)^{2}+\left(A^{3}\right)^{2}} With the signature (−+++), four-vectors may be classified as eitherspacelikeifA⋅A>0{\displaystyle \mathbf {A\cdot A} >0},timelikeifA⋅A<0{\displaystyle \mathbf {A\cdot A} <0}, andnullifA⋅A=0{\displaystyle \mathbf {A\cdot A} =0}. Applying the Minkowski tensor is often expressed as the effect of thedual vectorof one vector on the other: A⋅B=A∗(B)=AνBν.{\displaystyle \mathbf {A\cdot B} =A^{*}(\mathbf {B} )=A{_{\nu }}B^{\nu }.} Here theAνs are the components of the dual vectorA* ofAin thedual basisand called thecovariantcoordinates ofA, while the originalAνcomponents are called thecontravariantcoordinates. In special relativity (but not general relativity), thederivativeof a four-vector with respect to a scalarλ(invariant) is itself a four-vector. 
It is also useful to take thedifferentialof the four-vector,dAand divide it by the differential of the scalar,dλ: dAdifferential=dAdλderivativedλdifferential{\displaystyle {\underset {\text{differential}}{d\mathbf {A} }}={\underset {\text{derivative}}{\frac {d\mathbf {A} }{d\lambda }}}{\underset {\text{differential}}{d\lambda }}} where the contravariant components are: dA=(dA0,dA1,dA2,dA3){\displaystyle d\mathbf {A} =\left(dA^{0},dA^{1},dA^{2},dA^{3}\right)} while the covariant components are: dA=(dA0,dA1,dA2,dA3){\displaystyle d\mathbf {A} =\left(dA_{0},dA_{1},dA_{2},dA_{3}\right)} In relativistic mechanics, one often takes the differential of a four-vector and divides by the differential inproper time(see below). A point inMinkowski spaceis a time and spatial position, called an "event", or sometimes theposition four-vectororfour-positionor4-position, described in some reference frame by a set of four coordinates: R=(ct,r){\displaystyle \mathbf {R} =\left(ct,\mathbf {r} \right)} whereris thethree-dimensional spaceposition vector. Ifris a function of coordinate timetin the same frame, i.e.r=r(t), this corresponds to a sequence of events astvaries. The definitionR0=ctensures that all the coordinates have the samedimension(oflength) and units (in theSI, meters).[8][9][10][11]These coordinates are the components of theposition four-vectorfor the event. Thedisplacement four-vectoris defined to be an "arrow" linking two events: ΔR=(cΔt,Δr){\displaystyle \Delta \mathbf {R} =\left(c\Delta t,\Delta \mathbf {r} \right)} For thedifferentialfour-position on a world line we have, usinga norm notation: ‖dR‖2=dR⋅dR=dRμdRμ=c2dτ2=ds2,{\displaystyle \|d\mathbf {R} \|^{2}=\mathbf {dR\cdot dR} =dR^{\mu }dR_{\mu }=c^{2}d\tau ^{2}=ds^{2}\,,} defining the differentialline elementdsand differential proper time increment dτ, but this "norm" is also: ‖dR‖2=(cdt)2−dr⋅dr,{\displaystyle \|d\mathbf {R} \|^{2}=(cdt)^{2}-d\mathbf {r} \cdot d\mathbf {r} \,,} so that: (cdτ)2=(cdt)2−dr⋅dr.{\displaystyle (cd\tau )^{2}=(cdt)^{2}-d\mathbf {r} \cdot d\mathbf {r} \,.} When considering physical phenomena, differential equations arise naturally; however, when considering space andtime derivativesof functions, it is unclear which reference frame these derivatives are taken with respect to. It is agreed that time derivatives are taken with respect to theproper timeτ{\displaystyle \tau }. As proper time is an invariant, this guarantees that the proper-time-derivative of any four-vector is itself a four-vector. It is then important to find a relation between this proper-time-derivative and another time derivative (using thecoordinate timetof an inertial reference frame). This relation is provided by taking the above differential invariant spacetime interval, then dividing by (cdt)2to obtain: (cdτcdt)2=1−(drcdt⋅drcdt)=1−u⋅uc2=1γ(u)2,{\displaystyle \left({\frac {cd\tau }{cdt}}\right)^{2}=1-\left({\frac {d\mathbf {r} }{cdt}}\cdot {\frac {d\mathbf {r} }{cdt}}\right)=1-{\frac {\mathbf {u} \cdot \mathbf {u} }{c^{2}}}={\frac {1}{\gamma (\mathbf {u} )^{2}}}\,,} whereu=dr/dtis the coordinate 3-velocityof an object measured in the same frame as the coordinatesx,y,z, andcoordinate timet, and γ(u)=11−u⋅uc2{\displaystyle \gamma (\mathbf {u} )={\frac {1}{\sqrt {1-{\frac {\mathbf {u} \cdot \mathbf {u} }{c^{2}}}}}}} is theLorentz factor. 
This provides a useful relation between the differentials in coordinate time and proper time: dt=γ(u)dτ.{\displaystyle dt=\gamma (\mathbf {u} )d\tau \,.} This relation can also be found from the time transformation in theLorentz transformations. Important four-vectors in relativity theory can be defined by applying this differentialddτ{\displaystyle {\frac {d}{d\tau }}}. Considering thatpartial derivativesarelinear operators, one can form afour-gradientfrom the partialtime derivative∂/∂tand the spatialgradient∇. Using the standard basis, in index and abbreviated notations, the contravariant components are: ∂=(∂∂x0,−∂∂x1,−∂∂x2,−∂∂x3)=(∂0,−∂1,−∂2,−∂3)=E0∂0−E1∂1−E2∂2−E3∂3=E0∂0−Ei∂i=Eα∂α=(1c∂∂t,−∇)=(∂tc,−∇)=E01c∂∂t−∇{\displaystyle {\begin{aligned}{\boldsymbol {\partial }}&=\left({\frac {\partial }{\partial x_{0}}},\,-{\frac {\partial }{\partial x_{1}}},\,-{\frac {\partial }{\partial x_{2}}},\,-{\frac {\partial }{\partial x_{3}}}\right)\\&=(\partial ^{0},\,-\partial ^{1},\,-\partial ^{2},\,-\partial ^{3})\\&=\mathbf {E} _{0}\partial ^{0}-\mathbf {E} _{1}\partial ^{1}-\mathbf {E} _{2}\partial ^{2}-\mathbf {E} _{3}\partial ^{3}\\&=\mathbf {E} _{0}\partial ^{0}-\mathbf {E} _{i}\partial ^{i}\\&=\mathbf {E} _{\alpha }\partial ^{\alpha }\\&=\left({\frac {1}{c}}{\frac {\partial }{\partial t}},\,-\nabla \right)\\&=\left({\frac {\partial _{t}}{c}},-\nabla \right)\\&=\mathbf {E} _{0}{\frac {1}{c}}{\frac {\partial }{\partial t}}-\nabla \\\end{aligned}}} Note the basis vectors are placed in front of the components, to prevent confusion between taking the derivative of the basis vector, or simply indicating the partial derivative is a component of this four-vector. The covariant components are: ∂=(∂∂x0,∂∂x1,∂∂x2,∂∂x3)=(∂0,∂1,∂2,∂3)=E0∂0+E1∂1+E2∂2+E3∂3=E0∂0+Ei∂i=Eα∂α=(1c∂∂t,∇)=(∂tc,∇)=E01c∂∂t+∇{\displaystyle {\begin{aligned}{\boldsymbol {\partial }}&=\left({\frac {\partial }{\partial x^{0}}},\,{\frac {\partial }{\partial x^{1}}},\,{\frac {\partial }{\partial x^{2}}},\,{\frac {\partial }{\partial x^{3}}}\right)\\&=(\partial _{0},\,\partial _{1},\,\partial _{2},\,\partial _{3})\\&=\mathbf {E} ^{0}\partial _{0}+\mathbf {E} ^{1}\partial _{1}+\mathbf {E} ^{2}\partial _{2}+\mathbf {E} ^{3}\partial _{3}\\&=\mathbf {E} ^{0}\partial _{0}+\mathbf {E} ^{i}\partial _{i}\\&=\mathbf {E} ^{\alpha }\partial _{\alpha }\\&=\left({\frac {1}{c}}{\frac {\partial }{\partial t}},\,\nabla \right)\\&=\left({\frac {\partial _{t}}{c}},\nabla \right)\\&=\mathbf {E} ^{0}{\frac {1}{c}}{\frac {\partial }{\partial t}}+\nabla \\\end{aligned}}} Since this is an operator, it doesn't have a "length", but evaluating the inner product of the operator with itself gives another operator: ∂μ∂μ=1c2∂2∂t2−∇2=∂t2c2−∇2{\displaystyle \partial ^{\mu }\partial _{\mu }={\frac {1}{c^{2}}}{\frac {\partial ^{2}}{\partial t^{2}}}-\nabla ^{2}={\frac {{\partial _{t}}^{2}}{c^{2}}}-\nabla ^{2}} called theD'Alembert operator. Thefour-velocityof a particle is defined by: U=dXdτ=dXdtdtdτ=γ(u)(c,u),{\displaystyle \mathbf {U} ={\frac {d\mathbf {X} }{d\tau }}={\frac {d\mathbf {X} }{dt}}{\frac {dt}{d\tau }}=\gamma (\mathbf {u} )\left(c,\mathbf {u} \right),} Geometrically,Uis a normalized vector tangent to theworld lineof the particle. 
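A quick numerical check of the four-velocity just defined, assuming the (+ − − −) signature and an arbitrary 3-velocity: its Minkowski norm comes out to c², as derived symbolically in the lines that follow.

```python
import numpy as np

c = 299_792_458.0                       # speed of light, m/s
eta = np.diag([1.0, -1.0, -1.0, -1.0])  # (+ - - -) metric

u = np.array([0.3, -0.5, 0.6]) * c      # arbitrary 3-velocity with |u| < c
gamma = 1.0 / np.sqrt(1.0 - u @ u / c**2)

U = gamma * np.concatenate(([c], u))    # four-velocity, U = gamma * (c, u)
assert np.isclose(U @ eta @ U, c**2)    # the norm is the same for every u
```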
Using the differential of the four-position, the magnitude of the four-velocity can be obtained: ‖U‖2=UμUμ=dXμdτdXμdτ=dXμdXμdτ2=c2,{\displaystyle \|\mathbf {U} \|^{2}=U^{\mu }U_{\mu }={\frac {dX^{\mu }}{d\tau }}{\frac {dX_{\mu }}{d\tau }}={\frac {dX^{\mu }dX_{\mu }}{d\tau ^{2}}}=c^{2}\,,} in short, the magnitude of the four-velocity for any object is always a fixed constant: ‖U‖2=c2{\displaystyle \|\mathbf {U} \|^{2}=c^{2}} The norm is also: ‖U‖2=γ(u)2(c2−u⋅u),{\displaystyle \|\mathbf {U} \|^{2}={\gamma (\mathbf {u} )}^{2}\left(c^{2}-\mathbf {u} \cdot \mathbf {u} \right)\,,} so that: c2=γ(u)2(c2−u⋅u),{\displaystyle c^{2}={\gamma (\mathbf {u} )}^{2}\left(c^{2}-\mathbf {u} \cdot \mathbf {u} \right)\,,} which reduces to the definition of theLorentz factor. Units of four-velocity are m/s inSIand 1 in thegeometrized unit system. Four-velocity is a contravariant vector. Thefour-accelerationis given by: A=dUdτ=γ(u)(dγ(u)dtc,dγ(u)dtu+γ(u)a).{\displaystyle \mathbf {A} ={\frac {d\mathbf {U} }{d\tau }}=\gamma (\mathbf {u} )\left({\frac {d{\gamma }(\mathbf {u} )}{dt}}c,{\frac {d{\gamma }(\mathbf {u} )}{dt}}\mathbf {u} +\gamma (\mathbf {u} )\mathbf {a} \right).} wherea=du/dtis the coordinate 3-acceleration. Since the magnitude ofUis a constant, the four acceleration is orthogonal to the four velocity, i.e. the Minkowski inner product of the four-acceleration and the four-velocity is zero: A⋅U=AμUμ=dUμdτUμ=12ddτ(UμUμ)=0{\displaystyle \mathbf {A} \cdot \mathbf {U} =A^{\mu }U_{\mu }={\frac {dU^{\mu }}{d\tau }}U_{\mu }={\frac {1}{2}}\,{\frac {d}{d\tau }}\left(U^{\mu }U_{\mu }\right)=0\,} which is true for all world lines. The geometric meaning of four-acceleration is thecurvature vectorof the world line in Minkowski space. For a massive particle ofrest mass(orinvariant mass)m0, thefour-momentumis given by: P=m0U=m0γ(u)(c,u)=(Ec,p){\displaystyle \mathbf {P} =m_{0}\mathbf {U} =m_{0}\gamma (\mathbf {u} )(c,\mathbf {u} )=\left({\frac {E}{c}},\mathbf {p} \right)} where the total energy of the moving particle is: E=γ(u)m0c2{\displaystyle E=\gamma (\mathbf {u} )m_{0}c^{2}} and the totalrelativistic momentumis: p=γ(u)m0u{\displaystyle \mathbf {p} =\gamma (\mathbf {u} )m_{0}\mathbf {u} } Taking the inner product of the four-momentum with itself: ‖P‖2=PμPμ=m02UμUμ=m02c2{\displaystyle \|\mathbf {P} \|^{2}=P^{\mu }P_{\mu }=m_{0}^{2}U^{\mu }U_{\mu }=m_{0}^{2}c^{2}} and also: ‖P‖2=E2c2−p⋅p{\displaystyle \|\mathbf {P} \|^{2}={\frac {E^{2}}{c^{2}}}-\mathbf {p} \cdot \mathbf {p} } which leads to theenergy–momentum relation: E2=c2p⋅p+(m0c2)2.{\displaystyle E^{2}=c^{2}\mathbf {p} \cdot \mathbf {p} +\left(m_{0}c^{2}\right)^{2}\,.} This last relation is useful inrelativistic mechanics, essential inrelativistic quantum mechanicsandrelativistic quantum field theory, all with applications toparticle physics. Thefour-forceacting on a particle is defined analogously to the 3-force as the time derivative of 3-momentum inNewton's second law: F=dPdτ=γ(u)(1cdEdt,dpdt)=γ(u)(Pc,f){\displaystyle \mathbf {F} ={\frac {d\mathbf {P} }{d\tau }}=\gamma (\mathbf {u} )\left({\frac {1}{c}}{\frac {dE}{dt}},{\frac {d\mathbf {p} }{dt}}\right)=\gamma (\mathbf {u} )\left({\frac {P}{c}},\mathbf {f} \right)} wherePis thepowertransferred to move the particle, andfis the 3-force acting on the particle. 
For a particle of constant invariant massm0, this is equivalent to F=m0A=m0γ(u)(dγ(u)dtc,(dγ(u)dtu+γ(u)a)){\displaystyle \mathbf {F} =m_{0}\mathbf {A} =m_{0}\gamma (\mathbf {u} )\left({\frac {d{\gamma }(\mathbf {u} )}{dt}}c,\left({\frac {d{\gamma }(\mathbf {u} )}{dt}}\mathbf {u} +\gamma (\mathbf {u} )\mathbf {a} \right)\right)} An invariant derived from the four-force is: F⋅U=FμUμ=m0AμUμ=0{\displaystyle \mathbf {F} \cdot \mathbf {U} =F^{\mu }U_{\mu }=m_{0}A^{\mu }U_{\mu }=0} from the above result. The four-heat flux vector field, is essentially similar to the 3dheat fluxvector fieldq, in the local frame of the fluid:[12] Q=−k∂T=−k(1c∂T∂t,∇T){\displaystyle \mathbf {Q} =-k{\boldsymbol {\partial }}T=-k\left({\frac {1}{c}}{\frac {\partial T}{\partial t}},\nabla T\right)} whereTisabsolute temperatureandkisthermal conductivity. The flux of baryons is:[13]S=nU{\displaystyle \mathbf {S} =n\mathbf {U} }wherenis thenumber densityofbaryonsin the localrest frameof the baryon fluid (positive values for baryons, negative forantibaryons), andUthefour-velocityfield (of the fluid) as above. The four-entropyvector is defined by:[14]s=sS+QT{\displaystyle \mathbf {s} =s\mathbf {S} +{\frac {\mathbf {Q} }{T}}}wheresis the entropy per baryon, andTtheabsolute temperature, in the local rest frame of the fluid.[15] Examples of four-vectors inelectromagnetisminclude the following. The electromagneticfour-current(or more correctly a four-current density)[16]is defined byJ=(ρc,j){\displaystyle \mathbf {J} =\left(\rho c,\mathbf {j} \right)}formed from thecurrent densityjandcharge densityρ. Theelectromagnetic four-potential(or more correctly a four-EM vector potential) defined byA=(ϕc,a){\displaystyle \mathbf {A} =\left({\frac {\phi }{c}},\mathbf {a} \right)}formed from thevector potentialaand the scalar potentialϕ. The four-potential is not uniquely determined, because it depends on a choice ofgauge. In thewave equationfor the electromagnetic field: A photonicplane wavecan be described by thefour-frequency, defined as N=ν(1,n^){\displaystyle \mathbf {N} =\nu \left(1,{\hat {\mathbf {n} }}\right)} whereνis the frequency of the wave andn^{\displaystyle {\hat {\mathbf {n} }}}is aunit vectorin the travel direction of the wave. Now: ‖N‖=NμNμ=ν2(1−n^⋅n^)=0{\displaystyle \|\mathbf {N} \|=N^{\mu }N_{\mu }=\nu ^{2}\left(1-{\hat {\mathbf {n} }}\cdot {\hat {\mathbf {n} }}\right)=0} so the four-frequency of a photon is always a null vector. The quantities reciprocal to timetand spacerare theangular frequencyωandangular wave vectork, respectively. 
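The statement that the four-frequency of a photon is always a null vector can be verified numerically. A minimal NumPy sketch, assuming an arbitrary frequency value (in unspecified units) and the (+−−−) metric used above:

import numpy as np

nu = 2.5                                 # frequency in arbitrary units (illustrative)
n_hat = np.array([1.0, 2.0, 2.0])
n_hat = n_hat / np.linalg.norm(n_hat)    # unit vector in the propagation direction

N = nu * np.concatenate(([1.0], n_hat))  # four-frequency N = nu (1, n_hat)

eta = np.diag([1.0, -1.0, -1.0, -1.0])
print(np.isclose(N @ eta @ N, 0.0))      # the photon four-frequency is null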
They form the components of thefour-wavevectororwave four-vector: K=(ωc,k→)=(ωc,ωvpn^).{\displaystyle \mathbf {K} =\left({\frac {\omega }{c}},{\vec {\mathbf {k} }}\right)=\left({\frac {\omega }{c}},{\frac {\omega }{v_{p}}}{\hat {\mathbf {n} }}\right)\,.} The wave four-vector hascoherent derived unitofreciprocal metersin the SI.[17] A wave packet of nearlymonochromaticlight can be described by: K=2πcN=2πcν(1,n^)=ωc(1,n^).{\displaystyle \mathbf {K} ={\frac {2\pi }{c}}\mathbf {N} ={\frac {2\pi }{c}}\nu \left(1,{\hat {\mathbf {n} }}\right)={\frac {\omega }{c}}\left(1,{\hat {\mathbf {n} }}\right)~.} The de Broglie relations then showed that four-wavevector applied tomatter wavesas well as to light waves:P=ℏK=(Ec,p→)=ℏ(ωc,k→).{\displaystyle \mathbf {P} =\hbar \mathbf {K} =\left({\frac {E}{c}},{\vec {p}}\right)=\hbar \left({\frac {\omega }{c}},{\vec {k}}\right)~.}yieldingE=ℏω{\displaystyle E=\hbar \omega }andp→=ℏk→{\displaystyle {\vec {p}}=\hbar {\vec {k}}}, whereħis thePlanck constantdivided by2π. The square of the norm is:‖K‖2=KμKμ=(ωc)2−k⋅k,{\displaystyle \|\mathbf {K} \|^{2}=K^{\mu }K_{\mu }=\left({\frac {\omega }{c}}\right)^{2}-\mathbf {k} \cdot \mathbf {k} \,,}and by the de Broglie relation:‖K‖2=1ℏ2‖P‖2=(m0cℏ)2,{\displaystyle \|\mathbf {K} \|^{2}={\frac {1}{\hbar ^{2}}}\|\mathbf {P} \|^{2}=\left({\frac {m_{0}c}{\hbar }}\right)^{2}\,,}we have the matter wave analogue of the energy–momentum relation:(ωc)2−k⋅k=(m0cℏ)2.{\displaystyle \left({\frac {\omega }{c}}\right)^{2}-\mathbf {k} \cdot \mathbf {k} =\left({\frac {m_{0}c}{\hbar }}\right)^{2}~.} Note that for massless particles, in which casem0= 0, we have:(ωc)2=k⋅k,{\displaystyle \left({\frac {\omega }{c}}\right)^{2}=\mathbf {k} \cdot \mathbf {k} \,,}or‖k‖ =ω/c. Note this is consistent with the above case; for photons with a 3-wavevector of modulusω / c,in the direction of wave propagation defined by the unit vectorn^.{\displaystyle \ {\hat {\mathbf {n} }}~.} Inquantum mechanics, the four-probability currentor probability four-current is analogous to theelectromagnetic four-current:[18]J=(ρc,j){\displaystyle \mathbf {J} =(\rho c,\mathbf {j} )}whereρis theprobability density functioncorresponding to the time component, andjis theprobability currentvector. In non-relativistic quantum mechanics, this current is always well defined because the expressions for density and current are positive definite and can admit a probability interpretation. Inrelativistic quantum mechanicsandquantum field theory, it is not always possible to find a current, particularly when interactions are involved. Replacing the energy by theenergy operatorand the momentum by themomentum operatorin the four-momentum, one obtains thefour-momentum operator, used inrelativistic wave equations. Thefour-spinof a particle is defined in the rest frame of a particle to beS=(0,s){\displaystyle \mathbf {S} =(0,\mathbf {s} )}wheresis thespinpseudovector. In quantum mechanics, not all three components of this vector are simultaneously measurable, only one component is. The timelike component is zero in the particle's rest frame, but not in any other frame. This component can be found from an appropriate Lorentz transformation. The norm squared is the (negative of the) magnitude squared of the spin, and according to quantum mechanics we have‖S‖2=−|s|2=−ℏ2s(s+1){\displaystyle \|\mathbf {S} \|^{2}=-|\mathbf {s} |^{2}=-\hbar ^{2}s(s+1)} This value is observable and quantized, withsthespin quantum number(not the magnitude of the spin vector). 
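The matter-wave analogue of the energy–momentum relation can also be checked numerically. The sketch below (not from the article; the particle mass is roughly a neutron mass and is purely illustrative) builds ω and k from the de Broglie relations and confirms (ω/c)² − k⋅k = (m0c/ħ)².

import numpy as np

c    = 299_792_458.0                   # speed of light, m/s
hbar = 1.054_571_817e-34               # reduced Planck constant, J s
m0   = 1.674e-27                       # roughly a neutron mass in kg (illustrative)
u    = np.array([0.0, 0.2 * c, 0.0])
gamma = 1.0 / np.sqrt(1.0 - u @ u / c**2)

E = gamma * m0 * c**2
p = gamma * m0 * u
omega = E / hbar                       # de Broglie: E = hbar * omega
k     = p / hbar                       # de Broglie: p = hbar * k

lhs = (omega / c)**2 - k @ k
rhs = (m0 * c / hbar)**2
print(np.isclose(lhs, rhs))            # matter-wave form of the energy-momentum relation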
A four-vectorAcan also be defined in using thePauli matricesas abasis, again in various equivalent notations:[19]A=(A0,A1,A2,A3)=A0σ0+A1σ1+A2σ2+A3σ3=A0σ0+Aiσi=Aασα{\displaystyle {\begin{aligned}\mathbf {A} &=\left(A^{0},\,A^{1},\,A^{2},\,A^{3}\right)\\&=A^{0}{\boldsymbol {\sigma }}_{0}+A^{1}{\boldsymbol {\sigma }}_{1}+A^{2}{\boldsymbol {\sigma }}_{2}+A^{3}{\boldsymbol {\sigma }}_{3}\\&=A^{0}{\boldsymbol {\sigma }}_{0}+A^{i}{\boldsymbol {\sigma }}_{i}\\&=A^{\alpha }{\boldsymbol {\sigma }}_{\alpha }\\\end{aligned}}}or explicitly:A=A0(1001)+A1(0110)+A2(0−ii0)+A3(100−1)=(A0+A3A1−iA2A1+iA2A0−A3){\displaystyle {\begin{aligned}\mathbf {A} &=A^{0}{\begin{pmatrix}1&0\\0&1\end{pmatrix}}+A^{1}{\begin{pmatrix}0&1\\1&0\end{pmatrix}}+A^{2}{\begin{pmatrix}0&-i\\i&0\end{pmatrix}}+A^{3}{\begin{pmatrix}1&0\\0&-1\end{pmatrix}}\\&={\begin{pmatrix}A^{0}+A^{3}&A^{1}-iA^{2}\\A^{1}+iA^{2}&A^{0}-A^{3}\end{pmatrix}}\end{aligned}}}and in this formulation, the four-vector is represented as aHermitian matrix(thematrix transposeandcomplex conjugateof the matrix leaves it unchanged), rather than a real-valued column or row vector. Thedeterminantof the matrix is the modulus of the four-vector, so the determinant is an invariant:|A|=|A0+A3A1−iA2A1+iA2A0−A3|=(A0+A3)(A0−A3)−(A1−iA2)(A1+iA2)=(A0)2−(A1)2−(A2)2−(A3)2{\displaystyle {\begin{aligned}|\mathbf {A} |&={\begin{vmatrix}A^{0}+A^{3}&A^{1}-iA^{2}\\A^{1}+iA^{2}&A^{0}-A^{3}\end{vmatrix}}\\[1ex]&=\left(A^{0}+A^{3}\right)\left(A^{0}-A^{3}\right)-\left(A^{1}-iA^{2}\right)\left(A^{1}+iA^{2}\right)\\[1ex]&=\left(A^{0}\right)^{2}-\left(A^{1}\right)^{2}-\left(A^{2}\right)^{2}-\left(A^{3}\right)^{2}\end{aligned}}} This idea of using the Pauli matrices asbasis vectorsis employed in thealgebra of physical space, an example of aClifford algebra. Inspacetime algebra, another example of Clifford algebra, thegamma matricescan also form abasis. (They are also called the Dirac matrices, owing to their appearance in theDirac equation). There is more than one way to express the gamma matrices, detailed in that main article. TheFeynman slash notationis a shorthand for a four-vectorAcontracted with the gamma matrices:A/=Aαγα=A0γ0+A1γ1+A2γ2+A3γ3{\displaystyle \mathbf {A} \!\!\!\!/=A_{\alpha }\gamma ^{\alpha }=A_{0}\gamma ^{0}+A_{1}\gamma ^{1}+A_{2}\gamma ^{2}+A_{3}\gamma ^{3}} The four-momentum contracted with the gamma matrices is an important case inrelativistic quantum mechanicsandrelativistic quantum field theory. In the Dirac equation and otherrelativistic wave equations, terms of the form:P/=Pαγα=P0γ0+P1γ1+P2γ2+P3γ3=Ecγ0−pxγ1−pyγ2−pzγ3{\displaystyle {\begin{aligned}\mathbf {P} \!\!\!\!/=P_{\alpha }\gamma ^{\alpha }&=P_{0}\gamma ^{0}+P_{1}\gamma ^{1}+P_{2}\gamma ^{2}+P_{3}\gamma ^{3}\\[4pt]&={\dfrac {E}{c}}\gamma ^{0}-p_{x}\gamma ^{1}-p_{y}\gamma ^{2}-p_{z}\gamma ^{3}\\\end{aligned}}}appear, in which the energyEand momentum components(px,py,pz)are replaced by their respectiveoperators.
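A short NumPy sketch can illustrate the Pauli-matrix representation described above: the 2×2 matrix built from four arbitrary components is Hermitian, and its determinant reproduces the Minkowski norm. The component values are illustrative assumptions, not taken from the article.

import numpy as np

sigma = [
    np.array([[1, 0], [0, 1]], dtype=complex),     # sigma_0 (identity)
    np.array([[0, 1], [1, 0]], dtype=complex),     # sigma_1
    np.array([[0, -1j], [1j, 0]], dtype=complex),  # sigma_2
    np.array([[1, 0], [0, -1]], dtype=complex),    # sigma_3
]

A = np.array([2.0, 0.3, -1.2, 0.5])                # arbitrary components (illustrative)
M = sum(A[i] * sigma[i] for i in range(4))         # 2x2 Hermitian representation

print(np.allclose(M, M.conj().T))                  # Hermitian, as stated in the text
minkowski_norm2 = A[0]**2 - A[1]**2 - A[2]**2 - A[3]**2
print(np.isclose(np.linalg.det(M).real, minkowski_norm2))   # det equals the Minkowski norm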
https://en.wikipedia.org/wiki/Four-vector
Inspecial relativity, afour-vector(or4-vector, sometimesLorentz vector)[1]is an object with four components, which transform in a specific way underLorentz transformations. Specifically, a four-vector is an element of a four-dimensionalvector spaceconsidered as arepresentation spaceof thestandard representationof theLorentz group, the (⁠1/2⁠,⁠1/2⁠) representation. It differs from aEuclidean vectorin how its magnitude is determined. The transformations that preserve this magnitude are the Lorentz transformations, which includespatial rotationsandboosts(a change by a constant velocity to anotherinertial reference frame).[2]: ch1 Four-vectors describe, for instance, positionxμin spacetime modeled asMinkowski space, a particle'sfour-momentumpμ, the amplitude of theelectromagnetic four-potentialAμ(x)at a pointxin spacetime, and the elements of the subspace spanned by thegamma matricesinside theDirac algebra. The Lorentz group may be represented by 4×4 matricesΛ. The action of a Lorentz transformation on a generalcontravariantfour-vectorX(like the examples above), regarded as a column vector withCartesian coordinateswith respect to aninertial framein the entries, is given by X′=ΛX,{\displaystyle X'=\Lambda X,} (matrix multiplication) where the components of the primed object refer to the new frame. Related to the examples above that are given as contravariant vectors, there are also the correspondingcovariant vectorsxμ,pμandAμ(x). These transform according to the rule X′=(Λ−1)TX,{\displaystyle X'=\left(\Lambda ^{-1}\right)^{\textrm {T}}X,} whereTdenotes thematrix transpose. This rule is different from the above rule. It corresponds to thedual representationof the standard representation. However, for the Lorentz group the dual of any representation isequivalentto the original representation. Thus the objects with covariant indices are four-vectors as well. For an example of a well-behaved four-component object in special relativity that isnota four-vector, seebispinor. It is similarly defined, the difference being that the transformation rule under Lorentz transformations is given by a representation other than the standard representation. In this case, the rule readsX′= Π(Λ)X, whereΠ(Λ)is a 4×4 matrix other thanΛ. Similar remarks apply to objects with fewer or more components that are well-behaved under Lorentz transformations. These includescalars,spinors,tensorsand spinor-tensors. The article considers four-vectors in the context of special relativity. Although the concept of four-vectors also extends togeneral relativity, some of the results stated in this article require modification in general relativity. The notations in this article are: lowercase bold forthree-dimensionalvectors, hats for three-dimensionalunit vectors, capital bold forfour dimensionalvectors (except for the four-gradient), andtensor index notation. Afour-vectorAis a vector with a "timelike" component and three "spacelike" components, and can be written in various equivalent notations:[3] A=(A0,A1,A2,A3)=A0E0+A1E1+A2E2+A3E3=A0E0+AiEi=AαEα{\displaystyle {\begin{aligned}\mathbf {A} &=\left(A^{0},\,A^{1},\,A^{2},\,A^{3}\right)\\&=A^{0}\mathbf {E} _{0}+A^{1}\mathbf {E} _{1}+A^{2}\mathbf {E} _{2}+A^{3}\mathbf {E} _{3}\\&=A^{0}\mathbf {E} _{0}+A^{i}\mathbf {E} _{i}\\&=A^{\alpha }\mathbf {E} _{\alpha }\end{aligned}}} whereAαis the magnitude component andEαis thebasis vectorcomponent; note that both are necessary to make a vector, and that whenAαis seen alone, it refers strictly to thecomponentsof the vector. 
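To make the two transformation rules above concrete, the following NumPy sketch (an added illustration, not part of the article; the boost speed and the component values are arbitrary) applies Λ to contravariant components and (Λ⁻¹)ᵀ to the corresponding covariant components, then checks that index lowering commutes with the transformation and that the contraction A_μA^μ is unchanged.

import numpy as np

def boost_x(beta):
    """Standard 4x4 Lorentz boost along x with velocity beta = v/c."""
    g = 1.0 / np.sqrt(1.0 - beta**2)
    L = np.eye(4)
    L[0, 0] = L[1, 1] = g
    L[0, 1] = L[1, 0] = -g * beta
    return L

eta = np.diag([1.0, -1.0, -1.0, -1.0])
Lam = boost_x(0.6)

A_contra = np.array([3.0, 1.0, -2.0, 0.5])   # contravariant components (illustrative)
A_co = eta @ A_contra                        # covariant components via index lowering

A_contra_p = Lam @ A_contra                  # contravariant rule X' = Lambda X
A_co_p = np.linalg.inv(Lam).T @ A_co         # covariant rule X' = (Lambda^-1)^T X

print(np.allclose(eta @ A_contra_p, A_co_p))             # lowering commutes with the transformation
print(np.isclose(A_co @ A_contra, A_co_p @ A_contra_p))  # A_mu A^mu is frame independent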
The upper indices indicatecontravariantcomponents. Here the standard convention is that Latin indices take values for spatial components, so thati= 1, 2, 3, and Greek indices take values for spaceand timecomponents, soα= 0, 1, 2, 3, used with thesummation convention. The split between the time component and the spatial components is a useful one to make when determining contractions of one four vector with other tensor quantities, such as for calculating Lorentz invariants in inner products (examples are given below), orraising and lowering indices. In special relativity, the spacelike basisE1,E2,E3and componentsA1,A2,A3are oftenCartesianbasis and components: A=(At,Ax,Ay,Az)=AtEt+AxEx+AyEy+AzEz{\displaystyle {\begin{aligned}\mathbf {A} &=\left(A_{t},\,A_{x},\,A_{y},\,A_{z}\right)\\&=A_{t}\mathbf {E} _{t}+A_{x}\mathbf {E} _{x}+A_{y}\mathbf {E} _{y}+A_{z}\mathbf {E} _{z}\\\end{aligned}}} although, of course, any other basis and components may be used, such asspherical polar coordinates A=(At,Ar,Aθ,Aϕ)=AtEt+ArEr+AθEθ+AϕEϕ{\displaystyle {\begin{aligned}\mathbf {A} &=\left(A_{t},\,A_{r},\,A_{\theta },\,A_{\phi }\right)\\&=A_{t}\mathbf {E} _{t}+A_{r}\mathbf {E} _{r}+A_{\theta }\mathbf {E} _{\theta }+A_{\phi }\mathbf {E} _{\phi }\\\end{aligned}}} orcylindrical polar coordinates, A=(At,Ar,Aθ,Az)=AtEt+ArEr+AθEθ+AzEz{\displaystyle {\begin{aligned}\mathbf {A} &=(A_{t},\,A_{r},\,A_{\theta },\,A_{z})\\&=A_{t}\mathbf {E} _{t}+A_{r}\mathbf {E} _{r}+A_{\theta }\mathbf {E} _{\theta }+A_{z}\mathbf {E} _{z}\\\end{aligned}}} or any otherorthogonal coordinates, or even generalcurvilinear coordinates. Note the coordinate labels are always subscripted as labels and are not indices taking numerical values. In general relativity, local curvilinear coordinates in a local basis must be used. Geometrically, a four-vector can still be interpreted as an arrow, but in spacetime - not just space. In relativity, the arrows are drawn as part ofMinkowski diagram(also calledspacetime diagram). In this article, four-vectors will be referred to simply as vectors. It is also customary to represent the bases bycolumn vectors: E0=(1000),E1=(0100),E2=(0010),E3=(0001){\displaystyle \mathbf {E} _{0}={\begin{pmatrix}1\\0\\0\\0\end{pmatrix}}\,,\quad \mathbf {E} _{1}={\begin{pmatrix}0\\1\\0\\0\end{pmatrix}}\,,\quad \mathbf {E} _{2}={\begin{pmatrix}0\\0\\1\\0\end{pmatrix}}\,,\quad \mathbf {E} _{3}={\begin{pmatrix}0\\0\\0\\1\end{pmatrix}}} so that: A=(A0A1A2A3){\displaystyle \mathbf {A} ={\begin{pmatrix}A^{0}\\A^{1}\\A^{2}\\A^{3}\end{pmatrix}}} The relation between thecovariantand contravariant coordinates is through theMinkowskimetric tensor(referred to as the metric),ηwhichraises and lowers indicesas follows: Aμ=ημνAν,{\displaystyle A_{\mu }=\eta _{\mu \nu }A^{\nu }\,,} and in various equivalent notations the covariant components are: A=(A0,A1,A2,A3)=A0E0+A1E1+A2E2+A3E3=A0E0+AiEi=AαEα{\displaystyle {\begin{aligned}\mathbf {A} &=(A_{0},\,A_{1},\,A_{2},\,A_{3})\\&=A_{0}\mathbf {E} ^{0}+A_{1}\mathbf {E} ^{1}+A_{2}\mathbf {E} ^{2}+A_{3}\mathbf {E} ^{3}\\&=A_{0}\mathbf {E} ^{0}+A_{i}\mathbf {E} ^{i}\\&=A_{\alpha }\mathbf {E} ^{\alpha }\\\end{aligned}}} where the lowered index indicates it to becovariant. Often the metric is diagonal, as is the case fororthogonal coordinates(seeline element), but not in generalcurvilinear coordinates. 
The bases can be represented byrow vectors: E0=(1000),E1=(0100),E2=(0010),E3=(0001),{\displaystyle {\begin{aligned}\mathbf {E} ^{0}&={\begin{pmatrix}1&0&0&0\end{pmatrix}}\,,&\mathbf {E} ^{1}&={\begin{pmatrix}0&1&0&0\end{pmatrix}}\,,\\[1ex]\mathbf {E} ^{2}&={\begin{pmatrix}0&0&1&0\end{pmatrix}}\,,&\mathbf {E} ^{3}&={\begin{pmatrix}0&0&0&1\end{pmatrix}},\end{aligned}}}so that:A=(A0A1A2A3){\displaystyle \mathbf {A} ={\begin{pmatrix}A_{0}&A_{1}&A_{2}&A_{3}\end{pmatrix}}} The motivation for the above conventions are that the inner product is a scalar, see below for details. Given two inertial or rotatedframes of reference, a four-vector is defined as a quantity which transforms according to theLorentz transformationmatrixΛ:A′=ΛA{\displaystyle \mathbf {A} '={\boldsymbol {\Lambda }}\mathbf {A} } In index notation, the contravariant and covariant components transform according to, respectively:A′μ=ΛμνAν,A′μ=ΛμνAν{\displaystyle {A'}^{\mu }=\Lambda ^{\mu }{}_{\nu }A^{\nu }\,,\quad {A'}_{\mu }=\Lambda _{\mu }{}^{\nu }A_{\nu }}in which the matrixΛhas componentsΛμνin rowμand columnν, and the matrix(Λ−1)Thas componentsΛμνin rowμand columnν. For background on the nature of this transformation definition, seetensor. All four-vectors transform in the same way, and this can be generalized to four-dimensional relativistic tensors; seespecial relativity. For two frames rotated by a fixed angleθabout an axis defined by theunit vector: n^=(n^1,n^2,n^3),{\displaystyle {\hat {\mathbf {n} }}=\left({\hat {n}}_{1},{\hat {n}}_{2},{\hat {n}}_{3}\right)\,,} without any boosts, the matrixΛhas components given by:[4] Λ00=1Λ0i=Λi0=0Λij=(δij−n^in^j)cos⁡θ−εijkn^ksin⁡θ+n^in^j{\displaystyle {\begin{aligned}\Lambda _{00}&=1\\\Lambda _{0i}=\Lambda _{i0}&=0\\\Lambda _{ij}&=\left(\delta _{ij}-{\hat {n}}_{i}{\hat {n}}_{j}\right)\cos \theta -\varepsilon _{ijk}{\hat {n}}_{k}\sin \theta +{\hat {n}}_{i}{\hat {n}}_{j}\end{aligned}}} whereδijis theKronecker delta, andεijkis thethree-dimensionalLevi-Civita symbol. The spacelike components of four-vectors are rotated, while the timelike components remain unchanged. For the case of rotations about thez-axis only, the spacelike part of the Lorentz matrix reduces to therotation matrixabout thez-axis: (A′0A′1A′2A′3)=(10000cos⁡θ−sin⁡θ00sin⁡θcos⁡θ00001)(A0A1A2A3).{\displaystyle {\begin{pmatrix}{A'}^{0}\\{A'}^{1}\\{A'}^{2}\\{A'}^{3}\end{pmatrix}}={\begin{pmatrix}1&0&0&0\\0&\cos \theta &-\sin \theta &0\\0&\sin \theta &\cos \theta &0\\0&0&0&1\\\end{pmatrix}}{\begin{pmatrix}A^{0}\\A^{1}\\A^{2}\\A^{3}\end{pmatrix}}\ .} For two frames moving at constant relative three-velocityv(not four-velocity,see below), it is convenient to denote and define the relative velocity in units ofcby: β=(β1,β2,β3)=1c(v1,v2,v3)=1cv.{\displaystyle {\boldsymbol {\beta }}=(\beta _{1},\,\beta _{2},\,\beta _{3})={\frac {1}{c}}(v_{1},\,v_{2},\,v_{3})={\frac {1}{c}}\mathbf {v} \,.} Then without rotations, the matrixΛhas components given by:[5]Λ00=γ,Λ0i=Λi0=−γβi,Λij=Λji=(γ−1)βiβjβ2+δij=(γ−1)vivjv2+δij,{\displaystyle {\begin{aligned}\Lambda _{00}&=\gamma ,\\\Lambda _{0i}=\Lambda _{i0}&=-\gamma \beta _{i},\\\Lambda _{ij}=\Lambda _{ji}&=(\gamma -1){\frac {\beta _{i}\beta _{j}}{\beta ^{2}}}+\delta _{ij}=(\gamma -1){\frac {v_{i}v_{j}}{v^{2}}}+\delta _{ij},\\\end{aligned}}}where theLorentz factoris defined by:γ=11−β⋅β,{\displaystyle \gamma ={\frac {1}{\sqrt {1-{\boldsymbol {\beta }}\cdot {\boldsymbol {\beta }}}}}\,,}andδijis theKronecker delta. 
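The axis–angle rotation formula and the explicit z-rotation matrix given above can be cross-checked numerically. A minimal NumPy sketch, assuming an arbitrary rotation angle; it also verifies the Lorentz condition ΛᵀηΛ = η and that the timelike component is untouched by a pure rotation.

import numpy as np

def rotation_lorentz(n_hat, theta):
    """4x4 Lorentz matrix for a spatial rotation by theta about unit axis n_hat, using
    Lambda_ij = (delta_ij - n_i n_j) cos(theta) - eps_ijk n_k sin(theta) + n_i n_j."""
    eps = np.zeros((3, 3, 3))
    eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
    eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0
    nn = np.outer(n_hat, n_hat)
    L = np.eye(4)
    L[1:, 1:] = ((np.eye(3) - nn) * np.cos(theta)
                 - np.einsum('ijk,k->ij', eps, n_hat) * np.sin(theta)
                 + nn)
    return L

eta = np.diag([1.0, -1.0, -1.0, -1.0])
theta = 0.7                                       # arbitrary angle (illustrative)
Lz = rotation_lorentz(np.array([0.0, 0.0, 1.0]), theta)

print(np.allclose(Lz.T @ eta @ Lz, eta))          # Lorentz condition holds
explicit = np.array([[1, 0, 0, 0],
                     [0, np.cos(theta), -np.sin(theta), 0],
                     [0, np.sin(theta),  np.cos(theta), 0],
                     [0, 0, 0, 1]])
print(np.allclose(Lz, explicit))                  # matches the explicit z-rotation block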
Contrary to the case for pure rotations, the spacelike and timelike components are mixed together under boosts. For the case of a boost in thex-direction only, the matrix reduces to;[6][7] (A′0A′1A′2A′3)=(cosh⁡ϕ−sinh⁡ϕ00−sinh⁡ϕcosh⁡ϕ0000100001)(A0A1A2A3){\displaystyle {\begin{pmatrix}A'^{0}\\A'^{1}\\A'^{2}\\A'^{3}\end{pmatrix}}={\begin{pmatrix}\cosh \phi &-\sinh \phi &0&0\\-\sinh \phi &\cosh \phi &0&0\\0&0&1&0\\0&0&0&1\\\end{pmatrix}}{\begin{pmatrix}A^{0}\\A^{1}\\A^{2}\\A^{3}\end{pmatrix}}} Where therapidityϕexpression has been used, written in terms of thehyperbolic functions:γ=cosh⁡ϕ{\displaystyle \gamma =\cosh \phi } This Lorentz matrix illustrates the boost to be ahyperbolic rotationin four dimensional spacetime, analogous to the circular rotation above in three-dimensional space. Four-vectors have the samelinearity propertiesasEuclidean vectorsinthree dimensions. They can be added in the usual entrywise way:A+B=(A0,A1,A2,A3)+(B0,B1,B2,B3)=(A0+B0,A1+B1,A2+B2,A3+B3){\displaystyle {\begin{aligned}\mathbf {A} +\mathbf {B} &=\left(A^{0},A^{1},A^{2},A^{3}\right)+\left(B^{0},B^{1},B^{2},B^{3}\right)\\&=\left(A^{0}+B^{0},A^{1}+B^{1},A^{2}+B^{2},A^{3}+B^{3}\right)\end{aligned}}}and similarlyscalar multiplicationby ascalarλis defined entrywise by:λA=λ(A0,A1,A2,A3)=(λA0,λA1,λA2,λA3){\displaystyle \lambda \mathbf {A} =\lambda \left(A^{0},A^{1},A^{2},A^{3}\right)=\left(\lambda A^{0},\lambda A^{1},\lambda A^{2},\lambda A^{3}\right)} Then subtraction is the inverse operation of addition, defined entrywise by:A+(−1)B=(A0,A1,A2,A3)+(−1)(B0,B1,B2,B3)=(A0−B0,A1−B1,A2−B2,A3−B3){\displaystyle {\begin{aligned}\mathbf {A} +(-1)\mathbf {B} &=\left(A^{0},A^{1},A^{2},A^{3}\right)+(-1)\left(B^{0},B^{1},B^{2},B^{3}\right)\\&=\left(A^{0}-B^{0},A^{1}-B^{1},A^{2}-B^{2},A^{3}-B^{3}\right)\end{aligned}}} Applying theMinkowski tensorημνto two four-vectorsAandB, writing the result indot productnotation, we have, usingEinstein notation:A⋅B=AμBνEμ⋅Eν=AμημνBν{\displaystyle \mathbf {A} \cdot \mathbf {B} =A^{\mu }B^{\nu }\mathbf {E} _{\mu }\cdot \mathbf {E} _{\nu }=A^{\mu }\eta _{\mu \nu }B^{\nu }} in special relativity. The dot product of the basis vectors is the Minkowski metric, as opposed to the Kronecker delta as in Euclidean space. It is convenient to rewrite the definition inmatrixform:A⋅B=(A0A1A2A3)(η00η01η02η03η10η11η12η13η20η21η22η23η30η31η32η33)(B0B1B2B3){\displaystyle \mathbf {A\cdot B} ={\begin{pmatrix}A^{0}&A^{1}&A^{2}&A^{3}\end{pmatrix}}{\begin{pmatrix}\eta _{00}&\eta _{01}&\eta _{02}&\eta _{03}\\\eta _{10}&\eta _{11}&\eta _{12}&\eta _{13}\\\eta _{20}&\eta _{21}&\eta _{22}&\eta _{23}\\\eta _{30}&\eta _{31}&\eta _{32}&\eta _{33}\end{pmatrix}}{\begin{pmatrix}B^{0}\\B^{1}\\B^{2}\\B^{3}\end{pmatrix}}}in which caseημνabove is the entry in rowμand columnνof the Minkowski metric as a square matrix. The Minkowski metric is not aEuclidean metric, because it is indefinite (seemetric signature). A number of other expressions can be used because the metric tensor can raise and lower the components ofAorB. 
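The claim that a boost is a hyperbolic rotation can be illustrated by building the same x-boost twice, once from the rapidity form above and once from the general β-component formula of the previous passage, and then checking that the Minkowski inner product of two arbitrary four-vectors is preserved. A hedged NumPy sketch with illustrative sample values:

import numpy as np

def boost_rapidity_x(phi):
    """x-boost written as a hyperbolic rotation with rapidity phi."""
    L = np.eye(4)
    L[0, 0] = L[1, 1] = np.cosh(phi)
    L[0, 1] = L[1, 0] = -np.sinh(phi)
    return L

def boost_beta(beta_vec):
    """General boost (no rotation) from the component formula with beta = v/c."""
    beta2 = beta_vec @ beta_vec
    g = 1.0 / np.sqrt(1.0 - beta2)
    L = np.eye(4)
    L[0, 0] = g
    L[0, 1:] = L[1:, 0] = -g * beta_vec
    L[1:, 1:] = (g - 1.0) * np.outer(beta_vec, beta_vec) / beta2 + np.eye(3)
    return L

beta = 0.6
phi = np.arctanh(beta)                       # gamma = cosh(phi), gamma*beta = sinh(phi)
L1 = boost_rapidity_x(phi)
L2 = boost_beta(np.array([beta, 0.0, 0.0]))
print(np.allclose(L1, L2))                   # the two forms agree for a pure x-boost

eta = np.diag([1.0, -1.0, -1.0, -1.0])
A = np.array([1.0, 0.2, -0.7, 3.0])
B = np.array([0.5, 1.5, 0.0, -1.0])
print(np.isclose(A @ eta @ B, (L1 @ A) @ eta @ (L1 @ B)))   # inner product is invariant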
For contra/co-variant components ofAand co/contra-variant components ofB, we have:A⋅B=AμημνBν=AνBν=AμBμ{\displaystyle \mathbf {A} \cdot \mathbf {B} =A^{\mu }\eta _{\mu \nu }B^{\nu }=A_{\nu }B^{\nu }=A^{\mu }B_{\mu }}so in the matrix notation:A⋅B=(A0A1A2A3)(B0B1B2B3)=(B0B1B2B3)(A0A1A2A3){\displaystyle {\begin{aligned}\mathbf {A} \cdot \mathbf {B} &={\begin{pmatrix}A_{0}&A_{1}&A_{2}&A_{3}\end{pmatrix}}{\begin{pmatrix}B^{0}\\B^{1}\\B^{2}\\B^{3}\end{pmatrix}}\\[1ex]&={\begin{pmatrix}B_{0}&B_{1}&B_{2}&B_{3}\end{pmatrix}}{\begin{pmatrix}A^{0}\\A^{1}\\A^{2}\\A^{3}\end{pmatrix}}\end{aligned}}}while forAandBeach in covariant components:A⋅B=AμημνBν{\displaystyle \mathbf {A} \cdot \mathbf {B} =A_{\mu }\eta ^{\mu \nu }B_{\nu }}with a similar matrix expression to the above. Applying the Minkowski tensor to a four-vectorAwith itself we get:A⋅A=AμημνAν{\displaystyle \mathbf {A\cdot A} =A^{\mu }\eta _{\mu \nu }A^{\nu }}which, depending on the case, may be considered the square, or its negative, of the length of the vector. Following are two common choices for the metric tensor in thestandard basis(essentially Cartesian coordinates). If orthogonal coordinates are used, there would be scale factors along the diagonal part of the spacelike part of the metric, while for general curvilinear coordinates the entire spacelike part of the metric would have components dependent on the curvilinear basis used. The (+−−−)metric signatureis sometimes called the "mostly minus" convention, or the "west coast" convention. In the (+−−−)metric signature, evaluating thesummation over indicesgives:A⋅B=A0B0−A1B1−A2B2−A3B3{\displaystyle \mathbf {A} \cdot \mathbf {B} =A^{0}B^{0}-A^{1}B^{1}-A^{2}B^{2}-A^{3}B^{3}}while in matrix form:A⋅B=(A0A1A2A3)(10000−10000−10000−1)(B0B1B2B3){\displaystyle \mathbf {A\cdot B} ={\begin{pmatrix}A^{0}&A^{1}&A^{2}&A^{3}\end{pmatrix}}{\begin{pmatrix}1&0&0&0\\0&-1&0&0\\0&0&-1&0\\0&0&0&-1\end{pmatrix}}{\begin{pmatrix}B^{0}\\B^{1}\\B^{2}\\B^{3}\end{pmatrix}}} It is a recurring theme in special relativity to take the expressionA⋅B=A0B0−A1B1−A2B2−A3B3=C{\displaystyle \mathbf {A} \cdot \mathbf {B} =A^{0}B^{0}-A^{1}B^{1}-A^{2}B^{2}-A^{3}B^{3}=C}in onereference frame, whereCis the value of the inner product in this frame, and:A′⋅B′=A′0B′0−A′1B′1−A′2B′2−A′3B′3=C′{\displaystyle \mathbf {A} '\cdot \mathbf {B} '={A'}^{0}{B'}^{0}-{A'}^{1}{B'}^{1}-{A'}^{2}{B'}^{2}-{A'}^{3}{B'}^{3}=C'}in another frame, in whichC′ is the value of the inner product in this frame. Then since the inner product is an invariant, these must be equal:A⋅B=A′⋅B′{\displaystyle \mathbf {A} \cdot \mathbf {B} =\mathbf {A} '\cdot \mathbf {B} '}that is:C=A0B0−A1B1−A2B2−A3B3=A′0B′0−A′1B′1−A′2B′2−A′3B′3{\displaystyle {\begin{aligned}C&=A^{0}B^{0}-A^{1}B^{1}-A^{2}B^{2}-A^{3}B^{3}\\[2pt]&={A'}^{0}{B'}^{0}-{A'}^{1}{B'}^{1}-{A'}^{2}{B'}^{2}-{A'}^{3}{B'}^{3}\end{aligned}}} Considering that physical quantities in relativity are four-vectors, this equation has the appearance of a "conservation law", but there is no "conservation" involved. The primary significance of the Minkowski inner product is that for any two four-vectors, its value isinvariantfor all observers; a change of coordinates does not result in a change in value of the inner product. The components of the four-vectors change from one frame to another;AandA′ are connected by aLorentz transformation, and similarly forBandB′, although the inner products are the same in all frames. 
Nevertheless, this type of expression is exploited in relativistic calculations on a par with conservation laws, since the magnitudes of components can be determined without explicitly performing any Lorentz transformations. A particular example is with energy and momentum in theenergy-momentum relationderived from thefour-momentumvector (see also below). In this signature we have:A⋅A=(A0)2−(A1)2−(A2)2−(A3)2{\displaystyle \mathbf {A\cdot A} =\left(A^{0}\right)^{2}-\left(A^{1}\right)^{2}-\left(A^{2}\right)^{2}-\left(A^{3}\right)^{2}} With the signature (+−−−), four-vectors may be classified as eitherspacelikeifA⋅A<0{\displaystyle \mathbf {A\cdot A} <0},timelikeifA⋅A>0{\displaystyle \mathbf {A\cdot A} >0}, andnull vectorsifA⋅A=0{\displaystyle \mathbf {A\cdot A} =0}. The (-+++)metric signatureis sometimes called the "east coast" convention. Some authors defineηwith the opposite sign, in which case we have the (−+++) metric signature. Evaluating the summation with this signature: A⋅B=−A0B0+A1B1+A2B2+A3B3{\displaystyle \mathbf {A\cdot B} =-A^{0}B^{0}+A^{1}B^{1}+A^{2}B^{2}+A^{3}B^{3}} while the matrix form is: A⋅B=(A0A1A2A3)(−1000010000100001)(B0B1B2B3){\displaystyle \mathbf {A\cdot B} =\left({\begin{matrix}A^{0}&A^{1}&A^{2}&A^{3}\end{matrix}}\right)\left({\begin{matrix}-1&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&0&1\end{matrix}}\right)\left({\begin{matrix}B^{0}\\B^{1}\\B^{2}\\B^{3}\end{matrix}}\right)} Note that in this case, in one frame: A⋅B=−A0B0+A1B1+A2B2+A3B3=−C{\displaystyle \mathbf {A} \cdot \mathbf {B} =-A^{0}B^{0}+A^{1}B^{1}+A^{2}B^{2}+A^{3}B^{3}=-C} while in another: A′⋅B′=−A′0B′0+A′1B′1+A′2B′2+A′3B′3=−C′{\displaystyle \mathbf {A} '\cdot \mathbf {B} '=-{A'}^{0}{B'}^{0}+{A'}^{1}{B'}^{1}+{A'}^{2}{B'}^{2}+{A'}^{3}{B'}^{3}=-C'} so that: −C=−A0B0+A1B1+A2B2+A3B3=−A′0B′0+A′1B′1+A′2B′2+A′3B′3{\displaystyle {\begin{aligned}-C&=-A^{0}B^{0}+A^{1}B^{1}+A^{2}B^{2}+A^{3}B^{3}\\[2pt]&=-{A'}^{0}{B'}^{0}+{A'}^{1}{B'}^{1}+{A'}^{2}{B'}^{2}+{A'}^{3}{B'}^{3}\end{aligned}}} which is equivalent to the above expression forCin terms ofAandB. Either convention will work. With the Minkowski metric defined in the two ways above, the only difference between covariant and contravariant four-vector components are signs, therefore the signs depend on which sign convention is used. We have: A⋅A=−(A0)2+(A1)2+(A2)2+(A3)2{\displaystyle \mathbf {A\cdot A} =-\left(A^{0}\right)^{2}+\left(A^{1}\right)^{2}+\left(A^{2}\right)^{2}+\left(A^{3}\right)^{2}} With the signature (−+++), four-vectors may be classified as eitherspacelikeifA⋅A>0{\displaystyle \mathbf {A\cdot A} >0},timelikeifA⋅A<0{\displaystyle \mathbf {A\cdot A} <0}, andnullifA⋅A=0{\displaystyle \mathbf {A\cdot A} =0}. Applying the Minkowski tensor is often expressed as the effect of thedual vectorof one vector on the other: A⋅B=A∗(B)=AνBν.{\displaystyle \mathbf {A\cdot B} =A^{*}(\mathbf {B} )=A{_{\nu }}B^{\nu }.} Here theAνs are the components of the dual vectorA* ofAin thedual basisand called thecovariantcoordinates ofA, while the originalAνcomponents are called thecontravariantcoordinates. In special relativity (but not general relativity), thederivativeof a four-vector with respect to a scalarλ(invariant) is itself a four-vector. 
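The signature-dependent classification just described is easy to encode. A small Python helper (an illustrative sketch, not part of the article; the function name classify is my own) that applies the inequalities for either sign convention:

import numpy as np

def classify(A, signature="+---"):
    """Classify a four-vector as timelike, spacelike or null; the inequality used
    depends on the metric signature convention, as described in the text above."""
    eta = (np.diag([1.0, -1.0, -1.0, -1.0]) if signature == "+---"
           else np.diag([-1.0, 1.0, 1.0, 1.0]))
    s = A @ eta @ A
    if np.isclose(s, 0.0):
        return "null"
    if signature == "+---":
        return "timelike" if s > 0 else "spacelike"
    return "timelike" if s < 0 else "spacelike"

print(classify(np.array([2.0, 1.0, 0.0, 0.0])))            # timelike in (+---)
print(classify(np.array([2.0, 1.0, 0.0, 0.0]), "-+++"))    # still timelike in (-+++)
print(classify(np.array([1.0, 1.0, 0.0, 0.0])))            # null in either convention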
It is also useful to take thedifferentialof the four-vector,dAand divide it by the differential of the scalar,dλ: dAdifferential=dAdλderivativedλdifferential{\displaystyle {\underset {\text{differential}}{d\mathbf {A} }}={\underset {\text{derivative}}{\frac {d\mathbf {A} }{d\lambda }}}{\underset {\text{differential}}{d\lambda }}} where the contravariant components are: dA=(dA0,dA1,dA2,dA3){\displaystyle d\mathbf {A} =\left(dA^{0},dA^{1},dA^{2},dA^{3}\right)} while the covariant components are: dA=(dA0,dA1,dA2,dA3){\displaystyle d\mathbf {A} =\left(dA_{0},dA_{1},dA_{2},dA_{3}\right)} In relativistic mechanics, one often takes the differential of a four-vector and divides by the differential inproper time(see below). A point inMinkowski spaceis a time and spatial position, called an "event", or sometimes theposition four-vectororfour-positionor4-position, described in some reference frame by a set of four coordinates: R=(ct,r){\displaystyle \mathbf {R} =\left(ct,\mathbf {r} \right)} whereris thethree-dimensional spaceposition vector. Ifris a function of coordinate timetin the same frame, i.e.r=r(t), this corresponds to a sequence of events astvaries. The definitionR0=ctensures that all the coordinates have the samedimension(oflength) and units (in theSI, meters).[8][9][10][11]These coordinates are the components of theposition four-vectorfor the event. Thedisplacement four-vectoris defined to be an "arrow" linking two events: ΔR=(cΔt,Δr){\displaystyle \Delta \mathbf {R} =\left(c\Delta t,\Delta \mathbf {r} \right)} For thedifferentialfour-position on a world line we have, usinga norm notation: ‖dR‖2=dR⋅dR=dRμdRμ=c2dτ2=ds2,{\displaystyle \|d\mathbf {R} \|^{2}=\mathbf {dR\cdot dR} =dR^{\mu }dR_{\mu }=c^{2}d\tau ^{2}=ds^{2}\,,} defining the differentialline elementdsand differential proper time increment dτ, but this "norm" is also: ‖dR‖2=(cdt)2−dr⋅dr,{\displaystyle \|d\mathbf {R} \|^{2}=(cdt)^{2}-d\mathbf {r} \cdot d\mathbf {r} \,,} so that: (cdτ)2=(cdt)2−dr⋅dr.{\displaystyle (cd\tau )^{2}=(cdt)^{2}-d\mathbf {r} \cdot d\mathbf {r} \,.} When considering physical phenomena, differential equations arise naturally; however, when considering space andtime derivativesof functions, it is unclear which reference frame these derivatives are taken with respect to. It is agreed that time derivatives are taken with respect to theproper timeτ{\displaystyle \tau }. As proper time is an invariant, this guarantees that the proper-time-derivative of any four-vector is itself a four-vector. It is then important to find a relation between this proper-time-derivative and another time derivative (using thecoordinate timetof an inertial reference frame). This relation is provided by taking the above differential invariant spacetime interval, then dividing by (cdt)2to obtain: (cdτcdt)2=1−(drcdt⋅drcdt)=1−u⋅uc2=1γ(u)2,{\displaystyle \left({\frac {cd\tau }{cdt}}\right)^{2}=1-\left({\frac {d\mathbf {r} }{cdt}}\cdot {\frac {d\mathbf {r} }{cdt}}\right)=1-{\frac {\mathbf {u} \cdot \mathbf {u} }{c^{2}}}={\frac {1}{\gamma (\mathbf {u} )^{2}}}\,,} whereu=dr/dtis the coordinate 3-velocityof an object measured in the same frame as the coordinatesx,y,z, andcoordinate timet, and γ(u)=11−u⋅uc2{\displaystyle \gamma (\mathbf {u} )={\frac {1}{\sqrt {1-{\frac {\mathbf {u} \cdot \mathbf {u} }{c^{2}}}}}}} is theLorentz factor. 
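The relation dt = γ(u) dτ obtained here can be checked by integrating proper time along a sample worldline. A NumPy sketch, assuming units with c = 1 and an arbitrary (subluminal) trajectory x(t); it verifies the step-by-step relation and shows that less proper time elapses than coordinate time.

import numpy as np

c = 1.0                                   # units with c = 1 (assumption for simplicity)
t = np.linspace(0.0, 10.0, 100_001)
x = 0.8 * np.sin(0.3 * t)                 # arbitrary worldline x(t), |dx/dt| < c

dt = np.diff(t)
dx = np.diff(x)
u = dx / dt                               # coordinate 3-velocity (one spatial dimension here)

# proper time increments from (c dtau)^2 = (c dt)^2 - dx^2, i.e. dtau = dt / gamma(u)
dtau = dt * np.sqrt(1.0 - (u / c)**2)
gamma = 1.0 / np.sqrt(1.0 - (u / c)**2)

print(np.allclose(dt, gamma * dtau))      # dt = gamma dtau along the whole worldline
print(t[-1] - t[0], dtau.sum())           # elapsed coordinate time exceeds elapsed proper time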
https://en.wikipedia.org/wiki/Four-position
Inphysics, in particular inspecial relativityandgeneral relativity, afour-velocityis afour-vectorin four-dimensionalspacetime[nb 1]that represents the relativistic counterpart ofvelocity, which is athree-dimensionalvectorin space. Physicaleventscorrespond to mathematical points in time and space, the set of all of them together forming a mathematical model of physical four-dimensional spacetime. The history of an object traces a curve in spacetime, called itsworld line. If the object hasmass, so that its speed is necessarily less than thespeed of light, the world line may beparametrizedby theproper timeof the object. The four-velocity is the rate of change offour-positionwith respect to the proper time along the curve. The velocity, in contrast, is the rate of change of the position in (three-dimensional) space of the object, as seen by an observer, with respect to the observer's time. The value of themagnitudeof an object's four-velocity, i.e. the quantity obtained by applying themetric tensorgto the four-velocityU, that is‖U‖2=U⋅U=gμνUνUμ, is always equal to±c2, wherecis the speed of light. Whether the plus or minus sign applies depends on the choice ofmetric signature. For an object at rest its four-velocity is parallel to the direction of the time coordinate withU0=c. A four-velocity is thus the normalized future-directed timelike tangent vector to a world line, and is acontravariant vector. Though it is a vector, addition of two four-velocities does not yield a four-velocity: the space of four-velocities is not itself avector space.[nb 2] The path of an object in three-dimensional space (in an inertial frame) may be expressed in terms of three spatial coordinate functionsxi(t)of timet, whereiis anindexwhich takes values 1, 2, 3. The three coordinates form the 3dposition vector, written as acolumn vectorx→(t)=[x1(t)x2(t)x3(t)].{\displaystyle {\vec {x}}(t)={\begin{bmatrix}x^{1}(t)\\[0.7ex]x^{2}(t)\\[0.7ex]x^{3}(t)\end{bmatrix}}\,.} The components of the velocityu→{\displaystyle {\vec {u}}}(tangent to the curve) at any point on the world line are u→=[u1u2u3]=dx→dt=[dx1dtdx2dtdx3dt].{\displaystyle {\vec {u}}={\begin{bmatrix}u^{1}\\u^{2}\\u^{3}\end{bmatrix}}={\frac {d{\vec {x}}}{dt}}={\begin{bmatrix}{\tfrac {dx^{1}}{dt}}\\{\tfrac {dx^{2}}{dt}}\\{\tfrac {dx^{3}}{dt}}\end{bmatrix}}.} Each component is simply writtenui=dxidt{\displaystyle u^{i}={\frac {dx^{i}}{dt}}} In Einstein'stheory of relativity, the path of an object moving relative to a particular frame of reference is defined by four coordinate functionsxμ(τ), whereμis a spacetime index which takes the value 0 for the timelike component, and 1, 2, 3 for the spacelike coordinates. The zeroth component is defined as the time coordinate multiplied byc,x0=ct,{\displaystyle x^{0}=ct\,,} Each function depends on one parameterτcalled itsproper time. 
As a column vector,x=[x0(τ)x1(τ)x2(τ)x3(τ)].{\displaystyle \mathbf {x} ={\begin{bmatrix}x^{0}(\tau )\\x^{1}(\tau )\\x^{2}(\tau )\\x^{3}(\tau )\\\end{bmatrix}}\,.} Fromtime dilation, thedifferentialsincoordinate timetandproper timeτare related bydt=γ(u)dτ{\displaystyle dt=\gamma (u)d\tau }where theLorentz factor,γ(u)=11−u2c2,{\displaystyle \gamma (u)={\frac {1}{\sqrt {1-{\frac {u^{2}}{c^{2}}}}}}\,,}is a function of theEuclidean normuof the 3d velocity vectoru→{\displaystyle {\vec {u}}}:u=‖u→‖=(u1)2+(u2)2+(u3)2.{\displaystyle u=\left\|\ {\vec {u}}\ \right\|={\sqrt {\left(u^{1}\right)^{2}+\left(u^{2}\right)^{2}+\left(u^{3}\right)^{2}}}\,.} The four-velocity is the tangent four-vector of atimelikeworld line. The four-velocityU{\displaystyle \mathbf {U} }at any point of world lineX(τ){\displaystyle \mathbf {X} (\tau )}is defined as:U=dXdτ{\displaystyle \mathbf {U} ={\frac {d\mathbf {X} }{d\tau }}}whereX{\displaystyle \mathbf {X} }is thefour-positionandτ{\displaystyle \tau }is theproper time.[1] The four-velocity defined here using the proper time of an object does not exist for world lines for massless objects such as photons travelling at the speed of light; nor is it defined fortachyonicworld lines, where the tangent vector isspacelike. The relationship between the timetand the coordinate timex0is defined byx0=ct.{\displaystyle x^{0}=ct.} Taking the derivative of this with respect to the proper timeτ, we find theUμvelocity component forμ= 0:U0=dx0dτ=d(ct)dτ=cdtdτ=cγ(u){\displaystyle U^{0}={\frac {dx^{0}}{d\tau }}={\frac {d(ct)}{d\tau }}=c{\frac {dt}{d\tau }}=c\gamma (u)} and for the other 3 components to proper time we get theUμvelocity component forμ= 1, 2, 3:Ui=dxidτ=dxidtdtdτ=dxidtγ(u)=γ(u)ui{\displaystyle U^{i}={\frac {dx^{i}}{d\tau }}={\frac {dx^{i}}{dt}}{\frac {dt}{d\tau }}={\frac {dx^{i}}{dt}}\gamma (u)=\gamma (u)u^{i}}where we have used thechain ruleand the relationshipsui=dxidt,dtdτ=γ(u){\displaystyle u^{i}={dx^{i} \over dt}\,,\quad {\frac {dt}{d\tau }}=\gamma (u)} Thus, we find for the four-velocityU{\displaystyle \mathbf {U} }:U=γ[cu→].{\displaystyle \mathbf {U} =\gamma {\begin{bmatrix}c\\{\vec {u}}\\\end{bmatrix}}.} Written in standard four-vector notation this is:U=γ(c,u→)=(γc,γu→){\displaystyle \mathbf {U} =\gamma \left(c,{\vec {u}}\right)=\left(\gamma c,\gamma {\vec {u}}\right)}whereγc{\displaystyle \gamma c}is the temporal component andγu→{\displaystyle \gamma {\vec {u}}}is the spatial component. In terms of the synchronized clocks and rulers associated with a particular slice of flat spacetime, the three spacelike components of four-velocity define a traveling object'sproper velocityγu→=dx→/dτ{\displaystyle \gamma {\vec {u}}=d{\vec {x}}/d\tau }i.e. the rate at which distance is covered in the reference map frame per unitproper timeelapsed on clocks traveling with the object. Unlike most other four-vectors, the four-velocity has only 3 independent componentsux,uy,uz{\displaystyle u_{x},u_{y},u_{z}}instead of 4. Theγ{\displaystyle \gamma }factor is a function of the three-dimensional velocityu→{\displaystyle {\vec {u}}}. When certain Lorentz scalars are multiplied by the four-velocity, one then gets new physical four-vectors that have 4 independent components. 
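The spatial part γu of the four-velocity, the proper velocity just mentioned, grows without bound as the coordinate speed approaches c, even though the coordinate speed itself stays below c. A short illustrative NumPy table (the sample speeds are arbitrary assumptions):

import numpy as np

c = 299_792_458.0
speeds = np.array([0.5, 0.9, 0.99, 0.999]) * c      # coordinate speeds |u| < c

gamma = 1.0 / np.sqrt(1.0 - (speeds / c)**2)
proper_speed = gamma * speeds                       # |gamma u| = |dx/dtau|

for v, w in zip(speeds, proper_speed):
    print(f"|u| = {v/c:.3f} c   ->   |gamma u| = {w/c:.2f} c")
# The coordinate speed is bounded by c, while the spatial part of the
# four-velocity (the proper velocity) diverges as |u| -> c.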
For example, multiplying the four-velocity by the particle's rest mass gives the four-momentum, and multiplying it by the rest charge density gives the four-current density. Effectively, the γ{\displaystyle \gamma } factor combines with the Lorentz scalar term to make the 4th independent component m=γmo{\displaystyle m=\gamma m_{o}} and ρ=γρo.{\displaystyle \rho =\gamma \rho _{o}.} Using the differential of the four-position in the rest frame, the magnitude of the four-velocity can be obtained from the Minkowski metric with signature (−, +, +, +): ‖U‖2=ημνUμUν=ημνdXμdτdXνdτ=−c2,{\displaystyle \left\|\mathbf {U} \right\|^{2}=\eta _{\mu \nu }U^{\mu }U^{\nu }=\eta _{\mu \nu }{\frac {dX^{\mu }}{d\tau }}{\frac {dX^{\nu }}{d\tau }}=-c^{2}\,,} In short, the magnitude of the four-velocity for any object is always a fixed constant: ‖U‖2=−c2{\displaystyle \left\|\mathbf {U} \right\|^{2}=-c^{2}} In a moving frame, the same norm is: ‖U‖2=γ(u)2(−c2+u→⋅u→),{\displaystyle \left\|\mathbf {U} \right\|^{2}={\gamma (u)}^{2}\left(-c^{2}+{\vec {u}}\cdot {\vec {u}}\right),} so that: −c2=γ(u)2(−c2+u→⋅u→),{\displaystyle -c^{2}={\gamma (u)}^{2}\left(-c^{2}+{\vec {u}}\cdot {\vec {u}}\right),} which reduces to the definition of the Lorentz factor.
https://en.wikipedia.org/wiki/Four-velocity
In the theory of relativity, four-acceleration is a four-vector (a vector in four-dimensional spacetime) that is analogous to classical acceleration (a three-dimensional vector, see three-acceleration in special relativity). Four-acceleration has applications in areas such as the annihilation of antiprotons, resonance of strange particles and radiation of an accelerated charge.[1] In inertial coordinates in special relativity, four-acceleration A{\displaystyle \mathbf {A} } is defined as the rate of change of the four-velocity U{\displaystyle \mathbf {U} } with respect to the particle's proper time along its worldline. We can say: A=dUdτ=(γuγ˙uc,γu2a+γuγ˙uu)=(γu4a⋅uc,γu2a+γu4a⋅uc2u)=(γu4a⋅uc,γu4(a+u×(u×a)c2)),{\displaystyle {\begin{aligned}\mathbf {A} ={\frac {d\mathbf {U} }{d\tau }}&=\left(\gamma _{u}{\dot {\gamma }}_{u}c,\,\gamma _{u}^{2}\mathbf {a} +\gamma _{u}{\dot {\gamma }}_{u}\mathbf {u} \right)\\&=\left(\gamma _{u}^{4}{\frac {\mathbf {a} \cdot \mathbf {u} }{c}},\,\gamma _{u}^{2}\mathbf {a} +\gamma _{u}^{4}{\frac {\mathbf {a} \cdot \mathbf {u} }{c^{2}}}\mathbf {u} \right)\\&=\left(\gamma _{u}^{4}{\frac {\mathbf {a} \cdot \mathbf {u} }{c}},\,\gamma _{u}^{4}\left(\mathbf {a} +{\frac {\mathbf {u} \times \left(\mathbf {u} \times \mathbf {a} \right)}{c^{2}}}\right)\right),\end{aligned}}} where u is the coordinate 3-velocity, a = du/dt is the coordinate 3-acceleration, γu is the Lorentz factor for the speed u, and γ˙u = dγu/dt = γu3(a⋅u)/c2 is its time derivative. In an instantaneously co-moving inertial reference frame u=0{\displaystyle \mathbf {u} =0}, γu=1{\displaystyle \gamma _{u}=1} and γ˙u=0{\displaystyle {\dot {\gamma }}_{u}=0}, i.e. in such a reference frame A=(0,a).{\displaystyle \mathbf {A} =\left(0,\mathbf {a} \right).} Geometrically, four-acceleration is a curvature vector of a worldline.[2][3] Therefore, the magnitude of the four-acceleration (which is an invariant scalar) is equal to the proper acceleration that a moving particle "feels" moving along a worldline. A worldline having constant four-acceleration is a Minkowski circle, i.e. a hyperbola (see hyperbolic motion). The scalar product of a particle's four-velocity and its four-acceleration is always 0. Even at relativistic speeds four-acceleration is related to the four-force: Fμ=mAμ,{\displaystyle F^{\mu }=mA^{\mu },} where m is the invariant mass of the particle. When the four-force is zero, only gravitation affects the trajectory of a particle, and the four-vector equivalent of Newton's second law above reduces to the geodesic equation. The four-acceleration of a particle executing geodesic motion is zero. This corresponds to gravity not being a force. Four-acceleration is different from acceleration as defined in Newtonian physics, where gravity is treated as a force. In non-inertial coordinates, which include accelerated coordinates in special relativity and all coordinates in general relativity, the acceleration four-vector is related to the four-velocity through an absolute derivative with respect to proper time: Aλ:=DUλdτ=dUλdτ+ΓλμνUμUν{\displaystyle A^{\lambda }:={\frac {DU^{\lambda }}{d\tau }}={\frac {dU^{\lambda }}{d\tau }}+\Gamma ^{\lambda }{}_{\mu \nu }U^{\mu }U^{\nu }} In inertial coordinates the Christoffel symbols Γλμν{\displaystyle \Gamma ^{\lambda }{}_{\mu \nu }} are all zero, so this formula is compatible with the formula given earlier.
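The orthogonality A⋅U = 0 follows directly from the component expression above and can be confirmed numerically. A minimal NumPy sketch in units with c = 1 (the velocity and acceleration values are arbitrary), using the (+−−−) form of the inner product; the result holds in either signature convention.

import numpy as np

c = 1.0                                        # units with c = 1 (assumption)
u = np.array([0.3, 0.5, 0.0])                  # 3-velocity (illustrative)
a = np.array([0.1, -0.2, 0.4])                 # coordinate 3-acceleration (illustrative)
gamma = 1.0 / np.sqrt(1.0 - u @ u / c**2)

U = gamma * np.concatenate(([c], u))           # four-velocity
A_time  = gamma**4 * (a @ u) / c
A_space = gamma**2 * a + gamma**4 * (a @ u) / c**2 * u
A = np.concatenate(([A_time], A_space))        # four-acceleration from the formula above

eta = np.diag([1.0, -1.0, -1.0, -1.0])
print(np.isclose(A @ eta @ U, 0.0))            # four-acceleration is orthogonal to four-velocity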
In special relativity the coordinates are usually those of a rectilinear inertial frame, so the Christoffel-symbol term vanishes. Sometimes, however, authors use curvilinear coordinates to describe an accelerated frame; such a frame of reference is not inertial, yet the physics is still described as special-relativistic, because the metric is just a coordinate transformation of the Minkowski space metric. In that case the expression above, with the Christoffel symbols, must be used, because they are no longer all zero.
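The orthogonality of four-velocity and four-acceleration stated above can also be checked numerically. A minimal Python/NumPy sketch (c = 1, arbitrary illustrative values for u and a, and the (−, +, +, +) convention; the result is independent of the signature choice):

    import numpy as np

    eta = np.diag([-1.0, 1.0, 1.0, 1.0])     # (-, +, +, +) signature
    c = 1.0                                  # units with c = 1

    u = np.array([0.4, -0.1, 0.25])          # 3-velocity (|u| < 1)
    a = np.array([0.3, 0.7, -0.2])           # coordinate 3-acceleration du/dt
    g = 1.0 / np.sqrt(1.0 - u @ u)           # Lorentz factor gamma_u

    U = g * np.array([c, *u])                # four-velocity
    # Four-acceleration, using the last form quoted above:
    # A = (gamma^4 a.u/c, gamma^2 a + gamma^4 (a.u/c^2) u)
    A = np.array([g**4 * (a @ u) / c,
                  *(g**2 * a + g**4 * (a @ u) / c**2 * u)])

    print(eta @ U @ A)                       # ~ 0: the four-acceleration is orthogonal to U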
https://en.wikipedia.org/wiki/Four-acceleration
Inspecial relativity,four-momentum(also calledmomentum–energyormomenergy[1]) is the generalization of theclassical three-dimensional momentumtofour-dimensional spacetime. Momentum is a vector inthree dimensions; similarly four-momentum is afour-vectorinspacetime. Thecontravariantfour-momentum of a particle with relativistic energyEand three-momentump= (px,py,pz) =γmv, wherevis the particle's three-velocity andγtheLorentz factor, isp=(p0,p1,p2,p3)=(Ec,px,py,pz).{\displaystyle p=\left(p^{0},p^{1},p^{2},p^{3}\right)=\left({\frac {E}{c}},p_{x},p_{y},p_{z}\right).} The quantitymvabove is the ordinarynon-relativistic momentumof the particle andmitsrest mass. The four-momentum is useful in relativistic calculations because it is aLorentz covariantvector. This means that it is easy to keep track of how it transforms underLorentz transformations. Calculating theMinkowski norm squaredof the four-momentum gives aLorentz invariantquantity equal (up to factors of thespeed of lightc) to the square of the particle'sproper mass: p⋅p=ημνpμpν=pνpν=−E2c2+|p|2=−m2c2{\displaystyle p\cdot p=\eta _{\mu \nu }p^{\mu }p^{\nu }=p_{\nu }p^{\nu }=-{E^{2} \over c^{2}}+|\mathbf {p} |^{2}=-m^{2}c^{2}}where the following denote: p{\textstyle p}, the four-momentum vector of a particle, p⋅p{\textstyle p\cdot p}, the Minkowski inner product of the four-momentum with itself, pμ{\textstyle p^{\mu }}andpν{\textstyle p^{\nu }}, the contravariant components of the four-momentum vector, pν{\textstyle p_{\nu }}, the covariant form, E{\textstyle E}, the energy of the particle, c{\textstyle c}, the speed of light, |p|{\textstyle |\mathbf {p} |}, the magnitude of the three-momentum vector, m{\textstyle m}, theinvariant mass(rest) of the particle, andημν=(−1000010000100001){\displaystyle \eta _{\mu \nu }={\begin{pmatrix}-1&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&0&1\end{pmatrix}}}is themetric tensorofspecial relativitywithmetric signaturefor definiteness chosen to be(–1, 1, 1, 1). The negativity of the norm reflects that the momentum is atimelikefour-vector for massive particles. The other choice of signature would flip signs in certain formulas (like for the norm here). This choice is not important, but once made it must for consistency be kept throughout. The Minkowski norm is Lorentz invariant, meaning its value is not changed by Lorentz transformations/boosting into different frames of reference. More generally, for any two four-momentapandq, the quantityp⋅qis invariant. For a massive particle, the four-momentum is given by the particle'sinvariant massmmultiplied by the particle'sfour-velocity,pμ=muμ,{\displaystyle p^{\mu }=mu^{\mu },}where the four-velocityuisu=(u0,u1,u2,u3)=γv(c,vx,vy,vz),{\displaystyle u=\left(u^{0},u^{1},u^{2},u^{3}\right)=\gamma _{v}\left(c,v_{x},v_{y},v_{z}\right),}andγv:=11−v2c2{\displaystyle \gamma _{v}:={\frac {1}{\sqrt {1-{\frac {v^{2}}{c^{2}}}}}}}is the Lorentz factor (associated with the speedv{\displaystyle v}),cis thespeed of light. There are several ways to arrive at the correct expression for four-momentum. One way is to first define the four-velocityu=dx/dτand simply definep=mu, being content that it is a four-vector with the correct units and correct behavior. Another, more satisfactory, approach is to begin with theprinciple of least actionand use theLagrangian frameworkto derive the four-momentum, including the expression for the energy.[2]One may at once, using the observations detailed below, define four-momentum from theactionS.
Given that in general for a closed system withgeneralized coordinatesqiandcanonical momentapi,[3]pi=∂S∂qi=∂S∂xi,E=−∂S∂t=−c⋅∂S∂x0,{\displaystyle p_{i}={\frac {\partial S}{\partial q_{i}}}={\frac {\partial S}{\partial x_{i}}},\quad E=-{\frac {\partial S}{\partial t}}=-c\cdot {\frac {\partial S}{\partial x^{0}}},}it is immediate (recallingx0=ct,x1=x,x2=y,x3=zandx0= −x0,x1=x1,x2=x2,x3=x3in the present metric convention) thatpμ=∂S∂xμ=(−Ec,p){\displaystyle p_{\mu }={\frac {\partial S}{\partial x^{\mu }}}=\left(-{E \over c},\mathbf {p} \right)}is a covariant four-vector with the three-vector part being the canonical momentum. Consider initially a system of one degree of freedomq. In the derivation of theequations of motionfrom the action usingHamilton's principle, one finds (generally) in an intermediate stage for thevariationof the action,δS=[∂L∂q˙δq]|t1t2+∫t1t2(∂L∂q−ddt∂L∂q˙)δqdt.{\displaystyle \delta S=\left.\left[{\frac {\partial L}{\partial {\dot {q}}}}\delta q\right]\right|_{t_{1}}^{t_{2}}+\int _{t_{1}}^{t_{2}}\left({\frac {\partial L}{\partial q}}-{\frac {d}{dt}}{\frac {\partial L}{\partial {\dot {q}}}}\right)\delta qdt.} The assumption is then that the varied paths satisfyδq(t1) =δq(t2) = 0, from whichLagrange's equationsfollow at once. When the equations of motion are known (or simply assumed to be satisfied), one may let go of the requirementδq(t2) = 0. In this case the path isassumedto satisfy the equations of motion, and the action is a function of the upper integration limitδq(t2), butt2is still fixed. The above equation becomes withS=S(q), and definingδq(t2) =δq, and letting in more degrees of freedom,δS=∑i∂L∂q˙iδqi=∑ipiδqi.{\displaystyle \delta S=\sum _{i}{\frac {\partial L}{\partial {\dot {q}}_{i}}}\delta q_{i}=\sum _{i}p_{i}\delta q_{i}.} Observing thatδS=∑i∂S∂qiδqi,{\displaystyle \delta S=\sum _{i}{\frac {\partial S}{\partial {q}_{i}}}\delta q_{i},}one concludespi=∂S∂qi.{\displaystyle p_{i}={\frac {\partial S}{\partial q_{i}}}.} In a similar fashion, keep endpoints fixed, but lett2=tvary. This time, the system is allowed to move through configuration space at "arbitrary speed" or with "more or less energy", the field equations still assumed to hold and variation can be carried out on the integral, but instead observedSdt=L{\displaystyle {\frac {dS}{dt}}=L}by thefundamental theorem of calculus. Compute using the above expression for canonical momenta,dSdt=∂S∂t+∑i∂S∂qiq˙i=∂S∂t+∑ipiq˙i=L.{\displaystyle {\frac {dS}{dt}}={\frac {\partial S}{\partial t}}+\sum _{i}{\frac {\partial S}{\partial q_{i}}}{\dot {q}}_{i}={\frac {\partial S}{\partial t}}+\sum _{i}p_{i}{\dot {q}}_{i}=L.} Now usingH=∑ipiq˙i−L,{\displaystyle H=\sum _{i}p_{i}{\dot {q}}_{i}-L,}whereHis theHamiltonian, leads to, sinceE=Hin the present case,E=H=−∂S∂t.{\displaystyle E=H=-{\frac {\partial S}{\partial t}}.} Incidentally, usingH=H(q,p,t)withp=⁠∂S/∂q⁠in the above equation yields theHamilton–Jacobi equations. In this context,Sis calledHamilton's principal function. The actionSis given byS=−mc∫ds=∫Ldt,L=−mc21−v2c2,{\displaystyle S=-mc\int ds=\int Ldt,\quad L=-mc^{2}{\sqrt {1-{\frac {v^{2}}{c^{2}}}}},}whereLis the relativisticLagrangianfor a free particle. 
From this, The variation of the action isδS=−mc∫δds.{\displaystyle \delta S=-mc\int \delta ds.} To calculateδds, observe first thatδds2= 2dsδdsand thatδds2=δημνdxμdxν=ημν(δ(dxμ)dxν+dxμδ(dxν))=2ημνδ(dxμ)dxν.{\displaystyle \delta ds^{2}=\delta \eta _{\mu \nu }dx^{\mu }dx^{\nu }=\eta _{\mu \nu }\left(\delta \left(dx^{\mu }\right)dx^{\nu }+dx^{\mu }\delta \left(dx^{\nu }\right)\right)=2\eta _{\mu \nu }\delta \left(dx^{\mu }\right)dx^{\nu }.} Soδds=ημνδdxμdxνds=ημνdδxμdxνds,{\displaystyle \delta ds=\eta _{\mu \nu }\delta dx^{\mu }{\frac {dx^{\nu }}{ds}}=\eta _{\mu \nu }d\delta x^{\mu }{\frac {dx^{\nu }}{ds}},}orδds=ημνdδxμdτdxνcdτdτ,{\displaystyle \delta ds=\eta _{\mu \nu }{\frac {d\delta x^{\mu }}{d\tau }}{\frac {dx^{\nu }}{cd\tau }}d\tau ,}and thusδS=−m∫ημνdδxμdτdxνdτdτ=−m∫ημνdδxμdτuνdτ=−m∫ημν[ddτ(δxμuν)−δxμddτuν]dτ{\displaystyle \delta S=-m\int \eta _{\mu \nu }{\frac {d\delta x^{\mu }}{d\tau }}{\frac {dx^{\nu }}{d\tau }}d\tau =-m\int \eta _{\mu \nu }{\frac {d\delta x^{\mu }}{d\tau }}u^{\nu }d\tau =-m\int \eta _{\mu \nu }\left[{\frac {d}{d\tau }}\left(\delta x^{\mu }u^{\nu }\right)-\delta x^{\mu }{\frac {d}{d\tau }}u^{\nu }\right]d\tau }which is justδS=[−muμδxμ]t1t2+m∫t1t2δxμduμdsds{\displaystyle \delta S=\left[-mu_{\mu }\delta x^{\mu }\right]_{t_{1}}^{t_{2}}+m\int _{t_{1}}^{t_{2}}\delta x^{\mu }{\frac {du_{\mu }}{ds}}ds} δS=[−muμδxμ]t1t2+m∫t1t2δxμduμdsds=−muμδxμ=∂S∂xμδxμ=−pμδxμ,{\displaystyle \delta S=\left[-mu_{\mu }\delta x^{\mu }\right]_{t_{1}}^{t_{2}}+m\int _{t_{1}}^{t_{2}}\delta x^{\mu }{\frac {du_{\mu }}{ds}}ds=-mu_{\mu }\delta x^{\mu }={\frac {\partial S}{\partial x^{\mu }}}\delta x^{\mu }=-p_{\mu }\delta x^{\mu },} where the second step employs the field equationsduμ/ds= 0,(δxμ)t1= 0, and(δxμ)t2≡δxμas in the observations above. Now compare the last three expressions to findpμ=∂μ[S]=∂S∂xμ=muμ=m(c1−v2c2,vx1−v2c2,vy1−v2c2,vz1−v2c2),{\displaystyle p^{\mu }=\partial ^{\mu }[S]={\frac {\partial S}{\partial x_{\mu }}}=mu^{\mu }=m\left({\frac {c}{\sqrt {1-{\frac {v^{2}}{c^{2}}}}}},{\frac {v_{x}}{\sqrt {1-{\frac {v^{2}}{c^{2}}}}}},{\frac {v_{y}}{\sqrt {1-{\frac {v^{2}}{c^{2}}}}}},{\frac {v_{z}}{\sqrt {1-{\frac {v^{2}}{c^{2}}}}}}\right),}with norm−m2c2, and the famed result for the relativistic energy, E=mc21−v2c2=mrc2,{\displaystyle E={\frac {mc^{2}}{\sqrt {1-{\frac {v^{2}}{c^{2}}}}}}=m_{r}c^{2},} wheremris the now unfashionablerelativistic mass, follows. By comparing the expressions for momentum and energy directly, one has p=Evc2,{\displaystyle \mathbf {p} =E{\frac {\mathbf {v} }{c^{2}}},} that holds for massless particles as well. Squaring the expressions for energy and three-momentum and relating them gives theenergy–momentum relation, E2c2=p⋅p+m2c2.{\displaystyle {\frac {E^{2}}{c^{2}}}=\mathbf {p} \cdot \mathbf {p} +m^{2}c^{2}.} Substitutingpμ↔−∂S∂xμ{\displaystyle p_{\mu }\leftrightarrow -{\frac {\partial S}{\partial x^{\mu }}}}in the equation for the norm gives the relativisticHamilton–Jacobi equation,[4] ημν∂S∂xμ∂S∂xν=−m2c2.{\displaystyle \eta ^{\mu \nu }{\frac {\partial S}{\partial x^{\mu }}}{\frac {\partial S}{\partial x^{\nu }}}=-m^{2}c^{2}.} It is also possible to derive the results from the Lagrangian directly. 
By definition,[5]p=∂L∂v=(∂L∂x˙,∂L∂y˙,∂L∂z˙)=m(γvx,γvy,γvz)=mγv=mu,E=p⋅v−L=mc21−v2c2,{\displaystyle {\begin{aligned}\mathbf {p} &={\frac {\partial L}{\partial \mathbf {v} }}=\left({\partial L \over \partial {\dot {x}}},{\partial L \over \partial {\dot {y}}},{\partial L \over \partial {\dot {z}}}\right)=m(\gamma v_{x},\gamma v_{y},\gamma v_{z})=m\gamma \mathbf {v} =m\mathbf {u} ,\\[3pt]E&=\mathbf {p} \cdot \mathbf {v} -L={\frac {mc^{2}}{\sqrt {1-{\frac {v^{2}}{c^{2}}}}}},\end{aligned}}}which constitute the standard formulae for canonical momentum and energy of a closed (time-independent Lagrangian) system. With this approach it is less clear that the energy and momentum are parts of a four-vector. The energy and the three-momentum areseparately conservedquantities for isolated systems in the Lagrangian framework. Hence four-momentum is conserved as well. More on this below. More pedestrian approaches include expected behavior in electrodynamics.[6]In this approach, the starting point is application ofLorentz force lawandNewton's second lawin the rest frame of the particle. The transformation properties of the electromagnetic field tensor, including invariance ofelectric charge, are then used to transform to the lab frame, and the resulting expression (again Lorentz force law) is interpreted in the spirit of Newton's second law, leading to the correct expression for the relativistic three- momentum. The disadvantage, of course, is that it isn't immediately clear that the result applies to all particles, whether charged or not, and that it doesn't yield the complete four-vector. It is also possible to avoid electromagnetism and use well tuned experiments of thought involving well-trained physicists throwing billiard balls, utilizing knowledge of thevelocity addition formulaand assuming conservation of momentum.[7][8]This too gives only the three-vector part. As shown above, there are three conservation laws (not independent, the last two imply the first and vice versa): Note that the invariant mass of a system of particles may be more than the sum of the particles' rest masses, sincekinetic energyin the system center-of-mass frame andpotential energyfrom forces between the particles contribute to the invariant mass. As an example, two particles with four-momenta(5 GeV/c, 4 GeV/c, 0, 0)and(5 GeV/c, −4 GeV/c, 0, 0)each have (rest) mass 3GeV/c2separately, but their total mass (the system mass) is 10GeV/c2. If these particles were to collide and stick, the mass of the composite object would be 10GeV/c2. One practical application fromparticle physicsof the conservation of theinvariant massinvolves combining the four-momentapAandpBof two daughter particles produced in the decay of a heavier particle with four-momentumpCto find the mass of the heavier particle. Conservation of four-momentum givespCμ=pAμ+pBμ, while the massMof the heavier particle is given by−PC⋅PC=M2c2. By measuring the energies and three-momenta of the daughter particles, one can reconstruct the invariant mass of the two-particle system, which must be equal toM. This technique is used, e.g., in experimental searches forZ′ bosonsat high-energy particlecolliders, where the Z′ boson would show up as a bump in the invariant mass spectrum ofelectron–positronormuon–antimuon pairs. If the mass of an object does not change, the Minkowski inner product of its four-momentum and correspondingfour-accelerationAμis simply zero. 
The four-acceleration is proportional to the proper time derivative of the four-momentum divided by the particle's mass, sopμAμ=ημνpμAν=ημνpμddτpνm=12mddτp⋅p=12mddτ(−m2c2)=0.{\displaystyle p^{\mu }A_{\mu }=\eta _{\mu \nu }p^{\mu }A^{\nu }=\eta _{\mu \nu }p^{\mu }{\frac {d}{d\tau }}{\frac {p^{\nu }}{m}}={\frac {1}{2m}}{\frac {d}{d\tau }}p\cdot p={\frac {1}{2m}}{\frac {d}{d\tau }}\left(-m^{2}c^{2}\right)=0.} For acharged particleofchargeq, moving in an electromagnetic field given by theelectromagnetic four-potential:A=(A0,A1,A2,A3)=(ϕc,Ax,Ay,Az){\displaystyle A=\left(A^{0},A^{1},A^{2},A^{3}\right)=\left({\phi \over c},A_{x},A_{y},A_{z}\right)}whereφis thescalar potentialandA= (Ax,Ay,Az)thevector potential, the components of the (notgauge-invariant) canonical momentum four-vectorPisPμ=pμ+qAμ.{\displaystyle P^{\mu }=p^{\mu }+qA^{\mu }.} This, in turn, allows the potential energy from the charged particle in an electrostatic potential and theLorentz forceon the charged particle moving in a magnetic field to be incorporated in a compact way, inrelativistic quantum mechanics. In the case when there is a moving physical system with a continuous distribution of matter in curved spacetime, the primary expression for four-momentum is a four-vector with covariant index:[9] Four-momentumPμ{\displaystyle P_{\mu }}is expressed through the energyE{\displaystyle E}of physical system and relativistic momentumP{\displaystyle \mathbf {P} }. At the same time, the four-momentumPμ{\displaystyle P_{\mu }}can be represented as the sum of two non-local four-vectors of integral type: Four-vectorpμ{\displaystyle p_{\mu }}is the generalized four-momentum associated with the action of fields on particles; four-vectorKμ{\displaystyle K_{\mu }}is the four-momentum of the fields arising from the action of particles on the fields. EnergyE{\displaystyle E}and momentumP{\displaystyle \mathbf {P} }, as well as components of four-vectorspμ{\displaystyle p_{\mu }}andKμ{\displaystyle K_{\mu }}can be calculated if theLagrangiandensityL=Lp+Lf{\displaystyle {\mathcal {L}}={\mathcal {L}}_{p}+{\mathcal {L}}_{f}}of the system is given. The following formulas are obtained for the energy and momentum of the system: HereLp{\displaystyle {\mathcal {L}}_{p}}is that part of the Lagrangian density that contains terms with four-currents;v{\displaystyle \mathbf {v} }is the velocity of matter particles;u0{\displaystyle u^{0}}is the time component of four-velocity of particles;g{\displaystyle g}is determinant of metric tensor;Lf=∫VLf−gdx1dx2dx3{\displaystyle L_{f}=\int _{V}{\mathcal {L}}_{f}{\sqrt {-g}}dx^{1}dx^{2}dx^{3}}is the part of the Lagrangian associated with the Lagrangian densityLf{\displaystyle {\mathcal {L}}_{f}};vn{\displaystyle \mathbf {v} _{n}}is velocity of a particle of matter with numbern{\displaystyle n}.
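The two-particle example and the invariant-mass reconstruction described above can be reproduced in a few lines. A Python/NumPy sketch (energies and momenta in GeV with c = 1, and the (−1, 1, 1, 1) metric used in this article):

    import numpy as np

    eta = np.diag([-1.0, 1.0, 1.0, 1.0])            # metric with signature (-1, 1, 1, 1)

    # Four-momenta (E/c, p_x, p_y, p_z) of the two particles quoted above, in GeV/c.
    pA = np.array([5.0,  4.0, 0.0, 0.0])
    pB = np.array([5.0, -4.0, 0.0, 0.0])

    def invariant_mass(p):
        """Rest mass (GeV/c^2, with c = 1) from a four-momentum via -p.p = m^2 c^2."""
        return np.sqrt(-(eta @ p @ p))

    print(invariant_mass(pA), invariant_mass(pB))   # 3.0 3.0  (each daughter)
    print(invariant_mass(pA + pB))                  # 10.0     (system / parent mass M)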
https://en.wikipedia.org/wiki/Four-momentum
In thespecial theory of relativity,four-forceis afour-vectorthat replaces the classicalforce. The four-force is defined as the rate of change in thefour-momentumof a particle with respect to the particle'sproper time. Hence,: F=dPdτ.{\displaystyle \mathbf {F} ={\mathrm {d} \mathbf {P} \over \mathrm {d} \tau }.} For a particle of constantinvariant massm>0{\displaystyle m>0}, the four-momentum is given by the relationP=mU{\displaystyle \mathbf {P} =m\mathbf {U} }, whereU=γ(c,u){\displaystyle \mathbf {U} =\gamma (c,\mathbf {u} )}is thefour-velocity. In analogy toNewton's second law, we can also relate the four-force to thefour-acceleration,A{\displaystyle \mathbf {A} }, by equation: F=mA=(γf⋅uc,γf).{\displaystyle \mathbf {F} =m\mathbf {A} =\left(\gamma {\mathbf {f} \cdot \mathbf {u} \over c},\gamma {\mathbf {f} }\right).} Here f=ddt(γmu)=dpdt{\displaystyle {\mathbf {f} }={\mathrm {d} \over \mathrm {d} t}\left(\gamma m{\mathbf {u} }\right)={\mathrm {d} \mathbf {p} \over \mathrm {d} t}} and f⋅u=ddt(γmc2)=dEdt.{\displaystyle {\mathbf {f} \cdot \mathbf {u} }={\mathrm {d} \over \mathrm {d} t}\left(\gamma mc^{2}\right)={\mathrm {d} E \over \mathrm {d} t}.} whereu{\displaystyle \mathbf {u} },p{\displaystyle \mathbf {p} }andf{\displaystyle \mathbf {f} }are3-spacevectors describing the velocity, the momentum of the particle and the force acting on it respectively; andE{\displaystyle E}is the total energy of the particle. From the formulae of the previous section it appears that the time component of the four-force is the power expended,f⋅u{\displaystyle \mathbf {f} \cdot \mathbf {u} }, apart from relativistic correctionsγ/c{\displaystyle \gamma /c}. This is only true in purely mechanical situations, when heat exchanges vanish or can be neglected. In the full thermo-mechanical case, not onlywork, but alsoheatcontributes to the change in energy, which is the time component of theenergy–momentum covector. The time component of the four-force includes in this case a heating rateh{\displaystyle h}, besides the powerf⋅u{\displaystyle \mathbf {f} \cdot \mathbf {u} }.[1]Note that work and heat cannot be meaningfully separated, though, as they both carry inertia.[2]This fact extends also to contact forces, that is, to thestress–energy–momentum tensor.[3][2] Therefore, in thermo-mechanical situations the time component of the four-force isnotproportional to the powerf⋅u{\displaystyle \mathbf {f} \cdot \mathbf {u} }but has a more generic expression, to be given case by case, which represents the supply of internal energy from the combination of work and heat,[2][1][4][3]and which in the Newtonian limit becomesh+f⋅u{\displaystyle h+\mathbf {f} \cdot \mathbf {u} }. Ingeneral relativitythe relation between four-force, andfour-accelerationremains the same, but the elements of the four-force are related to the elements of thefour-momentumthrough acovariant derivativewith respect to proper time. Fλ:=DPλdτ=dPλdτ+ΓλμνUμPν{\displaystyle F^{\lambda }:={\frac {DP^{\lambda }}{d\tau }}={\frac {dP^{\lambda }}{d\tau }}+\Gamma ^{\lambda }{}_{\mu \nu }U^{\mu }P^{\nu }} In addition, we can formulate force using the concept ofcoordinate transformationsbetween different coordinate systems. Assume that we know the correct expression for force in a coordinate system at which the particle is momentarily at rest. 
Then we can perform a transformation to another system to get the corresponding expression of force.[5]Inspecial relativitythe transformation will be a Lorentz transformation between coordinate systems moving with a relative constant velocity whereas ingeneral relativityit will be a general coordinate transformation. Consider the four-forceFμ=(F0,F){\displaystyle F^{\mu }=(F^{0},\mathbf {F} )}acting on a particle of massm{\displaystyle m}which is momentarily at rest in a coordinate system. The relativistic forcefμ{\displaystyle f^{\mu }}in another coordinate system moving with constant velocityv{\displaystyle v}, relative to the other one, is obtained using a Lorentz transformation: f=F+(γ−1)vv⋅Fv2,f0=γβ⋅F=β⋅f.{\displaystyle {\begin{aligned}\mathbf {f} &=\mathbf {F} +(\gamma -1)\mathbf {v} {\mathbf {v} \cdot \mathbf {F} \over v^{2}},\\f^{0}&=\gamma {\boldsymbol {\beta }}\cdot \mathbf {F} ={\boldsymbol {\beta }}\cdot \mathbf {f} .\end{aligned}}} whereβ=v/c{\displaystyle {\boldsymbol {\beta }}=\mathbf {v} /c}. Ingeneral relativity, the expression for force becomes fμ=mDUμdτ{\displaystyle f^{\mu }=m{DU^{\mu } \over d\tau }} withcovariant derivativeD/dτ{\displaystyle D/d\tau }. The equation of motion becomes md2xμdτ2=fμ−mΓνλμdxνdτdxλdτ,{\displaystyle m{d^{2}x^{\mu } \over d\tau ^{2}}=f^{\mu }-m\Gamma _{\nu \lambda }^{\mu }{dx^{\nu } \over d\tau }{dx^{\lambda } \over d\tau },} whereΓνλμ{\displaystyle \Gamma _{\nu \lambda }^{\mu }}is theChristoffel symbol. If there is no external force, this becomes the equation forgeodesicsin thecurved space-time. The second term in the above equation, plays the role of a gravitational force. Ifffα{\displaystyle f_{f}^{\alpha }}is the correct expression for force in a freely falling frameξα{\displaystyle \xi ^{\alpha }}, we can use then theequivalence principleto write the four-force in an arbitrary coordinatexμ{\displaystyle x^{\mu }}: fμ=∂xμ∂ξαffα.{\displaystyle f^{\mu }={\partial x^{\mu } \over \partial \xi ^{\alpha }}f_{f}^{\alpha }.} In special relativity,Lorentz four-force(four-force acting on a charged particle situated in an electromagnetic field) can be expressed as:fμ=qFμνUν,{\displaystyle f_{\mu }=qF_{\mu \nu }U^{\nu },} where
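Here q is the particle's charge, F_μν is the electromagnetic field tensor and U^ν is the four-velocity. A Python/NumPy sketch (c = 1, illustrative field and velocity values, and one common sign convention for F_μν together with the (+, −, −, −) metric, both assumptions of this sketch) confirms that the spatial part of the Lorentz four-force reduces to the familiar γq(E + u × B):

    import numpy as np

    c = 1.0
    q = 1.0
    E3 = np.array([0.2, 0.0, 0.5])            # electric field (illustrative values)
    B3 = np.array([0.0, 0.3, -0.1])           # magnetic field
    u  = np.array([0.4, -0.2, 0.1])           # particle 3-velocity
    g  = 1.0 / np.sqrt(1.0 - u @ u)
    U  = g * np.array([c, *u])                # four-velocity U^nu

    # Covariant field tensor F_{mu nu}, assuming the (+, -, -, -) convention:
    Ex, Ey, Ez = E3 / c
    Bx, By, Bz = B3
    F = np.array([[0.0,  Ex,   Ey,   Ez ],
                  [-Ex,  0.0, -Bz,   By ],
                  [-Ey,  Bz,   0.0, -Bx ],
                  [-Ez, -By,   Bx,   0.0]])

    f_cov = q * F @ U                         # f_mu = q F_{mu nu} U^nu
    eta = np.diag([1.0, -1.0, -1.0, -1.0])
    f_contra = eta @ f_cov                    # raise the index

    print(np.allclose(f_contra[1:], g * q * (E3 + np.cross(u, B3))))   # True
    print(np.isclose(f_contra[0], g * q * (E3 @ u) / c))               # True (power / c)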
https://en.wikipedia.org/wiki/Four-force
Inspecialandgeneral relativity, thefour-current(technically thefour-current density)[1]is thefour-dimensionalanalogue of thecurrent density, with units of charge per unit time per unit area. Also known asvector current, it is used in the geometric context offour-dimensionalspacetime, rather than separating time fromthree-dimensionalspace. Mathematically it is afour-vectorand isLorentz covariant. This article uses thesummation conventionfor indices. Seecovariance and contravariance of vectorsfor background on raised and lowered indices, andraising and lowering indiceson how to switch between them. Using theMinkowski metricημν{\displaystyle \eta _{\mu \nu }}ofmetric signature(+ − − −), the four-current components are given by: where: This can also be expressed in terms of thefour-velocityby the equation:[2][3] where: Qualitatively, the change in charge density (charge per unit volume) is due to the contracted volume of charge due toLorentz contraction. Charges (free or as a distribution) at rest will appear to remain at the same spatial position for some interval of time (as long as they're stationary). When they do move, this corresponds to changes in position, therefore the charges have velocity, and the motion of charge constitutes an electric current. This means that charge density is related to time, while current density is related to space. The four-current unifies charge density (related to electricity) and current density (related to magnetism) in one electromagnetic entity. In special relativity, the statement ofcharge conservationis that theLorentz invariantdivergence ofJis zero:[4] where∂/∂xα{\displaystyle \partial /\partial x^{\alpha }}is thefour-gradient. This is thecontinuity equation. In general relativity, the continuity equation is written as: where the semi-colon represents acovariant derivative. The four-current appears in two equivalent formulations ofMaxwell's equations, in terms of thefour-potential[5]when theLorenz gauge conditionis fulfilled: where◻{\displaystyle \Box }is theD'Alembert operator, or theelectromagnetic field tensor: whereμ0is thepermeability of free spaceand ∇αis thecovariant derivative. Ingeneral relativity, the four-current is defined as the divergence of the electromagnetic displacement, defined as: then: The four-current density of charge is an essential component of the Lagrangian density used in quantum electrodynamics.[6]In 1956Semyon GershteinandYakov Zeldovichconsidered the conserved vector current (CVC) hypothesis for electroweak interactions.[7][8][9]
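A short numerical sketch of the statements above, assuming the usual expression of the four-current through the four-velocity, J^α = ρ₀u^α with ρ₀ the charge density in the rest frame of the charges (Python/NumPy, c = 1, illustrative numbers):

    import numpy as np

    c = 1.0
    rho0 = 2.5                           # charge density in the charges' rest frame
    v = np.array([0.6, 0.0, 0.0])        # drift velocity of the charges
    g = 1.0 / np.sqrt(1.0 - v @ v)

    u4 = g * np.array([c, *v])           # four-velocity of the charge distribution
    J = rho0 * u4                        # four-current density J^alpha = rho0 u^alpha

    rho = J[0] / c                       # charge density measured in this frame
    j = J[1:]                            # current density measured in this frame
    print(np.isclose(rho, g * rho0))     # True: density enhanced by Lorentz contraction
    print(np.allclose(j, rho * v))       # True: j = rho v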
https://en.wikipedia.org/wiki/Four-current
Anelectromagnetic four-potentialis arelativisticvector functionfrom which theelectromagnetic fieldcan be derived. It combines both anelectric scalar potentialand amagnetic vector potentialinto a singlefour-vector.[1] As measured in a givenframe of reference, and for a givengauge, the first component of the electromagnetic four-potential is conventionally taken to be the electric scalar potential, and the other three components make up the magnetic vector potential. While both the scalar and vector potential depend upon the frame, the electromagnetic four-potential isLorentz covariant. Like other potentials, many different electromagnetic four-potentials correspond to the same electromagnetic field, depending upon the choice of gauge. This article usestensor index notationand theMinkowski metricsign convention(+ − − −). See alsocovariance and contravariance of vectorsandraising and lowering indicesfor more details on notation. Formulae are given inSI unitsandGaussian-cgs units. The contravariantelectromagnetic four-potentialcan be defined as:[2] in whichϕis theelectric potential, andAis themagnetic potential(avector potential). The unit ofAαisV·s·m−1in SI, andMx·cm−1inGaussian-CGS. The electric and magnetic fields associated with these four-potentials are:[3] Inspecial relativity, the electric and magnetic fields transform underLorentz transformations. This can be written in the form of a rank twotensor– theelectromagnetic tensor. The 16 contravariant components of the electromagnetic tensor, usingMinkowski metricconvention(+ − − −), are written in terms of the electromagnetic four-potential and thefour-gradientas: If the said signature is instead(− + + +)then: This essentially defines the four-potential in terms of physically observable quantities, as well as reducing to the above definition. Often, theLorenz gauge condition∂αAα=0{\displaystyle \partial _{\alpha }A^{\alpha }=0}in aninertial frame of referenceis employed to simplifyMaxwell's equationsas:[2] whereJαare the components of thefour-current, and is thed'Alembertianoperator. In terms of the scalar and vector potentials, this last equation becomes: For a given charge and current distribution,ρ(r,t)andj(r,t), the solutions to these equations in SI units are:[3] where is theretarded time. This is sometimes also expressed with where the square brackets are meant to indicate that the time should be evaluated at the retarded time. Of course, since the above equations are simply the solution to aninhomogeneousdifferential equation, any solution to the homogeneous equation can be added to these to satisfy theboundary conditions. These homogeneous solutions in general represent waves propagating from sources outside the boundary. When the integrals above are evaluated for typical cases, e.g. 
of an oscillating current (or charge), they are found to give both a magnetic field component varying according tor−2(theinduction field) and a component decreasing asr−1(theradiation field). Whenflattenedto aone-form(in tensor notation,Aμ{\displaystyle A_{\mu }}), the four-potentialA{\displaystyle A}(normally written as a vector, orAμ{\displaystyle A^{\mu }}in tensor notation) can be decomposed via theHodge decomposition theoremas the sum of anexact, a coexact, and a harmonic form,A=dα+δβ+γ.{\displaystyle A=d\alpha +\delta \beta +\gamma .}There isgauge freedominAin that of the three forms in this decomposition, only the coexact form has any effect on theelectromagnetic tensorF=dA.{\displaystyle F=dA.}Exact forms are closed, as are harmonic forms over an appropriate domain, soddα=0{\displaystyle dd\alpha =0}anddγ=0{\displaystyle d\gamma =0}, always. So regardless of whatα{\displaystyle \alpha }andγ{\displaystyle \gamma }are, we are left with simplyF=dδβ.{\displaystyle F=d\delta \beta .}In infinite flat Minkowski space, every closed form is exact. Therefore theγ{\displaystyle \gamma }term vanishes. Every gauge transform ofA{\displaystyle A}can thus be written asA↦A+dα.{\displaystyle A\mapsto A+d\alpha .}
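In the Lorenz gauge each component of the four-potential obeys a wave equation sourced by the corresponding four-current component, so in empty space it satisfies □A^α = 0. A SymPy sketch (symbol names are illustrative) checks this for a plane wave obeying the vacuum dispersion relation ω = c|k|:

    import sympy as sp

    t, x, y, z = sp.symbols('t x y z', real=True)
    c, kx, ky, kz = sp.symbols('c k_x k_y k_z', positive=True)

    omega = c * sp.sqrt(kx**2 + ky**2 + kz**2)       # vacuum dispersion relation
    phase = kx * x + ky * y + kz * z - omega * t

    A_comp = sp.sin(phase)                           # one component of a plane-wave potential

    # d'Alembertian (1/c^2) d^2/dt^2 - Laplacian applied to that component:
    box_A = (sp.diff(A_comp, t, 2) / c**2
             - sp.diff(A_comp, x, 2) - sp.diff(A_comp, y, 2) - sp.diff(A_comp, z, 2))
    print(sp.simplify(box_A))                        # 0: the source-free wave equation holds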
https://en.wikipedia.org/wiki/Four-potential
Thefour-frequencyof amassless particle, such as aphoton, is afour-vectordefined byNa=ν(1,n^),{\displaystyle N^{a}=\nu \left(1,{\hat {\mathbf {n} }}\right),}whereν{\displaystyle \nu }is the photon'sfrequencyandn^{\displaystyle {\hat {\mathbf {n} }}}is aunit vectorin the direction of the photon's motion. The four-frequency of a photon is always a future-pointing andnull vector. An observer moving withfour-velocityVb{\displaystyle V^{b}}will observe a frequencyνobs=1cηabNaVb,{\displaystyle \nu _{\text{obs}}={\frac {1}{c}}\eta _{ab}N^{a}V^{b},}whereη{\displaystyle \eta }is theMinkowski inner-product(+−−−) withcovariant componentsηab{\displaystyle \eta _{ab}}. Closely related to the four-frequency is thefour-wavevectordefined byKa=(ωc,k),{\displaystyle K^{a}=\left({\frac {\omega }{c}},\mathbf {k} \right),}whereω=2πν{\displaystyle \omega =2\pi \nu },c{\displaystyle c}is the speed of light andk=2πλn^{\textstyle \mathbf {k} ={\frac {2\pi }{\lambda }}{\hat {\mathbf {n} }}}andλ{\displaystyle \lambda }is thewavelengthof the photon. The four-wavevector is more often used in practice than the four-frequency, but the two vectors are related (usingc=νλ{\displaystyle c=\nu \lambda }) byKa=2πcNa.{\displaystyle K^{a}={\frac {2\pi }{c}}N^{a}.}
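A numerical sketch of the observed-frequency formula (Python/NumPy, c = 1, an observer receding along the photon's direction at 0.6c), recovering the familiar relativistic Doppler factor γν(1 − n̂·v/c):

    import numpy as np

    c = 1.0
    nu = 1.0                                   # photon frequency in the source frame
    n = np.array([1.0, 0.0, 0.0])              # photon propagation direction
    N = nu * np.array([1.0, *n])               # four-frequency N^a = nu (1, n)

    v = np.array([0.6, 0.0, 0.0])              # observer moving along the photon direction
    g = 1.0 / np.sqrt(1.0 - v @ v)
    V = g * np.array([c, *v])                  # observer four-velocity

    eta = np.diag([1.0, -1.0, -1.0, -1.0])     # (+, -, -, -), as stated above
    nu_obs = (eta @ N @ V) / c

    print(nu_obs)                              # 0.5
    print(g * nu * (1 - (n @ v) / c))          # 0.5: same Doppler-shifted frequency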
https://en.wikipedia.org/wiki/Four-frequency
Inspecial relativity, afour-vector(or4-vector, sometimesLorentz vector)[1]is an object with four components, which transform in a specific way underLorentz transformations. Specifically, a four-vector is an element of a four-dimensionalvector spaceconsidered as arepresentation spaceof thestandard representationof theLorentz group, the (⁠1/2⁠,⁠1/2⁠) representation. It differs from aEuclidean vectorin how its magnitude is determined. The transformations that preserve this magnitude are the Lorentz transformations, which includespatial rotationsandboosts(a change by a constant velocity to anotherinertial reference frame).[2]: ch1 Four-vectors describe, for instance, positionxμin spacetime modeled asMinkowski space, a particle'sfour-momentumpμ, the amplitude of theelectromagnetic four-potentialAμ(x)at a pointxin spacetime, and the elements of the subspace spanned by thegamma matricesinside theDirac algebra. The Lorentz group may be represented by 4×4 matricesΛ. The action of a Lorentz transformation on a generalcontravariantfour-vectorX(like the examples above), regarded as a column vector withCartesian coordinateswith respect to aninertial framein the entries, is given by X′=ΛX,{\displaystyle X'=\Lambda X,} (matrix multiplication) where the components of the primed object refer to the new frame. Related to the examples above that are given as contravariant vectors, there are also the correspondingcovariant vectorsxμ,pμandAμ(x). These transform according to the rule X′=(Λ−1)TX,{\displaystyle X'=\left(\Lambda ^{-1}\right)^{\textrm {T}}X,} whereTdenotes thematrix transpose. This rule is different from the above rule. It corresponds to thedual representationof the standard representation. However, for the Lorentz group the dual of any representation isequivalentto the original representation. Thus the objects with covariant indices are four-vectors as well. For an example of a well-behaved four-component object in special relativity that isnota four-vector, seebispinor. It is similarly defined, the difference being that the transformation rule under Lorentz transformations is given by a representation other than the standard representation. In this case, the rule readsX′= Π(Λ)X, whereΠ(Λ)is a 4×4 matrix other thanΛ. Similar remarks apply to objects with fewer or more components that are well-behaved under Lorentz transformations. These includescalars,spinors,tensorsand spinor-tensors. The article considers four-vectors in the context of special relativity. Although the concept of four-vectors also extends togeneral relativity, some of the results stated in this article require modification in general relativity. The notations in this article are: lowercase bold forthree-dimensionalvectors, hats for three-dimensionalunit vectors, capital bold forfour dimensionalvectors (except for the four-gradient), andtensor index notation. Afour-vectorAis a vector with a "timelike" component and three "spacelike" components, and can be written in various equivalent notations:[3] A=(A0,A1,A2,A3)=A0E0+A1E1+A2E2+A3E3=A0E0+AiEi=AαEα{\displaystyle {\begin{aligned}\mathbf {A} &=\left(A^{0},\,A^{1},\,A^{2},\,A^{3}\right)\\&=A^{0}\mathbf {E} _{0}+A^{1}\mathbf {E} _{1}+A^{2}\mathbf {E} _{2}+A^{3}\mathbf {E} _{3}\\&=A^{0}\mathbf {E} _{0}+A^{i}\mathbf {E} _{i}\\&=A^{\alpha }\mathbf {E} _{\alpha }\end{aligned}}} whereAαis the magnitude component andEαis thebasis vectorcomponent; note that both are necessary to make a vector, and that whenAαis seen alone, it refers strictly to thecomponentsof the vector. 
The upper indices indicatecontravariantcomponents. Here the standard convention is that Latin indices take values for spatial components, so thati= 1, 2, 3, and Greek indices take values for spaceand timecomponents, soα= 0, 1, 2, 3, used with thesummation convention. The split between the time component and the spatial components is a useful one to make when determining contractions of one four vector with other tensor quantities, such as for calculating Lorentz invariants in inner products (examples are given below), orraising and lowering indices. In special relativity, the spacelike basisE1,E2,E3and componentsA1,A2,A3are oftenCartesianbasis and components: A=(At,Ax,Ay,Az)=AtEt+AxEx+AyEy+AzEz{\displaystyle {\begin{aligned}\mathbf {A} &=\left(A_{t},\,A_{x},\,A_{y},\,A_{z}\right)\\&=A_{t}\mathbf {E} _{t}+A_{x}\mathbf {E} _{x}+A_{y}\mathbf {E} _{y}+A_{z}\mathbf {E} _{z}\\\end{aligned}}} although, of course, any other basis and components may be used, such asspherical polar coordinates A=(At,Ar,Aθ,Aϕ)=AtEt+ArEr+AθEθ+AϕEϕ{\displaystyle {\begin{aligned}\mathbf {A} &=\left(A_{t},\,A_{r},\,A_{\theta },\,A_{\phi }\right)\\&=A_{t}\mathbf {E} _{t}+A_{r}\mathbf {E} _{r}+A_{\theta }\mathbf {E} _{\theta }+A_{\phi }\mathbf {E} _{\phi }\\\end{aligned}}} orcylindrical polar coordinates, A=(At,Ar,Aθ,Az)=AtEt+ArEr+AθEθ+AzEz{\displaystyle {\begin{aligned}\mathbf {A} &=(A_{t},\,A_{r},\,A_{\theta },\,A_{z})\\&=A_{t}\mathbf {E} _{t}+A_{r}\mathbf {E} _{r}+A_{\theta }\mathbf {E} _{\theta }+A_{z}\mathbf {E} _{z}\\\end{aligned}}} or any otherorthogonal coordinates, or even generalcurvilinear coordinates. Note the coordinate labels are always subscripted as labels and are not indices taking numerical values. In general relativity, local curvilinear coordinates in a local basis must be used. Geometrically, a four-vector can still be interpreted as an arrow, but in spacetime - not just space. In relativity, the arrows are drawn as part ofMinkowski diagram(also calledspacetime diagram). In this article, four-vectors will be referred to simply as vectors. It is also customary to represent the bases bycolumn vectors: E0=(1000),E1=(0100),E2=(0010),E3=(0001){\displaystyle \mathbf {E} _{0}={\begin{pmatrix}1\\0\\0\\0\end{pmatrix}}\,,\quad \mathbf {E} _{1}={\begin{pmatrix}0\\1\\0\\0\end{pmatrix}}\,,\quad \mathbf {E} _{2}={\begin{pmatrix}0\\0\\1\\0\end{pmatrix}}\,,\quad \mathbf {E} _{3}={\begin{pmatrix}0\\0\\0\\1\end{pmatrix}}} so that: A=(A0A1A2A3){\displaystyle \mathbf {A} ={\begin{pmatrix}A^{0}\\A^{1}\\A^{2}\\A^{3}\end{pmatrix}}} The relation between thecovariantand contravariant coordinates is through theMinkowskimetric tensor(referred to as the metric),ηwhichraises and lowers indicesas follows: Aμ=ημνAν,{\displaystyle A_{\mu }=\eta _{\mu \nu }A^{\nu }\,,} and in various equivalent notations the covariant components are: A=(A0,A1,A2,A3)=A0E0+A1E1+A2E2+A3E3=A0E0+AiEi=AαEα{\displaystyle {\begin{aligned}\mathbf {A} &=(A_{0},\,A_{1},\,A_{2},\,A_{3})\\&=A_{0}\mathbf {E} ^{0}+A_{1}\mathbf {E} ^{1}+A_{2}\mathbf {E} ^{2}+A_{3}\mathbf {E} ^{3}\\&=A_{0}\mathbf {E} ^{0}+A_{i}\mathbf {E} ^{i}\\&=A_{\alpha }\mathbf {E} ^{\alpha }\\\end{aligned}}} where the lowered index indicates it to becovariant. Often the metric is diagonal, as is the case fororthogonal coordinates(seeline element), but not in generalcurvilinear coordinates. 
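A minimal Python/NumPy illustration of lowering an index with the metric, here with the (+, −, −, −) choice discussed further below (only the signs of the spatial components change):

    import numpy as np

    eta = np.diag([1.0, -1.0, -1.0, -1.0])       # Minkowski metric, (+, -, -, -) choice

    A_contra = np.array([3.0, 1.0, -2.0, 0.5])   # contravariant components A^mu
    A_co = eta @ A_contra                        # covariant components A_mu = eta_{mu nu} A^nu
    print(A_co)                                  # [ 3.  -1.   2.  -0.5]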
The bases can be represented byrow vectors: E0=(1000),E1=(0100),E2=(0010),E3=(0001),{\displaystyle {\begin{aligned}\mathbf {E} ^{0}&={\begin{pmatrix}1&0&0&0\end{pmatrix}}\,,&\mathbf {E} ^{1}&={\begin{pmatrix}0&1&0&0\end{pmatrix}}\,,\\[1ex]\mathbf {E} ^{2}&={\begin{pmatrix}0&0&1&0\end{pmatrix}}\,,&\mathbf {E} ^{3}&={\begin{pmatrix}0&0&0&1\end{pmatrix}},\end{aligned}}}so that:A=(A0A1A2A3){\displaystyle \mathbf {A} ={\begin{pmatrix}A_{0}&A_{1}&A_{2}&A_{3}\end{pmatrix}}} The motivation for the above conventions are that the inner product is a scalar, see below for details. Given two inertial or rotatedframes of reference, a four-vector is defined as a quantity which transforms according to theLorentz transformationmatrixΛ:A′=ΛA{\displaystyle \mathbf {A} '={\boldsymbol {\Lambda }}\mathbf {A} } In index notation, the contravariant and covariant components transform according to, respectively:A′μ=ΛμνAν,A′μ=ΛμνAν{\displaystyle {A'}^{\mu }=\Lambda ^{\mu }{}_{\nu }A^{\nu }\,,\quad {A'}_{\mu }=\Lambda _{\mu }{}^{\nu }A_{\nu }}in which the matrixΛhas componentsΛμνin rowμand columnν, and the matrix(Λ−1)Thas componentsΛμνin rowμand columnν. For background on the nature of this transformation definition, seetensor. All four-vectors transform in the same way, and this can be generalized to four-dimensional relativistic tensors; seespecial relativity. For two frames rotated by a fixed angleθabout an axis defined by theunit vector: n^=(n^1,n^2,n^3),{\displaystyle {\hat {\mathbf {n} }}=\left({\hat {n}}_{1},{\hat {n}}_{2},{\hat {n}}_{3}\right)\,,} without any boosts, the matrixΛhas components given by:[4] Λ00=1Λ0i=Λi0=0Λij=(δij−n^in^j)cos⁡θ−εijkn^ksin⁡θ+n^in^j{\displaystyle {\begin{aligned}\Lambda _{00}&=1\\\Lambda _{0i}=\Lambda _{i0}&=0\\\Lambda _{ij}&=\left(\delta _{ij}-{\hat {n}}_{i}{\hat {n}}_{j}\right)\cos \theta -\varepsilon _{ijk}{\hat {n}}_{k}\sin \theta +{\hat {n}}_{i}{\hat {n}}_{j}\end{aligned}}} whereδijis theKronecker delta, andεijkis thethree-dimensionalLevi-Civita symbol. The spacelike components of four-vectors are rotated, while the timelike components remain unchanged. For the case of rotations about thez-axis only, the spacelike part of the Lorentz matrix reduces to therotation matrixabout thez-axis: (A′0A′1A′2A′3)=(10000cos⁡θ−sin⁡θ00sin⁡θcos⁡θ00001)(A0A1A2A3).{\displaystyle {\begin{pmatrix}{A'}^{0}\\{A'}^{1}\\{A'}^{2}\\{A'}^{3}\end{pmatrix}}={\begin{pmatrix}1&0&0&0\\0&\cos \theta &-\sin \theta &0\\0&\sin \theta &\cos \theta &0\\0&0&0&1\\\end{pmatrix}}{\begin{pmatrix}A^{0}\\A^{1}\\A^{2}\\A^{3}\end{pmatrix}}\ .} For two frames moving at constant relative three-velocityv(not four-velocity,see below), it is convenient to denote and define the relative velocity in units ofcby: β=(β1,β2,β3)=1c(v1,v2,v3)=1cv.{\displaystyle {\boldsymbol {\beta }}=(\beta _{1},\,\beta _{2},\,\beta _{3})={\frac {1}{c}}(v_{1},\,v_{2},\,v_{3})={\frac {1}{c}}\mathbf {v} \,.} Then without rotations, the matrixΛhas components given by:[5]Λ00=γ,Λ0i=Λi0=−γβi,Λij=Λji=(γ−1)βiβjβ2+δij=(γ−1)vivjv2+δij,{\displaystyle {\begin{aligned}\Lambda _{00}&=\gamma ,\\\Lambda _{0i}=\Lambda _{i0}&=-\gamma \beta _{i},\\\Lambda _{ij}=\Lambda _{ji}&=(\gamma -1){\frac {\beta _{i}\beta _{j}}{\beta ^{2}}}+\delta _{ij}=(\gamma -1){\frac {v_{i}v_{j}}{v^{2}}}+\delta _{ij},\\\end{aligned}}}where theLorentz factoris defined by:γ=11−β⋅β,{\displaystyle \gamma ={\frac {1}{\sqrt {1-{\boldsymbol {\beta }}\cdot {\boldsymbol {\beta }}}}}\,,}andδijis theKronecker delta. 
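The general boost matrix written out above leaves the Minkowski metric invariant, ΛᵀηΛ = η, as any Lorentz transformation must. A quick Python/NumPy check with an arbitrary subluminal velocity:

    import numpy as np

    beta = np.array([0.3, -0.4, 0.2])             # v/c, arbitrary with |beta| < 1
    b2 = beta @ beta
    gamma = 1.0 / np.sqrt(1.0 - b2)

    # Boost matrix with the components quoted above.
    L = np.empty((4, 4))
    L[0, 0] = gamma
    L[0, 1:] = L[1:, 0] = -gamma * beta
    L[1:, 1:] = (gamma - 1.0) * np.outer(beta, beta) / b2 + np.eye(3)

    eta = np.diag([1.0, -1.0, -1.0, -1.0])        # (+, -, -, -) Minkowski metric
    print(np.allclose(L.T @ eta @ L, eta))        # True: the boost preserves the metric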
Contrary to the case for pure rotations, the spacelike and timelike components are mixed together under boosts. For the case of a boost in thex-direction only, the matrix reduces to;[6][7] (A′0A′1A′2A′3)=(cosh⁡ϕ−sinh⁡ϕ00−sinh⁡ϕcosh⁡ϕ0000100001)(A0A1A2A3){\displaystyle {\begin{pmatrix}A'^{0}\\A'^{1}\\A'^{2}\\A'^{3}\end{pmatrix}}={\begin{pmatrix}\cosh \phi &-\sinh \phi &0&0\\-\sinh \phi &\cosh \phi &0&0\\0&0&1&0\\0&0&0&1\\\end{pmatrix}}{\begin{pmatrix}A^{0}\\A^{1}\\A^{2}\\A^{3}\end{pmatrix}}} Where therapidityϕexpression has been used, written in terms of thehyperbolic functions:γ=cosh⁡ϕ{\displaystyle \gamma =\cosh \phi } This Lorentz matrix illustrates the boost to be ahyperbolic rotationin four dimensional spacetime, analogous to the circular rotation above in three-dimensional space. Four-vectors have the samelinearity propertiesasEuclidean vectorsinthree dimensions. They can be added in the usual entrywise way:A+B=(A0,A1,A2,A3)+(B0,B1,B2,B3)=(A0+B0,A1+B1,A2+B2,A3+B3){\displaystyle {\begin{aligned}\mathbf {A} +\mathbf {B} &=\left(A^{0},A^{1},A^{2},A^{3}\right)+\left(B^{0},B^{1},B^{2},B^{3}\right)\\&=\left(A^{0}+B^{0},A^{1}+B^{1},A^{2}+B^{2},A^{3}+B^{3}\right)\end{aligned}}}and similarlyscalar multiplicationby ascalarλis defined entrywise by:λA=λ(A0,A1,A2,A3)=(λA0,λA1,λA2,λA3){\displaystyle \lambda \mathbf {A} =\lambda \left(A^{0},A^{1},A^{2},A^{3}\right)=\left(\lambda A^{0},\lambda A^{1},\lambda A^{2},\lambda A^{3}\right)} Then subtraction is the inverse operation of addition, defined entrywise by:A+(−1)B=(A0,A1,A2,A3)+(−1)(B0,B1,B2,B3)=(A0−B0,A1−B1,A2−B2,A3−B3){\displaystyle {\begin{aligned}\mathbf {A} +(-1)\mathbf {B} &=\left(A^{0},A^{1},A^{2},A^{3}\right)+(-1)\left(B^{0},B^{1},B^{2},B^{3}\right)\\&=\left(A^{0}-B^{0},A^{1}-B^{1},A^{2}-B^{2},A^{3}-B^{3}\right)\end{aligned}}} Applying theMinkowski tensorημνto two four-vectorsAandB, writing the result indot productnotation, we have, usingEinstein notation:A⋅B=AμBνEμ⋅Eν=AμημνBν{\displaystyle \mathbf {A} \cdot \mathbf {B} =A^{\mu }B^{\nu }\mathbf {E} _{\mu }\cdot \mathbf {E} _{\nu }=A^{\mu }\eta _{\mu \nu }B^{\nu }} in special relativity. The dot product of the basis vectors is the Minkowski metric, as opposed to the Kronecker delta as in Euclidean space. It is convenient to rewrite the definition inmatrixform:A⋅B=(A0A1A2A3)(η00η01η02η03η10η11η12η13η20η21η22η23η30η31η32η33)(B0B1B2B3){\displaystyle \mathbf {A\cdot B} ={\begin{pmatrix}A^{0}&A^{1}&A^{2}&A^{3}\end{pmatrix}}{\begin{pmatrix}\eta _{00}&\eta _{01}&\eta _{02}&\eta _{03}\\\eta _{10}&\eta _{11}&\eta _{12}&\eta _{13}\\\eta _{20}&\eta _{21}&\eta _{22}&\eta _{23}\\\eta _{30}&\eta _{31}&\eta _{32}&\eta _{33}\end{pmatrix}}{\begin{pmatrix}B^{0}\\B^{1}\\B^{2}\\B^{3}\end{pmatrix}}}in which caseημνabove is the entry in rowμand columnνof the Minkowski metric as a square matrix. The Minkowski metric is not aEuclidean metric, because it is indefinite (seemetric signature). A number of other expressions can be used because the metric tensor can raise and lower the components ofAorB. 
For contra/co-variant components ofAand co/contra-variant components ofB, we have:A⋅B=AμημνBν=AνBν=AμBμ{\displaystyle \mathbf {A} \cdot \mathbf {B} =A^{\mu }\eta _{\mu \nu }B^{\nu }=A_{\nu }B^{\nu }=A^{\mu }B_{\mu }}so in the matrix notation:A⋅B=(A0A1A2A3)(B0B1B2B3)=(B0B1B2B3)(A0A1A2A3){\displaystyle {\begin{aligned}\mathbf {A} \cdot \mathbf {B} &={\begin{pmatrix}A_{0}&A_{1}&A_{2}&A_{3}\end{pmatrix}}{\begin{pmatrix}B^{0}\\B^{1}\\B^{2}\\B^{3}\end{pmatrix}}\\[1ex]&={\begin{pmatrix}B_{0}&B_{1}&B_{2}&B_{3}\end{pmatrix}}{\begin{pmatrix}A^{0}\\A^{1}\\A^{2}\\A^{3}\end{pmatrix}}\end{aligned}}}while forAandBeach in covariant components:A⋅B=AμημνBν{\displaystyle \mathbf {A} \cdot \mathbf {B} =A_{\mu }\eta ^{\mu \nu }B_{\nu }}with a similar matrix expression to the above. Applying the Minkowski tensor to a four-vectorAwith itself we get:A⋅A=AμημνAν{\displaystyle \mathbf {A\cdot A} =A^{\mu }\eta _{\mu \nu }A^{\nu }}which, depending on the case, may be considered the square, or its negative, of the length of the vector. Following are two common choices for the metric tensor in thestandard basis(essentially Cartesian coordinates). If orthogonal coordinates are used, there would be scale factors along the diagonal part of the spacelike part of the metric, while for general curvilinear coordinates the entire spacelike part of the metric would have components dependent on the curvilinear basis used. The (+−−−)metric signatureis sometimes called the "mostly minus" convention, or the "west coast" convention. In the (+−−−)metric signature, evaluating thesummation over indicesgives:A⋅B=A0B0−A1B1−A2B2−A3B3{\displaystyle \mathbf {A} \cdot \mathbf {B} =A^{0}B^{0}-A^{1}B^{1}-A^{2}B^{2}-A^{3}B^{3}}while in matrix form:A⋅B=(A0A1A2A3)(10000−10000−10000−1)(B0B1B2B3){\displaystyle \mathbf {A\cdot B} ={\begin{pmatrix}A^{0}&A^{1}&A^{2}&A^{3}\end{pmatrix}}{\begin{pmatrix}1&0&0&0\\0&-1&0&0\\0&0&-1&0\\0&0&0&-1\end{pmatrix}}{\begin{pmatrix}B^{0}\\B^{1}\\B^{2}\\B^{3}\end{pmatrix}}} It is a recurring theme in special relativity to take the expressionA⋅B=A0B0−A1B1−A2B2−A3B3=C{\displaystyle \mathbf {A} \cdot \mathbf {B} =A^{0}B^{0}-A^{1}B^{1}-A^{2}B^{2}-A^{3}B^{3}=C}in onereference frame, whereCis the value of the inner product in this frame, and:A′⋅B′=A′0B′0−A′1B′1−A′2B′2−A′3B′3=C′{\displaystyle \mathbf {A} '\cdot \mathbf {B} '={A'}^{0}{B'}^{0}-{A'}^{1}{B'}^{1}-{A'}^{2}{B'}^{2}-{A'}^{3}{B'}^{3}=C'}in another frame, in whichC′ is the value of the inner product in this frame. Then since the inner product is an invariant, these must be equal:A⋅B=A′⋅B′{\displaystyle \mathbf {A} \cdot \mathbf {B} =\mathbf {A} '\cdot \mathbf {B} '}that is:C=A0B0−A1B1−A2B2−A3B3=A′0B′0−A′1B′1−A′2B′2−A′3B′3{\displaystyle {\begin{aligned}C&=A^{0}B^{0}-A^{1}B^{1}-A^{2}B^{2}-A^{3}B^{3}\\[2pt]&={A'}^{0}{B'}^{0}-{A'}^{1}{B'}^{1}-{A'}^{2}{B'}^{2}-{A'}^{3}{B'}^{3}\end{aligned}}} Considering that physical quantities in relativity are four-vectors, this equation has the appearance of a "conservation law", but there is no "conservation" involved. The primary significance of the Minkowski inner product is that for any two four-vectors, its value isinvariantfor all observers; a change of coordinates does not result in a change in value of the inner product. The components of the four-vectors change from one frame to another;AandA′ are connected by aLorentz transformation, and similarly forBandB′, although the inner products are the same in all frames. 
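A short Python/NumPy demonstration of this invariance (arbitrary illustrative four-vectors and boost velocity, (+, −, −, −) convention): the inner product computed before and after boosting both vectors is the same.

    import numpy as np

    eta = np.diag([1.0, -1.0, -1.0, -1.0])            # (+, -, -, -) convention

    def boost(beta):
        """Pure boost matrix for velocity beta = v/c (components as given earlier)."""
        b2 = beta @ beta
        g = 1.0 / np.sqrt(1.0 - b2)
        L = np.empty((4, 4))
        L[0, 0] = g
        L[0, 1:] = L[1:, 0] = -g * beta
        L[1:, 1:] = (g - 1.0) * np.outer(beta, beta) / b2 + np.eye(3)
        return L

    A = np.array([2.0, 0.3, -1.0, 0.5])
    B = np.array([1.5, 1.2, 0.4, -0.7])
    L = boost(np.array([0.5, -0.2, 0.1]))

    dot = lambda X, Y: X @ eta @ Y                    # Minkowski inner product
    print(np.isclose(dot(A, B), dot(L @ A, L @ B)))   # True: C = C' in both frames
    print(dot(A, A) > 0)                              # True: A happens to be timelike here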
Nevertheless, this type of expression is exploited in relativistic calculations on a par with conservation laws, since the magnitudes of components can be determined without explicitly performing any Lorentz transformations. A particular example is with energy and momentum in theenergy-momentum relationderived from thefour-momentumvector (see also below). In this signature we have:A⋅A=(A0)2−(A1)2−(A2)2−(A3)2{\displaystyle \mathbf {A\cdot A} =\left(A^{0}\right)^{2}-\left(A^{1}\right)^{2}-\left(A^{2}\right)^{2}-\left(A^{3}\right)^{2}} With the signature (+−−−), four-vectors may be classified as eitherspacelikeifA⋅A<0{\displaystyle \mathbf {A\cdot A} <0},timelikeifA⋅A>0{\displaystyle \mathbf {A\cdot A} >0}, andnull vectorsifA⋅A=0{\displaystyle \mathbf {A\cdot A} =0}. The (-+++)metric signatureis sometimes called the "east coast" convention. Some authors defineηwith the opposite sign, in which case we have the (−+++) metric signature. Evaluating the summation with this signature: A⋅B=−A0B0+A1B1+A2B2+A3B3{\displaystyle \mathbf {A\cdot B} =-A^{0}B^{0}+A^{1}B^{1}+A^{2}B^{2}+A^{3}B^{3}} while the matrix form is: A⋅B=(A0A1A2A3)(−1000010000100001)(B0B1B2B3){\displaystyle \mathbf {A\cdot B} =\left({\begin{matrix}A^{0}&A^{1}&A^{2}&A^{3}\end{matrix}}\right)\left({\begin{matrix}-1&0&0&0\\0&1&0&0\\0&0&1&0\\0&0&0&1\end{matrix}}\right)\left({\begin{matrix}B^{0}\\B^{1}\\B^{2}\\B^{3}\end{matrix}}\right)} Note that in this case, in one frame: A⋅B=−A0B0+A1B1+A2B2+A3B3=−C{\displaystyle \mathbf {A} \cdot \mathbf {B} =-A^{0}B^{0}+A^{1}B^{1}+A^{2}B^{2}+A^{3}B^{3}=-C} while in another: A′⋅B′=−A′0B′0+A′1B′1+A′2B′2+A′3B′3=−C′{\displaystyle \mathbf {A} '\cdot \mathbf {B} '=-{A'}^{0}{B'}^{0}+{A'}^{1}{B'}^{1}+{A'}^{2}{B'}^{2}+{A'}^{3}{B'}^{3}=-C'} so that: −C=−A0B0+A1B1+A2B2+A3B3=−A′0B′0+A′1B′1+A′2B′2+A′3B′3{\displaystyle {\begin{aligned}-C&=-A^{0}B^{0}+A^{1}B^{1}+A^{2}B^{2}+A^{3}B^{3}\\[2pt]&=-{A'}^{0}{B'}^{0}+{A'}^{1}{B'}^{1}+{A'}^{2}{B'}^{2}+{A'}^{3}{B'}^{3}\end{aligned}}} which is equivalent to the above expression forCin terms ofAandB. Either convention will work. With the Minkowski metric defined in the two ways above, the only difference between covariant and contravariant four-vector components are signs, therefore the signs depend on which sign convention is used. We have: A⋅A=−(A0)2+(A1)2+(A2)2+(A3)2{\displaystyle \mathbf {A\cdot A} =-\left(A^{0}\right)^{2}+\left(A^{1}\right)^{2}+\left(A^{2}\right)^{2}+\left(A^{3}\right)^{2}} With the signature (−+++), four-vectors may be classified as eitherspacelikeifA⋅A>0{\displaystyle \mathbf {A\cdot A} >0},timelikeifA⋅A<0{\displaystyle \mathbf {A\cdot A} <0}, andnullifA⋅A=0{\displaystyle \mathbf {A\cdot A} =0}. Applying the Minkowski tensor is often expressed as the effect of thedual vectorof one vector on the other: A⋅B=A∗(B)=AνBν.{\displaystyle \mathbf {A\cdot B} =A^{*}(\mathbf {B} )=A{_{\nu }}B^{\nu }.} Here theAνs are the components of the dual vectorA* ofAin thedual basisand called thecovariantcoordinates ofA, while the originalAνcomponents are called thecontravariantcoordinates. In special relativity (but not general relativity), thederivativeof a four-vector with respect to a scalarλ(invariant) is itself a four-vector. 
It is also useful to take thedifferentialof the four-vector,dAand divide it by the differential of the scalar,dλ: dAdifferential=dAdλderivativedλdifferential{\displaystyle {\underset {\text{differential}}{d\mathbf {A} }}={\underset {\text{derivative}}{\frac {d\mathbf {A} }{d\lambda }}}{\underset {\text{differential}}{d\lambda }}} where the contravariant components are: dA=(dA0,dA1,dA2,dA3){\displaystyle d\mathbf {A} =\left(dA^{0},dA^{1},dA^{2},dA^{3}\right)} while the covariant components are: dA=(dA0,dA1,dA2,dA3){\displaystyle d\mathbf {A} =\left(dA_{0},dA_{1},dA_{2},dA_{3}\right)} In relativistic mechanics, one often takes the differential of a four-vector and divides by the differential inproper time(see below). A point inMinkowski spaceis a time and spatial position, called an "event", or sometimes theposition four-vectororfour-positionor4-position, described in some reference frame by a set of four coordinates: R=(ct,r){\displaystyle \mathbf {R} =\left(ct,\mathbf {r} \right)} whereris thethree-dimensional spaceposition vector. Ifris a function of coordinate timetin the same frame, i.e.r=r(t), this corresponds to a sequence of events astvaries. The definitionR0=ctensures that all the coordinates have the samedimension(oflength) and units (in theSI, meters).[8][9][10][11]These coordinates are the components of theposition four-vectorfor the event. Thedisplacement four-vectoris defined to be an "arrow" linking two events: ΔR=(cΔt,Δr){\displaystyle \Delta \mathbf {R} =\left(c\Delta t,\Delta \mathbf {r} \right)} For thedifferentialfour-position on a world line we have, usinga norm notation: ‖dR‖2=dR⋅dR=dRμdRμ=c2dτ2=ds2,{\displaystyle \|d\mathbf {R} \|^{2}=\mathbf {dR\cdot dR} =dR^{\mu }dR_{\mu }=c^{2}d\tau ^{2}=ds^{2}\,,} defining the differentialline elementdsand differential proper time increment dτ, but this "norm" is also: ‖dR‖2=(cdt)2−dr⋅dr,{\displaystyle \|d\mathbf {R} \|^{2}=(cdt)^{2}-d\mathbf {r} \cdot d\mathbf {r} \,,} so that: (cdτ)2=(cdt)2−dr⋅dr.{\displaystyle (cd\tau )^{2}=(cdt)^{2}-d\mathbf {r} \cdot d\mathbf {r} \,.} When considering physical phenomena, differential equations arise naturally; however, when considering space andtime derivativesof functions, it is unclear which reference frame these derivatives are taken with respect to. It is agreed that time derivatives are taken with respect to theproper timeτ{\displaystyle \tau }. As proper time is an invariant, this guarantees that the proper-time-derivative of any four-vector is itself a four-vector. It is then important to find a relation between this proper-time-derivative and another time derivative (using thecoordinate timetof an inertial reference frame). This relation is provided by taking the above differential invariant spacetime interval, then dividing by (cdt)2to obtain: (cdτcdt)2=1−(drcdt⋅drcdt)=1−u⋅uc2=1γ(u)2,{\displaystyle \left({\frac {cd\tau }{cdt}}\right)^{2}=1-\left({\frac {d\mathbf {r} }{cdt}}\cdot {\frac {d\mathbf {r} }{cdt}}\right)=1-{\frac {\mathbf {u} \cdot \mathbf {u} }{c^{2}}}={\frac {1}{\gamma (\mathbf {u} )^{2}}}\,,} whereu=dr/dtis the coordinate 3-velocityof an object measured in the same frame as the coordinatesx,y,z, andcoordinate timet, and γ(u)=11−u⋅uc2{\displaystyle \gamma (\mathbf {u} )={\frac {1}{\sqrt {1-{\frac {\mathbf {u} \cdot \mathbf {u} }{c^{2}}}}}}} is theLorentz factor. 
This provides a useful relation between the differentials in coordinate time and proper time: dt=γ(u)dτ.{\displaystyle dt=\gamma (\mathbf {u} )d\tau \,.} This relation can also be found from the time transformation in theLorentz transformations. Important four-vectors in relativity theory can be defined by applying this differentialddτ{\displaystyle {\frac {d}{d\tau }}}. Considering thatpartial derivativesarelinear operators, one can form afour-gradientfrom the partialtime derivative∂/∂tand the spatialgradient∇. Using the standard basis, in index and abbreviated notations, the contravariant components are: ∂=(∂∂x0,−∂∂x1,−∂∂x2,−∂∂x3)=(∂0,−∂1,−∂2,−∂3)=E0∂0−E1∂1−E2∂2−E3∂3=E0∂0−Ei∂i=Eα∂α=(1c∂∂t,−∇)=(∂tc,−∇)=E01c∂∂t−∇{\displaystyle {\begin{aligned}{\boldsymbol {\partial }}&=\left({\frac {\partial }{\partial x_{0}}},\,-{\frac {\partial }{\partial x_{1}}},\,-{\frac {\partial }{\partial x_{2}}},\,-{\frac {\partial }{\partial x_{3}}}\right)\\&=(\partial ^{0},\,-\partial ^{1},\,-\partial ^{2},\,-\partial ^{3})\\&=\mathbf {E} _{0}\partial ^{0}-\mathbf {E} _{1}\partial ^{1}-\mathbf {E} _{2}\partial ^{2}-\mathbf {E} _{3}\partial ^{3}\\&=\mathbf {E} _{0}\partial ^{0}-\mathbf {E} _{i}\partial ^{i}\\&=\mathbf {E} _{\alpha }\partial ^{\alpha }\\&=\left({\frac {1}{c}}{\frac {\partial }{\partial t}},\,-\nabla \right)\\&=\left({\frac {\partial _{t}}{c}},-\nabla \right)\\&=\mathbf {E} _{0}{\frac {1}{c}}{\frac {\partial }{\partial t}}-\nabla \\\end{aligned}}} Note the basis vectors are placed in front of the components, to prevent confusion between taking the derivative of the basis vector, or simply indicating the partial derivative is a component of this four-vector. The covariant components are: ∂=(∂∂x0,∂∂x1,∂∂x2,∂∂x3)=(∂0,∂1,∂2,∂3)=E0∂0+E1∂1+E2∂2+E3∂3=E0∂0+Ei∂i=Eα∂α=(1c∂∂t,∇)=(∂tc,∇)=E01c∂∂t+∇{\displaystyle {\begin{aligned}{\boldsymbol {\partial }}&=\left({\frac {\partial }{\partial x^{0}}},\,{\frac {\partial }{\partial x^{1}}},\,{\frac {\partial }{\partial x^{2}}},\,{\frac {\partial }{\partial x^{3}}}\right)\\&=(\partial _{0},\,\partial _{1},\,\partial _{2},\,\partial _{3})\\&=\mathbf {E} ^{0}\partial _{0}+\mathbf {E} ^{1}\partial _{1}+\mathbf {E} ^{2}\partial _{2}+\mathbf {E} ^{3}\partial _{3}\\&=\mathbf {E} ^{0}\partial _{0}+\mathbf {E} ^{i}\partial _{i}\\&=\mathbf {E} ^{\alpha }\partial _{\alpha }\\&=\left({\frac {1}{c}}{\frac {\partial }{\partial t}},\,\nabla \right)\\&=\left({\frac {\partial _{t}}{c}},\nabla \right)\\&=\mathbf {E} ^{0}{\frac {1}{c}}{\frac {\partial }{\partial t}}+\nabla \\\end{aligned}}} Since this is an operator, it doesn't have a "length", but evaluating the inner product of the operator with itself gives another operator: ∂μ∂μ=1c2∂2∂t2−∇2=∂t2c2−∇2{\displaystyle \partial ^{\mu }\partial _{\mu }={\frac {1}{c^{2}}}{\frac {\partial ^{2}}{\partial t^{2}}}-\nabla ^{2}={\frac {{\partial _{t}}^{2}}{c^{2}}}-\nabla ^{2}} called theD'Alembert operator. Thefour-velocityof a particle is defined by: U=dXdτ=dXdtdtdτ=γ(u)(c,u),{\displaystyle \mathbf {U} ={\frac {d\mathbf {X} }{d\tau }}={\frac {d\mathbf {X} }{dt}}{\frac {dt}{d\tau }}=\gamma (\mathbf {u} )\left(c,\mathbf {u} \right),} Geometrically,Uis a normalized vector tangent to theworld lineof the particle. 
Using the differential of the four-position, the magnitude of the four-velocity can be obtained: ‖U‖2=UμUμ=dXμdτdXμdτ=dXμdXμdτ2=c2,{\displaystyle \|\mathbf {U} \|^{2}=U^{\mu }U_{\mu }={\frac {dX^{\mu }}{d\tau }}{\frac {dX_{\mu }}{d\tau }}={\frac {dX^{\mu }dX_{\mu }}{d\tau ^{2}}}=c^{2}\,,} in short, the magnitude of the four-velocity for any object is always a fixed constant: ‖U‖2=c2{\displaystyle \|\mathbf {U} \|^{2}=c^{2}} The norm is also: ‖U‖2=γ(u)2(c2−u⋅u),{\displaystyle \|\mathbf {U} \|^{2}={\gamma (\mathbf {u} )}^{2}\left(c^{2}-\mathbf {u} \cdot \mathbf {u} \right)\,,} so that: c2=γ(u)2(c2−u⋅u),{\displaystyle c^{2}={\gamma (\mathbf {u} )}^{2}\left(c^{2}-\mathbf {u} \cdot \mathbf {u} \right)\,,} which reduces to the definition of theLorentz factor. Units of four-velocity are m/s inSIand 1 in thegeometrized unit system. Four-velocity is a contravariant vector. Thefour-accelerationis given by: A=dUdτ=γ(u)(dγ(u)dtc,dγ(u)dtu+γ(u)a).{\displaystyle \mathbf {A} ={\frac {d\mathbf {U} }{d\tau }}=\gamma (\mathbf {u} )\left({\frac {d{\gamma }(\mathbf {u} )}{dt}}c,{\frac {d{\gamma }(\mathbf {u} )}{dt}}\mathbf {u} +\gamma (\mathbf {u} )\mathbf {a} \right).} wherea=du/dtis the coordinate 3-acceleration. Since the magnitude ofUis a constant, the four acceleration is orthogonal to the four velocity, i.e. the Minkowski inner product of the four-acceleration and the four-velocity is zero: A⋅U=AμUμ=dUμdτUμ=12ddτ(UμUμ)=0{\displaystyle \mathbf {A} \cdot \mathbf {U} =A^{\mu }U_{\mu }={\frac {dU^{\mu }}{d\tau }}U_{\mu }={\frac {1}{2}}\,{\frac {d}{d\tau }}\left(U^{\mu }U_{\mu }\right)=0\,} which is true for all world lines. The geometric meaning of four-acceleration is thecurvature vectorof the world line in Minkowski space. For a massive particle ofrest mass(orinvariant mass)m0, thefour-momentumis given by: P=m0U=m0γ(u)(c,u)=(Ec,p){\displaystyle \mathbf {P} =m_{0}\mathbf {U} =m_{0}\gamma (\mathbf {u} )(c,\mathbf {u} )=\left({\frac {E}{c}},\mathbf {p} \right)} where the total energy of the moving particle is: E=γ(u)m0c2{\displaystyle E=\gamma (\mathbf {u} )m_{0}c^{2}} and the totalrelativistic momentumis: p=γ(u)m0u{\displaystyle \mathbf {p} =\gamma (\mathbf {u} )m_{0}\mathbf {u} } Taking the inner product of the four-momentum with itself: ‖P‖2=PμPμ=m02UμUμ=m02c2{\displaystyle \|\mathbf {P} \|^{2}=P^{\mu }P_{\mu }=m_{0}^{2}U^{\mu }U_{\mu }=m_{0}^{2}c^{2}} and also: ‖P‖2=E2c2−p⋅p{\displaystyle \|\mathbf {P} \|^{2}={\frac {E^{2}}{c^{2}}}-\mathbf {p} \cdot \mathbf {p} } which leads to theenergy–momentum relation: E2=c2p⋅p+(m0c2)2.{\displaystyle E^{2}=c^{2}\mathbf {p} \cdot \mathbf {p} +\left(m_{0}c^{2}\right)^{2}\,.} This last relation is useful inrelativistic mechanics, essential inrelativistic quantum mechanicsandrelativistic quantum field theory, all with applications toparticle physics. Thefour-forceacting on a particle is defined analogously to the 3-force as the time derivative of 3-momentum inNewton's second law: F=dPdτ=γ(u)(1cdEdt,dpdt)=γ(u)(Pc,f){\displaystyle \mathbf {F} ={\frac {d\mathbf {P} }{d\tau }}=\gamma (\mathbf {u} )\left({\frac {1}{c}}{\frac {dE}{dt}},{\frac {d\mathbf {p} }{dt}}\right)=\gamma (\mathbf {u} )\left({\frac {P}{c}},\mathbf {f} \right)} wherePis thepowertransferred to move the particle, andfis the 3-force acting on the particle. 
For a particle of constant invariant massm0, this is equivalent to F=m0A=m0γ(u)(dγ(u)dtc,(dγ(u)dtu+γ(u)a)){\displaystyle \mathbf {F} =m_{0}\mathbf {A} =m_{0}\gamma (\mathbf {u} )\left({\frac {d{\gamma }(\mathbf {u} )}{dt}}c,\left({\frac {d{\gamma }(\mathbf {u} )}{dt}}\mathbf {u} +\gamma (\mathbf {u} )\mathbf {a} \right)\right)} An invariant derived from the four-force is: F⋅U=FμUμ=m0AμUμ=0{\displaystyle \mathbf {F} \cdot \mathbf {U} =F^{\mu }U_{\mu }=m_{0}A^{\mu }U_{\mu }=0} from the above result. The four-heat flux vector field, is essentially similar to the 3dheat fluxvector fieldq, in the local frame of the fluid:[12] Q=−k∂T=−k(1c∂T∂t,∇T){\displaystyle \mathbf {Q} =-k{\boldsymbol {\partial }}T=-k\left({\frac {1}{c}}{\frac {\partial T}{\partial t}},\nabla T\right)} whereTisabsolute temperatureandkisthermal conductivity. The flux of baryons is:[13]S=nU{\displaystyle \mathbf {S} =n\mathbf {U} }wherenis thenumber densityofbaryonsin the localrest frameof the baryon fluid (positive values for baryons, negative forantibaryons), andUthefour-velocityfield (of the fluid) as above. The four-entropyvector is defined by:[14]s=sS+QT{\displaystyle \mathbf {s} =s\mathbf {S} +{\frac {\mathbf {Q} }{T}}}wheresis the entropy per baryon, andTtheabsolute temperature, in the local rest frame of the fluid.[15] Examples of four-vectors inelectromagnetisminclude the following. The electromagneticfour-current(or more correctly a four-current density)[16]is defined byJ=(ρc,j){\displaystyle \mathbf {J} =\left(\rho c,\mathbf {j} \right)}formed from thecurrent densityjandcharge densityρ. Theelectromagnetic four-potential(or more correctly a four-EM vector potential) defined byA=(ϕc,a){\displaystyle \mathbf {A} =\left({\frac {\phi }{c}},\mathbf {a} \right)}formed from thevector potentialaand the scalar potentialϕ. The four-potential is not uniquely determined, because it depends on a choice ofgauge. In thewave equationfor the electromagnetic field: A photonicplane wavecan be described by thefour-frequency, defined as N=ν(1,n^){\displaystyle \mathbf {N} =\nu \left(1,{\hat {\mathbf {n} }}\right)} whereνis the frequency of the wave andn^{\displaystyle {\hat {\mathbf {n} }}}is aunit vectorin the travel direction of the wave. Now: ‖N‖=NμNμ=ν2(1−n^⋅n^)=0{\displaystyle \|\mathbf {N} \|=N^{\mu }N_{\mu }=\nu ^{2}\left(1-{\hat {\mathbf {n} }}\cdot {\hat {\mathbf {n} }}\right)=0} so the four-frequency of a photon is always a null vector. The quantities reciprocal to timetand spacerare theangular frequencyωandangular wave vectork, respectively. 
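As a small check of the statement that the four-frequency is a null vector (Python/NumPy; the frequency and propagation direction are illustrative):

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])

nu = 5.0e14                                  # illustrative optical frequency, Hz
n_hat = np.array([1.0, 2.0, 2.0]) / 3.0      # unit vector in the travel direction
N = nu * np.concatenate(([1.0], n_hat))      # four-frequency N = nu (1, n)

print(N @ eta @ N)   # ~0: the four-frequency of a photon is a null vector
```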
They form the components of thefour-wavevectororwave four-vector: K=(ωc,k→)=(ωc,ωvpn^).{\displaystyle \mathbf {K} =\left({\frac {\omega }{c}},{\vec {\mathbf {k} }}\right)=\left({\frac {\omega }{c}},{\frac {\omega }{v_{p}}}{\hat {\mathbf {n} }}\right)\,.} The wave four-vector hascoherent derived unitofreciprocal metersin the SI.[17] A wave packet of nearlymonochromaticlight can be described by: K=2πcN=2πcν(1,n^)=ωc(1,n^).{\displaystyle \mathbf {K} ={\frac {2\pi }{c}}\mathbf {N} ={\frac {2\pi }{c}}\nu \left(1,{\hat {\mathbf {n} }}\right)={\frac {\omega }{c}}\left(1,{\hat {\mathbf {n} }}\right)~.} The de Broglie relations then showed that four-wavevector applied tomatter wavesas well as to light waves:P=ℏK=(Ec,p→)=ℏ(ωc,k→).{\displaystyle \mathbf {P} =\hbar \mathbf {K} =\left({\frac {E}{c}},{\vec {p}}\right)=\hbar \left({\frac {\omega }{c}},{\vec {k}}\right)~.}yieldingE=ℏω{\displaystyle E=\hbar \omega }andp→=ℏk→{\displaystyle {\vec {p}}=\hbar {\vec {k}}}, whereħis thePlanck constantdivided by2π. The square of the norm is:‖K‖2=KμKμ=(ωc)2−k⋅k,{\displaystyle \|\mathbf {K} \|^{2}=K^{\mu }K_{\mu }=\left({\frac {\omega }{c}}\right)^{2}-\mathbf {k} \cdot \mathbf {k} \,,}and by the de Broglie relation:‖K‖2=1ℏ2‖P‖2=(m0cℏ)2,{\displaystyle \|\mathbf {K} \|^{2}={\frac {1}{\hbar ^{2}}}\|\mathbf {P} \|^{2}=\left({\frac {m_{0}c}{\hbar }}\right)^{2}\,,}we have the matter wave analogue of the energy–momentum relation:(ωc)2−k⋅k=(m0cℏ)2.{\displaystyle \left({\frac {\omega }{c}}\right)^{2}-\mathbf {k} \cdot \mathbf {k} =\left({\frac {m_{0}c}{\hbar }}\right)^{2}~.} Note that for massless particles, in which casem0= 0, we have:(ωc)2=k⋅k,{\displaystyle \left({\frac {\omega }{c}}\right)^{2}=\mathbf {k} \cdot \mathbf {k} \,,}or‖k‖ =ω/c. Note this is consistent with the above case; for photons with a 3-wavevector of modulusω / c,in the direction of wave propagation defined by the unit vectorn^.{\displaystyle \ {\hat {\mathbf {n} }}~.} Inquantum mechanics, the four-probability currentor probability four-current is analogous to theelectromagnetic four-current:[18]J=(ρc,j){\displaystyle \mathbf {J} =(\rho c,\mathbf {j} )}whereρis theprobability density functioncorresponding to the time component, andjis theprobability currentvector. In non-relativistic quantum mechanics, this current is always well defined because the expressions for density and current are positive definite and can admit a probability interpretation. Inrelativistic quantum mechanicsandquantum field theory, it is not always possible to find a current, particularly when interactions are involved. Replacing the energy by theenergy operatorand the momentum by themomentum operatorin the four-momentum, one obtains thefour-momentum operator, used inrelativistic wave equations. Thefour-spinof a particle is defined in the rest frame of a particle to beS=(0,s){\displaystyle \mathbf {S} =(0,\mathbf {s} )}wheresis thespinpseudovector. In quantum mechanics, not all three components of this vector are simultaneously measurable, only one component is. The timelike component is zero in the particle's rest frame, but not in any other frame. This component can be found from an appropriate Lorentz transformation. The norm squared is the (negative of the) magnitude squared of the spin, and according to quantum mechanics we have‖S‖2=−|s|2=−ℏ2s(s+1){\displaystyle \|\mathbf {S} \|^{2}=-|\mathbf {s} |^{2}=-\hbar ^{2}s(s+1)} This value is observable and quantized, withsthespin quantum number(not the magnitude of the spin vector). 
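A short numerical sketch of the de Broglie correspondence P = ħK and of the matter-wave form of the energy–momentum relation (Python/NumPy; the mass and velocity values are illustrative):

```python
import numpy as np

c = 299_792_458.0
hbar = 1.054571817e-34
eta = np.diag([1.0, -1.0, -1.0, -1.0])

m0 = 9.109e-31                               # illustrative rest mass (electron), kg
u = np.array([0.0, 0.4 * c, 0.0])
gamma = 1.0 / np.sqrt(1.0 - np.dot(u, u) / c**2)

P = m0 * gamma * np.concatenate(([c], u))    # four-momentum (E/c, p)
K = P / hbar                                 # wave four-vector (omega/c, k)
omega, k = K[0] * c, K[1:]

lhs = (omega / c)**2 - np.dot(k, k)
rhs = (m0 * c / hbar)**2
print(lhs / rhs)     # ~1: matter-wave analogue of the energy-momentum relation
```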
A four-vectorAcan also be defined in using thePauli matricesas abasis, again in various equivalent notations:[19]A=(A0,A1,A2,A3)=A0σ0+A1σ1+A2σ2+A3σ3=A0σ0+Aiσi=Aασα{\displaystyle {\begin{aligned}\mathbf {A} &=\left(A^{0},\,A^{1},\,A^{2},\,A^{3}\right)\\&=A^{0}{\boldsymbol {\sigma }}_{0}+A^{1}{\boldsymbol {\sigma }}_{1}+A^{2}{\boldsymbol {\sigma }}_{2}+A^{3}{\boldsymbol {\sigma }}_{3}\\&=A^{0}{\boldsymbol {\sigma }}_{0}+A^{i}{\boldsymbol {\sigma }}_{i}\\&=A^{\alpha }{\boldsymbol {\sigma }}_{\alpha }\\\end{aligned}}}or explicitly:A=A0(1001)+A1(0110)+A2(0−ii0)+A3(100−1)=(A0+A3A1−iA2A1+iA2A0−A3){\displaystyle {\begin{aligned}\mathbf {A} &=A^{0}{\begin{pmatrix}1&0\\0&1\end{pmatrix}}+A^{1}{\begin{pmatrix}0&1\\1&0\end{pmatrix}}+A^{2}{\begin{pmatrix}0&-i\\i&0\end{pmatrix}}+A^{3}{\begin{pmatrix}1&0\\0&-1\end{pmatrix}}\\&={\begin{pmatrix}A^{0}+A^{3}&A^{1}-iA^{2}\\A^{1}+iA^{2}&A^{0}-A^{3}\end{pmatrix}}\end{aligned}}}and in this formulation, the four-vector is represented as aHermitian matrix(thematrix transposeandcomplex conjugateof the matrix leaves it unchanged), rather than a real-valued column or row vector. Thedeterminantof the matrix is the modulus of the four-vector, so the determinant is an invariant:|A|=|A0+A3A1−iA2A1+iA2A0−A3|=(A0+A3)(A0−A3)−(A1−iA2)(A1+iA2)=(A0)2−(A1)2−(A2)2−(A3)2{\displaystyle {\begin{aligned}|\mathbf {A} |&={\begin{vmatrix}A^{0}+A^{3}&A^{1}-iA^{2}\\A^{1}+iA^{2}&A^{0}-A^{3}\end{vmatrix}}\\[1ex]&=\left(A^{0}+A^{3}\right)\left(A^{0}-A^{3}\right)-\left(A^{1}-iA^{2}\right)\left(A^{1}+iA^{2}\right)\\[1ex]&=\left(A^{0}\right)^{2}-\left(A^{1}\right)^{2}-\left(A^{2}\right)^{2}-\left(A^{3}\right)^{2}\end{aligned}}} This idea of using the Pauli matrices asbasis vectorsis employed in thealgebra of physical space, an example of aClifford algebra. Inspacetime algebra, another example of Clifford algebra, thegamma matricescan also form abasis. (They are also called the Dirac matrices, owing to their appearance in theDirac equation). There is more than one way to express the gamma matrices, detailed in that main article. TheFeynman slash notationis a shorthand for a four-vectorAcontracted with the gamma matrices:A/=Aαγα=A0γ0+A1γ1+A2γ2+A3γ3{\displaystyle \mathbf {A} \!\!\!\!/=A_{\alpha }\gamma ^{\alpha }=A_{0}\gamma ^{0}+A_{1}\gamma ^{1}+A_{2}\gamma ^{2}+A_{3}\gamma ^{3}} The four-momentum contracted with the gamma matrices is an important case inrelativistic quantum mechanicsandrelativistic quantum field theory. In the Dirac equation and otherrelativistic wave equations, terms of the form:P/=Pαγα=P0γ0+P1γ1+P2γ2+P3γ3=Ecγ0−pxγ1−pyγ2−pzγ3{\displaystyle {\begin{aligned}\mathbf {P} \!\!\!\!/=P_{\alpha }\gamma ^{\alpha }&=P_{0}\gamma ^{0}+P_{1}\gamma ^{1}+P_{2}\gamma ^{2}+P_{3}\gamma ^{3}\\[4pt]&={\dfrac {E}{c}}\gamma ^{0}-p_{x}\gamma ^{1}-p_{y}\gamma ^{2}-p_{z}\gamma ^{3}\\\end{aligned}}}appear, in which the energyEand momentum components(px,py,pz)are replaced by their respectiveoperators.
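The determinant identity can be verified directly. The sketch below (Python/NumPy, illustrative components) assembles the Hermitian matrix A⁰σ0 + Aⁱσi and compares its determinant with the Minkowski modulus squared:

```python
import numpy as np

# Pauli basis: sigma_0 is the 2x2 identity
sigma = [
    np.eye(2, dtype=complex),
    np.array([[0, 1], [1, 0]], dtype=complex),
    np.array([[0, -1j], [1j, 0]], dtype=complex),
    np.array([[1, 0], [0, -1]], dtype=complex),
]

A = np.array([5.0, 1.0, -2.0, 3.0])          # illustrative contravariant components

# Hermitian 2x2 matrix representing the four-vector
A_mat = sum(A[mu] * sigma[mu] for mu in range(4))

print(np.allclose(A_mat, A_mat.conj().T))    # True: the matrix is Hermitian
print(np.linalg.det(A_mat).real)             # determinant of the matrix ...
print(A[0]**2 - A[1]**2 - A[2]**2 - A[3]**2) # ... equals the Minkowski modulus squared
```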
https://en.wikipedia.org/wiki/Four-wavevector
In physics, relativistic angular momentum refers to the mathematical formalisms and physical concepts that define angular momentum in special relativity (SR) and general relativity (GR). The relativistic quantity is subtly different from the three-dimensional quantity in classical mechanics. Angular momentum is an important dynamical quantity derived from position and momentum. It is a measure of an object's rotational motion and resistance to changes in its rotation. Also, in the same way momentum conservation corresponds to translational symmetry, angular momentum conservation corresponds to rotational symmetry – the connection between symmetries and conservation laws is made by Noether's theorem. While these concepts were originally discovered in classical mechanics, they are also true and significant in special and general relativity. In terms of abstract algebra, the invariance of angular momentum, four-momentum, and other symmetries in spacetime is described by the Lorentz group, or more generally the Poincaré group. Physical quantities that remain separate in classical physics are naturally combined in SR and GR by enforcing the postulates of relativity. Most notably, the space and time coordinates combine into the four-position, and energy and momentum combine into the four-momentum. The components of these four-vectors depend on the frame of reference used, and change under Lorentz transformations to other inertial frames or accelerated frames. Relativistic angular momentum is less obvious. The classical definition of angular momentum is the cross product of position x with momentum p to obtain a pseudovector x × p, or alternatively the exterior product of x with p, which gives a second-order antisymmetric tensor x ∧ p. What does this combine with, if anything? There is another vector quantity not often discussed – the time-varying moment-of-mass polar vector (not the moment of inertia) related to the boost of the centre of mass of the system – and this combines with the classical angular momentum pseudovector to form an antisymmetric tensor of second order, in exactly the same way as the electric field polar vector combines with the magnetic field pseudovector to form the electromagnetic field antisymmetric tensor. For rotating mass–energy distributions (such as gyroscopes, planets, stars, and black holes), rather than point-like particles, the angular momentum tensor is expressed in terms of the stress–energy tensor of the rotating object. In special relativity alone, in the rest frame of a spinning object, there is an intrinsic angular momentum analogous to the "spin" in quantum mechanics and relativistic quantum mechanics, although for an extended body rather than a point particle. In relativistic quantum mechanics, elementary particles have spin, and this is an additional contribution to the orbital angular momentum operator, yielding the total angular momentum tensor operator. In any case, the intrinsic "spin" addition to the orbital angular momentum of an object can be expressed in terms of the Pauli–Lubanski pseudovector.[1] For reference and background, two closely related forms of angular momentum are given. In classical mechanics, the orbital angular momentum of a particle with instantaneous three-dimensional position vector x = (x, y, z) and momentum vector p = (px, py, pz) is defined as the axial vector L = x × p{\displaystyle \mathbf {L} =\mathbf {x} \times \mathbf {p} } which has three components that are systematically given by cyclic permutations of Cartesian directions (e.g.
changextoy,ytoz,ztox, repeat)Lx=ypz−zpy,Ly=zpx−xpz,Lz=xpy−ypx.{\displaystyle {\begin{aligned}L_{x}&=yp_{z}-zp_{y}\,,\\L_{y}&=zp_{x}-xp_{z}\,,\\L_{z}&=xp_{y}-yp_{x}\,.\end{aligned}}} A related definition is to conceive orbital angular momentum as aplane element. This can be achieved by replacing the cross product by theexterior productin the language ofexterior algebra, and angular momentum becomes acontravariantsecond orderantisymmetric tensor[2]L=x∧p{\displaystyle \mathbf {L} =\mathbf {x} \wedge \mathbf {p} } or writingx= (x1,x2,x3) = (x,y,z)and momentum vectorp= (p1,p2,p3) = (px,py,pz), the components can be compactly abbreviated intensor index notationLij=xipj−xjpi{\displaystyle L^{ij}=x^{i}p^{j}-x^{j}p^{i}}where the indicesiandjtake the values 1, 2, 3. On the other hand, the components can be systematically displayed fully in a 3 × 3antisymmetric matrixL=(L11L12L13L21L22L23L31L32L33)=(0LxyLxzLyx0LyzLzxLzy0)=(0Lxy−Lzx−Lxy0LyzLzx−Lyz0)=(0xpy−ypx−(zpx−xpz)−(xpy−ypx)0ypz−zpyzpx−xpz−(ypz−zpy)0){\displaystyle {\begin{aligned}\mathbf {L} &={\begin{pmatrix}L^{11}&L^{12}&L^{13}\\L^{21}&L^{22}&L^{23}\\L^{31}&L^{32}&L^{33}\\\end{pmatrix}}={\begin{pmatrix}0&L_{xy}&L_{xz}\\L_{yx}&0&L_{yz}\\L_{zx}&L_{zy}&0\end{pmatrix}}={\begin{pmatrix}0&L_{xy}&-L_{zx}\\-L_{xy}&0&L_{yz}\\L_{zx}&-L_{yz}&0\end{pmatrix}}\\&={\begin{pmatrix}0&xp_{y}-yp_{x}&-(zp_{x}-xp_{z})\\-(xp_{y}-yp_{x})&0&yp_{z}-zp_{y}\\zp_{x}-xp_{z}&-(yp_{z}-zp_{y})&0\end{pmatrix}}\end{aligned}}} This quantity is additive, and for an isolated system, the total angular momentum of a system is conserved. In classical mechanics, the three-dimensional quantity for a particle of massmmoving with velocityu[2][3]N=m(x−tu)=mx−tp{\displaystyle \mathbf {N} =m\left(\mathbf {x} -t\mathbf {u} \right)=m\mathbf {x} -t\mathbf {p} }has thedimensionsofmass moment– length multiplied by mass. It is equal to the mass of the particle or system of particles multiplied by the distance from the space origin to thecentre of mass(COM) at the time origin (t= 0), as measured in thelab frame. There is no universal symbol, nor even a universal name, for this quantity. Different authors may denote it by other symbols if any (for exampleμ), may designate other names, and may defineNto be the negative of what is used here. The above form has the advantage that it resembles the familiarGalilean transformationfor position, which in turn is the non-relativistic boost transformation between inertial frames. This vector is also additive: for a system of particles, the vector sum is the resultant∑nNn=∑nmn(xn−tun)=(xCOM∑nmn−t∑nmnun)=Mtot(xCOM−uCOMt){\displaystyle \sum _{n}\mathbf {N} _{n}=\sum _{n}m_{n}\left(\mathbf {x} _{n}-t\mathbf {u} _{n}\right)=\left(\mathbf {x} _{\mathrm {COM} }\sum _{n}m_{n}-t\sum _{n}m_{n}\mathbf {u} _{n}\right)=M_{\text{tot}}(\mathbf {x} _{\mathrm {COM} }-\mathbf {u} _{\mathrm {COM} }t)}where the system's centre of mass position and velocity and total mass are respectivelyxCOM=∑nmnxn∑nmn,uCOM=∑nmnun∑nmn,Mtot=∑nmn.{\displaystyle {\begin{aligned}\mathbf {x} _{\mathrm {COM} }&={\frac {\sum _{n}m_{n}\mathbf {x} _{n}}{\sum _{n}m_{n}}},\\[3pt]\mathbf {u} _{\mathrm {COM} }&={\frac {\sum _{n}m_{n}\mathbf {u} _{n}}{\sum _{n}m_{n}}},\\[3pt]M_{\text{tot}}&=\sum _{n}m_{n}.\end{aligned}}} For an isolated system,Nis conserved in time, which can be seen by differentiating with respect to time. The angular momentumLis a pseudovector, butNis an "ordinary" (polar) vector, and is therefore invariant under inversion. 
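A brief numerical illustration (Python/NumPy; the mass, position and velocity values are arbitrary) of the pseudovector form, the antisymmetric-matrix form of L, and the mass moment N:

```python
import numpy as np

m = 2.0                             # illustrative mass, kg
x = np.array([1.0, 2.0, 3.0])       # position, m
u = np.array([0.5, -0.2, 0.1])      # velocity, m/s
t = 4.0                             # time, s
p = m * u

L = np.cross(x, p)                  # angular momentum pseudovector (Lx, Ly, Lz)
N = m * x - t * p                   # mass moment (polar vector)

# Antisymmetric matrix form L^{ij} = x^i p^j - x^j p^i
L_mat = np.outer(x, p) - np.outer(p, x)

print(np.allclose(L_mat, -L_mat.T))             # True: antisymmetric
# The pseudovector components sit in the matrix as Lx = L_mat[1,2], etc.
print(L[0] - L_mat[1, 2], L[1] - L_mat[2, 0], L[2] - L_mat[0, 1])   # ~0 each
print(N)
```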
The resultantNtotfor a multiparticle system has the physical visualization that, whatever the complicated motion of all the particles are, they move in such a way that the system's COM moves in a straight line. This does not necessarily mean all particles "follow" the COM, nor that all particles all move in almost the same direction simultaneously, only that the collective motion of the particles is constrained in relation to the centre of mass. In special relativity, if the particle moves with velocityurelative to the lab frame, thenE=γ(u)m0c2,p=γ(u)m0u{\displaystyle {\begin{aligned}E&=\gamma (\mathbf {u} )m_{0}c^{2},&\mathbf {p} &=\gamma (\mathbf {u} )m_{0}\mathbf {u} \end{aligned}}}whereγ(u)=11−u⋅uc2{\displaystyle \gamma (\mathbf {u} )={\frac {1}{\sqrt {1-{\frac {\mathbf {u} \cdot \mathbf {u} }{c^{2}}}}}}}is theLorentz factorandmis the mass (i.e. the rest mass) of the particle. The corresponding relativistic mass moment in terms ofm,u,p,E, in the same lab frame isN=Ec2x−pt=mγ(u)(x−ut).{\displaystyle \mathbf {N} ={\frac {E}{c^{2}}}\mathbf {x} -\mathbf {p} t=m\gamma (\mathbf {u} )(\mathbf {x} -\mathbf {u} t).} The Cartesian components areNx=mx−pxt=Ec2x−pxt=mγ(u)(x−uxt)Ny=my−pyt=Ec2y−pyt=mγ(u)(y−uyt)Nz=mz−pzt=Ec2z−pzt=mγ(u)(z−uzt){\displaystyle {\begin{aligned}N_{x}=mx-p_{x}t&={\frac {E}{c^{2}}}x-p_{x}t=m\gamma (u)(x-u_{x}t)\\N_{y}=my-p_{y}t&={\frac {E}{c^{2}}}y-p_{y}t=m\gamma (u)(y-u_{y}t)\\N_{z}=mz-p_{z}t&={\frac {E}{c^{2}}}z-p_{z}t=m\gamma (u)(z-u_{z}t)\end{aligned}}} Consider a coordinate frameF′which moves with velocityv= (v, 0, 0)relative to another frame F, along the direction of the coincidentxx′axes. The origins of the two coordinate frames coincide at timest=t′ = 0. The mass–energyE=mc2and momentum componentsp= (px,py,pz)of an object, as well as position coordinatesx= (x,y,z)and timetin frameFare transformed toE′ =m′c2,p′ = (px′,py′,pz′),x′ = (x′,y′,z′), andt′inF′according to the Lorentz transformationst′=γ(v)(t−vxc2),E′=γ(v)(E−vpx)x′=γ(v)(x−vt),px′=γ(v)(px−vEc2)y′=y,py′=pyz′=z,pz′=pz{\displaystyle {\begin{aligned}t'&=\gamma (v)\left(t-{\frac {vx}{c^{2}}}\right)\,,\quad &E'&=\gamma (v)\left(E-vp_{x}\right)\\x'&=\gamma (v)(x-vt)\,,\quad &p_{x}'&=\gamma (v)\left(p_{x}-{\frac {vE}{c^{2}}}\right)\\y'&=y\,,\quad &p_{y}'&=p_{y}\\z'&=z\,,\quad &p_{z}'&=p_{z}\\\end{aligned}}} The Lorentz factor here applies to the velocityv, the relative velocity between the frames. This is not necessarily the same as the velocityuof an object. 
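The following sketch (Python/NumPy) simply evaluates N = (E/c²)x − pt in two frames related by a boost along x, using the transformation just quoted, to show that the mass moment is frame-dependent; the helper name boost_x and all numerical values are illustrative. The transformation laws themselves are verified in the sketches that follow.

```python
import numpy as np

c = 299_792_458.0

def boost_x(v, t, r, E, p):
    """Standard Lorentz boost with velocity (v, 0, 0); returns (t', r', E', p')."""
    g = 1.0 / np.sqrt(1.0 - v**2 / c**2)
    tp = g * (t - v * r[0] / c**2)
    rp = np.array([g * (r[0] - v * t), r[1], r[2]])
    Ep = g * (E - v * p[0])
    pp = np.array([g * (p[0] - v * E / c**2), p[1], p[2]])
    return tp, rp, Ep, pp

m0 = 1.0                                 # illustrative rest mass, kg
u = np.array([0.3 * c, 0.2 * c, -0.1 * c])
gamma_u = 1.0 / np.sqrt(1.0 - np.dot(u, u) / c**2)
E, p = gamma_u * m0 * c**2, gamma_u * m0 * u

t = 2.0
r = np.array([1.0, -3.0, 2.0]) + u * t   # particle position at time t

N = (E / c**2) * r - p * t               # mass moment in frame F
tp, rp, Ep, pp = boost_x(0.5 * c, t, r, E, p)
Np = (Ep / c**2) * rp - pp * tp          # the same quantity evaluated in F'

print(N)
print(Np)        # differs from N: the mass moment is frame-dependent
```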
For the orbital 3-angular momentumLas a pseudovector, we haveLx′=y′pz′−z′py′=LxLy′=z′px′−x′pz′=γ(v)(Ly−vNz)Lz′=x′py′−y′px′=γ(v)(Lz+vNy){\displaystyle {\begin{aligned}L_{x}'&=y'p_{z}'-z'p_{y}'=L_{x}\\L_{y}'&=z'p_{x}'-x'p_{z}'=\gamma (v)(L_{y}-vN_{z})\\L_{z}'&=x'p_{y}'-y'p_{x}'=\gamma (v)(L_{z}+vN_{y})\\\end{aligned}}} For the x-componentLx′=y′pz′−z′py′=ypz−zpy=Lx{\displaystyle L_{x}'=y'p_{z}'-z'p_{y}'=yp_{z}-zp_{y}=L_{x}}the y-componentLy′=z′px′−x′pz′=zγ(px−vEc2)−γ(x−vt)pz=γ[zpx−zvEc2−xpz+vtpz]=γ[(zpx−xpz)+v(pzt−zEc2)]=γ(Ly−vNz){\displaystyle {\begin{aligned}L_{y}'&=z'p_{x}'-x'p_{z}'\\&=z\gamma \left(p_{x}-{\frac {vE}{c^{2}}}\right)-\gamma \left(x-vt\right)p_{z}\\&=\gamma \left[zp_{x}-z{\frac {vE}{c^{2}}}-xp_{z}+vtp_{z}\right]\\&=\gamma \left[\left(zp_{x}-xp_{z}\right)+v\left(p_{z}t-z{\frac {E}{c^{2}}}\right)\right]\\&=\gamma \left(L_{y}-vN_{z}\right)\end{aligned}}}and z-componentLz′=x′py′−y′px′=γ(x−vt)py−yγ(px−vEc2)=γ[xpy−vtpy−ypx+yvEc2]=γ[(xpy−ypx)+v(yEc2−tpy)]=γ(Lz+vNy){\displaystyle {\begin{aligned}L_{z}'&=x'p_{y}'-y'p_{x}'\\&=\gamma \left(x-vt\right)p_{y}-y\gamma \left(p_{x}-{\frac {vE}{c^{2}}}\right)\\&=\gamma \left[xp_{y}-vtp_{y}-yp_{x}+y{\frac {vE}{c^{2}}}\right]\\&=\gamma \left[\left(xp_{y}-yp_{x}\right)+v\left(y{\frac {E}{c^{2}}}-tp_{y}\right)\right]\\&=\gamma \left(L_{z}+vN_{y}\right)\end{aligned}}} In the second terms ofLy′andLz′, theyandzcomponents of the cross productv×Ncan be inferred by recognizingcyclic permutationsofvx=vandvy=vz= 0with the components ofN,−vNz=vzNx−vxNz=(v×N)yvNy=vxNy−vyNx=(v×N)z{\displaystyle {\begin{aligned}-vN_{z}&=v_{z}N_{x}-v_{x}N_{z}=\left(\mathbf {v} \times \mathbf {N} \right)_{y}\\vN_{y}&=v_{x}N_{y}-v_{y}N_{x}=\left(\mathbf {v} \times \mathbf {N} \right)_{z}\\\end{aligned}}} Now,Lxis parallel to the relative velocityv, and the other componentsLyandLzare perpendicular tov. The parallel–perpendicular correspondence can be facilitated by splitting the entire 3-angular momentum pseudovector into components parallel (∥) and perpendicular (⊥) tov, in each frame,L=L∥+L⊥,L′=L∥′+L⊥′.{\displaystyle \mathbf {L} =\mathbf {L} _{\parallel }+\mathbf {L} _{\perp }\,,\quad \mathbf {L} '=\mathbf {L} _{\parallel }'+\mathbf {L} _{\perp }'\,.} Then the component equations can be collected into the pseudovector equationsL∥′=L∥L⊥′=γ(v)(L⊥+v×N){\displaystyle {\begin{aligned}\mathbf {L} _{\parallel }'&=\mathbf {L} _{\parallel }\\\mathbf {L} _{\perp }'&=\gamma (\mathbf {v} )\left(\mathbf {L} _{\perp }+\mathbf {v} \times \mathbf {N} \right)\\\end{aligned}}} Therefore, the components of angular momentum along the direction of motion do not change, while the components perpendicular do change. By contrast to the transformations of space and time, time and the spatial coordinates change along the direction of motion, while those perpendicular do not. These transformations are true forallv, not just for motion along thexx′axes. 
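These component laws can be confirmed numerically: boost the event and the momentum, recompute L′ = r′ × p′ directly, and compare with the γ(v)-weighted combinations of L and N quoted above (Python/NumPy; all values illustrative):

```python
import numpy as np

c = 299_792_458.0
v = 0.6 * c                              # boost speed along x
g = 1.0 / np.sqrt(1.0 - v**2 / c**2)

# Illustrative particle data in frame F
m0 = 1.0
u = np.array([0.2 * c, -0.3 * c, 0.1 * c])
gamma_u = 1.0 / np.sqrt(1.0 - np.dot(u, u) / c**2)
E, p = gamma_u * m0 * c**2, gamma_u * m0 * u
t, r = 1.5, np.array([2.0, 1.0, -4.0])

L = np.cross(r, p)
N = (E / c**2) * r - p * t

# Boost the event and the momentum to F', then recompute L' directly
tp = g * (t - v * r[0] / c**2)
rp = np.array([g * (r[0] - v * t), r[1], r[2]])
pp = np.array([g * (p[0] - v * E / c**2), p[1], p[2]])
Lp_direct = np.cross(rp, pp)

# Component transformation law quoted above
Lp_formula = np.array([L[0], g * (L[1] - v * N[2]), g * (L[2] + v * N[1])])

print(np.allclose(Lp_direct, Lp_formula))   # True
```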
ConsideringLas a tensor, we get a similar resultL⊥′=γ(v)(L⊥+v∧N){\displaystyle \mathbf {L} _{\perp }'=\gamma (\mathbf {v} )\left(\mathbf {L} _{\perp }+\mathbf {v} \wedge \mathbf {N} \right)}wherevzNx−vxNz=(v∧N)zxvxNy−vyNx=(v∧N)xy{\displaystyle {\begin{aligned}v_{z}N_{x}-v_{x}N_{z}&=\left(\mathbf {v} \wedge \mathbf {N} \right)_{zx}\\v_{x}N_{y}-v_{y}N_{x}&=\left(\mathbf {v} \wedge \mathbf {N} \right)_{xy}\\\end{aligned}}} The boost of the dynamic mass moment along thexdirection isNx′=m′x′−px′t′=NxNy′=m′y′−py′t′=γ(v)(Ny+vLzc2)Nz′=m′z′−pz′t′=γ(v)(Nz−vLyc2){\displaystyle {\begin{aligned}N_{x}'&=m'x'-p_{x}'t'=N_{x}\\N_{y}'&=m'y'-p_{y}'t'=\gamma (v)\left(N_{y}+{\frac {vL_{z}}{c^{2}}}\right)\\N_{z}'&=m'z'-p_{z}'t'=\gamma (v)\left(N_{z}-{\frac {vL_{y}}{c^{2}}}\right)\\\end{aligned}}} For the x-componentNx′=E′c2x′−t′px′=γc2(E−vpx)γ(x−vt)−γ(t−xvc2)γ(px−vEc2)=γ2[1c2(E−vpx)(x−vt)−(t−xvc2)(px−vEc2)]=γ2[Exc2−Evtc2−vpxxc2+vpxvtc2−tpx+xvc2px+tvEc2−xvc2vEc2]=γ2[Exc2−Evtc2−vpxxc2+v2c2pxt−tpx+xvc2px+tvEc2−v2c2Exc2]=γ2[(Exc2−tpx)+v2c2(pxt−Exc2)]=γ2[1−v2c2]Nx=γ21γ2Nx{\displaystyle {\begin{aligned}N_{x}'&={\frac {E'}{c^{2}}}x'-t'p_{x}'\\&={\frac {\gamma }{c^{2}}}(E-vp_{x})\gamma (x-vt)-\gamma \left(t-{\frac {xv}{c^{2}}}\right)\gamma \left(p_{x}-{\frac {vE}{c^{2}}}\right)\\&=\gamma ^{2}\left[{\frac {1}{c^{2}}}\left(E-vp_{x}\right)(x-vt)-\left(t-{\frac {xv}{c^{2}}}\right)\left(p_{x}-{\frac {vE}{c^{2}}}\right)\right]\\&=\gamma ^{2}\left[{\frac {Ex}{c^{2}}}-{\frac {Evt}{c^{2}}}-{\frac {vp_{x}x}{c^{2}}}+{\frac {vp_{x}vt}{c^{2}}}-tp_{x}+{\frac {xv}{c^{2}}}p_{x}+t{\frac {vE}{c^{2}}}-{\frac {xv}{c^{2}}}{\frac {vE}{c^{2}}}\right]\\&=\gamma ^{2}\left[{\frac {Ex}{c^{2}}}{\cancel {-{\frac {Evt}{c^{2}}}}}{\cancel {-{\frac {vp_{x}x}{c^{2}}}}}+{\frac {v^{2}}{c^{2}}}p_{x}t-tp_{x}{\cancel {+{\frac {xv}{c^{2}}}p_{x}}}{\cancel {+t{\frac {vE}{c^{2}}}}}-{\frac {v^{2}}{c^{2}}}{\frac {Ex}{c^{2}}}\right]\\&=\gamma ^{2}\left[\left({\frac {Ex}{c^{2}}}-tp_{x}\right)+{\frac {v^{2}}{c^{2}}}\left(p_{x}t-{\frac {Ex}{c^{2}}}\right)\right]\\&=\gamma ^{2}\left[1-{\frac {v^{2}}{c^{2}}}\right]N_{x}\\&=\gamma ^{2}{\frac {1}{\gamma ^{2}}}N_{x}\end{aligned}}}the y-componentNy′=E′c2y′−t′py′=1c2γ(E−vpx)y−γ(t−xvc2)py=γ[1c2(E−vpx)y−(t−xvc2)py]=γ[1c2Ey−1c2vpxy−tpy+xvc2py]=γ[(1c2Ey−tpy)+vc2(xpy−ypx)]=γ(Ny+vc2Lz){\displaystyle {\begin{aligned}N_{y}'&={\frac {E'}{c^{2}}}y'-t'p_{y}'\\&={\frac {1}{c^{2}}}\gamma (E-vp_{x})y-\gamma \left(t-{\frac {xv}{c^{2}}}\right)p_{y}\\&=\gamma \left[{\frac {1}{c^{2}}}(E-vp_{x})y-\left(t-{\frac {xv}{c^{2}}}\right)p_{y}\right]\\&=\gamma \left[{\frac {1}{c^{2}}}Ey-{\frac {1}{c^{2}}}vp_{x}y-tp_{y}+{\frac {xv}{c^{2}}}p_{y}\right]\\&=\gamma \left[\left({\frac {1}{c^{2}}}Ey-tp_{y}\right)+{\frac {v}{c^{2}}}(xp_{y}-yp_{x})\right]\\&=\gamma \left(N_{y}+{\frac {v}{c^{2}}}L_{z}\right)\end{aligned}}}and z-componentNz′=E′c2z′−t′pz′=1c2γ(E−vpx)z−γ(t−xvc2)pz=γ[1c2(E−vpx)z−(t−xvc2)pz]=γ[1c2Ez−1c2vpzz−tpz+xvc2pz]=γ[(1c2Ez−tpz)+vc2(xpz−zpx)]=γ(Nz−vc2Ly){\displaystyle {\begin{aligned}N_{z}'&={\frac {E'}{c^{2}}}z'-t'p_{z}'\\&={\frac {1}{c^{2}}}\gamma (E-vp_{x})z-\gamma \left(t-{\frac {xv}{c^{2}}}\right)p_{z}\\&=\gamma \left[{\frac {1}{c^{2}}}(E-vp_{x})z-\left(t-{\frac {xv}{c^{2}}}\right)p_{z}\right]\\&=\gamma \left[{\frac {1}{c^{2}}}Ez-{\frac {1}{c^{2}}}vp_{z}z-tp_{z}+{\frac {xv}{c^{2}}}p_{z}\right]\\&=\gamma \left[\left({\frac {1}{c^{2}}}Ez-tp_{z}\right)+{\frac {v}{c^{2}}}(xp_{z}-zp_{x})\right]\\&=\gamma \left(N_{z}-{\frac {v}{c^{2}}}L_{y}\right)\end{aligned}}} Collecting parallel and perpendicular components as 
beforeN∥′=N∥N⊥′=γ(v)(N⊥−1c2v×L){\displaystyle {\begin{aligned}\mathbf {N} _{\parallel }'&=\mathbf {N} _{\parallel }\\\mathbf {N} _{\perp }'&=\gamma (\mathbf {v} )\left(\mathbf {N} _{\perp }-{\frac {1}{c^{2}}}\mathbf {v} \times \mathbf {L} \right)\\\end{aligned}}} Again, the components parallel to the direction of relative motion do not change, those perpendicular do change. So far these are only the parallel and perpendicular decompositions of the vectors. The transformations on the full vectors can be constructed from them as follows (throughout hereLis a pseudovector for concreteness and compatibility with vector algebra). Introduce aunit vectorin the direction ofv, given byn=v/v. The parallel components are given by thevector projectionofLorNintonL∥=(L⋅n)n,N∥=(N⋅n)n{\displaystyle \mathbf {L} _{\parallel }=(\mathbf {L} \cdot \mathbf {n} )\mathbf {n} \,,\quad \mathbf {N} _{\parallel }=(\mathbf {N} \cdot \mathbf {n} )\mathbf {n} }while the perpendicular component byvector rejectionofLorNfromnL⊥=L−(L⋅n)n,N⊥=N−(N⋅n)n{\displaystyle \mathbf {L} _{\perp }=\mathbf {L} -(\mathbf {L} \cdot \mathbf {n} )\mathbf {n} \,,\quad \mathbf {N} _{\perp }=\mathbf {N} -(\mathbf {N} \cdot \mathbf {n} )\mathbf {n} }and the transformations areL′=γ(v)(L+vn×N)−(γ(v)−1)(L⋅n)nN′=γ(v)(N−vc2n×L)−(γ(v)−1)(N⋅n)n{\displaystyle {\begin{aligned}\mathbf {L} '&=\gamma (\mathbf {v} )(\mathbf {L} +v\mathbf {n} \times \mathbf {N} )-(\gamma (\mathbf {v} )-1)(\mathbf {L} \cdot \mathbf {n} )\mathbf {n} \\\mathbf {N} '&=\gamma (\mathbf {v} )\left(\mathbf {N} -{\frac {v}{c^{2}}}\mathbf {n} \times \mathbf {L} \right)-(\gamma (\mathbf {v} )-1)(\mathbf {N} \cdot \mathbf {n} )\mathbf {n} \\\end{aligned}}}or reinstatingv=vn,L′=γ(v)(L+v×N)−(γ(v)−1)(L⋅v)vv2N′=γ(v)(N−1c2v×L)−(γ(v)−1)(N⋅v)vv2{\displaystyle {\begin{aligned}\mathbf {L} '&=\gamma (\mathbf {v} )(\mathbf {L} +\mathbf {v} \times \mathbf {N} )-(\gamma (\mathbf {v} )-1){\frac {(\mathbf {L} \cdot \mathbf {v} )\mathbf {v} }{v^{2}}}\\\mathbf {N} '&=\gamma (\mathbf {v} )\left(\mathbf {N} -{\frac {1}{c^{2}}}\mathbf {v} \times \mathbf {L} \right)-(\gamma (\mathbf {v} )-1){\frac {(\mathbf {N} \cdot \mathbf {v} )\mathbf {v} }{v^{2}}}\\\end{aligned}}} These are very similar to the Lorentz transformations of theelectric fieldEandmagnetic fieldB, seeClassical electromagnetism and special relativity. Alternatively, starting from the vector Lorentz transformations of time, space, energy, and momentum, for a boost with velocityv,t′=γ(v)(t−v⋅rc2),r′=r+γ(v)−1v2(r⋅v)v−γ(v)tv,p′=p+γ(v)−1v2(p⋅v)v−γ(v)Ec2v,E′=γ(v)(E−v⋅p),{\displaystyle {\begin{aligned}t'&=\gamma (\mathbf {v} )\left(t-{\frac {\mathbf {v} \cdot \mathbf {r} }{c^{2}}}\right)\,,\\\mathbf {r} '&=\mathbf {r} +{\frac {\gamma (\mathbf {v} )-1}{v^{2}}}(\mathbf {r} \cdot \mathbf {v} )\mathbf {v} -\gamma (\mathbf {v} )t\mathbf {v} \,,\\\mathbf {p} '&=\mathbf {p} +{\frac {\gamma (\mathbf {v} )-1}{v^{2}}}(\mathbf {p} \cdot \mathbf {v} )\mathbf {v} -\gamma (\mathbf {v} ){\frac {E}{c^{2}}}\mathbf {v} \,,\\E'&=\gamma (\mathbf {v} )\left(E-\mathbf {v} \cdot \mathbf {p} \right)\,,\\\end{aligned}}}inserting these into the definitionsL′=r′×p′,N′=E′c2r′−t′p′{\displaystyle {\begin{aligned}\mathbf {L} '&=\mathbf {r} '\times \mathbf {p} '\,,&\mathbf {N} '&={\frac {E'}{c^{2}}}\mathbf {r} '-t'\mathbf {p} '\end{aligned}}}gives the transformations. 
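A numerical cross-check of the closed-form laws for a boost in an arbitrary direction, against L′ and N′ computed directly from the boosted r′, p′, E′, t′ (Python/NumPy; the boost velocity and particle data are illustrative):

```python
import numpy as np

c = 299_792_458.0
v = np.array([0.3 * c, -0.2 * c, 0.4 * c])       # boost velocity, arbitrary direction
v2 = np.dot(v, v)
g = 1.0 / np.sqrt(1.0 - v2 / c**2)

# Illustrative particle data in frame F
m0 = 1.0
u = np.array([0.1 * c, 0.25 * c, -0.2 * c])
gu = 1.0 / np.sqrt(1.0 - np.dot(u, u) / c**2)
E, p = gu * m0 * c**2, gu * m0 * u
t, r = 0.8, np.array([-1.0, 2.5, 0.5])

L = np.cross(r, p)
N = (E / c**2) * r - p * t

# Direct route: boost (t, r, E, p) with the vector Lorentz transformations above
tp = g * (t - np.dot(v, r) / c**2)
rp = r + (g - 1.0) / v2 * np.dot(r, v) * v - g * t * v
Ep = g * (E - np.dot(v, p))
pp = p + (g - 1.0) / v2 * np.dot(p, v) * v - g * (E / c**2) * v
Lp_direct = np.cross(rp, pp)
Np_direct = (Ep / c**2) * rp - pp * tp

# Closed-form transformation laws quoted above
Lp = g * (L + np.cross(v, N)) - (g - 1.0) * np.dot(L, v) * v / v2
Np = g * (N - np.cross(v, L) / c**2) - (g - 1.0) * np.dot(N, v) * v / v2

print(np.allclose(Lp_direct, Lp), np.allclose(Np_direct, Np))   # True True
```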
The orbital angular momentum in each frame areL′=r′×p′,L=r×p{\displaystyle \mathbf {L} '=\mathbf {r} '\times \mathbf {p} '\,,\quad \mathbf {L} =\mathbf {r} \times \mathbf {p} }so taking the cross product of the transformationsL′=[r+(γ−1)(r⋅n)n−γtvn]×[p+(γ−1)(p⋅n)n−γEc2vn]=[r+(γ−1)(r⋅n)n−γtvn]×p+(γ−1)(p⋅n)[r+(γ−1)(r⋅n)n−γtvn]×n−γEc2v[r+(γ−1)(r⋅n)n−γtvn]×n=r×p+(γ−1)(r⋅n)n×p−γtvn×p+(γ−1)(p⋅n)r×n−γEc2vr×n=L+[γ−1v2(r⋅v)−γt]v×p+[γ−1v2(p⋅v)−γEc2]r×v=L+v×[(γ−1v2(r⋅v)−γt)p−(γ−1v2(p⋅v)−γEc2)r]=L+v×[γ−1v2((r⋅v)p−(p⋅v)r)+γ(Ec2r−tp)]{\displaystyle {\begin{aligned}\mathbf {L} '&=\left[\mathbf {r} +(\gamma -1)(\mathbf {r} \cdot \mathbf {n} )\mathbf {n} -\gamma tv\mathbf {n} \right]\times \left[\mathbf {p} +(\gamma -1)(\mathbf {p} \cdot \mathbf {n} )\mathbf {n} -\gamma {\frac {E}{c^{2}}}v\mathbf {n} \right]\\&=[\mathbf {r} +(\gamma -1)(\mathbf {r} \cdot \mathbf {n} )\mathbf {n} -\gamma tv\mathbf {n} ]\times \mathbf {p} +(\gamma -1)(\mathbf {p} \cdot \mathbf {n} )[\mathbf {r} +(\gamma -1)(\mathbf {r} \cdot \mathbf {n} )\mathbf {n} -\gamma tv\mathbf {n} ]\times \mathbf {n} -\gamma {\frac {E}{c^{2}}}v[\mathbf {r} +(\gamma -1)(\mathbf {r} \cdot \mathbf {n} )\mathbf {n} -\gamma tv\mathbf {n} ]\times \mathbf {n} \\&=\mathbf {r} \times \mathbf {p} +(\gamma -1)(\mathbf {r} \cdot \mathbf {n} )\mathbf {n} \times \mathbf {p} -\gamma tv\mathbf {n} \times \mathbf {p} +(\gamma -1)(\mathbf {p} \cdot \mathbf {n} )\mathbf {r} \times \mathbf {n} -\gamma {\frac {E}{c^{2}}}v\mathbf {r} \times \mathbf {n} \\&=\mathbf {L} +\left[{\frac {\gamma -1}{v^{2}}}(\mathbf {r} \cdot \mathbf {v} )-\gamma t\right]\mathbf {v} \times \mathbf {p} +\left[{\frac {\gamma -1}{v^{2}}}(\mathbf {p} \cdot \mathbf {v} )-\gamma {\frac {E}{c^{2}}}\right]\mathbf {r} \times \mathbf {v} \\&=\mathbf {L} +\mathbf {v} \times \left[\left({\frac {\gamma -1}{v^{2}}}(\mathbf {r} \cdot \mathbf {v} )-\gamma t\right)\mathbf {p} -\left({\frac {\gamma -1}{v^{2}}}(\mathbf {p} \cdot \mathbf {v} )-\gamma {\frac {E}{c^{2}}}\right)\mathbf {r} \right]\\&=\mathbf {L} +\mathbf {v} \times \left[{\frac {\gamma -1}{v^{2}}}\left((\mathbf {r} \cdot \mathbf {v} )\mathbf {p} -(\mathbf {p} \cdot \mathbf {v} )\mathbf {r} \right)+\gamma \left({\frac {E}{c^{2}}}\mathbf {r} -t\mathbf {p} \right)\right]\end{aligned}}} Using thetriple productrulea×(b×c)=b(a⋅c)−c(a⋅b)(a×b)×c=(c⋅a)b−(c⋅b)a{\displaystyle {\begin{aligned}\mathbf {a} \times (\mathbf {b} \times \mathbf {c} )&=\mathbf {b} (\mathbf {a} \cdot \mathbf {c} )-\mathbf {c} (\mathbf {a} \cdot \mathbf {b} )\\(\mathbf {a} \times \mathbf {b} )\times \mathbf {c} &=(\mathbf {c} \cdot \mathbf {a} )\mathbf {b} -(\mathbf {c} \cdot \mathbf {b} )\mathbf {a} \\\end{aligned}}}gives(r×p)×v=(v⋅r)p−(v⋅p)r(v⋅r)p−(v⋅p)r=L×v{\displaystyle {\begin{aligned}(\mathbf {r} \times \mathbf {p} )\times \mathbf {v} &=(\mathbf {v} \cdot \mathbf {r} )\mathbf {p} -(\mathbf {v} \cdot \mathbf {p} )\mathbf {r} \\(\mathbf {v} \cdot \mathbf {r} )\mathbf {p} -(\mathbf {v} \cdot \mathbf {p} )\mathbf {r} &=\mathbf {L} \times \mathbf {v} \\\end{aligned}}}and along with the definition ofNwe haveL′=L+v×[γ−1v2L×v+γN]{\displaystyle \mathbf {L} '=\mathbf {L} +\mathbf {v} \times \left[{\frac {\gamma -1}{v^{2}}}\mathbf {L} \times \mathbf {v} +\gamma \mathbf {N} \right]} Reinstating the unit vectorn,L′=L+n×[(γ−1)L×n+vγN]{\displaystyle \mathbf {L} '=\mathbf {L} +\mathbf {n} \times \left[(\gamma -1)\mathbf {L} \times \mathbf {n} +v\gamma \mathbf {N} \right]} Since in the transformation there is a cross product on the left withn,n×(L×n)=L(n⋅n)−n(n⋅L)=L−n(n⋅L){\displaystyle \mathbf {n} 
\times (\mathbf {L} \times \mathbf {n} )=\mathbf {L} (\mathbf {n} \cdot \mathbf {n} )-\mathbf {n} (\mathbf {n} \cdot \mathbf {L} )=\mathbf {L} -\mathbf {n} (\mathbf {n} \cdot \mathbf {L} )}thenL′=L+(γ−1)(L−n(n⋅L))+βγcn×N=γ(L+vn×N)−(γ−1)n(n⋅L){\displaystyle \mathbf {L} '=\mathbf {L} +(\gamma -1)(\mathbf {L} -\mathbf {n} (\mathbf {n} \cdot \mathbf {L} ))+\beta \gamma c\mathbf {n} \times \mathbf {N} =\gamma (\mathbf {L} +v\mathbf {n} \times \mathbf {N} )-(\gamma -1)\mathbf {n} (\mathbf {n} \cdot \mathbf {L} )} In relativistic mechanics, the COM boost and orbital 3-space angular momentum of a rotating object are combined into a four-dimensionalbivectorin terms of thefour-positionXand thefour-momentumPof the object[4][5]M=X∧P{\displaystyle \mathbf {M} =\mathbf {X} \wedge \mathbf {P} } In componentsMαβ=XαPβ−XβPα{\displaystyle M^{\alpha \beta }=X^{\alpha }P^{\beta }-X^{\beta }P^{\alpha }}which are six independent quantities altogether. Since the components ofXandPare frame-dependent, so isM. Three componentsMij=xipj−xjpi=Lij{\displaystyle M^{ij}=x^{i}p^{j}-x^{j}p^{i}=L^{ij}}are those of the familiar classical 3-space orbital angular momentum, and the other threeM0i=x0pi−xip0=c(tpi−xiEc2)=−cNi{\displaystyle M^{0i}=x^{0}p^{i}-x^{i}p^{0}=c\,\left(tp^{i}-x^{i}{\frac {E}{c^{2}}}\right)=-cN^{i}}are the relativistic mass moment, multiplied by−c. The tensor is antisymmetric;Mαβ=−Mβα{\displaystyle M^{\alpha \beta }=-M^{\beta \alpha }} The components of the tensor can be systematically displayed as amatrixM=(M00M01M02M03M10M11M12M13M20M21M22M23M30M31M32M33)=(0−N1c−N2c−N3cN1c0L12−L31N2c−L120L23N3cL31−L230)=(0−NcNTcx∧p){\displaystyle {\begin{aligned}\mathbf {M} &={\begin{pmatrix}M^{00}&M^{01}&M^{02}&M^{03}\\M^{10}&M^{11}&M^{12}&M^{13}\\M^{20}&M^{21}&M^{22}&M^{23}\\M^{30}&M^{31}&M^{32}&M^{33}\end{pmatrix}}\\[3pt]&=\left({\begin{array}{c|ccc}0&-N^{1}c&-N^{2}c&-N^{3}c\\\hline N^{1}c&0&L^{12}&-L^{31}\\N^{2}c&-L^{12}&0&L^{23}\\N^{3}c&L^{31}&-L^{23}&0\end{array}}\right)\\[3pt]&=\left({\begin{array}{c|c}0&-\mathbf {N} c\\\hline \mathbf {N} ^{\mathrm {T} }c&\mathbf {x} \wedge \mathbf {p} \\\end{array}}\right)\end{aligned}}}in which the last array is ablock matrixformed by treatingNas arow vectorwhichmatrix transposesto thecolumn vectorNT, andx∧pas a 3 × 3antisymmetric matrix. The lines are merely inserted to show where the blocks are. Again, this tensor is additive: the total angular momentum of a system is the sum of the angular momentum tensors for each constituent of the system:Mtot=∑nMn=∑nXn∧Pn.{\displaystyle \mathbf {M} _{\text{tot}}=\sum _{n}\mathbf {M} _{n}=\sum _{n}\mathbf {X} _{n}\wedge \mathbf {P} _{n}\,.} Each of the six components forms a conserved quantity when aggregated with the corresponding components for other objects and fields. 
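A short sketch (Python/NumPy, illustrative values) that builds M^{αβ} = X^αP^β − X^βP^α and confirms the block structure quoted above: the time–space block carries −cN and the space–space block carries the components of L:

```python
import numpy as np

c = 299_792_458.0

m0 = 1.0
u = np.array([0.2 * c, 0.1 * c, -0.3 * c])
gu = 1.0 / np.sqrt(1.0 - np.dot(u, u) / c**2)
E, p = gu * m0 * c**2, gu * m0 * u
t, r = 1.2, np.array([3.0, -1.0, 2.0])

X = np.concatenate(([c * t], r))         # four-position (ct, x, y, z)
P = np.concatenate(([E / c], p))         # four-momentum (E/c, p)

M = np.outer(X, P) - np.outer(P, X)      # M^{ab} = X^a P^b - X^b P^a

L = np.cross(r, p)
N = (E / c**2) * r - p * t

print(np.allclose(M, -M.T))              # True: antisymmetric
print(np.allclose(M[0, 1:], -c * N))     # True: time-space block is -cN
print(np.allclose([M[2, 3], M[3, 1], M[1, 2]], L))   # True: space-space block holds L
```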
The angular momentum tensorMis indeed a tensor, the components change according to a Lorentz transformation matrix Λ, as illustrated in the usual way by tensor index notationM′αβ=X′αP′β−X′βP′α=ΛαγXγΛβδPδ−ΛβδXδΛαγPγ=ΛαγΛβδ(XγPδ−XδPγ)=ΛαγΛβδMγδ,{\displaystyle {\begin{aligned}{M'}^{\alpha \beta }&={X'}^{\alpha }{P'}^{\beta }-{X'}^{\beta }{P'}^{\alpha }\\&={\Lambda ^{\alpha }}_{\gamma }X^{\gamma }{\Lambda ^{\beta }}_{\delta }P^{\delta }-{\Lambda ^{\beta }}_{\delta }X^{\delta }{\Lambda ^{\alpha }}_{\gamma }P^{\gamma }\\&={\Lambda ^{\alpha }}_{\gamma }{\Lambda ^{\beta }}_{\delta }\left(X^{\gamma }P^{\delta }-X^{\delta }P^{\gamma }\right)\\&={\Lambda ^{\alpha }}_{\gamma }{\Lambda ^{\beta }}_{\delta }M^{\gamma \delta }\\\end{aligned}},}where, for a boost (without rotations) with normalized velocityβ=v/c, the Lorentz transformation matrix elements areΛ00=γΛi0=Λ0i=−γβiΛij=δij+γ−1β2βiβj{\displaystyle {\begin{aligned}{\Lambda ^{0}}_{0}&=\gamma \\{\Lambda ^{i}}_{0}&={\Lambda ^{0}}_{i}=-\gamma \beta ^{i}\\{\Lambda ^{i}}_{j}&={\delta ^{i}}_{j}+{\frac {\gamma -1}{\beta ^{2}}}\beta ^{i}\beta _{j}\end{aligned}}}and the covariantβiand contravariantβicomponents ofβare the same since these are just parameters. In other words, one can Lorentz-transform the four position and four momentum separately, and then antisymmetrize those newly found components to obtain the angular momentum tensor in the new frame. The transformation of boost components are M′k0=ΛkμΛ0νMμν=Λk0Λ00M00+ΛkiΛ00Mi0+Λk0Λ0jM0j+ΛkiΛ0jMij=(ΛkiΛ00−Λk0Λ0i)Mi0+ΛkiΛ0jMij{\displaystyle {\begin{aligned}M'^{k0}&={\Lambda ^{k}}_{\mu }{\Lambda ^{0}}_{\nu }M^{\mu \nu }\\&={\Lambda ^{k}}_{0}{\Lambda ^{0}}_{0}M^{00}+{\Lambda ^{k}}_{i}{\Lambda ^{0}}_{0}M^{i0}+{\Lambda ^{k}}_{0}{\Lambda ^{0}}_{j}M^{0j}+{\Lambda ^{k}}_{i}{\Lambda ^{0}}_{j}M^{ij}\\&=\left({\Lambda ^{k}}_{i}{\Lambda ^{0}}_{0}-{\Lambda ^{k}}_{0}{\Lambda ^{0}}_{i}\right)M^{i0}+{\Lambda ^{k}}_{i}{\Lambda ^{0}}_{j}M^{ij}\\\end{aligned}}}as for the orbital angular momentumM′kℓ=ΛkμΛℓνMμν=Λk0Λℓ0M00+ΛkiΛℓ0Mi0+Λk0ΛℓjM0j+ΛkiΛℓjMij=(ΛkiΛℓ0−Λk0Λℓi)Mi0+ΛkiΛℓjMij{\displaystyle {\begin{aligned}{M'}^{k\ell }&={\Lambda ^{k}}_{\mu }{\Lambda ^{\ell }}_{\nu }M^{\mu \nu }\\&={\Lambda ^{k}}_{0}{\Lambda ^{\ell }}_{0}M^{00}+{\Lambda ^{k}}_{i}{\Lambda ^{\ell }}_{0}M^{i0}+{\Lambda ^{k}}_{0}{\Lambda ^{\ell }}_{j}M^{0j}+{\Lambda ^{k}}_{i}{\Lambda ^{\ell }}_{j}M^{ij}\\&=\left({\Lambda ^{k}}_{i}{\Lambda ^{\ell }}_{0}-{\Lambda ^{k}}_{0}{\Lambda ^{\ell }}_{i}\right)M^{i0}+{\Lambda ^{k}}_{i}{\Lambda ^{\ell }}_{j}M^{ij}\end{aligned}}} The expressions in the Lorentz transformation entries areΛkiΛℓ0−Λk0Λℓi=[δki+γ−1β2βkβi](−γβℓ)−(−γβk)[δℓi+γ−1β2βℓβi]=γ[βkδℓi−βℓδki]ΛkiΛ00−Λk0Λ0i=[δki+γ−1β2βkβi]γ−(−γβk)(−γβi)=γ[δki+γ−1β2βkβi−γβkβi]=γ[δki+(γ−1β2−γ)βkβi]=γ[δki+(γ−γβ2−1β2)βkβi]=γ[δki+(γ−1−1β2)βkβi]=γδki−[γ−1β2]βkβiΛkiΛℓj=[δki+γ−1β2βkβi][δℓj+γ−1β2βℓβj]=δkiδℓj+γ−1β2δkiβℓβj+γ−1β2βkβiδℓj+γ−1β2γ−1β2βℓβjβkβi{\displaystyle {\begin{aligned}{\Lambda ^{k}}_{i}{\Lambda ^{\ell }}_{0}-{\Lambda ^{k}}_{0}{\Lambda ^{\ell }}_{i}&=\left[{\delta ^{k}}_{i}+{\frac {\gamma -1}{\beta ^{2}}}\beta ^{k}\beta _{i}\right]\left(-\gamma \beta ^{\ell }\right)-\left(-\gamma \beta ^{k}\right)\left[{\delta ^{\ell }}_{i}+{\frac {\gamma -1}{\beta ^{2}}}\beta ^{\ell }\beta _{i}\right]\\&=\gamma \left[\beta ^{k}{\delta ^{\ell }}_{i}-\beta ^{\ell }{\delta ^{k}}_{i}\right]\\{\Lambda ^{k}}_{i}{\Lambda ^{0}}_{0}-{\Lambda ^{k}}_{0}{\Lambda ^{0}}_{i}&=\left[{\delta ^{k}}_{i}+{\frac {\gamma -1}{\beta ^{2}}}\beta ^{k}\beta _{i}\right]\gamma -(-\gamma \beta ^{k})(-\gamma \beta 
^{i})\\&=\gamma \left[{\delta ^{k}}_{i}+{\frac {\gamma -1}{\beta ^{2}}}\beta ^{k}\beta _{i}-\gamma \beta ^{k}\beta ^{i}\right]\\&=\gamma \left[{\delta ^{k}}_{i}+\left({\frac {\gamma -1}{\beta ^{2}}}-\gamma \right)\beta ^{k}\beta _{i}\right]\\&=\gamma \left[{\delta ^{k}}_{i}+\left({\frac {\gamma -\gamma \beta ^{2}-1}{\beta ^{2}}}\right)\beta ^{k}\beta _{i}\right]\\&=\gamma \left[{\delta ^{k}}_{i}+\left({\frac {\gamma ^{-1}-1}{\beta ^{2}}}\right)\beta ^{k}\beta _{i}\right]\\&=\gamma {\delta ^{k}}_{i}-\left[{\frac {\gamma -1}{\beta ^{2}}}\right]\beta ^{k}\beta _{i}\\{\Lambda ^{k}}_{i}{\Lambda ^{\ell }}_{j}&=\left[{\delta ^{k}}_{i}+{\frac {\gamma -1}{\beta ^{2}}}\beta ^{k}\beta _{i}\right]\left[{\delta ^{\ell }}_{j}+{\frac {\gamma -1}{\beta ^{2}}}\beta ^{\ell }\beta _{j}\right]\\&={\delta ^{k}}_{i}{\delta ^{\ell }}_{j}+{\frac {\gamma -1}{\beta ^{2}}}{\delta ^{k}}_{i}\beta ^{\ell }\beta _{j}+{\frac {\gamma -1}{\beta ^{2}}}\beta ^{k}\beta _{i}{\delta ^{\ell }}_{j}+{\frac {\gamma -1}{\beta ^{2}}}{\frac {\gamma -1}{\beta ^{2}}}\beta ^{\ell }\beta _{j}\beta ^{k}\beta _{i}\end{aligned}}}givescN′k=(ΛkiΛ00−Λk0Λ0i)cNi+ΛkiΛ0jεijnLn=[γδki−(γ−1β2)βkβi]cNi+−γβj[δki+γ−1β2βkβi]εijnLn=γcNk−(γ−1β2)βk(βicNi)−γβjδkiεijnLn−γγ−1β2βjβkβiεijnLn=γcNk−(γ−1β2)βk(βicNi)−γβjεkjnLn{\displaystyle {\begin{aligned}cN'^{k}&=\left({\Lambda ^{k}}_{i}{\Lambda ^{0}}_{0}-{\Lambda ^{k}}_{0}{\Lambda ^{0}}_{i}\right)cN^{i}+{\Lambda ^{k}}_{i}{\Lambda ^{0}}_{j}\varepsilon ^{ijn}L_{n}\\&=\left[\gamma {\delta ^{k}}_{i}-\left({\frac {\gamma -1}{\beta ^{2}}}\right)\beta ^{k}\beta _{i}\right]cN^{i}+-\gamma \beta ^{j}\left[{\delta ^{k}}_{i}+{\frac {\gamma -1}{\beta ^{2}}}\beta ^{k}\beta _{i}\right]\varepsilon ^{ijn}L_{n}\\&=\gamma cN^{k}-\left({\frac {\gamma -1}{\beta ^{2}}}\right)\beta ^{k}\left(\beta _{i}cN^{i}\right)-\gamma \beta ^{j}{\delta ^{k}}_{i}\varepsilon ^{ijn}L_{n}-\gamma {\frac {\gamma -1}{\beta ^{2}}}\beta ^{j}\beta ^{k}\beta _{i}\varepsilon ^{ijn}L_{n}\\&=\gamma cN^{k}-\left({\frac {\gamma -1}{\beta ^{2}}}\right)\beta ^{k}\left(\beta _{i}cN^{i}\right)-\gamma \beta ^{j}\varepsilon ^{kjn}L_{n}\\\end{aligned}}}or in vector form, dividing bycN′=γN−(γ−1β2)β(β⋅N)−1cγβ×L{\displaystyle \mathbf {N} '=\gamma \mathbf {N} -\left({\frac {\gamma -1}{\beta ^{2}}}\right){\boldsymbol {\beta }}\left({\boldsymbol {\beta }}\cdot \mathbf {N} \right)-{\frac {1}{c}}\gamma {\boldsymbol {\beta }}\times \mathbf {L} }or reinstatingβ=v/c,N′=γN−(γ−1v2)v(v⋅N)−γv×L{\displaystyle \mathbf {N} '=\gamma \mathbf {N} -\left({\frac {\gamma -1}{v^{2}}}\right)\mathbf {v} \left(\mathbf {v} \cdot \mathbf {N} \right)-\gamma \mathbf {v} \times \mathbf {L} }andL′kℓ=(ΛkiΛℓ0−Λk0Λℓi)cNi+ΛkiΛℓjLij=γc(βkδℓi−βℓδki)Ni+[δkiδℓj+γ−1β2δkiβℓβj+γ−1β2βkβiδℓj+γ−1β2γ−1β2βℓβjβkβi]Lij=γc(βkNℓ−βℓNk)+Lkℓ+γ−1β2βℓβjLkj+γ−1β2βkβiLiℓ{\displaystyle {\begin{aligned}L'^{k\ell }&=\left({\Lambda ^{k}}_{i}{\Lambda ^{\ell }}_{0}-{\Lambda ^{k}}_{0}{\Lambda ^{\ell }}_{i}\right)cN^{i}+{\Lambda ^{k}}_{i}{\Lambda ^{\ell }}_{j}L^{ij}\\&=\gamma c\left(\beta ^{k}{\delta ^{\ell }}_{i}-\beta ^{\ell }{\delta ^{k}}_{i}\right)N^{i}+\left[{\delta ^{k}}_{i}{\delta ^{\ell }}_{j}+{\frac {\gamma -1}{\beta ^{2}}}{\delta ^{k}}_{i}\beta ^{\ell }\beta _{j}+{\frac {\gamma -1}{\beta ^{2}}}\beta ^{k}\beta _{i}{\delta ^{\ell }}_{j}+{\frac {\gamma -1}{\beta ^{2}}}{\frac {\gamma -1}{\beta ^{2}}}\beta ^{\ell }\beta _{j}\beta ^{k}\beta _{i}\right]L^{ij}\\&=\gamma c\left(\beta ^{k}N^{\ell }-\beta ^{\ell }N^{k}\right)+L^{k\ell }+{\frac {\gamma -1}{\beta ^{2}}}\beta ^{\ell }\beta _{j}L^{kj}+{\frac {\gamma -1}{\beta ^{2}}}\beta 
^{k}\beta _{i}L^{i\ell }\\\end{aligned}}}or converting to pseudovector formεkℓnLn′=γc(βkNℓ−βℓNk)+εkℓnLn+γ−1β2(βℓβjεkjnLn−βkβiεℓinLn){\displaystyle {\begin{aligned}\varepsilon ^{k\ell n}L'_{n}&=\gamma c\left(\beta ^{k}N^{\ell }-\beta ^{\ell }N^{k}\right)+\varepsilon ^{k\ell n}L_{n}+{\frac {\gamma -1}{\beta ^{2}}}\left(\beta ^{\ell }\beta _{j}\varepsilon ^{kjn}L_{n}-\beta ^{k}\beta _{i}\varepsilon ^{\ell in}L_{n}\right)\\\end{aligned}}}in vector notationL′=γcβ×N+L+γ−1β2β×(β×L){\displaystyle \mathbf {L} '=\gamma c{\boldsymbol {\beta }}\times \mathbf {N} +\mathbf {L} +{\frac {\gamma -1}{\beta ^{2}}}{\boldsymbol {\beta }}\times ({\boldsymbol {\beta }}\times \mathbf {L} )}or reinstatingβ=v/c,L′=γv×N+L+γ−1v2v×(v×L){\displaystyle \mathbf {L} '=\gamma \mathbf {v} \times \mathbf {N} +\mathbf {L} +{\frac {\gamma -1}{v^{2}}}\mathbf {v} \times \left(\mathbf {v} \times \mathbf {L} \right)} For a particle moving in a curve, thecross productof itsangular velocityω(a pseudovector) and positionxgive its tangential velocityu=ω×x{\displaystyle \mathbf {u} ={\boldsymbol {\omega }}\times \mathbf {x} } which cannot exceed a magnitude ofc, since in SR the translational velocity of any massive object cannot exceed thespeed of lightc. Mathematically this constraint is0 ≤ |u| <c, the vertical bars denote themagnitudeof the vector. If the angle betweenωandxisθ(assumed to be nonzero, otherwiseuwould be zero corresponding to no motion at all), then|u| = |ω| |x| sinθand the angular velocity is restricted by0≤|ω|<c|x|sin⁡θ{\displaystyle 0\leq |{\boldsymbol {\omega }}|<{\frac {c}{|\mathbf {x} |\sin \theta }}} The maximum angular velocity of any massive object therefore depends on the size of the object. For a given |x|, the minimum upper limit occurs whenωandxare perpendicular, so thatθ=π/2andsinθ= 1. For a rotatingrigid bodyrotating with an angular velocityω, theuis tangential velocity at a pointxinside the object. For every point in the object, there is a maximum angular velocity. The angular velocity (pseudovector) is related to the angular momentum (pseudovector) through themoment of inertiatensorIL=I⋅ω⇌Li=Iijωj{\displaystyle \mathbf {L} =\mathbf {I} \cdot {\boldsymbol {\omega }}\quad \rightleftharpoons \quad L_{i}=I_{ij}\omega _{j}}(the dot·denotestensor contractionon one index). The relativistic angular momentum is also limited by the size of the object. A particle may have a "built-in" angular momentum independent of its motion, calledspinand denoteds. It is a 3d pseudovector like orbital angular momentumL. The spin has a correspondingspin magnetic moment, so if the particle is subject to interactions (likeelectromagnetic fieldsorspin-orbit coupling), the direction of the particle's spin vector will change, but its magnitude will be constant. The extension to special relativity is straightforward.[6]For somelab frameF, let F′ be the rest frame of the particle and suppose the particle moves with constant 3-velocityu. Then F′ is boosted with the same velocity and the Lorentz transformations apply as usual; it is more convenient to useβ=u/c. 
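The tensor transformation can be checked numerically by applying the boost matrix Λ (with the components quoted above) as M′ = ΛMΛᵀ and comparing the extracted blocks with the closed-form vector laws L′ = γ(L + v×N) − (γ−1)(L·v)v/v² and N′ = γ(N − v×L/c²) − (γ−1)(N·v)v/v² obtained earlier from the parallel–perpendicular split (Python/NumPy; boost_matrix is an illustrative helper and all numbers are arbitrary):

```python
import numpy as np

c = 299_792_458.0

def boost_matrix(v):
    """4x4 Lorentz boost (no rotation) for velocity v, acting on (ct, x, y, z)."""
    beta = np.asarray(v) / c
    b2 = np.dot(beta, beta)
    g = 1.0 / np.sqrt(1.0 - b2)
    Lam = np.eye(4)
    Lam[0, 0] = g
    Lam[0, 1:] = Lam[1:, 0] = -g * beta
    Lam[1:, 1:] += (g - 1.0) / b2 * np.outer(beta, beta)
    return Lam

# Illustrative particle data in frame F
m0, t, r = 1.0, 0.7, np.array([1.0, 2.0, -1.5])
u = np.array([0.25 * c, -0.1 * c, 0.2 * c])
gu = 1.0 / np.sqrt(1.0 - np.dot(u, u) / c**2)
E, p = gu * m0 * c**2, gu * m0 * u

X = np.concatenate(([c * t], r))
P = np.concatenate(([E / c], p))
M = np.outer(X, P) - np.outer(P, X)

v = np.array([0.4 * c, 0.3 * c, -0.1 * c])
Lam = boost_matrix(v)
Mp = Lam @ M @ Lam.T                     # M'^{ab} = Lam^a_c Lam^b_d M^{cd}

# Read off the boosted mass moment and angular momentum from the blocks
Np = -Mp[0, 1:] / c
Lp = np.array([Mp[2, 3], Mp[3, 1], Mp[1, 2]])

# Compare with the closed-form vector transformation laws
L = np.cross(r, p)
N = (E / c**2) * r - p * t
v2 = np.dot(v, v)
g = 1.0 / np.sqrt(1.0 - v2 / c**2)
Lp_formula = g * (L + np.cross(v, N)) - (g - 1.0) * np.dot(L, v) * v / v2
Np_formula = g * (N - np.cross(v, L) / c**2) - (g - 1.0) * np.dot(N, v) * v / v2

print(np.allclose(Lp, Lp_formula), np.allclose(Np, Np_formula))   # True True
```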
As a four-vector in special relativity, the four-spinSgenerally takes the usual form of a four-vector with a timelike componentstand spatial componentss, in the lab frameS≡(S0,S1,S2,S3)=(st,sx,sy,sz){\displaystyle \mathbf {S} \equiv \left(S^{0},S^{1},S^{2},S^{3}\right)=(s_{t},s_{x},s_{y},s_{z})}although in the rest frame of the particle, it is defined so the timelike component is zero and the spatial components are those of particle's actual spin vector, in the notation heres′, so in the particle's frameS′≡(S′0,S′1,S′2,S′3)=(0,sx′,sy′,sz′){\displaystyle \mathbf {S} '\equiv \left({S'}^{0},{S'}^{1},{S'}^{2},{S'}^{3}\right)=\left(0,s_{x}',s_{y}',s_{z}'\right)} Equating norms leads to the invariant relationst2−s⋅s=−s′⋅s′{\displaystyle s_{t}^{2}-\mathbf {s} \cdot \mathbf {s} =-\mathbf {s} '\cdot \mathbf {s} '}so if the magnitude of spin is given in the rest frame of the particle and lab frame of an observer, the magnitude of the timelike componentstis given in the lab frame also. The boosted components of the four spin relative to the lab frame areS′0=Λ0αSα=Λ00S0+Λ0iSi=γ(S0−βiSi)=γ(ccS0−uicSi)=1cU0S0−1cUiSiS′i=ΛiαSα=Λi0S0+ΛijSj=−γβiS0+[δij+γ−1β2βiβj]Sj=Si+γ2γ+1βiβjSj−γβiS0{\displaystyle {\begin{aligned}{S'}^{0}&={\Lambda ^{0}}_{\alpha }S^{\alpha }={\Lambda ^{0}}_{0}S^{0}+{\Lambda ^{0}}_{i}S^{i}=\gamma \left(S^{0}-\beta _{i}S^{i}\right)\\&=\gamma \left({\frac {c}{c}}S^{0}-{\frac {u_{i}}{c}}S^{i}\right)={\frac {1}{c}}U_{0}S^{0}-{\frac {1}{c}}U_{i}S^{i}\\[3pt]{S'}^{i}&={\Lambda ^{i}}_{\alpha }S^{\alpha }={\Lambda ^{i}}_{0}S^{0}+{\Lambda ^{i}}_{j}S^{j}\\&=-\gamma \beta ^{i}S^{0}+\left[\delta _{ij}+{\frac {\gamma -1}{\beta ^{2}}}\beta _{i}\beta _{j}\right]S^{j}\\&=S^{i}+{\frac {\gamma ^{2}}{\gamma +1}}\beta _{i}\beta _{j}S^{j}-\gamma \beta ^{i}S^{0}\end{aligned}}} Hereγ=γ(u).S′ is in the rest frame of the particle, so its timelike component is zero,S′0= 0, notS0. Also, the first is equivalent to the inner product of the four-velocity (divided byc) and the four-spin. Combining these facts leads toS′0=1cUαSα=0{\displaystyle {S'}^{0}={\frac {1}{c}}U_{\alpha }S^{\alpha }=0}which is an invariant. Then this combined with the transformation on the timelike component leads to the perceived component in the lab frame;S0=βiSi{\displaystyle S^{0}=\beta _{i}S^{i}} The inverse relations areS0=γ(S′0+βiS′i)Si=S′i+γ2γ+1βiβjS′j+γβiS′0{\displaystyle {\begin{aligned}S^{0}&=\gamma \left({S'}^{0}+\beta _{i}{S'}^{i}\right)\\S^{i}&={S'}^{i}+{\frac {\gamma ^{2}}{\gamma +1}}\beta _{i}\beta _{j}{S'}^{j}+\gamma \beta ^{i}{S'}^{0}\end{aligned}}} The covariant constraint on the spin is orthogonality to the velocity vector,UαSα=0{\displaystyle U_{\alpha }S^{\alpha }=0} In 3-vector notation for explicitness, the transformations arest=β⋅ss′=s+γ2γ+1β(β⋅s)−γβst{\displaystyle {\begin{aligned}s_{t}&={\boldsymbol {\beta }}\cdot \mathbf {s} \\\mathbf {s} '&=\mathbf {s} +{\frac {\gamma ^{2}}{\gamma +1}}{\boldsymbol {\beta }}\left({\boldsymbol {\beta }}\cdot \mathbf {s} \right)-\gamma {\boldsymbol {\beta }}s_{t}\end{aligned}}} The inverse relationsst=γβ⋅s′s=s′+γ2γ+1β(β⋅s′){\displaystyle {\begin{aligned}s_{t}&=\gamma {\boldsymbol {\beta }}\cdot \mathbf {s} '\\\mathbf {s} &=\mathbf {s} '+{\frac {\gamma ^{2}}{\gamma +1}}{\boldsymbol {\beta }}\left({\boldsymbol {\beta }}\cdot \mathbf {s} '\right)\end{aligned}}}are the components of spin the lab frame, calculated from those in the particle's rest frame. Although the spin of the particle is constant for a given particle, it appears to be different in the lab frame. 
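A small consistency check (Python/NumPy; the velocity and rest-frame spin values are illustrative) of the boosted four-spin: the orthogonality UαSα = 0, the relation S⁰ = β·S, and the invariance of the norm:

```python
import numpy as np

c = 299_792_458.0

u = np.array([0.5 * c, 0.2 * c, -0.1 * c])    # particle's lab-frame 3-velocity
beta = u / c
g = 1.0 / np.sqrt(1.0 - np.dot(beta, beta))

s_rest = np.array([0.0, 0.0, 0.5])            # illustrative rest-frame spin 3-vector

# Inverse relations: rest-frame four-spin (0, s') -> lab-frame four-spin (S0, S)
S0 = g * np.dot(beta, s_rest)
S = s_rest + g**2 / (g + 1.0) * beta * np.dot(beta, s_rest)

U = g * c * np.concatenate(([1.0], beta))     # four-velocity
eta = np.diag([1.0, -1.0, -1.0, -1.0])
S4 = np.concatenate(([S0], S))

print(U @ eta @ S4)                            # ~0: spin stays orthogonal to the four-velocity
print(S0 - np.dot(beta, S))                    # ~0: timelike component S^0 = beta . S
print(S4 @ eta @ S4 + np.dot(s_rest, s_rest))  # ~0: invariant norm equals -|s'|^2
```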
ThePauli–Lubanski pseudovectorSμ=12εμνρσJνρPσ,{\displaystyle S_{\mu }={\frac {1}{2}}\varepsilon _{\mu \nu \rho \sigma }J^{\nu \rho }P^{\sigma },}applies to both massive andmassless particles. In general, the total angular momentum tensor splits into an orbital component and aspin component,Jμν=Mμν+Sμν.{\displaystyle J^{\mu \nu }=M^{\mu \nu }+S^{\mu \nu }~.}This applies to a particle, a mass–energy–momentum distribution, or field. The following is a summary fromMTW.[7]Throughout for simplicity, Cartesian coordinates are assumed. In special and general relativity, a distribution of mass–energy–momentum, e.g. a fluid, or a star, is described by the stress–energy tensorTβγ(a second ordertensor fielddepending on space and time). SinceT00is the energy density,Tj0forj= 1, 2, 3 is thejth component of the object's 3d momentum per unit volume, andTijform components of thestress tensorincluding shear and normal stresses, theorbital angular momentum densityabout the position 4-vectorXβis given by a 3rd order tensorMαβγ=(Xα−X¯α)Tβγ−(Xβ−X¯β)Tαγ{\displaystyle {\mathcal {M}}^{\alpha \beta \gamma }=\left(X^{\alpha }-{\bar {X}}^{\alpha }\right)T^{\beta \gamma }-\left(X^{\beta }-{\bar {X}}^{\beta }\right)T^{\alpha \gamma }} This is antisymmetric inαandβ. In special and general relativity,Tis a symmetric tensor, but in other contexts (e.g., quantum field theory), it may not be. Let Ω be a region of 4d spacetime. Theboundaryis a 3d spacetime hypersurface ("spacetime surface volume" as opposed to "spatial surface area"), denoted ∂Ω where "∂" means "boundary". Integrating the angular momentum density over a 3d spacetime hypersurface yields the angular momentum tensor aboutX,Mαβ(X¯)=∮∂ΩMαβγdΣγ{\displaystyle M^{\alpha \beta }\left({\bar {X}}\right)=\oint _{\partial \Omega }{\mathcal {M}}^{\alpha \beta \gamma }d\Sigma _{\gamma }}where dΣγis the volume1-formplaying the role of aunit vectornormal to a 2d surface in ordinary 3d Euclidean space. The integral is taken over the coordinatesX, notX(i.e. Y). The integral within a spacelike surface of constant time isMij=∮∂ΩMij0dΣ0=∮∂Ω[(Xi−Yi)Tj0−(Xj−Yj)Ti0]dxdydz{\displaystyle M^{ij}=\oint _{\partial \Omega }{\mathcal {M}}^{ij0}d\Sigma _{0}=\oint _{\partial \Omega }\left[\left(X^{i}-Y^{i}\right)T^{j0}-\left(X^{j}-Y^{j}\right)T^{i0}\right]dx\,dy\,dz}which collectively form the angular momentum tensor. There is an intrinsic angular momentum in the centre-of-mass frame, in other words, the angular momentum about any eventXCOM=(XCOM0,XCOM1,XCOM2,XCOM3){\displaystyle \mathbf {X} _{\text{COM}}=\left(X_{\text{COM}}^{0},X_{\text{COM}}^{1},X_{\text{COM}}^{2},X_{\text{COM}}^{3}\right)}onthe wordline of the object's center of mass. SinceT00is the energy density of the object, the spatial coordinates of thecenter of massare given byXCOMi=1m0∫∂ΩXiT00dxdydz{\displaystyle X_{\text{COM}}^{i}={\frac {1}{m_{0}}}\int _{\partial \Omega }X^{i}T^{00}dxdydz} SettingY=XCOMobtains the orbital angular momentum density about the centre-of-mass of the object. Theconservationof energy–momentum is given in differential form by thecontinuity equation∂γTβγ=0{\displaystyle \partial _{\gamma }T^{\beta \gamma }=0}where ∂γis thefour-gradient. (In non-Cartesian coordinates and general relativity this would be replaced by thecovariant derivative). 
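As an illustration of why the Pauli–Lubanski vector picks out the intrinsic (spin) part of the angular momentum, the sketch below (Python/NumPy, natural units c = 1, illustrative values) shows the standard property that a purely orbital J^{μν} = X^μP^ν − X^νP^μ for a single free particle contributes nothing to W_μ:

```python
import numpy as np
from itertools import permutations

# Totally antisymmetric Levi-Civita symbol in 4 dimensions, eps[0,1,2,3] = +1
eps = np.zeros((4, 4, 4, 4))
for perm in permutations(range(4)):
    inversions = sum(1 for i in range(4) for j in range(i + 1, 4) if perm[i] > perm[j])
    eps[perm] = (-1.0) ** inversions

# A single free particle: purely orbital J^{mu nu} = X^mu P^nu - X^nu P^mu (c = 1)
m0, t = 1.0, 0.6
r = np.array([2.0, -1.0, 0.5])
u = np.array([0.3, 0.1, -0.2])
gu = 1.0 / np.sqrt(1.0 - np.dot(u, u))
P = np.concatenate(([gu * m0], gu * m0 * u))      # P^mu = (E, p)
X = np.concatenate(([t], r))
J = np.outer(X, P) - np.outer(P, X)               # J^{mu nu}

# Pauli-Lubanski: W_mu = (1/2) eps_{mu nu rho sigma} J^{nu rho} P^sigma
W = 0.5 * np.einsum('mnrs,nr,s->m', eps, J, P)

print(W)   # ~0: the orbital part drops out; W is sensitive only to intrinsic spin
```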
The total angular momentum conservation is given by another continuity equation∂γJαβγ=0{\displaystyle \partial _{\gamma }{\mathcal {J}}^{\alpha \beta \gamma }=0} The integral equations useGauss' theoremin spacetime∫V∂γTβγcdtdxdydz=∮∂VTβγd3Σγ=0∫V∂γJαβγcdtdxdydz=∮∂VJαβγd3Σγ=0{\displaystyle {\begin{aligned}\int _{\mathcal {V}}\partial _{\gamma }T^{\beta \gamma }\,cdt\,dx\,dy\,dz&=\oint _{\partial {\mathcal {V}}}T^{\beta \gamma }d^{3}\Sigma _{\gamma }=0\\\int _{\mathcal {V}}\partial _{\gamma }{\mathcal {J}}^{\alpha \beta \gamma }\,cdt\,dx\,dy\,dz&=\oint _{\partial {\mathcal {V}}}{\mathcal {J}}^{\alpha \beta \gamma }d^{3}\Sigma _{\gamma }=0\end{aligned}}} The torque acting on a point-like particle is defined as the derivative of the angular momentum tensor given above with respect to proper time:[8][9]Γ=dMdτ=X∧F{\displaystyle {\boldsymbol {\Gamma }}={\frac {d\mathbf {M} }{d\tau }}=\mathbf {X} \wedge \mathbf {F} }or in tensor components:Γαβ=XαFβ−XβFα{\displaystyle \Gamma _{\alpha \beta }=X_{\alpha }F_{\beta }-X_{\beta }F_{\alpha }}whereFis the 4d force acting on the particle at the eventX. As with angular momentum, torque is additive, so for an extended object one sums or integrates over the distribution of mass. The angular momentum tensor is the generator of boosts and rotations for theLorentz group.[10][11]Lorentz boostscan be parametrized byrapidity, and a 3d unit vectornpointing in the direction of the boost, which combine into the "rapidity vector"ζ=ζn=ntanh−1⁡β{\displaystyle {\boldsymbol {\zeta }}=\zeta \mathbf {n} =\mathbf {n} \tanh ^{-1}\beta }whereβ=v/cis the speed of the relative motion divided by the speed of light. Spatial rotations can be parametrized by theaxis–angle representation, the angleθand a unit vectorapointing in the direction of the axis, which combine into an "axis-angle vector"θ=θa{\displaystyle {\boldsymbol {\theta }}=\theta \mathbf {a} } Each unit vector only has two independent components, the third is determined from the unit magnitude. Altogether there are six parameters of the Lorentz group; three for rotations and three for boosts. The (homogeneous) Lorentz group is 6-dimensional. The boost generatorsKand rotation generatorsJcan be combined into one generator for Lorentz transformations;Mthe antisymmetric angular momentum tensor, with componentsM0i=−Mi0=Ki,Mij=εijkJk.{\displaystyle M^{0i}=-M^{i0}=K_{i}\,,\quad M^{ij}=\varepsilon _{ijk}J_{k}\,.}and correspondingly, the boost and rotation parameters are collected into another antisymmetric four-dimensional matrixω, with entries:ω0i=−ωi0=ζi,ωij=εijkθk,{\displaystyle \omega _{0i}=-\omega _{i0}=\zeta _{i}\,,\quad \omega _{ij}=\varepsilon _{ijk}\theta _{k}\,,}where thesummation conventionover the repeated indicesi, j, khas been used to prevent clumsy summation signs. The generalLorentz transformationis then given by thematrix exponentialΛ(ζ,θ)=exp⁡(12ωαβMαβ)=exp⁡(ζ⋅K+θ⋅J){\displaystyle \Lambda ({\boldsymbol {\zeta }},{\boldsymbol {\theta }})=\exp \left({\frac {1}{2}}\omega _{\alpha \beta }M^{\alpha \beta }\right)=\exp \left({\boldsymbol {\zeta }}\cdot \mathbf {K} +{\boldsymbol {\theta }}\cdot \mathbf {J} \right)}and the summation convention has been applied to the repeated matrix indicesαandβ. 
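A sketch of the generator picture (Python with NumPy and SciPy's expm; the explicit matrices and sign conventions below are one common choice, fixed here to match the boost components Λ⁰ᵢ = −γβⁱ used earlier, and conventions vary between sources): exponentiating a boost generator with rapidity ζ reproduces the standard boost, and exponentiating a rotation generator reproduces a rotation about the z axis:

```python
import numpy as np
from scipy.linalg import expm

c = 299_792_458.0

# Generators acting on (ct, x, y, z)
K_x = np.zeros((4, 4)); K_x[0, 1] = K_x[1, 0] = -1.0
J_z = np.zeros((4, 4)); J_z[1, 2], J_z[2, 1] = -1.0, 1.0

v = 0.6 * c
beta = v / c
zeta = np.arctanh(beta)                       # rapidity
gamma = 1.0 / np.sqrt(1.0 - beta**2)

boost = expm(zeta * K_x)                      # pure boost along x
print(np.isclose(boost[0, 0], gamma), np.isclose(boost[0, 1], -gamma * beta))

theta = 0.3
rot = expm(theta * J_z)                       # pure rotation about z
R_expected = np.array([[1, 0, 0, 0],
                       [0, np.cos(theta), -np.sin(theta), 0],
                       [0, np.sin(theta),  np.cos(theta), 0],
                       [0, 0, 0, 1.0]])
print(np.allclose(rot, R_expected))           # True
```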
The general Lorentz transformation Λ is the transformation law for anyfour vectorA= (A0,A1,A2,A3), giving the components of this same 4-vector in another inertial frame of referenceA′=Λ(ζ,θ)A{\displaystyle \mathbf {A} '=\Lambda ({\boldsymbol {\zeta }},{\boldsymbol {\theta }})\mathbf {A} } The angular momentum tensor forms 6 of the 10 generators of thePoincaré group, the other four are the components of the four-momentum for spacetime translations. The angular momentum of test particles in a gently curved background is more complicated in GR but can be generalized in a straightforward manner. If theLagrangianis expressed with respect to angular variables as thegeneralized coordinates, then the angular momenta are thefunctional derivativesof the Lagrangian with respect to theangular velocities. Referred to Cartesian coordinates, these are typically given by the off-diagonal shear terms of the spacelike part of thestress–energy tensor. If the spacetime supports aKilling vector fieldtangent to a circle, then the angular momentum about the axis is conserved. One also wishes to study the effect of a compact, rotating mass on its surrounding spacetime. The prototype solution is of theKerr metric, which describes the spacetime around an axially symmetricblack hole. It is obviously impossible to draw a point on the event horizon of a Kerr black hole and watch it circle around. However, the solution does support a constant of the system that acts mathematically similarly to an angular momentum. Ingeneral relativitywheregravitational wavesexist, the asymptoticsymmetry groupin asymptotically flat spacetimes is not the expected ten-dimensionalPoincaré groupofspecial relativity, but the infinite-dimensional group formulated in 1962 byBondi, van der Burg, Metzner, and Sachs, the so-called BMS group, which contains an infinite superset of the four spacetime translations, namedsupertranslations. Despite half a century of research, difficulties with “supertranslation ambiguity” persisted in fundamental notions like the angular momentum carried away by gravitational waves. In 2020, novel supertranslation-invariant definitions of angular momentum began to be formulated by different researchers. Supertranslation invariance of angular momentum and other Lorentz charges in general relativity continues to be an active area of research.[12]
https://en.wikipedia.org/wiki/Four-spin
Inmathematics,Ricci calculusconstitutes the rules of index notation and manipulation fortensorsandtensor fieldson adifferentiable manifold, with or without ametric tensororconnection.[a][1][2][3]It is also the modern name for what used to be called theabsolute differential calculus(the foundation of tensor calculus),tensor calculusortensor analysisdeveloped byGregorio Ricci-Curbastroin 1887–1896, and subsequently popularized in a paper written with his pupilTullio Levi-Civitain 1900.[4]Jan Arnoldus Schoutendeveloped the modern notation and formalism for this mathematical framework, and made contributions to the theory, during its applications togeneral relativityanddifferential geometryin the early twentieth century.[5]The basis of modern tensor analysis was developed byBernhard Riemannin a paper from 1861.[6] A component of a tensor is areal numberthat is used as a coefficient of a basis element for the tensor space. The tensor is the sum of its components multiplied by their corresponding basis elements. Tensors and tensor fields can be expressed in terms of their components, and operations on tensors and tensor fields can be expressed in terms of operations on their components. The description of tensor fields and operations on them in terms of their components is the focus of the Ricci calculus. This notation allows an efficient expression of such tensor fields and operations. While much of the notation may be applied with any tensors, operations relating to adifferential structureare only applicable to tensor fields. Where needed, the notation extends to components of non-tensors, particularlymultidimensional arrays. A tensor may be expressed as a linear sum of thetensor productofvectorandcovectorbasis elements. The resulting tensor components are labelled by indices of the basis. Each index has one possible value perdimensionof the underlyingvector space. The number of indices equals the degree (or order) of the tensor. For compactness and convenience, the Ricci calculus incorporatesEinstein notation, which implies summation over indices repeated within a term anduniversal quantificationover free indices. Expressions in the notation of the Ricci calculus may generally be interpreted as a set of simultaneous equations relating the components as functions over a manifold, usually more specifically as functions of the coordinates on the manifold. This allows intuitive manipulation of expressions with familiarity of only a limited set of rules. Tensor calculus has many applications inphysics,engineeringandcomputer scienceincludingelasticity,continuum mechanics,electromagnetism(seemathematical descriptions of the electromagnetic field),general relativity(seemathematics of general relativity),quantum field theory, andmachine learning. Working with a main proponent of theexterior calculusÉlie Cartan, the influential geometerShiing-Shen Chernsummarizes the role of tensor calculus:[7] In our subject of differential geometry, where you talk about manifolds, one difficulty is that the geometry is described by coordinates, but the coordinates do not have meaning. They are allowed to undergo transformation. And in order to handle this kind of situation, an important tool is the so-called tensor analysis, or Ricci calculus, which was new to mathematicians. In mathematics you have a function, you write down the function, you calculate, or you add, or you multiply, or you can differentiate. You have something very concrete. 
In geometry the geometric situation is described by numbers, but you can change your numbers arbitrarily. So to handle this, you need the Ricci calculus. Where a distinction is to be made between the space-like basis elements and a time-like element in the four-dimensional spacetime of classical physics, this is conventionally done through indices as follows:[8] Some sources use 4 instead of 0 as the index value corresponding to time; in this article, 0 is used. Otherwise, in general mathematical contexts, any symbols can be used for the indices, generally running over all dimensions of the vector space. The author(s) will usually make it clear whether a subscript is intended as an index or as a label. For example, in 3-D Euclidean space and usingCartesian coordinates; thecoordinate vectorA= (A1,A2,A3) = (Ax,Ay,Az)shows a direct correspondence between the subscripts 1, 2, 3 and the labelsx,y,z. In the expressionAi,iis interpreted as an index ranging over the values 1, 2, 3, while thex,y,zsubscripts are only labels, not variables. In the context of spacetime, the index value 0 conventionally corresponds to the labelt. Indices themselves may belabelledusingdiacritic-like symbols, such as ahat(ˆ),bar(¯),tilde(˜), or prime (′) as in: to denote a possibly differentbasisfor that index. An example is inLorentz transformationsfrom oneframe of referenceto another, where one frame could be unprimed and the other primed, as in: This is not to be confused withvan der Waerden notationforspinors, which uses hats and overdots on indices to reflect the chirality of a spinor. Ricci calculus, andindex notationmore generally, distinguishes between lower indices (subscripts) and upper indices (superscripts); the latter arenotexponents, even though they may look as such to the reader only familiar with other parts of mathematics. In the special case that the metric tensor is everywhere equal to the identity matrix, it is possible to drop the distinction between upper and lower indices, and then all indices could be written in the lower position. Coordinate formulae in linear algebra such asaijbjk{\displaystyle a_{ij}b_{jk}}for the product of matrices may be examples of this. But in general, the distinction between upper and lower indices should be maintained. Alower index(subscript) indicates covariance of the components with respect to that index: Anupper index(superscript) indicates contravariance of the components with respect to that index: A tensor may have both upper and lower indices: Ordering of indices is significant, even when of differing variance. However, when it is understood that no indices will be raised or lowered while retaining the base symbol, covariant indices are sometimes placed below contravariant indices for notational convenience (e.g. with thegeneralized Kronecker delta). The number of each upper and lower indices of a tensor gives itstype: a tensor withpupper andqlower indices is said to be of type(p,q), or to be a type-(p,q)tensor. The number of indices of a tensor, regardless of variance, is called thedegreeof the tensor (alternatively, itsvalence,orderorrank, althoughrankis ambiguous). Thus, a tensor of type(p,q)has degreep+q. 
The same symbol occurring twice (one upper and one lower) within a term indicates a pair of indices that are summed over: The operation implied by such a summation is calledtensor contraction: This summation may occur more than once within a term with a distinct symbol per pair of indices, for example: Other combinations of repeated indices within a term are considered to be ill-formed, such as The reason for excluding such formulae is that although these quantities could be computed as arrays of numbers, they would not in general transform as tensors under a change of basis. If a tensor has a list of all upper or lower indices, one shorthand is to use a capital letter for the list:[9] whereI=i1i2⋅⋅⋅inandJ=j1j2⋅⋅⋅jm. A pair of vertical bars| ⋅ |around a set of all-upper indices or all-lower indices (but not both), associated with contraction with another set of indices when the expression iscompletely antisymmetricin each of the two sets of indices:[10] means a restricted sum over index values, where each index is constrained to being strictly less than the next. More than one group can be summed in this way, for example: When using multi-index notation, an underarrow is placed underneath the block of indices:[11] where By contracting an index with a non-singularmetric tensor, thetypeof a tensor can be changed, converting a lower index to an upper index or vice versa: The base symbol in many cases is retained (e.g. usingAwhereBappears here), and when there is no ambiguity, repositioning an index may be taken to imply this operation. This table summarizes how the manipulation of covariant and contravariant indices fit in with invariance under apassive transformationbetween bases, with the components of each basis set in terms of the other reflected in the first column. The barred indices refer to the final coordinate system after the transformation.[12] TheKronecker deltais used,see also below. Tensors are equalif and only ifevery corresponding component is equal; e.g., tensorAequals tensorBif and only if for allα,β,γ. Consequently, there are facets of the notation that are useful in checking that an equation makes sense (an analogous procedure todimensional analysis). Indices not involved in contractions are calledfree indices. Indices used in contractions are termeddummy indices, orsummation indices. The components of tensors (likeAα,Bβγetc.) are just real numbers. Since the indices take various integer values to select specific components of the tensors, a single tensor equation represents many ordinary equations. If a tensor equality hasnfree indices, and if the dimensionality of the underlying vector space ism, the equality representsmnequations: each index takes on every value of a specific set of values. For instance, if is infour dimensions(that is, each index runs from 0 to 3 or from 1 to 4), then because there are three free indices (α,β,δ), there are 43= 64 equations. Three of these are: This illustrates the compactness and efficiency of using index notation: many equations which all share a similar structure can be collected into one simple tensor equation. Replacing any index symbol throughout by another leaves the tensor equation unchanged (provided there is no conflict with other symbols already used). This can be useful when manipulating indices, such as using index notation to verifyvector calculus identitiesor identities of theKronecker deltaandLevi-Civita symbol(see also below). 
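A small numerical sketch of the counting argument above (the equation, index names, and component values are invented for illustration): a single tensor equation with one contracted index and three free indices in four dimensions stands for 4³ = 64 ordinary equations. NumPy's einsum plays the role of the summation convention here; it does not track upper versus lower index placement.

```python
import itertools
import numpy as np

rng = np.random.default_rng(8)
n = 4                                    # four dimensions, indices run over 0..3
A = rng.standard_normal((n, n, n, n))    # components A^alpha_{beta gamma delta} (placement not tracked)
v = rng.standard_normal(n)               # components v^gamma

# One tensor equation with a contracted index gamma and three free indices alpha, beta, delta:
#   B^alpha_{beta delta} = A^alpha_{beta gamma delta} v^gamma   (a made-up example equation)
B = np.einsum('abgd,g->abd', A, v)

# It stands for n**3 = 64 ordinary equations, one per value of (alpha, beta, delta):
count = 0
for a, b, d in itertools.product(range(n), repeat=3):
    assert np.isclose(B[a, b, d], sum(A[a, b, g, d] * v[g] for g in range(n)))
    count += 1
assert count == n**3
```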
An example of a correct change is: whereas an erroneous change is: In the first replacement,λreplacedαandμreplacedγeverywhere, so the expression still has the same meaning. In the second,λdid not fully replaceα, andμdid not fully replaceγ(incidentally, the contraction on theγindex became a tensor product), which is entirely inconsistent for reasons shown next. The free indices in a tensor expression always appear in the same (upper or lower) position throughout every term, and in a tensor equation the free indices are the same on each side. Dummy indices (which implies a summation over that index) need not be the same, for example: as for an erroneous expression: In other words, non-repeated indices must be of the same type in every term of the equation. In the above identity,α,β,δline up throughout andγoccurs twice in one term due to a contraction (once as an upper index and once as a lower index), and thus it is a valid expression. In the invalid expression, whileβlines up,αandδdo not, andγappears twice in one term (contraction)andonce in another term, which is inconsistent. When applying a rule to a number of indices (differentiation, symmetrization etc., shown next), the bracket or punctuation symbols denoting the rules are only shown on one group of the indices to which they apply. If the brackets enclosecovariant indices– the rule applies only toall covariant indices enclosed in the brackets, not to any contravariant indices which happen to be placed intermediately between the brackets. Similarly if brackets enclosecontravariant indices– the rule applies only toall enclosed contravariant indices, not to intermediately placed covariant indices. Parentheses, ( ), around multiple indices denotes the symmetrized part of the tensor. When symmetrizingpindices usingσto range over permutations of the numbers 1 top, one takes a sum over thepermutationsof those indicesασ(i)fori= 1, 2, 3, ...,p, and then divides by the number of permutations: For example, two symmetrizing indices mean there are two indices to permute and sum over: while for three symmetrizing indices, there are three indices to sum over and permute: The symmetrization isdistributiveover addition; Indices are not part of the symmetrization when they are: Here theαandγindices are symmetrized,βis not. Square brackets, [ ], around multiple indices denotes theantisymmetrized part of the tensor. Forpantisymmetrizing indices – the sum over the permutations of those indicesασ(i)multiplied by thesignature of the permutationsgn(σ)is taken, then divided by the number of permutations: whereδβ1⋅⋅⋅βpα1⋅⋅⋅αpis thegeneralized Kronecker deltaof degree2p, with scaling as defined below. For example, two antisymmetrizing indices imply: while three antisymmetrizing indices imply: as for a more specific example, ifFrepresents theelectromagnetic tensor, then the equation representsGauss's law for magnetismandFaraday's law of induction. As before, the antisymmetrization is distributive over addition; As with symmetrization, indices are not antisymmetrized when they are: Here theαandγindices are antisymmetrized,βis not. Any tensor can be written as the sum of its symmetric and antisymmetric parts on two indices: as can be seen by adding the above expressions forA(αβ)γ⋅⋅⋅andA[αβ]γ⋅⋅⋅. This does not hold for other than two indices. 
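The (anti)symmetrization rules can be made concrete with a short NumPy sketch (the helper names and the random components are assumptions chosen for illustration): sum over permutations of the chosen slots, divide by the number of permutations, and weight each term by the signature in the antisymmetric case. The sketch also checks the decomposition A_{αβγ} = A_{(αβ)γ} + A_{[αβ]γ} stated above.

```python
import itertools
import math
import numpy as np

def perm_sign(p):
    """Signature of a permutation, computed by counting inversions (axes given in increasing order)."""
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

def permute_slots(T, axes, p):
    """Array whose index slots `axes` are rearranged according to permutation p of `axes`."""
    order = list(range(T.ndim))
    for slot, src in zip(axes, p):
        order[slot] = src
    return np.transpose(T, order)

def symmetrize(T, axes):
    """A_(axes): sum over permutations of the listed slots, divided by their number."""
    return sum(permute_slots(T, axes, p)
               for p in itertools.permutations(axes)) / math.factorial(len(axes))

def antisymmetrize(T, axes):
    """A_[axes]: as above, but each permutation weighted by its signature."""
    return sum(perm_sign(p) * permute_slots(T, axes, p)
               for p in itertools.permutations(axes)) / math.factorial(len(axes))

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3, 3))        # components A_{alpha beta gamma}

A_sym = symmetrize(A, (0, 1))             # A_((alpha beta) gamma)
A_asym = antisymmetrize(A, (0, 1))        # A_([alpha beta] gamma)

assert np.allclose(A_sym, np.swapaxes(A_sym, 0, 1))       # symmetric in the first two slots
assert np.allclose(A_asym, -np.swapaxes(A_asym, 0, 1))    # antisymmetric in the first two slots
assert np.allclose(A_sym + A_asym, A)                     # the two-index decomposition above
```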
For compactness, derivatives may be indicated by adding indices after a comma or semicolon.[13][14] While most of the expressions of the Ricci calculus are valid for arbitrary bases, the expressions involving partial derivatives of tensor components with respect to coordinates apply only with acoordinate basis: a basis that is defined through differentiation with respect to the coordinates. Coordinates are typically denoted byxμ, but do not in general form the components of a vector. In flat spacetime with linear coordinatization, a tuple ofdifferencesin coordinates,Δxμ, can be treated as a contravariant vector. With the same constraints on the space and on the choice of coordinate system, the partial derivatives with respect to the coordinates yield a result that is effectively covariant. Aside from use in this special case, the partial derivatives of components of tensors do not in general transform covariantly, but are useful in building expressions that are covariant, albeit still with a coordinate basis if the partial derivatives are explicitly used, as with the covariant, exterior and Lie derivatives below. To indicate partial differentiation of the components of a tensor field with respect to a coordinate variablexγ, acommais placed before an appended lower index of the coordinate variable. This may be repeated (without adding further commas): These components donottransform covariantly, unless the expression being differentiated is a scalar. This derivative is characterized by theproduct ruleand the derivatives of the coordinates whereδis theKronecker delta. The covariant derivative is only defined if aconnectionis defined. For any tensor field, asemicolon(;) placed before an appended lower (covariant) index indicates covariant differentiation. Less common alternatives to the semicolon include aforward slash(/)[15]or in three-dimensional curved space a single vertical bar (|).[16] The covariant derivative of a scalar function, a contravariant vector and a covariant vector are: whereΓαγβare the connection coefficients. For an arbitrary tensor:[17] An alternative notation for the covariant derivative of any tensor is the subscripted nabla symbol∇β. For the case of a vector fieldAα:[18] The covariant formulation of thedirectional derivativeof any tensor field along a vectorvγmay be expressed as its contraction with the covariant derivative, e.g.: The components of this derivative of a tensor field transform covariantly, and hence form another tensor field, despite subexpressions (the partial derivative and the connection coefficients) separately not transforming covariantly. This derivative is characterized by the product rule: AKoszul connectionon thetangent bundleof adifferentiable manifoldis called anaffine connection. A connection is ametric connectionwhen the covariant derivative of the metric tensor vanishes: Anaffine connectionthat is also a metric connection is called aRiemannian connection. A Riemannian connection that is torsion-free (i.e., for which thetorsion tensorvanishes:Tαβγ= 0) is aLevi-Civita connection. TheΓαβγfor a Levi-Civita connection in a coordinate basis are calledChristoffel symbolsof the second kind. The exterior derivative of a totally antisymmetric type(0,s)tensor field with componentsAα1⋅⋅⋅αs(also called adifferential form) is a derivative that is covariant under basis transformations. It does not depend on either a metric tensor or a connection: it requires only the structure of a differentiable manifold. 
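As an illustration of covariant differentiation and metric compatibility, the following SymPy sketch computes the connection coefficients of an assumed example metric (the round 2-sphere, not taken from the text) using the standard coordinate formula for the Christoffel symbols of the second kind, and verifies that the covariant derivative of the metric vanishes.

```python
import sympy as sp

# Assumed example metric: the round 2-sphere, ds^2 = dtheta^2 + sin(theta)^2 dphi^2
theta, phi = sp.symbols('theta phi')
x = [theta, phi]
g = sp.Matrix([[1, 0], [0, sp.sin(theta)**2]])
g_inv = g.inv()
n = 2

# Christoffel symbols of the second kind (standard coordinate formula):
# Gamma^a_{bc} = (1/2) g^{ad} (d_b g_{dc} + d_c g_{db} - d_d g_{bc})
Gamma = [[[sp.simplify(
              sum(g_inv[a, d] * (sp.diff(g[d, c], x[b])
                                 + sp.diff(g[d, b], x[c])
                                 - sp.diff(g[b, c], x[d]))
                  for d in range(n)) / 2)
           for c in range(n)]
          for b in range(n)]
         for a in range(n)]

# Metric compatibility: g_{ab;c} = d_c g_{ab} - Gamma^d_{ca} g_{db} - Gamma^d_{cb} g_{ad} = 0
for a in range(n):
    for b in range(n):
        for c in range(n):
            cov = sp.diff(g[a, b], x[c]) \
                  - sum(Gamma[d][c][a] * g[d, b] + Gamma[d][c][b] * g[a, d] for d in range(n))
            assert sp.simplify(cov) == 0

print(Gamma[0][1][1])   # Gamma^theta_{phi phi} = -sin(theta)*cos(theta)
```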
In a coordinate basis, it may be expressed as the antisymmetrization of the partial derivatives of the tensor components:[19]: 232–233 This derivative is not defined on any tensor field with contravariant indices or that is not totally antisymmetric. It is characterized by a graded product rule. The Lie derivative is another derivative that is covariant under basis transformations. Like the exterior derivative, it does not depend on either a metric tensor or a connection. The Lie derivative of a type(r,s)tensor fieldTalong (the flow of) a contravariant vector fieldXρmay be expressedusing a coordinate basis as[20] This derivative is characterized by the product rule and the fact that the Lie derivative of a contravariant vector field along itself is zero: The Kronecker delta is like theidentity matrixwhen multiplied and contracted: The componentsδαβare the same in any basis and form an invariant tensor of type(1, 1), i.e. the identity of thetangent bundleover theidentity mappingof thebase manifold, and so its trace is an invariant.[21]Itstraceis the dimensionality of the space; for example, in four-dimensionalspacetime, The Kronecker delta is one of the family of generalized Kronecker deltas. The generalized Kronecker delta of degree2pmay be defined in terms of the Kronecker delta by (a common definition includes an additional multiplier ofp!on the right): and acts as an antisymmetrizer onpindices: An affine connection has a torsion tensorTαβγ: whereγαβγare given by the components of the Lie bracket of the local basis, which vanish when it is a coordinate basis. For a Levi-Civita connection this tensor is defined to be zero, which for a coordinate basis gives the equations If this tensor is defined as then it is thecommutatorof the covariant derivative with itself:[22][23] since the connection is torsionless, which means that the torsion tensor vanishes. This can be generalized to get the commutator for two covariant derivatives of an arbitrary tensor as follows: which are often referred to as theRicci identities.[24] The metric tensorgαβis used for lowering indices and gives the length of anyspace-likecurve whereγis anysmoothstrictly monotoneparameterizationof the path. It also gives the duration of anytime-likecurve whereγis any smooth strictly monotone parameterization of the trajectory. See alsoLine element. Theinverse matrixgαβof the metric tensor is another important tensor, used for raising indices:
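A SymPy sketch of the coordinate expression for the exterior derivative of a 1-form (the sample components and the normalization F_{μν} = ∂_μ A_ν − ∂_ν A_μ are assumptions; some texts include an extra factor from the antisymmetrization convention): it checks antisymmetry and the identity ∂_λ F_{μν} + ∂_μ F_{νλ} + ∂_ν F_{λμ} = 0, the pattern behind Gauss's law for magnetism and Faraday's law mentioned earlier.

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z')
coords = [t, x, y, z]

A = [x*y, t*z**2, sp.sin(x), x*y*z]      # arbitrary smooth 1-form components A_mu (an assumption)

# Antisymmetrized partial derivatives: F_{mu nu} = d_mu A_nu - d_nu A_mu
F = [[sp.diff(A[nu], coords[mu]) - sp.diff(A[mu], coords[nu])
      for nu in range(4)] for mu in range(4)]

# Antisymmetry: F_{mu nu} = -F_{nu mu}
assert all(sp.simplify(F[mu][nu] + F[nu][mu]) == 0
           for mu in range(4) for nu in range(4))

# The totally antisymmetrized derivative of F vanishes (d(dA) = 0):
# d_lam F_{mu nu} + d_mu F_{nu lam} + d_nu F_{lam mu} = 0
for lam in range(4):
    for mu in range(4):
        for nu in range(4):
            cyc = (sp.diff(F[mu][nu], coords[lam])
                   + sp.diff(F[nu][lam], coords[mu])
                   + sp.diff(F[lam][mu], coords[nu]))
            assert sp.simplify(cyc) == 0
```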
https://en.wikipedia.org/wiki/Ricci_calculus
Inmathematicsandcomputer programming,index notationis used to specify the elements of an array of numbers. The formalism of how indices are used varies according to the subject. In particular, there are different methods for referring to the elements of a list, avector, or amatrix, depending on whether one is writing a formal mathematical paper for publication, or when one is writing acomputer program. It is frequently helpful in mathematics to refer to the elements of an array using subscripts. The subscripts can beintegersorvariables. The array takes the form oftensorsin general, since these can be treated as multi-dimensional arrays. Special (and more familiar) cases arevectors(1d arrays) andmatrices(2d arrays). The following is only an introduction to the concept: index notation is used in more detail in mathematics (particularly in the representation and manipulation oftensor operations). See the main article for further details. A vector treated as an array of numbers by writing as arow vectororcolumn vector(whichever is used depends on convenience or context): Index notation allows indication of the elements of the array by simply writingai, where the indexiis known to run from 1 ton, because of n-dimensions.[1]For example, given the vector: then some entries are The notation can be applied tovectors in mathematics and physics. The followingvector equation can also be written in terms of the elements of the vector (aka components), that is where the indices take a given range of values. This expression represents a set of equations, one for each index. If the vectors each havenelements, meaningi= 1,2,…n, then the equations are explicitly Hence, index notation serves as an efficient shorthand for More than one index is used to describe arrays of numbers, in two or more dimensions, such as the elements of a matrix, (see also image to right); The entry of a matrixAis written using two indices, sayiandj, with or without commas to separate the indices:aijorai,j, where the first subscript is the row number and the second is the column number.Juxtapositionis also used as notation for multiplication; this may be a source of confusion. For example, if then some entries are For indices larger than 9, the comma-based notation may be preferable (e.g.,a3,12instead ofa312). Matrix equationsare written similarly to vector equations, such as in terms of the elements of the matrices (aka components) for all values ofiandj. Again this expression represents a set of equations, one for each index. If the matrices each havemrows andncolumns, meaningi= 1, 2, …,mandj= 1, 2, …,n, then there aremnequations. The notation allows a clear generalization to multi-dimensional arrays of elements: tensors. For example, representing a set of many equations. In tensor analysis, superscripts are used instead of subscripts to distinguish covariant from contravariant entities, seecovariance and contravariance of vectorsandraising and lowering indices. In several programming languages, index notation is a way of addressing elements of an array. This method is used since it is closest to how it is implemented inassembly languagewhereby the address of the first element is used as a base, and a multiple (the index) of the element size is used to address inside the array. 
For example, if an array of integers is stored in a region of the computer's memory starting at the memory cell with address 3000 (thebase address), and each integer occupies four cells (bytes), then the elements of this array are at memory locations 0x3000, 0x3004, 0x3008, …, 0x3000 + 4(n− 1) (note thezero-based numbering). In general, the address of theith element of an array withbase addressband element sizesisb+is. In theC programming language, we can write the above as*(base + i)(pointer form) orbase[i](array indexing form), which is exactly equivalent because the C standard defines the array indexing form as a transformation to pointer form. Coincidentally, since pointer addition is commutative, this allows for obscure expressions such as3[base]which is equivalent tobase[3].[2] Things become more interesting when we consider arrays with more than one index, for example, a two-dimensional table. We have three possibilities: In C, all three methods can be used. When the first method is used, the programmer decides how the elements of the array are laid out in the computer's memory, and provides the formulas to compute the location of each element. The second method is used when the number of elements in each row is the same and known at the time the program is written. The programmer declares the array to have, say, three columns by writing e.g.elementtype tablename[][3];. One then refers to a particular element of the array by writingtablename[first index][second index]. The compiler computes the total number of memory cells occupied by each row, uses the first index to find the address of the desired row, and then uses the second index to find the address of the desired element in the row. When the third method is used, the programmer declares the table to be an array of pointers, like inelementtype *tablename[];. When the programmer subsequently specifies a particular elementtablename[first index][second index], the compiler generates instructions to look up the address of the row specified by the first index, and use this address as the base when computing the address of the element specified by the second index. In other programming languages such as Pascal, indices may start at 1, so indexing in a block of memory can be changed to fit a start-at-1 addressing scheme by a simple linear transformation – in this scheme, the memory location of theith element withbase addressband element sizesisb+ (i− 1)s.
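The address arithmetic described above can be sketched in a few lines of Python (the base address and 4-byte element size are the illustrative values from the text; the function names are invented for the example):

```python
def element_address(base, index, size):
    """Address of the index-th element of a 1-D array (zero-based), element size in bytes."""
    return base + index * size

def element_address_2d(base, i, j, ncols, size):
    """Row-major layout: the programmer computes the offset of element [i][j] directly."""
    return base + (i * ncols + j) * size

def element_address_1based(base, i, size):
    """Start-at-1 indexing (e.g. Pascal): the i-th element sits at b + (i - 1) * s."""
    return base + (i - 1) * size

base = 0x3000                                       # illustrative base address
print(hex(element_address(base, 2, 4)))             # 0x3008, the third integer of the array
print(hex(element_address_2d(base, 1, 2, 3, 4)))    # row 1, column 2 of a 3-column table
assert element_address_1based(base, 1, 4) == base   # first element coincides with the base
```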
https://en.wikipedia.org/wiki/Index_notation
Inmathematicsandtheoretical physics, atensorisantisymmetricoralternating on(orwith respect to)an index subsetif it alternatessign(+/−) when any two indices of the subset are interchanged.[1][2]The index subset must generally either be allcovariantor allcontravariant. For example,Tijk…=−Tjik…=Tjki…=−Tkji…=Tkij…=−Tikj…{\displaystyle T_{ijk\dots }=-T_{jik\dots }=T_{jki\dots }=-T_{kji\dots }=T_{kij\dots }=-T_{ikj\dots }}holds when the tensor is antisymmetric with respect to its first three indices. If a tensor changes sign under exchange ofeachpair of its indices, then the tensor iscompletely(ortotally)antisymmetric. A completely antisymmetric covarianttensor fieldoforderk{\displaystyle k}may be referred to as adifferentialk{\displaystyle k}-form, and a completely antisymmetric contravariant tensor field may be referred to as ak{\displaystyle k}-vectorfield. A tensorAthat is antisymmetric on indicesi{\displaystyle i}andj{\displaystyle j}has the property that thecontractionwith a tensorBthat is symmetric on indicesi{\displaystyle i}andj{\displaystyle j}is identically 0. For a general tensorUwith componentsUijk…{\displaystyle U_{ijk\dots }}and a pair of indicesi{\displaystyle i}andj,{\displaystyle j,}Uhas symmetric and antisymmetric parts defined as: Similar definitions can be given for other pairs of indices. As the term "part" suggests, a tensor is the sum of its symmetric part and antisymmetric part for a given pair of indices, as inUijk…=U(ij)k…+U[ij]k….{\displaystyle U_{ijk\dots }=U_{(ij)k\dots }+U_{[ij]k\dots }.} A shorthand notation for anti-symmetrization is denoted by a pair of square brackets. For example, in arbitrary dimensions, for an order 2 covariant tensorM,M[ab]=12!(Mab−Mba),{\displaystyle M_{[ab]}={\frac {1}{2!}}(M_{ab}-M_{ba}),}and for an order 3 covariant tensorT,T[abc]=13!(Tabc−Tacb+Tbca−Tbac+Tcab−Tcba).{\displaystyle T_{[abc]}={\frac {1}{3!}}(T_{abc}-T_{acb}+T_{bca}-T_{bac}+T_{cab}-T_{cba}).} In any 2 and 3 dimensions, these can be written asM[ab]=12!δabcdMcd,T[abc]=13!δabcdefTdef.{\displaystyle {\begin{aligned}M_{[ab]}&={\frac {1}{2!}}\,\delta _{ab}^{cd}M_{cd},\\[2pt]T_{[abc]}&={\frac {1}{3!}}\,\delta _{abc}^{def}T_{def}.\end{aligned}}}whereδab…cd…{\displaystyle \delta _{ab\dots }^{cd\dots }}is thegeneralized Kronecker delta, and theEinstein summation conventionis in use. More generally, irrespective of the number of dimensions, antisymmetrization overp{\displaystyle p}indices may be expressed asT[a1…ap]=1p!δa1…apb1…bpTb1…bp.{\displaystyle T_{[a_{1}\dots a_{p}]}={\frac {1}{p!}}\delta _{a_{1}\dots a_{p}}^{b_{1}\dots b_{p}}T_{b_{1}\dots b_{p}}.} In general, every tensor of rank 2 can be decomposed into a symmetric and anti-symmetric pair as:Tij=12(Tij+Tji)+12(Tij−Tji).{\displaystyle T_{ij}={\frac {1}{2}}(T_{ij}+T_{ji})+{\frac {1}{2}}(T_{ij}-T_{ji}).} This decomposition is not in general true for tensors of rank 3 or more, which have more complex symmetries. Totally antisymmetric tensors include:
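A short NumPy sketch of the rank-2 decomposition and of the stated fact that contracting a pair of antisymmetric indices against a tensor symmetric in the same pair gives zero (Cartesian components are assumed, so upper versus lower index placement is ignored):

```python
import numpy as np

rng = np.random.default_rng(1)

# Rank-2 decomposition T_ij = symmetric part + antisymmetric part
T = rng.standard_normal((4, 4))
T_sym = (T + T.T) / 2
T_asym = (T - T.T) / 2
assert np.allclose(T_sym + T_asym, T)

# Contraction of a tensor antisymmetric in (i, j) with one symmetric in (i, j) vanishes
A = T_asym                          # A_ij = -A_ji
S = rng.standard_normal((4, 4))
S = (S + S.T) / 2                   # S_ij = S_ji
assert np.isclose(np.einsum('ij,ij->', A, S), 0.0)
```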
https://en.wikipedia.org/wiki/Antisymmetric_tensor
Inmathematics, especially the usage oflinear algebrainmathematical physicsanddifferential geometry,Einstein notation(also known as theEinstein summation conventionorEinstein summation notation) is a notational convention that impliessummationover a set of indexed terms in a formula, thus achieving brevity. As part of mathematics it is a notational subset ofRicci calculus; however, it is often used in physics applications that do not distinguish betweentangentandcotangent spaces. It was introduced to physics byAlbert Einsteinin 1916.[1] According to this convention, when an index variable appears twice in a singletermand is not otherwise defined (seeFree and bound variables), it implies summation of that term over all the values of the index. So where the indices can range over theset{1, 2, 3},y=∑i=13xiei=x1e1+x2e2+x3e3{\displaystyle y=\sum _{i=1}^{3}x^{i}e_{i}=x^{1}e_{1}+x^{2}e_{2}+x^{3}e_{3}}is simplified by the convention to:y=xiei{\displaystyle y=x^{i}e_{i}} The upper indices are notexponentsbut are indices of coordinates,coefficientsorbasis vectors. That is, in this contextx2should be understood as the second component ofxrather than the square ofx(this can occasionally lead to ambiguity). The upper index position inxiis because, typically, an index occurs once in an upper (superscript) and once in a lower (subscript) position in a term (see§ Applicationbelow). Typically,(x1x2x3)would be equivalent to the traditional(xyz). Ingeneral relativity, a common convention is that In general, indices can range over anyindexing set, including aninfinite set. This should not be confused with a typographically similar convention used to distinguish betweentensor index notationand the closely related but distinct basis-independentabstract index notation. An index that is summed over is asummation index, in this case "i". It is also called adummy indexsince any symbol can replace "i" without changing the meaning of the expression (provided that it does not collide with other index symbols in the same term). An index that is not summed over is afree indexand should appear only once per term. If such an index does appear, it usually also appears in every other term in an equation. An example of a free index is the "i" in the equationvi=aibjxj{\displaystyle v_{i}=a_{i}b_{j}x^{j}}, which is equivalent to the equationvi=∑j(aibjxj){\textstyle v_{i}=\sum _{j}(a_{i}b_{j}x^{j})}. Einstein notation can be applied in slightly different ways. Typically, each index occurs once in an upper (superscript) and once in a lower (subscript) position in a term; however, the convention can be applied more generally to any repeated indices within a term.[2]When dealing withcovariant and contravariantvectors, where the position of an index indicates the type of vector, the first case usually applies; a covariant vector can only be contracted with a contravariant vector, corresponding to summation of the products of coefficients. On the other hand, when there is a fixed coordinate basis (or when not considering coordinate vectors), one may choose to use only subscripts; see§ Superscripts and subscripts versus only subscriptsbelow. In terms ofcovariance and contravariance of vectors, They transform contravariantly or covariantly, respectively, with respect tochange of basis. 
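NumPy's einsum implements essentially this convention and can be used to illustrate it (the basis, components, and variable names below are assumptions chosen for the example; NumPy does not distinguish upper from lower indices):

```python
import numpy as np

e = np.eye(3)                     # basis vectors e_1, e_2, e_3 (as rows), an assumed concrete basis
x = np.array([2.0, -1.0, 0.5])    # components x^i

# y = x^i e_i : the repeated index i is summed
y = np.einsum('i,ij->j', x, e)
assert np.allclose(y, x[0]*e[0] + x[1]*e[1] + x[2]*e[2])

# v_i = a_i b_j x^j : j is a dummy (summed) index, i is a free index
a = np.array([1.0, 2.0, 3.0])
b = np.array([0.5, 0.0, -1.0])
v = np.einsum('i,j,j->i', a, b, x)
assert np.allclose(v, a * np.dot(b, x))
```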
In recognition of this fact, the following notation uses the same symbol both for a vector or covector and itscomponents, as in:v=viei=[e1e2⋯en][v1v2⋮vn]w=wiei=[w1w2⋯wn][e1e2⋮en]{\displaystyle {\begin{aligned}v=v^{i}e_{i}={\begin{bmatrix}e_{1}&e_{2}&\cdots &e_{n}\end{bmatrix}}{\begin{bmatrix}v^{1}\\v^{2}\\\vdots \\v^{n}\end{bmatrix}}\\w=w_{i}e^{i}={\begin{bmatrix}w_{1}&w_{2}&\cdots &w_{n}\end{bmatrix}}{\begin{bmatrix}e^{1}\\e^{2}\\\vdots \\e^{n}\end{bmatrix}}\end{aligned}}} wherev{\displaystyle v}is the vector andvi{\displaystyle v^{i}}are its components (not thei{\displaystyle i}th covectorv{\displaystyle v}),w{\displaystyle w}is the covector andwi{\displaystyle w_{i}}are its components. The basis vector elementsei{\displaystyle e_{i}}are each column vectors, and the covector basis elementsei{\displaystyle e^{i}}are each row covectors. (See also§ Abstract description;duality, below and theexamples) In the presence of anon-degenerate form(anisomorphismV→V∗, for instance aRiemannian metricorMinkowski metric), one canraise and lower indices. A basis gives such a form (via thedual basis), hence when working onRnwith aEuclidean metricand a fixedorthonormal basis, one has the option to work with only subscripts. However, if one changes coordinates, the way that coefficients change depends on the variance of the object, and one cannot ignore the distinction; seeCovariance and contravariance of vectors. In the above example, vectors are represented asn× 1matrices(column vectors), while covectors are represented as1 ×nmatrices (row covectors). When using the column vector convention: The virtue of Einstein notation is that it represents theinvariantquantities with a simple notation. In physics, ascalaris invariant under transformations of basis. In particular, aLorentz scalaris invariant under aLorentz transformation. The individual terms in the sum are not. When the basis is changed, thecomponentsof a vector change by alinear transformationdescribed by a matrix. This led Einstein to propose the convention that repeated indices imply the summation is to be done. As for covectors, they change by theinverse matrix. This is designed to guarantee that the linear function associated with the covector, the sum above, is the same no matter what the basis is. The value of the Einstein convention is that it applies to othervector spacesbuilt fromVusing thetensor productandduality. For example,V⊗V, the tensor product ofVwith itself, has a basis consisting of tensors of the formeij=ei⊗ej. Any tensorTinV⊗Vcan be written as:T=Tijeij.{\displaystyle \mathbf {T} =T^{ij}\mathbf {e} _{ij}.} V*, the dual ofV, has a basise1,e2, ...,enwhich obeys the ruleei(ej)=δji.{\displaystyle \mathbf {e} ^{i}(\mathbf {e} _{j})=\delta _{j}^{i}.}whereδis theKronecker delta. AsHom⁡(V,W)=V∗⊗W{\displaystyle \operatorname {Hom} (V,W)=V^{*}\otimes W}the row/column coordinates on a matrix correspond to the upper/lower indices on the tensor product. In Einstein notation, the usual element referenceAmn{\displaystyle A_{mn}}for them{\displaystyle m}-th row andn{\displaystyle n}-th column of matrixA{\displaystyle A}becomesAmn{\displaystyle {A^{m}}_{n}}. We can then write the following operations in Einstein notation as follows. 
Theinner productof two vectors is the sum of the products of their corresponding components, with the indices of one vector lowered (see#Raising and lowering indices):⟨u,v⟩=⟨ei,ej⟩uivj=ujvj{\displaystyle \langle \mathbf {u} ,\mathbf {v} \rangle =\langle \mathbf {e} _{i},\mathbf {e} _{j}\rangle u^{i}v^{j}=u_{j}v^{j}}In the case of anorthonormal basis, we haveuj=uj{\displaystyle u^{j}=u_{j}}, and the expression simplifies to:⟨u,v⟩=∑jujvj=ujvj{\displaystyle \langle \mathbf {u} ,\mathbf {v} \rangle =\sum _{j}u^{j}v^{j}=u_{j}v^{j}} In three dimensions, thecross productof two vectors with respect to apositively orientedorthonormal basis, meaning thate1×e2=e3{\displaystyle \mathbf {e} _{1}\times \mathbf {e} _{2}=\mathbf {e} _{3}}, can be expressed as:u×v=εjkiujvkei{\displaystyle \mathbf {u} \times \mathbf {v} =\varepsilon _{\,jk}^{i}u^{j}v^{k}\mathbf {e} _{i}} Here,εjki=εijk{\displaystyle \varepsilon _{\,jk}^{i}=\varepsilon _{ijk}}is theLevi-Civita symbol. Since the basis is orthonormal, raising the indexi{\displaystyle i}does not alter the value ofεijk{\displaystyle \varepsilon _{ijk}}, when treated as a tensor. The product of a matrixAijwith a column vectorvjis:ui=(Av)i=∑j=1NAijvj{\displaystyle \mathbf {u} _{i}=(\mathbf {A} \mathbf {v} )_{i}=\sum _{j=1}^{N}A_{ij}v_{j}}equivalent toui=Aijvj{\displaystyle u^{i}={A^{i}}_{j}v^{j}} This is a special case of matrix multiplication. Thematrix productof two matricesAijandBjkis:Cik=(AB)ik=∑j=1NAijBjk{\displaystyle \mathbf {C} _{ik}=(\mathbf {A} \mathbf {B} )_{ik}=\sum _{j=1}^{N}A_{ij}B_{jk}} equivalent toCik=AijBjk{\displaystyle {C^{i}}_{k}={A^{i}}_{j}{B^{j}}_{k}} For asquare matrixAij, thetraceis the sum of the diagonal elements, hence the sum over a common indexAii. Theouter productof the column vectoruiby the row vectorvjyields anm×nmatrixA:Aij=uivj=(uv)ij{\displaystyle {A^{i}}_{j}=u^{i}v_{j}={(uv)^{i}}_{j}} Sinceiandjrepresent twodifferentindices, there is no summation and the indices are not eliminated by the multiplication. Given atensor, one canraise an index or lower an indexby contracting the tensor with themetric tensor,gμν. For example, taking the tensorTαβ, one can lower an index:gμσTσβ=Tμβ{\displaystyle g_{\mu \sigma }{T^{\sigma }}_{\beta }=T_{\mu \beta }} Or one can raise an index:gμσTσα=Tμα{\displaystyle g^{\mu \sigma }{T_{\sigma }}^{\alpha }=T^{\mu \alpha }}
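The operations above translate directly into einsum subscripts, as in the following sketch (random components, an orthonormal spatial basis, and a diagonal Minkowski metric are assumed purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)
u, v = rng.standard_normal(3), rng.standard_normal(3)
A, B = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))

# Inner product <u, v> = u_j v^j (orthonormal basis assumed, so u_j = u^j)
assert np.isclose(np.einsum('j,j->', u, v), np.dot(u, v))

# Cross product (u x v)^i = eps_{ijk} u^j v^k
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1
assert np.allclose(np.einsum('ijk,j,k->i', eps, u, v), np.cross(u, v))

# Matrix-vector product u^i = A^i_j v^j and matrix product C^i_k = A^i_j B^j_k
assert np.allclose(np.einsum('ij,j->i', A, v), A @ v)
assert np.allclose(np.einsum('ij,jk->ik', A, B), A @ B)

# Trace A^i_i and outer product A^i_j = u^i v_j
assert np.isclose(np.einsum('ii->', A), np.trace(A))
assert np.allclose(np.einsum('i,j->ij', u, v), np.outer(u, v))

# Lowering an index with a metric: T_{mu beta} = g_{mu sigma} T^sigma_beta
g = np.diag([1.0, -1.0, -1.0, -1.0])      # assumed Minkowski metric for illustration
T = rng.standard_normal((4, 4))           # components T^sigma_beta
T_lower = np.einsum('ms,sb->mb', g, T)
assert np.allclose(T_lower, g @ T)
```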
https://en.wikipedia.org/wiki/Einstein_notation
Inmathematics—more specifically, indifferential geometry—themusical isomorphism(orcanonical isomorphism) is anisomorphismbetween thetangent bundleTM{\displaystyle \mathrm {T} M}and thecotangent bundleT∗M{\displaystyle \mathrm {T} ^{*}M}of aRiemannianorpseudo-Riemannian manifoldinduced by itsmetric tensor. There are similar isomorphisms onsymplectic manifolds. These isomorphisms are global versions of the canonical isomorphism between aninner product spaceand itsdual. The termmusicalrefers to the use of themusical notationsymbols♭{\displaystyle \flat }(flat)and♯{\displaystyle \sharp }(sharp).[1][2] In the notation ofRicci calculusandmathematical physics, the idea is expressed as theraising and lowering of indices. Raising and lowering indices are a form ofindex manipulationin tensor expressions. In certain specialized applications, such as onPoisson manifolds, the relationship may fail to be an isomorphism atsingular points, and so, for these cases, is technically only a homomorphism. Inlinear algebra, afinite-dimensional vector spaceis isomorphic to itsdual space(the space oflinear functionalsmapping the vector space to its base field), but not canonically isomorphic to it. This is to say that given a fixed basis for the vector space, there is a natural way to go back and forth between vectors and linear functionals: vectors are represented in the basis bycolumn vectors, and linear functionals are represented in the basis byrow vectors, and one can go back and forth bytransposing. However, without a fixed basis, there is no way to go back and forth between vectors and linear functionals. This is what is meant by that there is no canonical isomorphism. On the other hand, a finite-dimensional vector spaceV{\displaystyle V}endowed with a non-degeneratebilinear form⟨⋅,⋅⟩{\displaystyle \langle \cdot ,\cdot \rangle }is canonically isomorphic to its dual. The canonical isomorphismV→V∗{\displaystyle V\to V^{*}}is given by The non-degeneracy of⟨⋅,⋅⟩{\displaystyle \langle \cdot ,\cdot \rangle }means exactly that the above map is an isomorphism. An example is whereV=Rn{\displaystyle V=\mathbb {R} ^{n}}and⟨⋅,⋅⟩{\displaystyle \langle \cdot ,\cdot \rangle }is thedot product. In a basisei{\displaystyle e_{i}}, the canonical isomorphism above is represented as follows. Letgij=⟨ei,ej⟩{\displaystyle g_{ij}=\langle e_{i},e_{j}\rangle }be the components of the non-degenerate bilinear form and letgij{\displaystyle g^{ij}}be the components of the inverse matrix togij{\displaystyle g_{ij}}. Letei{\displaystyle e^{i}}be the dual basis ofei{\displaystyle e_{i}}. A vectorv{\displaystyle v}is written in the basis asv=viei{\displaystyle v=v^{i}e_{i}}usingEinstein summation notation, i.e.,v{\displaystyle v}has componentsvi{\displaystyle v^{i}}in the basis. The canonical isomorphism applied tov{\displaystyle v}gives an element of the dual, which is called a covector. The covector has componentsvi{\displaystyle v_{i}}in the dual basis given by contracting withg{\displaystyle g}: This is what is meant by lowering the index. Conversely, contracting a covectorα=αiei{\displaystyle \alpha =\alpha _{i}e^{i}}with the inverse ofg{\displaystyle g}gives a vector with components in the basisei{\displaystyle e_{i}}. This process is called raising the index. Raising and then lowering the same index (or conversely) are inverse operations, which is reflected ingij{\displaystyle g_{ij}}andgij{\displaystyle g^{ij}}being inverses: whereδji{\displaystyle \delta _{j}^{i}}is theKronecker deltaoridentity matrix. 
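A minimal numerical sketch of the canonical isomorphism in a basis (the bilinear form g below is an arbitrary assumed example, made positive definite only to guarantee non-degeneracy): lowering with g_{ij}, raising with g^{ij}, and checking that the two operations are mutually inverse.

```python
import numpy as np

rng = np.random.default_rng(3)

# An assumed non-degenerate symmetric bilinear form g_ij on R^4
Mrand = rng.standard_normal((4, 4))
g = Mrand @ Mrand.T + 4 * np.eye(4)      # g_ij, symmetric and invertible
g_inv = np.linalg.inv(g)                 # g^ij

v = rng.standard_normal(4)               # components v^i of a vector

# Flat (lowering the index): v_i = g_ij v^j
v_flat = g @ v
# The covector acts on w by <v, w> = g_ij v^i w^j
w = rng.standard_normal(4)
assert np.isclose(v_flat @ w, v @ g @ w)

# Sharp (raising the index): alpha^i = g^{ij} alpha_j, the inverse of flat
alpha = rng.standard_normal(4)           # covector components alpha_i
alpha_sharp = g_inv @ alpha
assert np.allclose(g @ alpha_sharp, alpha)       # flat(sharp(alpha)) = alpha
assert np.allclose(g_inv @ (g @ v), v)           # sharp(flat(v)) = v, since g^{ik} g_{kj} = delta^i_j
```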
The musical isomorphisms are the global version of the canonical isomorphismv↦⟨v,⋅⟩{\displaystyle v\mapsto \langle v,\cdot \rangle }and its inverse for thetangent bundleandcotangent bundleof a (pseudo-)Riemannian manifold(M,g){\displaystyle (M,g)}. They are canonical isomorphisms ofvector bundleswhich are at any pointpthe canonical isomorphism applied to thetangent spaceofMatpendowed with the inner productgp{\displaystyle g_{p}}. Because everysmooth manifoldcan be (non-canonically) endowed with a Riemannian metric, the musical isomorphisms show that a vector bundle on a smooth manifold is (non-canonically) isomorphic to its dual. Let(M,g)be a (pseudo-)Riemannian manifold. At each pointp, the mapgpis a non-degenerate bilinear form on the tangent spaceTpM. Ifvis a vector inTpM, itsflatis thecovector inT∗pM. Since this is a smooth map that preserves the pointp, it defines a morphism ofsmooth vector bundles♭:TM→T∗M{\displaystyle \flat :\mathrm {T} M\to \mathrm {T} ^{*}M}. By non-degeneracy of the metric,♭{\displaystyle \flat }has an inverse♯{\displaystyle \sharp }at each point, characterized by forαinT∗pMandvinTpM. The vectorα♯{\displaystyle \alpha ^{\sharp }}is called thesharpofα. The sharp map is a smooth bundle map♯:T∗M→TM{\displaystyle \sharp :\mathrm {T} ^{*}M\to \mathrm {T} M}. Flat and sharp are mutually inverse isomorphisms of smooth vector bundles, hence, for eachpinM, there are mutually inverse vector space isomorphisms betweenTpMandT∗pM. The flat and sharp maps can be applied tovector fieldsandcovector fieldsby applying them to each point. Hence, ifXis a vector field andωis a covector field, and Suppose{ei}is amoving tangent frame(see alsosmooth frame) for the tangent bundleTMwith, asdual frame(see alsodual basis), themoving coframe(amoving tangent framefor thecotangent bundleT∗M{\displaystyle \mathrm {T} ^{*}M}; see alsocoframe){ei}. Then thepseudo-Riemannian metric, which is a 2-covarianttensor field, can be written locally in this coframe asg=gijei⊗ejusingEinstein summation notation. Given a vector fieldX=Xieiand denotinggijXi=Xj, its flat is This is referred to as lowering an index, because the components ofXare written with an upper indexXi, whereas the components ofX♭{\displaystyle X^{\flat }}are written with a lower indexXj. In the same way, given a covector fieldω=ωieiand denotinggijωi=ωj, its sharp is wheregijare thecomponentsof theinverse metric tensor(given by the entries of the inverse matrix togij). Taking the sharp of a covector field is referred to asraising an index. The musical isomorphisms may also be extended, for eachr,s,k, to an isomorphism between the bundle of(r,s){\displaystyle (r,s)}tensors and the bundle of(r−k,s+k){\displaystyle (r-k,s+k)}tensors. Herekcan be positive or negative, so long asr-k≥ 0ands+k≥ 0. Lowering an index of an(r,s){\displaystyle (r,s)}tensor gives a(r−1,s+1){\displaystyle (r-1,s+1)}tensor, while raising an index gives a(r+1,s−1){\displaystyle (r+1,s-1)}. Which index is to be raised or lowered must be indicated. For instance, consider the(0, 2)tensorX=Xijei⊗ej. Raising the second index, we get the(1, 1)tensor In other words, the componentsXik{\displaystyle X_{i}^{k}}ofX♯{\displaystyle X^{\sharp }}are given by Similar formulas are available for tensors of other orders. 
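The index-raising of a (0, 2) tensor described above can be checked numerically as follows (the metric components and tensor components are random assumed values):

```python
import numpy as np

rng = np.random.default_rng(5)

# Assumed metric components g_ij in some frame (symmetric, non-degenerate)
A = rng.standard_normal((3, 3))
g = A @ A.T + 3 * np.eye(3)
g_inv = np.linalg.inv(g)

# A (0, 2) tensor X = X_ij e^i (x) e^j
X = rng.standard_normal((3, 3))

# Raising the second index gives a (1, 1) tensor with components X_i^k = g^{kj} X_ij
X_mixed = np.einsum('kj,ij->ik', g_inv, X)

# Lowering it back recovers X: X_ij = g_{jk} X_i^k
assert np.allclose(np.einsum('jk,ik->ij', g, X_mixed), X)
```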
For example, for a(0,n){\displaystyle (0,n)}tensorX, all indices are raised by:[3] For a(n,0){\displaystyle (n,0)}tensorX, all indices are lowered by: For a mixed tensor of order(n,m){\displaystyle (n,m)}, all lower indices are raised and all upper indices are lowered by Well-formulated expressions are constrained by the rules of Einstein summation notation: any index may appear at most twice and furthermore a raised index must contract with a lowered index. With these rules we can immediately see that an expression such asgijviuj{\displaystyle g_{ij}v^{i}u^{j}}is well formulated whilegijviuj{\displaystyle g_{ij}v_{i}u_{j}}is not. In the context ofexterior algebra, an extension of the musical operators may be defined on⋀Vand its dual⋀V*, and are again mutual inverses:[4] defined by In this extension, in which♭mapsk-vectors tok-covectors and♯mapsk-covectors tok-vectors, all the indices of atotally antisymmetric tensorare simultaneously raised or lowered, and so no index need be indicated:Y♯=(Yi1…ijei1⊗⋯⊗eij)♯=gi1r1…gijrsYi1…iker1⊗⋯⊗ers.{\displaystyle Y^{\sharp }=(Y_{i_{1}\dots i_{j}}\mathbf {e} ^{i_{1}}\otimes \dots \otimes \mathbf {e} ^{i_{j}})^{\sharp }=g^{i_{1}r_{1}}\dots g^{i_{j}r_{s}}\,Y_{i_{1}\dots i_{k}}\,\mathbf {e} _{r_{1}}\otimes \dots \otimes \mathbf {e} _{r_{s}}.} This works not just fork-vectors in the context of linear algebra but also fork-forms in the context of a (pseudo-)Riemannian manifold: More generally, musical isomorphisms always exist between a vector bundle endowed with abundle metricand its dual. Given a(0, 2)tensorX=Xijei⊗ej, we define thetrace ofXthrough the metric tensorgbytrg⁡(X):=tr⁡(X♯)=tr⁡(gjkXijei⊗ek)=gijXij.{\displaystyle \operatorname {tr} _{g}(X):=\operatorname {tr} (X^{\sharp })=\operatorname {tr} (g^{jk}X_{ij}\,{\bf {e}}^{i}\otimes {\bf {e}}_{k})=g^{ij}X_{ij}.} Observe that the definition of trace is independent of the choice of index to raise, since the metric tensor is symmetric. The trace of an(r,s){\displaystyle (r,s)}tensor can be taken in a similar way, so long as one specifies which two distinct indices are to be traced. This process is also called contracting the two indices. For example, ifXis an(r,s){\displaystyle (r,s)}tensor withr> 1, then the indicesi1{\displaystyle i_{1}}andi2{\displaystyle i_{2}}can be contracted to give an(r−2,s){\displaystyle (r-2,s)}tensor with components The covariant4-positionis given by with components: (wherex,y,zare the usualCartesian coordinates) and theMinkowski metrictensor withmetric signature(− + + +) is defined as in components: To raise the index, multiply by the tensor and contract: then forλ= 0: and forλ=j= 1, 2, 3: So the index-raisedcontravariant4-position is: This operation is equivalent to the matrix multiplication Given two vectors,Xμ{\displaystyle X^{\mu }}andYμ{\displaystyle Y^{\mu }}, we can write down their (pseudo-)inner product in two ways: By lowering indices, we can write this expression as In matrix notation, the first expression can be written as while the second is, after lowering the indices ofXμ{\displaystyle X^{\mu }}, For a (0,2) tensor,[3]twice contracting with the inverse metric tensor and contracting in different indices raises each index: Similarly, twice contracting with the metric tensor and contracting in different indices lowers each index: Let's apply this to the theory of electromagnetism. 
Thecontravariantelectromagnetic tensorin the(+ − − −)signatureis given by[5] In components, To obtain thecovarianttensorFαβ, contract with the inverse metric tensor: and sinceF00= 0andF0i= −Fi0, this reduces to Now forα= 0,β=k= 1, 2, 3: and by antisymmetry, forα=k= 1, 2, 3,β= 0: then finally forα=k= 1, 2, 3,β=l= 1, 2, 3; The (covariant) lower indexed tensor is then: This operation is equivalent to the matrix multiplication
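The sign pattern derived above can be verified with a generic antisymmetric tensor standing in for F^{αβ} (the particular E and B component convention is deliberately left out, so this is only a sketch of the index manipulation, not of the physics): lowering both indices with the (+ − − −) metric flips the sign of the time–space components, leaves the purely spatial ones unchanged, and is equivalent to the matrix product η F ηᵀ.

```python
import numpy as np

eta = np.diag([1.0, -1.0, -1.0, -1.0])   # metric in the (+ - - -) signature used above

# A generic antisymmetric contravariant tensor F^{alpha beta} stands in for the
# electromagnetic tensor (the E/B component convention is left unspecified here).
rng = np.random.default_rng(4)
M = rng.standard_normal((4, 4))
F_upper = M - M.T                        # F^{alpha beta} = -F^{beta alpha}

# Lower both indices: F_{alpha beta} = eta_{alpha gamma} eta_{beta delta} F^{gamma delta}
F_lower = np.einsum('ag,bd,gd->ab', eta, eta, F_upper)

# Mixed time-space components change sign, purely spatial components do not:
assert np.allclose(F_lower[0, 1:], -F_upper[0, 1:])
assert np.allclose(F_lower[1:, 1:], F_upper[1:, 1:])
# Antisymmetry is preserved by lowering
assert np.allclose(F_lower, -F_lower.T)
# Equivalent matrix form: F_lower = eta @ F_upper @ eta^T
assert np.allclose(F_lower, eta @ F_upper @ eta.T)
```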
https://en.wikipedia.org/wiki/Raising_and_lowering_indices
Abstract index notation(also referred to as slot-naming index notation)[1]is a mathematical notation fortensorsandspinorsthat uses indices to indicate their types, rather than their components in a particular basis.[2]The indices are mere placeholders, not related to any basis and, in particular, are non-numerical. Thus it should not be confused with theRicci calculus. The notation was introduced byRoger Penroseas a way to use the formal aspects of theEinstein summation conventionto compensate for the difficulty in describingcontractionsandcovariant differentiationin modern abstract tensor notation, while preserving the explicitcovarianceof the expressions involved.[3] LetV{\displaystyle V}be avector space, andV∗{\displaystyle V^{*}}itsdual space. Consider, for example, an order-2covarianttensorh∈V∗⊗V∗{\displaystyle h\in V^{*}\otimes V^{*}}. Thenh{\displaystyle h}can be identified with abilinear formonV{\displaystyle V}. In other words, it is a function of two arguments inV{\displaystyle V}which can be represented as a pair ofslots: Abstract index notation is merely alabellingof the slots with Latin letters, which have no significance apart from their designation as labels of the slots (i.e., they are non-numerical): Atensor contraction(or trace) between two tensors is represented by the repetition of an index label, where one label is contravariant (anupper indexcorresponding to the factorV{\displaystyle V}) and one label is covariant (alower indexcorresponding to the factorV∗{\displaystyle V^{*}}). Thus, for instance, is the trace of a tensort=tabc{\displaystyle t=t_{ab}{}^{c}}over its last two slots. This manner of representing tensor contractions by repeated indices is formally similar to theEinstein summation convention. However, as the indices are non-numerical, it does not imply summation: rather it corresponds to the abstract basis-independent trace operation (ornatural pairing) between tensor factors of typeV{\displaystyle V}and those of typeV∗{\displaystyle V^{*}}. A general homogeneous tensor is an element of atensor productof copies ofV{\displaystyle V}andV∗{\displaystyle V^{*}}, such as Label each factor in this tensor product with a Latin letter in a raised position for each contravariantV{\displaystyle V}factor, and in a lowered position for each covariantV∗{\displaystyle V^{*}}position. In this way, write the product as or, simply The last two expressions denote the same object as the first. Tensors of this type are denoted using similar notation, for example: In general, whenever one contravariant and one covariant factor occur in a tensor product of spaces, there is an associatedcontraction(ortrace) map. For instance, is the trace on the first two spaces of the tensor product.Tr15:V⊗V∗⊗V∗⊗V⊗V∗→V∗⊗V∗⊗V{\displaystyle \mathrm {Tr} _{15}:V\otimes V^{*}\otimes V^{*}\otimes V\otimes V^{*}\to V^{*}\otimes V^{*}\otimes V}is the trace on the first and last space. These trace operations are signified on tensors by the repetition of an index. Thus the first trace map is given by and the second by To any tensor product on a single vector space, there are associatedbraiding maps. For example, the braiding map interchanges the two tensor factors (so that its action on simple tensors is given byτ(12)(v⊗w)=w⊗v{\displaystyle \tau _{(12)}(v\otimes w)=w\otimes v}). In general, the braiding maps are in one-to-one correspondence with elements of thesymmetric group, acting by permuting the tensor factors. 
Here,τσ{\displaystyle \tau _{\sigma }}denotes the braiding map associated to the permutationσ{\displaystyle \sigma }(represented as a product of disjointcyclic permutations). Braiding maps are important indifferential geometry, for instance, in order to express theBianchi identity. Here letR{\displaystyle R}denote theRiemann tensor, regarded as a tensor inV∗⊗V∗⊗V∗⊗V{\displaystyle V^{*}\otimes V^{*}\otimes V^{*}\otimes V}. The first Bianchi identity then asserts that Abstract index notation handles braiding as follows. On a particular tensor product, an ordering of the abstract indices is fixed (usually this is alexicographic ordering). The braid is then represented in notation by permuting the labels of the indices. Thus, for instance, with the Riemann tensor the Bianchi identity becomes A general tensor may be antisymmetrized or symmetrized, and there is according notation. We demonstrate the notation by example. Let's antisymmetrize the type-(0,3) tensorωabc{\displaystyle \omega _{abc}}, whereS3{\displaystyle \mathrm {S} _{3}}is the symmetric group on three elements. Similarly, we may symmetrize:
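The (anti)symmetrization over S₃ and the first Bianchi identity can be illustrated numerically (the sign-of-permutation helper, the random ω components, and the constant-curvature model tensor R_{abcd} = g_{ac}g_{bd} − g_{ad}g_{bc} are all assumptions, the latter chosen because it possesses the Riemann symmetries):

```python
import itertools
import math
import numpy as np

def perm_sign(p):
    """Signature of a permutation, computed by counting inversions."""
    s = 1
    for i in range(len(p)):
        for j in range(i + 1, len(p)):
            if p[i] > p[j]:
                s = -s
    return s

n = 4
rng = np.random.default_rng(6)

# Antisymmetrize a type-(0,3) tensor omega_{abc} over S_3, as in the last formula above
omega = rng.standard_normal((n, n, n))
omega_anti = sum(perm_sign(p) * np.transpose(omega, p)
                 for p in itertools.permutations(range(3))) / math.factorial(3)
assert np.allclose(omega_anti, -np.swapaxes(omega_anti, 0, 1))   # sign change under any swap
assert np.allclose(omega_anti, -np.swapaxes(omega_anti, 1, 2))

# First Bianchi identity R_{[abc]}^d = 0, checked on a constant-curvature model tensor
A = rng.standard_normal((n, n))
g = A @ A.T + n * np.eye(n)
g_inv = np.linalg.inv(g)
R_low = np.einsum('ac,bd->abcd', g, g) - np.einsum('ad,bc->abcd', g, g)   # R_{abcd}
R = np.einsum('abce,ed->abcd', R_low, g_inv)                              # R_{abc}^d

bianchi = sum(perm_sign(p) * np.transpose(R, p + (3,))
              for p in itertools.permutations(range(3))) / math.factorial(3)
assert np.allclose(bianchi, 0)
```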
https://en.wikipedia.org/wiki/Abstract_index_notation
Inphysics, especially inmultilinear algebraandtensor analysis,covarianceandcontravariancedescribe how the quantitative description of certain geometric or physical entities changes with achange of basis.[2]Briefly, a contravariant vector is a list of numbers that transforms oppositely to a change of basis, and a covariant vector is a list of numbers that transforms in the same way. Contravariant vectors are often just calledvectorsand covariant vectors are calledcovectorsordual vectors. The termscovariantandcontravariantwere introduced byJames Joseph Sylvesterin 1851.[3][4] Curvilinear coordinate systems, such ascylindricalorspherical coordinates, are often used in physical and geometric problems. Associated with any coordinate system is a natural choice of coordinate basis for vectors based at each point of the space, and covariance and contravariance are particularly important for understanding how the coordinate description of a vector changes by passing from one coordinate system to another.Tensorsare objects inmultilinear algebrathat can have aspects of both covariance and contravariance. In physics, a vector typically arises as the outcome of a measurement or series of measurements, and is represented as a list (ortuple) of numbers such as The numbers in the list depend on the choice ofcoordinate system. For instance, if the vector represents position with respect to an observer (position vector), then the coordinate system may be obtained from a system of rigid rods, or reference axes, along which the componentsv1,v2, andv3are measured. For a vector to represent a geometric object, it must be possible to describe how it looks in any other coordinate system. That is to say, the components of the vectors willtransformin a certain way in passing from one coordinate system to another. A simple illustrative case is that of aEuclidean vector. For a vector, once a set of basis vectors has been defined, then the components of that vector will always varyoppositeto that of the basis vectors. That vector is therefore defined as acontravarianttensor. Take a standard position vector for example. By changing the scale of the reference axes from meters to centimeters (that is,dividingthe scale of the reference axes by 100, so that the basis vectors now are.01{\displaystyle .01}meters long), the components of the measuredpositionvectoraremultipliedby 100. A vector's components change scaleinverselyto changes in scale to the reference axes, and consequently a vector is called acontravarianttensor. Avector, which is an example of acontravarianttensor, has components that transform inversely to the transformation of the reference axes, (with example transformations includingrotationanddilation).The vector itself does not change under these operations; instead, the components of the vector change in a way that cancels the change in the spatial axes. In other words, if the reference axes were rotated in one direction, the component representation of the vector would rotate in exactly the opposite way. Similarly, if the reference axes were stretched in one direction, the components of the vector, would reduce in an exactly compensating way. 
Mathematically, if the coordinate system undergoes a transformation described by ann×n{\displaystyle n\times n}invertible matrixM, so that the basis vectors transform according to[e1′e2′...en′]=[e1e2...en]M{\displaystyle {\begin{bmatrix}\mathbf {e} _{1}^{\prime }\ \mathbf {e} _{2}^{\prime }\ ...\ \mathbf {e} _{n}^{\prime }\end{bmatrix}}={\begin{bmatrix}\mathbf {e} _{1}\ \mathbf {e} _{2}\ ...\ \mathbf {e} _{n}\end{bmatrix}}M}, then the components of a vectorvin the original basis (vi{\displaystyle v^{i}}) must be similarly transformed via[v1′v2′...vn′]=M−1[v1v2...vn]{\displaystyle {\begin{bmatrix}v^{1}{^{\prime }}\\v^{2}{^{\prime }}\\...\\v^{n}{^{\prime }}\end{bmatrix}}=M^{-1}{\begin{bmatrix}v^{1}\\v^{2}\\...\\v^{n}\end{bmatrix}}}. The components of avectorare often represented arranged in a column. By contrast, acovectorhas components that transform like the reference axes. It lives in the dual vector space, and represents a linear map from vectors to scalars. The dot product operator involving vectors is a good example of a covector. To illustrate, assume we have a covector defined asv⋅{\displaystyle \mathbf {v} \ \cdot }, wherev{\displaystyle \mathbf {v} }is a vector. The components of this covector in some arbitrary basis are[v⋅e1v⋅e2...v⋅en]{\displaystyle {\begin{bmatrix}\mathbf {v} \cdot \mathbf {e} _{1}&\mathbf {v} \cdot \mathbf {e} _{2}&...&\mathbf {v} \cdot \mathbf {e} _{n}\end{bmatrix}}}, with[e1e2...en]{\displaystyle {\begin{bmatrix}\mathbf {e} _{1}\ \mathbf {e} _{2}\ ...\ \mathbf {e} _{n}\end{bmatrix}}}being the basis vectors in the corresponding vector space. (This can be derived by noting that we want to get the correct answer for the dot product operation when multiplying by an arbitrary vectorw{\displaystyle \mathbf {w} }, with components[w1w2...wn]{\displaystyle {\begin{bmatrix}w^{1}\\w^{2}\\...\\w^{n}\end{bmatrix}}}). The covariance of these covector components is then seen by noting that if a transformation described by ann×n{\displaystyle n\times n}invertible matrixMwere to be applied to the basis vectors in the corresponding vector space,[e1′e2′...en′]=[e1e2...en]M{\displaystyle {\begin{bmatrix}\mathbf {e} _{1}^{\prime }\ \mathbf {e} _{2}^{\prime }\ ...\ \mathbf {e} _{n}^{\prime }\end{bmatrix}}={\begin{bmatrix}\mathbf {e} _{1}\ \mathbf {e} _{2}\ ...\ \mathbf {e} _{n}\end{bmatrix}}M}, then the components of the covectorv⋅{\displaystyle \mathbf {v} \ \cdot }will transform with the same matrixM{\displaystyle M}, namely,[v⋅e1′v⋅e2′...v⋅en′]=[v⋅e1v⋅e2...v⋅en]M{\displaystyle {\begin{bmatrix}\mathbf {v} \cdot \mathbf {e} _{1}^{\prime }&\mathbf {v} \cdot \mathbf {e} _{2}^{\prime }&...&\mathbf {v} \cdot \mathbf {e} _{n}^{\prime }\end{bmatrix}}={\begin{bmatrix}\mathbf {v} \cdot \mathbf {e} _{1}&\mathbf {v} \cdot \mathbf {e} _{2}&...&\mathbf {v} \cdot \mathbf {e} _{n}\end{bmatrix}}M}. The components of acovectorare often represented arranged in a row. A third concept related to covariance and contravariance isinvariance. Ascalar(also called type-0 or rank-0 tensor) is an object that does not vary with the change in basis. An example of a physicalobservablethat is a scalar is themassof a particle. The single, scalar value of mass is independent to changes in basis vectors and consequently is calledinvariant. The magnitude of a vector (such asdistance) is another example of an invariant, because it remains fixed even if geometrical vector components vary. 
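A short NumPy sketch of these two transformation rules (the matrix M and all components are random assumed values): vector components transform with M⁻¹ so that the vector itself is unchanged, while the components of a covector built from the dot product transform with M itself.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 3

E = np.eye(n)                      # old basis vectors e_i as columns (assumed Cartesian)
M = rng.standard_normal((n, n))    # an invertible change-of-basis matrix
E_new = E @ M                      # new basis: [e'_1 ... e'_n] = [e_1 ... e_n] M

v = rng.standard_normal(n)         # components of a vector in the old basis
v_new = np.linalg.solve(M, v)      # contravariant rule: components transform with M^{-1}

# The vector itself (components times basis vectors) is unchanged:
assert np.allclose(E @ v, E_new @ v_new)

# Covector example: the components of "w ." are the dot products with the basis vectors
w = rng.standard_normal(n)
alpha_old = np.array([w @ E[:, i] for i in range(n)])        # row of w . e_i
alpha_new = np.array([w @ E_new[:, i] for i in range(n)])    # row of w . e'_i
# Covariant rule: the row of components transforms with the same matrix M
assert np.allclose(alpha_new, alpha_old @ M)
```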
(For example, for a position vector of length3{\displaystyle 3}meters, if allCartesianbasis vectors are changed from1{\displaystyle 1}meters in length to.01{\displaystyle .01}meters in length, the length of the position vector remains unchanged at3{\displaystyle 3}meters, although the vector components will all increase by a factor of100{\displaystyle 100}). The scalar product of a vector and a covector is invariant, because one has components that vary with the base change, and the other has components that vary oppositely, and the two effects cancel out. One thus says that covectors aredualto vectors. Thus, to summarize: The general formulation of covariance and contravariance refers to how the components of a coordinate vector transform under achange of basis(passive transformation).[5]Thus letVbe avector spaceof dimensionnover afieldofscalarsS, and let each off= (X1, ...,Xn)andf′ = (Y1, ...,Yn)be abasisofV.[note 1]Also, let thechange of basisfromftof′ be given by for someinvertiblen×nmatrixAwith entriesaji{\displaystyle a_{j}^{i}}. Here, each vectorYjof thef′ basis is a linear combination of the vectorsXiof thefbasis, so that which are the columns of the matrix productfA{\displaystyle \mathbf {f} A}. A vectorv{\displaystyle v}inVis expressed uniquely as alinear combinationof the elementsXi{\displaystyle X_{i}}of thefbasis as wherevi[f] are elements of the fieldSknown as thecomponentsofvin thefbasis. Denote thecolumn vectorof components ofvbyv[f]: so that (2) can be rewritten as a matrix product The vectorvmay also be expressed in terms of thef′ basis, so that However, since the vectorvitself is invariant under the choice of basis, The invariance ofvcombined with the relationship (1) betweenfandf′ implies that giving the transformation rule In terms of components, where the coefficientsa~ji{\displaystyle {\tilde {a}}_{j}^{i}}are the entries of theinverse matrixofA. Because the components of the vectorvtransform with theinverseof the matrixA, these components are said totransform contravariantlyunder a change of basis. The wayArelates the two pairs is depicted in the following informal diagram using an arrow. The reversal of the arrow indicates a contravariant change: Alinear functionalαonVis expressed uniquely in terms of itscomponents(elements inS) in thefbasis as These components are the action ofαon the basis vectorsXiof thefbasis. Under the change of basis fromftof′ (via1), the components transform so that Denote therow vectorof components ofαbyα[f]: so that (3) can be rewritten as the matrix product Because the components of the linear functional α transform with the matrixA, these components are said totransform covariantlyunder a change of basis. The wayArelates the two pairs is depicted in the following informal diagram using an arrow. A covariant relationship is indicated since the arrows travel in the same direction: Had a column vector representation been used instead, the transformation law would be thetranspose The choice of basisfon the vector spaceVdefines uniquely a set of coordinate functions onV, by means of The coordinates onVare therefore contravariant in the sense that Conversely, a system ofnquantitiesvithat transform like the coordinatesxionVdefines a contravariant vector (or simply vector). A system ofnquantities that transform oppositely to the coordinates is then a covariant vector (or covector). 
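The two transformation rules just stated — the components of v change with the inverse of A, while the components of a linear functional change with A itself — can be verified directly. A minimal sketch, again assuming NumPy and using a randomly generated (almost surely invertible) matrix A; all names are illustrative:

    import numpy as np
    rng = np.random.default_rng(0)

    n = 3
    f = rng.normal(size=(n, n))            # columns are the basis vectors X_1..X_n
    A = rng.normal(size=(n, n))            # change-of-basis matrix (almost surely invertible)
    f_prime = f @ A                        # new basis: each Y_j is a combination of the X_i

    v_f = np.array([1.0, -2.0, 0.5])       # contravariant components of v in the f basis
    alpha_f = np.array([[3.0, 0.0, 1.0]])  # covariant components (a row) of a functional alpha

    v_fprime = np.linalg.inv(A) @ v_f      # vector components transform contravariantly
    alpha_fprime = alpha_f @ A             # functional components transform covariantly

    # The vector itself and the scalar alpha(v) are basis-independent:
    assert np.allclose(f @ v_f, f_prime @ v_fprime)
    assert np.allclose(alpha_f @ v_f, alpha_fprime @ v_fprime)

Both assertions restate the point made in the text: the vector and the scalar value of the functional are geometric objects, while their components are bookkeeping relative to a chosen basis.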
This formulation of contravariance and covariance is often more natural in applications in which there is a coordinate space (amanifold) on which vectors live astangent vectorsorcotangent vectors. Given a local coordinate systemxion the manifold, the reference axes for the coordinate system are thevector fields This gives rise to the framef= (X1, ...,Xn)at every point of the coordinate patch. Ifyiis a different coordinate system and then the framef'is related to the framefby the inverse of theJacobian matrixof the coordinate transition: Or, in indices, A tangent vector is by definition a vector that is a linear combination of the coordinate partials∂/∂xi{\displaystyle \partial /\partial x^{i}}. Thus a tangent vector is defined by Such a vector is contravariant with respect to change of frame. Under changes in the coordinate system, one has Therefore, the components of a tangent vector transform via Accordingly, a system ofnquantitiesvidepending on the coordinates that transform in this way on passing from one coordinate system to another is called a contravariant vector. In a finite-dimensionalvector spaceVover a fieldKwith a non-degenerate symmetricbilinear formg:V×V→K(which may be referred to as themetric tensor), there is little distinction between covariant and contravariant vectors, because thebilinear formallows covectors to be identified with vectors. That is, a vectorvuniquely determines a covectorαvia for all vectorsw. Conversely, each covectorαdetermines a unique vectorvby this equation. Because of this identification of vectors with covectors, one may speak of thecovariant componentsorcontravariant componentsof a vector, that is, they are just representations of the same vector using thereciprocal basis. Given a basisf= (X1, ...,Xn)ofV, there is a unique reciprocal basisf#= (Y1, ...,Yn)ofVdetermined by requiring that theKronecker delta. In terms of these bases, any vectorvcan be written in two ways: The componentsvi[f] are thecontravariant componentsof the vectorvin the basisf, and the componentsvi[f] are thecovariant componentsofvin the basisf. The terminology is justified because under a change of basis, whereA{\displaystyle A}is an invertiblen×n{\displaystyle n\times n}matrix, and thematrix transposehas its usual meaning. In the Euclidean plane, thedot productallows for vectors to be identified with covectors. Ife1,e2{\displaystyle \mathbf {e} _{1},\mathbf {e} _{2}}is a basis, then the dual basise1,e2{\displaystyle \mathbf {e} ^{1},\mathbf {e} ^{2}}satisfies Thus,e1ande2are perpendicular to each other, as aree2ande1, and the lengths ofe1ande2normalized againste1ande2, respectively. For example,[6]suppose that we are given a basise1,e2consisting of a pair of vectors making a 45° angle with one another, such thate1has length 2 ande2has length 1. Then the dual basis vectors are given as follows: Applying these rules, we find and Thus the change of basis matrix in going from the original basis to the reciprocal basis is since For instance, the vector is a vector with contravariant components The covariant components are obtained by equating the two expressions for the vectorv: so In the three-dimensionalEuclidean space, one can also determine explicitly the dual basis to a given set ofbasis vectorse1,e2,e3ofE3that are not necessarily assumed to be orthogonal nor of unit norm. 
The dual basis vectors are: Even when theeiandeiare notorthonormal, they are still mutually reciprocal: Then the contravariant components of any vectorvcan be obtained by thedot productofvwith the dual basis vectors: Likewise, the covariant components ofvcan be obtained from the dot product ofvwith basis vectors, viz. Thenvcan be expressed in two (reciprocal) ways, viz. or Combining the above relations, we have and we can convert between the basis and dual basis with and If the basis vectors areorthonormal, then they are the same as the dual basis vectors. The following applies to any vector space of dimensionnequipped with a non-degenerate commutative and distributive dot product, and thus also to the Euclidean spaces of any dimension. All indices in the formulas run from 1 ton. TheEinstein notationfor the implicit summation of the terms with the same upstairs (contravariant) and downstairs (covariant) indices is followed. The historical and geometrical meaning of the termscontravariantandcovariantwill be explained at the end of this section. Considering this figure for the case of an Euclidean space withn=2{\displaystyle n=2}, sincev=OA+OB{\displaystyle \mathbf {v} =\mathbf {OA} +\mathbf {OB} }, if we want to expressv{\displaystyle \mathbf {v} }in terms of the covariant basis, we have to multiply the basis vectors by the coefficientsv1=|OA||e1|,v2=|OB||e2|{\displaystyle v^{1}={\frac {\vert \mathbf {OA} \vert }{\vert \mathbf {e_{1}} \vert }},v^{2}={\frac {\vert \mathbf {OB} \vert }{\vert \mathbf {e_{2}} \vert }}}. Withv{\displaystyle \mathbf {v} }and thusOA{\displaystyle \mathbf {OA} }andOB{\displaystyle \mathbf {OB} }fixed, if the module ofei{\displaystyle \mathbf {e_{i}} }increases, the value of thevi{\displaystyle v^{i}}component decreases, and that's why they're calledcontra-variant (with respect to the variation of the basis vectors module). Symmetrically, corollary (7) states that thevi{\displaystyle v_{i}}components equal the dot productv⋅ei{\displaystyle \mathbf {v} \cdot \mathbf {e_{i}} }between the vector and the covariant basis vectors, and since this is directly proportional to the basis vectors module, they're calledco-variant. If we consider the dual (contravariant) basis, the situation is perfectly specular: the covariant components arecontra-variant with respect to the module of the dual basis vectors, while the contravariant components areco-variant. So in the end it all boils down to a matter of convention: historically the first non-orthonormalbasis of the vector space of choice was called "covariant", its dual basis "contravariant", and the corresponding components named specularly. If the covariant basis becomesorthonormal, the dual contravariant basis aligns with it and the covariant components collapse into the contravariant ones, the most familiar situation when dealing with geometrical Euclidean vectors.G{\displaystyle G}andG−1{\displaystyle G^{-1}}become the identity matrixI{\displaystyle I}, and: If the metric is non-Euclidean, but for instanceMinkowskianlike in thespecial relativityandgeneral relativitytheories, the basis are never orthonormal, even in the case of special relativity whereG{\displaystyle G}andG−1{\displaystyle G^{-1}}become, forn=4,η≜diag(1,−1,−1,−1){\displaystyle n=4,\ \eta \triangleq diag(1,-1,-1,-1)}. In this scenario, the covariant and contravariant components always differ. The distinction between covariance and contravariance is particularly important for computations withtensors, which often havemixed variance. 
This means that they have both covariant and contravariant components, or both vector and covector components. The valence of a tensor is the number of covariant and contravariant terms, and inEinstein notation, covariant components have lower indices, while contravariant components have upper indices. The duality between covariance and contravariance intervenes whenever a vector or tensor quantity is represented by its components, although moderndifferential geometryuses more sophisticatedindex-free methods to represent tensors. Intensor analysis, acovariantvector varies more or less reciprocally to a corresponding contravariant vector. Expressions for lengths, areas and volumes of objects in the vector space can then be given in terms of tensors with covariant and contravariant indices. Under simple expansions and contractions of the coordinates, the reciprocity is exact; under affine transformations the components of a vector intermingle on going between covariant and contravariant expression. On amanifold, atensor fieldwill typically have multiple, upper and lower indices, where Einstein notation is widely used. When the manifold is equipped with ametric, covariant and contravariant indices become very closely related to one another. Contravariant indices can be turned into covariant indices bycontractingwith the metric tensor. The reverse is possible by contracting with the (matrix) inverse of the metric tensor. Note that in general, no such relation exists in spaces not endowed with a metric tensor. Furthermore, from a more abstract standpoint, a tensor is simply "there" and its components of either kind are only calculational artifacts whose values depend on the chosen coordinates. The explanation in geometric terms is that a general tensor will have contravariant indices as well as covariant indices, because it has parts that live in thetangent bundleas well as thecotangent bundle. A contravariant vector is one which transforms likedxμdτ{\displaystyle {\frac {dx^{\mu }}{d\tau }}}, wherexμ{\displaystyle x^{\mu }\!}are the coordinates of a particle at itsproper timeτ{\displaystyle \tau }. A covariant vector is one which transforms like∂φ∂xμ{\displaystyle {\frac {\partial \varphi }{\partial x^{\mu }}}}, whereφ{\displaystyle \varphi }is a scalar field. Incategory theory, there arecovariant functorsandcontravariant functors. The assignment of thedual spaceto a vector space is a standard example of a contravariant functor. Contravariant (resp. covariant) vectors are contravariant (resp. covariant) functors from aGL(n){\displaystyle {\text{GL}}(n)}-torsorto the fundamental representation ofGL(n){\displaystyle {\text{GL}}(n)}. Similarly, tensors of higher degree are functors with values in other representations ofGL(n){\displaystyle {\text{GL}}(n)}. However, some constructions ofmultilinear algebraare of "mixed" variance, which prevents them from being functors. Indifferential geometry, the components of a vector relative to a basis of thetangent bundleare covariant if they change with the same linear transformation as a change of basis. They are contravariant if they change by the inverse transformation. This is sometimes a source of confusion for two distinct but related reasons. The first is that vectors whose components are covariant (called covectors or1-forms) actuallypull backunder smooth functions, meaning that the operation assigning the space of covectors to a smooth manifold is actually acontravariantfunctor. 
Likewise, vectors whose components are contravariantpush forwardunder smooth mappings, so the operation assigning the space of (contravariant) vectors to a smooth manifold is acovariantfunctor. Secondly, in the classical approach to differential geometry, it is not bases of the tangent bundle that are the most primitive object, but rather changes in the coordinate system. Vectors with contravariant components transform in the same way as changes in the coordinates (because these actually change oppositely to the induced change of basis). Likewise, vectors with covariant components transform in the opposite way as changes in the coordinates.
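Tying together the reciprocal-basis discussion above: given any non-degenerate basis, the Gram (metric) matrix G with entries e_i · e_j produces the dual basis and converts between covariant and contravariant components. A minimal Python/NumPy sketch, using the planar example from the text (e₁ of length 2 and e₂ of length 1, 45° apart); the code itself is not tied to that choice:

    import numpy as np

    # Basis of the Euclidean plane: |e1| = 2, |e2| = 1, 45 degrees apart.
    e1 = np.array([2.0, 0.0])
    e2 = np.array([np.cos(np.pi / 4), np.sin(np.pi / 4)])
    E = np.column_stack([e1, e2])          # columns are the basis vectors

    G = E.T @ E                            # metric (Gram) matrix  g_ij = e_i . e_j
    E_dual = E @ np.linalg.inv(G)          # reciprocal basis: e^i . e_j = delta^i_j
    assert np.allclose(E_dual.T @ E, np.eye(2))

    v = np.array([1.0, 3.0])               # some vector, in standard coordinates

    v_contra = np.linalg.solve(E, v)       # contravariant components: v = v^i e_i
    v_co = E.T @ v                         # covariant components:     v_i = v . e_i

    # Raising and lowering indices with the metric:
    assert np.allclose(v_co, G @ v_contra)
    assert np.allclose(v_contra, np.linalg.inv(G) @ v_co)
    # Both expansions reproduce the same vector:
    assert np.allclose(E @ v_contra, E_dual @ v_co)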
https://en.wikipedia.org/wiki/Covariance_and_contravariance_of_vectors
Thegrade(US) orgradient(UK) (also calledstepth,slope,incline,mainfall,pitchorrise) of a physical feature, landform or constructed line is either theelevation angleof that surface to thehorizontalor its tangent. It is a special case of theslope, where zero indicateshorizontality. A larger number indicates higher or steeper degree of "tilt". Often slope is calculated as a ratio of "rise" to "run", or as a fraction ("rise over run") in whichrunis the horizontal distance (not the distance along the slope) andriseis the vertical distance. Slopes of existing physical features such ascanyonsand hillsides,stream and river banks, andbedsare often described as grades, but typically the word "grade" is used for human-made surfaces such as roads,landscapegrading,roof pitches,railroads,aqueducts, and pedestrian or bicycle routes. The grade may refer to thelongitudinalslope or theperpendicularcross slope. There are several ways to express slope: Any of these may be used. When the termgradeis used, the slope is usually expressed as a percentage. If one looks at red numbers on the chart specifying grade, one can see the quirkiness of using the grade to specify slope; the numbers go from 0 for flat, to 100% at 45 degrees, to infinity at vertical. Slope may still be expressed when the horizontal run is not known: the rise can be divided by thehypotenuse(the slope length). This is not the usual way to specify slope; this nonstandard expression follows thesinefunction rather than the tangent function, so it calls a 45 degree slope a 71 percent grade instead of a 100 percent. But in practice the usual way to calculate slope is to measure the distance along the slope and the vertical rise, and calculate the horizontal run from that, in order to calculate the grade (100% × rise/run) or standard slope (rise/run). When the angle of inclination is small, using the slope length rather than the horizontal displacement (i.e., using the sine of the angle rather than the tangent) makes only an insignificant difference and can then be used as an approximation. Railway gradients are often expressed in terms of the rise in relation to the distance along the track as a practical measure. In cases where the difference between sin and tan is significant, the tangent is used. In either case, the following identity holds for all inclinations up to 90 degrees:tan⁡α=sin⁡α1−sin2⁡α{\displaystyle \tan {\alpha }={\frac {\sin {\alpha }}{\sqrt {1-\sin ^{2}{\alpha }}}}}. Or more simply, one can calculate the horizontal run by using the Pythagorean theorem, after which it is trivial to calculate the (standard math) slope or the grade (percentage). In Europe, road gradients are expressed in signage as percentage.[4] Grades are related using the following equations with symbols from the figure at top. The slope expressed as a percentage can similarly be determined from the tangent of the angle: If the tangent is expressed as a percentage, the angle can be determined as: If the angle is expressed as a ratio(1 in n)then: For degrees, percentage (%) and per-mille (‰) notations, larger numbers are steeper slopes. For ratios, larger numbersnof 1 innare shallower, easier slopes. The examples show round numbers in one or more of the notations and some documented and reasonably well known instances. Invehicularengineering, variousland-based designs (automobiles,sport utility vehicles,trucks,trains, etc.) are rated for their ability to ascendterrain. Trains typically rate much lower than automobiles. 
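As described above, the usual field procedure is to measure the distance along the slope and the vertical rise, recover the horizontal run from the Pythagorean theorem, and then form the grade. A minimal sketch in plain Python (function and variable names are illustrative):

    import math

    def grade_from_slope_measurement(slope_length, rise):
        """Percent grade from the distance measured along the slope and the vertical rise."""
        run = math.sqrt(slope_length**2 - rise**2)   # horizontal run via Pythagoras
        return 100.0 * rise / run                    # standard grade uses the tangent

    def nonstandard_grade(slope_length, rise):
        """'Sine' grade: rise divided by the distance along the slope."""
        return 100.0 * rise / slope_length

    # A 45-degree slope: rise equals run, slope length is sqrt(2) times the rise.
    rise = 10.0
    slope_length = rise * math.sqrt(2)
    print(round(grade_from_slope_measurement(slope_length, rise)))  # 100 (percent, tangent)
    print(round(nonstandard_grade(slope_length, rise)))             # 71  (percent, sine)

    # For gentle inclines the two agree closely, e.g. around a 2% grade:
    angle = math.atan(0.02)
    print(100 * math.tan(angle), 100 * math.sin(angle))             # 2.0 vs roughly 1.9996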
The highest grade a vehicle can ascend while maintaining a particular speed is sometimes termed that vehicle's "gradeability" (or, less often, "grade ability"). The lateral slopes of a highway geometry are sometimes calledfillsorcutswhere these techniques have been used to create them. In the United States, the maximum grade for federally funded highways is specified in a design table based on terrain and design speeds,[7]with up to 6% generally allowed in mountainous areas and hilly urban areas with exceptions for up to 7% grades on mountainous roads with speed limits below 60 mph (95 km/h). The steepest roads in the world according to the Guinness Book of World Records areBaldwin Streetin Dunedin, New Zealand,Ffordd Pen Llechin Harlech, Wales[8]andCanton Avenuein Pittsburgh, Pennsylvania.[9]The Guinness World Record once again listsBaldwin Streetas the steepest street in the world, with a 34.8% grade (1 in 2.87) after a successful appeal[10]against the ruling that handed the title, briefly, toFfordd Pen Llech. A number of streets elsewhere have steeper grades than those listed in the Guinness Book. Drawing on the U.S. National Elevation Dataset,7x7 (magazine)identified ten blocks of public streets in San Francisco open to vehicular traffic in the city with grades over 30 percent. The steepest at 41 percent is the block of Bradford Street above Tompkins Avenue in theBernal Heightsneighborhood.[11]TheSan Francisco Municipal Railwayoperates bus service amongthe city's hills. The steepest grade for bus operations is 23.1% by the67-Bernal Heightson Alabama Street between Ripley and Esmeralda Streets.[12] Likewise, the Pittsburgh Department of Engineering and Construction recorded a grade of 37% (20°) for Canton Avenue.[13]The street has formed part of a bicycle race since 1983.[14] Grade, pitch, and slope are important components inlandscape design,garden design,landscape architecture, andarchitecturefor engineering and aesthetic design factors. Drainage, slope stability, circulation of people and vehicles, complying with building codes, and design integration are all aspects of slope considerations inenvironmental design. Ruling gradientslimit the load that alocomotivecan haul, including the weight of the locomotive itself. Pulling a heavily loaded train at 20 km/h may require ten times the force climbing a 1% slope than on level track. Early railways in the United Kingdom were laid out with very gentle gradients, such as 0.07575% (1 in 1320) and 0.1515% (1 in 660) on theGreat Western main line, nicknamed Brunel's Billiard Table, because the early locomotives (and their brakes) were feeble. Steep gradients were concentrated in short sections of lines where it was convenient to employassistant enginesorcable haulage, such as the 1.2-kilometre (0.75-mile) section fromEustontoCamden Town. Extremely steep gradients require mechanical assistance. Cable systems are used in cases like the Scenic Railway atKatoomba Scenic Worldin Australia, which reaches a maximum grade of 122% (52°) and is claimed to be the world's steepest passenger-carryingfunicular.[15]For somewhat gentler inclines,rack railwaysare employed, such as thePilatus Railwayin Switzerland, which has a maximum grade of 48% (26°) and is considered the steepest rack railway.[16] Gradients can be expressed as an angle, as feet per mile, feet perchain, 1 inn,x% oryper mille. 
Since designers like round figures, the method of expression can affect the gradients selected.[citation needed] The steepest railway lines that do not use rack systems include:
Gradients on sharp curves are effectively steeper than the same gradient on straight track, because curve resistance adds to grade resistance; to compensate for this and keep the ruling grade uniform throughout, the gradient on such curves is reduced slightly. In the era before trains were provided with continuous brakes, whether air brakes or vacuum brakes, steep gradients made it extremely difficult for trains to stop safely. In those days, for example, an inspector insisted that Rudgwick railway station in West Sussex be regraded; he would not allow it to open until the gradient through the platform was eased from 1 in 80 to 1 in 130.
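Conversions between the notations mentioned above (1 in n, percent, per mille, degrees) are one-line calculations; the sketch below, in Python with illustrative names, applies them to the 1 in 80 and 1 in 130 platform gradients just cited. It assumes the 1-in-n figure is rise over horizontal run; at such gentle inclines the difference from measuring along the slope is negligible.

    import math

    def ratio_to_percent(n):
        """Grade in percent for a slope of 1 in n (rise of 1 for every n of run)."""
        return 100.0 / n

    def ratio_to_permille(n):
        return 1000.0 / n

    def ratio_to_degrees(n):
        return math.degrees(math.atan(1.0 / n))

    for n in (80, 130):
        print(f"1 in {n}: {ratio_to_percent(n):.2f}%  "
              f"{ratio_to_permille(n):.1f} per mille  "
              f"{ratio_to_degrees(n):.2f} degrees")
    # 1 in 80:  1.25%  12.5 per mille  0.72 degrees
    # 1 in 130: 0.77%   7.7 per mille  0.44 degrees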
https://en.wikipedia.org/wiki/Grade_(slope)
Animage gradientis a directional change in the intensity or color in an image. The gradient of the image is one of the fundamental building blocks inimage processing. For example, theCanny edge detectoruses image gradient foredge detection. Ingraphics softwarefordigital image editing, the term gradient orcolor gradientis also used for a gradual blend ofcolorwhich can be considered as an evengradationfrom low to high values, and seen from black to white in the images to the right. Another name for this iscolor progression. Mathematically, thegradientof a two-variable function (here the image intensity function) at each image point is a 2Dvectorwith the components given by thederivativesin the horizontal and vertical directions. At each image point, the gradient vector points in the direction of largest possible intensity increase, and the length of the gradient vector corresponds to the rate of change in that direction.[1] Since the intensity function of a digital image is only known at discrete points, derivatives of this function cannot be defined unless we assume that there is an underlyingcontinuousintensity function which has been sampled at the image points. With some additional assumptions, the derivative of the continuous intensity function can be computed as a function on the sampled intensity function, i.e., the digital image. Approximations of these derivative functions can be defined at varying degrees of accuracy. The most common way to approximate the image gradient is toconvolvean image with a kernel, such as theSobel operatororPrewitt operator. Image gradients are often utilized inmapsand other visual representations ofdatain order to convey additional information.GIStools use color progressions to indicateelevationandpopulation density, among others. Incomputer vision, image gradients can be used to extract information from images. Gradient images are created from the original image (generally by convolving with a filter, one of the simplest being theSobel filter) for this purpose. Each pixel of a gradient image measures the change in intensity of that same point in the original image, in a given direction. To get the full range of direction, gradient images in the x and y directions are computed. One of the most common uses is in edge detection. After gradient images have been computed, pixels with large gradient values become possible edge pixels. The pixels with the largest gradient values in the direction of the gradient become edge pixels, and edges may be traced in the direction perpendicular to the gradient direction. One example of an edge detection algorithm that uses gradients is theCanny edge detector. Image gradients can also be used for robust feature and texture matching. Different lighting or camera properties can cause two images of the same scene to have drastically different pixel values. This can cause matching algorithms to fail to match very similar or identical features. One way to solve this is to compute texture or feature signatures based on gradient images computed from the original images. These gradients are less susceptible to lighting and camera changes, so matching errors are reduced. The gradient of an image is a vector of itspartials:[2]: 165 where: Thederivativeof an image can be approximated byfinite differences. 
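Before turning to the simple finite-difference filters discussed next, here is a minimal sketch of the convolution-based approach mentioned above, using the Sobel kernels. It assumes NumPy and SciPy are available and uses a small synthetic test image; nothing here is specific to any particular implementation.

    import numpy as np
    from scipy.ndimage import convolve

    # Synthetic test image: a bright square on a dark background.
    image = np.zeros((64, 64), dtype=float)
    image[16:48, 16:48] = 1.0

    # Sobel kernels for the horizontal and vertical derivatives.
    sobel_x = np.array([[-1, 0, 1],
                        [-2, 0, 2],
                        [-1, 0, 1]], dtype=float)
    sobel_y = sobel_x.T

    gx = convolve(image, sobel_x)   # gradient image in the x direction
    gy = convolve(image, sobel_y)   # gradient image in the y direction

    # Pixels with a large gradient response are candidate edge pixels;
    # here they cluster along the border of the square.
    print(int(np.count_nonzero(np.hypot(gx, gy) > 2)))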
To calculate ∂f/∂y{\displaystyle \textstyle {\frac {\partial f}{\partial y}}} with a finite difference, we can apply a 1-dimensional filter to the image A{\displaystyle \mathbf {A} } by convolution, where ∗{\displaystyle *} denotes the 1-dimensional convolution operation. A 2×1 difference filter will shift the image by half a pixel; to avoid this, a 3×1 central-difference filter can be used instead. The gradient direction can be calculated by the formula:[2]: 706 θ=atan2(gy,gx){\displaystyle \theta =\operatorname {atan2} \left(g_{y},g_{x}\right)}, and the magnitude is given by:[3] gx2+gy2{\displaystyle {\sqrt {g_{x}^{2}+g_{y}^{2}}}}.
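The same gradient images, together with the direction and magnitude, can be obtained in a few lines of NumPy (a sketch; np.gradient uses central differences in the interior of the array and one-sided differences at the borders):

    import numpy as np

    image = np.zeros((64, 64), dtype=float)
    image[16:48, 16:48] = 1.0

    # Central-difference approximations of the partial derivatives.
    gy, gx = np.gradient(image)            # returns d/d(row) then d/d(column)

    magnitude = np.hypot(gx, gy)           # sqrt(gx**2 + gy**2)
    direction = np.arctan2(gy, gx)         # gradient direction, in radians

    # Candidate edge pixels are those with large gradient magnitude.
    edges = magnitude > 0.25
    print(int(edges.sum()))                # nonzero only along the square's border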
https://en.wikipedia.org/wiki/Image_gradient
Atime derivativeis aderivativeof a function with respect totime, usually interpreted as the rate of change of the value of the function.[1]The variable denoting time is usually written ast{\displaystyle t}. A variety of notations are used to denote the time derivative. In addition to the normal (Leibniz's) notation, A very common short-hand notation used, especially in physics, is the 'over-dot'. I.E. (This is calledNewton's notation) Higher time derivatives are also used: thesecond derivativewith respect to time is written as with the corresponding shorthand ofx¨{\displaystyle {\ddot {x}}}. As a generalization, the time derivative of a vector, say: is defined as the vector whose components are the derivatives of the components of the original vector. That is, Time derivatives are a key concept inphysics. For example, for a changingpositionx{\displaystyle x}, its time derivativex˙{\displaystyle {\dot {x}}}is itsvelocity, and its second derivative with respect to time,x¨{\displaystyle {\ddot {x}}}, is itsacceleration. Even higher derivatives are sometimes also used: the third derivative of position with respect to time is known as thejerk. Seemotion graphs and derivatives. A large number of fundamental equations in physics involve first or second time derivatives of quantities. Many other fundamental quantities in science are time derivatives of one another: and so on. A common occurrence in physics is the time derivative of avector, such as velocity or displacement. In dealing with such a derivative, both magnitude and orientation may depend upon time. For example, consider a particle moving in a circular path. Its position is given by the displacement vectorr=xı^+yȷ^{\displaystyle r=x{\hat {\imath }}+y{\hat {\jmath }}}, related to the angle,θ, and radial distance,r, as defined in the figure: For this example, we assume thatθ=t. Hence, the displacement (position) at any timetis given by This form shows the motion described byr(t) is in a circle of radiusrbecause themagnitudeofr(t) is given by using thetrigonometric identitysin2(t) + cos2(t) = 1and where⋅{\displaystyle \cdot }is the usual Euclidean dot product. With this form for the displacement, the velocity now is found. The time derivative of the displacement vector is the velocity vector. In general, the derivative of a vector is a vector made up of components each of which is the derivative of the corresponding component of the original vector. Thus, in this case, the velocity vector is: Thus the velocity of the particle is nonzero even though the magnitude of the position (that is, the radius of the path) is constant. The velocity is directed perpendicular to the displacement, as can be established using thedot product: Acceleration is then the time-derivative of velocity: The acceleration is directed inward, toward the axis of rotation. It points opposite to the position vector and perpendicular to the velocity vector. This inward-directed acceleration is calledcentripetal acceleration. Indifferential geometry, quantities are often expressed with respect to the localcovariant basis,ei{\displaystyle \mathbf {e} _{i}}, whereiranges over the number of dimensions. The components of a vectorU{\displaystyle \mathbf {U} }expressed this way transform as a contravarianttensor, as shown in the expressionU=Uiei{\displaystyle \mathbf {U} =U^{i}\mathbf {e} _{i}}, invokingEinstein summation convention. 
If we want to calculate the time derivatives of these components along a trajectory, so that we haveU(t)=Ui(t)ei(t){\displaystyle \mathbf {U} (t)=U^{i}(t)\mathbf {e} _{i}(t)}, we can define a new operator, the invariant derivativeδ{\displaystyle \delta }, which will continue to return contravariant tensors:[2] whereVj=dxjdt{\displaystyle V^{j}={\frac {dx^{j}}{dt}}}(withxj{\displaystyle x^{j}}being thejth coordinate) captures the components of the velocity in the local covariant basis, andΓjki{\displaystyle \Gamma _{jk}^{i}}are theChristoffel symbolsfor the coordinate system. Note that explicit dependence onthas been repressed in the notation. We can then write: as well as: In terms of thecovariant derivative,∇j{\displaystyle \nabla _{j}}, we have: Ineconomics, many theoretical models of the evolution of various economic variables are constructed incontinuous timeand therefore employ time derivatives.[3]: ch. 1-3One situation involves astock variableand its time derivative, aflow variable. Examples include: Sometimes the time derivative of a flow variable can appear in a model: And sometimes there appears a time derivative of a variable which, unlike the examples above, is not measured in units of currency:
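Returning to the invariant derivative defined above, a short numerical check can be done in plane polar coordinates (r, θ), whose only non-zero Christoffel symbols are Γ^r_{θθ} = −r and Γ^θ_{rθ} = Γ^θ_{θr} = 1/r. For the uniform circular motion considered earlier, the coordinate-basis velocity components are (0, ω) and their ordinary time derivatives vanish, so the invariant derivative returns the centripetal acceleration. A minimal Python/NumPy sketch (names illustrative):

    import numpy as np

    def invariant_derivative(dU_dt, U, V, Gamma):
        """delta U^i / delta t = dU^i/dt + Gamma^i_{jk} V^j U^k."""
        return dU_dt + np.einsum('ijk,j,k->i', Gamma, V, U)

    # Plane polar coordinates (r, theta); uniform circular motion at radius r0, rate omega.
    r0, omega = 2.0, 3.0

    Gamma = np.zeros((2, 2, 2))                 # Gamma[i, j, k] = Gamma^i_{jk}
    Gamma[0, 1, 1] = -r0                        # Gamma^r_{theta theta} = -r
    Gamma[1, 0, 1] = Gamma[1, 1, 0] = 1.0 / r0  # Gamma^theta_{r theta} = 1/r

    V = np.array([0.0, omega])   # coordinate velocity (dr/dt, dtheta/dt)
    U = V.copy()                 # take U to be the velocity itself
    dU_dt = np.zeros(2)          # its components are constant in time

    a = invariant_derivative(dU_dt, U, V, Gamma)
    print(a)                     # [-18.  0.]  ==  (-r0 * omega**2, 0): centripetal acceleration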
https://en.wikipedia.org/wiki/Time_derivative
Incontinuum mechanics, thematerial derivative[1][2]describes the timerate of changeof some physical quantity (likeheatormomentum) of amaterial elementthat is subjected to a space-and-time-dependentmacroscopic velocity field. The material derivative can serve as a link betweenEulerianandLagrangiandescriptions of continuumdeformation.[3] For example, influid dynamics, the velocity field is theflow velocity, and the quantity of interest might be thetemperatureof the fluid. In this case, the material derivative then describes the temperature change of a certainfluid parcelwith time, as it flows along itspathline(trajectory). There are many other names for the material derivative, including: The material derivative is defined for anytensor fieldythat ismacroscopic, with the sense that it depends only on position and time coordinates,y=y(x,t):DyDt≡∂y∂t+u⋅∇y,{\displaystyle {\frac {\mathrm {D} y}{\mathrm {D} t}}\equiv {\frac {\partial y}{\partial t}}+\mathbf {u} \cdot \nabla y,}where∇yis thecovariant derivativeof the tensor, andu(x,t)is theflow velocity. Generally the convective derivative of the fieldu·∇y, the one that contains the covariant derivative of the field, can be interpreted both as involving thestreamlinetensor derivativeof the fieldu·(∇y), or as involving the streamlinedirectional derivativeof the field(u·∇)y, leading to the same result.[10]Only this spatial term containing the flow velocity describes the transport of the field in the flow, while the other describes the intrinsic variation of the field, independent of the presence of any flow. Confusingly, sometimes the name "convective derivative" is used for the whole material derivativeD/Dt, instead for only the spatial termu·∇.[2]The effect of the time-independent terms in the definitions are for the scalar and tensor case respectively known asadvectionand convection. For example, for a macroscopicscalar fieldφ(x,t)and a macroscopicvector fieldA(x,t)the definition becomes:DφDt≡∂φ∂t+u⋅∇φ,DADt≡∂A∂t+u⋅∇A.{\displaystyle {\begin{aligned}{\frac {\mathrm {D} \varphi }{\mathrm {D} t}}&\equiv {\frac {\partial \varphi }{\partial t}}+\mathbf {u} \cdot \nabla \varphi ,\\[3pt]{\frac {\mathrm {D} \mathbf {A} }{\mathrm {D} t}}&\equiv {\frac {\partial \mathbf {A} }{\partial t}}+\mathbf {u} \cdot \nabla \mathbf {A} .\end{aligned}}} In the scalar case∇φis simply thegradientof a scalar, while∇Ais the covariant derivative of the macroscopic vector (which can also be thought of as theJacobian matrixofAas a function ofx). In particular for a scalar field in a three-dimensionalCartesian coordinate system(x1,x2,x3), the components of the velocityuareu1,u2,u3, and the convective term is then:u⋅∇φ=u1∂φ∂x1+u2∂φ∂x2+u3∂φ∂x3.{\displaystyle \mathbf {u} \cdot \nabla \varphi =u_{1}{\frac {\partial \varphi }{\partial x_{1}}}+u_{2}{\frac {\partial \varphi }{\partial x_{2}}}+u_{3}{\frac {\partial \varphi }{\partial x_{3}}}.} Consider a scalar quantityφ=φ(x,t), wheretis time andxis position. Hereφmay be some physical variable such as temperature or chemical concentration. The physical quantity, whose scalar quantity isφ, exists in a continuum, and whose macroscopic velocity is represented by the vector fieldu(x,t). 
The (total) derivative with respect to time ofφis expanded using the multivariatechain rule:ddtφ(x(t),t)=∂φ∂t+x˙⋅∇φ.{\displaystyle {\frac {\mathrm {d} }{\mathrm {d} t}}\varphi (\mathbf {x} (t),t)={\frac {\partial \varphi }{\partial t}}+{\dot {\mathbf {x} }}\cdot \nabla \varphi .} It is apparent that this derivative is dependent on the vectorx˙≡dxdt,{\displaystyle {\dot {\mathbf {x} }}\equiv {\frac {\mathrm {d} \mathbf {x} }{\mathrm {d} t}},}which describes achosenpathx(t)in space. For example, ifx˙=0{\displaystyle {\dot {\mathbf {x} }}=\mathbf {0} }is chosen, the time derivative becomes equal to the partial time derivative, which agrees with the definition of apartial derivative: a derivative taken with respect to some variable (time in this case) holding other variables constant (space in this case). This makes sense because ifx˙=0{\displaystyle {\dot {\mathbf {x} }}=0}, then the derivative is taken at someconstantposition. This static position derivative is called the Eulerian derivative. An example of this case is a swimmer standing still and sensing temperature change in a lake early in the morning: the water gradually becomes warmer due to heating from the sun. In which case the term∂φ/∂t{\displaystyle {\partial \varphi }/{\partial t}}is sufficient to describe the rate of change of temperature. If the sun is not warming the water (i.e.∂φ/∂t=0{\displaystyle {\partial \varphi }/{\partial t}=0}), but the pathx(t)is not a standstill, the time derivative ofφmay change due to the path. For example, imagine the swimmer is in a motionless pool of water, indoors and unaffected by the sun. One end happens to be at a constant high temperature and the other end at a constant low temperature. By swimming from one end to the other the swimmer senses a change of temperature with respect to time, even though the temperature at any given (static) point is a constant. This is because the derivative is taken at the swimmer's changing location and the second term on the rightx˙⋅∇φ{\displaystyle {\dot {\mathbf {x} }}\cdot \nabla \varphi }is sufficient to describe the rate of change of temperature. A temperature sensor attached to the swimmer would show temperature varying with time, simply due to the temperature variation from one end of the pool to the other. The material derivative finally is obtained when the pathx(t)is chosen to have a velocity equal to the fluid velocityx˙=u.{\displaystyle {\dot {\mathbf {x} }}=\mathbf {u} .} That is, the path follows the fluid current described by the fluid's velocity fieldu. So, the material derivative of the scalarφisDφDt=∂φ∂t+u⋅∇φ.{\displaystyle {\frac {\mathrm {D} \varphi }{\mathrm {D} t}}={\frac {\partial \varphi }{\partial t}}+\mathbf {u} \cdot \nabla \varphi .} An example of this case is a lightweight, neutrally buoyant particle swept along a flowing river and experiencing temperature changes as it does so. The temperature of the water locally may be increasing due to one portion of the river being sunny and the other in a shadow, or the water as a whole may be heating as the day progresses. The changes due to the particle's motion (itself caused by fluid motion) is calledadvection(or convection if a vector is being transported). The definition above relied on the physical nature of a fluid current; however, no laws of physics were invoked (for example, it was assumed that a lightweight particle in a river will follow the velocity of the water), but it turns out that many physical concepts can be described concisely using the material derivative. 
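The chain-rule expansion above, and the statement that the material derivative is the ordinary time derivative taken along a path that moves with the flow, can be checked symbolically. A minimal sketch with SymPy; the particular field φ and flow u are invented purely for illustration:

    import sympy as sp

    x, y, t = sp.symbols('x y t')

    phi = sp.sin(x) * sp.exp(-t) + x * y      # an arbitrary scalar field phi(x, y, t)
    u = sp.Matrix([y, -x])                    # an arbitrary steady flow velocity u(x, y)

    # Material derivative: D(phi)/Dt = d(phi)/dt + u . grad(phi)
    material = sp.diff(phi, t) + u.dot(sp.Matrix([sp.diff(phi, x), sp.diff(phi, y)]))

    # A path X(t) that moves with the flow, i.e. dX/dt = u evaluated along X(t).
    X = sp.Matrix([sp.cos(t), -sp.sin(t)])
    assert sp.simplify(sp.diff(X, t) - u.subs({x: X[0], y: X[1]})) == sp.zeros(2, 1)

    # The ordinary time derivative of phi along that path equals the material derivative.
    along_path = sp.diff(phi.subs({x: X[0], y: X[1]}), t)
    assert sp.simplify(along_path - material.subs({x: X[0], y: X[1]})) == 0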
The general case of advection, however, relies on conservation of mass of the fluid stream; the situation becomes slightly different if advection happens in a non-conservative medium. Only a path was considered for the scalar above. For a vector, the gradient becomes atensor derivative; fortensorfields we may want to take into account not only translation of the coordinate system due to the fluid movement but also its rotation and stretching. This is achieved by theupper convected time derivative. It may be shown that, inorthogonal coordinates, thej-th component of the convection term of the material derivative of avector fieldA{\displaystyle \mathbf {A} }is given by[11][(u⋅∇)A]j=∑iuihi∂Aj∂qi+Aihihj(uj∂hj∂qi−ui∂hi∂qj),{\displaystyle [\left(\mathbf {u} \cdot \nabla \right)\mathbf {A} ]_{j}=\sum _{i}{\frac {u_{i}}{h_{i}}}{\frac {\partial A_{j}}{\partial q^{i}}}+{\frac {A_{i}}{h_{i}h_{j}}}\left(u_{j}{\frac {\partial h_{j}}{\partial q^{i}}}-u_{i}{\frac {\partial h_{i}}{\partial q^{j}}}\right),} where thehiare related to themetric tensorsbyhi=gii.{\displaystyle h_{i}={\sqrt {g_{ii}}}.} In the special case of a three-dimensionalCartesian coordinate system(x,y,z), andAbeing a 1-tensor (a vector with three components), this is just:(u⋅∇)A=(ux∂Ax∂x+uy∂Ax∂y+uz∂Ax∂zux∂Ay∂x+uy∂Ay∂y+uz∂Ay∂zux∂Az∂x+uy∂Az∂y+uz∂Az∂z)=∂(Ax,Ay,Az)∂(x,y,z)u{\displaystyle (\mathbf {u} \cdot \nabla )\mathbf {A} ={\begin{pmatrix}\displaystyle u_{x}{\frac {\partial A_{x}}{\partial x}}+u_{y}{\frac {\partial A_{x}}{\partial y}}+u_{z}{\frac {\partial A_{x}}{\partial z}}\\\displaystyle u_{x}{\frac {\partial A_{y}}{\partial x}}+u_{y}{\frac {\partial A_{y}}{\partial y}}+u_{z}{\frac {\partial A_{y}}{\partial z}}\\\displaystyle u_{x}{\frac {\partial A_{z}}{\partial x}}+u_{y}{\frac {\partial A_{z}}{\partial y}}+u_{z}{\frac {\partial A_{z}}{\partial z}}\end{pmatrix}}={\frac {\partial (A_{x},A_{y},A_{z})}{\partial (x,y,z)}}\mathbf {u} } where∂(Ax,Ay,Az)∂(x,y,z){\displaystyle {\frac {\partial (A_{x},A_{y},A_{z})}{\partial (x,y,z)}}}is aJacobian matrix. There is also avector-dot-del identityand the material derivative for a vector fieldA{\displaystyle \mathbf {A} }can be expressed as:
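The Cartesian form above, with the convective term written as the Jacobian matrix of A acting on u, can be confirmed symbolically against the component-by-component sum. A minimal SymPy sketch with an invented flow and vector field:

    import sympy as sp

    x, y, z, t = sp.symbols('x y z t')
    coords = (x, y, z)

    u = sp.Matrix([y * z, -x, sp.sin(z) + t])            # a made-up flow velocity u(x, t)
    A = sp.Matrix([x**2 * t, x * y * z, sp.exp(-t) * z]) # a made-up vector field A(x, t)

    # Convective term written with the Jacobian matrix of A:
    convective = A.jacobian(coords) * u

    # The same term written out component by component:
    # u_x dA_j/dx + u_y dA_j/dy + u_z dA_j/dz
    componentwise = sp.Matrix([sum(u[i] * sp.diff(A[j], coords[i]) for i in range(3))
                               for j in range(3)])
    assert sp.simplify(convective - componentwise) == sp.zeros(3, 1)

    # Full material derivative of the vector field:
    DA_Dt = sp.diff(A, t) + convective
    print(sp.simplify(DA_Dt))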
https://en.wikipedia.org/wiki/Material_derivative